Managed Care 3.0

Charting a better health care future Joan C. Barrett

No doubt, the fate of the Affordable Care Act (ACA) will be a major issue in the 2020 U.S. elections. And it should be. There are still many outstanding questions about how we finance health care and how to make it accessible to everyone. But, regardless of what happens with the ACA, the cost and quality of health care must be addressed. As Figure 1 shows, the per capita cost of health care in the United States is about double that of comparable countries. This makes health care unaffordable for many Americans. In fact, one of the few things all political parties agree on is that the cost of prescription drugs should be the no. 1 health care issue for Congress.1

Not only is the cost of care too high, but we are not getting our money’s worth, either. The United States often ranks last in quality relative to comparable countries.2

Figure 1: 2016 Per Capita Health Care Expenditures

Source: Kirzinger, Ashley, Audrey Kearney, and Mollyann Brodie. Health Tracking Poll—September 2019: Health Care Policy in Congress and on the Campaign Trail. Kaiser Family Foundation, September 12, 2019 (accessed February 14, 2020).

The cost and quality of health care are not new issues. Early attempts to address them were relatively harsh, with many claims denied under “medical necessity” and/or “preexisting conditions” clauses. This phase, Managed Care 1.0, slowly gave way to a more consumer-friendly phase, Managed Care 2.0, which emphasized utilization review guidelines, evidence-based medicine and disease management programs. We are now entering a third phase, Managed Care 3.0, which will feature personalized treatment and prevention, new technologies, new data sources and new reimbursement methodologies.

Will Managed Care 3.0 be more successful than previous efforts to control cost and improve quality? The potential is certainly there, but successful implementation will require a focus on key health care drivers, good data and improved analytical techniques.

THE DISEASE BURDEN

Regardless of how health care is financed, the system must address the underlying disease burden, especially for chronic diseases. According to the Centers for Disease Control and Prevention (CDC), 90 percent of the total cost of health care is associated with people who have one or more chronic diseases, such as hypertension, diabetes and cancer.3 The major risk factors for chronic conditions include age, smoking and obesity. As Figure 2 shows, the population of the United States is younger than that of comparable countries, and we smoke less. Our obesity rate, however, is almost double that of other countries.4

Figure 2: Key Risk Factors in the U.S.

Source: Kamal, Rabah, Cynthia Cox, and Erik Blumenkranz. What Do We Know About Social Determinants of Health in the U.S. and Comparable Countries? Peterson-KFF, November 21, 2017 (accessed February 14, 2020).

The lower smoking rates are not accidental. In the early 1960s, smoking was very much a part of American culture. Characters often smoked in movies and on television, and cigarette ads were everywhere. Even the Flintstones smoked! Then, in 1964, the Surgeon General issued a report describing the harmful health effects of smoking. The report was followed by a number of action steps, including a ban on cigarette advertising on TV, an increase in the federal tax on cigarettes, the introduction of over-the-counter nicotine medication and smoking bans in most public buildings. It took almost 50 years, but these efforts have been somewhat successful: smoking rates slowly declined from 43 percent in 19645 to 14 percent in 2018.6 Although these results are impressive, about 9 percent of all U.S. health expenditures are still attributable to smoking.7

The efforts to reduce smoking are a good example of the traditional, broad-brush approach used to address health care issues. This broad-brush approach also works well for some people in maintaining a healthy weight: all it takes is a reasonable diet and exercise routine. In recent years, new technologies, like the Fitbit, have provided additional incentives to exercise and an easy feedback mechanism. Overall, however, the broad-brush approach has not been successful in reducing obesity rates. The share of U.S. adults who are overweight or obese rose from 54.9 percent in the 1988–1994 period to 71.6 percent in the 2015–2016 period.8 About 10 to 15 percent of all health care costs have been attributed to obesity-related conditions.9

Maybe we will eventually see a reduction in obesity rates like we saw in the smoking rates—but maybe not. Losing weight is a very personal journey, especially for diabetics and others who need to tailor their diet and exercise plan in a way that helps them manage their disease and that reflects their lifestyle and financial situation.

People who have diabetes, by definition, have high glucose levels. When a person eats carbohydrates, the body converts the carbs to glucose, a simple sugar. The pancreas produces insulin, which then determines whether the glucose will be immediately converted into energy or stored as fat for later use. Too little insulin can result in weight gain, while too much can result in dizziness and other symptoms of a low-sugar event. Continuous glucose monitors help identify when spikes occur, but the diabetic may or may not know what to do. Take more insulin? Eat something? The ability to design a personal diet and exercise plan is an area that requires more research. This should become easier over time as we learn more about artificial intelligence (AI) and other analytical techniques.

THE BUSINESS OF HEALTH CARE

Regardless of what happens with the ACA, each provider organization must determine the best way to provide needed services while still paying rent, paying insurance premiums and meeting payroll. An important part of running a business is setting and negotiating prices. Over the last 60 years, prices have been one of the major drivers of health care cost increases. Since 1960, per capita health expenses have grown an average of 8 percent per year, while the medical consumer price index has grown an average of 5.4 percent per year.10 More than half the increase in health care costs has been due to price increases.
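
To see how that split works, the rough arithmetic below backs an implied utilization-and-intensity trend out of the two averages just cited. It is an illustrative sketch using only those two figures, not a substitute for a year-by-year analysis.

```python
# Rough decomposition of per capita cost trend into price and
# utilization/intensity components, using the averages cited above.
# Figures are illustrative; a real analysis would use year-by-year data.
import math

total_trend = 0.080   # average annual growth in per capita health expenses since 1960
price_trend = 0.054   # average annual growth in the medical consumer price index

# Multiplicative model: (1 + total) = (1 + price) x (1 + utilization/intensity)
residual_trend = (1 + total_trend) / (1 + price_trend) - 1

# Share of the (log) growth attributable to price
price_share = math.log(1 + price_trend) / math.log(1 + total_trend)

print(f"Implied utilization/intensity trend: {residual_trend:.1%}")    # about 2.5%
print(f"Approximate share of growth due to price: {price_share:.0%}")  # about 68%
```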

This leads to two major questions:

  • Are provider reimbursement levels reasonable?
  • How can we lower the price of health care without sacrificing quality?

The answer to the first question is subjective. Everyone has their own opinion based on their unique perspective. Take the cost of an office visit, for example. The cost of a typical low-acuity office visit is usually between $50 and $100. That may sound high to a consumer who simply wants some cough medicine for a cold, but the provider would point to the cost of running a practice, including medical liability insurance, office staff, medical school debt and more.

The answer to the second question is much more complex. Several other countries control costs by regulating provider reimbursement levels. This approach may control costs, but it can lead to unintended consequences. In Germany, for example, doctors took to the streets in 2006 to protest their salaries. In the United States, Medicare and Medicaid reimbursement rates are mostly regulated, but other segments, like private insurance, are not. There has been some interest in more regulation, especially for pharmacy prices, but it seems the United States is taking a three-pronged approach to managing prices: increased efficiency and quality control, value-based reimbursement and alternate settings.

Because health care is so personal, improving quality and efficiency will never be as scalable as it is in other industries, like manufacturing. Many of the efforts developed under Managed Care 1.0 and 2.0 are still in play, but now they are being supplemented with advances in technology like computer-assisted imaging, which is often more accurate than a visual examination of the results alone. In theory, the additional cost of the new technology will be offset by fewer errors and disease complications. It is unlikely, however, that new technology will replace the role of the physician entirely. The physician will still need to visually validate the machine’s findings, interpret the results in light of the patient’s overall health status, communicate with the patient and coordinate follow-up care. Because new technology is often expensive, it remains to be seen whether the net effect will be a savings.

Many ascribe the high cost of health care to the lack of accountability associated with the fee-for-service reimbursement methodology. As a result, we are seeing increased emphasis on value-based reimbursement methodologies, which generally include quality and efficiency components and shift more risk to providers. Value-based reimbursement got off to a slow start, but it is now reaching a critical mass, and we are beginning to see some savings. Currently, about 30 percent of all payments are made under value-based contracts, and this number is rising. Although value-based reimbursement methodologies show great promise, there are still issues that need to be addressed. As Figure 3 shows, the United States has fewer hospitals and physicians than other countries.11 That gives providers a great deal of leverage in determining prices, whether through political sway over Medicare and Medicaid price setting or in negotiations with health plans.
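
Value-based arrangements take many forms. One common pattern is a shared-savings contract with a quality gate, and the sketch below shows that logic in simplified, hypothetical form; the function name and parameter values are illustrative assumptions, not terms of any actual contract.

```python
# Simplified, hypothetical shared-savings calculation with a quality gate.
# Parameter values are illustrative only.

def shared_savings_payment(benchmark_pmpm: float,
                           actual_pmpm: float,
                           member_months: int,
                           quality_score: float,
                           quality_threshold: float = 0.80,
                           sharing_rate: float = 0.50) -> float:
    """Return the provider's share of savings under a one-sided arrangement.

    Savings are shared only if the quality score meets the threshold,
    so lower cost cannot be rewarded at the expense of quality.
    """
    savings_pmpm = benchmark_pmpm - actual_pmpm
    if savings_pmpm <= 0 or quality_score < quality_threshold:
        return 0.0
    return savings_pmpm * member_months * sharing_rate

# Example: $450 benchmark vs. $430 actual PMPM, 120,000 member months, 85% quality score
print(shared_savings_payment(450.0, 430.0, 120_000, 0.85))  # 1,200,000.0
```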

MEASURING SUCCESS

The ultimate measure of success for these efforts will be lower trends in per capita health care costs. To achieve this success, however, we will need to be able to determine which efforts actually reduce cost, improve quality and/or enhance the consumer experience. New measurement techniques will be needed in light of the increased speed to market and the personalization of solutions.

Well-controlled clinical trials have always been, and probably always will be, the gold standard for determining the effectiveness of a new drug or device. There is good reason for this. In a clinical trial, subjects are randomly assigned to either a test group that receives the treatment under investigation or a control group that does not. The random assignment minimizes bias and maximizes the validity of the findings on an “all other things being equal” basis. Clinical trials, however, have their drawbacks: they are usually expensive, and they take a long time to complete.

Actuaries have long used techniques similar to clinical trials in evaluating cost-saving efforts like disease management programs. For example, many actuaries use the participant/nonparticipant methodology, which, as the name implies, compares the results for those who participate in the program with the results for those who do not. This method is not as expensive to administer as a true clinical trial, but the results are more biased: individuals decide for themselves whether to participate, so there is no random assignment.
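
As a minimal illustration of the participant/nonparticipant idea, the sketch below compares average per member per month (PMPM) costs for the two groups. The member records and field names are hypothetical, and a real study would also adjust for the selection bias just described, for example through risk adjustment or propensity-score matching.

```python
# Minimal participant/nonparticipant comparison of PMPM costs.
# The data and field names are hypothetical; real studies adjust for
# selection bias (e.g., risk adjustment or propensity-score matching).
from statistics import mean

members = [
    {"id": 1, "participant": True,  "pmpm_cost": 310.0},
    {"id": 2, "participant": True,  "pmpm_cost": 285.0},
    {"id": 3, "participant": False, "pmpm_cost": 402.0},
    {"id": 4, "participant": False, "pmpm_cost": 355.0},
    # ... additional members ...
]

participant_pmpm = mean(m["pmpm_cost"] for m in members if m["participant"])
nonparticipant_pmpm = mean(m["pmpm_cost"] for m in members if not m["participant"])

print(f"Participants:     ${participant_pmpm:,.2f} PMPM")
print(f"Nonparticipants:  ${nonparticipant_pmpm:,.2f} PMPM")
print(f"Unadjusted difference: ${nonparticipant_pmpm - participant_pmpm:,.2f} PMPM")
```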

What should these new techniques look like? First, they will be designed to answer the question, “What works for me?” That is different from the current question, “Does this drug, device or intervention work in general, all other things being equal?” Take recent changes in the treatment of diabetes. Before the introduction of the continuous glucose monitor (CGM), diabetics relied on one to four finger pricks per day to measure their glucose level as the main day-to-day tool for controlling blood sugar. Finger pricks were usually done just before a meal. Based on the result, the person could adjust their insulin injection level and/or food intake. While this was helpful, the diabetic had no way to determine whether they were experiencing extremely high or extremely low sugar events in the interim. With a CGM, the person wears a sensor that electronically captures glucose readings at regular intervals throughout the day. Armed with this information, and with the help of a physician, a person with diabetes can not only detect extreme events but also determine which foods are causing glucose spikes. Under both systems, the person with diabetes and their physician will rely on a periodic A1C test to measure the overall effectiveness of the treatment plan.
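
The sketch below gives a rough sense of what a “what works for me” analysis of CGM data might look like, flagging readings above a chosen threshold and tallying them by the most recent logged meal. The threshold, readings and meal labels are hypothetical and are not clinical guidance.

```python
# Illustrative analysis of continuous glucose monitor (CGM) readings:
# flag spikes above a threshold and tally them by the most recent logged meal.
# Threshold, readings and meal labels are hypothetical, not clinical guidance.

SPIKE_THRESHOLD_MG_DL = 180  # illustrative post-meal threshold

# (minutes since midnight, glucose mg/dL)
readings = [(480, 110), (510, 195), (540, 160), (720, 130), (750, 210), (780, 150)]

# (minutes since midnight, meal description)
meals = [(470, "oatmeal with fruit"), (715, "pasta with garlic bread")]

def last_meal_before(t: int) -> str:
    """Return the most recent logged meal eaten at or before time t."""
    eaten = [meal for meal_time, meal in meals if meal_time <= t]
    return eaten[-1] if eaten else "no meal logged"

spikes_by_meal: dict[str, int] = {}
for t, glucose in readings:
    if glucose > SPIKE_THRESHOLD_MG_DL:
        meal = last_meal_before(t)
        spikes_by_meal[meal] = spikes_by_meal.get(meal, 0) + 1

for meal, count in spikes_by_meal.items():
    print(f"{meal}: {count} reading(s) above {SPIKE_THRESHOLD_MG_DL} mg/dL")
```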

Over time, the results from a group of individual diabetics can be collected and analyzed for general trends. This is fundamentally different from the current approach, which starts with a large study group and then attempts to apply the results at the individual level, often taking a trial-and-error approach.

DATA

A good analysis requires good data. Until recently, data related to a clinical trial was limited to data collected in the course of conducting the clinical trial. Similarly, data in a look-alike study was limited primarily to claims data. One of the hallmarks of Managed Care 3.0 will be the availability of new sources of data.

Electronic health records (EHRs) are perhaps the most significant new source of information. EHRs include information not available from other sources, such as the procedures and drugs ordered by the physician and lab results. This type of information has the potential to quickly identify and fill gaps in care. On the flip side, it may result in information overload, a situation in which a physician is bombarded with potential gaps and little or no guidance on how to prioritize them. Prioritizing gaps in care is another area for further research. One more note: Many providers have complained that the input process under many EHR systems is unwieldy and time-consuming. Until that problem is solved, data from these systems must be used with caution.
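
One possible way to keep those alerts manageable is to score and rank gaps before they reach the physician. The sketch below shows a hypothetical prioritization rule; the gap names, weights and scoring logic are illustrative assumptions, not a description of any particular EHR product.

```python
# Hypothetical prioritization of care gaps flagged from EHR data.
# Gap names, weights and scoring rules are illustrative assumptions.

gaps = [
    {"member": "A", "gap": "overdue A1C test",     "clinical_urgency": 3, "months_overdue": 8},
    {"member": "B", "gap": "missed statin refill", "clinical_urgency": 2, "months_overdue": 2},
    {"member": "C", "gap": "no diabetic eye exam", "clinical_urgency": 2, "months_overdue": 14},
]

def priority_score(gap: dict) -> float:
    """Simple weighted score: clinical urgency matters more than time overdue."""
    return 10 * gap["clinical_urgency"] + gap["months_overdue"]

# Present the physician with the highest-priority gaps first,
# rather than an undifferentiated list of alerts.
for gap in sorted(gaps, key=priority_score, reverse=True):
    print(f'{gap["member"]}: {gap["gap"]} (score {priority_score(gap)})')
```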

In addition to EHRs, there are many other sources of data, including:

  • Data from telemonitoring devices, which can alert a consumer or a physician when an adverse event is about to happen
  • Click-stream data that shows what sites consumers visited and what topics they searched, which can be valuable in determining if a consumer is concerned about a potential symptom or is planning a procedure in the near future
  • Census and other government data, which may capture social determinants of health, an area of growing interest

Of course, before using any source of data, the researcher needs to determine its accuracy, reliability and relevance. The researcher also must determine whether the applicable privacy standards are being met.

WHAT’S NEXT?

The promise of Managed Care 3.0 is that new data sources, technology and analytical techniques will enable us to personalize the treatment and prevention of diseases and better manage the business of health care. The risk is that implementation of Managed Care 3.0 will be haphazard, leading to duplication of effort and misdiagnosis of conditions. The question now is, “How can we ensure a successful implementation?”

In the United States, there is no single government board or agency to oversee the cost and quality of health care. Instead, we rely on myriad individuals and organizations to push the agenda forward. Some of these individuals will be overly enthusiastic and optimistic, while others will be unduly cautious and pessimistic. It will be up to actuaries and other analytical thinkers to build the framework necessary for a successful implementation.

Joan C. Barrett, FSA, MAAA, is a consulting actuary with Axene Health Partners LLC.

Copyright © 2020 by the Society of Actuaries, Schaumburg, Illinois.