Measuring return on investment for DM isn’t easy

Because disease management programs do not provide instantaneous savings, the decision to invest in them represents a belief that savings will occur down the line as a result of the programs' effectiveness. A November 2003 AcademyHealth issue brief, Evaluating ROI in State Disease Management Programs, says the challenge for policy-makers is to prove that disease management both improves health and yields an economic return. The brief was published under the State Coverage Initiatives program, funded by the Robert Wood Johnson Foundation, in Washington, DC.

Thomas Wilson, an epidemiologist and principal with Wilson Research of Loveland, OH, says that if the fees states pay to develop their own disease management programs, or to contract with a vendor, are less than the savings from reduced use of inpatient and outpatient care, prescription drugs, and other services, then the states have earned a positive return on investment (ROI).
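The arithmetic behind that definition can be sketched as follows; the dollar figures are hypothetical, invented purely for illustration, since the article reports no actual fees or savings:

```python
# Minimal sketch of the ROI test Mr. Wilson describes: the return is
# positive only when savings exceed program fees. All figures below
# are hypothetical examples, not data from any actual program.
def roi(program_fees: float, savings: float) -> float:
    """Net savings per dollar of fees paid."""
    return (savings - program_fees) / program_fees

# A state pays $1.0M in vendor fees and sees $1.5M in reduced
# inpatient/outpatient, prescription drug, and other service costs.
fees = 1_000_000
savings = 1_500_000
print(f"ROI: {roi(fees, savings):.2f}")  # 0.50 -> positive return
```

If savings fall short of fees, the same formula yields a negative number, i.e., the program cost more than it saved.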

Measuring the financial return associated with disease management programs is difficult because changes in health care costs over time cannot be assumed to be due only to the disease management intervention in the population that received the disease management services. Other external factors also could have played a role, Mr. Wilson says. For example, costs could have dropped in the disease management population because that group recently had been exposed to a heavily promoted new drug, or because they experienced a change in benefit design, or any of a number of other external factors.

According to Mr. Wilson, there are two general approaches to calculating ROI — direct and indirect. A direct assessment of ROI uses primary data only. It must include at least one outcome metric that is available in both the intervention and reference populations. Mr. Wilson says that to substantiate an ROI estimate, it is strongly recommended that at least one proximate outcome metric be measured in both groups as well. If the clinical metrics change in the same direction as the financial metric, this parallel movement provides some assurance that the financial impact reflects a genuine change in clinical status.

Using secondary data

An indirect ROI assessment, according to Mr. Wilson, is one that uses secondary data, such as that found in a benchmark-type design. Such an analysis should include at least one proximate outcome metric; the ultimate outcome is inferred. An example of an indirect assessment would be the imputed savings of lowering blood pressure over a five-year period, based on an acceptable formula for calculating savings. Mr. Wilson says it should be noted that ROI assessments are easily biased by both nonequivalence and lack of comparability.

ROI is always an estimate, he says, and there are many biases that can influence it. "For any study," Mr. Wilson says, "pure objectivity and independence is a myth. Therefore, one should still assess the extent to which potential conflicts of interest might compromise the results. Ideally, the evaluator: 1) is qualified; 2) has little or no conflict of interest bias prior to the study; and 3) is not subject to any pressure, whether direct or indirect, from the client during the study."

Given the uncertainty associated with disease management analyses, he cautions, it probably is not possible to prove that disease management positively affects ROI by the legal standard of "beyond a reasonable doubt." A more reasonable goal, he says, is to aim for a "preponderance of evidence," which means that ROI assessments should not be based on a single study. Rather, evidence should be refreshed constantly with new data, which will assure those who pay the health care bills that the investments they made months or years earlier were intelligent ones.

Along the same lines, a Center for Studying Health System Change issue brief, Disease Management: A Leap of Faith to Lower-Cost, Higher-Quality Health Care, said that disease management programs are difficult to evaluate systematically because they are rarely implemented consistently across health plans and vendors and often evolve over time. Several studies have demonstrated that specific disease management programs can improve patient care and reduce service utilization, the report says, but the evidence varies widely across health conditions and types of interventions.

The authors noted that most health plans are interested in programs that can produce relatively short-term reductions in health care utilization and costs, because high membership turnover makes it difficult for plans to capture long-term savings. But employers may value longer-term results beyond those of interest to health plans, such as reductions in absenteeism and work-related injuries and improvements in worker productivity and satisfaction.

The center’s survey of several markets seems to indicate that disease management programs are achieving desired results in some, but not all, health plan settings. The report noted that a Seattle plan that dropped most of its programs in 2002 found that only a prenatal care program for high-risk pregnancies produced a positive return on investment and improved patient outcomes. The plan’s other programs reportedly were expensive to administer and served only limited numbers of members.

Other plans in Seattle; Greenville, SC; and Miami reported that some disease management programs can improve clinical performance or patient outcomes, although some still lack clear evidence of economic return on investment.

Insurer sees value

One insurer, convinced of the cost-effectiveness of such programs, has been offering lower premiums to fully insured employers that include disease and/or case management programs in their health plans.

Like health plans, the survey report said, employers have difficulty evaluating disease management effectiveness. A few large employers operating disease management programs independently of health plans have found evidence of program achievements. The center reported that one employer that offered an evidence-based program to manage workers’ serious medical conditions found that one in 16 patients was misdiagnosed, creating meaningful opportunities to improve care. That employer also reported saving more than $2 in health care costs for every $1 spent on disease management.

Overall, however, relatively few employers have been able to assess the performance of disease management programs for their specific employee populations, the report explained. In part, this is because health plans often do not have enough participants from any single employer to support employer-specific assessments, and many employers have not attempted to model systematically the health or economic effects of disease management activities on their work force.

"Lack of consistent evidence of improved quality and reduced costs has prevented more rapid acceptance of disease management programs, according to some employers," the report concluded.

Economic viability in DM

Finally, what the American Association of Health Plans/Health Insurance Association of America (AAHP/HIAA) described as groundbreaking research announced at a Washington, DC, briefing Nov. 6, 2003, purports to overcome limitations in previous research and show that some disease management programs are economically viable. The associations said some studies in peer-reviewed literature used well-designed methodologies to find significant cost savings in disease management but were based on the experiences of a single health plan and the extent to which such results can be generalized is uncertain.

Likewise, other studies did not adequately address important methodological issues such as the statistical phenomenon known as "regression to the mean," something that occurs when, for example, a disease management program enrolls patients who had particularly high utilization of health care services in the year before the program started. Costs for such patients would be expected to fall — to regress to the mean — in later periods of time, regardless of whether the patients participated in a disease management program. Failure to control for regression to the mean could result in the effects of the disease management program being overstated.
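The mechanism described above can be demonstrated with a small simulation; the cost distributions below are invented for illustration and carry no connection to any program in the article. Patients selected for their high year-one costs show lower average costs the next year even though nothing is done for them:

```python
import random

random.seed(0)

# Each simulated patient has a stable baseline cost plus independent
# year-to-year noise. No intervention occurs between the two years.
# All distributions are hypothetical, chosen only to show the effect.
n = 10_000
baseline = [random.gauss(5_000, 1_000) for _ in range(n)]
year1 = [b + random.gauss(0, 2_000) for b in baseline]
year2 = [b + random.gauss(0, 2_000) for b in baseline]

# "Enroll" the top 10% of patients by year-one cost, as a naive
# disease management evaluation might.
enrolled = sorted(range(n), key=lambda i: year1[i], reverse=True)[: n // 10]

avg1 = sum(year1[i] for i in enrolled) / len(enrolled)
avg2 = sum(year2[i] for i in enrolled) / len(enrolled)
print(f"Enrollees' year 1 average: ${avg1:,.0f}")
print(f"Enrollees' year 2 average: ${avg2:,.0f}")  # falls with no program
```

Because the enrollees were partly selected on unusually bad luck in year one, their second-year average drifts back toward the population mean; an evaluation that credited the drop to a disease management program would overstate its effect.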

The AAHP/HIAA survey looked at 10 health plans that operate 25 different disease management programs for congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease, low back pain, and end-stage renal disease. Researchers said the 10 plans represent a variety of model types, geographic regions, and enrolled members. Included were Medicare and Medicaid beneficiaries, enrollees in commercial HMOs and preferred provider organizations, and members of self-funded employer plans. The AAHP/HIAA said the disease management programs had similar elements that could be replicated in other health plan settings, including patient education materials such as information on self-care management, telephone-based nurse case management, distribution of information to physicians about clinical practice guidelines and utilization patterns, and home visits.

Each plan was asked if it had evaluated the effects of its disease management programs on utilization or cost and, if so, what methodology was used to conduct the evaluation. "As expected," the researchers added, "no evaluation met the 'gold standard' of a randomized, controlled study — health plans face enormous barriers to randomly assigning patients to treatment and control groups in a pure experimental design. But all the plans used valid, nonexperimental methods in an effort to rule out plausible, alternative explanations [such as regression to the mean] for any decreases in utilization or cost among disease management enrollees."

The report concentrated on eight of the 25 evaluations that were particularly thorough in ruling out plausible alternative explanations. The researchers said that most eliminated the potential for selection bias, which would occur if those selected for disease management were systematically different from those not selected, by enrolling all people with a particular condition in the program for that condition, rather than enrolling only an advantageous subset of people with the particular condition. All the evaluations tracked utilization and costs for several time periods before the disease management program began, and for several time periods after the one-year point. Some evaluations achieved a form of randomization by gradually phasing in the disease management enrollment across geographic areas, simulating the effect of a randomized, controlled study.

According to the report, while quality of care was not the main focus of the survey, all of the disease management programs showed improvements in clinical outcomes consistent with published literature. "It appears clear," according to the researchers, "that when a chronic condition such as asthma or diabetes is managed appropriately on a continuing basis, patients enjoy a better quality of life and avoid many medical crises."

Key points made by the researchers included:

  • Asthma disease management programs reduce total health care costs and show a strong return on investment.
  • Disease management programs for congestive heart failure reduce emergency department visits and inpatient admissions by one-third.
  • Disease management programs for low back pain provide a strong return on investment.
  • Diabetes disease management programs reduce per-member, per-month costs, inpatient days, inpatient costs, and total costs.
  • Disease management programs for multiple chronic conditions provide a major ROI.

The researchers explained that the eight evaluations they reported on "break new ground in overcoming the limitations of previous research on disease management. Unlike some earlier studies, these evaluations are valid because they address important methodological issues, such as the statistical phenomenon known as regression to the mean. And these evaluations are generalizable because they cover multiple health plans, different areas of the country, and a diverse range of people of various ages from different socioeconomic backgrounds."

[Contact Mr. Wilson at (513) 289-3743. E-mail: twilson@wilsonresearch-llc.com. For the Center for Studying Health System Change report, go to: www.hschange.org. Phone: (202) 484-5261. For the AAHP/HIAA report, go to: www.aahp.org.]