DMAA suggests approach for evaluating DM programs
Association steered clear of standardization
In a quest to meet the growing need for disease management programs to demonstrate their effectiveness, a group of industry stakeholders has come up with a set of processes that can be used to show the outcomes of disease management.
The Disease Management Association of America’s (DMAA) steering committee suggests a preferred approach, not a mandated or standardized approach, for disease management program evaluations, emphasizes Karen Fitzner, PhD, DMAA director of research and program development.
"The steering committee and all those who reviewed the report agree that we shouldn’t be pushing toward standardization at this point," she says.
Fitzner likens the state of today’s disease management programs to the early days of home video systems, when both VHS and Betamax formats were available.
"The natural evolution of the products will help determine what the standards are," she says.
In the meantime, disease management providers must begin to measure their outcomes using scientifically validated methodologies to prove that the interventions they make have an effect on patient health and health care costs.
"Purchasers of disease management programs need to be able to evaluate each on a level playing field, and competing programs need to be able to demonstrate their value," Fitzner says.
It’s not enough to measure the financial savings a disease management program generates, she points out.
Employers are looking at clinical outcomes and decreases in absenteeism, among other factors, Fitzner adds.
"Disease management programs need to be able to prove that they enhance clinical outcomes or stop the decline. It's hard to get support for a program unless you can show that it is improving outcomes," she adds.
Types of metrics
Evaluation should include multiple metrics and should show the link between the intervention and the outcomes, she says. The metrics include:
• Clinical measures: These should be disease-specific outcomes, such as HEDIS measures and other benchmark measures that show that patients are managing their conditions better. These include changes in blood pressure, hemoglobin A1C levels, or other indicators that diseases are under control, improvements in laboratory and other tests, and reduction in complications on a long-term basis.
• Utilization: This is the real economic measure. What are the resources that are being used? This may include lengths of stay, admissions, or visits to the physician. In a well-managed population, these should be decreasing, Fitzner says.
• Financial measures: This tabulates both direct and indirect cost savings, such as per-member per-month savings and dollar savings.
• Humanistic measures: These include how satisfied patients are with the program, whether they are staying with the program, whether purchasers are continuing to buy disease management services, and whether the physicians are happy with the program.
Reporting outcomes is difficult because disease management programs must adapt to local market conditions and customer desires, yet they still must be accountable, she says.
There are a lot of variations in disease management programs, Fitzner points out.
Some are developed in-house. Others are purchased from vendors. Some are off-the-shelf products that can be customized for an individual organization.
"There is a great deal of variation in what the products are and in the needs of each particular disease state. Disease management runs the gamut from diabetes to back pain to high-risk maternity. They are very different disease states requiring very different interventions," she says.
The committee recommends that organizations hire a third party to conduct an objective review of the disease management projects.
"A third-party review takes away the incentive for anybody to overstate or understate the results. It forces the provider or disease management service to agree on definitions of various terms and statements, and it adds a higher level of credibility," she says.
"It’s much easier to have a study accepted for publication that has been done by a third-party research entity that knows what it’s doing. Transparency and credibility are enhanced but, to date, we haven’t seen a lot of those kinds of studies in the literature," Fitzner says.
The committee recommends comparing participants in a program to an equivalent group of nonparticipants, emphasizing that the control group and participants should have comparable severity of disease state, benefit structure, and demographic characteristics.
"There must be a control group, even if the control group is the same population prior to intervention," Fitzner points out.
The control group could be people who chose not to participate in the program, those in a different part of the country, or an employee group that is not participating in disease management.
The strongest studies compare the group that didn’t have the intervention with the one that did, both before the disease management program started and after the program was implemented.
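That before/after, participant/nonparticipant design amounts to what researchers call a difference-in-differences comparison (the label is ours, not DMAA's). A minimal sketch with entirely hypothetical per-member per-month (PMPM) cost figures shows the arithmetic:

```python
# Hypothetical PMPM costs before and after program launch.
# All numbers are illustrative, not from the DMAA report.
participants = {"before": 420.0, "after": 380.0}
comparison = {"before": 410.0, "after": 405.0}

# Change observed within each group
change_participants = participants["after"] - participants["before"]  # -40
change_comparison = comparison["after"] - comparison["before"]        # -5

# Program effect net of the background trend shared by both groups
program_effect = change_participants - change_comparison

print(f"Participants changed by {change_participants:+.0f} PMPM")
print(f"Comparison group changed by {change_comparison:+.0f} PMPM")
print(f"Estimated program effect: {program_effect:+.0f} PMPM")
```

The point of subtracting the comparison group's change is that any trend affecting both groups, such as rising unit prices or a mild flu season, cancels out, leaving an estimate closer to the program's own contribution.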
Measure true impact
If a disease management program already is in place, the most practical method to assess its impact is a pre-intervention/post-intervention design, which evaluates the same population before and after the interventions, Fitzner suggests.
Make sure that what you measure is the true impact of the program: improved outcomes, return to work, decreased utilization.
The results of some outcomes studies have been questioned because only the sickest of the sick were included in the denominator, which skewed the results, Fitzner says.
Including the entire population in your study and evaluating the overall changes eliminates the problem, she adds.
She points out that patients with chronic illnesses tend to move toward the average in terms of wellness whether they have interventions or not.
"People don’t really stay sick forever. It’s not that they are doing fantastically wonderful or fantastically poorly, there’s just a move toward the center over time, which may not be due to the influence of the program. If you don’t include the entire population, you’re opening yourself up for a lot of problems," she says.
(Editor’s note: For more information on the recommendations for assessing disease management outcomes, see the DMAA web site at www.dmaa.org.)