How to prove a program works: Measure, measure
Too many measure too little
It would be hard to find someone who would argue that patient education and other components of disease management programs are a bad idea, but it would be harder still to find someone who could prove such a program is successful, says Joseph Zibrak, MD, associate director of pulmonary and critical care medicine at Beth Israel Deaconess Medical Center in Boston.
Zibrak, an adviser to Northbrook, IL-based Caremark International's Care Pattern program for asthmatics, is studying ways to evaluate the impact of asthma disease management programs.
Need to put it all together
Many people who run disease management programs could tell you how they have reduced hospital admissions, emergency department visits, or pharmaceutical costs. They might even be able to say they have improved patients' quality of life. But hardly anyone puts together all the data necessary to prove complete success, says Zibrak, and that can lead to mistakes that cost a lot in terms of money and quality of life.
"Studies show that patient education has a measurable impact on patient outcomes," Zibrak says. "But most disease management programs haven't looked at whether they have an equivalent benefit. They've made the assumption that their programs are beneficial based on the literature. They think any strategy will work."
He points out four kinds of data that must be collected and compared against one another to prove a program successful:
1. Patient satisfaction.
A program needs an instrument to determine whether participants find the program useful and understandable.
2. Quality-of-life information.
A program needs a validated instrument, such as the Asthma Type Questionnaire from the Health Data Institute, to see if the program is improving patients' day-to-day lives. How frequently do their asthma symptoms wake them up at night? Are they having emotional problems stemming from their disease? Are they missing work or school because of the disease?
3. Claims history.
A program needs to gather information on the use of health care resources, such as emergency department visits, physician visits, and hospital admissions.
4. Population norms.
You need to be able to compare data such as hospital admissions against the number of admissions you would expect among asthmatics not receiving the intervention. "People generally get interested in enrolling in a disease management program after they've had a significant asthma event, such as a hospitalization," Zibrak says.
"But the literature shows that 80% of people who have such an episode will not have another one in the year following it. So your data will always look good unless you have a control group that didn't have the same intervention." A program can claim it reduced those episodes by 60%, which sounds quite successful, but if you could expect an 80% reduction by doing nothing, then you haven't helped the situation, he says.
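Zibrak's control-group point can be put as a short calculation. In the sketch below, the cohort sizes are hypothetical; the only figure taken from the article is the 80% natural-recurrence-free rate he cites:

```python
# Illustrative sketch only. Cohort sizes are hypothetical; the 80%
# figure (80% of patients who have a significant asthma event will
# not have another one in the following year) comes from the article.
# It shows why a raw before/after drop overstates a program's benefit
# when there is no control group.

def episode_reduction(baseline_events: int, repeat_events: int) -> float:
    """Percent reduction in episodes, year before vs. year after enrollment."""
    return 100.0 * (baseline_events - repeat_events) / baseline_events

enrolled = 100            # hypothetical: patients enrolled after an asthma event
repeats_in_program = 40   # hypothetical: repeat episodes observed in year one

program_claim = episode_reduction(enrolled, repeats_in_program)
natural_course = 80.0     # article: expected reduction with no intervention

print(f"Program claims a {program_claim:.0f}% reduction")        # 60%
print(f"Expected with no intervention: {natural_course:.0f}%")   # 80%
print(f"Benefit vs. doing nothing: {program_claim - natural_course:.0f} points")
```

On these numbers the program's claimed 60% reduction is actually 20 points worse than the 80% reduction regression to the mean would deliver on its own, which is exactly the trap Zibrak describes.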
"Until you have this information, it's just religion," he says. "You're asking people to believe these programs make a difference without being able to prove it. Why would you take limited resources and invest in a disease management program if you really can't show that doing that is going to help control costs or improve quality of life? The program sounds great in year one, but how are you going to justify it in year two?"
One data collection problem for asthma programs is the trend of managed care organizations taking asthma treatment out of the hospital and into urgent care centers, Zibrak says. Events that were recorded as hospital admissions two or three years ago now are not reported as such. It looks like a decrease in admissions, but really it's just a difference in coding.
Another impact of the managed care environment is that more expensive drugs are being prescribed to control asthma. The literature shows that the majority of asthmatics are on suboptimal medication regimens, and disease management programs have identified the patients who need more drugs. That increases the cost of asthma therapy, but with sufficient data collection, you can determine if those higher costs are offset by a reduction in emergency visits or hospital admissions.
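The offset check Zibrak describes can be sketched as a simple net-cost comparison. All dollar amounts and utilization figures below are hypothetical placeholders, not data from the article:

```python
# Hypothetical figures for illustration only; none come from the article.
# Compares the added pharmacy spend from more aggressive asthma therapy
# against avoided emergency and inpatient costs, per member per year.

def net_cost_change(extra_drug_cost: float,
                    avoided_ed_visits: float, ed_visit_cost: float,
                    avoided_admissions: float, admission_cost: float) -> float:
    """Positive result = program costs more overall; negative = net savings."""
    savings = (avoided_ed_visits * ed_visit_cost
               + avoided_admissions * admission_cost)
    return extra_drug_cost - savings

# Hypothetical per-member-per-year inputs:
change = net_cost_change(extra_drug_cost=300.0,
                         avoided_ed_visits=0.5, ed_visit_cost=400.0,
                         avoided_admissions=0.1, admission_cost=5000.0)
print(f"Net cost change per member per year: {change:.2f}")
```

A negative result here would show the higher drug spend being more than offset by avoided acute care, which is the comparison the claims data make possible.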
A helpful impact of managed care should be the move toward integrated provider networks, Zibrak says, that will give organizations more control over all aspects of costs. If patients can get drugs from any pharmacy and can choose any hospital, you won't be able to get a clear picture. But if they're receiving services within a closed network, you should be able to track all the information you need to prove global benefits.
A bad case of tunnel vision
The danger in looking at only one type of data is that you could completely miss the bigger picture, Zibrak says. You could, for example, decide that your pharmacy costs are too high and cut down on the amount of inhaled steroids your physicians prescribe. That would certainly reduce drug costs, but the impact could be more hospital admissions. Or you could choose to stop giving out spacers and peak flow meters, but that could result in children missing more days of school.
"I've listened to many presenters talk about how successful their asthma programs have been, but they're just looking at one factor, such as quality of life," Zibrak says. "That doesn't make it for me. It's just too small a piece of the picture."
Zibrak suggests several tips to improve data collection:
· Set up the data collection mechanism when you set up your program. Make it part of your strategy.
· Use resources that already exist. All of the tools you need are out there; the trick is putting them all together.
· Look for a comprehensive database. A number of companies are developing these.
· Encourage cooperation and data sharing among organizations such as hospitals, primary care practices, and pharmacies.
· Encourage employer participation so you can track work absenteeism, an important quality-of-life measure.
[For more information on data collection in disease management programs, contact the following source: Joseph Zibrak, MD, Beth Israel Deaconess Medical Center, Boston. (617) 632-9975.]