Look at comparative data with an eye toward quality
Distinguish between expected, unexpected variation
By Patrice Spath, ART
Brown-Spath Associates
Forest Grove, OR
Most health care organizations are involved in local, regional, or nationwide comparative performance measurement initiatives. Gathering data for these projects can be time-consuming. Nonetheless, significant quality improvement benefits can be achieved if the data are used wisely.
Quality management professionals play an important supportive role in helping caregivers interpret comparative results and initiate improvement projects. The data-gathering exercise will be futile if the recipients of the information don’t know how to apply the new knowledge that can be gained from the comparative results that periodically arrive in the mail.
The data provided by a measurement project illustrate your facility's rate for each performance measure and compare that rate to the rates of other project participants. A typical comparative report is shown in the chart on p. 15. In this report, the primary cesarean rate at fictitious Memorial Hospital is compared to a peer group of facilities. Memorial Hospital's actual rate is represented by the line that appears to the right of the bar; the average rate for all hospitals is represented by the line that appears to the left of the bar. The bar represents the spread of cesarean rates that fall within two standard deviations of the mean each quarter. Many measurement project administrators provide standard deviations for the data set, either graphically or numerically, which allows providers to see how their rate of performance compares to the peer group.
To understand how to react to comparative data like this and see how these data can assist in your performance improvement efforts, it is important first to discuss the concept of process variation. A process is a series of interconnected and interdependent performance steps. These are the steps taken by health care professionals as they perform their daily patient care activities. In most instances, the processes performed by individual providers are interrelated with those performed by other providers; where one process ends, another begins. Thus we have health care systems, which are simply networks of interconnected and interdependent processes designed to achieve a specific goal.
The numbers presented in any comparative performance measurement project represent the output from a system of interrelated processes in your hospital. For example, if a patient develops bacteremia during her hospital stay, a series of patient care activities occurred, and the end result was an infection. A measure of your nosocomial infection rate tells you the number of infections that result from the "way you do things" at your hospital. This includes the processes performed by nurses, physicians, operating room technicians, housekeeping, and so on. All processes have some sort of variation inherent in them. For example, the manner in which nurses prep patients for surgery will vary slightly from one day to the next. Process variation is also related to the unique types of patients treated in your facility each day. Because variation is inherent in all processes and systems, you should expect to see fluctuations in your indicator data.
Health care providers must learn to distinguish between an expected level of variation and an unexpected level of variation. An expected level of variation would be judged due to common causes, and unexpected variation would be due to special causes. At a minimum, the presence of special-cause variation requires investigation and resolution. However, the organization also may choose to reduce common-cause variation through initiation of a performance improvement project.
What is the extent of process variation?
For each performance measure, your rate is compared to the rate reported by other hospitals. A standard deviation is usually calculated for each indicator, and you are shown where your performance falls in relation to it. Many organizations begin analyzing the cause of variation when the data show them to be two or more standard deviations above or below the mean rate. By definition, if your rate falls two standard deviations from the mean, there is only about a 5% chance that the result is due to common-cause (random) variation alone; in other words, the difference is very unlikely to be explained by chance. If your rate varies more than two standard deviations from the mean, this is considered unexpected, or special-cause, variation that requires further investigation. Such an investigation should have taken place at Memorial Hospital when it discovered that its cesarean rate for the months of April through June was more than two standard deviations from the mean for all facilities.
Because most providers will rarely fall two standard deviations from the mean on any indicator, some hospitals use 1.5 standard deviations from the mean as a starting point for further investigation. At 1.5 standard deviations from the mean, there is still only about a 13% chance (roughly one in seven) that the result is due to chance or random variation, so such a result is unlikely to reflect chance alone.
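The screening rule described above can be sketched in a few lines of code. This is a simplified illustration, not part of any measurement project's methodology: the peer rates and Memorial Hospital's rate below are hypothetical numbers, and real projects may compute peer statistics differently (for example, weighting by patient volume).

```python
from statistics import mean, stdev

def flag_variation(hospital_rate, peer_rates, threshold=2.0):
    """Return the hospital's z-score against the peer group and whether
    the rate falls at or beyond the chosen standard-deviation threshold."""
    m = mean(peer_rates)
    s = stdev(peer_rates)              # sample standard deviation of peer rates
    z = (hospital_rate - m) / s
    return z, abs(z) >= threshold

# Hypothetical quarterly primary cesarean rates (%) for a peer group
peers = [14.2, 15.8, 13.9, 16.1, 15.0, 14.7, 15.5, 14.4]

# Hypothetical rate for Memorial Hospital; threshold=1.5 would apply
# the more sensitive trigger discussed above
z, investigate = flag_variation(18.9, peers, threshold=2.0)
print(f"z = {z:.2f}, investigate: {investigate}")
```

A rate well inside the peer spread would return `investigate: False`; lowering `threshold` to 1.5 simply widens the net, at the cost of more false alarms from ordinary common-cause variation.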
When a hospital identifies itself as varying 1.5 or 2 standard deviations from the mean for any performance measure, the following common set of actions should be taken:
• Confirm the accuracy of the data submitted to the project coordinator.
— Verify the numbers in your internal database.
— Verify the numbers from your abstract sheets or other input documents against the values on the comparative report.
• Review the data definitions for the indicator.
— Verify that the data you collected complied with the approved data definitions found in the project’s data dictionary.
— Discuss your data definitions with other hospitals in your peer grouping, call the project coordinator to verify your definitions, and discuss the definition with the people coding or abstracting the data in your hospital.
If you find data quality problems to be the cause of the variation, stop your analysis at this point and resubmit corrected data to the project headquarters (or proceed as advised by the measurement project coordinator).
Analyze the variation
If you are satisfied that the variation is not caused by data collection and/or definitional problems, your next step is to initiate a more detailed analysis of the variation. The precise steps of the internal analysis of rate variations will vary from hospital to hospital and will be different for each performance measure. Generally, the analysis and performance improvement actions follow the steps detailed in the chart at right. Note that a team is organized to evaluate the cause of variation. Team members should include representatives from all disciplines thought to be involved in the processes affecting the performance rate.
It is important that the team understands its charge. Team members should be reminded that the data are not intended to discriminate between "good" and "bad" practices. The data merely show that something is different at your hospital, and it is the team's responsibility to discover what those differences might be. To guard against people's natural tendency to "rationalize away" variations, have the team brainstorm all possible causes.
One issue to be considered is whether or not your patient population is sicker than patients treated at other facilities. Many performance measures include a severity-of-illness or risk-adjustment factor. For example, the Quality Indicator Project of the Maryland Hospital Association uses the risk index derived from the Centers for Disease Control and Prevention’s National Nosocomial Infection Surveillance System when calculating and reporting surgical site infection rates. Patient risk adjustments such as this can help clinicians evaluate whether observed outcomes should be attributed to provider variation or to other causes, such as the mix of patients treated. If the performance measurement data are considered to be risk-adjusted, the team should determine whether all dimensions of risk have been adequately accounted for. The dimensions of risk that are commonly considered important patient characteristics are listed below:
— age, sex, race, and ethnicity (demographics);
— acute clinical stability;
— principal diagnosis and its severity;
— comorbid illnesses, chronic diseases;
— physical functional status;
— cognitive and psychosocial functioning;
— cultural and socioeconomic attributes;
— patient attitudes and preferences for outcomes.
Do your best to account for risk
If the comparison data are derived solely from claims data, not all of these dimensions will be accounted for. That's why it is important to further investigate the impact of relevant dimensions on your performance results. Likewise, not all indicators are risk-adjusted. For example, intrinsic patient risk and other factors are not likely to be reflected in primary cesarean rates. For this reason, relevant risk issues may need to be explored further by the investigation team. It is impossible to control for all risk factors, but your organization can better interpret comparisons of outcomes across hospitals with knowledge of what risk factors have been accounted for in the data comparisons. For example, an organization would have to exert caution in comparing outcomes between public hospitals that see many poor patients and suburban hospitals that treat a relatively affluent clientele unless the data have been risk-adjusted for socioeconomic status.
Remember, however, that risk adjustment is necessary only for factors that should actually affect the process. Although many organizations try to risk-adjust for everything, adjustment is not needed for factors that should have no influence on the process. For instance, there is no need to adjust for socioeconomic characteristics when measuring patient satisfaction with the courtesy of service. Providers should strive to be courteous to all patients regardless of socioeconomic status.
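One common way to account for case mix is indirect standardization: compare the events you observed to the number you would expect if your patients experienced the peer-group rate within each risk stratum. The sketch below illustrates the arithmetic only; the strata, patient counts, and rates are entirely hypothetical, and real measurement projects (such as those using the NNIS risk index) define their strata and reference rates differently.

```python
# Indirect standardization sketch with hypothetical data.
# Each tuple: (stratum label, patients in stratum,
#              observed events, peer-group rate for that stratum)
strata = [
    ("low risk",    200, 4, 0.015),
    ("medium risk", 120, 6, 0.040),
    ("high risk",    40, 5, 0.100),
]

# Observed events across all strata
observed = sum(obs for _, _, obs, _ in strata)

# Expected events if this hospital's patients had experienced
# the peer-group rate in each stratum
expected = sum(n * rate for _, n, _, rate in strata)

oe_ratio = observed / expected
print(f"Observed: {observed}, Expected: {expected:.1f}, O/E ratio: {oe_ratio:.2f}")
```

An O/E ratio near 1.0 suggests performance in line with peers once case mix is accounted for; a ratio well above 1.0 points the team toward process differences rather than patient mix.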
Involvement in a comparative performance measurement project does not stop once the data have been gathered and submitted to the project headquarters. The most important task is to analyze the results. It is far too easy to shrug off performance differences by blaming them on patient severity or unique circumstances. Gather data to validate the team’s assumptions about what might be causing the variation. When process improvement opportunities are identified, take the appropriate actions and then continue to monitor performance. Rather than viewing analysis of comparative data as a "fault/no-fault" exercise, use the information as the first step toward making the best even better.
Related Reading
Iezzoni LI. Risk Adjustment for Measuring Healthcare Outcomes. Chicago: Health Administration Press; 1997.