Comparing apples to apples? CQA has a new audit to make sure
Clear definitions, accurate coding are keys to like comparisons
When it comes to making sure your hospital is comparing like clinical outcomes and not "apples to oranges," there are no easy answers. In a recent report, the Health Care Advisory Board, a Washington, DC-based think tank and research company, calls outcomes tracking and measurement "easily the most complex topic" it has studied in the past decade.
The lack of industry standardization makes it difficult to conduct accurate comparisons that lead to quality improvements. Meanwhile, procedures for tracking outcomes at many hospitals are fragmented, cumbersome, and outdated. Individual hospital departments often have different systems in place to measure quality, which can lead to inaccurate comparisons.
In fact, the National Committee for Quality Assurance (NCQA) in Washington, DC, has become so concerned about the variability in the ways health plans collect and calculate information for its Health Plan Employer Data and Information Set (HEDIS) that it has instituted new audit compliance standards to correct the problem, says Barry Scholl, a spokesman for the NCQA.
"If there is no standardized methodology for the audits in any case, then the mere fact that the audit has been performed isn’t really that helpful," Scholl says. "Having standardization is what creates an environment of true comparability. And that yields information that consumers, employers, and regulators can have some faith in."
The Joint Commission on Accreditation of Healthcare Organizations in Oakbrook Terrace, IL, is moving in a similar direction with its implementation of the ORYX electronic outcomes reporting system. (See QI/TQM, May 1997, p. 68.) The Joint Commission hopes to build a national database of outcomes benchmarks, made comparable through its accreditation processes and surveys.
8 common measurement problems
In a 1993 report, The Coming Scrutiny of Hospital and Health System Quality, the Health Care Advisory Board noted eight recurring problems with outcomes measurement:
• Measurements are not statistically meaningful or accurate.
• Results are not comparable across institutions.
• Measures are not severity-adjusted (for patient acuity).
• Measures are not risk-adjusted (for patient demographics).
• Some indicators have no direct link to provider action.
• Data cannot be collected (due to technological constraints).
• Data are prohibitively expensive to collect.
• Results are easily manipulated.
Inconsistent coding is at the root of many of these problems, says Janice Schriefer, RN, MSN, MBA, CCRN, director of outcomes management for Butterworth Health System in Grand Rapids, MI. "You must have clear data definitions, so you have to use the same coding system," Schriefer explains. "Sometimes that will be ICD-9 coding or DRG-related coding, but the coding system is essential if you are going to have accurate, comparable data."
Schriefer also recommends having your coding system checked by experts to ensure its validity and reliability, and using definitions from the research literature that show how other researchers have classified the information. "You always want to use a classification scheme that has been used previously and let people know what classification scheme you are using," says Schriefer. "That way, there is no confusion along the way."
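To see what shared data definitions buy you in practice, consider a minimal sketch in which every department runs its records through the same diagnosis grouping before reporting counts. The grouping, the code prefixes, and the classify() helper below are illustrative only; they are not an actual published classification scheme, and a real program would use a scheme documented in the literature, as Schriefer advises.

```python
# Hypothetical shared "data dictionary": every department classifies records
# with the same ICD-9-based grouping, so reported categories mean the same
# thing everywhere. Codes and group names here are illustrative only.
DIAGNOSIS_GROUPS = {
    "acute MI": {"410"},        # acute myocardial infarction
    "heart failure": {"428"},   # congestive heart failure
    "hip fracture": {"820"},    # fracture of neck of femur
}

def classify(icd9_code):
    """Map an ICD-9 code to its shared group name, or 'other' if unmapped."""
    prefix = icd9_code.split(".")[0]  # compare on the three-digit category
    for group, prefixes in DIAGNOSIS_GROUPS.items():
        if prefix in prefixes:
            return group
    return "other"

# Two departments coding the same admission now land in the same bucket.
print(classify("410.71"))  # -> "acute MI"
print(classify("820.8"))   # -> "hip fracture"
```

Because every department calls the same routine against the same dictionary, a record coded 410.71 falls into the same category no matter where it is reported, which is what makes cross-department and cross-facility counts comparable.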
Accuracy depends on coding
Carolyn Godfrey, RN, director of quality risk management at Kaiser Permanente Foundation Hospital in Los Angeles, agrees that the complexity of the task demands that fundamentals like coding be simple and clear-cut. Kaiser’s computer system is networked to all 11 of its Southern California facilities. Physicians at the health centers are able to compare outcomes data from throughout the system, thanks to accurate coding and classifications.
"For example, our inpatient data is coded into our system using the ICD-9 coding system," she says. "We do a lot of comparative outcomes data among ourselves, checking to see how we’re doing with respect to other Kaiser facilities. But we also compare ourselves to other facilities outside our network, so we need accurate outcomes data to do that. Our system gives us the accuracy we need."
Health facilities take the initiative
As the health care industry struggles to agree on standards that will improve outcomes comparisons, many hospitals and health plans, such as Cypress, CA-based PacifiCare, are coming up with their own ideas and putting them into place.
When staff at PacifiCare began collecting and analyzing hip-replacement data, they discovered that, in 1994, the hip-replacement rate for the Medicare population in one of the system’s physician groups was nearly zero, says Cheryl Brady, a spokeswoman for PacifiCare. Analysts initially suspected that doctors in that group were rationing care and set out to track and compare data and outcomes.
The data, however, turned up some surprising facts. Physician assistants had visited the home of every elderly patient to eliminate the hazards that could cause a fall and a broken hip. The data pointed to a simple practice for improving patient care, which provided a model that could be modified and implemented in other departments.
Starting from scratch
Another facility that put its own tracking and comparison plan in place is St. Vincent Medical Center in Toledo, OH. Two years ago, the facility decided to scrap its separate, departmental programs and replace them with a severity adjustment system that would unite all the medical departments and their outcomes data to create a single quality improvement program.
"At first, just getting everyone together was the biggest difficulty," says Coleta Schmidlin, administrative director of quality management at St. Vincent. "But after we established focus groups to discuss all the issues, it began to all come together."
The severity adjustment system works by means of an elaborate algorithm that takes into account such factors as the patient’s age, sex, procedure, and comorbidities. A weight is assigned to each of these factors and a severity adjustment level between one (least severe) and five (most severe) is established.
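A minimal sketch of how such a weighted scoring might work is shown below. The factor weights, the procedure table, and the score cutoffs are invented for illustration; the article does not publish St. Vincent’s actual algorithm.

```python
# Sketch of a severity-adjustment calculation in the spirit St. Vincent
# describes: weight each patient factor, sum the weights, and bucket the
# total into a level from 1 (least severe) to 5 (most severe).
# All weights and thresholds below are illustrative only.

def severity_level(age, sex, procedure, comorbidities):
    """Return a severity level from 1 (least severe) to 5 (most severe)."""
    score = 0.0

    # Age: older patients carry more weight (illustrative bands).
    if age >= 80:
        score += 3
    elif age >= 65:
        score += 2
    elif age >= 50:
        score += 1

    # Sex: a small adjustment (illustrative).
    if sex == "M":
        score += 0.5

    # Procedure: higher-risk procedures weigh more (illustrative table).
    procedure_weights = {"hip replacement": 2.0, "cardiac bypass": 3.0,
                         "hernia repair": 0.5}
    score += procedure_weights.get(procedure, 1.0)

    # Comorbidities: one point per comorbid condition, capped at four.
    score += min(len(comorbidities), 4)

    # Bucket the raw score into the five severity levels.
    cutoffs = [2, 4, 6, 8]  # illustrative thresholds
    return 1 + sum(score > c for c in cutoffs)

# Example: an 82-year-old man having a hip replacement with two
# comorbid conditions scores as a level-4 (fairly severe) patient.
print(severity_level(82, "M", "hip replacement", ["diabetes", "CHF"]))  # -> 4
```

Once every patient carries a level like this, outcomes can be compared within a severity band rather than across the whole mix of sick and relatively healthy patients.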
Outcomes vary within DRGs
Schmidlin says the system has proved effective in helping staff make accurate comparisons among patients. "If you’re comparing one patient who is very severe, with lots of comorbid conditions, to a patient who has only one or no comorbid conditions, the outcomes are likely to be very different," she says.
"You can’t just lump all the patients together. Just because they fall under one DRG doesn’t mean they’re all equally sick," Schmidlin explains. "By adjusting for severity, you can start comparing apples to apples."