Study slams Joint Commission’s approach to measuring quality
Standards have little relationship to patient care and safety, critics charge
Criticism of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) is nothing new, but the most recent study landed a solid punch to the gut. The Joint Commission’s standards and all the work it forces upon health care providers are just window dressing that has little effect on the health and safety of patients, according to the researchers behind the study.
They say the Joint Commission has not paid enough attention to the quality measures that have proven important in other industries, in particular, the costs and financial outcomes that most businesses use to measure success. Judge health care organizations by the same standards, and patients will benefit, they say.
Nonsense, said the Joint Commission, which, in an official statement, "categorically [rejected]" the study’s findings and conclusions.
But between the two vehemently opposed sides, some observers say the research is one more example of how the Joint Commission isn’t all it hopes to be. Even if the study is imprecise in pointing out the problems, they say, it’s another alarm bell going off about the overall insufficiency of the Joint Commission’s quality measures.
The study was conducted by John R. Griffith and Jeffrey A. Alexander, researchers from the University of Michigan School of Public Health in Ann Arbor, MI. When it was published in the January 2002 issue of the journal Quality Management in Health Care, it caught the attention of every quality professional who struggles to comply with the Joint Commission standards. The study suggests that there is no relationship between Medicare-based measures of mortality and complications and the scores assigned to hospitals by the Joint Commission.
Griffith and Alexander’s study suggests a serious need to review the system that ostensibly assures quality in 95% of U.S. acute care hospitals, and which is used for Medicare certification and often for state licensure. For 50 years, the researchers say, the Joint Commission has used almost exclusively structural and process measures without tracking more direct performance measures, such as the number of deaths or unexpected complications, the hospital’s financial strength, or its ability to adapt to the latest treatment approaches.
Griffith and Alexander examined Medicare outcomes by comparing Joint Commission scores against Medicare inpatient data prepared annually for "The 100 Top Hospitals: Benchmarks for Success" study by Solucient LLC in Evanston, IL. Solucient does not compete with the Joint Commission but provided $10,000 in funding for a detailed analysis of the stability of its "100 Top" criteria and recommendations for changes.
The research showed problems with Solucient’s ranking system, but the health care industry was more interested in the criticism of the Joint Commission. According to Griffith and Alexander, Solucient’s mortality and complications indexing, adjusted for differences in the kinds of patients and cases treated, shows patients’ chances of having serious adverse events, such as death or complications, are about twice as great at the bottom 20% of hospitals as at the top 20% of hospitals. On other measures, the top quintile is about 1½ times as good.
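The article does not spell out Solucient’s indexing method, but the arithmetic behind a risk-adjusted adverse-event index of this kind is straightforward: divide the adverse events a hospital actually recorded by the number its case mix would predict. The sketch below uses invented figures chosen to mirror the roughly two-to-one gap described above; it illustrates the general technique, not Solucient’s actual model.

```python
# Illustrative sketch of a risk-adjusted adverse-event index: observed events
# divided by the events expected given the hospital's case mix. All numbers
# below are hypothetical; Solucient's actual methodology is not described here.

def adverse_event_index(observed_events, expected_events):
    """Return observed/expected: 1.0 matches the case-mix prediction,
    below 1.0 is better than expected, above 1.0 is worse."""
    if expected_events <= 0:
        raise ValueError("expected_events must be positive")
    return observed_events / expected_events

# Two hypothetical hospitals with the same case mix (60 expected events):
hospitals = {
    "top-quintile hospital": (45, 60.0),     # index 0.75
    "bottom-quintile hospital": (90, 60.0),  # index 1.50, twice the rate
}

for name, (observed, expected) in hospitals.items():
    print(f"{name}: index = {adverse_event_index(observed, expected):.2f}")
```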
The researchers advocate the "balanced scorecard" theory, which is increasingly popular among leading companies. Griffith explains that this theory suggests that a successful organization should achieve in a variety of performance areas, including safety, stable financing, and efficiency, as well as in relations with its customers and workers. Such an organization also would work at the kinds of things Joint Commission criteria emphasize, such as staffing in care units and compliance with codes. Performance data such as Solucient’s suggest that America’s hospitals need real improvement, Griffith says.
"The differences between best and worst are too big to ignore," he says. "Most hospitals make a huge effort to raise their accreditation scores. If they put that energy in a smarter direction, we’d all be better off."
One consultant who has worked closely with the Joint Commission standards says the University of Michigan results may be valid. Even if the methods are off, the conclusions are worth some attention, says Ann Kobs, MS, RN, president and CEO of Type I Solutions in Cape Coral, FL.
According to Kobs, quality professionals increasingly are exasperated with the Joint Commission. She consults with providers all over the country to help them comply with Joint Commission standards, and she has seen firsthand how hard some work and how little connection they see to quality.
The big weak point, as Kobs sees it, is the way the Joint Commission treats its surveyors. The surveyors themselves, she says, are by and large competent and well intentioned, but the Joint Commission does not give them the support and structure they need to do a better job.
"There is no inter-rater reliability among surveyors. No two surveyors look at things the same way," she says. "They could fix it, and if they would, people would be a lot happier."
Even though there are many good Joint Commission surveyors, the system does not ensure that they all are adequately trained and that the inadequate ones are weeded out, Kobs says. The result is that surveyors often create problems for providers that would be unnecessary if the surveyors all understood their work as well as their bosses at Joint Commission headquarters do.
"I’ve taught surveyors, and they come up with some of the goofiest things out of their mouths," she says. "The number of hospitals that have turned things upside down just because a surveyor said’ is enormous. People end up rewriting job descriptions and making other big changes for no reason, and that’s big bucks."
Kobs says that she actually has a lot of sympathy for the conditions under which the surveyors must work. They are, on the whole, underpaid and overworked, she says.
"They don’t even get paid for their days of travel," she says. "Then they have a tight budget for the hotel, cars, and meals. Every year, the Joint Commission adds more things for the surveyors to do."
Years of concern about the Joint Commission and the increasing burden it places on accredited organizations are leading some to consider quitting the system. That is a legitimate option, but don’t expect the decision to be easy. (For more on quitting the Joint Commission, see "Look to other quality measures besides the Joint Commission," in this issue.)
Part of the problem with the Joint Commission’s approach, the researchers say, is that hospitals do not get enough information on how they might improve. About one hospital in 12 gets a nearly perfect score, with no Type I recommendations, and only three of 100 get "conditional" accreditation, according to statistics from the Joint Commission. With so many hospitals receiving similar scores, the system does not encourage competition, Griffith says. There is also a problem with understanding exactly what factors contribute to high performance. Joint Commission criteria are a consensus of what seems to work, rather than practices tested against real performance, he explains.
"JCAHO scores four dozen separate activities in hospitals, calculates a weighted overall score, and makes a final decision to accredit. We expected to see good’ JCAHO hospitals get good’ performance scores — be safe, well financed, efficient, and progressive," he says. "The data show the hospitals with the worst JCAHO scores have as good performance as the group with the best."
The disconnect suggests either something was left out or something went wrong, and Griffith says he thinks it is something left out.
"It could be problems with the way JCAHO’s inspectors assign the scores, but it’s more likely the things JCAHO does not measure, particularly employee and doctor learning and enthusiasm."
The Joint Commission immediately issued a statement calling the University of Michigan study "seriously flawed." The statement went on to say that the Joint Commission "categorically rejects the findings and conclusions," largely because the measures used by the researchers are not valid when applied to health care quality.
Paul Schyve, MD, senior vice president of the Joint Commission, tells Hospital Peer Review that the two measures alleged by the authors to be indices of the quality and safety of health care — the mortality index and the complications index — have long been dismissed by credible researchers and other experts in performance measurement. This is because both measures are global in nature, meaning they apply to a wide variety of patients who present to hospitals with everything from nonurgent conditions to extreme emergencies, and both are very difficult to adjust for risk: adjusting outcome expectations based on how sick patients are and how many coexisting diseases they have simply does not work well.
"The problem with this study is that the measures used were not appropriate measures," Schyve says. "They used seven measures, and two of the measures — mortality and complications — seem, on the face of it, to be related to quality and safety, but they’re really not when you look at the data."
In fact, the Centers for Medicare & Medicaid Services stopped publishing data on a similar measure — hospital-specific mortality rates — for these very reasons almost a decade ago, he says.
"The measures used in this report, therefore, fall well short of being a gold standard against which accreditation or any other evaluative process might be assessed," Schyve says.
The mortality index and complications index used by the authors as measures of hospital quality and safety are not found in any contemporary measure sets currently being developed or in use for purposes of measuring and improving the quality and safety of health care in the United States, Schyve says. These include:
- measures used by CMS’s Peer Review Organizations to monitor and improve hospital care;
- highly respected HEDIS measures that have been developed and are used by the National Committee for Quality Assurance to monitor and report the results of services provided under health plans;
- the Joint Commission’s own consensus-based hospital core measure sets that are due for implementation later this year;
- hospital core measures being identified by the National Quality Forum.
According to Schyve, the forum increasingly is being viewed as the "final common pathway" for the endorsement of quality-related measures.
Schyve says the fatal flaw in the University of Michigan research is that it treats health care like any other business. Health care is unique, and some measures that are useful in other industries just don’t work, he says.
"In other industries, the quality of what an organization produces has been related to its business success. If your product is good, you make money, and if you’re making money, your product must be good," Schyve says. "That relationship has not been shown in health care. Those who choose and receive the service typically are not the ones who pay for the service, and those who invest in improving the service are not necessarily the ones who will reap benefits from that improved service."
Also, a significant portion of health care is provided in not-for-profit settings and government facilities. All of those unique factors are known to the Joint Commission and should be known to other researchers, Schyve says.
Griffith says he has heard this criticism before but disagrees. "I have a hard time understanding why the health care field can’t think in these terms."
JCAHO points to ORYX initiative
Schyve says the University of Michigan research will not change the way the Joint Commission measures quality.
"The authors of the article appear to have ignored, or not to have been aware of, the substantial body of work and evidence in support of currently accepted measures of hospital quality and safety," the Joint Commission said in its formal statement.
The remaining five measures used by the authors in the study address hospital financial or business performance, but Schyve says the Joint Commission accreditation standards solely address the safety and quality of care provided in hospitals and other types of health care organizations. And that’s intentional, Schyve says. The reason is not that the Joint Commission has never heard of the financial measures used by other types of industries, as the researchers suggest. "There are no studies in the referenced literature, including the study at hand, that establish any defined relationships between hospital financial or business performance and the safety and quality of care provided by that hospital," he says. "The authors have therefore engaged in an interesting but speculative and meaningless ‘apples and oranges’ comparison."
Still, the Joint Commission is not ignoring all of the factors that the researchers say need more attention. Schyve points to the Joint Commission’s ORYX initiative. That project incorporates performance measurement into the accreditation process and is aimed at creating critical links between accreditation and the outcomes of patient care.