While the move is generally hailed, some point out potential pitfalls
In a move widely welcomed by health care quality professionals, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) and the Centers for Medicare & Medicaid Services (CMS) have signed an agreement to completely align current and future common Hospital Quality Measures in their condition-specific measure sets.
These measures are included in JCAHO’s ORYX Core Measures and CMS’ 7th Scope of Work Quality of Care Measures on heart attack, heart failure, pneumonia, and surgical infection prevention.
Both organizations have made available on their web sites a common measures specification manual, which includes a data dictionary, measure information forms, algorithms, and other technical support information. They are targeting full alignment of the common measures in time for the collection of data on January patient discharges.
"This agreement moves us a lot further toward our common goal of having a standardized set of measures for inpatient hospital services," says Trent Haywood, MD, JD, CMS’s acting deputy chief medical officer and acting director for the quality measurement and health assessment group.
"I truly think the benefit of getting a unified set of measures is that it makes it easier for all parties interested in the QI process to become meaningfully engaged," adds Ken Anderson, DO, MS, CPE, vice president of clinical effectiveness at Memorial Hospital and Health System in South Bend, IN.
A plus for comparative data
For quality professionals focusing on comparative data, this move clearly will yield big benefits, says Robert G. Gift, vice president of strategic planning and business development at Memorial Health Care System in Chattanooga, TN.
"From a comparative data perspective, it will be beneficial to have both of those groups on the same page, using the same metrics, counting things the same way, and using common data definitions for each of the things they are examining," he asserts.
Melissa Roden, vice president for performance management at Memorial, agrees.
"It allows for consistent data," she says. "Right now, with even a small difference in the indicators, depending on what you measure, you may have two different numbers to keep up with in terms of data source."
In addition, Gift says, "when you look at the comparative data, it gives you increased numbers in the sample, where the resultant benchmarks should carry with them greater credibility."
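Gift's point about larger samples yielding more credible benchmarks can be illustrated with a rough calculation (the 85% compliance rate and hospital counts below are hypothetical, not from the article): pooling the same measure across the two programs roughly doubles the number of observations, which shrinks the standard error of a reported rate.

```python
import math

# Hypothetical example: a hospital's compliance rate on one measure.
p = 0.85  # assumed true compliance rate (illustrative only)

# Standard error of a proportion: sqrt(p * (1 - p) / n).
# Separate JCAHO and CMS samples of 250 cases each vs. one pooled
# sample of 500 cases under a single aligned measure definition.
for n in (250, 500):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n}: standard error ~ {se:.4f}")
```

Doubling the sample does not halve the uncertainty, but it narrows it by a factor of about 1.4 (the square root of 2), which is why benchmarks built on merged, identically defined data "carry greater credibility," as Gift puts it.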
"For people who might do benchmarking," Haywood adds, "you need standardized performance measures. Providers in particular were concerned that by not having fully aligned benchmarks, it might be difficult [to be scored fairly] if the way you develop measures for scoring is different.
"Someone who scores well on JCAHO will now similarly score well on CMS; it’s more of an apples-to-apples situation," he explains.
"We have long been looking for a set of indicators for which we will be held accountable to track, improve, and record so we can get a uniform set of benchmark standards," Anderson notes.
"We have historically had problems because of definitions, the ability to select from a listing of measures; it’s been hard to find what the real benchmarks might be. The ability for CMS and JCAHO to say that these are the important measures allows us to really get enriched data, so our benchmarks are better," he says.
Overall QI efforts strengthened
The new JCAHO/CMS alignment will be broadly beneficial for quality improvement efforts, Anderson adds.
"From a provider perspective, when we hear from so many different, but important, sources that they’d like us to track, improve, and report upon a variety of discordant measures, there is little overlap," he notes.
"It thus becomes difficult for provider groups to focus their QI efforts. From a patient perspective, it is difficult for them to understand the nuances when they are not looking at apples to apples. Also, this gives greater power from the perspective of a national agenda to focus on and improve U.S. health care delivery," Anderson adds.
Quality professionals now will be on the same page, he adds. "Quality professionals have a tendency to want to standardize as best we can. If we can’t come to an agreement on a QI project or standards, it’s difficult to determine what’s most important for our own organization."
"The biggest difference for QI professionals is that this reduces their burden," Haywood asserts.
"Traditionally, they have been required to submit information to JCAHO, and similar information — but in a different process — to CMS. This makes their life easier; we’ll both be asking for the same type of information in the same format," he points out.
This change will have long-range benefits as well, Haywood says. "The best thing for quality managers to do is to look at and understand the process we use, because that will be the process going forward," he advises.
"We may have 10 measures on the CMS web site today, but we will continue to build them out. However, the processes will be the same, so quality professionals will be able to anticipate how they will work," Haywood says.
Despite his generally positive response to the JCAHO/CMS agreement, Anderson says he does have some concerns about individuals relying on the measures too heavily, or responding inappropriately.
"Here are some of the vulnerabilities: Not all hospitals are alike, and one of my fears is that in response to this standardized reporting methodology, some hospitals who have historically taken care of any and all patients with a given clinical condition may now focus on their indicator success, and by focusing so much on that, might be more selective in the patients they care for," he warns.
"If, for example, we are measured on how well we do with outcomes of MI [myocardial infarction], and if I know I am being tracked and reported on outcomes of MI, I may wish to take care of only those patients with fairly standard, garden-variety heart attacks. Then, what happens with the others?" Anderson asks.
He sees a challenge at his own facility in terms of stroke data. "We are a referral center, so we see complicated stroke patients in fairly large numbers," he notes.
"A cursory glance at outcomes may indicate we are not as good as we really are." That makes it all the more important, Anderson stresses, to make sure definitions are very clear, that people have the opportunity to examine their patient cohorts, and that the public receives a meaningful explanation that patients can be stratified so the differences between cohorts are visible.
He sees still another potential problem area. "On the current CMS national volunteer reporting, you do not necessarily have to report on all indicators, so you may not elect to report on those that are not good for you.
"Consequently, the group that you included so that you were in the top 10 percentile may be a cohort smaller than the universe of possible reporting institutions," Anderson notes.
"If there are 6,000 hospitals and only 3,000 choose to report on MI data, and if you are in the top 10 percentile, that’s really good, but it also may be good to be in the top 50 percentile because those who were not reporting were not good, so you could really be in the top 20%," he explains.
"This is a nuance that is sometimes difficult for the public to grasp," Anderson points out.
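The selection-bias effect Anderson describes can be sketched with back-of-the-envelope arithmetic, following his hypothetical figures of 6,000 hospitals with 3,000 electing to report. The extreme assumption below, that every non-reporter scores worse than every reporter, is an illustration, not a claim about actual reporting behavior:

```python
# Hypothetical numbers following Anderson's example of voluntary reporting.
total_hospitals = 6000   # all hospitals in the universe
reporting = 3000         # those electing to report the MI measure

# A hospital at the top 10th percentile *of reporters* ranks 300th
# among the 3,000 reporting hospitals.
rank_among_reporters = int(reporting * 0.10)

# If every non-reporter scored worse than every reporter (an extreme
# assumption made only for illustration), that hospital's true
# national standing would be better than its reported percentile:
national_pct = rank_among_reporters / total_hospitals * 100
print(f"Reported top 10% of reporters -> top {national_pct:.0f}% nationally")

# Conversely, the median reporter (1,500th of 3,000) would sit in the
# top quarter of all 6,000 hospitals under the same assumption, which
# is the direction of Anderson's "top 20%" observation.
median_rank = reporting // 2
print(f"Median reporter -> top {median_rank / total_hospitals * 100:.0f}% nationally")
```

The direction of the distortion depends entirely on how non-reporters would have scored, which is exactly the nuance Anderson says is difficult for the public to grasp.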
Gift, however, does not share all of Anderson’s concerns.
"Seeking greater commonality around the data should eliminate a lot of that uniqueness [among hospitals]," he says.
"One of the chief barriers in benchmarking is that we are different — the degree to which we can have common data eliminates a lot of that artificial difference," Gift continues.
However, there may be those instances in which benchmarks potentially could be misleading, he concedes.
"For example, if you look at mortality or LOS numbers for the Medicare population, they may be substantively different than those for non-Medicare," Gift points out. But in such cases, "you just need to keep that in mind — which goes back to the data definitions you are looking at."
Need More Information?
For more information, contact:
• Ken Anderson, DO, MS, CPE, Vice President, Clinical Effectiveness, Memorial Hospital and Health System, 615 N. Michigan St., South Bend, IN 46601. Phone: (574) 647-3104. E-mail: firstname.lastname@example.org.
• Robert G. Gift, Vice President, Strategic Planning and Business Development, Memorial Health Care System, 2525 de Sales Ave., Chattanooga, TN 37404. Phone: (423) 495-8664. Fax: (423) 495-7226. E-mail: email@example.com.
• Trent Haywood, MD, JD, Acting Deputy Chief Medical Officer, Acting Director for the Quality Measurement and Health Assessment Group, Centers for Medicare & Medicaid Services, 7500 Security Blvd., Baltimore, MD 21244-1850. Phone: (877) 267-2323. Web site: www.cms.hhs.gov.
• Melissa Roden, Vice President, Performance Management, Memorial Health Care System, 2525 de Sales Ave., Chattanooga, TN 37404. Phone: (423) 495-5773. E-mail: Melissa_roden@memorial.org.
• Joint Commission on Accreditation of Healthcare Organizations, Oakbrook Terrace, IL. Web site: http://www.jcaho.org.