Medical Center Says CMS Star Ratings Miscalculated
August 1, 2018
Vindicating many quality improvement leaders who felt their work was not recognized by the Centers for Medicare & Medicaid Services (CMS) star ratings, a recent analysis by Rush University Medical Center (RUMC) in Chicago determined that CMS has miscalculated hospitals’ star ratings since implementing the ratings system in 2016.
CMS disputes the findings, but two industry professionals who advised and assisted RUMC with the analysis tell Hospital Peer Review that CMS weighted specific categories more heavily than others, skewing the overall ratings.
CMS’ Overall Hospital Quality Star Ratings formula awards hospitals a 1- to 5-star rating based on their performance across seven categories. CMS overemphasized the PSI-90 measure, the Patient Safety and Adverse Events Composite, according to RUMC’s analysis.
CMS weighted the PSI-90 measure more heavily during the first four releases of the ratings, but focused more on complication rates from hip and knee replacements for the latest ratings, released in December 2017, RUMC says.
From 5 Stars to 3
RUMC reported its findings to CMS in May 2018, after CMS notified the hospital that its star rating would fall from a previous five stars to three in the next ratings period. (CMS provides advance notice to hospitals before posting the ratings on the Hospital Compare website.)
The University of Chicago Medicine, the Association of American Medical Colleges (AAMC), and Vizient, a company in Irving, TX, that assists healthcare organizations with data management and quality improvements, worked with RUMC on the analysis.
CMS announced in June that the planned July release of the latest star ratings would be postponed, without announcing a new date.
Meanwhile, a CMS spokesperson tells HPR that the RUMC analysis is “inaccurate” and that “no analysis to date has demonstrated any miscalculations.”
Others disagree. The star ratings system should be improved so that it accurately reflects the work of quality improvement professionals and other healthcare providers, says Janis M. Orlowski, MD, MACP, chief healthcare officer with AAMC.
“It has been the AAMC’s belief for a while that the current hospital rating system does not adequately reflect either the current quality or improvements in quality at many institutions. We have disagreed with CMS on the underlying methodology, which is meant to show differences between institutions but actually sometimes shows stark differences that may be trivial in nature,” Orlowski says.
“It’s supposed to say hospital A is different from hospital B, but if hospital A is just very slightly different or different in an inconsequential way, that can look like a stark difference in the ratings.”
The system also benefits institutions that report fewer measures, Orlowski notes.
For instance, a specialty hospital that only reports on a few measures because the rest do not apply actually has an advantage over large hospitals that report all the measures, she explains.
“It’s not an even playing field. You shouldn’t get a lower rating just because you’re reporting on all the measures,” she says.
“One hospital may not have enough data to report on mortality at all, but another may have just enough to report even though their mortality rate is excellent. Because one hospital doesn’t report at all and one reported some mortality, the hospital that reported ended up with a lower ranking. That shouldn’t be,” she adds.
Quality professionals would understand such data, but within the CMS star ratings system the data can be overemphasized to the point of misleading consumers, Orlowski says.
A CMS spokesperson tells HPR that the star ratings are “based on a scientifically rigorous methodology” that “maximizes the information available in our existing Hospital Compare data.” The spokesperson says that the model was selected only after CMS received extensive feedback from expert panels, hospital provider groups, and patients.
Risk adjustment is another critical flaw in the system, Orlowski says. Hospitals are penalized for factors beyond their control and for taking care of sicker patients and those with low socioeconomic status, she says, but proper risk adjustment could account for those factors.
After the most recent star ratings, Orlowski and her colleagues spoke with hospital leaders at several facilities where the rating had fallen one or two stars, finding that many of those hospitals had demonstrable improvements in many key measures. They didn’t understand how that could be.
They found that the discrepancy was attributable to changes in both how CMS calculated PSI-90 and how it weighted that measure in the ratings calculations. AAMC has supported changing how PSI-90 is applied but did not know how CMS was doing so.
“We also have concerns because only the larger hospitals or the ones that treat complex diseases can report on PSI-90. Not everyone can,” she says.
“The methodology worked so that differences in complications from total hips and knees ended up being more important than anything else. We saw examples where hospitals improved in their hospital-acquired infections, but had a minor difference in total hips and knees that was weighted much more strongly.”
Ratings Can Discourage
The current CMS methodology produces ratings that can seem random and capricious to hospitals, Orlowski says, which can be detrimental to quality improvement efforts. Healthcare professionals can be discouraged if they work hard to improve quality but see that the rating promoted to the public does not reflect their accomplishments, she says.
“We are concerned that the methodology is too fragile. If your overall quality is improving in the institution, there shouldn’t be one or two weak points in the methodology that shift you down one or two stars,” Orlowski says.
“That is a system that doesn’t work. People who work in quality and know they are making progress in these specific areas should see that reflected in their star ratings, as a reflection of how they are moving forward.”
AAMC and other organizations have addressed these concerns with CMS for a while now, before the RUMC analysis, and Orlowski says CMS Administrator Seema Verma has expressed willingness to improve the ratings system.
“I think this is a positive sign that the administrator has expressed this kind of commitment, and I hope we see a good collaboration with the healthcare community to address these issues,” Orlowski says. “We have mutual goals here of making this system transparent and reliable as a way for patients and their families to assess the quality of different institutions.”
Hospital quality leaders should continue the work they’re doing at their own facilities and participate in the dialogue over how to improve the system, she adds.
“There are times when we and other organizations need the opinions and the expertise of quality improvement professionals, the people who are out there working every day with these quality measures, to get a better understanding of how these issues play out in the real world,” Orlowski says. “The more people are willing to contribute and be a part of the process, the more likely we are to make the ratings system better.”
The current ratings system is not helpful and does not inform the public in the way CMS intended, says David Levine, MD, FACEP, senior vice president for advanced analytics and informatics with Vizient. Levine previously was medical director of the emergency department at John H. Stroger Jr. Hospital of Cook County in Chicago.
CMS is, to a large extent, measuring the right things and using the right metrics to determine hospital quality, Levine says. The problems occur when the data are massaged to come up with a singular star rating for a facility, he says.
“There is not a one-size-fits-all rating for all hospitals, of all sizes, for any medical condition,” he says. “If I have an advanced cancer, there is only a subset of hospitals that I am concerned with. A small community hospital with a five-star rating is not going to be appropriate for my advanced, complicated care.”
The effort to simplify ratings, of any type, usually degrades both their usefulness and their accuracy, Levine explains. If a consumer is buying a vehicle, the search does not begin by comparing quality ratings across all types of vehicles, he says. It begins by focusing on the particular type of vehicle that the consumer wants — a truck, or a fuel-efficient car — and then comparing the options in that category.
The consumer’s search for a quality healthcare provider takes the same form, but the CMS star ratings are not helpful in that regard.
“CMS should either break down the rankings by size and type of hospital or by specific procedures that the American public is most interested in making an informed decision about,” Levine says.
Too Much Computer Control?
In addition, the statistical approach used to develop the ratings relies too much on computer decisions about which factors matter most and how to weight them, he says.
“We should know what factors are most important, especially within a subset of types of hospitals or hospitals providing a certain type of care. The computer, now in the current state, picks it differently every time,” Levine says. “If I were in an American history class that covered a broad range of topics and at the end of the semester a computer decides that most of the final exam is going to be on the years 1850 to 1860, is that an accurate reflection of what I learned in the course? That’s what’s happening with the current CMS methodology.”
That scenario has played out recently with CMS emphasizing hips and knees, Levine says. That is an important variable in the health safety domain, but a hospital could have serious deficiencies in other areas and still get a high star rating as long as it excels in this one measure, he says.
This understanding of the flaws in the CMS star rating system may leave some quality professionals feeling vindicated if they think their past ratings have not accurately reflected the quality of their institutions.
That’s a reasonable reaction, Levine says, but not a reason to lose hope or pull back on improvement efforts.
“It’s a very misleading message in its current state, but elements of the ratings are very good and should not be dismissed,” he says.
“The overall approach of the methodology that distills those metrics to a single star rating, as is currently done, is not right, and the way it is presented to the public is not the right approach,” he adds.
The healthcare community seems grateful that flaws in the star rating system are being exposed and that CMS seems to be willing to make improvements, Levine says.
“I’m hopeful that CMS will convene experts in quality improvement to help guide them in creating a ratings system that is meaningful for the public, that actually provides consumers what the star ratings system was supposed to have been all along. That’s my best hope.”
Good news may be coming for Levine and other critics of the star ratings. A CMS spokesperson told HPR that the agency plans to “reconvene a technical expert panel this summer to review the methodology.... Feedback will be used to enhance the methodology in future Star Ratings calculations.”
- David Levine, MD, FACEP, Senior Vice President, Advanced Analytics and Informatics, Vizient, Irving, TX. Phone: (866) 600-0618.
- Janis M. Orlowski, MD, MACP, Chief Healthcare Officer, Association of American Medical Colleges, Washington, DC. Phone: (202) 828-0400.