When CMS recently released its annual Hospital Quality Star Ratings, there were significant changes from previous ratings. Most quality leaders welcome the changes as long-needed improvements that will make the data more meaningful.
Hospital leaders had criticized previous ratings because they believed the methodology used to create them was flawed and produced inconsistent results that made the ratings misleading and not useful to consumers.
CMS rated 4,586 hospitals, with 13.5% receiving five stars, 988 receiving four stars, 1,018 receiving three stars, 690 receiving two stars, and 204 receiving one star. Sufficient data were unavailable for 1,181 hospitals.[1]
More Streamlined Approach
This year’s report represents a pivot to a much more streamlined, straightforward, and predictable weighting system for the measures that go into the ratings, says Beth Godsey, MBA, MSPA, senior vice president of data science and methodology with Vizient in Chicago. Vizient was among many groups that provided input to CMS on how to improve the star ratings.
Godsey describes two major changes in the recent release:
- The discontinuation of latent variable modeling in the ranking methodology. This modeling created unpredictable measure weights that were difficult to interpret. CMS replaced that methodology with an equally weighted measure approach.
- New cohorts for providers in the rankings. Previously, CMS treated all hospitals as if they were equal in offerings, patients, and services. The new approach groups hospitals into cohorts based on the number of measure groups for which they have reportable data.
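The two changes above can be sketched in a few lines of code. This is an illustrative sketch only, not CMS's actual scoring code; the measure names, values, and cohort labels are hypothetical:

```python
# Illustrative sketch only: measure names, values, and cohort labels are
# hypothetical and do not reflect actual CMS data or scoring code.

def domain_score(measure_scores):
    """Equal-weighted average: every reported measure in a domain counts the same."""
    return sum(measure_scores.values()) / len(measure_scores)

# A hypothetical safety-of-care domain with four reported measures;
# under equal weighting, each measure carries a fixed 25% weight.
safety = {"CLABSI": 0.70, "CAUTI": 0.55, "SSI": 0.85, "MRSA": 0.60}
print(domain_score(safety))  # 0.675

# Cohort assignment by the count of measure groups a hospital reports
# (illustrative labels only).
def cohort_label(num_measure_groups):
    return f"{num_measure_groups}-measure-group cohort"

print(cohort_label(3))  # 3-measure-group cohort
```

Because each measure's weight is fixed by the count of measures in its domain, a hospital can see in advance exactly how much any single measure will move its domain score.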
Latent variable modeling made it challenging for organizations to understand how they were evaluated, which made it difficult to drive performance improvement.
“It’s challenging when you’re not exactly clear how individual measures are weighted. In previous releases, it was difficult for organizations to understand the methodology,” Godsey says. “In our conversations with CMS, we suggested an approach that allowed for a much clearer and simpler way of assessing performance and weighting the measures, and they adopted that approach. Several organizations had supported a similar mindset of keeping it simple and making the process clear for healthcare organizations.”
The new approach also addresses concerns about the measures within each measure category. Previously, the importance of individual measures fluctuated with each star rating release, sometimes changing substantially from one year to the next.
“Hospital leaders were scratching their heads trying to figure out what changed, why this is less important than last year, and whether they should change their priorities,” Godsey says. “Now, all the measures within one domain are equally weighted. Therefore, organizations have an understanding that they will see the same weighting next year. That gives organizations a bit of clarity and more predictability in what they will see in the methodological structure.”
Those improvements should help quality leaders drive change in their organizations because they will have a better understanding of what leads to higher ratings. Hospital quality professionals are responding positively to the changes.
Hospital Cohorts Created
Godsey believes the creation of hospital cohorts is a major step in the right direction. Many groups had urged CMS to take this step to make the ratings more meaningful.
“There are still some opportunities in this space, but it does begin to create more of a like-hospital comparison so that hospitals can compare themselves to similar facilities,” Godsey says. “There are still some gaps that we think CMS should consider exploring. Some of the details involve whether you offer transplant or cardiothoracic services, or whether you have high trauma volume or high levels of acuity in certain conditions or specialties.”
The current methodology does not consider those factors. “While there is a step forward in their approach to creating cohorts of organizations based on some level of similarity, we think this level of additional refinement would add significant value in the future,” Godsey says. “We’re certainly excited to see the latent variable modeling removed from the measure weighting. A more streamlined, predictable, transparent way of scoring measures will provide clarity and give hospitals more insight into where they have opportunities to improve, as well as where they’re doing well.”
While the cohorts are an improvement, a true apples-to-apples comparison remains limited by the requirement that a hospital meet minimum patient volumes before some measures are reported.
“The measures you have in a core domain with the star ratings program determine how you are grouped with hospitals that have similar numbers of measure groups,” Godsey explains. “What has changed in that space is that organizations may see more fluctuation than in the past because they used to be evaluated against everybody. Now, they are evaluated within more of a like cohort of organizations. For some organizations with smaller volumes, such as critical access hospitals or a specialty hospital that focuses only on orthopedics, their scores may have fluctuated more than in previous releases. Their volumes in those specific measures reflect only a small number of patients.”
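The volume effect Godsey describes is simple arithmetic. In this sketch the patient counts are hypothetical, chosen only to illustrate the point: a single additional adverse event moves a low-volume hospital's measured rate far more than a high-volume hospital's:

```python
# Hypothetical counts illustrating why low-volume hospitals see larger
# score swings: one additional event shifts a small denominator's rate
# far more than a large denominator's.

def rate(events, patients):
    return events / patients

# Small specialty hospital: 20 eligible patients for a measure
small_swing = rate(2, 20) - rate(1, 20)        # 10.0% - 5.0%
# Large general hospital: 1,000 eligible patients
large_swing = rate(51, 1000) - rate(50, 1000)  # 5.1% - 5.0%

print(round(small_swing, 4))  # 0.05  (a five-percentage-point swing)
print(round(large_swing, 4))  # 0.001 (a tenth of a percentage point)
```

The same one-case change produces a fifty-fold larger swing at the small hospital, which is why scores for critical access and single-specialty hospitals can fluctuate more between releases.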
Hospitals Dropped in Ratings
Godsey has heard from hospital quality leaders wondering why they lost a star or two in the ratings this year. She says it often is explained by the change in the cohort methodology and the weights associated with some measures. Godsey suggests CMS explore defining cohorts that are based more on the types of services provided by the organization and patient volume.
“That speaks more clearly to patients and the community. If a patient is looking for an organization that offers the services they need, they can go specifically to that cohort and evaluate the hospital within it,” she says. “It adds a bit more refinement to the cohorts that we think CMS certainly could explore.”
Godsey also notes the star ratings remain hindered by the lack of contemporary data used to determine the rankings. The lag between when data are reported and when they are made public can be two or three years. That means hospitals that are improving their quality metrics may not see those improvements reported for years, with consumers basing decisions on where a hospital stood years ago rather than where it is today.
“[Hospitals] really need to know how they’re doing now, not in the past. They would like their performance reported publicly to be reflective of today,” Godsey says.
Changes Help Quality Leaders
The improved ratings methodology should bring some comfort to quality improvement leaders because they at least have a better understanding of how their performance will be evaluated now and in the future. There should be less anxiety about the methodology and the resulting score changes.
“That has given [administrators] a sigh of relief because they don’t have to spend the time, effort, and energy to try to explain a more complicated algorithm to their board of directors or their C-suite executives,” Godsey says. “Knowing how they are going to be measured, organizations can double down and address their opportunities for improvement, showing staff how they are going to be assessed by CMS.”
Different Impacts from Changes
The new methodology has resulted in some troubling star ratings for some hospitals, says Kristen Geissler, managing director with Berkeley Research Group in Baltimore. For instance, Geissler is working with hospitals that have gone from a five-star to a three-star rating.
“There were a lot of changes, but they do not all have the same impact,” Geissler says. “The change from latent variable modeling to measure domains has had the largest impact on the overall scores. The peer grouping helps a very small hospital not be compared to a teaching hospital. But for the average-size hospitals, that has not had as much impact on the overall scores.”
Geissler says the changes promise more continuity and predictability, but CMS likely will continue to tinker with the measures and methodology. Each release of the star ratings comes with the retirement of some measures and the addition of new ones.
“If a hospital was doing particularly well in measures that were retired, that could hurt them. Conversely, if they were doing poorly on measures that are added, that could hurt them as well,” Geissler says. “That’s always a potential impact from the updates on the star ratings.”
Hospitals should evaluate which measures are eroding year after year. Study the individual measures within the domain and the overall score in that domain.
“Changing from the latent variable modeling to the simple average has allowed hospitals to do more evaluation of themselves. Before, it was so black-boxed that you couldn’t understand the impact of the individual measures,” Geissler says.
Still Need Fresh Data
The lack of contemporary data still confounds any effort to respond to peer group comparisons. Geissler does not expect any improvement soon with making the data more contemporaneous.
“I think it will be even more challenging in the next release because of the COVID-19 time frame. CMS has some proposals for suppressing data from the COVID-19 time frame. Though that is not final yet, we can draw some good conclusions about the effect on the five-star measures from what CMS has proposed,” Geissler says. “We know January through June of 2020 data likely will not be used. The July through December of 2020 data are still in question.”
With much of the current star ratings based on 2019 data, Geissler says the lack of 2020 data will create a lag in information available for the next round of ratings. “The main takeaway for hospitals today is the need to understand their performance today in all of the CMS measures. The patients they are treating today impact the five-star measures for two to three years out. It is critical to understand current methodology and their trends over the past several years,” Geissler says. “Understanding that now will help your ... performance for the next several years.”
Reference
1. CMS.gov. Overall hospital quality star rating.

Sources
- Kristen Geissler, Managing Director, Berkeley Research Group, Baltimore. Phone: (443) 391-1046.
- Beth Godsey, MBA, MSPA, Senior Vice President, Data Science and Methodology, Vizient, Chicago. Email: firstname.lastname@example.org.