TJC report shows quality continues to improve
Some measures still show room for improvement
There was mostly good news for quality professionals in The Joint Commission's latest annual report on quality and patient safety. For example, between 2002 and 2008, compliance with the quality measures improved to 96.7% from 86.9% for heart attack care; to 91.6% from 59.7% for heart failure care; and to 92.9% from 72.3% for pneumonia care. In addition, hospitals showed steady improvement on surgical care measures, and more than 99% achieved the asthma care measures. From a global perspective, 90% of American hospitals achieved greater than 90% performance on eight of 28 measures tracked during 2008. These latest figures also continue a year-over-year improvement pattern, one that becomes increasingly difficult to sustain the higher the compliance rate achieved in the previous year.
The report looks at performance by accredited hospitals on 31 care measures for heart attack, heart failure, pneumonia, surgery and children's asthma.
"What's gratifying is that we continue to see improvement," says Jerod Loeb, PhD, executive vice president of The Joint Commission's division of quality measurement and research. "We see excellent performance on the part of a lot of hospitals particularly with those [measures] that do connect to salutary effects on health outcomes. We felt this was significant because what we're trying to do, to the extent possible, is create highly reliable care, and with performance at the 90% level, the fact that most, if not all, patients are receiving evidence-based treatment is important to care quality."
What does he think makes it possible for hospitals that are already at a high level of quality to continue to improve? "A relentless focus on processes of care we know are linked to good outcomes," Loeb replies. "Most of these measures are related to specific processes of care, specific medications, prophylactic antibiotics, or timing issues. [Improvement occurs when] all of these are areas of focus, reminder systems and checklists are created, and you have an infrastructure where all processes are related."
When the curve "flattens out," he notes, it does become a bit harder to achieve perfection. "If you look at performance in the years immediately following the start of public reporting in 2002, the slope of improvement was large, which was probably not a surprise," he continues. "Part of the answer [for why improvement has continued] is that hospitals realize it is not just these processes but others that may not be reported as much as these that are important."
Does only what's measured improve?
Loeb's comments raise an interesting point: Are hospitals improving only in those areas that are publicly reported, linked to accreditation, or tied to pay-for-performance models? "All of the above," says Patrice L. Spath of Brown-Spath & Associates, Forest Grove, OR. "If you look at it from the government's standpoint, you want to get the biggest bang for your buck, so they tend to pick high-volume, high-cost diagnoses, but as a consumer, if I do not have [one of those conditions] does that mean I don't also deserve high-quality care?"
There are two different ways of looking at care choices, she continues. "During the health care reform debate, some were looking at population-based thinking while others used individual-based thinking, which led to the 'death panel' comments. On an individual basis you would not like it if someone decided it was not worth $20,000 to give you a pill to extend your life for one year. But if we look at it from a population-based standpoint, can we afford that extra year for more people and not afford to vaccinate our kids? Medicare, for example, operates from a population-based focus on right diagnoses to improve quality; but as an individual, if I have pancreatitis, I should still get high-quality care."
"To some extent I worry about the same thing, because what gets measured gets improved," concedes Loeb. "But we know that these particular measures represent not only things hospitals should do, but also the bread and butter of American acute care."
Loeb continues: "Because of the importance of some of these measures in terms of incentives, pay for performance, public reporting, and so on, there is a focus, in part, at least because there is an incentive payment linked to it. The point is well taken that there are a lot of other areas where we know there is variability of care and where that care can be improved."
On the other hand, he notes, "It is also a very fair statement that these measures we focus on have the greatest potential for mortality, which is the other piece of the puzzle."
Part of the challenge in addressing even more measures, Loeb notes, is that "this places a fairly significant burden on the hospital in terms of data collection; if we all had [a national] electronic health record, we might be able to do a lot more."
Speaking of data collection, Spath notes that this may also play a role in some of the improvement found in the report. "Some of this could be reflecting improved documentation of the care actually provided; there's no way to tell," she offers. "For example, smoking cessation counseling may have been occurring in the past, and now people are documenting that so they can capture that piece of information. However, things like giving certain medication do not reflect documentation; either it is given or it isn't."
Check your data
While there are a number of best practices that can be adopted to institute and maintain improvement, Spath warns that adopting best practices in and of itself will not guarantee improvement. "The Joint Commission and others have shared best practices with us that helped improve the numbers, but you need to focus on how your hospital did better," she notes. "First of all, best practices tend to improve performance when viewed as a team activity, with everyone needing to be involved instead of just delegating responsibility to a case manager or quality director. Then, one of the most important lessons to learn from these improvements in quality is for people to step back and say, 'What worked and what didn't? Did it work to hire a nurse to audit charts and get things done? Is that what contributed to the improvement?' If it did, [then for] the next thing we want to improve we should use the same model. But if we did that but the doctors did not change practices, then we have to change the model."
Spath continues: "The biggest indicator of success was when people viewed improved performance as everyone's responsibility and looked at how they could change the system to ensure patients got the care they needed and did it in a collaborative way."
What if your performance is already very high in a number of key areas? How can you continue to improve? "In industrial quality control, they would say the energy it takes to get from 95% to 100% is so much greater that you're better off using that energy to tackle another problem," notes Spath. "I'm not sure that's acceptable from the standpoint of CMS or The Joint Commission. In other industries, when you reach a tolerable level of quality, you do not try to go beyond that but rather move on to improve other areas. The hospital board should probably be involved in deciding at what point you are going to be satisfied."
Loeb notes that hospitals wishing to improve can learn from high performers. "If a hospital wanted to do better, they could go to sites like our Quality Check and learn which hospitals, for example, excel in terms of prophylactic antibiotics, and see how they handle that," he suggests. "Plus, one thing we're doing in 2010 is becoming engaged in a process through which we hope to build a solutions database: identify those particular high performers and have them populate a database that will be searchable. With this database, hospitals that are not doing well would be able to match themselves to those that are doing well, see what they've done, and implement it in their institution."
However, he cautions, it may not be as simple as that. "If hospital A has done X and gotten good results, hospital B can't simply implement X and assume they will get the same results," he explains. "That's something [Joint Commission President] Mark Chassin has been emphasizing. We need to understand that a solution put into place in one institution to solve a problem does not necessarily indicate a one-size-fits-all solution. The problem may be the same, but the way in which it manifested in one hospital may be entirely different. There could have been a whole bunch of root causes, so the solutions for the two hospitals could be really different."