New accountability measures could mean sea change for QI
Joint Commission spearheading effort to stress outcomes
With a coordinated double-whammy, The Joint Commission and a group of quality experts, including its current president, Mark R. Chassin, MD, MPP, MPH, have served notice on the QI community that they'd like things to change in a major way.
In a paper first published online in June by the New England Journal of Medicine, Chassin and his co-authors outlined four criteria for accountability measures that would have "the greatest likelihood of improving patient outcomes."1
The four criteria are as follows:
1. The measure is based on a strong foundation of research showing that the care process improves outcomes.
2. The measure accurately captures whether the evidence-based care process has, in fact, been provided.
3. The measure addresses a process proximate to the desired outcome, with few intervening care processes.
4. Implementing the measure has little or no chance of inducing unintended adverse consequences.
At the same time the NEJM article was published, The Joint Commission released the June 23, 2010, edition of Joint Commission Online, in which it announced these four criteria. In fact, the paper's authors wrote: "For its part, The Joint Commission is incorporating this framework into its programs."
"We wanted to indicate that while the paper itself is not a Joint Commission policy paper, we are calling on all stakeholders to adopt these concepts, and we wanted to say The Joint Commission is doing that," Chassin explains. "The way The Joint Commission is embedding these ideas into its programs is in that publication."
The reaction from quality experts has been mostly positive. "I most definitely agree with them," says Kathy Schumacher, MSA, CPHQ, director of quality, safety, standards, & outcomes and director of the Surgical Learning Center at William Beaumont Hospital in Royal Oak, MI. "One of the things I underlined in the paper is that the measures must be based on a strong foundation of research; I couldn't agree more." In fact, she adds, she plans to pass the paper along to others in her department.
"I think this is a wonderful step in the right direction," adds David B. Nash, MD, MBA, dean of the Jefferson School of Population Health in Philadelphia, who served on the inaugural ORYX steering committee. "I applaud The Joint Commission because, in part, hospitals had all started to appear as though they were living in 'Lake Wobegon,'" he says. "Everyone was way above average on all the core measures, so after a while we all recognized the core measures needed greater granularity or a level of transparency. I think these four criteria will go a long way in that direction."
"My bias is to be suspicious of literature, such as this article originating from The Joint Commission, coming out of the compliance/regulatory world, but I think this is really a good move," says Martin D. Merry, MD, CM, a physician in Sanbornton, NH.
But Patrice L. Spath, of Brown-Spath & Associates in Forest Grove, OR, was not quite as quick to jump on the bandwagon. "In some ways, these are 'mother and apple pie' until you dig into them," she observes. "The problem is that in order to apply these criteria, you need some hindsight. In the rush to create accountability in health care, measures have been chosen that in hindsight did not necessarily have strong evidence that such care led to improved outcomes; but we only know that in hindsight. So consequently, when we apply these criteria now, how are you going to know the answers to some of these questions?"
Chassin responds: "We certainly have to be sensitive to the possibility that unanticipated adverse effects will creep in when measures are put into the real world, but there are some characteristics of these process measures and the way they are constructed that lead one to be suspicious of certain kinds of measures."
Chassin cites the measures governing the timing of the first dose of antibiotics for patients with pneumonia. "The real problem is that the way we construct these measures is based on identifying groups of patients using a principal diagnosis, which is not determined until weeks after they leave the hospital," he notes. "Much of the time the diagnosis is not clear at the time the patient is admitted to the hospital, so you do not know who will end up with that label three to six weeks later. The pressure to improve on measures may lead us to give antibiotics more quickly than good clinical judgment might deem appropriate. So, understanding how these measures work in the real world can lead to anticipating where 'unanticipated' adverse effects are likely."
Development of the measures
Where did these criteria come from? "This effort goes back on my part a long way to when I was responsible in a big hospital for overseeing efforts to improve on core measures and found there were a number of them that really did not contribute much to improving outcomes," recalls Chassin. "And a lot of my colleagues in similar roles felt the same way. That experience on the front lines, together with my first year or so in my current role with The Joint Commission hearing from thousands of hospitals with very similar observations, led me to conclude that we needed to do a better job now.
"There are measures in the 'accountability spotlight,'" he continues, "that cause hospitals to do a tremendous amount of work and expend lots of resources and energy; we really need to meet a higher standard where we can be very confident that improvement on measures will mean improved outcomes. Then there was the matter of looking carefully at the track record of existing measures, understanding what good measures have in common, and using that as criteria for retiring some measures and replacing them with superior ones."
Schumacher, for one, thinks they've done a good job. "One of the things we struggle with often is the linkage of process to clinical outcome measures; we can certainly measure process, but we can't marry the two together. It's frustrating when you try to get clinicians to change processes or put new things in place and they want to know how it will affect outcomes. I think they are measuring in the way the physicians want to do it. The third criterion, that the measure needs to address a process proximate to the desired outcome, is very, very important."
Merry agrees. "Some measures have been ineffective and frustrating; doctors have resisted them, and hospitals have said 'Trust us,'" he notes. "One of the things I like is that it validates the notion of external measures, by which we compare providers on a number of measures. This paper is a real contribution; we're beginning to become more granular, and asking which measures we need to plug into accountability."
The criterion dealing with more downstream processes, he adds, "is a real advance; I've never seen that in print before."
Nash agrees with the need to replace some measures. "We all recognize, as an example, that the smoking cessation measure doesn't really tell us anything," he notes.
Is broad adoption ahead?
The publication of these criteria raises the question of whether there will be broad adoption across agencies such as the Centers for Medicare & Medicaid Services (CMS). "I think CMS will get on board," predicts Merry.
"I cannot predict how the politics are going to align, but I do believe eventually all the major groups will get behind a 'no outcome, no income' approach, and as a result these four criteria will actually help that journey," adds Nash. "To me that is the main message."
"I would hope the likelihood would be great," says Schumacher. "Hospitals need that alignment. Oftentimes we get mandates from The Joint Commission, CMS, and NQF (The National Quality Forum), and they hit us all around. Any opportunity for them to align would help immensely, and I do think that is coming."
"CMS and NQF are right at the top of the list," notes Chassin. "These criteria really should influence, if not dominate, the process, and at the very least serve as a filter for getting rid of measures that do not meet them."
Chassin notes that The Joint Commission has already "worked very hard and successfully to make sure our detailed specifications for those measures we have in common with CMS are identical." In addition, he says, "We publish together a manual of specifications; we review it constantly with CMS to make sure all the measures we have in common are the same. Eliminating non-accountability measures from public reporting will not help hospitals much if Medicare still requires them. We're trying to persuade CMS; and NQF needs to recognize that part of the process of bringing new measures forward is understanding and having a way to assimilate the tremendous experience we have with what happens in the real world. As of now, we do not have a way of systematically assessing that experience and drawing the appropriate lessons and taking them back to measure development and the endorsement process. We must have that if we're going to be more successful in the future."
Meanwhile, says Chassin, The Joint Commission is moving forward with its own plans. "We are starting a process by which we will require a certain level of performance on accountability measures as part of accreditation," he shares. "Hospitals currently report those data to us, but we do not judge their accreditation worthiness based on performance of those measures; that's going to change."
"I think that's wonderful," says Schumacher. "When we have people here doing their Joint Commission visit, we can meet every standard they want us to meet, but this means looking at that data set they asked us to collect: 'What did you do and how did you do it, and how have outcomes changed? Have you seen a decrease in your mortality or morbidity? Show us that.'"
Good news for quality managers?
Schumacher believes the broad adoption of these criteria would be good news for quality managers. "I think personally it may help clarify things and give us a better sense of direction," she offers. "It's wonderful that we've had process measures for years, but to be able to look at the outcomes of these measures is extremely important. From a quality perspective I'm very excited. I want to marry process and outcome together."
"It's going to have a great impact," adds Nash. "Your readers will be the frontline troops for promoting accountability."
"No. 1, it will make their jobs much easier in terms of getting their local physicians, nurses, pharmacists, and other clinicians to engage in this work of improvement," says Chassin. "The evidence that improvement on accountability measures results in improved outcomes is incredibly solid and The Joint Commission will play a much more central role in seeing that that evidence is available to quality managers; so if skeptics doubt them, they will have that kind of information at their fingertips. It will also make it much easier to get resources from management if the measures really are connected to improved outcomes."
This approach, he continues, is "real quality management and health improvement, and should put hospitals in a much better position. Also, as we get broader measures (for example, The Joint Commission just launched a new measure set for perinatal care), it gives the quality manager a much broader range in what they can work on in terms of improved outcomes and very reliable sources of really good measures. There are a lot of bad measures out there, and what we'd like to say is, 'You do not have to look farther; here are the best measures for hospitals.'"
Merry, however, does not believe the criteria will get the entire job done. "I think it will improve the field, especially if we begin to base payments on percentage of compliance; we will see a lot of improvement in a lot of these quality measures," he predicts. "But will it improve things to the degree the health care system needs? Absolutely not."
The true drive to excellence, he asserts, will never come from regulatory agencies. "The compliance industry has never been the motivator of the truly great organizations," he says. "They don't look to The Joint Commission because they are so far ahead of it. They are into genuine excellence and the highest levels of performance."
The Institute for Healthcare Improvement, he notes, talks about the "theoretical ideal," which is 100%. "The [NEJM] article talks about 90%-95% being good compliance with quality measures," he notes. "You'll never get there by accountability if inspectors come and encourage you to improve 2%; true quality/excellence leaders are already beyond 95%.
"We should measure ourselves in both percentages and Sigma units," he continues. Six Sigma, he explains, means 3.4 defects per million. "If you have 90% compliance, out of one million patients, 100,000 are not getting the care they are supposed to get," he says. "Even 95% compliance equates to 50,000 defects per million opportunities."
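Merry's arithmetic can be checked directly. This minimal sketch (the function name and the compliance rates are illustrative; the 3.4 defects-per-million figure is the standard Six Sigma benchmark, which corresponds to 99.99966% compliance) converts a compliance rate into defects per million opportunities:

```python
def defects_per_million(compliance_rate: float) -> float:
    """Defects per million opportunities (DPMO) for a compliance rate in [0, 1]."""
    return (1.0 - compliance_rate) * 1_000_000

# Merry's examples, plus the Six Sigma benchmark for comparison
for rate in (0.90, 0.95, 0.9999966):
    print(f"{rate:.5%} compliance -> {defects_per_million(rate):,.1f} DPMO")
```

At 90% compliance this yields 100,000 defects per million patients and at 95% it yields 50,000, matching the figures quoted above; only at roughly 99.99966% compliance does performance reach the Six Sigma level of 3.4 DPMO.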