Common performance measures: Worthy goal or unrealistic aim?
Differences in institution size, culture present biggest challenge
The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) has them. The National Committee for Quality Assurance (NCQA) has them. The Leapfrog Group has them. The National Quality Forum has them. It seems that everywhere you turn, some organization has issued its own version of the gold standard of quality performance measures.
Recently, the American Hospital Association (AHA) in Chicago joined the fray, renewing its call to hospitals to participate in a national voluntary initiative "devoted to developing a common framework for measuring hospital care quality" it launched last December.
Is a common framework for quality possible, given the wide variety of facilities and systems that exist? Is such a framework a worthy goal? And if the answer to the first two questions is yes, how are quality managers to balance that commonality with the needs of patients and their own unique set of priorities?
Even the AHA concedes the concept is not without flaws. "I think the 'con' may be that as you standardize in a national effort like this, you clearly look at things that apply across a broad spectrum of hospitals; and specialty hospitals are not likely, for example, to get into the kinds of cases cared for in a tertiary facility," notes Nancy Foster, senior associate director for policy at the AHA.
"You do limit yourself in where you are going with measurement. However, I see the 'pros' as far outweighing that," she says.
"It is possible to have a framework for measuring some quality indicators in hospitals, but health care is locally defined and locally delivered," notes Debora Simmons, RN, MSN, CCRN, CCNS, senior clinical quality improvement analyst at the Institute for Healthcare Excellence at the University of Texas M.D. Anderson Cancer Center in Houston.
"I definitely agree," adds Jason Etchegaray, PhD, systems improvement specialist at M.D. Anderson.
"I think it’s possible if all the different groups who have different aims can get together and agree on that framework," says Patrice L. Spath, RHIT, health care quality consultant with Brown-Spath & Associates in Forest Grove, OR.
"There are lots of purposes for measuring hospital quality. For example, what consumer groups would be interested in might be different from what insurers are interested in, i.e., costs and patient satisfaction. Payers interested in rewarding hospitals for better quality base that on measures," she says.
What appears to be happening now, Spath adds, "is that each of those different groups is trying to convince the others that they are right. And you have to throw into the mix the fact that there are a lot of state initiatives going on."
Despite the roadblocks, however, some see efforts like those of the AHA to be positive steps. "I think it’s about time," says Kay Beauregard, RN, MSA, director of hospital accreditation and nursing quality at William Beaumont Hospital in Royal Oak, MI.
"Hospitals are asked to consider measurements from all different sources — AHRQ [Agency for Healthcare Research and Quality], AHA, CMS [Centers for Medicare & Medicaid Services], JCAHO, Leapfrog, and other professional organizations — so it would be helpful if the measures are coordinated," she explains.
"The AHA’s attempt to try to unify all these different organizations into common measures is of value," Beauregard continues. "You have to realize that collecting measures is very labor-intensive; you have to write programs, integrate things, and so on. A small change in measures affects how we collect them. But we have to define what are the most critical-to-quality measures, and we’re getting closer to that."
"This is an effort to provide information to the public and, therefore, a common set of information for people to use that would enable them to evaluate the hospitals in their area," Foster adds. "Many purchasers look for a resource that gives them a way to get common information about the hospitals in all the areas in which they have employees or other beneficiaries. This achieves that goal."
One of the goals in the AHA initiative, she explains, "is to try to encourage more and more organizations to adopt this common set of measures, this approach to measuring."
Foster notes that most of its initial measures mirror some already established by JCAHO. "The Joint Commission is a great partner in this, so it makes sense to start off with what work they’ve done," she says.
"But as we look at more conditions [the initial three are acute myocardial infarction, heart failure, and pneumonia], we want to shrink the number of organizations that are asking our hospitals to provide data," Foster explains. "This is labor-intensive work. If we can encourage people to join with us and not ask independently for disparate sets of data, we enable hospitals to marshal their resources and get this robust data."
Allowing for individuality
Even if a common set of measures is adopted one day (and this is far from a sure thing), hospital quality managers still will need to take into account the unique aspects of their own organizations, the experts note.
"Every hospital or system should choose what they want to measure relative to their performance based on what the priorities are in their organization," Spath points out.
"This should flow directly from mission, vision, and strategic objectives. As part of that process, as strategic objectives are defined, they will want to consider what are the national priorities for quality of health care. Then they should determine whether their organization is going to select those national or state areas of interest. It’s up to the individual facility to decide if the common framework makes sense for them given patient population, strategic objectives, and so on," she says.
"Often, the clinical topic or issue might seem to be the same [diagnosis]," Beauregard adds, "but the individual indicator could be different. For example, the AHRQ published a whole list of measures with evidence and literature behind them, but even as we delved into those, we had some differences of opinion.
"Don’t get me wrong; [developing common measures] is a good idea, but all of these are trigger points. Everyone has to step back and say, 'Let’s validate that these are meaningful and reliable, and then decide as an organization if this is of value to our patient population,'" she continues.
"What we did here was, we took core measures to the medical committee and hospital committee and asked them to select which were most meaningful, and we prioritized them within our organization — what would be the most meaningful to us and still meet the requirements," Beauregard says.
Even within an apparently rigid set of measures such as those established by JCAHO, choice is involved, she notes. "You can’t change the measure, but you can change what you submit," she explains.
"Different institutions that come from different philosophies and guidelines will infuse the standards with their own approach," Simmons adds. "The Leapfrog Group, for example, was certainly influenced by the organizations that make it up. There is not a shortcut for sitting down and having some thoughtful reflection on your institution and culture and frontline providers."
Etchegaray agrees. "You have to look at how that quality metric fits in with your culture. For example, every program is at a different level in terms of their safety culture. Some are not as advanced as others; they still are in the culture of blame. So you can’t really have common metrics; you have to merge or align the metrics with the location on the continuum of culture of these organizations," he says.
"Part of the problem is that, up until this time, the delivery of health care has been driven by individual practitioners and clinical judgment at the point of care," Simmons says.
"Now what we have are evidence-based guidelines. When you look at how that is translated to the ability of a hospital to meet those guidelines, you have to look at a wide variety of hospitals. Certainly, someone who is part of a small rural hospital and has no resources is not going to be able to collect data that easily and may not have the expertise or the resources to look at that data objectively," she points out.
Toward a common framework?
Given the diverse needs of institutions, will we move closer to a common set of performance measures in the future?
"My expectation is that, at least in terms of the initial core set, many quality managers already use these to improve performance," Foster says.
"What we are doing is finding a way for hospitals to begin to have a conversation with their community about quality and be actively engaged in efforts to improve quality and share some of that with the public," she adds. "We may get to a point where individual consumers have conversations with doctors about what the findings might mean in terms of the choice of a health care facility."
"Everyone wants to be the one that has the answer," Spath says. "I think the AHA is attempting to take a leadership role so that individual hospitals can define for themselves their logical priorities."
"The AHA and JCAHO have common core measures, which is of great value to an organization — not being asked to collect something different," Beauregard says.
"So, I would hope we move to a common set." Still, she says, "that would not be the be-all and end-all. If you walked into a hospital and all they measured were the core measures, that would be suspect. PI [performance improvement] measurements should be larger than that."
"There certainly would be less confusion, but the reality is — we don’t have one set for all institutions," says Simmons.
For her part, Spath remains skeptical. "I don’t think one set of measures can fulfill every purpose for measuring performance," she asserts.
"As the users of the data, we will always ask new questions; so it’s quite likely we will continue to have measures that evolve. I still find it difficult to believe that practitioners will use these to make decisions about quality. As more research is done, as evidence changes, those measures will change, so we never will have a standard set of measures, or one owner of the measure," Spath adds.
Need More Information?
For more information, contact:
• Jason Etchegaray, PhD, Systems Improvement Specialist, Institute for Healthcare Excellence, University of Texas M.D. Anderson Cancer Center, 1515 Holcombe Blvd., Unit 141, Houston, TX 77030. Telephone: (713) 745-1357. E-mail: [email protected].
• Debora Simmons, RN, MSN, CCRN, CCNS, Senior Clinical Quality Improvement Analyst, Institute for Healthcare Excellence, University of Texas M.D. Anderson Cancer Center, 1515 Holcombe Blvd., Unit 141, Houston, TX 77030. Telephone: (713) 745-1357. E-mail: [email protected].
• Patrice L. Spath, RHIT, Consultant, Brown-Spath & Associates, P.O. Box 721, Forest Grove, OR 97116. Telephone: (503) 357-9185. E-mail: [email protected].
• Kay Beauregard, RN, MSA, Director of Hospital Accreditation & Nursing Quality, William Beaumont Hospital, 3601 W. 13 Mile Road, Royal Oak, MI 48073. Telephone: (248) 551-0941. E-mail: [email protected].
• Nancy Foster, Senior Associate Director for Policy, American Hospital Association, 325 Seventh St. N.W., Washington, DC 20004. Telephone: (202) 638-1100 or (800) 424-4301. Fax: (202) 626-2345.