Multiple methods are needed to assess quality
Otherwise assessment may miss the mark
If you don't measure quality accurately, your organization could face dire consequences — ranging from financial problems to plummeting patient satisfaction scores.
Now, new research underscores that quality professionals can't rely exclusively on any single method to judge quality. One study showed that patient ratings of quality of care didn't correlate with technical quality of care measured by medical record review. Although evidence-based care was given, patients didn't always feel they received quality care.1
To better understand the quality of care being delivered, organizations must examine both patient satisfaction and evidence-based measures, concludes John T. Chang, MD, MPH, the study's lead author and a faculty member at the David Geffen School of Medicine at University of California-Los Angeles. Patient charts or electronic health records can show what care and services have been provided to ascertain technical quality, but patient surveys will give you information that can't be obtained from medical records, says Chang. "We need to combine information from both chart reviews and patient surveys to get a better picture of the quality of care patients are receiving."
When hospitals are compared, the information you get varies widely depending on which source you consult, such as Solucient, Healthgrades, Quality Check, or others. "If you compare hospital X on a variety of different public reports, sometimes it will be listed as number 1, sometimes number 10 and sometimes number 50," says Scott Williams, director of the Center for Public Policy Research within the Joint Commission's division of research.
So what is the best way to measure quality? For many organizations, a multifaceted look at their performance is the answer. The goal is to look at several aspects of quality, such as evidence-based measures, financial indicators, and patient satisfaction, to keep tabs on performance.
Quality is multidimensional, emphasizes Pat Cooper, director of health care improvement at Baylor Regional Medical Center at Plano (TX). The organization is moving to a "Circle of Care" reporting structure, with quality reports done for four "pillars": quality, service, people, and finance. "Our balanced scorecard is under development," adds Cooper.
One of the principles of health care improvement is that measurement is not the goal — improvement is, she says. "That being said, to move forward, teams need data to know if improvement is necessary or if process changes are effective," she says.
At Wellspan Health in York, PA, patient satisfaction, cost of care, and clinical data are shared with the general public via the organization's web site. "It's important for the patient to understand how the organization is doing in multiple areas of quality," says Sandra Abnett, director of quality. "Baby boomer consumers are savvy and will be looking at data to make their choices for health care."
The organization has adopted the Institute of Medicine's Six Aims for Improvement — effectiveness, efficiency, timeliness, patient centeredness, patient safety, and equity. "This has really provided us a structure to guide us with our quality management processes," says Abnett. Each department identified measurements to fit into each of the six aims, and presenters at the quality committee are asked to identify at least one priority for improvement a year.
At Mission Hospitals in Asheville, NC, each service line is responsible for measuring and monitoring the specific measures associated with its various projects and quality assurance activities. "In addition, we encourage all teams to use our Project Charter methodology that clarifies measurement and scope," says Tom Knoebber, director of quality and performance improvement.
Service lines are required to report to the medical administrative committee twice a year to give an update on their projects. Systemwide projects, such as medication reconciliation, are monitored and reported within the performance improvement department. Other system projects that tend to align within a service line, such as acute myocardial infarction or heart failure, are incorporated into the specific service line goals.
"The service line panel is really the focal point for all of the various measurements," says Knoebber. This includes all applicable metrics, such as core measures, satisfaction, medication errors, and financial.
The service line panels are the primary forum for reporting on quality improvement projects. "There is definitely a balance between showing everything in a run chart vs. specific focus study data indicating an opportunity," says Knoebber. The panels are presented periodically during service line meetings, and then consolidated with a broader "state-of-the-service-line" report semi-annually.
"All metrics are not reported, although as part of the semi-annual report, all metrics should be reviewed. And if there is an opportunity, it should be included either through narrative or in a graphical form in the report," says Knoebber.
Potentially, there could be hundreds of measures for any one service line, he explains. "We tend to focus on the macro summaries and, if needed, can follow up on specific questions," says Knoebber. For example, if an increase in length of stay was observed within a specific service line, a further break-out would be done to see which specific diagnoses comprised the increase.
These measures are not reported to the public, although scores for the organization's three key satisfaction questions are posted on its web site: Overall Quality, Likelihood To Return, and Would You Recommend?
"We have had several discussions related to providing more public data, but the debate of propaganda vs. patient education and litigation has prevented any unified approach," he says.
Service lines and physician groups tend to debate the public's ability to interpret quality performance, he explains. "As with many public databases and reports, the details of inclusions and exclusions can produce significantly different results with the same data set," adds Knoebber. "The risk of transparency is a two-way street and should be evaluated carefully."
Clearly defined process measures based on evidence-based practices are the most applicable to everyone, says Knoebber. "The concept being, these things should be done to every patient every time," he says. "While there can still be extenuating circumstances for every process, a 'should be done' reporting process ensures both patient education as well as measurement of quality."
While CMS and National Quality Alliance measures set some strategic direction, there will always be pockets of variation for individual hospitals that need to look at other factors, such as problems getting patients to the cardiac catheterization lab quickly, or problems with patient identification.
"Ultimately, you will have to do a mix of what's required by accreditors and insurers, and what you believe is important," says Jerod M. Loeb, PhD, executive vice president for research at the Joint Commission.
One long-standing problem is that quality measures are not always aligned, which means the same data may have to be collected several times. "This multiplicity of data demands drives you right up the wall. What you really want to do is collect the data once and stream it to whoever needs it for whatever reason," says Loeb.
That's not possible when hospitals are sending data to the Joint Commission, vendors, the QIO data warehouse, health insurers, and the state to comply with reporting requirements.
"Unfortunately, these things are not all aligned properly, so it creates an enormous burden on those who have to collect the data," Loeb says. "They have demands coming like crazy, and oftentimes the devil is in the details." Measures are often defined just a little bit differently, so similar data have to be collected twice.
CMS and JCAHO have aligned their quality measures, and the Hospital Quality Alliance adopted many of those identical measures. "But along comes the Ambulatory Alliance which has many similar measures, but they are defined in a different way," says Williams. "So we need to come to some sort of compromise."
Ideally, data collection could become a byproduct of care delivery. "Electronic health records are going to be the answer, but that too requires standardization," Loeb says. The bottom line, however, is that data collection efforts are having a significant impact on the quality of care that patients receive. "There is a lot of literature that demonstrates that public reporting of health care data creates a very strong stimulus to improve," says Loeb. "Nobody wants to be on the bottom of the list."
References
1. Chang JT, Hays RD, Shekelle PG, et al. Patients' global ratings of their health care are not associated with the technical quality of their care. Ann Intern Med 2006;144:665-672.
Sources
For more information, contact:
- Sandra Abnett, director of quality, Wellspan Health, 1001 South George Street, York, PA 17405-7198. E-mail: [email protected].
- John T. Chang, MD, MPH, The David Geffen School of Medicine at UCLA, Department of Medicine, Division of General Internal Medicine and Health Services Research, 911 Broxton Plaza, 3rd Floor, Los Angeles, CA 90024. E-mail: [email protected].
- Pat Cooper, director, health care improvement, Baylor Regional Medical Center, 4700 Alliance Boulevard, Plano, TX 75093. E-mail: [email protected].
- Tom Knoebber, director of quality and performance improvement, Mission Hospitals, 509 Biltmore Avenue, Asheville, NC 28801. E-mail: [email protected].