Study finds ways to improve EHR quality measures
A federally funded study by Weill Cornell Medical College in New York City demonstrates ways in which quality measurement from electronic health records (EHRs) can be improved.
In a large cross-sectional study in New York state, researchers demonstrated that the accuracy of quality measures can vary widely. Electronic reporting, although generally accurate, can both underestimate and overestimate quality.
“This study reveals how challenging it is to measure quality in an electronic era. Many measures are accurate, but some need refinement,” says the study’s senior author, Rainu Kaushal, MD, director of the Center for Healthcare Informatics and Policy in New York City, and chief of the Division of Quality and Medical Informatics and the Frances and John L. Loeb Professor of Medical Informatics at Weill Cornell.
Healthcare providers and hospitals are being offered up to $27 billion in federal financial incentives to use EHRs in ways that demonstrably improve the quality of care. The incentives are based, in part, on the ability to electronically report clinical quality measures. By 2014, providers nationwide will be expected to document and report care electronically, and by 2015, they will face financial penalties if they don’t meaningfully use EHRs.
“Getting electronic quality measurement right is critically important to ensure that we are accurately measuring and incentivizing high performance by physicians so that we ultimately deliver the highest possible quality of care. Many efforts to do this are underway across the country,” says Kaushal, also a professor of pediatrics, medicine, and public health at Weill Cornell and a pediatrician at the Komansky Center for Children’s Health at New York-Presbyterian Hospital/Weill Cornell Medical Center in New York City.
For this study, Weill Cornell researchers analyzed clinical data from the EHRs of one of the largest community health center networks in New York state. The research team examined the accuracy of electronic reporting for 12 quality measures, 11 of which are included in the federal government’s set of measures for incentives. They found fairly good consistency for nine of the measures, but not for the other three.
The study’s lead investigator, Lisa Kern, MD, a general internist and associate director for research at the Center for Healthcare Informatics and Policy at Weill Cornell, said, “The variation in quality measurement that we found in a leading electronic health record system speaks to the need to test and iteratively refine traditional quality measures so that they are suited to the documentation patterns in EHRs.”
The automated reports generally performed well. However, they underestimated the percentage of patients receiving prescriptions for asthma medications and the percentage receiving vaccinations against bacterial pneumonia. A third measure suggested that more patients with diabetes had their cholesterol under control than actually did.
The automated report said 57% of eligible diabetic patients had their cholesterol controlled, while a manual check of the charts showed the figure was actually only 37%. Part of the problem is that physicians and nurses may be entering information into parts of the EHR that quality-reporting algorithms do not capture. “EHRs create the opportunity to measure and provide feedback to clinicians regarding quality performance in real time, thereby improving clinical practice,” says Kern, who is also an associate professor of public health and medicine at Weill Cornell.
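A minimal sketch can illustrate the documentation-capture problem the researchers describe. This is hypothetical data and hypothetical field names, not the study’s actual algorithm: an automated report counts only results recorded in the structured field it queries, so a value a clinician types into a free-text note is invisible to it, and the automated rate diverges from what a manual chart review finds.

```python
# Hypothetical example: how an automated quality report can disagree
# with a manual chart review when clinicians record data in a field
# the reporting algorithm does not read.

patients = [
    # structured_ldl: value in the field the algorithm queries
    # note_ldl: value recorded only in a free-text note
    {"structured_ldl": 95,   "note_ldl": None},  # captured, controlled
    {"structured_ldl": None, "note_ldl": 88},    # controlled, but missed
    {"structured_ldl": 160,  "note_ldl": None},  # captured, not controlled
    {"structured_ldl": None, "note_ldl": None},  # no result anywhere
]

THRESHOLD = 100  # LDL below 100 mg/dL counts as "controlled" here

def automated_rate(pts):
    """What the reporting algorithm sees: the structured field only."""
    hits = sum(1 for p in pts
               if p["structured_ldl"] is not None
               and p["structured_ldl"] < THRESHOLD)
    return hits / len(pts)

def manual_rate(pts):
    """What a chart reviewer sees: structured field or free-text note."""
    def best_value(p):
        if p["structured_ldl"] is not None:
            return p["structured_ldl"]
        return p["note_ldl"]
    hits = sum(1 for p in pts
               if best_value(p) is not None and best_value(p) < THRESHOLD)
    return hits / len(pts)

print(f"automated report: {automated_rate(patients):.0%}")  # 25%
print(f"manual review:    {manual_rate(patients):.0%}")     # 50%
```

In this toy cohort the automated report underestimates performance, the same direction of error the study found for the asthma and pneumonia-vaccination measures; real measure logic involves eligibility criteria and exclusions far beyond this sketch.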
Kaushal adds that “EHRs are not just electronic versions of paper records, but rather tools that enable transformation in the way care is delivered, documented, measured, and improved. The federal meaningful use program will enable the deployment of these promising systems across the country, thereby enabling health care to enter the digital age.”
The full study is available online at http://tinyurl.com/EHRquality.