Recent research indicates electronic health records (EHRs) are not improving patient safety in the way many users hoped. There are multiple explanations for the shortcoming.
- Physicians may be tuning out alerts that are not urgent or not configured properly.
- Some hospitals are customizing their EHR settings poorly.
- Incomplete data can hinder the effectiveness of EHRs.
Recent research indicates electronic health records (EHRs) are still not improving patient safety, despite years of efforts to make them more effective at preventing errors and boosting adherence to best practices.
The researchers examined EHR data collected between 2009 and 2018 from more than 2,300 hospitals, assessing computerized physician order entry (CPOE) and clinical decision support data from The Leapfrog Group’s annual survey. They found the overall mean total score for CPOE EHR systems, which assesses whether the EHR met basic safety standards, rose from 53.9% in 2009 to 65.6% in 2018.
Other scores also increased in that period. The mean score for basic clinical decision support rose from 69.8% in 2009 to 85.6% in 2018. The mean score for advanced clinical decision support rose from 29.6% in 2009 to 46.1% in 2018.
Drug-diagnosis contraindication checking — often heralded as the EHR feature that could most improve patient safety — actually was the lowest-performing category. The mean score increased from 20.4% in 2009 to 33.2% in 2018, meaning not quite one-third of EHRs met basic safety standards in this category. (The full report is available online at: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2766545.)
“These findings suggest that despite broad adoption and optimization of EHR systems in hospitals, wide variation in the safety performance of operational EHR systems remains across a large sample of hospitals and EHR vendors. Hospitals using some EHR vendors had significantly higher test scores. Overall, substantial safety risk persists in current hospital EHR systems,” the study authors wrote.
The authors explained improvements in basic clinical decision support were greater than in advanced clinical decision support, which is consistent with other studies.
The researchers recommend three improvements: conduct CPOE safety assessments at least annually and after upgrades, share the results of safety assessments with EHR vendors to spur development of safer systems, and include CPOE safety assessment scores in publicly reported process quality measures.
The lead researcher in the study, David C. Classen, MD, MS, from the division of clinical epidemiology at the University of Utah School of Medicine, observes that one of the primary motivations for the wide adoption of EHRs was to improve safety. He notes that as far back as 2012, there was evidence the industry was falling short of that goal.
“We’ve not used the EHR to measure safety effectively, which we can do, and we’ve not used the EHR to prevent safety problems,” Classen says. “Indeed, there are reports that we’ve actually injured people with the EHR. If the EHR’s original main goal was to improve patient safety, it hasn’t worked out very well.”
Classen’s research is disappointing, he says, because it shows not much has changed in almost 10 years. “You would expect that if the focus on EHRs was to improve safety, we would see evidence of that in an objective, operational test like this one. This was not a test of software on the shelf; it was a test of operational software,” Classen says. “This puts a big hole in the government’s argument that if the software on the shelf improves safety, it should be true in operation, too. That’s not true.”
Other industries are much more vigilant about testing safety-critical software on an operational basis rather than trusting that it will achieve safety goals based on theoretical testing, Classen says.
Healthcare professionals have become more dependent on EHRs to backstop them on tasks that previously would have been performed manually, like checking for drug interactions, Classen says. Clinicians believe the hype that EHRs can improve safety automatically and let down their guard, he says.
Classen points out another problem that should trouble risk managers: The EHR may not be improving safety, but it is documenting every error and oversight.
“When your pharmacist doesn’t do the drug interaction check because they’re pushed on productivity and they’ve been told the EHR will take care of it, the EHR is creating a footprint of exactly what happened,” Classen says. “There’s a record of all that. When you get sued, you’ll have to provide the EHR records, which may show that the EHR didn’t do that drug check because the feature got turned off in an upgrade, and your pharmacist was relying on that.”
Creating Potential Liability
The expectation that technology will solve all problems can be dangerous in healthcare and leads to liability risks, says Roy Wyman, JD, partner with Nelson Mullins Riley & Scarborough in Nashville, TN.
“The danger comes in the interplay between machines and humans. Machines will make certain types of mistakes, and humans will make other types of mistakes,” Wyman says. “When you have human and technology input overlapping, that’s sometimes when you have some of the biggest mistakes. At times, humans will tend to overrely on the technology and create issues, but they also can underrely on the technology and override things without understanding why they are there.” (See the story in this issue for another perspective on the latest research.)
The research showed EHR vendor choice explained only approximately 10% of the variation in performance differences, says Paul Dexter, MD, research scientist with the Regenstrief Institute, and associate professor of clinical medicine at Indiana University School of Medicine. The combination of EHR vendor choice and observable hospital characteristics — such as an academic vs. rural setting — explained only approximately 15% of the performance differences.
“That is, what causes the majority of the variation in performance is left unexplained,” Dexter says. “It seems likely that the bulk of performance differences found between hospitals relate to how the EHR systems were configured by the hospital. Dissecting the relationship between EHR configurations and Leapfrog scores might prove a useful follow-up study.”
Dexter notes several study limitations related to the data available for analysis. Perhaps most importantly, he says, the existence of an alert usually does not lead to changes in clinician ordering practices. In the case of drug-drug interaction alerts, clinicians override more than 90% of them. Popping up an appropriate alert typically is only the first step in the process of avoiding adverse events and increasing patient safety, he says.
“This fact highlights the importance of monitoring the effects of particular alerts after they go live. How often are clinicians accepting or overriding the alert?” he asks. “When overridden, how often is the alert clinically appropriate vs. how often the clinician was correct to ignore it for a particular patient?”
If a particular reminder pops up surprisingly frequently or is overridden in most cases, there could be an unanticipated problem in its logic, Dexter says. At worst, it can be a risk to patient safety.
“Given the realities of alert fatigue, those responsible for decision support in a health system need to prioritize alerts most likely to otherwise lead to adverse events, perhaps relegating less important alerts to non-interruptive status — alerts that are displayed for consideration, but don’t interrupt clinician workflow with a pop-up,” Dexter explains. “With respect to the different types of available alerts, despite large differences in adoption of these various types of alerts across hospitals, we don’t know which categories are the most important from the standpoint of reducing adverse events. Are the most widely adopted types of alerts also the most important ones?”
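The monitoring Dexter describes — tracking how often each alert fires and is overridden, then flagging high-override alerts for review or demotion to non-interruptive status — can be sketched in a few lines. This is a minimal illustration, assuming a simple log of (alert ID, action) pairs; the sample alert names, field layout, and review threshold are hypothetical, not from the study or any vendor's system.

```python
from collections import defaultdict

# Hypothetical alert-log records: (alert_id, action), where action is
# "accepted" or "overridden". Names and data here are illustrative only.
alert_log = [
    ("ddi-warfarin-nsaid", "overridden"),
    ("ddi-warfarin-nsaid", "overridden"),
    ("ddi-warfarin-nsaid", "accepted"),
    ("renal-dose-metformin", "accepted"),
    ("renal-dose-metformin", "overridden"),
]

def override_rates(log):
    """Return {alert_id: fraction of firings that were overridden}."""
    fired = defaultdict(int)
    overridden = defaultdict(int)
    for alert_id, action in log:
        fired[alert_id] += 1
        if action == "overridden":
            overridden[alert_id] += 1
    return {a: overridden[a] / fired[a] for a in fired}

# Alerts overridden in most cases become candidates for review: the logic
# may be flawed, or the alert may belong in non-interruptive status.
REVIEW_THRESHOLD = 0.5  # assumed cutoff for illustration
rates = override_rates(alert_log)
candidates = sorted(a for a, r in rates.items() if r > REVIEW_THRESHOLD)
```

A real decision-support team would layer clinical review on top of these counts — as Dexter notes, a high override rate may mean either flawed alert logic or clinicians correctly ignoring an alert for particular patients.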
Better Incorporation of Order Sets
In most cases, drug alerts do not indicate if the patient is receiving the optimal medication to treat the clinical problem, Dexter notes. Rather, only after the clinician has decided on a specific medication for a particular indication will alerts notify the clinician if there is an allergy, a problem with the dose, or renal dosing considerations. Increased incorporation of order sets into clinical workflow might increase the likelihood the optimal medication is selected, which Dexter says is arguably the first step in medication safety considerations.
“We don’t know from this study what accounts for the bulk of hospital differences in Leapfrog scores. We do not know what available EHR functionality or alert logic went unutilized or was turned off to avoid clinician alert fatigue,” Dexter says.
He notes the authors pointed out “the particular concern was that organizations purchase EHRs and medication safety tools from separate vendors and have great latitude in how they implement and maintain them, so substantial variation in safety performance could be present.”
This “latitude” likely is at the heart of the wide hospital variation in Leapfrog performance differences, Dexter says.
Ways to Improve Safety
Dexter offers three recommendations for improving Leapfrog scores, and by extension, increasing medication safety:
- Provide incentives or require hospitals to adopt minimum sets of the highest-priority alerts. This might include the “top 100” drug-drug interaction alerts (lists that have been assembled by various groups). Renal or hepatic dosing alerts should be mandatory functionality given they are easily overlooked, he says.
- Increase transparency in public reporting of hospitals’ EHR systems, clinical decision support alerts, and adverse events. Such transparency might help identify those alerts that prevent the most adverse events, providing the rationale for mandates of high-priority alerts. Trying to prioritize all alerts at the individual hospital level is inefficient and probably not feasible, Dexter says.
- Make Leapfrog or similar testing a mandatory yearly exercise for all hospitals. The study authors noted 1,812 hospitals took part in Leapfrog testing in 2018. Given there are currently 6,146 hospitals in the United States, this means fewer than one-third of hospitals (about 29%) participated that year. (More information is available at: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2766545.)
A big component of medical errors is incorrect or incomplete information, notes Daniel Cidon, chief technology officer for NextGate, a company in Monrovia, CA, that provides technology services to the healthcare industry.
“Information contained in EHRs is predominantly inconsistent. This is because different systems capture patient demographic information in different ways. For instance, some EHRs will include hyphens, apostrophes, and suffixes in last names; others don’t,” Cidon explains. “This makes it extremely challenging to tie patient information from other facilities together.”
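The inconsistency Cidon describes — the same last name recorded with and without hyphens, apostrophes, or suffixes — will defeat an exact comparison between systems. Normalizing both values before matching is one common mitigation; the sketch below shows the idea. The suffix list and rules are assumptions for illustration, not any vendor's actual algorithm.

```python
import re

# Generational suffixes to ignore when comparing last names (assumed list).
SUFFIXES = {"jr", "sr", "ii", "iii", "iv"}

def normalize_last_name(name: str) -> str:
    """Lowercase, strip punctuation and suffixes, and join name fragments."""
    parts = re.split(r"[\s\-]+", name.lower())          # split on spaces/hyphens
    parts = [re.sub(r"[^a-z]", "", p) for p in parts]   # drop apostrophes, periods
    return "".join(p for p in parts if p and p not in SUFFIXES)
```

Under these rules, “O'Brien-Smith Jr.” and “OBrien Smith” both normalize to “obriensmith,” so records that differ only in punctuation can be tied together.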
As the use of EHRs and clinical applications exploded over the last decade, ensuring patients are accurately and consistently matched to their data became a central challenge, he says. But although roughly $35 billion of U.S. taxpayer dollars have been spent on digitizing health records, the programs remain disjointed, causing a flood of duplicate and disparate records.
“This means that individuals receiving care from more than a single provider in the network often have medical records in several other locations,” he says. “Large-scale M&A [mergers and acquisitions] and consolidations exacerbate the issue, as acquired EHR systems often reside in silos.”
The inability to match patients to their data can lead to dire consequences, including “inappropriate medications being dispensed, incorrect diagnoses, erroneous test results, and increased risk from redundant medical procedures,” Cidon explains. “The JAMA study demonstrates that technology alone is not the solution to safety problems. Effective data governance policies are pivotal to ensure accurate data capture and make best use of the technology.”
Patients will continue to suffer consequences if they are not consistently and correctly matched to their data, Cidon warns. The data-matching functionality built into EHRs is not sufficient to unify information from various external systems.
EHRs that cannot communicate with one another can exacerbate inefficiencies, generating redundant information and duplicate, incomplete records. This can result in patient safety errors, skewed reporting and analytics, administrative burdens, and lost revenue, he adds.
Master patient indexes (MPI) within EHRs are limited in their ability to compare and link records from external sources, Cidon explains, especially those outside the network. A report from The Pew Charitable Trusts indicated EHR match rates within facilities can be as low as 80%, meaning as many as one in five patients may not be completely matched to their records. (The report is available at: https://www.pewtrusts.org/en/research-and-analysis/reports/2018/10/02/enhanced-patient-matching-critical-to-achieving-full-promise-of-digital-health-records.)
When exchanging records outside the organization, match rates can be as low as 50% — even when the providers are running the same vendor EHR, Cidon notes.
“Most data entry errors are preventable. But without a centralized data-matching system in place to automate record de-duplication and data integrity, hospitals are increasingly placing patients at risk and barring physicians from making informed, life-saving decisions,” he says. “Hospitals and health systems that aren’t running an efficient enterprise MPI are operating EHR systems fraught with duplicates and inaccurate patient information.”
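The automated record de-duplication Cidon advocates can be illustrated with a minimal sketch: group records on a deterministic blocking key built from normalized demographics, then flag any group containing more than one record. This assumes records are simple dicts with hypothetical field names and sample data; a production enterprise MPI uses probabilistic matching across many more attributes.

```python
from collections import defaultdict

# Hypothetical patient records from two source systems; the field names
# and values are illustrative only.
records = [
    {"id": 1, "last": "O'Brien", "first": "Anne", "dob": "1980-03-14"},
    {"id": 2, "last": "OBrien",  "first": "ANNE", "dob": "1980-03-14"},
    {"id": 3, "last": "Patel",   "first": "Raj",  "dob": "1975-07-02"},
]

def match_key(rec):
    """Deterministic blocking key: punctuation-free names plus date of birth."""
    clean = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return (clean(rec["last"]), clean(rec["first"]), rec["dob"])

def find_duplicates(recs):
    """Return lists of record IDs that share a match key."""
    groups = defaultdict(list)
    for rec in recs:
        groups[match_key(rec)].append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

duplicate_sets = find_duplicates(records)
```

Here records 1 and 2 collide on the same key despite the apostrophe and case differences, so they would be queued for merging; real systems add human review and probabilistic scoring before linking records.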
An enterprise MPI is one tool to help an organization with data governance, Cidon says, at least from the fundamental perspective of the identity of the patient on which all other information is based.
Healthcare organizations must pay attention to people, process, and technology. “It’s not just about the shiny widgets and fancy user interfaces. As healthcare consolidation continues to soar and organizations strive to create a more clinically integrated environment, patient identification across the continuum will become even more difficult and more critical,” Cidon explains. “To keep pace, organizations must engage in a more comprehensive patient-matching approach. Leveraging the right technology, along with better processes, will be key.”
A culture of mistrust over data-sharing can negatively affect EHRs and how data are handled, notes Venky Ananth, senior vice president and global head of healthcare at Infosys in Hartford, CT. To overcome this barrier, healthcare players should look to partner across ecosystems and with technology companies, he says.
“Data-sharing improves EHRs, and in turn patient safety, by opening up the gates to innovation and opportunity, accelerating the process in identifying solutions and curing patient diseases. Given that healthcare is such a highly regulated industry, and data breaches are expensive, it is pertinent that patients play an involved role in managing their data,” he says. “One example of this in practice is through wearable consumer technology, where patients themselves can input and monitor their health data.”
- Venky Ananth, Senior Vice President, Global Head of Healthcare, Infosys, Hartford, CT. Phone: (959) 333-4000.
- Daniel Cidon, Chief Technology Officer, NextGate, Monrovia, CA. Phone: (626) 376-4100.
- David C. Classen, MD, MS, Division of Clinical Epidemiology, University of Utah School of Medicine, Salt Lake City. Email: firstname.lastname@example.org.
- Paul Dexter, MD, Research Scientist, Center for Biomedical Informatics, Regenstrief Institute; Associate Professor, Clinical Medicine, Indiana University School of Medicine, Indianapolis. Email: email@example.com.
- Roy Wyman, Partner, Nelson Mullins, Nashville, TN. Phone: (615) 664-5362. Email: firstname.lastname@example.org.