EXECUTIVE SUMMARY

Preventing diagnostic errors has proven difficult. Many of these errors are captured through passive reporting, and systems are not in place to help clinicians learn from such errors.

  • Children’s Hospital Colorado in Aurora has implemented active surveillance to identify potential diagnostic errors. The approach involves leveraging an electronic algorithm to identify cases that meet specific criteria.
  • Once potential errors are identified, they are reviewed to find what learning opportunities can be passed on to clinicians and what steps may be needed to prevent such errors from happening again.
  • A nonpunitive approach is needed when sharing diagnostic errors with clinicians. Researchers piloted a system of feedback that can be used to discuss diagnostic errors with physicians in emergency medicine, primary care, and hospital medicine.
  • The pilot was well-received, and it is expanding. The model can be duplicated at other health systems interested in employing similar approaches to improve diagnostic performance. However, the program demands considerable time and resources.

Several years ago, clinicians at Children’s Hospital Colorado (CHC) in Aurora were grappling with what led to the death of a patient who presented several times to the health system’s ED and urgent care centers with what was initially diagnosed as a migraine.

“Unfortunately, [the patient] ended up dying from a brain abscess, and that was the actual correct diagnosis,” explains Joe Grubenhoff, MD, who is now the medical director of the diagnostic safety program at CHC. “There was a root cause analysis done to show that this was primarily a diagnostic error. Nobody took into account the patient’s fever and considered that perhaps something else was going on.”

The tragic event ultimately led the health system to delve deeply into how diagnostic errors occur and what can be done to ensure that mistakes are not repeated. That process is ongoing, but CHC has made significant progress toward dealing with the multiple barriers that can stymie efforts to improve diagnostic performance. For instance, Grubenhoff notes one important early hurdle for any health system engaged in this work involves finding a way to ensure everyone knows when diagnostic errors occur.

Employ Active Surveillance

“Most hospitals have ways that you can report adverse events or at least safety concerns, but, traditionally, physicians are not good reporters. They report far fewer incidents than nursing staff. They may not even be aware that a diagnostic error occurred, especially in the ED,” Grubenhoff shares. “If you have discharged the patient, and then he comes back after your shift or after your week of service ... you might not even know the patient is back in the hospital and that he has a different diagnosis than the one you sent him home with.”

Further, clinicians who see the patient upon his return to the hospital also may not report the incident. Those clinicians might not want to tell on their colleagues, or they may not recognize the care provided during the patient’s initial visit as something that could have been handled better. Grubenhoff calls this “passive surveillance,” which can cause problems to fall through the cracks. Grubenhoff and colleagues at CHC are moving from passive to active surveillance to detect diagnostic errors. This involves leveraging an electronic algorithm to pick up cases from the electronic medical record that meet specific criteria.

Recently, investigators studied pediatric patients who had visited an ED/urgent care system, and then were subsequently admitted to the hospital with a different diagnosis within 10 days of the original encounter.

For each case pulled, the electronic algorithm identified the diagnosis from the initial care encounter and the hospital discharge summary from the subsequent admission. Investigators analyzed the cases to see if clinicians missed an opportunity to identify a condition earlier based on the available information. If so, the case was categorized as a probable diagnostic error.
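The trigger logic described above can be sketched as a simple filter over encounter records. This is a minimal illustration of the concept only; the field names and data shapes below are assumptions for demonstration, not CHC's actual algorithm or schema, and any flagged pair would still go to manual review rather than being labeled an error automatically.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative record shape; these field names are assumptions, not CHC's schema.
@dataclass
class Encounter:
    patient_id: str
    encounter_date: date
    setting: str     # e.g., "ED", "urgent care", "inpatient"
    diagnosis: str   # primary diagnosis from the encounter or discharge summary

def flag_candidates(encounters, window_days=10):
    """Pair each ED/urgent care visit with a later inpatient admission for the
    same patient within `window_days` that carries a different diagnosis.
    Returns (initial_visit, admission) pairs for manual case review."""
    initial_visits = [e for e in encounters if e.setting in ("ED", "urgent care")]
    admissions = [e for e in encounters if e.setting == "inpatient"]
    flagged = []
    for first in initial_visits:
        for admit in admissions:
            if (admit.patient_id == first.patient_id
                    and first.encounter_date < admit.encounter_date
                    and admit.encounter_date <= first.encounter_date + timedelta(days=window_days)
                    and admit.diagnosis != first.diagnosis):
                flagged.append((first, admit))
    return flagged
```

In practice, a real implementation would query the electronic medical record directly and compare coded diagnoses rather than free-text strings, but the core screening criteria — same patient, a revisit within the window, and a changed diagnosis — are what narrow thousands of admissions down to a reviewable set of candidate cases.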

“Our hospital admits from the ED about 12,000 kids a year. Of that 12,000, about 1,000 end up being kids who were seen previously in the system,” Grubenhoff explains. “Over the first two years of data collection, [we found] about 5% of all admissions had a potential diagnostic error if they were seen [in the healthcare system] in the prior few weeks.”

Following a structured review of these cases,1 investigators concluded that in the two-year review period, there were 92 cases for which there was sufficient information available to the clinician during a prior encounter to pursue further workup of the patient. However, that follow-up did not occur. Grubenhoff says only six of the cases identified through active surveillance were detected by any of the hospital’s passive systems in place for reporting errors or safety concerns.

While some of the diagnostic errors were relatively minor and did not result in any lasting harm, some patients experienced serious adverse events. “One patient that stands out in particular was a girl similar to [the young patient who died from a brain abscess in 2015],” Grubenhoff explains. “She had been seen for what sounded like migraine headaches, but she kept having fevers, and she ended up also having a brain abscess.”

The patient was required to undergo a craniectomy. During that procedure, surgeons also removed an infected bone flap and installed a titanium plate. “Had we caught that a week earlier when we saw her for the first time, she might not have had that complication,” Grubenhoff says. “We are detecting things that are worrisome that otherwise go unrecognized.”

Find the Repeats

Now engaged in their fourth year of active surveillance data collection, CHC investigators are finding groups of kids on similar trajectories. They arrive with the same complaints, the same errors occur, and clinicians miss the same diagnoses.

“I found 12 kids who had a story where they were diagnosed initially with a migraine headache and ended up having some sort of intracranial process like an abscess, tumor, or a bleed,” he says. “We also found several kids who came in with fever and a limp; they came back with a bone infection or a joint infection that was missed [during the initial encounter].”

With this evidence, investigators can aggregate the cases and look for the reasons these diagnoses are missed repeatedly. The information can be included in a quality improvement effort that underscores that these are not one-off cases.

“Human beings reason the same way all the time, and we keep making the same mistakes over and over again,” Grubenhoff says. “What can we do or what can we put in place to keep those repetitive mistakes from causing patient harm?”

Electronic prompts could remind clinicians to consider certain diagnoses and/or workups when specific criteria are met. Still, it also is important for clinicians to know when they have made a diagnostic error. It turns out a significant obstacle to this task is finding an acceptable way to broach the topic with clinicians so the information is well-received.

“When you are talking about diagnostic errors, you are talking about the fundamental role of what a physician or a nurse practitioner does,” Grubenhoff observes. “The first step in doing any patient care is making the diagnosis. If you don’t get that right, then you are potentially treating the wrong problem and causing harm.”

In his own research, Grubenhoff has found conversations about diagnostic errors make clinicians uncomfortable, particularly when such discussions take place in open forums, such as in morbidity and mortality conferences.2 However, Grubenhoff also has found clinicians want to know when they have made an error and how they can avoid making similar errors in the future.

So what is the best way to deliver this feedback to clinicians? CHC surveyed providers about their preferences regarding this type of feedback.

“[The providers indicated] that they wanted something that is actionable, a clinical learning pearl,” Grubenhoff reports. “They wanted it delivered by somebody with either the same or more experience. They preferred to have it delivered primarily through a conversation, such as during a phone call, rather than in an email or something like that.”

Grubenhoff explains more data collection around this topic is ongoing, but the information already provides some clear guidance on how to fashion a process around error identification, case review, and feedback.

“Having a conversation with a colleague after a group has had a chance to review [the case] and come up with some discrete opportunities ... has made some people seem to feel a bit more comfortable with the feedback they are getting,” he says.

Secure Leadership Support

Geisinger Health, a large, integrated health system based in Pennsylvania, has been focused on diagnostic improvement in recent years, too. Its Committee to Improve Clinical Diagnosis (CICD) has partnered with a multidisciplinary research team to form the Safer Dx Learning Lab, a group focused on a range of activities aimed at boosting diagnostic safety and measurement.

As part of this effort, the group developed processes to identify what it calls missed opportunities in diagnosis (MOD). The CICD partnered with the departments of emergency medicine, hospital medicine, and primary care to pilot an approach for providing feedback to the clinicians involved with these identified MODs.3

Key to their approach was delivering the feedback in a nonpunitive manner and presenting the errors as learning opportunities. A clinical psychologist experienced in working with clinicians trained department leaders, using a guidebook outlining the approach, on how to deliver this feedback to other providers.

Between January 2019 and June 2020, the trained facilitators held feedback sessions about the potential MODs with clinicians. The sessions were scheduled at a convenient time for the providers, and investigators reported that most of these sessions lasted for 20 to 30 minutes.

In follow-up surveys, most facilitators reported they believed recipients were open to the feedback and the sessions would improve patient safety. Likewise, most recipients reported the feedback was delivered in a nonpunitive way, providing them with opportunities to improve their diagnostic process.

Nevertheless, there were some hurdles. Hardeep Singh, MD, MPH, one of the authors of this research, notes finding the time and resources to do this work was challenging.

“Also, [you] need very strong leadership support to create these programs. Most institutions do not currently have such programs, so it will take some time and additional evaluation before this can get into routine practice,” Singh explains. “We feel everyone can start, even in a pilot form, based on the resources and ideas we [have] provided. In addition, you need some data sources, such as safety reports and cases, that can facilitate the discussions.”

Geisinger is expanding the feedback process to other departments, and Singh is hopeful the approach can be implemented in other health systems. In the meantime, he acknowledges more research is needed to assess whether such interventions actually improve diagnostic performance. “We are envisioning this in our future work,” Singh adds. 

REFERENCES

  1. Singh H, Khanna A, Spitzmueller C, Meyer AND. Recommendations for using the revised Safer Dx instrument to help measure and improve diagnostic safety. Diagnosis (Berl) 2019;6:315-323.
  2. Grubenhoff JA, Ziniel SI, Cifra CL, et al. Pediatric clinician comfort discussing diagnostic errors for improving patient safety: A survey. Pediatr Qual Saf 2020;5:e259.
  3. Meyer AND, Upadhyay DK, Collins CA, et al. A program to provide clinicians with feedback on their diagnostic performance in a learning health system. Jt Comm J Qual Patient Saf 2021;47:120-126.