Potential harms from misdiagnosis in the ED are important to understand but difficult to measure. The authors of two recent studies investigated this problem using the Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) method.

In one study, researchers measured the misdiagnosis-related harm rate for sepsis among patients with altered mental status and fluid and electrolyte disorders.1 The authors analyzed 4,549 ED encounters for patients with these symptoms who were discharged home. Of this group, 26 were hospitalized for sepsis in the next 30 days.

In a second, similar study, researchers measured misdiagnosis-related harms from missed acute myocardial infarction (AMI) in the ED.2 They analyzed 44,473 AMI hospitalizations. Of this group, 2,874 patients had been seen in an ED and discharged home in the previous 30 days, representing 574 probable missed AMIs.

“With improvements in diagnostic testing, it was not surprising that the overall error and adverse event rate were low for AMI,” says Susan Peterson, MD, an author of the second study and associate medical director for patient safety and quality at The Johns Hopkins Hospital department of emergency medicine. “[These findings] demonstrate a method for diagnostic performance, consistent with our clinical experience, that can be used toward other conditions that may be more obvious targets for intervention.”

ED Management (EDM) talked with David E. Newman-Toker, MD, PhD, director of the Johns Hopkins Armstrong Institute Center for Diagnostic Excellence, about using the SPADE approach to improve diagnostic accuracy in the ED. (Editor’s Note: This transcript has been lightly edited for length and clarity.)

EDM: Why is improving diagnostic accuracy in the ED so challenging?

Newman-Toker: Diagnostic errors are not like treatment errors. With wrong site surgery, you know that it happened, you know when it happened, and you know who was involved. All you need to do is figure out how to prevent it from happening again. By contrast, it is really hard to measure and monitor diagnostic errors because you often don’t know they’ve happened until well after the fact. This makes diagnostic errors the hidden “bottom of the iceberg” of patient safety. You have to go deliberately out of your way to even identify the problem.

EDM: What is the current approach for most EDs?

Newman-Toker: The most common way is to review the charts of cases that surface as incident reports, patient complaints, or risk management/malpractice claims. Chart review is very labor-intensive. Most of the time, it's inaccurate for identifying diagnostic errors because the most important information that would help you decide is usually missing from the chart. If you find a case you believe represents a diagnostic error, you then dig into the chart even further to try to sort out all the holes in the Swiss cheese that led to the diagnostic failure. Then, you try to plug those holes. But in the end, you have no idea whether any of it had an impact on your patients, because you can't measure your diagnostic error rate or the rate at which errors resulted in harm. You can easily get lost in a sea of infinite possibilities of things that could be fixed, yet never know if any of them made a difference.

More often, though, the path of least resistance is just to conclude that it’s mostly “this person made the mistake, so we’re going to fire them for being an incompetent doctor,” instead of thinking of it as a system problem. That’s ultimately unhelpful, since the majority of diagnostic errors are made by well-intentioned clinicians working hard and doing their best for patients.

Unfortunately, the system isn’t supporting them in one way or another. Maybe they have too many patients and not enough time. Maybe they don’t have access to relevant tests, consultants, or computer-based tools to help guide decision-making. Maybe they were not adequately prepared for a particular clinical problem through their training. Maybe they are never given any systematic feedback on their diagnostic accuracy. Without feedback, it’s impossible to improve. It’s this last issue of diagnostic performance feedback that measuring diagnostic errors seeks to address.

EDM: What is the SPADE approach? How does it tell EDs what is causing misdiagnosis better than existing methods?

Newman-Toker: A few years ago, we started looking for ways to make it easier to measure and monitor, at an institutional level, how we were doing in terms of diagnostic performance. That led to what we call SPADE.

SPADE basically takes simple administrative or billing data about medical encounters and converts that into knowledge about diagnostic errors. It measures adverse events, such as hospitalizations, in the short-term follow-up of patients treated and released from the ED. But it does so with a twist; SPADE looks for specific symptoms and diseases that are related to each other, rather than looking at any short-term adverse event.

By analyzing only symptoms and diseases that are related from a diagnostic standpoint, you can get a biologically plausible, clinically sensible, and statistically valid estimate of the diagnostic error rate for that symptom-disease pair. For example, imagine a patient comes to the ED with chest pain, leaves with a diagnosis of acid reflux, and comes back a week later with a heart attack hospitalization. That could be just a coincidence of two diseases affecting the same patient one right after the other. But it is probably a missed diagnosis.

Finding the diagnostic error rate across such patients for a given hospital ED requires only four pieces of data, all routinely gathered: the release dates and discharge diagnoses from that ED's treat-and-release visits, and the admit dates and discharge diagnoses from the later hospitalizations. With those four pieces of information, we can construct a curve of how frequently patients come back and are hospitalized for a heart attack after having been told they had something "benign" like acid reflux.

What we see is a cluster of events very close in time. The risk of returning with a heart attack, after being sent home with "reflux," is highest in the first days or weeks afterward. It then drops down to a flat rate that stays constant thereafter. We see the same curve for stroke, sepsis, heart attack — for essentially any acute disease. For every disease that's dangerous to the patient, when you miss it the danger level tends to be highest at the time it initially presents, and then declines back to a more stable risk level over time.

We can use the tail of that curve to estimate the baseline rate of heart attacks in that exact patient population. When we subtract that long-term base rate from the short-term rate — say, the first 30 days after the initial ED discharge — we get a measure of patients harmed by diagnostic error at the initial ED visit. Note that in principle, we don't have to use hospitalizations; we could use deaths, disability, excess use of healthcare resources, or anything else. It just has to be consistently captured, and it so happens that hospitalizations are almost always captured.
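The tail-subtraction logic described above can be sketched in a few lines of code. This is only an illustration of the idea — the function name, window lengths, and toy data are assumptions for demonstration, not the published SPADE specification:

```python
# Illustrative sketch of the SPADE excess-risk calculation (hypothetical data).
# Input: for each treat-and-release ED visit that was followed by a
# hospitalization for the target disease, the number of days until that event.

def spade_harm_rate(days_to_event, n_visits, short_window=30,
                    tail_start=91, tail_end=365):
    """Estimate misdiagnosis-related harms per treat-and-release visit.

    The flat "tail" of the event-time curve (tail_start..tail_end days)
    serves as the baseline disease rate for this population; events above
    that baseline in the first `short_window` days are attributed to
    diagnoses missed at the index ED visit.
    """
    short_events = sum(1 for d in days_to_event if d <= short_window)
    tail_events = sum(1 for d in days_to_event if tail_start <= d <= tail_end)

    # Scale the baseline (events per day in the tail) to the short window.
    tail_days = tail_end - tail_start + 1
    expected_short = tail_events / tail_days * short_window

    excess = max(short_events - expected_short, 0.0)
    return excess / n_visits  # misdiagnosis-related harm rate per visit

# Toy data: 40 early events clustered in the first month, plus a flat
# trickle of baseline events spread across the rest of the year.
events = [1, 2, 3] * 10 + [5] * 10 + list(range(100, 360, 10))
rate = spade_harm_rate(events, n_visits=10_000)
```

The key design point is that only event dates and visit counts are needed — exactly the four routinely gathered data elements described above — so no chart review is required to track the rate over time.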

There are a certain number of patients who get misdiagnosed; we don’t know which ones, necessarily. But a measurable number of them get unlucky and something bad happens that causes them to be hospitalized a day later or a week later. We call these adverse events “the misdiagnosis-related harm rate,” and we can monitor that over time. If we have a stroke misdiagnosis reduction program, we can see whether the rate of short-term stroke events (after being told you didn’t have a stroke) is declining or if it’s staying unchanged in response to our intervention. It becomes the “needle” that we’re trying to move.

With missed sepsis, we thought the misses might be in patients with fevers who were diagnosed with a viral syndrome and sent home, because we knew from a prior study that is what happens in kids.3

Using the SPADE method, we looked back from sepsis hospitalizations to see what symptoms those patients had been treated and released for. It turned out that a change in mental status (confusion or altered thinking) was the dominant harbinger of missed sepsis in adults. Those patients were being sent home and then hospitalized with sepsis in the next three to five days.

With SPADE, you get to explore what is the thing you are missing, how often you are missing it, and whether your intervention is making a difference. If you don’t have those three things, then you don’t know what to fix ... and you don’t know whether you’ve fixed it when you try to fix it. That’s why the measurement piece of this is so critical. With SPADE, you can actually measure the valid rates of harm after misdiagnosis in an ongoing fashion, with a focused lens on what needs to be fixed.

EDM: Can EDs reduce risks of litigation with SPADE?

Newman-Toker: Our goal in doing this is to try to improve care for patients and to prevent the harms that are happening to patients, regardless of whether those patients sue clinicians or hospitals. But, for sure, risk insurers, hospitals, and clinicians would all be happy if claims could be reduced. More than one in five malpractice lawsuits are for diagnostic errors. They also account for nearly 30% of all risk insurer payouts, which is more than any other type of medical error. That's already a large share, but among serious-harm claims, where the patient suffered permanent disability or death from the error, more than one in three are for diagnostic errors.4 One of the downstream benefits of all this work to improve diagnosis for patients, although it's not really the motivation for it, is a reduction in malpractice claims.

REFERENCES

  1. Horberg MA, Nassery N, Rubenstein KB, et al. Rate of sepsis hospitalizations after misdiagnosis in adult emergency department patients: A look-forward analysis with administrative claims data using Symptom-Disease Pair Analysis of Diagnostic Error (SPADE) methodology in an integrated health system. Diagnosis (Berl) 2021; Apr 26. doi: 10.1515/dx-2020-0145. [Online ahead of print].
  2. Sharp AL, Baecker A, Nassery N, et al. Missed acute myocardial infarction in the emergency department-standardizing measurement of misdiagnosis-related harms using the SPADE method. Diagnosis (Berl) 2020;8:177-186.
  3. Vaillancourt S, Guttmann A, Li Q, et al. Repeated emergency department visits among children admitted with meningitis or septicemia: A population-based study. Ann Emerg Med 2015;65:625-632.e3.
  4. Newman-Toker DE, Schaffer AC, Yu-Moe CW, et al. Serious misdiagnosis-related harms in malpractice claims: The “big three” — vascular events, infections, and cancers. Diagnosis (Berl) 2019;6:227-240.