No one would argue that physicians and other providers always get it right. But a patient diagnosis can go wrong for many reasons. Patients may not report all their symptoms clearly, or their symptoms may not conform to what is considered the norm. Diagnostic equipment may not be sensitive enough to catch an illness or injury initially. The patient may delay seeking treatment, or a busy hospital or practice may delay providing it.
But none of these reasons, nor others such as physician fatigue, equipment failures, or miscommunication among providers, changes the fact that a diagnostic error has occurred, and such errors sometimes result in injury or death. Some estimates put the annual number of misdiagnoses, missed diagnoses, and over-diagnoses as high as 12 million in outpatient clinics alone, according to Hardeep Singh, MD, PhD, a patient safety researcher at the Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety at the Michael E. DeBakey VA Medical Center and Baylor College of Medicine in Houston.
A study from just over 20 years ago using autopsies found that as many as 40% of patients had misdiagnoses resulting in death,1 and a study from a little more than a decade ago estimated that as many as 80,000 people a year died from misdiagnosis. Recent data suggest the main areas of misdiagnosis are pulmonary embolism, drug overdoses or adverse reactions, lung cancer, colorectal cancer, acute coronary syndrome, breast cancer, and stroke.2
Singh is one of the pre-eminent researchers on the issue of diagnostic errors and believes that whatever the reason behind the errors, we will never get a handle on the problem and truly begin to correct it until we start to measure it.
But that’s a difficult undertaking. Almost any paper on the topic lists some of the issues that make measurement hard. Singh and his colleague Dean Sittig catalog several in their most recent paper3: diagnosis is not necessarily something that happens in a moment, but over time, as information accumulates. Autopsies, one of the best ways to determine whether a diagnosis was mistaken, are performed with decreasing frequency. And some of the best counting methods, prospective observational studies, are expensive, while retrospective studies based on chart reviews or administrative data are imperfect at best and misleading at worst. Regardless, none of these is done systematically for the purpose of examining diagnostic errors.
One reason is that no one requires that this be done, says Mark Graber, MD, FACP, president of the Society to Improve Diagnosis in Medicine (SIDM) based in Austin, TX, and a senior fellow at the scientific research and consulting organization, RTI International. “If physicians wanted to report this stuff, then incident reporting would be a good way to measure,” he says. It would be easy, rote, cheap. But they do not want to. Graber says that, partly, it is because providers do not like to think they are wrong.
Tools that might make reporting easy do not exist, Graber says. The IHI Global Trigger Tool, used for reporting adverse events, is used to capture errors of commission — things that happen — he says, while a diagnostic error is an error based on something that didn’t happen, or didn’t happen in time. That’s hard to capture with a trigger.
Asking patients is another way to get at the data, because past experience has shown they are usually happy to talk to someone and tell them, a day or so after release, if they are doing well. “But that’s not enough time usually, even after discharge from the ED, to ask if we got it right,” Graber says. “However, patients are a great resource, and are willing to disclose safety concerns. Generally, they are pretty accurate, too.”
The striking thing about patient reporting of diagnostic mistakes, he says, is that multiple studies show few of the problems patients identify are the kind a hospital’s normal surveillance picks up, such as medication errors.
At facilities with a strong safety culture and a non-punitive, collaborative atmosphere, Graber says it is possible to get good information on diagnostic errors. As an example, he describes a medium-sized hospital where hospitalists were asked to report any diagnostic errors they came across; after six months, they had 36 to study. “Doctors run into these on a regular basis, and if you encourage them to talk to each other about them, and your system is not punitive about them, they will.”
Generally, though, even at the best hospital, keeping track of the diagnostic mistakes is not part of the way things are done, he says, and “there are still many doctors who think they do not have any errors at all.”
Any fear about their own performance is misplaced, Graber says, as research shows the majority of mistakes are system errors that can be fixed. He gives a hypothetical example of a patient who shows up with a chest nodule that needs to be biopsied. There are many steps between the determination that a biopsy is needed and the date it is performed: pre-operative testing, getting approval from the payer, making sure the procedure doesn’t conflict with the patient’s daughter’s wedding or the surgeon’s vacation. By the time the biopsy is done, 10 months later, the mass has metastasized.
With the knowledge that the system was taking too long for relatively simple procedures, streamlining is possible. “With coordination, and someone owning the process of shepherding the patient from one stage to the other, it can all happen in 2 weeks.”
Another example of a system error is physicians being unsure of what test to order, and thus either over-ordering tests, or not getting the right one. “There are thousands of possible tests, and they do not always know who to call in the lab to ask what might be appropriate.” Something as simple as having a liaison person designated to answer physician questions about appropriate testing could help ensure patients get the right tests at the right time, says Graber.
When you start to explain the realities of diagnostic error to providers, and show them the data that increasingly backs up the assertion that it is a real problem, they start to get it, Graber notes. “If you can show them information about the things that underlie diagnoses and the predictable ways we can stray, they find it interesting and understandable.”
Defining an error has been another hurdle. Singh has written about the topic before and has a paper under revision that looks at the complexity of the issue. Along with that, there are no standardized measures. But he is working on that, too, and with Sittig has developed the Safer Dx Framework in the hopes that it will advance the cause of creating a uniform method of measuring misdiagnosis.4 Basically, they have tried to account for the various moving parts of medicine, the fact that there can be multiple settings involved, and the various stakeholder outcomes.
In time, Singh believes, a framework or process will emerge that a majority of people agree upon, and payers, regulators, and accreditors will demand that these data be collected and analyzed in some way.
“Initially, we will have to study the harm and learn from the error,” he says. In the eight years he has worked on the topic, he has seen cases where patients have come to the doctor in an outpatient setting and then ended up in a hospital 10 days or two weeks later. “What errors were made? What was missed on that first visit?” In many cases, data gathering during the history is a problem, or there are misses during the physical exam.
Like Graber, he can also note examples of delays in diagnosis that led to more serious staging of cancers that can lead to more arduous treatments (with their own side effects) or even death. At the VA, researchers found almost 30% of cancer cases had opportunities for earlier diagnosis, Singh says. So now they code every abnormal lung X-ray and ensure that there is a follow-up appointment or repeat X-ray within 30 days. If the patient doesn’t show, he or she will get a call. The coding is all done electronically.
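The VA’s 30-day rule described above amounts to a simple query over coded results: find every abnormal chest X-ray with no follow-up visit or repeat imaging inside the window, and queue a call. A minimal sketch in Python, with hypothetical record fields and a hypothetical `needs_call` helper (this is an illustration of the logic, not the VA’s actual system):

```python
from datetime import date, timedelta

# Hypothetical records: each coded abnormal chest X-ray carries the result
# date and the date of any follow-up visit or repeat X-ray (None if none yet).
abnormal_xrays = [
    {"patient": "A", "result_date": date(2015, 3, 1), "follow_up_date": date(2015, 3, 20)},
    {"patient": "B", "result_date": date(2015, 3, 5), "follow_up_date": None},
    {"patient": "C", "result_date": date(2015, 1, 10), "follow_up_date": date(2015, 3, 1)},
]

def needs_call(record, today, window_days=30):
    """Flag a patient whose abnormal result had no follow-up within the window."""
    deadline = record["result_date"] + timedelta(days=window_days)
    done_in_time = (record["follow_up_date"] is not None
                    and record["follow_up_date"] <= deadline)
    # Only flag once the window has actually elapsed without timely follow-up.
    return not done_in_time and today > deadline

today = date(2015, 4, 15)
to_call = [r["patient"] for r in abnormal_xrays if needs_call(r, today)]
print(to_call)  # → ['B', 'C']
```

Patient B has no follow-up at all, and patient C’s follow-up came after the 30-day deadline, so both would get a call; patient A was seen in time.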
The VA also analyzed its peer review data, marking cases that involved diagnosis, and found that only 15-17% of reviewed cases related to diagnostic issues. But that kind of review doesn’t yet involve learning from the data, he says. “Which is a shame. If you have that data and you are collecting it, you should categorize it and do something with it.” Singh thinks peer review should be “transformed so that there is more information on the sharp end of medicine, the point of care. Peer review can be that. We can collect that kind of comprehensive information.”
There are things you can do now, before it becomes mandatory, and they are things that both Graber and Singh think you should do, whether it is currently incentivized or not. “There is no measure, so there is no incentive,” Singh says. But that is changing.
In the meantime, here are some suggestions:
• Use what you have. You can use your electronic health record program to flag certain patients. If there is someone who should be followed up with and is not, make sure that patient is flagged, Singh says. While particularly useful for outpatients, it can be used with inpatients who are awaiting test results, too, or patients who have to come back to the hospital for some sort of follow-up care.
• Monitor testing follow-up. Graber says knowing how well your physicians do in quickly following up on vital tests is a good gauge for knowing how well they are doing at timely diagnosis. Not every disease moves slowly. Sometimes, an hour’s delay can make a big difference.
• Look again. Consider doing a chart review of all your chest X-rays, says Graber. It can give you an idea of how many are misread. From there, you can look at some of the potential reasons — there is a lot of emerging literature in the area of misread radiographs — and begin to address them.
• Open up. Encourage open discussion among clinical staff, and if possible create a reporting program, says Graber. If you can find a physician champion, you might be able to create a good project around the issue and set up a framework for diagnostic error measurement that will work for you in the future, when no doubt, you will be required by outside forces to know more about it than you do now.
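The “monitor testing follow-up” suggestion above can start as a simple timeliness report built from result and follow-up dates. A sketch, again with made-up numbers and a hypothetical `timeliness_report` function (no standard EHR export format is assumed):

```python
from statistics import median

# Hypothetical delays, in days, between a critical test result and the
# physician acting on it, pulled from chart or EHR timestamps.
delays = [1, 2, 2, 3, 5, 8, 13, 30, 45]

def timeliness_report(delays_days, threshold_days=7):
    """Summarize how quickly critical results are acted on."""
    within = sum(1 for d in delays_days if d <= threshold_days)
    return {
        "n": len(delays_days),
        "median_days": median(delays_days),
        "pct_within_threshold": round(100 * within / len(delays_days), 1),
    }

report = timeliness_report(delays)
print(report)  # → {'n': 9, 'median_days': 5, 'pct_within_threshold': 55.6}
```

Tracked over time, the median delay and the share of results acted on within the threshold give a rough gauge of the timely-diagnosis behavior Graber describes, and the outliers (30 and 45 days here) are the cases worth reviewing.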
“For now, this is not about how many you have, but looking to see what is there and figuring out why they occur,” Graber says.
For more information on this topic, contact:
• Mark Graber, MD, FACP, President, Society to Improve Diagnosis in Medicine, and Senior Fellow, RTI International, Austin, TX. Email: email@example.com
• Hardeep Singh, MD, Ph.D., Patient Safety Researcher, Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX. Email: firstname.lastname@example.org.
1. Leape L. Error in medicine. JAMA. 1994;272(23):1851-1852.
2. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med. 2009;169(20):1881-1887. doi:10.1001/archinternmed.2009.333
3. Leape L, Berwick D, Bates D. Counting deaths from medical errors. JAMA. 2002;288(19):2405.
4. Singh H, Sittig DF. Advancing the science of measurement of diagnostic errors in healthcare: the Safer Dx framework. BMJ Qual Saf. 2015;24(2):103-110. doi:10.1136/bmjqs-2014-003675