IOM report on medical errors challenged as exaggerated, inaccurate, and overstated

Center re-examines the data and comes to a new conclusion

The Institute of Medicine (IOM) report on medical errors that created a maelstrom in the health care industry is itself under fire, criticized by researchers who say its conclusions are greatly overstated and too inaccurate to fairly influence health care policy.

The harsh criticism comes from Clement McDonald, MD, and other researchers working at the Regenstrief Institute in Indianapolis. The institute is part of the Indiana University Center for Aging Research at the Indiana University School of Medicine. In a recent article in The Journal of the American Medical Association, McDonald writes that the report from the IOM is "hot and shrill."1

The IOM report centers on death and disability in U.S. hospitals, with the IOM calling preventable adverse events a leading cause of death. The IOM report also says at least 44,000, and perhaps as many as 98,000, Americans die in hospitals each year as a result of medical errors.2

The IOM report also juxtaposes the death rate from motor vehicle collisions with the death rate from adverse events, suggesting that eliminating preventable adverse events would also eliminate those deaths.

"Unlike most people who step into motor vehicles, most patients admitted to hospitals have high disease burdens and high death risks even before they enter the hospital," Mr. McDonald writes.

"Although some hospital deaths are preventable, most will occur no matter how many 'accidents' we avoid. Of course, medical errors are never excusable, but the baseline death risk has to be known and factored out before drawing conclusions about the real effect of adverse reactions on death rates, preventable or otherwise," he points out.

Mr. McDonald also writes that the IOM report draws inaccurate conclusions from the data reported in the Harvard Medical Practice Study (MPS), a groundbreaking study about adverse events, preventable and otherwise.3

The study’s data were misused in the IOM report to suggest that adverse events have a direct correlation with patient death rates, he writes.

"The Harvard study includes no information about the baseline risk of death in these patients or information about deaths in any comparison group," Mr. McDonald writes.

"Therefore, it cannot be determined whether adverse events are correlated with, let alone whether they cause, death. Indeed, an assertion that adverse events caused death in 13.6% of the patients who experienced adverse events is tantamount to the assertion that there would be no deaths in a group with similar baseline risks who avoided all adverse events. Clinical experience tells us that this is not true," he states.
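McDonald's point about baseline risk can be illustrated with a short calculation. The 13.6% figure comes from the text above; the cohort size and the 10% baseline death rate below are hypothetical assumptions chosen only to show the arithmetic, not figures from the Harvard study:

```python
# Toy illustration of McDonald's baseline-risk argument. All numbers
# except the 13.6% figure are hypothetical assumptions, not study data.

patients_with_adverse_events = 1000
observed_deaths = 136  # 13.6% of patients with adverse events died

# Suppose a comparison group with a similar disease burden would have
# had a 10% baseline death rate (an assumed figure for illustration).
assumed_baseline_death_rate = 0.10
expected_baseline_deaths = int(patients_with_adverse_events * assumed_baseline_death_rate)

# Only deaths above the baseline could plausibly be attributed to the
# adverse events themselves.
excess_deaths = observed_deaths - expected_baseline_deaths

print(f"Observed deaths: {observed_deaths}")                       # 136
print(f"Expected deaths at baseline risk: {expected_baseline_deaths}")  # 100
print(f"Excess deaths potentially attributable: {excess_deaths}")  # 36
```

Under these assumed numbers, roughly three-quarters of the observed deaths would have occurred at baseline risk alone, which is why McDonald argues the baseline must be factored out before attributing deaths to adverse events.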

Mr. McDonald’s article says the IOM report cited a number of other studies to support the argument that medical errors are a major cause of death. Most of those studies also depended on physician chart review, qualified their claims with words like "possible cause," and lacked any kind of control or comparison group, he writes.

The IOM did not emphasize those limitations. The IOM also cited a study that claimed 7,000 deaths were due to medication errors in 1993, but Mr. McDonald writes that this study miscounted deaths due to drug abuse as due to medication errors.

His article says the Harvard study acknowledged that eliminating the adverse event would have little effect on the life expectancy of many terminally ill patients, but the IOM report took the opposite approach, implying causation whenever an adverse event was followed by the patient's death.

He writes further, "Given these facts, using available data and some reasonable assumptions, we believe that the increment in the published death rate due to adverse events above the baseline death rate could be very small. We also assert that the available data do not support IOM’s claim of large numbers of deaths caused by adverse events [preventable or otherwise].

"Clearly, more study with careful attention to risk levels is needed to determine the true impact of adverse events on death rates among hospitalized patients. Until those results are available, the design of regulatory solutions is premature."

One risk manager following the controversy tells State Health Watch that Mr. McDonald’s criticism hits the mark. Edward E. Bartlett, PhD, ARM, is a risk management consultant in Rockville, MD, who has written extensively on the subject of medical errors.

He says Mr. McDonald and his colleagues are right when they say the IOM report draws conclusions that are not justified by the research cited in the report. "The methodology in the IOM report was faulty," he says. "They did not take into account the fact that many of these patients would have died anyway, so we don’t know to what extent the adverse event increased the risk of death. We just don’t know the baseline."

Mr. Bartlett says the IOM made a mistake by focusing on medical errors rather than adverse events.

It is well-known that not all adverse events are caused by medical errors, and Mr. Bartlett estimates that only half are. If the IOM report had focused on adverse events overall and addressed how they might be reduced, it might have been more useful in influencing health care policies and procedures, he says.

"The whole notion of medical error has a great deal of sex appeal to it, and it's a great way to get newspaper coverage and to get congressional committees to perk up," he says. "But anyone who works in the field knows that 'medical error' is a very slippery concept. Best minds can disagree on what constitutes a medical error."

Lucian Leape, MD, a researcher at Harvard School of Public Health in Boston and a lead author of the IOM report, rebuts most of Mr. McDonald's assertions but agrees that the MPS research had flaws not cited in the IOM report. Mr. Leape wrote a rebuttal to the McDonald criticism in the same issue of JAMA.4 He says the study's most serious limitation is that it was a retrospective medical record review study.

Mr. Leape writes, "Many important events in patient care are not recorded in the medical record. Some errors are not even known to clinicians caring for the patient."

"Studies of autopsy, for example, have found potentially fatal misdiagnoses in 20% to 40% of cases. On balance, the reliance on information extracted from medical records most likely led to a substantial underestimate of the prevalence of injury."5-7

Another important weakness was that the MPS relied on implicit judgments by physicians, Mr. Leape says.

"A serious weakness of any retrospective review is hindsight bias, the tendency to impute causation to an action when the [bad] outcome is known," Mr. Leape explains. "Hindsight bias would tend to overestimate the number of deaths due to adverse events."

But Mr. Leape takes issue with Mr. McDonald’s assertions that the research included a large number of patients who would have died about the same time anyway, regardless of the adverse events.

Most severely ill patients or those with complicated conditions were screened out of the study, he says.

"For example, all patients who had major surgery, acute myocardial infarction, pneumonia, or stroke who had an uncomplicated course [and therefore did not meet screening criteria] were excluded, as were patients who were admitted for planned terminal care, had a do-not-resuscitate order, or were extremely ill," he says. "Even many intensive care unit patients did not meet any of the screening criteria."

Mr. Leape tells State Health Watch that the IOM researchers worked hard to find the most useful data on medical errors. He says the IOM report did not exaggerate the extent of medical injury and death, for three reasons:

1. It is unlikely the reviewers found adverse events that did not exist, but very likely that they missed some that did occur; many errors and adverse events are not charted or even recognized as such, he notes.

2. The large studies examined only injuries occurring inside the hospital, but more than half of surgical procedures take place outside the hospital setting.

3. When prospective, detailed studies are performed, error and injury rates are almost invariably much higher than indicated by the large record-review studies.

Mr. Bartlett praises the McDonald article for finding faults in the data that form the IOM report's foundation. If the IOM report is read by itself, without checking the source data or considering Mr. McDonald's points, it creates an unnecessarily poor impression of the health care industry, he says.

"You have to be rigorous in looking at these studies," he says. "It is clear that the IOM report, in some areas, was not rigorous in its review of these studies. I’m not trying to totally slam the IOM report, but it cannot be read and accepted at face value."

Keep politics out

Mr. Bartlett says he also is concerned that the IOM report may force political progress at the expense of the medical community. He calls the report "an astute political document" that can be misused.

"I would have preferred that the IOM report focus on adverse events and not imply that the medical field has been complacent or apathetic in dealing with it. I don’t think that’s a fair impression," he says. "It has been good in raising public awareness of adverse events. I just hope it doesn’t turn the issue into a political football."

References

1. McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated in Institute of Medicine report. JAMA 2000; 284:93-95.

2. Kohn LT, Corrigan JM, Donaldson M, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.

3. Brennan TA, Leape L, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients: Results of the Harvard Medical Practice Study I. N Engl J Med 1991; 324:370-376.

4. Leape L. Institute of Medicine medical error figures are not exaggerated. JAMA 2000; 284:95-97.

5. Anderson RE, Hill RB, Key CR. The sensitivity and specificity of clinical diagnostics during five decades: Toward an understanding of necessary fallibility. JAMA 1989; 261:1610-1617.

6. Cameron HM, McGoogan E. A prospective study of 1,152 autopsies, I: Inaccuracies in death certification. J Pathol 1981; 133:272-283.

7. Goldman L, Sayson R, Robbins S, et al. The value of the autopsy in three medical eras. N Engl J Med 1983; 308:1000-1005.