Almost two years ago, the Institute of Medicine (IOM) report To Err Is Human, with its dire assertion that medical errors may kill between 44,000 and 98,000 people each year in U.S. hospitals, created a stir in medical circles and led to the advent of numerous safety initiatives.
Now, however, an article in the July 25, 2001, Journal of the American Medical Association (JAMA) puts at least some of the IOM findings into question.
In fact, if you listen to the mainstream media, you’d probably conclude that the IOM researchers were a group of "Chicken Littles," screaming about a sky that was not really falling at all. "Fatal Medical Errors Seem Overstated,"1 proclaimed The Atlanta Journal-Constitution, asserting that lead author Rodney A. Hayward, MD, "estimates that between 5,000 and 15,000 deaths are actually due to errors."1
Report sees some deaths as preventable
The JAMA report, which involved 14 board-certified internists in the review of medical records of 4,198 patients who died at seven Veterans Affairs (VA) medical centers from 1995 to 1996, found that 22.7% of active-care patient deaths were rated as at least possibly preventable by optimal care, and 6% were rated as "probably or definitely preventable."2
Noting earlier studies, the authors point out concerns about a lack of consideration in the IOM and other studies of "the expected risk of death in the absence of medical error."2 The question remains, they note: "How long would patients have lived if care had been optimal?"2
In conclusion, write the authors, "Our results suggest that these statistics on deaths due to medical errors are probably unreliable and have substantially different implications than has been implied in the media and others."2
Of course, there are statistics, and then there are statistics. What was the real message the authors were seeking to convey? Patient Safety Alert went directly to the source for the answer.
"We [Hayward and co-author Timothy P. Hofer, MD, MS] agree with the IOM report’s conclusions that quality problems are extensive and that it is critical that we need to increase our effort to find system solutions to decrease errors and adverse events, in both the hospital and the outpatient setting," Hayward tells PSA.
"The size of the problem is definitely big; we have just questioned the estimates on the impact on inpatient deaths and the precise nature of the problem — not whether the problem is important," he says.
Focusing on the numbers
However, Hayward continues, his study does, indeed, suggest that some of the numbers quoted in the IOM study "probably have been misunderstood, both in terms of their nature and the frequency at which they truly impact mortality."
But he also notes that "the impact of quality problems on harm and costs are undoubtedly much larger."
The bottom line, says Hayward, is that "too much attention has been given to the numbers controversy."
"Media reports on the paper have not included the careful caveats presented in the paper," he says.
"We feel that the most important points in the paper are that the 'errors' tend to relate to difficult decisions in very sick patients, rather than the types of outright blunders that tend to be the lead example in media reports. Blunders occur, but they are not the common types of problems that make up the majority of cases that result in increased risk of death," Hayward explains.
Common problems that increase risk of death are going to be more difficult to identify and avoid than is often appreciated, he continues, and many of these "errors" are differences in opinion and "not what most would call errors at all."
In addition, too much attention has been placed on the short-term prognosis issue, "which although we feel is important information for policy-makers, we feel should in no way be used to infer that errors are therefore not important," Hayward says.
"Indeed, it suggests that quality improvement opportunities may be particularly important in end-of-life care. Still, we must be cautious in intervening in complex issues like end-of-life care with arbitrary or inflexible patient safety systems that are solely directed at life extension."
Sometimes, he adds, the impact of errors on life extension will be less critical than putting more resources into improving the quality of psychological support for patients and families, or into preventing errors that lead to increased pain, suffering, or unwanted treatment.
"Certainly we need to be careful that system approaches to life extension consider their impact on individualized care to meet individual patient and family desires, and do not make some aspects of care worse," Hayward says.
Not a ‘reply’ to IOM
Finally, Hayward notes, his study was initiated long before the IOM report was released, so any juxtaposition of the two is somewhat misleading.
"It was designed to better understand and classify what physician reviewers truly meant when they said a death was preventable by better care, and to better understand reasons for disagreements between physician reviewers," he explains.
"We feel people are being much too tough on the IOM, since what we found relates to a complex issue that we didn’t anticipate when we began the study.
"We also feel that some are being unfair in implying that we have conducted statistical sleight of hand or have a hidden agenda. We entered this study with a sincere desire to understand this issue better. We’re just the messenger and did not expect to find what we found," he points out.
In conclusion, says Hayward, "We are very confident that our final results are accurate and fairly reported, and we think that a careful reading of the paper will show a nice balanced presentation of the results. We feel that our study provides valuable information and some cautionary notes, but generally supports the notion that there is a big problem and we need to work harder to improve care.
"However, we should just be careful to not oversimplify or sensationalize the extent and nature of the problem and potential solutions, or we could end up wasting health care resources and possibly making changes that inadvertently cause harm."
1. Fatal medical errors seem overstated. The Atlanta Journal-Constitution July 25, 2001; A:15.
2. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: Preventability is in the eye of the reviewer. JAMA 2001; 286:415-420.
Not surprisingly, Lucian Leape, MD, of the Harvard School of Public Health in Boston and co-author of the Institute of Medicine (IOM) report, had some strong reactions to the Journal of the American Medical Association (JAMA) article.
While supporting the authors’ goals, Leape did note he had some "differences in judgment" with the authors.
He outlined both areas of agreement and disagreement in an extensive posting on the National Patient Safety Foundation listserv (www.npsf.org), excerpts of which he has granted Patient Safety Alert permission to share with our readers.
Here are some of Leape’s comments:
On the paper’s potential impact: "Would that all safety papers created such a splash. . . Unfortunately, beholders such as health care systems execs and large purchasers have been motivated by this paper to question the value of moving ahead with safety prevention, and that is a serious and unwarranted consequence. . . . Dr. Hayward [and I] do have differences of judgments. Here are mine:"
1. "What they first set out to do is legitimate and not unimportant: to evaluate the reliability of physicians’ reviews of records to determine error causation . . . I have trouble with their methods."
2. "Most interestingly, even with the different methods they used, they came up with an estimate of the fraction of preventable deaths, 6%, that was remarkably similar to the 8% found in the MPS [Harvard Medical Practice Study]. . . . It was closer to the MPS than the [Colorado/Utah] findings were and, thus, smack in the middle of the IOM estimate of 44,000-98,000 taken from those two studies. If that was all they had done, the newspaper headline for this paper should have been: ‘[Veterans Affairs] study confirms high estimate of deaths due to medical injury.’"
3. "That it was not is a consequence of what the authors did next, and where I part company from them, which was to estimate the probabilities of patients who died from errors leaving the hospital alive if there had been optimal care and also of their living for three months. I have three problems with this:
A. "The implication that if a patient is going to die anyway the error (or suboptimal care) is somehow less important or less worthy of our concern.
"I believe this is inappropriate and unethical . . . Not surprisingly, the press calculated new totals from their percentages and concluded that the number of deaths that ‘count’ is not 98,000 but 5,000-15,000. Again, not exactly what [Rodney Hayward and Timothy Hofer] said. Thus, the ‘message’ of the paper, as so often in life, is not what you say but what is implied, what people hear.
B. "The concluding figure, that 0.5% would have lived three months, has no face validity.
"I cannot believe that anyone who has studied adverse events in hospitalized patients — or, for that matter, looked at their own patients’ deaths — would agree with that. It goes against clinical experience, all other studies, and common sense. . . . We are making conclusions about the universe of patients who die in hospitals based on a variable number of reviews of 62 non-representative patients. . . . I called it ‘tortured’ statistics, which some have found offensive. You judge for yourself. At the very least, I think a fair reading is that there are serious methodological questions that should give anyone pause in extrapolating these results.
C. "From a safety point of view this is the wrong message:
"It focuses attention on mortality and life expectancy, when it should be focused on process: finding systems failures and fixing them. Again, the authors say that, but the message from the numbers drowns them out."
4. "Despite the investigators’ best, and I believe sincere, efforts and methods, reviewer bias looms large in this and any study nowadays that asks physicians to make judgments on outcomes of care. . . .
"Unless [Hayward and Hofer] found a very exceptional group it is reasonable to assume that a fair number of their reviewers had at least unconscious desires to show that things aren’t as bad as people say they are."
5. "The focus on the numbers is . . . a sad, if predictable, diversion from the work of safety.
"This isn’t where the future of safety lies. We don’t make hospitals safer by counting bodies (or, for that matter, counting [adverse events] or errors). Identifying the underlying causes of the failures is another matter.
"The continuing effort to discredit the mortality figures speaks, I fear, far more eloquently about our failure to change physicians’ mindsets away from individual culpability toward systems analysis than it does about the validity of the numbers or the methods used to derive them.
"Everyone I know who has seriously investigated medical accidents finds the numbers are higher, not lower, than the crude population estimates indicate . . . The challenge is to figure out how to get doctors and CEOs to recognize [we have horrendous problems in our hospitals] and move beyond blame."
A total of 525 hospitals in six regions around the country have been invited to complete a web-based patient safety survey by the Business Roundtable’s Leapfrog Group, of Washington, DC. The survey was developed by The MEDSTAT Group, of Ann Arbor, MI.
This is the first phase of an initiative that will ultimately involve thousands of hospitals around the country. The first six regions to participate are Atlanta, California, Eastern Tennessee, Minnesota, Seattle/Tacoma/Everett, WA, and St. Louis.
"The purpose of this survey, first and foremost, is to put information in the hands of consumers about what hospitals are doing to promote patient safety," explains Suzanne Delbanco, PhD, executive director of the Leapfrog Group.
"Also, it’s important to have information to share with our members that they can use to recognize and reward those hospitals who meet our safety standards. That’s pretty fundamental to our mission," she says.
The survey requests information about patient safety practices in three specific areas: computerized physician order entry; intensive care unit physician staffing; and evidence-based hospital referral. (The Leapfrog Group’s safety standards for these areas, as well as a copy of the hospital survey, can be found on their Web site: www.leapfroggroup.org.)
Delbanco adds that her group picked MEDSTAT to develop the survey "because they clearly had the expertise, the know-how and the enthusiasm to support this important project." In addition, MEDSTAT has created some optional sections to the survey, the responses to which it will share with the hospitals so they can benchmark what other facilities are doing, she notes.
Choosing the participants
The six initial "rollout regions" were selected because Leapfrog Group members in those regions felt they were ready to invite hospitals to participate in the survey.
"In every case, we wanted the chance to meet with as many people as possible in person before sending our invitation letter," says Delbanco, noting the letters were sent over the last six weeks. "The members who work in the regions defined them either as cities, such as Atlanta or St. Louis, or as states," she adds.
Within the designated regions, all acute care facilities that were not located in rural areas were invited to participate. "We don’t want to raise public expectations that rural hospitals should prioritize our standards, but if a rural hospital wants to complete the survey we will post it," explains Delbanco. In fact, she notes, any hospital is free to visit the web site and complete the survey.
If an invited hospital declines to complete the survey, the Leapfrog Group will note that it "did not reply" — "but we’ll only post that information if our members get confirmation that they did not complete the survey," says Delbanco. "The main purpose is to show we have gathered as much complete information as possible."
Educating the public
Delbanco hopes that the Leapfrog Group’s key target audiences will derive significant benefit from the survey’s findings. "We hope it will reveal which hospitals meet our standards, and which ones are working towards meeting them," she observes. If a given hospital is not yet meeting the standards, that doesn’t necessarily mean the public should avoid that facility.
"Hopefully, they will recognize the importance of hospitals sharing such information with their communities, and the fact that they are working toward implementing some of these practices," she asserts.
From a purchaser’s perspective, says Delbanco, the survey information will be highly valued. "They like to know that their enrollees will be well taken care of.
"From a patient’s perspective, it probably depends on what they’re comparing," she says. "If they are in a community where some hospitals meet the standards, they may be less impressed by one that is working toward meeting them. But at the same time, that still gives the impression that such a hospital takes safety seriously."
Delbanco says she anticipates being able to post survey response data sometime this fall.
[For more information, contact:
• Suzanne Delbanco, PhD, Executive Director, the Leapfrog Group, c/o the Academy, 1801 K St., N.W., Suite 701-L, Washington, DC 20006. Telephone: (202) 292-6711. Fax: (202) 292-6813. E-mail: firstname.lastname@example.org.]