New tool unveiled as patient safety option

Tool investigates defects in patient care

A team of quality and safety researchers at Johns Hopkins Medical Institutions in Baltimore, MD, has developed a tool, called Learning From Defects (LFD), to investigate defects in patient care. The tool, described in detail in an article in the Joint Commission’s Journal on Quality and Patient Safety, was designed, in the authors’ words, as "a method to collect defect information and improve safety."1

The work grew out of the team’s review of the Institute of Medicine’s (IOM) 1999 report, To Err Is Human: Building a Safer Health System. "What we found was that while the IOM highlighted an urgent need to make health care safer, much of the emphasis has been on reporting mistakes — but not necessarily on learning from or reducing hazards," notes Peter J. Pronovost, MD, PhD, director of the Johns Hopkins Quality & Safety Research Group and medical director of the Center for Innovations in Quality Patient Care at The Johns Hopkins University School of Medicine, and the article’s lead author. "So we decided to provide a practical way to help ensure we learn from our mistakes."

‘Root-cause lite’

Pronovost is quick to note that such instruments already existed; however, he adds, they have their limitations. "There are more rigorous tools, like root-cause analysis or FMEAs [failure mode and effects analysis]," he observes, "but they are not practical for everyday use for the many things that go wrong."

He says quality managers should think of this tool as "root cause lite." "This helps the caregiver understand what happened, why, what we did about it, and most important of all, how we know the risk was mitigated," he explains. "A local team in the ICU, the OR, or the ED could use it to try to solve system problems that would complement the more robust, rigorous interventions of root cause analyses or FMEAs. There are so many things that are broken; a facility would go bankrupt if it used root cause analysis for all of them. You should still use it for organization-wide problems, but in local areas, those teams still need to have tools, and this provides one."

The tool’s investigation process begins with a brief summary of the event (what happened?). The next section, the why section, lists contributing factors in several categories: patient, task, caregiver, team, training and education, information technology/CPOE, and local environment. For each, process components are marked as having either "negatively contributed" or "positively contributed." The case summary learning tool opens with safety tips, then restates the case summary. System failures and opportunities for improvement are noted, followed by actions taken to prevent harm.
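For quality managers who want to capture these entries electronically, the structure described above can be sketched as a simple record type. This is a minimal illustration, not the published form: the class name, field names, and example event below are assumptions, while the category list and the two contribution labels come from the article.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Contributing-factor categories named in the tool's "why" section.
CATEGORIES = [
    "patient", "task", "caregiver", "team",
    "training and education", "information technology/CPOE",
    "local environment",
]

# The two ratings the form allows for each process component.
CONTRIBUTIONS = ("negatively contributed", "positively contributed")


@dataclass
class LFDRecord:
    """Hypothetical sketch of one Learning From Defects entry."""
    what_happened: str
    contributing_factors: Dict[str, str] = field(default_factory=dict)
    system_failures: List[str] = field(default_factory=list)
    actions_taken: List[str] = field(default_factory=list)

    def rate(self, category: str, contribution: str) -> None:
        """Record whether a category positively or negatively contributed."""
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        if contribution not in CONTRIBUTIONS:
            raise ValueError(f"unknown contribution: {contribution}")
        self.contributing_factors[category] = contribution


# Illustrative event, not from the article.
record = LFDRecord(what_happened="Medication dose delayed during shift change")
record.rate("team", "negatively contributed")
record.rate("task", "positively contributed")
record.system_failures.append("Handoff checklist not completed")
record.actions_taken.append("Standardized shift-change handoff added")
print(record.contributing_factors["team"])  # negatively contributed
```

The validation in `rate` mirrors the form's constraint that each factor is rated only as positive or negative within a fixed set of categories, which keeps locally collected defect data consistent enough to aggregate later.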

Tool well received

To date, the free tool has been used in several care areas at the Johns Hopkins Hospital and by several clinical departments during morbidity and mortality conferences, in addition to 123 ICUs in Michigan and 25 in New Jersey.

"It’s been extremely well received — the response has been incredibly positive," Pronovost reports. "We’ve been successful at surfacing mistakes in incident reporting systems but ineffective in closing the loop — that is, measuring whether risk was mitigated. The science there is still pretty immature."

The biggest benefit, he says, is that the tool "has provided new lenses to see broken systems." Most doctors and nurses, he points out, don’t get trained in how to think in terms of systems. "Typically, what we see are the patient and the provider; this tool expands their lenses so they can see a richer array of factors that may have contributed to a problem," he explains. "If they didn’t have those lenses, they’d never consider them."

The tool also raises "a great Joint Commission issue," he says. "When a policy or procedure involves communication, the only valid way to measure it is by directly observing," he asserts. "I’ve worked with a number of hospitals that have told me they conducted debriefings to reduce wrong-site surgeries. They included the required time out, and then the hospital was reported to have another wrong-site surgery — even though on the medical record the nurse says ‘We did a time out when we did the briefing.’ This shows you that just checking off a note that you did it is not the right way to measure if you did it well."

In sum, says Pronovost, this tool can provide quality managers with "a scientifically sound and feasible approach to learn from mistakes that could supplement the more formal root cause analysis or FMEA. For the large number of issues you face, you need something more practical and feasible."


  1. Pronovost PJ, Holzmueller CG, Martinez E, Cafeo CL, Hunt D, Dickson C, Awad M, Makary MA. A Practical Tool to Learn From Defects in Patient Care. Joint Commission Journal on Quality and Patient Safety, Feb. 2006; Vol. 32, No. 2: 102-108.

For more information, contact:

Peter J. Pronovost, MD, PhD, Professor, Departments of Anesthesiology and Critical Care, Surgery, and Health Policy and Management; Director, Adult Critical Care Medicine; Director, Johns Hopkins Quality & Safety Research Group; Medical Director, Center for Innovations in Quality Patient Care, The Johns Hopkins University School of Medicine, 1909 Thames Street, 2nd Floor, Baltimore, MD 21231. Phone: (410) 955-9080. Email: