For best results, critically examine the process behind benchmarking data
It’s not the numbers but how they’re reported that counts
If you’ve just benchmarked your medication error rates and found them to be high compared with those of your peers, should you despair? Conversely, should you celebrate a low rate? Maybe, maybe not, says Sharon Lau, senior consultant in charge of Medical Management Planning’s BENCHmarking Effort for Networking Children’s Hospitals. Medical Management Planning is a consulting firm in Bainbridge Island, WA.
"Medication errors and adverse drug reactions are rates you certainly want to know, but it’s tough to do a meaningful comparison. Benchmarking can be a double-edged sword," she warns. "If you show a low rate, it may mean that errors are occurring but not being reported. On the other hand, a high rate doesn’t look good on paper, but it may mean that your reporting system is catching more [than those of your peers]."
The essential issue, she explains, isn’t the numbers themselves, but how they are reported. "In order to interpret such indicators correctly, you must first ask yourself, ‘How effective is our reporting system?’" she says. "Only after you really investigate your reporting system to see that it is catching errors can you be confident that the indicator is a real number."
For example, in 1996, a performance improvement team at Children’s Hospital in Los Angeles decided to take a close look at its reporting process for medication errors and adverse drug reactions. "Compared to our benchmarking partners, our rates were low, but we weren’t sure if this was an accurate picture," says Carol Taketomo, PharmD, manager of pharmacy.
The first step was to assemble a team of individuals "who had knowledge of and commitment to improving the current process for reporting medication errors," she says. They included physicians, nurses, and clerical staff from surgery, pediatrics, allergy/immunology, pharmacy, quality management, information services, patient care services, ambulatory care, and the hematology/oncology clinic.
Next, the team categorized the events of the past two years according to these processes: prescription, transcription, dispensing, administration. They came up with 70 possible medication mistakes to avoid. (See list, p. 31.)
Two overall goals emerged:
• Increase reporting of adverse drug reactions and medication errors by 300% in an effort to identify system areas for improvement.
• Decrease administration errors due to omission by 50%.
Identifying improvement areas
"Our biggest concern was the possibility of underreporting," says Taketomo. Although underreporting is not uncommon in a voluntary system, she notes, if errors aren’t reported consistently, "it is impossible to determine the extent of the error problems."
So the team developed three separate Plan-Do-Study-Act cycles that emphasized the multidisciplinary responsibility of reporting.
The first cycle sought to improve pharmacy surveillance. "We compared on a daily basis the patient medication profile with the pharmacy med profile and physician’s order," she says. Team members also reviewed charting of medications, documented discrepancies, and collected data on the number of reported adverse drug reactions and medication errors.
After implementing these steps, reported medication errors increased 44% compared with the previous three-month period, she says.
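As a rough illustration of how such period-over-period reporting changes are computed, here is a minimal sketch. The article gives only the percentages, so the report counts below are invented for illustration:

```python
def percent_change(baseline: int, current: int) -> float:
    """Percent change in reported events relative to a baseline period."""
    return (current - baseline) / baseline * 100

# Hypothetical counts: 50 reports in the prior quarter, 72 after the change.
print(percent_change(50, 72))  # 44.0
```

The same arithmetic applies to the 135% and 32% increases reported for the later cycles.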
"We then added documentation of medication errors and adverse drug reactions as part of the pharmacists’ job requirements," she says.
In the second cycle, the team members incorporated surveillance and documentation of adverse drug reactions and medication errors into the daily responsibility of the clinical care coordinator. "The coordinators do daily chart review for utilization management, so they could review for medication errors and drug reactions at the same time," she explains. After implementation, adverse drug reaction reporting increased 135%, compared with the previous six months.
In the third cycle, team members worked to improve definitions and provide education on policies and procedures.
After defining exactly what it meant by medication error and adverse drug reactions, the team distributed the definitions to managers along with a hotline phone number, stickers, and reporting procedures. (See box, p. 32.) They also published definitions in in-house newsletters, along with statistics from the Journal of the American Medical Association stating that 30% of acute care patients may experience adverse drug reactions during a hospital stay. The article, "Drug detectives tackle medication muffs," also featured a cartoon showing a physician explaining to a patient who had grown antennae, "If you remember, I did mention possible side effects."
In this last cycle, the team also flowcharted the process of ordering, preparation, and administration. "We knew this would provide insight into the complexity and pitfalls possible in our system," she says.
In the newsletter, team members explained to their colleagues that the "flowchart has been compared by some to the electrical circuitry of the space shuttle."
After the education initiative, adverse drug reaction and medication error reporting increased 32%, compared with the previous three months.
To tackle the second major goal, the team reviewed data on administration errors broken down by these categories: omission, unauthorized administration, dose twice, wrong dose, prep, infusion rate, extra dose, wrong patient, processing error, wrong time, wrong drug, wrong route.
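A breakdown like the one the team ran amounts to a frequency count over error reports. The category labels below follow the article, but the individual report records are invented for illustration:

```python
from collections import Counter

# Hypothetical administration-error reports, one category label per report.
reports = [
    "omission", "omission", "wrong dose", "omission",
    "wrong time", "omission", "infusion rate", "omission",
]

tally = Counter(reports)
for category, count in tally.most_common():
    print(category, count)
```

Sorting the tally in descending order immediately surfaces the dominant category, which in the hospital's data was omission.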
"By far the greatest were the errors in omission," she says. Upon investigation, the team discovered the patient information system was potentially creating omission errors. For example, if the person entering the order was unfamiliar with the system or if the desired schedule was out of the norm, he or she would select "miscellaneous" from the common medication screen.
"Unfortunately, what happened was that the free text did not translate into a scheduled order, so it was essentially lost; therefore, the dosage was omitted," she says.
The team then removed the miscellaneous category from the common medication scheduling screen. It remained available under the "time-schedule" screen, but an "educational alert" warned users that "making this selection may cause a medication error to occur." Such a simple change can produce astounding results: There have been no errors of omission due to miscellaneous scheduling. Lau points out that this omission problem is a prime example of what a close examination of a reporting system can uncover.
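One way to guard against the free-text pitfall described above is to reject any entry the system cannot translate into a concrete dosing schedule, rather than silently dropping it. This sketch assumes a simplified order model with a made-up set of schedule codes; it is not the hospital's actual system:

```python
# Schedule codes the (hypothetical) system can translate into timed doses.
# Anything else would previously fall into free-text "miscellaneous"
# and never appear as a scheduled order.
KNOWN_SCHEDULES = {"q4h", "q6h", "q8h", "q12h", "daily", "bid", "tid"}

def validate_schedule(schedule: str) -> str:
    """Accept only schedules the system can turn into timed doses."""
    code = schedule.strip().lower()
    if code not in KNOWN_SCHEDULES:
        raise ValueError(
            f"Unrecognized schedule {schedule!r}: free-text entry may cause "
            "the dose to be omitted. Use the time-schedule screen instead."
        )
    return code

print(validate_schedule("Q6H"))  # q6h
```

Failing loudly at order entry moves the error forward in time, where the person who knows the intended schedule can still correct it.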
"Residents thought they had ordered the medication, but in reality it was not ordered," she says. "This shows there is no way you can predict what you will find with your reporting system and subsequently your benchmarking indicator until you investigate the data as well as the process behind that data."
The other caveat with benchmarking a potentially deceptive indicator is that you don’t know the quality of your peers’ reporting systems. "So, just as you examine your reporting system, you’ll also want to ask your benchmarking peers about theirs," she says.