Premier/IHI algorithms automate process for identifying patient harm
Tool's results agreed with full chart review 76.2% of the time
An automated process for tracking rates of patient harm, developed by Charlotte, NC-based Premier Inc. and the Institute for Healthcare Improvement (IHI) in Boston, has been tested against manual full chart review. The automated tool's results agreed with full chart review 76.2% of the time; of the cases in which the results did not match, 84% had coding issues.
"I thought the results were pretty good," says Glenn Crotty, MD, chief operating officer at the Charleston (WV) Area Medical Center. The facility is a participant in Premier's QUEST: High Performing Hospitals collaborative, which provided the data for the research. "You are basically taking a training tool and trying to make it be sensitive and specific, which is quite a challenge."
"About two years ago, Premier set about trying to define what ought to be included in high-value health care," explains Richard Bankowitz, MD, MBA, vice president and medical director with Premier. "We brought together a group including top-performing hospitals from the CMS [Centers for Medicare & Medicaid Services]/Premier High Quality Incentive Demonstration project, Premier staff, staff from IHI, and CMS, and the group proposed that in order to evaluate health care, you needed at least five high-level metrics — adherence to evidence-based care practices; cost and efficiency; avoiding preventable mortality; patient experience; and avoiding harm. At that point we had the challenge of defining harm and measuring harm."
That's when a working group of 165 hospitals from among the participants in QUEST agreed to be measured in these five areas and to be open and transparent about the results. "A harm workgroup was formed to come up with a way of measuring harm that could be done across the whole collaborative," Bankowitz shares. "We had discussed this with IHI and determined there was not a good, agreed-upon standard of measuring harm. The IHI method required manual chart reviews, and our hospitals asked us not to impose any more data collection burdens on them."
Accordingly, Premier and the IHI developed 26 algorithms to automate the process for identifying patient harm.
Crotty further explains the sensitivity/specificity challenge inherent in such an effort. "In the sensitivity parts you want to be able to capture as many potential harm events as possible without sacrificing the specificity, which is to accurately determine true harm and not so many false-positive harms that you have to go chase them," he says. "You can end up with publicly reported data that are too sensitive and not specific enough. That's what we're trying to learn; where is that 'sweet spot'? This is the first iteration, and it's a pretty good first iteration."
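To make that trade-off concrete, here is a minimal Python sketch of the two measures Crotty describes, using invented counts rather than study data. Sensitivity is the share of true harm events the algorithm flags; specificity is the share of harm-free cases it correctly leaves alone.

    # Illustrative only: not Premier's code. Counts are hypothetical.
    def sensitivity_specificity(tp, fn, tn, fp):
        """tp: true harms flagged; fn: true harms missed;
        tn: harm-free cases passed; fp: harm-free cases falsely flagged."""
        sensitivity = tp / (tp + fn)  # high when few real harms are missed
        specificity = tn / (tn + fp)  # high when few false alarms are raised
        return sensitivity, specificity

    # E.g., 90 of 100 true harms caught, 50 false alarms among 900 clean cases:
    sens, spec = sensitivity_specificity(tp=90, fn=10, tn=850, fp=50)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.90, 0.94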
"We defined what we could get from automated and ICD-9 discharge data, plus data we have in our safety surveillance and infectious disease surveillance tool," adds Bankowitz. "The third element is the patient charge data and chartmaster data that we have from all our participating hospitals; we constructed the algorithms based on these datasets. Then, we set about to validate how good the tool was, based on the gold standard — chart review aided by the IHI Trigger Tool [for measuring adverse drug events]."
"We are validating the tool against full chart review and against IHI's tool so there will be three-way validation," adds John Martin, MPH, director of Premier Research Services.
Collecting the data
How, exactly, are the data collected? "Premier has these tools at its disposal," Bankowitz explains. "Our hospitals send data to us just about every month, and what we do is run the data through these algorithms to determine the incidence rates of these conditions." Then, he says, each hospital's rates are compared against the distribution of the data. "So we know that a given hospital is in the fifth percentile, the 10th percentile, and so forth," says Bankowitz. "What we're trying to do is eliminate harm; our goal is to get to zero incident rates."
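A minimal sketch of that percentile placement, with invented incidence rates standing in for the collaborative's actual distribution, might look like this:

    # Illustrative only: place one hospital's harm rate within the
    # distribution of rates across all participating hospitals.
    from bisect import bisect_left

    def percentile_rank(all_rates, hospital_rate):
        """Percent of hospitals with a strictly lower incidence rate."""
        ordered = sorted(all_rates)
        return 100 * bisect_left(ordered, hospital_rate) / len(ordered)

    # Hypothetical harms per 1,000 discharges across ten hospitals
    rates = [0.4, 0.7, 0.9, 1.1, 1.3, 1.8, 2.2, 2.5, 3.0, 4.1]
    print(f"{percentile_rank(rates, 0.7):.0f}th percentile")  # lower is better; the goal is zero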
Hospitals can use those data to help achieve this goal, adds Crotty. "We will take the areas that suggest we have an issue, we will do chart reviews on those particular areas in our own organization in order to be able to further validate the data, and if the data are pretty close to on target, we will put in improvements to remedy the issue," he says.
"These are certainly preliminary results, so one can only draw small conclusions," cautions Martin. "However, we are finding they are doing a good job of identifying specific harms. To a QI person, this reduces the time they have to spend manually going through charts and then finding that information."
"The tools we developed can still be very helpful in shifting the effort away from discovery of harm into areas of improvement," adds Bankowitz. "We're trying to get hospitals away from spending time on data collection and as much as possible spending time on actual improvement methodologies."
POA coding critical
The Premier researchers also found that correct coding in the cases where the tool's findings did not match the charts would have increased the match rate to nearly 100%. (With 84% of the 23.8% of mismatched cases attributable to coding, resolving those issues alone would lift agreement to roughly 96%.) Of particular concern to CMS, for example, is POA, or present on admission, coding. CMS requires POA flags for all diagnoses on Medicare claims; the flags are used, in part, to identify hospital-acquired conditions and determine reimbursement.
"When we did the study, we found that at least one-third of 'complications' were not actual complications — they are present on admission," says Bankowitz. "So the POA flag is very important, even in areas where you think things may be obvious or clear. Basically, it's important to use POA flags, and hospitals apparently need to spend more time educating coders."
POA coding is certainly important to Crotty. "We have deployed 16 registered nurses to look at the charts, not just for POA, but to begin the coding process up front when the patient comes in rather than on the back end, so we can have a dialogue with the physician on the particular problems the patient is having," he shares. "The problem is that unless you do something like this, the coding conventions and how the doctors write and document the records do not always match. In other words, the coding convention for hospitals and what they have to bill according to Medicare conventions do not match totally what doctors write in doctor language."
This, he continues, is one of the key themes he has communicated to IHI and Premier. He has asked them to help advocate that CMS and other policy makers develop a common language.
Crotty offers the following example: A patient comes in with massive hemorrhaging in the colon, which could be from a diverticular bleed, the rupture of a small vein, cancer, Crohn's disease, and so forth. "If the patient has bled and their hemoglobin has dropped from a normal level of 12 for women or 15 in men down to six or seven, and the doctor writes 'GI bleed, will transfuse,' the coders can't code any of that," Crotty explains. "They have to say, 'Anemia from gastrointestinal bleeding due to' whatever the particular problem was; it has to be that specific." Doctors know this intuitively, he says, and in their minds, they code the right thing in their language.
While waiting for this "language" problem to be solved, "We continue to work on coding, and refining it," says Crotty. "We have conference calls every month and a meeting a couple of times a year, trying to make the coding data better and better."
Meanwhile, the initiative continues. "Once we get complete results of the validation study, we'll determine the predictive value of the measures we've constructed, and concurrent with that, we'll send the results out to approximately 200 hospitals," says Bankowitz. "Many are going back to their records and comparing results and learning a lot about improving their methods. And we're attempting to construct a composite index that would roll up a total of all these measures. We'd like to have one composite index of harm to use as the high-level metric in that area."
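The article does not specify how that composite would be built; one plausible sketch, assuming a simple weighted average of per-measure incidence rates, is shown below. The weights and rates are hypothetical.

    # Hypothetical composite harm index: a weighted mean of per-measure
    # rates (harms per 1,000 discharges). The weighting scheme is illustrative.
    def composite_harm_index(measure_rates, weights=None):
        if weights is None:
            weights = [1.0] * len(measure_rates)
        return sum(r * w for r, w in zip(measure_rates, weights)) / sum(weights)

    # Rates for four of the 26 harm measures (invented numbers)
    print(f"{composite_harm_index([1.2, 0.4, 2.8, 0.9]):.2f} harms per 1,000 discharges")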
[For more information, contact:
Richard Bankowitz, MD, MBA, Vice President and Medical Director; John Martin, MPH, Director, Research Services, Premier, Inc., Charlotte, NC. Phone: (877) 777-1552.
Glenn Crotty, MD, Chief Operating Officer, Charleston Area Medical Center, Charleston, WV. Phone: (304) 388-7438.]