Medical centers with high transfer rates are at a disadvantage
Hospital rankings and report cards are growing in number and importance, but a new University of Michigan study suggests these measures may be inaccurate if they don’t take into account the high number of very sick patients that large hospitals receive as transfers from other hospitals.
This study, which focused on medical intensive care unit (MICU) patients, was as much about benchmarking as it was about the MICU, says Andrew L. Rosenberg, MD, assistant professor of anesthesiology and internal medicine at the University of Michigan Health System (UMHS) in Ann Arbor, and lead author of the study.
"The idea of this study was to try to quantify something that most physicians intuitively know: Transfer patients are sicker," says Rosenberg. "However, this is difficult to quantify because the type of precise data needed are often lacking; they are expensive and hard to get at. In fact, much of [the quality rating] benchmarking deals with administrative databases, not clinical databases."
The UMHS study results were published in the June 3, 2003, issue of the Annals of Internal Medicine, in an article titled, "Accepting critically ill transfer patients: Adverse effect on a referral center’s outcome and benchmark measures."
"We used a very detailed clinical database [APACHE III, short for Acute Physiology and Chronic Health Evaluation]," Rosenberg notes.
The study examined 4,579 consecutive admissions for 4,208 patients from Jan. 1, 1994, to April 1, 1998. A full 25% were transfer patients. Its measurements were MICU length of stay, hospital length of stay, MICU readmission, and hospital mortality rates. "We reasoned, why not study the place [MICU] where the most valid benchmarking tools are used?" says Rosenberg. "If we still can’t adjust for the ICU, how can we possibly do it at another level?"
Even using tools to account for differences in diagnoses, severity of illness, and other predictors of outcome, the transferred patients had 38% longer ICU stays and 41% longer hospital stays compared with patients who were admitted to the ICU directly, and were twice as likely to die in the hospital. As a result, the authors report, a hospital that gets 25% of its ICU patients as transfers from other hospitals would show an extra 14 deaths for every 1,000 admissions, as compared with a hospital that accepts no ICU transfer patients and provides exactly the same quality of care. This seemingly small 1.4% difference would be enough to drive down the hospital’s score. The transferred patients also had higher Acute Physiology Scores at admission and discharge than did directly admitted patients, and they were more likely to have complex problems, such as severe infections and upper gastrointestinal bleeding.
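The arithmetic behind the "extra 14 deaths" figure can be sketched in a few lines. This is a back-of-envelope illustration, not the study's risk-adjustment model: it assumes, for simplicity, that transfer patients die at twice the rate of directly admitted patients (the article says "twice as likely to die"), and the 5.6% baseline mortality rate is an assumed figure chosen only to reproduce the article's result.

```python
def excess_deaths_per_1000(baseline_mortality, transfer_share, risk_ratio):
    """Extra deaths per 1,000 admissions at a hospital with a given
    share of transfer patients, compared with an otherwise identical
    hospital that accepts no transfers."""
    # Blended mortality rate across direct admissions and transfers
    mixed_rate = ((1 - transfer_share) * baseline_mortality
                  + transfer_share * risk_ratio * baseline_mortality)
    return (mixed_rate - baseline_mortality) * 1000

# Assumed 5.6% baseline mortality, 25% transfer share, doubled risk:
# this yields roughly the 14 extra deaths per 1,000 admissions
# (a 1.4% difference) described above.
print(excess_deaths_per_1000(0.056, 0.25, 2.0))
```

The point of the exercise is that the penalty scales with the transfer share: a hospital's apparent mortality rises purely because of who it admits, even when its quality of care is identical.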
Study makes valid point, experts say
Benchmarking and health care experts contacted by HBQI generally agree that Rosenberg and his research team make a valid point.
"If someone is not adjusting for severity or acuity, it’s back to the proverbial ‘apples and oranges,’" says Robert G. Gift, MS, president of Systems Management Associates in Omaha. "You are not looking at data that are truly comparable."
"The distinction between administrative data and clinical data is a good point," adds Sharon Lau, a consultant with Medical Management Planning in Los Angeles. "Clinical data are very difficult to get."
Lau finds the whole issue of transfer patients interesting. Early in her career, she was the administrative staffer for a neonatal intensive care unit. "This was an issue every day," she recalls. "Because we did not have a maternity unit, every patient we had was transferred in. Of course, we’d get the sickest of the sickest, and no inborns to balance that out. If our morbidity and mortality had been measured [against other hospitals], it would have been horrible, but if you had a sick newborn, ours was the place it would have been sent."
"I think what he [Rosenberg] says is technically and scientifically valid," says Philip A. Newbold, MBA, chief executive officer of Memorial Hospital and Health System in South Bend, IN.
What this means, says Rosenberg, is that when "Top Hospital" rankings are issued, some of the best hospitals may not be included. Yet many consumers use these rankings to determine which hospitals deliver the best care.
"It’s the referral centers, the big urban systems, that are often the centers of last resort and are mandated and built to take care of highly complex patient cases," he notes. "But if the benchmarks don’t account for that, these centers are at risk for being compared negatively to other hospitals."
This is not a case of sour grapes on the part of UMHS, Rosenberg says, because UMHS often does well in such rankings. "But do you really think those 100 hospitals are the only ‘best’ hospitals in the country? A lot of them are relatively small community hospitals. If you are really ill in Atlanta, you’ll go to Emory. In Cleveland, it’s the Cleveland Clinic. But generally, those are not the hospitals that show up on those lists."
Newbold agrees. "Generally speaking, places like the Mayo Clinic and the Cleveland Clinic don’t even apply for awards," he says. "Yet some of them are highly recognized for excellent care."
Nevertheless, says Gift, these rankings do carry a good deal of weight. "The problem with report cards is that the general public looks at the data and they do not know what they are looking at," he says. "What often tends to be missing is the quality of the data, and when that is the case, they absolutely do not tell the patient what he needs to know. Frankly, even some insurance companies and regulatory agencies may not be fully versed in the quality of that data; they may take it at face value with no caveats about its limitations. In other words, people are making decisions on data that are less than perfect."
Gift is quick to add that he doesn’t believe anyone is intentionally or maliciously misleading consumers, but rather that some of these folks are just uninformed. "But if I use poor data to make decisions about my family, that can be kind of scary," he says.
Rankings not all created equal
It’s important to note, says Newbold, that some sponsoring organizations do better than others — particularly those that do employ clinical data or that give active awards as opposed to passive ones. An active award is one for which the criteria are posted in advance, and organizations can literally spend several years working toward a designation.
"The value of the active awards is that they make you such a better organization in terms of performing for your patients and staff," Newbold says. "If it’s an active award, that is the most valuable goal and outcome for patient care you could want. If it’s passive, and we’re doing what we always do and someone writes us and says ‘You won,’ there’s not the same sort of attraction for improvement of clinical care."
Clinical care is the key, he notes. "That’s definitely a good way to go," says Newbold. "We all can get much, much better in patient safety, outcomes, and service excellence — and we need to."
Report cards are where things really get problematic, says Rosenberg. "The majority of web sites that grade health care organizations use administrative databases," he says. "There’s a lot of literature that has looked at the quality of that kind of data, and it just isn’t good enough."
But these sites are still "better than nothing," he says. "Their intentions are absolutely spot-on, and their goals are what we all want: to evaluate and improve performance and quality. Fortunately, we do well on those sites, but we’re just concerned that the growth of the quality-rating industry may be outpacing the techniques for valid benchmarking."
The bottom line, says Newbold, is if you are going to use these consumer report cards, you should do so with a sense of perspective. "Certainly these benchmarks are valuable, but they are one piece of the puzzle," he notes. "They should not be considered definitive."
Making things better
Much can be done to improve the quality of the comparative data available, observers say. For one thing, there should be a greater opportunity to share information — which is, after all, the foundation of benchmarking.
"Some of the methodology is proprietary," says Rosenberg. "Take APACHE III, for example. In order to do their study, they created a company and used the company to collect the data. That became a big controversy, because then your model is in a ‘black box.’ They counter by saying, ‘This is our business.’ I can understand both sides, but if you want the best quality measures, you want to know how they do it."
Lau agrees. "This is a real issue; we’re dealing with it now with the Joint Commission [on Accreditation of Healthcare Organizations] on pediatrics core measures," she says. "The children’s hospitals with which we work have requested a waiver because the core measures are not at all appropriate for pediatrics."
Lau says her firm is working with the Joint Commission and several major pediatric groups that lobby for child health care to develop good core measures. "But if you go to risk-adjust for conditions like asthma, all risk-adjustment methods are proprietary," she complains. "So, either everybody buys the same thing, or you can’t risk-adjust."
Not all standards are in a black box, notes Newbold. "With Baldrige, you are required to share your data," he says. "Press-Ganey has an annual conference, and everyone shares what they do. There are lots of forums where the information is almost in the public domain, and that’s where we want to get it, so everyone can get better at what they do."
Data-gathering presents challenges
Another challenge, says Rosenberg, is simply gathering all the data you need. "There’s a huge amount of information that’s not available because it’s not collected, and we have six people who do nothing but collect data."
For example, he notes, there are aspects of quality that have to do with the effect of teaching programs, the number and quality of nurses, ancillary support, the census, how busy a hospital is, the volume of cases a hospital has, and so on.
"Then there are things having to do with the patient himself that we are just starting to get at," Rosenberg says. "We think of transfer patients as those who failed to respond to therapy, but what was the quality and intensity of care at the referring hospital? We’re not studying psychological factors at all. Physiologic reserve is another variable. If you’re young and healthy, you have a better chance of getting better. Kids, especially, have such reserves, while 90-year-olds do not, but we don’t have measures for this. These are all part of risk-adjusting patients in order to benchmark."
"It would be wonderful if there were some kind of way to score patients at admission, so that if they were very sick, you’d get ‘credit’ for it, while the others take the cream," says Lau. "It’s the same with kids; adult hospitals hold onto the tonsillectomies and send out cystic fibrosis.
"But the bottom-line message is, don’t avoid benchmarking; get together with these other hospitals. Try to join collaboratives, get together with your associations, and come up with standardized methods to monitor specific issues that need to be benchmarked. Line up as close as you can, and look for improvement."
Need More Information?
For more information, contact:
• Andrew L. Rosenberg, MD, Assistant Professor of Anesthesiology and Internal Medicine, University of Michigan Medical Center, Room 1G323, Box 0048, Ann Arbor, MI 48109-0048. E-mail: firstname.lastname@example.org.
• Philip A. Newbold, MBA, Chief Executive Officer, Memorial Hospital and Health System, 615 North Michigan St., South Bend, IN 46601. Telephone: (574) 284-7115. E-mail: email@example.com.
• Sharon Lau, Medical Management Planning, 2049 Balmer Drive, Los Angeles, CA. Telephone: (323) 644-0056. E-mail: Sharon@mmpcorp.com.
• Robert G. Gift, MS, President, Systems Management Associates, 4410 South 176 St., Omaha, NE 68135. Telephone: (402) 894-1927. E-mail: firstname.lastname@example.org.