Correlations help center predict costs, outcomes
Data from point of procedure are more reliable
Kathi Healey, MSN, outcomes group leader for the cardiology service line at the University of Nebraska Medical Center in Omaha, expects that soon her medical center will be able to predict cost, clinical, and functional status outcomes based on patient profiles. At least that is the goal for her cardiology QI teams, which are neck-deep in complementary programs to correlate and analyze patient statistics.
"We’ve entered a sort of a second level of outcomes analysis where everything is a little more refined and more cohesive across [cardiology specialties]. Our measurements are better, our databases are better, and we are becoming much more scientific. The level of rigor is becoming much more in-depth," she says.
In the "first level" of outcomes analysis, measurements were descriptive, such as percentages and means, she says. Now cardiology teams make correlations and predictions with more sophisticated statistical methodologies.
The medical center’s quality effort in cardiology began in 1991 with an open-heart surgery group that was charged with developing a critical pathway and tracking quality indicators from it. It grew into a program rather than a project, and now each group within cardiology, including angioplasty, electrophysiology, open-heart surgery, heart failure, and heart transplant, has conducted baseline quality monitoring and continues to monitor quality on a regular basis.
The teams look closely at outcomes, including clinical and functional status, quality of life, and costs, Healey says. "We are looking at it more wholly than how it was typically looked at when we tracked death or complications.
"Now we are constantly struggling with what we should be measuring, how, and what is the most efficient way to do it and still maintain reliability," she says. Baseline data include morbidity, mortality, cost, length of stay, and readmission rate. But the angioplasty team, for one, also is measuring baseline functional status before treatment, then again at three, six, 12, and 18 months after treatment for every patient.
The readmission rate is one of the medical center’s most difficult outcomes to measure. Because it is a referral hospital, patients often are sent to outside hospitals, closer to where they live, if a readmission is necessary. "We try to get at the real rate by having patients call us back if they are admitted to any outlying hospitals," Healey says. "We potentially want to measure readmissions within a year [rather than 30 days]. We also want to get a handle on what’s happening when patients leave the hospital, pinning down return to work and things like that."
The medical center has a sophisticated cost accounting system that provides actual costs, not calculated costs. The bypass team, for example, takes advantage of this detailed data by looking at clinical indicators for coronary artery bypass graft patients, linking those to costs, and making high-level correlations, Healey explains.
First, they pull clinical variables such as ejection fraction, a percentage that reflects how well the heart is functioning, along with data from the cost accounting system, into an Excel file, then dump that into a SAS file. SAS is a statistical program developed by SAS Institute in Cary, NC. The statistical program allows the team to find correlations between clinical and cost variables.
"[In making correlations,] we are going to a different level of outcomes monitoring and outcomes research," Healey says. "Our outcomes work is more research-designed where we’re assuring the reliability of our variables is correct and doing higher level statistical maneuvers like correlations. For example, do patients with lower ejection fraction have higher costs? This kind of thing is common for clinical research. But as far as outcomes data, we are getting more sophisticated in our analyses, the way we collect data, and the reliability of the data. We are getting into a different level of monitoring and measurement."
The cardiology teams have become more sophisticated at actually collecting data as well, Healey says. For example, the angioplasty team’s clinical data are based on Micromedical, a clinical database. "They’re actually collecting many of their data elements right at the point of procedure," Healey says. "When they use a certain type of catheter in the cath lab, it goes right into the database when they document it." Because they don’t have to collect data retrospectively, the database is more reliable. The team also has a functional status database.
Healey warns, however: "You can’t make assumptions. We made the mistake of saying, ‘Let’s measure costs, do a critical pathway, then measure costs again,’ and assuming the critical pathway directly influenced that cost." How do you know core hospital costs didn’t go down or some variable other than practice patterns caused costs to decrease? she asks.
Healey’s department decided to look more at utilization and measure quantities. "If you were doing 12 chest X-rays, then set up a pathway calling for four, and now you’re doing four, you can probably be pretty assured your practice pattern changed because of the pathway. But when you look at just costs, you can’t make that assumption. Not that looking at costs is bad, because we do look at costs and will keep looking at costs, but look at utilization also," she recommends.
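Healey's utilization argument can be made concrete with invented numbers: when the count of tests per case drops after the pathway, the savings point to a practice change; when only the unit price changed, total cost falls just the same, but the pathway deserves no credit. A small sketch, with all figures hypothetical:

```python
# Hypothetical per-case figures illustrating cost vs. utilization.
def xray_cost(xrays_per_case, unit_cost):
    """Total chest X-ray cost for one case."""
    return xrays_per_case * unit_cost

before = xray_cost(12, 50.0)            # pre-pathway: 12 films per case
after_pathway = xray_cost(4, 50.0)      # pathway cut utilization to 4

# Cost fell, and the utilization count shows the practice pattern changed.
savings_from_practice = before - after_pathway

# Contrast: utilization unchanged, but a cheaper film contract.
after_price_cut = xray_cost(12, 30.0)
savings_from_price = before - after_price_cut
# Looking at cost alone, both scenarios show savings; only the
# utilization count (12 -> 4 vs. 12 -> 12) separates them.
```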