Statistics let a hospital's star quality shine
Institute's data prove accomplishment
Being a top-notch medical facility is sort of like being a movie star -- you work hard to get to the top, then you work twice as hard to stay there. While stars depend on publicity agents and a fickle public to maintain their status, top hospitals can turn to a more dependable source to prove greatness -- statistics.
Statistics don't have glamour, but they can be a highly effective tool for improving outcomes and marketing a hospital to health care providers. At the Rehabilitation Institute of Chicago (RIC), for example, statistics are a tool for maintaining and advertising high-quality care.
RIC is a spinal and head injury rehabilitation center that frequently admits particularly complex cases referred from around the country. For five years in a row, U.S. News and World Report has ranked it the number one rehabilitation facility in the country.
Its experience in adjusting its outcomes data to reflect a high level of severity in its cases, and then marketing those statistics to payers and the public, may offer insight to other institutions now developing or refining their outcomes management programs, particularly in areas outside the hospital walls.
Patients often arrive sicker than at other rehabilitation facilities nationally and stay at RIC a little longer, but they leave healthier and have lower recidivism, asserts Robin Turpin, PhD, director of RIC's outcomes information.
RIC, which has 136 acute beds and 40 subacute beds, treats a variety of severe conditions, including spinal cord injuries, brain injuries, and stroke, and provides cancer and amputee rehabilitation, Turpin says.
Turpin was hired two years ago to develop a risk adjustment methodology and cost adjustment analysis to demonstrate RIC's value to managed care organizations, she says. She analyzes Functional Related Groups (FRGs), codes similar to DRGs that are severity-adjusted. This developing outcomes measurement tool can assist utilization managers in predicting LOS averages and conducting cost-benefit and outcomes analyses in the rehab setting.
In addition, Kris Cichowski, MS, director of RIC's outcomes management systems and analysis, uses a four-part database to gather diagnosis and outcomes information, and track patient functions post-discharge. The database generates reports that can be used to identify where treatments can be improved, thereby helping the hospital continually increase quality of care, Cichowski says.
The data generated by Turpin and Cichowski not only help maintain RIC's reputation, but they are also being used to advertise RIC to health care providers, says Joe Martini, formerly assistant administrator for business development. (For a sample LOS analysis, see p. 94.)
"We're picking out salient information [from statistical reports] that will be of use to managed care payers in deciding where patients should be directed, what providers should be added or deleted from networks, and which providers are more effective in achieving desired outcomes," he says.
Cichowski's role is to gauge improvements by compiling reports analyzing treatment results using the database, she says. She compiles demographics, diagnoses, patient assessment and condition follow-up, and customer relations information.
With those data, Cichowski can identify less-than-desirable outcomes and report those outcomes to the responsible departments, which then can make changes to their programs, she notes.
Cichowski's database is composed of four main areas that provide outcomes data:
* The main database provides patient demographics.
* The medical profile database contains diagnostic codes; Functional Independence Measures (FIMs), which gauge how much help a patient needs for self-care and the degree of assistance needed; and RIC's Functional Assessment Scale (RICFAS). RICFAS contains 66 items assessed on a seven-point ordinal scale similar to the FIMs. The 18 FIM items are embedded in the RICFAS, which also includes information from psychology, social work, and therapeutic recreation.
* The assessments and follow-up database includes information obtained via a telephone survey three months post-discharge. Patients are surveyed on the 18 FIM items as well as factors related to rehospitalization, caregiver needs, coping skills, and adjustment to disability. Comments are entered into the feedback database, and a printout is forwarded to the appropriate staff, who then determine follow-up needs.
* The customer relations database contains information from a patient satisfaction interview conducted one month post-discharge to see what patients thought of RIC. "We use our feedback database to see if there are concerns or complaints," Cichowski notes.
"The beauty of the internal data set is that it lets us look at people demographically, what they looked like when they came in, how they functioned when they left, what they look like three months post-discharge, and what they thought about the care we rendered," she adds.
If a post-discharge problem appears, a chart audit is performed, and Cichowski sits down with the appropriate team to talk about possible changes in treatment protocols, she says.
"For example, we saw a group of stroke patients that weren't maintaining dressing or bathing skills after they went home," Cichowski recalls. A chart audit was performed with occupational therapy, and treatment protocols were revamped to improve outcomes.
"Maybe if they couldn't maintain, we had overrated them at discharge," notes Judy Hill, OTR/L, administrative director of the spinal cord injury, amputee, and orthopedic operating group.
In some cases, the patient was not maintaining a skill because it was the therapist's goal that the patient achieve that skill, not the patient's goal, Hill says. So therapists began goal-setting with the patient and family.
Adding allure to statistics
Until recently, RIC was fairly low key about its success, Cichowski says. But with the growing competition in health care, Martini is using RIC's data to launch a campaign aimed at convincing managed care organizations that RIC would make a good partner.
Martini culls lengthy statistical reports into sound bites, he says. For example, outcomes data show that, compared to patients in other spinal cord programs, spinal cord patients at RIC achieve 18% more functional gains, and the return-to-community rate is 8% greater, Martini says.
In addition, while most hospital databases give a look at systems in aggregate only, RIC's database allows extrapolation of data by specific factors such as patient, diagnosis, physician, floor, or team into custom reports for staff.
"We can bring the data down to the team level so they have more ownership," Cichowski notes.
Those customized reports are also used for educating new staff on why being accountable for outcomes is so vital, especially in today's competitive marketplace, she says. The database is also providing educational material to help RIC staff conduct national outcomes assessment workshops, Cichowski adds.
Longer LOS can mean better outcomes
Turpin maintains that RIC's self-assessment using FRGs and an internal measurement system is highly accurate because the data are adjusted for severity of disability, health status, and other factors that affect outcomes. FRGs were developed by Margaret Stineman, MD, who worked with the Uniform Data System (UDS), a case-mix classification system for medical rehabilitation, Turpin says.1 There are 53 FRGs representing 18 diagnostic categories, Turpin says. For example, the traumatic brain injury FRG contains five classifications from least to most severe, she says.
FRGs were created by extrapolating FIM data from more than 700 hospitals. The first version of FRGs was released in late 1995. FRGs are considered by many to be dependable for analyzing LOS averages, Turpin notes. Yet Turpin double-checks FRG accuracy by conducting her own computations with RIC patient data, she says. She relies on a statistical methodology called Classification and Regression Trees (CART), which UDS used to develop decision tree algorithms; Turpin developed her own CART models from RIC's patient base, she says.
To predict LOS averages, she takes the FRG, calculates where RIC's own patients fall within each FRG, and, based on a patient's severity level, predicts the expected range of LOS for each FRG category, she explains.
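The prediction step Turpin describes amounts to grouping historical cases by FRG and severity level and summarizing the observed stays. The sketch below uses invented sample data and treats "expected range" as mean plus or minus one standard deviation — one simple choice, since the article does not specify the statistic.

```python
from collections import defaultdict
from statistics import mean, stdev

# Each tuple: (frg_category, severity_level, observed_los_days). Sample data
# only; severity runs 1 (least severe) to 5 (most severe), as described for
# the traumatic brain injury FRG.
cases = [
    ("TBI", 1, 14), ("TBI", 1, 17), ("TBI", 1, 12),
    ("TBI", 5, 48), ("TBI", 5, 55), ("TBI", 5, 61),
]

# Group observed stays by (FRG, severity).
by_group = defaultdict(list)
for frg, severity, los in cases:
    by_group[(frg, severity)].append(los)

# Report an expected LOS range for each group.
for (frg, severity), stays in sorted(by_group.items()):
    m, s = mean(stays), stdev(stays)
    print(f"{frg} severity {severity}: expected LOS {m - s:.1f}-{m + s:.1f} days")
```

With real data, a utilization manager could compare an individual patient's projected stay against the range for that patient's FRG and severity level.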
Reference
1. Stineman MG, Escarce JI, Goin JE, et al. A case mix classification system for medical rehabilitation. Med Care 1994; 32:336-379.