Why do it if you can't measure it?
Helping outcomes, even if you can't prove it
With all the talk about needing more outcomes measures rather than process measures, some well-loved projects could get left out in the cold simply because it is hard to prove they have a direct impact on improved outcomes. And given hospitals' tighter purse strings, if a program can't be proven to help and isn't required, it may end up being cut.
But there are champions of such programs, particularly those that engage patients in their care, who say you should stick with these projects. Some day you may find a way to measure the impact; even if you can't, the goodwill these projects build and the different view they offer of the care you provide make them worth keeping.
One program at Dartmouth Hitchcock Medical Center in Lebanon, NH, that fits this description is the shared decision-making project, says William Abdu, MD, associate professor at the Dartmouth Institute and associate professor of orthopedic surgery at Dartmouth Medical School. Abdu is also medical director of the Spine Center at Dartmouth Hitchcock. Abdu and his colleagues have been collecting data for a dozen years on patients with three different conditions. "We are $30 million into it and outcomes are patient-reported data like pain relief," he explains. "It is easy to see how the outcomes improved, but compared to what?"
They took part of the group and put them through a shared decision-making program, through which they evaluated their values and preferences and were educated on spine surgery. All met the criteria for surgery, but a quarter of them changed their minds and opted not to have it, says Abdu. And nearly everyone found the program useful.
Did those people have better outcomes because they went through the shared decision-making program? No, he says. But they were better informed. "To find out if it really worked, we would have to have a huge multi-center trial that would require a great deal of investment," says Abdu. "It could be done with registries, but even those are still at the infancy stage."
For now, they content themselves with knowing that there are data that show involved patients do better, and they are spreading the involvement: Other departments and units are creating similar surveys for patients. The Center for Shared Decision Making at Dartmouth Hitchcock (http://patients.dartmouth-hitchcock.org/shared_decision_making.html) has some of the decision aids that are used throughout the organization now. It includes questions patients answer before they come in about their condition and what they hope to achieve. "I can't do anything about a herniated disc without an MRI," Abdu says. "The health survey information is used the same way."
And if there is never a way to tell that this kind of patient engagement really works, Abdu will make do with surrogates: satisfaction scores and the fact that "if we did a lousy job, we wouldn't get the amount of referrals we do. Even if we did great research on this, I think people would still question what we do. Part of this is art."
Follow makes leader
Kaiser Permanente has found a way to make use of patient experiences that, likewise, has little way of being measured for its impact: video ethnography. But they are happy to continue with it anyway, says Estee Neuwirth, Ph.D, director of field studies, evaluation and analytics for KP's Care Management Institute in Oakland, CA.
"We had been using mixed methods to find out how patients experience our programs so that we could learn from them, get ideas for change, and build the will for change," says Neuwirth. But by following the patients and getting their experience on video, "we can really help catalyze the changes" they want to make. The patient voice is powerful. It is powerful on paper, but it is even more powerful on tape, speaking as the patient encounter is unfolding. It does not replace other methods of gauging patient experience, she adds. It is added to other shadowing and survey programs. "The resonance we get from this is much more profound."
The videos are taken in hospitals, clinics, and even at home post-discharge. Usually, Neuwirth says, it is done over the course of a couple days with multiple patients. Then the video is dissected by a team of analysts, which also takes a couple days. Someone edits it, and about a week after the patients were filmed, you have a product that will bring home to providers just what it is like to be on the other side of the encounter.
Analysts look for patterns of behavior, as well as overt statements of what is not going right with the patient experience.
Making it happen
So far, 150 Kaiser staff members have been trained in video ethnography. "We know it has impact because the popularity is expanding," she says. After they are trained, they are contacted again, and all have talked about the huge impact the footage is having.
It does not take a lot to make it happen — you can hire a consultant to do it, or do it with a simple hand-held commercial-grade video camera. No professional equipment is needed, says Neuwirth. Editing can be done with simple software, or you can hire someone.
What to measure?
The Care Management Institute has an online toolkit that anyone can use, she adds. It is available at http://kpcmi.org/cmi-news/tool-kits/.
At the Kaiser Permanente Roseville Medical Center in the Sacramento area, they used it in the surgical department to figure out what patients needed. "It can take a long time to figure out exactly what is needed," says Ryan Darke, MHA, performance improvement director at the hospital.
But with the video program, it was much easier because the patients they followed were quick to mention the things that irked them. Hours of footage were boiled down to seven minutes, he says. In those seven minutes, patient and family voices told providers and other staff exactly how to improve the experience: The waiting room was cluttered and chaotic; there was no appropriate place for pediatric patients to wait; the preoperative area was not really private and did not allow much space for patients and their families to be together before surgery. So they have changed the waiting room, improved patient flow, and created a separate pediatric waiting room.
The staff loved it, says Darke. "It is a great way to find things to improve."
But what to measure? Certainly patient satisfaction scores are part of it. But mostly they look at process measures, he says.
What might be more easily measured is its impact on more clinical areas. Kaiser has people using it to study unplanned readmissions, says Neuwirth. Just by following patients through the process, including at home, and by interviewing family members, they have been able to identify areas to focus on. "One key issue relates to medication," she says. "That's not surprising, but hearing it from their mouths is powerful." More surprising was that the bedside teaching was not having the effect they wanted it to. "One patient explained that she had one foot out the door when the nurse was doing her teaching," she says. So now they still do that pre-discharge education, but they repeat it over the phone and during outpatient visits, which helps patients absorb it better. Given that this is something concrete that may change readmission rates, Neuwirth says it might be possible to tie those rates directly to the program, without which they may not have had such an a-ha moment.
Down the road at Stanford, the university's medical center has spent more than 20 years creating a customer service program whose impact on outcomes would be hard to measure, but which has evolved over time to be a huge point of pride for people who work at the facility, says Barbara Ralston, vice president of international medical services at the hospital.
"It is hard to say that guest services impacts outcomes," Ralston notes. "But there is something to be said for staying in close touch with patients and guiding them through a complex and often stressful system — especially for the high-end specialty care where there can be so many more handoffs and transitions."
The healthcare navigator and C-ICARE programs all grew out of a humble patient library created 23 years ago, she says. Volunteer librarians started hearing comments and questions. Over time, more bits and pieces were added to the customer care program. Now it includes some 900 volunteers and 150 staff who help patients get through their experience with as little headache as possible. If patients get lost, someone guides them to the right place; if they have a question, they can call the service at any time, day or night, and have someone answer. Patients are less likely to be late or miss appointments, which improves the efficiency of the hospital. If something is particularly aggravating to a patient, Ralston says, they will tell the navigator during follow-up calls. That helps surface patterns and trends of problems so they can be quickly addressed.
Connecting with patients
While most patients do not have the same navigator throughout their experience, the program is seamless, Ralston says. The staff and volunteers use a customer relationship management tool that is similar to those used in other industries. This keeps all necessary information at the fingertips of whoever answers the phone when a patient calls with a question. There are also clinical navigators who have access to the patient's medical record, she says, and can help with clinical questions when appropriate.
The most recent part of the program is the C-ICARE. Every staff member wears a button with those letters on it. They are a reminder, Ralston says, to:
- Connect with the patient.
- Introduce yourself and your role.
- Communicate the plan of care.
- Ask permission before entering a room, examining the patient or taking any action.
- Respond to patient questions and requests promptly and anticipate needs.
- Exit courteously with an explanation of what will come next.
All of these efforts have resulted in a "dramatic" increase in patient satisfaction scores, an improved reputation in the community, and more business, Ralston says. And there is an element of accountability built into the program, too: All staff and providers, including physicians, are required to participate in C-ICARE huddles, rounds, and meetings.
Does this, in the end, affect patient safety and outcomes? Ralston is sure it does, even if she can't prove it. "The patient, at every moment, is first and foremost. It is not always about the procedure with us. We interact more frequently and on a more personal basis. And that has to make a difference."
For more information on this topic, contact:
- William A. Abdu, MD, Associate Professor of The Dartmouth Institute and of Orthopaedic Surgery, Dartmouth Medical School, Medical Director, Spine Center at Dartmouth Hitchcock Medical Center, Lebanon, NH. Email: [email protected].
- Ryan Darke, MHA, Performance Improvement Director, Kaiser Permanente, Roseville Medical Center, Roseville, CA. Telephone: (916) 223-4708.
- Estee Neuwirth, Ph.D., Director of Field Studies, Evaluation & Analytics, Care Management Institute, Oakland, CA. Telephone: (510) 625-5624.
- Barbara Ralston, Vice President, International Medical Services, Stanford University, Stanford, CA. Telephone: (650) 723-7604.