Financial Disclosure: Editor Lisa Hubbell, Executive Editor Russ Underwood, Associate Managing Editor Jill Drachenberg, and nurse planner Paula Swain report no consultant, stockholder, speaker’s bureau, research, or other financial relationships with companies having ties to this field of study. Consulting Editor Patrice Spath discloses she is principal of Brown-Spath & Associates.
The regular collection and sharing of data with stakeholders to find and fix problems goes by many names — the Virginia Mason Production System, the Toyota Management System, Lean, Six Sigma, Quality Improvement Circles. All are based on the notion that to make things better, you need to look at data often and make changes quickly based on what you see. It is an idea that is gaining traction in healthcare as more peer-reviewed studies showcase its potential for success.
Some of the biggest proponents of continuous feedback are hospitals that have achieved Magnet status. (For more on the Magnet model, see http://nursecredentialing.org/Magnet.) "There are challenges to continuous knowledge," says Pat Reid Ponte, DNSc, RN, FAAN, senior vice president of patient care services and chief nursing officer at Dana-Farber Cancer Institute in Boston, a Magnet facility. She also serves as executive clinical nursing director at Brigham and Women’s Hospital, which does not currently have Magnet status. "You have to have a mechanism for measuring that is valid and accurate. One of the biggest hurdles is, you have to measure what you want to improve, know the baseline, and monitor it."
That said, the very best organizations will track data regularly, and strive for early recognition of areas that need to improve, she says. And one set of improvements isn’t enough for these top organizations: The improvement has to continue, which means continually measuring those data points and analyzing them. In short, there is a lot of hard work, but it’s the kind of work that pays off.
Most hospitals have some structure in place to make sure regular monitoring of key data happens, but what they regularly measure is often only what is mandated by payers, the government, or regulatory bodies. "Most organizations do some of this already," says Reid Ponte. "What moves it to the next level is to have centralized hospital resources and people who are experts in quality improvement and measurement working on this."
"Resource intensive" is a phrase that crops up often when Reid Ponte talks about great CQI. She says an organization must have knowledgeable staff that can regularly look at data — monthly, weekly, or even daily depending on the project and the frequency of the process you are trying to improve — and spot potential problems early. "You have to wait for enough data points to see trends," she says. "If you are looking at diabetes care and are trying to improve blood sugar levels for inpatients, then you might look at the levels daily over a few months to get adequate data to see trends. But if you want to improve patient access to care, you might need more time to see a trend."
While being statistically sure you are improving can take months, you can still see problems by looking at the data regularly within some longer time frame, Reid Ponte adds. "The more often you measure, the greater potential to see change. It might be random variation, but it’s important to note and consider." For some kinds of studies, every discrepancy from the norm you are trying to achieve may merit investigation, whether it is part of a snapshot in time or something you see in a longer-term trend chart. For instance, if you are trying to instill a culture where there are uniform catheterization procedures in the ICU in order to reduce infections, you might want to look at every deviation from the procedure you set up to find out why it occurred. "Regardless, the more you measure, the more potential to see change."
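Reid Ponte’s advice — establish a baseline, then investigate every deviation — can be operationalized with a simple control-chart rule: flag any new measurement that falls outside a band derived from the baseline data. The sketch below is a generic illustration in Python, not a tool used by any of the organizations quoted here; the readings and the three-sigma threshold are assumptions.

```python
# Minimal control-chart sketch: flag new readings that fall outside
# the baseline mean +/- 3 standard deviations.
# All data values here are illustrative, not from any cited study.
from statistics import mean, stdev

def flag_deviations(baseline, new_readings, sigmas=3.0):
    """Return (index, value) pairs from new_readings that fall
    outside the baseline mean +/- sigmas * baseline std dev."""
    m = mean(baseline)
    s = stdev(baseline)
    lower, upper = m - sigmas * s, m + sigmas * s
    return [(i, v) for i, v in enumerate(new_readings)
            if v < lower or v > upper]

# Example: baseline inpatient blood-sugar readings (mg/dL),
# then a new week of readings to screen.
baseline = [110, 118, 104, 115, 109, 112, 108, 114, 111, 107]
new_week = [112, 109, 161, 110, 108]
print(flag_deviations(baseline, new_week))  # → [(2, 161)]
```

A flagged point may still be random variation, as Reid Ponte notes; the rule only identifies which data points merit a closer look.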
Close by Dana-Farber, Massachusetts General Hospital is using continual feedback to effect change in one of its imaging departments, as described in a recent study in Academic Radiology.1 The project was created as a method of reducing cardiac computed tomography angiography (cCTA) radiation doses while still achieving good image quality. Getting it right is highly dependent on the training, education, and experience of the physician and technologist, according to author Brian Ghoshhajra, MD, MBA, director of the clinical cardiac CT and MRI imaging and cardiac MR PET CT program in the hospital’s department of radiology. By providing near-real-time feedback through weekly dose reports, the team hoped to achieve the best images with the least radiation.
Between April 2011 and January 2013, the authors looked at 450 consecutive patients who had physician-supervised cCTA for clinically indicated native coronary evaluation. A third of the patients were from before the weekly reports began, a third after the initiation of the reports, and to ensure that any positive outcomes were maintained over time, a third were done after the study was completed in September 2012. Doses declined "significantly" during the intervention period, he says, and were maintained afterward. The number of outliers also declined and held after the study was completed.1
Using this method enables the team to provide care with lower radiation doses "than that of an invasive diagnostic angiography or the most commonly used noninvasive modality, nuclear myocardial perfusion imaging," the study notes.1
They have used similar continual feedback methods for projects related to imaging times in cardiac MRIs and contrast doses for people with renal failure having CT scans, says Ghoshhajra. The former, to be published later this year in the Journal of the American College of Radiology, led to a reduction in scans exceeding 120 minutes from 28% to 8% while maintaining image quality. In the latter, which will also be published later this year (in the Journal of Computer Assisted Tomography), Ghoshhajra says they were able to reduce the IV contrast doses for CT angiography of the chest, abdomen, and pelvis from a standard 120 mL to a record low of 20 mL. "This allows a whole new population of patients in renal failure to undergo scans required for planning minimally invasive aortic valve surgeries."
Doctors never believe they are doing anything other than the best for their patients, but this kind of feedback can show them where they have room for improvement, he says. "It’s a way of raising their consciousness."
The authors opted to do weekly reports because trainees were in the department for a week at a time. They charted 11 separate decisions to be made to get the best image at the lowest dose. While Ghoshhajra says they could have micromanaged those decisions with checklists, they chose instead to "let smart people react to the data." There was no singling out — just a scatter plot of data points sent to everyone via email. "No one wants to be the outlier," he says.
Ghoshhajra says his department currently keeps six months of data on scatter plot charts and the last week of doses in numbers for the entire team to see. He also checks to see what the outlying patients might have in common that could affect their imaging process — such as irregular heart rates.
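A weekly outlier summary of the kind Ghoshhajra describes could be produced with a short script. The following is a hypothetical sketch, not Massachusetts General’s actual tooling; the field names, the six-month rolling window, and the 95th-percentile cutoff are all assumptions chosen for illustration.

```python
# Hypothetical weekly dose report: keep a rolling six-month window
# of scans, pull out the last week, and flag doses above the 95th
# percentile of the window. Field names and thresholds are
# illustrative assumptions, not the hospital's actual system.
from datetime import date, timedelta

def weekly_dose_report(scans, today, window_days=182):
    """scans: list of dicts with 'date' (datetime.date) and
    'dose_mgycm' (radiation dose). Returns (window, last_week,
    outliers) for a simple team-wide email summary."""
    window = [s for s in scans if (today - s["date"]).days <= window_days]
    last_week = [s for s in window if (today - s["date"]).days <= 7]
    doses = sorted(s["dose_mgycm"] for s in window)
    # simple 95th-percentile cutoff over the rolling window
    cutoff = doses[int(0.95 * (len(doses) - 1))]
    outliers = [s for s in last_week if s["dose_mgycm"] > cutoff]
    return window, last_week, outliers
```

Sharing the resulting scatter of points with the whole team, rather than singling anyone out, matches the "no one wants to be the outlier" approach the article describes.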
Continuous feedback is a great tool, Ghoshhajra says. "It isn’t always the right choice, though. Not all of medicine is like an assembly line where you do the same thing over and over. But if you are doing a lot of something, then CQI methods can help you improve."
1. Engel LC, Lee AM, Seifarth H, Sidhu MS, Brady TJ, Hoffmann U, Ghoshhajra BB. Weekly dose reports: the effects of a continuous quality improvement initiative on coronary computed tomography angiography radiation doses at a tertiary medical center. Acad Radiol. 2013 Aug;20(8):1015-23.