Are you collecting data you don't really need?
Know when to put your resources elsewhere
In the process of collecting restraint data, you learn that certain physicians are not signing daily orders. Other data being collected show that patient education is being documented 97 times out of 100.
These are two examples of scenarios where ongoing data collection is no longer needed, says Paula Swain, MSN, CPHQ, FNAHQ, director of clinical and regulatory review at Presbyterian Healthcare in Charlotte, NC. In the former case, action is needed: profile the non-compliant physicians and speak to them directly. In the latter, periodic spot checks can replace rigorous data collection.
"A notice can be sent to staff saying that a good job has been done and letting them know that, since we are still interested, we will spot check from time to time and provide them feedback on this process," says Swain.
Almost all quality managers are struggling with increasing data collection burdens with limited resources. Yet many organizations are collecting the same data twice, or data that don't give meaningful information.
"Much of the data being collected lacks validity. Even if data is valid, it may not be important," says Peter J. Pronovost, MD, PhD, director of the Johns Hopkins Quality & Safety Research Group in Baltimore, MD.
Since data collection requirements are largely driven from outside the health system by insurers, accreditors, and regulators, it would be helpful for these groups to integrate their data collection requirements, says Pronovost.
"There are national efforts by JCAHO, CMS, and others to integrate their measures. However, the measures vary widely among insurers. It would be helpful if insurers could consolidate measures," he says.
To find out if redundant data collection is occurring, say the following to anyone who requests that data be collected, recommends Swain: "If I'm to collect data for you, I need to know: Why am I collecting it? What question will the data answer? And, if I'm still collecting, why? What is our goal, have we met our goal, and when will we know we have met it?"
Take an inventory of all the measures you collect, who is collecting the data, and what resources are being devoted to collecting it.
Next, go to the "consumers" of that data — whoever is supposed to take action on it — and ask if they believe it is valid, if they believe it is important, and whether it should continue to be collected, Pronovost advises.
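This inventory-and-survey exercise can be thought of as a simple table: each measure, what it costs to collect, and how its consumers rated it. The sketch below is purely illustrative; the measure names, hours, and ratings are hypothetical, not figures from any of the organizations quoted.

```python
# Hypothetical measure inventory. "valid" and "important" record what the
# data's consumers said when surveyed; all values are illustrative.
measures = [
    {"name": "restraint daily orders",  "hours_per_month": 20, "valid": True,  "important": True},
    {"name": "patient education audit", "hours_per_month": 16, "valid": True,  "important": False},
    {"name": "OR practice audit",       "hours_per_month": 12, "valid": False, "important": False},
]

# Candidates to retire: anything the consumers rate as not valid or not important.
retire = [m["name"] for m in measures if not (m["valid"] and m["important"])]

# Staff hours freed per month if those measures are dropped.
reclaimed = sum(m["hours_per_month"] for m in measures if m["name"] in retire)

print(retire)     # measures to stop collecting
print(reclaimed)  # hours reclaimed per month
```

Even a rough table like this makes the cost of low-value measures visible, which is the point of the exercise Pronovost describes.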
"We did this exercise. It is eye-opening to see how much of the data being collected the recipients believe is neither important nor valid," he says.
Two examples of meaningless data collection are auditing operating room practices without clear specifications and counting readmissions to the intensive care unit within 30 days. "When we audited this measure, we found that readmissions were due to patients developing a new problem," he says. "We changed the time frame to readmission within 48 hours, and it was much more useful."
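Narrowing the readmission window is just a change to the filter applied to discharge and return timestamps. The sketch below shows the idea with hypothetical patient records and illustrative dates; it is not the actual Johns Hopkins measure logic.

```python
from datetime import datetime, timedelta

# Hypothetical ICU events: (patient, discharge time, readmission time).
# All timestamps are illustrative.
events = [
    ("pt1", datetime(2007, 3, 1, 8, 0), datetime(2007, 3, 2, 14, 0)),  # back in 30 hours
    ("pt2", datetime(2007, 3, 1, 9, 0), datetime(2007, 3, 20, 9, 0)),  # back in 19 days
    ("pt3", datetime(2007, 3, 5, 7, 0), datetime(2007, 3, 5, 23, 0)),  # back in 16 hours
]

def readmissions_within(events, window):
    """Count patients who returned to the ICU within `window` of discharge."""
    return sum(1 for _, out, back in events if back - out <= window)

within_30d = readmissions_within(events, timedelta(days=30))   # counts all 3
within_48h = readmissions_within(events, timedelta(hours=48))  # counts only the quick returns
```

The 48-hour count excludes patients whose return reflects a new problem rather than a premature discharge, which is why the narrower window proved more useful.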
At Johns Hopkins, committees routinely evaluate the need to collect data. "There is such a large need to collect data that we can't support it all. So if we have something that seems to be a non-issue, we stop collecting it," says Dana Moore, RN, MS, a coach at the Baltimore-based Center for Innovation in Quality Patient Care, and clinical nurse specialist in the medical ICU at Johns Hopkins Hospital.
For example, the infection control department measured infection rates in intravenous lines used for parenteral nutrition for years, but the rates were consistently low and below national norms, so the data are no longer collected.
There are many other examples, Pronovost says, adding that it is important to recognize that measurement has costs. "If measures are not valid or not important, then we should not waste resources collecting them," he says. "You are better off not spending your resources collecting invalid data that can misinform."
A better strategy is to collect data for a smaller number of valid measures, and choose your indicators carefully.
"Data collection, especially for quality control, can go on and on," Swain says. "When it does, there needs to be one indicator that is really important to the whole process. That indicator should also be simple to collect and sensitive enough that when it falls out, some action is taken."
For example, the indicator, "Is patient education documented?" is not a good choice, because it is too vague. A better indicator to use would be, "Is the patient educated at least daily while in the hospital by any provider?"
Data collection that is done to assure a process "took" and that change is being sustained also needs to stop at some point, says Swain. "Most PI projects end after some data are collected to verify that the process change was made and that desired results were achieved," she explains. "When the final report on the project's success is submitted, plan to roll the indicator off."
For example, if it is established that a practice is being done consistently, weekly data collection can stop and be replaced by a monthly quality control check with a reduced sample size.
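Moving from a full weekly audit to a monthly spot check amounts to drawing a small random sample of charts once a month. The sketch below is a minimal illustration; the chart identifiers, population size, and sample size are all hypothetical.

```python
import random

# Hypothetical population of 200 eligible charts (names illustrative).
charts = [f"chart-{i:03d}" for i in range(1, 201)]

def monthly_spot_check(charts, sample_size=20, seed=None):
    """Draw a random sample of charts to audit this month.

    A fixed seed makes the draw reproducible for documentation;
    omit it in practice for a fresh sample each month.
    """
    rng = random.Random(seed)
    return rng.sample(charts, sample_size)

sample = monthly_spot_check(charts, sample_size=20, seed=42)
```

A 10% monthly sample is far cheaper than a 100% weekly audit, while still being sensitive enough to trigger action if compliance slips.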
"Also, if findings from data collection are not being acted upon, it may be time to discontinue the study and breathe life back into the project, since it is not performing as it should," says Swain. "Or you may need to evaluate if this is the 'steady state' of the process and accept it, at whatever level it has steadied out."
[For more information, contact:
Dana Moore, RN, MS, Coach, Center for Innovation in Quality Patient Care, 601 North Caroline Street, Suite 2080, Baltimore, MD 21287. Telephone: (410) 502-0165. Fax: (410) 614-5462. E-mail: email@example.com.
Peter J. Pronovost, MD, PhD, Director, Johns Hopkins Quality & Safety Research Group, The Johns Hopkins University School of Medicine, 1909 Thames Street, 2nd Floor, Baltimore, MD 21231. E-mail: firstname.lastname@example.org.
Paula Swain, MSN, CPHQ, FNAHQ, Director of Clinical and Regulatory Review, Presbyterian Healthcare, 200 Hawthorne Lane, Charlotte, NC 28204. Telephone: (704) 384-8856. E-mail: email@example.com.]