Taking the measure of measurement

Do you make any of these common errors?

Imagine the ongoing dismay of a high school math teacher who year in and year out has to teach students how to do the problems the right way, and year in and year out sees the same mistakes over and over again. That's kind of what it must be like for Betsy Jeppesen, BSN, vice president of program integrity at the Bloomington, MN-based quality improvement organization Stratis Health, and her colleague Liesl Hargens, BA, MPH, an epidemiologist.

The amount of data that passes before their eyes is immense, and the errors seem to repeat themselves. So given a soapbox to stand on and a megaphone to shout through, just what would they tell those who collect and submit data to remember and double-check? Here is a six-pack of things they would like to see:

1. Remember companion measures. Jeppesen says that people often set up process measures or outcome measures, but not the two together. "We like to see both because the process measure lets you know if the change is embedded in the practice — whether you are checking the box on the checklist every time — but it can't tell you if the change is making the system or process better," she explains. "That's why you need an outcome measure." Take the example of falls and fall risk assessment. You want to reduce falls, and you think that if you do more risk assessments, you can prevent more from occurring. You have to remember to measure both whether you are doing that assessment and whether that change is affecting the number of falls.
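To make the pairing concrete, here is a minimal Python sketch of a process measure and its companion outcome measure for a falls project. All of the counts are hypothetical and for illustration only.

```python
def process_measure(assessments_done, eligible_patients):
    """Process measure: share of eligible patients with a completed
    fall-risk assessment -- tells you the change is embedded in practice."""
    return assessments_done / eligible_patients

def outcome_measure(falls, patient_days):
    """Companion outcome measure: falls per 1,000 patient-days --
    tells you whether the change is actually making care better."""
    return falls / patient_days * 1000

# Hypothetical monthly numbers, for illustration only
print(process_measure(180, 200))   # 0.9 -- assessments are being done
print(outcome_measure(4, 2500))    # roughly 1.6 falls per 1,000 patient-days
```

Tracking only the first number would tell you the checklist is being used; tracking only the second would leave you guessing why the fall rate moved.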

2. How you measure matters. Hargens says doing things the easy way — say, through chart audits — will not always give you the answer you need. The data collection method has to be matched with what will get you the most helpful and accurate information. Take the example of surgical time outs. When a project coordinator creates a program, she might decide to just look at charts to see if a pre-surgery time out checklist was completed. "But that won't tell you if the time out process and checklist have enough rigor to prevent an error," Hargens says. To get an accurate picture of whether your time out process has improved, you need to use observation, critique the method, and examine exactly how it is carried out.

3. Consider alternative methods for extremely rare events or processes. You might want to choose something to measure that has the potential to cause great harm — say, postpartum hemorrhaging. However, such events are, thankfully, extremely rare. Hargens says that measuring an improvement in something that happens very infrequently may require a very long time to collect enough data to determine whether your change was effective. Jeppesen adds that if an event is really rare, you aren't going to get enough chances to see how it is handled; in the case of a rare procedure, it can be difficult to predict when a particular process will be used. Consider combining data for similar cases or events that may benefit from a similar safety intervention to increase your population size and your opportunities to monitor the improvement. For rare events, consider monitoring and calculating the time between events as an alternate measurement strategy.
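The time-between-events idea can be sketched in a few lines of Python: record the date of each occurrence and watch the gaps, since lengthening gaps suggest the intervention is working. The event dates below are hypothetical.

```python
from datetime import date

def days_between_events(event_dates):
    """Given the dates of a rare event, return the gap in days between
    each consecutive pair -- longer gaps suggest improvement."""
    ordered = sorted(event_dates)
    return [(later - earlier).days for earlier, later in zip(ordered, ordered[1:])]

# Hypothetical dates of a rare harm event, for illustration only
events = [date(2023, 1, 5), date(2023, 2, 2), date(2023, 4, 10), date(2023, 8, 1)]
print(days_between_events(events))  # gaps of 28, 67, and 113 days
```

A run of steadily widening gaps gives you a signal long before a traditional rate, which would sit near zero for months at a time.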

4. Know what you are collecting and how. Hospitals collect a vast amount of data, says Jeppesen. They do it for any number of organizations and reasons, and in many different ways. The duplication and workload of all that data mining can lead to mistakes and measurement fatigue. "When you are trying to measure and track for so many different requirements, measurement can become ineffective and lead to drawing the wrong conclusions from the results."

She suggests creating a matrix of the various organizations to which you are reporting, the measures you are collecting, and whether any of them line up appropriately so that you can use one measurement for two purposes. "QIOs can sometimes help with that," Jeppesen notes. "Just make sure when you are using measurements for multiple purposes, there is a true match."

Setting up a measurement plan that doesn't give you accurate results, or doesn't actually help you draw sound conclusions about your improvement effort, can cost additional resources in the long run. Easier is attractive, but if you don't get it right the first time, you are making more work for yourself later, she adds.
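A reporting matrix like the one Jeppesen describes can be as simple as a lookup from each measure to the organizations that require it, which makes the overlaps jump out. The organizations and measures below are hypothetical examples, not an actual reporting list.

```python
from collections import defaultdict

# Hypothetical reporting requirements, for illustration only
requirements = {
    "federal reporting": ["fall rate", "CAUTI rate"],
    "state association": ["fall rate", "hand hygiene compliance"],
    "internal QI": ["hand hygiene compliance", "fall rate"],
}

# Invert the matrix: which organizations want each measure?
measure_to_orgs = defaultdict(list)
for org, measures in requirements.items():
    for measure in measures:
        measure_to_orgs[measure].append(org)

# Measures that could serve more than one reporting purpose
shared = {m: orgs for m, orgs in measure_to_orgs.items() if len(orgs) > 1}
print(sorted(shared))  # ['fall rate', 'hand hygiene compliance']
```

As Jeppesen cautions, an overlap in the matrix is only a candidate for consolidation; you still have to confirm the definitions are a true match.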

5. Sample when you have to — and consider what type of sample to use. Hargens says that she frequently sees "convenience sampling," in which someone draws information from whatever population is readily available, rather than using an alternative sampling method.

Random sampling or stratified sampling may be more appropriate in some situations to limit the introduction of bias and ensure you are getting a true picture of whether an improvement is having the intended effect for the population you are targeting.

For instance, if you need a sample of 30 patients and choose every third patient who comes into the hospital, that limits the bias that could come from selecting only those patients who are available on a given shift or a particular unit. "Those patients selected on a particular shift or unit may not be typical of the population targeted for improvement."

"A convenience sample has some benefits in that it is simple and easy to design," explains Jeppesen. There are instances in quality improvement work when it is OK to use this method of "sampling," she says. "If you are pilot testing something and want just enough information to see whether you should continue, or whether something is being adopted, then convenience sampling can be used."
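The every-third-patient approach is a systematic sample. Here is a minimal Python sketch contrasting it with a simple random draw; the patient list and counts are hypothetical.

```python
import random

def systematic_sample(patients, k):
    """Take every k-th patient from an ordered list -- a simple way to
    avoid the bias of sampling only whoever is handy on one shift."""
    return patients[k - 1::k]

def simple_random_sample(patients, n, seed=None):
    """Draw n patients at random from the whole population."""
    rng = random.Random(seed)
    return rng.sample(patients, n)

# Hypothetical admission order, for illustration only
admissions = [f"patient_{i:03d}" for i in range(1, 91)]

every_third = systematic_sample(admissions, 3)
print(len(every_third))  # 30 of 90 admissions, spread across all shifts

random_thirty = simple_random_sample(admissions, 30, seed=42)
print(len(random_thirty))  # also 30, chosen without regard to order
```

Either approach spreads the sample across the whole population; a convenience sample taken from one shift or one unit does not.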

6. Pick the right frequency or length of time. Jeppesen says people often do not know how long to continue to monitor or measure something in order to draw accurate conclusions. "If you are doing something where you draw samples of cases at three separate intervals, you might not see the data changes within those time periods," she says.

If you are looking at something at 30 days, 60 days and 90 days, you might have something that looks like an improvement, but miss variability within those 30-day periods.

She brings up pressure ulcers and how one improvement project looked at patients in three different months. The general trend was improvement, but within that, "it bounced all around."
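The masking effect is easy to demonstrate: aggregate the counts and the bounce disappears. Here is a minimal Python sketch using hypothetical weekly pressure-ulcer counts within one apparently improved month.

```python
def rate_per_window(events, opportunities):
    """Rate for each time window: events divided by opportunities."""
    return [e / o for e, o in zip(events, opportunities)]

# Hypothetical weekly counts over one month, for illustration only
weekly_events = [6, 1, 7, 2]            # bounces around week to week
weekly_census = [100, 100, 100, 100]    # patients assessed each week

monthly_rate = sum(weekly_events) / sum(weekly_census)
weekly_rates = rate_per_window(weekly_events, weekly_census)

print(monthly_rate)   # 0.04 -- looks steady at the monthly level
print(weekly_rates)   # [0.06, 0.01, 0.07, 0.02] -- wide swings underneath
```

A single monthly data point at 4% hides week-to-week swings between 1% and 7%, which is exactly the variability Jeppesen warns can be missed when the measurement interval is too long.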

Jeppesen notes that nurses, who do a lot of the heavy lifting in quality improvement, haven't had training in this area and are often uncomfortable with measurement.

Planning a project out, including the details of how you will measure, what you will measure (and why), and the frequency at which you will measure (and why), can help. "Weigh the pros and cons of the approach you are taking for measurement. If after an intervention you find you have very different results than you expected, you might want to go back and check your measurement again to assess the reliability of the information. But if you plan carefully up front, you can have more confidence in the conclusions you draw from your data and the implications for next steps."

Think of the companion measures, the kind of sample you need, inclusions, exclusions, and the approach for collecting data.

There are some places to get training on this topic, including a measurement guide that Stratis Health produced — http://www.stratishealth.org/documents/MN_AE_Health_Events_Measurement_Guide.pdf. Hargens also suggests turning to your state hospital association and area quality improvement organization for guidance. The latter often provide online or virtual training sessions.

The Institute for Healthcare Improvement (http://www.ihi.org/search/pages) and the National Association for Healthcare Quality (http://www.nahq.org/about/onlinestore/datatools.html) also have resources available.

For more information on this topic, contact: