The Quality - Cost Connection: Did we really make a difference?
How to measure the effectiveness of PI
By Patrice Spath, RHIT
Brown-Spath & Associates
Forest Grove, OR
Measuring the effectiveness of actions intended to improve patient care is an important element of performance improvement. Effectiveness evaluation determines whether an intervention has had the intended effect. It is the "check" portion of the plan, do, check, act (PDCA) continuous improvement cycle. Yet, effectiveness evaluation often is just an afterthought.
One golden rule is that the intervention and its evaluation should be planned simultaneously. It is important to decide on the evaluation design and methods before interventions are introduced. Some basic decisions are required before an evaluation strategy can be specified. These decisions should be made through a collaborative process with those who will use the evaluation results and those implementing the improvement plans.
Several factors should be determined before action plans are implemented: the overall purpose of the evaluation; the main questions the evaluation should answer; the available resources (financial, personnel); and the deadline for the evaluation results. These decisions must be made well before the improvement plans are implemented because they influence the evaluation methodology. In particular, the rationale for the evaluation will influence the strength of the evidence sought. For example, you might want a more rigorous evaluation process if the result of the evaluation has larger resource or policy implications for the organization. Several quasi-experimental strategies that can be used to assess the effectiveness of performance improvement interventions are described below:
• Use a control group.
This strategy mimics a simple experimental design with one important difference: participants are not randomly assigned. For instance, an intervention might be implemented in one nursing unit but not in the others. The other nursing units act as nonrandomized control groups because they do not receive the intervention. This method of effectiveness evaluation can be invalid if the intervention and control groups differ significantly and these differences influence the measures used to detect an intervention effect. In the nursing unit example, distinctive characteristics of the intervention unit (e.g., a pediatric population only, short-stay patients only) could result in invalid comparisons among units.
Another way to use control groups is to apply different implementation strategies in each group. The goal is to determine which strategy works the best. For example, one hospital implemented a process for reconciling patients’ medications at discharge. In one nursing unit, staff received an educational session on the new process after which the steps in the process were posted at the nursing station. In another unit, only the steps of the process were posted; the nursing staff did not receive any formal training. Compliance with the new process was then measured, and the two implementation strategies compared. Much to the surprise of the reviewers, both strategies yielded almost equal compliance with the medication reconciliation process. The step of staff training was eliminated when the process was introduced in other units.
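For teams that analyze their audit data in a script rather than by hand, the two-strategy comparison above can be sketched with a simple two-proportion test. The unit labels and audit counts below are hypothetical; this is a minimal illustration of the comparison, not a prescribed statistical method:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z statistic: is the difference in compliance
    between the two units larger than chance alone would suggest?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical audit counts: unit A (training plus posted steps),
# unit B (posted steps only).
z = two_proportion_z(46, 50, 44, 50)
print(round(z, 2))  # → 0.67; well below 1.96, so no significant difference at the 5% level
```

A |z| below roughly 1.96 is consistent with the article's finding that the two implementation strategies yielded almost equal compliance.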
• Take more measurements.
Consider using a simple time series measurement strategy. Establish a baseline by taking several measurements before implementing the intervention. Next, take the same measurements after introducing the intervention.
If the intervention was effective, one would expect to find a difference in measures between the two time trends. The number of measurements needed for a time series evaluation depends on the amount of random fluctuation (noise) in the process or outcome being measured and on how much impact the intervention is expected to have. Typically, somewhere between six and 15 measurements are needed to establish a baseline, and the same number to establish the trend after the intervention is implemented.
A time series evaluation is suitable only for some situations. For example, it would take quite a long time to gather reliable data on the incidence of unmarked surgical sites (an infrequent occurrence). It probably would be better to measure compliance with pre-incision timeout procedures. This would permit more frequent and reliable measurement. Another option is to combine the control group measurement approach with a time series evaluation to strengthen before-and-after intervention reviews.
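One common way to judge a before-and-after time series is a run-chart rule: a long run of post-intervention points on one side of the baseline median suggests a genuine shift rather than random noise. The sketch below assumes hypothetical monthly timeout-compliance rates and a run length of six; both are illustrative choices, not fixed requirements:

```python
from statistics import median

def signals_shift(baseline, post, run_length=6):
    """Run-chart rule sketch: a run of `run_length` or more consecutive
    post-intervention points on the same side of the baseline median
    suggests a real shift rather than random fluctuation."""
    m = median(baseline)
    side, run = 0, 0  # side: +1 above median, -1 below, 0 on the median
    for x in post:
        s = 1 if x > m else (-1 if x < m else 0)
        if s != 0 and s == side:
            run += 1
        else:
            side, run = s, (1 if s != 0 else 0)
        if run >= run_length:
            return True
    return False

# Hypothetical monthly compliance rates (%) with pre-incision timeouts.
baseline = [62, 58, 65, 60, 63, 59]      # six pre-intervention points
post = [71, 74, 70, 76, 73, 75, 78]      # seven post-intervention points
print(signals_shift(baseline, post))  # → True: a sustained run above the baseline median
```

This mirrors the article's guidance: roughly six baseline points establish the median, and a comparable number of post-intervention points show whether the trend has moved.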
• Stagger implementation.
With this measurement strategy, all affected groups eventually implement the intervention, but at different times. As a result, each group also serves as a comparison group for the others. The advantage of this implementation and measurement technique is that it markedly reduces the chance that mere coincidence explains the results. When all nursing units change a process at the same time, you can never be sure that something didn't coincidentally occur at the same time to influence the measured results. Another factor that can subtly affect measurement results is the Hawthorne effect: everyone's attention is focused on correcting the problem, and it is this "focused attention" that actually causes the improvement, not the process change itself. Once people's attention turns to other priorities, the gains can slowly slip away.
When implementation of interventions is staggered, the possible influencing factors also are staggered — reducing the likelihood that mere coincidence or focused attention are influencing the measurement results. Staggered implementation of interventions also can allow interim assessments and, if appropriate, modification of the intervention or its implementation before it is introduced in other areas.
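A staggered rollout can be checked period by period: units that have already started the intervention are compared with units that have not yet started, which act as concurrent controls (this layout is sometimes called a stepped-wedge design). The unit names, start periods, and compliance rates below are all hypothetical:

```python
# Staggered rollout sketch: each unit starts the intervention in a
# different period, so not-yet-started units serve as concurrent controls.
start_period = {"unit_A": 1, "unit_B": 2, "unit_C": 3}
rates = {  # hypothetical compliance rates (%) for periods 0..3
    "unit_A": [60, 78, 80, 82],
    "unit_B": [58, 61, 77, 81],
    "unit_C": [62, 60, 63, 79],
}

def period_contrast(period):
    """Mean rate among units already on the intervention vs. those not yet.
    Returns (mean_on, mean_off); None if a group is empty that period."""
    on = [rates[u][period] for u, s in start_period.items() if period >= s]
    off = [rates[u][period] for u, s in start_period.items() if period < s]
    mean = lambda xs: sum(xs) / len(xs) if xs else None
    return mean(on), mean(off)

for p in range(4):
    print(p, period_contrast(p))
```

If the intervention, rather than coincidence or focused attention, is driving the change, the "on" group should outperform the "off" group in every period where both exist.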
• Measure multiple outcomes.
It is important to measure more than one type of outcome following implementation of improvement actions; doing so increases the strength of the evaluation. A number of intermediate outcomes can occur between the start of plan implementation and the final outcome, and it is ideal to measure as many of them as feasible. This includes measuring completion of the action plans as well as short- and intermediate-term effects of the intervention. When an improvement project fails to achieve its goals, you want to be able to distinguish between inherently ineffective action plans and a flawed implementation strategy.
If an action was not completed as intended, measuring effectiveness of the project by measuring overall outcomes likely will underestimate the intervention’s potential impact. You might discard the improvement project as being a failure when in fact the culprit was inadequate implementation of one action. First try to improve this part of the intervention instead of discarding the overall plan altogether.
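The logic of separating a failed plan from a flawed implementation can be sketched as a simple decision sequence: first verify that the actions were completed, then check process compliance, and only then judge the plan by its outcome. The action names, compliance target, and data below are hypothetical:

```python
# Sketch: diagnose why an improvement project missed its goal before
# discarding the plan. All names, thresholds, and values are illustrative.
actions_completed = {"staff_training": True, "form_redesign": True,
                     "posted_reminders": False}
process_compliance = 0.55   # share of cases following the new process
outcome_improved = False    # e.g., no drop yet in pressure-ulcer rate

def diagnose(actions, compliance, improved, compliance_target=0.80):
    """Order of checks matters: implementation gaps must be ruled out
    before the intervention itself is judged ineffective."""
    if not all(actions.values()):
        return "incomplete implementation - finish the action plan first"
    if compliance < compliance_target:
        return "low fidelity - strengthen implementation before judging the plan"
    if not improved:
        return "plan implemented faithfully but ineffective - redesign the intervention"
    return "plan implemented and effective"

print(diagnose(actions_completed, process_compliance, outcome_improved))
```

In this hypothetical case the verdict is "incomplete implementation": one action was never finished, so the overall outcome measure would underestimate the intervention's potential impact.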
Often what you'll be measuring during the "check" cycle is the effectiveness of the actions at achieving improved outcomes. For example, such an evaluation might answer the question: does the new patient admission assessment instituted six months ago (for the purpose of decreasing decubitus ulcers) actually reduce skin ulcers? Although outcomes often are measured in an effectiveness evaluation to determine whether the improvement project has had an effect, there are two situations where this might not be the case. At times, outcome data may be unreliable or invalid (e.g., when small numbers are involved because of the size of the facility). In this case, a surrogate measure of effectiveness could be used (e.g., compliance with patient assessment procedures).
The other situation is when the explicit objective of the improvement project is not to improve an outcome but rather to change the process. For example, an improvement project might be done to decrease the use of flash sterilization of instruments in the operating room. Upon completion of this project, the measures of success may focus solely on how often staff members are following the redesigned process. However, even if the purpose of the intervention is to ultimately effect a process change, it may be beneficial to also include a measure of outcome.