THE QUALITY CO$T CONNECTION

Don't declare QI victory too soon

Evaluate the effectiveness of QI initiatives

Patrice Spath, RHIT
Brown-Spath & Associates
Forest Grove, OR

Hospitals are making numerous changes in an attempt to improve the quality and safety of patient care services. These interventions may be a new program, a revised practice, or an initiative such as staff training. Changes are occurring at a rapid pace at different levels of the organization, from management to the front lines. Unfortunately, people often don't take the time to critically analyze and evaluate the changes to see whether the intended improvements are actually realized. The sense of urgency for change is not coupled with the resolve to conduct post-intervention evaluations. With health care organizations taking an ever closer look at their overall expenditure levels, it is increasingly important to evaluate the effectiveness of quality and safety initiatives.

No change should be implemented without a plan for conducting an effectiveness evaluation to determine whether the initiative had the intended effect. Such an evaluation might answer the question: Does the new medication reconciliation process instituted for the purpose of decreasing medication errors actually reduce errors? This type of evaluation is the "CHECK" portion of the PLAN-DO-CHECK-ACT (PDCA) continuous improvement cycle. Although outcomes often are measured in an effectiveness evaluation, direct outcome measurement is not always possible. For example, if medication error data are unreliable, a surrogate measure could be used (e.g., completeness of reconciliation documentation). Three types of measures can be used to evaluate the effectiveness of actions:

Completion of action — Easiest to measure, but weakest. Example: Measure that staff received training in the new medication reconciliation policy and procedure.

Compliance with process changes — Harder to measure, but more meaningful. Example: Measure how often reconciliation is done within established timeframes.

Outcome (result) of actions — Often hardest to measure, but best indicator of success. Example: Measure how often a near-miss or adverse event occurs because of inadequate reconciliation of a patient's medications.

It may be necessary to make tradeoffs when choosing which outcomes to measure. Issues such as resource limitations and the availability of quality data must be taken into consideration. If a very thorough effectiveness evaluation is warranted, pertinent data would need to be collected for all important variables.

Measurement plan

Plans for evaluating the effectiveness of a change should be developed while the intervention is being chosen or designed. Measurement starts with a clear understanding of what people are trying to achieve. The individuals involved must clarify what they hope the intervention will change and the mechanism by which that will happen. Next, have them tell you how they will know if the action made the situation better. It can often be useful to also identify potential unintended outcomes of the intervention. To do this, ask people to look at the changes that are supposed to happen following the intervention. Then have them think about what other effects could possibly result from the changes. For example, an intervention to reduce needle injuries by eliminating recapping prior to disposal might not only have the intended effect of decreasing recapping injuries, but also the unintended effect of increasing disposal-related injuries if needles are discarded into poorly designed containers.

Once the intervention effects (both intended and unintended) have been identified, measurement methods are selected. To clearly demonstrate intervention effectiveness, outcomes should be measured using quantitative methods (e.g., rate of patient injuries, percent of medication reconciliations completed on time). Quantitative measures are used to determine how big an effect the intervention had on the outcome(s) of interest and whether the effect was statistically significant. A demonstrated statistically significant change in a measurable variable provides good evidence of intervention effectiveness. Qualitative evaluation methods also can be used. These measures are helpful in determining how the intervention achieved desired results and how the individuals expected to adopt the change reacted to it.
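As a rough sketch of the quantitative approach, the example below compares pre- and post-intervention medication error rates with a pooled two-proportion z-test. The counts, sample sizes, and the choice of test are illustrative assumptions, not part of the article.

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for comparing two proportions, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Hypothetical audit data: 40 errors in 500 medication orders before the
# new reconciliation process, 22 errors in 500 orders afterward.
z = two_proportion_z(40, 500, 22, 500)
print(round(z, 2))   # prints 2.36; |z| > 1.96 suggests significance at the 5% level
```

A real evaluation would also report the effect size (here, an error rate dropping from 8% to 4.4%), since a statistically significant difference can still be too small to matter clinically.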

After the measurement methods are defined, it is time to select the sample that will be evaluated after interventions are implemented. The evaluation sample is obvious when an intervention is being introduced in just one department and all 30 employees will be affected by the change. However, interventions often are implemented across the organization, and it may not be feasible to evaluate everyone's involvement. In these situations, a smaller study population must be identified. A smaller sample can still provide sufficient statistical power to detect an intervention effect if it is chosen carefully. The sample should be representative of all involved groups and their particular circumstances (e.g., time, place, discipline).

Suppose a training intervention is being implemented to improve communication among members of the health care team and you will evaluate it by observing people interacting before and after the intervention. It is not feasible to observe everyone who received communication training, so you limit the evaluation to a smaller sample. It would be easier to observe everyone who works in one area, but this group is unlikely to represent the whole population of trained individuals. The best method is to choose a random sample, which increases the chance of a representative sample. Random selection involves choosing the groups to observe in such a way as to ensure every group has the same probability of being selected. When evaluating the effect of communication training, you'd want a stratified random sample that takes into consideration the particular circumstances that might influence communication practices, such as professional discipline, time of day, and unit.
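A stratified random sample like the one described could be drawn along these lines. The staff roster, the strata (discipline and shift), and the sample size per stratum are all hypothetical, and real strata would likely include unit as well.

```python
import random

# Hypothetical roster: 60 trained staff, each tagged with the circumstances
# that could influence communication practices (discipline and shift).
staff = [
    {"name": f"staff{i}", "discipline": d, "shift": s}
    for i, (d, s) in enumerate(
        (d, s)
        for d in ("nurse", "pharmacist", "physician")
        for s in ("day", "night")
        for _ in range(10)
    )
]

def stratified_sample(population, keys, n_per_stratum, seed=0):
    """Draw the same number of observation subjects at random from every stratum."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(tuple(person[k] for k in keys), []).append(person)
    return [p for group in strata.values()
            for p in rng.sample(group, n_per_stratum)]

sample = stratified_sample(staff, ("discipline", "shift"), n_per_stratum=3)
print(len(sample))   # prints 18: 6 strata x 3 subjects each
```

Because every discipline-shift combination contributes equally, no single group (such as day-shift nurses) can dominate the observations.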

Measure over time

Often, it is necessary to take multiple measurements over time to reliably judge intervention effectiveness. But how long must you keep measuring? The answer depends on several variables. If you are making small incremental changes as part of a rapid cycle improvement or Six Sigma project, you need to gather data long enough to be relatively certain the change was successful at achieving the intended effect. Once success is confirmed, you can spread the change to other areas and move forward with the next incremental change.

Significant changes often require a longer period of measurement to determine whether the new practices have been internalized by those involved. For instance, medication reconciliation is a new practice for nurses, pharmacists, and physicians. Thus, it will take a while for the reconciliation process to be followed consistently.

Occasional measurement of compliance will be a necessary part of the implementation process for several months. If the intervention is replacing a long-established habit, such as the use of abbreviations in medication prescriptions, an even longer period of measurement may be needed. People newly trained to a practice change are more likely to comply if they know they will occasionally and randomly be observed.

The effectiveness of large scale interventions intended to positively impact patient outcomes can require measurement over several years. An integrated health system systematically embarked on a multi-faceted initiative to improve pediatric asthma outcomes. Data were gathered at each step of the process to assess the effectiveness of each new intervention at achieving desired practice changes, such as appropriate use of anti-inflammatory medications. In addition, outcome data were gathered for several years to determine if pediatric hospitalizations and emergency department visits were declining.

After an intervention is in place and appears to be running well, it is usually not necessary to continue evaluating compliance with process changes. Periodic review of outcome results may be sufficient to ensure that nothing is disrupting what appears to be a well-functioning process. For example, the rate of patient falls can be monitored to determine if fall prevention interventions continue to be successful. If the outcome data are reasonably accurate, use a control chart to plot the results. Monthly rates often vary considerably because of random variability, and you'll want to know if results for a single month are significantly out of line with previous experience. Control chart methodology will alert you when the number of patient falls in a month is so high that there seems to be a real problem, or when the pattern over two or three months is a cause for concern.
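One simple control chart for monthly event counts is a c-chart, which assumes counts are roughly Poisson-distributed so the variance equals the mean. The sketch below computes the center line and 3-sigma limits for hypothetical monthly patient-fall counts; the numbers are illustrative, and a u-chart would be more appropriate if patient-days vary much from month to month.

```python
from math import sqrt

def c_chart_limits(counts):
    """Center line and 3-sigma control limits for a c-chart of event counts."""
    c_bar = sum(counts) / len(counts)
    sigma = sqrt(c_bar)                  # Poisson assumption: variance = mean
    return c_bar, max(0.0, c_bar - 3 * sigma), c_bar + 3 * sigma

# Hypothetical baseline: monthly patient-fall counts from the past year.
falls = [6, 4, 7, 5, 6, 8, 5, 4, 6, 7, 5, 6]
center, lcl, ucl = c_chart_limits(falls)
flagged = [m for m, c in enumerate(falls, 1) if c > ucl or c < lcl]
print(round(center, 2), round(ucl, 2), flagged)   # prints 5.75 12.94 []
```

Here no month exceeds the upper limit, so the month-to-month variation looks like ordinary random variability rather than a signal that fall prevention has broken down. Run rules (e.g., several consecutive months above the center line) would catch the slower drifts the article mentions.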

If the intervention is expected to have a positive impact on important outcomes, quite likely some of those outcomes already are measured regularly. Thus the data collection burden may be minimal. Improvements in hand washing are expected to decrease nosocomial infections and the rate of infections often is routinely measured by hospitals. The success of interventions intended to improve patient satisfaction can be measured indefinitely through the hospital's existing satisfaction survey process. Length of stay and cost data can readily be obtained from the hospital's claims database, and this information can be used to evaluate interventions expected to impact these results.

People look forward to completion of any task, so it's no wonder that it is tempting to congratulate all involved and proclaim success once interventions have been implemented. However, if the results of the change are not adequately evaluated, it's like declaring victory before the war is over. Improvement projects that essentially end at the DO phase in the PDCA cycle may only produce a fraction of the possible results. Or, worse yet, six months after the end of the project, it will become clear that immediate gains were not sustainable.