THE QUALITY-COST CONNECTION

Uncover opportunities using comparative data

Understand what the data are telling you

By Patrice Spath, RHIT
Brown-Spath & Associates
Forest Grove, OR

With the increasing number of health care performance measurement initiatives being carried out across the country, comparative data are easy to obtain nowadays. These data can be powerful motivators for organizational change by highlighting opportunities for improvement and thereby assisting in setting strategic improvement goals. To reap benefits from comparative performance data, quality managers must identify ways to integrate this information with the organization's improvement efforts.

First, determine which performance factors senior leaders are most interested in. Some organizations are chiefly concerned with comparative data that are routinely made available to the public. Although these measures may apply to only a small percentage of the organization's patient population, concerns about how consumers may react to the data can be the primary driver of internal improvement activities. Ideally, the organization's quality goals should also be taken into consideration, as these goals may involve patient populations or factors not routinely evaluated in publicly available performance data. If senior leaders are interested in analyzing less widely available comparative performance data, quality managers may be challenged to find suitable comparisons.

A new on-line resource from the Commonwealth Fund, Performance Snapshots, can make it easier to locate comparative data for various performance issues. The resource includes measures for more than 80 topics along with suggestions for improving health care practices. You can download customized collections of topics and charts for internal reporting purposes.

Once the performance measurement priorities of the organization have been clarified and comparative data obtained, the task of analyzing and responding to the results begins. The journey of discovery and performance improvement is illustrated in Figure 1.

The journey begins with the quality council (or other leadership group) reviewing the data and determining the hospital's comparative standing. If the hospital performs outside the ideal performance range (significantly above or below the average), further investigation is warranted. For many comparative measures, however, the hospital's observed rate will fall between the upper and lower bounds of expected performance. Just because your facility's performance falls within the average or expected range does not mean that quality is perfect. Tests of statistical significance and other data analysis techniques are merely tools that can help point to potential problem areas. Don't rely solely on these tools to identify improvement opportunities.
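As one way to see how such a comparison works in practice, the sketch below flags whether a facility's observed rate falls outside a simple expected range built from the comparison group's average rate. All numbers are hypothetical, and this uses an unadjusted normal approximation to the binomial; real measurement initiatives typically publish their own risk-adjusted control limits, which should be used when available.

```python
import math

def expected_range(p_bar, n, z=1.96):
    """Approximate 95% expected range for a rate, given the comparison
    group's average rate p_bar and the hospital's case count n.
    Normal approximation to the binomial -- no risk adjustment."""
    se = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - z * se), min(1.0, p_bar + z * se)

def compare(observed_rate, p_bar, n):
    """Classify the observed rate against the expected range."""
    lo, hi = expected_range(p_bar, n)
    if observed_rate > hi:
        return "worse than expected -- investigate"
    if observed_rate < lo:
        return "better than expected -- verify, then study"
    return "within expected range -- not proof of perfect quality"

# Hypothetical example: 9 surgical infections in 240 cases,
# against a 2.5% comparison-group average.
print(compare(9 / 240, 0.025, 240))
# -> within expected range -- not proof of perfect quality
```

Note that the illustrative rate above (3.75%) is higher than the group average yet still lands inside the expected range, which is exactly why statistical tools should prompt, not replace, judgment about where to improve.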

Your facility's performance may be consistent with that of other organizations and yet improvements are still desirable. Why? Because people in your organization have established stretch goals. Average or slightly better than average performance is not the stopping point in an organization with stretch goals — people aspire to achieve optimal performance. Quality goals, reflective of a commitment to performance excellence, may be established as part of your organization's strategic planning process. Medical staff departments may select certain conditions or procedures for which they have higher than average performance expectations. Nursing and other clinical departments can launch improvement projects that are intended to achieve lofty goals, such as a restraint-free environment. For any number of reasons, it may be necessary to look beyond statistical significance and special-cause process variation and decide if your organization is realizing internally defined expectations.

A common concern about comparative performance data is the validity of peer groups. It is virtually impossible to create peer groups that are identical in all ways: clinically, demographically, and therapeutically. There are literally hundreds of variables that may impact clinical outcomes and performance measurement results. Risk adjustment of patient populations as well as population stratification can help enhance the validity of the comparisons, but peer groups will never be perfect.
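To illustrate what risk adjustment adds to a raw comparison, the sketch below applies indirect standardization: stratify local cases by risk, compute the deaths expected from each stratum's reference rate, and form an observed-to-expected (O/E) ratio. The strata and rates here are invented for illustration, not drawn from any real comparative database.

```python
# Each tuple: (risk stratum, local cases, local deaths,
#              reference death rate for that stratum) -- hypothetical data.
strata = [
    ("low risk",    150, 1, 0.01),
    ("medium risk",  80, 3, 0.04),
    ("high risk",    40, 5, 0.10),
]

observed = sum(deaths for _, _, deaths, _ in strata)
expected = sum(cases * rate for _, cases, _, rate in strata)
oe_ratio = observed / expected

# A crude rate ignores case mix; the O/E ratio accounts for it.
crude_rate = observed / sum(cases for _, cases, _, _ in strata)
print(f"crude rate={crude_rate:.1%}, observed={observed}, "
      f"expected={expected:.1f}, O/E={oe_ratio:.2f}")
```

In this made-up example the crude mortality rate (about 3.3%) might look high next to a peer group dominated by low-risk cases, yet the O/E ratio is close to 1.0, suggesting performance in line with the facility's case mix. The reverse can also happen, which is why stratification and risk adjustment improve, without perfecting, peer comparisons.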

You may be unable to refine the peer group comparisons through customized reports or other means. For example, say your hospital has higher than expected mortality rates for patients undergoing cardiac surgery. Why? Perhaps because your facility has a first-class cardiac program, and high-risk patients are more likely to be admitted there. But don't be too quick to attribute the cause of significant variations. An in-depth analysis should be done to determine which factors are most likely affecting performance rates at your facility.

It is unlikely that perfect peer group comparisons will ever be possible; unique facility and patient characteristics will never be fully accounted for. However, perfection is the enemy of improvement: understand what the data are (and are not) telling you. Don't expect comparison data to be flawless before you investigate variations.

If performance varies significantly from comparison facilities, or does not meet the organization's internal goals or expectations, questions naturally arise about why performance is what it is. These "why" questions are answered by investigating all the factors that might be affecting performance. For instance, if the facility's surgical wound infection rate is higher than that reported by other facilities, clinicians must identify the contributing factors and agree on what needs to be done to reduce infection rates. Effectively answering the why questions is the critical first step toward implementing changes in policies, procedures, or processes that will positively affect the quality of services.

Once agreement is reached on the tactics needed to improve performance, detailed action plans are developed. These plans should identify the specific steps needed to transform each idea into reality, the time frame within which the steps will be completed, and the individuals responsible for completing them. Once approved, the action plans are implemented, and performance is then monitored to determine the level of improvement achieved.

As depicted in Figure 1, after a reasonable amount of time has elapsed performance is re-evaluated to determine if the desired level has been reached. If it hasn't, the improvement cycle begins again, with clinicians seeking new insights into mechanisms for change. If performance is acceptable, the organization can turn its attention to other areas needing improvement.

Comparative performance data can provide powerful motivation for organizational change by highlighting opportunities for improvement and assisting in setting improvement goals. The availability of this information is driving collaborative projects at the national, regional and local levels. Quality managers must champion a systematic approach to evaluating performance in key clinical and administrative functions. Without such an approach, organizations risk missed improvement opportunities or misapplication of data.

(To learn more about Performance Snapshots, visit the Commonwealth Fund's web site: www.cmwf.org/snapshots.)