Data interpretation hinges on the rationale for benchmarking project

Art of benchmarking requires flexibility, keen judgment

Vast strides are being made in data collection methodology, and access to the same resources will help increase the commonality among benchmarking facilities. Nevertheless, benchmarking professionals argue, the practice of benchmarking is as much art as it is science, and interpreting data is as important as collecting them.

That being the case, once you have the data in hand, how do you determine what to do with them? "The whole issue of benchmarking is, why are you doing it?" notes Sharon Lau, a consultant with Medical Management Planning in Los Angeles.

What’s your reason for benchmarking?

"Is it to mark you as the best in the area? To improve overall? To slash and burn? — I hope not, but some people do use benchmarking at the 11th hour, so they can know where to cut. You have to have a reason, and hopefully, it is for overall improvement and for giving your managers some targets to shoot for," she says.

Robert G. Gift, MS, president of Omaha, NE-based Systems Management Associates Inc., agrees with that assessment.

"What’s the reason you gathered the data in the first place? You’ll never have complete data," he points out. "You have to look at this stuff and try to make sense of it; therein lies the judgment."

What to use — and when

How do you know which of the data to use, and when? "Here is where you have to be strong," Lau says.

"The first thing any manager will tell you when it comes to a benchmarking result is, ‘Yes, but we’re different.’ Sometimes we are, but you have to approach benchmarking as follows: If the gap [between your performance and the benchmark] is not that big, then maybe that’s not what I want to go after. Maybe I’ll tackle another department where the gap is much larger, and I can get more bang for the buck and come back to this other area when I’ve solved it. That is part of the art of benchmarking," she explains.

"Part of it also has to do with whether you are looking at outcomes data or process data," Gift says. "If you are looking at outcomes data, that is the ultimate you are trying to achieve, but what will help you see what to change in the work you do are the process data."

Lau warns that "yes, but we’re different" should not be used as an excuse to disregard certain data. "Yes, we are all different, but that doesn’t mean we can’t learn from the performance of others," she asserts.

The heart of benchmarking

At the heart of all key benchmarking decisions is the need to balance the cost, quality, and speed aspects of performance, Lau says.

"This is true no matter what it is you are measuring," she insists. "You have to balance the cost benchmarks, the quality benchmarks, and the speed-of-service benchmarks. If you just go after productivity, which is a speed benchmark (i.e., lowest hours), that only gives you one view of the benchmark painting. You could be the most productive but have the worst speed and quality; there’s always a need for there to be a balance."

That, she says, is another part of the art of benchmarking — when to seek an exact balance and when, for example, to decide that quality/satisfaction is more important than the other factors.

"We had an ED [emergency department] in our [benchmarking] group that was the most productive in the world," Lau recalls. "But their kids were waiting three hours to be seen by the doc, so they had to add staff to bring the wait time down. Then, customer satisfaction went up."

"Which data to use and when is really driven by the objectives you are trying to achieve," Gift says.

He uses an approach similar to Lau’s, employing a tool called the "Family of Measures." (See example, below.)

"This is a fascinating way of looking at multiple dimensions of performance," Gift says. "The assumption is that there are only three things we can measure — effectiveness, efficiency, and economy — or if you will, quality, time, and cost."

To use the "family," you start out with a process and break it down into these three major areas. "Then you identify the critical success factors and what the performance measures are," Gift says.

"If you are thinking about an admissions process, for example, maybe the accuracy of the information you collect is a critical success factor," he adds. "The actual performance measure may be the percentage of patient registrations that are error-free. It’s drawing that line of sight between the process you are working on and the actual thing you are measuring."

Many times, people may say they are looking at accounts receivable, for instance, and measuring X, "but what are you trying to get at with ‘X’?" he asks. "The family of measures tries to force some alignment of those things. It also forces you to think of multiple dimensions of performance, so you are not pushing on only one of those three pedals."
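Gift’s quality/time/cost breakdown can be sketched as a small data structure. The admissions-process entries below are illustrative — only the error-free-registration measure comes from his example; the other factors and measures are assumptions added to show the "line of sight" from process to measure:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    dimension: str            # "quality", "time", or "cost"
    success_factor: str       # the critical success factor for the process
    performance_measure: str  # the number actually tracked

# Hypothetical family of measures for an admissions process.
admissions_family = [
    Measure("quality", "accuracy of information collected",
            "% of patient registrations that are error-free"),
    Measure("time", "prompt registration",
            "average minutes from arrival to completed registration"),
    Measure("cost", "efficient use of staff",
            "registrations completed per staffed hour"),
]

for m in admissions_family:
    print(f"{m.dimension}: {m.performance_measure}")
```

Listing one measure per dimension keeps all three "pedals" in view, so no single measure is pushed in isolation.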

Apples and applesauce?

Even with good data definitions and careful data collection, you’ll still have some data that will never be apples to apples, Lau notes.

"Apples to applesauce may be the best you can get," she concedes. "But you still need to use the data. How do you do that?"

The first thing you have to do is convince people they are still good data. "If it’s apples and rocks, maybe we have a problem," she says. "But if it’s apples and oranges, frankly, I can see they are both fruit. You have to be able to interpret when you are close enough; look at the gap in performance and how close you are. If, for example, the benchmark wait time is 20 minutes and I’m at six hours, I don’t care if the data are off by 15%; we still have a problem, and you’ve got to use the data. If it’s the difference between 20 and 30 minutes, you may not go after it."
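Lau’s rule of thumb (act when the gap dwarfs the plausible data error) can be sketched in a few lines. The 15% error figure comes from her example; the `materiality` threshold is an assumption added for illustration, not something she specifies:

```python
def worth_pursuing(own_value, benchmark, data_error=0.15, materiality=1.0):
    """Decide whether a benchmark gap is big enough to act on.

    own_value, benchmark -- same units (e.g., minutes of wait time)
    data_error           -- plausible fractional error in the data
    materiality          -- assumed threshold: the relative gap must
                            exceed this fraction of the benchmark
    """
    gap_ratio = abs(own_value - benchmark) / benchmark
    return gap_ratio > data_error + materiality

# Benchmark wait is 20 minutes, ours is 6 hours: act even if data are 15% off.
print(worth_pursuing(360, 20))  # True
# A 20-vs-30-minute difference may not be worth going after.
print(worth_pursuing(30, 20))   # False
```

The point is not the exact formula but the ordering it imposes: tackle the departments where the gap swamps any plausible measurement error first, and revisit the marginal ones later.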

"What I tell people is, even if you do the best job you can, the best you can hope for is Golden Delicious to Granny Smith; you’ll never get Washington state to Washington state," Gift says. "Variables can include how you gather the data, how much time was spent to clean them, and so on."

The chief contributor, in many cases, is a source of variation that you can’t eliminate — i.e., the nature of the patients who show up at your door, he continues. "You can adjust the data for it, but you will never be able to clean it out completely."

How will you use the results?

In the end, it does all come down to judgment and results. "Benchmarking should be 25% data and 75% best practices — what works and what doesn’t," Lau says.

"Maybe if we meet 80% of the benchmark for a certain area, [that is acceptable]; but we need a higher percentage for another," she adds.

Hopefully, the benchmarking results will be used as a carrot. "Our best executives in our benchmarking group have had that approach," Lau observes. "We will report on your performance as it compares to the benchmarks, but it’s not like we’ll fire you when you don’t hit it."

"One of the overarching principles that come to mind is that comparative data and benchmarks should never replace judgment," Gift adds.

"One of the things we find is that people seem to be developing this insatiable desire to have comparative data. I think they’re looking at this as a panacea — a silver bullet kind of thing. But data can only point you in a direction," he says.

Need More Information?

For more information, contact:

• Robert G. Gift, MS, President, Systems Management Associates Inc., 4410 S. 176th St., Omaha, NE 68135. Telephone: (402) 894-1927. E-mail: bobgift@radiks.net.

• Sharon Lau, Medical Management Planning Inc. (MMP), BENCHmarking Effort for Networking Children’s Hospitals, 2049 Balmer Drive, Los Angeles, CA 90039. Telephone: (323) 644-0056. FAX: (323) 644-0057. E-mail: sharon@mmpcorp.com.