Cleveland Clinic, related hospitals pull out of report-card project

Move threatens future of six-year-old Cleveland effort

Criticizing one of the nation’s oldest health care report cards as an expensive experiment that ultimately failed, the Cleveland Clinic and its affiliated hospitals have pulled out of Cleveland Health Quality Choice (CHQC), leaving the program’s future in doubt.

The move comes even as purchaser coalitions and others work toward creating a national forum to encourage performance assessment and reporting. (See related story, p. 40.) And the departure highlights the underlying conflicts and weaknesses that could befall other report-card efforts.

"The biggest issue really is that nobody used it," says John Clough, MD, chairman of health affairs for the Cleveland Clinic. "The whole point of this program was to add the dimension of quality to the selection process [of health care providers by purchasers], which had always been based on cost alone," he says. But Clough noted that in five years of reporting data, CHQC had not moved beyond hospital mortality, length of stay, cost, cesarean rates, and patient satisfaction. Employers who wanted information on quality would make individual requests.

But to Patrick J. Casey, JD, executive director of the Health Action Council of Northeast Ohio, a coalition of about 70 Cleveland-area corporate health care purchasers and a prime supporter of CHQC, the clinic’s departure sends a more ominous message about the vulnerability of voluntary efforts.

"The argument by the clinic that the program wasn’t as good as it should have been, so we’re just going to kill it — I don’t really know how to characterize it," he says. "You would suspect that there are other reasons involved."

Cleveland Health Quality Choice began with consensus-building meetings in 1989, and its sponsors included purchaser coalitions, the hospital association, and the local physician society. The Cleveland Clinic served on its board and provided input.[1] "We wanted it to be a voluntary system, community-based," says Casey.

When CHQC prepared to release its first data in 1993, Business Week stated that "the Cleveland program stands out for its attempt to build its own measure of quality and for its broad-based support."[2]

Yet even that laudatory article pointed to concerns about a risk-adjustment model developed specifically for CHQC by Michael Pine & Associates of Chicago. The Cleveland Clinic and others asserted that the model underestimated the additional mortality risk of the most severely ill and didn’t properly take into account hospital transfers of the sickest patients to tertiary care centers.

"There were political problems along the way. Some of them dealt with risk adjustment; some of them dealt with other aspects," explains Clough. "I think we were uncomfortable with the risk adjustment, but if that was all that was wrong with it, we probably would have stuck with it."

CHQC executive director Dwain Harper, DO, says the program responded to the clinic's concerns and tested additional risk-adjustment variables but didn't find that they altered the results, in which in-hospital mortality, length of stay, and patient satisfaction are reported as "better than expected," "as expected," and "worse than expected."

Mortality, C-section rates improved

Public reporting of data had both an immediate and long-term positive impact on the community, say Casey and Harper. (See charts, above left and p. 39.)

"Death rates for six major medical conditions [including heart failure, stroke, and pneumonia] in 27 hospitals have dropped 30%. That’s significant," says Harper. "C-section rates have declined about 15% and are right around the Healthy People 2000 [national preventive health goals developed by a consortium of public and private organizations]. Vaginal birth after cesarean rates have increased almost 20%. Length of stay has declined almost 30%.

"There are studies out that have suggested that that rate of improvement does not occur in a community where there is no report [of health care performance] available," he says. "It’s one of the benefits a community gets from this project."

Clough asserts that those improvements and cost savings may have resulted simply from the evolution of better and more efficient medical care nationally over the seven years of the project. "It had much more to do with the changing of the marketplace and changes in the way patients are taken care of. There was never any clear relationship to Health Quality Choice that any of us could discern."

Meanwhile, efforts to expand the reports to include such data as mortality rates within 30 days of discharge and functional health status dragged on.

Ironically, the business community was poised to levy a surcharge to fund measures of post-discharge mortality and satisfaction with outpatient surgery. The program had also overcome earlier concerns that using Social Security numbers to identify deceased patients could breach patient confidentiality.

The results of 30-day mortality would have been reported in the spring, says Harper. "We finally broke through on that," says Casey.

CHQC had designed a patient satisfaction survey for outpatient surgery and could have reported those results by the end of 1999, adds Harper.

As for other issues, "The Cleveland Clinic’s concerns about the program were heard in detail in 1996," he says. "All its suggestions were analyzed. Many of its ideas were simply rejected by the remaining participants as either too costly or things they were not interested in. Some of its suggestions have been taken into account."

But the Cleveland Clinic's most damning criticism was that the report card had become irrelevant. Studies have shown that consumers often don't understand report-card information and don't use it to make choices about their physicians.[3] Health plans, not a primary target audience of the Cleveland report card, generally developed their own performance assessment systems.

Casey says the business community found the CHQC information valuable. "Purchasers of care were using it, were used to it, and had much discussion about the meaning of it."

But according to Clough, "We have surveyed our people who contract with us to provide care, and pretty much across the board, found they don’t use it." Instead of spending $2 million a year to produce the CHQC data, the Cleveland Clinic will concentrate on its own disease-specific benchmarking and quality improvement, beginning with diabetes care, he says.

That could eventually evolve into a Cleveland Clinic report card, incorporating its nine hospitals and 2,500 physicians. But Clough says, "Until we know that it’s a valid methodology, we will not report it publicly."

The departure of the Cleveland Clinic and its hospitals has a fallout for the other participating hospitals. CHQC is an approved vendor for the ORYX quality assessment program of the Joint Commission on Accreditation of Healthcare Organizations.

Of the 18 remaining hospitals, 16 used CHQC as their ORYX vendor, as did about four other Cleveland-area hospitals, says Harper. As of early March, the program staff had been downsized, but the directors were considering options ranging from restructuring to dissolution. "There are still many hospitals in the city that are interested in the performance measurement information," says Harper. "There is a possibility that the organization could change its mission and serve different needs."

Meanwhile, will the Cleveland experience produce shock waves in other report-card efforts? That’s not likely, say advocates of public accountability in health care.

Conflicts over methods and indicators aren't surprising, says Cheryl Damberg, of the Pacific Business Group on Health in San Francisco, who notes that "all report cards are in their infancy." The California purchaser coalition has developed an extensive report card of hospital and medical group performance called the Physician Value Check Report. She calls the Cleveland program "a respected report card."

"It takes time and resources to gather the information and put it together," she acknowledged. "I did think they were on the right path."

In Cleveland, purchasers were considering pressing for a state-mandated release of health care data. Casey calls the recent events "a speed bump" in the road toward public reporting of health care information. "The genie is out of the bottle as far as information such as this."

References

1. Rosenthal GE, Harper DL. Cleveland Health Quality Choice: A model for collaborative community-based outcomes assessment. Jt Comm J Qual Improv 1994; 20:425-442.

2. Schiller Z, Galen M. A consumer’s guide for health-care shoppers. Business Week May 3, 1993; 53-54.

3. Hibbard JH. Use of outcome data by purchasers and consumers: new strategies and new dilemmas. Int J Qual Health Care 1998; 10:503-508.