Hard lessons: Texas docs wrestle with measurements

Business coalition lets physicians set indicators

Like many of their colleagues around the country, physicians in Dallas and Fort Worth were wary of "quality measures" they felt had little to do with quality care. So when a major business coalition offered to let them craft their own indicators and assessment tool, the physicians jumped at the chance.

They learned that defining and measuring good medicine is harder than it seems. And after a pilot project involving some 30 OB/GYNs, the Dallas County Medical Society is back to the drawing board, revamping the data collection instrument.

"We wanted to make sure that the criteria that were used were really an indication of the good practice of medicine, rather than just having something that is easy to survey," says Robert Gunby, MD, a Dallas OB/GYN and president of the medical society.

But as the physicians attempted to analyze variations in office-based prenatal practice along with birth outcomes, the form became lengthy.

"It was so complex and so difficult to answer and get correct information that we felt that it was really not going to be utilized by most physicians," says Gunby. "It was taking 45 minutes to an hour [to code each patient]. As a practical matter, that was not going to work."

Perhaps the greatest accomplishment of the project so far has been the relationship among the business group, health plans, hospitals, and physicians. A separate group of physicians is just beginning work on cardiovascular measurement.

"We spent two years building a relationship of trust with the physicians," says Marianne Fazen, PhD, executive director of the Dallas-Fort Worth Business Group on Health, which includes such major employers as Texas Instruments, JC Penney, and Exxon. "We’re really confident now that we’re partners in this."

First step: Building trust with employers

The Health Care Value Initiative began in 1995 with a focus on employers’ concerns, which they identified, in order, as: pregnancy and childbirth, cardiovascular disease, musculoskeletal problems, mental health and substance abuse, and cancer.

"The employers are interested in value-based purchasing," says Fazen. "We want to know we’re getting the best quality for the right price."

But the coalition also wanted to approach the project as a true collaboration, without any negative or potentially punitive overtones. The baseline results of indicators for the area’s 45 hospitals are expected in December, including cesarean rates, infection rates, uterine rupture, and unplanned neonatal readmissions. Each hospital received its own measures, but the comparative results were blinded to the coalition and others.

"We wanted to assure the hospitals and the physicians that we want it done right," says Fazen. "By agreeing to mask the baseline information, [we’re giving] them the opportunity to correct problems. Down the road, all the hospital comparative reports will be made public."

The coalition moved even more cautiously with the physicians, who sought to measure variation in office-based practice. The project encompassed routine blood pressure and abdominal measurements as well as treatment of problems such as preeclampsia and gestational diabetes.

Physicians will decide whether to release their rates to health plans, employers, or consumers. "It’s a completely market-based initiative," explains Fazen. "Ultimately, the physicians who choose not to participate will be answerable to their patients and the community as to why they didn’t."

Physicians struggle to define quality

For physicians, the Health Care Value Initiative prompted discussions about what constitutes quality care and how to measure it.

It was easy to find fault with commonly used indicators. "Everybody uses C-section rates. Most of us feel C-sections are not really an indication of quality of care," says Gunby. "It’s really more a measure of physician attitude about how much risk aversion they have."

The physicians were interested in learning about differences in their office-based prenatal practices. But, again, defining a standard of quality was difficult. "We can’t come to agreement on what significance it is whether you have 12 prenatal visits or six," says Gunby.

Nonetheless, a core group of physicians moved forward with a data collection tool, gathering data retrospectively from the 30 most recent charts.

In some regards, the project confirmed physicians’ fears about the validity of "report cards." For example, Gunby noted that in some cases, the hospital data attributed a patient’s cesarean to one doctor when it was actually performed by a colleague on duty at the time.

The quality of the data also depended on the experience of the person filling out the form. Gunby had his secretary complete the form, but she didn’t understand all the items, he says. "She put down some of the wrong answers, [such as saying] I didn’t do genetic counseling, which I do," he says. "Since it’s a pilot program, that’s not critical. If that were being done for real and that data were going to be released, I would be real upset."

Docs influence hospital measures, too

While some aspects of the Dallas project were discouraging, the physician input proved valuable. Gunby recalls that the vendor developing hospital indicators planned to risk-adjust the data based on socioeconomic information. In other words, since women with higher socioeconomic status tend to have more cesareans, the rates would account for that. There is no medical justification for that disparity, says Gunby.

"They were giving artificial risk benefits for something that had been erroneous to start with," he says. "That was giving people permission to do the wrong thing for the wrong reasons."

Gunby also has advice for physician groups that seek to design their own performance assessment projects: "Don’t get into such a complex research tool that it’s impossible [to implement].

"You just have to start somewhere, pick some indicators, and go with it and alter it as you go," he says. "Physicians are bad about wanting to do everything to perfection and sometimes that bogs us down."