Using physician performance data to spur competition among peers is a proven quality improvement strategy, but it has been applied often enough now to reveal both its strengths and its weaknesses. Showing a doctor where he or she stands in comparison to others can be effective, but a ham-handed approach can backfire and create problems for everyone involved.
The latest confirmation of the strategy’s usefulness came in a recent Journal of the American Medical Association study, which found that electronic “behavioral interventions” comparing a physician’s performance to that of others were effective in decreasing inappropriate antibiotic prescribing. (See more on that study later in this issue.)
Hospitals can use the strategy on a departmental or physician group level before addressing individual performance, says Richard E. McClead Jr., MD, associate chief medical officer at Nationwide Children’s Hospital in Columbus, OH. McClead also is co-editor-in-chief of the journal Pediatric Quality and Safety. He cites an example of working with the gastroenterology department to improve management of celiac disease, which includes a number of process measures.
At McClead’s hospital, the quality team initially tracks how the section as a whole is performing on a particular measure, to get the group engaged in the process and outcome measures that need to improve. The team then breaks the information down by physician and provides it to the entire section, but without names attached. Each physician receives a coded letter or number to match with the posted data, so that each can identify his or her own scores and see how they compare with the others.
Resistance is to be expected in the early stages, with doctors questioning the validity of the data. (See the story later in this issue for more on that initial resistance.) That resistance is likely to persist unless the hospital promotes transparency, not just in this improvement effort but throughout the hospital, McClead says.
“We see over time that they gain confidence in the process and then we can reveal each other’s data and be very transparent with it,” McClead says. “But my experience has been that gaining their confidence is difficult to do in an institution that does not have a culture of transparency. Doctors don’t like to be an outlier, so that’s a real motivation, but if you don’t have a culture of transparency they start questioning the validity of the data and respond with all the excuses for why their scores are lower.”
Show Validity of Data
It is important to acknowledge that physicians treat patients of different acuity, and that other factors also can affect a quality score, McClead says. But if the data are presented in the most transparent way possible and with an emphasis on quality improvement, most physicians will accept that the data are valid. The hospital must stress that the scores are not a way to find “bad” doctors and punish or embarrass them, but rather an effort to help them improve, he says.
“We had one outlier physician who challenged the data at first, and then when he saw everyone’s data he realized he could improve his performance,” McClead says. “He went to the QI leader in his department, sat down, shut the door, and said ‘OK, tell me what I’m doing wrong here so I can fix this.’”
Physicians are driven to change their performance primarily by two factors: competition and money, says David Friend, MD, MBA, chief transformation officer and managing director of the Center for Healthcare Excellence & Innovation with BDO USA, a Chicago-based consulting company. He previously was chair of the clinical affairs committee at the University of Connecticut in Storrs.
Friend used both in quality improvement efforts at the university, which includes the UConn John Dempsey Hospital and the UConn School of Medicine. Efforts to improve outcomes often involved encouraging competition between individual physicians and also against other institutions. Physicians are, by and large, a very competitive bunch because they had to compete to get good grades, get into medical school, earn the best internships, and stand out among their peers, Friend notes.
Public Recognition Matters
One effort began with declaring that UConn John Dempsey was going to be the safest hospital in the state and comparing outcomes data to other hospitals. That created competition among the hospital’s physicians on a hospital level, with UConn’s physicians working together to beat the other facilities. That was successful, and the physicians were proud to say they worked at the safest hospital in the state.
Awards and recognition are important to physicians, much more so than many hospital administrators realize, Friend says. It is easy to assume that physicians are highly accomplished, well-paid professionals who don’t need a pat on the back or a little plaque to feel good about themselves, he says. But they do value those gestures.
“I don’t think most people realize that, and it is absolutely essential. Physicians are driven by a desire to excel and they want that recognition. They crave it,” Friend says. “Peer recognition means everything to physicians, and if you don’t include that, you are omitting one of the most powerful tools in a quality improvement program.”
Failing to emphasize the positive with praise, incentives, and recognition will make the effort look punitive, as if the hospital is hunting for failures and encouraging physicians to become “RVU slaves” who work to produce good data instead of good medicine. Friend recalls working with the hospital’s urologists to determine the optimal amount of mesh to use in certain procedures. There was wide disparity: some physicians used 10 times as much mesh as others in the same procedure, with no difference in results.
“When we said we were going to give a prize for the best outcome, including the proper use of mesh, immediately the worst performer became the best performer,” Friend says. “This motivation to excel is far more effective than punitive systems or these overloaded programs with hundreds of metrics that nobody understands.”
Financial rewards also make a difference, Friend says. The reward doesn’t have to be huge to have an effect. Even a modest financial reward tells the physician that the hospital values the efforts to improve and acknowledges that those efforts save money for the hospital, he says.
Choose Objectives Carefully
Friend cautions that physicians must be held responsible only for factors they can control. Including performance measures they cannot control will only frustrate them and turn the whole experience into a negative one. The measures also must be completely objective; any subjectivity will prompt physicians to argue that the data are inaccurate.
Also, be careful not to overburden physicians with too many goals at once. Determine which factor is most essential to the outcome improvement and have them focus on that, Friend says.
“If I tell a world-class runner to focus not just on getting to the finish line as fast as he can, but also to smile for the cameras, count the pebbles on the track, and sing a song, he isn’t going to win the race,” Friend says. “People do best when they focus on one thing.”
Physicians also need sufficient notice of any initiative that will affect their public image, their standing within the hospital, or their compensation, says Karen Meador, MD, managing director of the BDO Center for Healthcare Excellence & Innovation. Meador is working with one-third of the New York state providers participating in the state-sponsored Delivery System Reform Incentive Payment (DSRIP) Medicaid reform efforts, which include physician alignment and care coordination incentives.
“If something is imposed on them and they feel like they didn’t have an opportunity to influence the design or what quality metrics were chosen, there is going to be a lot more resistance,” Meador says. “Bring the physicians together early on and explain what you’re planning, here’s why, how it’s going to benefit them, and get their input. Let them help determine what metrics are most relevant.”
Patient Satisfaction Complicates Metrics
When selecting metrics and goals, Meador says you have to be careful not to omit those that seem obvious. If a certain metric is crucial to outcomes but seems so obvious that you don’t include it in the initiative, you can inadvertently cause physicians to neglect that metric or even sacrifice it to make the stated metric better, she says.
Quality initiatives based on physician competition must acknowledge that someone will always rank as the lowest performer, Meador says. When everyone is providing high-quality care, the hospital should make clear that a low ranking does not mean poor-quality care.
Patient satisfaction measurements can complicate physician outcome measures, Meador notes. In some cases, doing the right thing in terms of good medicine and best practices will not make the patient happy. Examples include refusing to prescribe antibiotics for a common cold or declining to prescribe the narcotics a pain patient requests.
Any physician quality initiative must account for this conflict, Meador says. Otherwise, physicians get frustrated and lose confidence in the quality improvement effort.
“I know a number of physicians who say they feel this struggle because the patient says the doctor doesn’t appreciate his pain and won’t prescribe what he wants,” Meador says. “They debate whether to stick to what they think is right or risk having the patient give a really low satisfaction score. When the incentives for a good ranking are really strong, that’s difficult.”
- David Friend, MD, MBA, Chief Transformation Officer and Managing Director, Center for Healthcare Excellence & Innovation, BDO USA, Chicago. Telephone: (212) 404-5562.
- Richard E. McClead Jr., MD, Associate Chief Medical Officer, Nationwide Children’s Hospital, Columbus, OH. Telephone: (614) 355-0495.
- Karen Meador, MD, Managing Director, BDO Center for Healthcare Excellence & Innovation, BDO USA, New York City. Telephone: (214) 259-1477.