Medical staff: What should trigger a focused review?
Be sure your data are trustworthy
An operation done on the wrong body part is an obvious red flag calling for close examination of a practitioner's competence. But what about a verbal complaint from a nurse who works closely with that physician? Or what if length of stay is increasing for that physician, but only slightly?
The Oakbrook Terrace, IL-based Joint Commission's new 2007 medical staff standards require you to define the "triggers" that indicate the need for a "focused evaluation" — an intense assessment of a practitioner's competence.
"We don't have a definition of triggers — organizations are free to define them however they want," says John Herringer, The Joint Commission's associate director of standards interpretation and lead interpreter of medical staff standards.
He notes that Element of Performance (EP) 2 requires you to have criteria for conducting the evaluation once performance issues are identified, while EP 5 requires you to identify the triggers that are evidence of a performance problem.
"At EP 2, you may be concerned but you are not sure — it's a little more nebulous. The trigger is a little bit more definitive or exact, in terms of this is a performance problem," says Herringer. "I think triggers are a little bit more finite."
The two most common triggers are sentinel events and infection rates exceeding a certain predefined level. "If there is an unacceptable rate, that's an automatic trigger and you are going to look to try to see why this is happening," says Herringer. "You may end up finding there is no problem at all — the physician may have had just a lot of trauma or seriously compromised patients and nothing is wrong with his technique."
Standard MS 4.40 requires you to collect ongoing performance data for all practitioners. "You are collecting everything, not just the negative or outlier data. People tell me that we only look at them if certain things happen, but I say no, we want you to collect data on good performers, too," says Herringer. "The other thing is, zero data are data. The fact that somebody is not performing in your organization is an important thing to know."
This fact in itself might be included as a trigger for a focused evaluation. "You might want to say, 'If a physician hasn't done a certain procedure in x number of months, we are going to do a focused review of the first one, two or three procedures or admissions that he does do, because we are concerned that he's not coming here,'" Herringer says.
The data you collect for MS 4.40 may identify triggers that say a provider needs a focused review, but they might also tell you the opposite. "It's just as important with 4.40 to know that somebody is an exemplary practitioner as it is to know that he is a problematic practitioner," says Herringer. "You might be able to extrapolate some of his practice patterns into your clinical practice guidelines for other people to use."
Your organization also will need to define how much of a trend triggers a focused review, such as length of stay increasing. "The key is really understanding that you are collecting all kinds of data, and then funneling those data from 4.40 into 4.30. Based on what you get on 4.40, you might need to do a focused review," Herringer says.
Don't wait for bad things to happen
"When you say, 'It's time to do a focused review,' my question is: How did you get there in the first place? Because of a bad outcome in the OR, which may or may not have been bad?" asks Doug Elden, chairman of National Peer Review Corp. in Northbrook, IL, which conducts external peer review for hospitals. "The new standards are welcome, because most hospitals are unaware of the data they need to conduct peer review."
The real problem is that many hospitals are reviewing only bad outcomes, and are not reviewing practice patterns, says Elden. "A hospital needs to work with its IT [information technology] system to find clinical screens that may indicate quality-of-care issues, and not just review bad outcomes," he says.
Hospital departments need to establish appropriate clinical screens or triggers for each specialty and subspecialty to identify cases for peer review, advises Elden. Cases flagged by the screens should then be reviewed by the hospital's peer review coordinator, under the direction of the chair of the peer review committee, to determine whether the screens identified those cases appropriately, he explains.
If the cases have been appropriately identified, and they are not false-positives, the peer review coordinator should refer the cases to the peer review committees, says Elden.
"By catching problems early through clinical screens and other methodology, you can educate physicians instead of disciplining them," says Elden. "It's easier to change a practice pattern if it's two or three cases vs. two or three hundred."
If hospitals are not reviewing practice patterns, benchmarks, and utilization data on a daily basis, and these data are not being funneled into the peer review committee, they will just be doing focused reviews when bad outcomes occur, says Elden.
Ideally, your hospital peer review system should have a data committee that works closely with your IT department. Elden gives the example of a hospital where doctors talked directly with the IT department for the first time. "They had never talked to their IT department before. The IT department was producing reams of reports and documents that the physicians didn't understand," he says.
If physicians and IT staff communicate directly through the structure of a peer review data committee, there is less chance of physicians questioning the data later on. "Physicians conducting peer review in a hospital setting must have complete confidence in the information provided to them before taking peer review actions," says Elden. "Without such confidence, most physicians are unlikely to jeopardize a colleague's career and livelihood, no matter how much they may suspect problems."
Physicians need to say to IT, "Here is what I need to make my decision," and IT people need to respond, "Here is what I can get you," and they need to arrive at a data set that is acceptable to the physicians, says Elden.
If peer review committees look only at bad outcomes, the committees will never identify cases when a practitioner is deviating from the standard of care without a bad outcome occurring. "If you cross the highway blindfolded and manage to get to the other side, it doesn't mean it was a good decision," says Elden.
Data may be contradictory depending on the source, which can cause distrust among physicians. "One hospital had internal data indicating that certain physicians were outliers; however, the physicians responded saying, 'My mortality rate is fine. [The Society for Thoracic Surgeons database] says I'm right in the middle. We don't believe the hospital data,'" says Elden. "In many instances it depends on how procedures are coded."
Hospitals should gather data in a uniform fashion. "In one hospital, one physician group was filling out their own STS forms, and for the other physician group the forms were being filled out by nurses," says Elden. "Data have to be collected uniformly, they have to be beyond reproach, and everybody has to buy into them."
In addition to the triggers, Elden says not to ignore the "whispers," meaning complaints verbalized in hallways or offices that aren't put into writing. "These often are not acted on, but if somebody makes a verbal complaint to an officer of the medical staff or to administration, I think you are on notice and you have a legal obligation to investigate," says Elden, who also is an attorney.
If written complaints go into your peer review system, so should verbal complaints, says Elden. Have the peer review coordinator pull the cases, see if there is a problem, and put this in writing, he recommends. "Then send your clinical people in to take a look at the particular cases," he says. "There may be a lot of dead-end streets, but if you do find a problem, it may be straightened out by educational peer review rather than disciplinary peer review."
How to define triggers
So how should organizations define triggers? "That is the $64,000 question," says Christina W. Giles, CPMSM, MS, president of Medical Staff Solutions, a Nashua, NH-based consulting firm specializing in assessment and development of medical staff organizational structures in the hospital environment.
"There are some obvious ones, like involvement in a sentinel event, or the same or similar issue happening more than once," says Giles. "But the onus is really on each organization to define what their concept of quality of care is."
For example, if clinical practice guidelines have been put in place for treatment of pneumonia patients and they are not being followed, the practitioner must document why he or she chose not to follow the guideline, says Giles.
"A trigger could be a trend of not following adopted clinical practice guidelines and not providing supporting documentation for the decision not to follow the guideline," she says.
Medical staff also will have to establish acceptable thresholds, such as performance of a particular number of procedures with less than a certain percentage complication rate, which might be 1%, 2%, or 3%, says Giles. "They must define what the acceptable rate will be."
At Denver-based Catholic Health Initiatives, each of the 73 hospitals in the system has some latitude in determining its own triggers for an intensive review of a member of the medical staff, says John Anderson, senior vice president and chief medical officer.
"This latitude is important so hospitals can select triggers and methodology based on the availability of technology, resources, services offered, and the ability of the organization," he says.
Based on reviews of practitioners' individual cases over time, several hospitals have determined that triggers can include any number of factors, says Anderson. Any single egregious case can act as a trigger, but other criteria also can serve as triggers within a time frame determined by the medical staff and based on the volume of a practitioner's admissions, he says.
He gives the examples of two cases where care is rated as inappropriate, four cases where physician care is determined to be controversial or inappropriate, or four cases rated as having documentation issues, regardless of the care rating.
The shift toward ongoing evaluation of clinical practice rather than episodic reviews will "truly advance patient safety," says Anderson. "What's more, continuous evaluation to ensure evidence-based practice without taxing current resources necessitates a greater dependency on electronic documentation and reporting, coupled with better physician alignment for participation in quality initiatives."
[For more information, contact: Douglas L. Elden, Chairman, National Peer Review Corp., 1033 Skokie Boulevard, Suite 640, Northbrook, IL 60062. Phone: (847) 480-8800. Fax: (707) 988-8800. E-mail: firstname.lastname@example.org. Web: www.nationalpeerreview.com.
Christina W. Giles, CPMSM, MS, President, Medical Staff Solutions, 32 Wood Street, Nashua, NH 03064. Phone: (603) 886-0444. Fax: (810) 277-0578. E-mail: email@example.com.]