Meeting more explicit peer review imperatives
Includes ongoing monitoring and individual review
By Patrice Spath, RHIT
Brown-Spath & Associates
Forest Grove, OR
(Editor's note: This is part one of a three-part series.)
Physician peer review has been an essential part of hospital quality since the American College of Surgeons first established minimum hospital standards in 1918. To this day, an effective peer review process remains important. The 2007 Joint Commission (TJC) Medical Staff Standards reinforce the need for an ongoing, objective, and fact-based process. This concept is the foundation for the professional practice evaluation requirements in standard MS.4.40. This standard does not necessarily create new requirements; however, it does delineate in more specific language exactly what is expected of hospitals and the organized medical staff. The more explicit requirements are intended to reduce variation in ongoing monitoring activities among accredited facilities and ultimately strengthen the link between peer review and performance improvement.
To meet this standard, the medical staff must engage in two activities: ongoing monitoring of performance patterns and individual case review. The goal of these reviews is to ensure that physicians have the necessary skills, knowledge, and attitudes to care for patients. Although a physician's training is validated at the time of credentialing, it cannot be assumed that this training makes the physician competent. Competence can be assured only through demonstration of consistently appropriate patient care.
Ongoing monitoring involves the collection and analysis of departmental- and physician-specific performance measurement data, including both process and outcome measures. Process measures evaluate whether the right things are being done; outcome measures evaluate the results of patient management practices. The Joint Commission standards allow hospitals some discretion when selecting measures for physician performance evaluations; however, there are some "must do" requirements. For example, the standards call for the organized medical staff to regularly evaluate practitioner performance by reviewing the use of blood and blood components, the appropriateness of operative and other procedures, medication use, significant departures from established patient care practices, the use of autopsies, significant adverse and sentinel events, and patient or family complaints. Physician compliance with the safe practices recommended by The Joint Commission National Patient Safety Goals also should be evaluated; for instance, physicians' use of non-approved abbreviations should be monitored. Other relevant performance data may be derived from infection control, risk management, and utilization review activities.
Data already being gathered for The Joint Commission- and CMS-required measures often are also used to identify physician departures from established patient care practices. However, it is not sufficient to rely solely on these measures to evaluate the performance of every practitioner. The measures required by external groups cover only a few conditions, and some physicians never care for patients with those conditions. Required measures must therefore be supplemented with other discipline-specific performance data.
Each medical staff department should identify the set of performance measures that will be used to assess the performance of physicians in that department. Some of the measures may be the same in every department.
Other measures will be department- or discipline-specific. For example, the department of surgery might also measure the rate of physician compliance with Surgical Care Improvement Project (SCIP) criteria, surgical injuries to a body part, unplanned procedures not noted in the patient's consent, wrong-patient/wrong-site surgeries, and the incidence of postoperative deep vein thrombosis. In addition, the department of surgery may have discipline-specific measures, such as those used to evaluate the performance of cardiac surgeons who operate on adult patients.
In some hospitals, each medical staff department has a quality or peer review committee that is responsible for reviewing performance results and initiating further investigations as needed. Other hospitals have one medical staff quality oversight committee chaired by the medical director or another high-ranking medical staff leader. This oversight committee usually comprises the chiefs of each major department (e.g., medicine, surgery, pediatrics, obstetrics and gynecology, behavioral health, emergency services). These physicians may also serve on the credentials committee. The group can also include non-physician members, such as the director of nursing services and the vice president of operations. The oversight committee conducts an initial review of the performance results from all departments. Trends requiring further review are referred to the relevant department for in-depth investigation and appropriate action.
To meet Joint Commission standards, the medical staff must periodically evaluate performance results to identify variations that warrant further examination. Each department's performance on the selected measures should be evaluated at least quarterly; departments in low-volume hospitals may need to wait for six months of data before the results are meaningful. Physician-specific results should be evaluated at least annually. If the medical staff reviews physician performance data only every two years, at the time of reappointment, opportunities for prompt intervention may be lost. Timely evaluation of measurement results is important so that an in-depth evaluation can be rapidly initiated if department or physician performance is found to vary from expectations. Figure 1 shows a physician performance profile for measures regularly evaluated by the OB-GYN department. Aggregate department-wide data are reviewed quarterly, and physician-specific results are reviewed annually.
It is not sufficient merely to report performance data for each physician. The medical staff must also define "trigger points" for determining when further investigation is warranted. For instance, the performance profile in Figure 1 shows that the maternal patients of physician #103 are more than twice as likely to visit the emergency department or be readmitted within seven days compared with the patients of other physicians in the department. Without a predefined trigger point for this measure, the committee reviewing the results has no clear direction on what action to take. For some physicians the committee may conduct a more in-depth review, whereas for others no action may be taken. To eliminate inconsistencies in the peer review process, it is important for the medical staff to delineate measure "trigger points" that signal the need for further action.
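The trigger-point logic described above can be sketched in code. The following is a minimal Python illustration, not an actual scorecard system: the physician identifiers, the rates, and the "twice the department rate" threshold are all hypothetical values chosen to mirror the physician #103 example.

```python
# Hypothetical sketch: flag physicians whose rate on a measure exceeds a
# predefined trigger point. All identifiers and rates are illustrative.

def flag_for_review(physician_rates, department_rate, multiplier=2.0):
    """Return the IDs of physicians whose rate exceeds the trigger point,
    defined here (for illustration) as a multiple of the department rate."""
    trigger = department_rate * multiplier
    return [pid for pid, rate in sorted(physician_rates.items())
            if rate > trigger]

# Illustrative 7-day ED-visit/readmission rates for maternal patients
rates = {"101": 0.03, "102": 0.04, "103": 0.09}
dept_rate = 0.04

print(flag_for_review(rates, dept_rate))  # → ['103']
```

The point of the sketch is that once the trigger point is written down explicitly, every physician's result is judged by the same rule, which is exactly the consistency the standard is after.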
Gathering data for performance reports is time-consuming for the quality department. Some information may be readily available in the facility's electronic information system. Other data, such as the appropriate use of medications or compliance with established patient care practices, often must be gathered by reviewing patient charts. Even when this information is collected from active records (while the patient is still hospitalized), the data must be aggregated retrospectively to create department- and physician-specific performance scorecards. In some instances, the data may not be available until several weeks after patients are discharged. Ideally, the lag time between patient discharge and reporting of performance results is no more than three months so that any new or recurrent problems can be quickly identified and investigated.
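The retrospective aggregation step can also be sketched briefly. The Python below rolls hypothetical case-level chart-review results up into per-physician compliance rates; the field names (physician_id, measure, criterion_met) and the abbreviation-use measure are illustrative assumptions, not drawn from any particular chart-abstraction system.

```python
# Rough sketch: aggregate case-level chart-review findings into a
# physician-specific scorecard. Field names and data are hypothetical.
from collections import defaultdict

def build_scorecard(cases):
    """Aggregate case-level pass/fail results into per-physician,
    per-measure compliance rates."""
    totals = defaultdict(lambda: [0, 0])  # (physician, measure) -> [met, reviewed]
    for case in cases:
        key = (case["physician_id"], case["measure"])
        totals[key][1] += 1
        if case["criterion_met"]:
            totals[key][0] += 1
    return {key: met / reviewed for key, (met, reviewed) in totals.items()}

cases = [
    {"physician_id": "103", "measure": "approved_abbreviations", "criterion_met": True},
    {"physician_id": "103", "measure": "approved_abbreviations", "criterion_met": False},
    {"physician_id": "101", "measure": "approved_abbreviations", "criterion_met": True},
]
print(build_scorecard(cases))
```

In practice this aggregation would run on a quarterly cycle so the resulting scorecards stay within the three-month reporting lag discussed above.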
The second element in physician peer review is evaluation of individual cases. Techniques for selecting cases for review and conducting fact-based evaluations will be covered next month in part two of this three-part series on medical staff peer review.