Pass/fail system gets high marks for assessments

One provider used scores to set pay raises

There appears to be more to gauging nurse competency than evaluating clinical performance. Apart from meeting regulatory mandates, some hospitals are moving toward tying pay increases to well-defined, measurable bedside skills.

At least one provider, Meridian Hospitals in Neptune, NJ, used this approach to bring parity to its revised pay schedule and differentiate its staff’s proficiency levels following a complex merger.

In 1997, the combination of three local institutions resulted in Meridian, a hospital system with 1,500 RNs and 7,000 employees described by officials as "a consolidation among equals."

But as soon as the newly formed entity agreed to grant performance-based pay, the questions arose: What would the compensation be based on? And how would performance be judged?

"That’s always where questions arise," observes Donna Sue Gloe, RN, EdD, a clinical analyst at St. John’s Regional Health Center in Springfield, MO.

Problems lie in scoring assessments

Although St. John’s is not affiliated with Meridian, managers there have faced the same challenges when trying to appropriately assess nurse core competencies.

Devising reliable written or observational tools is one thing. But adequately scoring and interpreting test results presents a series of different challenges, nurses say. Here’s what some veteran managers advise:

Assuming that the assessment tool being used reliably evaluates nurses on key skills, Gloe and other nurse educators advocate a pass/fail system. "Either you know the skill or you don’t," says Gloe, who sits on the advisory board of the Journal for Nurses in Staff Development, a research publication read by nurse educators.

In general, a pass/fail system works, but only if the assessment tools and the models on which they are based are relevant and intelligently devised, according to Pat Nolan, RN, an education specialist at Genesis Medical Center in Davenport, IA.

On written exams, a passing grade tends to be more arbitrarily set than in observational testing. But here, as well, nursing officials should set the bar at a reasonable height, experts say.

At St. John’s, which administers a written test and an observational assessment, a nurse has to get 80% of the answers correct on the written test to get a passing grade, Gloe says. Anything below that is a failing grade.

Descriptive terms may be substituted

For many providers, a pass/fail system has been sufficient. But others have chosen a variant by substituting the terms "met" or "unmet" for pass or fail.

Descriptive terms such as "exceeds," "meets," or "does not meet" also have been used and are considered by some as more relevant and less either/or in describing proficiency, says Richard Hader, RN, PhD, CCRN, chief nurse executive at Meridian Hospitals.

Design an effective assessment tool and the scoring system is likely to mean something more, experts conclude.

Whatever the scoring system, nurse educators insist that the assessment process should not be a punitive ritual, but a helpful experience for nurses.

In most cases, testing helps to reveal areas where a nurse needs to improve. The results should then lead to remedial course work or, in the case of a new nurse, time spent with a preceptor, says Gloe.

Nurses who receive a poor assessment the first time out should be given reasonable time and adequate support in demonstrating an acceptable skill level on the next assessment, Gloe adds.

Most nurses contacted for this series favored adopting sound, established models and criteria as the basis for assessment tools.

Nolan says her facility based its assessment on the Nursing Intervention Classification (NIC), a system developed a few years ago by Iowa nurses and favored by many in critical care there because it breaks down nursing interventions into distinct patient-care areas.

They also like it because it is behavior-based rather than solely knowledge-based and tests nurses on 433 different decision-making activities.1 (The March 2000 issue of Critical Care Management will contain additional information on the Iowa NIC system.)

At Genesis, managers have eschewed written tests and use the NIC to identify 50 observable core competencies relevant to the unit’s workload. "Each category [in the NIC], such as infection control or fluid monitoring, describes a single expected behavior, which is observable and measurable," Nolan says. (For more details on assessment tool models, see the first two parts of this series in CRM, December 1999, p. 133; and January 2000, p. 5.)

Pay raises linked to competencies

In 1998, Meridian nurses developed an RN position description and a performance appraisal tool designed to provide "a reliable, objective measurement" on which to base merit pay.2

Accordingly, RNs were to be evaluated on how well they performed in their identified roles as practitioners, educators, and leaders.

Each role was defined in a position summary, and a series of 28 role indicators, spanning the three roles plus a fourth category representing generic competencies, was devised to measure a nurse's essential technical skills.

For example, as a practitioner a nurse had to demonstrate a level of clinical practice that "accurately reflects board of nursing code of ethics, department of health requirements, and institutional policies and procedures."

The nurse as educator was measured by success "in teaching patients and families based on identified health need." And as a leader, a nurse had to demonstrate "the mission, vision, and values of the health system through professional excellence and personal concern."2 (Additional role indicators are listed on p. 21.)

To aid evaluators, standards that resemble benchmarks were developed for each indicator. For example, for the leadership indicator the standard included: "Exercises calm behavior and uses professional judgment while working in a team approach during emergency situations."

Generic competencies included essential, technical, and bedside skills expected of all nurses and were viewed as either met or unmet.

Nurse managers in each department received the tool along with written instructions. Each RN was then assessed by whether he or she exceeded, met, or did not meet the standard for each indicator based on observed and verified written performance in the unit.

The above chart compares appraisal levels for two role indicators: leader and educator. Afterward, each indicator was scored on a three-point scale (exceeded = 2 points, met = 1 point, did not meet = 0 points).

Standardized tools help in scoring

For Meridian officials, calculating a composite score taken from the total scores of the three role components was important because it formed the basis for the nurse pay increases, according to Hader.
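The arithmetic the article describes, scoring each indicator on a three-point scale and summing the role totals into a composite, can be sketched in a few lines of Python. The role names, indicator ratings, and number of indicators below are hypothetical illustrations, not Meridian's actual appraisal data.

```python
# Hypothetical sketch of the three-point indicator scoring and the
# composite score described in the article. Ratings per role are
# illustrative only.

POINTS = {"exceeded": 2, "met": 1, "did not meet": 0}

def role_score(ratings):
    """Sum the point values for one role's indicator ratings."""
    return sum(POINTS[r] for r in ratings)

def composite_score(appraisal):
    """Composite = total of the role component scores."""
    return sum(role_score(ratings) for ratings in appraisal.values())

# Example appraisal for one nurse across the three roles.
appraisal = {
    "practitioner": ["met", "exceeded", "met"],        # 1 + 2 + 1 = 4
    "educator":     ["met", "met"],                    # 1 + 1     = 2
    "leader":       ["exceeded", "did not meet"],      # 2 + 0     = 2
}

print(composite_score(appraisal))  # prints 8
```

A higher composite simply ranks nurses relative to one another for merit-pay purposes; as the article notes, the hospital did not disclose how scores mapped to actual pay amounts.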

The scores for each nurse were aimed not so much at assessing clinical quality as at determining relative pay levels, according to a study later released by the provider. The hospital did not disclose how the scores related to actual pay.

Hospitals have various objectives for their scoring systems. But whether the system is pass/fail or some other criterion, interpreting core competencies among nurses in most cases directly reflects performance expectations with patients.

And using standardized language and testing models helps evaluators get a more objective view of performance, says Nolan. "They help in setting the bar at the right place."


1. Titler MG, Bulechek GM, McCloskey JC. Use of the Nursing Interventions Classification system by critical care nurses. Crit Care Nurs 1996; 16(4):38.

2. Hader R, Sorensen ER, Edelson W, et al. Developing a registered nurse performance appraisal tool. JONA 1999; 29(9):26-32.