Regardless of the goal of a quality improvement project, most successful initiatives share common threads. These common factors should form the foundation of any quality improvement effort and help tailor how the effort is carried out.

One common thread is including frontline staff all the way through, from ideation to design and the pilot project, advises Jeff Terry, MBA, FACHE, CEO of Clinical Command Centers with GE Healthcare Partners in Dallas. It is a mistake to try to implement a quality initiative only from the top down, which leaves frontline staff feeling ignored and simply ordered to change their ways.

“Whatever the quality improvement initiative is, when we scale it to the organization it needs the nuance that the frontline will bring to it, and it needs the clear endorsement that comes from having frontline staff engaged in it from the start,” Terry says.

There also must be clear alignment between the project at hand and other quality initiatives in the organization. “You will sometimes see people rolling their eyes when you announce a new initiative because they think this is just the latest thing we’re supposed to do. They have initiative fatigue,” Terry reports. “The way to address initiative fatigue is with clear alignment of this project with the overall goals of the organization and the other ongoing initiatives so it appears as a continuation rather than one more thing you’re telling them to do.”

Quality leaders have to work at avoiding initiative fatigue, Terry says. It is easy for frontline staff to feel overwhelmed by quality initiatives that come from different sources and that may sometimes seem redundant, he notes.

“They can feel like they addressed this issue already when another department came in with a new dashboard. They got their numbers up, and they got the pizza party. Now, you’re coming in with something that seems to cover the same ground and asking them to get on board with it,” Terry explains. “They don’t have the time or, frankly, the interest to sort it out. If the messages on these quality initiatives aren’t well sorted out, they’re going to pretty quickly check out, push it aside, and go back to their other duties.”

Acknowledge Extra Work

Remember that any quality initiative amounts to extra work for staff, at least in the beginning when leaders ask them to attend education and training sessions, Terry stresses. It can be helpful to acknowledge that fact, even if the increased workload cannot be avoided. Staff also know that not all quality initiatives work out, even when leaders ask employees to bet that this one will be a winner. Consistency and follow-through are important, Terry recommends. Without them, people will lose faith in the project and become reluctant to commit if leaders do not seem all in.

“Three months later, are we still talking about it in the same way, with the same leaders behind it? If the answer is yes, people will start to buy in. If the answer is no, they will just wait it out,” Terry says.

Monitoring the impact of a project also is vital. Hospitals are finding ways to use data in real time to drive the impact of an improvement effort, Terry says. “Instead of just using the data to say our trend is up or down, we look at that data and say if we don’t do something now, we’re going to have a real problem in an hour or 24 hours,” he explains. “It’s about using that data in the moment. That’s probably the biggest change in the way quality improvement initiatives are being driven lately.”

Address Executive Concerns

Most healthcare organizations are consistent in their focus on cost, quality, and market performance, says Julie Cerese, PhD, RN, MSN, group senior vice president, Performance Management and National Networks, for Vizient, a company based in Irving, TX, that offers healthcare performance improvement assistance.

To increase the likelihood of buy-in and eventual success with the effort, any quality improvement project should address those concerns at least broadly, and sometimes in the specific ways that are relevant to the organization, she says.

“We try to drive cost, quality, and market performance in every project so that we can address the needs of the chief financial officer, the chief operations officer, chief nursing officer, and the chief quality officer,” Cerese says. “This is a very systematic approach to defining the project plan.”

That principle was applied in several quality improvement programs through the Vizient Performance Improvement Collaboratives Program, which has been used in more than 1,300 hospitals to reduce cost, readmissions, and staff turnover, Cerese notes. The collaboratives highlight some of the common factors in successful quality improvement initiatives because they require hospitals to work together toward the same goals, she explains.

Share Data and Experience

In one project, 21 hospitals sought to implement the CDC Guideline for Prescribing Opioids for Chronic Pain, which required better use of standardized prescribing guidelines, more patient monitoring, and the use of electronic medical records (EMRs) to reduce opioid usage. Hospitals focused on optimizing inpatient opioid use and opioid prescribing in the ED while implementing naloxone programs, among other tactics. Cerese says that the experience highlights another factor of successful quality improvement programs: the willingness to share data and experiences, and to learn from others.

“Organizations come together and join a collaborative because either they have an initiative underway in their organization and they want to learn from others, or they’re at the very beginning of undertaking an initiative and use the format from the collaborative to kick-start their effort,” Cerese says. “A collaborative is a very prescribed effort in that we are asking organizations to develop a charter that defines where your biggest opportunities are, your biggest gaps, where you want to implement change, and how you will measure that success over time.”

Collaborative participants study best practices and develop an implementation plan that should be operationalized quickly, Cerese explains. Members gather every two weeks to discuss implementation progress and exchange ideas. That frequent discussion and feedback is another common thread in successful programs.

“We used to do it monthly, but we increased it to every two weeks. The members told us that monthly was too long to wait for getting back together and evaluating progress,” Cerese reports. “If we do it on a two-week cycle, members stay more attentive in the effort. Over a month, it can kind of fall off your radar.”

Measures of Success Vital

Every project also has measures of success, Cerese says. In an orthopedic collaborative, members are asked to collect detailed data on the percentage of patients receiving opioids in the first two days postoperatively and after discharge.

“We measure those on an ongoing basis, not just during the collaborative, which can last four to six months,” Cerese says. “We continue to measure because sometimes it takes a little longer for the improvement efforts to take hold. A lot of times, these data are embedded in the EMR. Getting that data out of the system can take a little more effort than you expect.”

Another recently completed collaborative addressed managing serious illness. The goal was to support patients in their decision-making to meet their goals of care, Cerese says. The collaborative had to avoid jumping right into implementation before knowing the targets, illustrating the need for prioritization in quality improvement.

“We utilized a steering committee of subject matter experts from across the country to determine the best practices and how to implement them. The first strategy was around identifying the patients who needed to have this conversation,” Cerese explains. “We asked members to identify methods to flag patients who needed this attention. Some of the solutions were technological, ways to flag cases based on their presenting condition, but there are lots of other patients who should be included but who don’t have a condition like cancer that might be an obvious flag.”

The collaborative identified patients by asking physicians, “Would you be surprised if this patient died in the next 12 months?” The question had been suggested in the literature on identifying patients with end-of-life concerns, and there were data to support its use. The care process is determined in part by the answer to the question, and the collaborative developed guidelines for clinicians to use with patients who may be facing end-of-life decisions.

“We assembled a checklist of expected processes that should be put in place to guide clinicians in these discussions. Many of the members implemented them in small settings like a nursing unit,” Cerese notes. “In one nursing unit, we saw an increase of 53% of the patients who should be having goals-of-care discussions actually having them over a four-month period.”


  • Julie Cerese, PhD, RN, MSN, Group Senior Vice President, Performance Management and National Networks, Irving, TX. Email:
  • Jeff Terry, MBA, FACHE, CEO, Clinical Command Centers, GE Healthcare Partners, Dallas. Phone: (312) 775-1700.