By Melinda Young, Author

With just a couple of months remaining to prepare for the revised Common Rule, the question for human research protection programs (HRPPs) is: What needs to be done to make programs more efficient, faster, higher quality, and compliant?

The revised Common Rule’s chief aim is to modernize and strengthen HRPPs, making them more effective by reducing burden, delay, and ambiguity for investigators and by simplifying research oversight. (To view the rule, visit: http://bit.ly/2Gorspk.)

IRBs have been working toward the same goals for more than a decade, aided by a growing body of research into how to improve IRB operations and make the review process shorter and more efficient.

There isn’t one path, as IRBs across the country have found. But there are some good examples of best practices. For instance, one HRPP has developed metrics to determine which steps in the IRB review process could be changed to improve review time. Another organization uses a team that identifies areas in need of improvement and oversees changes.

These projects started with questions about their current processes.

“One of the most common questions we get from our investigators and study teams is about how long something will take to be reviewed and approved by the IRB,” says Brian Moore, MS, CIP, IRB director at Wake Forest School of Medicine in Winston-Salem, NC.

“We in the IRB understand there is a significant amount of variability in review time, based on a number of different factors,” Moore says. “These factors may depend on the department, whether or not the study is FDA-regulated, who is sponsoring the study, which board is reviewing it, and a whole host of factors.”

Based on this question, the organization launched a program to more accurately predict IRB outcomes by using advanced statistics.

“Our goal was to be able to ask some very simple questions of our study teams in order to give them a more accurate estimate of the time it would take to complete the review,” Moore says.

Salem State University in Salem, MA, had a different initial question: How could the IRB reduce the number of revisions needed for studies by new researchers?

“We had revision requests for 66% of proposals by students and faculty,” says Megan Williams, MPA, director of research administration at Salem State University.

The IRB reviews hundreds of proposals a year, including proposals from student investigators, she says.

Since tackling this issue and making changes, the IRB has reduced its revision requests to 25% of proposals, Williams notes.

“There has been a sharp increase in quality of proposals,” she adds.

With the revised Common Rule’s deadline looming, implementing process improvements is especially important.

“We’ve had a system of standing committees to look at ongoing needs and new needs as they arise for a long time — more than a decade,” says Lark-Aeryn Speyer, IRB senior associate regulatory analyst at the University of Michigan in Ann Arbor.

The University of Michigan has a change management process (CMP) that engages various stakeholders in developing educational and other IRB content and process improvements. (See story in this issue on creating a change management process.)

“Considering how important the changes are to everyone this particular year, we had to make a more intensive use of our existing processes and flex them somewhat,” she says. “We also thought this was a time when more people would be interested in having a good, robust multilevel change management process.”

When IRBs seek to change their processes, a good starting point is to collect data. The Wake Forest IRB collected data from 225 studies reviewed by the IRB, starting in July 2016. Metrics included time to approval, days the submission spent with the IRB, and days it spent with the study team.1
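As a rough sketch of how metrics such as "days with the IRB" versus "days with the study team" might be derived from a submission's event history (an assumed approach for illustration, not necessarily Wake Forest's actual method):

```python
# Hypothetical sketch: splitting total review time by who held the
# submission in each interval. The event log below is invented.
from datetime import date

# Each event records when the submission changed hands.
events = [
    (date(2016, 7, 1),  "irb"),    # submitted to IRB
    (date(2016, 7, 8),  "team"),   # stipulations sent back to study team
    (date(2016, 7, 15), "irb"),    # revisions resubmitted
    (date(2016, 7, 20), "done"),   # approved
]

def days_by_holder(events):
    """Sum elapsed days by whoever held the submission in each interval."""
    totals = {"irb": 0, "team": 0}
    for (start, holder), (end, _) in zip(events, events[1:]):
        totals[holder] += (end - start).days
    return totals

totals = days_by_holder(events)   # {'irb': 12, 'team': 7}
```

Aggregating these per-study totals across a portfolio of submissions is what makes it possible to see whether delays sit mostly on the IRB side or the study-team side.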

Each institution’s results are unique because of its different processes and procedures, Moore notes. “Even though other institutions structure things differently, they could use the same methodology, with the same statistical methods, and plug in their own data.”

The information was useful in two ways. First, it highlighted potential causes of slowdowns in the IRB review process; second, it can be used to give investigators more accurate predictions of how long their protocol’s review might take.

“Our goal is to plug in some basic elements to our prediction equation, and based on the type of sponsor and study design and a couple of other factors, we can say with some confidence it will take X number of days,” Moore says. “This gives the investigator the most accurate estimate, so they can plan for when they are ready to start enrollment.”
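The poster does not publish the model itself, but a prediction equation of the kind Moore describes could be sketched as an ordinary least-squares fit over a few submission factors. Everything below — the factors, the historical records, and the resulting coefficients — is invented for illustration; an IRB would fit such a model to its own data.

```python
# Hypothetical prediction-equation sketch (not Wake Forest's actual model).
import numpy as np

# Synthetic historical records: [external_sponsor (0/1), fda_regulated (0/1),
# number_of_stipulations], with observed days to approval.
X = np.array([
    [0, 0, 1],
    [1, 0, 3],
    [0, 1, 2],
    [1, 1, 5],
    [0, 0, 0],
    [1, 0, 4],
], dtype=float)
y = np.array([14.0, 22.0, 17.0, 30.0, 12.0, 26.0])

# Fit an ordinary least-squares model with an intercept term.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_days(external_sponsor, fda_regulated, stipulations):
    """Estimate days to approval for a new submission."""
    features = np.array([1.0, external_sponsor, fda_regulated, stipulations])
    return float(features @ coef)

# e.g., an externally sponsored, non-FDA study expected to draw 2 stipulations
estimate = predict_days(1, 0, 2)
```

The point of the sketch is the workflow, not the numbers: a study team answers a few simple questions, and the fitted equation returns the "X number of days" estimate Moore describes.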

The study and data collection of the review produced the following findings:

• There is variability in how long reviews take, partly due to the committee to which a study is assigned.

• The institution’s eight different IRBs vary in review time between 13 and 16 days.

• Investigators’ responsiveness shows even greater variability than the boards’ review times.

• When there is no external sponsor and the principal investigator alone receives the IRB’s feedback, makes revisions, and resubmits, the review process is faster.

• When other parties are involved and have to approve or sign off on changes, that can extend the timeline.

• The review time was the same regardless of whether a study was regulated by the FDA.

• The number of stipulations the IRB sent back to the study team correlated significantly with review time and could serve as a surrogate measure of submission quality.
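To make the surrogate-measure idea concrete: the relationship between stipulation counts and review time can be checked with a simple correlation coefficient. The numbers below are invented, not the study's data.

```python
# Illustrative correlation check on invented data (not the Wake Forest
# dataset): do more stipulations go with longer reviews?
import numpy as np

stipulations = np.array([0, 1, 2, 3, 4, 5, 6])
review_days  = np.array([12, 14, 15, 19, 22, 24, 29])

# Pearson correlation between stipulation count and days to approval.
r = np.corrcoef(stipulations, review_days)[0, 1]
```

A strong positive coefficient is what justifies treating the stipulation count as a stand-in for submission quality when direct quality ratings are unavailable.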

“The first step was to identify where the problems are, and that’s what the focus of this project was,” Moore says. “Since then, we have done some educational efforts to get our committees as consistent as possible.”

Another change was to prioritize communication and feedback to study teams, showing them how to answer IRB questions correctly the first time. This helps reduce lengthy back-and-forth, he adds.

“It allows us to focus on areas that are of concern, and to spend less attention on areas that had no real impact,” Moore says.

One key to collecting study review data is to let all stakeholders know that this process is not punitive. When problems are found, the idea is to come up with solutions — not shame those whose review processes are taking longer, he notes.

“Our first and foremost priority is to review the studies and make sure criteria for approval are met and that participants are safe, protected, and well-informed,” Moore explains. “And we do not let our timing data or initiatives get in the way of appropriate participant safety.”

The study of review times also acknowledged that some IRBs have more complicated studies to review or might receive poorer-quality submissions, which take longer. Despite some differences, there were ways IRBs and study teams could improve the review process and shorten the time it takes to review a protocol, and that’s where the IRB could focus its improvement efforts.

For instance, the IRB used its software to add pop-up windows, with links and guidance, to the questions where errors are most frequent.

“We put a link with instructions and guidance on how to answer those questions with the hope that they’re answered correctly and with fewer concerns and stipulations, but we don’t have data yet on how it’s working,” Moore says.

REFERENCE

1. Moore B, Wesley D, Rushing J, et al. Using advanced statistics to more accurately predict IRB outcomes. Poster presented at: Public Responsibility in Medicine and Research (PRIM&R) Advancing Ethical Research Conference; Nov. 5-8, 2017; San Antonio. Abstract 42.