Study results to help IRBs see what slows studies

Reports show how each IRB stacks up against others

More than three dozen institutions that participated in a study of IRB response times will receive individualized reports showing how each institution stacks up against the rest of the group.

It's the first step in what organizers hope will be an ongoing effort to determine what slows studies down in IRB review and how to speed them up.

The study was conducted by a committee of the Clinical and Translational Science Awards (CTSA) consortium, as a part of the consortium's goal of progressing more efficiently from scientific breakthrough to patient treatment.

IRB review had been identified as an area of concern, with several committees looking into how to better facilitate reviews of multicenter studies. In order to do that, the institutions first had to figure out just how long reviews were taking at various institutions.

"The IRB task force was focused on trying to figure out what the baseline metrics are for the 38 CTSA institutions," says Ray Hutchinson, MD, associate dean for regulatory affairs at the University of Michigan Medical School in Ann Arbor. Hutchinson is one of the co-chairs of the CTSA's Clinical Research Management IRB committee.

"The way we have defined the purpose of this is to establish a set of data points that exist at all IRBs, regardless of the processes they use, to be used in future research to identify, implement, monitor and standardize improvements to the protocol approval process."

Collecting dates

In order to do this, organizers set out to gather very specific information about 25 consecutive clinical trial protocols reviewed by CTSA institutions' IRBs during the month of February 2009.

Most of it consisted of dates:

- when the application first was received by the IRB review office;

- dates of any pre-review and changes sent to the principal investigator, along with the dates that any changes sent back by the PI were received;

- when the application first was reviewed by a fully convened IRB;

- date of any request by the IRB to the PI for changes to the protocol, as well as the date those changes were returned to the IRB;

- the number of times the study was reviewed by a fully convened IRB, including the date of the meeting at which it received final approval;

- and lastly, the date that the IRB sent notification of final approval to the PI.

In addition, the researchers gathered basic information about the protocols themselves – whether they were initiated by investigators or by sponsors, and whether they were single- or multisite studies.

Institutions sent their data this summer to data managers at Vanderbilt University, where it is being analyzed, Hutchinson says. He says the first step is for institutions to see how they compare to the group as a whole.

"Individual institutions would not be seeing identified data from other institutions," Hutchinson says. "But you can look at the other 37 and say, 'Well, we're roughly in the middle, at the top or at the bottom.'"

Kathleen Uscinski, MBA, CIP, deputy director of Yale University's Human Investigation Committee in New Haven, CT, and co-chair of the CTSA IRB committee, says that information alone will be of value to the participating institutions.

"The goal is for them to look at their own measurements and see where they might want to improve," she says. "You can start looking at your data in relation to others and start thinking about best practices you could implement."

Measures and metrics

Uscinski and Hutchinson say the eventual goal is bigger — to use the data to improve practices across the entire CTSA consortium, providing guidance about where slowdowns typically occur in the IRB process and how to eliminate them.

Uscinski says a new measures and metrics committee has been created to begin that process. Hutchinson says the analysis would attempt to target institutions that are particularly adept at various parts of the process and gain their permission to share their practices with others.

"Maybe one institution is really good in the pre-review, or another is really good at getting a rapid response from PIs," Hutchinson says. "We'll be able to target small areas of the process that are particularly good and maybe reach out to that institution and find out what has eventuated in their doing so well."

He says the initial data collection also could lead to future surveys to home in on specific areas needing improvement.

"Maybe we notice that 15 of the institutions have a particular problem in one phase of the review process and we could structure a subsequent survey or subsequent intervention to try to enhance that and recollect and analyze data," he says.

Uscinski says the group of 38 institutions has proven to be relatively open and committed to the effort.

"The important thing is that all of these institutions were able to work together, and it was done pretty much on time," she says. "The fact that we were able to get an 80% response was pretty impressive, I think.

"It will be interesting as we move down this path to see some best practices evolve," Uscinski adds.