Researcher promotes IRB efficiency rating system
Risks include blocked research
While no one disputes the importance of IRB reviews in reversing decades-old patterns of human subjects abuse, some say IRBs overstate the risks to human subjects while ignoring the risks of delaying or rejecting research. This overprotectiveness can prevent important public knowledge from emerging and can lead to costly delays and research inefficiencies.
The Advance Notice of Proposed Rulemaking (ANPRM) on proposed changes to the Common Rule, published in the Federal Register on July 25, 2011, addresses these issues with suggestions that would make research reviews more efficient and less burdensome.
One researcher notes in comments on the ANPRM that IRBs fail to protect researchers and the public when they are allowed to delay and stifle research without accountability.
The solution would be to create an efficiency rating system for IRBs, says Zach Warren, MPP, MDiv, a PhD-candidate at Georgetown University in Washington, DC.
Warren surveyed investigators whose studies had been reviewed by the Georgetown IRB and found that 23% of respondents reported the IRB process resulted in delayed research; 14% cited excessive bureaucracy in the IRB review process; 6% said the IRB was overprotective; and another 6% reported changing their study design or topic to avoid an IRB review. About 38% of respondents reported a positive experience with the IRB.
"It's a small sample size," Warren acknowledges. "If it was replicated, I would advise doing it at multiple institutions, but I think the results probably would be about the same — the problem with IRBs is a systemic problem that results from rules and oversight."
The problem boils down to a fear of risk that results in excessive bureaucracy and in IRBs reviewing studies that probably should be exempt from IRB review, he says.
"It's gotten to the point where a lot of IRB guidelines have become ridiculous," Warren says. "If you ask people on the IRB, 'Do you think this is absolutely necessary?' in private they'll say, 'No.'"
But they continue to impose these unnecessary requirements on investigators.
Warren offers an example from his own research, which involved a benign topic and a confidential questionnaire in which the people surveyed would never be identified by name at any point in the research process. His study was designed to ask subjects about creativity and their critical thinking processes and to place the results in the context of a region's violence and extremism. His first population of subjects was located in Afghanistan.
"The goal was to understand how the Afghan and Western perceptions of creativity differ," Warren says. "I wanted to study the relationship of creativity to violence and responses to frustration."
Warren intended to administer the survey with anonymity and confidentiality. No one analyzing data would ever have access to the subjects' identities.
"The IRB required that I obtain informed consent from all of the people I interviewed, using the IRB's standard informed consent form," Warren says.
Several logistical problems emerged, chiefly language and cultural differences. Warren had translators from the culture of his subject population who could bridge the language gap, but many of his subjects were illiterate and unaccustomed to signing long legal forms except for rare and important life events, such as marriage or buying a house.
Also, the surveyors had promised the subjects total anonymity, a promise contradicted by the requirement that they sign a strange form clearly produced by a foreigner and a foreign institution. The request added to subjects' underlying wariness about the American military presence in their country.
"I've had dozens of Afghans tell me, 'We don't sign papers to give consent; that's not how it works here,'" Warren says.
In addition, the IRB required that the study be reviewed by a second IRB in Afghanistan, even though it had been approved for an expedited review process by the U.S. IRB. Warren approached the newly formed biomedical study review board in Afghanistan, only to be told that he would need to pay the IRB official a personal fee to have his study heard by the board. Warren felt this bribe would be unethical, and he declined to pay it. The ethics review officer then refused to return his calls or review the protocol. The U.S. IRB did not exempt him from the second review, but it suggested he at least have someone associated with the Afghanistan board sign a statement that the study would meet local and cultural standards.
Warren's research continued, but the expedited IRB review process took five months, nearly delaying the project to the point where Warren's Afghanistan visa would expire.
This type of issue highlights how IRBs can add unnecessary risk to a study that poses minimal to no risk. Requiring additional layers of bureaucratic documentation can itself become a risk to both the subjects and the research, which could be pointlessly delayed or cancelled as a result.
"I wish IRBs would take into consideration a broader array of risks, and one of these is efficiency," Warren says.
The solution would be a rating system in which federal regulators or others assess each IRB's efficiency and quality performance, Warren says.
"An auditor could contact researchers and assess their experiences with the IRB anonymously," he says. "They could ask about the overall experience with the IRB, whether there have been delays that the researcher felt were unreasonable."
IRBs could be rated on a scale of one to five, with one representing the most inefficient and lowest-quality performance. These collective ratings could then be published on the HHS website, Warren suggests.
"IRBs require full transparency of researchers, so why can't researchers require full transparency of the IRB?" he says. "It's better if this ratings system is standardized; it's better if it's transparent, and it's best of all if it came with some possible penalty or a violation for inefficiency."
One possible penalty is a revocation of the research institution's Federalwide Assurance (FWA), he adds.
"That's a much stronger statement and probably more likely to effect change than if the ratings are done as an internal quality measure," Warren says.