Working with social-behavioral researchers
Try a meeting of the minds
One possible explanation for the complications and confusion that IRB reviews of social-behavioral research create among IRBs and researchers lies in the very different mindsets of the two parties.
For their part, social scientists believe they have a right to research and that their work should not be impeded by IRBs. Conversely, IRBs believe it’s their job to anticipate any imaginable harm that a social-behavioral study might inflict on participants, explains J. Michael Oakes, PhD, assistant professor of epidemiology at the University of Minnesota in Minneapolis.
The problem is that both sides have dated mindsets, he notes.
Until recent years, social-behavioral research was free of most oversight, and the more experienced researchers have difficulty accepting that their corner of the universe has changed, Oakes says.
"There’s a lot of entrenchment and anger among social scientists because often the most distinguished social scientists have had illustrious careers, but only 10 years ago, never had to submit to an IRB," he explains. "And now they do, so there’s an attitude of, Who are these bureaucrats telling me what to do?’"
The tension lies in the balance between conducting important social science research and protecting human subjects, Oakes notes.
"As a social scientist, I fully empathize with them," he says. "The right to research is very different for biomedical researchers who are so used to having oversight."
However, when an institution and IRB members work with social scientists, it’s important to reframe the issue. For example, IRB members might explain to social scientists that IRB review is no different from any other peer review process, such as obtaining a grant or publishing a paper, Oakes suggests.
"Just like the old, distinguished social scientist would submit a paper to a journal with no peer review years ago, now it’s hard to get papers published in distinguished journals — the bar has been raised," he says.
Likewise, the IRB review of social-behavioral research is the new standard, and there is no reason for social scientists to believe their old methods and lack of oversight are rights that should continue into the 21st century, Oakes says.
The typical IRB’s mindset also needs some adjustment, he notes.
"We don’t understand the risks of social science research," Oakes says. "We don’t empirically or scientifically understand them, and that’s so obvious we forget about it."
This lack of understanding creates conflicts between social-behavioral researchers and IRB members and may reinforce the investigators’ perception that they are being subjected to unnecessary oversight, he explains.
"When I sit on medical IRBs and panels, the physicians and psychologists know what effect the increased dose of medication will have at this point in an experiment, and they know through animal models or other quantifiable means what is going to happen," Oakes says. "In the social science world, we have no empirical idea of what will happen if you ask a sad person if they’ve ever thought about killing themselves."
IRBs may assume it’s possible that such a question could lead to a person committing suicide, but there is no evidence that such an outcome has ever occurred, Oakes says.
"In my own work, subjects who are depressed or at a tough time in their lives find it therapeutic to talk about their depression," he says.
Given the lack of data, it’s unclear how any IRB can predict costs and benefits of a social-behavioral study, Oakes notes.
"This can drive a social scientist crazy because the IRB, with the best intentions, will say, This is the worst-case scenario,’" Oakes says. "But it’s so different from medical research where they can say, I know this might happen.’"
When reviewing social-behavioral research, IRBs will base their decisions on imagined worst-case scenarios and then require investigators to jump hurdles and follow stipulations designed to prevent these imagined consequences, Oakes says.
"Then you have a social scientist who is very angry," he notes. "And yet, the IRB is doing its job because without empirical or scientific information, it has to be ethical, worrying about the worst possible case."
There are a variety of potential solutions to this problem, although some will require changes within the research field, Oakes says.
Here is what can be done, both by individual IRBs and by the research industry, in general:
1. IRBs should make certain that social-behavioral research experts are on the board if that kind of research is reviewed.
Many institutions have physicians and biomedical researchers on their IRBs but fail to include members who are experts in survey or social-behavioral research, Oakes notes.
"First ensure that people reviewing studies are experts in the protocols’ [field], and make sure you have survey researchers on the panel if there will be survey research to review," he says.
IRB members who are unfamiliar with social science research may tend to err with one of these two attitudes: "They may go by the worst-case scenario out of fear for subjects, or they may say, ‘This isn’t surgery, so there’s no risk here in asking people questions,’" Oakes explains. "Both attitudes are wrong; the truth is someplace in the middle."
However, if an IRB makes an effort to have a social-behavioral expert on the board, then that person will be able to help other members put this type of research into the proper context, Oakes explains. "How can IRB members legitimately say they have the proper expertise without having a [social scientist] on the medical panel?"
2. The federal government should invest money into understanding the risks of low-risk studies.
NIH and OHRP need to invest some money into learning more about the risks of social science and other low-risk research, Oakes says.
"We’d never do surgical or pharmacy trials without understanding the risk, but we do it in social science research all the time," he says.
It’s possible that such an investigation would show that there are no real risks to social science work, in which case IRBs should refrain from imposing barriers and restrictions on such work, judged on a case-by-case basis, Oakes says. "Armed with the right information, the IRB can use the intentional flexibility in 45 CFR to back off, to waive signatures on informed consent, which often is onerous in a simple survey study."
3. Upcoming social scientists should be taught respect for IRBs and human subject protection.
"I tell IRB administrators and social scientists that it’s up to the social scientists themselves to help themselves," Oakes says. "When we train young scientists I think they need to be taught by professors about research ethics and how the IRB works."
Unfortunately, the prevalent social scientist attitude about IRBs leads to faculty giving students the wrong kind of lesson, he notes. "Faculty will say the IRB is a hassle, and it will be four months before you get your answer," Oakes says.
So perhaps the solution is for IRB members to talk with senior social scientists one-on-one, where they’re more likely to develop mutual respect and, through that process, change the attitudes underlying the conflict between social science research and IRB review, he suggests.
4. Social scientists could conduct their own risk analysis research.
Rather than waiting for NIH to fund a major risk-analysis project, social scientists easily could conduct their own studies, Oakes says.
For example, individual researchers could assess the risk of a sexual behavior survey study by having someone follow up with participants and ask these questions:
- What were you worried about?
- Did you understand the informed consent?
- Are you worried about confidentiality?
- Do you feel sad that you answered these questions?
- Do you feel happy that you could express yourself?
"That’s very simple, but it can be an expensive approach," Oakes says.
Another approach would be to send out surveys with highly sensitive questions, send the same survey without those questions to a second group, and then ask both groups what they thought about the IRB’s human subject protection and research risk issues, he adds.
"If there’s no difference between the groups in their perceptions, then I would say these couple of offensive or scary questions had no impact," Oakes says.