Despite the considerable effort and bedrock ethical principles brought to bear in IRB oversight, the inconvenient truth is that human research oversight is not “evidence-based,” says Holly Fernandez Lynch, JD, MBE, a medical ethicist at the University of Pennsylvania in Philadelphia.

Rather than the product of careful forethought, many of the principles of human research protections are essentially reactive, formed in the aftermath of a succession of horrific episodes of unethical experimentation. Lynch and colleagues are trying to change this status quo — in effect, to show that research oversight can be evidence-based. She invites others to join in the effort, as chair of the steering committee of the newly founded Consortium to Advance Effective Research Ethics Oversight (AEREO).

AEREO has embarked on an ambitious mission to seek input from IRB members, researchers, participants, academics, and others on the current state of human research ethics and oversight. AEREO outlines its calling as follows:

“At present, we lack valid and reliable outcome measures to assess the effectiveness of IRB and HRPP review and oversight of research with human participants,” AEREO states.

“That means we can’t properly evaluate IRB/HRPP effectiveness, which in turn means that we can’t clearly demonstrate that the impact of the human subjects research oversight system is justified by its effectiveness or evaluate the effectiveness of new approaches. This has to change. We practice evidence-based medicine — it’s time to practice evidence-based human subjects protection.” (For more information, visit: https://www.med.upenn.edu/aereo/.)

Lynch discussed the project with IRB Advisor in the following interview, which has been edited for length and clarity.

IRB Advisor: Some may be surprised at AEREO’s founding premise, that human research oversight is not evidence-based.

Lynch: Human subjects research regulation is not evidence-based, but neither is most regulation and law. Evidence-based medicine is the expectation, but we don’t really have similar expectations yet for policymaking. The way we got the regulations we have now is through a history of scandal and the response to that. Bad things happened, lawmakers and regulators got involved, and they said, “We are going to implement some changes in response.”

It is not that we have done some careful testing and controlled trials to evaluate different policy approaches and find the ones that work best. It is that something bad has happened and we have to fix it. Historically, it’s been “implement now and test later.” But we haven’t done the testing. It becomes very challenging to do that testing because the regulations are in place.

How do you test whether the regulatory approach that we have works better than some alternative? If you took an alternative approach, you would potentially be out of regulatory compliance.

IRB Advisor: That’s kind of a catch-22. How is your consortium addressing this challenge?

Lynch: What we are trying to do with AEREO is to acknowledge that we want evidence about whether our practices and policies work — especially given the potential impact of IRB oversight on research. A lot of resources go into it, and plenty has been published about complaints regarding the impact of IRBs on research.

We think it is important to gather data in both directions. Are there things that IRBs are doing that really add value to the system? Can we identify those, evaluate them, and then try to create evidence-based best practices? Can we also figure out which things we are doing do not add value, so we can stop doing those?

IRB Advisor: On the website, your group is inviting IRB members to participate and share their input on the process.

Lynch: That’s right. The idea is that there is a collaboration between people at institutions who have data and academics like myself who can help design research questions and analyze that data. The more participants we have in our consortium, the better, because we will be able to collect data across more sites. And to the extent we want to try out new things, we can try them out across more sites. So the bigger the [number of participants], the better the data.

IRB Advisor: How are you reaching out to research participants to get their perspective?

Lynch: We have a proposal to try to gather data from a variety of approaches. If funded, we will basically try to talk to patients about what they want IRBs to do and how IRBs should make decisions about research. Even though we have patient engagement and patient-centeredness in healthcare and in research design and conduct, we don’t yet have patient engagement around IRB oversight.

IRBs have a lay member, a community member, but that doesn’t have to be a patient. If we can find out what patients want from IRBs, that can help us figure out how well IRBs are doing.

That is not the only measure of IRB effectiveness, of course. Patients are one stakeholder alongside many others, but it is an important perspective that is not well understood.

IRB Advisor: You have another project involving the Association for the Accreditation of Human Research Protection Programs (AAHRPP). What is that about?

Lynch: AAHRPP-accredited sites have to demonstrate that they evaluate their effectiveness. We are very interested in finding out how they go about doing that. I have an ongoing project right now — an interview study talking with various stakeholders in the IRB community and accredited institutions. We are trying to find out how they go about defining IRB quality and effectiveness, and how they think it ought to be measured. Then for people who are IRB directors, what do they do now to measure IRB effectiveness? That is an ongoing project. We finished our interviews and are now in the data analysis phase.

IRB Advisor: Are you concerned that some of the data collected may be too subjective, not necessarily lending itself to evidence-based standards?

Lynch: The idea is to be as objective and evidence-based as possible. It’s a very long-term project. Right now, there is no clear definition of IRB effectiveness, and we have to collect data about it. This is completely uncharted territory.

We might all agree that IRBs should protect human subjects, but what does that mean? We need to operationally define it and determine how to measure it through data collection. We could agree that yes, protection is what we want them to do, but then more concretely they should do “X, Y or Z.” Things that are easy to measure, like regulatory compliance, do not necessarily tell us that participants are being protected. They tell us only that the regulations are being complied with.