Changing trials in midstream

Adaptive design an intriguing alternative

As concerns grow over the expense and slow progress of classic randomized clinical trials (RCTs), an intriguing alternative is gaining steam – adaptive design, in which trials change at various decision points in response to accumulated data.

Unlike a trial unexpectedly changed or stopped midway, these trials are designed with the expectation that there will be changes. For example, one might start with four different doses of a study drug, and two arms would stop enrolling new subjects after a predetermined point, based on data analysis. Or a trial could be designed to change inclusion criteria if enrollment does not meet targets.
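The arm-dropping rule described above can be sketched in a short simulation. Everything here is hypothetical — the arm names, true response rates, interim sample size, and the "keep the best two arms" rule are illustrative assumptions, not drawn from any actual trial design.

```python
import random

random.seed(0)

# Hypothetical 4-arm dose study: after a predetermined interim point,
# the two worst-performing arms stop enrolling new subjects.
TRUE_RESPONSE = {"dose_a": 0.20, "dose_b": 0.30, "dose_c": 0.45, "dose_d": 0.50}
STAGE1_N = 40  # subjects enrolled per arm before the planned interim analysis

def simulate_arm(p, n):
    """Count responders among n subjects with true response rate p."""
    return sum(random.random() < p for _ in range(n))

# Stage 1: enroll all four arms, then compute observed response rates.
interim = {arm: simulate_arm(p, STAGE1_N) / STAGE1_N
           for arm, p in TRUE_RESPONSE.items()}

# Pre-specified adaptation rule: continue only the two arms with the
# highest observed response rates; the others stop enrolling.
kept = sorted(interim, key=interim.get, reverse=True)[:2]
dropped = [arm for arm in interim if arm not in kept]

print("interim rates:", interim)
print("continue enrolling:", kept, "| stop enrolling:", dropped)
```

The key point the article makes is visible in the sketch: the rule (keep two arms, after STAGE1_N subjects per arm) is fixed before any data are collected, so the change is planned rather than ad hoc.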

In recent years, interest has increased in this approach, fueled by more sophisticated computer models that can help with the designs. The FDA earlier this year released draft guidance for the adaptive design of drug trials.

Proponents of adaptive design say the idea provides some human subjects protection benefits, allowing people to be moved out of ineffective arms more quickly rather than waiting until the end of a protocol. If a trial is sped up, they say, fewer participants overall might be needed.


But it's still unclear whether there would be improvements to participants' safety as a result of increased use of adaptive trials, says Marc K. Walton, MD, PhD, associate director of the Office of Translational Science of the FDA's Center for Drug Evaluation and Research in Silver Spring, MD.

"We're not convinced there are ethical imperatives to use this approach," Walton says, noting that many decisions made in adaptive design studies are based on limited amounts of data. "So the true accuracy of those decisions really remains to be seen."

And Walton says moving participants from a seemingly less active dose of a study drug to a more active dose also raises the possibility that they're being moved to an unrecognized more toxic dose, which increases risk.

Complex studies

While the FDA's guidance addresses the design of these types of trials, there is currently no planned guidance for IRBs on the topic from the Office for Human Research Protections, says Health and Human Services spokeswoman Lt. Kate Migliaccio.

And some of these trials come with a certain degree of technical complexity, since many rely on statistical models to determine what changes would be made during the course of a study as a result of data collection.

That leaves IRBs with the task of determining what potentially could happen in these trials and how to ensure that all the various possible outcomes provide as much protection to subjects as possible.

Walton says IRBs have plenty of experience handling changes to protocols. But in adaptive design, those changes are considered and planned for from the beginning. He says IRBs need to be sure that they understand the rules that would govern those changes, and the various circumstances that could result.

"They have to try to think of all the different ways the study could go," Walton says. "And they have to decide whether or not they're comfortable with the safety of the study participants for all of those different circumstances. Their task in that regard is similar to FDA's in terms of reviewing and approving the study from the design point of view before the study even gets started."

He says that the more complex adaptive design studies may require more careful and intricate consideration than more traditional designs. "So it may well be that IRBs find they require greater amounts of time to think through the proposed study."

Ensuring expertise

The category of adaptive designs as described in the FDA guidance is a broad one – Walton says it can include fairly straightforward approaches that IRBs are used to seeing (for example, changing eligibility criteria to boost enrollment or incorporating a group sequential stopping rule that halts the study when predetermined measures of efficacy or inefficacy have been achieved).
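The group sequential stopping rule Walton mentions can be sketched as follows: at each of several planned interim looks, an accumulating test statistic is compared against a pre-specified boundary, and the trial stops early if the boundary is crossed. The interim counts and the Pocock-style boundary constant below are illustrative assumptions, not values prescribed by the FDA guidance.

```python
import math

# Approximate Pocock critical value for 3 planned looks at overall
# alpha = 0.05 (a constant boundary applied at every look).
POCOCK_Z = 2.289

def z_statistic(successes_trt, successes_ctl, n_per_arm):
    """Two-proportion z-test on the accumulated data (pooled variance)."""
    p1 = successes_trt / n_per_arm
    p0 = successes_ctl / n_per_arm
    pooled = (successes_trt + successes_ctl) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    return (p1 - p0) / se if se > 0 else 0.0

# Cumulative (treatment responders, control responders, n per arm)
# at three pre-planned interim looks -- hypothetical numbers.
looks = [(18, 10, 50), (40, 22, 100), (65, 38, 150)]

stopped_at = None
for i, (trt, ctl, n) in enumerate(looks, start=1):
    if abs(z_statistic(trt, ctl, n)) > POCOCK_Z:  # boundary crossed
        stopped_at = i  # halt the study early, per the prospective plan
        break

print("stopped early at look:", stopped_at)
```

Because the boundary and the look schedule are fixed in the protocol from the start, this is the kind of "straightforward" adaptation Walton says IRBs are already used to seeing.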

However, some of the newer designs can be much more complex, relying on intricate computations that IRBs may be less familiar with, he says.

As a result, they may need to make sure they have adequate expertise on the board – or can consult with someone to provide that expertise – in order to answer any questions that arise.

Marjorie Speers, PhD, executive director of the Association for the Accreditation of Human Research Protection Programs (AAHRPP) in Washington, DC, says that doesn't necessarily mean that an IRB needs to have a biostatistician on its board.

"But what the IRB does need then to do is to ensure that that study has been reviewed by a peer review process or by the sponsor or a scientific review committee that can attest that the statistical design of the study is appropriate," she says.

Once the study is under way, the question of whether IRBs should review individual changes isn't entirely clear.

Walton says it wouldn't be necessary for IRBs to have additional review of the protocol if an adaptation is made in accord with the initial design.

"Changes are planned for and are explicitly laid out in the design of the study from the very beginning, so I'm not sure that the IRB has to have additional review of that at the time that it occurs," he says. "After all, the criteria for making the changes are laid out.

"And the IRB won't have access to the data that was the basis of making the change. They'll only know that a modification occurred at the planned time and according to the prospective plan. They would not really have any new information as a basis for reevaluating the study."

Speers says adaptations that change the trial in any unanticipated way should still go back to the IRB for review.

"Unless you can predict what that change is going to be, and when it would occur, then yes, each change would have to go back to the IRB," she says.

For example, the trial's design may allow for options to increase or decrease sample size or to discontinue one arm of the study.

"You know that going in – the study will be designed in such a way that if the interim results look a certain way, then you might change it in a certain way," Speers says. "But when those results or those data are finally available and looked at, they might be different than what was predicted. So the IRB would need to look at those data in relationship to the change that's being proposed."

Not right for every study

Because of their complexity, adaptive design trials are not necessarily appropriate for every protocol, says Daryl Pullman, PhD, an associate professor of medical ethics at Memorial University of Newfoundland in St. John's who has written on the ethics of adaptive design.

He says they would tend to work better in situations where it's possible to see fairly immediate results from a treatment. While some argue that adaptive design is better suited to more innocuous trials, Pullman takes the opposite view.

"My view is that in situations where perhaps there were life and death kind of consequences – where the information could be critical to making care decisions in terms of putting (subjects) on a treatment that is shown to be effective – then you should design your trial to ensure that more people get on the effective treatment more quickly, rather than waiting until the end (as would occur in an RCT)," Pullman says.

Walton says FDA doesn't expect a sudden flood of new complex adaptive design trials to be submitted for review.

"We expect that sponsors will consider these methods, and begin to use them in a cautious manner, as experience with them grows over time," he says. "So there will not likely be an overwhelming increase in evaluation burden immediately – rather, one that slowly over time develops as sponsors learn where and how the methods can be best applied."

To view the FDA's draft guidance for industry for adaptive design clinical trials for drugs and biologics visit this website: