By Sue Coons, MA

A clinical trial studying electronic health record alerts (e-alerts) for acute kidney injury seemed, to both the researchers and the IRBs that approved it, to pose no more than minimal risk. However, when two hospitals involved in the study reported an increased mortality rate in the alert arm, the researchers and the IRBs reconsidered what truly constitutes minimal risk in these types of studies.

F. Perry Wilson, MD, MSCE, a nephrologist and associate professor of medicine at the Yale School of Medicine, and colleagues set out to study whether e-alerts would improve patient outcomes: mortality, receipt of dialysis, and progression of acute kidney injury (AKI). Wilson also is the director of Yale’s Clinical and Translational Research Accelerator.

Wilson says the researchers thought they would see broad improvement in care processes, and likely a “smaller, but still beneficial effect on clinical outcomes in AKI.”

“Our observational data were quite clear. Healthcare providers were not picking up on AKI when it happened, so it seemed a reasonable assumption that cluing them in would be helpful,” he says.

Wilson and colleagues were concerned about e-alert fatigue. “In fact, it’s our concern about alert fatigue that makes us argue that we must do these studies,” Wilson says. “The vast majority of EHR [electronic health record] alerts are implemented without any rigorous scientific support — and they clutter up the system. We envision a future where all alerts must be vetted before they are implemented in order to reduce the burden of alert fatigue.”

Waiver and Minimal Risk

In a discussion of the study results, Wilson wrote about the decision to waive informed consent.1 This waiver hinged on three principal conditions:

  • The intervention did not infringe on the rights or welfare of the patient. Letting a provider know the patient has AKI should not infringe on either.
  • The study could not feasibly be conducted with consent, since enrolled patients could not be told to keep this from their providers.
  • The intervention must be no more than minimal risk. “A purely informational alert, we reasoned, must be minimal risk,” Wilson wrote. “It is merely aggregating data that are already present. In fact, we even ensured the elements of our AKI order set were minimal risk (no fluid boluses here — just low-risk suggestions like urinalysis and following fluid input/output numbers).”1

There was some discussion with the IRB about whether this study was minimal risk, Wilson says, particularly regarding who was receiving the alert. “Might an intern, who is less experienced, not react appropriately to the alert? Is that a risk?” he asks. “In the end, though, the measuring stick we used was whether we were providing any new information (we weren’t — one could easily make the diagnosis of AKI without the alert) or directing specific interventions (similarly, we weren’t). In the end, the IRB concluded that providing factual information that was broadly available is minimal risk.”

Unsettling Results

The researchers studied six hospitals (four teaching and two non-teaching) in the Yale New Haven Health System in Connecticut and Rhode Island, ranging from small community hospitals to large tertiary care centers. Over 22 months, 6,030 adult inpatients with AKI were randomized. The researchers integrated a pop-up alert into the EHR that would tell the provider the patient had AKI, provide “salient” information, and link to an order set that could help with the diagnosis. The main outcome was a composite of AKI progression, receipt of dialysis, or death within 14 days of randomization.

Two IRBs associated with the six hospitals approved the study, with support from Yale’s interdisciplinary center for bioethics. An interim analysis took place at 50% recruitment (before enrollment began at the two non-teaching hospitals), and the trial continued to completion.

Upon reviewing the final results, the researchers saw that the two non-teaching hospitals showed a higher mortality rate in the alert arm.2 Wilson says his first thought was that there was an error in the code. “Once we confirmed the results were as reported, we considered two possibilities. One was that this was statistical noise — a type 1 error. The other is that the alert engendered certain harmful behaviors. To ensure we were as thorough as possible, we chased down that option with a series of mediation analyses.”

The researchers dug deeper into the data, and the individual hospitals began their own investigations as well. “At first, I thought we’d find some clear signal that explained the results. I was betting on inappropriate fluid administration,” Wilson explains. “But we didn’t see that. After the exhaustive search, I realized that had we seen the opposite result, we would not have expected any single thing to be responsible. In other words, if alerts were protective, we’d expect a variety of care processes — different ones in different patients — that improved outcomes. I think the same is true with alert harm.”

Analysis showed “no sign that any mechanism of death was distributed unevenly between the groups,” Wilson wrote. The researchers wondered if they were seeing an example of heterogeneity of treatment effect. He described this as a “phenomenon whereby the impact of an intervention differs among different groups due to a variety of factors upstream and downstream of the intervention — many of which may not be easily measured.”1

Moving Forward

The new question: If these studies need to be conducted under a waiver of informed consent, what type of monitoring and requirements will keep them minimal risk? “The IRB felt, and continues to feel, that these studies are important, and acknowledges that they can’t feasibly be performed with informed consent due to contamination across the study arms,” Wilson explains. The IRB believes that for most alert studies, the risk remains minimal but wants safeguards to ensure that is the case.

“For our ongoing studies, we have adapted the design to enroll only at the teaching hospitals initially and perform an interim safety analysis at 50% recruitment before we expand to non-teaching hospitals,” he says. “Then, at the non-teaching hospitals, we will also perform an interim safety analysis. An external DSMB [data and safety monitoring board] will evaluate whether the study should continue at each of these time points.”
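The kind of interim safety check Wilson describes can be illustrated with a minimal sketch. The function below runs a simple two-proportion z-test on mortality between arms and flags the trial for DSMB review if excess mortality in the alert arm exceeds a threshold. The function name, the patient counts, and the stopping threshold are all hypothetical illustrations; the trial’s actual monitoring plan and statistical boundaries are not described in the article.

```python
import math

def interim_safety_check(deaths_alert, n_alert, deaths_usual, n_usual,
                         z_threshold=2.0):
    """Two-proportion z-test comparing mortality between study arms.

    Returns (z, flag): flag is True when the alert arm shows excess
    mortality beyond the threshold, suggesting DSMB review before
    expanding enrollment. All parameters here are illustrative, not
    taken from the trial's actual DSMB charter.
    """
    p_alert = deaths_alert / n_alert
    p_usual = deaths_usual / n_usual
    # Pooled proportion under the null hypothesis of no arm difference
    p_pool = (deaths_alert + deaths_usual) / (n_alert + n_usual)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_alert + 1 / n_usual))
    z = (p_alert - p_usual) / se
    # One-sided check: only excess mortality in the alert arm triggers review
    return z, z > z_threshold

# Hypothetical counts at a 50% recruitment interim look
z, flag = interim_safety_check(deaths_alert=60, n_alert=750,
                               deaths_usual=40, n_usual=750)
print(f"z = {z:.2f}, refer to DSMB: {flag}")
```

A real group-sequential design would use a formal alpha-spending boundary (e.g., O'Brien-Fleming) rather than a fixed threshold, but the staged logic — analyze at the interim look, let an external DSMB decide whether to continue — is the same.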

This study led Wilson to think twice about studies that are “obviously” minimal risk. “The truth is it can be hard to know in the absence of data,” he concludes. “Things we now consider ‘quality improvement’ deserve closer evaluation. Many programs that seem obviously good (like programs to reduce readmission, or to reduce falls in the hospital) may have unintended consequences.”

REFERENCES

  1. Wilson FP. The challenge of minimal risk in e-alerts. The BMJ Opinion. Jan. 18, 2021. https://blogs.bmj.com/bmj/2021/01/18/the-challenge-of-minimal-risk-in-e-alert-trials/
  2. Wilson FP, Martin M, Yamamoto Y, et al. Electronic health record alerts for acute kidney injury: Multicenter, randomized clinical trial. BMJ 2021;372:m4786.