Researchers ignore past studies when conducting new ones
Failing to consider past studies can lead to unnecessary risks for subjects
A recent analysis found that researchers routinely ignore relevant, previously published clinical trials when conducting their own studies.
On average, researchers whose papers were analyzed cited less than 21 percent of previously published relevant studies. In some cases, an investigator would present a paper as the first study of its kind when a search of the literature showed that it wasn't, says Karen Robinson, PhD, assistant professor of medicine at the Johns Hopkins University School of Medicine in Baltimore, MD.
Ignoring previous studies on the same topic can lead to wasting resources trying to answer a question that's already been answered, Robinson says.
And from the point of view of IRBs, it can lead to subjects being enrolled in unnecessary studies.
"At the beginning of a trial, a complete consideration of the existing evidence is needed to justify a trial," Robinson says. "So that patients who are signing up to join your trial know first of all that they're signing up to answer a question that needs to be answered."
Failing to review relevant studies also can lead to unnecessary risks for participants, such as their being assigned to the placebo arm of a study when the efficacy of the study drug already has been proven in prior research.
Robinson says previous studies have shown a lack of citations in clinical research. But her group's study was more expansive, looking at meta-analyses covering 19 specialties, including pediatrics, gynecology, oncology, psychiatry and surgery.
"We wanted to get a handle on whether this was a systemic problem," Robinson says. "Were these just anecdotal pieces of evidence from this prior literature, or is there something more widespread going on?"
The team looked at 227 meta-analyses published in 2004, comprising 1,523 separate clinical trials. Although the meta-analyses were published in 2004, the trials included in them dated back as far as 1963. For each separate study, the group determined the number of prior trials cited, as well as the number of relevant citations available to be listed.
Results showed that of 1,101 articles for which there had been at least five previous relevant papers, 46 percent cited no more than one of them. As time went on and the number of prior trials increased, the number of citations didn't increase along with it.
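The headline statistic above is a simple proportion: among trials with at least five relevant prior papers, how many cited one or none. As a rough illustration of that arithmetic (not Robinson's actual code or data; the field names and sample counts below are invented), the calculation can be sketched as:

```python
# Hypothetical sketch of the citation-coverage statistic described above.
# Each trial records how many relevant prior trials existed when it was
# published and how many of those it actually cited. All values here are
# made-up illustrations, not figures from the Robinson analysis.

def low_citation_share(trials):
    """Among trials with at least 5 relevant prior papers, return the
    fraction that cited no more than one of them."""
    eligible = [t for t in trials if t["prior_available"] >= 5]
    low_citers = [t for t in eligible if t["prior_cited"] <= 1]
    return len(low_citers) / len(eligible) if eligible else 0.0

# Illustrative input: four trials with invented counts.
sample = [
    {"prior_available": 8,  "prior_cited": 0},
    {"prior_available": 6,  "prior_cited": 1},
    {"prior_available": 12, "prior_cited": 5},
    {"prior_available": 3,  "prior_cited": 0},  # excluded: fewer than 5 prior papers
]
print(low_citation_share(sample))  # 2 of 3 eligible trials cited <= 1 prior paper
```

In the published analysis the same kind of ratio came out to 46 percent of 1,101 eligible articles.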
"Hence, the more (randomized controlled trial) evidence that existed, the more likely that investigators on subsequent trials would ignore it," Robinson and her colleagues wrote in an article published recently in the Annals of Internal Medicine.1
Robinson says there are many possible explanations for the omission of previous research.
One sometimes offered by researchers is that journals' space limitations don't allow them to list all of the previous relevant studies, but Robinson says that is more of an excuse than a legitimate reason.
She says funding agencies and journals are more interested in novel information and unique trials, and so researchers may feel pressure to minimize previous similar research.
"Obviously, to be able to say, 'My trial is the first one or the first one of its kind,' is useful, so maybe referring to other trials with the same question won't help make their case," she says.
Changing research culture
Beyond that, Robinson says the research culture doesn't sufficiently emphasize extensive research before embarking on a new clinical trial.
"Investigators aren't taught to do this sort of thing, and there's not a culture of expectation that they do so," she says.
"We now recognize that a consideration of the best available evidence is the best way to inform health care decision-making, and I'd like to see that same mentality used for developing hypotheses, designing trials and interpreting trials," Robinson says. "Those three things should be based on the best available evidence."
She says IRBs have an important role to play in encouraging the change in culture. She says they should require that researchers show they have done a systematic review of the literature for relevant studies.
"Maybe a researcher is the first (to explore a particular question), but IRBs should ask investigators, 'How do you know you're the first?'" Robinson says. "Did you search for other related trials? Did you do a comprehensive search?
"I think investigators should be required to do a systematic review and IRBs need to be in a position to evaluate whether they have actually done that."
She says some funding agencies and journals have instituted requirements in recent years that researchers conduct systematic reviews.
Robinson says she's continuing to look at this issue and is examining meta-analyses from 2009 to see if citations have improved.