Were the study’s findings misleading?
Ethical concerns of “data slanting”
When independent researchers re-analyzed data from a clinical trial supporting a drug’s efficacy in treating adolescent depression 14 years later, they reached a different conclusion: the drug did not show efficacy.1
“The main ethical implication is that much of the scientific literature on drug efficacy is not based on a primary value of finding out the truth, as often is assumed, but rather, is based on the primary value of maximizing marketing goals,” says Nassir Ghaemi, MD, MPH, professor of psychiatry and director of Tufts Medical Center’s Mood Disorders Program in Boston.
Ghaemi says journal editors, healthcare leaders, and governmental regulatory agencies should insist that pharmaceutical companies open up the process of data analysis of their clinical trials. “There needs to be a second level of confirmation of the claims made in interpretations of those data,” he says.
Ghaemi would like to see major scientific journals refuse to publish clinical trials from the pharmaceutical industry without independent verification of the interpretations of the data analyses from academic statisticians. “This step already happens with the Food and Drug Administration [FDA], which legally requires that pharmaceutical companies provide their databases for analyses by internal statisticians,” he says.
These data, which could reveal bias, aren’t published in scientific journals. “Only the pharmaceutical industry companies’ own analyses are published, unverified, and unchecked by anyone else,” says Ghaemi.
Some findings omitted
While people outside of scientific research probably view study findings as black and white, “the process of interpreting data may depend on who is doing the interpreting,” says Erick Turner, MD, associate professor of psychiatry at Oregon Health & Science University (OHSU)’s School of Medicine in Portland, and a senior scholar at OHSU’s Center for Ethics in Health Care. Turner previously worked at the FDA as a reviewer, where he became aware that drug companies were conducting studies that were not represented in the published literature.
An investigator who believes a treatment is effective will likely report the findings more positively than an investigator who expected the treatment to be ineffective. “The first author may try to find some aspects of the data that do suggest benefit, and try to find reasons why the study came out the way it did,” says Susan S. Ellenberg, PhD, professor of biostatistics at the University of Pennsylvania’s Perelman School of Medicine in Philadelphia. “The second author may write a paper emphasizing that the treatment simply doesn’t work.”
Caution must prevail where one set of authors accuses another set of slanting data, however. “Deliberately presenting data in a way that masks a true finding, when that finding is not desired by the investigators, is unethical,” says Ellenberg. “But one has to allow for the fact that honest investigators will often disagree about the implications of certain results.”
Investigators may fail to mention certain aspects of the study that, if known, would call the conclusions into question. “I certainly saw some of that when I was at the FDA,” says Ellenberg. “We had access to all the data, and we could see when authors left out of their papers things that didn’t jibe with their conclusions.”
A recent study examined the efficacy of FDA-approved second-generation antidepressants for anxiety disorders and found that the existence of reporting bias depended on how a given clinical trial turned out.2
If the trial was positive according to the FDA, the journal article would agree. “By contrast, if the FDA found that the trial results were negative, the corresponding journal article would usually convey a positive conclusion, thus disagreeing with the FDA,” says Turner, one of the study’s authors.
Before investigators were required to register trials on ClinicalTrials.gov, it wasn’t uncommon for investigators to write a paper focusing on something that appeared positive in their study, even if it didn’t relate to the primary study question, adds Ellenberg. “Readers would not know that this could easily be a chance finding — a fluke result of multiple looks at the data,” she says.
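Ellenberg’s point about “multiple looks at the data” is a standard statistical one: if a truly ineffective treatment is tested against many secondary endpoints, the odds that at least one looks “significant” by chance alone grow quickly. The simulation below is a minimal illustration of that idea, not anything from the article itself; the endpoint count, the 0.05 threshold, and the function name are illustrative assumptions.

```python
import random

# Under the null hypothesis (no real treatment effect), each endpoint's
# p-value is uniformly distributed on [0, 1], so any single test has a
# 5% chance of falling below the 0.05 threshold purely by chance.
def chance_of_false_positive(n_endpoints, n_studies=100_000, alpha=0.05, seed=42):
    """Fraction of simulated null studies in which at least one
    endpoint comes out 'significant' by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        # Draw one uniform p-value per endpoint; the study "finds
        # something" if any endpoint's p-value is below alpha.
        if any(rng.random() < alpha for _ in range(n_endpoints)):
            hits += 1
    return hits / n_studies

# With 10 independent secondary endpoints, roughly 40% of truly
# ineffective treatments will show at least one "positive" result
# (analytically: 1 - 0.95**10 ≈ 0.40).
print(chance_of_false_positive(10))
```

This is why pre-registering the primary endpoint matters: readers can then tell whether a reported “positive” finding was the question the trial set out to answer, or one of many looks at the data.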
Attempt to curb abuses
Research suggests that when clinical trials are funded and conducted by a pharmaceutical company, there is greater likelihood of bias in reporting of data.3,4
“This is a serious problem, affecting academic investigators as well as pharmaceutical companies,” says Ellenberg. “Over the past decade, there has been an attempt to curb these abuses.” Investigators need to report primary aims on ClinicalTrials.gov before starting the research, she says. This prevents researchers from publishing a manuscript that reports only favorable secondary aims. Investigators should also register every study prior to starting the research. “This way, we can know how many studies go unpublished — which are often those with less favorable results,” says Ellenberg.
The National Institutes of Health (NIH) is considering a requirement that all NIH-funded studies have their results deposited in ClinicalTrials.gov. Currently, the NIH requires data from studies it funds to be made available to any other investigator who wishes to explore and potentially re-analyze. “Transparency helps protect against gross distortions of study findings,” says Ellenberg.
David Hammond, director of Kenmore, WA-based Bastyr University’s Office of Research Integrity, says the key issue is the separation of the research and the publication. When an institutional review board (IRB) reviews a study, it examines the protocol and the proposed study design; the protocol usually includes a general description of the statistical methods to be run and the types of analysis planned.
“This portion of the protocol, unless the study is sponsored by industry and has a fleet of biostatisticians writing very detailed plans, is often a bit vague,” says Hammond. There is no additional IRB oversight of the analysis and the publication.
“The journals are often in a tough position, because they are unable to view the source data and verify the claimed results in the article,” says Hammond. This leaves the slant and potential bias in the reporting of the results solely in the hands of the investigators.
The public should not worry about these analyses and their effect on FDA approvals of products, says Hammond, since the FDA does review the source data looking for bias. “With the publication of results that do not have that FDA oversight, however, this safety check does not exist,” he says.
Hammond says ethical research can be supported with good clinical practice training for investigators, the existence of IRBs, and sound scientific and analytical principles employed in study protocols.
However, he concludes, “The only way to continue ensuring these good practices in the analysis and reporting is by continuing to re-evaluate and question the results that are published.”
1. Ghaemi N. A new look at an old study: How do we stop data spinning? Medscape. Oct. 20, 2015. http://www.medscape.com/viewarticle/852531.
2. Roest AM, de Jonge P, Williams CD, et al. Reporting bias in clinical trials investigating the efficacy of second-generation antidepressants in the treatment of anxiety disorders: A report of 2 meta-analyses. JAMA Psychiatry 2015; 72(5):500-510.
3. Lexchin J. Sponsorship bias in clinical research. Int J Risk Saf Med 2012; 24(4):233-242.
4. Chopra SS. Industry funding of clinical trials: Benefit or bias? JAMA 2003; 290(1):113-114.
- Susan S. Ellenberg, PhD, Professor of Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia. Phone: (215) 573-3904. Fax: (215) 573-4865. Email: [email protected].
- Nassir Ghaemi, MD, MPH, Professor of Psychiatry/Director, Mood Disorders Program, Tufts Medical Center, Boston, MA. Phone: (617) 636-5735. Fax: (617) 636-4852. Email: [email protected].
- David Hammond, Director, Office of Research Integrity, Bastyr University, Kenmore, WA. Phone: (425) 602-3416. Email: [email protected].
- Erick Turner, MD, Associate Professor of Psychiatry, School of Medicine, Oregon Health & Science University, Portland. Email: [email protected].