Historically, the value of scientific research has been undermined to some degree by lack of reproducible results, unpublished data, and studies that achieve statistical significance but are “false positives,” says Donald Sacco, PhD, chair of the IRB and assistant professor of social psychology at the University of Southern Mississippi.

Some of these trends are fueled by researchers’ acceptance of “questionable research practices (QRPs),” Sacco and colleagues noted in a recent study.1

“A confluence of factors has likely contributed to low reproducibility rates in scientific research, including publication biases, a lack of direct replications, or a focus on statistical significance without considering sampling error,” they reported. “However, some evidence suggests certain factors significantly influence engagement in research behaviors that may undermine ethical scientific conduct and contribute to lower research reproducibility.”

They created a one-hour training course designed to “mitigate endorsement of research practices that raise ethical concerns and are detrimental to reproducible science in psychology graduate students.” The researchers assessed attitudes toward QRPs one week before the training, one week after it, and again at a two-month follow-up.

“Participants reported QRPs as less ethically defensible one week following the intervention compared with one week prior, with attitudes at two-month follow-up falling in between these time points,” researchers found.

IRB Advisor asked Sacco to comment on the implications of the study in the following interview, which has been edited for length and clarity.

IRB Advisor: As chair of your institution’s IRB, can you comment on implications of this social science research for other IRBs?

Sacco: I don’t want to extrapolate too far beyond the limited data we have. The IRB’s primary responsibility is human subjects research, but they may want to have a conversation about principal investigators submitting data management plans, much like granting agencies now expect. It is a well-defined plan on the data you are going to collect, the sample size, and the analyses you will be doing. It is then transparent that you are not going to be engaging in any of these potentially detrimental research practices.

IRB Advisor: Can you describe in more detail some of these research practices?

Sacco: What we looked at — and these concerns have been developing over roughly the last decade — are what are known as researcher “degrees of freedom.” At various stages of a project, you can make decisions about how you implement your methodology. These include how you sample participants, when you stop sampling, the various ways you might choose to analyze your data in terms of the variables included, and whether you report all of the analyses you have done. Any one of these decisions can be benign, but statistical simulations have shown that some of these practices actually inflate what are known as Type I error rates. They can lead to statistically significant results, but not because those results are meaningful scientific outcomes; they are just idiosyncratic, statistically significant false positives.

IRB Advisor: Can you give an example of this kind of error?

Sacco: One example is having multiple dependent measures of the same construct, analyzing each of them independently, and then also analyzing a composite of them. Any time you increase the number of statistical analyses you run, you will occasionally get a statistically significant result that is a false positive. Some people will run these multiple statistical analyses, which inflates the likelihood of this type of error.

Another is looking at your data partway through, seeing that the results are approaching significance, and collecting additional data to try to push them over the threshold. [You should have] a hard and fast data collection stopping rule at the beginning of your study. These are all decisions a researcher can make throughout the process that can have negative consequences.

IRB Advisor: You found heightened awareness of these problems, but the effect faded to some degree.

Sacco: We assessed people’s attitudes about the ethical defensibility of these various practices. We found a statistically significant reduction in perceived ethical defensibility from time one — a week before the intervention — to time two, the week following it. At the two-month follow-up, the mean defensibility rating fell somewhere between where attitudes were before the intervention and one week after it. Attitudes didn’t return to baseline, but the change also was not fully maintained at two months; there was some dissipation. To have a more lasting impact, we might need to implement something more substantial than a one-hour intervention. It could be a series of seminars over the course of a semester.

IRB Advisor: What are the ethical implications of not publishing all of the data generated by a study?

Sacco: In the context of detrimental research practices, a researcher could have an entire program of research that involves multiple studies. All of those studies are, hopefully, going to converge to a relative amount of support for a series of hypotheses. Maybe they put forward a research question and hypothesis, and they run five total studies that they anticipate producing supportive findings. But, say only three of those studies actually generate support for the predicted hypothesis. So, a researcher, unbeknownst to the larger research community, could publish the three studies together that seem to support the general hypothesis. They could omit publishing the two studies that “didn’t work,” the idea being that they didn’t work because [the results] were not supportive of the overall hypothesis. The research community would get what they would think was a clearly supported picture of this research idea because of the selective publication of favorable results — and omission of those that might give a more comprehensive picture.


  1. Sacco DF, Brown M. Assessing the efficacy of a training intervention to reduce acceptance of questionable research practices in psychology graduate students. J Empir Res Hum Res Ethics 2019;14:209-218. doi:10.1177/1556264619840525.