Reproducibility Subject Aid

Author(s)
Kelly Laas
Year
2017
DOI
https://doi.org/10.18130/wd67-0f50
Description

A short guide to some key resources and readings on the topic of reproducibility.

Body

Reproducibility is the ability of an entire analysis of an experiment or study to be duplicated, either by the same researcher or by someone else working independently. It is a core principle of scientific progress: scientific claims should gain credence not because of the status or authority of their originator but because of the replicability of their supporting evidence. Scientists attempt to describe transparently the methodology and resulting evidence used to support their claims. Other scientists then agree or disagree that the evidence supports those claims, citing theoretical or methodological reasons or collecting new evidence. Such debates are meaningless, however, if the evidence being debated is not reproducible.

Open Science Collaboration. 2015. “Estimating the reproducibility of psychological science.” Science 349 (6251). doi:10.1126/science.aac4716.

Subject Overviews

Alberts, B., et al. 2015. “Self-correction in science at work.” Science 348 (6242): 1420-1422. doi:10.1126/science.aab3847. https://brucealberts.ucsf.edu/wp-content/uploads/2016/05/Selfcorrection-Science-article-2015.pdf

Week after week, news outlets carry word of new scientific discoveries, but the media sometimes give suspect science equal play with substantive discoveries. Careful qualifications about what is known are lost in categorical headlines. Rare instances of misconduct, or of irreproducibility, are translated into concerns that science is broken. An October 2013 Economist headline proclaimed “Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not” (1). Yet that article is also rich with instances of science both policing itself, which is how the problems came to The Economist's attention in the first place, and addressing discovered lapses and irreproducibility concerns. In light of such issues and efforts, the U.S. National Academy of Sciences (NAS) and the Annenberg Retreat at Sunnylands convened our group to examine ways to remove some of the current disincentives to high standards of integrity in science.

Ioannidis, John P. A. 2005. “Why most published research findings are false.” PLoS Medicine 2 (8): e124. doi:10.1371/journal.pmed.0020124. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; when there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, the author discusses the implications of these problems for the conduct and interpretation of research.
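The arithmetic behind this argument is compact enough to sketch. Below is a minimal illustration in Python (the parameter values are invented for illustration, not taken from the paper): it computes the probability that a claimed finding is true from statistical power, the significance threshold, the ratio of true to null relationships a field probes, and a bias term in the spirit of the essay's u.

    def ppv(power, alpha, prior_odds, bias=0.0):
        """Probability that a claimed research finding is true.

        prior_odds is R, the ratio of true to null relationships probed in
        a field; bias is u, the fraction of analyses reported as findings
        regardless of what the data show (after Ioannidis 2005).
        """
        beta = 1.0 - power
        R, u, a = prior_odds, bias, alpha
        true_pos = R * ((1 - beta) + u * beta)  # real effects claimed as findings
        false_pos = a + u * (1 - a)             # null effects claimed anyway
        return true_pos / (true_pos + false_pos)

    # Well-powered confirmatory field, no bias: most claims hold up.
    print(ppv(power=0.8, alpha=0.05, prior_odds=0.5))            # ~0.89
    # Low-powered exploratory field with some bias: most claims are false.
    print(ppv(power=0.4, alpha=0.05, prior_odds=0.1, bias=0.2))  # ~0.18

Lowering power or prior odds, or raising the bias term, quickly drives the value below one half, which is the essay's central claim.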

Kitzes, J., D. Turek, and F. Deniz, eds. 2018. The Practice of Reproducible Research: Case Studies and Lessons from the Data-Intensive Sciences. Oakland, CA: University of California Press. https://www.practicereproducibleresearch.org.

The Practice of Reproducible Research presents concrete examples of how researchers in the data-intensive sciences are working to improve the reproducibility of their research projects. Each of the thirty-one case studies in this volume describes the workflow that an author used to complete a real-world research project, highlighting how particular tools, ideas, and practices have been combined to support reproducibility. Emphasis is placed on the very practical how, rather than the why or what, of conducting reproducible research.
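As a flavor of that practical how, here is a minimal sketch, not drawn from the book itself, of two habits that recur across its case studies: fixing random seeds so a rerun produces identical numbers, and saving results together with enough provenance to reproduce them later. The file name and the toy analysis are illustrative.

    import json
    import platform
    import random
    import sys

    SEED = 42  # fixed seed: rerunning this script yields identical numbers
    random.seed(SEED)

    # A stand-in for a real analysis step.
    sample = [random.gauss(0, 1) for _ in range(1000)]
    result = {"n": len(sample), "mean": sum(sample) / len(sample)}

    # Store the result with the provenance needed to reproduce it.
    record = {
        "result": result,
        "seed": SEED,
        "python": sys.version,
        "platform": platform.platform(),
    }
    with open("analysis_record.json", "w") as f:
        json.dump(record, f, indent=2)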

Stodden, Victoria, et al. 2016. “Enhancing reproducibility for computational methods.” Science 354 (6317): 1240-1241. doi:10.1126/science.aah6168. http://web.stanford.edu/~vcs/papers/ERCM2016-STODDEN.pdf

The authors present a set of “Reproducibility Enhancement Principles” that specifically address issues arising in computational research.
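One of those principles is tying published results to persistent, citable digital artifacts. As a hedged sketch of that idea in practice (the paths are hypothetical, and this is a habit in the spirit of the principles rather than code from the paper), an analysis can record a cryptographic checksum of every data and code file a result depends on, so readers can verify they are rerunning the same inputs:

    import hashlib
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Return the SHA-256 checksum of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Fingerprint every artifact the result depends on (paths illustrative).
    for artifact in [Path("data/input.csv"), Path("analysis.py")]:
        if artifact.exists():
            print(f"{sha256(artifact)}  {artifact}")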

Policy and Guidance

Buck, Stuart. 2015. “Editorial: Solving reproducibility.” Science 348 (6242): 1403. doi:10.1126/science.aac8041.

The reproducibility problem in science is a familiar issue, not only within the scientific community but among the general public as well. Recent developments in social psychology (such as fraudulent research by D. Stapel) and cell biology (the Amgen Inc. and Bayer AG reports on how rarely they could reproduce published results) have become widely known. Nearly every field is affected, from clinical trials and neuroimaging to economics and computer science. Obvious solutions include more research on statistical and behavioral fixes for irreproducibility, activism for policy changes, and demanding more pre-registration and data sharing from grantees. Two Perspectives in this issue (pp. 1420 and 1422) describe how journals and academic institutions can foster a culture of reproducibility. Transparency is central to improving reproducibility, but it is expensive and time-consuming. What can be done to alleviate those obstacles?

National Institutes of Health. 2016. “Rigor and Reproducibility.” Accessed May 1, 2017. http://grants.nih.gov/reproducibility/index.htm

In 2016, the NIH adopted guidelines intended to improve the transparency and reproducibility of the research this federal agency funds.

Nosek, B. A., et al. 2015. “Promoting an open research culture.” Science 348 (6242): 1422-1425. doi:10.1126/science.aab2374.

Transparency, openness, and reproducibility are readily recognized as vital features of science (1, 2). When asked, most scientists embrace these features as disciplinary norms and values (3). Therefore, one might expect that these valued features would be routine in daily practice. Yet a growing body of evidence suggests that this is not the case (4-6). The Transparency and Openness Promotion (TOP) Guidelines outlined in this article are available at the Center for Open Science website.

Bibliography

Online Ethics Center for Engineering and Science. 2015. “Reproducibility Bibliography.” Last updated November 2015. https://onlineethics.org/cases/reproducibility-bibliography

An expanded bibliography of guidelines, books, videos, and journal articles on reproducibility in science.

Citation
Kelly Laas. 2017. Reproducibility Subject Aid. Online Ethics Center. DOI: https://doi.org/10.18130/wd67-0f50. https://onlineethics.org/cases/oec-subject-aids/reproducibility-subject-aid.