Reproducibility Bibliography

Author(s): Kelly Laas
Year: 2016
DOI: https://doi.org/10.18130/22mr-7377

Description

A bibliography of guidelines, teaching tools, books, and articles on reproducibility in research. 

Body

Guidelines for Transparency and Openness Promotion (TOP) in Journal Policies and Practices
This document provides template guidelines to enhance transparency in the science that journals publish. Developed and sponsored by the Center for Open Science, the guidelines are meant to be adapted for use by journals and funders.

National Institutes of Health Rigor and Responsibility Guidelines
In 2016, the NIH adopted guidelines intended to help improve the transparency and reproducibility of research funded by the agency.

Overview

Challenges in Irreproducible Research (2015)
A Nature special issue looking at the growing alarm about research results that cannot be reproduced. Includes a number of editorials on how authors can make their research more transparent and reproducible.

Data Replication and Reproducibility (2011)
A Science Magazine special issue on reproducibility, including how new tools and technologies, the rise of interdisciplinary approaches to research, and the growing complexity of the questions being asked are complicating replication efforts.

Reproducibility Project: Cancer Biology (2015)
A collaborative project aimed at independently replicating selected results from high-profile papers in the field of cancer biology.

Reproducibility Project: Psychology (2015)
Website of the Reproducibility Project, the research team that published the much-cited article “Estimating the Reproducibility of Psychological Science” in August 2015. The group conducted replications of 100 experimental and correlational psychology studies published in three journals and found that over half failed to replicate. The site includes more information on the study and links to similar initiatives in other disciplines.

Video: The Basics of Reproducibility
A video of Brian Nosek, director of the Center for Open Science, discussing reproducibility and replicability.

Books

Atmanspacher, Harald, and Sabine Maasen. (2016). Reproducibility: Principles, Problems, Practices, and Prospects. New York: Wiley.
A collection of essays that can serve as a guide for researchers who are interested in the general and overarching questions behind the concept of reproducibility, as well as practices and techniques that have been developed to safeguard reproducible research.

Stodden, Victoria, Friedrich Leisch, and Roger D. Peng. (2013). Implementing Reproducible Research. Boca Raton, FL: CRC Press/Taylor and Francis.
In computational science, reproducibility requires that researchers make code and data available to others so that the data can be analyzed in a similar manner as in the original publication. Code must be available to be distributed, data must be accessible in a readable format, and a platform must be available for widely distributing the data and code.

Articles

Alberts, B. et al. (2015). Self-correction in Science at Work. Science, 348(6242), 1420-1422. doi:10.1126/science.aab3847
Week after week, news outlets carry word of new scientific discoveries, but the media sometimes gives suspect science equal play with substantive discoveries. Careful qualifications about what is known are lost in categorical headlines. An October 2013 Economist headline proclaimed “Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not.” Yet that article is also rich with instances of science policing itself, which is how the problems came to The Economist's attention in the first place. In light of such issues and efforts, the U.S. National Academy of Sciences (NAS) and the Annenberg Retreat at Sunnylands convened a group of experts to examine ways to remove some of the current disincentives to high standards of integrity in science.

Anderson, J. A., Eijkholt, M., & Illes, J. (2013). Ethical reproducibility: towards transparent reporting in biomedical research. Nature Methods, 10(9), 843-845. doi:10.1038/nmeth.256.
Optimism about biomedicine is challenged by the increasingly complex ethical, legal and social issues it raises. Reporting of scientific methods is no longer sufficient to address the complex relationship between science and society. To promote 'ethical reproducibility', the authors call for transparent reporting of research ethics methods used in biomedical research.

Begley, C. G., Buchan, A. M., & Dirnagl, U. (2015). Robust research: Institutions must do their part for reproducibility. Nature, 525(7567), 25-27. doi:10.1038/525025a
Discusses the important role institutions such as hospitals, universities, government-supported labs, and independent research institutes have in improving the robustness of scientific research. This includes trying to make sure the incentive to be right is stronger than the incentive to publish first, making sure research methods are clear, modeling and promoting good scientific practice, and having clear reporting systems where individuals can raise questions and concerns without fear of retribution.

Buck, S. (2015). Solving reproducibility. Science, 348(6242), 1403. doi:10.1126/science.aac8041
This editorial reviews recent misconduct cases in cell biology and psychology and discusses some potential tools and practices that can help increase the transparency and reproducibility of science.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). doi:10.1126/science.aac4716
This article helped push the debate about reproducibility to scientific publishers, funding agencies, and researchers around the world. The Open Science Collaboration replicated 100 published psychology studies and found that only 39 of the replications succeeded. The article discusses reasons behind this finding and why reproducibility is an important part of scientific practice.

Freedman, L. P., Cockburn, I. M., & Simcoe, T. S. (2015). The economics of reproducibility in preclinical research. PLoS Biology, 13(6), e1002165.
An analysis of past studies indicates that the cumulative prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28 billion per year spent on preclinical research that is not reproducible in the United States alone. In response, the authors outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help accelerate the discovery of life-saving therapies and cures.

Hines, W. C., Su, Y., Kuhn, I., Polyak, K., & Bissell, M. J. (2014). Sorting Out the FACS: A Devil in the Details. Cell Reports, 6(5), 779-781. doi:10.1016/j.celrep.2014.02.021
The article describes two laboratories that collaborated on a project focusing on the heterogeneity of the human breast. Despite using seemingly identical methods, reagents, and specimens, the two laboratories were unable to replicate each other’s fluorescence-activated cell sorting (FACS) results. The article discusses the problems encountered in trying to reproduce each other’s research and offers advice to colleagues.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. doi:10.1371/journal.pmed.0020124
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, the author discusses the implications of these problems for the conduct and interpretation of research.

Kenall, A., Edmunds, S., Goodman, L., Bal, L., Flintoft, L., Shanahan, D. R., & Shipley, T. (2015). Better reporting for better research: a checklist for reproducibility (editorial). BMC Neuroscience, 1-3.
The article discusses how the open-access publisher BioMed Central has adopted a checklist aimed at improving the reproducibility of the research results it publishes, including experimental design and statistics, resources, and availability of data and materials.

McNutt, M. (2014, November 7). Journals unite for reproducibility (editorial). Science, p. 679. doi:10.1126/science.aaa1724
An editorial from the Editor-in-Chief of Science discussing the importance of reproducibility in scientific publication and a June 2014 conference of major scientific publishers that ended in the adoption of the Principles and Guidelines for Reporting Preclinical Research. These guidelines are now part of the U.S. National Institutes of Health policy on reproducibility.

Mullane, K., & Williams, M. (2015). Unknown unknowns in biomedical research: does an inability to deal with ambiguity contribute to issues of irreproducibility? Biochemical Pharmacology, 97(2), 133-136. doi:10.1016/j.bcp.2015.07.002
The authors argue that despite improved guidelines and tools for strengthening reproducibility, many researchers are not adopting them quickly enough, and journals and funding agencies have not imposed consequences strict enough to force adoption. Contributing to this is a reductionist mindset that prioritizes certainty in research outcomes over the ambiguity intrinsic to biological systems. Changing researchers’ mindsets in both of these areas is necessary to reduce issues with reproducibility.

Nosek, B. A. et al. (2015). Promoting an Open Research Culture. Science, 348(6242), 1422-1425. doi:10.1126/science.aab2374
Transparency, openness, and reproducibility are readily recognized as vital features of science. When asked, most scientists embrace these features as disciplinary norms and values. Yet a growing body of evidence suggests that these norms are not routine in daily scientific practice.

Poldrack, R. A., & Poline, J.-B. (2015). The publication and reproducibility challenges of shared data. Trends in Cognitive Sciences, 19(2), 59-61. doi:10.1016/j.tics.2014.11.008
As the amount of shared data available from scientific research continues to grow, it also raises some challenges of how to analyze these shared datasets. The authors discuss some strategies for how to re-analyze this data, how to deal with cases of irreproducible research (and how often this is not due to a case of scientific misconduct), and how credit should be assigned to data creators.

Ram, K. (2013). Git can facilitate greater reproducibility and increased transparency in science. Source Code for Biology & Medicine, 8(1), 1-8. doi:10.1186/1751-0473-8-7
Describes the open-source software Git, which provides a framework for managing research outputs such as datasets, field notes, statistical code, figures, lab notes, and manuscripts. The author goes on to highlight how this tool can be used to make science more reproducible and transparent, foster new collaborations, and support novel uses.
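As a minimal sketch of the kind of workflow Ram describes, assuming Git is installed (the project, file, and tag names here are hypothetical examples, not taken from the article):

```shell
# Put a research project's data and code under version control.
mkdir -p myproject/data myproject/code
cd myproject
git init -q
git config user.name "A Researcher"             # identity is needed once
git config user.email "researcher@example.org"  # per repository to commit

# Record a dataset and the script that analyzes it.
echo "id,value" > data/measurements.csv
echo 'print("analysis")' > code/analyze.py
git add data/measurements.csv code/analyze.py
git commit -q -m "Add raw measurements and analysis script"

# Tag the exact state used for a manuscript, so others can check out
# and re-run precisely what was published.
git tag v1.0-manuscript
```

Every subsequent change to the data or code then becomes a recorded, attributable revision, which is the property that makes version control useful for reproducibility and transparency.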

Springate, D. A., Kontopantelis, E., Ashcroft, D. M., Olier, I., Parisi, R., Chamapiwa, E., & Reeves, D. (2014). ClinicalCodes: An Online Clinical Codes Repository to Improve the Validity and Reproducibility of Research Using Electronic Medical Records. PLoS ONE, 9(6), 1-6. doi:10.1371/journal.pone.0099825
Discusses the importance of making clinical codes in human health research available to reviewers and fellow researchers in order to allow for the replication of database studies, and describes a new database developed to make clinical codes more widely available.

Teixeira da Silva, J. (2015). Negative results: negative perceptions limit their potential for increasing reproducibility. Journal of Negative Results in BioMedicine, 14(1), 1-4. doi:10.1186/s12952-015-0033-9
This editorial discusses how negative research results have been an important building block in scientific progress, and yet few of these negative results get published. The author argues that the mindset favoring the publication of only positive results needs to change in order to advance science and help increase its integrity.

“Trouble at the lab”. The Economist, Oct 19, 2013.
This news article gives a good overview of cases of irreproducible research in psychology, drug development, and other fields, and discusses how the issue of reproducibility is both a scientific and an economic problem.

Teaching Tools

Duke Saga Starter Kit (2012)
This is a collection of slides, videos, and articles covering the 2010 Duke research scandal, in which researchers claimed that analyzing a cancer’s genes would help find the best way to treat it. Keith Baggerly and Kevin Coombes, statisticians at M.D. Anderson Cancer Center, attempted to reproduce the study and found major problems with its conclusions. While somewhat complex, this is an excellent story about the emerging field of forensic bioinformatics and why reproducibility is so important.

Citation
Kelly Laas. (2016). Reproducibility Bibliography. Online Ethics Center. DOI: https://doi.org/10.18130/22mr-7377. https://onlineethics.org/cases/oec-bibliographies/reproducibility-bibliography.