Bias in Research Subject Aid


A short guide to some key resources and readings on the topic of bias in research.


Bias is an inclination that influences judgment. The term may be used descriptively to mean a mere inclination, and some biases are for the most part benign. For example, widely accepted standards of statistical significance are biased in favor of the null hypothesis: typically, if an observed difference is likely to occur by chance more than 5 times in 100 samples (α = .05) or more than once in 100 samples (α = .01), researchers do not reject the null hypothesis and accept the inference that the treatment group is not significantly different from the control group. This bias errs on the side of not finding an effect where there is none, at the cost of overlooking a possible effect occurring at a lower level.
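The α = .05 decision rule above can be sketched in a short, self-contained example. The sketch uses a permutation test rather than a formal t-test, and the treatment and control numbers are made up for illustration; the point is only the decision rule, which fails to reject the null hypothesis of no treatment effect unless the observed difference would arise by chance in fewer than 5 of 100 random relabelings of the data.

```python
# Minimal sketch of the alpha = .05 decision rule, using a permutation
# test: how often does randomly relabeling the pooled observations
# produce a group difference at least as large as the one observed?
import random
import statistics

def permutation_p_value(treatment, control, n_permutations=10_000, seed=0):
    """Two-sided p-value for the difference in group means."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    observed = abs(statistics.mean(treatment) - statistics.mean(control))
    pooled = list(treatment) + list(control)
    n = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = abs(statistics.mean(pooled[:n]) - statistics.mean(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical data for illustration only.
treatment = [5.1, 5.4, 6.0, 5.8, 5.9, 6.2, 5.5, 5.7]
control = [4.8, 5.0, 4.9, 5.2, 4.7, 5.1, 5.0, 4.9]

alpha = 0.05
p = permutation_p_value(treatment, control)
print(f"p = {p:.4f}; reject null hypothesis at alpha = {alpha}: {p < alpha}")
```

Note the asymmetry the paragraph describes: a true effect too small to clear the α threshold in this sample would simply go undetected, since the rule is designed to guard against false positives rather than false negatives.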

More often bias denotes an inclination that influences judgment in a way that distorts inferences or that unfairly advantages some and disadvantages others. "Prejudice," a synonym for bias in this pejorative sense, also suggests prejudgment: commitment to an outcome in advance of the evidence that would support it. Bias may arise under the influence of personal motivations, such as the desire to be first to publish or to promote a particular policy or outcome. Bias also may reflect prejudicial judgments about the honesty or ability of social groups, such as showing favor toward "Aryan science" and against "Jewish science."

Biases can be conscious or subconscious, explicit or implicit, and subconscious and implicit biases are by their nature very difficult to identify through introspection. Conflict of interest is a legal term that denotes conditions where one has a personal stake in an outcome that is strong enough to jeopardize one's ability to evaluate evidence fairly and draw a sound conclusion. Conflict of interest often arises in settings where a person is acting as an agent for others and also has a personal stake in the outcome, such as a government official managing a research funding competition that includes a proposal from a close friend or family member. These are circumstances where the potential for bias and public harm is great enough that legal requirements to address the conflicts exist.

Bias may arise in all phases of research: problem identification, population selection, methodology and data collection, geographical delineation, analysis and interpretation of results, and presentation and evaluation of findings. A widely shared set of biases may distort or close off important lines of inquiry. For example, biases in the selection of human subjects for biomedical research can obscure treatment effects that differ by age, sex, or ethnicity. Much bias can be minimized through careful attention to the components of research, by identifying and addressing sources of error, and by acknowledging study limits and financial or other personal interests.

The bias that cannot in principle be eliminated in the work of scientific investigators, in contrast to bias or prejudice that can and should be eliminated, is also an important topic in research ethics. For example, the way disciplinary training inclines people to interpret the results of an experiment in terms of the established categories of that discipline is an enduring and biasing feature of research, and one that must be taken into account in assessing responsible conduct. Of course, researchers may hold disciplinary biases and still be unbiased in other respects — for instance, they may be impartial on the question of the truth or falsity of a particular research hypothesis.

Other aspects of bias arise in the social institutions of research. For instance, gender and race discrimination may affect a wide range of research components ranging from mentoring of graduate students to selection of research subjects, to review of proposals or publications, to employment and opportunities for advancement in science and engineering. The social institutions of research may make choices that privilege certain socio-economic classes or create risks for others.

Uses to which research is put can also be biased, advertently or inadvertently. Bias that has gone unnoticed in earlier phases of research can result in biased use; but individuals or groups may also be aware of a bias and use the results for their own purposes nonetheless.

Implicit bias is discussed by Michael Brownstein in "Implicit Bias," The Stanford Encyclopedia of Philosophy (2015 Edition), Edward N. Zalta (ed.). See particularly Section 4, "Ethics." Accessed July 5, 2016.

The OEC subject aids “Responsible Innovation” and “Reproducibility” also address issues of bias in research.



The National Academies of Sciences, Engineering, and Medicine. 2017. Fostering Integrity in Research. Washington, DC: The National Academies Press.

The Summary chapter of this extensive report lists eleven recommendations to help researchers, research institutions, research sponsors, journals, and societies to promote research integrity (pp. 1-8). Besides scientific and research misconduct, the report and its recommendations address issues of detrimental research practice that result in biased or weak studies and results. The report notes in particular that systematic biases lead to a high incidence of false positive results (p. 70) and to a lack of reproducibility, and it points to organizational shortcomings that contribute to these problems (pp. 78-79). Disciplines, fields, and professional societies can take steps to minimize the biases that lead to irreproducibility (pp. 112-113).

Quantitative and Qualitative Research

Delzell, Darcie A. P., and Cathy D. Poliak. 2013. "Karl Pearson and Eugenics: Personal Opinions and Scientific Rigor."  Science and Engineering Ethics 19 (3):1057-1070.

The influence of personal opinions and biases on scientific conclusions is a threat to the advancement of knowledge. Expertise and experience do not render one immune to this temptation. In this work, one of the founding fathers of statistics, Karl Pearson, is used as an illustration of how even the most talented among us can produce misleading results when inferences are made without caution or reference to potential bias and other analysis limitations.

Sarniak, Rebecca. 2015. "9 Types of Research Bias and How to Avoid Them." Quirk's Marketing Research Media. Accessed June 28, 2016.

To reduce the risk of bias in qualitative studies, researchers must focus on the human elements likely to contribute to inaccuracies in responses. The author describes nine of them, grouped into two categories: respondent bias and researcher bias. The former includes such items as acquiescence and social desirability bias; the latter includes leading questions, wording bias, and the halo effect.

Biomedical Research Issues

Pannucci, Christopher J., and Edwin G. Wilkins. 2010. "Identifying and Avoiding Bias in Research." Plastic and Reconstructive Surgery 126(2): 619.

This narrative review provides an overview on the topic of bias as part of the journal’s series of articles on evidence-based medicine. Bias can occur in the planning, data collection, analysis, and publication phases of research. Understanding research bias allows readers to critically and independently review the scientific literature and avoid treatments which are suboptimal or potentially harmful. A thorough understanding of bias and how it affects study results is essential for the practice of evidence-based medicine. The discussion in the article is pertinent to many scientific and engineering fields.

Smith, Joanna, and Helen Noble. 2014. "Bias in Research." Evidence Based Nursing 17(4): 100-101.

The aim of this article is to outline types of 'bias' across research designs and to consider strategies to minimize bias. Evidence-based nursing, defined as the "process by which evidence, nursing theory, and clinical expertise are critically evaluated and considered, in conjunction with patient involvement, to provide the delivery of optimum nursing care," is central to the continued development of the nursing profession. Implementing evidence into practice requires nurses to critically evaluate research, in particular assessing methodological quality and factors that may have biased findings.

Dickersin, Kay, S. Chan, T. C. Chalmers, H. S. Sacks, and H. Smith Jr. 1987. "Publication Bias and Clinical Trials." Controlled Clinical Trials 8(4): 343-353.

This study evaluated the extent to which the medical literature may be misleading as a result of selective publication of randomized clinical trials (RCTs) that showed a statistically significant treatment effect. Three hundred eighteen authors of published trials were asked whether they had participated in any unpublished RCTs; 156 respondents reported 271 unpublished and 1,041 published trials. Of the 178 completed unpublished RCTs with a trend specified, 26 (14%) favored the new therapy, compared to 423 of 767 (55%) published reports (p < 0.001). For trials that were completed but not published, the major reasons for nonpublication were "negative" results and lack of interest. From the data provided, it appears that nonpublication was primarily a result of failure to write up and submit the trial results rather than rejection of submitted manuscripts. These results imply the existence of a publication bias of importance both to meta-analysis and to the interpretation of statistically significant positive trials.

Research Standards

National Academies, Committee on Science, Engineering, and Public Policy. 2009. On Being a Scientist: A Guide to Responsible Conduct in Research, 3rd ed. Washington, DC: National Academies Press.

The scientific research enterprise is built on a foundation of trust. Scientists trust that reported results are valid. Society trusts that the reports are accurate and unbiased. But this trust will endure only if the scientific community devotes itself to ethical scientific conduct. This guide supplements the informal lessons in ethics from research supervisors and mentors. It describes the ethical foundations of scientific practices and some of the personal and professional issues that researchers encounter in their work in all forms of research in academic, industrial, or governmental settings, and all scientific disciplines. It includes a number of hypothetical scenarios with guidance in thinking about and discussing them. Aimed primarily at graduate students and beginning researchers, its lessons apply to scientists at all stages of their careers.

InterAcademy Partnership (IAP). 2016. Doing Global Science: A Guide to Responsible Conduct in the Global Research Enterprise. Princeton NJ: Princeton University Press.

This concise introductory guide explains the values that should inform the responsible conduct of scientific research in today's global setting. It includes a definition of bias, examples of contexts where it may arise and options to reduce its occurrence.  

Policy or Guidance

Alberts, Bruce, Ralph J. Cicerone, Stephen E. Fienberg, Alexander Kamb, Marcia McNutt, Robert M. Nerem, Randy Schekman, Richard Shiffrin, Victoria Stodden, Subra Suresh, Maria T. Zuber, Barbara Kline Pope, and Kathleen Hall Jamieson. 2015. "Self-correction in science at work."  Science 348 (6242):1420-1422.

The article discusses ways to improve incentives in science in order to support and promote research integrity versus volume. Topics include the tendency for errors in science to be criticized and eventually corrected, the establishment of reproducibility and transparency requirements by science publications, and the implementation of reform to eliminate bias in the research review process.

InterAcademy Council and IAP.  2012. Responsible Conduct in the Global Research Enterprise. Trieste, Italy: IAP Secretariat.

Significant differences among countries have been revealed in the definitions of and approaches to the conduct of responsible research. These urgent issues are being addressed by the world’s national scientific academies through their representative international organizations, the InterAcademy Council (IAC) and the IAP – the global network of science academies. This report, sponsored by IAC and IAP, represents the first joint effort by the scientific academies to provide clarity and advice in forging an international consensus on responsible conduct in the global research enterprise. It acknowledges and draws on information and recommendations from the many national and international organizations that have issued guidelines and statements on the basic responsibilities and obligations of researchers.

McNutt, Marcia. 2016. "Implicit Bias" (editorial). Science 352(6289), 27 May.

This editorial reports on a panel representing major journals that discussed the potential for implicit bias affecting acceptance of manuscripts for publication. While gender discrepancies appear to have disappeared, discrepancies in publication rates for manuscripts from different countries and regions remain. Possibilities other than quality differentials include reviewer inferences about country of origin. Additionally, the pool of reviewers is dominantly male and from Western nations; both the pool of reviewers and editors would benefit from broadening, diversification, and internationalization.

Sarewitz, Daniel. 2012. "Beware the Creeping Cracks of Bias." Nature News. May 9.

Evidence is mounting that research is riddled with systematic errors. Left unchecked, this could erode public trust. Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends not just on competition between researchers, but also on the close ties between science and its application that allow society to push back against biased and useless results.  Where these ties are less close, it may be more difficult to address errors.


Sanderson, Simon, I. D. Tatt, and J. P. Higgins. 2007. "Tools for Assessing Quality and Susceptibility to Bias in Observational Studies in Epidemiology: A Systematic Review and Annotated Bibliography." International Journal of Epidemiology 36(3): 666-676. Epub 2007 Apr 30.

Assessing quality and susceptibility to bias is essential when interpreting primary research and conducting systematic reviews and meta-analyses. The researchers identified assessment tools for observational epidemiological studies from a search of three electronic databases, bibliographies, and an Internet search using Google. Two reviewers extracted data using a pre-piloted extraction form and strict inclusion criteria. Tool content was evaluated for domains potentially related to bias. Eighty-six tools were reviewed: 41 simple checklists, 12 checklists with additional summary judgments, and 33 scales. The number of items ranged from 3 to 36 (mean 13.7). One-third of tools were designed for single use in a specific review and one-third for critical appraisal. Half of the tools provided development details, although most were proposed for future use in other contexts. Most tools included items for selection methods (92%), measurement of study variables (86%), design-specific sources of bias (86%), control of confounding (78%), and use of statistics (78%); only 4% addressed conflict of interest. The distribution and weighting of domains across tools was variable and inconsistent. The report concludes that while a number of useful assessment tools have been identified, a need remains to agree on critical elements for assessing susceptibility to bias in observational epidemiology and to develop appropriate evaluation tools. Tools should be rigorously developed, evidence-based, valid, reliable, and easy to use.

Brownstein, Michael. 2015. "Bibliography," in "Implicit Bias," The Stanford Encyclopedia of Philosophy (2015 Edition), Edward N. Zalta (ed.). Accessed July 5, 2016.

Anique Olivier-Mason and Rachelle Hollander. Bias in Research Subject Aid. Online Ethics Center.