Author's Commentary on "Do the Ends Justify the Means? The Ethics of Deception in Social Science Research"

The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, issued in 1979, elucidates three comprehensive principles relevant to the ethical practice of human subjects research: 1) respect for persons, 2) beneficence, and 3) justice. The first principle, respect for persons, is particularly relevant to the question of deception in research. The report states that "respect for persons demands that subjects enter into the research voluntarily and with adequate information" (p. 4). It goes on to apply this principle to formulate the requirement that subjects must give their informed consent to participate in research. This requirement of full and complete disclosure is waived, however, when

informing subjects of some pertinent aspect of the research is likely to impair the validity of the research. In many cases, it is sufficient to indicate to subjects that they are being invited to participate in research of which some features will not be revealed until the research is concluded. Such research is justified only if it is clear that 1) incomplete disclosure is truly necessary to accomplish the goals of the research, 2) there are no undisclosed risks to subjects that are more than minimal, and 3) there is an adequate plan for debriefing subjects, when appropriate, and for dissemination of research results to them. (p. 6)

The report uses the phrase "incomplete disclosure" to indicate that its criteria apply not only to instances of outright deception in research but also to cases in which the researcher has misled subjects or has given them only partial information. I use the term "deception" here to describe all such situations in which subjects consent to participate in research on the basis of less-than-complete information. My analysis sets aside an admittedly relevant question: whether the degree of disclosure makes a difference in deciding the ethical questions. In each of the cases outlined above, the researcher proposes to use some form of deception as a way of obtaining valid research results. In what follows, I analyze each of the three cases in light of the Belmont Report's criteria for ethically responsible research involving deception of subjects.

In Case 1, the researcher justifies deception on the grounds that awareness of her purposes will bias subjects' responses. Research in the field of social psychology has demonstrated that subjects' self-reports of attitudes can be influenced by a number of factors, including the subjects' desire to please the experimenter. It seems, therefore, that the research in this case meets the report's first criterion, that incomplete disclosure is necessary to accomplish the purposes of the research. The proposed research also meets the second and third criteria: This sort of attitude research does not seem to involve potential harm to subjects, and the researcher has included a plan for debriefing subjects following their participation.

Cases 2 and 3 similarly seem to require an element of deception to accomplish the purposes of the research. In Case 2, the study of conformity requires that subjects not be fully informed, or their behavior would not be spontaneous. The same reasoning applies to Case 3 -- subjects who knew that the research was measuring helping behavior would, quite naturally, help! Cases 2 and 3 differ, however, on the second criterion: undisclosed risks that pose potential harm to subjects. The research on group conformity is not likely to pose a risk to subjects; they are merely discussing a controversial issue, then reporting their attitudes. The research on helping behavior, on the other hand, is likely to entail some degree of harm to subjects. The experimental setup places subjects in a situation that requires a difficult choice (to act or not to act) and then complicates that choice with the powerful influence of others. Subjects are likely to experience mild to extreme distress in such a situation. Case 3, therefore, does not meet the Belmont Report's second criterion of avoiding all but minimal risk to subjects.

With regard to the principle of voluntary consent, both Cases 2 and 3 are suspect. The researcher is also the instructor for the course, which presents a dilemma for students who may be uncomfortable about participating in an experiment. Although the researcher has included an alternative to participation (the 50-page paper), does this option constitute a true alternative? That is, is the option of not participating equally palatable from the student's standpoint? Consider that students may choose to participate in the experiment in spite of their apprehension because the paper option adds heavily to their workload compared to the one-time, one-hour appointment with the researcher.

These issues are complicated when the debriefing of subjects is considered. In Case 2, I noted that this experiment on group conformity was not likely to entail harm to subjects. That is true of the experiment itself -- but possibly not true of the debriefing. The debriefing in this case may do what Diana Baumrind has called "inflicting insight" upon subjects (quoted in Murray, 1980): When they are told that the researcher was actually studying group conformity, subjects who conformed may gain knowledge of themselves they would prefer not to have. Participation in this experiment, for these subjects, provides direct evidence of character traits most of us like to think we do not possess. We believe that we have minds of our own, that we don't bend too easily to outside pressure, and so on. Gaining knowledge to the contrary (which, remember, was knowledge that subjects did not consent to gain) may cause subjects embarrassment or a lowering of their self-esteem. The effects of debriefing in Case 3 are similar, but the ramifications of unrequested knowledge are potentially still more serious. It could be quite disturbing for subjects to learn that in an emergency, when someone else needs help, they could be so easily swayed to inaction. Again, subjects may attribute their behavior in the experiment to flaws of character; unknown to the experimenter, some subjects may already struggle with low self-esteem, and for them participation in such an experiment could be devastating. Only in Case 1 is debriefing unlikely to introduce or add to the potential for psychological harm to subjects.

We have, therefore, complicated our consideration of the criteria for ethically responsible research involving deception, particularly in Cases 2 and 3. The Belmont Report's second and third criteria appear to conflict: The debriefing process, which is intended in part to "consolidate the educational and therapeutic value" (Sieber, 1992, 39) of research for subjects, is in fact an element of the research that either introduces or magnifies the risk of harm to subjects. Clearly, too, deception research violates the principle of informed consent: Subjects in such cases may be understandably angry when the debriefing process "inflicts insight" about themselves that they neither wished nor consented to gain.

Note that the report's third criterion includes "an adequate plan for debriefing subjects, *when appropriate*" [emphasis added]. We might conclude that when debriefing introduces or magnifies harm to subjects, as it does in Cases 2 and 3, a debriefing procedure is inappropriate. In such cases, it may be better for subjects not to know what was really being measured by the study. However, the problem of paternalism arises in judging for the subjects what constitutes a harm, and in deciding what is "best" for them. Further, this position seems to violate the concept of respect for persons, a central principle of ethically responsible research with human subjects. In addition to its educational and therapeutic value, the debriefing process also seems to be a gesture of respect for the subjects of research, built on the understanding that subjects have a right to know the true nature of the research in which they participated. We are then left with a difficult choice: introduce or magnify the risk of harm to subjects through a debriefing process, or send subjects on their way never knowing what was actually done to them -- an option unpalatable to responsible researchers who believe in honesty in research and who regard "subjects" as partners in the research process.

Options exist, however, for making such a choice, if not easy, at least less difficult. A sensitive debriefing can go a long way toward alleviating the psychological harm that the process may introduce. In Case 2, the researcher could make clear that the responses of subjects who conformed are in no way unusual and could briefly explain some of the mechanisms that make group influence so powerful. In Case 3, again, the researcher should point out to subjects that the majority of those studied did not help. The researcher should summarize the research done to date on helping behavior and outline what is known about why people do not help in emergencies. In both cases, explaining how the current research is expected to add to our knowledge of group conformity or of helping behavior, and briefly stating the ways in which greater knowledge of these social phenomena may benefit others, will also increase subjects' sense of well-being following the experiment.

Another option for minimizing the risks of deception research is to anticipate some of these difficulties and adopt a research plan that uses a milder form of deception. Sieber (1992, 67-68) notes that deception in research takes one of five forms, each succeeding form removing more of the subjects' right to self-determination and lessening the knowledge on which their consent to participate is based:

  1. informed consent to participate in one of various conditions: subjects know that they will not know which research condition they will participate in (e.g., treatment or control, experimental drug or placebo);
  2. consent to deception: subjects know there is some aspect of the study that will not be fully disclosed;
  3. consent to waive the right to be informed: subjects waive their right to be informed and thus are not told of the possibility of deception;
  4. consent and false informing: subjects give consent but are falsely informed about the nature of the research;
  5. no informing and no consent: subjects do not know they are subjects in any form of research (as when "real-life" situations are studied, or a seemingly real incident is contrived and then observed).

Each of the three cases analyzed here could be considered an example of consent and false informing: In each case, subjects have given consent but are not told what is actually being studied. Case 1 illustrates what one might consider a mild form of false informing -- that is, subjects are not fully informed because of the vagueness of the explanation of the study's purpose, but neither are they lied to outright. Yet because subjects have not consented to any form of deception (they do not know that they are not being given full and adequate information), the case is still an example of consent and false informing. Cases 2 and 3 are clear-cut examples of this form of deception.

The question then becomes, "Could the research purposes in these three cases be accomplished by employing a 'lesser form' of deception, one that preserves to a greater degree subjects' rights of self-determination and knowledge of the research?" In Case 1, it is questionable whether the accuracy of subjects' attitudinal responses would be compromised if they knew that the researcher could not tell them exactly what was being measured. If they were told that they weren't "getting the whole story," would their responses differ from those they would give while trying to guess at the purpose of the research? It seems that a milder form of deception might be feasible in Case 1; a well-informed researcher must make that judgment. In Cases 2 and 3, it is more difficult to imagine that any form of deception milder than consent and false informing would result in subjects behaving as they would if unaware of the study's purposes. In the study on helping behavior, if subjects were at all aware that they had not been fully informed, they would be quite likely to recognize immediately that the "emergency" was contrived. In the study on group conformity, subjects might be so busy trying to figure out what was really being measured that they would not behave at all spontaneously or naturally in the group. It seems, then, that in at least two of the cases the research cannot be accomplished without deception that limits subjects' autonomy.

However, a further determination must be made before the use of deception in research can be justified. The Belmont Report does not consider the worth of the research as a criterion for justifying the use of deception; its criteria exclude any deception research that involves risks to subjects that are "more than minimal." Notice, however, that in this group of cases, as the risks to subjects escalate in severity, the potential benefits of the research increase as well. The study involving the greatest risk of harm to subjects, the helping behavior study, has enormous potential for increasing our understanding of why people fail to help in emergencies, and thereby for helping us develop strategies to counter those failures. The research on group conformity has potentially beneficial aspects as well -- in increasing our understanding of the ways in which gangs operate, for example. It seems that in deciding whether to undertake research involving deception, the potential costs to subjects must be weighed against the potential benefits for society.

Such a judgment is difficult to make. As Sieber (1992) points out, it is not always possible to identify risks and benefits in advance, and those that are identified are often not quantifiable. How does one weigh present harm to one individual against potential future benefits for many individuals? Sieber suggests that "common sense, a review of the literature, knowledge of research methodology, ethnographic knowledge of the subject population, perceptions of pilot subjects and gatekeepers, experience from serving as a pilot subject oneself, and input from researchers who have worked in similar research settings" (1992, 76) should all inform the assessment of risks and benefits. Imperfect as such judgments may be, they must be made. Trivial research involving any degree of harm to subjects is certainly unjustified; important research, on the other hand, may generate such benefits as to be worth some degree of harm (minimized and alleviated as much as possible) to subjects. The key is that the researcher should not be the sole authority in deciding when benefits outweigh risks: "[N]o single source can say what potential risks and benefits inhere in a particular study. . . . The benefit and justifiability of research depend on the whole nature of the research process and on the values of the persons who judge the research" (Sieber, 1992, 76-77).

Once we agree that the benefits and risks of research involving deception must be assessed together, we must consider what those benefits and risks may be. The discussion above identifies some potential benefits of the cases described here, as well as some of the risks to subjects. Researchers must also be mindful of less obvious risks when considering research involving deception. These risks concern not the potential for harm to the subjects of research, but rather the negative consequences of such research for the researcher and for the science of psychology itself.

In a self-revealing essay entitled "Learning to Deceive," Thomas H. Murray describes his discomfort at engaging in deception in the course of research he helped conduct as a graduate student in social psychology (a helping behavior study similar to the one described in Case 3). He notes of the debriefing procedure following this study, "While I did reveal the true purpose of the study, I did not always answer all questions honestly; and I seriously doubt that I, or anyone else, could have removed all negative effects of participation" (Murray, 1980, 12). After encountering in debriefing anxious subjects who were shaking, stuttering, barely able to speak, he continues, ". . . you try to forget the queasiness in their smiles, and the uncertainty in their handshakes. You try to convince yourself that, yes, all harmful effects have been removed. But I did not believe it then, and I do not today" (Murray, 1980, 12). Disturbing as such post-study encounters may be, however, Murray identifies what he believes to be a more insidious danger of deception in research: the danger that the researcher will come to adopt an attitude of callousness, to view subjects as means to an end, and to believe that the characteristics and reactions induced by experimental manipulations in fact describe the way people are. Murray asks, "In trying to make our laboratory so much like the world, do we sometimes succeed in making our world like the laboratory? . . . Do we eventually come to see people as so easily duped outside the laboratory as inside it? And if our research induces people to behave inhumanely, do we come to believe that that is indeed the way people are?" (Murray, 1980, 14).

Such negative consequences of research involving deception do not end with the experimenter, however. The science of social psychology can itself be affected by the methods its practitioners adopt. The more prevalent the practice of deception in social psychology, the more the science comes to be associated with the practice, eroding public trust in scientists and their purposes in every area of research in the field. Greenberg and Folger (1988) document that some social psychologists have challenged the unquestioning adoption of deception strategies, arguing that the "pool" of naive subjects shrinks as populations often called on to participate in research -- college students especially -- come to expect deception, casting doubt on the validity of experimental findings. They also note that the public may acquire an attitude of distrust and suspicion toward laboratories, scientists, and even a profession that relies heavily on deception to make its progress.

A shocking incident at the University of Washington in Seattle in 1973 illustrates one danger of such widespread awareness of deceptive research methods in psychology. Students on their way to class witnessed a shooting and neither stopped to help the victim nor followed the assailant; when questioned later, some witnesses reported that they thought the incident was a psychology experiment! (Greenberg and Folger, 1988, 48). Although the criticism that "real-life" experiments lead to such incidents could be leveled equally at the movie and television industries, the example illustrates that deception in research has ramifications, for subjects and for the science alike, that extend beyond the time and place of the studies in which it is employed.

The discussion above, centered on three cases, illustrates why deception is employed as a research strategy and why its use has been called into question. It identifies some of the dangers of deception for the subjects, for the researcher, and for the science itself. Yet Greenberg and Folger (1988, 56) report eight studies indicating that subjects are less bothered by being deceived in the course of research than are the institutional review boards (IRBs) that review the proposals. If these findings are accurate, is more debate being raised about deception in research than is warranted? I believe that such findings add another element for consideration in the assessment of risks and benefits of research involving deception, but they do not eliminate the need for such assessment. Subjects in some kinds of experiments may not "mind" being deceived, but subjects in others may mind very much. In addition, subjects may not always recognize immediately, or ever, the subtle effects of such experimentation on, for example, their self-esteem or their evaluations of social psychology and of scientists in general. We cannot dismiss the possibility that deception in research may have negative consequences for subjects and researchers as well as for the science. Scientists considering deception have a responsibility to weigh its costs against its benefits, and to minimize unavoidable costs wherever possible should they ultimately decide to deceive their research subjects.

References

  • Greenberg, Jerald, and Folger, Robert. Controversial Issues in Social Research Methods. Springer Series in Social Psychology. New York: Springer-Verlag, 1988.
  • Murray, Thomas H. "Learning to Deceive." Hastings Center Report 10 (April 1980): 11-14.
  • The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C.: Department of Health, Education, and Welfare, 1979.
  • Sieber, Joan E. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. Applied Social Research Methods Series, Vol. 31. Newbury Park, Calif.: SAGE Publications, Inc., 1992.