Do the Ends Justify the Means? The Ethics of Deception in Social Science Research

Description

This case raises questions about the ethical justification for intentionally deceiving subjects, about experimentation on human subjects, and about voluntary consent.


Case 1

Ann Smith is a social psychologist who wants to study attitude change. She submits a proposal to her institution outlining details of a study that will examine the attitude change of participants following a workshop on environmental issues. Smith plans to identify attitude change by administering a pretest and a posttest. She is worried, however, that the participants will recognize that she is looking for changes in their attitudes and that this knowledge will influence their answers on the posttest. To address this problem, she plans to disguise the issues she is most interested in; when she administers the tests, she will give a very broad explanation that does not fully disclose the nature of the study. Her proposal includes these procedures and an explanation of why she believes they are necessary; she also includes a plan to "debrief" the subjects (tell them the real purpose of the study) after they finish taking the second test.

Discussion Questions

  1. What might be the benefits of this research, if any? What risks to subjects, if any, do you identify?
  2. What issues should members of the Institutional Review Board (IRB)(1) raise regarding Smith's proposal?
  3. If you were a member of the IRB, how would you weigh the benefits of the research against the risks to subjects in this case?
  4. Based on your assessment of the benefits and risks, would you approve Smith's proposal as submitted? If not, what changes would you suggest?


Case 2

For a study on conformity to group norms, Ann Smith constructs a survey designed to measure attitudes toward a controversial topic. The research proposal she submits describes her study procedures: She will use as subjects students in a large introductory psychology course she teaches. She includes the following paragraph in her syllabus: "One of the requirements of this course is your participation in a psychology experiment, through which you will be introduced to the methods of psychological research. If you prefer not to participate in the experiment, you may instead complete a 50-page research paper on a psychology topic of your choosing." She will bring two groups into the laboratory, ostensibly simply to obtain their attitudes on the survey. One group will be encouraged to discuss responses to the survey freely amongst themselves; the members of the other group, acting as controls, will take the survey independently. In the first (experimental) group, Smith will "plant" several confederates instructed to advocate one side of the issue loudly. Based on the results of similar studies, Smith believes that the majority of responses given by subjects in this experimental group will conform to the position advocated by the confederates, indicating the powerful influence of the group norm. Following the experiment, all subjects will be debriefed as to the true purpose of the experiment.

Discussion Questions

  1. What are likely to be the benefits of this research, if any? What risks to subjects do you identify?
  2. What issues should members of the IRB raise regarding this proposal?
  3. If you were a member of the IRB, how would you weigh the benefits of the research against the risks to subjects in this case?
  4. Based on your assessment of the benefits and risks, would you approve Smith's proposal as submitted? If not, what changes would you suggest?
  5. For the subjects involved, are there consequences of participation in the experiment that did not exist for Case 1? Are these consequences negative or positive? How do the effects of debriefing in this case differ from its effects in Case 1?


Case 3

In a research proposal modeling a familiar experimental manipulation to study people's altruistic, or "helping," behavior, Ann Smith plans to place one subject in a room with several experimental confederates. She will assign the group a task, supposedly the purpose of the experiment, then arrange for an "emergency" to occur in the vicinity of the laboratory -- the group will hear a loud thud from an adjacent room and then a piercing cry for help. She will instruct confederates to look up upon hearing the cry, then return to their task. In a pilot test of this procedure, the single subject looked around uncomfortably, then returned to the assigned task, as the confederates did. Following the experiment, the subjects will be debriefed about the true purpose of the research.

Discussion Questions

  1. What might be the benefits of this research, if any? What risks to subjects do you identify, if any?
  2. What issues should members of the IRB raise regarding this proposal?
  3. If you were a member of the IRB, how would you weigh the benefits of the research against the risks to subjects in this case?
  4. Based on your assessment of the benefits and risks, would you approve Smith's proposal as submitted? If not, what changes would you suggest?
  5. For the subjects involved, how do the consequences of participation in the experiment differ from those in Cases 1 and 2? How do the consequences of debriefing differ?
  6. Suppose subjects were told before they agreed to participate that "This experiment could result in negative psychological effects for subjects," and the subjects still agreed to participate. Is Smith absolved of any further responsibility?

Discussion Questions for All Three Cases

  1. Is deception of subjects ever justifiable? If so, under what conditions?
  2. Can such questions as these be answered without deceiving subjects? Do potential benefits of such experiments outweigh psychological risks to subjects? At what point, if ever, do benefits of such experiments outweigh costs?
  3. If the consequences for subjects are positive (in Case 3, for example, if subjects who helped feel good about themselves, and subjects who did not help resolve to do so in the future), can the researcher conclude that the deception was justified?
  4. How might conducting experiments that involve deception of subjects affect the researcher? Is there any way in which such experiments could reflect upon science itself? If so, how?
  • (1)An Institutional Review Board (IRB) is an institutional committee established to review research proposals to ensure that the rights of human subjects are fully protected.
Notes

Brian Schrag, ed., Research Ethics: Cases and Commentaries, Volume 1, Bloomington, Indiana: Association for Practical and Professional Ethics, 1997.

Citation
Do the Ends Justify the Means? The Ethics of Deception in Social Science Research. Online Ethics Center. https://onlineethics.org/cases/graduate-research-ethics-cases-and-commentaries-volume-1-1997/do-ends-justify-means-ethics.


Intentional Deception of Human Subjects in Research

These three cases raise a narrower issue and a broader issue. The narrower issue is whether an IRB should approve the conduct of any or all of these experiments, which involve intentional deception of human experimental subjects. The broader issue is whether it is ethical for scientists to employ intentional deception in experiments on human subjects. The broader question has taken on increasing significance over the past 50 years as the use of deception in research has increased dramatically. The proportion of studies that use intentional deception in experimentation on human subjects increased from 18 percent in 1948 (Baumrind, 1985, 166) to 37 percent in 1963 and 47 percent by 1983. (Fisher and Fryberg, 1994, 417)

The broader issue raises not only the question of the ethical justification for intentionally deceiving the subjects but also other ethical considerations, including the moral significance of particular acts of deception, or of a practice of deception, for the researcher, the training of researchers, the university (if the research is university-based), the discipline, research science and society as a whole.


IRB Considerations

The IRB has a narrower focus. Its concern is primarily, although not exclusively, with protecting the rights and welfare of the human subjects in scientific research, given certain guidelines. Those guidelines may or may not adequately capture all the relevant ethical considerations concerning particular deception research or the practice of such research. Hence, even if the IRB approves any of these experiments, that does not settle the question of whether it is ethical for scientists to engage in this research. The IRB may approve one of these experiments, and the research may still not be ethically justified. Nevertheless, the IRB is a good place to begin in these cases.

The federal guidelines on the protection of human subjects of research (found in the Code of Federal Regulations, Title 45, Part 46) provide the IRB with criteria for determining whether proposed research that falls under its purview will treat human subjects in an ethical manner. These guidelines specifically charge the IRB with determining two things:

  1. That the subjects have given free and voluntary informed consent to participate in the study and, more particularly, that a) the circumstances under which the consent is obtained minimize the possibility of coercion or undue influence; b) the informing includes a description of any reasonably foreseeable risks or discomforts; c) refusal to participate will involve no loss of benefits to which the subject is entitled; d) the subject may discontinue participation at any time; and e) if subjects are part of a population that may be vulnerable to undue coercion or influence, additional safeguards are included to protect their rights and welfare.
  2. That risks to subjects are minimized and are reasonable in relation to any benefits of the research to the subjects and in relation to the importance of the knowledge gained in the experiment. (45 CFR 46.111)

These guidelines draw on three ethical concepts relevant to the ethical practice of human subjects research, namely, "respect for persons," "beneficence" and "justice."(2) The ethical principle "One ought to treat humans with respect" is used to ground the requirement that prospective human subjects should not become subjects of a scientific experiment until and unless they have given free, voluntary and informed consent to participate in that experiment.

IRB guidelines do not categorically rule out deception of human subjects in research, even though the ethical principle and concepts outlined above would appear to preclude it. Federal guidelines allow deception of human subjects in experiments by allowing a waiver of the informed consent requirement provided that

the IRB finds and documents that: (1) The research involves no more than minimal risk to the subjects; (2) The waiver will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) whenever appropriate, the subjects will be provided with additional pertinent information after participation. (45 CFR 46.116 [d])

The risks to subjects must be "reasonable in relation to anticipated benefits, if any, to subjects and to the importance of the knowledge that may reasonably be expected to result." (45 CFR 46.111 [a] [2])

The IRB may also be guided in these cases by the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct (1992) as well as the American Psychological Association's Committee on Ethical Standards in Psychological Research's Ethical Principles in the Conduct of Research with Human Participants (1973).

Before the IRB approves an experiment that involves deception, it must consider the risk of harm to subjects relative to the benefits to the subjects and the importance of the knowledge gained; actual harm to subjects; the necessity of deception in the experiment; and whether subjects are adequately debriefed after the experiment. A central consideration is the harm to subjects caused by deception. To assess such harm requires careful attention to the kinds of deception involved.


The Use of Deception

All three of these cases present some element of deception of the subjects; the level of deception increases from the first case to the third. In all three cases, the subjects are deceived as to the purpose of the experimental activity. In Case 1, the investigator deceives by failing to reveal the nature of the study completely. In Case 2, the researcher lies to the subjects about the purpose of the experiment; they are told the purpose of the activity is to measure their attitudes when in fact the research investigates the degree to which they and their attitudes are vulnerable to group pressure. In Case 3, subjects are told that the purpose of the task is one thing when in fact it is another: to observe their "helping" behavior in response to someone they are deceived into thinking is in distress.

In Cases 2 and 3, the subjects are not only deceived about the purpose of the experiment but also about the status of other persons they interact with in the experimental group. Subjects are allowed to think all members of the group are experimental subjects when in fact some are confederates in the experiment. In Case 3, subjects are additionally deceived about the status of someone outside the group and are led to believe that person is in distress.


Moral Wrongs and Harms

The IRB needs to think carefully about the moral wrongs and the harm to subjects that come from the deception in these cases. Because social science research has a tradition of using deceptive techniques, and because some lines of research may not be pursued without deceiving subjects, an IRB may tend to underestimate the moral wrong or harm of deception or to be biased in favor of the benefits of the knowledge gained in the research. For that reason I will focus in some detail on the moral wrongs and harms to the subjects that can arise from such deception. Just as there are various levels of deception, so there are various kinds of moral wrong or harm to subjects that can arise from deception.

Intentional deception, as Sissela Bok argues in her book Lying, is as much a form of deliberate assault on persons as is physical violence: Both can be used to coerce individuals to act against their will. (Bok, 1989, Chapter 2, especially pp. 18-21) Deception is used in these cases to manipulate the beliefs and choices of the subjects as well as their responses to the situations. Deception is used to manipulate the subjects' choice to be involved in the experiments. Particularly in Cases 2 and 3, had subjects known the real purpose of the experiment, some may well have chosen not to participate. Deception is also used to manipulate the subjects' choice of responses to peers and to the situation within the experimental setting. Had subjects known that the confederates were confederates, they might well have withdrawn their initial disposition to trust the reactions of those peers, and they likely would have reacted to the confederates' behavior quite differently.

Deception fundamentally fails to respect persons and for that reason morally wrongs the person. Subjects in all these cases are not treated as rational beings capable of rational choice but are treated solely as means to the researcher's ends. Subjects of deception are always morally wronged in this way, even if they never realize they have been deceived. Of course, they may also be harmed by deception, even if they do not realize they are being deceived.

It is important to distinguish between morally wronging persons and harming them; it is a category mistake to equate the two.(3) We morally wrong people when we violate fundamental moral principles in our dealings with them; for example, when we fail to respect them as persons, treat them unjustly, violate their rights, invade their privacy or gratuitously harm them. The concept of morally wronging a person is independent of its criterion of application. Some would argue the criterion involves treating persons solely as means; others would argue it involves only doing a person physical or psychic harm; some would include both. When we manipulate people by lying to them, we may morally wrong them, even though we may not harm them. Harming persons is not necessarily the criterion of morally wronging persons; it is one, but only one, way of morally wronging them. Moral wrongs to persons may be accompanied by harm to them; in that case, they have been morally wronged in more than one way.

This distinction between moral wronging and harming is blurred in the federal guidelines. The language of risks and harm in the guidelines may direct our attention away from concern for the moral wronging of subjects. Focus on the language of harm has blinded researchers to the distinction and led them to assume "no harm, no moral foul," that is, that any negative consequence of deception can be undone by undoing the harm. Ignoring the distinction makes it easier to justify deceptive research because the risk-benefit analysis takes into account only harms, not other moral wrongs. Much reasoning about the debriefing of human subjects in deceptive research misses the point because it assumes that the only wrongs to be addressed are the harms caused by the research. The harms of deception may or may not be undone by debriefing. The moral wrong of manipulating subjects by deception into acting against their will cannot be undone. I assume that moral wrongs other than harms are relevant to IRB deliberations regarding approval of experiments.


Moral Wrongs and Harms in Deception

Some of the moral wrong of deceptive experiments, then, comes from simply failing to treat persons with respect. Notice that consent to be morally wronged does not eliminate the wrong. If we succeed in getting people to agree to let us morally wrong them, that does not justify the wrong. Indeed, even if people were to give us permission to fail to treat them as rational persons and we subsequently do so by deceiving them, we have still wronged them, just as persons who consent to slavery are wronged if we enslave them.

In Cases 2 and 3, the nature of the experiments enabled by the deception may also be a source of wrong and harm to some subjects in particular. Joan Sieber notes that one defensible justification for deception research is if it is the only way "to obtain information that would otherwise be unobtainable because of the subject's defensiveness, embarrassment, shame or fear of reprisal." (Sieber, 1992, 64)

One might think that Sieber's justification is precisely a justification for not allowing the deceptive research. In these two cases, deception allows the investigator to invade the privacy of the subjects without their knowledge or consent and to force the subjects (again without their knowledge or consent) to confront certain inclinations in themselves and to reveal them to their peers in the experiment and to the researcher. The inclinations revealed might, for some subjects, fall under Sieber's category of "otherwise unobtainable information." We are not told the controversial topic to be discussed in Case 2. That may be significant for the IRB to consider, since certain sensitive topics might be especially stressful for some subjects or might make them particularly reluctant to have their private thoughts invaded. In Case 2, the subjects' inclinations to follow group pressure and group norms are revealed. In Case 3, the subjects' reluctance to help a person presumed to be in distress is revealed.

There are two sorts of wrong here. First, both experiments invade the subjects' private behavior and emotions. As Bok argues, learning about people's private behavior and emotions without their consent is akin to spying on them through keyholes and is not "less intrusive for being done in the interests of research." (Bok, 1989, 194)(4) It is not always true that what you do not know cannot wrong you. With regard to spying through a keyhole, we think a moral wrong has been done even if the subject is unaware of the spying.

Harm to subjects is also likely. In these cases, the subjects will learn about the invasion of privacy since they will be debriefed, and by that means further harm may be done. People may well vary in the strength of their sense of privacy and in the harm they suffer from having that privacy invaded. Some may be quite bothered by this invasion of their most intimate being, others not at all. Consequently, a reasonable case can be made for the claim that the subjects are best positioned to judge the harm done. Since the subjects will be deceived and denied the opportunity to give voluntary and informed consent, they cannot be asked how much they think they would be harmed by the experiment. Some would argue that if we survey a representative sample of potential subjects about participation in such experiments, we can take their responses as reasonable evidence of what the subjects would say if they were given a choice to participate. (Sharpe et al., 1992, 589) However, given the variability of individual responses to such invasion, there is no reason to think the substituted judgment of the researcher or the IRB committee, even based on such evidence, is an accurate gauge of the harm done to the individual subject by this invasion of privacy.

The second sort of harm, in both Case 2 and Case 3, is that caused by forcing persons to confront or reveal to others knowledge about themselves they may not want to confront and may find painful to live with.(5) For example, Sieber notes research suggesting that most people perceive others as "conforming sheep" but view themselves as not being influenced by peer pressure. (Sieber, 1992, 65) Some subjects in Case 2 may be upset by being forced to confront that bit of self-deception or by revealing it to others. In Case 3, subjects may feel anxious, embarrassed, ashamed or guilty for not coming to the aid of a person they believe is in distress; they may feel the same when forced to confront that fact about themselves and have it revealed to others. Notice that mere participation in the experiments in Cases 2 and 3 may force this realization, whether or not the subjects are debriefed.(6) Again, people may vary on how much difficulty this unsolicited knowledge causes them; the subjects, and only the subjects, are best positioned to judge the harm done to themselves.

Additional harm to subjects may occur when they realize they have been deceived in order to be used in an experiment. In these cases they will know they have been deceived since they will be debriefed. In general, when persons discover that they have been deceived and manipulated, the natural response is to feel a loss of control over their own actions, to feel used, to feel they have been played for fools and consequently to be resentful, distrustful and suspicious both toward those who deceived them and more generally toward all others. In this case that distrust may also be directed toward social scientists and scientific research in general.(7) There is no reason to assume that this suspicion and distrust is a momentary or fleeting reaction that disappears without a residual impact on the trustful disposition of subjects. We know in some instances (e.g., the Tuskegee syphilis experiment or the radiation experiments conducted on citizens in the 1950s by the U.S. government) that the experience of discovering they had been deceived into becoming experimental subjects had lasting effects on subjects' trust of medical and governmental officials. The loss of trust caused by such deception also has a way of spreading to those who were not subjects but simply learn about the deceptive practice.(8) The makeup of the subject group may be relevant here. We do know that the bulk of social psychology research is carried out on college students. (Fisher and Fryberg, 1994, 418) The impact of being deceived is especially significant when the subjects are college students who realize they are being deceived by a trusted faculty member who is also supposed to be a teacher and role model for the profession.


Debriefing and Harm

Some researchers may assume that any harm caused by deceptive research can be "wiped out" by debriefing after the fact. Debriefing includes dehoaxing (revealing the deception) and desensitizing (attempting to remove any undesirable consequences of the experiment). The aim of desensitizing is to restore the subject to an emotional state of well-being. Sieber notes evidence that desensitizing is not always effective in removing all the damage to self-esteem caused by the deceptive experiment. (Sieber, 1992, 72)(9) Indeed, the debriefing may only increase the harm by ensuring the subjects are explicitly and exactly aware of the unflattering character traits and behavior they have revealed about themselves.

Voluntary Consent

As the IRB thinks about whether to approve any of these three research proposals, an important issue to consider is the degree to which the experimental subjects have given their voluntary and informed consent to participate in the experiment. Informed consent is a necessary but not sufficient condition of voluntary consent. That is, if the consent is not informed, it cannot be completely voluntary since, if subjects do not know what they are consenting to, they cannot be said to have voluntarily consented to do it. However, giving informed consent is not necessarily sufficient to ensure the consent is voluntary.

Suppose, however, the researcher proposes in these cases to ask subjects to agree to participate in an experiment with the understanding that they will not be told about the exact nature or purpose of the experiment until afterward and that there may be some deceptive elements in the experiment. The subjects at least voluntarily agree to be deceived, even if they are unclear about the details of the deception.

To assess voluntary consent under such conditions, it is necessary to know how these subjects are recruited and under what conditions. In Case 1, for example, how were subjects recruited to the workshop? Was selection for the workshop independent of the recruitment for participation in the study? For example, was the workshop part of mandatory training on environmental issues for employees? In such a setting, participants may not feel free to refuse to participate in the testing. Is the workshop staged only for the purpose of testing its impact on changing attitudes, and if so, how were subjects recruited? In Cases 2 and 3, the subjects are brought to the laboratory, so they presumably are at least aware from the beginning that this will be an experimental activity. If they agree to participate after being told that some information about the experiment is being withheld until after the experiment and that some deception may be involved, then one might argue that reasonably voluntary consent was obtained.

But such an arrangement does not establish that the subjects' consent was sufficiently voluntary in the sense of being given without undue influence or coercion. As an illustration, consider a practice of psychology departments in many universities, thought by many to be acceptable. These departments include in the syllabus of introductory psychology classes a requirement that students either participate as subjects in a certain number of departmental research experiments or write an additional paper for the course. (This requirement is a convenient way of ensuring plenty of experimental subjects for the department.) One such practice over a twenty-year period is described by Sharpe et al. (1992).

Although the students have an "alternative" to participating in experiments as subjects, it does not follow that their choice to engage in the experiment is uncoerced or not unduly influenced. As a practical matter, many of the students may need to take this course; avoiding the course is not an option. Once in the course, there are coercive negative inducements to becoming a subject in order to avoid writing a paper. The negative consequences of writing another paper are clear to students; the negative consequences of serving as a subject may not be clear.(10) In such circumstances, there may be a negative inducement to "volunteer" for the research. There are parallels between this practice and the dispensing of aspirin to poor black subjects in the Tuskegee syphilis experiment to gain their cooperation in a nontherapeutic experiment. Even if students knew in advance exactly which experiments they would be asked to participate in, their consent, although informed, may in these circumstances be coerced and to some degree involuntary. Furthermore, in cases of deceptive experiments, students may need to decide between the syllabus alternatives before they know the nature of the experiments; it may be too late to back out of the experiments once they realize what they will be asked to do as subjects. If a similar practice is the source of experimental subjects in the three cases, then it is not at all clear the subjects are in a position to give voluntary consent, whatever the degree of informed consent in the cases.


Benefits of the Experiments to the Subjects

It is not clear that there is much in the way of benefits to the subjects in any of these experiments. A standard rationale for using college students as experimental subjects is that it gives them an increased appreciation of the discipline. A recent study finds no evidence that participation has that effect. (See Sharpe et al., 1992, 589) Some argue that the subjects receive, as a benefit of debriefing, a brief explanation of the current research understanding of the issues under investigation. But the subjects could learn that information by reading the research literature without participating in the experiment.

In the absence of any benefits, the harm or potential harm to the subjects, particularly in Cases 2 and 3, surely outweighs the benefits to the subjects.

The IRB is also called on to determine whether the benefit to general knowledge justifies the deception of these subjects. If one accepts that charge to IRBs as morally legitimate, one of the first questions an IRB ought to ask, particularly in Cases 2 and 3, is, "Are these experiments necessary?" The experiment in Case 3 is clearly very similar to a large number of experiments on helping behavior done over the last thirty years. Unless it can be shown that this experiment adds significantly to that research, it ought to be denied on those grounds alone. Does the experiment in Case 2 really add anything to our knowledge of the influence of peers on our willingness to assert or express our views on controversial topics? Studies of groupthink have been around for a long time. The Case 2 experiment ought also to be denied on those grounds.

But one ought to raise a more fundamental ethical question at this point about the IRB guidelines. The IRB is allowed by its guidelines to weigh the harm to research subjects in an experiment against the value of general knowledge gained in the experiment. In the case of experiments in which subjects are involved without informed, voluntary consent, the harm to subjects must be considered "minimal" by the IRB in order to approve the experiment. (45 CFR 46.116) The definition of "minimal risk" is

that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological exams. (45 CFR 46.102 [i])

One might ask if deception of humans is ever a minimal harm; if not, it should never be done.

The rationale for balancing minimal harm to subjects against the value of knowledge gained is the principle of beneficence invoked in the Belmont Report. The principle of beneficence in the Belmont Report is understood as an obligation expressed in terms of two rules. "Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms." (National Commission, 1979, 4)

There are several issues here.(11) First is the issue of whether beneficence is an obligation or merely a good thing to do. One might at least agree that it is a prima facie obligation to be beneficent. Second is the issue of the exact content of beneficence. The principle of beneficence is usually thought of as an obligation to do good and avoid harm. William Frankena argues that this notion can be explicated as 1) one ought not to inflict harm; 2) one ought to prevent harm; 3) one ought to remove harm; and 4) one ought to promote good. (Frankena, 1973, 47) He argues that the notion of an obligation to maximize the good is yet a further principle, which presupposes but is not necessarily implied by the principle of beneficence. (Frankena, 1973, 45) A final issue is the lexical ranking of these obligations. Traditionally in ethics, the notion of not harming takes precedence over doing good, to say nothing of maximizing the good. If that is the case, on this explication of beneficence, the fact that subjects are harmed in deceptive experiments should settle the issue for the IRB: Deceptive experiments should not be done.

The rationale of the Belmont Report for giving priority to "maximizing the good" over "doing no harm" is weak on this point. The report argues that although one should do no harm,

[E]ven avoiding harm requires learning what is harmful; and in the process of obtaining this information, persons may be exposed to risk of harm. Further, the Hippocratic oath requires physicians to benefit their patients according to their best judgments. Learning what will in fact benefit may require exposing persons to risk. (National Commission, 1979, 4)

Usually the interpretation of the "do no harm" principle is that one should not intentionally do that which one already knows will do harm. It is not a requirement that one minimize harm or that one try to avoid all harm by first attempting to discover everything that may cause harm even if that discovery process itself causes harm. Nor is the dictum a general rationale for doing harm to someone in order to prevent harm to others. To say otherwise is simply to collapse the distinction between avoiding known harms and minimizing all harms, known or unknown. In the specific case of treating a patient, the dictum may allow a rationale for subjecting the patient to risk in order to find a cure for an even greater harm to the patient. But there, the risks and benefits are all borne by the same person. With the exception of such cases, "Do no harm" is silent with respect to the issue of calculating tradeoffs of harm between persons.

In cases of deceptive experiments, we do not need to do the experiments to know the harm caused by deception. It is possible that deceptive experiments may make us aware of why humans do not alleviate harm, for example, in "helping situations." But to say it is permissible to sacrifice the interests of subjects of human experimentation without their knowledge or consent for the welfare of others in order to learn what is harmful brings us right back to a violation of the principle of respect for individuals. Notice the case is different when subjects freely give their informed consent to engage in experiments that may harm them but produce a good for others. In such situations, the principle of respect for persons is observed. One may conclude that IRBs may be allowing far more deceptive practice than is warranted by their own moral principles.(12)

We have concentrated on the harm deceptive experiments may do to subjects and criticized the notion of the IRB trying to balance the harms to the subjects of deceptive experiments against general gains in knowledge. One issue we do not have space to address is whether deceptive research is even necessary. Social scientists themselves differ on whether good science requires such research. (Compare Sieber [1992] and Baumrind [1985].)


Broader Issues

The practice of deceptive research raises broader ethical issues that the IRB is not charged with considering but that are legitimate concerns for the professional research community as well as other social institutions. I can only mention them here. There is the harm of deception to the researchers who engage in it. Thomas Murray, in his essay "Learning to Deceive" (1980), eloquently details a firsthand account of those harms. There are broader harms as well. Integrity and devotion to the truth are core values that academics and the university must hold. Should the university really be in the business of teaching students how to deceive people? What impact does a generally acknowledged practice of deception have on the perception of the trustworthiness of the research community? What impact does a generally acknowledged practice of deception in the research community have on social perceptions of the acceptability of engaging in deception as long as the deceiver thinks it is in a good cause?


References

  • Baumrind, Diana. "IRBs and Social Science Research: The Costs of Deception." IRB: A Review of Human Subjects Research 1 (6, October 1979): 4.
  • Baumrind, Diana. "Research Using Intentional Deception: Ethical Issues Revisited." The American Psychologist 40 (February 1985).
  • Bok, Sissela. Lying: Moral Choice in Public and Private Life. New York: Vintage Books, 1989.
  • Fisher, Celia, and Fryberg, Denise. "Participant Partners: College Students Weigh the Costs and Benefits of Deceptive Research." The American Psychologist 49 (May 1994).
  • Jones, James H. Bad Blood: The Tuskegee Syphilis Experiment, 2d ed. New York: The Free Press, 1993.
  • Macklin, Ruth. "Autonomy, Beneficence and Child Development." In Barbara Stanley and Joan E. Sieber, eds., Social Research on Children and Adolescents: Ethical Issues. Newbury Park, Calif.: Sage Publications, 1992.
  • Murray, Thomas. "Was This Deception Necessary?" IRB: A Review of Human Subjects Research 2 (10, December 1980): 7-8.
  • OPRR, Department of Health, Education and Welfare. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. 1979.
  • Sieber, Joan E. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. Applied Social Research Methods Series, Vol. 31. Newbury Park, Calif.: Sage Publications, 1992.
  • Sharpe, Donald, et al. "Twenty Years of Deception Research: A Decline in Subjects' Trust?" Personality and Social Psychology Bulletin 18 (5, 1992).
  • U. S. Department of Health and Human Services, "Protection of Human Subjects." Code of Federal Regulations Title 45, Part 46 (Revised 1991).
  • (2)These principles were first articulated in The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. OPRR Report, Department of Health, Education and Welfare, 1979.
  • (3)For an early discussion of these points see Tom Murray, "Was This Deception Necessary?" IRB: A Review of Human Subjects Research 2 (10, December 1980): 7-8. For a later discussion see Ruth Macklin, "Autonomy, Beneficence and Child Development," in Barbara Stanley and Joan E. Sieber, eds., Social Research on Children and Adolescents: Ethical Issues (Newbury Park: Sage Publications, 1992). The point seems to have been lost on many social scientists.
  • (4)The arguments here draw from her discussion in the whole of Chapter 13.
  • (5)Baumrind appropriately calls this "inflicted insight" because the subject is given painful insights into his or her flaws without asking for such insights. See Diana Baumrind, "IRBs and Social Science Research: The Costs of Deception," IRB: A Review of Human Subjects Research 1 (No. 6, October 1979): 4.
  • (6)For a graphic description of the negative effects on subjects of participating in helping experiments such as the one proposed in Case 3, see Tom Murray, "Learning to Deceive," The Hastings Center Report 10 (2, April 1980): 12.
  • (7)Sieber refers to research that indicates the extent to which college students who serve as experimental subjects now assume the researcher will be attempting to deceive them. (Sieber, 1992, 7, 65).
  • (8)See, for example, James H. Jones, Bad Blood: The Tuskegee Syphilis Experiment, 2nd ed. (New York: The Free Press, 1993), Chapter 14, for a discussion of the impact of the Tuskegee study on the trust of black Americans toward government health personnel and the subsequent impact of that on efforts to deal with AIDS in the black community.
  • (9)For a candid description of the experience of debriefing subjects of a helping experiment see Murray (1980), 12.
  • (10)Sharpe et al. (1992) report that virtually all students opt for the research. (p. 586).
  • (11)For a discussion of these points see William Frankena, Ethics, 2d ed. (Englewood Cliffs, N.J.: Prentice Hall, 1973), pp. 45-48.
  • (12)For earlier discussions of some of these issues, see Ernest Marshall, "Does The Moral Philosophy of the Belmont Report Rest on a Mistake?" IRB 8 (1986, 6): 5-6 and Baumrind (1979).

Author: Brian Schrag, Association for Practical and Professional Ethics.

The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, issued in 1979, elucidates three comprehensive principles that are relevant to the ethical practice of human subjects research: 1) respect for persons, 2) beneficence and 3) justice. The first principle, respect for persons, is particularly relevant to the question of deception in research. The report claims that "respect for persons demands that subjects enter into the research voluntarily and with adequate information" (p. 4). It goes on to apply this principle to formulate the requirement that subjects must give their informed consent to participate in research. This requirement of full and complete disclosure is waived, however, when

informing subjects of some pertinent aspect of the research is likely to impair the validity of the research. In many cases, it is sufficient to indicate to subjects that they are being invited to participate in research of which some features will not be revealed until the research is concluded. Such research is justified only if it is clear that 1) incomplete disclosure is truly necessary to accomplish the goals of the research, 2) undisclosed risks to subjects are no more than minimal, and 3) there is an adequate plan for debriefing subjects, when appropriate, and for dissemination of research results to them. (p. 6)

The report uses the phrase "incomplete disclosure" to indicate that its criteria apply not only to instances of outright deception in research but also to cases in which the researcher has misled subjects or has given them only partial information. I use the term "deception" here to describe all such situations in which subjects consent to participate in research on the basis of less-than-complete information. My analysis does not address an admittedly relevant question: whether the degree of disclosure makes a difference in deciding the ethical questions. In each of the cases outlined above, the researcher proposes to use some form of deception as a way of obtaining valid research results. In what follows, I analyze each of the three cases in light of the Belmont Report's criteria for ethically responsible research involving deception of subjects.

In Case 1, the researcher justifies deception on the grounds that awareness of her purposes will bias subjects' responses. Research in the field of social psychology has demonstrated that subjects' self-reports of attitudes can be influenced by a number of factors, including the subjects' desire to please the experimenter. It seems, therefore, that the research in this case meets the report's first criterion, that incomplete disclosure is necessary to accomplish the purposes of the research. The proposed research also meets the second and third criteria: This sort of attitude research does not seem to involve potential harm to subjects, and the researcher has included a plan for debriefing subjects following their participation.

Cases 2 and 3 similarly seem to require an element of deception to accomplish the purposes of the research. In Case 2, the study of conformity requires that subjects not be fully informed, or their behavior would not be spontaneous. The same reasoning applies to Case 3 -- subjects who knew that the research was measuring helping behavior quite naturally would help! Cases 2 and 3 differ, however, on the second criterion, undisclosed risks that offer the potential of harm to subjects. The research on group conformity is not likely to pose a risk to subjects; they are merely discussing a controversial issue, then reporting their attitudes. The research on helping behavior, on the other hand, is likely to entail some degree of harm to subjects. The experimental setup involves placing subjects in a situation that requires a difficult choice (to act or not to act) and then complicating that choice with the powerful influence of others. The subjects are likely to experience mild to extreme distress in such a situation. Case 3, therefore, does not meet the Belmont Report's second criterion of avoiding all but minimal risk to subjects.

With regard to the principle of voluntary consent, both Cases 2 and 3 are suspect. The researcher is also the instructor for the course, which presents a dilemma for students who may be uncomfortable about participating in an experiment. Although the researcher has included an alternative to participation (the 50-page paper), does this option constitute a true alternative? That is, is the option of not participating equally palatable from the student's standpoint? Consider that students may choose to participate in the experiment in spite of their apprehension because the paper option presents a heavy addition to the students' workload when compared to the one-time, one-hour appointment with the researcher.

These issues are complicated when the debriefing of subjects is considered. In Case 2, I noted that this experiment on group conformity was not likely to entail harm to subjects. That is true of the experiment itself -- but possibly not true for the debriefing. The debriefing in this case may do what Diana Baumrind has called "inflicting insight" upon subjects (quoted in Murray, 1980): When they are told that the researcher was actually studying group conformity, subjects who conformed may gain knowledge of themselves they would prefer not to have. Participation in this experiment, for these subjects, provides direct evidence of character traits most of us like to think we don't hold. We believe that we have minds of our own, that we don't bend too easily to outside pressure, etc. Gaining knowledge to the contrary (which, remember, was knowledge that subjects did not consent to gain) may cause subjects embarrassment or a lowering of their self-esteem. The effects of debriefing in Case 3 are similar, but the ramifications of unrequested knowledge are potentially still more serious. It could be quite disturbing for subjects to learn that in an emergency, when someone else needs help, they could be so easily swayed to inaction. Again, subjects may attribute their behavior in the experiment to flaws of character; unknown to the experimenter, some subjects may already struggle with low self-esteem, and their participation in such an experiment could be devastating. Only in the first case is debriefing not likely to introduce or add to the potential of psychological harm to subjects.

We have, therefore, complicated our consideration of the criteria for ethically responsible research involving deception, particularly in Cases 2 and 3. The Belmont Report's second and third criteria appear to conflict: The debriefing process, which is intended in part to "consolidate the educational and therapeutic value" (Sieber, 1992, 39) of research for subjects, is in fact an element of the research that either introduces or magnifies the risk of harm to subjects. Clearly too, deception research violates the principle of informed consent: Subjects in such cases may be understandably angry when the debriefing process "inflicts insight" about themselves that they neither wished to nor consented to gain.

Note that the report's third criterion includes "an adequate plan for debriefing subjects, when appropriate" [emphasis added]. We might conclude that when debriefing introduces or magnifies harm to subjects, as it does in Cases 2 and 3, a debriefing procedure is inappropriate. In such cases, it may be better for subjects not to know what was really being measured by the study. However, the problem of paternalism arises in judging for the subjects what constitutes a harm, and in deciding what is "best" for them. Further, this position seems to violate the concept of respect for persons, a central principle of ethically responsible research with human subjects. In addition to its educational and therapeutic value, the debriefing process also seems to be a gesture of respect for the subjects of research, built on the understanding that subjects have a right to know the true nature of the research in which they participated. We are then left with a difficult choice between introducing or magnifying the risk of harm to subjects by a debriefing process, or sending subjects on their way, never knowing what was actually done to them, an unpalatable option for responsible researchers who believe in honesty in research and who regard "subjects" as partners in the research process.

Options exist, however, that can make such a choice, if still difficult, at least less so. A sensitive debriefing can go a long way toward alleviating the psychological harm that the process may introduce to subjects. In Case 2, the researcher could make clear that the responses of subjects who conformed are in no way unusual and could briefly explain some of the mechanisms that make group influence so powerful. In Case 3, again, the researcher should point out to subjects that the majority of those studied did not help. The researcher should summarize the research done to date on helping behavior and outline what is known about why people do not help in emergencies. In both cases, an explanation of how the current research is expected to add to the knowledge of group conformity or of helping behavior and a brief statement of the ways in which greater knowledge of these social phenomena may benefit others will also increase subjects' sense of well-being following the experiment.

Another option to minimize the risks of deception research is to anticipate some of the difficulties and adopt a research plan including a milder form of deception. Sieber (1992, 67-68) notes that deception in research takes one of five forms, with each succeeding form removing more of the subjects' right to self-determination and lessening the knowledge that is the basis for their consent to participate:

  1. informed consent to participate in one of various conditions: subjects know that they will not know which research condition they will participate in (e.g., treatment or control, experimental drug or placebo);
  2. consent to deception: subjects know there is some aspect of the study that will not be fully disclosed;
  3. consent to waive the right to be informed: subjects waive their right to be informed and thus are not told of the possibility of deception;
  4. consent and false informing: subjects give consent but are falsely informed about the nature of the research;
  5. no informing and no consent: subjects do not know they are subjects in any form of research (as when "real-life" situations are studied, or a seemingly real incident is contrived and then observed).

Each of the three cases analyzed here could be considered an example of consent and false informing: In each case, subjects have given consent but are not told what is actually being studied. Case 1 illustrates what one might consider a mild form of false informing -- that is, subjects are not fully informed because of the vagueness of the explanation of the study's purpose, but neither are they lied to outright. Yet because subjects have not consented to any form of deception (they do not know they are not being given full and adequate information), the case is still an example of consent and false informing. Cases 2 and 3 are clear-cut examples of consent and false informing.

The question then becomes, "Could the research purposes in these three cases be accomplished by employing a 'lesser form' of deception, one that preserves to a greater degree subjects' rights of self-determination and knowledge of the research?" In Case 1, it is questionable whether the accuracy of subjects' attitudinal responses would be compromised if they knew that the researcher could not tell them exactly what was being measured. If they were told that they weren't "getting the whole story," would their responses differ from the responses they would make when they were trying to guess at the purpose of the research? It seems that a milder form of deception might be feasible in Case 1; a well-informed researcher must make that judgment. In Cases 2 and 3, it is more difficult to imagine that any milder form of deception than consent and false informing would result in subjects behaving as they would when they were unaware of the study's purposes. In the study on helping behavior, if subjects were at all aware that they had not been fully informed, they would be quite likely to recognize immediately that the "emergency" was contrived. In the study on group conformity, it is possible that subjects would be so busy trying to figure out what was really being measured that they would not behave at all spontaneously or naturally in the group. It seems, then, that in at least two of the cases, the research cannot be accomplished without deception that limits subjects' autonomy.

However, a further determination must be made before the use of deception in research can be justified. The Belmont Report does not consider the worth of the research as a criterion for justifying the employment of deception. The report's criteria exclude any deception research that involves risks to subjects that are "more than minimal." Notice, however, that in this group of cases, as the risks to subjects escalate in severity, the potential benefits of the research increase as well. The study involving the greatest risk of harm to subjects, the helping behavior study, has enormous potential for increasing our understanding of the reasons people fail to help in emergencies, thereby increasing the possibility that we can develop strategies to combat those reasons. The research on group conformity has potentially beneficial aspects as well -- in increasing our understanding of the ways in which gangs operate, for example. It seems that in making decisions to undertake research involving deception, the potential costs to subjects must be weighed against the potential benefits for society.

Such a judgment is difficult to make. As Sieber (1992) points out, it is not always possible to identify risks and benefits in advance, and those that are identified are often not quantifiable. How does one weigh present harm to one individual against potential future benefits for many individuals? Sieber suggests that "common sense, a review of the literature, knowledge of research methodology, ethnographic knowledge of the subject population, perceptions of pilot subjects and gatekeepers, experience from serving as a pilot subject oneself, and input from researchers who have worked in similar research settings" (1992, 76) should all inform the assessment of risks and benefits. Imperfect as such judgments may be, they must be made. Trivial research involving any degree of harm to subjects is certainly unjustified; important research, on the other hand, may generate such benefits as to be worth some degree of harm (minimized and alleviated as much as possible) to subjects. The key is that the researcher should not be the sole authority in deciding when benefits outweigh risks: "[N]o single source can say what potential risks and benefits inhere in a particular study. . . . The benefit and justifiability of research depend on the whole nature of the research process and on the values of the persons who judge the research." (Sieber, 1992, 76-77)

Once we agree that the benefits and risks of research involving deception must be assessed together, we must consider what those benefits and risks may be. The discussion above identifies some potential benefits of the cases described here and some of the risks to subjects as well. Researchers must also be mindful of less obvious risks when considering research involving deception. These risks do not concern the potential for harm to the subjects of research, but rather entail negative consequences of such research for the researcher and for the science of psychology itself.

In a self-revealing essay entitled "Learning to Deceive," Thomas H. Murray describes his discomfort at engaging in deception in the course of research he helped conduct as a graduate student in social psychology (a helping behavior study similar to the one described in Case 3). He notes of the debriefing procedure following this study, "While I did reveal the true purpose of the study, I did not always answer all questions honestly; and I seriously doubt that I, or anyone else, could have removed all negative effects of participation" (Murray, 1980, 12). After encountering in debriefing anxious subjects who were shaking, stuttering, barely able to speak, he continues, ". . . you try to forget the queasiness in their smiles, and the uncertainty in their handshakes. You try to convince yourself that, yes, all harmful effects have been removed. But I did not believe it then, and I do not today." (Murray, 1980, 12) Disturbing as such post-study encounters may be, however, Murray identifies what he believes to be a more insidious danger of deception in research: the danger that the researcher will come to adopt an attitude of callousness, to view subjects as means to an end, and to believe that the characteristics and reactions induced by experimental manipulations in fact describe the way people are. Murray asks, "In trying to make our laboratory so much like the world, do we sometimes succeed in making our world like the laboratory?. . . Do we eventually come to see people as so easily duped outside the laboratory as inside it? And if our research induces people to behave inhumanely, do we come to believe that that is indeed the way people are?" (Murray, 1980, 14)

Such negative consequences of research involving deception do not end with the experimenter, however. The science of social psychology can itself be affected by the methods adopted by its disciples. The more prevalent the practice of deception in social psychology, the more the science comes to be associated with the practice, leading to an erosion of public trust in scientists and their purposes in any area of research in the field. Greenberg and Folger (1988) document that some social psychologists have challenged the unquestioning adoption of deception strategies, claiming that the "pool" of naive subjects grows smaller as populations, especially those such as college students who are often called on to participate in research, begin to expect to be deceived, thereby casting doubt on the validity of experimental findings. They also note that the public may acquire an attitude of distrust and suspicion regarding laboratories, scientists and even a profession that relies heavily on deception to make its progress.

A shocking incident at the Seattle campus of the University of Washington in 1973 illustrates one danger of such a widespread awareness of deceptive research methods in psychology. Students on their way to class witnessed a shooting and neither stopped to help the victim nor followed the assailant; when questioned later, some witnesses reported that they thought the incident was a psychology experiment! (Greenberg and Folger, 1988, 48). Although the criticism that "real-life" experiments lead to incidents such as the one above could be leveled as well at the movie and television industry, the example illustrates that deception in research has ramifications both for the subjects and for the science that extend beyond the time and place of the studies for which it is employed.

The discussion above, centered on three cases, illustrates why deception is employed as a research strategy and why its use has been called into question. Some of the dangers of deception are identified for the subjects, for the researcher, and for the science itself. Yet Greenberg and Folger (1988, 56) report eight studies that have indicated that subjects are bothered less about being deceived in the course of research than are the IRBs that review the proposals. If these findings are accurate, is more debate being raised about deception in research than is warranted? I believe that such findings add another element for consideration in the assessment of risks and benefits of research involving deception, but they do not eliminate the need for such consideration. Subjects in some kinds of experiments may not "mind" being deceived, but subjects participating in others may mind very much. In addition, subjects may not always recognize immediately, or ever, the subtle effects of such experimentation on their self-esteem, for example, or on their evaluations of social psychology and of scientists in general. We cannot dismiss the possibility that deception in research may have negative consequences for both subjects and researchers as well as for the science. Scientists considering deception have a responsibility to weigh the costs against the benefits, and to minimize unavoidable costs wherever possible should they ultimately decide to deceive their research subjects.

References

  • Greenberg, Jerald, and Folger, Robert. Controversial Issues in Social Research Methods. Springer Series in Social Psychology. New York: Springer-Verlag, 1988.
  • Murray, Thomas H. "Learning to Deceive." Hastings Center Report 10 (April 1980): 11-14.
  • The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C.: Department of Health, Education, and Welfare, 1979.
  • Sieber, Joan E. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. Applied Social Research Methods Series, Vol. 31. Newbury Park, Calif.: SAGE Publications, Inc., 1992.