Brian Schrag's Commentary on "Do the Ends Justify the Means? The Ethics of Deception in Social Science Research"

Intentional Deception of Human Subjects in Research

IRB Considerations

The Use of Deception

Moral Wrongs and Harms

Moral Wrongs and Harms in Deception

Debriefing and Harm

Voluntary Consent

Benefits of the Experiments to the Subjects

Broader Issues

References

Notes

Intentional Deception of Human Subjects in Research

These three cases raise a narrower issue and a broader issue. The narrower issue is whether an IRB should approve the conduct of any or all of these experiments, which involve intentional deception of human experimental subjects. The broader issue is whether it is ethical for scientists to employ intentional deception in experiments on human subjects. The broader question has taken on increasing significance over the past fifty years as the use of deception in research has increased dramatically: the proportion of studies using intentional deception in experimentation on human subjects rose from 18 percent in 1948 (Baumrind, 1985, 166) to 37 percent in 1963 and 47 percent by 1983. (Fisher and Fryberg, 1994, 417)

The broader issue raises not only the question of the ethical justification for intentionally deceiving the subjects but other ethical considerations including the moral significance of the particular acts of deception or a practice of deception for the researcher, the training of researchers, the university (if the research is university-based), the discipline, research science and society as a whole.

IRB Considerations

The IRB has a narrower focus. Its concern is primarily, although not exclusively, with protecting the rights and welfare of the human subjects in scientific research, given certain guidelines. Those guidelines may or may not adequately capture all the relevant ethical considerations concerning particular deception research or the practice of such research. Hence, even if the IRB approves any of these experiments, that does not settle the question of whether it is ethical for scientists to engage in this research: the IRB might approve an experiment that is nevertheless not ethically justified. Still, the IRB is a good place to begin in these cases.

The federal guidelines on the protection of human subjects of research (found in the Code of Federal Regulations, Title 45, Part 46) provide the IRB with criteria for determining whether proposed research that falls under its purview will treat human subjects in an ethical manner. These guidelines specifically charge the IRB with determining two things: 1) that the subjects have given free and voluntary informed consent to participate in the study and, more particularly, that a) the circumstances under which the consent is obtained minimize the possibility of coercion or undue influence; b) the informing includes a description of any reasonably foreseeable risks or discomforts; c) refusal to participate will involve no loss of benefits to which the subject is entitled; d) the subject may discontinue participation at any time; and e) if subjects are part of a population that may be vulnerable to undue coercion or influence, additional safeguards are included to protect their rights and welfare. 2) That risks to subjects are minimized and are reasonable in relation to any benefits of the research to the subjects and in relation to the importance of the knowledge gained in the experiment. (45 CFR 46.111)

These guidelines draw on three ethical concepts relevant to the ethical practice of human research, namely, "respect for persons," "beneficence" and "justice."(2) The ethical principle that one ought to treat humans with respect is used to ground a requirement that prospective human subjects should not become subjects of a scientific experiment unless and until they have given free, voluntary and informed consent to participate in that experiment.

IRB guidelines do not categorically rule out deception of human subjects in research, even though the ethical principle and concepts outlined above would appear to preclude it. Federal guidelines allow deception of human subjects in experiments by allowing a waiver of the informed consent requirement provided that

the IRB finds and documents that: (1) The research involves no more than minimal risk to the subjects; (2) The waiver will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation. (45 CFR 46.116 [d])

The risks to subjects must be "reasonable in relation to anticipated benefits, if any, to subjects and to the importance of the knowledge that may reasonably be expected to result." (45 CFR 46.111 [a] [2])

The IRB may also be guided in these cases by the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct (1992), as well as by its Committee on Ethical Standards in Psychological Research's Ethical Principles in the Conduct of Research with Human Participants (1973).

Before the IRB approves an experiment that involves deception, it must consider the risk of harm to subjects relative to the benefits to the subjects and the importance of the knowledge gained; actual harm to subjects; the necessity of deception in the experiment; and whether subjects are adequately debriefed after the experiment. A central consideration is the harm to subjects caused by deception. To assess such harm requires careful attention to the kinds of deception involved.

The Use of Deception

All three of these cases present some element of deception of the subjects; the level of deception increases from the first case to the third. In all three cases, the subjects are deceived as to the purpose of the experimental activity. In Case 1, the investigator deceives by failing to reveal the nature of the study completely. In Case 2, the researcher lies to the subjects about the purpose of the experiment; they are told the purpose of the activity is to measure their attitudes when in fact the research investigates the degree to which they and their attitudes are vulnerable to group pressure. In Case 3, subjects are told that the purpose of the task is one thing when in fact it is another: to observe their "helping" behavior in response to someone they are deceived into thinking is in distress.

In Cases 2 and 3, the subjects are not only deceived about the purpose of the experiment but also about the status of other persons they interact with in the experimental group. Subjects are allowed to think all members of the group are experimental subjects when in fact some are confederates in the experiment. In Case 3, subjects are additionally deceived about the status of someone outside the group and are led to believe that person is in distress.

Moral Wrongs and Harms

The IRB needs to think carefully about the moral wrongs and the harm to subjects that come from the deception in these cases. Because social science research has a tradition of using deceptive techniques, and because some lines of research may not be possible without deceiving subjects, an IRB may tend to underestimate the moral wrong or harm of deception or to be biased in favor of the benefits of the knowledge gained in the research. For that reason I will focus in some detail on the moral wrongs and harms to the subjects that can arise from such deception. Just as there are various levels of deception, so there are various kinds of moral wrong or harm to subjects that can arise from deception.

Intentional deception, as Sissela Bok argues in her book Lying, is as much a form of deliberate assault on persons as is physical violence: Both can be used to coerce individuals to act against their will. (Bok, 1989, Chapter 2, especially pp. 18-21) Deception is used in these cases to manipulate the beliefs and choices of the subjects as well as their responses to the situations. Deception is used to manipulate the subjects' choice to be involved in the experiments. Particularly in Cases 2 and 3, had subjects known the real purpose of the experiment, some may well have chosen not to participate. Deception is also used to manipulate the subjects' choice of responses to peers and to the situation within the experimental setting. Had the subjects known that some members of the group were confederates, they may well have withdrawn their initial disposition to trust the reactions of those peers, and they likely would have chosen to react to the confederates' behavior quite differently.

Deception fundamentally fails to respect persons and for that reason morally wrongs the person. Subjects in all these cases are not treated as rational beings capable of rational choice but are treated solely as means to the researcher's ends. Subjects of deception are always morally wronged in this way even if they do not realize or never realize they have been deceived. Of course, they may also be harmed by deception, even if they do not realize that they are being deceived.

It is important to distinguish between morally wronging persons and harming them; it is a category mistake to equate the two.(3) We morally wrong people when we violate fundamental moral principles in our dealings with them; for example, when we fail to respect them as persons, treat them unjustly, violate their rights, invade their privacy or gratuitously harm them. The concept of morally wronging a person is independent of its criterion of application. Some would argue the criterion involves treating persons solely as means; others would argue it involves only doing a person physical or psychic harm; some would include both. When we manipulate people by lying to them, we may morally wrong them, even though we may not harm them. Harming persons is not necessarily the criterion of morally wronging persons; it is one, but only one, way of morally wronging them. Moral wrongs to persons may be accompanied by harm to them; in that case, they have been morally wronged in more than one way.

This distinction between moral wronging and harming is blurred in the federal guidelines. The language of risks and harm in the guidelines may direct our attention away from concern for the moral wronging of subjects. Focus on the language of harm has blinded researchers to the distinction and led them to assume "no harm, no moral foul," that is, that any negative consequence of deception can be undone by undoing the harm. Ignoring the distinction makes it easier to justify deceptive research because the risk-benefit analysis takes into account only harms, not other moral wrongs. Much reasoning about the debriefing of human subjects in deceptive research misses the point because it assumes that the only wrongs to be addressed are the harms caused by the research. The harms of deception may or may not be undone by debriefing. The moral wrong of manipulating subjects by deception into acting against their will cannot be undone. I assume that moral wrongs other than harms are relevant to IRB deliberations regarding approval of experiments.

Moral Wrongs and Harms in Deception

Some of the moral wrong of deceptive experiments, then, comes from simply failing to treat persons with respect. Notice that consent to be morally wronged does not eliminate the wrong. If we succeed in getting people to agree to let us morally wrong them, that does not justify the wrong. Indeed, even if people were to give us permission to fail to treat them as rational persons and we subsequently do so by deceiving them, we have still wronged them, just as persons who consent to slavery are wronged if we enslave them.

In Cases 2 and 3, the nature of the experiments enabled by the deception may also be a source of wrong and harm to particular subjects. Joan Sieber notes that one defensible justification for deception research is that it may be the only way "to obtain information that would otherwise be unobtainable because of the subject's defensiveness, embarrassment, shame or fear of reprisal." (Sieber, 1992, 64)

One might think that Sieber's justification is precisely a justification for not allowing the deceptive research. In these two cases, deception allows the investigator to invade the privacy of the subjects without their knowledge or consent and to force the subjects (again without their knowledge or consent) to confront certain inclinations in themselves and to reveal them to their peers in the experiment and to the researcher. The inclinations revealed might, for some subjects, fall under Sieber's category of "otherwise unobtainable information." We are not given the controversial topic to be discussed in Case 2. That may be significant for the IRB to consider, since particularly sensitive topics might be especially stressful for some subjects or heighten their reluctance to have their private thoughts invaded. In Case 2, the subjects' inclinations to follow group pressure and group norms are revealed. In Case 3, the subjects' reluctance to help a person presumed to be in distress is revealed.

There are two sorts of wrong here. First, both experiments invade the subjects' private behavior and emotions. As Bok argues, learning about people's private behavior and emotions without their consent is akin to spying on them through keyholes and is not "less intrusive for being done in the interests of research." (Bok, 1989, 194)(4) It is not always true that what you do not know cannot wrong you. With regard to spying through a keyhole, we think a moral wrong has been done even if the subject is unaware of the spying.

Harm to subjects is also likely. In these cases, the subjects will learn about the invasion of privacy since they will be debriefed, and by that means further harm may be done. People may well vary in the strength of their sense of privacy and in the harm they suffer from having that privacy invaded. Some may be quite bothered by this invasion of their most intimate being, others not at all. Consequently, a reasonable case can be made for the claim that the subjects are best positioned to judge the harm done. Since the subjects will be deceived and denied the opportunity to give voluntary and informed consent, they cannot be asked how much they think they would be harmed by the experiment. Some would argue that if we survey a representative sample of potential subjects about participation in such experiments, we can take their responses as reasonable evidence of what the subjects would say if they were given a choice to participate. (Sharpe et al., 1992, 589) However, given the variability of individual responses to such invasion, there is no reason to think the substituted judgment of the researcher or the IRB, even based on such evidence, is an accurate gauge of the harm done to the individual subject by this invasion of privacy.

The second sort of harm, in both Case 2 and Case 3, is that caused by forcing persons to confront, or reveal to others, knowledge about themselves they may not want to confront and may find painful to live with.(5) For example, Sieber notes research suggesting that most people perceive others as "conforming sheep" but view themselves as not being influenced by peer pressure. (Baumrind, 1979, 65) Some subjects in Case 2 may be upset by being forced to confront that bit of self-deception or by revealing it to others. In Case 3, subjects may feel anxious, embarrassed, ashamed or guilty for not coming to the aid of a person they believe to be in distress; they may feel the same when forced to confront that fact about themselves and have it revealed to others. Notice that mere participation in the experiments in Cases 2 and 3 may force this realization, whether or not the subjects are debriefed.(6) Again, people may vary in how much difficulty this unsolicited knowledge causes them; the subjects, and only the subjects, are best positioned to judge the harm done to themselves.

Additional harm to subjects may occur when they realize they have been deceived in order to be used in an experiment. In these cases they will know they have been deceived since they will be debriefed. In general, when persons discover that they have been deceived and manipulated, the natural response is to feel a loss of control over their own actions, to feel used, to feel they have been played for fools, and consequently to be resentful, distrustful and suspicious both toward those who deceived them and, more generally, toward all others. In this case that distrust may also be directed toward social scientists and scientific research in general.(7) There is no reason to assume that this suspicion and distrust is a momentary or fleeting reaction that disappears without a residual impact on the trustful disposition of subjects. We know in some instances (e.g., the Tuskegee syphilis experiment or the radiation experiments conducted on citizens in the 1950s by the U.S. government) that the experience of discovering that they had been deceived into being experimental subjects had lasting effects on subjects' trust of medical and governmental officials. The loss of trust caused by such deception also has a way of spreading to those who were not subjects but simply learn about the deceptive practice.(8) The makeup of the subject group may be relevant here. We know that the bulk of social psychology research is carried out on college students. (Fisher and Fryberg, 1994, 418) The impact of being deceived is especially significant when the subjects are college students who realize they are being deceived by a trusted faculty member who is also supposed to be a teacher and role model for the profession.

Debriefing and Harm

Some researchers may assume that any harm caused by deceptive research can be "wiped out" by debriefing after the fact. Debriefing includes dehoaxing (revealing the deception) and desensitizing (attempting to remove any undesirable consequences of the experiment). The aim of desensitizing is to restore the subject to an emotional state of well-being. Sieber notes evidence that desensitizing is not always effective in removing all the damage to self-esteem caused by the deceptive experiment. (Sieber, 1992, 72)(9) Indeed, the debriefing may only increase the harm by ensuring that the subjects are explicitly and exactly aware of the unflattering character traits and behavior they have revealed about themselves.

Voluntary Consent

As the IRB thinks about whether to approve any of these three research proposals, an important issue to consider is the degree to which the experimental subjects have given their voluntary and informed consent to participate in the experiment. Informed consent is a necessary but not sufficient condition of voluntary consent. That is, if the consent is not informed, it cannot be completely voluntary since, if subjects do not know what they are consenting to, they cannot be said to have voluntarily consented to do it. However, giving informed consent is not necessarily sufficient to ensure the consent is voluntary.

Suppose, however, the researcher proposes in these cases to ask subjects to agree to participate in an experiment with the understanding that they will not be told about the exact nature or purpose of the experiment until afterward and that there may be some deceptive elements in the experiment. The subjects at least voluntarily agree to be deceived, even if they are unclear about the details of the deception.

To assess voluntary consent under such conditions, it is necessary to know how these subjects are recruited and under what conditions. In Case 1, for example, how were subjects recruited to the workshop? Was selection for the workshop independent of recruitment for participation in the study? For example, was the workshop part of mandatory training on environmental issues for employees? In such a setting, participants may not feel free to refuse to participate in the testing. Was the workshop staged only for the purpose of testing its impact on changing attitudes, and, if so, how were subjects recruited? In Cases 2 and 3, the subjects are brought to the laboratory, so they presumably are at least aware from the beginning that this will be an experimental activity. If they agree to participate after being told that some information about the experiment is being withheld until afterward and that some deception may be involved, then one might argue that reasonably voluntary consent was obtained.

But such an arrangement does not establish that the subjects' consent was sufficiently voluntary in the sense of being given without undue influence or coercion. As an illustration, consider a practice of psychology departments in many universities, thought by many to be acceptable. These departments include in the syllabus of introductory psychology classes a requirement that students either participate as subjects in a certain number of departmental research experiments or write an additional paper for the course. (This requirement is a convenient way of ensuring plenty of experimental subjects for the department.) One such practice over a twenty-year period is described by Sharpe et al. (1992).

Although the students have an "alternative" to participating in experiments as subjects, it does not follow that their choice to engage in the experiment is uncoerced or not unduly influenced. As a practical matter, many of the students may need to take the course; avoiding the course is not an option. Once they are in the course, there are coercive negative inducements to become subjects in order to avoid writing a paper. The negative consequences of writing another paper are clear to students; the negative consequences of serving as a subject may not be.(10) In such circumstances, there may be a negative inducement to "volunteer" for the research. There are parallels between this practice and the dispensing of aspirin to poor black subjects in the Tuskegee syphilis experiment to gain their cooperation in a nontherapeutic experiment. Even if students knew in advance exactly which experiments they would be asked to participate in, their consent, although informed, may in these circumstances be coerced and to some degree involuntary. Furthermore, in cases of deceptive experiments, students may need to decide between the syllabus alternatives before they know the nature of the experiments; it may be too late to back out of the experiments once they realize what they will be asked to do as subjects. If a similar practice is the source of experimental subjects in the three cases, then it is not at all clear that the subjects are in a position to give voluntary consent, whatever the degree of informed consent in the cases.

Benefits of the Experiments to the Subjects

It is not clear that these experiments offer much in the way of benefits to the subjects. A standard rationale for using college students as experimental subjects is that participation gives them an increased appreciation of the discipline; a recent study found no evidence that it has that effect. (See Sharpe et al., 1992, 589) Some argue that the subjects receive, as a benefit of debriefing, a brief explanation of current research understanding of the issues under investigation. But the subjects could learn that information by reading the research literature without participating in the experiment.

In the absence of any benefits, the harm or potential harm to the subjects, particularly in Cases 2 and 3, surely outweighs the benefits to the subjects.

The IRB is also called on to determine whether the benefit to general knowledge justifies the deception of these subjects. If one accepts that charge to IRBs as morally legitimate, one of the first questions an IRB ought to ask, particularly in Cases 2 and 3, is, "Are these experiments necessary?" The experiment in Case 3 is clearly very similar to a large number of experiments on helping behavior conducted over the last thirty years. Unless it can be shown that this experiment adds significantly to that research, it ought to be denied on those grounds alone. Does the experiment in Case 2 really add anything to our knowledge of the influence of peers on our willingness to assert or express our views on controversial topics? Studies of groupthink have been around for a long time. The Case 2 experiment ought also to be denied on those grounds alone.

But one ought to raise a more fundamental ethical question at this point about the IRB guidelines. The IRB is allowed by its guidelines to weigh the harm to research subjects in an experiment against the value of general knowledge gained in the experiment. In the case of experiments in which subjects are involved without informed, voluntary consent, the harm to subjects must be considered "minimal" by the IRB in order to approve the experiment. (45 CFR 46.116) The definition of "minimal risk" is

that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological exams. (45 CFR 46.102 [i])

One might ask whether deception of human subjects is ever a minimal harm; if it is not, such experiments should never be done.

The rationale for balancing minimal harm to subjects against the value of knowledge gained is the principle of beneficence invoked in the Belmont Report. The principle of beneficence in the Belmont Report is understood as an obligation expressed in terms of two rules: "Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms." (National Commission, 1979, 4)

There are several issues here.(11) First is the issue of whether beneficence is an obligation or merely a good thing to do. One might at least agree that there is a prima facie obligation to be beneficent. Second is the issue of the exact content of beneficence. The principle of beneficence is usually thought of as an obligation to do good and avoid harm. William Frankena argues that the notion can be explicated as: 1) one ought not to inflict harm; 2) one ought to prevent harm; 3) one ought to remove harm; and 4) one ought to promote good. (Frankena, 1973, 47) He argues that the notion of an obligation to maximize the good is yet a further principle, which presupposes but is not necessarily implied by the principle of beneficence. (Frankena, 1973, 45) A final issue is the lexical ranking of these obligations. Traditionally in ethics, the notion of not harming takes precedence over doing good, to say nothing of maximizing the good. If that is the case, then on this explication of beneficence, the fact that subjects are harmed in deceptive experiments should settle the issue for the IRB: Deceptive experiments should not be done.

The rationale of the Belmont Report for giving priority to "maximizing the good" over "doing no harm" is weak on this point. The report argues that although one should do no harm,

[E]ven avoiding harm requires learning what is harmful; and in the process of obtaining this information, persons may be exposed to risk of harm. Further, the Hippocratic oath requires physicians to benefit their patients according to their best judgments. Learning what will in fact benefit may require exposing persons to risk. (National Commission, 1979, 4)

Usually the interpretation of the "do no harm" principle is that one should not intentionally do that which one already knows will do harm. It is not a requirement that one minimize harm or that one try to avoid all harm by first attempting to discover everything that may cause harm even if that discovery process itself causes harm. Nor is the dictum a general rationale for doing harm to someone in order to prevent harm to others. To say otherwise is simply to collapse the distinction between avoiding known harms and minimizing all harms, known or unknown. In the specific case of treating a patient, the dictum may allow a rationale for subjecting the patient to risk in order to find a cure for an even greater harm to the patient. But there, the risks and benefits are all borne by the same person. With the exception of such cases, "Do no harm" is silent with respect to the issue of calculating tradeoffs of harm between persons.

In cases of deceptive experiments, we do not need to do the experiments to know the harm caused by deception. It is possible that deceptive experiments may make us aware of why humans do not alleviate harm, for example, in "helping situations." But to say it is permissible to sacrifice the interests of subjects of human experimentation, without their knowledge or consent, for the welfare of others in order to learn what is harmful brings us right back to a violation of the principle of respect for persons. Notice the case is different when subjects freely give their informed consent to engage in experiments that may harm them but produce a good for others. In such situations, the principle of respect for persons is observed. One may conclude that IRBs may be allowing far more deceptive practice than is warranted by their own moral principles.(12)

We have concentrated on the harm deceptive experiments may do to subjects and criticized the notion of the IRB trying to balance the harms to the subjects of deceptive experiments against general gains in knowledge. One issue we do not have space to address is whether deceptive research is even necessary; social scientists themselves differ on whether good science requires such research. (Compare Sieber [1992] and Baumrind [1985].)

Broader Issues

The practice of deceptive research raises broader ethical issues that the IRB is not charged with considering but that are legitimate concerns for the professional research community as well as other social institutions. I can only mention them here. There is the harm of deception to the researchers who engage in it; Thomas Murray, in his essay "Learning to Deceive" (1980), gives an eloquent firsthand account of those harms. There are broader harms as well. Integrity and devotion to the truth are core values of academics and of the university. Should the university really be in the business of teaching students how to deceive people? What impact does a generally acknowledged practice of deception have on the perception of the trustworthiness of the research community? And what impact does such a practice have on social perceptions of the acceptability of engaging in deception, as long as the deceiver thinks it is in a good cause?

References

  • Baumrind, Diana. "IRBs and Social Science Research: The Costs of Deception." IRB: A Review of Human Subjects Research 1 (6, October 1979): 4.
  • Baumrind, Diana. "Research Using Intentional Deception: Ethical Issues Revisited." The American Psychologist 40 (February 1985).
  • Bok, Sissela. Lying: Moral Choice in Public and Private Life. New York: Vintage Books, 1989.
  • Fisher, Celia, and Fryberg, Denise. "Participant Partners: College Students Weigh the Costs and Benefits of Deceptive Research." The American Psychologist 49 (May 1994).
  • Jones, James H. Bad Blood: The Tuskegee Syphilis Experiment, 2d ed. New York: The Free Press, 1993.
  • Macklin, Ruth. "Autonomy, Beneficence and Child Development." In Barbara Stanley and Joan E. Sieber, eds., Social Research on Children and Adolescents: Ethical Issues. Newbury Park, Calif.: Sage Publications, 1992.
  • Murray, Thomas. "Was This Deception Necessary?" IRB: A Review of Human Subjects Research 2 (10, December 1980): 7-8.
  • OPRR, Department of Health, Education and Welfare. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. 1979.
  • Sieber, Joan. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. Applied Social Research Methods Series, Vol. 31. Newbury Park, Calif.: Sage Publications, 1992.
  • Sharpe, Donald, et al. "Twenty Years of Deception Research: A Decline in Subjects' Trust?" Personality and Social Psychology Bulletin 18 (5, 1992).
  • U. S. Department of Health and Human Services, "Protection of Human Subjects." Code of Federal Regulations Title 45, Part 46 (Revised 1991).

Notes

(2) These principles were first articulated in The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. OPRR Report, Department of Health, Education and Welfare, 1979.

(3) For an early discussion of these points see Tom Murray, "Was This Deception Necessary?" IRB: A Review of Human Subjects Research 2 (10, December 1980): 7-8. For a later discussion see Ruth Macklin, "Autonomy, Beneficence and Child Development," in Barbara Stanley and Joan E. Sieber, eds., Social Research on Children and Adolescents: Ethical Issues (Newbury Park, Calif.: Sage Publications, 1992). The point seems to have been lost on many social scientists.

(4) The arguments here draw from her discussion in the whole of Chapter 13.

(5) Baumrind appropriately calls this "inflicted insight" because the subject is given painful insights into his or her flaws without asking for such insights. See Diana Baumrind, "IRBs and Social Science Research: The Costs of Deception," IRB: A Review of Human Subjects Research 1 (6, October 1979): 4.

(6) For a graphic description of the negative effects on subjects of participating in helping experiments such as the one proposed in Case 3, see Tom Murray, "Learning to Deceive," The Hastings Center Report 10 (2, April 1980): 12.

(7) Sieber refers to research indicating the extent to which college students who serve as experimental subjects now assume the researcher will be attempting to deceive them. (Sieber, 1992, 7, 65)

(8) See, for example, James H. Jones, Bad Blood: The Tuskegee Syphilis Experiment, 2d ed. (New York: The Free Press, 1993), Chapter 14, for a discussion of the impact of the Tuskegee study on the trust of black Americans toward government health personnel and the subsequent impact of that distrust on efforts to deal with AIDS in the black community.

(9) For a candid description of the experience of debriefing subjects of a helping experiment, see Murray (1980), 12.

(10) Sharpe et al. (1992) report that virtually all students opt for the research. (p. 586)

(11) For a discussion of these points see William Frankena, Ethics, 2d ed. (Englewood Cliffs, N.J.: Prentice Hall, 1973), pp. 45-48.

(12) For earlier discussions of some of these issues, see Ernest Marshall, "Does the Moral Philosophy of the Belmont Report Rest on a Mistake?" IRB: A Review of Human Subjects Research 8 (6, 1986): 5-6, and Baumrind (1979).