Brian Schrag

This case raises issues in research ethics that are in part old and in part new. On the one hand, there is the old issue of whether it is ethically justified to do observational research on “public” human behavior. A newer question is whether listserves on the internet are “public” spaces and whether there are privacy norms that apply and place restrictions even in “public” spaces. I will address three issues in this case.

I. Is this Human Subjects Research?

The first issue is whether or not the research proposed in this case is properly classified as human subjects research as defined in the Code of Federal Regulations and therefore falls under the United States guidelines for human subjects research.

In Part 1 of this case, Dr. McIntosh’s first suggestion is that Roger simply lurk online as an unregistered guest and do his research, since the web site can be accessed by unregistered guests to read current and archived postings. Is this human subjects research? The relevant guidelines are found in the Code of Federal Regulations, PART 46, PROTECTION OF HUMAN SUBJECTS: CFR 46.102(f)(1) and (2) provide the definition, and CFR 46.101(b)(2) and (4) identify the activities that are exempt from human subjects research guidelines (Code of Federal Regulations).

Consider first the definition of human subjects research in the Code of Federal Regulations (PART 46, PROTECTION OF HUMAN SUBJECTS, §46.102 Definitions):

(f) Human subject means a living individual about whom an investigator (whether professional or student) conducting research obtains

(1) Data through intervention or interaction with the individual, or
(2) Identifiable private information.

Intervention includes both physical procedures by which data are gathered (for example, venipuncture) and manipulations of the subject or the subject's environment that are performed for research purposes. Interaction includes communication or interpersonal contact between investigator and subject. Private information includes information about behavior that occurs in a context in which an individual can reasonably expect that no observation or recording is taking place, and information which has been provided for specific purposes by an individual and which the individual can reasonably expect will not be made public (for example, a medical record). Private information must be individually identifiable (i.e., the identity of the subject is or may readily be ascertained by the investigator or associated with the information) in order for obtaining the information to constitute research involving human subjects.

In Part 1, if Roger merely lurks online, observing postings or looking at archived postings, there is presumably no interaction between Roger and the members of the group; neither is there any intervention, since he is not manipulating the subjects or the subjects’ environment. The information Roger would obtain is available to anyone who accesses the web site as an unregistered guest. It is reasonable to say the information is as public as that in a daily newspaper. Given these considerations, it is reasonable to say that the research proposed in Part 1 does not constitute human subjects research under the U.S. Code of Federal Regulations.

In Part 2, an alternate proposal to download only the site’s archived messages posted during the previous year is considered. The forum moderator indicates that participants did not expect at the time of posting that their messages would be used for research purposes, that most participants are unaware that their postings are publicly available, and that they view their messages as private communications to other members of the NFF forum. The moderator will help Roger only if Roger first seeks permission from the entire NFF support group.

The fact that participants are unaware that their postings are publicly available, and that they consider them private, does not change the status of the research under the definition. This research still does not fall under the definition of human subjects research. For a systematic discussion of what counts as human subjects research on the internet under the Code of Federal Regulations, see Walther.

In Part 3, Roger decides to post a message to the community to inform them that he would like to conduct research on NFF’s activities during the next year. At that point Roger has begun to interact with the group. By making the group aware that it is possible for them to be monitored, Roger has destroyed their illusion that this is a private space and may make them self-conscious about their postings. In that sense, Roger has intervened in the group and perhaps already altered the group behavior, whether or not they give him permission to proceed. If they consent to the research, the same research activity which did not previously fall under the definition of Human Subjects research certainly does now.

Impact of human subjects research on group function
The primary function of this website is to provide mutual support for a group of persons concerned with a disorder that is surely distressing to those who have it or who have a loved one with it, and all of whom are aware of the social stigma associated with the disorder. Their focus is understandably on that situation, and they may deliberately keep access boundaries minimal in order to encourage those who seek support to join in. Their focus is thus inward, on their group, not on a wider public of strangers scrutinizing their every word. It is not credible to assume group participants would not be affected by the realization that a complete stranger was observing them and reporting their interactions to a larger world of strangers. In this sense, this particular group differs from other groups, such as those on Facebook, whose participants assume that what they write is for a wider public.

Thus one effect Roger and his mentor may have in doing their study of this group as proposed in Part 3 is to undermine the function of the group. The group has been created as a support group. That presupposes its members share a common concern and develop a certain trust and a climate of mutual support among the members. It has to be disruptive to be aware that a stranger, who does not share those concerns, is observing and at some point reporting their interactions. (Elgesem) That is so whether their anonymity is protected or not. (Imagine the impact on group dynamics if an Alcoholics Anonymous group was aware that a researcher was sitting in on their meeting and would be reporting their discussions and interactions to a wider audience.)

Undermining this narcolepsy group’s function as a support group is a moral harm. How does Roger justify the moral harm done to this group in order to observe them?  What is the research value of this study that is so important that it justifies undermining the very purpose of the group’s existence, especially since there may be many other groups Roger could study for which this is not a consideration? 

Is this research subject to research guidelines of other countries?
Although it is understandable to approach this case from the perspective of the U.S. Code of Federal Regulations, it is worth noting that postings in this case could have been made by citizens of other countries (and Roger has no way of knowing the background of the group). It is therefore quite possible that participants’ perceptions of the ethical acceptability of “research by lurking,” and those countries’ research guidelines concerning invasion of privacy in research on human subjects, differ considerably from those reflected in the U.S. Code. It is also worth noting that European research guidelines are much more inclined to assume a deontological emphasis on the rights of individuals and not (as does the U.S. code) allow utilitarian considerations of benefits to others to override those values (Ess).

Is this Observation of Internet Behavior, the Observation of Public Behavior?
Whether the research proposed in either Part 1 or Part 3 of this case falls under the definition of human subjects research is one issue. However, even if the research in Part 3 of the case does fall under that definition, it may be exempt from human subjects research guidelines because, it could be argued, it involves observation of “public” behavior.

The exemptions from human subjects research guidelines are specified in the Code of Federal Regulations §46.101(b):

§46.101 (Code of Federal Regulation)

(b)…research activities in which the only involvement of research subjects will be in one or more of the following categories are exempt from this policy:

(2) Research involving the use of educational tests (cognitive, diagnostic, aptitude, achievement), survey procedures, interview procedures or observation of public behavior, unless:
(i) information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects; and (ii) any disclosure of the human subjects' responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects' financial standing, employability, or reputation.

(4) Research involving the collection or study of existing data, documents, records, pathological specimens, or diagnostic specimens, if these sources are publicly available or if the information is recorded by the investigator in such a manner that subjects cannot be identified, directly or through identifiers linked to the subjects.

Assuming Roger can protect the identity of persons in the group, and if the behavior being observed is public or the archives of group discussions are public, then Roger may be exempt from human subjects guidelines and is not required by the regulations to obtain informed consent from the subjects. (I hasten to add that, even if the research is exempt, there may still be good reasons to bring this research before an IRB for review.)

It is an open question whether, in fact, Roger can protect the identity of the participants in doing this research. Bruckman details just how difficult it is to disguise and protect the identity of subjects in this sort of online setting (Bruckman).

II. Is the Internet in the Public Sphere?

This case raises issues beyond simply asking what the Code of Federal Regulations would require of researchers. It raises a new conceptual question with implications for this sort of research: should we view the narcolepsy listserve as a public space or public sphere? For the purposes of research ethics, is this listserve a public space, is it relevantly analogous to a public space, or is it something altogether different? In doing research ethics on the web, it is common to assert an analogy between a public space and space in a public chat room, and therefore between observation of subjects in the public sphere and observation of online behavior (Ess). The question of whether this research is exempt from human subjects guidelines, as discussed above, assumes that such spaces on the internet constitute public space. Is that really so?

If the listserve literally is a public space, then there is a case to be made that ethical guidelines for observing human action in that “internet space” are no different from guidelines for observing human behavior in the public square. The behavior is thus public and in that sense “up for grabs.” Anyone is free to observe anyone else in the public square, and since a listserve is just a variant on the public square, no listserve participant can complain about being the object of surveillance in that setting.

The paradigm of a public space is a public square, with actual people walking about, observable to all, perhaps with people sitting at open air cafes holding conversations accessible to others at nearby tables. Yet, on the internet, people are not in the same physical location, not in visual contact, perhaps not even in the same temporal coordinates.  What is true is that the internet is a “technically accessible medium.” But why should technical accessibility be equated with being in the public space?  (Berry, 2004)

Walther, for example, presupposes that a listserve is literally a public space, or relevantly analogous to one, in which participants cannot reasonably expect that what they say and do will be treated as private. As he notes, research use of conversation gathered in a publicly accessible venue is not human subjects research by definition and is parallel to recording conversations in a public park. Collection of data which is publicly available is analogous to collecting data from old newspapers or public broadcasts (Walther, p. 207).

Bruckman, on the other hand, challenges that analogy and argues that our intuitive notions of “public” and “private” in this context can be misleading: a web page is neither a public place, like an art gallery, nor a private place, like one’s home; it is a web page. Bruckman argues that, in thinking about research on the postings on the narcolepsy listserve, rather than invoking the analogy of a person in a public square or public park, perhaps the appropriate analogy is that of an author of a published work (Bruckman). Is it conceptually clearer to think of the internet as a “space” in which embodied persons “interact,” or is it more appropriate to think of the internet as a textual repository where authors deposit their work? (Berry)

It is true that we do argue that a letter to the editor of a newspaper, addressed to fellow citizens is a public document in the “public” sphere. But, in that setting, the letter is intentionally addressed to a wide audience of strangers.  How is that comparable to what is written by members of the narcolepsy group?  What is distinctive of their writings is that they are written explicitly to their group members who share a fairly narrow set of therapeutic goals; they are not writing to the universe as a whole.

If one thinks of postings on the internet as the creative writings of authors, then the use of those materials is governed not by human subjects research guidelines but by permission to use copyrighted material. One effect of that shift in paradigm is to force a recognition that such postings, even if public, are not simply “up for grabs,” as a researcher’s taped conversations in the park might be, but must be treated as copyrighted material.

It is beyond the scope of this commentary to resolve the question, but perhaps it is enough to raise it here to caution researchers not simply to assume that the internet is a public space to which all the usual understandings about doing research in public spaces apply.

III. Is there no privacy in the public sphere?

There is a larger issue that goes beyond the question of whether this research is activity in the public sphere.  Suppose we grant for the moment that the activity of the narcolepsy group falls within the category of the public sphere. There is a yet more fundamental question to address.  That is the question of whether there can be privacy in the public sphere. That possibility challenges the very presupposition of the conventional public/private distinction.

The conventional wisdom, which underlies longstanding practice in observational research in the social sciences and the Code of Federal Regulation human subjects research guidelines, is as Helen Nissenbaum puts it:

If you have chosen to expose yourself and information about yourself in public view with the result that others have access to you or to information about you without intruding upon your private realm, then any restrictions on what they may observe, record or do with this information cannot be justified. (Nissenbaum, 1998 p. 572)

This is not an issue unique to the internet. In the social sciences, there is a long history of assuming that public behavior is fair game for observational research and that there is implicit consent in a subject’s public behavior that such behavior may be studied by others.  Lurking on the internet, in this case, may be no different than anthropologists observing and writing about the behavior of an isolated, indigenous tribe without the tribe’s knowledge or consent or the infamous case of the observational research in the Tearoom Trade case.  In all such cases, subjects may be unaware that their public behavior is being recorded and reported to a wider audience of complete strangers.

Nissenbaum and other scholars (Nissenbaum, 1997, 1998, 2004; Rachels, 1975; Scanlon, 2001; Schoeman, 1984) have begun to challenge this conventional wisdom and argue for a fundamental rethinking of the public/private distinction and for the notion of a sphere of privacy in public. Nissenbaum has been at the forefront of that discussion as it relates to the internet.

We cannot rehearse the entire argument for this perspective but the basic argument is this. We all live our lives in multiple contexts, realms or spheres, including such contexts as our work setting, visiting friends, seeking health care, shopping, banking and walking the public streets. Each of these contexts is governed by norms, including norms for the exchange of information. The central point is that there is no place that is not governed by informational norms. The notion that when one ventures out in public no norms are in operation is simply pure fiction.

Nissenbaum posits two forms of informational norms for these contexts. One is a norm of appropriateness. This norm dictates that information which may be appropriate and fitting to reveal in one particular context may not be fitting and appropriate to reveal in another. The kind of information appropriately shared by a patient with a doctor is not necessarily the kind of information that would be appropriate for a doctor to share about himself with a patient. Information on one’s financial standing may be appropriately shared with a bank but not necessarily appropriately shared by the banker with acquaintances. It is understood to be inappropriate to take information that is appropriate in one context (e.g., revealing information about oneself in a group therapy session) and insert it into another context (e.g., a researcher sharing that information in a research project).

The second is a norm for distribution or transfer of information. We recognize that there are norms regarding the flow of information about ourselves. It is expected that if one shares information with a friend, it would be a violation of the norms of friendship for the friend to share that information with strangers. It would be a violation of the norms of support groups if information revealed about oneself in that context were to be transformed by someone else into data for their research paper. 

On this view then, as Nissenbaum put it,

personal information revealed in a particular context is always “tagged” with that context and never “up for grabs” as other accounts would have us believe of public information gathered in public places. (Nissenbaum, 2004, p. 121)

It is beyond the scope of this commentary to assess Nissenbaum’s analysis but if her analysis is right, it does help one to see why there may be a difference between information about a person being “technically accessible” on the internet and a researcher being morally justified in appropriating that information. The mere fact that such information is “public” in the sense of “technically accessible” does not justify its acquisition and use by a researcher. That is what is wrong with Roger lurking on line and using the data from the narcolepsy group for his research. That is what is wrong with downloading the archived data without their consent and perhaps what is wrong with him even approaching them for consent. Nissenbaum’s analysis also raises questions about the general practice, particularly in the social sciences, of research involving observation of human behavior in a public setting. None of this is captured by the current Code of Federal Regulations research guidelines and may call into question the adequacy of those guidelines.   

References

  • Bassett, Elizabeth H. and O’Riordan, Kate. “Ethics of Internet Research: Contesting the Human Subjects Research Model,” Ethics and Information Technology, Volume 4, Number 3, 2002, pp. 233-247.
  • Berry, David M. “Internet Research: Privacy, Ethics and Alienation: An Open Source Approach,” Internet Research, Volume 14, Number 4, 2004, pp. 323-332.
  • Bruckman, Amy. “Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet,” Ethics and Information Technology, Volume 4, Number 3, 2002, pp. 217-231.
  • Capurro, Rafael and Pingel, Christoph. “Ethical Issues of Online Communication Research,” Ethics and Information Technology, Volume 4, Number 3, 2002, pp. 189-194.
  • Code of Federal Regulations, TITLE 45, PUBLIC WELFARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES, PART 46, PROTECTION OF HUMAN SUBJECTS. http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm#46.101. November 17, 2005.
  • Elgesem, Dag. “What Is Special about the Ethical Issues in Online Research?” Ethics and Information Technology, Volume 4, Number 3, 2002, pp. 195-203.
  • Ess, Charles. “Introduction,” Ethics and Information Technology, Volume 4, Number 3, 2002, pp. 177-188.
  • Nissenbaum, Helen. “Toward an Approach to Privacy in Public: Challenges of Information Technology,” Ethics and Behavior, 7(3), 1997, pp. 207-219.
  • Nissenbaum, Helen. “Protecting Privacy in an Information Age: The Problem of Privacy in Public,” Law and Philosophy, 17, 1998, pp. 559-596.
  • Nissenbaum, Helen. “Privacy as Contextual Integrity,” Washington Law Review, Volume 79, Number 1, 2004, pp. 101-139.
  • Rachels, James. “Why Privacy Is Important,” Philosophy and Public Affairs, 4, 1975, pp. 323-333.
  • Scanlon, Michael. “Informational Privacy and Moral Values,” Information Technology, Volume 3, 2001, pp. 3-12.
  • Schoeman, F. (ed.). Philosophical Dimensions of Privacy: An Anthology. Cambridge: Cambridge University Press, 1984.
  • Walther, Joseph B. “Research Ethics in Internet-Enabled Research: Human Subjects Issues and Methodological Myopia,” Ethics and Information Technology, Volume 4, Number 3, 2002, pp. 205-216.

This case is more about the professional responsibilities of health professionals, particularly their obligation to safeguard the confidentiality of medical and health information, than about issues in research ethics. However, the case does suggest some issues for research ethics in the private sector.

Suppose a private corporation decides to conduct research to determine what it can do to improve employees' health as a means of enhancing the corporation's productivity. Imagine, for example, that a company surveys its employees to determine whether its workforce is experiencing significant sleep deprivation. The company might intend to use such information to plan educational programs for employees on the importance of adequate sleep. Or, to take quite a different example, perhaps the corporation conducts marketing research involving human subjects to determine interest in a product line it is considering developing.

In general, scientific research conducted by private corporations is not legally subject to federal guidelines for research ethics. The exceptions include instances in which the corporation uses federal funds in the research; is developing drugs, which must pass FDA guidelines; or hires outside researchers who are themselves subject to federal guidelines. If the researchers are interested in publishing their results in certain journals, New England Journal of Medicine, for example, they may also be required to show the research was conducted ethically under guidelines comparable to the federal guidelines (also referred to as the Common Rule) in order for the paper to be accepted for publication.

Even if the company is not subject to federal research guidelines, however, one can certainly argue that such research guidelines are morally obligatory, even if not legally required. The moral arguments for the Common Rule still apply; they are not predicated on the guidelines being required by law. Discussing a case such as this is a useful antidote for those who might think that legal compliance and moral obligation are one and the same: that if there is no legal requirement to follow research guidelines, then there is no moral obligation to conduct research in accordance with accepted moral guidelines.

So, for example, a company would be morally obliged to obtain informed and voluntary consent from its employees in order to conduct research on them. Obtaining voluntary consent might be tricky, since workers are in a vulnerable position when their company "invites" them to participate in a research project. The company would also be morally obliged to follow all other relevant ethical guidelines for conducting research on human subjects. The exact content of those guidelines may differ somewhat from those in the natural sciences and may be more akin to those accepted in social science research on human subjects.

In this case, the human subjects may have given informed consent to participate in the study. It is not so obvious that their consent is voluntary, however. The subjects are HIV+ adults. In exchange for participating in the program for two years, they are provided free medical care, psychological counseling and $50. Those are very powerful incentives for members of this population to participate in the study.

John must decide what to do when he learns that one of his subjects is having unprotected sex with a partner who is unaware of her HIV status and that she has no plans to inform her partner. Several alternatives are open to John: 1) He can do nothing and simply continue his research. 2) He can try very hard to convince the subject to inform her partner. 3) He can alert the subject's partner that he/she may be at risk for AIDS even if that means disclosing the subject's HIV status. 4) He may be able to alert the partner that he/she may be at risk for AIDS without disclosing the subject's HIV status (e.g., by having a third party inform the partner). Of course, there are other intermediate steps he may take that would lead to one of these outcomes.

It is worth noting that this situation could easily have been anticipated before the study began. There may be alternatives in the design of the study, particularly with respect to the informed consent process, which might avoid the issue in the first place. More about that later.

The stakeholders in this case include 1) the subject; 2) the subject's partner; 3) society, which may be benefited or harmed depending on whether John's action in dealing with the knowledge of this subject's behavior contributes to the spread or control of AIDS; 4) the HIV+ community and the general society, which may benefit from the results of John's study; 5) the scientific research community and the professional psychological community, whose reputation and capacity to do research may be affected by how John conducts himself.

In New York, unlike some other states, John is not required by law to warn the subject's partner. It is not clear whether he is prohibited by law from disclosing her status to the partner. The provisions of the APA Ethics Code may give conflicting directions by 1) requiring him to respect the autonomy, confidentiality and privacy of his subject, but also 2) allowing him to breach confidentiality for a valid purpose such as to protect others from harm and indeed 3) recognizing a duty to minimize foreseeable harm. One question this review of the APA code raises is whether the subject, in the process of giving her informed consent, has been made aware that John is subject to points 2) and 3) of the professional code. Since the code may provide for breaching her confidence, she may not be fully informed unless she is made aware of that fact.

There is a general point to be made with respect to the relative weight of an individual's moral obligations, professional moral obligations and the law. What happens when they conflict? In that event, which considerations override? In general, we recognize that legal obligations are not always trump. Indeed, moral considerations can be used to critique the law. A law may be immoral, and one might sometimes be morally justified in violating an immoral law on the basis of moral considerations. That is the standard justification for civil disobedience. Thus, state or federal statutes that prohibit John's disclosure of the HIV status of his subject might conflict with general moral obligation or specific professional obligations. In the case as stated, there do not appear to be any legal constraints on John's actions.

Some moral obligations are incurred in virtue of one's professional status. However, as a professional, one is subject not only to professional obligations but also to the constraints of ordinary morality. It does not follow that professional obligations automatically trump the moral obligations that one has as a human being. The special status of the professions is justified by appeal to general moral considerations such as the welfare of society. Hence one cannot simply invoke the status of the profession to justify overriding such general moral considerations. So, for example, merely because one is a researcher, one is not justified in violating society's general strictures against lying in order to carry out research that involves lying to subjects. In the event of a conflict between general and professional moral obligations, it does not necessarily follow that one's status as a professional excuses one from one's general moral obligations as a human being, or that professional obligations override. The burden of proof is on those who would argue that a particular professional obligation overrides general moral obligations. For general discussion of this issue, see Michael Bayles, Professional Ethics, 2d ed. (Belmont, Calif.: Wadsworth Publishing Company, 1998), Chapters 1-2.

Consider John's situation. He is now in possession of information that a particular, identified person whom he has the capacity to contact is at risk for contracting a fatal disease. Should he simply do nothing and continue his research?

What would we say of an ordinary person not in a professional relation, who is in possession of such information? Conventional morality would argue that persons who are aware of a life-threatening danger to others and who can warn the others without serious harm to themselves, ought to do so. That is so even when there are special relations to the person who presents the threat. Imagine a parent who has been told in confidence by his daughter that she is HIV+, that her husband does not know and that she will not tell him. If it is clear that she refuses to tell him, would we not say that the parent would be morally culpable if he failed to warn the husband that he was at risk? What if the father had promised the daughter never to reveal anything she told him in confidence? Would that fact alter our assessment? Why should the autonomy and confidentiality of the daughter or the value of his promise be viewed as outweighing the life of the husband? Would we really say that the possibility of damaged trust between father and daughter justifies allowing the harm to the husband?

What changes in the moral landscape if we consider John's situation instead of the father's? One might argue that as a professional and a researcher, John has promised to maintain the confidentiality of his sources. Why should his promise of confidentiality trump the welfare of the partner? It is true that if he violates his subject's confidence in this instance he has done her harm. Is that harm greater than the harm to her partner when John knows the partner is at risk and may be the only one able to do something about it? Why should his promise of confidentiality trump concern for the welfare of a partner who may contract (or perhaps already has contracted) a fatal disease? It is not likely that the knowledge John gains by having this subject in the study will outweigh the harm done to the subject's partner if John does not advise him or her of the situation. Since this situation is easily anticipated in doing this study, one might ask whether John was morally justified in offering such a promise in the first place. If not, does that lessen his obligation to keep this promise?

One might argue that unless he honors his confidentiality agreement with this subject, John may not be able to continue the study. That is not likely. There are plenty of others in the study who apparently do not fall into the same situation as this subject. It is unlikely that breaching her confidence will cause the others to bolt unless they are all in her circumstances. John may be required to contact all the others in the study and explain what has happened. Perhaps he should indicate to the other subjects that he is morally obliged to report all similar cases, reassure them that confidentiality will be maintained otherwise and give them a second opportunity to affirm their consent.

One might argue that unless John promises all the participants absolute confidentiality (not qualified confidentiality), at the beginning of the study, he will not be able to gain their consent and hence will not be able to carry out the study. Is that really so? Suppose that he indicates that if he learns they are having unprotected sex with identifiable partners, he will be obliged to alert the partners, if they themselves do not. He pledges to do that in a manner that protects their identity, if possible. Who would consent to be in his study? Those who would still consent may well include: 1) those who are not currently in a relationship; 2) those who have told their partners; 3) those who have not told their partners but would be glad if someone else did. Presumably, some subjects such as the one in this case would not join. But would that really impair the quality of the study? If potential participants were informed of this condition up front, some might bring themselves to inform their partners just so they could participate in the study and get the benefits. It is not obvious that constraining the population in this way would seriously impair his study.

If the restriction on unlimited confidentiality does reduce the amount of knowledge one could glean from a study, does that automatically mean we should consider the loss of knowledge trump and opt for the study with promise of unqualified confidentiality? Not necessarily.

Consider the long-term cost to research if it becomes known that a researcher sat on this information and the partner -- and perhaps other partners in similar situations -- contracted AIDS and died. In that case, one has the irony of a person doing research to prevent illness and death from AIDS but failing to act on information that would prevent actual persons, who do not know they are at risk, from contracting the disease. If that were to become known, would society's trust of researchers increase or decrease? Would potential subjects be more or less likely to participate in research?

This case has parallels, although not exact ones, to the Tuskegee syphilis study. It is true that in that case both subjects and their partners were kept in the dark regarding their condition and prevented from seeking known effective treatment (as well as preventive measures, in the case of sexual partners). What may be relevant to this case is that the loss of credibility of researchers as a result of the Tuskegee case has turned out to be far greater and to have far longer-lasting effects than any of the researchers imagined at the time. For a discussion of the long-term impact of the Tuskegee study, see James H. Jones, Bad Blood: The Tuskegee Syphilis Experiment (New York: The Free Press, 1993), especially Chapters 13 and 14.

One might argue that a research practice of not promising unqualified confidentiality would render a great deal of psychological research impossible to carry out. It seems to me that this claim must be evaluated on a case-by-case basis.

What, then, should John do? All things considered, it seems reasonable to expect that once John becomes aware of the situation, he cannot ignore it. It would be reasonable first to vigorously counsel with the subject and try to persuade her to inform her partner. If it is clear that she will not do so, then John should inform the partner and, as indicated, inform the rest of the subjects in his study of his modified practice of limited confidentiality. In future studies, John and others should build these considerations into the informed consent process used when subjects are enrolled in the study.

Commentary On

In this case, participants in an obesity study agreed to provide blood and DNA samples given their understanding of the nature and purpose of the study: that the samples and data would be anonymized, that the samples would be used exclusively for this study, and that subjects could withdraw at any time.

The case statement is ambiguous regarding whether the subjects further explicitly agreed that, if their samples were to be used in unrelated research, the individual participants must be recontacted and provide a second consent specific to the new study, or whether the case author simply assumes that this is an implication of the previously stated elements of the agreement.

If Renee has her way, the subjects in the first research project will be essentially cast as control subjects in a completely different research project. It is important to remember that control subjects in research are still research subjects and the same ethical safeguards should apply to them as to any other subjects. In some cases, although perhaps not this one, the control subjects are subject to greater dangers than the experimental subjects.

It is generally understood that research subjects' informed and voluntary consent can neither be informed nor voluntary if the subjects do not have some understanding of the research project in which they are participating. Since the subjects would not even be aware of the second project, they could not be assumed to understand the study and hence could not be assumed to give informed consent. Nor could they have the option of withdrawing from a project in which they did not even know they were participating. Renee's proposed procedure thus strips them of the minimal ethical safeguards for research subjects. She fails to treat them with the respect due human participants in research. That is also an argument for claiming that there is at least an implicit agreement in the original informed consent that if their samples are to be used in unrelated research, the individual participants must be recontacted and provide a second consent specific to the new study.

The action of enrolling them in the second study also violates the terms of their original agreement, namely, that their samples will be used exclusively for the original study. If we interpret the case to indicate there was an explicit agreement that if their samples are to be used in unrelated research, the individual participants must be recontacted and provide a second consent specific to the new study, then Renee's action is a violation of their original agreement.

The data are anonymous because anonymity was one of the conditions of giving consent. The irony here is that Renee seems to assume that since the data are anonymous, that justifies the use of data without consent.

It may well be true that it would be more convenient for Renee to use these samples without the subjects' consent rather than to go through the procedure of contacting all the subjects (which may already be a violation of their confidentiality) or do as Jim suggests and obtain anonymous samples from a DNA bank. That it is most convenient does not morally justify doing it, however. There are higher moral considerations than convenience, in life in general, and in research science, in particular. That is one of the things that some scientists in Nazi Germany and in the Tuskegee syphilis study failed to understand and that has led to explicit guidelines on research on human subjects.

For all the above reasons, it is irrelevant if the consent forms are not explicit regarding the use of samples; it is irrelevant that Renee does not intend to "study" the samples; it is irrelevant that she and Jim are in the same lab.

Jim has the benefit of these data to conduct his research as a result of the subjects' agreement. He correctly perceives that he has a responsibility to protect the subjects from anyone who would violate that agreement. It is a responsibility that arises from the moral duty of anyone who gives a promise. That duty is not overridden by considerations of convenience. The moral obligation is even more stringent since the violation of this duty harms not only the subjects but, as a practice, could harm the functioning of the scientific community and all members of society who benefit from such research. His study coordinator is bound by the same duty. Jim acts wisely in suggesting an appropriate alternate action to Renee.

Commentary On

The primary issue in this case is the moral justifiability of using DNA and tissue samples from one study in another unrelated study without the donors' knowledge or consent. It is worth making the point that the moral justifiability of such action is not settled by the fact that Dr. Thomas thinks it is acceptable or that the local IRB may have given its approval. As the Tuskegee syphilis study illustrates, the moral justifiability of a practice in research is not settled by the opinion of the research director of a study or a review board.

Neither is the law a sufficient guide to the moral justifiability of the secondary use of DNA and tissue samples without donors' knowledge or consent. Whether a state determines that it is legal to use DNA and tissue samples in secondary studies without the donors' permission (as does California in some circumstances) or whether the state rules that such use is illegal (as does Oregon), that does not settle the question of whether such activity is morally justified. The domains of law and morality are not necessarily congruent.

This case is similar to another case in this volume. (See "Share and Share Alike," p. 131.) It differs in that Dr. Thomas and perhaps the local IRB have approved the use of DNA and tissue samples in an additional, unrelated study without the donors' informed, voluntary consent. The case also differs in that the consent form in this case does not explicitly specify that the samples will be used only in the original study.

In this case, Fan (and apparently Thomas) plan to use the coded DNA samples as a control in a completely unrelated study. If the donors' confidentiality were breached, they could suffer some harm. There is a tendency to think that the use of the subjects' material as a control provides minimal risk of harm to the donors and hence would be acceptable. That, of course, is not always the case. Suppose, however, for the sake of this discussion, that the element of risk to the donors is minimal. Would the researchers be justified in using this material in the second study, without the donors' informed, voluntary consent?

The informed consent doctrine rests on several different moral principles, two of which -- beneficence and respect for persons -- are especially relevant here. Informed consent allows subjects to protect themselves from harms from a research activity. But even if there is minimal risk of harm to the subject in a research project, there is another moral justification: Obtaining informed consent is also a recognition of a basic respect for persons and their capacity for free choice; in particular, it respects their right to choose whether to cooperate with a particular scientific experiment.

This moral consideration is independent of considerations of the experiment's risks and benefits. Thus it is morally unjustified for the researcher or the IRB to argue that since the research exposes subjects only to minimal risk, it is acceptable to use their DNA samples as controls in an unrelated study without their informed, voluntary consent. The stringency of the obligation to show respect for the choices of human subjects is not lessened by minimal risk to the subjects in a second study or by the fact that it is convenient for the researchers to use their samples.

The issue could have been avoided if the IRB had done its job and insisted that the protocol of the original research study address the issue of the future use of tissue samples, including giving subjects the opportunity to consent or refuse consent to the use of their tissue samples in any secondary research or indicating that in the event of any secondary studies they would be recontacted and given the opportunity to consent or refuse consent to the use of their tissues in such studies. Such consent may not be morally adequate, but it would at least be a necessary step in respecting the persons serving as subjects in the original research.

Commentary On

The debate over the use of animals in scientific research, particularly when pain is inflicted, proceeds at both a general level and at the level of particular cases. (For discussions of some of the broader moral considerations in the use of animals in research see "The Gladiator Sparrow: Ethical Issues in Behavioral Research on Captive Populations of Wild Animals," pp 32-44; "Counting Sheep: Ethical Problems in Animal Research" pp. 82-96, and "Changing the Subject," pp. 97-106, in Research Ethics: Cases and Commentaries, Volume 2 [1998]).

Even if all were to agree that research on animals is sometimes justified, that would not settle the question of whether it would be justified in this case. We all recognize that there is no moral ground for gratuitously inflicting pain on animals. Thus in research experiments that involve inflicting pain on animals, the burden of proof must be on those proposing the experiment to show that the animals' suffering is somehow outweighed by the benefits of the experiment. A written scientific justification for any painful or distressing procedure to animals that cannot be relieved or minimized must be included in the Animal Study Protocol submitted to the Institutional Animal Care and Use Committee (IACUC).

Issues

In these comments, I will focus on this particular case, which raises a number of ethical issues. 1) Should the experiments have been done at all? What are the relative merits of the researcher's viewpoint and that of someone from the nonscientific community? 2) Even if the experiment is initially justified, is there a point at which it ought to be terminated, and if so, at what point? Related to this question is the issue of whether standards of certainty should be lowered when achieving that certainty involves inflicting pain on animals. 3) Should the protocol approved by the IACUC include points that trigger an evaluation of the decision to continue the experiment?

Aside from the experiment itself, the case raises issues about the researcher's interaction with his graduate student. 4) Is there adequate dialogue about experimentation on the animals and an atmosphere that encourages open dialogue? 5) Should a lab that routinely experiments on animals have substantive introductory discussions with all entering graduate students on the ethical issues of research on animals in order to create an environment of ongoing dialogue in which students are free to raise issues?

The case

Should the IACUC have approved this experiment? Eric wants to test a drug that might have potential therapeutic efficacy for humans, for example, to relieve pain in inflammatory bowel disease. Thus, the research appears to be on a topic of significance to human welfare. What is at issue is whether that significance outweighs the suffering of the animals in the experiment.

However significant the problem being investigated, one issue that is always relevant in research that inflicts pain on animals is whether or not a particular experiment is likely to yield useful information. It is troubling in this case that, after the experiment is approved, Eric's graduate student Michael does further research and finds data (of which Eric is apparently unaware) that suggest this drug "would not be an effective therapeutic agent against visceral pain and inflammation using the rodent model." Did Eric do an adequate literature search before proposing this experiment? Would the IACUC have approved this proposal had they been aware of Michael's findings? It is possible that the data Michael found were inconclusive and that in Eric's considered opinion, the data were not sufficient to discount the value of the investigation. Nevertheless, it appears that Eric was unaware of the data going into the experiment, and he ought to have a reason for discounting the data. Absent that, it does seem to weaken the case for doing the experiment.

It is also troubling that Michael finds an "alternate model of visceral nociception that is much less painful to the animal" (which again is something of which Eric is unaware or at least did not consider in developing his original protocol). Eric contemplates using the new model, but he decides to use the original model since the alternate is not widely accepted. Eric's judgment may be right, but it does raise questions about how thoroughly he researched the issue before designing the experiment and presenting his protocol to the IACUC.

Finally, the protocol specifies an intra-animal study where the same animals are used repeatedly as opposed to a between-animal study, which uses each animal only once, hence minimizing pain to any one animal but using more animals. One advantage of the former design is a reduction in the amount of variability in results since fewer animals are used. It may be that Eric is justified in using a model that gives results less subject to variability. This model may increase the certainty of the results, but at the cost of more pain to individual animals.

Suppose one grants that Eric's original design was justified. As the testing progresses, Michael gets inconclusive results. Eric assumes there is a procedural error in the experiment and asks for a repeat of the experiment. Michael again finds inconclusive results. Eric asks for the experiment to be repeated again.

Eric seems to be in denial about the findings of the experiment. One wonders how long he will continue to repeat the experiment if the results continue to be inconclusive. All science must deal with levels of certainty of results. In principle, the more one repeats an experiment, the more confidence one can have in the results. But at what cost should that certainty be purchased? There is a difference between the cost of increasing levels of certainty gained by repeating an experiment in inorganic chemistry, for example, and the cost of increasing levels of certainty when the experiment involves causing considerable pain to animals. The burden of proof is higher for justifying the value of incremental increases in levels of certainty of results when pain to animals is involved.

It is important to realize that the justification for a protocol of an experiment involving pain to animals at the beginning of an experiment may weaken as the results of the experiment come in. When one combines the literature results, which suggest this drug may not be efficacious, with the results of the first two trials, the justification for continuing the experiment may be weaker than the initial justification for the experiment. In an experiment involving infliction of pain on animals, it would be preferable to have some guidelines to indicate when it is no longer appropriate to continue the experiment.

Researcher-graduate student interaction

Michael's obvious discomfort with the experiment and Eric's interaction with him suggest other concerns. If Eric is requesting a repeat of the experiment because he suspects Michael is not correctly performing the experiment, then, in light of the animal suffering involved, he has a responsibility to review procedures with Michael and to monitor Michael's work to ensure yet a third trial is not required because of errors on Michael's part. If there is nothing wrong with Michael's execution, then Eric's request for repeated work appears to be inappropriate pressure on Michael to get positive results. Michael may need some vehicle to raise the issue with Eric and, if that fails, access to someone else with whom to discuss the issue.

If this is a lab that routinely engages in experimentation with animals, it may be desirable to have initial faculty-graduate student discussions as graduate students join the laboratory, regarding the moral issues involved in research on animals as well as reporting new research on models for animal research and guidelines for research on animals. This setting would be appropriate for informing students of the proper procedures to follow if they have questions about the justification of particular experiments or experimental procedures involving animals. This forum would allow issues to be raised in a setting conducive to open discussion and not in the more threatening context of a professor's particular experiment. In this case, Michael would have had an opportunity to understand and evaluate the justification of various animal models before engaging in this specific research. Such a practice may also open lines of communication when students have a particular concern with a professor's experiment; in this case, that may have made it more comfortable for Michael to raise the issues with Eric.

Nature of the study

It is not clear from the case description whether this project was initially designed as a longitudinal study with expected follow-up research (or at least data collection) or whether it was simply intended to allow for the possibility of follow-up, perhaps to clarify information or to do further research. What does seem clear, given that the initial consent form mentioned the possibility of re-contact, is that the follow-up contact was not simply someone's afterthought several years after the study was completed. Nevertheless, the decision to conduct this particular follow-up appears to have been made years after the end of the study.

It is also not clear if this study was therapeutic or nontherapeutic. If it was a therapeutic study, there might be reasons to follow up with subjects for their benefit. I shall assume for the purposes of the discussion that it was a nontherapeutic study and hence the re-contact of subjects was not for their welfare.

Ethical Issues

The case raises a number of issues. Is the mere contacting of subjects years after a study is completed an ethical issue, and does such an action require obtaining informed consent? Is the means of relocating subjects years after the study is complete an ethical issue, and does that activity require informed consent? Should such issues have been addressed in the original study protocol? Having failed to address such issues adequately in the initial protocol, what ought to be done at later stages?

What we do know in this case is that the attempt to re-contact former subjects did have consequences for them. Some of the participants were contacted without their knowledge of the process or consent. Some participants may have been re-contacted against their wishes and without their consent. Some experienced an invasion of their privacy. Information on subjects' credit ratings was apparently obtained by the study manager and perhaps shared with others. A smaller group may have had their credit ratings harmed without their knowledge or consent. Merely participating in the original study made them vulnerable to harms inflicted by the researcher that had nothing to do with the content of the study.

Is the mere contacting of study participants years after the study has been completed an ethical issue? Some subjects may know from the outset that they do not wish to be contacted after the end of a study. Mere participation in a study does not mean one has surrendered any right to be left alone or to have one's privacy respected in subsequent years. Much can happen to subjects in the interval after a study is completed. Depending on the study, there may be a variety of reasons subjects may wish not to be contacted. For example, they may not want their current intimate contacts to know they had participated in the study. They may simply prefer not to be disturbed, and that itself is a moral reason for them not to be contacted. For these reasons, researchers are morally required to obtain informed consent from such subjects for future contact.

Is the means of locating persons, even those who have given consent to be re-contacted, an ethical issue? Clearly it is. Would anyone consent to allow a researcher to use a credit bureau to track one down, particularly if that negatively affects one's credit rating? Surely not. It would not occur to most of us that this was even a possibility. But the possibility does underline the fact that there are limits to what we would agree to in the procedures used to track us down, even if we give consent to be re-contacted.

Without knowledge of the nature of the study, the need for follow-up or any therapeutic value to the participants, it is difficult to fully assess the moral seriousness of the actions in this case or to suggest what actions ought to be taken at later stages of the case. However, it is sometimes better to exercise preventive ethics, taking steps to keep ethical problems from arising rather than trying to solve them after the fact. Adequate informed consent procedures established during the study could have gone a long way toward avoiding the ethical issues raised in this case.

Informed Consent

Since the possibility of re-contacting subjects was anticipated from the beginning of the study, the investigators should have proposed a much more carefully thought out informed consent procedure to ensure that participants clearly gave their informed consent to be re-contacted.

Although the form mentioned the possibility of re-contact, it is not clear how explicit the request for permission to re-contact was, nor whether there was a blurring of the distinction between 1) consent to participate in this study and 2) consent to be contacted in the indefinite future for some unspecified purpose. It may well be that consent to participate in the study ought to be separated from consent to be re-contacted sometime in the indefinite future for some unspecified purpose. Participants may have been willing to participate but not willing to consent to re-contact. Some did not provide contact information, and it is not clear whether that should be interpreted as agreeing to participate in the current study but refusing to be re-contacted.

If the re-contact is actually the initiation of a new study (which seems less likely), then it is not clear that it is appropriate to combine informed consent to participate in the original study with consent to participate sometime in the future in a study of unspecified nature. The informed consent information presumably supplied details only for this study and, even if adequate to obtain informed consent for this study, could not be adequate for consent to an unspecified later study. Given these uncertainties, it is not at all clear that the informed consent for future contact was actually adequate for any of the subjects. It is even less clear that it was adequate for those who gave no future contact information.

If we assume the protocol should have included obtaining a clear and independent informed consent to re-contact, what would be reasonable for the researchers to say, regarding the method of contact, in order for the consent to be informed?

If one is contemplating the task of contacting subjects three years after completion of the original study, it should be obvious to the researchers from the start that some systematic procedure to locate subjects will need to be followed. (If the size of the pool is such that data would be useful only if virtually everyone in the study can be contacted, then it may not be wise to plan on re-contact unless an ethical means of successfully contacting all subjects is possible.) Obviously, the method outlined in the protocol was inadequate: Asking participants to list next of kin or other contacts did not produce a complete initial list of contacts.

One possibility would be to institute a tracking system to update contact information on a regular basis, every six months for example. Each update would also constitute renewed permission to maintain contact.

However it is to be done, if the researchers expected to get informed consent for re-contacting subjects, they had an obligation to anticipate the difficulty of locating subjects and to design a protocol accordingly. It does not take a rocket scientist to recognize that there are ethical, acceptable ways of locating people and unethical, unacceptable ones. Obviously, technology is changing rapidly, and there may be ways of violating privacy in order to find contact information that were not thought of at the original point of consent. Nevertheless, the informed consent should include some assurance that the means used to find participants will stop short of violating their privacy or other interests.

Given all this, the researchers had an obligation to devise an informed consent form that would inform subjects of the purposes of re-contacting them in the future and the procedures that would be used to locate them. If the researchers failed to provide that in the protocol, the IRB had an obligation to raise the issue. Neither of them did so in this case.

Later stages of the case

Having failed to obtain an adequate informed consent for re-contact at the time of the study, what, if anything, should be done three years later when the researcher decides to do a follow-up study?

The researchers might use the contact information provided by subjects in the first place. There is, in the original consent form, at least a fig leaf of informed consent to be re-contacted. If that strategy does not yield enough subjects for the study, the researchers should return to the IRB with a proposal for locating subjects with whom contact has been lost. It is the researchers' responsibility to obtain that approval from the IRB. It is the obligation of the IRB to see that the methods proposed to locate subjects do not violate their interests. The researchers are also responsible for overseeing the actions of the study manager. It is clearly not the responsibility of the study manager to devise procedures on his own.

If no ethical means can be developed for locating subjects, then the study using these subjects should be abandoned. It may be that failure to find and contact the subjects means this study cannot go forward, and some useful or important information may be lost. It is not the case that the importance of the work automatically outweighs the means used to find the subjects merely because the technological means of locating the subjects are at hand. Poor initial planning and design by the PI are the reasons for the lost knowledge. That cannot be an excuse for the later unethical treatment of subjects.

Should Clarisse continue this experiment? The central moral issue in considering that question is whether the birds' interests should be taken into account in evaluating whether or how to conduct the experiment. If the answer is "No, their interests should not be taken into account," then, assuming the experiment is well designed and likely to yield useful results and hence is a good use of resources, there is no reason not to have undertaken it in the first place or not to continue it. If the answer is "Yes, the birds' interests should be taken into account," then one must determine what their interests are and what bearing consideration of those interests should have on a decision to undertake, modify or continue the experiment.

Moral status of birds

To ask about the moral relevance of their interests is to ask about the moral status of birds. Are they the sort of entities that have moral standing, that is, entities that at least have interests of some moral importance and perhaps moral rights, or are they rather like rocks, which have no interests and no moral status? (For an excellent introductory discussion of the various ethical theories that can be brought to bear on the issue of the moral status of animals, see F. Barbara Orlans, Tom L. Beauchamp, Rebecca Dresser, David B. Morton and John P. Gluck, The Human Use of Animals: Case Studies in Ethical Choice [New York: Oxford University Press, 1998], Chapter 1.) Relevant to the question of the birds' status is whether birds can feel pain or suffering. If they cannot experience pain or suffering, then one central objection to this experiment is removed. The birds cannot be harmed in this sense, so perhaps there is nothing wrong with the experiment. Historically, some (the philosopher Descartes, for example) have held that birds and other animals do not experience pain. That view is not widely held in the scientific community, although there may be debate on the extent of pain and kind of suffering such animals experience. I will not focus on this view.

Even if one grants that the birds in this case experience pain and suffering during the experiment or later as a result of the experiment, one might take the position that such a fact is morally irrelevant to whether or not to conduct the experiment. One might hold, as did the philosopher Kant, that "animals are to be regarded as man's instruments, as means to [man's] ends." (Immanuel Kant, Foundations of the Metaphysics of Morals [1785], trans. Lewis White Beck [Indianapolis: Bobbs-Merrill Company, 1959], p. 47.) One can admit that the birds experience some forms of pain and suffering, but argue that their pain is of no moral relevance whatsoever to the question of whether the experiment is morally permissible. Some in the scientific community may hold this position. The fact that we have and observe the Animal Welfare Act, however, indicates that most people agree that research animals' pain and suffering is somehow a relevant factor. Although this position raises important ethical and philosophical issues, since the locus of disagreement tends to be elsewhere, I will not focus on this view, either.

At the other end of the spectrum, some hold that both humans and other animals (at least animals with a sufficiently rich psychological life) have inherent value, that is, they have "morally significant value in themselves, apart from their possible usefulness to others and independently of the . . . overall status of their mental life." (Tom Regan, "Treatment of Animals," in Lawrence C. Becker, ed., Encyclopedia of Ethics [New York: Garland Publishing Company, 1992], p. 44. This article includes a good introductory bibliography on the ethical issues discussed in this commentary.) This view, developed by Tom Regan and called "inherentism," holds that at least some animals have the same moral status as humans, provided they are capable of having beliefs and desires and acting intentionally. (Regan, "Treatment of Animals," p. 45.) Consequently, they are not to be used merely as means for human ends. If this view is correct, there is no justification for using these sorts of animals in any scientific research. If birds fall in the category of animals with inherent value, there ought to be no balancing of the interests of the birds against those of humans. Clarisse should not have undertaken the experiment and should discontinue it immediately. As Regan himself admits, however, there are difficulties in determining which animals are psychologically rich enough to fall in this category. (Ibid.)

Even if one does not accept the inherentist position, a more common position holds that birds and other animals have some sort of moral status because they are sentient creatures, capable of experiencing pain and suffering. Sentience is the feature relevant to determining moral status. Humans have at least a prima facie duty not to cause any sentient being pain. The pain of any sentient creature counts, and the pain or pleasure of humans does not automatically override that of other animals. That is the utilitarians' view.

If one accepts this position on sentient beings, one still might argue that although the birds in this experiment experience pain, suffering, and death, those facts are irrelevant to determining what to do in this particular case. That is not because the birds' pain, suffering or death is morally irrelevant but rather because their pain and suffering was not causally connected to the experiment. Gladiator sparrows are aggressive. They attack and perhaps kill each other in the wild. If the same aggressive wounding and killing goes on in the wild at the same level as Clarisse observed in her cage, then one might argue that she had here a kind of natural laboratory. She was simply passively observing, in a convenient forum, what would have happened to these birds in the wild.

Indeed, Clarisse belatedly discovers from other researchers and early reports in the literature that gladiator sparrows exhibit in the wild the intensity of aggression she observes in her cage. If she were able to determine by literature search and field observation that the birds exhibited a level of aggression and outcomes that closely match what she later observed with her caged birds, then one might argue that she is simply a passive observer and that the experiment contributes nothing to the pain and suffering of the birds. Suppose she were able to determine by field observation, in advance of the experiment, that for every 30 birds in the wild, six to twelve would be killed and a number seriously injured during the first few weeks of the mating season. One might then argue that her intervention caused no harm and thus that it was acceptable to carry out and continue the experiment. On the other hand, if the aggression observed in the wild matches that in the cage but does not result in such serious injury or death, because the birds under attack are not imprisoned and can escape, then of course Clarisse's experiment has contributed to the pain and suffering of the animals.

Even if the levels of the birds' injuries and deaths in the wild match the outcomes in the cage, there are other considerations. The notion of the animal's "suffering" in this context typically includes tension, anxiety, stress, exhaustion, and fear. Do the acts of aggression in the cage create more of this sort of suffering since the birds under attack cannot escape? Do all the caged birds experience suffering in the capture process or when they discover they are trapped in an enclosure? If the aggression is learned behavior, will birds released from the cage have learned a higher level of aggression than their wild counterparts, and will they pass that on to the wild population, thereby raising the level of pain and suffering in the wild population? If any of these outcomes occur, then Clarisse's experiment has increased the birds' pain and suffering and her experiment is not merely a natural lab in which their pain and suffering is morally irrelevant.

One can of course debate whether birds are capable of emotions such as fear and anxiety or capable in general of experiencing the emotional pain of the events in this case. Some would argue that the basis for attribution of emotions in animals is as good as the basis for the attribution of pain. (Orlans et al., Human Use of Animals, p. 19.) Some would argue that we are as certain that some animals have emotions as we are that other people have emotions: "We are as sure that a bear is angry as that a spouse is angry." (Ibid., p. 8.)

Even if we grant, as it may be reasonable to assume, that the birds in this case experience pain and suffering in the cage that they would not have experienced in the wild, some might argue that the experiment was nevertheless morally justified in principle: the birds' pain and distress, although morally relevant, is somehow outweighed by the benefits of the experiment.

Assessing the benefits of this experiment

The benefits of conducting the experiment may include the value of the knowledge we expect to gain as well as benefits to other stakeholders. Assume for the moment that the experiment is well designed and executed so that we gain maximal knowledge. The objective appears to be to determine whether gladiator sparrows' aggressive behavior is largely innate or whether significant environmental factors influence the development of aggressive behavior in the species. It is not clear whether the findings of such a study are expected to generalize to aggressive behavior in all nonhuman animals or to aggressiveness in humans. One might argue that the more general the implications of the findings, the more valuable the knowledge gained in the experiment.

Suppose that the experiment indeed gives us some additional understanding of whether gladiator sparrows' aggression is largely innate or learned. What is the benefit of that knowledge? 1) This knowledge will not have any direct instrumental benefit for the welfare of the birds in the study. It is not on the order of a therapeutic experiment to help those specific birds. Given that there are no benefits to the subjects of the experiment, it is clear that the benefit to the birds does not outweigh the pain they experience. 2) It may have instrumental value for promoting the welfare of the population or species if we learn something about their behavior that could be used in conservation strategies to protect this species. It is not at all clear that this is an expected outcome of the experiment; hence it is not clear that one could argue that these birds' suffering is for the greater good of their population or species. From the birds' perspective, their burdens certainly outweigh the experiment's benefits for them individually or for the species. 3) Serious human interests are not at stake in this experiment in the sense in which animal experimentation designed to find a cure for a human disease serves a serious interest. 4) The experiment appears to have some value in yielding information intrinsically interesting to humans; it satisfies our desire to know and understand the world around us. Clarisse assumes the experiment will have relevance beyond the question of whether aggression is innate in this particular species. She is challenging a prevailing view that aggression is innate in animals. If she is correct, that finding could be significant for our understanding of the issue. This experiment might contribute to our understanding of aggression in humans, although one must always ask whether bird behavior is close enough to human behavior to be relevant.

The intellectual benefits to humans lie in the broad advancement of scientific understanding of aggression in animals. Aside from any knowledge we may gain, what are the experiment's other benefits? It provides a research project for a graduate student. It may be part of a grant project that generates income for the department. It may provide a research program or career opportunities for the graduate student and her professors. In this experiment, the humans reap all the benefits; the birds bear all the burdens.

Do such considerations justify continuing this experiment? We recognize that it is at least prima facie wrong to inflict pain and death on these birds. It would be wrong to kill them for our amusement or for trivial reasons. 1) That is why it would be wrong to wound and kill them in an experiment so badly designed that it yielded no knowledge. 2) That is why it would be wrong to kill and wound them in an experiment that yielded only trivial results. (Presumably the other personal benefits to researchers in cases 1 and 2 would not outweigh the harm to the birds.) 3) That is why it would be wrong to kill or wound them in an experiment that could alternatively be designed to use models or other means to obtain the same results. 4) That is why it would be wrong to use more than the minimum number of birds statistically required.

Even if all such considerations have been answered in this case (and, as the other commentator notes, it is not clear that all such considerations have been answered), should the study continue? The decision still comes down to weighing the knowledge gained against the birds' pain, suffering, and death that would not have occurred absent this experiment.

This call is perhaps closer than one might think at first. If considerations 1-4 are justifiable reasons for not conducting the experiment, it is not at all obvious that any significant knowledge automatically justifies conducting or continuing the experiment. Once one allows the moral significance of the birds' pain and suffering, one allows for the possibility that their pain and suffering could count for more than the value of the scientific knowledge gained by the experiment. There may be times when we might justifiably argue that some knowledge ought to be forgone or lost rather than inflict on animals the pain and suffering required by the experiment. Some ethologists have made that judgment in other experiments. (The ethologist Marc Bekoff gave up the study of predatory behavior of coyotes because of the suffering of the coyote prey in a captive arena. See Marc Bekoff and Dale Jamieson, "Reflective Ethology, Applied Philosophy and the Moral Status of Animals," Perspectives in Ethology, ed. P.P.G. Bateson and Peter H. Klopfer, vol. 9 [New York: Plenum Press, 1991], p. 26, note 20.)

It is important to recognize that we are weighing the birds' highest interests in not suffering or dying against humans' less pressing interests in extending their knowledge. The difficulty is in weighing this tradeoff and, of course, humans are doing the weighing. In this case, it does seem to be a very close call.

It is not clear from this case if findings from field observations before the experiment or from the experimental results at the end of Year 1 would be adequate to allow Clarisse to design models that could be used to test subjects' behavior without harming them or to identify reliable indicators of aggression while permitting intervention before actual aggression occurs. If the experiment could be carried on without the actual pain and suffering of aggressive attacks, then perhaps the suffering due to capture, captivity, and threat of aggression might reasonably be judged to be outweighed by the knowledge gained. The experiment or its continuation as so modified might be justified.

To Clarisse's credit, she did do the required literature research, although perhaps not enough of it. She did attempt to match the habitat setting. She did get the animal use committee's approval. She did intervene to protect injured birds. She did reassess the experiment before the end of the year in an effort to reassure herself that it was a "natural lab." She did consider a modified program for Year 2.

It is worth noting that the Institutional Animal Care and Use Committee approved the experiment. It is not clear whether the committee required or considered a detailed ethical justification for the use of these birds or gave that question the same level of consideration it gave to the study's design and theoretical importance. The burden for such considerations ought not to lie solely with Clarisse. That fact has implications for the training and sophistication in ethical thinking one ought to expect from such committees, as well as for universities' responsibility to ensure appropriate levels of ethical training for those committees.

A nontherapeutic experiment with minimum risk

In this case, it appears that the research is valuable and can be done only with the involvement of patients who are schizophrenic as research subjects. The research procedure involves having subjects "listen to auditory stimuli presented over headphones while their brain waves are recorded using noninvasive electroencephalographic (EEG) techniques." The participants are drawn from a pool that includes persons with a variety of mental disorders. Experimental subjects (patients with schizophrenia) and the control group (patients with dementia, bipolar disorder, and depression with psychotic features) will all undergo the same research procedures. The research is described as a nontherapeutic experiment with minimum risk. (For a comprehensive discussion of the issues raised in this case, see National Bioethics Advisory Commission [NBAC], "Research Involving Persons With Mental Disorders," Volume I, Report and Recommendations of the National Bioethics Advisory Commission, http://bioethics.gov/capacity, December 1998.)

Minimum Risk

Does this experiment indeed entail minimum risk? The criterion of minimum risk is itself contested. According to the Federal Policy for the Protection of Human Subjects, also known as the "Common Rule," a study involves minimal risk if "the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological exams or tests." (45 CFR 46.102(i), 1998.) However, as the National Bioethics Advisory Commission notes, "The need for sensitivity in the application of risk categories is especially great when persons with mental disorders are among the potential subjects of a study. For some persons with mental disorders, their limited ability to understand the rationale for a specific intervention could cause them more distress than it would someone who fully understood the intervention." (NBAC, "Research Involving Persons With Mental Disorders," Chapter Five, "Moving Ahead in Research Involving Persons With Mental Disorders: Summary and Recommendations," p. 8.) NBAC continues, "What may be a small inconvenience to ordinary persons may be highly disturbing to those with decisional impairments. Thus, for example, a diversion in routine can, for some dementia patients 'constitute real threats to needed order and stability, contribute to already high levels of frustration and confusion or result in a variety of health complications.'" (Ibid., Chapter Four, "The Assessment of Risk and Potential Benefit," p. 6.)

Is it so clear that the activity of listening to auditory stimuli over headphones for six hours while being wired up to EEG equipment "so researchers could read their thoughts" would not provoke some psychotic reaction in some subjects who are also patients? The case states the "procedure has not been reported to exacerbate participants' symptomatology." It is not clear whether this statement refers only to patients with schizophrenia or to patients with the entire range of disorders that might be represented in subjects.

The diagnostic interview itself may not be so harmless. The National Commission noted that subjects who are institutionalized with mental disorders may "react more severely than normal persons 'to routine medical or psychological exams.'" (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Research Involving Those Institutionalized as Mentally Infirm [Washington, D.C.: Department of Health, Education and Welfare, 1978], pp. 8-9.) One cannot be sure, but it is worth asking if it really is clear that being subjected to these procedures carries a minimal risk of provoking an adverse reaction in some of these subjects. That may depend on whether one accepts the Common Rule definition of risk as adequate for those with mental disorders.

Benefit

Although this experiment is nontherapeutic, it may offer a benefit to the subjects. The case suggests that the diagnostic procedure administered by the researcher is likely to produce a more accurate diagnosis than the hospital's preliminary diagnosis, given the hospital staff's limited time resources for diagnosis. If so, then the one benefit subjects could gain from participation is a more accurate diagnosis of their condition, particularly if there is a possibility of conflict with the treatment team's diagnosis.

Informed consent process

A central issue in this case is obtaining informed consent to participate in a nontherapeutic experiment from patients in a psychiatric ward. The issue of informed consent in this case is of special interest for several reasons. 1) The possible disorders of this population include dementia, schizophrenia, bipolar disorder, and depression with psychotic features. All of these disorders are recognized as placing a subject's decisional capacities at risk. (NBAC, "Research Involving Persons With Mental Disorders," Chapter One, "An Overview of the Issues," pp. 7-10.) 2) The target population for the study is patients with schizophrenia, who are at even higher risk for impairment of decision-making capacity than others in this population. They also have the compounding effect of fluctuating capacity for decision making. 3) Many patients in this environment may be recently institutionalized, an experience also recognized as sometimes impairing decision-making capacity. 4) Duncan will have access to these patients as subjects for a fairly short time, since some may be processed and sent on to institutional settings. This limited access may create some pressure to abbreviate the consent process. The short time frame also reduces the option of waiting until temporary forms of impairment pass. 5) Duncan appears to be under pressure to collect as much data as possible, which again may create pressure to abbreviate the consent process. 6) All his potential subjects are under the influence of antipsychotic drugs at some point. It is unclear whether they all receive medication before Duncan undertakes the informed consent procedure with them, although that is the case with Miriana. Presumably the impact of the antipsychotic drug would be to increase their decision-making capacity, but that is not clear. 7) This study is nontherapeutic. Research in which the subjects receive no benefit, research in which they are at higher risk, and research in which the researcher has a conflict of interest are all situations that, morally speaking, may require even more stringent consent procedures, such as the use of an independent professional to assess subjects' capacity to make decisions, an auditor to administer the consent procedure, plans for reconsent procedures for subjects with fluctuating capacity, and involvement of a friend or family member of the subject in the disclosure and consent process. (Ibid., Chapter Two, "Informed Consent and Limitations on Decisionmaking Capacity," p. 7.)

Given his subject population, Duncan has strong reason to take special care and use a more sophisticated assessment procedure in the consent process than one might use with other populations. This population is at higher risk for impaired decision-making capacities when he approaches them. It is not clear that Duncan has made an effort to assess the degree to which his potential subjects demonstrate each of four relevant decision-making capacities (the capacity to express choice, understand relevant information, appreciate the situation and its consequences, and reason) and the degree to which they can apply those capacities during his consent process. As the researcher -- and one under pressure to produce data -- as well as the one who assesses potential subjects' capacities, Duncan has a conflict of interest. Since the question of whether this population is at risk might be debatable, perhaps he should follow the NBAC's recommendation that he use an independent professional to assess potential subjects' decision-making capacities. (Ibid., Chapter Five, p. 10.)

When he first seeks her informed consent, Duncan does not know, based on his own assessment, Miriana's diagnosis or that of the other subjects from whom he obtains informed voluntary consent. Both the "experimental" group and the "controls" go through the entire procedure including the data collection as well as the two-hour diagnostic examination. It is possible, perhaps even likely, that a number of patients with schizophrenia as well as others would exhibit variable decisional capacity during the period of the procedure. Given the population with which he is dealing, it would appear prudent to have in place a procedure for dealing with those in either group whose decisional capacities, although at an acceptable level in the beginning, diminish during the research. Duncan apparently has not planned for that contingency since he is uncertain what to do when he encounters that situation. That procedure, whatever it is, should be addressed in the consent session and consent obtained if that event occurs.

Duncan should build into the consent procedure his plan for dealing with patients if they exhibit a decline or fluctuation in decision-making capacities. This plan may include an indication in the informed consent process of what is to be done if the subject experiences fluctuating capacity during the procedure. Possible responses might include the subject's designating someone to serve as a surrogate decision maker or an indication that, should such a situation develop, the researcher would suspend the research activity with the subject until he or she is competent to reconsent. (Ibid., Chapter Five, p. 19.) Finally, it might involve indicating that if decision-making capacities declined or fluctuated, the subject would be withdrawn from the study.

Because of potential subjects' risk of decisional incapacity and fluctuating decision-making capacity, it may be wise in this experiment to routinely seek to have the potential subject's family, friend or legal advocate sit in on the consent and informational procedures and, with the patient's agreement, serve as a representative or an advocate for the patient/subject during the research. (Ibid., Chapter Two, p. 12; Chapter Three, "Advance Planning, Surrogate Decisionmaking, and Assent or Objection," p. 7; Chapter Five, p. 16.)

One can imagine a patient/subject or, more likely, a patient advocate being concerned that the research process might trigger a psychotic event in the patient/subject. It would be natural for them to want to consult with their physician before consenting to participate in the study. Suppose the subject or his or her representative requests that the patient's treatment team be present at the consent process and assist in making the decision on participation in the experiment. Should the researcher be open to that request? Should that practice be incorporated into the consent procedure? One difficulty for the researcher is that such a procedure may blur the line between research and treatment activity in the mind of the patient/subject or that of the advocate.

Some might argue that since this research appears to be low risk, such precautions in the consent process are not warranted. However, it is a mistake to assume that only the threat of risk in research justifies or requires attention to proper informed voluntary consent. Subjects can be wronged, even if not harmed, by failure to treat them with the respect due autonomous beings. Involving them in a nontherapeutic study without gaining their informed, voluntary consent falls in that category.

Content of informed consent

Not only is there an issue of how the informed consent procedure is conducted and obtained; there is also a question of what the potential subjects are told about the study. Everyone who consents is subjected to a two-hour diagnostic interview to allow the researcher to arrive at an accurate diagnosis of the subject's illness. Is that made clear to the subjects? It would be hard to imagine a thorough explanation of the research activity that failed to explain that two of the eight hours are devoted to diagnosis. Would it also be made clear that the diagnostic assessment is done because the hospital's preliminary diagnosis is judged insufficiently accurate for the purposes of research? There is a reference in the case to Miriana's hospital charts. Duncan apparently has access to Miriana's records and is aware of her preliminary diagnosis. If that is so, are the patients aware that Duncan has access to their records?

Once the potential subjects are aware that part of the research activity is a diagnosis of their illness, it would certainly be natural for them or their representatives to ask that that diagnosis be shared with them. It is not clear from the case if they are told that the results of this diagnosis will be shared with them. If the diagnosis is to be shared with subjects, then one would think it might also be shared with the attending physician. If so, will the subjects be told that the diagnosis will be shared with the attending physician? (More below about whether the diagnosis should be shared with subjects or their physician.)

Presumably there will also be thorough discussion of the purposes of the six-hour experimental activity. If Miriana can be confused by the presence of a tape recorder, what mistaken conceptions might other psychotic subjects acquire regarding the activity of listening to headphones for six hours?

The information process should also make clear to the potential subjects and their advocates that there is no implicit quid pro quo in which subjects ought to participate in the experiment carried out on the ward in exchange for treatment given in the ward. (Ibid., Chapter Three, p. 9.)

Should the researcher acknowledge in the informed consent process that subjects will be given a diagnostic assessment as part of the procedure? It is hard to believe that subjects or their advocates would not want to know this information or could give informed consent without it. The subjects or their advocates would want to know about the administration of the assessment not only because of its possible effect on the subject; they may also want to know the actual diagnosis, if indeed it is a more accurate diagnosis than that of the hospital. An accurate diagnosis may well appear to be a benefit for the patient, particularly if it conflicts with the hospital's diagnosis.

Confidentiality

The subject's perspective

It is difficult to see how subjects or their advocates can be adequately informed without being told that part of the process is a diagnosis of their illness. The subjects in this case are also patients. Once they know that a diagnostic assessment will be conducted, it will be difficult for subjects/advocates to separate their concerns as subjects from their concerns as patients. It may be hard to avoid discussing with the subjects/advocates why another diagnosis is needed in addition to the hospital's diagnosis. It will be difficult to avoid the question of sharing diagnostic findings with subjects/patients or advocates. Many patients who have a mental illness, or their advocates, may want all the diagnostic information they can gather. If the diagnosis is shared, it could have adverse implications for the dynamics between the patient and the treatment team, particularly if the hospital's and researcher's diagnoses are inconsistent. It could be especially difficult if physicians are unaware that a diagnosis has been shared or that it differs from their own.

Suppose that, as in this case, the informed consent agreement includes the provision that the diagnosis remains confidential and is not shared with the hospital without the subject's written permission. This provision places patients with possibly impaired judgment in the position of deciding to withhold potentially important information from the persons charged with their treatment or care. A decisional capacity sufficient to agree to a nontherapeutic experiment is not necessarily the same as a capacity sufficient to make decisions that could affect treatment.

The researcher's perspective

The case indicates that in the consent process, Duncan assures potential subjects the diagnosis will be kept confidential and not shared "with the attending physician unless the patient gives written consent to do so." A reassurance about confidentiality could be essential to ensuring the accuracy of the researcher's data. As the other commentator notes, subjects may tell the researcher things they do not want the treatment team to know. Some patients, particularly patients with schizophrenia, may have an adversarial relationship with their treatment teams. If subjects know that the information given the researcher will be shared with physicians with or without their consent, they may have an incentive to downplay their symptoms or use of drugs since that information could affect decisions made about them in the treatment program.

If patients have the right to decide whether to release the diagnosis, that allows the possibility that they can manipulate the treatment team by releasing only "good" diagnoses. It may also give subjects an incentive to manipulate the researcher's diagnosis by selective sharing of facts with the researcher. Obviously, all of that could affect the accuracy of the researcher's diagnosis as well as the interpretation of the experimental data.

Impact on voluntary consent

If, in general, the researcher's diagnosis proved to be more accurate than the hospital's, the hospital may have an incentive to encourage patients to enroll in the program, which raises obvious issues of whether the researcher can obtain voluntary consent.

The treatment team's perspective

The treatment team could hardly consider it desirable for patients to be informed of a diagnostic assessment of their illness by someone other than patients' caregivers, particularly if that diagnosis conflicts with that of the treatment team. From their perspective, it would surely be even worse for the patient or the patient's representative to receive such a diagnosis without the treatment team's knowledge. Unknown to them, the patient and/or the patient's representative is now aware that there are conflicting diagnoses. This situation could create all kinds of difficulties in patient-physician relations and treatment. The treatment team may not place confidence in the researcher's diagnosis, in which case they may not be willing to accept it or alter treatment on the basis of that diagnosis; they may find it frustrating to have to defend their diagnosis against that of the researcher; and they may perceive sharing that diagnosis with their patient as undermining patient confidence. If they do accept the diagnosis, then the patient may benefit from an improved diagnosis. In this case, the treatment (using antipsychotic drugs) may be the same whatever the diagnosis. It might be difficult for the patient or advocate to understand that the diagnosis is really irrelevant as far as treatment is concerned.

There may be no good resolution of this issue. The option of failing to inform the subject that part of the procedure is a diagnostic assessment does not satisfy the requirements of informed consent. The option of sharing the results with the hospital without informing the patient would also violate voluntary consent and subjects' confidentiality. A third option would be to inform subjects of the assessment but indicate that they will not be told the researcher's diagnosis. If that keeps subjects from joining the study, so be it. That would mean that they would not receive the benefit of a free diagnosis. A fourth option would be to inform the subjects/patients of the diagnosis and let the subject choose whether it is to be released to the treatment team. That alternative would be somewhat analogous to a patient's seeking a second opinion.

This case raises a host of ethical issues, including a researcher's responsibility to ensure good design in human subjects research, particularly for topics that are politically sensitive. The case presents issues of truth telling and deception in the reporting of findings and of the relative strength of obligations to report findings honestly as weighed against harms those findings might cause to others. Finally, it raises issues of the degree to which researchers have a responsibility to ensure their findings are not misreported or misused.

In this case, there are a number of stakeholders, and Lang may have moral obligations to most of them. Stakeholders include the general public, which relies on solid scientific information for making sound health policy and promoting public health; the scientific community; the needle exchange activists; the needle exchange participants; the opponents of needle exchange; and the funders of Lang's research. Lang might also consider her own self-interest.

One can start with the observation that all stakeholders are ultimately best served if Lang is doing good science. One of the scientist's primary obligations is to design studies that have potential to provide useful results. One issue raised by this case is whether Lang has done that and consequently, whether she has results worth reporting.

Whether Lang has fulfilled this obligation depends in part on the study's objective. The point of needle exchange programs (NEPs) is to reduce the spread of HIV by reducing needle sharing among injection drug users. Lang "designed a study that would provide data about the seroprevalence of HIV injection users in Capitol City. . . and track seroprevalence over time in a population that used needle exchanges and a group that did not." The significance of those data depends on the precise objective of her study. 1) One could do a study to determine NEPs' effectiveness in reducing HIV transmission. Answering that question would presumably require a randomized, controlled clinical study. As described, Lang's is not such a study but rather an observational study, so presumably her objective is not to determine NEPs' effectiveness in reducing HIV transmission. 2) Lang may simply be trying to a) measure the level of seroprevalence among NEP participants and b) monitor that level of seroprevalence over time compared to some other group; this objective could be achieved by an observational study. Lang appears to be doing the latter, but her objective is still not clear.

Surely she is not simply interested in measuring the initial level and continuing levels of seroprevalence among NEP participants, simpliciter, but rather those levels compared to some other group. One must have some context in which to make sense of the significance of measured levels of seroprevalence. For example, one could compare levels of seroprevalence among NEP participants to nondrug users, to the general population, to drug users who are not participants in NEP programs, or to drug users who are not participants in NEPs but who are otherwise relevantly similar to NEP participants.

Even if this is an observational study and not a clinical trial, the nonparticipant group can be used to provide some context for interpreting the significance of the findings. In designing a study, one always makes judgments about the criteria of relevant similarity between the observational and control groups: the very choice of the control group is a decision about what counts as relevantly similar to the study group.

Thus, one could compare the seroprevalence levels of NEP participants to a group of NEP nonparticipants whose only relevant similarity is that they are injection drug users. They may vary completely with regard to other risk factors for HIV. Or, one could compare the seroprevalence levels of NEP participants to a matched group of NEP nonparticipants who are also relevantly similar with regard to other risk factors, such as the likelihood that they engage in prostitution, "inject frequently, borrow injection equipment, frequent shooting galleries, share equipment with HIV positive injection drug users." (These risk factors are identified in a historically similar, but not identical, case. See J. Bruneau et al., "High Rates of HIV Infection among Injection Drug Users Participating in Needle Exchange Programs in Montreal: Results of a Cohort Study," and P. Lurie, "Invited Commentary: Le Mystere Montreal," American Journal of Epidemiology 146 (1): 994-1005.)

If the researcher ignores such well-established risk factors in the criteria for selecting nonparticipants, what has one learned from such a study? If the seroprevalence levels of NEP participants are compared to those of an unmatched nonparticipant group, the significance of the comparison is not clear. The measurement may be an instance of garbage in, garbage out. Since the comparison is between NEP participants and nonparticipants, and since needle exchange is such a controversial issue, one can predict in advance that activists on one side or the other are likely to use the results, whatever they may be, to bolster their positions. Thus, one should at least be sure that there is a possibility of useful results. The worst outcome is to generate scientifically useless data that are nonetheless used for political agendas.

Given the case description, it appears that Lang may have failed to pay sufficient attention to the selection of the nonparticipant group. The candidates for the nonparticipant group were surveyed for risk behavior at the beginning of the study. At that point, Lang could have screened the nonparticipant group to include only those who matched the study group in terms of risk behavior. For whatever reason, she did not. There are several possibilities here regarding her moral responsibility for what follows.

Scenario 1

First, suppose there is, in fact, a clear and significant difference between the nonparticipant group and the participant group regarding these risk factors and that that difference could have been anticipated and eliminated by careful study design. In that case, Dr. Lang has just done bad science, and there is no reason to suppose any results she may obtain from her study could tell us anything about the significance of the seroprevalence data for injection drug users in Capitol City and the tracking of seroprevalence over time in a population that used needle exchanges compared to one that did not.

If that is the case, Lang has certainly acted irresponsibly as a scientist and with respect to her funding agency by using the funds for a poorly designed study. In addition, if her study produces unreliable evidence, which somehow gets publicity and is then used to undermine the work of the activists and the welfare of the drug users who cooperated with her, she has harmed them as well. Finally, her unreliable results may be used to shape public policy in a way that harms the public good. All of these moral harms could have been avoided if she had taken care in the original design. Sometimes scientists need to practice "preventive ethics," avoiding moral difficulties in the first place rather than having to resolve ethical issues afterward.

Scenario 2

It may be the case that there is in fact a clear and significant difference between the nonparticipant group and the participant group, but for some reason Lang was not able to identify that difference during the study's design. Perhaps the nonparticipant group misled her about their habits; perhaps some changed their behavior over the course of the three years. She may not be culpable for negligence in designing the study, but the study's results are no less suspect. We still have no reason to believe that the results tell us anything. It is still the case that if her unreliable evidence somehow gets publicity and is then used to undermine the work of the activists and the welfare of the drug users who cooperated with her, they may be harmed, and her results may be used to shape public policy in a way that harms the public good. Thus, harms may result from the study, although not because she did poor science; they may result instead from her efforts to publicize her results.

Scenario 3

A third possibility is that some, but not all, of the NEP nonparticipant group did not share needles and some, but not all, engaged in less risky behavior than the NEP participant group and that these variations between the groups became clear only at the end of the experiment. At this point, Lang must try to determine whether the variations between the NEP participant and nonparticipant groups are sufficiently small to allow reliable conclusions to be drawn from the results.

In this last scenario, perhaps the results are indeed significant and show that, for some reason, seroprevalence is higher among the study group than among a relevantly similar nonparticipant group. If so, she has discharged her responsibility as a scientist to design a good study that permits some confidence in the results. If those results run counter to the preponderance of studies, then perhaps she has identified some important factor overlooked by other studies. Consequently, our understanding of the epidemiology of the disease may be advanced.

As a scientist, Lang has an obligation to share those results with the scientific community. The results, if published, may indeed be used in ways that work to the detriment of the activists and NEP participants. Unlike in Scenarios 1 and 2, however, Lang is not morally culpable for that harm, either through bad design or through publicizing unreliable results. Neither is she culpable for others' use of her results to mislead public policy deliberations simply because she published credible results that run counter to other findings.

Some might say she is culpable for causing harm to the activists and NEP participants if she seeks to publish the results rather than to suppress them. She could refuse to publish the results. That criticism presumes that the results of the preponderance of studies are correct and that her results do not identify a factor that could improve the programs. However, if her results point to some significant factor overlooked by the other studies, suppression of the results would harm the addicts by depriving them of suitably modified programs.

There are additional moral issues regarding publicity about the results. Whether Scenario 1, 2 or 3 describes the situation, Lang is under pressure to publish for prudential reasons. Failure to publish may mean the current grant will not be renewed or future grants will not be forthcoming and her research career may be at an end.

In Scenarios 2 and 3, if she makes clear that the composition of the nonparticipant group is flawed, it is not clear that any journal would accept the paper. That may tempt her to omit the information or falsify information on the nonparticipant group in order to get published and to further, or at least preserve, her career in an important area of research. Lang has an obligation to the scientific community to include a full report of what she knows of the design flaws in her experiment or else not submit her paper for publication. Falsification or suppression of information regarding the nonparticipant group would not be justified by the need to publish. Deliberate suppression of information about the nonparticipant group would undermine the practice of science, undermine work in this field and undermine her own integrity. Even if she stands to gain as an individual by such an act, that gain is outweighed by the other considerations, and such suppression is a morally unacceptable alternative. (For a full general discussion of the moral issues of lying and failing to reveal, see Sissela Bok, Lying: Moral Choice in Public and Private Life [New York: Vintage Books, 1989] and Sissela Bok, Keeping Secrets: On the Ethics of Concealment and Revelation [New York: Vintage Books, 1989].)

In Scenario 3, if she is satisfied that the match between the participant and nonparticipant groups is sufficient to produce reliable results, then she has an obligation as a scientist to publish the results, even though she is concerned that others may misuse those results for political purposes. If researchers are to engage in research in areas that are politically sensitive, they must be prepared to let the chips fall where they may in terms of honestly reporting findings. Otherwise, why bother to do the science? The fact that some harm could come from honestly reporting her results does not necessarily justify falsifying results or suppressing results. She owes the truth about her findings to all stakeholders, including those who oppose needle exchange programs.

One consideration is the particular good and harm that publication of these results may do to various stakeholders (e.g., pro-NEP activists, anti-NEP activists, NEP participants and Lang's credibility and her future access to these populations). But an equally legitimate concern is whether one can justify a practice of deciding to report or publish scientific findings on the basis of the impact of the findings on some ongoing political debate. It is not clear that such a practice could be morally justified in scientific research.

One can also consider the researcher's responsibilities to ensure fair and responsible reporting, interpretation and use of the researcher's findings. The researcher has these responsibilities both qua researcher and qua citizen.

At one level, the researcher cannot be held morally responsible for others' irresponsible use of the researcher's findings. If a journalist is too lazy or ignorant to do responsible reporting, that behavior is beyond the researcher's control. If politicians or activists willfully misuse findings to support a political agenda, they must be held morally accountable for that. However, if the researcher is in the best position to anticipate that her findings will be misused or to recognize they are being misused, then she has some obligation as a scientist with a commitment to the truth and as a citizen with a commitment to honest civic deliberation to take whatever steps she can reasonably take to prevent that abuse or to set the record straight. For other discussions in this series of researchers' obligations to counter misuse of their results, see the cases and commentaries in "Beyond Expertise: One Person's Science, Another Person's Policy" and "Crashing into Law" in Brian Schrag, ed., Graduate Research Ethics: Cases and Commentaries, Volume 2 (Bloomington: Association for Practical and Professional Ethics, 1998).

In Scenario 3, for example, perhaps she can alert her colleagues before publication so they are aware of the problems her findings may create in the public sphere and so they can be prepared to respond. Perhaps she could meet with journalists prior to release of the results to ensure they can place the results in the context of other research in the field.

Special ethical and regulatory considerations are involved in investigator design and IRB review of research on children. I will focus on these special concerns.

One step is to identify which of four categories of research the study belongs to: 1) research that does not involve greater than minimal risk to children; 2) research involving greater than minimal risk but presenting the prospect of direct benefit to the individual child-subject; 3) research involving greater than minimal risk and no prospect of benefit to the child; 4) research not otherwise approvable under one of the above categories, but the IRB determines that the study presents a reasonable opportunity to further the understanding, prevention or alleviation of a serious problem affecting the health or welfare of children. (OPRR Reports 1981)

The study in this case calls for eighth graders to take a skin prick test of 12 different allergens, including those for cockroaches and dust mites. The children are divided into three groups for the purpose of the experiment, based on the following characteristics: 1) children who have a self-reported diagnosis of asthma; 2) children who have wheezing but have not been diagnosed with asthma; and 3) children who have neither wheezing nor asthma.

Assessment of Risk

The first category of research does not involve greater than "minimal risk" to children. Minimal risk means "the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during performance of routine physical or psychological tests." Allergy scratch tests are included in the category of minimal risk. (Office of Human Subjects Research 1993, p. 3) There is a possibility that a child will go into anaphylactic shock from the skin prick tests. Anaphylactic shock can lead to death in minutes if not treated. The probability in the general population of anaphylactic shock from the skin prick tests is 1 in 1 million; the magnitude of harm is great, but the probability is small.

However, the aim of the research is to determine whether asthma and wheezing are associated with exposure and sensitivity to cockroaches and dust mites. If that turns out to be the case, then there is a possibility that children in groups 1) and 2), who have asthma or wheezing, may have already developed a sensitivity to these allergens. Hence, the probability of anaphylactic shock may be higher for them. That probability is, presumably, unknown. The risk for subjects in these two groups may thus be higher than for the general population and for those in the control group, who have neither asthma nor wheezing. However, the risk for those in the two experimental groups is no higher than it would be if their parents had decided, because of the children's symptoms, to take them in for allergy testing on their own. The control group, on the other hand, might not otherwise undergo the scratch tests at all and so incurs the 1 in 1 million risk only because of the study.
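
The comparison can be restated in expected-harm terms. The sketch below is only an illustrative formalization of the reasoning above, not language from the federal guidelines; the symbols p (probability of anaphylactic shock) and m (magnitude of that harm) are notation introduced here, and the only figure taken from the case is the 1-in-1-million baseline probability.

% Illustrative only: p and m are assumed notation, not drawn from 45 CFR 46.
\[
  E[\text{harm}] = p \cdot m,
  \qquad p_{\text{control}} \approx 10^{-6},
  \qquad p_{\text{asthma/wheezing}} \ge p_{\text{control}} \ (\text{exact value unknown}),
\]
\[
  \text{so that } E[\text{harm}]_{\text{asthma/wheezing}} \ge E[\text{harm}]_{\text{control}}
  \ \text{ for the same magnitude } m.
\]

Nothing in this sketch changes the qualitative point of the commentary: the expected harm is larger for the two symptomatic groups only insofar as their unknown probability of a reaction exceeds the baseline.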

It is also the case that the tests will be conducted in a setting prepared to deal with such reactions and perhaps with heightened awareness of the possibility of reactions. Does this set of experimental conditions move the research subjects to a category higher than minimal risk? It is not clear that it does.

Benefits

There are some direct material benefits for all participants in the program: 1) all participants would receive a free allergy test; 2) all would receive a free assessment of the levels of these allergens in their homes; and 3) all would receive some inexpensive materials for control of dust mites and cockroaches.

Who stands to benefit from the results of this research? If it should turn out that asthma and wheezing in children in groups 1) and 2) are caused by dust mites or cockroaches or both, then children in these two groups would presumably benefit significantly and directly from the findings, since the source of their problem will have been identified and thus may be alleviated. For them, this study falls in the category of therapeutic research. The population of all children who are at risk when exposed to these elements also may benefit significantly from this knowledge. Recent publicity suggests that the number of children with wheezing and asthma is substantial and increasing and may be linked to allergies to dust mites and cockroaches. The benefits to the two experimental groups may well be said to offset the risks.

The results presumably would not directly benefit those in the control group who are resistant to these allergens, unless, of course, even they could become sensitized given high enough allergen levels. Children in the control group might have siblings who are sensitive to these allergens, and the siblings might benefit directly from the research results. The benefits for the control group are minimal. Children in the control group will be exposed to the risk of anaphylactic shock, which they would presumably not otherwise incur, but that risk is deemed minimal.

Voluntary, Informed Consent

A second consideration in the study is the process of obtaining voluntary, informed consent. The case does not indicate the socioeconomic level of the participants. If they are from a low socioeconomic and educational background, there is a possibility that they will be unduly swayed by the offer of free allergy tests and modest environmental interventions. That possibility must be taken into consideration.

Human subjects research guidelines for children require that the permission of parents or guardians must be obtained, since children are considered unable to give legally valid informed consent. However, the children in this case are in the eighth grade and are capable of providing assent. Hence, they should be asked if they assent to participating in the research.

The information given to subjects and parents should certainly include the purpose of the experiment and the risks and benefits involved. In the case of the control group, it should be made clear to both parents and children that their involvement is not for their own benefit but for the benefit of others.

The information must be made available in a manner easily understood by both parents and children, particularly if they are from a disadvantaged population. It would be appropriate to include some general educational information on problems associated with asthma and allergies so that all can understand the significance of the research and so that children in groups 1) and 2) can learn something about the conditions they have.

If a participant experiences anaphylactic shock at some point during the allergy tests, it would be appropriate to inform the other participants and give them the option of withdrawing from the experiment.

References

  • Office of Human Subjects Research, National Institutes of Health. "Research Involving Children," Information Sheet 10. Revised 10/93. N.p.: National Institutes of Health, 1993.
  • OPRR Reports, "Protection of Human Subjects," Title 45 Code of Federal Regulations, Part 46 Subpart D -- Additional Protections for Children Involved as Subjects in Research. Revised June 18, 1981.
Commentary On

The case describes an experimental procedure conducted on a particular kind of animal (a procedure that is in fact routinely carried out on that animal) and raises the fundamental issue of how we justify conducting research on animals that we would never justify conducting on humans.

One might ask whether the "facts" of the case really do or can remain the same whatever animal is "plugged in." Are the facts equally appropriate, for example, whether the animal is a monkey or a cockroach? Would carbon dioxide really be the anesthetic of choice in both cases? If it is not necessary as an anesthetic in the case of the monkey, would it really be necessary to allow the monkey to regain consciousness during the experiment? Would removal of limbs (as opposed to some restraining devices) really be necessary on monkeys?

Given these sorts of considerations, one might think it is not sensible to expect to test our intuitions about ethical treatment of animals by imagining our responses if we "plugged" various animals, including humans, into a single experimental design of this sort. Nevertheless, one should not dismiss the issues raised by such a thought experiment too quickly. However one resolves the question of the universality of this particular experiment, it is certainly reasonable to assume that one could consider a complex of cases that raise the same issues.

One could imagine, for example, experiments devised by an "alien" that inflict this sort of pain and suffering on humans. One can identify experiments devised by humans (e.g., the use of rats to assess the pain experienced by humans in burn injuries; see, for example, Patricia F. Osgood, "The Assessment of Pain in the Burned Child and Associated Studies in the Laboratory Rat," ILAR News 33 (1-2, Winter/Spring 1991): 13-18) that appear to subject higher vertebrates to pain and suffering similar to that inflicted in this case. Finally, actual experimentation on invertebrates results in the behaviors and reactions described in this case (however one interprets those behaviors).

Suppose for the sake of argument that this particular experiment were to be performed on a variety of species. Would we think the moral justification for applying or not applying this same experiment would differ for different animals, including humans? If so, to what criteria would one point to justify the difference?

That our actual standards and review procedures for experimentation on humans differ from those for animals reflects the fact that most persons assume that scientific protocols involving human subjects should be assessed differently than protocols involving animals. However, there is currently no consensus in the ethics community regarding the moral criteria for determining when, if ever, experimentation on animals is justified. For an excellent recent collection of essays featuring an exchange of views by leading ethicists and scientists on the use of animals in research, see Ethics and Behavior 7 (2, 1997). This lack of consensus reflects some absolutely fundamental disagreements regarding the nature of humans and their relation to other animals as well as fundamental disagreements over core concepts in morality.

One can identify a spectrum of positions on this question. For a brief overview of the issues, see Deni Elliott and Marilyn Brown, "Animal Experimentation and Ethics," in Deni Elliott and Judy E. Stern, eds., Research Ethics: A Reader (Hanover: University Press of New England, 1997), pp. 246-259. I will summarize the positions of four ethicists to illustrate that spectrum. Some would argue that there can be no moral justification at all for experimentation upon animals. One argument for this view starts from the position that we recognize humans have moral rights. Moral rights place limitations on what people are free to do to each other. Because humans have moral rights, we reject the notion that it was acceptable for the Nazis to perform hypothermia experiments on prisoners simply because the knowledge gained could benefit other humans. The source of our human rights, so the argument goes, is that we are sentient creatures; we have a capacity to experience pain and pleasure. But if that is true for humans, it is also true for many species of animals. Sentient nonhuman animals have the same moral rights as humans in this regard and should no more be experimented upon for the benefit of humans than we should experiment on prisoners, children or any other humans simply for the benefit of other humans. On this view, experimentation on nonhuman sentient animals is unjustified and should stop now. The interests of humans cannot outweigh the rights of rats. If there are some things we cannot learn because we cannot experiment on animals, so be it. See Tom Regan, The Case for Animal Rights (Berkeley: University of California Press, 1983). For his own summary of his views, see Tom Regan, "The Rights of Humans and Other Animals," in Ethics and Behavior 7 (2, 1997): 104-111.

At the other end of the spectrum is the view that there is nothing wrong with experimentation on animals. One view accepts that many animals are sentient creatures, and for that reason humans have an obligation to avoid gratuitously inflicting pain and suffering on them. However, there is no parallel between Nazi experimentation upon humans for the benefit of other humans and experiments on rats for the benefit of humans. Humans have moral rights; rats do not. The moral status of humans and animals is very different. That difference does not arise from particular cognitive properties of humans such as self-consciousness, rationality and the ability to communicate. Rather, the human species is unique among animal species in having a capacity for moral reasoning and discourse; only humans are capable of grasping and laying down moral laws for themselves and others. Consequently, only the human species is capable of having and being in a moral community. Moral community is a necessary condition for having moral rights. Thus we do not violate the rights of animals when we experiment upon them for the benefit of humans, and it is acceptable to do so. Here I summarize the argument of Carl Cohen, "Do Animals Have Rights?" in Ethics and Behavior 7 (2, 1997): 91-102. See also Carl Cohen, "The Case for the Use of Animals in Research," New England Journal of Medicine 315: 865-870.

A third view denies that moral standing and moral rights are necessarily tied uniquely to members of the human species as specified in the previous position. See Tom Beauchamp, "Opposing Views on Animal Experimentation: Do Animals Have Rights?" Ethics and Behavior 7 (2, 1997): 113-121. Rather, moral standing in creatures is tied to properties of creatures that warrant giving them the protection of morality. If a being has moral standing, it also has moral rights. Acknowledging that a being has moral standing is to recognize that it has interests that humans must take into account; that also distinguishes it from beings whose interests have less moral weight. Certain cognitive properties are frequently invoked as the criteria for assigning moral standing. Thus, it is often suggested that self-consciousness, purposive action, capacity for language, capacity to make moral judgments and rationality are properties that justify assigning moral status to a being. But it is plausible to argue that certain nonhuman animals possess some of these qualities and sometimes at a higher level than some humans (e.g., infants or demented adults). Furthermore, it is not clear why only these qualities are relevant to assigning moral standing. Why should not the capacity to feel emotion (note the recent work on the emotional life of animals) or pain be sufficient to give a creature a degree of moral standing? "The question is not, Can they reason? nor Can they talk? but, Can they suffer?" Jeremy Bentham, The Principles of Morals and Legislation, Chapter 17, Section 1. The fact that animals can suffer imposes some obligation on us not to inflict pain on them and to recognize that infliction of pain is wrong no matter what the benefits. At minimum, this imposes upon humans an obligation not to exceed certain limits in the infliction of pain in other animals. (Setting acceptable levels of pain in experimentation on animals is required by law in some countries, but not the United States. See F. Barbara Orlans, "Ethical Decision Making About Animal Experiments," Ethics and Behavior 7 (2, 1997): 123-136.) On this view, we may be justified in conducting certain animal experimentation, but we are justifiably constrained not to exceed certain levels of pain no matter what the benefits to humans, and we are otherwise constrained by the amount and type of pain inflicted as well as the merits of the research.

A fourth view holds that how we treat humans and animals in experimentation should be determined on the basis of the value of their lives, which is, in turn, a function of the quality of their lives. See R. G. Frey, "Moral Community and Animal Research in Medicine," Ethics and Behavior 7 (2, 1997): 173-184. Thus, moral standing is not determined by an appeal to moral rights. Rather, moral standing is determined by whether a creature can be an experiential subject, capable of having a series of experiences that can make the creature's life go well or not depending on the quality of those experiences. It has a welfare that can be improved or negatively affected by what we do to it. Normal adult humans fall in this category, but so do demented patients, frogs and lion cubs. This is so whether or not we agree they all have moral rights or are all moral agents. (Mere adaptation to the environment does not count as being experiential; some creatures, e.g., plants in general, are not experiential.) Moral standing varies with the quality of life.

The quality of a life is a function of its capacity for richness. A normal adult human with the capacity to appreciate music, literature and science has a richness, and hence a quality of life, not approached by a puppy. Nonhuman animal life, in general, probably would not have the same value as human life because the quality of life is not as high. Nevertheless, even if an animal lacks some of the richness of the normal adult human, it still has some value, and animals are not to be simply sacrificed the way one might discard broken lab apparatus. It is important to emphasize that on this view, it is not the species to which an animal belongs that determines the value of the life; it is the animal's quality of life.

Quality of life includes more than considerations of pain and suffering, although those considerations are morally relevant. On this view, pain is pain, and the species makes no difference in terms of the moral significance of the pain. There is no moral difference in the pain caused by pouring scalding water on a baby or a puppy or a lobster. (One researcher has noted that the mortality rate for human burn victims at the turn of the century was quite high. It is now less than 5 percent, and that decline is directly attributable to experimentation on animals such as rats. Osgood, "Assessment of Pain," p. 15. On this view, such research would be justified.)

It also follows from this view that not all humans have the same quality of life and hence the same value or moral status. It is possible in principle that some nonhuman primate could have a quality of life higher than that of some human. One implication of this view for animal experimentation is that although in general one might justify experimentation on animals on the basis of benefits to humans, there may be instances in which the quality of life of nonhuman primates, say, is higher than that of some humans, and the experimentation should be done on those humans rather than the primates.

The actual standards and regulations guiding experimentation on animals have evolved in the United States over the past 30 years. For a history of development of regulation in the United States, see Nicholas H. Steneck, "Role of the Institutional Animal Care and Use Committee in Monitoring Research," Ethics and Behavior 7 (2, 1997): 173-184. The legal regulations are principally spelled out in the Animal Welfare Act, the Public Health Service Policy on Humane Care and Use of Laboratory Animals, and the Good Laboratory Practices of the Food and Drug Administration. In addition, there are the voluntary regulations of the American Association for the Accreditation of Laboratory Animal Care. For a summary of the relevant U.S. rules and regulations, see B. T. Bennett, M. J. Brown and J. C. Schofield, eds., Essentials for Animal Research (Beltsville, Md.: National Agricultural Library, 1994), pp. 1-7. Reprinted in Elliott and Stern, eds., Research Ethics, pp. 246-259. The latter are set forth in the Guide for the Care and Use of Laboratory Animals, which is an influential document worldwide. Institute of Laboratory Animal Resources, Commission on Life Sciences, National Research Council, Guide for the Care and Use of Laboratory Animals (Washington, D.C.: National Academy Press, 1996). The Animal Welfare Act requires the CEO of each research facility to create an Institutional Animal Care and Use Committee (IACUC) to ensure compliance with the Act. IACUCs are now the major formal review mechanism in the United States for proposed animal experimentation. The Public Health Service also requires IACUCs for institutions with projects funded by the PHS, and those projects are expected to follow the standards of the Guide. The Guide is intended to apply to vertebrates and does not specifically address the treatment of invertebrates. Ibid., p. 2. As a general rule, IACUCs do not concern themselves with protocols for the treatment of invertebrates.

There is considerable variation between the United States and some other countries regarding oversight and standards. I draw here on Orlans, "Ethical Decision Making." U.S. law does not require that IACUCs consider the ethical justification of an experiment, unlike the laws of the United Kingdom, Germany, the Netherlands and Australia. Those countries require that institutional review committees weigh the ethical cost to the animals against the human benefits derived from the research.

In several countries (not the United States), review boards are required to assess the level of pain likely to be inflicted on animals by the proposed experiment; if the level is too high to be ethically justified, the experiment is disapproved irrespective of its scientific benefits. Systems (scales of invasiveness or pain) to categorize the degree of animal pain in experimentation are incorporated into the national policies of Canada, the Netherlands, the United Kingdom, Switzerland, Finland and Germany (but not the United States). The actual effect of such review may depend on who sits on the committees. Researchers understandably have a vested interest in advancing research and little incentive to place the welfare of animals above research. Orlans reports on a survey of licensed animal researchers in the Netherlands: When they reviewed proposed protocols that involved considerable harm to nonhuman primates, the researchers assumed the project was justified and did not question its purpose. Ibid.

The questions "Do animals experience pain and suffering?" and "If so, how could one tell?" are prior to the debate over the trade-off between benefit to humans and the pain inflicted on animals. Even if we allow that animals that are most like humans do experience pain and suffering, what about "lower" vertebrates and invertebrates? Much of the debate over our moral obligations in experimentation on animals and in the guidelines and standards centers on these questions.

Some argue that animals, compared to humans, do not feel pain, or do not feel as much pain, or at least that we can never be sure they feel pain, and hence that it is acceptable to subject them to experiments that would cause pain and suffering in humans. These are in part questions of conceptual analysis in the philosophy of mind and not purely empirical questions. See, for example, the following exchange: Peter Harrison, "Do Animals Feel Pain?," Philosophy 66 (1991): 25-40, and Ian House, "Harrison on Animal Pain," Philosophy 66 (1991): 376-379. Even if some animals feel pain, there is still the question of the degree to which various nonhuman species experience pain. Invertebrates are often thought to be suitable replacements for vertebrates in experiments because they are thought to be insentient or at least less sentient than vertebrates. Thus, in the case under discussion, it would matter whether the experimental animal was a monkey or a cockroach.

We cannot resolve these issues here. Nor is it reasonable to suspend scientific research until the conceptual issues are resolved. For an extended discussion of both the empirical and philosophical issues, see Bekoff et al., "Animals in Science: Some Areas Revisited," Animal Behaviour 44 (1992): 473-484. Notice, however, that it would be inconsistent for researchers on the one hand to assume that rats are reasonable models for learning about human responses to pain or the effectiveness in humans of various analgesics, say in burn treatment, and yet assume that we have no idea whether or not rats experience pain. For a discussion of the empirical work on the relation of human to animal pain, see Fred Quimby, "Pain in Animals and Humans: An Introduction," ILAR News 33 (1-2): 2-3; Francis J. Keefe et al., "Behavioral Assessment of Pain in Animals and Humans," ILAR News 33 (1-2): 3-13; Osgood, "Assessment of Pain." At a practical level, the proposal found in the Guide for dealing with vertebrates may seem the most prudent: "In general, unless the contrary is known, it should be assumed that procedures that cause pain in humans also cause pain in animals." Guide, p. 64.

Even if we assume that vertebrates feel pain, the issue of pain in invertebrates is more difficult. Humans share a basic physiology with mammals, and similarities in neural organization with all vertebrates. Our physiological similarities with invertebrates are much more tenuous. Should we be more concerned than we are with the treatment of invertebrates in research? Can cephalopods (which have the largest brains of all invertebrates) experience anything like what we call pain? For a discussion of the evidence of pain in invertebrates, see Jane A. Smith, "A Question of Pain in Invertebrates," ILAR News 33 (1-2): 25-31. In the absence of clarity on this issue, perhaps the most reasonable course, as suggested by one researcher, is to follow a principle of respect. That is, when using invertebrates in research, we should maintain the highest possible standards of husbandry and care and, where questions of pain and suffering are concerned, give the animals the benefit of the doubt. That would include avoiding the use of the more complex species where possible, and anesthetizing the animals in procedures that have the potential to inflict pain. Ibid., p. 29.

Intentional Deception of Human Subjects in Research

These three cases raise a narrower issue and a broader issue. The narrower issue is whether an IRB should approve the conduct of any or all of these experiments, which involve intentional deception of human experimental subjects. The broader issue is whether it is ethical for scientists to employ intentional deception in experiments on human subjects. The broader question has taken on increasing significance over the past 50 years as the use of deception in research has increased dramatically. The proportion of studies that use intentional deception in experimentation on human subjects increased from 18 percent in 1948 (Baumrind, 166) to 37 percent in 1963 and 47 percent by 1983. (Fisher and Fryberg, 417)

The broader issue raises not only the question of the ethical justification for intentionally deceiving the subjects but also other ethical considerations, including the moral significance of particular acts of deception, or of a practice of deception, for the researcher, the training of researchers, the university (if the research is university-based), the discipline, research science and society as a whole.

IRB Considerations

The IRB has a narrower focus. Its concern is primarily, although not exclusively, with protecting the rights and welfare of the human subjects in scientific research, given certain guidelines. Those guidelines may or may not adequately capture all the relevant ethical considerations concerning particular deception research or the practice of such research. Hence, even if the IRB approves any of these experiments, that does not settle the question of whether it is ethical for scientists to engage in this research. It is possible for the IRB to approve one of these experiments even though the research is not ethically justified. Nevertheless, the IRB is a good place to begin in these cases.

The federal guidelines on the protection of human subjects of research (found in the Code of Federal Regulations, Title 45, Part 46) provide the IRB with criteria for determining whether proposed research that falls under its purview will treat human subjects in an ethical manner. These guidelines specifically charge the IRB with determining two things: 1) That the subjects have given free and voluntary informed consent to participate in the study and, more particularly, that a) the circumstances under which the consent is obtained minimize the possibility of coercion or undue influence; b) the information provided includes a description of any reasonably foreseeable risks or discomforts; c) refusal to participate will involve no loss of benefits the subject is entitled to; d) the subject may discontinue participation at any time; and e) if subjects are part of a population that may be vulnerable to undue coercion or influence, additional safeguards are included to protect their rights and welfare. 2) That risks to subjects are minimized and are reasonable in relation to any benefits of the research to the subjects and in relation to the importance of the knowledge gained in the experiment. (45 CFR 46.111)

These guidelines draw on three ethical concepts relevant to ethical practice of human research, namely, "respect for persons," "beneficence" and "justice." These principles were first articulated in The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. OPRR Report, Department of Health, Education and Welfare, 1979. The ethical principle, "One ought to treat humans with respect," is used to ground a requirement that in scientific research, prospective human subjects should not become subjects of a scientific research experiment until and unless they have given free and voluntary, informed consent to participate in that experiment.

IRB guidelines do not categorically rule out deception of human subjects in research, even though the ethical principle and concepts outlined above would appear to preclude it. Federal guidelines allow deception of human subjects in experiments by allowing a waiver of the informed consent requirement provided that

the IRB finds and documents that: (1) The research involves no more than minimal risk to the subjects; (2) The waiver will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration and (4) whenever appropriate the subjects will be provided with additional pertinent information after participation. (45 CFR 46.116 [d])

The risks to subjects must be "reasonable in relation to anticipated benefits, if any, to subjects and to the importance of the knowledge that may reasonably be expected to result." (45 CFR 46.111 [a] [2])

The IRB may also be guided in these cases by the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct (1992) as well as the American Psychological Association's Committee on Ethical Standards in Psychological Research's Ethical Principles in the Conduct of Research with Human Participants (1973).

Before the IRB approves an experiment that involves deception, it must consider the risk of harm to subjects relative to the benefits to the subjects and the importance of the knowledge gained; actual harm to subjects; the necessity of deception in the experiment; and whether subjects are adequately debriefed after the experiment. A central consideration is the harm to subjects caused by deception. To assess such harm requires careful attention to the kinds of deception involved.

The Use of Deception

All three of these cases present some element of deception of the subjects; the level of deception increases from the first to the third case. In all three cases, the subjects are deceived as to the purpose of the experimental activity. In Case 1, the investigator deceives by failing to completely reveal the nature of the study. In Case 2, the researcher lies to the subjects about the purpose of the experiment; they are told the purpose of the activity is to measure their attitudes when in fact the research activity involves investigating the degree to which they and their attitudes are vulnerable to group pressure. In Case 3, subjects are told that the purpose of the task is one thing when in fact it is another, which is to observe their "helping" behavior in response to someone they are deceived into thinking is in distress.

In Cases 2 and 3, the subjects are not only deceived about the purpose of the experiment but also about the status of other persons they interact with in the experimental group. Subjects are allowed to think all members of the group are experimental subjects when in fact some are confederates in the experiment. In Case 3, subjects are additionally deceived about the status of someone outside the group and are led to believe that person is in distress.

Moral Wrongs and Harms

The IRB needs to think carefully about the moral wrongs and the harm to subjects that come from the deception in these cases. Because social science research has a tradition of the use of deceptive techniques and because of the possibility that some lines of research may not be pursued without the use of deception of subjects, there may be a tendency of an IRB to underestimate the moral wrong or harm of deception or to be biased in favor of the benefits of the knowledge gained in the research. For that reason I will focus in some detail on the moral wrongs and harms to the subjects that can arise from such deception. Just as there are various levels of deception, so there are various kinds of moral wrong or harm to subjects that can arise from deception.

Intentional deception, as Sissela Bok argues in her book Lying, is as much a form of deliberate assault on persons as is physical violence: Both can be used to coerce individuals to act against their will. (Bok, 1989, Chapter 2, especially pp. 18-21) Deception is used in these cases to manipulate the beliefs and choices of the subjects as well as their responses to the situations. Deception is used to manipulate the subjects' choice to be involved in the experiments. Particularly in Cases 2 and 3, had subjects known the real purpose of the experiment, some might well have chosen not to participate. Deception is also used to manipulate the subjects' choice of responses to peers and to the situation within the experimental setting. Had subjects known that the confederates were confederates, they might well have withdrawn their initial disposition to trust the reactions of those peers, and they likely would have reacted to the confederates' behavior quite differently.

Deception fundamentally fails to respect persons and for that reason morally wrongs the person. Subjects in all these cases are not treated as rational beings capable of rational choice but are treated solely as means to the researcher's ends. Subjects of deception are always morally wronged in this way even if they do not realize or never realize they have been deceived. Of course, they may also be harmed by deception, even if they do not realize that they are being deceived.

It is important to distinguish between morally wronging persons and harming them; it is a category mistake to equate the two. For an early discussion of these points, see Tom Murray, "Was This Deception Necessary?" IRB: A Review of Human Subjects Research 2 (10, December 1980): 7-8. For a later discussion, see Ruth Macklin, "Autonomy, Beneficence and Child Development," in Barbara Stanley and Joan E. Sieber, eds., Social Research on Children and Adolescents: Ethical Issues (Newbury Park: Sage Publications, 1992). The point seems to have been lost on many social scientists. We morally wrong people when we violate fundamental moral principles in our dealings with them; for example, when we fail to respect them as persons, treat them unjustly, violate their rights, invade their privacy or gratuitously harm them. The concept of morally wronging a person is independent of its criterion of application. Some would argue the criterion involves treating persons solely as means; others would argue it involves only doing a person physical or psychic harm; some would include both. When we manipulate people by lying to them, we may morally wrong them, even though we may not harm them. Harming persons is not necessarily the criterion of morally wronging persons; it is one, but only one, way of morally wronging them. Moral wrongs to persons may be accompanied by harm to them; in that case, they have been morally wronged in more than one way.

This distinction between moral wronging and harming is blurred in the federal guidelines. The language of risks and harm in the guidelines may direct our attention away from concern for the moral wronging of subjects. Focus on the language of harm has blinded researchers to the distinction and led them to assume that where there is no harm there is no moral foul, and that any negative consequence of deception can be undone by undoing the harm. Ignoring the distinction makes it easier to justify deceptive research because the risk-benefit analysis takes into account only harms, not other moral wrongs. Much reasoning about the debriefing of human subjects in deceptive research misses the point because it assumes that the only wrongs to be addressed are the harms caused by the research. The harms of deception may or may not be undone by debriefing. The moral wrong of manipulating subjects by deception into acting against their will cannot be undone. I assume that moral wrongs other than harms are relevant to IRB deliberations regarding approval of experiments.

Moral Wrongs and Harms in Deception

Some of the moral wrong of deceptive experiments, then, comes from simply failing to treat persons with respect. Notice that consent to be morally wronged does not eliminate the wrong. If we succeed in getting people to agree to let us morally wrong them, that does not justify the wrong. Indeed, even if people were to give us permission to fail to treat them as rational persons and we subsequently do so by deceiving them, we have still wronged them as much as persons who consent to slavery are wronged if we enslave them.

In Cases 2 and 3, the nature of the experiments enabled by the deception may also be a source of wrong and harm, in particular to some subjects. Joan Sieber notes that one defensible justification for deception research is that it is the only way "to obtain information that would otherwise be unobtainable because of the subject's defensiveness, embarrassment, shame or fear of reprisal." (Sieber, 64)

One might think that Sieber's justification is precisely a justification for not allowing the deceptive research. In these two cases, deception allows the investigator to invade the privacy of the subjects without their knowledge or consent and to force the subjects (again without their knowledge or consent) to confront certain inclinations in themselves and to reveal them to their peers in the experiment and to the researcher. The inclinations revealed might, for some subjects, fall under Sieber's category of "otherwise unobtainable information." We are not given the controversial topic to be discussed in Case 2. That may be significant for the IRB to consider, since there may be particularly sensitive topics that would be especially stressful for some subjects or that would heighten their reluctance to have their private thoughts invaded. In Case 2, the subjects' inclinations to follow group pressure and group norms are revealed. In Case 3, the subjects' reluctance to help a person presumed to be in distress is revealed.

There are two sorts of wrong here. First, both experiments invade the subjects' private behavior and emotions. As Bok argues, learning about people's private behavior and emotions without their consent is akin to spying on them through keyholes and is not "less intrusive for being done in the interests of research." (Bok, 1989, 194) The arguments here draw from her discussion in the whole of Chapter 13. It is not always true that what you do not know cannot wrong you. With regard to spying through a keyhole, we think a moral wrong has been done even if the subject is unaware of the spying.

Harm to subjects is also likely. In these cases, the subjects will learn about the invasion of privacy since they will be debriefed, and by that means further harm may be done. People may well vary in the strength of their sense of privacy and the harm from having that privacy invaded. Some may be quite bothered by this invasion of their most intimate being, others not at all. Consequently a reasonable case can be made for the claim that the subjects are best positioned to judge the harm done. Since the subjects will be deceived and denied the opportunity to give voluntary and informed consent, they cannot be asked how much they think they would be harmed by the experiment. Some would argue that if we survey a representative sample of potential subjects about participation in such experiments, we can take their responses as reasonable evidence of what the subjects would say if they were given a choice to participate. (Sharpe et al., 1992, 589) However, given the variability of individual responses to such invasion, there is no reason to think the substituted judgment of the researcher or the IRB committee, even based on such evidence, is an accurate gauge of the harm done the individual subject by this invasion of privacy.

The second sort of harm, in both Case 2 and Case 3, is that caused by forcing persons to confront or reveal to others knowledge about themselves they may not want to confront and may find painful to live with. Baumrind appropriately calls this "inflicted insight" because the subject is given painful insights into his or her flaws without asking for such insights. See Diana Baumrind, "IRBs and Social Science Research: The Costs of Deception," IRB: A Review of Human Subjects Research 1 (No. 6, October 1979): 4. For example, Sieber notes research that suggests most people perceive others as "conforming sheep" but view themselves as not being influenced by peer pressure. (Baumrind, 1979, 65) Some subjects in Case 2 may be upset by being forced to confront that bit of self-deception or by revealing it to others. In Case 3, subjects may feel anxious, embarrassed, ashamed or guilty for not coming to the aid of a person they feel is in distress; they may feel the same when forced to confront that fact about themselves and have it revealed to others. Notice that mere participation in the experiments in Cases 2 and 3 may force this realization, whether or not the subjects are debriefed. For a graphic description of the negative effects on subjects of participating in helping experiments such as the one proposed in Case 3, see Tom Murray, "Learning to Deceive," The Hastings Center Report 10 (2, April 1980): 12. Again, people may vary on how much difficulty this unsolicited knowledge may cause them; the subjects and only the subjects are best positioned to judge the harm done to themselves.

Additional harm to subjects may occur when subjects realize they have been deceived in order to be used in an experiment. In these cases they will know they have been deceived since they will be debriefed. In general, when persons discover that they have been deceived and manipulated, the natural response is to feel a loss of control over their own actions, to feel used, to feel they have been played the fool and consequently to be resentful, distrustful and suspicious both toward those who deceived them and more generally toward all others. In this case that distrust may also be directed toward social scientists and scientific research in general. Sieber refers to research that indicates the extent to which college students who serve as experimental subjects now assume the researcher will be attempting to deceive them. (Sieber, 1992, 7, 65) There is no reason to assume that this suspicion and distrust is a momentary or fleeting reaction that disappears without a residual impact on the trustful disposition of subjects. We know in some instances (e.g., the Tuskegee syphilis experiment or radiation experiments conducted on citizens in the 1950s by the U.S. government) that the experience of discovering that they have been deceived into being experimental subjects had lasting effects on subjects' trust of medical and governmental officials. The loss of trust caused by such deception also has a way of spreading to those who were not subjects, but simply learn about the deceptive practice. See, for example, James H. Jones, Bad Blood: The Tuskegee Syphilis Experiment, 2nd ed. (New York: The Free Press, 1993), Chapter 14, for a discussion of the impact of the Tuskegee study on the trust of black Americans toward government health personnel and the subsequent impact of that on efforts to deal with AIDS in the black community. The makeup of the subject group may be relevant here. We do know that the bulk of social psychology research is carried out on college students. (Fisher and Fryberg, 1994, 418) The impact of being deceived is especially significant when the subjects are college students and they realize they are being deceived by a trusted faculty member who is also supposed to be a teacher and role model for the profession.

Debriefing and Harm

Some researchers may assume that any harm caused by deceptive research can be "wiped out" by debriefing after the fact. Debriefing includes dehoaxing (revealing the deception) and desensitizing (attempting to remove any undesirable consequences of the experiment). The aim of desensitizing is to restore the subject to an emotional state of well-being. Sieber notes evidence that desensitizing is not always effective in removing all the damage to self-esteem caused by the deceptive experiment. (Sieber, 1992, 72) For a candid description of the experience of debriefing subjects of a helping experiment, see Murray (1980), 12. Indeed, the debriefing may only increase the harm by ensuring the subjects are explicitly and exactly aware of the unflattering character traits and behavior they have revealed about themselves.

Voluntary Consent

As the IRB thinks about whether to approve any of these three research proposals, an important issue to consider is the degree to which the experimental subjects have given their voluntary and informed consent to participate in the experiment. Informed consent is a necessary but not sufficient condition of voluntary consent. That is, if the consent is not informed, it cannot be completely voluntary since, if subjects do not know what they are consenting to, they cannot be said to have voluntarily consented to do it. However, giving informed consent is not necessarily sufficient to ensure the consent is voluntary.

Suppose, however, the researcher proposes in these cases to ask subjects to agree to participate in an experiment with the understanding that they will not be told about the exact nature or purpose of the experiment until afterward and that there may be some deceptive elements in the experiment. The subjects at least voluntarily agree to be deceived, even if they are unclear about the details of the deception.

To assess voluntary consent under such conditions, it is necessary to know how these subjects are recruited and under what conditions. In Case 1, for example, how were subjects recruited to the workshop? Was selection for the workshop independent of the recruitment for participation in the study? For example, was the workshop part of mandatory training on environmental issues for employees? In such a setting, participants may not feel free to refuse to participate in the testing. Is the workshop staged only for the purpose of testing the impact of the workshop on changing attitudes and, if so, how were subjects recruited? In Cases 2 and 3, the subjects are brought to the laboratory, so they presumably are at least aware from the beginning that this will be an experimental activity. If they agree to participate after being told that some information about the experiment is being withheld until afterward and that some deception may be involved, then one might argue that reasonably voluntary consent was obtained.

But such an arrangement does not establish that the subjects' consent was sufficiently voluntary in the sense of given without undue influence or coercion. As an illustration, consider a practice of psychology departments in many universities, thought by many to be acceptable. These departments include in the syllabus of introductory psychology classes a requirement for the course that the student either participate as a subject in a certain number of departmental research experiments or write an additional paper for the course. (This requirement is a convenient way of ensuring plenty of experimental subjects for the department.) One such practice over a twenty-year period is described by Sharpe et al. (1992).

Although the students have an "alternative" to participating in experiments as subjects, it does not follow that their choice to engage in the experiment is uncoerced or not unduly influenced. As a practical matter of fact, many of the students may need to take this course; avoiding the course is not an option. Once in the course, there are coercive negative inducements to becoming a subject in order to avoid writing a paper. The negative consequences of writing another paper are clear to students; the negative consequences of serving as a subject may not be clear. Sharpe et al. (1992) report that virtually all students opt for the research (p. 586). In such circumstances, there may be a negative inducement to "volunteer" for the research. There are parallels in this practice to the dispensing of aspirin to poor black subjects in the Tuskegee syphilis experiment to gain their cooperation in a nontherapeutic experiment. Even if students knew in advance exactly the experiments in which they would be asked to participate, their consent, although informed, may in these circumstances be coerced and to some degree involuntary. Furthermore, in cases of deceptive experiments, students may need to decide between the syllabus alternatives before they know the nature of the experiments; it may be too late to back out of the experiments after they realize what they will be asked to do as subjects. If a similar practice is the source of experimental subjects in the three cases, then it is not at all clear the subjects are in a position to give voluntary consent, whatever the degree of informed consent in the cases.

Benefits of the Experiments to the Subjects

It is not clear that there is much in the way of benefits to the subjects in any of these experiments. A standard rationale for using college students as experimental subjects is that it gives them an increased appreciation of the discipline. A recent study finds no evidence that participation has that effect. (See Sharpe et al., 1992, 589) Some argue that the subjects receive, as a benefit of debriefing, a brief explanation of the current research understanding of the issues under investigation. But the subjects could learn that information by reading the research literature without participating in the experiment.

Given how little the subjects stand to gain, the harm or potential harm to the subjects, particularly in Cases 2 and 3, surely outweighs whatever benefits they receive.

The IRB is also called on to determine if the benefit to general knowledge justifies the deception of these subjects. If one accepts that charge to IRBs as morally legitimate, one of the first questions an IRB ought to ask, particularly in Cases 2 and 3, is, "Are these experiments necessary?" The experiment in Case 3 is clearly very similar to a large number of experiments on helping behavior already done over the last thirty years. Unless it can be shown that this experiment adds significantly to that research, it ought to be denied on those grounds alone. Does the experiment in Case 2 really add anything to our knowledge of the influence of peers on our willingness to assert or express our views on controversial topics? Studies of groupthink have been around for a long time. The Case 2 experiment ought also to be denied on those grounds alone.

But one ought to raise a more fundamental ethical question at this point about the IRB guidelines. The IRB is allowed by its guidelines to weigh the harm to research subjects in an experiment against the value of general knowledge gained in the experiment. In the case of experiments in which subjects are involved without informed, voluntary consent, the harm to subjects must be considered "minimal" by the IRB in order to approve the experiment. (CFR 46.116) The definition of "minimal risk" is

that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological exams. (CFR 46.102 (i))

One might ask if deception of humans is ever a minimal harm; if not, it should never be done.

The rationale for balancing minimal harm to subjects against the value of knowledge gained is the principle of beneficence invoked in the Belmont Report. The principle of beneficence in the Belmont Report is understood as an obligation expressed in terms of two rules: "Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms." (National Commission, 1979, 4)

There are several issues here. For a discussion of these points, see William Frankena, Ethics, 2nd ed. (Englewood Cliffs, N.J.: Prentice Hall, 1973), pp. 45-48. First is the issue of whether beneficence is an obligation or merely a good thing to do. One might at least agree that it is a prima facie obligation to be beneficent. Second is the issue of the exact content of beneficence. The principle of beneficence is usually thought of as an obligation to do good and avoid harm. William Frankena argues that this notion can be explicated as 1) one ought not to inflict harm; 2) one ought to prevent harm; 3) one ought to remove harm; and 4) one ought to promote good. (Frankena, 1973, 47) He argues that the notion of an obligation to maximize the good is yet a further principle, which presupposes but is not necessarily implied by the principle of beneficence. (Frankena, 1973, 45) A final issue is the lexical ranking of these obligations. Traditionally in ethics, the notion of not harming takes precedence over doing good, to say nothing of maximizing the good. If that is the case, then on this explication of beneficence, the fact that subjects are harmed in deceptive experiments should settle the issue for the IRB. Deceptive experiments should not be done.

The rationale of the Belmont Report for giving priority to "maximizing the good" over "doing no harm" is weak on this point. The report argues that although one should do no harm,

[E]ven avoiding harm requires learning what is harmful; and in the process of obtaining this information, persons may be exposed to risk of harm. Further, the Hippocratic oath requires physicians to benefit their patients according to their best judgments. Learning what will in fact benefit may require exposing persons to risk. (National Commission, 1979, 4)

Usually the interpretation of the "do no harm" principle is that one should not intentionally do that which one already knows will do harm. It is not a requirement that one minimize harm or that one try to avoid all harm by first attempting to discover everything that may cause harm even if that discovery process itself causes harm. Nor is the dictum a general rationale for doing harm to someone in order to prevent harm to others. To say otherwise is simply to collapse the distinction between avoiding known harms and minimizing all harms, known or unknown. In the specific case of treating a patient, the dictum may allow a rationale for subjecting the patient to risk in order to find a cure for an even greater harm to the patient. But there, the risks and benefits are all borne by the same person. With the exception of such cases, "Do no harm" is silent with respect to the issue of calculating tradeoffs of harm between persons.

In cases of deceptive experiments, we do not need to do the experiments to know the harm caused by deception. It is possible that deceptive experiments may make us aware of why humans do not alleviate harm, for example, in "helping situations." But to say it is permissible to sacrifice the interests of subjects of human experimentation without their knowledge or consent for the welfare of others in order to learn what is harmful brings us right back to a violation of the principle of respect for individuals. Notice the case is different when subjects freely give their informed consent to engage in experiments that may harm them but produce a good for others. In such situations, the principle of respect for persons is observed. One may conclude that IRBs may be allowing far more deceptive practice than is warranted by their own moral principles. (For earlier discussions of some of these issues, see Ernest Marshall, "Does the Moral Philosophy of the Belmont Report Rest on a Mistake?" IRB 8 [1986, 6]: 5-6, and Baumrind [1979].)

We have concentrated on the harm deceptive experiments may do to subjects and criticized the notion of the IRB trying to balance the harms to the subjects of deceptive experiments against general gains in knowledge. One issue we will not have space to address is whether deceptive research is even necessary. Social scientists themselves differ on whether good science requires such research. (Compare Sieber [1992] and Baumrind [1985].)

Broader Issues

The practice of deceptive research raises broader ethical issues that the IRB is not charged with considering but are legitimate concerns for the professional research community as well as other social institutions. I can only mention them here. There is the harm of deception to the researchers who engage in it. Thomas Murray, in his essay "Learning to Deceive" (1980), eloquently details a firsthand account of those harms. There are broader harms as well. The core values of integrity and devotion to the truth must necessarily be held by academics and by the university. Should the university really be in the business of teaching students how to deceive people? What impact does a generally acknowledged practice of deception have on the perception of the trustworthiness of the research community? What impact does a generally acknowledged practice of deception in the research community have on social perceptions of the acceptability of engaging in deception as long as the deceiver thinks it is in a good cause?

References

  • Baumrind, Diana. "IRBs and Social Science Research: The Costs of Deception." IRB: A Review of Human Subjects Research 1 (6, October 1979): 4.
  • Baumrind, Diana. "Research Using Intentional Deception: Ethical Issues Revisited." The American Psychologist 40 (February 1985).
  • Bok, Sissela. Lying: Moral Choice in Public and Private Life. New York: Vintage Books, 1989.
  • Fisher, Celia, and Fryberg, Denise. "Participant Partners: College Students Weigh the Cost and Benefits of Deceptive Research." The American Psychologist 49 (May 1994).
  • Jones, James H. Bad Blood: The Tuskegee Syphilis Experiment, 2d ed. New York: The Free Press, 1993.
  • Macklin, Ruth. "Autonomy, Beneficence and Child Development." In Barbara Stanley and Joan E. Sieber, eds., Social Research on Children and Adolescents: Ethical Issues. Newbury Park, Calif.: Sage Publications, 1992.
  • Murray, Thomas. "Was This Deception Necessary?" IRB: A Review of Human Subjects Research 2 (10, December 1980): 7-8.
  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (OPRR, Department of Health, Education and Welfare). The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. 1979.
  • Sieber, Joan. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards. Applied Social Research Methods Series, Vol. 31. Newbury Park, Calif.: Sage Publications, 1992.
  • Sharpe, Donald, et al. "Twenty Years of Deception Research: A Decline in Subjects' Trust?" Personality and Social Psychology Bulletin 18 (5, 1992).
  • U. S. Department of Health and Human Services, "Protection of Human Subjects." Code of Federal Regulations Title 45, Part 46 (Revised 1991).

The scientific value in studying human remains

Essential techniques for scientific study

The principles of ethical research on humans and the moral status of human remains

Human remains as property

Custodial responsibility for human remains

Moral justification of custodial claims

The moral yardstick

Conflicts of self-interest

Application to Cases

Two sets of issues run through these cases and raise ethical issues at the very core of the research activity of archaeologists and physical anthropologists, as well as of museum archaeologists and physical anthropologists. The first is whether anthropologists and archaeologists in the United States should comply with the Native American Graves Protection and Repatriation Act (NAGPRA). The act places conditions on the intentional excavation and removal of Native American human remains and sacred objects found on federal or tribal lands as well as the handling of remains in cases of inadvertent discovery. It also requires the repatriation of such materials already held in collections controlled by federal agencies and museums. Should scientists and museums comply with a law that arguably interferes with the world's acquisition of knowledge of the development of Native Americans and their societies and, more generally, of humans, human societies and their place in the world? If one acknowledges that the law is morally justified, shouldn't scientists simply accept that and immediately begin to comply fully until the law is changed? On the other hand, one might argue that scientists should not comply with such a law if the law is morally unjustified. This argument raises the prospect of civil disobedience by scientists to protest a morally unjustified law that impedes legitimate scientific activity.

The larger and prior issue, therefore, is whether such a law is morally justified. The law appeals to certain general moral considerations for its justification. If those general moral justifications stand, they may have implications for the ethical conduct of archaeologists, anthropologists and museums all over the world, whether laws similar to NAGPRA exist or not. This is part of the point raised in Case 6. (Indeed, such moral justifications have been invoked from South Africa to Australia to Israel to restrict the work of scientists who deal with ancient human remains.) On the other hand, if those general moral considerations are not sufficient to justify such laws, that fact undermines the laws' moral justification and moral claims made for restrictions on scientific activity in parts of the world where no such laws exist. I will focus my comments on the moral considerations raised by scientific research on human remains.

If one can establish that such laws lack moral warrant, that has implications for many of the particular ethical issues raised in this set of cases: 1) Should Justus comply with NAGPRA? 2) Should she challenge Dr. Hops for not doing so? 3) Should she reveal to others Dr. Hops' intent to thwart immediate and full compliance with NAGPRA? 4) Does Justus escape her ethical research responsibilities by moving to France, where NAGPRA does not apply? In this commentary, I shall focus on these fundamental issues rather than on the particular ethical issues raised in these cases, and primarily on the moral guidelines for dealing with human remains.

The scientific value in studying human remains

How should scientists regard and treat human remains? This question goes to the heart of the scientific activity of archaeologists and physical anthropologists since ancient human remains are a fundamental source of research data. We recognize that the study of human remains -- whether recent or ancient -- can have great scientific research value. Medical schools' use of cadavers has been essential for the training of physicians and scientists. Autopsies are invaluable in understanding disease processes in humans and in some cases important for forensic science. The study of skeletal and mummified remains has sometimes led to a better understanding of human health. More broadly, the study of fossilized human remains can yield important knowledge about the development of humans and human societies and the place of humans in the world of living things. The study of the remains of people of a particular culture and especially the study of human remains in the context of burial sites can help us understand the history and development of that culture and its relation to other cultures and can advance our understanding of the development of human societies. In some instances, we learn lessons from earlier cultures that are important for future generations.

For all these reasons, the study of human remains can increase our understanding of what it is to be human and the place of humans in the world in general and in our culture in particular. That knowledge has both intrinsic and instrumental value. All humans can benefit from it, and in that sense, all humans have a stake in its acquisition.

Essential techniques for scientific study

However, consideration of the value of such study, particularly in physical anthropology and archaeology, cannot be divorced from the scientific processes and techniques required to acquire that knowledge. There is a need to maintain permanent collections of remains that can be regularly studied by scientists and their students. Techniques of statistical research require as large a sample as possible. Bones once studied may be needed again when new analytic techniques become available, e.g., for dating bones or for extracting antibodies or genetic material to trace the evolution of specific diseases. Good science also requires preserving the evidence so that other researchers can reexamine it and check for misinterpretation, inaccuracy and researcher bias. (Meighan, 1994, 1992)

Recognizing the value of scientific knowledge and the necessary conditions for acquiring it, however, does not establish that its value is trump and that it automatically overrides other conflicting values and other ethical considerations. The value of the acquisition of knowledge from the study of human remains can conflict with an obligation to show basic respect for human remains, for the wishes of the dead or direct descendants and for a community's religious or cultural traditions, and with a sensitivity to the historical context in which human remains have been acquired.

The principles of ethical research on humans and the moral status of human remains

How should human remains be regarded? Should they be viewed solely as artifacts that are a means to important knowledge? As such, are human remains morally indistinguishable from, say, geological samples of metamorphic rock or marine fossils embedded in limestone? Is the notion of desecration of human remains based on anything other than the notion that the remains are human? Should the fact that they are human remains require that they be viewed with a respect that derives from the respect we accord humans? If so, they deserve treatment that in some ways is different from that of a sample of marine fossils. For the same reason, the moral status of human remains differs from the status of funerary objects buried with them.

In experimental research on living human subjects and animals, we have gradually recognized the fact that the value of scientific knowledge does not automatically override other ethical considerations. It is instructive to consider what principles we have come to accept in guiding research on human subjects that might be usefully extended to research on human remains. (See, for example, other cases in this collection that deal with research on humans and human tissues.)

We believe that humans deserve to be treated with special respect as living, rational, autonomous beings. Some would say that, at death, the body loses most of what commands our respect; what is essentially human is now gone. The body is no longer a living human capable of rational volition. The respect we owe human remains is simply a kind of penumbra effect, derivative of the respect we owe living humans and including an obligation to respect the wishes of individuals regarding their remains or the wishes of those who legitimately speak for them. When dealing with cadavers or subjects of autopsies, for example, we believe we have a duty to treat them as recently deceased humans and an obligation to respect any known wishes they expressed when alive regarding the treatment of their bodies. Thus, if we know that persons have explicitly expressed the desire not to have an autopsy or that their bodies not be used in medical research or medical education, then we believe those wishes ought to be respected. If next of kin make similar requests, we think we ought to respect those wishes as well. (On the other hand, when we find the unidentified body of a person we suspect may have been the victim of foul play, we think we "owe" it to the victim to use scientific study of the body to try to identify it, notify the next of kin and let the body "tell its story" in the interests of justice.)

More generally, we presume that we ought to obtain advance consent from persons while they are living, or consent from relatives or guardians after their deaths, before we use their bodily organs, perform autopsies or use their bodies for scientific research or medical education. There is a parallel here with the ethical guidelines we now embrace regarding experimental research on living humans. In general, we think that it is morally required to obtain informed consent from subjects before engaging in nontherapeutic research on them; if they are not in a position to give their consent, we think we are obliged to obtain proxy consent from next of kin or guardians who can speak for them. For all these reasons, we think it inappropriate to simply "harvest" bodies of the recently deceased from hospitals, funeral homes or the local graveyard, even though the practice may advance the cause of science. The principle is that we have a prima facie obligation to give greater weight to respect for the wishes of the deceased or their next of kin than to the value of the advancement of scientific knowledge.

Perhaps that obligation of respect "wears off" with time, when the possibility of obtaining direct or legitimate proxy consent disappears. Should our view of human remains be a function of the age of the remains or of the possibility of establishing legitimate relational connections? Should that view change as the age of the remains increases and any wishes of the deceased or of close relatives are lost in the mists of time? "Ought implies can," and saying that one ought to obtain the consent of the deceased or relatives presupposes that doing so is possible. In instances in which such consent is not possible, it may be acceptable to use the remains as the subject of scientific investigation and view them much as we might view any object of the natural world worthy of scientific investigation. Thus it is not wrong to study the remains of the 5,300-year-old "ice man" recently discovered in the Tyrolean Alps. (Incidentally, we can reasonably infer the wishes of the deceased, even for ancient human remains, in some instances. The Pharaohs of Egypt, for example, went to extraordinary lengths to try to prevent their remains from being disturbed. It is unlikely that they would have given their consent, had they been asked whether it would be acceptable after 3,000 years, say, to disturb their burial chambers, remove their mummified remains and make them the subject of study and exhibition.) In many instances, however, one cannot infer the wishes of the deceased, and no living person can legitimately claim to speak for the dead.

Human remains as property

The debate on how to view and treat human remains is sometimes cast in terms of the notion of property. People claim to "own" the remains of other humans. There are arguments over who "owns" the dead: scientists, museums or descendants; or more generally, who "owns" the past. (Messenger, 1989; Mcbride, 1983) People may assert their ownership of human remains in virtue of the fact that they are next of kin, the only living relative, a direct descendant or a "cultural descendant" of the person whose remains are in question. Some who have discovered remains claim to "own" them by virtue of having "mixed their labor" with them in the process of discovering them. Others claim that the remains of the dead belong to the living human race and that museums are, appropriately, the custodians of the remains on behalf of the human race.

I believe it is a mistake to cast the issue in terms of ownership. Are human remains the sort of things that should ever be viewed as property? Some economists and social philosophers to the contrary, it is not obvious that every material thing in this world is properly classified as potential or actual property. (What I have in mind here are the arguments over the moral acceptability of selling human blood or other body parts from either the living or the dead.) It may be the case that no one "owns" human remains and to argue over who owns them is to beg that question. Familiar and decisive moral arguments have been made against the claim that humans can be justifiably viewed as property and owned by other persons. Why should we assume that no one can, with moral justification, claim to own a human being but can be morally justified in claiming to own the remains of that human being? I believe that the familiar arguments for respect of humans create a presumption against the notion of ownership of human remains. Rather than debate that issue or beg the issue of whether human remains can be owned, it may be more useful to focus on the point of asserting ownership.

Custodial responsibility for human remains

Some assert an "ownership" over human remains when what they really wish to assert is that they should have "exclusive trustee or custodial responsibility" for those remains -- that is, that one person, group of persons or corporate entity has exclusive power to decide what happens to particular human remains. That is a weaker, more restrictive notion than ownership since it does not entail that trustees or custodians could simply do anything they wish with the remains, including selling them. This sense of custodial responsibility may, for example, describe the role of a Christian church in the maintenance of relics of saints. Cultural groups may see themselves as acting on behalf of the remains of ancestors but would never consider that they have ownership rights in the sense that they could dispose of the remains in any way they pleased. Museums may view themselves as acting as custodians for remains on behalf of the general public, and scientists may claim the same role on behalf of the scientific community.

Sometimes the notion of "ownership" of human remains is invoked by someone or some group to assert an even weaker claim. They do not argue that they should have exclusive custody, but that they should have some say in what should be done with particular human remains; they assert that they are entitled to be involved in negotiations regarding the disposition of remains. (See, for example, the Vermillion Accord on Human Remains, adopted at the World Archaeological Congress in 1989, which recognizes the legitimacy of the need for respect for human remains, for the wishes of the dead, for the wishes of the local community and relatives and guardians of the dead, and for the scientific research value of skeletal, mummified and other human remains when such value is demonstrated to exist. It further recognizes that agreement on disposition of human remains shall be reached by negotiation on the basis of mutual respect for the legitimate concerns of communities of the descendants as well as the legitimate concerns of the scientific and educational communities.)

Moral justification of custodial claims

What justifies a moral claim to be the sole custodian or trustee for particular human remains? I want to consider several criteria that have been put forward.

1. Acting as proxies for wishes of the dead

Based on the arguments above, the strongest case for an assertion of exclusive custody would involve circumstances in which the deceased person's wishes concerning the disposition of the remains are clear. A paradigm case of justification would be the instance in which individuals, before they die, freely and explicitly entrust another person or institution with their remains. Absent such an explicit directive, the presumption is that next of kin or direct descendants have the strongest claim to assume and maintain custody of human remains, on the ground that they are best positioned to speak as proxies for the dead. The argument is based partly on the claim that they best understand the religious and other cultural beliefs likely to shape the wishes of the deceased. Distant relatives, friends and members of the community in which the person actually lived have a weaker custodial claim. As the relational bonds become weaker, the presumption of a custodial claim also becomes weaker.

A graphic illustration of the violation of this presumption was the action of the U.S. Surgeon General in 1868, ordering all Army field officers to "harvest" Native American bodies from the battlefields and ship them to Washington. The remains were used for studies of Indian crania to determine whether the Native American was intellectually inferior to the white man. The Smithsonian has custody of more than 2,300 Native American skeletons obtained in this way. (See the Background section of the Native American Graves Protection and Repatriation Act, Report, 2nd Session.)

This action clearly violated respect for the wishes of the dead, their next of kin and their tribes. For these reasons, the initial collection of these human remains by U.S. officials cannot be morally justified, and that fact undercuts any custodial claim the Smithsonian has for retaining these remains. (Although the Smithsonian is subject to repatriation laws separate from NAGPRA, the moral point remains the same.)

2. Acting as guardians of the dead

Sometimes the claim to custodianship is legitimately based not on the wishes of the dead or persons reasonably positioned to speak on behalf of the dead, but rather on an appeal to respect for the moral or religious obligations of the living. In certain circumstances, the living, in virtue of their relation to the dead or because of cultural or religious beliefs, perceive an obligation to act as guardians of the remains of the dead or observe taboos that forbid the disturbing of the ancestral dead. Here the ground of the moral obligation shifts from a respect for the dead to a respect for the obligations of the living. In certain circumstances it is legitimately thought to be morally wrong to interfere with or to coerce persons to prevent them from carrying out what they perceive to be their moral obligations. If it is a part of one's familial or cultural tradition that the living have a role as guardians of the remains of their ancestors, that creates an additional legitimate claim for the custodial role.

In Case 1 in this set of cases, human remains of the Macaque band were intentionally excavated from the reservation of the Macaque tribe and deposited at the Museum of the High Plains. No one disputes that these remains were ancestors to living members of the band. Presumably the excavations were done without the consent and against the wishes of the band at the time. Although the living members of the band may not have personally known the deceased, they may legitimately claim to be their descendants and to be tied to them by religious and cultural traditions as well. They have a strong claim to be able to speak as proxies for the dead and to act as guardians of the dead. In such circumstances, it is reasonable to conclude that the claimed scientific value of research on these remains does not outweigh the custodial claims of the tribe. Hence, there was no moral justification for the original excavation of the remains, and there is no moral justification for the museum's continued custody of the material without the consent of the Macaque tribe.

The notion of desecration of grave sites and of human remains, it seems to me, is best understood from this perspective. The disturbing of remains and associated funerary objects is not so much a moral violation of the dead as of the religious beliefs of the living. On this analysis, the "harvesting" of Native Americans from the battlefields was also a violation of the religious beliefs of the living; it prevented them from discharging their obligations in caring for the dead. In Case 1, the initial excavation of the burial site was a desecration because of the sense of interference with the guardianship role of the current tribe members rather than the fact that the remains were in some sense violated. It is not clear that all acts of excavation of human remains automatically fail to treat human remains with respect any more than it is clear that all autopsies fail to treat human remains with respect. To the extent that an excavation fails to treat human remains with respect (for example, the looting of a grave site strictly for the purpose of recovering treasure or to sell the bones), that may be viewed as desecration. The notion of desecration, however, carries no moral weight over and above the moral concerns for violating the wishes of the dead or interfering with the guardianship obligations of the living.

In contrast to the paradigm case of justified custody, there are also paradigm cases in which it is no longer rational to assert a claim of moral privilege in assuming custody of human remains based on considerations of wishes of the dead or legitimate guardianship. The "ice man" mentioned above, it seems to me, is such a case. We know nothing of the "ice man's" wishes in this regard; no one can reasonably be said to speak for the remains or legitimately claim a religious obligation of guardianship for the remains.

3. Having the closest cultural affiliation to the remains

It is sometimes claimed that an existing cultural group possessing the closest cultural affiliation to the remains thereby has a morally relevant ground for asserting custodial rights to remains. (This notion is in fact included in the Native American Graves Protection and Repatriation Act as part of the criteria for ownership or control of human remains.) That is a much weaker moral claim.

Presumably the assumption is that proximity of cultural affiliation indicates that the current cultural group shares with that earlier culture a set of beliefs (including religious beliefs and beliefs about the treatment of human remains) that reasonably warrant the current culture's acting as proxies for the wishes of the deceased. However, even if one can establish that a certain group is culturally closer to the remains than any other group, why should that be considered morally relevant? "The closest" may not be close at all. A group may be the closest to the culture of the remains and yet may not in fact share a significant set of beliefs or practices with the culture of the deceased. Unless one can show fairly definitively that the cultural connections are sufficiently close to justify the claim that the existing cultural group can speak for the dead, such consideration is morally irrelevant.

One cannot assume that all peoples have held the view that human remains are sacred and are not to be disturbed. It is not sufficient to note that humans have been buried with ritual in order to infer anything about their beliefs regarding how human remains are to be treated or if it would be wrong to disturb them. Some Christians, for example, would not view mortal remains as sacred or hold that disturbing them is a desecration.

Even if one could establish a direct cultural affiliation, it is not clear what moral weight that ought to carry. Suppose, for example, one discovers a burial site containing human remains that were clearly those of sacrificial victims. (Consider the findings at Cahokia, near St. Louis, for example. [Iseminger, 1996]) It is possible that the victims shared the culture of the descendants and submitted to voluntary sacrifice. It is also possible that they were members of a rival and unrelated cultural tribe and were sacrificed involuntarily (murdered). In the latter case, it is not at all clear why any current cultural group that may be related to the culture at Cahokia has any moral claim to decide the disposal of the remains. Why should the descendants of those who murdered the victims have a moral claim on the custody of their remains? It may be as reasonable to think that these victims would prefer someone other than the murderer's descendants to make the decision, just as today, murder victims might well want a forensic anthropologist to examine their remains and "tell their story."

Sometimes the assertion is not that proximity in cultural affiliation allows one to act as proxy for the wishes of the dead, but that current members of a cultural group own their culture and have exclusive claim to the custody of that culture. Only such members can decide to share the culture and cultural artifacts with the rest of the world. Native American heritage and culture belong to Native Americans, and it is up to Native Americans to share that knowledge with others on their own terms. Consequently, excavation and study of human remains of the culture cannot be justified on the grounds that the world has a right to this knowledge. Nor can it be justified on the grounds that scientific study provides more reliable or accurate understanding of Native American culture than that embodied in the oral and nonverbal formulations of Native American cultures. (See Zimmerman, 1994)

4. Owning the tribal land on which remains have been discovered

NAGPRA suggests still another criterion that might have moral weight in determining who should have custody of human remains: If the remains are discovered on the tribal ground of a cultural group, then that group should have custody.

But suppose human remains that are carbon dated to be 10,000 years old are found on tribal ground, although the tribe has been known to occupy the land for no more than 1,000 years. Consider, for example, a case in 1991 in which a road crew in Idaho found a skeleton on ancestral ground of the Shoshone tribe that turned out to be carbon dated at 10,600 years. It was immediately confiscated by the State of Idaho and returned to the Shoshone tribe, which has been in Idaho for no more than 1,000 years. In a similar case, in 1988, an 8,000-year-old skeleton was discovered on grounds occupied for only the past 500 years by southern Utes, who allowed DNA testing on the remains. It was impossible to establish that the skeleton was related to any modern tribe. Nevertheless, the remains were turned over to the tribe in 1993. (New York Times, September 30, 1996, A12 1-6; New York Times, October 22, 1996, A1)

In an even more dramatic case, in August 1996, an 8,400-year-old skeleton was discovered in Kennewick, Washington. It appears to have Caucasian features, and archaeological analysis indicates it does not match any Native American tribes. This may be an extremely important set of remains for understanding the development of humans in North America. Nevertheless, the Umatilla tribe, under NAGPRA guidelines, can claim, and has claimed, the skeleton for immediate reburial. DNA testing of the remains has been halted, and the disposition of the remains is unresolved at this writing. ("Science Scope," 1996; Slayman, 1997)

Although the tribe in each of these cases may have legal standing under state or federal repatriation laws, it is not clear why that tribe has a stronger moral claim to those remains than anyone else. At some point, the relational connection between an ethnic group and human remains is too faint and distant to have any moral standing.

The moral yardstick

One might reasonably argue, therefore, that the moral justification for custody claims for human remains is to be found on a continuum. At one extreme, the wishes of the deceased, if known, or of next of kin or direct descendants justifiably take precedence over the promotion of scientific knowledge. The same holds true for respect for the religious or cultural beliefs of persons with clear and close cultural or religious connections to the person whose remains are in question. Here the guidelines of NAGPRA seem reasonable and justified. At the other extreme, human remains whose ethnic connections are lost in the mists of time are more reasonably viewed as "citizens of the world," and the presumption for custody favors scientists rather than any contemporary cultural group. Here the NAGPRA guidelines seem unwarranted and unsupported by the moral arguments. If I am right, then some parts of NAGPRA that assign "custody" of human remains are morally warranted and some are not.

NAGPRA, Section 3, Ownership, states: "The ownership or control of Native American cultural items which are excavated or discovered on Federal or tribal lands after the date of enactment of this Act shall be (with priority given in the order listed) (1) in the case of Native American human remains and associated funerary objects, in lineal descendants of the Native American; or (2) in any case in which such lineal descendants of the Native American cannot be ascertained, and in the case of unassociated funerary objects, sacred objects, and objects of cultural patrimony: (A) in the Indian tribe or Native Hawaiian organization on whose tribal land such objects or remains were discovered; (B) in the Indian tribe or Native Hawaiian organization which has the closest cultural affiliation with such remains or objects and which, upon notice, states a claim for such remains or objects; (C) if the cultural affiliation of the objects cannot be reasonably ascertained and if the objects were discovered on Federal land that is recognized by a final judgment of the Indian Claims Commission as the aboriginal land of some tribe (1) in the Indian tribe that is recognized as aboriginally occupying the area in which the objects were discovered, if upon notice, such tribe states a claim for such remains or objects, or (2) if it can be shown by a preponderance of evidence that a different tribe has a stronger cultural relationship with the remains or objects than the tribe or organization specified in paragraph (1), in the Indian tribe that has the strongest demonstrated relationship, if upon notice, such tribe states a claim for such remains or objects." What I have argued is that the moral justification for Sections 2(A), (B) and (C) is much weaker than for Section 1.

Conflicts of self-interest

It is important to recognize that the issues here are not solely about the moral status of human remains or high-minded arguments over moral obligations to the dead or the living. Disputes couched in terms of the moral issues are sometimes really about the self-interest of the parties involved. Scientists who have trained for long years to practice their profession may see their careers disrupted if they cannot gain access to the materials that are central to their work. On the other hand, the claims of Native American groups may sometimes have less to do with respect for the dead than with establishing claims to land. Thus the Hopi and Zuni describe as "cultural thievery" recent claims of the Navajo to be descendants of the Anasazi and thus to have a claim to collections of human remains in various museums in the United States. They suggest the real motive of the Navajo is to reclaim almost 2 million acres of land given to the Hopi and Zuni by the federal government. (Archaeology, 1996)

Application to Cases

What are the implications of these considerations for the cases? I will mention only a few.

Case 1

It is reasonable to assume that the bones Justus is working on are, as they appear to be, clearly associated with the Macaques Tribe, and indeed the current members may be lineal descendants of the persons whose remains are in question. The law requires return of the bones, and the moral justification for custody clearly lies on the side of the Macaques. Possession here is not nine-tenths of the law. Morally speaking, it is no more up to Justus or Dr. Hops to decide when or whether to return the bones than it is up to a person who has stolen a car or knowingly received stolen goods to determine when or if to return them. Whether or not additional work on the bones constitutes further desecration is morally irrelevant. Even if it does not, that is no justification for delay. Additional work with the bones may increase our scientific understanding of the Plains tribes. But that fact, by itself, no more justifies continuing to retain and work on the materials against the will and without the consent of the current tribe than the possible benefit of research on syphilis justified continued research on black subjects in the Tuskegee experiment without the informed consent of the subjects. The fact that returning the remains interferes with the career development of Justus or Dr. Hops is also morally irrelevant.

Unfortunately for science, Justus should stop work and return the remains to the Macaques unless she can win their cooperation to allow further work. There are indeed instances of collaboration between archaeologists and native cultures in which reasonable agreements have been reached respecting the concerns of the culture and still allowing scientific work to be done. (See Archaeology, 1995) If properly approached, the Macaque band might recognize the value of learning more about their cultural inheritance. (In this case, the way the Macaque band has been treated since 1934 may have hopelessly poisoned the situation.) If Justus' right to continue the research cannot be morally defended, then no funder could morally justify funding the research.

Notice how different the situation is morally if the remains are 10,000 years old and are being claimed, legally under NAGPRA, by a group that is known to have occupied the area for only 1,000 years. The moral claim to return the bones is much weaker (although not the legal claim.)

Case 2

Dr. Hops is in violation of the law here. If, as we have argued, she does not have the moral justification for custody on her side in this case, she cannot make a legitimate case for disobeying the law or for some sort of civil disobedience. Her actions cannot be morally defended; in this case, moral grounds are not sufficient to justify delaying reporting about the museum's collection or repatriation. This situation does raise ethical issues for Justus. What are her moral obligations to report a mentor who is in violation of a federal law? Would Justus be justified in revealing -- or is she obliged to reveal -- this secret to the Macaques? (See Bok, 1989)

Case 4

One of the consequences of a practice of disobeying the law is the encouragement of others to adopt the practice as well. If archaeologists develop a practice of violating the law whenever they think disobedience is justified by the importance of scientific research, then they should not be surprised if that encourages others to violate the law, particularly those who think the archaeologists' disrespect for the law justifies their own violation of the law, as in the case of Ten Killer's action. Archaeologists who behave in this way undercut their own appeals to respect for the law when, for example, they attempt to stop looters who rob grave sites and archaeological sites for the treasure and who destroy the archaeological evidence in the process.

Case 6

If we have argued correctly, Justus and all archaeologists have an overriding moral obligation to respect the custody claims of Native Americans in the kinds of cases we indicated and a much weaker moral obligation in cases of the sort where claims to speak for the dead or act as guardian for the remains are much less legitimate. One cannot escape a moral obligation simply because no parallel law legally requires it. Moving to France may help Justus avoid a legal prohibition, but her moral obligations remain unchanged.

References

  • "Barrow Burial." Archaeology 48 (4 July/August 1995): 22.
  • "Repatriation Standoff." Archaeology 49 (2, March/April 1996): 12-13.
  • Bok, Sissela. Secrets. New York: Vintage Books, 1989, Chapter XIV, "Whistleblowing and Leaking," 210-230.
  • Iseminger, William R. "Mighty Cahokia." Archaeology 49 (3, May/June, 1996): 30-37.
  • Mcbride, Isabel, ed. Who Owns the Past? Melbourne: Oxford University Press, 1983.
  • Meighan, Clement W. "Burying American Archaeology." Archaeology 47 (6 November/December 1994): 64-68.
  • Meighan, Clement W. "Some Scholars' Views on Reburial." American Antiquity 57 (4,1992): 705.
  • Messenger, Phyllis Mauch. The Ethics of Collecting Cultural Property: Whose Culture? Whose Property? Albuquerque: University of New Mexico Press, 1989.
  • "Science Scope." Science 274 (15 November 1996): 1071.
  • Slayman, Andrew L. "A Battle Over Bones" Archaeology 50 (1, January/February 1997): 16-23.
  • Zimmerman, Larry. "Sharing Control of the Past." Archaeology 47 (6, November/December 1994): 65-68.

The case of "Kennewick Man" raises a complex set of ethical and legal issues. It also illustrates the broader debate over the ethical and social-political issues surrounding the relation of archeology and archaeologist to indigenous peoples and the appropriateness of laws such as the NAGPRA to resolve these issues.

This case features arguments over who has "legitimate" claims to the remains. It is important to clarify the use of the expression "legitimate claims." "Legitimate claims" can refer either to "legally" legitimate claims (Who should have the legal right to determine the disposition of these remains?) or "morally" legitimate claims (Independently of the legal question, who, if anyone, is morally justified in determining the disposition of these remains?). Deciding the legal question does not necessarily decide the moral issue. Establishing a legally legitimate claim to something does not settle the issue of the moral legitimacy of that claim. Some laws are unwise, and others are unjust. Thus, standing behind the legal debate and the NAGPRA legislation are moral arguments over who has a morally legitimate claim to decide the disposition of the remains. I will confine my remarks to an assessment of one moral argument regarding the disposition of the remains in this case. (For a broad discussion of some of the ethical considerations in research on human remains, see my commentary on "With Bones in Contention: Reparation of the Human Remains" in Research Ethics: Fifteen Cases and Commentaries [Bloomington, Ind.: Association for Practical and Professional Ethics, 1997], pp. 149-161. See in particular the moral arguments against violating the wishes of Native Americans who have a direct relationship to human remains.)

One moral claim asserted in this case is that those who are "related" to the Kennewick remains have the strongest moral claim on determining the disposition of the remains. The notion of "related" is crucial here. One sense of "related" is that of being a direct, close descendant who actually knew the person whose remains were under discussion. Hence the rhetorical question, "What if these were your grandparents that were being dug up and studied?" There are, of course, very strong moral arguments for respecting the wishes of those who are "related" in that sense. (Ibid.) Recognizing the power of such moral arguments does not preclude the possibility that the relatives might permit study of the remains or that they might not object to such study. (To illustrate the diversity of attitudes toward treatment of the remains of close relatives, consider the fact that Heidelberg University, in 1989, claimed to have had permission from next of kin to use 200 corpses, including 8 children, in automobile crash tests. See C. E. Harris, M. Pritchard and M. Rabins, Engineering Ethics: Concepts and Cases [Belmont, Calif.: Wadsworth Publishing, 1995], pp. 183-184.) However, the remains in this case are not closely related to any living group. The rhetorical question is inappropriate here.

Various remote senses of "related" are captured in the NAGPRA requirement: "In the case of human remains inadvertently discovered on federal land, NAGPRA regulations require the government to notify Indian Tribes 'likely to be culturally affiliated with' the remains, tribes 'which aboriginally occupied the area,' and 'any other Indian tribe. . . reasonably known to have a cultural relationship to' the remains." (Andrew Slayman, "A Battle Over Bones," Archaeology 50 [January/February 1997]: 16-23, p. 17.)

One moral argument supporting the concern for cultural affiliation is that disposition of the remains by someone other than the culturally affiliated may violate the religious beliefs of the culturally affiliated people. Two anthropologists articulate the argument:

People cannot own people, even the remains of dead people, according to virtually all Native American traditions. Thus it is inappropriate for anyone, Indian or otherwise to possess such remains for whatever purpose. . . . [T]he rights of those being studied take precedence over the rights of anthropologists who study them. . . when that act interferes with or is contrary to the religious and cultural beliefs of those being studied or their descendants. (Anthony Klesert and Shirley Powell, "A Perspective on Ethics and the Reburial Controversy," American Antiquity 58 [2, 1993]: 249-250.)

Presumably, the argument applies only to remains of Native Americans. Since the "religious and cultural beliefs of those being studied" are invoked to justify such a ban on the study of remains, if other, non-Native American groups have different beliefs, then by the argument, it would be inappropriate to impose the beliefs of Native Americans upon them. That is, Native Americans could not argue that they should direct the disposition of the remains of peoples who do not share their religious beliefs.

The argument is invoked in this case. The Umatilla are one group who have asserted a legal claim to Kennewick Man. Armand Minthorn, a Umatilla trustee and religious leader, has written: "Our religious beliefs, culture, and our adopted procedures tell us this individual must be reburied as soon as possible." (Slayman, "A Battle Over Bones," p. 18.) The argument seems to be as follows:

  1. All Native Americans now living hold the view that "People cannot own people, even the remains of dead people." Furthermore, any Native Americans who have ever lived in the past also held this view.
  2. Any ancient human remains found in North America must be the ancestors of current Native American populations.
  3. If anyone were to excavate, study or maintain a collection of any ancient human remains found in North America, they would be violating the cultural and religious beliefs of Native Americans.
  4. Respect for the religious beliefs of Native Americans in this matter overrides all other considerations including pursuit of scientific understanding of the population of the North American continent.
  5. Therefore, it is inappropriate to excavate, study or maintain a collection of any ancient human remains found in North America.

With respect to Premise 1, one might wonder whether the religious and cultural beliefs of contemporary Native Americans are so univocal. The Colville tribe also has asserted a claim to Kennewick Man. However, Adeline Fredlin of the Colville tribe's archaeology and history department reportedly said, "[The] Colville are interested in further study of ancient skeletons found in the region by nondestructive analysis." (Ibid.)

Premise 1 must include the proviso that all Native Americans who ever lived in the past also held this view regarding remains. Otherwise one has a situation in which contemporary Native Americans are imposing their religious beliefs on those who lived in the past and had different religious beliefs, or, at a minimum, in which contemporary Native Americans are failing to respect the different religious beliefs of earlier inhabitants of North America. One wonders whether there is really sufficient evidence to assert such a sweeping claim regarding the religious beliefs of peoples who lived 9,000 years ago. It may be true, but how do we know?

To the degree to which the argument rests on the religious beliefs of the culturally affiliated, Premise 2 is the crucial question. Is it really true that all human remains on this continent are the ancestors of the current Native Americans and can therefore be assumed to have shared the religious beliefs of contemporary Native Americans?

The first anthropologist to examine the Kennewick Man found "a long, narrow skull, a projecting nose, a high chin, and a square mandible. The lower bones of the arm and legs were relatively long compared to the upper bones. . . traits. . . not characteristic of modern American Indians in the area though many of them are common among Caucasoid peoples." (Ibid., p. 16.) A second anthropologist viewed the skull and "concurred the skeleton was Caucasian." (Ibid., p. 17.) A third anthropologist examined the bones and concluded the skeleton "cannot be anatomically assigned to any existing tribe in the area or even to the western Native American type in general. . . . It shows some traits that are more commonly encountered in material from the eastern United States or even of European origin, while certain diagnostic traits cannot presently be determined." (Ibid.)

The director of the Center for the Study of First Americans at Oregon State University has suggested "'Kennewick Man could have been part of a different migration' -- that is, his forebears may have come not from North Asia like those of other Native Americans, but from other parts of Asia or even Greenland." (Science 275 [March 7, 1997]: 1423. See also Science 277 [July 11, 1997]: 173.)

Minthorn gives one response to the scientists' claims: "If this individual is truly over 9,000 years old, that only substantiates our belief that he is Native American. From our oral histories, we know that our people have been part of this land since the beginning of time." (Slayman, "A Battle Over Bones," p. 18.)

The moral claim to the right to determine the disposition of his remains is based on the assertion of a relation of Kennewick Man to a group. It appears that the assertion of a connection comes down to an assertion of empirical fact (that only ancestors of Native Americans lived on this continent and that Kennewick Man must be related to contemporary Native Americans). However, no empirical evidence is allowed to count against the asserted empirical fact. In such circumstances, it is fair to ask whether this is an empirical claim at all.

One might argue that the assertion of a connection is an article of religious faith and that disregarding it would violate and fail to respect religious beliefs. If such is the case, then this situation may be closer to the classic issues involving the medical treatment of Jehovah's Witnesses. It is beyond the scope of this commentary to comment in detail about the epistemological status of such claims or the degree to which principles of religious toleration ought to be invoked. Suffice it to say that such considerations would change the parameters of the debate. The argument then is no longer one over the scientific issues but an argument in political philosophy.

Scope and nature of the study

Informed consent and assent

Judy's moral obligation for subjects' welfare

Three claims for excusing an obligation for subjects' welfare

Privacy, confidentiality and parental responsibility

Scope and nature of the study

Judy proposes to study fourth, sixth and eighth graders who have been "exposed to violence in their community." The scope and nature of the study are important in thinking about the ethical considerations and implications of the study. The children range widely in age and maturity, and they will presumably mature considerably by the end of the four-year study. The subjects' capacity for informed consent and their concern for confidentiality and privacy may vary initially and may change over the course of the study.

What does the category of "community violence" include? Does it include domestic violence within their own nuclear families? Does it exclude domestic violence within the nuclear family, but include violence in the extended family and all other exposures to violence in their community? Does it exclude all family violence (nuclear and extended) but include all other violence in their immediate community? Does the study propose to investigate violence suffered directly by students at the hands of family, neighbors and strangers; does it propose to study the impact of merely observing or being aware of such violence; or will it attempt all of the above?

Through individual interviews and group-administered surveys, the study proposes to measure the subjects' amount and frequency of exposure to community violence as well as their psychological, behavioral and adaptational responses to violence. If the study includes experiences of violence in the nuclear or extended family, subjects may be probed about experiences of child abuse and other violence by family members, among other things. If it focuses only on violence witnessed by the students, including violence in the family setting, the investigator may still elicit information about domestic abuse in the subjects' families. In either case, this probing certainly constitutes an invasion of privacy and has implications for obtaining informed consent from the parents or appropriate family members. Will parents clearly understand that their children's participation in the study may result in invasion of the family's privacy?

If the study focuses only on the student's direct involvement in violence outside the family, the investigator may elicit accounts of incidents of violence the subject has committed or that were committed against the subject. At the extreme, that may involve admitting to participation in gang activity or criminal activity, or to being the victim of sexual assault. If the study focuses only on the student's awareness of violence in the community, students may admit to witnessing violent criminal behavior. These questions make the students extremely vulnerable and have serious implications for the process of obtaining informed consent from both parents and students as well as for the level of confidentiality maintained in the study.

Will students understand that they may reveal such information as subjects? Will they be clear about the degree of confidentiality maintained? Will parents expect to be informed of the subjects' behavior?

The study solicits students' psychological responses (depression, suicidal thoughts), behavioral responses (drinking) and adaptational responses (delinquency and sexual promiscuity). These questions raise all sorts of issues of privacy, confidentiality, informed consent and the researcher's responsibility for the welfare of subjects. Judy may become aware of many instances of dangerous and even illegal behavior that she is legally required to report. Will students understand that they may reveal such information during the study? Will they expect it to be kept confidential? Will parents expect to be given this information? Will the parents expect Judy's first concern to be the welfare of her subjects? Will parents assume Judy has the expertise needed to act for their child's welfare if she discovers the child is engaged in risky behavior? Does Judy have an obligation to intervene on behalf of the subjects even if that means weakening or ruining the study? Or is Judy's paramount obligation not to the subjects' welfare but to carrying out the research in the best manner possible?

Judy can anticipate many of these issues before the protocol is designed. It may be possible to practice "preventive ethics" and design the protocol to avoid some of the ethical issues that may otherwise develop as the study progresses.

Informed consent and assent

This research will be highly invasive of the privacy of the children and may be equally invasive of the privacy of other family members. The study may also place the subjects at considerable risk if their confidences about sensitive matters are violated. These facts alone are one argument for requiring the child's assent as well as the family's permission. The research is not likely to provide any direct benefit to either the children or their families, although it is possible that it will yield generalized knowledge of benefit to all children. In such circumstances, federal guidelines require that the researchers solicit the "assent" of minor children to participate in research if the children are capable of assent. (Title 45, Code of Federal Regulations 46.408 (a) 1991) Parental or guardian permission is also required. (45 CFR 46.408 (b) 1991) In this case, the quality of the subjects' assent and of the parents' permission is an extremely important moral issue.

The notion of assent of minors and how it relates to informed consent is unclear. (Macklin, 1992, 90) Fourth graders are not adults. They may not be capable of fully appreciating the limitations of the trust they should place in the researcher. They may not understand the risks when confidences are breached. They may underestimate how their attitude toward invasion of privacy may change as they mature. They may not be capable of balancing all these considerations as well as an adult, who will have greater experience and maturity. (See Macklin, 1992, 101, on understanding privacy.) Nevertheless, even fourth graders, and clearly sixth and eighth graders, are capable of understanding a great deal if information is presented appropriately. (Thompson, 1992, 61) It is morally significant that their assent be sought only after they receive that information.

The variability of ages, developing maturity levels and the length of the study all complicate the assent issue. Some subjects will enter the study as fourth graders; others will end the study as juniors or seniors in high school. Children become much more sensitive to privacy as they move into adolescence. (Thompson, 1992) Subjects may assent to invasions of privacy as fourth graders that they would not assent to as eighth graders. The law recognizes the capacity of young adolescents to make adult-like decisions. In some states, minors can legally refer themselves for treatment of venereal diseases or alcoholism and for contraceptives or abortions without the knowledge or consent of their parents. (Brooks-Gunn, 1994, 116; Rogers et al., 1994, 3)

Children in this study have a right to understand what they are assenting to. Judy should provide the subjects with at least the following age-appropriate information:

  1. An explanation of the nature and purpose of the research and of the nature and role of the researcher. The children should understand that the research activity is not intended to benefit them and that the researcher's primary concern is not to benefit them as subjects. This point is particularly important for younger children, who may tend to think that an authority figure who is called doctor (Ph.D.) and meets with them in the school setting has a role akin to their family physician or a teacher and is acting in their best interests.
  2. A clear understanding of the sorts of information that they will be asked to share.
  3. A clear understanding that they have a right to refuse to answer any of the questions raised in the study.
  4. A clear understanding of the level of confidentiality of the information they share and the limits of that confidentiality. They should clearly understand the circumstances, if any, in which Judy will break their confidences and with whom she might share that information, including parents, health officials or legal authorities.
  5. A clear indication that they can drop out of the study at any point.
  6. Since the maturation levels of the subjects may change significantly over four years, Judy should consider providing that the subjects' assent will be renegotiated at the beginning of each year of the study.

Parents have a right to know what they are giving permission for, and a careful procedure for informing them is required:

  1. Judy should take pains to ensure that parents understand that the research activity is not intended for the therapeutic benefit of their children.
  2. Judy should tell parents specifically the sorts of information she will collect from the children and with whom it might be shared. Parents should be clear about the kinds of information Judy is legally required to report to authorities, such as suspected child abuse.
  3. Parents should understand what confidential information gained from their child, if any, Judy will or will not share with them. In particular, parents should be clear about how Judy will deal with information about serious psychological symptoms or risky behavior manifested by their child and whether she will disclose information about the child's self-referrals. If such information is not shared with the parents, can the parents expect that Judy will seek interventions on behalf of the child, where appropriate?

All of this implies that Judy must plan for a sophisticated process of informing potential subjects and their parents before obtaining assent and permission; merely sending out permission slips to be signed and returned will not be sufficient. She may also need to contact a much wider pool of potential subjects than she otherwise anticipated. It is likely that when parents understand what she will be doing and students are clear about their right to withdraw from the research, more parents will refuse to give their permission, and more subjects will withdraw or be dropped during the course of the research.

We have argued that Judy has a moral obligation to make clear to both prospective subjects and their parents exactly what they can expect from her should she become aware that a child is in a high-risk situation. What exactly should she be prepared to do in such situations?

Judy's moral obligation for subjects' welfare

1. The case for doing nothing

Judy may argue on three grounds that she has no moral obligation to do anything to safeguard the welfare of her subjects during the research. First, her research simply involves watching what would have happened to these children anyway, whether or not she was conducting the research; her study is a kind of natural laboratory. Whether or not Judy conducts the research, the same children would have engaged in the same risky behavior in exactly the same way, without their parents' knowledge. She has an obligation not to harm subjects, but she is not causing any harm to them.

Second, if she intervenes, she may not be able to maintain the integrity of her research and research program. The long-term benefits of the study would outweigh anything that might happen to subjects because she did not intervene.

Third, suppose that Judy obtains permission from parents for their child to participate in the study with the clear understanding that should Judy discover that their child is engaging in high-risk behavior, she will not act on that information in any way, unless required by law. The child also assents. Judy will not inform parents, and she will not take steps to assist the child, even if she knows that she is the sole adult who is aware that the child is in a threatening situation or is engaging in self-destructive activity that presents a clear and present danger to the child (anorexic behavior, heavy drinking, drugs, sexual promiscuity or gang activity). Judy will simply carry on her research.

Judy may take the position that she is not morally obligated to do anything, given that she is doing no harm to the children, that she is armed with parental permission and child assent to a noninterventionist policy on her part, and that, given such a policy, she will maximize benefits by conducting the best study possible. She has no obligation to intervene on behalf of children at risk; all things considered, she has an obligation to refrain from intervening in order to maximize benefits. She is satisfying the principle of beneficence as articulated in the Belmont Report, which provides ethical guidelines for research on human subjects. (National Commission, 1979)

The first part of this argument parallels the argument given by researchers in the Tuskegee syphilis experiment (Jones, 1993). Unlike Judy's subjects, those adult subjects had not given their informed and voluntary consent to be experimental subjects. Nevertheless, Judy's position is subject to some of the same criticisms.

2. The case for doing something

To see why Judy is mistaken, consider for a moment how we would define the moral obligations of ordinary persons (who are not researchers) in a somewhat parallel situation. Imagine that Judy is not a researcher but an ordinary citizen who walks through a park and notices a fourth grader she knows, sitting alone, playing Russian roulette with a gun. She realizes that she is the only adult in the area aware of the child's activity, and yet she takes no action but walks on by. Normally, we would say Citizen Judy has an obligation to intervene to stop the child from harming himself, if she can do so at minimal risk to herself.

On what moral grounds would we make such a claim? The principle of beneficence is one moral principle that we recognize as applicable to all persons. That principle states that we all have an obligation to promote the good and to prevent or avoid doing harm.

Notice that an obligation to maximize benefit presupposes this obligation. Unless we already have an obligation to promote the good and avoid harm, we could not have an obligation to maximize the good. The obligation to promote good and avoid harm can actually be regarded as a set of prima facie obligations to: 1) avoid doing harm, 2) prevent harm, 3) remove harm and 4) promote good. Furthermore, these obligations are listed in their order of stringency. The stringency has in part to do with the fact that it takes less effort to avoid doing harm than it does to prevent a harm, to remove a harm or to do good. I should not push children off the end of a pier; I have a less stringent obligation to buy them an ice cream cone. If we can prevent or remove a harm at little risk or cost to ourselves, we have an obligation to do so. If I am standing on the pier and notice that a child has fallen into the water, and if I can save the child by throwing a lifeline, I have a moral obligation to do so. The implication of this articulation of the principle for Citizen Judy is that she not only has an obligation to avoid harming the child, she also has an obligation to try to prevent the harm about to happen, if she can do so at minimal risk. (Frankena, 1973, 45-47)

In her research, Judy may become aware that one of her subjects is engaging in risky behavior such as contemplating suicide, practicing unprotected sex, engaging in heavy drinking, abusing drugs or beginning to run with a gang. Suppose that Judy can, at minimal risk to herself, do something to prevent the child from coming to harm but decides to do nothing about it. Unless Judy can show how her status as a researcher excuses her from this obligation, she is mistaken in thinking she can simply do nothing. One might argue that the difference is that Judy is a scientist and, as such, she has professional moral obligations that override the obligation of ordinary morality to prevent harm to another. Judy might give three different claims to support that position.

Three claims for excusing an obligation for subjects' welfare

1. The obligation to welfare is overridden by the obligation to do research.

Judy might argue that scientists have a professional obligation -- and indeed an overriding obligation -- to conduct their research in the most rigorous and scientifically sound manner possible. The best contribution a scientist can make to the general welfare is a contribution to the general knowledge, which will allow effective social policy. In a conflict of general and professional moral obligations, professional obligations are trump. In a research project designed to study the impact of violence on children, one is likely to encounter a higher incidence of children engaging in risky behavior. To intervene to help a subject would threaten the integrity of the research project, and thus society might lose the tremendous benefit of such research for children everywhere. Hence Judy ought not to intervene.

However, a mere conflict of professional obligations with the obligations of ordinary morality does not excuse scientists from the obligations of ordinary morality. Simply being engaged in scientific activity does not excuse scientists, for example, from prima facie obligations not to steal, to tell the truth or to come to the aid of a person in distress even if doing so interferes with their scientific activity or jeopardizes the results of a particular research project. Professional moral obligations presuppose general moral obligations; they are not independent of them. (Bayles, 1989, Chapter 2)

2. The obligation to welfare is waived by parental permission.

Judy cannot be excused from obligations of ordinary morality in this case by the permission of parents and child. Recall the case of the child playing Russian roulette. Suppose, for some bizarre reason, the parent had given Judy permission to ignore their child in the event she ever saw the child playing Russian roulette. We would not say Judy was absolved from a moral responsibility to act simply because the parent gave her permission. (We would be more inclined to say that the parent was acting irresponsibly.) Would it make any difference if, as a researcher, Judy had obtained similar permission from the parent?

The Belmont Report articulates two principles guiding human research that are relevant here. First is the principle of treating subjects with the respect due humans: One must never treat humans solely as a means to one's own ends. Suppose that Judy, in virtue of her research, is the sole adult who is aware that a child is entertaining the possibility of suicide. She knows that if she does not act, no one will. If Judy takes no action because that would interfere with the research program, then she is treating the child solely as a means to her own ends and is violating the principle of respect for persons.

The second ethical principle is that of beneficence. As the Belmont Report puts it, the principle of beneficence is a twofold obligation:

Two general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms.

I have already indicated that I do not think this articulation of the principle of beneficence is adequate because the claim that we have an obligation to maximize possible benefits already presupposes that we have a prior and more stringent obligation to do good and avoid evil. Judy has a stronger prima facie obligation to prevent harm to her subject than she has to maximize benefits.

3. The welfare obligation is overridden by the obligation of confidentiality.

Suppose Judy promises the subjects that she will never violate their confidence and never reveal to anyone, including parents, anything the subjects tell her. Normally we would say we have an obligation to maintain such a confidence because we have promised to do so. It is not clear on what grounds one would argue that keeping promises and confidentiality always trumps all other moral considerations, however. There may be situations in which other moral considerations outweigh an obligation to keep a promise -- for example, situations that threaten the life of the person promised. There are no compelling moral grounds for asserting that keeping confidences is always the highest moral obligation. It is also the case that Judy has legal obligations to report certain kinds of criminal behavior such as suspected child abuse. She should not make promises she knows she cannot keep.

I have argued that Judy cannot morally justify a policy of never intervening simply by appealing to the fact that she is a researcher. If Judy's research can reasonably be expected to give her knowledge that a child subject's health or welfare is seriously threatened and that the situation requires immediate intervention, she must recognize that she has a prima facie obligation to take some action in at least some of these circumstances. She cannot morally defend a position of never doing anything. It is not clear that either the federal guidelines or the Belmont Report would reach this conclusion, but other standards do. The Society for Research in Child Development asserts:

When, in the course of research, information comes to the investigator's attention that may jeopardize the child's well-being, the investigator has the responsibility to discuss the information with the parents, guardians or with those expert in the field in order that they may arrange the necessary assistance for the child. (SRCD, 1993, 339)

Specifying an obligation to act

It behooves Judy to take this moral responsibility into account as she designs, seeks funding for and carries out her research. She should make clear to the funding agent, and to parents and subjects, what interventions she is prepared to undertake on behalf of the children, even if those steps impair the quality of the research. However, the fact that Judy has an obligation to take some action to prevent harm to subjects does not establish what action she ought to take.

There are certain parameters governing what Judy should and should not do. 1) Judy has a legal obligation to report to appropriate authorities certain things such as suspected child abuse. (Some have argued that the specific requirement to report suspected child abuse is morally problematic, since less than half of all reported cases are substantiated, and reporting suspected abuse when it has not happened may well do serious harm to both child and parents. [Scott-Jones, 1994, 101-103]) 2) If Judy has some obligation to protect the welfare of the child, she should not report to parents things that may result in the parents harming the child. Notifying parents may only make things worse (for example, if there is a clear case of child abuse by a parent). 3) It is reasonable to argue that Judy should not report to parents the fact that their child has referred herself to agencies to seek medical help in those instances in which the law allows the child to do so without the knowledge or consent of parents. (Scarr, 1994, 153) Judy has an obligation to make clear in the process of obtaining informed consent from parents and subjects that she will operate within these parameters.

If we assume that Judy, parents and subjects all understand she will act within these three parameters, a large gray area remains of risky behavior of subjects she may uncover in the course of her research. Should she ever notify the parents about such activity, encourage students to refer themselves for help, or initiate action on behalf of the subjects?

Privacy, confidentiality and parental responsibility

A central issue here involves conflicts between maintaining the confidentiality and privacy of child subjects, interfering with the responsibilities of parents, protecting the subjects' welfare and maintaining a viable research program.

As a moral agent, Judy has a prima facie obligation to maintain her subjects' privacy. All persons, including children, have a right to privacy and can be wronged when it is violated. Research suggests that children value privacy, even at a young age. (Melton, 1992) As children mature, privacy becomes increasingly important as an indicator of independence and self-esteem; it is necessarily paralleled by a reduction of parents' right of control over the child. (Macklin, 1992, 103; Melton, 1992) Notice that one can morally wrong a person by invading his or her privacy whether or not confidentiality is involved. Violating confidences is only one way of invading privacy.

If Judy has promised the children to keep certain information from the study strictly confidential, then breaking that confidence both invades their privacy and violates a promise. An obligation to preserve the subject child's confidentiality is especially strong if breaking that confidence to parents or others is likely to result in harm to the interests of the subject.

As a scientist, Judy may be concerned that breaking confidentiality may also unravel the study. Once one child's confidence has been broken, that child is unlikely to be candid and may have to be dropped from the study. As other children learn that their information is not kept confidential, they may wish to drop out of the study. If children know in advance that their confidences may be breached, they may not agree to participate in the study at all.

Judy might consider informing subjects and parents that, subject to the parameters already identified, she will keep everything else she learns about the subject absolutely confidential, while concealing from them her intention to break a confidence if extreme circumstances warrant it. This plan may initially reassure subjects and thus increase the likelihood that they will agree to participate, while still leaving Judy a way to protect the subjects' welfare or inform parents if necessary.

This strategy is not a good idea. If Judy knows from the beginning that this is what she intends to do, then she is engaging in a deceptive practice that undercuts the moral legitimacy of the subjects' assent. If she breaks confidences more than once, the word may get around that Judy does not maintain the confidences she said she would maintain, and the study may unravel anyway. It would be better to tell all parties from the beginning that if Judy believes that the subject is in clear and present danger, she may break confidences and tell the parents or take other action.

This discussion has focused on preserving the subjects' privacy and confidentiality, maintaining the integrity of the study and meeting the researcher's obligation to protect the subjects' welfare. Parents' authority and responsibility to care for their children must also be considered. What claim do parents have to information about their child's risky behavior?

Return to the case of Citizen Judy walking by the fourth grader who is playing Russian roulette. Suppose she intervenes and stops the child from playing Russian roulette and then notices that his family, whom she also knows, is not far away and is unaware of what has just transpired. Instead of notifying the family, Judy says nothing to them and whisks the child off to see a counselor without the family's knowledge. She recognizes that she has an obligation to do something for the child's welfare but maintains that her obligation does not include notifying the parents, since that might violate the child's privacy rights.

Normally we would say that Citizen Judy has an obligation to make the family aware of the event because of the presumption that it is the family's business and responsibility to care for their children; that includes keeping children from harming themselves. Parents may well claim that the family is best able to judge what is in the child's best interests in these situations. It is also the family that will have to deal with the situation and the consequences of the child's action.

We have already indicated Citizen Judy may be right to avoid notifying the parents if she has good reason to think the parents would make the situation even more dangerous for the child. That important set of circumstances aside, why should Judy assume that the child's best interest is served by substituting her judgment for that of the family? Why should she assume that maintaining the child's privacy rights is preferable to notifying the family and allowing their judgment to take over?

There is something wrong with Citizen Judy assuming the role of protector of the child, because it is precisely the responsibility of parents to care for and nurture their own children and to keep them from harming themselves. Parents have a prima facie claim to be informed when their children are engaged in harmful activity, even if that notification comes at the cost of the child's privacy, confidentiality and self-determination. Parents' claims may diminish with the maturity of the child, but the burden of proof ought to be on those who would ignore that prima facie claim. Even John Stuart Mill, one of the most ardent defenders of individual rights, including the right to do oneself private harm, recognized the limitations on the notion of self-determination as applied to children:

Over himself and over his own body and mind, the individual is sovereign. It is perhaps hardly necessary to say that this doctrine is meant to apply only to human beings in the maturity of their faculties. We are not speaking of children or young persons below the age which the law may fix as that of manhood or womanhood. Those who are in a state that requires being taken care of by others, must be protected from themselves as well as against external injury. (Mill, 1961, 263)

Judy might argue that if she notices any risky behavior of such a serious nature, either the parents are already aware of it or, if they are not, that is evidence that they are not competent to deal with the problem. Therefore it is not in the child's best interest to notify the parents, and the parents do not have a right to be informed, at least not until other appropriate steps are taken. Once Judy is aware of such risky behavior, she might counsel the child, offer to refer the child to competent professionals, or inform or consult with appropriate caregivers or authorities. All of these options would address the child's welfare but preserve the child's privacy and confidentiality from the parents. Only at some later date, if ever, would the parents be notified.

The rationale for this approach, some have argued, presupposes a problematic view of the family and of the relation of children's interests to the family and to parental authority and responsibility (Brown, 1982; Steinfels, 1982; Macklin, 1982). In particular, on this view, the child is simply one member of an aggregate of individuals (the family), each bent on self-development and self-fulfillment. The interests of parents and children may conflict. Consequently, parents have only a limited capacity to speak for the child's interests in the best of circumstances, and it is reasonable to think that a skilled professional may do as well as the parent. Hence the moral authority of parents over their children and their right to have information about their children are limited. It would not be wrong for an outsider to intervene to serve as interpreter, spokesman and protector of the child's interests. Resolving this issue is beyond the scope of this commentary. Notice, however, that Citizen Judy knows some very important information about the child that the parents do not have. That fact gives her the power to prevent the parents from carrying out their obligation to protect their child from harming himself.

Does the situation change if Judy is not just an ordinary citizen but a researcher? If she has this knowledge about the child, it is not by happenstance. Rather, it is because she is carrying on research with the parents' permission. What would good and reasonable parents agree to in such research? Would they expect a researcher to notify them if their child were engaged in risky behavior? They may expect it, precisely because they take seriously their responsibility to protect their children from harming themselves. Would they agree to having their children participate in an experiment if they knew such information would not be shared with them?

If Judy has a moral obligation to act on behalf of the welfare of the child, and if parents waive a claim to be informed of risky behavior, that constitutes ceding decision-making power to Judy to initiate treatment for the child. Would good and reasonable parents agree to that?

Parents might reason that any information about their child's risky behavior acquired by the researcher is information the parents would not have received otherwise. Parents might be willing to have a researcher act for the benefit of the child on the grounds that getting help for their child in this way is better than no help at all.

This case raises very complex issues for designing a protocol that is ethical and still allows the possibility of good research. Judy faces issues in Part 4 that could be avoided by full disclosure to prospective subjects and their parents of the degree to which she will share information about risky behavior with parents. It is perhaps an open question whether enough subjects and parents will agree to participate to make the study possible once they have been so informed. What Judy cannot do is cut any of these corners in order to conduct the research.

References

  • Bayles, Michael D. Professional Ethics. 2d ed. Belmont: Wadsworth, 1989, pp. 17-31.
  • Brooks-Gunn, J., and Rotheram-Borus, M. J. "Rights to Privacy in Research: Adolescents Versus Parents." Ethics and Behavior 4 (2, 1994): 109-121.
  • Brown, P. "Human Independence and Parental Proxy." In Willard Gaylin and Ruth Macklin, eds. Who Speaks for the Child? New York: The Hastings Center, 1982.
  • Department of Health and Human Services. "Protection of Human Subjects." Code of Federal Regulations, Title 45, Part 46 (rev. 1991).
  • Frankena, William. Ethics. Englewood Cliffs: Prentice Hall, 1973, pp. 45-48.
  • Jones, James H. Bad Blood: The Tuskegee Syphilis Experiment. 2d ed. New York: The Free Press, 1993.
  • Macklin, Ruth. "Autonomy, Beneficence and Child Development: An Ethical Analysis." In B. Stanley and J. Sieber, eds. Social Science Research on Children and Adolescents: Ethical Issues. Newbury Park, Calif.: Sage, 1992, pp. 88-105.
  • Macklin, Ruth. "Return to the Best Interests for the Child." In Willard Gaylin and Ruth Macklin, eds. Who Speaks for the Child? New York: The Hastings Center, 1982.
  • Melton, Gary B. "Respecting Boundaries: Minors, Privacy and Behavioral Research." In B. Stanley and J. Sieber, eds. Social Science Research on Children and Adolescents: Ethical Issues. Newbury Park, Calif.: Sage, 1992, pp. 65-88.
  • Mill, John Stuart. "On Liberty." In The Essential Works of John Stuart Mill. New York: Bantam Books, 1961.
  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, D.C.: Government Printing Office, 1979.
  • Rogers, A. S.; D'Angelo, L.; Futterman, D. "Guidelines for Adolescent Participation in Research: Current Realities and Possible Solutions." IRB: A Review of Human Subjects Research 16 (4, 1994): 1-6.
  • Scarr, S. "Ethical Problems in Research on Risky Behaviors and Risky Populations." Ethics and Behavior 4 (2, 1994): 147-155.
  • Scott-Jones, D. "Ethical Issues in Reporting and Referring in Research With Low-Income Minority Children." Ethics and Behavior 4 (2, 1994): 97-108.
  • Society for Research in Child Development. "Ethical Standards for Research With Children." In Directory of Members, 1993, pp. 337-339.
  • Steinfels, M. "Children's Rights, Parental Rights, Family Privacy and Autonomy." In Willard Gaylin and Ruth Macklin, eds. Who Speaks for the Child? New York: The Hastings Center, 1982.
  • Thompson, Ross. "Developmental Changes in Research Risk and Benefit: A Changing Calculus of Concern." In B. Stanley and J. Sieber, eds. Social Science Research on Children and Adolescents: Ethical Issues. Newbury Park, Calif.: Sage, 1992, pp. 31-65.
  • Weithorn, L. "Children's Capacities to Decide to Participate in Research." IRB: A Review of Human Subjects Research 5 (2, 1983): 1-5.

Parts 1 and 2

Mariel and Jorge are graduate student research colleagues. Paid on the same grant, they share the same animal subjects but are working on different research projects. It is important to note the unequal relationship between Mariel and Jorge: Mariel is a first-year graduate student with no advanced degrees, while Jorge, already a veterinarian, is nearing the end of his Ph.D. program. Thus, the two students differ in graduate experience as well as in recognized expertise in veterinary surgery. A dependency relation is evident here as well: since Mariel is not a veterinarian, she is dependent on Jorge to do the surgery she needs for her research. The differential in credentials is also significant. Although Mariel has four years of experience as a veterinary surgical technician and may be well qualified to recognize deviations from the surgery protocol, she lacks the credentials to challenge Jorge should their assessments differ.

Furthermore, Mariel and Jorge have potentially conflicting interests in carrying out this protocol. After a first round of surgery, it becomes clear that the surgery protocol will take much longer than they anticipated and hence that much more time will be required to process all the animals they need for their research. Jorge is on a tighter time schedule. He is considering a job offer and wants to graduate on time; thus, he has an incentive to rush the work. Since Jorge's research requires only tissue samples obtained during surgery, it will be unaffected if the sheep die as a result of rushed work. Mariel's research will be severely affected, however, if the sheep die shortly after surgery.

At the completion of the second round of surgery, three facts are undisputed: 1) Following surgery, several of the sheep show signs of increased agitation and discomfort. This outcome is a departure from the first round of surgery. 2) Three of five sheep die within a day of surgery; no deaths occurred after the first round. 3) An autopsy of the three animals shows signs of tissue damage and bleeding at the site of the insertion of the sampling tubes. Presumably this result did not occur after the first round of surgery.

All the researchers in this case are expected to comply with the U.S. Government Principles for the Utilization and Care of Vertebrate Animals Used in Testing, Research and Training. For a definitive guide to the care and use of laboratory animals, see Institute of Laboratory Animal Resources, Commission on Life Sciences, Guide for the Care and Use of Laboratory Animals (Washington, D.C.: National Academy Press, 1996); the U.S. Government Principles are included in its Appendix D. For an overview of regulations and requirements in the care and use of animals in research, see B. T. Bennett, M. J. Brown and J. C. Schofield, eds., Essentials for Animal Research: A Primer for Research Personnel (Beltsville, Md.: National Agricultural Library, 1994), pp. 1-7; reprinted in Deni Elliott and Judy Stern, eds., Research Ethics: A Reader (Hanover, N.H.: University Press of New England, 1997). One of the nine principles in that document (Principle IV) states an obligation to ensure "Proper use of animals, including the avoidance or minimization of discomfort, distress, and pain when consistent with sound scientific practice." Principle III states, "The animals selected for a procedure should be of an appropriate species and quality and the minimum number required to obtain valid results." Animals should not die needlessly. The researchers also are expected to comply with the "Guide for the Care and Use of Laboratory Animals," which spells out procedures to ensure that these principles are observed.

Mariel believes that she observed Jorge rushing through surgery, paying less attention to surgical details (e.g. careful tissue handling and proper suturing during the cannulation procedure). Suppose that Mariel is right and that Jorge did deviate from the surgery protocol, which led to distress in the animals and caused their deaths. If nothing changes, one can assume that the same outcomes will be encountered in varying degrees in future surgeries. Sheep will suffer needlessly and will die needlessly; both outcomes are violations of the guidelines.

This situation presents a potential moral problem for Mariel. She has an obligation to observe the research principles for animal use and to protect the animals from needless pain, suffering and death. What is her moral obligation to act if she has reason to believe that Jorge is violating those principles? At a practical level, she has another problem: if she does nothing, she may lose a substantial number of the sheep, and her project may be significantly delayed.

As the least senior and, in some senses, the most vulnerable member of the research team, Mariel is forced to pit her expertise against Jorge's in challenging his surgical techniques as well as his possible violation of surgery protocols. In the second scenario, Mariel and Jorge differ on the facts in this case: whether Jorge deviated from the surgery protocol and what caused the animals' deaths. Because, as a veterinarian, Jorge can claim more expertise in these matters, Mariel may have difficulty in making her case, even if she is right. In addition, she runs the risk of losing the cooperation of the person she is dependent upon to finish her research.

Preventive Ethics

Sometimes it is easier to prevent an ethical problem than to determine what to do after it arises. Mariel's "problem" is due, in part, to the failure of other members of the team to meet their ethical responsibilities. Jorge has a responsibility to show collegial regard for the effect of his actions on Mariel's research, and Carroll has a responsibility to oversee the research to minimize the likelihood that such problems will develop. A wise adviser might recognize the potential for problems, given the conflicting interests and the unequal power relationship between Mariel and Jorge. She could set up the protocol to prevent or minimize the chances that Mariel will be forced to decide whether to "blow the whistle" on Jorge.

One technique used in other organizational settings is making the reporting of bad news mandatory, not optional, thus relieving the most vulnerable persons of decision-making pressure. This strategy helps to eliminate concerns about disloyalty to a colleague or fear of reprisal.

In this instance Carroll, Jorge and Mariel all collaborate in developing the animal use protocol, which includes the surgery protocol. Carroll could specify in the surgery protocol that, after each round of surgery, any deviations from expected outcomes, including evidence of post-surgical suffering or the death of sheep, must routinely be reported to her. In the event of the unexpected death of a sheep, an autopsy would be done automatically and the report forwarded to her. Carroll could then decide whether the information warrants investigating to determine whether the protocol needs to be changed for reasons that could not be or were not anticipated, or whether any violations of protocol have occurred. This approach would also make Carroll aware of the unacceptable implications of a high death rate among the sheep for Mariel's research project.

Recommendations in the Guide regarding surgery and the monitoring of post-surgical pain and stress in animal subjects suggest preventive measures that could be taken in planning the protocol.

  1. In developing the surgery protocol, Carroll could ensure that pre-surgery planning includes a careful preoperative animal health assessment to be sure the animals are healthy enough to withstand surgery. (Institute of Laboratory Animal Resources, Guide, p. 61) That judgment could be made by the supervising veterinarian rather than by Jorge. If sheep have been so certified, any post-surgical deaths should trigger a review of surgical procedures.
  2. The development of the surgery protocol would be an appropriate point at which to estimate the amount of time required to properly carry out the surgery protocol on each sheep and the implications of that time frame on Jorge's research program. If an honest assessment indicates that they will only be able to do, for example, five sheep per day, that provides an opportunity to discuss alternate ways of meeting the protocol requirements. The pressure on Jorge to rush the surgery could be thus anticipated and dealt with. Carroll could build into the protocol a requirement that significant deviations from the anticipated time required for surgery be reported to her after the first round.
  3. Carroll should ensure that it is clear who is responsible for monitoring and keeping records of evidence of post-surgical stress and pain in the sheep. She could require that such evidence be reported to her. (Ibid., pp. 63-64)

If such provisions were in place, then it would be Carroll, not Mariel, who would confront Jorge about the post-surgical suffering and death of the sheep. Carroll could ask Mariel for her observations of the surgical procedures, rather than leaving it up to Mariel to volunteer them. These measures should help to preserve a working relationship between Mariel and Jorge and also provide an occasion for Carroll to have a frank talk with Jorge and Mariel about expectations of mutual collegial responsibility. If Jorge's actions are interfering with Mariel's research, that problem needs to be addressed, and Carroll could take action at the earliest opportunity to get Mariel's research back on track.

If Jorge were to deny that his surgical technique caused the sheep's suffering and deaths, arguing instead that the diseased state of the sheep caused the problem, then that claim could be tested by referring to the pre-surgery certification of the health of the animals. It is possible that Jorge is correct. That may indicate the need to radically redesign the protocol or perhaps the need for a more refined certification procedure to identify diseased sheep that are sufficiently healthy to withstand the surgery.

If Jorge's technique is the culprit, that problem can be addressed and corrected more quickly than is likely if Mariel is carrying the whole burden of correcting the situation. The net result of involving Carroll is that the research is more likely to go smoothly and to be completed sooner with the research animals experiencing less suffering and pain.

These provisions allow Carroll to do at a lower level what the IACUC has formal responsibility to do. More importantly, they shield the most vulnerable member of the research team and give Carroll an opportunity to nip a problem in the bud. At minimum, this strategy prevents wasted time in her research program.

Dealing With the Actual Situation

Suppose, however, that Carroll has not had the foresight to build in these preventive measures and Mariel must deal with the situation. What should she do?

Given the animals' suffering and distress and the number that have died, Mariel cannot justifiably choose to do nothing. She must at least begin to determine the cause of their suffering and death and whether anything can be done to alleviate it. If something can be done and she fails to do it, she has not exhibited proper care for the animals.

Since she suspects Jorge's surgical procedures are the cause, it will probably be least threatening to Jorge if she goes directly to him, rather than to Carroll or the IACUC. She needs to approach him in a collegial manner, point out the post-surgery results and the autopsy findings, and ask if he thinks he rushed the surgery in the second round. He may be willing to concede that he rushed the work and try to take more care on the next round. If so, that may solve the problem.

Suppose Jorge denies that he is responsible and blames the poor outcomes on the diseased state of the sheep. He may be right. Perhaps he did not violate the protocol; the sheep may have experienced discomfort and died because of their weakened condition. This possibility raises the question of whether the animal protocol is adequate. Mariel is now put in the position of having to press her case, increasingly alienating Jorge, and/or watching her research go down the tubes because she loses his cooperation as well as a significant number of sheep.

As a next step, with or without Jorge's cooperation, she can ask the supervising veterinarian to review the necropsy reports of the sheep who died in the current round and to certify the preoperative health of the next set of sheep. If the problem persists in the sheep after the third round of surgery, she will have stronger evidence and the expertise of the supervising veterinarian to buttress her claims that Jorge's technique is causing the problem. She may convince Jorge and win his cooperation. If so, the delay, loss of time and sheep may be justified by the need to secure his cooperation. If not, she has little alternative but to go to Carroll or report the situation to the IACUC in order to correct the problem.

Part 3

Suppose the sheep do not die but show signs of pain and discomfort during the recovery period. If the sheep are in distress for any significant length of time, should Mariel keep them alive and suffering and continue to collect research data or should she euthanize them and thus lose the possibility of data collection?

Recall that the first round of surgeries produced no signs of suffering or distress in the animals during the recovery period. That suggests that it is possible to perform this surgery without the undesirable side effects. Hence, it is reasonable to expect that the protocol, if followed, will not cause post-surgical distress in the animals.

This scenario suggests several possible outcomes from surgery. 1) Some of the animals exhibit distress for a short recovery period (perhaps 1-2 days). 2) Some of the animals exhibit distress for a longer period after surgery (several days). 3) Some of the animals experience chronic pain induced by the surgery that lasts for the entire month of the experiment.

Mariel's team's first obligation is to see if the sheep's pain can be relieved. If it can, it should be done. If not, then she will have to consider euthanizing this batch of sheep.

Her second obligation is to determine the cause of suffering and whether it can be prevented. If it is the result of a deviation from protocol, then that needs to be addressed before the next batch of sheep are subjected to surgery.

Suppose, however, the sheep's suffering is not the result of deviation from protocol but is, as Jorge suggests, the inevitable result of the weakened state of some of the diseased sheep. There are several possibilities here: 1) The pain occurs only in sheep in which the disease is too advanced. Furthermore, these sheep can be detected in a pre-surgical screening and eliminated from the group. The result is that the remaining sheep will not experience post-operative distress. If that is the situation, then the team should revise the protocol to ensure proper screening. 2) The pain is the result of the weakened condition of some of the sheep that cannot be detected by pre-operative screening. In that case, it is likely that some animals will experience post-operative distress.

It now becomes crucial to know whether the post-operative distress can be eliminated or controlled by analgesia or other means. If it can, then the IACUC must decide whether to permit the experiment with the proviso that the anticipated suffering be alleviated for the duration of the animals' post-operative discomfort. The ethical and practical issues for the IACUC may be especially difficult if pain control is required for the entire month of the experiment.

Finally, it may be the case that the pain (apparently) inevitably induced by the surgery in some of the diseased sheep cannot be alleviated for any length of time. This possibility puts in the starkest terms the trade-off between the animals' discomfort and the knowledge gained by Mariel's experiment. It is now clear that the price of Mariel's research will be that some of the animals may experience stress, pain and discomfort for some length of time. This issue must be brought to the IACUC for review, and the IACUC will now need to decide whether that suffering can be justified. For a beginning discussion of some of the relevant moral issues in the use of animals in research, see Deni Elliott and Marilyn Brown, "Animal Experimentation and Ethics" and Richard P. Vance, "An Introduction to the Philosophical Presuppositions of the Animal Liberation/Rights Movement," both in Elliott and Stern, Research Ethics. For a discussion of pain in vertebrate animals, see Fred W. Quimbly, "Pain in Animals and Humans: An Introduction" and Francis J. Keefe, Roger B. Fillingim and David A. Williams, "Behavioral Assessment of Pain: Nonverbal Measures in Animals and Humans," both in ILAR News 33 (1-2, Winter/Spring 1991). For a discussion of the moral relevance of animal pain, see P. Harrison, "Do Animals Feel Pain?" Philosophy 66 (1991): 25-40; Ian House, "Harrison on Animal Pain," Philosophy 66 (1991): 376-379; and Gordon M. Burghardt, "Heeding the Cry," Hastings Center Report 21 (2, March-April 1991): 48-50.
