Anonymous

Autonomous vehicles combine the ethical concerns found in robotics, autonomous systems, and connected devices — and synthesize novel concerns out of these. Like other systems that demonstrate increasing autonomy, they raise questions about moral responsibility. To be morally responsible for an outcome is to be the proper object of praise or blame as a result of that outcome. Because the status of autonomous vehicles as agents is unclear, and because they are involved in complex socio-technical systems involving many agents and institutions, answering questions about the moral responsibility for outcomes they are involved in is especially vexing.

We can use several lenses to help us determine who is morally responsible for an outcome:

(1) Who caused this outcome? Whose causal contributions to the outcome were most significant?

We almost always think that the person who is responsible for an outcome is the person who caused it most directly.

In the case of autonomous vehicles, this is difficult to determine, since autonomous systems are said to “launder” the agency of the people who operate them.[1] If an autonomous vehicle gets in a crash, should we blame the driver, even if they were not operating the car? Or did the car “make” the decision that led to the crash? This is further exacerbated by the fact that autonomous vehicle design is the result of a complex of legal and economic incentives that ultimately help explain why the cars are built and programmed the way they are, and thus why they cause the outcomes they do.

(2) Who is it appropriate to blame or praise for this outcome? Who would it be appropriate to punish for this outcome?

Moral responsibility is closely bound up with what philosophers call “reactive attitudes”: moralized attitudes we express in response to what someone has done.[2] Blame and praise are the most obvious of these, but indignation, resentment, and gratitude are other examples. When trying to locate responsibility, we should think about who it would be fair to express these attitudes towards.

Autonomous systems present especially difficult cases for identifying responsible parties because, according to a significant thread in the literature, there is a “responsibility gap” that is created between human agency and the decisions of autonomous machines. If an autonomous machine — like a car — were to make a “mistake,” there would be no one we could fairly punish. That argument is a clear example of this way of approaching the question of responsibility: find the people it would be fair to blame or praise, and you have found the responsible party.

Consider scenario 1A: the driver of Car A should have been paying attention because their car has merely level 3 autonomy. It is reasonable to think that they have a greater share of the responsibility than the driver of Car B. The driver of Car B could have reasonably believed that their car would be able to handle such a situation, and this attenuates their blameworthiness.

In scenario 1B, when both cars have V2V communication, our sense of responsibility shifts from the drivers of the cars to the designers of the V2V system. As long as this failure took place in a relatively normal situation, this is the kind of situation that the V2V system should have been able to handle. Therefore, in turn, the drivers would have been reasonable to delegate the decision making to their cars. (Still, the driver of Car B can claim even greater justification for offloading this decision making since, again, their car has level 4 autonomy.)

(3) What is it reasonable to expect or demand of a person, given the role that they occupy?

In a perfect world, the several components of the socio-technical system that designs, manufactures, regulates, and operates autonomous vehicles would be performing ably and diligently. Each has a complementary role to play:

Institutions can shape the design decisions of autonomous vehicles in a way that individual consumers never could. They can also shape the environment and infrastructure in which they operate. Did the car crash because the lane markings were eroded or unclear, for example?

Designers and manufacturers have a responsibility to test their designs to establish their reliability throughout the spectrum of scenarios that drivers will tend to face. (At least, as far as is practicable, since the permutations of those situations are in fact infinite.) They then have a responsibility to communicate transparently the capabilities and shortcomings of their vehicles to consumers, and perhaps to include designs that nudge — or force — drivers to behave responsibly.

Drivers, finally, need to operate their cars responsibly and only within their limits.

In these cases, we can ask: Who has failed in their duties to contribute to this harmonious interdependent system? Is there a shortcoming that regulators are uniquely placed to anticipate, but they failed to do so? Or a scenario that manufacturers should have tested for? Should the driver have kept their hands on the wheel, but was texting instead (and is their car level 3, 4, or 5)? Deciding who failed in their specific obligations, and how far their behavior departed from what society can reasonably demand of them, will help us apportion responsibility.

Consider the specific scenarios:

Scenario 2: All three ways of thinking about how to distribute responsibility seem to point to the driver of the standard car that rear-ends the autonomous car: they directly caused the crash; they are more to blame than the autonomous vehicle; and we should expect more of them as a human driver. Note that the autonomous vehicle did something unexpected, i.e. it stopped while the light in front of it was green. However, just because it behaved unexpectedly does not mean it behaved recklessly. In fact, the autonomous vehicle behaved as it should have, since the alternative would likely have been to injure the pedestrian crossing in the crosswalk. Thus, it is difficult to blame the autonomous vehicle or its designer.

The pedestrian is also clearly responsible — perhaps as much or more than the driver of the standard car. It is the pedestrian’s recklessness that initiates this chain reaction that results in the crash. They seem, in fact, to make the greatest causal contribution to the situation.

Scenario 3: In this scenario, we again have a system that would normally prevent the crash, which has failed because of a rare situation. Both of the drivers involved behave irresponsibly. Should the designers of the system have designed it better, or given it a failure mode to cope with situations like this? Which of the drivers is more at fault?

It is hard to say which of the drivers is more at fault. Both are equally reckless in being distracted and both make equal causal contributions to the crash.

The more interesting locus of responsibility may be the automated intersection. Should the designers have tested its capabilities in inclement weather? (Is this an area with a rainy season, or a desert that’s experiencing a once-in-a-lifetime downpour? That is to say: what should they have expected?) A more graceful failure mode would probably have been to turn the intersection into a four-way stop. This requires all of the drivers who approach the intersection to be much more cautious, increasing safety at the (plainly acceptable) cost of efficiency.

Ideally, these approaches would all align, but they don't always. This is why philosophers, lawyers, and others continue to tussle over which method of determining responsibility is most appropriate. However, we can certainly separate the viable from the non-viable answers; and these lenses can help focus our intuitions and show us the path forward in apportioning blame. By clearing the way for productive conversations, moral philosophy has thus shown itself useful.

General readings:

  • Jenkins, Ryan. “Autonomous Vehicles Ethics and Law.” New America Foundation. September 2016.

Apportioning responsibility for automated systems:

  • Matthias, Andreas. "The responsibility gap: Ascribing responsibility for the actions of learning automata." Ethics and information technology 6.3 (2004): 175-183.
  • Mittelstadt, Brent Daniel, et al. "The ethics of algorithms: Mapping the debate." Big Data & Society 3.2 (2016): 2053951716679679.

Trolley problems and the distribution of harm:

  • Himmelreich, Johannes. "Never mind the trolley: The ethics of autonomous vehicles in mundane situations." Ethical Theory and Moral Practice 21.3 (2018): 669-684.
  • Nyholm, Sven, and Jilles Smids. "The ethics of accident-algorithms for self-driving cars: An applied trolley problem?." Ethical theory and moral practice 19.5 (2016): 1275-1289.

The conclusion reached by the Norwegian National Committee for Research Ethics in Science and Technology (NENT) challenges an ideology and ethics of inevitability present in fossil fuel industries. The anthropologist Laura Nader first identified an ideology of inevitability during her service on the US National Academy of Sciences’ Committee on Nuclear and Alternative Energy Systems (CONAES). Her observations led her to identify the implicit cultural assumptions animating much policymaking, from ‘group think’ and a rejection of energy conservation and ‘soft paths’ like solar energy to an ‘inevitability syndrome’ that excluded from consideration models that did not rest on ever-expanding resource use.

Since then, anthropologists such as David Hughes and Chelsea Chapman and historians such as Matthew Huber have similarly found professionals in the oil and gas industry, including scientists and engineers, expressing positions that defend fossil fuels on the grounds that our society will always require them. Hughes in particular argues that this position is an ethical one. He starts with the position that oil is immoral because the ‘contemporary great evil of dumping carbon dioxide into the skies’ hastens global climate change that harms the environment and vulnerable populations (2017: 14). Therefore, he argues, treating oil production and consumption as inevitable is also an immoral position, since it allows climate change to continue unabated without considering how energy can be conserved or produced by more carbon-neutral means. By concluding that petroleum research would be indefensible if it hindered transitions to sustainable energy, the NENT challenged prevailing assumptions that continued reliance on oil is inevitable. But rather than discourage petroleum research in its entirety, the committee also acknowledged that petroleum research ‘still has a role to play in the transition process, for example by establishing a defensible balance between research on various energy sources in which the key constituents are research on renewable energy and on how negative impacts on the ecology can be reduced.’

The challenge and opportunity lie in the nature of the ‘collaborations’ between industry and universities, given the conflicts of interest that exist when academic research is funded by companies such as Statoil. In their statement the NENT found it ‘striking that the universities do not reflect to a greater extent on their own role in possibly preserving the status quo through their collaboration with the petroleum industry,’ by prolonging and legitimizing the oil age, for example. The committee called for efforts to ensure that ‘the universities’ research and education and the special interests of business sector actors are independent of each other.’ This raises the crucial question of how university scientists and engineers could collaborate with industry to make more sustainable technologies and techniques.

References

Chapman, C. 2013. Multinatural resources: Ontologies of energy and the politics of inevitability in Alaska. In Cultures of Energy: Power, Practices, Technologies (eds) S. Strauss, S. Rupp and T. Love, 96–109. Walnut Creek, CA: Left Coast Press.

Huber, M. T. 2013. Lifeblood: Oil, freedom, and the forces of capital. Minneapolis: University Of Minnesota Press.

Hughes, D. M. 2017. Energy without conscience: Oil, climate change, and complicity. Durham: Duke University Press.

Nader, L. 1980. Energy choices in a democratic society. Washington: National Academy of Sciences.

––––––– 2004.  The harder path: Shifting gears. Anthropological Quarterly 77, 771–791.

Although this case raises a variety of ethical issues, all with their own subtleties and complexities, two particular issues will be discussed in this commentary. 

Assessing Risk/Benefit Ratio

The first issue, which is raised in Part 1 of the case, concerns risk/benefit ratios when working with human participants.  One of the central tenets of the Belmont Report (1979) is that researchers must (1) do no harm and (2) maximize possible benefits and minimize possible harms.  This case demonstrates several of the complexities inherent in the principle of beneficence.  

First, how should risk be defined and determined?  In Brian’s case, it is possible but not definite that focusing on stress experiences will have negative effects for the participants.  How should Brian determine the probability of risk?  If there is no realistic way to do this, how should Brian proceed?

Many scientists conducting research similar to this have relied on the fact that, in most cases, the negative impact of stress exposure is short-lived.  In other words, while people may experience negative effects in the moment, they quickly return to baseline and are fine in the long term. Should calculations of risk differentially weight short-term versus long-term consequences?

Additionally, researchers conducting this type of work have stood behind their informed consent procedures.  These researchers reason that as long as they are up-front from the very beginning about potential repercussions of study participation then they are within the guidelines of ethical practice.  Responsibility for participant safety is ascribed to the consenting participant as opposed to the researcher.  This practice may be fine in cases where the consent document is completely clear, where the potential participant is fully engaged in understanding the document and fully capable of consenting, and where the researcher is capable of answering questions about procedures and potential threats.  Unfortunately, these circumstances are rare, especially in psychological research where the majority of participants are undergraduates fulfilling a course requirement.   Who is ultimately responsible for determining the risk of participation? Who is ultimately responsible for the safety of research participants?  The researcher? The consenting participant? The IRB?

Second, who should be considered when calculating risk/benefit ratios? Only the research participants? Society-at-large?  Future generations?  Brian is aware that he is possibly putting participants at risk by asking them to focus on the stress they experience in their daily lives.  However, he is also aware that basic information about gay-related stress must be collected in order to create effective interventions and to help myriad gay men and lesbians cope with their minority status. If Brian were to consider only his research participants in his calculations, he would likely conclude that the benefits do not outweigh the potential harms.  However, if Brian were to consider his research participants, the gay community and future generations of gay individuals, he would likely arrive at a very different conclusion.  Is there an appropriate way to calculate risk/benefit ratios? Is it ever ethical to sacrifice a few individuals for a larger goal?

Bias in Social Science Research

The second issue, raised in Part 3, concerns personal or political bias in research.  Many social scientists emphasize the importance of objectivity in the pursuit of knowledge.  These individuals assert that science must be free of bias, and that the researcher must be neutral in relation to the topic and communities being investigated.  Elias (1987) summarized this position nicely, stating that those who study human groups must learn to “keep their two roles as participant and enquirer clearly and consistently apart and . . . to establish in their work the undisputed dominance of the latter” (cited in Perry, Thurston & Green, 2004, p. 135).  Others, however, argue that objectivity is impossible to attain and that better science is derived from active involvement on the researcher’s part.  In their discussion of qualitative research, Perry and colleagues (2004) argued that a critical piece of the research process involves interpretation and that the researcher necessarily plays a central role in this analysis.  These researchers concluded that instead of ignoring “emotional involvement” in research, we should recognize “the inevitability of involvement and the potentially significant part it can play in developing a more reality-congruent picture of complex aspects of the social world. . . .” (p. 139). Is there such a thing as objective social science?  If not, should scientists be responsible for revealing their biases?  Is science valid if there is bias linked with data?

In his research on gay-related topics, Brian wears two hats: one as a scientist and one as a gay man.  While these two identities do not have to conflict, they nevertheless can conflict.  In the case, for example, Brian discovers that those who are “out” experience more gay-related stress than those who are not “out.”  “Scientist Brian” finds this discovery interesting, while “Gay Brian” finds it problematic. Brian fears that revealing such a finding could encourage people to lead closeted lives and could subsequently set the gay rights movement back several decades.  In deciding whether to publicize this finding or not, which one of Brian’s hats should have more weight? Is it possible to wear both hats at the same time? How?

In discussing this piece of the case it is important to realize that there are both pros and cons to Brian’s involvement in the research.  On the pro side, Brian’s status as a gay man gives him credibility with the people he is studying.  The gay men that Brian is reaching out to are more likely to trust him and to involve themselves in the research given that he is “one of them.”  This becomes crucial when studying a group such as gay men, a group that has long been manipulated by researchers and ostracized by the psychological community.  Many gay men are suspicious of the research enterprise and want to know that they are not being used for research that will come back to haunt them and their community later on.  If Brian were heterosexual, many potential participants might opt out of the research, skeptical of the ultimate aims.  On the con side, Brian’s insider status interferes with his ability to be objective. What are other pros and cons of Brian’s involvement in research on a community to which he belongs?

Ethical Issues and Analysis

Part I of this case study introduces Dr. Luci Menendez as both a researcher and a clinician who seeks to develop an integrative program of research whereby her clinical work informs her research and vice versa. Critical to this case is an understanding of the ways in which general systems theory informs Luci’s research and clinical practice.  General Systems Theory (von Bertalanffy, 1968), the basis of family therapy and many theories of family process, is most readily epitomized as ‘the whole is greater than the sum of its parts.’  Individual parts of the system are interdependent and information feedback loops between parts or between the system and the broader environment function to keep the organization of the system relatively stable.  “This systemic approach has led to a method of treating psychological problems and of posing research questions that is fundamentally different from the traditional, individually based one” (Copeland & White, 1991, p. 8).

Copeland and White (1991) argue that family researchers, such as Dr. Luci Menendez, not only have the traditionally recognized responsibility to assess the effects of a study on individual research participants, but have a special ethical responsibility to attend to the impact of the study on the family as a whole.  Similarly, family therapists are ethically required to attend to both the well-being of individual family members and the well-being of the family as a whole, a difficult balance to achieve at best. 

One may rightly question from the start whether Luci should have recruited families to her study with whom she would eventually have a clinical relationship.  Whether or not Luci recruits research subjects from the same population that she will be serving clinically is partially influenced by the availability of palliative care consultation services.  These services are still relatively new and not all hospitals or communities have interdisciplinary palliative care teams. The very fact that these services are new may be an argument for the importance of researching currently unexplored issues so as to increase evidence-based clinical practices. However, if Luci’s team is the only one of its kind that is readily accessible to Luci for research, she may be at higher risk for unwittingly pressuring her clients to participate. 

Only interviewing palliative care patients and families not receiving services from Luci’s professional team would ostensibly lessen the complexities of this case by reducing the formal fiduciary relationships Luci has with clients/subjects. However, it is debatable whether the absence of formal relationships with specific family members completely eliminates Luci’s more general duty as a socially sanctioned professional to protect the well-being of society’s members.  In other words, even if Luci interviewed research subjects with whom she does not have formal therapeutic relationships, the fact that she is a clinician with a specialized knowledge and skill set may still have some ethical bearing on her research relationships.

Though others may disagree, I would argue that Luci’s two roles are neither 100% separable, nor equally exchangeable.  Luci’s membership in a publicly recognized and regulated clinical profession with all of its attending benefits (e.g., status), obligates her to give priority to her clinical role over her researcher role.  In other words, Luci can be a clinician without assuming a researcher role, but her clinical knowledge must inform her research choices. Her clinical knowledge likely makes her more sensitive to the types of harm that may befall individuals and families participating in her research project which may obligate her to take steps above and beyond those required by federal, state, and institutional research regulations.

Recognizing the complexity of her dual role as a clinician/researcher, Luci took precautions in her research design.  First, she used a two-stage recruiting process, whereby patients and families were first invited to consider participation in research by someone other than the researcher — in this case, the physician.  While this was intended to increase the autonomy of family members in deciding whether or not to participate in the study, it increased Luci’s risk of having nuances of the study misrepresented.  Furthermore, Luci failed fully to account for perceived power dynamics in the physician’s relationship with the family, leaving them vulnerable to perceived (if not actual) “authoritative persuasion.”

Second, when meeting with families to describe the research opportunity, Luci made explicit the dual nature of her relationship with patients and families, stressing that clinical care is a higher priority than research, and that the decision whether or not to participate in research would not negatively affect the clinical services they received.  During informed consent procedures, Luci also explained the on-going voluntariness of research participation.  While these precautions are commonly required by Institutional Review Boards as means of protecting individual research subjects, additional efforts may be necessary to protect the family system.

For instance, the case does not specify the exact nature of the informed consent document Luci has each family member sign, but it does say the discussion took place with everyone present.  Copeland and White (1991) note that “especially in studies in which families are asked to discuss important, real issues together [e.g., end-of-life care], the promises of anonymity and confidentiality about what they say, usually afforded to research subjects, are limited because the other family members are sitting there and listening” (p.4).  Per most IRB requirements, the informed consent document should discuss the limits of confidentiality.  This is typically understood as delineating the conditions under which the researcher may not keep absolute confidentiality. 

Confidentiality is typically understood as the ethical mechanism through which we respect the right of privacy of individuals.  But does this individual-focused understanding of privacy and confidentiality adequately apply to information about relationships, which by definition involve more than one individual?  Family researchers are faced with the dilemma of gathering and protecting information that from the perspective of individual family members may be considered quasi-private.  There may be a genuine risk of harm to individuals and/or family relationships if some members of the family disclose relational information that the other members did not want disclosed. 

In this case, a fully ethical approach to informed consent in family research might also include a discussion of the fact that data collected from one individual, even during individual interviews, cannot be completely separated from information about other members of the family because the focus of the research is on shared family history and dynamics. One approach is to include a statement on the consent document stating that agreement to participate in family data collection includes giving permission to other family members to disclose potentially private information about one another. 

Having such a statement included in consent procedures allows the researcher to explain the importance of gathering “un-edited” family data, while simultaneously facilitating family members’ discussions about possible limitations on the type of information they will share with the researcher.  Of course, research subjects are always free to edit their responses, but by making this process explicit, the researcher may be able to at least gather information directly from subjects about the limitations of the data rather than solely relying on hindsight speculation about missing data.

Explicitly highlighting interest in the family as a whole also gives the researcher an opportunity overtly to discuss family dynamics in the process of consenting to participate in research.  Families differ in important ways from other groups studied by researchers (Copeland & White, 1991; Greenstein, 2001).  In addition to being interdependent systems of individuals, families “develop private, idiosyncratic norms and meanings about their own activities. . . , [creating unwritten] patterns of, and rules for, behavior” (Larzelere & Klein, 1987, in Greenstein, 2001, p. 11) that are often hidden from public view.  Families have ways of restructuring their view of themselves in order to fit these family rules and expectations as a means of managing family tensions and maintaining family stability (Copeland & White, 1991). Family members also have multiple statuses and enact multiple roles simultaneously (e.g., father, son, and brother), requiring researchers to be sensitive to the fact that the kinds of responses offered by family members may depend on the role and status the individual is occupying in the context of gathering family data (Gelles, 1978, in Greenstein, 2001).

These systemic considerations are not typically considered in the traditional bioethics or research ethics literatures.  Relying on an individualistic approach to research ethics, it is tempting to resolve Luci’s case by simply saying, “If a family member does not want to participate, that’s the end of the story; just collect data from those who agree.”  This response is problematic in at least two ways.  First, the validity of system-level data is likely to be compromised, thereby altering the risk-benefit analysis used by IRB reviewers.  Second, assuming a purely individualized approach to ethics in the context of family dynamics may itself be a morally questionable activity that increases the risk of harm to the family system.

Ivan Boszormenyi-Nagy (1984, 1986, 1987, 1991), a founding family therapy theorist, argues that “relational ethics” is critical to healthy family functioning, such that failure of each family member to give “due consideration” to the interests of other members is seen as the heart of family dysfunction.  Nagy (1991) claims that family functioning is enhanced when members of the family can trust that the family system as a whole will facilitate the process of balancing considerations of the well-being of oneself with considerations of the well-being of others. 

In this case study, some family members acknowledged during data collection that their motivation for participation had been out of a perceived benefit to the dying patient.  From a traditional perspective, subject participation “out of fear” of lost benefits raises questions of voluntariness and possible coercion (both direct and indirect).  Superficially, this circumstance arose due to miscommunication.  At a deeper moral level, however, it could be argued that the situation is also borne of “relational ethics,” in that family members gave “due consideration” to the wishes and interests of other members of the family system. 

Luci’s response is in keeping with traditional research ethics: she reminds family members of their individual freedom to withdraw from the study.  In her attempt to protect the rights of individuals, however, does Luci risk harming the system by challenging the family’s “idiosyncratic norms. . .[and unwritten] patterns of, and rules for, behavior” (Larzelere & Klein, 1987, in Greenstein, 2001, p. 11), which has demonstrably included “due consideration”?  In other words, by highlighting individuals’ rights to withdraw their participation, is Luci, in effect, suggesting that “due consideration” of other family members’ interest in contributing family-level data (e.g., the dying patient) is not relevant?  In doing so, does she undermine the trustworthiness of the family system to support “due consideration” — a key factor in healthy functioning according to Nagy  (1991)?  If this line of reasoning holds, then Luci’s adherence to traditional research ethics protocols may violate her ethical responsibilities as a family clinician and researcher to protect (and enhance when possible) the welfare of the family system.

Biomedical ethics and most approaches to research ethics emphasize individual autonomy in decision-making, but this tends to decontextualize people from their social context, a criticism increasingly explored in feminist ethics.  Recognizing that human beings have autonomous moral status (i.e., their moral worth is not dependent on external considerations) need not automatically be equated with decision-making that is free from the influence of others.  Certainly, the influence of the researcher on the consent process needs to be kept to a minimum.  However, it is morally suspect to presume that decision-making itself must always be free of the influence of others. 

While some attention has been given to cultural or societal-level groups (e.g., Native American tribal considerations), little discussion has occurred about the moral relevance to decision-making of intermediate level groups such as the family. Yet in many cultures these more personal groupings impact one’s daily life most, and it is not uncommon for loyalty to one’s family to be given priority over individual interests. If Nagy’s theory of family functioning is correct, it would suggest that being in intimate relationships with others changes the level of influence on ethical decision-making we consider to be appropriate, particularly in contrast to non-intimate relationships. 

References

  • Bertalanffy, L. von. 1968. General Systems Theory. New York: Guilford Press.
  • Boszormenyi-Nagy, I. 1984. Invisible Loyalties: Reciprocity in Intergenerational Family Therapy. New York: Brunner/Mazel, Publishers.
  • Boszormenyi-Nagy, I. & Krasner, B. R. 1986. Between Give and Take: A Clinical Guide to Contextual Therapy. New York: Brunner/Mazel, Publishers.
  • Boszormenyi-Nagy, I. 1987. Foundations of Contextual Therapy: Collected Papers of Ivan Boszormenyi-Nagy, M.D. New York: Brunner/Mazel, Publishers.
  • Boszormenyi-Nagy, I., Grunebaum, J. & Ulrich, D. 1991. Contextual Therapy. In A. S. Gurman & D. P. Kniskern (Eds.), Handbook of Family Therapy, Volume II. New York: Brunner/Mazel.
  • Copeland, A. P., & White, K. M. 1991. Studying Families (Applied Social Research Methods Series, Volume 27).  Thousand Oaks, CA: Sage Publications.
  • Gelles, R. J. 1978. Methods for studying sensitive family topics. American Journal of Orthopsychiatry, 48(3), 408-424.
  • Greenstein, T. N. 2001. Methods of Family Research. Thousand Oaks, CA: Sage Publications.
  • Larzelere, R. E., & Klein, D. M. 1987. Methodology. In M. B. Sussman & S. K. Steinmetz (Eds.), Handbook of marriage and the family (pp. 126-156). New York: Plenum.

This case, like “The Case of the Over Eager Collaborator,” deals particularly with those populations who are affected by, or affect, archaeological research (stakeholders).  In the past, archaeology focused primarily on the study of ancient cultures.  Famous discoveries such as Schliemann’s at Troy and Carter’s tomb of Tutankhamen had made archaeology a world-famous discipline by the early 20th century, and archaeology has remained popular and important in the modern world.  As archaeology progressed, so did the depth and variety of archaeological research and discussions of archaeological ethics.  Presently, archaeologists work around the world at sites ranging from millions of years old to only decades old.  There are also archaeologists today who study the discipline and practice of archaeology itself in modern social, economic, political, and other contexts.

As archaeologists began questioning the place of archaeology in modern contexts, archaeological ethics came to the forefront of research and writing.  Books and articles on ethics have included discussions of such issues as stakeholders, protection of the archaeological record from looting, public education, and intellectual property (Lynott and Wylie 1995; Vitelli 1996; Zimmerman, Vitelli and Hollowell-Zimmer 2003).  In 2004, the Society for American Archaeology initiated the archaeological “Ethics Bowl,” in which graduate students debate case studies in front of an audience at the SAA annual meeting (SAA 2005).  These articles, books, and events have placed archaeological ethics among the most important issues in the discipline.

This case raises an archaeological ethics nightmare: a community split with heated debate over the value of an archaeological site. Though archaeologists, as stewards of the past and participants in creating it, see the value of archaeology and its broader discipline anthropology, it is often difficult to communicate that value to others.  In the booming modern context of American suburbia, how do archaeologists fight for preservation in the face of “progress”?

There are three important discussion topics related to ethics in this case: 1) The struggle to define “stakeholders” and their roles in the profession of archaeology, 2) the conflicting and ambiguous ethical standards in the profession of archaeology, and 3) ethical issues arising from team research in the social sciences.  Although this case is fictional, discussions of these issues are important to the discipline, as such dialogue could influence the decisions made by future researchers and students, especially those in or near American communities.

One commentator on “The Case of the Over Eager Collaborator” (see section 6 in this volume) notes that archaeologists necessarily deal with a myriad of stakeholders on any given project.  In this case, there are at least eight primary stakeholders with interests in the management of archaeological resources in Arrowhead: Avery; his research team; other archaeologists in the discipline; community members who support mall construction; community members who oppose mall construction; a corporate organization (Global Malls Inc.); members of a local Native American tribe; and people with various other opinions.  On a broader scale, stakeholders might also include archaeologists employed by the state, funding agencies supporting Avery’s research, other Native American groups, political officers, and many others.  If ethical archaeologists should consider the contexts of their research and respect the concerns of stakeholders, how are they to reconcile so many differing opinions?  Is this even possible without forfeiting some professional interest in stewardship?

Recently, archaeologists have been praising community-based archaeological research and, especially, archaeological practice that involves local indigenous populations.  In the SAA “Ethics Bowl,” the three C’s (Communicate, Cooperate, and Collaborate) have been an appropriate and well-received solution for most of the fictional case studies involving community dilemmas.  However, few archaeologists have discussed the potential difficulties and conflicts in community-based research utilizing such methods as Participatory Action Research (PAR).  No two communities or groups of stakeholders are the same, and thus no two community-based projects will present the same challenges; this case elucidates the complexities of working with or in different communities.  It is wonderful when the public learns from archaeologists or participates in archaeological research.  It is not enough, however, to say archaeologists should simply work with local communities; social scientists should also be aware of the consequences of such research.  People, individually or in groups, are not predictable.  We must therefore be flexible and open-minded, and should prepare to deal with the multiple stakeholders in our research in the most efficient, effective, and respectful ways possible.

The second major topic of the case reflects the seemingly opposing ethical codes in the profession of archaeology.  Today, archaeologists work all over the world and in each nation they encounter unique situations involving stakeholders and the archaeological record.  A plethora of international and national conventions, agreements, and laws help guide archaeologists in their research, though these are not usually binding, especially in regard to stakeholder responsibilities.  For additional guidance and discussion, many archaeologists turn to the ethical codes of archaeological or anthropological organizations.

In this case, there are three such focal organizations: the Society for American Archaeology (SAA), the World Archaeological Congress (WAC), and the American Anthropological Association (AAA).  As indicated by the case, some of the ethical recommendations made by these organizations seem contradictory (for the full text, see SAA 2005; WAC 2005; AAA 2005).  One can question the utility of such codes, by-laws, or principles in a discipline if they are incongruous.  In particular, if one of the goals of ethical codes is to teach future archaeologists responsible research practices, what are students to think of, or learn from, codes that provide contradictory advice?

Again, the goal is not to argue for the end of ethical codes in archaeology.  The main point of this section is that in any real-life (or even fictional) research situation, the circumstances and stakeholders will differ.  Because of this, no single ethical code, or even three of them, will provide definitive ethical research standards.  In every case, archaeologists should debate stewardship, accountability to local populations, commercialization, and related issues, and arrive at compromise solutions (or at least steps toward a common goal).  There are no simple and straightforward answers to issues of ethics — instead, there are principles, responsibilities, debates, and compromises.

The final section of the case study calls into question the ethical responsibilities of lead researchers and team members in group research situations.  During the GREE workshop, we discussed various ethical situations which could arise when multiple researchers work together on the same project.  These include questions about: ownership of data, right to publication, authority, mentor/mentee relationships, etc.  This case asks how differing opinions within a research group should be handled, specifically within the context of community/research group disagreement.

The majority of archaeology done in the United States today is Cultural Resource Management (CRM) archaeology.  These projects are run by public or private companies and, in short, CRM archaeologists attempt to identify archaeological resources which may be destroyed by new construction projects and to mitigate the loss of information by performing different scales of excavation.  CRM work is often quick work, but it still involves stakeholders.  An additional group of stakeholders in CRM projects are the team members themselves, since CRM is almost never accomplished by an individual.  Team members, who may number anywhere from two to twenty, often work under the leadership of a Principal Investigator (PI).  This arrangement may raise some of the same research ethics questions listed above (e.g., right to publication, authoritative voice).  Furthermore, the transient nature of CRM archaeology often leaves workers disconnected from their research sites, resulting in group research that is dominated by the research goals and analysis of the principal investigator.  Ideally, all social science research should be poly-vocal, and researchers should exchange ideas before, during, and after projects.  Especially within the social sciences, the opinions of the public should also be considered.  Again, ethical research in archaeology should include preparatory work and consideration of multiple viewpoints.

An increased awareness and popularity of public archaeology and archaeological ethics have brought archaeologists face-to-face with situations such as the one presented in this case study.  Few archaeologists still believe that archaeological research exists in a political, economic, or social vacuum.  After all, social science is research that deals, primarily, with living people.  It is time for all social scientists to consider the contexts in which they work and the consequences of their research.  The work of groups such as the Association for Practical and Professional Ethics, and the discussion of ethical research situations, will help inform future social scientists of these issues.

References

AAA 2005 “Code of Ethics of the American Anthropological Association,” available on the World Wide Web at: http://www.aaanet.org/committees/ethics/ethcode.htm.

Lynott, Mark J., and Alison Wylie, eds. 1995. Ethics in American Archaeology: Challenges for the 1990s, 2nd ed. Society for American Archaeology, Washington, D.C.

SAA 2005 “Principles of Archaeological Ethics,” available on the World Wide Web at: http://www.saa.org.

Vitelli, Karen D., ed. 1996. Archaeological Ethics. AltaMira Press, Walnut Creek, CA.

WAC 2005 “World Archaeological Congress Codes of Ethics,” available on the World Wide Web at: http://ehlt.flinders.edu.au/wac/site/about_ethi.php.

Zimmerman, Larry, Karen D. Vitelli and Julie Hollowell-Zimmer, eds. 2003. Ethical Issues in Archaeology. AltaMira Press, Walnut Creek, CA.

Social science researchers have an obligation to protect research participants. While most researchers hold this as a central tenet of their research, it is by no means a straightforward process. This case study highlights two aspects of conducting ethical research — obtaining informed voluntary consent and evaluating the costs and benefits of research. Both are challenging endeavors considering how social science research navigates a sea of multiple interests and meanings relating to both informed consent and cost/benefit analyses.

Dr. Clark, like many researchers, is affiliated with multiple institutions (e.g., IFSN and the university) and conducts research within many different cultural contexts.  While her university’s institutional review board (IRB) may grant an alternate form of informed consent in light of her concerns, IFSN’s consent process may have little to do with ethics.  Indeed, IFSN’s primary concern may be litigation rather than ethical considerations.

What if IFSN agrees to follow the decision of Dr. Clark’s university IRB to grant an alternate form of consent (e.g. verbal)? How should Dr. Clark go about drafting this considering appropriate forms vary depending on different contexts? For example, it may be more appropriate to use verbal consent when literacy rates among participants are very low.

It also is important to evaluate the positionality (i.e., cultural viewpoint) of all people involved with the study.  For example, how does Dr. Clark’s positionality (e.g., her status as a Ph.D. researcher, a woman, etc.) affect how she evaluates and interprets what constitutes informed consent?  How might this differ from how Zigiwaians conceptualize informed consent?  How might positionality (e.g., social class, race, etc.) among Zigiwaians affect interpretations of informed consent?  For example, does an “educated” city dweller conceptualize consent differently from a “non-educated” rural dweller?  How should Dr. Clark approach informed consent considering these differences?

Whether or not Dr. Clark proceeds with signed consent or some other form of consent, she also will need to conduct a cost/benefit analysis. This includes evaluating the potential costs and benefits of her research on individual community members as well as the community as a whole. Unfortunately, this is not a clear-cut process. For example, how should Dr. Clark weigh costs and benefits between individuals and the community as a whole? Is it ethical potentially to compromise the safety of a few community members (e.g. by having signed consent forms and asking about illicit timber harvesting activities) for the potential benefit of the community as a whole? 

Complicating the cost/benefit analysis further, there are many variables that cannot be clearly determined. For example, after conducting a cost/benefit analysis Dr. Clark may decide to move ahead with her research because she thinks its benefits outweigh the costs. In doing so, she is confident her research can strengthen IFSN's agro-forestry program. She cannot, however, guarantee that the results and recommendations derived from her research will be implemented or even considered. Should Dr. Clark take this into account when evaluating the costs and benefits of conducting the research considering these factors are out of her control?

While this case study highlights ethical considerations of informed consent in an international context, it illustrates ethical concerns that affect all social science research. Informed consent and cost/benefit analyses are central tenets of the research process, and we need to take them seriously. While there is no straightforward process of determining the best course of action, we can remain committed to protecting the rights of research participants by anticipating and evaluating as many factors as our faculties allow. Only then can we be assured that we are doing everything in our power to meet the needs of the very people social scientists are committed to helping.

Archaeology differs from most social and behavioral sciences in that living peoples are often not the direct subjects of archaeological research, particularly when dealing with the past in North America before European contact.  However, as recent research in the field and this case both demonstrate, archaeologists often must negotiate between several groups of living peoples in order to complete their research in what is becoming an increasingly complex political landscape. Substantial research has gone into exploring the relationship between archaeologists and Native Americans who are the living descendants of the people archaeologists study (Dongoske et al., eds. 2000; Swidler et al., eds. 1997).  While this relationship plays a role in this case study, the main focus is the broader relationship between archaeologists and other groups that have an interest in the past, also called stakeholders in archaeological research.

Every archaeological project has to deal with multiple stakeholders who have varying levels of power and authority over the research itself.  A typical project run by a professor at an American university may have several stakeholders, including the granting agency that provided the funding for the project, the land managing agency or landowner who owns the land upon which the research will be conducted, the university the professor works for, the facility in which the artifacts, notes, and reports from the project will be curated, the Native American groups who claim cultural affiliation with the area of study, the communities local to the area of study, and the archaeologist who is conducting the research.  Some of the relationships between these stakeholders and the archaeological research are codified in law; for example, land managing agencies will only allow research after legally required permits are obtained.  Other relationships are not quite as formalized, such as the relationship between archaeologists and the archaeological record.  While archaeologists do have some legal responsibilities to the archaeological record under state and federal permitting requirements, archaeologists are mostly guided by several codes of ethics developed by professional societies in the discipline (American Anthropological Association 2005; Register of Professional Archaeologists 2005; Society for American Archaeology 2005; World Archaeological Congress 2005).  For the most part, these codes of ethics do not explicitly prohibit specific actions, but instead attempt to encourage archaeologists to think and act responsibly towards the archaeological record.

The Society for American Archaeology’s Principles of Archaeological Ethics is probably the code referred to most often when dealing with ethical dilemmas in archaeological research.  However, one of the main pitfalls of the Principles of Archaeological Ethics is the assumption that the scientific value of archaeological research takes precedence over all other ways in which the archaeological record can be valued.  In the case presented here, this value system, under which the protagonist, Millie, operates, is pitted directly against other value systems that emphasize the commercial value of artifacts and the less tangible connections that landowners and communities feel to the past through the archaeological record.  Most if not all archaeologists would argue that the scientific value of the archaeological record far outweighs its commercial value, but archaeologists often falter when trying to explain why this is the case to other stakeholders, especially in a way that resonates with the general public.

The situation presented in this case is challenging, as all potential courses of action have negative consequences.  Clearly, Millie initiated her research alongside an effort to educate the local community and the owners of archaeological sites about why archaeologists value the scientific research potential of the archaeological record, in order to prevent pothunting on the archaeological sites in her study area.  However, it is less clear whether Millie adequately took into account the other ways that people, specifically landowners, value archaeology.  The landowner in this case had an obvious interest in learning more about the archaeological record, but may have felt that the best way for him to learn was to have a tangible link to the past through artifacts from a site.  Situations like this one are not uncommon in archaeological research, and archaeologists should carefully consider their actions and take effective preventative measures to avoid such value conflicts in their own research.

References

  • American Anthropological Association 2005 Code of Ethics of the American Anthropological Association. American Anthropological Association. http://www.aaanet.org/committees/ethics/ethcode.htm (accessed July 27, 2005).
  • Dongoske, Kurt E., Mark Aldenderfer, and Karen Doehner, eds. 2000. Working Together: Native Americans and Archaeologists.  The Society for American Archaeology, Washington, D.C.
  • Register of Professional Archaeologists 2005 Code of Conduct and Standards of Research Performance. Register of Professional Archaeologists, http://www.rpanet.org/ (accessed July 27, 2005).
  • Society for American Archaeology 2005 Principles of Archaeological Ethics. Society for American Archaeology, http://www.saa.org/aboutSAA/ethics.html (accessed July 27, 2005).
  • Swidler, Nina, Kurt E. Dongoske, Roger Anyon, and Alan S. Downer, eds. 1997. Native Americans and Archaeologists: Stepping Stones to Common Ground.  AltaMira Press, Walnut Creek.
  • World Archaeological Congress 2005 World Archaeological Congress First Code of Ethics, World Archaeological Congress, http://ehlt.flinders.edu.au/wac/site/about_ethi.php (accessed July 27, 2005).

In this case study, the central issue revolves around Kenneth’s role as a researcher.  First, how does this role affect what people at the site can expect from him in terms of confidentiality?  Second, how does this role affect how he responds to overhearing information that may change the course of the impending union vote?  And how will it affect his research goals?

In terms of Kenneth’s role in his research site, does he have an obligation to act on behalf of the workers whose union votes may be tampered with?  Or does he have an even stronger obligation to avoid disrupting or changing the situation at his research site?  Should Kenneth act as an “objective” researcher, avoiding involvement in the situation, or should he be an advocate for his participants?  This is an age-old question in the social sciences and one without a completely satisfactory answer.

Proponents of traditionalist, positivist social science would probably argue that intervening in this developing situation would somehow contaminate Kenneth’s data, or keep the researcher from accessing the “Truth” — the one and only “objective” reality of the research site, which should unfold without his interference.  This may be true in the sense that getting involved may block Kenneth’s ability to conduct further observations at this company.  But growing numbers of social scientists recognize not only that the researcher’s very presence at the site affects his or her data, but also that there are many “truths” rather than one objective reality.  Feminist researchers in particular have argued that the position of the researcher (his or her gender, race, social class, and other characteristics), as well as that of the participants, will influence the questions the researcher asks and the answers he or she finds (Deutsch 2004).  Since all researchers carry their own backgrounds and biases, truly “objective” social science is not a realistic goal and never has been.  Researchers need only be honest with their audience about their own positionality and, in some circumstances, should become involved in their research sites, especially when they have knowledge that may help their participants.  The goal is to retain validity while being honest in a way that traditional positivist research has often not been.  Although this latter perspective has gained much legitimacy within sociology, there is still some disagreement within the discipline along the fault lines between qualitative and quantitative researchers, and even among qualitative researchers (Taylor 1999).

At the same time, there are other circumstances to consider in this situation.  Will going public with the information he has overheard compromise the physical safety of the researcher?  Will it involve him in a legal battle if plans to tamper with the union vote are uncovered?  Not only does the researcher face the epistemological questions of his discipline, but also the additional issues faced by whistleblowers everywhere.  Further complicating matters is the fact that he did not hear specifically what was being planned, only that one or more drivers, aided by management, are planning to do something to challenge the rightful outcome of the vote.

In this case, it seems that his responsibilities conflict.  The terms of confidentiality he offered to workers at the site would seem to cover the information that he overheard.  On the other hand, he seems to have an ethical responsibility to the other workers at the site who may be harmed by those who would tamper with the vote.  Perhaps he could mediate this conflict by reporting the information he overheard without providing names.  This would protect confidentiality while keeping union officials on heightened alert for vote fraud.

References

  • Deutsch, Nancy L. 2004. “Positionality and the Pen: Reflections on the Process of Becoming a Feminist Researcher and Writer.”  Qualitative Inquiry 10(6): 885-902.
  • Taylor, Peter Leigh. 1999. “Qualitative Cowboy or Qualitative Dude: An Impasse of Validity, Politics and Ethics?”  Sociological Inquiry 69(1): 1-32.

The key questions in this case lie in the tension between maintaining a research participant’s confidentiality and a researcher’s ethical obligations to the public weal.  While Barnes is at one level ethically obliged to maintain the confidentiality of his participant, tension emerges from his awareness of these weapons and the harmful uses to which they could be put, as well as from the possibility that he could be held accountable for not revealing their existence.  Barnes’s decision is further complicated by the certainty that reporting this individual to the authorities will ruin his research prospects, and by the not-insignificant possibility that doing so will also place him at risk of retaliation by the participant or his comrades.  In discussing this case, one might look to other examples of research involving criminality — studies of illicit sexual activity (Humphreys, 1970) or of drug sales and trafficking (Adler and Adler, 1983), for example — in which researchers have maintained the confidentiality of their participants and have not reported illegal behaviors.  In those cases, however, the crimes are widely considered “victimless.”

The possibility, however remote, that these weapons could be used for violent criminal activity or in a revolt against government authority — and thus the possibility that the consequences would be much more severe and widespread — problematizes a comparison with victimless crimes.  On the other hand, one might argue that the types of weapons involved in this case are commonly owned, and thus represent a level of threat to which law enforcement agents are accustomed and which they are prepared to encounter.  Ironically, it is perhaps the rather prosaic nature of the weapons that complicates the issue: if Barnes’s participant had revealed that he’d constructed a truck bomb, Barnes’s obligation to the public weal would be unquestionable.  In this sense, Barnes’s decision may be guided by considering the threat posed to the public by various weapons along a continuum of lethal force.  By this utilitarian logic, Barnes might dismiss the need to report illegal personal weapons such as rifles.  What complicates this scheme, however, is the difficulty that would arise in intermediate cases — discovering an illegal pistol at one end of the continuum or a massive truck bomb at the other makes for an easy decision, while a machine gun might not.

In such cases, it might be useful for researchers to rely on cues of an individual or group’s intent, or on the group’s narratives vis-à-vis their weaponry.  In this case, Barnes might note that American militias generally consider their personal weapons (rifles, pistols, etc.) as defensive in nature — necessary tools to protect themselves from potential governmental coercion or tyranny.  A large bomb, by this standard, would be an offensive weapon, and is thus not consistent with the group’s narrative of action.  Possession alone of such a device could then be reasonably assumed to reflect imminent criminal intent, and would thus warrant action on the part of the researcher.  The converse is not true, however — it does not necessarily follow that possession of weapons considered by their owners to be defensive in nature can be seen as an absence of violent intent.  Ultimately, a researcher in such a case cannot definitively gauge his respondent’s intent.

Nancy Scheper-Hughes (2004) provides an illustration of one alternative to non-reporting of observed criminal behavior.  After studying illicit human organ trafficking networks, she offered general testimony to various legislative bodies and health agencies about the nature of the networks and their operation.  Following this example, Barnes might offer a description of militia activities to the appropriate authorities without naming specific members of the groups he’s studied.  Upon finding evidence of coerced organ donation, however, Scheper-Hughes actively cooperated with international law enforcement agencies to target traffickers and surgeons.  Her decision to do so was clearly the result of reaching an ethical tipping point — and how to identify that boundary is precisely the dilemma Barnes faces.  Unfortunately, though, this example offers little guidance, for coercive organ harvesting is such an egregious violation of all humane ethical standards that it cannot be seen as comparable to the crime of violating gun possession laws.

In considering his design of the study and his discussion of informed consent with his participants, one might ask whether Barnes’s knowledge that many of his research participants maintain a defiant attitude towards many forms of Federal regulation — particularly in matters of gun control — should have led him to a more explicit or specific formulation of his informed consent materials.  Looking forward, this could be instructive for his future research and that of others.  Offering warnings against discussing specific illegal acts — ranging from the common (illegal firearms possession) to the most extreme (bomb plots or other conspiracies) — rather than a blanket proscription against discussing “illegal activity,” might have prevented this situation in the first place, and could prevent a recurrence of such a case.  One must ask, however, whether we can reasonably expect a researcher to anticipate each and every possibility.

Similarly, by obtaining a Certificate of Confidentiality from NIH or other Federal agencies, Barnes can protect his participants’ recorded interviews from subpoena, thus further minimizing the risks they face by participating.  These documents, however, do not prohibit researchers from voluntarily disclosing information about research participants in cases in which the researcher believes them to be at risk or a danger to others.  Regulations governing these cases explicitly state that if a researcher intends to make such voluntary disclosures, he should clearly indicate this on the consent form provided to potential participants.  This suggests that Barnes, ideally, should have more thoroughly considered the possible criminality he would encounter and set the standard for disclosure beforehand.  It is, however, difficult to predict how an interviewee would react to such a practice.  On the one hand, this might produce a more guarded interview.  On the other, such forthrightness and honesty in the early stages of the consent process might be seen as an indicator of the trustworthiness of the researcher.  Unfortunately, for any benefit this practice might provide, it could actually increase the risks faced by the researcher: what might happen when a research participant, in the middle of a taped interview, catches himself revealing behavior the researcher has indicated he will report to the police?  This illustrates the unpredictability that characterizes the core of this case.  The uncertainty about the purpose to which the weapons will be put, the unpredictability of participants’ reactions, and the uncertain risks to both the public and the researcher himself together create the dilemma.

References

  • Adler, Patricia A., and Peter Adler. 1983. “Shifts and Oscillations in Deviant Careers: The Case of Upper-Level Drug Dealers and Smugglers.” Social Problems 31(2): 195–207.
  • Humphreys, Laud. 1970. Tearoom Trade: Impersonal Sex in Public Places. Chicago: Aldine Publishing Company.
  • Scheper-Hughes, Nancy. 2004. “Parts Unknown: Undercover Ethnography of the Organs-Trafficking Underworld.” Ethnography 5(1): 29–73.

This case study brings to light some of the potential problems that can arise when people with very different belief systems interact.  It also highlights some of the issues inherent to the extreme power differentials created by colonialism.  American anthropology was born out of a colonialist ideology, and this legacy continues to complicate relationships between anthropologists and indigenous groups today.

The colonization of North America has been devastating to the continent’s indigenous populations. The westward expansion of Euro-Americans acting on the ideological assertions of manifest destiny caused the wholesale slaughter and eventual extinction of some American Indian cultural groups, and displaced many of those who survived the assaults.  The driving of the final golden spike joining the Union Pacific and Central Pacific railroads in 1869 symbolized the opening of the West for Euro-American settlement, while the display of “End of the Trail” at the 1915 Panama-Pacific International Exposition in San Francisco emblemized prevalent Euro-American assertions that the “Indian Race” was doomed to extinction.

During and since the era of initial colonization in North America, tens of thousands of sets of historic and pre-contact indigenous human remains have been exhumed and placed in repositories around the country.  The continued possession of these human remains by federal and state agencies is viewed by some as a continuation of colonialism; first control of the living and now control of the dead.

Since passage of the Native American Graves Protection and Repatriation Act (NAGPRA) of 1990, public attention has increasingly focused on the Indigenous dead of North America. NAGPRA requires all federally funded repositories of Native American (as defined by the law) human remains to evaluate whether any living lineal descendants of particular sets of human remains exist, and/or whether “cultural affiliation” (as defined by the law) between a set of human remains and any contemporary federally recognized Native American group(s) can be reasonably identified.  NAGPRA provides a process for repatriation of the remains should recognized lineal descendants and/or culturally affiliated groups choose to employ it. But NAGPRA gives authority only to federally recognized Native American groups, and questions have arisen as to whether “cultural affiliation” can be identified through scientific analysis, as some have assumed the law requires.  Although the law was initially thought to support human rights, its numerous weaknesses for this purpose are becoming apparent.  Although many anthropologists support the repatriation of human remains to tribal groups, others have voiced opposition to the NAGPRA repatriation process.  NAGPRA has also sparked a renewed interest among some in conducting additional studies on these sets of human remains.

At primary issue in many contemporary conflicts between Native Americans and Western scientists is control of indigenous North American human remains.  Some indigenous North Americans have asserted their legal right and moral obligation to protect their ancestors’ remains.  These cultural groups assert that the Native American dead should be given the same respect given any human.  Federal agencies assert their claim that human remains recovered from federal lands are federal property.  Some scientists argue that they have a right to scientific freedom that includes performing studies on indigenous human remains.

Recent controversies regarding ancient North American human remains have often focused on questions of race.  These disputes have been further aggravated by hyperbole in the media.  Although the majority of anthropologists assert that race is a cultural construct, the “First Americans” debate has reinvigorated racism against Indigenous peoples in some communities.

A question remains as to how much can be learned from the study of pre-contact North American human remains and what importance should be placed on the potential knowledge recovered from such studies.  One should ask whether Western scientists should prevail when their work has the potential to cause more harm than good.

Western belief systems dominate others because of colonialism, but does might make right?  Or do we owe it to ourselves to question the foundations of all belief systems, including our own, before we force our ways of finding truth on others?

This case raises a number of issues concerning the challenges of conducting research in an international setting where cultural factors have the potential to interfere with the requirements of ethical research as desired and required from the home country. I will comment on the issues of informed voluntary consent and respect for persons.

Informed voluntary consent is critical to conducting ethical research, and in this case it is compromised by an inequality of power. The head teacher exercises considerable power over the teachers, and his insistence that the teachers participate interferes with their right to volunteer for the study or not.  Consent has three elements (information, comprehension, and voluntariness), and the head teacher would like to bypass all three.  When Dr. Sheridan attempts to share information about the study with the teachers, the head teacher lets her know this is not necessary since all the teachers will be participating.  The head teacher thereby interferes with the teachers’ right to volunteer and also with their right to comprehension. By not allowing a discussion of participation and simply stating that they will all participate, Mr. Konadu hinders their ability to ask questions about the study and thus to offer informed consent. Mr. Konadu’s actions also violate the element of voluntary participation, which means participation free of coercion and undue influence, by insisting they all take part.

In examining respect for persons, it is important to examine the nature of the relationships in the research process.  Because of the power and authority embedded in the relationships in this case, respect for persons is challenged on multiple levels. While it is important to avoid coercion, the researcher is in an ethical quandary. The support offered by both the district director of education and the head teacher is essential to the study, but this support stands to coerce participation and thereby compromise the study.  Participants have the right to agree or decline to participate, and the strong-armed support of administrators takes this choice away from them.  The power dynamic at work is a boss-employee relationship, both between the district director of education and the head teacher and between Mr. Konadu and the teachers.  This not-so-subtle pressure from the district director of education has led to outright pressure by the head teacher to force the teachers to participate in the project.

Respect for persons clearly means that you cannot coerce participation.  It also means that participants should not be unduly influenced by other people. This is the difficult part of this case. Although Dr. Sheridan is not coercing the participants, they have indeed been coerced into participating in the study.  This is a challenge for Dr. Sheridan.  Should she proceed with the study knowing that the participants were coerced into participating? What if the participants would have participated anyway?

Since the coercion seems to come from two levels, Dr. Sheridan may have to address these issues at the school level and at the district level. Due to cultural norms, it would not be appropriate for Dr. Sheridan to disagree with Mr. Konadu in front of the teachers. Since she is female, and he is male, she is expected to defer to him.  It is at this point that she must excuse herself and have this conversation with Mr. Konadu in a delicate manner so that he can save face, and she can let him know of her institutional and ethical responsibilities.  Perhaps in this smaller setting, she can assure him of her appreciation and willingness to have all of his teachers participate, but she can share the institutional paperwork which requires voluntary participation.

If Dr. Sheridan is unable to convince Mr. Konadu to allow the teachers to choose to participate, what should she do? She could leave the site and go to a different district where she also has permission to complete the study. What if she had no other areas to conduct the study? If she went ahead and completed the study at this site, she could speak with the teachers individually to gain consent, but it would be possible that some of the teachers would still be influenced by Mr. Konadu’s insistence that they participate. Perhaps, she could continue the study, but she would need to document this coercion.

Dr. Sheridan would also need to meet with the district director of education to discuss his role in coercion of teachers to participate. She will need to meet with the district director of education and convey her appreciation for his support of her work in the district while also describing the requirements for her study as outlined by her institution. During this discussion, Dr. Sheridan must explain the concepts of informed consent and voluntary participation as well as her ethical responsibility to these principles in her study.  If the district director of education does not agree to inform the head teachers that the teachers do have a right to participate or not to participate, Dr. Sheridan’s entire study will be compromised, and she will have to decide whether or not she should proceed with the study in this district.

Although Julie had the best intentions, she made a mistake common in many research situations. She should have taken more time to discuss her research with participant communities and individuals. The easiest way to do this is to design research to be collaborative. With this approach community members are also immersed in the research. It also opens opportunities to increase public outreach in the regions where Julie works, rather than limiting outreach to North American institutions and communities. Unfortunately, collaboration, immersion, and public outreach are difficult concepts to define and even harder to actualize.

Why didn’t Julie know to do these things? Often students are not adequately prepared for including collaboration and public outreach as parts of fieldwork. Fieldwork is unpredictable, unfamiliar, and often uncomfortable. Taking the time to interact with people in a foreign community is extremely time consuming, often taking more time than the research itself.

Compensation is a difficult notion to reconcile, especially when one considers that Julie’s career and reputation are strongly rooted in the information she collected during her PhD research in these communities. Adequate compensation is certainly important to consider in light of this. Although her fieldwork was short-term, she is gaining long-term benefits. Compensation should probably benefit the community for the long-term as well. There were probably things Julie could have done to meet long-term community needs.  For example, as an anthropologist, she may have been able to offer her experience and training to meet local community goals of cultural preservation.

If Julie had discussed her project with the community more, she would know whether a return visit was necessary. Although the resolution of problems such as Julie’s is project-specific, it is important to realize that cross-cultural research is undoubtedly going to involve unfamiliarity and naiveté on the part of the researcher. This is especially true when individuals approach communities with personal goals in mind.

Julie should have at least translated her journal articles into Spanish for the community. A rough translation would be better than nothing at all. Even if she did not do that, she should have brought English copies of her publications. The act of sharing her work is just as important as the information itself.

Hopefully this case study invites discussion of these issues and some sharing of experience that may highlight the unfamiliar and unexpected considerations of fieldwork.

Although this case involves a specific experiment in psycholinguistic research, several general ethical questions are addressed that can be applied to work outside the area, including risk assessment, formation of the informed consent, subject selection, credit/participation, and reporting to the Institutional Review Board (IRB).  To aid in this treatment, the American Psychological Association (APA)’s Ethical Principles of Psychologists and Code of Conduct will be consulted as the ethical standard of this field. APA guidelines consist of five overarching principles that are meant to be general and aspirational, coupled with ethical standards that are meant to address specific incidents that may arise in the course of psychological research.  This commentary will address Part 1 and Part 2 of the case study in turn.

Part 1

Part 1 introduces the experimental situation and raises background issues that may arise with research of this sort. The following are several themes that can be elaborated upon in discussion. Underlying these themes is a more general moral tension that runs throughout this case: the researcher’s obligations to science versus the need to protect the rights of research subjects.

Risk assessment

The APA’s ethical standard 3.04 (Avoiding Harm) states that researchers must “take reasonable steps to avoid harming their clients/patients, students, supervisees, research participants, organizational clients, and others with whom they work, and to minimize harm where it is foreseeable and unavoidable.”  This scenario, however, is meant to provoke thought on this standard when the appropriate level of safeguarding in an experimental situation is less than obvious.  In Sophia’s case, the literature provides no guidelines for use of a negative role-playing task.  Research suggests that writing about negative events may be harmful, but it is not clear that that is what subjects are doing.  How should researchers assess such situations?  What are the “reasonable steps” that could be taken to minimize harm if the experiment should be allowed to proceed?  

Also underlying this dilemma is the role of the experimenter in making these judgment calls and in deciding whether the benefit of the research outweighs its potential risk. Question 1 challenges this role.  Further discussion can center on the position of the IRB versus the professional responsibilities that are placed upon members of academia. For example, APA explains that their guidelines were purposely written in such a way as to allow professional judgment on the part of psychologists (stated in introduction). How does this judgment come into play when ethical dilemmas arise? When should potential biases be protected against? In other words, how much responsibility should be given solely to the investigator rather than to a governing board such as the IRB?

Informed consent

An important concern in this study lies in the formation of the informed consent document. The issue arises early on: the data are intended for the development of government technology, and it is plausible that some subjects would not want to participate in such an endeavor.  An obvious course of action would be to include this information in the informed consent form. However, doing so may change the results substantially and diminish the benefit such research has for homeland security.  Other issues stemming from this problem may be brought up in discussion.  For example, what if the scenario is slightly changed such that the experiment is being funded by these agencies and they place this information under security clearance? Does this shift the moral obligations of the researcher from the subject to the country?  Should the experiment not be run if subjects cannot know the use of their data?  Would it be enough to let subjects know of this restriction?

An additional issue concerning informed consent formation raised in this case lies again in the potential risks students face from participation in this task and how much information concerning this should be divulged. This is a classic ethical dilemma when conducting research (applicable also to the previous issue).  On one hand subjects have the right to know what they are agreeing to do. APA ethical standards dictate that researchers must inform participants of any “reasonably foreseeable factors that may be expected to influence their willingness to participate such as potential risks, discomfort, or adverse effects” (ethical standard 8.02a).  However, if the task is divulged the experiment may be jeopardized.  The argument from the literature for a potential risk is not very strong, but does this matter? Where is the line and who decides this?  When does it become necessary to include hypothetical problems in an informed consent?

Subject selection

A third general issue addressed in this case deals with subject selection and recruitment.  Question 3 raises issues concerning screening and use of students as subjects.  Use of language groups, though seemingly innocent, sometimes involves separation of ethnic groups (in this case: Hispanic, Asian, and Caucasian).  Combined with the essay topics (terrorism and crime), this may cause discomfort in participants just by its implications.  What are the ethical responsibilities of a researcher in this situation?  Additionally, the vulnerability of students as subjects can also be addressed in discussion at this point.  Should they be treated with more care than other sampling populations?

This scenario also touches on the use of incentives.  Having the experiment fulfill all of the students’ course requirements induces students to want to participate (leading to problems like those seen in Part 2).  APA’s guidance on inducements is an awkward fit for this situation: psychologists must “make reasonable efforts to avoid offering excessive or inappropriate financial or other inducements for research participation when such inducements are likely to coerce participation” (ethical standard 8.06a). The current incentive is not excessive.  But is it coercive?  An interesting point for discussion centers on the potential distinction between personal ethical choices and principles laid out by an institution.  Is simply following standard guidelines enough? What if these guidelines do not specifically address the moral issue in question?

Part 2

Part 2 is concerned not so much with experiment preparation as in Part 1, but with issues that may arise during the experimental situation.  More specifically, the problems Sophia faces concern subject credit/participation and reporting to the IRB.

Credit and participation

Sophia, in managing the concerns and behavior of the participant, chooses to refuse him full credit and participation in her study. Was this the correct solution?  APA guidelines state that when the concerns of the researcher are in conflict, they must “attempt to resolve these conflicts in a responsible fashion that avoids or minimizes harm” (Principle A).  Is this what occurred?  Did Sophia let her personal annoyance get in the way of resolving the situation peacefully?  Challenge students in discussion to come up with alternative courses of action along with the pros and cons of each.

Reporting to the IRB

Several concerns arise when considering the role of the IRB in this case study. The crux of the dilemma lies in whether or not Sophia should report the incident with the offending participant.  Doing so would jeopardize her research and its use, yet provide a safeguard against potential future harm of participants as well as provide a second opinion on a judgment that is potentially biased.  This raises two topics for discussion. First, how much information needs to be given to the IRB?  What qualifies as a harmful situation?  Second, should researchers rely on their own subjective judgment?  What about experimenter bias?  Are there ever situations where experimenters can rely on their own judgment calls?  In the discussion, it might be interesting to highlight the conflict between thorough reporting and wasting the IRB’s (usually taxed) resources.

The IRB is not only in place to protect the participants, but the experimenters as well.  A second thread of discussion—not often addressed—concerns the potential harm that researchers face in some experimental situations.  Sophia was bullied and sexually harassed by the male participant.  Is this something she should report to the IRB as well?  Is the task designed in such a way that these situations may reasonably arise in the future?  Should the experiment be re-evaluated for her safety as well?  Should she make this decision or allow the IRB to decide?

References

  • American Psychological Association. 2002. Ethical Principles of Psychologists and Code of Conduct. http://www.apa.org/ethics/code2002.html.

This case examines ethical issues involved in conducting student research, a practice common in undergraduate experimental psychology classes. Specifically, it considers the circumstances under which student research is exempt from review by an institutional review board (IRB) and suggests the importance of incorporating research ethics training into experimental psychology class curricula. This case also examines broader issues in conducting research, and is an example of how poor planning at early stages of research development can lead to complex and potentially risky circumstances. James, the main character in this case, faces increasingly difficult ethical choices that might have been avoided if he had taken greater care in assessing the risks of his students’ research project and submitted their proposal to the IRB for review.

The case begins with James deciding whether he must submit his students’ research proposals for review by his university’s IRB. James considers whether his students’ research, which will be conducted in class for an educational purpose only, is exempt from review. Although the National Research Act, Public Law 93-348, states that the generalizability of the knowledge gained from a research study should be considered when making decisions regarding exemption from review, it also states that the potential for harm must be considered. Studies that do not contribute to generalizable knowledge are exempt from review only if they pose no harm to their participants. James, however, is making his decision before he knows enough about the projects to make it; he must know the nature of the studies before he can decide, in an informed way, whether they should be submitted for review.

James consults with more experienced graduate students when deciding whether or not to submit his students’ research projects. Although the input of one’s peers can be invaluable in making ethical decisions, peers can also be a source of bias, since they share a common perspective. James and his fellow graduate students may share the view that submitting in-class projects for review is far too time consuming to be practical. Including other perspectives in the discussion, including those of potential participants, would assist in predicting risks to participants that may otherwise be difficult to imagine. Submitting research proposals to an IRB is an efficient and effective way to gain diverse perspectives, because the typical IRB includes representatives from outside the scientific community as well as research scientists from a variety of disciplines.

James’ students generate a variety of research project ideas, and most pose no harm to research participants. However, one project involves the assessment of depressive symptoms, and it is less clear what risks may be involved. It is at this point, when James knows the exact nature of the proposed research studies, that he is able to consider whether or not he should submit the proposals for review by his university’s IRB. The studies that clearly pose no harm to research participants would be exempt from review, according to the regulations of his IRB. However, the project involving the assessment of depressive symptoms should be submitted because James is probably unprepared to assess the potential harm of the study. The IRB would most likely be better prepared to assess accurately the risk involved. It is possible that having participants reflect on their depressive symptoms could increase their severity, and because the research is being conducted by students and on students from the same class, the possibility arises that students could learn about each other’s depressive symptoms. Thus, potential risks include the negative effects of asking about psychopathology and the loss of privacy and subsequent damage to the depressed students’ reputations.

Even if asking about depressive symptoms does not harm research participants directly, having this information could increase James’ degree of responsibility for the well-being of his participants and students. James never considers what his responsibility toward his students would be if he learned that several of them were depressed. His role as teacher requires him to consider the well-being of each individual student, and although his role as researcher requires him to consider the safety of his research participants, it also requires him to maintain confidentiality. James faces this ethical dilemma when he learns that several of his students are endorsing symptoms of hopelessness, thoughts of suicide, sleeplessness, problems concentrating, and irritability. Because James failed to prepare for this situation, he is left with imperfect response options. He is unable to identify the depressed students directly, and he feels that saying nothing to the students would be irresponsible. James decides that his best option is to announce to the class that several students may have depression, and he recommends that these students visit the student counseling center.

Because James did not consider the ethical implications of his students’ research projects, and because he did not submit the depression study for review, he faces a series of increasingly difficult ethical dilemmas. James should have submitted the one questionable study for review because he was incapable of assessing the risks and responsibilities involved. In addition, James should have involved his students in discussions about research ethics and the IRB, since these are central aspects of conducting research in psychology. This may have helped James to avoid the ethical dilemmas that were to come. Once James knew about his students’ depressive symptoms, however, he was compelled both as a teacher and as a researcher to take action, and the harm involved in potentially breaking confidentiality was probably less than the harm involved in allowing potentially depressed students to go without help. Although many research studies assess psychopathology without including treatment, they are typically designed in such a way that research participants are informed of their diagnoses and provided with treatment referrals. James should never have allowed the study to be conducted as it was, and submitting the study in question to his IRB probably would have prevented him from doing so.

References

National Research Act, Pub. L. No. 93-348. (1974).


This case raises issues of the role of the IRB and the relationship between this ethical governance board and the individual researcher. Initially, some issues raised by the case may seem ethically blurred. However, this is a case in which the researcher has a clear and well-established responsibility to submit a human subjects research proposal for formalized peer oversight.

The National Research Act, Public Law 93-348, requires that any institution conducting research that involves human participants establish an Institutional Review Board. All proposals for research involving human participants must be submitted to this board, which is charged with determining the legality of the research and, more importantly, its compliance with higher ethical standards. The board’s jurisdiction extends to all research conducted to add to “generalizable knowledge.” These boards have ultimate authority over what research can and cannot be conducted at an institution. Research that has not yet been approved (or, more obviously, has been rejected by the board) cannot be pursued.

The question of what constitutes “research conducted to add to generalizable knowledge,” and hence what forms of research obligate a researcher to submit to board oversight, is perhaps best answered in the negative: what kind of “research” falls outside the jurisdiction of the Institutional Review Board? (See Title 45 CFR Part 46.101 for the complete exemptions.)  Pilot testing is one form that may not require formal oversight. In many cases, pilot testing of a new method or measure is first conducted with a small number of people. Often these pilot participants are members of the laboratory, graduate students, or a few of the researcher’s close friends. These pilot tests, of extremely limited scope, with little risk, and with participants who are often also formally involved in the research, are typically exempt from IRB oversight. Note that this exemption is quite narrow. It does not include research with any possibility of risk, or “pilot” research that includes participants with little connection to the laboratory. Given the limited nature of the exemption, it may be prudent for researchers to check informally with their IRB before deciding to proceed without board oversight.

Second, data collected solely for administrative purposes are not subject to IRB oversight. For example, university Registrar offices maintain large databases of student academic records. These data are employed to track student and university progress, but are not systematically collected to answer scientific research questions. For this reason, university administrators are not required to submit their tracking system (or similar databases) to the IRB for approval.

Finally, research that is conducted in the classroom for didactic purposes is also considered exempt. For example, a professor teaching a statistics course might collect a small data set from his students in order to illustrate a statistical technique (for example, the physical heights of the men and women in the class). The data are clearly not collected to add to scientific knowledge and carry no potential for harm. As such, it would be unwieldy and excessive to ask this professor to submit a proposal and wait for formal approval (not to mention a waste of time for the reviewers).  This exemption is slightly less clear in the case of student-led research. For example, research methods courses in psychology and sociology often involve a component in which each student (or group of students) conducts a small study in order to gain hands-on experience with research design, data collection, and statistical analysis. Typically, the students’ classmates serve as the research participants. This student research is technically exempt. However, the instructor should be sensitive to ethical considerations and ensure that student research meets the same standards required of research intended to add to scientific knowledge.

It is a common misconception that research not intended for publication is also an exempt category; the "generalizable knowledge" clause is sometimes mistakenly believed to refer only to research published in scientific journals. In fact, the clause should be interpreted more broadly. For example, a graduate student who conducts a small study and plans to present the data at a departmental colloquium, but not to publish, is indeed adding to generalizable knowledge: the study was conducted to answer a research question, and the answer obtained was shared with part of the research community. Extensive pilot testing, whether or not it is published, is also not exempt. Data obtained from such pilots contribute to the researcher's understanding of the research question and, even if never directly published, inform the direction of future published research. Lack of intent to publish is not considered a legitimate reason to bypass the oversight of an ethics board. Human participants have the right to be protected by independent ethical oversight whether or not the data they contribute ultimately appear in print.

The study that Joshua plans to conduct does not meet the requirements for exempt research. Although the study is a "pilot" study, the use of community participants moves this proposal outside the confines of typical pilot work, and it must be considered by an IRB like all other research. There may be a temptation for researchers to do the work of the IRB themselves; this is illustrated by Joshua's argument about the non-coercive nature of the gift and the limited risk of the project. However, researchers have a vested interest in the process and may not be capable of making an unbiased decision about the risks involved in their research. For example, Joshua does not seem to consider the risk the experiment may pose to individuals with gambling problems. IRB members might have noticed this risk and been able to work with Joshua to mitigate it. By pursuing his research without the input of the IRB, he lost this valuable insight. Joshua's committee member, Dr. Johanson, also demonstrates the temptation of researchers to predict the ruling of the IRB. In addition, he sets a poor example for a graduate student: his behavior indicates to Joshua that IRBs and ethics are not primary concerns in psychological research.

Finally, the graduate student who counsels Joshua that research conducted for didactic purposes is exempt from IRB approval is correct on this point. However, she is incorrect to stretch the exemption to cover Joshua's research. While it is true that graduate training is a learning process, it also produces (and is intended to produce) empirical findings with implications outside the classroom. As such, the fact that the research was conducted during graduate school does not constitute a broad exemption from ethical oversight. In fact, part of graduate training ought to be in research ethics and in the applied skills of communicating with an IRB.

This case study includes many of the arguments hurried or frustrated researchers may use to justify bypassing the oversight of an ethics board. When deadlines approach it may be particularly tempting to find ways to avoid an extra step in the research process. However, all researchers who employ human subjects should be grateful for the donation that participants make to scientific knowledge and should repay this debt with a genuine consideration of their welfare. Submitting research proposals to the IRB is only one way, but an extremely important way, to ensure that subjects are protected.

Reference

Title 45 CFR Part 46.101.

Introduction

“Oral History Projects and Research Involving Human Subjects” focuses on several prominent issues in the ongoing debate about whether oral history is “research” as defined by HHS and therefore subject to HHS regulations, namely IRB review. The case raises questions about the roles of IRBs and professional organizations and illustrates the problems that emerge when IRB guidelines are applied to disciplines previously excluded from such review (e.g., oral history, anthropology, ethnography, and folklore). While the case primarily concerns whether oral history is subject to IRB review, other issues also develop, such as the role of professional organizations in the research process and their relationship to IRB governance, how academic and professional goals can inhibit ethical judgment, and how the role of a student’s advisor differs from that of his or her mentor. In this commentary, I will focus on the debate over whether oral history interviewing should be subject to IRB review.

Background

On September 22, 2003, Michael A. Carome, Associate Director for Regulatory Affairs for the Office of Human Research Protections (OHRP), concurred with a policy statement drafted by the American Historical Association (AHA) and the Oral History Association (OHA) stating that most oral history interviews do not need Institutional Review Board approval. After this concurrence, the position the AHA and OHA strongly supported was that oral history “does not meet the regulatory definition of ‘research’ and therefore is excluded entirely [emphasis mine] from IRB review, without seeking formal exemption.”1 Since the OHRP never released its own policy on oral history interviewing, IRBs around the country did not adopt the AHA and OHA’s policy statement. In October 2003, at the request of the Office for Protection of Research Subjects at UCLA, Dr. Carome stated his position on the AHA and OHA’s policy statement:

In summary, the August 26, 2003 Policy Statement attached to OHRP’s September 22, 2003 letter was not drafted by OHRP, does not constitute OHRP guidance, and the characterizations of oral history activities in the third paragraph of the Policy Statement alone do not provide sufficient basis for OHRP’s determination that oral history activities in general do not involve research as defined by HHS regulations at 45 CFR part 46.2

This statement seemingly contradicted his prior concurrence; it did, however, make clear that the OHRP did not exclude oral history from IRB review. Yet even after Carome’s statement to UCLA was widely distributed, the AHA issued a press release on June 8, 2004 reaffirming that most forms of oral history can be excluded from IRB oversight, ignoring Carome’s communication entirely.

Ethical Issues and Analysis

The position of the AHA and OHA is based on the belief that IRBs have overstepped their purpose and jeopardized academic freedom by including oral history in the IRB review process. To them, the division between the scientific and nonscientific disciplines is vast and using the same federal guidelines to regulate all research is problematic.  Linda Shopes, a representative of the AHA, stated, “Applied to oral history interviews and other forms of nonscientific research, they [IRBs] present numerous, serious difficulties, especially because many IRBs are constituted of medical and behavioral scientists, who have little understanding of the principles and protocols of humanistic inquiry.”3  Furthermore, Linda Shopes stated, “Institutional Review Boards were established to prevent the very real physical and mental harm that some biomedical and behavioral research had inflicted on human subjects.”4 Instead of IRB review, the AHA and OHA defend the position that with firm ethical guidelines in place oral history can be effectively monitored through professional organizations and processes such as peer review.5

The essential questions presented by the AHA and OHA are what constitutes research as defined by HHS, and what harm, if any, can come of oral history interviewing. The AHA and OHA do not believe that oral history interviewing leads to “generalizable knowledge” and therefore hold that it does not meet the HHS definition of research. When Michael Carome clarified his position on oral history interviewing, he stated,

Oral history activities, such as open-ended interviews, that ONLY [emphasis in original] document a specific historical event or the experiences of individuals without an intent to draw conclusions or generalize findings would NOT [emphasis in original] constitute “research” as defined by HHS regulations.6

This position made clear that most oral history interviewing does require IRB review, since oral history interviewing, especially by academics, leads to the formation of conclusions and general findings (i.e., generalizable knowledge). In addition, oral history interviews that are archived have the potential to be used by other researchers and to become the source of generalizable knowledge as defined by HHS.7

In addition, the potential for psychological harm to oral history subjects, while perhaps minimal in most cases, presents real risks, and the AHA and OHA ignored these risks entirely in their policy statement. E. Taylor Atkins, associate professor at Northern Illinois University, expressed concern about the policy statement: “The principal concern of the AHA and OHA is the academic freedom of their members, but the recent decision [policy statement] does nothing to reduce the possible risks to interview subjects who participate in oral history projects.”8 Atkins also reminded researchers that Alistair Thomson’s Oral History Reader warns of the risks associated with interviewing groups such as Holocaust survivors and veterans with post-traumatic stress disorder.9

Conclusion

This case sheds light on the ongoing debate between those who believe oral history interviewing should be excluded from IRB review and those who believe IRB oversight is necessary. The AHA and OHA’s policy statement advocating exclusion fails to show that oral history interviewing does not produce generalizable knowledge, and it ignores the inherent risks to oral history subjects. The policy statement is, above all else, an attempt to avoid a perceived inconvenience: IRB review. Weighed against what is ethically right, that inconvenience carries little weight, and oral historians should value IRB oversight. Other professional organizations, such as the American Anthropological Association, advocate that researchers involve the IRB and hold their research to the highest standards. It is time that the AHA and OHA commit to a similar position.10

Notes

1American Historical Association, “Questions Regarding the Policy Statement,” American Historical Association

2Office for Protection of Research Subjects UCLA, memorandum

3Linda Shopes, “Institutional Review Boards Have a Chilling Effect on Oral History,” Perspectives 38, no. 6 (September 2000)

4Linda Shopes and Donald A. Ritchie, letter to the editor, Perspectives 41, no. 9 (December 2003)

5Two examples include John N. Neuenschwander, Oral History and the Law (Denton, Texas: Oral History Association, 1985) and Oral History Association, “Evaluation Guidelines,” Oral History Association, http://omega.dickinson.edu/organizations/oha/pub_eg.html

6Office for Protection of Research Subjects UCLA, memorandum, http://www.oprs.ucla.edu/human/newsletters/Oral%20History%20031209.pdf

7Ibid.

8E. Taylor Atkins, letter to the editor, Perspectives 41, no. 9 (December 2003)

9Ibid.

10American Anthropological Association, “American Anthropological Association Statement on Ethnography and Institutional Review Boards,” American Anthropological Association

Commentary On

This case examines the ethical responsibilities of a researcher to protect the confidentiality of her research subjects.  According to Sieber (1992), confidentiality refers to the researcher’s “agreements with persons about what may be done with their data” (52).  Confidentiality differs from privacy, which refers to individuals’ control over access by others to them or to information about them, and anonymity, wherein individual identifiers such as names are not connected to the data or even known to the researcher (Sieber 1992).

In this case, the researcher is faced with questions about how to present her findings and with whom while still protecting her respondent’s confidentiality.  Sociologists and other social scientists who work with large data sets and present results as aggregate statistics often face little risk of their respondents being identified through research reports.  However, when samples are chosen for convenience or when purposeful sampling is used, identifying the research subjects becomes a real possibility.  For example, if a researcher studying teachers named the school district where the research occurred, someone with knowledge of the school district could likely identify individual teachers based on traits such as age, gender, and number of years with the school district (Sieber 1992).  Or, as is the case here, when a population contains only a small number of certain types of individuals, such as persons of a particular race, anyone with knowledge of the population used to draw the sample can likely identify these unique persons in the sample.
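The “deductive disclosure” risk sketched above can be made concrete. The following toy example (the data set, field names, and values are invented for illustration) shows that even after names are removed, any record whose combination of ordinary traits is unique in the population can be re-identified by someone who knows that population:

```python
from collections import Counter

# Invented roster for a small school district: records hold only
# "anonymized" quasi-identifiers, yet a rare combination of traits can
# still single out one individual to a knowledgeable insider.
records = [
    {"age_band": "30-39", "gender": "F", "years_in_district": 5},
    {"age_band": "30-39", "gender": "F", "years_in_district": 5},
    {"age_band": "50-59", "gender": "M", "years_in_district": 22},
    {"age_band": "40-49", "gender": "F", "years_in_district": 12},
    {"age_band": "40-49", "gender": "F", "years_in_district": 12},
]

def count_reidentifiable(records, keys):
    """Count records whose quasi-identifier combination is unique in the
    data set, i.e., deductively disclosable despite having no name."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    return sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)

keys = ["age_band", "gender", "years_in_district"]
print(count_reidentifiable(records, keys))  # the lone 50-59 male is unique
```

The point of the sketch is simply that uniqueness, not the presence of a name, is what enables identification; researchers sometimes check that every combination of reported traits describes at least a handful of people before publishing.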

This “deductive disclosure,” as Sieber refers to it, is a particularly important ethical issue in qualitative research. In much ethnographic or in-depth interview research, researchers strive to understand a research question by using rich descriptions of individuals and particular social situations. With in-depth interviewing, the words of respondents are critical pieces of data and are typically presented to support the conclusions the researcher has drawn after analyzing the data. As such, the unique traits of individuals and groups are key components of the data and become essential to answering the research question.

A classic example of this dilemma is Carolyn Ellis’s ethnographic research which was the basis for her book Fisher Folk (1986).  Ellis’s data came from a single, remote and insular community.  When Ellis’s book was given to the research participants they were able to identify themselves and their neighbors in the book, even though their real names had not been used.  In this case, many of the study participants were angered by the perceived breach in confidentiality that occurred when Ellis published what they had told her.  Breaches in confidentiality such as those in the Fisher Folk example can shatter the researcher-subject relationship and can damage the public’s trust in researchers (Allen 1997).

In hindsight, Ellis (1995) contends that her problems could have perhaps been prevented by approaching the respondents with the data she planned to publish before she published it, thus allowing them to know what would become of their “data” and how they would be portrayed in the final research.  This undoubtedly means more work for the researcher, particularly when working with certain populations.  However, this approach could not only ensure ethically sound research, but may also lead to more theoretically sound research by allowing respondents to comment on the accuracy of the researcher’s data and interpretations.

Sieber takes the position that all issues of confidentiality should be considered beforehand and clearly stated in the consent form.  Thus, the researcher should carefully consider all potential uses of the data and clearly explain those uses in the consent form.  Following Sieber’s recommendation, in this case, Dr. Kline should have mentioned the presentation to the doctors in the consent form.  However, the extent to which one can foresee every possible threat to confidentiality is questionable.  Furthermore, researchers may not feel comfortable if bound to specifics laid out in the consent form.  For example, like Dr. Kline, a researcher may wonder if compromising respondent confidentiality is necessary in order to maximize the good that flows from sharing the study results.

Typically, consent forms ensure that identifying information will be removed from reports.  However, with qualitative research what constitutes identifying information can be very subtle and may depend on who the audience is that receives research reports.  Many qualitative researchers may then face the challenge of changing enough of the characteristics of the individual while still maintaining the essence of the data. 

References

Allen, Charlotte. 1997. “Spies Like Us: When Sociologists Deceive their Subjects.” Lingua Franca 7(9).

Ellis, Carolyn. 1995. “Emotional and Ethical Quagmires in Returning to the Field.”  Journal of Contemporary Ethnography 24: 68-98.

Ellis, Carolyn. 1986. Fisher Folk: Two Communities on Chesapeake Bay. Lexington: The University Press of Kentucky.

Sieber, Joan E. 1992. Planning Ethically Responsible Research: A Guide for Students and Internal Review Boards.  New York: Sage.

A critical ethical concern in this case is the issue of informed consent by students. From the time a student contacts a university or college to express interest in applying to the moment a student departs (either as a graduate, a transfer, or a drop out), a variety of data are collected about the student. These data, such as financial aid status, academic progress, and application material, are necessary for the business of the institution. They can be used by the institution for any operational purposes or internal program evaluation without student consent. However, the use of these operational data for secondary purposes (that is, purposes other than those originally intended during collection) raises many questions about how to treat informed consent on the part of students.

Obtaining informed consent from the hundreds of thousands of students whose information would be contained in databases similar to those proposed in this case presents considerable challenges. For example, the monetary costs associated with contacting each student and explaining the proposed research to be conducted would be prohibitive. In addition, because personal identifiers are often completely stripped from the databases, it may in fact be impossible to contact individual students for their consent. These obstacles make it even more incumbent upon researchers to consider the ethical issues raised in using these data.

The question of developing and maintaining comprehensive student unit records is complex. On the one hand, legislators, taxpayers, parents, and students increasingly demand accountability from institutions of higher education. For example, legislators want to know if taxpayer money is being spent effectively to educate citizens. Student unit records enable institutions to answer such questions more precisely as well as more broadly. The finer the level of detail stored in research databases, the more precisely questions about effective education can be answered.

On the other hand, student unit record databases are not a panacea. They cannot answer all questions raised by the constituents of higher education in a definitive manner. The nature of the educational enterprise is so complex (think about all the factors that can influence whether a student graduates and in what length of time) and so varied that it is almost certain that disagreement about what constitutes an effective education will persist as long as there are institutions of higher education. Given this reality, the potential costs and consequences of a student unit record database must be seriously considered. To illustrate ways in which SUR systems affect the lived experiences of students, it may be helpful to briefly consider a current and sometimes contentious debate within the higher education community.

Equal opportunity to pursue higher education — particularly for low-income and historically underrepresented groups — is a major concern for policy makers and researchers in education. A considerable body of literature explores the pathways students take to college, the factors that influence opportunity to attend, and the variables that affect whether students are successful in their educational pursuits. Within this debate, financial aid and academic preparation are two key areas of exploration. Student unit record databases have enabled researchers to examine the effects of high school curriculum on college enrollment; likewise, the effects of financial aid have been closely examined using SURs. One school of thought argues that academic preparation has the greatest influence on college enrollment. Another school of thought agrees that academic preparation is important, but holds that it alone is insufficient to ensure that college-qualified students enroll; rather, meeting financial need — particularly for low-income students — is equally important. As policy makers have increasingly focused on academic preparation in high school as the key factor in college enrollment, low-income, college-qualified students are losing the opportunity to attend college because of unmet financial need (St. John & Parsons, 2004).

Regardless of the school of thought with which one agrees, data collected by institutions for administrative purposes are used in the research both sides marshal to support their arguments — all without informed student consent. In addition, policy makers may leverage particular aspects of this research base to support ideological arguments that may or may not be in the best interests of particular students. For example, focusing on academic preparation to the exclusion of financial aid may disproportionately affect students of color. With decreasing public support of education and increasing demand, the stakes for equal educational opportunity are high. Research often plays a crucial role in shifting or buttressing the terms of the debate.

In addition to the potential policy effects of SUR-based research, security is another area of ethical consideration. Although technological advances enable increasingly secure storage and transmission of private data, recent high-visibility data thefts at institutions of higher education (Northwestern University, California State University at Chico, Boston College, and the University of California at Berkeley, to name a few) illustrate the potential for abuse of large student unit record systems (Carnevale, 2005).

In conclusion, researchers, policy makers, and administrators who currently use or are involved in the creation of student unit record systems must weigh the potential costs and benefits of such a system. If possible, students themselves should be involved at some level of the discussion. The ethical implications of creating a database should be considered before more technical discussions about security take place; in short, the “why” of student unit record systems should be addressed before the “how.” Central to the debate over SUR systems is the issue of informed consent. If informed consent cannot be obtained, researchers may want to consider other ways in which the autonomy of subjects can be respected. For example, researchers might make the effort to distribute research findings to constituent groups represented in the databases. Minimally, researchers should engage other researchers as well as policy makers in ongoing debate about how to be responsible stewards of data that were obtained without explicit consent.

References

Carnevale, D. May 6, 2005. “Why Can’t Colleges Hold On to Their Data? A String of High-Profile Security Breaches Raises Questions about the Safety of Personal Information.” The Chronicle of Higher Education 51 (35): A35.

St. John, E., & Parsons, M. D. 2004. Public Funding of Higher Education: Changing Contexts and Rationales. Baltimore, MD: Johns Hopkins University Press.

This case highlights the role of ethics in incorporating online information with confidential data from personal interviews, using an emergent research design, and managing concerns over internal confidentiality. Some of the concerns raised in this case are issues that often arise in social science research, while others are fairly new issues that relate to technological changes. 

New websites pop up each day, but the ethical guidelines surrounding them often lag far behind. Professional codes of ethics in many disciplines do not have specific policies for research using online data. This case raises a number of issues concerning new technologies. It is important that researchers are aware of the ethical principles of respect for persons, beneficence, and justice that are discussed in the Belmont Report (1979) and work to incorporate these principles into any research that they conduct and/or write about, regardless of whether there are specific guidelines that relate to the research they are conducting.

Another set of issues raised in this case surrounds internal confidentiality. Internal confidentiality concerns participants in research being able to figure out the identity of, or other details about, other research participants. The issue of internal confidentiality is a concern with or without the online information, although the inclusion of the online information raises the stakes and makes it more likely that respondents may figure out the identities of other respondents — most likely their friends — even with the use of pseudonyms. Very few researchers discuss issues of internal confidentiality; one notable exception is Tolich (2004). More often, the focus is on protection against identification by individuals who are not participants in the research, which is referred to as external confidentiality. Although it is rarely discussed, internal confidentiality is important in social science research.

Two notable cases where internal confidentiality was breached are Carolyn Ellis’ (1986) study of Fisher Folk and William Whyte’s (1943) study of Street Corner Society. In both cases, participants knew each other intimately and therefore could identify some of the other respondents in research publications, which, in turn, may have allowed them to learn confidential information about these people that they did not know before the study (Tolich 2004). Such breaches disrupt relationships among people, including respondents, the researcher, and other members of the community. For example, when Carolyn Ellis returned to the field after publishing her results, she faced a cold reception from her respondents, who had previously been very friendly and warm towards her, due to their concerns over internal confidentiality and the interconnected issues surrounding their representation in research publications (Ellis 1995).
As Marie’s research focuses on intimate relationships (i.e., friendships), sometimes between research participants, maintaining internal confidentiality will be one of the challenges of writing up the results of this research in an ethical manner.

In order to publish research that maintains the rigor demanded by her discipline while also adhering to ethical principles she believes in, including respect for persons, beneficence, and justice, Marie must do a careful cost-benefit analysis to decide how to proceed. Should she privilege the accuracy of her data at the cost of respondents’ privacy or confidentiality? Or should she privilege internal and external confidentiality over the accuracy of the data? How much can she paraphrase — or change, if she deems it necessary — the details of people’s profiles on the Internet site without altering the results of her study and its validity? She has given each person — both those she interviewed as well as their friends and other individuals they mention — a pseudonym to protect their confidentiality. She also has changed other identifying information, such as names of clubs or organizations to which they belong as well as hometowns, to further protect her respondents’ privacy. However, it is possible that Marie does not know exactly which details will identify her respondents to their friends. A related set of issues revolves around whether or not to identify the website. Should Marie identify the name and web address of the website where she got this Internet information so that other researchers can check the validity of her interpretations? Or should she withhold the identity of the website as a further precaution in terms of confidentiality (both internal and external)? Marie struggles with the cost-benefit analysis of preserving the accuracy of her data and her respondents’ confidentiality and privacy. There is no easy solution.

As discussed in the case, these issues become even trickier because Marie is using an emergent research design. Her research design was flexible to allow for shifts in data collection based on what she learned in the field. Most important to this case, the decision to include the Internet data was made after her research was underway. In part 1 of the case, Marie wonders whether she should view this online data and whether there is an ethical difference between viewing Internet data for participants (such as Jane), from whom she has obtained informed consent, and for other students, from whom she has not. A question that arises from this is when it is appropriate to obtain permission for this aspect of her research from the university’s Institutional Review Board for the Protection of Human Subjects (IRB). Marie could have filed an amendment to her IRB proposal as soon as she suspected that she might want to use data from this website in her dissertation. Given that all of this was emerging as she was in the field and talking with students, she decided to wait until she had a better idea of what data she would like to incorporate into her research and of the ethical issues involved before filing the IRB amendment. While this approach seemed preferable to Marie, others could argue the opposite: that to protect the rights and confidentiality of her respondents, Marie had an obligation to seek IRB approval as soon as she thought she might want to use this data. One important point is that Marie did not publish or present on aspects of the project that incorporated the Internet data until she had IRB approval for these activities. If she had, it would certainly have been a breach of her obligations to her research participants and her university’s IRB. Some ethical decisions are clearer cut than others.

The emergent research design allows Marie a good deal of flexibility in her methodological decisions, which opens up alternative solutions to these ethical dilemmas.  In addition to the options discussed in Part 2 of the case and the questions that follow, there are other options available to Marie.  She may incorporate the Internet data into her dissertation for methodological rather than substantive reasons.  Marie could use the information on the website as a validity check on the information that she gathered during the interviews.  To what extent do students’ website postings match what they told her during the interview?  If they do not match, what are some possible reasons and how should she deal with it?  This is one way to make use of the rich data gathered from the website and tie it to the interview data without putting interview respondents at further risk of being identified.

One final issue is that surrounding Marie’s desire to incorporate feminist methodology in her research.  Most germane to this case, feminist methodology seeks to reduce the distance between researcher and subject as well as to give back to research participants (Reinharz 1992).  As discussed in the case, Marie offered each participant a copy of her transcript and interview recording as well as a copy of a published paper that comes out of the research; nearly all participants requested a copy of a published paper.  In this case, Marie’s desire to use feminist methodology and give back to her research participants complicates her ability to maintain internal confidentiality.  It will require more time on her part if she decides to send different people different published papers.  However, she will know to consider these confidentiality issues surrounding interpersonal relations in future research projects.

References

  • Ellis, Carolyn.  1995.  “Emotional and Ethical Quagmires in Returning to the Field.”  Journal of Contemporary Ethnography 24: 68-98.
  • Ellis, Carolyn.  1986.  Fisher Folk:  Two Communities on Chesapeake Bay.  Lexington: The University Press of Kentucky.
  • The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.  1979.  The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects (Federal Register Document 79- 12065).  Washington, D.C.: Government Printing Office.
  • Reinharz, Shulamit.  1992.  Feminist Methods in Social Research.  New York:  Oxford University Press.
  • Tolich, Martin.  2004 “Internal Confidentiality:  When Confidentiality Assurances Fail Relational Informants.”  Qualitative Sociology 27 (1): 101-106.
  • Whyte, William F. 1943. Street Corner Society: The Social Structure of an Italian Slum. Chicago: University of Chicago Press.