Anonymous

Autonomous vehicles combine the ethical concerns found in robotics, autonomous systems, and connected devices — and synthesize novel concerns out of these. As with other systems that demonstrate increasing autonomy, they raise questions about moral responsibility. To be morally responsible for an outcome is to be the proper object of praise or blame as a result of that outcome. Because the status of autonomous vehicles as agents is unclear, and because they are embedded in complex socio-technical systems involving many agents and institutions, answering questions about moral responsibility for the outcomes they are involved in is especially vexing.

We can use several lenses to help us determine who is morally responsible for an outcome:

(1) Who caused this outcome? Whose causal contributions to the outcome were most significant?

We almost always think that the person who is responsible for an outcome is the person who caused it most directly.

In the case of autonomous vehicles, this is difficult to determine, since autonomous systems are said to “launder” the agency of the people who operate them.[1] If an autonomous vehicle gets in a crash, should we blame the driver, even if they were not operating the car? Or did the car “make” the decision that led to the crash? This is further exacerbated by the fact that autonomous vehicle design is the result of a complex of legal and economic incentives that ultimately help explain why the cars are built and programmed the way they are, and thus why they cause the outcomes they do.

(2) Who is it appropriate to blame or praise for this outcome? Who would it be appropriate to punish for this outcome?

Moral responsibility is closely bound up with what philosophers call “reactive attitudes”: moralized attitudes we express in response to what someone has done.[2] Blame and praise are the most obvious of these, but indignation, resentment, and gratitude are other examples. When trying to locate responsibility, we should think about who it would be fair to express these attitudes towards.

Autonomous systems present especially difficult cases for identifying responsible parties because, according to a significant thread in the literature, there is a “responsibility gap” that is created between human agency and the decisions of autonomous machines. If an autonomous machine — like a car — were to make a “mistake,” there would be no one we could fairly punish. That argument is a clear example of this way of approaching the question of responsibility: find the people it would be fair to blame or praise, and you have found the responsible party.

Consider scenario 1A: the driver of Car A should have been paying attention because their car has only level 3 autonomy. It is reasonable to think that they have a greater share of the responsibility than the driver of Car B. The driver of Car B could reasonably have believed that their car would be able to handle such a situation, and this attenuates their blameworthiness.

In scenario 1B, where both cars have V2V communication, our sense of responsibility shifts from the drivers of the cars to the designers of the V2V system. As long as this failure took place in a relatively normal situation, it is the kind of situation that the V2V system should have been able to handle, and the drivers would therefore have been reasonable to delegate the decision making to their cars. (Still, the driver of Car B can claim even greater justification for offloading this decision making since, again, their car has level 4 autonomy.)

(3) What is it reasonable to expect or demand of a person, given the role that they occupy?

In a perfect world, the several components of the socio-technical system that designs, manufactures, regulates, and operates autonomous vehicles would be performing ably and diligently. Each has a complementary role to play:

Institutions can shape the design decisions of autonomous vehicles in a way that individual consumers never could. They can also shape the environment and infrastructure in which they operate. Did the car crash because the lane markings were eroded or unclear, for example?

Designers and manufacturers have a responsibility to test their designs to establish their reliability across the spectrum of scenarios that drivers will tend to face. (At least, as far as is practicable, since the permutations of those situations are in fact infinite.) They then have a responsibility to communicate transparently the capabilities and shortcomings of their vehicles to consumers, and perhaps to include designs that nudge — or force — drivers to behave responsibly.

Drivers, finally, need to operate their cars responsibly and only within their limits.

In these cases, we can ask: Who has failed in their duties to contribute to this harmonious interdependent system? Is there a shortcoming that regulators were uniquely placed to anticipate, but failed to? Or a scenario that manufacturers should have tested for? Should the driver have kept their hands on the wheel, but was texting instead (and is their car level 3, 4, or 5)? Deciding who failed in their specific obligations, and how far their behavior departed from what society can reasonably demand of them, will help us apportion responsibility.

Consider the specific scenarios:

Scenario 2: All three ways of thinking about how to distribute responsibility seem to point to the driver of the standard car that rear-ends the autonomous car: they directly caused the crash; they are more to blame than the autonomous vehicle; and we should expect more of them as a human driver. Note that the autonomous vehicle did something unexpected, i.e., it stopped while the light in front of it was green. However, just because it behaved unexpectedly does not mean it behaved recklessly. In fact, the autonomous vehicle behaved as it should have, since the alternative would likely have been to injure the pedestrian crossing in the crosswalk. Thus, it is difficult to blame the autonomous vehicle or its designer.

The pedestrian is also clearly responsible — perhaps as much as, or more than, the driver of the standard car. It is the pedestrian’s recklessness that initiates the chain reaction resulting in the crash. They seem, in fact, to make the greatest causal contribution to the situation.

Scenario 3: In this scenario, we again have a system that would normally prevent the crash, which has failed because of a rare situation. Both of the drivers involved behave irresponsibly. Should the designers of the system have designed it better, or given it a failure mode to cope with situations like this? Which of the drivers is more at fault?

It is hard to say which of the drivers is more at fault. Both are equally reckless in being distracted and both make equal causal contributions to the crash.

The more interesting locus of responsibility may be the automated intersection. Should the designers have tested its capabilities in inclement weather? (Is this an area with a rainy season, or a desert that’s experiencing a once-in-a-lifetime downpour? That is to say: what should they have expected?) A more graceful failure mode would probably have been to turn the intersection into a four-way stop. This requires all of the drivers who approach the intersection to be much more cautious, increasing safety at the (plainly acceptable) cost of efficiency.

Ideally these approaches would all align, but they don’t always. This is why philosophers, lawyers, and others continue to tussle over which method of determining responsibility is most appropriate. However, we can certainly separate the viable from the non-viable answers, and these lenses can help focus our intuitions and show us the path forward in apportioning blame. By clearing the way for productive conversations, moral philosophy has thus shown itself to be useful.

General readings:

  • Jenkins, Ryan. “Autonomous Vehicles Ethics and Law.” New America Foundation, September 2016.

Apportioning responsibility for automated systems:

  • Matthias, Andreas. "The responsibility gap: Ascribing responsibility for the actions of learning automata." Ethics and Information Technology 6.3 (2004): 175-183.
  • Mittelstadt, Brent Daniel, et al. "The ethics of algorithms: Mapping the debate." Big Data & Society 3.2 (2016): 2053951716679679.

Trolley problems and the distribution of harm:

  • Himmelreich, Johannes. "Never mind the trolley: The ethics of autonomous vehicles in mundane situations." Ethical Theory and Moral Practice 21.3 (2018): 669-684.
  • Nyholm, Sven, and Jilles Smids. "The ethics of accident-algorithms for self-driving cars: An applied trolley problem?" Ethical Theory and Moral Practice 19.5 (2016): 1275-1289.

The conclusion reached by the Norwegian National Committee for Research Ethics in Science and Technology (NENT) challenges an ideology and ethics of inevitability present in fossil fuel industries. The anthropologist Laura Nader first identified an ideology of inevitability during her service on the US National Academy of Science’s Committee on Nuclear and Alternative Energy Systems (CONAES). Her observations led her to identify the implicit cultural assumptions animating much policymaking, from ‘group think’ and a rejection of energy conservation and ‘soft paths’ like solar energy to an ‘inevitability syndrome’ that excluded from consideration models that did not rest on ever-expanding resource use.

Since then, anthropologists such as David Hughes and Chelsea Chapman and historians such as Matthew Huber have similarly found professionals in the oil and gas industry, including scientists and engineers, expressing positions that defend fossil fuels on the grounds that our society will always require them. Hughes in particular argues that this position is an ethical one. He starts with the position that oil is immoral because the ‘contemporary great evil of dumping carbon dioxide into the skies’ hastens global climate change that harms the environment and vulnerable populations (2016: 14). Therefore, he argues, treating oil production and consumption as inevitable is also an immoral position, since it allows climate change to continue unabated without considering how energy can be conserved or produced by more carbon-neutral methods. By concluding that petroleum research would be indefensible if it hindered transitions to sustainable energy, the NENT challenged prevailing assumptions that continued reliance on oil is inevitable. But rather than discourage petroleum research in its entirety, the committee also acknowledged that petroleum research ‘still has a role to play in the transition process, for example by establishing a defensible balance between research on various energy sources in which the key constituents are research on renewable energy and on how negative impacts on the ecology can be reduced.’

The challenge and opportunity lie in the nature of the ‘collaborations’ between industry and universities, given the conflicts of interest that exist when academic research is funded by companies such as Statoil. In their statement the NENT found it ‘striking that the universities do not reflect to a greater extent on their own role in possibly preserving the status quo through their collaboration with the petroleum industry,’ by prolonging and legitimizing the oil age, for example. The committee called for efforts to ensure that ‘the universities’ research and education and the special interests of business sector actors are independent of each other.’ This raises the crucial question of how university scientists and engineers could collaborate with industry to make more sustainable technologies and techniques.

References

Chapman, C. 2013. Multinatural resources: Ontologies of energy and the politics of inevitability in Alaska. In Cultures of Energy: Power, Practices, Technologies (eds) S. Strauss, S. Rupp and T. Love, 96–109. Walnut Creek, CA: Left Coast Press.

Huber, M. T. 2013. Lifeblood: Oil, freedom, and the forces of capital. Minneapolis: University Of Minnesota Press.

Hughes, D. M. 2017. Energy without conscience: Oil, climate change, and complicity. Durham: Duke University Press.

Nader, L. 1980. Energy choices in a democratic society. Washington: National Academy of Sciences.

––––––– 2004.  The harder path: Shifting gears. Anthropological Quarterly 77, 771–791.

Although this case raises a variety of ethical issues, all with their own subtleties and complexities, two particular issues will be discussed in this commentary. 

Assessing Risk/Benefit Ratio

The first issue, which is raised in Part 1 of the case, concerns risk/benefit ratios when working with human participants.  One of the central tenets of the Belmont Report (1979) is that researchers must (1) do no harm and (2) maximize possible benefits and minimize possible harms.  This case demonstrates several of the complexities inherent in the principle of beneficence.  

First, how should risk be defined and determined?  In Brian’s case, it is possible but not definite that focusing on stress experiences will have negative effects for the participants.  How should Brian determine the probability of risk?  If there is no realistic way to do this, how should Brian proceed?

Many scientists conducting research similar to this have relied on the fact that, in most cases, the negative impact of stress exposure is short lived.  In other words, while people may experience negative effects in the moment, they quickly return to baseline and are fine in the long-term. Should calculations of risk differentially weight short-term versus long-term consequences?

Additionally, researchers conducting this type of work have stood behind their informed consent procedures.  These researchers reason that as long as they are up-front from the very beginning about potential repercussions of study participation then they are within the guidelines of ethical practice.  Responsibility for participant safety is ascribed to the consenting participant as opposed to the researcher.  This practice may be fine in cases where the consent document is completely clear, where the potential participant is fully engaged in understanding the document and fully capable of consenting, and where the researcher is capable of answering questions about procedures and potential threats.  Unfortunately, these circumstances are rare, especially in psychological research where the majority of participants are undergraduates fulfilling a course requirement.   Who is ultimately responsible for determining the risk of participation? Who is ultimately responsible for the safety of research participants?  The researcher? The consenting participant? The IRB?

Second, who should be considered when calculating risk/benefit ratios? Only the research participants? Society-at-large?  Future generations?  Brian is aware that he is possibly putting participants at risk by asking them to focus on the stress they experience in their daily lives.  However, he is also aware that basic information about gay-related stress must be collected in order to create effective interventions and to help myriad gay men and lesbians cope with their minority status. If Brian were to consider only his research participants in his calculations, he would likely conclude that the benefits do not outweigh the potential harms.  However, if Brian were to consider his research participants, the gay community and future generations of gay individuals, he would likely arrive at a very different conclusion.  Is there an appropriate way to calculate risk/benefit ratios? Is it ever ethical to sacrifice a few individuals for a larger goal?

Bias in Social Science Research

The second issue, raised in Part 3, concerns personal or political bias in research.  Many social scientists emphasize the importance of objectivity in the pursuit of knowledge.  These individuals assert that science must be free of bias, and that the researcher must be neutral in relation to the topic and communities being investigated.  Elias (1987) summarized this position nicely, stating that those who study human groups must learn to “keep their two roles as participant and enquirer clearly and consistently apart and . . . to establish in their work the undisputed dominance of the latter” (cited in Perry, Thurston & Green, 2004; p. 135).  Others, however, argue that objectivity is impossible to attain and that better science is derived from active involvement on the researcher’s part.  In their discussion of qualitative research, Perry and colleagues (2004) argued that a critical piece of the research process involves interpretation and that the researcher necessarily plays a central role in this analysis.  These researchers concluded that instead of ignoring “emotional involvement” in research, we should recognize “the inevitability of involvement and the potentially significant part it can play in developing a more reality-congruent picture of complex aspects of the social world. . . .” (p. 139). Is there such a thing as objective social science?  If not, should scientists be responsible for revealing their biases?  Is science valid if there is bias linked with data?

In his research on gay-related topics, Brian wears two hats: one as a scientist and one as a gay man.  While these two identities do not have to conflict, they nevertheless can conflict.  In the case, for example, Brian discovers that those who are “out” experience more gay-related stress than those who are not “out.”  “Scientist Brian” finds this discovery interesting, while “Gay Brian” finds it problematic. Brian fears that revealing such a finding could encourage people to lead closeted lives and could subsequently set the gay rights movement back several decades.  In deciding whether to publicize this finding or not, which one of Brian’s hats should have more weight? Is it possible to wear both hats at the same time? How?

In discussing this piece of the case it is important to realize that there are both pros and cons to Brian’s involvement in the research.  On the pro side, Brian’s status as a gay man gives him credibility with the people he is studying.  The gay men that Brian is reaching out to are more likely to trust him and to involve themselves in the research given that he is “one of them.”  This becomes crucial when studying a group such as gay men, a group that has long been manipulated by researchers and ostracized by the psychological community.  Many gay men are suspicious of the research enterprise and want to know that they are not being used for research that will come back to haunt them and their community later on.  If Brian were heterosexual, many potential participants might opt out of the research, skeptical of the ultimate aims.  On the con side, Brian’s insider status interferes with his ability to be objective. What are other pros and cons of Brian’s involvement in research on a community to which he belongs?

Ethical Issues and Analysis

Part I of this case study introduces Dr. Luci Menendez as both a researcher and a clinician who seeks to develop an integrative program of research whereby her clinical work informs her research and vice versa. Critical to this case is an understanding of the ways in which general systems theory informs Luci’s research and clinical practice.  General Systems Theory (von Bertalanffy, 1968), the basis of family therapy and many theories of family process, is most readily epitomized as ‘the whole is greater than the sum of its parts.’  Individual parts of the system are interdependent and information feedback loops between parts or between the system and the broader environment function to keep the organization of the system relatively stable.  “This systemic approach has led to a method of treating psychological problems and of posing research questions that is fundamentally different from the traditional, individually based one” (Copeland & White, 1991, p. 8). 

Copeland and White (1991) argue that family researchers, such as Dr. Luci Menendez, not only have the traditionally recognized responsibility to assess the effects of a study on individual research participants, but have a special ethical responsibility to attend to the impact of the study on the family as a whole.  Similarly, family therapists are ethically required to attend to both the well-being of individual family members and the well-being of the family as a whole, a difficult balance to achieve at best. 

One may rightly question from the start whether Luci should have recruited families to her study with whom she would eventually have a clinical relationship.  Whether or not Luci recruits research subjects from the same population that she will be serving clinically is partially influenced by the availability of palliative care consultation services.  These services are still relatively new and not all hospitals or communities have interdisciplinary palliative care teams. The very fact that these services are new may be an argument for the importance of researching currently unexplored issues so as to increase evidence-based clinical practices. However, if Luci’s team is the only one of its kind that is readily accessible to Luci for research, she may be at higher risk for unwittingly pressuring her clients to participate. 

Only interviewing palliative care patients and families not receiving services from Luci’s professional team would ostensibly lessen the complexities of this case by reducing the formal fiduciary relationships Luci has with clients/subjects. However it is debatable whether the absence of formal relationships with specific family members completely eliminates Luci’s more general duty as a socially sanctioned professional to protect the well-being of society’s members.  In other words, even if Luci interviewed research subjects with whom she does not have formal therapeutic relationships, the fact that she is a clinician with a specialized knowledge and skill set may still have some ethical bearing on her research relationships.

Though others may disagree, I would argue that Luci’s two roles are neither 100% separable nor equally exchangeable.  Luci’s membership in a publicly recognized and regulated clinical profession, with all of its attending benefits (e.g., status), obligates her to give priority to her clinical role over her researcher role.  In other words, Luci can be a clinician without assuming a researcher role, but her clinical knowledge must inform her research choices. Her clinical knowledge likely makes her more sensitive to the types of harm that may befall individuals and families participating in her research project, which may obligate her to take steps above and beyond those required by federal, state, and institutional research regulations.

Recognizing the complexity of her dual role as a clinician/researcher, Luci took precautions in her research design.  First, she used a two-stage recruiting process, whereby patients and families were first invited to consider participation in research by someone other than the researcher (in this case, the physician).  Whereas this was intended to increase the autonomy of family members in deciding whether or not to participate in the study, it increased Luci’s risk of having nuances of the study misrepresented.  Furthermore, Luci failed to account fully for perceived power dynamics in the physician’s relationship with the family, leaving them vulnerable to perceived (if not actual) “authoritative persuasion.”

Second, when meeting with families to describe the research opportunity, Luci made explicit the dual nature of her relationship with patients and families, stressing that clinical care is a higher priority than research, and that the decision whether or not to participate in research would not negatively affect the clinical services they received.  During informed consent procedures, Luci also explained the on-going voluntariness of research participation.  While these precautions are commonly required by Institutional Review Boards as means of protecting individual research subjects, additional efforts may be necessary to protect the family system.

For instance, the case does not specify the exact nature of the informed consent document Luci has each family member sign, but it does say the discussion took place with everyone present.  Copeland and White (1991) note that “especially in studies in which families are asked to discuss important, real issues together [e.g., end-of-life care], the promises of anonymity and confidentiality about what they say, usually afforded to research subjects, are limited because the other family members are sitting there and listening” (p.4).  Per most IRB requirements, the informed consent document should discuss the limits of confidentiality.  This is typically understood as delineating the conditions under which the researcher may not keep absolute confidentiality. 

Confidentiality is typically understood as the ethical mechanism through which we respect the right of privacy of individuals.  But does this individual-focused understanding of privacy and confidentiality adequately apply to information about relationships, which by definition involve more than one individual?  Family researchers are faced with the dilemma of gathering and protecting information that from the perspective of individual family members may be considered quasi-private.  There may be a genuine risk of harm to individuals and/or family relationships if some members of the family disclose relational information that the other members did not want disclosed. 

In this case, a fully ethical approach to informed consent in family research might also include a discussion of the fact that data collected from one individual, even during individual interviews, cannot be completely separated from information about other members of the family because the focus of the research is on shared family history and dynamics. One approach is to include a statement on the consent document stating that agreement to participate in family data collection includes giving permission to other family members to disclose potentially private information about one another. 

Having such a statement included in consent procedures allows the researcher to explain the importance of gathering “un-edited” family data, while simultaneously facilitating family members’ discussions about possible limitations on the type of information they will share with the researcher.  Of course, research subjects are always free to edit their responses, but by making this process explicit, the researcher may be able to at least gather information directly from subjects about the limitations of the data rather than solely relying on hindsight speculation about missing data.

Explicitly highlighting interest in the family as a whole also gives the researcher an opportunity overtly to discuss family dynamics in the process of consenting to participate in research.  Families differ in important ways from other groups studied by researchers (Copeland & White, 1991; Greenstein, 2001).  In addition to being interdependent systems of individuals, families “develop private, idiosyncratic norms and meanings about their own activities. . . , [creating unwritten] patterns of, and rules for, behavior” (Larzelere & Klein, 1987, in Greenstein, 2001, p. 11) that are often hidden from public view.  Families have ways of restructuring their view of themselves in order to fit these family rules and expectations as a means of managing family tensions and maintaining family stability (Copeland & White, 1991). Family members also have multiple statuses and enact multiple roles simultaneously (e.g., father, son, and brother), requiring researchers to be sensitive to the fact that the kinds of responses offered by family members may depend on the role and status the individual is occupying in the context of gathering family data (Gelles, 1978, in Greenstein, 2001). 

These systemic considerations are not typically considered in the traditional bioethics or research ethics literatures.  Relying on an individualistic approach to research ethics, it is tempting to resolve Luci’s case by simply saying, “If a family member does not want to participate, that’s the end of the story; just collect data from those who agree.”  This response is problematic in at least two ways.  First, the validity of system-level data is likely to be compromised, thereby altering the risk-benefit analysis used by IRB reviewers.  Second, assuming a purely individualized approach to ethics in the context of family dynamics may itself be a morally questionable activity that may increase the risk of harm to the family system.

Ivan Boszormenyi-Nagy (1984, 1986, 1987, 1991), a founding family therapy theorist, argues that “relational ethics” is critical to healthy family functioning, such that failure of each family member to give “due consideration” to the interests of other members is seen as the heart of family dysfunction.  Nagy (1991) claims that family functioning is enhanced when members of the family can trust that the family system as a whole will facilitate the process of balancing considerations of the well-being of oneself with considerations of the well-being of others. 

In this case study, some family members acknowledged during data collection that their motivation for participation had been out of a perceived benefit to the dying patient.  From a traditional perspective, subject participation “out of fear” of lost benefits raises questions of voluntariness and possible coercion (both direct and indirect).  Superficially, this circumstance arose due to miscommunication.  At a deeper moral level, however, it could be argued that the situation is also borne of “relational ethics,” in that family members gave “due consideration” to the wishes and interests of other members of the family system. 

Luci’s response is in keeping with traditional research ethics: she reminds family members of their individual freedom to withdraw from the study.  In her attempt to protect the rights of individuals, however, does Luci risk harming the system by challenging the family’s “idiosyncratic norms. . .[and unwritten] patterns of, and rules for, behavior” (Larzelere & Klein, 1987, in Greenstein, 2001, p. 11), which has demonstrably included “due consideration”?  In other words, by highlighting individuals’ rights to withdraw their participation, is Luci, in effect, suggesting that “due consideration” of other family members’ interest in contributing family-level data (e.g., the dying patient) is not relevant?  In doing so, does she undermine the trustworthiness of the family system to support “due consideration” — a key factor in healthy functioning according to Nagy  (1991)?  If this line of reasoning holds, then Luci’s adherence to traditional research ethics protocols may violate her ethical responsibilities as a family clinician and researcher to protect (and enhance when possible) the welfare of the family system.

Biomedical ethics and most approaches to research ethics emphasize individual autonomy in decision-making, but this tends to decontextualize people from their social context, a criticism increasingly explored in feminist ethics.  Recognizing that human beings have autonomous moral status (i.e., their moral worth is not dependent on external considerations) need not automatically be equated with decision-making that is free from the influence of others.  Certainly, the influence of the researcher on the consent process needs to be kept to a minimum.  However, it is morally suspect to presume that decision-making itself must always be free of the influence of others. 

While some attention has been given to cultural or societal-level groups (e.g., Native American tribal considerations), little discussion has occurred about the moral relevance to decision-making of intermediate level groups such as the family. Yet in many cultures these more personal groupings impact one’s daily life most, and it is not uncommon for loyalty to one’s family to be given priority over individual interests. If Nagy’s theory of family functioning is correct, it would suggest that being in intimate relationships with others changes the level of influence on ethical decision-making we consider to be appropriate, particularly in contrast to non-intimate relationships. 

References

  • Bertalanffy, L. von. 1968. General System Theory: Foundations, Development, Applications. New York: George Braziller.
  • Boszormenyi-Nagy, I. 1984. Invisible Loyalties: Reciprocity in Intergenerational Family Therapy. New York: Brunner/Mazel.
  • Boszormenyi-Nagy, I., & Krasner, B. R. 1986. Between Give and Take: A Clinical Guide to Contextual Therapy. New York: Brunner/Mazel.
  • Boszormenyi-Nagy, I. 1987. Foundations of Contextual Therapy: Collected Papers of Ivan Boszormenyi-Nagy, M.D. New York: Brunner/Mazel.
  • Boszormenyi-Nagy, I., Grunebaum, J., & Ulrich, D. 1991. Contextual therapy. In A. S. Gurman & D. P. Kniskern (Eds.), Handbook of Family Therapy, Volume II. New York: Brunner/Mazel.
  • Copeland, A. P., & White, K. M. 1991. Studying Families (Applied Social Research Methods Series, Volume 27). Thousand Oaks, CA: Sage Publications.
  • Gelles, R. J. 1978. Methods for studying sensitive family topics. American Journal of Orthopsychiatry, 48(3), 408-424.
  • Greenstein, T. N. 2001. Methods of Family Research. Thousand Oaks, CA: Sage Publications.
  • Larzelere, R. E., & Klein, D. M. 1987. Methodology. In M. B. Sussman & S. K. Steinmetz (Eds.), Handbook of Marriage and the Family (pp. 126-156). New York: Plenum.

This case, like “The Case of the Over Eager Collaborator,” deals particularly with those populations who affect, or are affected by, archaeological research (stakeholders).  In the past, archaeology focused primarily on the study of ancient cultures.  Famous discoveries such as Schliemann’s excavations at Troy and Carter’s opening of Tutankhamen’s tomb made archaeology a world-famous discipline by the early 20th century, and archaeology has remained popular and important in the modern world.  As archaeology progressed, so did the depth and variety of archaeological research and discussions of archaeological ethics.  Presently, archaeologists work around the world at sites ranging in age from millions of years to mere decades.  There are also archaeologists today who study the discipline and practice of archaeology itself in modern social, economic, political, and other contexts.

As archaeologists began questioning the place of archaeology in modern contexts, archaeological ethics came to the forefront of research and writing.  Books and articles written on ethics have included discussions of such issues as stakeholders, protection of the archaeological record from looting, public education and intellectual property (Lynott and Wylie 1995; Vitelli 1996; Zimmerman, Vitelli and Hollowell-Zimmer 2003).  In 2004, the Society for American Archaeology initiated the archaeological “Ethics Bowl,” for graduate students to debate case studies in front of an audience at the SAA annual meeting (SAA Web 2005).  These articles, books, and events have placed archaeological ethics at the forefront of important issues in the discipline.

This case raises an archaeological ethics nightmare: a community split with heated debate over the value of an archaeological site. Though archaeologists, as stewards of the past and participants in creating it, see the value of archaeology and its broader discipline anthropology, it is often difficult to communicate that value to others.  In the booming modern context of American suburbia, how do archaeologists fight for preservation in the face of “progress”?

There are three important discussion topics related to ethics in this case: 1) The struggle to define “stakeholders” and their roles in the profession of archaeology, 2) the conflicting and ambiguous ethical standards in the profession of archaeology, and 3) ethical issues arising from team research in the social sciences.  Although this case is fictional, discussions of these issues are important to the discipline, as such dialogue could influence the decisions made by future researchers and students, especially those in or near American communities.

One commentator on “The Case of the Over Eager Collaborator” (see section 6 in this volume) notes that archaeologists necessarily deal with a myriad of stakeholders on any given project.  In this case, there are at least eight primary stakeholders who have interests related to the management of archaeological resources in Arrowhead: Avery; his research team; other archaeologists in the discipline; community members who support mall construction; community members who oppose mall construction; a corporate organization (Global Malls Inc.); members of a local Native American tribe; and people with various other opinions.  On a broader scale, stakeholders might also include archaeologists employed by the state, funding agencies supporting Avery’s research, other Native American groups, political officers, and many others.  If ethical archaeologists should consider the contexts of their research and respect the concerns of stakeholders, how are they to reconcile so many differing opinions?  Is this even possible without forfeiting some professional interest in stewardship?

Recently, archaeologists have been praising community-based archaeological research and, especially, archaeological practice that involves local indigenous populations.  In the SAA “Ethics Bowl,” the three C’s (Communicate, Cooperate, and Collaborate) have been an appropriate and well-received solution for most of the fictional case studies involving community dilemmas.  However, few archaeologists have discussed the potential difficulties and conflicts in community-based research utilizing such methods as Participatory Action Research (PAR).  No two communities or groups of stakeholders are the same, and thus no two community-based projects will present the same challenges.  This case elucidates the complexities of working with or in different communities.  It is wonderful when the public learns from archaeologists or participates in archaeological research.  It is not enough, however, to say that archaeologists should simply work with local communities — social scientists should be aware of the consequences of such research.  People, individually or in groups, are not entirely predictable.  Therefore, we must be flexible and open-minded and should prepare to deal with multiple stakeholders in our research in the most efficient, effective, and respectful ways possible.

The second major topic of the case reflects the seemingly opposing ethical codes in the profession of archaeology.  Today, archaeologists work all over the world and in each nation they encounter unique situations involving stakeholders and the archaeological record.  A plethora of international and national conventions, agreements, and laws help guide archaeologists in their research, though these are not usually binding, especially in regard to stakeholder responsibilities.  For additional guidance and discussion, many archaeologists turn to the ethical codes of archaeological or anthropological organizations.

In this case, there are three such focal organizations: the Society for American Archaeology (SAA), the World Archaeological Congress (WAC), and the American Anthropological Association (AAA).  As indicated by the case, some of the ethical recommendations made by these organizations seem to be contradictory (for the full text, see SAA 2005; WAC 2005; AAA 2005). One can question the utility of such codes, by-laws, or principles in a discipline if they are incongruous.  Principally, if one of the goals of ethical codes is to teach future archaeologists responsible research practices, what are students to think of or learn from codes that provide contradictory advice?

Again, the goal is not to argue for the end of ethical codes in archaeology.  The main point of this section is that in any real-life (or even fictional) research situation, the circumstances and stakeholders will differ.  Because of this, no single ethical code, nor even three, will provide definitive ethical research standards.  In every case, archaeologists should debate stewardship, accountability to local populations, commercialization, and related issues, and come up with compromise solutions (or at least steps toward a common goal).  There are no simple and straightforward answers to questions of ethics — instead, there are principles, responsibilities, debates, and compromises.

The final section of the case study calls into question the ethical responsibilities of lead researchers and team members in group research situations.  During the GREE workshop, we discussed various ethical situations that could arise when multiple researchers work together on the same project.  These include questions about ownership of data, the right to publication, authority, mentor/mentee relationships, and so on.  This case asks how differing opinions within a research group should be handled, specifically within the context of disagreement between a community and a research team.

The majority of archaeology done in the United States today is Cultural Resource Management (CRM) archaeology.  These projects are run by public or private companies; in short, CRM archaeologists attempt to identify archaeological resources that may be destroyed by new construction projects and to mitigate the loss of information by performing different scales of excavation.  CRM work is often conducted quickly, but it still involves stakeholders.  An additional group of stakeholders in CRM projects are the team members, since CRM is almost never an individual undertaking.  Team members, who may number between two and twenty, often work under the leadership of a Principal Investigator (PI).  This arrangement may raise some of the same research ethics questions listed above (i.e., right to publication, authoritative voice).  Furthermore, the transient nature of CRM archaeology often leaves workers disconnected from their research sites, resulting in group research dominated by the research goals and analysis of the principal investigator.  Ideally, all social science research should be poly-vocal, and researchers should exchange ideas before, during, and after projects.  Especially within the social sciences, the opinions of the public should also be considered.  Again, ethical research in archaeology should include preparatory work and consideration of multiple viewpoints.

An increased awareness and popularity of public archaeology and archaeological ethics have brought archaeologists face-to-face with situations such as the one presented in this case study.  Few archaeologists still believe that archaeological research exists in a political, economic, or social vacuum.  After all, social science deals, primarily, with living people.  It is time for all social scientists to consider the contexts in which they work and the consequences of their research.  The work of groups such as the Association for Practical and Professional Ethics and the discussion of ethical research situations will help inform future social scientists of these issues.

References

AAA 2005 “Code of Ethics of the American Anthropological Association,” available on the World Wide Web at: http://www.aaanet.org/committees/ethics/ethcode.htm.

Lynott, Mark J., and Alison Wylie, eds. 1995. Ethics in American Archaeology: Challenges for the 1990s, 2nd ed. Society for American Archaeology, Washington, D.C.

SAA 2005 “Principles of Archaeological Ethics,” available on the World Wide Web at: http://www.saa.org.

Vitelli, Karen D., ed. 1996. Archaeological Ethics. AltaMira Press, California.

WAC 2005 “World Archaeological Congress Codes of Ethics,” available on the World Wide Web at: http://ehlt.flinders.edu.au/wac/site/about_ethi.php.

Zimmerman, Larry, Karen D. Vitelli and Julie Hollowell-Zimmer, eds. 2003. Ethical Issues in Archaeology. AltaMira Press, California.