Valerie Racine

Some of the main goals in conservation biology are to track changes in large-scale ecosystems and to conserve biodiversity. Defining and assessing ‘biodiversity’ presents epistemological challenges to which many scientists attend (cf. Sarkar 2002; Sarkar et al. 2006). Moreover, conservation biologists must collect, maintain, and analyze large sets of data. With better technology to track and measure biological and environmental variables, and with the ability to share or create open access databases, conservation biology faces emerging ethical issues concerning its reliance on big data.

As with other sciences, the use of big data in conservation biology has led to ethical considerations about how best to balance basic scientific virtues, like the open flow of information and collaboration across borders, with the need to protect participant privacy and to maintain confidentiality in certain contexts (Bowser et al. 2014).

In the hypothetical scenario described above, the context is a citizen science project in which amateur birders share records of their observations, which are then curated and annotated by experts to become data made available on an open access platform. Because private citizens are sharing information, the eBird website clearly outlines its privacy policy, informing participants that although no personal contact information is made public, the details of any observation and its corresponding location (species, numbers, etc.) are available to all registered eBird users (Cornell Lab of Ornithology 2018). Therefore, it is possible that information about sightings on private property becomes publicly available. In the scenario, the interactions between Andrei and his neighbor, Anna, illustrate one particular ethical issue that might arise from such circumstances.

The potential costs of reporting on the presence of sensitive or endangered species present another concern related to confidentiality, which also arises in the hypothetical scenario. The eBird website includes guidelines for reporting on sensitive species (Team eBird 2012).

Conservationists worry that publicizing the explicit coordinates of, or directions to, the locations of sensitive or rare species might encourage more traffic in the area, which may increase the risk of human disturbance to a vulnerable species’ habitat. Reports of rare birds can also have negative effects on the quality of the databases to which they are submitted. For example, such reports may lead to the phenomenon of “twitching” – “the act of making trips specifically to see previously reported rare birds” (Straka & Turner 2013, 40). Twitching can lead to biased samples of checklists or misleading data on bird abundance in open access databases from citizen science projects (Straka & Turner 2013; Kelling et al. 2009).

There are additional concerns about confidentiality and security with respect to publicly available data in conservation biology, sustainability, and environmental sciences (Keeso 2014). For example, poachers may gain access to the locations of endangered species and cause harm. Governments are sometimes hesitant to disclose detailed geographical maps – which might be very useful to scientists in tracking a region’s biodiversity – for reasons of national security. And, some corporations and scientists are worried about confidentiality because they view their data as proprietary.

Moreover, new technologies used by conservation biologists in the field to gather data, such as biotelemetry, might require interventions in natural habitats, which raise ethical concerns, especially in the context of research on endangered species or sensitive ecosystems (Cooke 2008; Jewell 2013). The use of biotelemetry often requires tagging individuals of a species. Tagging generates valuable information that can inform conservation priorities and help meet conservation goals, and the working assumption is that such interventions will not harm the welfare of individuals or populations; still, the risk of harm remains. To mitigate these risks, biologists have made efforts to weigh the relative benefits of the research against any costs to individuals and populations. Researchers also investigate the impacts of tagging activities and test tagging techniques to develop better intervention practices (Cooke 2008, 172).

Furthermore, researchers have considered some of the large-scale effects of big data biodiversity projects, such as the Global Biodiversity Information Facility (GBIF), on the priorities and practices of the ecological sciences (Devictor & Bensaude-Vincent 2016). Devictor and Bensaude-Vincent argue that the conversion of records and observations into data – what they call the process of datafication – results in the loss of information (e.g. de-contextualization) about particular environments or ecosystems, which in turn transforms the science of ecology from one centered on environmental management to one centered on providing and managing data for environmental management. They refer to this shift of focus as a transformation of ecology into a “technoscience” (Devictor & Bensaude-Vincent 2016, 20). This transformation might have harmful implications if it leads to a situation where scientists feel justified in accumulating data and monitoring global diversity without any concern for consequences occurring at smaller, local scales, or for the lack of political action needed to protect local environments or ecosystems (Devictor & Bensaude-Vincent 2016, 19-20).

While an emphasis on the accumulation of big data for conservation biology and environmental science might lead to a neglect of local contexts, some researchers have instead emphasized potential ethical upshots and societal benefits of big data, and data sharing in particular, within these fields. For example, Soranno et al. (2015) claim that “the issue of data sharing holds the potential for environmental scientists to align their practice with the discipline’s growing interest in issues of social consciousness, the democratization of science, inclusion, and scientific literacy” (Soranno et al. 2015, 71). According to these authors, the increasing reliance on public participation in, and sponsorship of, research creates an ethical obligation for scientists to promote and facilitate data sharing.

References

1. Cornell Lab of Ornithology. 2018. “Home: Privacy Policy and Cookie Policy.” Accessed 14 May 2021. https://www.birds.cornell.edu/home/privacy.

2. Team eBird. 2012. “Guidelines for Reporting Sensitive Species.” Accessed 14 May 2021. https://ebird.org/news/sensitive_species.

Current discussions concerning recently launched large-scale data collection projects in neuroscience, such as the US’s BRAIN Initiative and the EU’s Human Brain Project, raise both epistemological and ethical questions. Concerning the former, many have asked what, if anything, “bottom-up” strategies of large-scale collection of data about the brain can really tell us about the human mind, consciousness, and behaviour. Those sorts of concerns (e.g. about faulty inferences, false positives, etc.) often steer the ethical questions about the implications that the collection of brainwave data may have for our notions of personal identity, privacy, property, the capacity for consent, and the control of behaviour. Additionally, the novel uses of neurotechnologies raise some of the typical issues in the ethics of emerging technologies, such as dual-use dilemmas and governance.

First, the issues of personal identity, privacy, and property in big data neuroscience projects are similar to those that have emerged in the context of genetics and genomics (Choudhury et al. 2014; Illes & Lombera 2008). If data from EEGs, for example, can be used as a biometric signature to identify individuals, then the identifying data may include sensitive information about the mental health or capacities of those individuals. That kind of sensitive information must be protected to avoid its misuse and the potential profiling of individuals (Rodenburg 2014). So, safeguards must be taken to protect the confidentiality of research participants. But researchers also have a duty to be clear with research participants about the purposes of data collection and about how the data will be used and who will have access to it.

Also similar to the context of genomics databases, researchers and scientists think it is important to safeguard individuals’ mental privacy in a way that does not impede scientific and technological developments (Choudhury et al. 2014; Illes & Lombera 2008; Rose 2014). In this sense, there is a tension over whether to prioritize the principle of autonomy in research involving human subjects, or whether the principles of beneficence and justice ought to become more important in guiding the moral duties and responsibilities of researchers.

With respect to neuromarketing, many have questioned whether the appeal to consumers’ unconscious brain signals might be an invasion of privacy or an unethical manipulation of our affective states. Others have proposed that the field is exploiting useful medical equipment for frivolous and shallow purposes (Ulman et al. 2015). In light of these sorts of concerns, France revised its 2004 rules on bioethics in 2011 to include a section on the appropriate use of brain-imaging technologies. It states: “Brain-imaging methods can be used only for medical and scientific research purposes or in the context of court expertise” (Oullier 2012; Ulman et al. 2015). With this revision, the commercial use of brainwave technologies is currently banned in France.

Ethicists have also noted that using these data to sell goods and services might lead to the exploitation of vulnerable groups (e.g. children) who cannot understand or consent to the practices of neuromarketing. As with other research with human subjects, ethicists have argued that vulnerable groups should be protected (Ulman et al. 2015).

Lastly, some fear that the potential manipulation of our cognitive and affective states for profit in commercial contexts might spill over to the political realm, where individuals can be manipulated to vote one way or another. Here again there is concern over whether the use of data gathered from neurotechnologies might interfere with our capacity for consent (Rodenburg 2014; Gutmann 2015).

The fictional scenario described above is loosely based on a recent initiative by Google. In 2009, research scientists at Google published a study in Nature describing their methods for tracking seasonal and pandemic influenza outbreaks using data generated from monitoring health-seeking behaviour on Internet search engines (Ginsberg et al. 2009). They had developed tools to track outbreaks in real time in order to improve upon the traditional methods used by the Centers for Disease Control and Prevention (CDC), which take approximately two weeks to gather and analyze data. The algorithms developed by the scientists at Google led to the creation of Google Flu Trends (GFT), a web service launched in 2008 to track flu outbreaks. The service no longer publishes its results, but its historical data remain available to other researchers.

The 2009 Nature paper is often used as a paradigm example to illustrate the emergence of a new field referred to as digital epidemiology, or digital disease detection (DDD) (Brownstein et al. 2009; Salathe et al. 2012; Vayena et al. 2015). This field shares the goals and objectives of traditional epidemiology (e.g. public health surveillance, disease outbreak detection, etc.), but makes use of electronic information sources, such as internet search engines, mobile devices, and other social media platforms, which generate data related to public health even though they are not explicitly designed for collecting public health-related data. The motivation behind DDD initiatives like Google Flu Trends is to mine large datasets in order to accelerate the process of tracking and responding to outbreaks of infectious diseases.
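
To make the statistical idea concrete: Ginsberg et al. (2009) fit a simple linear model relating the log-odds of the proportion of physician visits for influenza-like illness (ILI) to the log-odds of the fraction of search queries classified as flu-related. The sketch below is a minimal illustration of that idea, not GFT’s actual implementation, and all of the weekly figures in it are hypothetical placeholders.

```python
# A minimal sketch of the modeling idea behind Google Flu Trends
# (Ginsberg et al. 2009): regress the log-odds of the ILI physician-visit
# rate on the log-odds of the flu-related search-query fraction.
# All numbers below are hypothetical placeholders, not real GFT data.
import numpy as np

def logit(p):
    # Log-odds transform: log(p / (1 - p)).
    return np.log(p / (1.0 - p))

# Hypothetical weekly observations:
query_fraction = np.array([0.010, 0.014, 0.022, 0.035, 0.028])  # share of searches that are flu-related
ili_visit_rate = np.array([0.012, 0.018, 0.030, 0.051, 0.040])  # reported share of ILI physician visits

# Fit logit(P) = b0 + b1 * logit(Q) by ordinary least squares.
b1, b0 = np.polyfit(logit(query_fraction), logit(ili_visit_rate), deg=1)

# "Nowcast" the current week from search data alone, ahead of official reporting.
q_now = 0.025
p_now = 1.0 / (1.0 + np.exp(-(b0 + b1 * logit(q_now))))  # inverse logit
print(f"Estimated current ILI visit rate: {p_now:.3f}")
```

Even this toy version makes the vulnerability discussed below apparent: the estimate is only as reliable as the assumption that the relationship between search behaviour and actual illness stays stable over time.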

In 2013, Google’s program to track influenza outbreaks was heavily criticized for mis-estimating the prevalence of influenza outbreaks (Butler 2013; Lazer et al. 2014; Lazer & Kennedy 2015). Its first big mistake occurred in 2009, when it underestimated the Swine Flu (H1N1) pandemic (Butler 2013; White 2015), due to changes in people’s search behaviour with respect to the categories of “influenza complications” and “term for influenza,” given the non-typical outbreak of H1N1 during the summer months (Cook et al. 2011). Then, in 2013, Nature reported that GFT had significantly over-estimated outbreaks of influenza (Butler 2013; Lazer et al. 2014). In a comment published in Science in 2014, Lazer et al. reported that GFT had been consistently over-estimating the prevalence of flu outbreaks for some time, inaccurately predicting the prevalence of flu cases in 100 of 108 weeks beginning in August 2011 (Lazer et al. 2014).

GFT’s track record of mis-estimations has been described as “big data hubris” – “the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis” (Lazer et al. 2014, 1203). In epidemiology, traditional data collection and analysis involve gathering data from structured interviews, archives, censuses, and surveys, and then looking for patterns and trends in the data. However, most scientists commenting on the case of GFT have insisted that, despite its failures, the use of big data in epidemiology can be extremely valuable for public health surveillance (Lazer et al. 2014; Lazer & Kennedy 2015; White 2015).

The GFT case has raised many epistemological questions about how to improve Google’s flu algorithms, and big data analytics more generally, and about how public health policy- and decision-makers ought to use these tools. But it has also engendered ethical concerns at “the nexus of ethics and methodology” (Vayena et al. 2015).

For example, there can be harmful consequences when such models are woefully inaccurate or imprecise. False identification of outbreaks or inaccurate and imprecise predictions of outbreak trajectories could place undue stress on limited health resources (Vayena et al. 2015). Wrong results or predictions might also undermine the public’s trust in scientific findings, and worse, might lead to the public’s dismissal of public health warnings.

In addition to worries about maintaining the public’s trust on matters of public health, researchers developing models aimed at detecting outbreaks must consider that their results risk harming individuals, businesses, communities, and even entire regions or countries (Vayena et al. 2015). This harm may take the form of stigmatization of groups and financial loss due to prejudice or restrictions on travel to tourist destinations. It can also restrict the freedom of individuals in the form of imposed travel restrictions or quarantines. Consequently, ethicists have stressed that “methodological robustness” with respect to digital epidemiology is “an ethical, not just a scientific, requirement” (Vayena et al. 2015, 4).

As with other instances of big data collection and use in the life sciences, the use of big data gathered online in social or commercial contexts for public health purposes raises ethical issues about individuals’ right to privacy and notions of informed consent when such data are used for research purposes. However, in this context, it has been suggested that private corporations that have access to relevant data might have a moral obligation to share that data for matters related to public health and public health research. This consideration raises questions about how to regulate private-public partnerships with regard to data ownership within a global context in order to uphold the values of transparency, global justice, and the common good in public health research (Vayena et al. 2015).

The study by Gymrek et al. (2013), and others like it, generated demands for additional restrictions in database-sharing policies and for changes to how and what kinds of data are collected and anonymized, and it raised worries about some of the foundational concepts in research ethics, including the notions of informed consent, privacy, confidentiality, and the nature of the researcher/clinician – subject/patient relationship. This short commentary will focus on those concepts in biomedical research ethics.

Most researchers and ethicists agree that it is important to safeguard privacy and confidentiality for patients and research subjects, but to do so in a way that does not impede scientific progress. This “sweet spot” between the competing goals of scientific research and the individual’s right to privacy is especially relevant for current genomic and genetic analyses using big data. For instance, Genome-Wide Association Studies (GWAS) draw on large databases of individuals’ genetic variants to determine whether certain variants are important contributors to complex diseases or disorders. There is also much optimism about the prospects of personalized medicine, in which medical professionals would access and integrate patients’ personal genomic data into targeted and tailored treatments. The success of personalized medicine, however, requires knowledge about which sorts of treatments will be effective for certain genetic variants, and that knowledge depends on genomic analyses of big data.
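
To illustrate the kind of computation involved, the sketch below runs a single-variant case-control association test on hypothetical allele counts. This is a deliberately minimal example: real GWAS repeat such tests across millions of variants and typically use richer models (e.g. logistic regression with covariates), along with corrections for multiple testing and population structure.

```python
# A minimal, hypothetical sketch of the core GWAS computation: test whether
# allele counts at one variant differ between cases and controls, using a
# chi-square test on a 2x2 contingency table. All counts are made up.
from scipy.stats import chi2_contingency

#                     risk allele   other allele
case_alleles    = [550, 450]   # 1,000 chromosomes from cases
control_alleles = [470, 530]   # 1,000 chromosomes from controls

chi2, p_value, dof, expected = chi2_contingency([case_alleles, control_alleles])
print(f"chi-square = {chi2:.2f}, p = {p_value:.2e}")

# Across roughly a million variants, a genome-wide significance threshold
# (conventionally p < 5e-8) would be applied to limit false positives.
```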

While there are clear potential benefits of biomedical research analyses of large sets of genomic and genetic data, that information is also particularly sensitive, as it can reveal a subject’s identity as accurately as a social security number can. It can also reveal the identity of an individual’s relatives. Because this information can serve as an accurate individual identifier, some researchers have taken the notion of genetic privacy to denote a special instance of privacy (e.g. Rothstein 1997), based on the notion of “genetic exceptionalism” – i.e. “the view that being genetic makes information, traits, and properties qualitatively different and deserving of exceptional consideration” (Lunshof et al. 2008).

If we accept a concept of genetic privacy based on genetic exceptionalism, then there are implications for the way we think about infringement of privacy and breach of confidentiality within the biomedical research context. For instance, Lunshof et al. (2008) argue that because some violations of privacy are beyond the control of individuals or institutions (as in the above case scenario), they do not necessarily signal a moral failure, even though those violations may cause harm in some instances. However, they note that the promise of confidentiality implies a relationship of trust and, with it, moral responsibilities for those who promise confidentiality. For that reason, a breach of confidentiality does entail a moral failure with respect to the relation of trust between the researcher/clinician and subject/patient.

These moral considerations have led research scientists and ethicists to rethink the model of informed consent that typically guides the relationships of trust between clinician/researcher and patient/subject in the biomedical context, and to reconsider what, if any, sense of privacy and anonymity should be promised to patients and research subjects.

Informed consent is typically obtained for specific research studies. It is problematic in research that makes use of big data because it does not, and cannot, explicitly cover all future investigations, or future instances of sharing and aggregating data across research communities. Because of these features of big data science, the traditional notion of informed consent cannot be implemented in the usual way.

Consequently, some have proposed more liberal notions of consent, such as “open,” “broad,” or “blanket” consent (Mittelstadt & Floridi 2015). These notions of consent ask research participants to consent to all future research activities that make use of their data. However, those approaches have been criticized for limiting patients’ or subjects’ autonomy (Mittelstadt & Floridi 2015; Master et al. 2014). An alternative to these models of general consent is the notion of “tiered” consent, which would enable patients and subjects to limit future access to their data to only some kinds of research, or to require researchers to re-consent patients and subjects for specific kinds of future research. That approach, in turn, has been criticized for creating too many difficulties for researchers and for the management of large databanks.

Another alternative has been to emphasize the concept of solidarity rather than consent. This approach relies on the participation of “information altruists” concerned with the public good. It is mainly concerned with how research can be pursued and harms can be mitigated, “by providing data subjects with a ‘mission statement’, information on potential areas of research, future uses, risks and benefits, feedback procedures and the potential commercial value of the data, so as to establish a ‘contractual’ rather than consent basis for the research relationship” (Mittelstadt & Floridi 2015; Prainsack & Buyx 2013). The proposed reliance on solidarity and public sentiment has been criticized for placing undue burdens on individuals to participate in research. However, it might also serve to emphasize the ethical responsibilities of big data researchers and database managers, and encourage scientists to be more proactive in the disclosure and transparency of risks of harm that might occur as a consequence of the loss of privacy (Lunshof et al. 2008; Barocas & Nissenbaum 2014). In this way, genomic and genetic research dependent on large sets of data has the potential to shift the moral responsibilities of researchers from protecting the privacy of individuals to ensuring the just distribution of any benefits from the outcomes of their research (Fairfield & Shtein 2014).

The emerging concepts of consent under negotiation within this research context, and the emphasis on researchers’ duty to benefit research participants and their communities more widely as well as the research participants’ duty to contribute to the public good, are areas of ethical deliberation intended to maintain the public’s trust in the medical profession, and scientific institutions more broadly. These ethical concepts and proposals, therefore, ought to be evaluated by how well they are able to do so.

The publication of Huang and colleagues’ research caused a stir in the scientific community and generated many editorials and opinion pieces in scientific publications warning about the ethical issues that must be addressed before this research is pursued any further.

Scientists were quick to call for a moratorium on all genome editing of human embryos, invoking parallels with recombinant DNA technology in the 1970s and the meeting at Asilomar in 1975, where molecular biologists met to discuss and set guidelines to ensure that genetic research would develop in a safe and ethical manner (Vogel 2015).

However, many are critical of the comparisons with the Asilomar meeting and the attempt to use that conference as a model on which to build bioethical guidelines for future research with genome editing technologies (Jasanoff et al. 2015). Critics claim that the 1975 Asilomar conference was not an inclusive meeting because many of the stakeholders were not invited, such as ethicists, politicians, religious groups, and representatives of human-rights organizations or patient-interest groups (Reardon 2015b). Because of the lack of representation from non-scientists in the discussions, critics claim that Asilomar was merely an effort by scientists to resist government restrictions and promote public trust in the idea that scientists are able to regulate themselves (Reardon 2015b).

In response to calls for a moratorium, the US National Academy of Sciences (NAS) and the National Academy of Medicine (NAM) launched an initiative to develop new guidelines to address the use of technologies that make germline genetic modification possible, and called for members of the scientific community to attend an international summit on the topic in December 2015 (Reardon 2015b).

The International Summit on Human Gene Editing held in Washington, D.C., in December 2015, was hosted by the National Academy of Sciences, the National Academy of Medicine, the Chinese Academy of Sciences, and the U.K.'s Royal Society. Members of the Summit’s organizing committee submitted a public statement shortly after the meeting, outlining four recommendations. First, basic and preclinical research on gene-editing technologies is needed and should proceed. Second, clinical use of the technologies on somatic cells should be explored. Third, it is irresponsible to pursue clinical applications of gene-editing technologies on germline cells at this time. And, fourth, there is a need for ongoing discussions regarding the clinical use of germline gene editing, so the national academies should create a forum to allow for discussions which are inclusive and which engage with a variety of perspectives and expertise.   

Some science policy experts have argued that the complexity of the issues surrounding germline genetic modification cannot be adequately addressed from a scientific perspective alone. For example, Daniel Sarewitz, co-director of Arizona State University’s Consortium for Science, Policy, and Outcomes, argues:

The idea that the risks, benefits and ethical challenges of these emerging technologies are something to be decided by experts is wrong-headed, futile and self-defeating. It misunderstands the role of science in public discussions about technological risk. It seriously underestimates the democratic sources of science's vitality and the capacities of democratic deliberation. And it will further delegitimize and politicize science in modern societies (Sarewitz 2015).

Sarewitz’s comment underscores the importance of a democratic deliberative process in identifying and addressing ethical issues about emerging technologies, as well as in developing guidelines that will help to decide how these technologies will be further developed and used. In this particular case, there is worry that germline genetic modification of human embryos to replace defective genes may lead down a slippery slope to eugenics, or to attempts to create perfect designer babies.

Lastly, the decision by Science and Nature to decline to publish the research paper because of undisclosed ethical objections raised further ethical issues about the dissemination of scientific research within a global context. The managing editor of Protein & Cell, Xiaoxue Zhang, has claimed that the journal’s editorial board was not blind to the potential ethical objections to the research, but decided to publish the article as a way to “sound an alarm” and to begin discussions about the future direction of genome editing technologies (Cressey & Cyranoski 2015). Whether these discussions should come before or after the scientific research is conducted or published raises important questions about how best to regulate innovative scientific research with uncertain outcomes or potential dual-use applications.

Scientists and ethicists have raised several ethical concerns about Deep Brain Stimulation (DBS) technology and its potential applications. The use of the technology in neuro-surgical interventions itself raises important moral questions about autonomy, identity, authenticity, and responsibility, with respect to potential immediate impacts on the individual. Broader social and moral considerations of applications of the technology in different contexts, such as military use, raise even more concerns about justice, human enhancement, and moral responsibility. 

In its therapeutic context, DBS is being used to study and treat neuro-degenerative diseases, while initial research on applications of DBS to treat psychological and psychiatric disorders is also being pursued. Ethicists and clinicians have made efforts to balance the risks and benefits of DBS treatment, and have discussed the issue of autonomy for individual patients or subjects who have a reduced capacity to provide fully informed consent (Schermer 2011; Unterrainer & Oduncu 2015). They have also urged that more attention be paid to the broader psycho-social impacts of DBS treatment and the effects those impacts may have on an individual’s personal identity.

For example, Schermer argues that it is possible for DBS therapy to disrupt a patient’s personality, mood, behaviour, or cognition, so that her entire personal narrative identity – i.e. her “self-conception, [her] biography, values, and roles as well as [her] psychological characteristics and style” – is disrupted (Schermer 2011). This can affect a person’s normal “narrative flow of life” and bring about behaviour that can lead to harm to herself and to others in her social milieu. Issues about identity and authenticity also invoke questions about a patient’s personal responsibility for disruptive or harmful behaviour. Accordingly, Unterrainer and Oduncu have suggested that health professionals ought to use a Ulysses Contract as a cautionary ethical and legal measure against these possible negative impacts of DBS on an individual’s sense of identity (Unterrainer & Oduncu 2015). A Ulysses Contract takes its name from the episode in Homer’s epic in which Ulysses has himself tied to the mast of his ship in order to protect himself against the Sirens’ seduction. The term represents the idea of an autonomous individual deciding, in advance, to restrict her autonomy in a future setting. However, the authors’ suggestion does not completely resolve ethical questions pertaining to identity and autonomy in DBS treatment and research, because there remain the challenges of predicting the loss of autonomy in a patient/subject after brain stimulation, given that patient’s/subject’s initial disease state, and of deciding whether and when physicians and/or legal representatives should intervene to terminate stimulation (Unterrainer & Oduncu 2015).

Ethicists have also considered the potential role of neuro-surgical technologies, like DBS, in the military (Liao 2014; Tracey & Flower 2014). As presented in the hypothetical scenario described above, the Defense Advanced Research Projects Agency (DARPA) is currently investigating DBS as a way to treat post-traumatic stress disorder (PTSD) in veterans, but it is not unrealistic to consider ways that the technology might be developed and applied to enhance soldiers, much as drugs like Benzedrine and Modafinil are already used by the US armed forces to increase focus and alertness in soldiers (Tracey & Flower 2014).

Such potential applications raise ethical concerns about coercion and personal or moral responsibility. Because the military is a hierarchical organization, some have questioned the ability of individual soldiers to freely consent to neurological interventions. In addition to the possibility of being coerced by superiors, soldiers may also be subject to subtle forms of coercion to accept interventions in order to be considered fit for duty and reliable by their peers. Additionally, if it is possible for neurological interventions, like DBS, to change or interfere with an individual’s capacity for judgment, then it is unclear whether we can ascribe personal or moral responsibility to soldiers who have undergone the intervention in the line of duty in the same way that we ascribe personal or moral responsibility to a drunk driver who causes an accident (Tracey & Flower 2014).

The use of neurological interventions as enhancement tools in the military context raises particular ethical issues, but even their suggested therapeutic use to treat PTSD brings into question the extent to which painful memories may be repressed without consequences for the brain’s other functions. Whether DBS and other emerging neuro-technologies will bolster the brain’s capacity for resilience or amplify its vulnerability remains unknown (Tracey & Flower 2014).

Projects within DIY biology are often thought to be part of a political movement that represents “a material re-distribution, a democratization, and an alternative to established, technoscience” (Meyer 2012). The very politics of transparency and accessibility of the DIY biology movement is what generates many ethical, social, and environmental concerns about biosafety and biosecurity (e.g. bioterrorism). The movement also invokes larger questions about governance and the regulation of scientific research.

Bioengineering research and development outside of academic and research institutions raises concerns about the potential release of harmful biological materials into the environment, and about its potential effects on human health. The challenges of assessing and managing risks in this area are even greater given our currently limited knowledge about complex adaptive systems, from microorganisms to ecosystems. That level of uncertainty and unpredictability poses serious concerns: “Experimentation with living organisms […] is problematic because they are self-replicating and transmissible, so they pose many hazards that one would not encounter in many other types of do-it-yourself science” (Wolinsky 2009).

However, many projects in bioengineering, including projects in DIY biology, promise beneficial applications of new biotechnologies and new modified organisms. For example, members of the DIY biology community have made efforts to develop biosensors and biomarkers, such as DNA barcoding, intended to improve food safety (Landrain et al. 2013). Critics of the Glowing Plant Project argue that it offers no benefits for human health, safety, or the environment, whereas its promise of distributing genetically modified seeds to its supporters presents a potential risk to the environment. Supporters of the project have responded by claiming that basic scientific research motivated by pure curiosity often leads to beneficial applications down the road. The project’s CEO, Antony Evans, has suggested that a future goal could be the development of a biotechnology to replace street lamps with glowing trees, which might help to reduce carbon dioxide emissions; the modified glowing trees would also last longer than most current street lamps.

The issue of balancing potential risks and benefits in the development of this new biotechnology invokes a larger ethical issue. The main concern isn’t solely about the potential release of harmful biological materials into the environment, but rather about the lack of regulatory oversight that might set dangerous precedents for future projects. Given these concerns, questions arise about what kinds of oversight agents or bodies should regulate citizen-science movements, such as DIY biology, and the extent to which these projects ought to be regulated.

Currently, the DIY biology community is self-regulated (Wolinsky 2009; Landrain et al. 2013). In the case of the Glowing Plant Project, the modified plants are beyond the jurisdiction of the Animal and Plant Health Inspection Service (APHIS), an agency of the US Department of Agriculture (USDA), because the agency only regulates genetically modified plants if plant pathogens are part of the process. A common method of producing genetically modified plants uses a plant pathogen, Agrobacterium, to transfer foreign genes into new host cells. But the scientists at the Glowing Plant Project sidestepped this method by using a gene gun instead, and thereby dodged the legal and regulatory oversight of APHIS. Because of that, detractors have also criticized the project for capitalizing on a regulatory loophole.

Despite that criticism, the DIY biology community has considered some of the worries about the release of harmful biological material. They have taken a “bottom-up” approach to self-governance by drafting a code of ethics and by encouraging transparency and collaborations with public authorities (Landrain et al. 2013). However, the extent to which members of the community follow this code remains questionable (Evans & Selgelid 2014).

An additional challenge for DIY biology is how potentially beneficial innovations, if and when they are developed, will fit into current social institutions and economic and political arrangements. Take the case of drug development as an example. There is much more to that process than developing a new drug to which a current disease has no resistance (Evans & Selgelid 2014). There needs to be knowledge about how and when to use the drug correctly, about drug resistance, and about the manufacturing and distribution processes, which invoke many economic and sometimes political challenges (Evans & Selgelid 2014). Thus, as Evans and Selgelid have argued, any benefits that come out of DIY biology efforts will be “contingent on the performance of other institutions, including but not limited to health and security establishments” (Evans & Selgelid 2014, 1076).

Lastly, a difficult question regarding governance and the regulation of DIY biology concerns finding the right scope and balance of regulation. On the one hand, ensuring global and national biosecurity and biosafety, and protecting the environment, are paramount. On the other hand, too much regulation may lead to underground operations that are more difficult to track and might pose a greater risk (Wolinsky 2009). Landrain et al. sum up the challenge accordingly:

“The regulation and governance of DIY biology calls for a balancing act: to collectively set ethical standards without alienating individuals, to establish a global set of principles that makes sense in local contexts, to be close enough to authorities, yet far enough to avoid losing the counter-cultural and innovative edge that DIYbio stands for” (Landrain et al. 2013). 

In the last few decades, there has been much research and development on reliable alternatives to non-renewable energy resources. Government mandates to adopt biofuels as a way to mitigate greenhouse gas (GHG) emissions resulted in large-scale production of plant-based liquid fuels – what are often referred to as “first-generation” biofuels.

The quick adoption of plant-based biofuel technologies during this time had many unforeseen negative social, environmental, and economic consequences. For instance, many challenged the claims that biofuels were effective at lowering GHG emissions when compared to fossil fuels, and criticised large-scale production of biofuels as having adverse effects on environmental health, including the destruction of rainforests. Given that biofuel crops compete with food crops for land and resources, biofuels can also affect food prices and undermine food security. In addition to these negative impacts resulting from direct land-use changes (dLUC), there are compounding effects of indirect land-use change (iLUC) in cases where other social and economic activities are displaced or natural resources are depleted because of large-scale production of biofuels (Buyx & Tait 2011; Mortimer 2011).

In 2009, the Nuffield Council on Bioethics established a working group to examine the ethics of biofuels and to outline an ethical framework to guide the future development and implementation of biofuel technologies in an economically feasible and sustainable way. The Council published its report in 2011, outlining five guiding principles for biofuel technologies:

  1. Biofuels development should not be at the expense of people’s essential rights (including access to sufficient food and water, health rights, work rights, and land entitlements).
  2. Biofuels should be environmentally sustainable.
  3. Biofuels should contribute to a net reduction of total GHG emissions and not exacerbate global climate change.  
  4. Biofuels should develop in accordance with trade principles that are fair and recognize the rights of people to just reward (including labor rights and intellectual property rights).
  5. Costs and benefits of biofuels should be distributed in an equitable way (Buyx & Tait 2011, 633).

The ethical principles were designed to provide an ethical “test” for future biofuel technologies and to prevent some of the negative consequences of first-generation biofuel production. The Council also considered whether there is a moral duty to develop biofuel technologies in light of impending climate change. They claimed that the principle underpinning their ethical guidelines for biofuels is the “duty not to do nothing” (Buyx & Tait 2011, 636). In other words, if one accepts that biofuels can play an important role in mitigating climate change, then there is a duty to ensure the ethical and sustainable development and adoption of biofuels.

The Council also looked forward to what some have called the “second and third generation” biofuel technologies, which aim to use less land and water resources and reduce social and environmental harms. These emerging technologies include using non-food crops, like trees, agricultural waste, and algae to produce biofuels, as well as taking advantage of better gene-modification tools to create variants with higher yields.

In addition to the ethical concerns already addressed in the Council’s report, these next-generation biofuel technologies present new challenges, such as concerns about intellectual property with new patented technologies, concerns about releasing genetically-modified organisms into the environment (and other environmental impacts), and concerns about how to govern and regulate the introduction of new technologies into existing social and economic structures (Tait & Oyelaran-Oyeyinka 2010).

In response to the Nuffield Council’s report, philosopher Paul B. Thompson, the W.K. Kellogg Chair in Agricultural Food and Community Ethics at Michigan State University, has argued that the concept of a technological trajectory is useful for understanding and analyzing the ethics of different R&D strategies for biofuel technologies (Thompson 2012). He points out that some of the rationales used to justify the development and adoption of biofuels, such as a push for energy independence in the US and incentives to find alternative uses for commodities like food crops, have very little to do with the main goal of mitigating climate change. Attention to these trajectories can help foresee possible resistance to adopting new, next-generation biofuels under current social and economic conditions.

The UK government’s announcement of its approval of mitochondrial transfer therapies made headlines throughout the world, with many scientists, doctors, and ethicists welcoming the decision as a positive step towards preventing children from being born with debilitating conditions caused by dysfunctional mitochondria, and as giving many prospective parents hope of having healthy, genetically-related children. Despite these prospective benefits, the decision also raised several ethical concerns.

First, a common ethical concern that emerged after news of the decision was whether the interventions created “three-parent babies,” as the embryos resulting from the modified eggs or zygotes would include genetic material from three individuals. Some scientists have suggested that the term is misleading and merely the result of media sensationalism, because mitochondria possess only a very small number of genes and their functions are not known to contribute to physical attributes (Reznichenko et al. 2015). Others have insisted that scientists are still unsure about the exact role of mitochondrial DNA and the interactions between mitochondrial DNA and nuclear DNA in gene expression (Dimond 2015). However, philosophers have pointed out that the debate about the nature and extent of the genetic contribution of mitochondrial DNA rests on a problematic assumption of genetic determinism; that is, the idea that an individual’s essence or personal identity is founded on her DNA (Baylis 2013; Dimond 2015). Others have argued that the ethical permissibility of the procedures does not rest on the fact that they will affect the identity of the future child (because that is a given), but on the fact that they will safeguard the future child’s right to an open future (because the child will be free of mitochondrial disease) (Bredenoord et al. 2011, 99).

Second, because the proposed therapies have been defined as germline gene therapy, ethicists raised the possibility that the UK’s decision could lead down a slippery slope to eugenics or to the creation of designer babies, if and when the interventions become available for non-therapeutic purposes. For example, older women without mitochondrial mutations may seek these interventions in the future to enhance fertility (Couzin-Frankel 2015). Or, perhaps, lesbian couples might want to use these technologies to ensure that their child carries genetic material from both partners (Dimond 2015). These hypothetical uses would be enhancements, rather than therapies, and would invoke further ethical concerns about non-therapeutic applications of these interventions for human enhancement.

Third, because germline modifications entail the transmission of those modifications to later generations, some have raised concerns about the lack of knowledge of long-term consequences and about whether the procedures pose unacceptable risks. Of course, scientists cannot be expected to know all possible consequences in advance, so some level of risk is considered acceptable. But the science is complex, and much about mitochondrial genes and their functions in gene expression remains unknown. Thus, the language used by the HFEA, claiming the procedures are “not unsafe,” might be misleading (Dimond 2015). Fourth, conservative critics of the procedures have focused their criticisms on the pronuclear transfer technique because it involves the creation and destruction of embryos and, as such, stands in opposition to the principle of the sanctity of life (Dimond 2015).

Finally, bioethicist Françoise Baylis has provided more general criticisms of the underlying assumptions motivating these sorts of procedures. Baylis argues that a “wish,” rather than a “need,” for genetically-related children might place undue risk on egg providers and may impose health risks on future children (Baylis 2013). In fact, women affected by mitochondrial mutations have many other options for becoming mothers. They can become pregnant and undergo prenatal diagnosis of the developing fetus, and then decide to terminate the pregnancy if the fetus is affected. They can use IVF technologies and pre-implantation genetic diagnosis to select healthy embryos. They can choose egg donation or embryo donation and then have IVF. Or they can adopt (Baylis 2013). Baylis further argues that investing limited resources in the development of mitochondrial transfer interventions for a relatively non-prevalent condition, which could be addressed with many other measures, might not be morally justifiable (Baylis 2013).

Most of the ethical discussion of psychopharmacology as cognitive enhancement has focused on so-called “smart drugs,” or neuroenhancers, such as Modafinil and Methylphenidate (Ritalin), and their “off-label” use. These discussions address ethical topics of safety, informed consent, access and fair distribution, coercion, moral accountability, and cheating (Bostrom & Sandberg 2009; Cakic 2009; Farah et al. 2004; Goodman 2010; Greely et al. 2008; Hall 2003; Maslen et al. 2014; Schermer 2008; Stix 2009). Other psychopharmacological interventions that target memory-forming neurochemical processes (either to enhance or erase memories) have raised additional moral concerns, such as whether these interventions will affect our concept of the good life and our notions of authenticity and personal identity, and whether the possibility of pathologizing bad memories could lead to exploitation by the pharmaceutical industry (Henry et al. 2007). Moreover, some have argued that experiencing emotional events and having emotional memories may be a requirement for moral learning and for exercising moral judgment.

If that is the case, then perhaps we ought to think twice about developing therapies that involve altering our memories. For example, philosopher Elisa A. Hurley claims:

I think we have reason to worry about propranolol’s effect of severing memories of traumatic events from the emotions that would ordinarily accompany them because it seems to result in the permanent loss of epistemic access to certain information about those past occasions, namely, to their evaluative significance as registered by the emotions experienced at the time. We might say that using propranolol results in one’s losing touch with the particular moral injuries to which trauma exposes its victims (Hurley 2007, 35).

Moreover, interference in the psychological mechanisms involving emotional memories might have negative long-term effects on individuals and society. For perpetrators of violence, such as soldiers, emotional memories can cause regret, or the “sting of conscience,” which can play a restorative role in individuals and communities recovering from the atrocities of war (Hurley 2007).

However, others have defended the development and use of memory-altering drugs to prevent PTSD and have questioned the idea that emotional memories form the basis of one’s moral judgments (Rosenberg 2007). Rosenberg argues that because patients who suffer from PTSD often have memories of events so overwhelming that they lead to serious physical symptoms, we cannot reasonably think that those same memories can in any way enhance an individual’s moral sense or judgment. Rather, she claims, “patients often feel emotionally paralyzed and generally unable to complete desired life projects for fear of triggering a disabling PTSD episode” (Rosenberg 2007, 28). Therefore, Rosenberg concludes, if propranolol is found to be safe and efficacious for preventing PTSD, there seems to be a moral imperative to use it.

Life-extending technologies and anti-aging medicine are emerging areas of focus among bio-gerontologists and molecular biologists. This research is sometimes divided between “weak” and “strong” forms of life-extension research. The former label describes biomedical research aimed at preventing and treating common diseases of older individuals, such as certain forms of cancer, whereas the latter refers to slowing down or stopping the aging process and increasing the average human lifespan in a relatively quick and significant way (Partridge & Hall 2007; Partridge et al. 2009). Not surprisingly, it is the latter, “strong” sense of life-extension or anti-aging research that has provoked the most ethical concerns and discussions. Some of these concerns have to do with the prospects of sustaining increasing populations and shifting demographics, which could lead to drastic alterations of social and economic structures, such as the feasibility and implementation of social security policies or the provision of healthcare, the disruption of social arrangements and human relationships (e.g. family structures, rates of marriage and divorce, reproductive and child-rearing practices), and the persistence of tyrannical governments or a slower rate of social change and social progress (Fukuyama 2003; Binstock 2004).

Many of these concerns have to do with justice and fairness (i.e. the fair distribution of benefits and burdens in society), and will have consequences for individuals, society, and the environment. Additional ethical and epistemological questions have been raised about the appropriate goal of biomedical research and healthcare (e.g. whether extending life is or ought to be a goal of biomedicine), and the meaning and value of aging and its implications for our notions of human dignity and identity, and our claims to human rights (Partridge & Hall 2007; Gems 2003). Critics of anti-aging interventions, such as Leon Kass, former chairman of the President’s Council on Bioethics under President George W. Bush, Daniel Callahan, a bioethicist, and Francis Fukuyama, a political scientist, oppose the measures, albeit for different reasons. Kass and Fukuyama take issue with interfering with the natural life cycle, or the traditional human life expectancy. They think these interventions will disrupt the natural order and compromise the value of the different stages of human lives. Callahan is more concerned with consequences of social unrest or social strife that could result from increasing human lifespans, such as the radical changes to our social institutions, notions of personal identity, and economic structures (Turner 2004).

Advocates, such as Aubrey de Grey, scientist and founder of the “Strategies for Engineered Negligible Senescence” (SENS) Research Foundation, claim that the right to live is a fundamental human right, which translates into a moral duty for the medical community to pursue research into life-extension technologies and anti-aging interventions (de Grey 2005). In other words, de Grey argues that the moral obligation in medicine to save life is the same as the duty to extend it (Partridge & Hall 2007). Meanwhile, research into these technologies and interventions is being pursued and, while scientists claim that we are still far from immortality, or even from expanding lifespans to 150 or 200 years, political scientist Robert Binstock, among others, has argued that anticipatory deliberation concerning the social impact of these measures should be actively pursued (Binstock 2004; Juengst et al. 2003). For example, Binstock argues that we should think about how these interventions will be fairly allocated, if and when they come about. Furthermore, he claims that the scientists involved in this research, along with social scientists and ethicists, should be proactive in shaping and constraining some of the social and environmental ramifications that may result from these interventions (Binstock 2004).

Philosophers and ethicists have recently brought attention to some arguments for the moral permissibility (and, in some cases, the moral imperative) of adopting animal disenhancement technologies if (and, more likely, when) they become available. First, if our goal is to reduce overall animal suffering, and animal disenhancement would reduce or eliminate animals’ ability to suffer, then we ought to accept animal disenhancement as a moral imperative. This reasoning follows closely the logic of the utilitarian argument presented by Peter Singer for reducing animal suffering (Singer 2002).

Second, from an animal rights view, Tom Regan has argued that we should not treat animals solely in an instrumental way, because in doing so we violate them as “subjects of a life” (Regan 1983). But, according to this line of argument, if disenhancement entails a lack of consciousness in animals, then they cannot be “subjects of a life,” and therefore they no longer have any moral status (Thompson 2008; Palmer 2011). Other arguments have been made to support interventions that reduce animal suffering while maintaining current industrial practices, as in the case of using animals for scientific research. For example, Bernard Rollin has argued for the “Principle of Welfare Conservation” (Rollin 1995). According to this principle, disenhancement would be morally acceptable if it did not “create animals that were more likely to experience pain, suffering, or other deprivations of welfare as a result” (Thompson 2008, 310). As is evident, both consequentialist and non-consequentialist arguments have been made to support the proposal of animal disenhancement as a moral gain.

Conversely, there have been philosophical arguments against modifying animals, including disenhancement. For example, some philosophers have argued that disenhancement is intrinsically wrong because it compromises or violates animal dignity or species integrity, even if, with disenhancement interventions, some individuals would be better off than they otherwise might have been (Balzer, Rippe, & Schaber 2000; De Vries 2006; Heeger 2000). Others have invoked the “yuck factor” in appealing to many people’s moral intuitions about the prospects of disenhancement (Kass 1997; Midgley 2000). Philosopher Paul B. Thompson, the W.K. Kellogg Chair in Agricultural Food and Community Ethics at Michigan State University, has considered all of these arguments and asks why negative moral reactions to the proposal of animal disenhancement persist. He also suggests that a close look at the case of animal disenhancement might shed light on ethical concerns over human enhancement (Thompson 2008). He argues that although the idea of animal disenhancement seems rational and morally acceptable under many philosophical frameworks of animal ethics – especially given the difficulties of changing the means of production in industrial farming practices – there may be good reasons to take the widespread negative moral intuitions about disenhancement seriously. He suggests that our “yuck factor” objections to animal disenhancement may reflect our deeply-held beliefs about human virtue; that is, the sense that disenhancement exhibits a failure of respect on the part of humans. As he puts it, “the entire project exhibits the vices of pride, or arrogance, of coldness, and of calculating venality” (Thompson 2008, 314).

Interestingly, Thompson goes on to consider the implications of moral intuitions and philosophical arguments about animal disenhancement for the ethical debate about human enhancement technologies. He thinks that a re-orientation of the problem of both animal disenhancement and human enhancement in the framework of virtue ethics can highlight how we associate some social practices with good and bad moral characters.* Such a shift requires thinking critically about the assumption that it is always morally justified to alter the animal/person to fit the environment, rather than the other way around. To illustrate this idea, Thompson makes an analogy to working conditions in a factory, claiming that animal disenhancement strategies “are like offering assembly line workers an aspirin in lieu of better working conditions” (Thompson 2008, 313).

His analogy illustrates the idea that the proposal to disenhance animals in factory farming so that they no longer suffer or feel pain amounts to a mere band-aid solution to a larger underlying problem about how we have chosen to organize animal industries. In the analogous case, our moral intuitions about the character of a person, e.g. the factory owner, might differ in different contexts. Thompson suggests that we might think differently about the small factory owner, who is trying to maintain a livelihood while under pressure from economic competitors and powerless to change the conditions of the market, and about the market leaders who regulate and influence the terms of competition (Thompson 2008, 315). In both cases, the conditions in the factory are the same and the factory workers are harmed in some sense, yet there seems to be a moral difference between the characters of the small factory owner and the market leaders.

This moral difference might map onto what is morally problematic about the proposal to reduce animal suffering in factory farms by modifying the animals so that they can no longer experience suffering.

Lastly, philosophers have also brought attention to the distinction between the Dumb-Down approach and the Build-Up approach to disenhancement. In the Dumb-Down approach, “researchers identify the genetic or neurological basis for certain characteristics or abilities, and produce animals that lack them by removing or otherwise disabling them either genetically or through nano-mechanical intervention in cellular and neurological processes” (Thompson 2008, 308). The Build-Up approach characterizes methods in which researchers manipulate DNA and cells in vitro to build organisms without a central nervous system, but with the ability to produce the products we consume, like eggs or meat. Whether the distinction between these two methods presents a morally salient difference is an open question. Perhaps in the case of Dumb-Down interventions, there is a sense in which the interventions reduce overall suffering in actual circumstances, which seems to be a morally desirable outcome.

In the case of the Build-Up strategy, it seems that there is no sense in which these newly-created organisms are better or worse off than they otherwise would have been. Yet there is a sense in which this approach facilitates the instrumentalization of disenhanced beings. If we apply this thinking to humans, say, creating human-like organisms for organs and tissues, then that kind of instrumentalization may seem morally repugnant to us.

*Virtue ethics is a normative moral framework that accounts for moral judgments by considering the moral character of an individual, rather than rules or guidelines about specific moral actions (as deontological and consequentialist frameworks do).