Ethics Sessions in a Summer Undergraduate Research Program

Description

This activity is considered an NAE Exemplar in Engineering Ethics Education and was included in a 2016 report with other exemplary activities. It describes an ethics module for students in a research experience for undergraduates; the module used case study discussions and other activities that asked students to think about macro- and microethical issues in research. 

Body

Exemplary features: Infusing ethics into an NSF Research Experiences for Undergraduates (REU) Sites program, and a critical assessment that reveals areas for improvement

Why it’s exemplary: Our ethics program consists of six interactive 1-hour sessions in which small groups of 3 to 5 students discuss short cases that are fictional but realistic. The case topics are selected to be relevant to the students’ interests. Because the program is taught with a general approach to ethical reasoning that uses everyday language rather than abstract philosophical principles, students gain skill in ethical reasoning through repeated practice and active learning in small collaborative groups. They are assessed through pre- and post-tests using a counterbalanced design. Each test requires the analysis of a case that is scored with a common rubric aligned with the learning objectives. This ethics program can be easily integrated into a summer undergraduate research program, and its small-group pedagogy can be scaled up to student groups of any size in any instructional setting with minimal changes.

Program description: In the summers of 2009–2012 the Information Trust Institute at the University of Illinois at Urbana-Champaign hosted an 8- to 10-week summer undergraduate research program on reliable and secure computing, supported by a grant from the Research Experiences for Undergraduates (REU) Sites program of the National Science Foundation (grant CNS-0851957). Most of the 21–26 students were majoring in computer science, computer engineering, electrical engineering, or another technical discipline.

Each summer included 6 weekly sessions on ethics in the responsible conduct of research (RCR) and in the development and use of computing technology. The sessions addressed both micro- and macroethical topics such as professional responsibility, authorship, plagiarism, mentoring relationships, conflict of interest, software quality, privacy of personal data, confidentiality of intellectual property, accuracy of computational models, and social impacts of computers. We chose these topics for their relevance to the students’ research projects. We omitted standard RCR topics that were not relevant to these students, such as the responsibilities of peer reviewers and the protection of human and animal subjects. Even the traditional RCR topics of fabrication, falsification, and data management were not relevant for many projects that involved the development of software or the mathematical analysis of algorithms.

We selected fictional but realistic short cases (scenarios) from a variety of sources, including textbooks on computer ethics and the NAE’s Online Ethics Center for Engineering and Science. In 2011 and 2012, we replaced the session on ethics in computational modeling with a screening and discussion of the 36-minute movie “Henry’s Daughters,” which highlights ethical issues in a dramatized case in which engineers design an intelligent transportation system with autonomous vehicles. In ethics presentations for other REU site programs in the summers of 2013 and 2014, after the Information Trust Institute’s REU grant had ended, we replaced some of the RCR cases with short videos (less than 4 minutes) developed at the University of Nebraska–Lincoln. We substituted the video cases for text cases because we expected that students would find video cases more interesting and memorable. Our expectations were confirmed in the program evaluation surveys at the end of each summer (not reported here).

The ethics sessions used active learning methods: collaborative and cooperative learning. We chose active learning through small-group discussion because, as Wilbert McKeachie and Marilla Svinicki have written in their book Teaching Tips, “Discussion methods are superior to lectures in student retention of information after the end of a course; in transfer of knowledge to new situations; in development of problem solving, thinking, or attitude change; and in motivation for further learning.” In each 60-minute ethics session, the students were randomly divided into groups of 3–5 to read and discuss the same case simultaneously for about 10 minutes. Then a professor led a discussion of this case with the entire cohort, inviting different groups to respond to questions about the case for about 10 minutes. The students were asked to identify the ethical issues and to suggest what the characters in the case should do next, and why. Then the session moved on to another case, again with simultaneous discussions in small groups followed by a discussion with the entire cohort. One session was organized differently: each small group took responsibility for reading and answering questions about one of five cases dealing with the social impacts of computers. For the first 10 minutes, all five groups read and discussed their cases simultaneously; then the professor interacted with each group in turn to discuss that case while the other groups listened.

At the beginning of the first ethics session of the summer program, we presented a general approach to ethical problems. Our general approach uses everyday language because, with limited time in a summer REU program, students need guidance in thinking about ethics issues without having to learn philosophical jargon.

A General Approach to Ethical Problems

  1. Identify the affected parties, their interests (rights, expectations, desires), and their responsibilities. Determine what additional information is needed.
  2. Consider alternative actions by the main actors, and imagine possible consequences.
  3. Evaluate actions and consequences according to basic ethical values—honesty, fairness, trust, civility, respect, kindness, etc.—or the following tests:
    1. Harm test: Do the benefits outweigh the harms, short term and long term?
    2. Reversibility test: Would this choice still look good if I traded places?
    3. Common practice test: What if everyone behaved in this way?
    4. Legality test: Would this choice violate a law or a policy of my employer?
    5. Colleague test: What would professional colleagues say?
    6. Wise relative test: What would my wise old aunt or uncle do?
    7. Mirror test: Would I feel proud of myself when I look into the mirror?
    8. Publicity test: How would this choice look on the front page of a newspaper?

Each student received the Association for Computing Machinery code of ethics, a book chapter on ethics for computing professionals by Deborah Johnson and Keith Miller, and a copy of the third edition of the booklet On Being a Scientist, an overview of RCR from the National Academies. Students were not tested on these readings, however, and they were not assigned any other ethics homework. The learning objectives for the ethics sessions were that students would learn to identify the ethical problems or dilemmas, recognize the people affected and understand their perspectives, identify a comprehensive list of possible actions, and recommend a justified action to resolve the ethical problem or dilemma.

Assessment information: To assess the effectiveness of the ethics sessions, we asked students to analyze two short cases. Case A highlighted ethical issues in computing technology, and case B raised ethical issues in conducting research. The students were randomly assigned to two groups in a counterbalanced pre-/post-test design. One group received case A for the initial assessment at the beginning of the summer and case B for the final assessment at the end of the summer; the other received case B initially and case A at the end. For each case, students responded to four questions, which corresponded to the four intended learning objectives: (1) What ethical issues does this case raise? (2) Who is affected by this case? What are their perspectives on the case? (3) What actions might the characters consider to resolve the ethical issues? (4) Among these actions, which should the characters choose? For what reasons? These questions followed our general approach described above.
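To make the counterbalanced assignment concrete, the short sketch below shows one way to randomly split a cohort so that half analyzes case A first and case B last, while the other half receives the cases in the opposite order. This is only an illustration, not the program's actual procedure; the student identifiers, cohort size, and random seed are hypothetical.

```python
# Illustrative sketch of counterbalanced pre-/post-test assignment (hypothetical data).
import random

students = ["student_01", "student_02", "student_03", "student_04",
            "student_05", "student_06", "student_07", "student_08"]

random.seed(2009)        # fixed seed only so the example output is reproducible
random.shuffle(students)
half = len(students) // 2

# One group analyzes case A at the start of the summer and case B at the end;
# the other group receives the two cases in the opposite order.
assignments = {s: ("A", "B") for s in students[:half]}
assignments.update({s: ("B", "A") for s in students[half:]})

for student, (pre, post) in sorted(assignments.items()):
    print(f"{student}: pre-test = case {pre}, post-test = case {post}")
```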

For each assessment, students were expected to take 30–60 minutes, working individually and without consulting any references. There was no limit on the length of their responses, which were independently scored by two evaluators using a common rubric that specified three performance levels for each of the four questions. The evaluators compared their scores and discussed any differences; after discussion and reconciliation, the scores differed by at most one point on each question. The scores were combined to obtain a cumulative score for each student.

In the summer of 2009, we had initial and final responses for 17 students; in the summer of 2010, we had initial and final responses for eight students. Because the numbers of students were small, we aggregated the 2009 and 2010 data by case. Because the data did not pass the Shapiro-Wilk normality test or a test of homoscedasticity, we used the Mann-Whitney U test for independent samples to analyze the differences between the initial and final responses. We found no significant differences between the initial and final scores for case A or for case B.
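As a rough sketch of this kind of analysis, assuming hypothetical rubric scores and the scipy library (the source does not say what software was used), the comparison for a single case might look like the following. Levene's test appears here as one common check of homoscedasticity, since the specific test used is not named above.

```python
# Illustrative sketch (not the authors' analysis code): checking normality and then
# comparing initial vs. final rubric scores for one case. The scores are made up.
from scipy import stats

initial_scores = [5, 6, 4, 7, 5, 6, 8, 5, 6, 7, 4, 6]   # hypothetical case A pre-test scores
final_scores   = [6, 5, 7, 6, 8, 5, 7, 6, 5, 7, 6, 8]   # hypothetical case A post-test scores

# Shapiro-Wilk test of normality for each sample
for label, scores in [("initial", initial_scores), ("final", final_scores)]:
    w, p = stats.shapiro(scores)
    print(f"{label}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Levene's test is one common check of homoscedasticity (equal variances)
stat, p_var = stats.levene(initial_scores, final_scores)
print(f"Levene: statistic = {stat:.3f}, p = {p_var:.3f}")

# Mann-Whitney U test for two independent samples (appropriate here because, under the
# counterbalanced design, the initial and final responses for a given case come from
# different students)
u, p_u = stats.mannwhitneyu(initial_scores, final_scores, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.3f}")
```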

We suspect that there was essentially no difference between the initial and final scores because the content of the ethics sessions was not formally reinforced outside the sessions through additional academic work. In addition, the ethics sessions might not have added significantly to the knowledge and skills of students who had previously taken computer ethics courses required in their undergraduate computer science programs. Moreover, at the end of the summer, the students probably put minimal effort into the post-test. Finally, our intended learning outcomes may have been too ambitious, and thus the assessment task may have been too difficult; as a consequence, students might have been unable to demonstrate what they had learned.

We believe that our assessment method can be applied broadly. As our experience suggests, however, even when the ethics sessions are taught with appropriate pedagogies, and when the assessments are aligned with the learning objectives, students might not demonstrate improved skills in analyzing ethics cases.

Additional resources:

  1. Loui, M. C., & Revelo, R. A. (2015). Cooperative learning and assessment of ethics sessions in a summer undergraduate research program. CURQ on the Web, 36(1), 4–10. http://www.cur.org/download.aspx?id=3176 (The assessment cases and scoring rubric are presented in this paper.)
Citation
Michael Loui. Ethics Sessions in a Summer Undergraduate Research Program. Online Ethics Center. https://onlineethics.org/cases/nae-exemplars-engineering-ethics-education/ethics-sessions-summer-undergraduate-research.