Deborah G. Johnson

In the initial description of the case, none of the behavior of the people involved seems grossly unethical. When patient records are moved from paper to electronic form, important changes occur in the accessibility of the data. However, that alone is not problematic as long as steps are taken to control access to the data and to protect the identity of the individuals.

As the case proceeds, protecting the identity of the patient/subject records comes into focus as an important issue. Before addressing this issue, however, there is another very subtle issue here. Dr. Edwards is soliciting data to help beta test Medusa. Presumably the data he is soliciting have been collected for research that meets the consent requirement for research with human subjects. However, the subjects agreed to participate in research and did not agree to have their data used to beta test a database management system. The lack of consent here becomes even more important as the case unfolds and we learn that the privacy of individual participants will be exposed to increased risk because the data are being used to test Medusa.

As the case continues, we have a straightforward situation of wrongly privileging convenience over risks to subjects. Amy has access to the complete patient files, and she should know (and should have been instructed about) the importance of confidentiality of medical/research records. Indeed, the system has an encryption function that allows users to remove the identity of patients. However, the encryption function is cumbersome and slow, and Amy doesn't use it when she demonstrates her progress on the database to members of the lab.

Amy's behavior violates the patients'/subjects' trust in researchers when they agree to participate in research. The central importance of trust to the research endeavor should be clear. If trust in researchers is violated, over time individuals will begin to refuse to participate in research. In fact, this case illustrates a number of factors in achieving trust. For one thing, it illustrates that trust is a function of multiple actors. Achieving trust involves more than a single researcher or team of researchers. All who handle research data must behave properly. The researchers who collected the data allowed them to move out of their hands and be stored in a database that they no longer controlled. To ensure the privacy of their subjects, they should have asked for assurance that the data would be treated confidentially and without revealing identity.

Technology also plays a role, which takes us back to the difference between paper and electronic records. The change from paper to electronic storage of records is what allows and facilitates movement of data from one place to the next. Thus, the technology calls upon us to create appropriate norms of behavior around it - norms that protect the trust of research participants.

This case raises a whole host of issues having to do with faculty responsibilities and graduate students supervising undergraduate students. I would like to bring to the fore one particular aspect of this case, one that is typical of many research environments: norms of practice are poorly understood, only vaguely articulated, rarely made explicit and not intentionally promulgated. This situation is ripe for misunderstanding.

Typically, faculty members have many expectations of graduate students, but they are not explicitly stated. Rather, faculty members assume that graduate students already know what will be expected of them, or they assume the students will pick it up from the environment.

Informal social practices are at the heart of all institutional activities. One enters a work environment and makes assumptions about all kinds of things that may or may not be explicitly stated; for example, that you will be paid on a regular basis; that you will be fairly evaluated; that you will be allowed sick time. In business environments, these general presumptions facilitate employer-employee relationships. However, when the norms of practice are not well understood or when individuals or groups of individuals who are working together have different assumptions about what they are supposed to do, how they will be evaluated and what they will get in return, the situation is ripe for conflict, disappointment and exploitation.

A variety of factors explains the poor articulation of norms and conventions in academic research environments. For one, newcomers to the environment -- graduate students -- have little opportunity to learn these norms before they actually enter the environment. Even if graduate students have worked in labs as undergraduates and have some sense of what will be required, they are likely to have had only limited experience. While newcomers may be unfamiliar with the norms of behavior in research, faculty may not recognize the importance of developing and articulating the norms for graduate students. Faculty may give little thought to this necessity and simply presume the same environment they had in graduate school. Moreover, because many aspects of the research environment are relatively new, the norms are still evolving and there is no old practice to fall back on. Consider that fifty years ago there were hardly any research environments of the kind that exist today at many universities, i.e., with high levels of funding, large numbers of projects and teams, complexity of research organization, and so on.

It is the poor articulation of norms of behavior that leads to the problems described in this case. Professor Hopkin asks Ryan to help advise Laura on her senior honors thesis, but what is entailed in this request and in Ryan's acceptance? As the case evolves, we see that Hopkin had certain ideas about what he expected Ryan to do and how long he expected Ryan to stay with it. Hopkin had criteria for Ryan's becoming a co-author on the publication of a senior thesis.

The case suggests how much better the outcome would have been if Hopkin had explicitly explained to Ryan what his responsibilities would be in his role as adviser to Laura. Hopkin might have explained roughly how much time Ryan should spend, what kind of advice he should and shouldn't provide and so on. Hopkin might have specified at the beginning that the thesis might produce potentially publishable results and in what circumstances Ryan would be included. If Hopkin had articulated the norms for a relationship in which a graduate student advises an undergraduate student, Ryan would have a much better chance of successfully managing his relationship with Laura.

The problem in this case arises, in large measure, because norms of practice are not well understood and are not made explicit. (Even if the norms are variable, an explicit discussion of the variability helps individuals manage their work.) It appears that Hopkin has not thought through the complexities of the relationship between Ryan and Laura adequately to anticipate some potentially problematic moments and give Ryan enough information to avoid problems.

The commentary to the case is right in pointing out that Ryan could have avoided this problem if he had spoken out sooner. In other words, even though Hopkin can be faulted for not giving Ryan proper instruction in how to manage the relationship with an undergraduate, Ryan can be faulted for not responding in a way that would resolve the situation quickly and with minimal damage. Even if Hopkin had told Ryan not to invest too much time in Laura's project and that he would not be co-author on Laura's thesis publication, at least Ryan would not have a bad feeling about the situation. He would have understood from the start that he would not be included; he would have understood that as the norm for the situation.

While the commentary suggests that Laura may be at fault for "taking more credit than she deserves," I am reluctant to draw this conclusion because Laura, as an undergraduate, is the least familiar with the norms of practice in research. She may simply not know that Ryan should be given credit. Again, Hopkin should have thought about and discussed this issue with her early on or at least when he saw that she was getting results that were worth publishing.


While it is probably not uncommon for computer laboratories to be used to access pornographic material, this case is complex. This commentary focuses primarily on the difficult question of what Jessica and Frank should do. It does not address the definition of pornography or the issue of freedom of expression.

Jessica and Frank are right to be concerned. There are good reasons why it is improper for anyone to access pornography in a public or multi-user computer lab. Not only is the use of the lab for pornography a violation of the lab's purpose, the user runs the risk of exposing others to the pornography when they have not chosen or consented to being exposed. Those who are unwittingly exposed to pornography will not consider the lab a comfortable place to work. Women are likely to be made to feel particularly uncomfortable. Hence, the lab and even the university are put at risk of a lawsuit and loss of research funding.

Jessica and Frank ought to be concerned. That much is clear. However, it is much more difficult to figure out what action, if any, they should take. Let's consider their options.

Jessica and Frank could do nothing. This response doesn't seem right. For one thing, it means that the problem persists and the risk to the lab and the university continues. Some ethicists might even argue that by doing nothing, Jessica and Frank would become complicit in the wrongdoing. They look the other way, and that allows the problem to continue.

Alternatively, Jessica and Frank could tell the lab director what they know. This course is probably their best option, for it is the lab director's responsibility to ensure that the computers in the lab are being properly used and that lab users find the lab a comfortable environment in which to work. Hence, telling the lab director about the incidents would help the director do the job of supervising the lab.

The problem with this alternative is that when Jessica and Frank tell the director what they know, they will have to convey their suspicions and evidence regarding Mark, as well as the initial experience of finding evidence of use of the computers for accessing pornography. In a perfect world, Jessica and Frank could count on the lab director to treat this information properly, not to jump to the conclusion that Mark is guilty, and not to take any rash action against Mark. However, since we don't live in a perfect world, Jessica and Frank are appropriately worried that telling the director what they know may have the effect of a false accusation. They worry that Mark may not be given a fair hearing.

A third alternative, aimed at protecting Mark from false accusations, is for Jessica and Frank to confront Mark before saying anything to the lab director. I don't think this option is a good idea; it seems somewhat shortsighted. Mark may deny the accusations, admit their truth or refuse to say anything. In any case, it is not clear what Jessica and Frank would accomplish. If Mark admits that he is the one who has been accessing pornography, has the problem been solved? There is no guarantee whatsoever that he will change his behavior. If, on the other hand, Mark denies the accusations, Jessica and Frank are no further ahead than before confronting Mark since they won't know whether he is telling the truth. Moreover, if Jessica and Frank confront Mark and get one of these responses, the director of the lab is kept in the dark about a problem in her/his domain of responsibility.

Yet a fourth alternative would be to tell the lab director about the pornography but not tell about Mark. This strategy would alert the director to the problem but would not point the finger at Mark. However, this course of action seems odd. If Jessica and Frank have some reason not to trust the lab director, then they should probably go to the lab director's supervisor. If, on the other hand, they trust the lab director, they should give her/him all the information they have, explain their reluctance about identifying Mark, and then trust that the lab director will do the right thing.

The second alternative is best. Doing nothing (looking the other way) does no good and lets the problem persist. Confronting Mark seems to be a version of "taking the law into your own hands." The outcome is unclear, and this option leaves the lab director out of the picture. Telling the director what they know acknowledges the director's authority and responsibility and gives the director the opportunity to do the right thing. The lab director has the responsibility (and hopefully some training and experience) to investigate the problem and deal with Mark.

Assignment of authorship for published research is an extremely intricate matter, as this case illustrates, and it is also a highly contentious matter. No doubt this contentiousness is correlated with the high stakes associated with authorship since published research plays such a pivotal role in the careers of scientists.

The commentary suggests, however, that authorship of published research is not all that is at issue in this case. The case points to broader issues in graduate education.

An outside observer viewing the case first from Alyssa's perspective and then from Swift's perspective might be most surprised by the differences in the expectations of each. They each have different expectations regarding authorship and credit, what is supposed to happen in the lab, the role of a professor in the training of students, and so on. The fact that the two have such different expectations illustrates a highly problematic condition of graduate education. The norms for professors and graduate students are poorly articulated, rarely explicitly promulgated, and therefore, poorly understood. The situation is ripe for misunderstanding. In the absence of clear norms, intentionally transmitted to students and modeled in practice, students and faculty develop a variety of diverse, ad hoc, variable expectations.

It is easy here to suppose that the student, Alyssa, was some sort of dunce and simply had not picked up on the prevailing norm for authorship - that lab work alone does not justify authorship, that one must make an intellectual contribution. Or perhaps she was just unable to contribute to the project intellectually. Such a response is much too easy. For one thing, there are hints that Swift uses the norm inconsistently. Why has he included other students from the lab? Did these students contribute intellectually, or did they earn authorship simply by being members of the lab? Further, the investigative committee concludes that the decision is at Swift's discretion: He could include Alyssa as co-author if he chose. The norm is not definitive; sometimes lab work is sufficient to justify authorship, and sometimes not.

While we can understand that attributions of authorship are complex and intricate such that they must, to some extent, be left to the discretion of the faculty member, that does not mean that faculty can assign authorship arbitrarily or at whim. The discretion allowed faculty members correlates with obligations, and faculty members are accountable for how they use this discretion. They are obligated to tell students what to expect and to make decisions as fairly and consistently as possible.

Since attribution of authorship is an intricate matter and often a matter of faculty discretion, the potential for mistreatment of students and abuse of power is great. That makes it extremely important for faculty members to provide students with guidelines.

Norms with regard to attribution of authorship are illustrative of a broader problem in graduate education. In general, norms are not well articulated or explicitly communicated. This problem leads to a wide variety of expectations among faculty and graduate students, so much so that it is not uncommon for graduate students to experience shock and disappointment in the first years of their graduate training.

This case illustrates an extremely complex and difficult issue for researchers involved with the development of new technologies. At the heart of the case is uncertainty and the role uncertainty plays both in technological development and in ethics. Uncertainty makes for difficult decision making.

In one of the first textbooks on engineering ethics, Martin and Schinzinger (1) suggested that engineering should be understood as social experimentation. They argued that engineering should be seen on the model of medical experimentation since engineering always involves some degree of risk and uncertainty. Even if engineers are building something that has been built before, the new undertaking will involve differences that may affect the outcome: a different environment, different materials, a different scale and so on. Martin and Schinzinger seemed to believe that the risk and uncertainty of engineering undertakings had not been sufficiently recognized. Consequently, those who are put at risk by an engineering endeavor are rarely involved in the decision making or given an opportunity to consent or withhold consent.

In this case, engineering and medical experimentation are fused. There is no distinction. Nevertheless, the fact that the engineering endeavor is framed as medical experimentation does not seem to make the ethical issue any clearer or easier. The powerful role played by uncertainty is quickly brought into focus when we compare this case to a hypothetical situation in which researchers use standard imaging modalities to test some other aspect of the machinery. Suppose, for example, that researchers are testing a new, ergonomic design for a machine that deploys standard imaging modalities. The researchers discover an anomaly in the breast of a research participant. I believe the researchers would not hesitate to inform the patient and her doctor; they would be confident with regard to the significance of the finding.

The researchers hesitate in this case because they are uncertain of the meaning of their finding and they do not want to cause unnecessary stress to the participant. This response is understandable given that the engineers are so unsure about the validity of the imaging modalities.

The situation is actually not so uncommon in engineering. Often engineers and scientists have evidence, but the evidence is limited and doesn't give them the certainty they need to make a decision. This parallels the situation in which Roger Boisjoly found himself with regard to the launching of the Challenger. (2) Boisjoly had some evidence that the O-rings behaved differently in extremely cold temperatures, but he had not had time to do further testing to establish how the O-rings would function. He had evidence, but he was unsure of the meaning or strength of the evidence. Was it strong enough to justify stopping the launch of the Challenger? Was it weak enough to be ignored? It just wasn't clear.

The parallel with this case should be obvious. Is the evidence strong enough to contact the participant or her physician? Weak enough to be ignored? It just isn't clear.

In situations of this kind, many factors come into play: the severity of the risk involved, the timeframe before outcome, details of the domain (spaceships, breast cancer, etc.), the possibility of gathering further evidence, and so on. In the case at hand, the severity of the risk of saying or doing nothing is high in the sense that a woman's life is at stake.

The engineers are reluctant to inform the woman for fear of causing her unnecessary stress. While this attitude is understandable, it also hints at paternalism. Their hesitation presumes that the woman is not capable of understanding the uncertainty of the data and the risks at stake. Thus, I believe the researchers did the right thing by telling the woman and her physician about their discovery, and I am inclined to think they should have done so earlier. Nevertheless, I admit this case is difficult because of the uncertainty of the data.

  • (1) Martin, Mike W., and Roland Schinzinger. Ethics in Engineering. New York: McGraw-Hill, 1983, 1989.
  • (2) Boisjoly, Roger. "The Challenger Disaster: Moral Responsibility and the Working Engineer." In D. G. Johnson, ed., Ethical Issues in Engineering. Englewood Cliffs, N.J.: Prentice Hall, 1991.

This case presents three distinct situations, all having to do with research on and administration of the disposal of toxic wastes. The situations are separable in the sense that any one could arise (and would be difficult to resolve) independent of any other. Comparable situations arise in dealing with other kinds of research and decision making, but in this case the dilemmas are made all the more difficult because of the risks posed by toxic wastes and because of the degree of uncertainty about risks of this kind. I will analyze two of the situations by describing the core issue and then identify reasonable interpretations of the responsibilities of Alice, the central decision maker in this case.

The first situation presents Alice's dilemma in reviewing technical reports and recommending action based on this review. The task involves making judgments about the promise of lines of research or the potential reliability of new methodologies. This is a daunting challenge in itself because it inherently involves uncertainty; that is, judgments have to be made before the relevant evidence and experience exist (or exist to a sufficient degree). The task is all the more difficult in this case because it involves determining the risks of toxic waste sites, where errors in judgment can have catastrophic effects. So, Alice has a tough job.

Dr. von Wegner has come up with a new methodology, and he presents it to Alice as an extremely viable methodology. Alice later finds, however, that the validity of the methodology is contested. The question is raised as to whether von Wegner had a responsibility to tell Alice about the controversy over his methodology.

Two major issues are apparent here. One is the question of how Alice, or anyone in a situation like hers, should proceed when faced with controversy as well as uncertainty about new research and research techniques. The second question has to do with practices in science when it comes to research evaluation: How should researchers' responsibilities in reporting on their research be defined? Should they be expected to present their research in the best possible light? Or should they be expected to disclose any controversy or uncertainty surrounding their research?

I will make some suggestions on how to think about these questions. On the first question, it seems clear that Alice ultimately will have to use her judgment. Because of the uncertainty involved, a clear right answer is not going to emerge. Her best strategy is to gather as much information as she can, from von Wegner as well as his critics. In the end, she will just have to make a tough call. Perhaps what is most important here is that she be able to explain to others why she has made the judgment she makes. Good reasons will help her defend her judgment whether she recommends funding for further development of the methodology and it turns out to be unusable, or she declines to fund the research and it later turns out, developed with support from other funding agencies, to be the best methodology available.

On the second question, it seems most important that a policy be adopted so that all parties know what to expect and can determine when standards have been violated. My hunch is that it is better (for science and the public) to set a standard of scientists representing their research in the best possible light and not expecting them to disclose controversy. Scientists are less likely than others to do a good job of explaining controversy over their work.

Moreover, it seems better to have the expectation that someone in Alice's position should seek the advice of others before embracing a methodology presented by its originator. This issue is precisely why journals and funding agencies use a system of peer review when making decisions about what to publish or fund. To be sure, the process may eliminate some research that would have been beneficial, but in the long run more good will come.

I am not going to comment on the second situation described in this case. The occurrence of a tremor and publication of new estimations of the risks of a potential site exacerbate the difficulty of the situation. They make Alice's decision more difficult, but they do not add a new element.

The third part of the case seems to pose a complex conflict of interest. Alice has two roles. Her job as an employee of the federal government requires her to review research and methodologies for toxic waste disposal. Implicitly this responsibility entails making judgments that are in the best interest of the United States. As a graduate student enrolled in a university, Alice has another role. In her role as a student, she is expected to seek the best education she can get, to seek the good opinion of her teachers and to seek good grades. While the federal government may benefit from Alice getting a graduate education, when she goes to school she acts for herself. Her role as a student and her role as an employee have distinct responsibilities and values.

Initially the two roles do not conflict, and Alice can continue to keep them separate. However, if Alice decides to work with Dr. Sharpo on her dissertation, then Alice should not evaluate Sharpo's proposal for funding from her agency. Similarly, if she decides to evaluate Sharpo's proposal for funding, then she should not choose him for a dissertation adviser. Either way, she will appear to be acting/judging with a conflict of interest. If Sharpo is Alice's dissertation adviser, then he is expected to make judgments about Alice on the basis of her work as a graduate student. However, if he knows that she has recommended his grant, he may feel grateful to her and he may be reluctant to evaluate her harshly for fear of jeopardizing future funding. Similarly, if Alice chooses Sharpo as her adviser, her judgment about funding of his proposal may appear to be tainted. She is supposed to make such judgments with an eye to the best interests of her agency (and ultimately the United States), but in her role as student, she will want to please Sharpo. Indeed, as his graduate student, she could even be offered an assistantship under the grant.

I admit this conclusion is disturbing because Alice's desire to work with Sharpo and her desire to fund his proposal may arise from the same fair appraisal of the potential of his research. Because she thinks the research is so promising, she wants to work with him and fund his research. The problem is that she has two roles and there is no way to be sure that the circumstances of one role will not inappropriately influence her judgment in the other role.

Alice should either remove herself from the decision about Sharpo's proposal or she should stop working with him at the university.

The sequence of events in this case illustrates how a seemingly altruistic action (furthering the goals of science) can lead, through subsequent events, to an awkward situation in which professors, post-docs and graduate students seem to fail to fulfill their responsibilities.

Part 1

The events described in Part 1 of the case suggest a situation that is far from ideal, although not unethical per se. The case description hints at something wrong because of Doug's feeling that he is not qualified for the assignment he has been given and because of Professor Cook's apparent lack of understanding of Doug's qualifications and/or his lack of involvement in the project. Without knowing many more details, it is difficult to determine whether Doug is overly concerned about his competence for the project or whether Professor Cook is being negligent.

Since Doug has doubts about his own competence and since he is unsure how to proceed, the ideal thing for him to do would be to tell Cook of his concerns. Of course, it is possible that Cook will fail to respond in a helpful way, but without Doug's expression of concern, it is difficult to fault Cook. He has no way of knowing there is a problem. If Doug were to express his concerns, then the two of them might be able to come to a shared understanding of how Doug (and Cook) should proceed.

By the end of Part 1 no one has acted unethically, although it seems possible that Cook is being negligent and Doug, in not talking to Cook, is not managing his situation well.

Part 2

As the situation evolves at the beginning of Part 2, it is still difficult to pinpoint any unethical behavior on the part of either Doug or Maria. Doug has sought assistance from someone who has relevant knowledge, and Maria has given it. Initially, at little cost to herself or her project, she has helped Doug and promoted scientific work. If the case had stopped with Maria giving Doug tips on how to proceed, and if Doug had taken charge of his research, building on the tips but becoming independent, there would be no ethical issue. The case would simply illustrate an altruistic act by Maria, one illustrating cross-fertilization of ideas and the value of cooperation in science.

However, that is not how the case proceeds. Instead, Maria continues to help; Doug continues to rely on Maria's help; and Maria is being distracted from her research for Professor Black. Moreover, Doug never informs Professor Cook of Maria's involvement in the research. While it is difficult to spell out in detail the responsibilities of professors, post-docs, and graduate students, especially in a way that anticipates every possible situation that might arise, this case confronts us with a situation where lines of responsibility are, or are about to be, crossed. The case points to more than one responsibility being ignored or neglected. Let me focus on each of the actors.

This case illustrates the subtle but important responsibility issues in research relationships. The parameters of these relationships are rarely made explicit so they lurk beneath the surface. Often the responsibilities of professors, post-docs and graduate students become visible only when the grossest violations occur.


This case raises several of the toughest issues in research ethics. Part 1 focuses on issues that arise in the design of the research project, and Part 2 focuses on issues that arise in the implementation of the study. The questions at the end of Part 1 point to the general ethical dilemma posed by all research using placebos, the role of compensation in research (What is an appropriate amount of compensation?) and the question of whether children or their parents/guardians should receive compensation for the child's participation. The first question raised at the end of Part 2 has to do with the appropriate response of the individual who is serving both as a researcher and a clinical doctor to a child who is a potential subject for the study. The last questions point to difficulties in determining the age at which children can consent for themselves and the subtleties of representative consent, in this case consent for the child by someone other than a biological parent. I don't think I can "answer" these questions; I hope only to articulate what is at issue and why the issues are morally problematic.

Several of the ethical dilemmas posed in this case are not specific to research on children; they arise, as well, in research involving fully competent adults. In this case, the ethical issues are made even more complex and difficult because of the involvement of a child. Moreover, in most research involving children, the child is represented by a parent, but in this case there is further complexity because the child is represented by a guardian, a foster parent.

Let me first consider the issues that would be raised even if no children or representatives were involved. When, if ever, placebo studies are justified is one such issue. Drug tests are morally problematic because they put participants at risk. Since the effects of the drug are not known, there is risk of adverse effects. Indeed, independent of the use of placebos, all researchers have to ask whether the knowledge they will gain is worth the risk to which they expose their research subjects. It is for this reason that poorly designed research is unethical; it puts individuals at risk with little likelihood that good will come from it.

In this context, research involving placebos might be seen as morally preferable to other research because half the subjects will not, in fact, be put at risk. The problem is that in most placebo studies, the drug or therapy being tested is thought to have a positive effect, and yet those receiving the placebo have no chance of receiving this benefit. They are, in effect, being used to prove a point. Of course, it might be argued that this practice is morally neutral since those who receive the placebo are neither being benefited nor put at risk. But this argument is problematic too. The participant's life expectancy might be increased by receiving the experimental drug, or, worse, the subject may be denied a known but moderately effective drug in order to prove the greater effectiveness of the new drug. In cases where a known or moderately effective drug is going to be denied to a participant, or where there is some evidence of a positive effect from the drug to be tested, it would seem that placebo studies have a heavier burden of justification. The value of the knowledge to be obtained must be great enough to overcome the potential harm to the subjects who will not receive positive treatment.

Another issue that arises independent of the involvement of children surrounds the doctor's quick dismissal of Mary's concerns about the risks of participation in the study. That is, even if Mary weren't representing Liz but were herself considering participation in research, the doctor's response to her concerns about the risks would be disturbing. Generally it is recognized that consent to participate in an experiment is valid only when the person is informed and not coerced. To be informed means, among other things, to understand the risks involved; not to be coerced means to freely choose to participate. The latter entails at a minimum that the person has not been threatened with negative consequences for refusal to participate. In this case, the doctor did not threaten Mary, nor did he misinform her. Rather, there is a subtle problem here because Dr. Kid is both the doctor and part of the research team. In relation to Mary, Dr. Kid is an expert, and Mary has put Liz in his hands. In order to ensure that Liz will get good treatment, Mary will want to maintain a good relationship with Dr. Kid. One can't help but wonder if this situation doesn't pressure Mary to agree to Liz's participation in the study. And, while there is no doubt that Dr. Kid is more knowledgeable than Mary, it is not clear that he has the expertise to determine whether participation is a reasonable risk for Liz. Indeed, the fact that he is committed to finding subjects for the study seems to disqualify him from deciding whether Liz should participate. He has a bias in favor of participation. So, while the doctor should discuss the consent form with Mary, he should take care not to let his interest in the study sway her.

Also related to the matter of a valid consent is the question of the appropriate level, if any, of compensation for participation in the study. As already mentioned, one of the criteria for valid consent is that the consent not be coerced. There should be no threat of negative consequences for refusal to participate. This case raises the more subtle issue of whether the promise of compensation might also undermine a valid consent. In other words, the ideal is that individuals freely consent. We can imagine types of compensation that exploit the vulnerabilities of individuals or groups of individuals. If we allow researchers to pay subjects large amounts of money for participation, we are likely to find that poor people will readily participate. But the larger the compensation, the more it will seem that poor people are being exploited. High levels of compensation for participation take advantage of the subjects' circumstances and entice them into doing something they would prefer not to do if their circumstances were better.

So much for the issues that are independent of the involvement of a child. As noted earlier, the issues in this case are compounded by the fact that the potential subject of the research is a child - a child represented by a foster mother, not a biological mother. Children are considered a special class of research subjects because they are thought to be incapable, themselves, of giving a valid consent. They do not have the capacities and experience essential for giving a valid consent. On the other hand, children's bodies differ significantly from adult bodies. So, if research is not done on children, knowledge of how to treat or prevent their illnesses may never be acquired. The point is nicely illustrated in this case insofar as it focuses on the study of a drug, Eradovir, which is known to be effective in adults, but has not been tested for treatment of pediatric AIDS. The only way to find out how Eradovir affects pediatric AIDS is to do a study.

If studies are to be done involving children and if children are not capable of giving a valid consent, then the next best thing would seem to be to have parents consent on behalf of their children. Parents, it appears, are more likely than anyone else to understand the best interests of their children. The questions at the end of Part 2 raise two issues about representative consent. The first has to do with whether a foster mother can adequately represent the interests of a child, and the second question has to do with whether the child can or should be involved in the decision to participate.

Both issues are extremely important, but both seem to be difficult to deal with in general terms. From a public policy point of view, it seems reasonable: 1) to allow some research on children to be done; 2) not to allow children to consent for themselves, unless they have reached a certain age or demonstrated the ability to understand the risks involved; and 3) to assign a representative to represent the best interests of the child when parents cannot do so. I admit that the age at which a child has the ability to represent him- or herself varies from child to child. The law draws a somewhat arbitrary line about the age at which children are old enough to make decisions for themselves, but a line has to be drawn for the protection of children. In any case, it is a good thing for the child to be involved in the decision making about participation, both because it is likely to make participation go more smoothly and because it will help the child develop into an adult.

The question whether the foster mother can adequately represent the best interests of the child can be answered in a similar way. It would be unrealistic to claim that foster parents will always act in the best interests of their children, but it is important to remember that it would also be unrealistic to claim that biological parents will always act in the best interests of their children. In reality, there is a good deal of variation among foster parents as well as biological parents. Indeed, it is difficult to say what a parent or a foster parent ought to do in this case.

Commentary On

This is a very interesting case. At first, it appears to be about a conflict of obligations, but as one works through it, the conflict disappears and attention focuses on fundamental questions about ownership and credit for ideas and the obligations of reviewers. As I wrestled with Dr. Ethicos' obligation to her graduate student, Sarah Tonin, it became clear to me that the weight of Dr. Ethicos' decision (to give Sarah information about possible interaction between survivin and GFX) rests almost entirely on her responsibilities as a reviewer. Her obligation to her graduate student cannot entail doing something immoral to assist her. In other words, if it were clear that Dr. Ethicos has an obligation not to reveal anything she learns from reviewing a paper, then it would follow that she should not reveal anything to Sarah - whether she is distressed or not.

The problem is that the responsibilities of a reviewer (in this case Dr. Ethicos) with regard to what she learns when reviewing a paper are just not as clear as they should be. The responsibilities of reviewers and the rights of authors have been poorly articulated by the scientific community and continue to be open to a variety of interpretations. While I could only speculate about why the scientific community does not clarify the rights of authors and the responsibilities of reviewers, ideas about these rights and responsibilities seem to vacillate between trying to achieve a fair system of credit and a system of intellectual property rights. It is helpful to think through Dr. Ethicos' situation in terms of credit and property.

In the intellectual property system that prevails in the United States, it is quite clear that no one can own ideas. In the patent system, individuals can invent devices or processes that make use of abstract ideas, laws of nature and mathematical algorithms, but they cannot own the ideas, laws of nature or algorithms. Similarly, in the copyright system, authors can own the expression of ideas but not the ideas themselves. If we think through the review process in these terms, it seems that the idea that survivin and GFX interact to extend the survival-promoting effects of survivin is not patentable or copyrightable. It is simply an idea, and no one can own an idea. In this framework, there would be nothing wrong with Dr. Ethicos giving this idea to her student.

What has traditionally prevailed in science is a credit system, which is much more informal and less clear than intellectual property. Here the idea seems to be that individuals should be given credit for the work that they do and for being first to come up with an idea. Credit systems generally do not give authors control of ideas or information, although an author may also have a copyright on text describing the work done and/or ideas expressed in a particular way. In a credit system, the important thing is that an author (the right person) be given credit. Hence, in this case it would seem that Dr. Ethicos could also tell her student about the idea. She should credit it to Dr. Spacely, and when and if Sarah Tonin's research is published, Sarah should cite Dr. Spacely's paper - either as an unpublished manuscript or as a now published article.

Some might argue that this approach is not fair to Dr. Spacely, because Dr. Spacely submitted her/his manuscript to the journal in confidence. It seems, however, that there are and should be limits on the expectations of confidentiality. That is, it is reasonable for an author to expect that reviewers will not go back to their labs and duplicate the research they read about and try to beat the author to publication of an idea. On the other hand, it seems unreasonable to expect that reviewers will not absorb ideas. It is unreasonable to suppose that reviewers will not learn things from reviewing articles, things that will help them in their research. One of the reasons scientists agree to review articles is that they learn from doing reviews; reviewing helps scientists keep up in their area of specialty. I admit that there may be some gray area here, but it seems important to acknowledge that it is appropriate for reviewers to use some ideas they discover when reviewing unpublished manuscripts. Science progresses collectively, and any system that interferes with building on one another's ideas would be counterproductive. With these thoughts in mind, I now turn to the discussion questions at the end of the case.

Should Dr. Ethicos have refused to review this paper? I don't see any reason why Dr. Ethicos should have refused to review the paper. She was probably selected as a reviewer because the paper is in the area of her expertise. If reviewers were to refuse to review papers in their areas of expertise, then papers could only be reviewed by nonexperts.

Should Dr. Ethicos suggest that Sarah try adding GFX? Yes. I don't see any reason not to mention this possibility to Sarah. The problem from the point of science, of course, is that the interaction has not been established. Sarah will probably have to do some work to establish the connection, and this work might overlap with Dr. Spacely's research. However, Sarah is primarily interested in using the interaction for another purpose. Whatever she does with the idea, she should cite Dr. Spacely.

How long would it be necessary to wait before mentioning this experiment? Given what I have already said, I don't think time is important here.

Would your answers to Questions 2 and 3 be different if Sarah came to Dr. Ethicos frustrated, dejected and ready to give up the project? I don't think Sarah's level of distress is relevant here. Either it's okay for Dr. Ethicos to tell her, or it's not okay to tell her. If it's okay for Dr. Ethicos to tell Sarah, then she should tell her before she becomes distressed.

If you were Dr. Ethicos, would your course of action be any different if another professor independently mentioned to you that he or she had heard a rumor that there might be an interaction between the two proteins? According to my analysis, this variation does not make a difference. However, the fact that it is possible to hear the idea as a rumor further illustrates how ideas (not texts or inventions) move about in science and how they cannot and should not be owned.

Commentary On

This seems a fairly straightforward case in which the professor, Dr. Edgar, is doing a terrible job of advising his student, Janet, and she has become the victim of his poor advice and his dishonesty. In analyzing this case, it may be helpful to consider what Dr. Edgar might say in his own defense and at the same time try to disentangle the ethical issues from the management issues.

What has Dr. Edgar done wrong? The case describes what seem to be a series of failures to fulfill his responsibilities to his student. He fails to make arranged meetings with her. He fails to give her timely feedback. Perhaps most important, he fails to give her the benefit of his knowledge when he sees a flaw in her design. Then, to save himself from the embarrassment of having approved a flawed design, he lies to his colleagues, telling them that he had told her about the flaw. Janet appropriately feels that she has not gotten the advice she needs and consequently, she has been put in a situation where she fails, i.e., the committee does not accept her proposal.

Dr. Edgar might defend himself by explaining that he is overloaded with work; he is trying to do a good job, but, he might admit, he is having a hard time managing all of his responsibilities. He might also argue that nothing has been lost since Janet can fix her proposal and resubmit it.

That is not, by any means, an adequate defense, but it is how Dr. Edgar might characterize the situation to minimize its meaning. The defense points to the entanglement of ethics and management. Because Dr. Edgar is doing such a bad job managing his responsibilities, his behavior crosses the line between poor management and unprofessional and unethical conduct. It is hard to say precisely when Dr. Edgar crosses the line; however, it seems clear that he is over the line when he fails to tell Janet about the flaw in the design. He compounds that wrong by lying to the other members of the committee and refusing to take responsibility for his own behavior.

It should be noted here that trust is at the very foundation of the student-professor relationship. Education cannot happen unless students trust their professors and professors trust their students. Students must trust that professors give them accurate knowledge and that they design courses and give assignments that will lead to knowledge and skills the student will need. Students also must trust that the process by which they are evaluated will be fair rather than arbitrary. In parallel, professors must trust that students will turn in work that they have done (rather than someone else's work), that students will take their advice and respond to their suggestions and criticisms. Professor Edgar's behavior is reprehensible because it undermines that trust.

Part 2 of the case focuses on what Janet should do. Again the ethical and the management issues seem to be entangled. Several of the decisions that Janet must make are simply a matter of how best to manage her way through graduate school; others have to do with her responsibility to Dr. Edgar and future graduate students.

I do not think that Janet has a moral responsibility to Dr. Edgar to keep quiet about what happened, but I think her best interests lay in handling the problem delicately and in a way that doesn't undermine her reputation in the department.

I am not convinced that Janet is obligated to do something to protect future graduate students, although I think it is good for her to do something. I hesitate to recommend that she do anything because she can't be sure that Dr. Edgar has behaved similarly with other students. Even though a student has told her that something similar happened to him, the report is informal, and she has no way of knowing how bad the problem is. Going to Dr. Smith is a good thing for Janet to do because Dr. Smith is in a better position to investigate the problem and determine its severity. Not only is he in the best position to take action, it is his responsibility to do so. Of course, it is important to remember that Dr. Smith will have to give Dr. Edgar a chance to explain his side of the story. He cannot simply act on an accusation.

Ideally, Dr. Smith will keep his conversation with Janet confidential and will give her advice as to whether to continue to work with Dr. Edgar or to find a new adviser.

Commentary On

Professor Norman's behavior is bad science and bad mentoring for a number of reasons. I will separate the steps in his behavior and try to explain why his actions are wrong.

The first act we can distinguish is the act of sending off a paper based on a student's work without informing the student or obtaining her agreement. Even if Sherry had completed the experiments, it would be a breach of trust for Norman to submit her paper without telling her what he is about to do. Just as he should not let any papers go out under his own name unless he has reviewed them, the student, Sherry, is entitled to review her papers. Sherry is denied the opportunity to act responsibly with respect to her own work. Norman is at fault both for denying her this opportunity and for failing as a mentor to instill in Sherry a sense of responsibility for her work.

Norman's second improper act is to fabricate data. Fabrication of data is perhaps the most serious breach of research ethics. The foundations of science rest on the accuracy of research results and reporting. Science is a collective activity in which scientists build on one another's work; the whole enterprise depends heavily on the reliability and trustworthiness of each scientist's work. Imagine what would happen to science if scientists could never be certain of the truth of results reported in scientific journals!

Still, let us give Dr. Norman the benefit of the doubt. We might suppose that Norman is so knowledgeable in his field that he could accurately predict the results of Sherry's experiments before they are completed. Even if that were true, it still seems that Norman is taking a shortcut and that he does not recognize the whole point of doing science. The point is to verify predictions and thereby move them from hypotheses to evidence. For Norman to bypass this step in doing science is not only to do bad science, it is, in an important sense, to cease to do science at all.

By his initial actions (submitting the paper without telling the student and fabricating data), Norman traps the student in a no-win situation. Whatever Sherry does after she discovers what he has done, she jeopardizes her career. If she makes waves about what Norman has done, she might be considered a whistle-blower or trouble-maker. Even if she isn't perceived as a trouble-maker, she jeopardizes her relationship with Norman, who has a good deal of power over her career. On the other hand, if she does nothing, she runs the risk of the published article being a false representation of her research; that is, she runs the risk of becoming a co-conspirator in fabrication.

Given that Norman has trapped her in a no-win situation, I sympathize with Sherry's decision to wait until the results of her experiments are in. Of course, the risk remains that if the results do not confirm what the professor fabricated, she will be in deeper trouble. She will have knowingly let his fabrication go, and she will have to take more disruptive action to correct his wrong.

Once Sherry completes her research and finds that her results conform to Norman's fabrication, some of the pressure is relieved. At least her published results will not be false. Still, the process has been bad and it was just a matter of luck that she isn't going to publish false results. Sherry ought to do something. As the case goes, she discovers that Norman has done the same sort of thing with other students. However, that seems irrelevant since the one incident is enough to justify action.

What should Sherry do? It might be a good idea for Sherry to wait until she has defended and moved to her post-doc. Then she should contact an appropriate person back at the institution where she worked with Norman. He will still be in a position to damage her career, but she can attempt to have her concerns addressed while remaining anonymous.

I should add here that one option that Sherry had throughout the case was to go to someone with authority, report her concerns, get advice, and try to remain anonymous. I was reluctant to propose this solution because it is often difficult to remain anonymous, and often there is no clearly appropriate person to report to. Nevertheless, it is generally a good idea to keep someone informed as to your actions, even if you ask them not to act.

Commentary On

At the outset, it is important to point out that Thomas should not leap to a conclusion about Dr. Woodward. Admittedly, the situation looks serious from the students' reports, but there may be another side to the story. In a sense, an accusation has been made, but all the evidence is not yet in. Thomas must be fair to Woodward as well as to the students. It is possible, for example, that Woodward gave the students placebos, or that the students have underplayed their role in the use of the drug. In either case, Woodward's behavior would still be irresponsible, but Thomas should proceed carefully in order to be fair to all involved.

Thomas should take action; the accusations are too serious to ignore. It is difficult to specify a particular action because Thomas's best course depends on details that are not specified in the case. Are there designated individuals to handle grievances? Are there senior faculty who can be trusted to keep conversations confidential? Thomas could go to a department chair or director of graduate studies and request that his report be kept confidential and that he not be compelled to name the students. Keeping Thomas or the students anonymous, however, is a tricky business. For one thing, our legal system recognizes the value of knowing your accusers; this right diminishes the likelihood of false accusations. Also, if Woodward has done what the students accuse him of doing, then if anyone inquires about his activities, he is likely to infer which students have made the accusations.

It is irrelevant whether Thomas believes the beta-blockers to be harmless or not. If Woodward is not licensed to administer such drugs to humans, then what he is doing is wrong whether the drugs are harmful or harmless. Of course, Thomas may not know whether Woodward is licensed. By reporting his concerns to an appropriate person, he would leave it to the other person to find out whether Woodward is licensed.

Until all the information has been gathered, it is difficult to determine whether the students bear any responsibility. However, if their reports are accurate, it would seem that they bear some responsibility: They were not forced to take the beta-blockers, and they should have known that it was inappropriate for them to do so. We don't have enough information to indicate the extent of their responsibility. In any case, their contribution to the activity may not diminish Woodward's responsibility for his behavior.

If the students' accusations are true, then Woodward's conduct reflects quite directly on his ability to do good science. That is, his behavior indicates a degree of recklessness and a disregard for legal constraints. The accusations also suggest the possibility that he has manipulated his students into serving as subjects for research without their even knowing that they are doing so. Moreover, the accusations suggest that he may be encouraging students to become dependent on drugs to succeed rather than encouraging them to make it on their own abilities.

Insofar as Woodward's behavior may reflect his attitude toward the rules of science and his mentoring of students, it is relevant to his tenure decision. The problem with regard to his tenure decision is, however, that at this point what we have are accusations; accusations, without investigation and hearing, can do a great deal of damage. So while Thomas should bring the accusations forward, he should do so in a manner that gives Woodward a fair opportunity to defend himself.

Thomas should report the situation to an appropriate person such as a department chair or director of graduate programs. He then must hope that the situation will be handled appropriately.

Commentary On

This case raises two important, interrelated issues. Both have to do with obtaining informed consent from those who participate in scientific studies. The first issue has to do with whether individual consent is sufficient for valid consent when the individuals are members of a larger unit with an authority structure; and the second has to do with the use of incentives, pressure and deceit to persuade individuals to give their consent.

The purpose of the informed consent requirement is to ensure that individuals are not used in research without their knowledge and agreement. The requirement ensures that individuals are respected and their autonomy is recognized. To bypass informed consent is to treat individuals merely as a means to some end, be it knowledge, the researcher's career success, or a social good such as a cure for a disease.

In addition to the informed consent issues, Tiptree is pressuring Kroeber to do things that Kroeber believes may harm the delicate relationship she has developed with a tribe, a relationship Kroeber needs to maintain so that she can continue with her own research. I will not address this aspect of the case except to say that while Tiptree's strategy is not blatantly immoral (Kroeber is free to refuse to help) it is one that probably will not serve him well in the long run. Why should Kroeber help Tiptree in the future when he shows such disregard for Kroeber's own research?

The first apparent breach of research ethics arises when Tiptree circumvents the council and approaches the families directly. Interestingly, from the perspective of traditional ethical theory, it is not at all clear that this behavior violates any moral principle since traditional ethical theory does not come to grips with an authority such as a tribal council. As long as Tiptree obtains the informed consent of the individuals from whom he obtains blood, I don't believe he is doing anything immoral. In going directly to individuals, however, he is disrespecting the authority of the tribal council. His actions will damage both his future relationship with the tribe and Kroeber's relationship with the tribe. The wrong to Kroeber is the worse of the two, since Kroeber has cooperated with him. Tiptree's behavior will severely damage his relationship with Kroeber.

Of the three strategies that Tiptree proposes to use in obtaining consent from individual members of the tribe, only the first seems to be without problems. With this strategy, Tiptree will inform the individuals about the possible positive results of his research. He also has an obligation to inform them about any potential risks or negative consequences.

The second strategy -- offering the poorer members of the tribe "things" in exchange for the blood samples -- moves informed consent closer to exploitation. When consent is coerced, it is not freely given and, therefore, is not valid. Offering things in exchange for participation is not exactly coercion, but it moves the situation in that direction. Tiptree is taking advantage of the poverty of these members of the tribe. Would they consent if they weren't poor? I hesitate to say that the offer of "things" invalidates the consent because offering compensation is a common practice in medical experimentation. Still, compensation should be flagged as a practice that is better avoided when possible.

The third strategy crosses the line. It is an immoral strategy because it is manipulative and deceitful. If Tiptree obtains consent by telling members of the tribe that they owe it to Kroeber and suggesting to them that they won't receive help in the future if they don't cooperate with him, he is entirely misrepresenting the situation. This strategy invalidates any consent he may obtain.

Are Tiptree's actions justified, especially given that his research is ultimately successful in locating a leukemia-resistant gene? This question is simply a version of: Do the ends justify the means? There may be rare cases in which ends do justify means, but Tiptree is being arrogant and self-serving in presuming that he can do the calculation himself. His attitude is arrogant because it assumes that Tiptree knows better than the Yuchi what their best interests are. Whatever the calculation of means and ends, Tiptree should not make it since he stands to gain by the outcome.

This case is particularly interesting because of the question it raises about whether it is acceptable for a researcher to bypass the authority of a tribal council. I find it difficult to argue for a moral requirement to obtain consent from the tribal council, but it seems that it serves the long-term interest of science for researchers to recognize the systems of authority of the people with whom they want to work. In other words, even if seeking the consent of the tribal council is not morally required, it will benefit science in the long run because it shows respect for the tribe.

Commentary On

This case demonstrates very well how the vagueness and uncertainty of conventions on credit and ownership create subtle but complex problems in the practice of science. Part 2 illustrates the subtleties of the authority relationship between student and professor and how this relationship exacerbates issues of credit and ownership.

Questions about the behavior of Professor Black and Sean can be raised at each stage in the case description. For Professor Black, there seem to be two important questions: 1) Was he wrong to talk to Sean initially? That is, was it inappropriate for him to use Sean as a sounding board? and 2) Was it wrong for him to use Sean's ideas in the article he was co-writing with Dr. Hong?

In principle, neither type of behavior seems problematic. A professor talking through the interpretation of data with a student seems an ideal situation for student learning and training. The student sees how a professor thinks through a problem and gains practice by participating in the activity. Moreover, if the ideas that Sean had suggested to Professor Black had been published by someone else (ideas of which Professor Black had been unaware), then there would be no problem here. Sean simply would have assisted Professor Black by pointing him to ideas and literature already published. We would assume that Professor Black could then read up on the ideas and make use of them in writing the paper.

A problem arises because the case description indicates that Sean's interpretation has not been published, and that Sean plans to present it in his thesis and, presumably, eventually publish it. Publication will establish his role in the development of an important idea. The case contains a degree of ambiguity about the status of Sean's contribution. He has both given Professor Black articles that point in a certain direction and shown him a model he has developed, which, we gather, goes beyond the literature. However, it is unclear whether Professor Black is using one or both of these contributions. Simply using (and citing) the articles provided by Sean would not justify co-authorship, while using a model developed by Sean would seem to justify including him as a co-author. Nevertheless, it may be quite realistic to pose a case in which this issue is unclear, for it is often difficult to distinguish the original part of a new idea from what has been suggested in the literature but not yet pulled together into an articulated theory or model.

Sean's behavior does not appear to be morally questionable. On the contrary, he has been open with his ideas, willing to assist and to share what he knows. This type of behavior has traditionally been highly valued in science. The most important goal is generally thought to be furthering knowledge; giving and getting credit is a means to this end, not an end in itself. Given what happens in the case, we might say that strategically Sean should have held back some of his ideas, but that is to say what might have been better for Sean and his career; it is not to say that his behavior was immoral.

Another major question about Professor Black's behavior is whether he was wrong to leave the decision on co-authorship up to Sean. Here I think Professor Black is wrong. In leaving the matter up to Sean, he is saying, in effect, that he has no concerns about or responsibility for standards of authorship and credit in science. He is refusing to deal with these issues in his own work. That is a double wrong: It is a refusal to accept responsibility for his own behavior, and it is wrong because Dr. Black's behavior serves as a model for Sean. In effect, he is telling Sean that scientists can treat authorship and credit in a cavalier (almost reckless) manner.

Professor Black and Sean have at least three options in dealing with this situation. 1) The specific claim of Sean's thesis can be removed from the paper. The case does not give us enough detail to know whether that is possible without ruining the paper. Could it be written in a way that points in the direction of Sean's thesis but doesn't scoop it? In a way that draws on the literature, but not Sean's model? 2) Sean's work could be cited in the paper and described as a forthcoming and extremely promising thesis. 3) Sean could be made a co-author of the paper.

It is difficult to decide which of these three options should be chosen without more details. For example, would co-authorship of the paper hurt or help Sean in defending his thesis and publishing it in the future? To what extent are the ideas Professor Black has used already in the literature?

In any case, Sean should be asked for permission to use anything that might have implications for his thesis or future publications. Whether or not he should be made co-author depends on what he agrees to and what is actually published. Professor Black should take responsibility for the final decision. Moreover, if the situation continues to be gray, Professor Black ought to err on the side of giving credit and/or authorship.