Chapter 2: What is Ethics? (Section I: A Guide To Teaching the Ethical Dimensions of Science)

Description

The teaching of ethics is particularly suited to the use of illustrative case studies. Such narratives can be used to present examples of a range of significant ethical issues related to some human enterprise, along with many of the complexities associated with each issue. The cases can be either fictional or based on actual events. Chapter 2 discusses what ethics is and how it can be understood.


From: Ethics in the Science Classroom: An Instructional Guide for Secondary School Science Teachers

Defining Ethics and Morality

Ethics is concerned with what is right or wrong, good or bad, fair or unfair, responsible or irresponsible, obligatory or permissible, praiseworthy or blameworthy. It is associated with guilt, shame, indignation, resentment, empathy, compassion, and care. It is interested in character as well as conduct. It addresses matters of public policy as well as more personal matters. On the one hand, it draws strength from our social environment, established practices, law, religion, and individual conscience. On the other hand, it critically assesses each of these sources of strength. So, ethics is complex and often perplexing and controversial. It defies concise, clear definition. Yet, it is something with which all of us, including young children, have a working familiarity.

This makes ethics sound like morality. This is intentional on our part. Like most contemporary texts, ours will treat ethics and morality as roughly synonymous. This is in keeping with the etymology of the two words: moral derives from the Latin moralis, a term the ancient Roman philosopher Cicero coined to translate the ancient Greek ethikos into Latin.[16] Both mean, roughly, pertaining to character; but today their English derivatives deal with much more than character.

It is tempting to seek a general definition of ethics before discussing any particular ethical topic. Although we have said a little bit about what we take ethics to be, we have not offered such a definition; and we will not do so. Demanding a definition at the outset can stifle discussion as easily as it can stimulate it. We offer one of Plato's dialogues as a case in point.

In the Euthyphro we find Socrates and Euthyphro meeting each other on the way to court. Socrates is being tried for allegedly corrupting the youth by encouraging them to believe in "false gods" and for making the better argument appear the worse.[17] Euthyphro is setting out to prosecute his father for allegedly murdering one of his servants. Socrates expresses surprise that Euthyphro would prosecute his own father, and he asks him for an explanation. Euthyphro appeals to the justice[18] of doing this. Socrates then asks him to define justice. Euthyphro offers some examples of justice and injustice. Socrates rejects them all on the grounds that they are only examples, whereas what he wants Euthyphro to tell him is what all just acts have in common that makes them just. That is, what Socrates demands is a definition that captures the essence of justice in all of its instances. Unfortunately, Euthyphro attempts to satisfy Socrates' demand rather than challenge its reasonableness. All of his efforts fail miserably, and the dialogue ends with Euthyphro indicating he must leave to get on with his business. The implication is that Euthyphro is going off to prosecute his father without the least grasp of the value in whose name he is acting: justice.

As much as we might desire the sort of definition Socrates and Euthyphro were seeking, it seems an unreasonable demand. At best, this might come at the end of an inquiry rather than at its beginning. Morality, like science, should allow room for piecemeal exploration and discovery. It should not be necessary to provide a comprehensive definition of justice in order to be able to say with confidence that sometimes drawing lots is a just procedure, having the person who cuts the pie get the last piece is just, compensating people for the work they do is just, denying women the right to vote is unjust, punishing the innocent is unjust, and so on. Further reflection might reveal special features these examples all have in common, or at least special ways of grouping them. But having a solid starting point does not require having a well worked out definition of the concept under consideration.

The eighteenth-century philosopher Thomas Reid has some useful advice for those interested in developing a systematic understanding of morality. He compares a system of morals to "laws of motion in the natural world, which, though few and simple, serve to regulate an infinite variety of operations throughout the universe."[19] However, he contrasts a system of morals with a system of geometry:[20]

    A system of morals is not like a system of geometry, where the subsequent parts derive their evidence from the preceding, and one chain of reasoning is carried on from the beginning; so that, if the arrangement is changed, the chain is broken, and the evidence is lost. It resembles more a system of botany, or mineralogy, where the subsequent parts depend not for their evidence upon the preceding, and the arrangement is made to facilitate apprehension and memory, and not to give evidence.

Reid's view has important implications for how we should characterize moral development. On the botanical model, access to basic moral understanding need not be an all-or-nothing affair. Its range and complexity can be a matter of degree, and confusion in one area need not infect all others. Understanding how different basic moral considerations are related to one another can be a matter for discovery (and dispute) without our having to say that those whose picture is incomplete or somewhat confused have no understanding of basic moral concepts.


Ethics and Childhood

Children's introduction to ethics, or morality, comes rather early. They argue with siblings and playmates about what is fair or unfair. The praise and blame they receive from parents, teachers, and others encourages them to believe that they are capable of some degree of responsible behavior. They are both recipients and dispensers of resentment, indignation, and other morally reactive attitudes. There is also strong evidence that children, even as young as four, have an intuitive understanding of the difference between what is merely conventional (e.g., wearing certain clothes to school) and what is morally important (e.g., not throwing paint in another child's face).[21] So, despite their limited experience, children typically have a fair degree of moral sophistication by the time they enter school.

What comes next is a gradual enlargement and refinement of basic moral concepts, a process that, nevertheless, preserves many of the central features of those concepts. All of us can probably recall examples from our childhood of clear instances of fairness, unfairness, honesty, dishonesty, courage, and cowardice that have retained their grip on us as paradigms, or clear-cut illustrations, of basic moral ideas. As philosopher Gareth Matthews puts it:[22]

    A young child is able to latch onto the moral kind, bravery, or lying, by grasping central paradigms of that kind, paradigms that even the most mature and sophisticated moral agents still count as paradigmatic. Moral development is ... enlarging the stock of paradigms for each moral kind; developing better and better definitions of whatever it is these paradigms exemplify; appreciating better the relation between straightforward instances of the kind and close relatives; and learning to adjudicate competing claims from different moral kinds (classically the sometimes competing claims of justice and compassion, but many other conflicts are possible).

This makes it clear that, although a child's moral start may be early and impressive, there is much conflict and confusion to be sorted through. There is thus a continual need for moral reflection, a need that does not stop with adulthood, which merely adds new dimensions to the task.

Nevertheless, some may think that morality is more a matter of subjective feelings than of careful reflection. However, research by developmental psychologists such as Jean Piaget, Lawrence Kohlberg, Carol Gilligan, James Rest, and many others provides strong evidence that, important as feelings are, moral reasoning is a fundamental part of morality as well.[23] Piaget and Kohlberg, in particular, did pioneering work showing that there are significant parallels between the cognitive development of children and their moral development.[24] Many of the details of their accounts have been hotly disputed, but a salient feature that survives is that moral judgment involves more than just feelings. Moral judgments (e.g., "Smith acted wrongly in fabricating the lab data") are amenable to being either supported or criticized by good reasons ("By fabricating the data, Smith has misled other researchers and contributed to an atmosphere of distrust in the lab"; "A thorough examination of Smith's notebooks shows that no fabrication has taken place").

Kohlberg's account of moral development has attracted a very large following among educators, as well as a growing number of critics. He characterizes development in terms of an invariant sequence of six stages.[24] The first two stages are highly self-interested and self-centered: stage one is dominated by the fear of punishment and the promise of reward; stage two is based on reciprocal agreements ("You scratch my back, and I'll scratch yours"). The next two stages are what Kohlberg calls conventional morality: stage three rests on the approval and disapproval of friends and peers; stage four appeals to "law and order" as necessary for social cohesion and order. Only the last two stages embrace what Kohlberg calls critical, or post-conventional, morality. In these two stages one acts on self-chosen principles that can be used to evaluate the appropriateness of responses in the first four stages.

Kohlberg has been criticized on several grounds: for holding that moral development proceeds in a rigidly sequential manner (no stage can be skipped, and there is no regression to earlier stages); for assuming that later stages are morally more adequate than earlier ones; for being male-biased in overemphasizing the separateness of individuals, justice, rights, duties, and abstract principles at the expense of equally important notions of interdependence, care, and responsibility; for claiming that moral development follows basically the same patterns in all societies; for underestimating the moral abilities of younger children; and for underestimating the extent to which adults employ critical moral reasoning. We will not attempt to address these issues here.[25]

Nevertheless, whatever its limitations, Kohlberg's theory makes some important contributions to our understanding of moral education. By describing many common types of moral reasoning, it invites us to be more reflective about how we and those around us typically do arrive at our moral judgments. It invites us to raise critical questions about how we should arrive at those judgments. It encourages us to be more autonomous, or critical, in our moral thinking, rather than simply letting others set our moral values for us and accepting without question the conventions that currently prevail. It brings vividly to mind our self-interested and egocentric tendencies and urges us to employ more perceptive and consistent habits of moral thinking. Finally, it emphasizes the importance of giving reasons in support of our judgments.


Descriptive and Normative Inquiry

It is useful to think of ethics, or morality, as an umbrella term that covers a broad range of practical concerns, many of which are rather straightforwardly understood and dealt with, but some of which are not very clearly understood and are often quite controversial. This can help us see how the study of ethics differs from most other subjects of study, at least as they are traditionally understood.

Chemistry, for example, is typically viewed as empirical, or descriptive. We study chemistry to learn about how acids are different from bases, what the basic chemical properties of certain metals are, what the most basic principles are that explain chemical changes, and so on. Presumably, what we learn is based on careful, scientific observation. There is an attempt to describe what is the case, at least in the world of chemistry.

There is a descriptive aspect of morality, too. Psychologists, sociologists, and anthropologists might try to determine what particular values a certain group of people actually accept and how these values are related to people's behavior, their social and political institutions, or their religious beliefs. They can assemble information about the kinds of values people hold. Some of these values, although not moral values themselves (e.g., certain aesthetic values or the value we attach to material goods), may nevertheless be regarded as important enough to be accorded moral (and even legal) protection. But social scientists can describe this without necessarily endorsing the values that people actually accept as values they ought to accept. To ask what values people ought to accept is to ask a normative, rather than simply a descriptive, question. It is to ask what values are worthy of being accepted, rather than simply whether they are accepted; and it is the business of normative ethics to address these questions.


Philosophical Ethics

Traditionally, ethics has been taught at the college level mainly in departments of philosophy. (We will discuss how this has recently changed in Chapter 3.) In large part, philosophical ethics is normative in its focus. It examines basic questions about what our values should be, what, if any, fundamental grounding they can be given, and whether they can be organized into a comprehensive, coherent theory. Another part of philosophical ethics is called metaethics, which studies the nature of the language and logic we use when we are concerned about morality (as distinct from, say, law or social etiquette).

Although the study of philosophical ethics might make valuable contributions to our understanding of relationships between ethics and science, we do not regard it as a necessary preparation for bringing ethics into science classes. Thomas Reid wisely warns us not to make the mistake of thinking that "in order to understand [one's] duty, [one] must needs be a philosopher and a metaphysician."[26] This does not mean that careful reflection is not needed. Nor does it mean that philosophical reflection is not needed. But, just as we do not need to be logicians in order to think logically, mathematicians in order to think mathematically, or scientists in order to think scientifically, we do not have to be philosophers in order to think philosophically.

What Reid is telling us is that we do not need to be a Plato or an Aristotle in order to know our way about morally. He is also telling philosophers that in framing their theories they need to respect the understanding that ordinary, thoughtful people have of morality, even though such people may never have opened a philosophy book. In fact, most moral philosophers do this. For example, Aristotle's account of the virtues, Immanuel Kant's categorical imperative ("Act only on those maxims that you could at the same time will to be a universal law"), and John Stuart Mill's utilitarian theory (promote the greatest good for the greatest number) all begin with what their authors take to be commonly accepted moral views; and they see their task as articulating, refining, and reworking these views where necessary. They do this in ways that, nevertheless, respect common, everyday morality. For example, Kant tries to show how his categorical imperative gives us an improved understanding of the moral insights provided by the Golden Rule. Mill argues that his utilitarian theory both respects and provides a solid foundation for such basic, commonly accepted rules of morality as telling the truth and keeping promises, while at the same time providing a more fundamental principle for resolving conflicts among rules (e.g., when keeping a promise requires harming someone). However difficult their writings sometimes are to discern, the constraints that common morality placed on these philosophers remain evident.


Common Moral Values

Given the apparent moral differences found among people with different national, ethnic, or religious backgrounds, it may seem naive to talk, as we have, of common moral values. What moral values, if any, might be sharable across national, ethnic, religious, or other boundaries? This is the question philosopher Sissela Bok takes up in her recent book, Common Values.[27] She begins by listing a number of problems that cut across these boundaries: problems of the environment; war and hostility; epidemics; overpopulation; poverty; hunger; natural disasters (earthquakes, tornadoes, drought, floods); and even technological disasters (e.g., Chernobyl). The fact that we recognize these as common problems suggests that we share some basic values (e.g., health, safety, and the desire for at least minimal happiness).

However, our desire to get to the bottom of things often blocks our gaining a clearer understanding of what we have in common. Bok nicely outlines this problem. She notes that we may feel we need a common base from which to proceed. But there are different ways in which we might express what we think we need; Bok mentions ten. We may seek a set of moral values that are:

  1. divinely ordained
  2. part of the natural order
  3. eternally valid
  4. valid without exception
  5. directly knowable by anyone who is rational
  6. perceivable by a "moral sense"
  7. independent of us, in the sense that they do not depend on us for their existence
  8. objective rather than subjective
  9. held in common by virtually all human beings
  10. such that they have had to be worked out by all human societies

Although religious and philosophical traditions have concentrated on 1-8, Bok suggests we should start with 9 and 10. Given the inability of our religious and philosophical traditions to reach consensus on 1-8 thus far, it seems unlikely, she says, that such consensus will be reached any time soon.

In regard to 9 and 10, Bok makes four basic claims. First, there is a minimalist set of values that every viable society has had to accept in order to survive collectively. This includes positive duties of mutual support, loyalty, and reciprocity; negative duties to refrain from harming others; and norms for basic procedures and standards for resolving issues of justice. Second, she says that these values are necessary (although not sufficient) for human coexistence at every level: in one's personal and working life; in one's family, community, and nation; and even in international relations. Third, these values can respect diversity while at the same time providing a general framework within which abuses can be criticized. Finally, Bok says, these values can provide a common basis for cross-cultural discussions about how to deal with problems that have global dimensions.

Bok's point about finding common values while respecting diversity is very important. It is fairly easy to see that the same general values might play themselves out quite differently from one locale to another. For example, although motorists in England and the United States drive on opposite sides of the road, the two countries share the same basic values of safe and efficient travel. There is no reason to insist that one way is better than the other for these purposes. However, either is clearly preferable to, say, a rule that mandates driving on the left side on Monday, Wednesday, and Friday and on the right side on Tuesday, Thursday, and the weekend -- or to no rule at all. The United States tends to use stoplights at intersections, while England favors roundabouts. They may work equally well, or one may be better than the other -- as judged by the same general values of safety and efficiency. It is also quite likely that both systems can be improved in ways yet to be discovered.

However, Bok is making another point as well. She is suggesting that, even in the absence of agreement at the most fundamental level, those with very different moral and religious backgrounds may find common ground. A good example of this is the consensus reached by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. This commission was established by the United States Congress in 1974, and it issued what is known as the Belmont Report in 1978. This report contains the guidelines used by institutional review boards (IRBs) at colleges, universities, and other institutions that receive federal funding for research involving human subjects. The task of IRBs is to examine research protocols to make certain that the rights and welfare of human subjects are being protected.

Congress made a serious effort to ensure that different perspectives would be represented. Albert Jonsen and Stephen Toulmin describe the composition of the commission in this way:[28]

    The eleven commissioners had varied backgrounds and interests. They included men and women; blacks and whites; Catholics and Protestants, Jews, and atheists; medical scientists and behaviorist psychologists; philosophers; lawyers; theologians; and public representatives. In all, five commissioners had scientific interests and six did not.

The commission got off to a slow start. The commissioners' deep religious and philosophical differences surfaced quickly and blocked their ability to move ahead. Then they decided to talk first about specific examples rather than more foundational concerns. As they discussed particular cases of research involving human subjects (like the Tuskegee case we will present later), they discovered substantial areas of agreement that enabled them eventually to formulate three basic areas of ethical concern: respect for persons, beneficence, and justice.

In articulating their concerns about respect for persons, the commission agreed with the Kantian idea that it is inappropriate to treat persons merely as means to the ends of research. They agreed that it is important to obtain the informed consent of subjects before including them in an experiment, thus respecting their ability and right to make an informed decision (respect for autonomy). In regard to beneficence, the commission accepted the utilitarian idea of trying to maximize benefits to human subjects while minimizing the risk of harm. Finally, in regard to justice, the commission agreed that discrimination in the selection of research subjects is inappropriate and that special attention needs to be given to especially vulnerable groups such as prisoners, children, and the elderly.

However, the commission also carefully avoided committing itself to a set of inflexible guidelines. The Belmont Report confidently, but modestly, comments:[29]

    Three principles, or general prescriptive judgments, that are relevant to research involving human subjects are identified in this statement. Other principles may also be relevant. These three are comprehensive, however, and are stated at a level of generalization that should assist scientists, subjects, reviewers and interested citizens to understand the ethical issues inherent in research involving human subjects. These principles cannot always be applied so as to resolve beyond dispute particular ethical problems. The objective is to provide an analytical framework that will guide the resolution of ethical problems arising from research involving human subjects.

So, as a result of their willingness to reason with each other despite their differences, the commission succeeded in producing a workable document that is now reflected in the policies and practices of research institutions that receive federal funding for some of their research. Both the deliberative process and its results bear the marks of the reasonableness we might hope is obtainable in a democratic, but diverse, society. In fact, the work of the commission models many of the values that can be served by bringing ethics into the science classroom: it makes apparent how science and ethics are interrelated and how the challenges this poses might be thoughtfully addressed.


Reasonableness

Insofar as we are concerned with justifying our moral judgments, as distinct from simply asserting our views, we are striving to be reasonable with others. Justification in morality is similar to justification in science in this respect: justification in either realm is a public process. Convincing oneself privately, and only in one's own terms, is insufficient. A mark of unreasonableness is an unwillingness to consider seriously ideas unless they are cast in one's own terms and in ways congenial to one's preset views. W.H. Sibley puts the moral case rather well:[30]

    If I desire that my conduct shall be deemed reasonable by someone taking the standpoint of moral judgment, I must exhibit something more than mere rationality or intelligence. To be reasonable here is to see the matter -- as we commonly put it -- from the other person's point of view, to discover how each will be affected by the possible alternative actions; and moreover not merely to see this (for any merely prudent person would do as much) but also to be prepared to be disinterestedly influenced, in reaching a decision, by the estimate of these possible results. I must justify my conduct in terms of some principle capable of being appealed to by all parties concerned, some principle from which we can reason in common.

Coming up with principles from which we can reason in common may seem like quite a challenge. But there is a wide-reaching, long-standing principle that is useful in getting us started. Most moral systems and major religions subscribe to some form of the Golden Rule: Do unto others as you would have them do unto you. Although the rule is simple to state, interpreting it has proven more difficult.[31] We will discuss briefly a few of the difficulties and suggest how the Golden Rule might, nevertheless, prove useful in promoting the kind of reasonableness Sibley advocates.

Sometimes the Golden Rule is understood as a maxim of prudence: If you don't treat others as you want them to treat you, they may do likewise. Of course, we can take our chances that others will not do likewise, but this will usually require concealing from others that we are willing to take advantage of them, harm them, or cause them serious inconvenience in order to get what we want. This may work on special occasions, but it is difficult to sustain on a regular basis, especially with those with whom one has a great deal of contact. So, it seems safer not to treat others in ways we don't want them to treat us -- for the most part.[32]

However, this rendering of the Golden Rule seems to fall short of capturing its moral intent, which is supposed to move us beyond thinking only of ourselves. If the prudential rendering is too centered on self-interest, there is another rendering that seems to go too far in the opposite direction: altruism. This rendering suggests that, since I would appreciate others making sacrifices to help me get what I want, I should do this for them. Taken to an extreme, each of us would give up much of what we want for ourselves in order that others will get what they want. Admirable as giving to others is, this seems to go too far in the direction of self-sacrifice.

That the Golden Rule might be given two such contrary renderings (self-interested and self-sacrificial) suggests that something has been lost in the translation. The Golden Rule was brought into this discussion in order to help clarify Sibley's notion of reasonableness. Yet, both renderings seem to end up with forms of unreasonableness. The self-interested version is unreasonable because it takes too much for oneself. The altruistic version is unreasonable because it does not leave enough for oneself. Either way there is a serious imbalance between the claims of oneself and others. The sort of reasonableness commended by Sibley urges us to employ a principle from which we can reason in common. This is really an appeal to fairness -- to be fair to others and to ourselves. Neither rendering of the Golden Rule discussed so far satisfies this.

We suggest that the Golden Rule be seen as embracing two basic moral concepts. The first is universalizability: Whatever is right (or wrong) in one situation is right (or wrong) in any relevantly similar situation.[33] This is a requirement of both consistency and fairness. If it is morally acceptable for Judy, a brilliant young scientist, to alter data to make it look better, it is morally acceptable for others in relevantly similar circumstances to do likewise. This would have a rather general application -- rendering morally acceptable the alteration of data by all scientists, engineers, and many others who may find themselves in similar circumstances. If Judy considers the likely consequences of all scientists altering data when it seems advantageous to do so, she will come up with a very different picture than if only the consequences of one alteration of data are imagined. This will make it much harder for her to justify altering data.
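Put schematically (the formalization is ours, offered only as an illustrative gloss; the predicate names RelevantlySimilar and Right are our own choices, not part of the principle itself), universalizability is a consistency constraint across situations:

\[
\forall s_1 \,\forall s_2 \; \bigl[\, \mathrm{RelevantlySimilar}(s_1, s_2) \;\rightarrow\; \bigl( \mathrm{Right}(s_1) \leftrightarrow \mathrm{Right}(s_2) \bigr) \,\bigr]
\]

On this reading, if Judy judges altering data to be right in her own situation, consistency commits her to judging it right in every relevantly similar situation -- which is exactly why imagining all scientists altering data is the appropriate test.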

The second concept the Golden Rule embraces is reversibility. In treating others as I would have them treat me, I need to ask what I would think if the roles were reversed. For example, in contemplating lying to someone in order to avoid a difficulty, I need to ask whether I would object to the lie if I were being lied to in a similar circumstance. By subjecting our thinking to this reversibility test, we will often find it more difficult to justify lying than when we do not consider how we would feel about being on the receiving end of such a lie.
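Reversibility admits a similarly schematic rendering (again our gloss, with hypothetical names): an act \(\varphi\) that agent A directs at agent B passes the test only if A could accept \(\varphi\) with the roles reversed:

\[
\mathrm{Passes}(\varphi : A \rightarrow B) \;\Rightarrow\; \mathrm{AcceptableTo}_{A}(\varphi : B \rightarrow A)
\]

Both schemas state necessary conditions -- tests an act must survive -- rather than sufficient ones, which anticipates the point below that the Golden Rule cannot do everything by itself.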

Of course, the Golden Rule cannot do everything by itself. Its successful use depends on other values we have. For example, if I place no value on human life, including my own, then universalizability and reversibility alone will not show that I should refrain from harming others (or myself). Fortunately, nearly everyone does value at least her own life, happiness, and well-being; nearly everyone objects to being lied to; and nearly everyone recognizes that her happiness and well-being depend to a large extent on cooperation and mutual trust with others. What the Golden Rule can help us see more clearly is what taking these values seriously requires of us morally.

Nevertheless, we should not assume that Golden Rule thinking is always easy, even for those with the best of intentions. Philosopher Sissela Bok notes the dizzying effect the demands of the Golden Rule can have on us:[34]

    We need to shift back and forth between the two perspectives, and even to focus on both at once, as in straining to see both aspects of an optical illusion. In ethics, such a double focus leads to applying the Golden Rule: to strain to experience one's acts not only as subject and agent but as recipient, and sometimes victim. And while it is not always easy to put oneself in the place of someone affected by a fate one will never share, there is no such difficulty with lying. We all know what it is like to lie, to be told lies, to be correctly or falsely suspected of having lied. In principle, we can all readily share both perspectives.

In principle, Bok says, we can readily grasp both perspectives. However, in practice there are familiar and formidable obstacles. We are often psychologically predisposed against seeing things clearly. Thomas Reid observes:[35]

    There is ... no branch of science wherein men would be more harmonious in their opinions than in morals were they free from all Bias and prejudice. But this is hardly the case with any man. Men's private interests, their passions, and vicious inclinations and habits, do often blind their understandings, and bias their judgments. And as men are much disposed to take the Rules of Conduct from fashion rather than from the dictates of reason, so with regard to vices which are authorized by fashion the judgments of men are apt to be blinded by the Authority of the Multitude, especially when Interest or Appetite leads the same Way.

Bok and Reid make evident that there are serious obstacles to clear-headed moral thinking, whether in the sciences or in life generally. At the same time, they hold out hope that there is something we can do about this. Since science itself seeks to avoid bias and prejudice in its inquiries, it seems like hospitable ground for hosting moral inquiry. At the same time, moral inquiry can help students of science understand their own liabilities even as they engage in scientific inquiry.


Notes

16. For this observation, we are indebted to Alasdair MacIntyre, After Virtue, 2nd ed. (Notre Dame, IN: University of Notre Dame Press, 1985), p. 38.
17. Plato, The Trial and Death of Socrates, G.M.A. Grube, trans. (Indianapolis, IN: Hackett, 1975).
18. English translations vary, including piety, righteousness, and holiness as possible renderings. The precise word does not matter here, as it is the nature of Socrates' demand that is under consideration.
19. Thomas Reid, On the Active Powers of the Mind, in Philosophical Works, Vol. II, with notes by Sir William Hamilton (Hildesheim: Georg Olms Verlagsbuchhandlung, 1995), p. 642.
20. Ibid.
21. See, e.g., Richard A. Shweder, Elliot Turiel, and Nancy C. Much, "The Moral Intuitions of the Child," in Social Cognitive Development: Frontiers and Possible Futures, John H. Flavell and Lee Ross, eds. (Cambridge: Cambridge University Press, 1981), p. 288.
22. Gareth Matthews, "Concept Formation and Moral Development," in James Russell, ed., Philosophical Perspectives on Developmental Psychology (Oxford: Basil Blackwell, 1987), p. 185.
23. For balanced, accessible discussions of recent findings in moral development, see, e.g., William Damon, The Moral Child (New York: Free Press, 1988) and Daniel K. Lapsley, Moral Psychology (Boulder, CO: Westview Press, 1996).
24. See, for example, Lawrence Kohlberg, The Philosophy of Moral Development: Essays on Moral Development, Vol. 1 (San Francisco: Harper & Row, 1981).
25. Michael Pritchard has written extensively on many of them elsewhere. See his On Becoming Responsible (Lawrence, KS: University Press of Kansas, 1991) and Reasonable Children (Lawrence, KS: University Press of Kansas, 1996).
26. Reid, p. 643.
27. Sissela Bok, Common Values (Columbia, MO: University of Missouri Press, 1995).
28. Albert R. Jonsen and Stephen Toulmin, The Abuse of Casuistry: A History of Moral Reasoning (Berkeley: University of California Press, 1988), p. 17. Jonsen was a member of the commission, Toulmin a consultant.
29. The Belmont Report: Ethical Principles and Guidelines for Protection of Human Subjects of Biomedical and Behavioral Research, Publication No. OS 78-0012 (Washington, DC: DHEW, 1978), pp. 1-2.
30. W.H. Sibley, "The Rational and the Reasonable," Philosophical Review, 62 (1953), p. 557. Where Sibley refers to "conduct" and "behavior," we can substitute "judgment" without changing the essence of what he has in mind.
31. For discussions of some of these difficulties see, for example, Richard Whately, "Critique of the Golden Rule," and Marcus G. Singer, "Defense of the Golden Rule," in Marcus G. Singer, ed., Morals and Values (New York: Scribners, 1977); Jeffrey Wattles, The Golden Rule (New York: Oxford, 1996); and James A. Jaksa and Michael S. Pritchard, Communication Ethics: Methods of Analysis (Belmont, CA: Wadsworth, 1994).
32. Here one is reminded of David Hume's sensible knave, who reasons: "That honesty is the best policy, may be a good general rule, but is liable to many exceptions; and he, it may perhaps be thought, conducts himself with most wisdom, who observes the general rule, and takes advantage of all the exceptions." David Hume, Enquiries Concerning Human Understanding and Concerning the Principles of Morals, 3rd ed., edited by P.H. Nidditch (New York: Oxford University Press, 1975), pp. 282-3.
33. Universalizability is widely discussed in philosophical ethics. See, for example, Kurt Baier, The Moral Point of View (Ithaca, NY: Cornell University Press, 1958), ch. 8; Marcus G. Singer, Generalization in Ethics (New York: Knopf, 1961), ch. 2; and any of the writings of R.M. Hare.
34. Sissela Bok, Lying: Moral Choice in Public and Private Life (New York: Random House, 1978), p. 28.
35. Thomas Reid, Practical Ethics, edited with commentary by Knud Haakonssen (Princeton, NJ: Princeton University Press, 1990), p. 110.

Author(s): Michael S. Pritchard, Department of Philosophy, Western Michigan University & Theodore Goldfarb, Department of Chemistry, State University of New York at Stony Brook.

Citation
Michael Pritchard and Theodore Goldfarb. Chapter 2: What is Ethics? (Section I: A Guide To Teaching the Ethical Dimensions of Science). Online Ethics Center. https://onlineethics.org/cases/ethics-science-classroom/chapter-2-what-ethics-section-i-guide-teaching-ethical-dimensions.