A Best-Case Scenario for the Organization of University Research

Description

Remarks delivered by Dr. Ken Pimple at a workshop held at Colorado State University on June 9, 2009, "Building a Research Environment that Promotes Best Practices."

Body

I have no financial conflicts of interest to report.

Thank you for inviting me to work with you on this important task. I’m impressed that CSU is making this effort, and I’m honored to have the opportunity to make a small contribution. I intend to leave time at the end of this hour for questions and discussion, but if at any point you want to ask a question, raise an objection, make an observation, say “Amen,” or snort in derision, please feel free to interrupt.

I want to begin by thanking Professor Rollin for reminding us of several very important things. Many past practices in science are rightly viewed as abhorrent by current standards, as they were sometimes viewed by the wider community at the time. Professor Rollin focused on research using non-human animals, but truly outrageous things have been done by scientists using human subjects, including the common practice of stealing human bodies from graveyards for anatomy study. When I first learned of this practice, my initial reaction was to admire the anatomists for their courage in flouting superstition in the pursuit of science. But there’s another side of the story: Each desecrated grave had held the body of someone’s son or daughter, father or mother, sister or brother. We have somewhat different attitudes about dead bodies today, but our current standards can’t be used in any straightforward or simple way to justify actions that were illegal, considered immoral, and certainly an affront to common decency in their own time.

Professor Rollin also reminds us that in some ways science is conservative and resistant to change, and that bucking the status quo is sometimes not only an act of courage, but also an act of moral necessity. There may well be common practices in science today that will be viewed in a few years with the kind of loathing most of us now have for some of the cruel practices that he described in his talk. I am not aware of any such practices in common use today, but being taught or reminded that they were common in the past, and accepting that they could exist today without yet being adequately recognized as unacceptable, should be part of any effort toward promoting the ethical conduct of research.

I only know two things about CSU. First, my youngest brother, Kevin Pimple, was bitten by a rattlesnake while he was a student here in the ‘80s. He survived without long-term damage. Second, I was asked by a small but very discerning group of people to help plan and be part of this workshop. Talking and corresponding with them over the last few months has made me admire their dedication to this task and the positive attitude they are taking toward a new obligation that, I suspect, many universities, and possibly some people here, are finding unwelcome.

Most of my comments will be broad-stroke generalizations about American academic research, and none will be directed toward CSU in particular. If anything seems to hit too close to home, it’s a coincidence.

1a. Three points about orientation

I want to make three general points about my basic orientation, which I hope you will agree is a profitable orientation for your own work.

i. Practical education

First, science education in the university – certainly at the post-graduate level – is both theoretical and practical. The same is true of research ethics education. We don’t want our students to have to blow the whistle on someone who fabricates data, but we might want them to have this experience vicariously through role playing or case study discussion. And we might want them to have a vicarious or, if possible, actual experience of peer review – as reviewer and reviewed – or of serving on an IRB, an IACUC, or an inquiry into misconduct. When I was telling my ten-year-old daughter, Vivian, about this talk, she provided an excellent example for my use – a scene from Harry Potter and the Order of the Phoenix.

The scene is the class on Defense Against the Dark Arts, now being taught by Professor Dolores Umbridge. She has provided her three course aims and instructed her students to read the first chapter of their textbook, adding that there will be no need for talking.

… Harry turned to page five of his copy of Defensive Magical Theory and started to read. It was desperately dull, quite as bad as listening to Professor Binns. He felt his concentration sliding away from him; he had soon read the same line half a dozen times without taking in any more than the first few words. Several silent minutes passed. … Harry looked right and received a surprise to shake him out of his torpor. Hermione had not even opened her copy of Defensive Magical Theory. She was staring fixedly at Professor Umbridge with her hand in the air. [240-241]

It takes Hermione some time to get Umbridge’s attention so that she can point out that Umbridge’s “course aims” say nothing “about using defensive spells” (241).

“Using defensive spells?” Professor Umbridge repeated with a little laugh. “Why, I can’t imagine any situation arising in my classroom that would require you to use a defensive spell.” [242]

The students object that learning the theory of defensive magic is useless in the real world. Umbridge spars with them for a while, then tries to bring the conversation to an end with an appeal to authority.

… “[It] is the view of the Ministry [of Magic] that a theoretical knowledge will be more than sufficient to get you through your examination, which, after all, is what school is all about.” [243]

Parvati points out that their examinations will require them to perform countercharms, but Umbridge persists in her assertion that understanding the theory will be sufficient for them to “be able to perform the spells under carefully controlled examination conditions” (244).

Umbridge pointedly ignores the students’ real objections: When it comes to practical skills, theory may be useful, but practice is essential. That’s why science courses have labs. Harry and his friends may be safe from attack in most classrooms, but it becomes clear that they aren’t safe from her. And being prepared for success in a carefully controlled school environment is not always adequate preparation for success in a less carefully controlled real-world setting. So, both science education and research ethics education should include a good deal of practice.

This is also a nice example because, as Vivian realized, one of the objectives of research ethics education is to help our students protect themselves from the dark arts and bad actors of science.

ii. The salience of immorality

My second point about orientation concerns attitudes toward people and actions. I believe, and I think it is useful to believe, that almost everyone is basically a good person. Even someone who has done a bad thing can be a good person, and I try to resist the urge to judge or comment on any person’s moral character. This attitude is beneficial because as long as we think of an offender as a person deserving respect, in spite of her or his alleged or confirmed transgression, we are more inclined to treat her or him fairly and with respect. Surely this is better both morally and practically than assuming people are guilty until proved innocent. That isn’t to say that people don’t behave badly; we do. Many years ago, a cowboy I met in Denver asked me, “You know how to tell when a cowboy’s lying? Watch his mouth real close. When it starts to move, he’s lying.”

Now, I doubt that this is true of all cowboys, and I know it isn’t true of most of the rest of us. The point is that we have an opportunity to act unethically every time we make an utterance. I’m just guessing here, but I suspect that even habitual liars – even cowboys – probably lie less than half the time, and most people probably tell no more than one or two lies on an average day. I’ve heard of an 80-20 rule: 80% of the trouble is caused by 20% of the people. I think that’s about right; I don’t think there are as many bad players as it sometimes seems.

The thing is that bad conduct is salient; it sticks out and catches our attention. My intention is to take a positive approach in this talk, but along the way I have to talk about bad behavior some of the time. If it starts to sound like I’m negative about science and scientists, please remember that part of the reason is that bad behavior takes up more space. I wouldn’t be here, or in this line of work, if it weren’t for my admiration for researchers and research.

Even though it may sound at times like I’m most interested in eliminating bad behavior, I’m really aiming at making good behavior easier. I believe that insofar as good behavior is difficult, it’s not generally so much a reflection on any individual’s moral character as it is a reflection of human nature, including the fact that all of our ideologies, beliefs, attitudes, and actions are profoundly influenced by other people – that is to say, by social forces.

I’m going to share some ideas about how I think academic research could be re-organized, at the university scale, to make good behavior easier and bad behavior harder and less tempting. None of my suggestions should be implemented unless they add value to research and the research community at CSU – that is, unless the cost is outweighed by the benefit to research and researchers.

Cost is counted in money, but also in time, frustration, and good will. The cost is any good thing that is diverted from another worthy endeavor. Benefit, in turn, can be counted in money, but in our case I suspect it will mostly take the form of reduced frustration and stress, increased good will, a more pleasant and productive research environment, and other benefits that are undeniably important, but sometimes hard to enumerate.

iii. Practical ethics

Third, it’s useful to remember that research ethics is practical ethics, not philosophical ethics. You don’t have to be a philosopher to be ethical, or to care about or have something to say about research ethics; any experienced researcher will have more to say about it than a typical philosopher. This isn’t to say that ethical theory and philosophical ethics have no place in research ethics, but that their place is subordinate. For example, there are many interesting and contested ethical issues surrounding the use of animals in research, but for many members of the research community, for practical purposes, many of these issues have been resolved, at least for now. The scientific community is clearly committed to the use of animals in research. Issues of animal welfare are very pertinent, but the argument that all use of animals in research must stop immediately – which is one claim of some philosophers who support animal rights, including Tom Regan (Regan 2004) – just has no traction in this context, and we needn’t spend any time on it today. That isn’t to say that we should ignore such arguments when we’re teaching research ethics, because we shouldn’t; it’s important for students who will become researchers to understand that those views exist, if only to protect themselves. For today, though, we can set animal rights and other such marginal issues aside.

2. Why are we here?

As you probably know, the good people in the Congress of the United States have decreed, in Section 7009 of the America COMPETES Act, that institutions applying for NSF funding must have a plan to provide “training and oversight” in the “responsible and ethical conduct of research.” Here’s how NSF proposes to implement this Congressional mandate:

Effective October 1, 2009, NSF will require that at the time of proposal submission to NSF, a proposing institution’s Authorized Organizational Representative must certify that the institution has a plan to provide appropriate training and oversight in the responsible and ethical conduct of research to undergraduates, graduate students, and postdoctoral researchers who will be supported by NSF to conduct research. While training plans are not required to be included in proposals submitted, institutions are advised that they are subject to review upon request. NSF will modify its standard award conditions to clearly stipulate that institutions are responsible for verifying that undergraduate students, graduate students, and postdoctoral researchers supported by NSF to conduct research have received RCR training.

This is only the latest mandate for some kind of ethics training for researchers. As you know, the National Institutes of Health has issued several since 1989 or 1990. Those mandates, like the one we face now, were imposed because of bad behavior by a small number of researchers, which led some research institutions to whitewash or otherwise bungle some of the cases, both of which brought bad press and embarrassing headlines, which made Congress unhappy, which made funding agencies nervous, which in turn passed the buck to institutions by requiring RCR training for some grants.

This is a caricature, but it suggests an easy solution to our current challenge: Since the NSF mandate essentially originated as a PR problem, we can offer a PR solution – namely, do anything at all and claim it’s a solution.

Many researchers responded this way to the NIH mandates. Some claimed in their grant applications that they would provide this training, but never did. In some instances, the people identified in grant proposals as the ones who would provide the training knew nothing about it when asked. Many NIH-funded trainees who should have gotten RCR training deny that they got it, or say they don’t remember getting it.

I was involved in Indiana University’s effort to respond to NIH’s mandate for instruction in the protection of human subjects in research in 2000. I met with a group of impressive persons, and we agreed that IU should require online training and an online certification quiz.

Five years later I led the effort to revise the test and to write a tutorial on non-biomedical research. The current test has been used since 2005. But the nine-year-old tutorial on biomedical research (a 50-page PDF document) is still in use, although it is out of date and defective in many ways.

I was proud of our effort at the time, but the lack of follow-up from the IU administration has convinced me that it has devolved into a nearly meaningless obstacle to research, an obstacle of the kind that tends to breed cynicism among researchers. I have argued that the NIH training mandates have been counterproductive (Pimple 2008), and IU’s own handling of the human subjects protection mandate is the kind of thing I want CSU – and everyone else, for that matter – to avoid in addressing the NSF mandate.

Let me remind you that the NSF mandate concerns the provision of “appropriate training and oversight in the responsible and ethical conduct of research to undergraduates, graduate students, and postdoctoral researchers who will be supported by NSF to conduct research.” Taking the low road would entail restricting our efforts to the named groups, leaving out others who could benefit, including established researchers, research administrators, and maybe even the Office of the Vice President for Research (OVPR). It would make the “training” as minimal as possible, and would claim that the “oversight” is already in place. We could view this mandate as a problem, or as an opportunity.

We could take the low road, the easy way, but do we want to use our time that way? Do any of us want to work at an institution that takes the low road? Do we want to encourage our students to think the low road is the best way to travel? I don’t. So let’s think of some ways we can take this opportunity to improve research and the climate of research.

3. Characterizing the opportunity

I’ve already said that we are here because of bad behavior on the part of a small portion of the research community. It’s easy to make too much of bad behavior, but it is also easy to make too little of it. There have been and there continue to be real problems in research integrity.

Most of you know that “research misconduct” is a quasi-legal term covering falsification, fabrication, and plagiarism in research; it’s the Federal offense of research. The Office of Research Integrity, which is responsible for oversight of research misconduct cases in NIH-funded research, receives an average of about 24 reports of proven research misconduct per year. This represents about 0.01% of the researchers funded by NIH (Titus et al. 2008). Research misconduct is always to be decried, but still that’s a comfortably small number – about 1 in 10,000 researchers.

But there are many reasons to believe that research misconduct is under-reported. Sandra Titus and colleagues recently surveyed more than 2,000 NIH researchers to find out how many had observed research misconduct. The data from this survey were interpreted by Judith Swazey to indicate a yearly misconduct rate of about 0.13%, or about 300 cases – about 10 times the rate of findings of research misconduct (Swazey 2008). Titus and her colleagues themselves interpret the data differently, estimating the rate at about 1.5%, or something like 2,300 cases a year, about 100 times the reported rate.
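To make those multiples concrete, here is a back-of-the-envelope check using only the figures just cited; the implied base population is my own extrapolation, not a number reported in either study:

\[
N \approx \frac{24}{0.0001} = 240{,}000 \ \text{NIH-funded researchers},
\qquad
\frac{300}{24} \approx 12.5,
\qquad
\frac{2{,}300}{24} \approx 96,
\]

which is roughly where the factors of 10 and 100 come from.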

It might be argued that a 1.5% rate of research misconduct is very low, but it is disturbing to think that misconduct is under-reported by a factor of 10, let alone a factor of 100. If nothing else, this seems to indicate that hundreds or thousands of researchers see behavior that can reasonably be suspected to be falsification, fabrication, or plagiarism, but don’t report it. This must have a corrosive effect on science and science education.

We obviously have to take research misconduct seriously because it happens and it strikes at the very heart of research values. If you falsify or fabricate data, you just aren’t doing science, and you’re poisoning the pond in which you and all of your colleagues swim. But since it seems to happen at a fairly low rate, insofar as we address misconduct in science at all in our teaching, our efforts should probably be directed not at preventing misconduct, but rather at encouraging responsible reporting of misconduct. There is an excellent article by C. K. Gunsalus entitled “How to blow the whistle and still have a career afterwards” that I strongly recommend for teaching.1

Of course there are many other kinds of bad behavior in research that damage the research record, the research community, and the research environment. Those practices are the ones that should get the lion’s share of our attention in education, and I think the term “the responsible conduct of research” captures pretty well what we want to encourage.

Little is known about why researchers behave badly. For my part, I think that sometimes it stems in part from stress, and in a very few cases from some kind of sociopathy, but I’m sure there are many other factors as well. Education or training in research ethics is valuable, but even the best research ethics training cannot have a strong impact on research behavior if the climate and organization of research pushes researchers away from responsible behavior.

Most of the efforts of the Federal government seem to have involved so-called “boundary organizations” – organizations that stand between government and science, understanding the needs, values, and systems of both, and mediating their interactions. For example, the Office of Research Integrity at HHS and the Office of Inspector General at NSF deal with misconduct, the Office of Human Research Protections oversees human research, and NIH’s Office of Technology Transfer deals with profitable research.

All of these offices and agencies were established to solve particular problems. The organization of research we have today was never designed; only the parts were designed, often without other parts in mind. It might be time to design a system that works well in all its parts, and in which all of the parts work together. The people here can’t redesign NIH and NSF, but you might be able to make some very positive changes to CSU’s research organization. And if you succeed, you might be a leader that other research universities will follow. And if they follow, you might actually lead NIH and NSF, too. Wouldn’t that be grand?

To make the most of this opportunity, we have to recognize that the players have different interests. In broad strokes, researchers want administrators and funding agencies to get off their backs and let them do their work; students want to become researchers without being ground into the dust along the way; research administrators want researchers to calm down and follow the rules; OVPR wants research administrators to keep researchers in line, and researchers to bring in more money; funding agencies want Congress to get off their backs and let them do their work, and universities to play by the rules; members of Congress want everyone to behave themselves so they get re-elected; and the public wants all of them to do good science, learn interesting things, solve problems, and not waste our money.

That’s all I have to say about the public, Congress, and funding agencies today.

I take it as axiomatic that all of these groups have certain interests and goals in common, though the emphasis varies. They all want CSU science to advance and flourish. They all want CSU research to be ethical and CSU researchers to stay out of trouble. They all want CSU to be free of research scandal. They all want CSU to flourish as a great research university where great scientists, science support staff, and students want to be. I think we often lose sight of the simple fact that everyone involved in research is really on the same side – even researchers who are competing against each other for resources.

Sometimes, admittedly, research administrators and OVPR seem to lose sight of the fact that they exist to serve science, not the other way around. But sometimes researchers lose sight of the fact that without research administrators, their lives would be much, much harder. When I served on the IRB at IU, I sometimes said to researchers, “If you think we’re a pain in the neck, just try working with the Federal government directly.”

I’d like to be able to stop here and say, “Since we’re all on the same side, let’s just get along.” If it were that simple, I would, but it didn’t work for the Buddha, or Moses, or Jesus, or Mohamed, or any of the hundreds of sages who asked us to get along, so it isn’t going to work here, either. It can be hard to just get along for many reasons embedded both in human nature and the research environment. We all know about turf wars, entrenched interests, and resistance to change, and we all know that change is difficult. But let’s give it a shot.

3a. Whole is greater than some

You might have heard of the “Lake Wobegon effect.” In many surveys asking people to rate themselves and their peers on desirable traits, such as honesty or ability to get along with other people, a majority of those sampled – often a large majority – rate themselves as above average, and only a minority – often a very small minority – rate themselves as below average. It’s amusing to learn that the majority of people, on average, believe themselves to be above average. They can’t all be right. Oh, the vanity and hypocrisy. But of course, every amused person is also thinking, “I’m better than that.”

There might be something else to learn here. When you sample individuals, their views are naturally going to be self-centered – after all, in very important physical and psychological ways, each one of us is, really, the center of his or her universe (Pronin 2008). I know almost everything about what I do and why I do it and almost nothing about why you do what you do.

But if, instead of acting individually, we were to work together, reviewing and revising our obviously faulty estimates, in time we’d probably come up with a bell-shaped curve.

I’m asking you all to work together not just because it’s nice, but because it’s effective. It’s a better way of accomplishing things. Insofar as we work at cross-purposes, we waste valuable resources – time, money, good will, and so forth.

I don’t want to deflate your self-image by pointing out that we can’t all be above average – what would be the point of that? But I suggest that if we were to inflate our images of others, not necessarily beyond our own self-image, but more-or-less up to it, we might treat other people better and get along better.

4. Tackling the challenge

Now I will suggest a few other ways of thinking about getting along, what that might look like, and how it might be accomplished.

I want our efforts to “provide appropriate training and oversight in the responsible and ethical conduct of research” to include teaching in many formats, but I’m not going to focus on teaching. I run an annual three-day workshop tackling that problem, and I can’t do the topic justice here. My focus is on teaching as part of a larger effort to make good behavior easier and more attractive and bad behavior harder and less attractive. Teaching will be part of it, but more important will be the research environment.

As we think about a comprehensive approach to promoting research ethics and protecting against misbehavior in research, it might be useful to remember that bad and good behavior come in more and less serious forms, from the very bad, such as research misconduct, serious violations of human subject regulations, and so on, to so-called questionable research practices, to mere rudeness, including selfishness, lack of collegiality, and the like. The counterparts might be called research integrity, good research practices, and good research manners. These can blend into each other; the line between rudeness and unethical behavior can be quite thin.

4a. Institutional and departmental evaluation

I want to make a strong pitch for starting with internal and external evaluation of the research climate at CSU at the departmental and institutional levels. When you want to fix something, it’s really useful to know what’s broken first. Then you know what to work on, and once you’ve worked on it, you have some chance to say that it worked, or not. I should think this would be an easy sell to scientists: Data matter.2

So how can we make good behavior easier and bad behavior harder? We have two fundamental mechanisms: informal social control and formal social control. Please don’t bridle at the term “social control”; it doesn’t mean that we’re controlled like puppets on strings. It means that social forces, like approval and disapproval, gossip, peer pressure, acculturation, enculturation, and the like, strongly influence individuals’ behavior.

4b. Informal social control

Informal social control is far more powerful than formal social control because everyone is an agent of informal social control, but only a few people sometimes act as agents of formal social control, namely when they make use of institutional power, such as formal rewards and punishments, or the threat of punishment.

i. Face-to-face interactions

To maximize the impact of informal social control, it would be useful to create more occasions for contact between people who do not see enough of each other now. For too many researchers, the IRB and the IACUC are faceless e-mail addresses. It’s harder to be respectful of an e-mail address than of a real human being. Tensions between “compliance” staff and researchers are perfectly understandable, but not productive, and they can be reduced by face-to-face interactions. In general, increasing the amount of time people spend with each other, their ability to see what others are doing, and the number of conversations they can have about how things are going should not only be more productive for science per se, but also more effective as informal social control.

ii. Working in pairs and learning in groups

It might be useful to have junior researchers work in pairs so that more people know in detail what’s going on. Certainly any new educational or training efforts in research ethics should be dominated by face-to-face interactions. Relegating this training to the Web wastes a golden opportunity to establish and foster communities with shared experiences, insights, and values.

iii. Professionalism

We should encourage, model, and reward professionalism, under which I include following rules, adhering to guidelines, behaving rationally instead of emotionally, not taking a request to do something differently as a personal affront or insult, showing respect to everyone, including your subordinates and inferiors, and the like. Professionalism should also include a sense of service to a greater good and an awareness that the privileges of professionals are earned by public service. The privileges of professionals are not just money and status; some are inherent in the practice of the profession itself. Physicians, for example, are privileged in every sense to tend to the sick and the dying. Scientists are privileged to pursue knowledge for its own sake and for its utility, to think widely and deeply, and to challenge themselves, one another, and the rest of us to think more clearly, understand more profoundly, and face sometimes unsettling truths about the natural world unflinchingly.

Professionalism also rightly entails or demands a sense of honor and loyalty to the profession, as well as service to the common good. I think that we in the research community are too shy about talking about our privileges and the obligations they place upon us, and too reticent about admitting the grand and honorable place science has in public life and human existence. We don’t have to be uppity about it; we should be humbled.

4c. Formal social control

i. Transparency

When I gave a talk on some of these issues several years ago, I was accused – and it was clearly an accusation – of promoting a culture of surveillance. That’s an ugly word for this context, and it doesn’t correspond to my attitude or the attitude I want to encourage. Surveillance implies suspicion, an expectation of finding guilty behavior. I don’t want us to be suspicious of each other; I want us to be aware of each other and knowledgeable about what’s going on. Instead of surveillance, I want to promote transparency. I used that word before Professor Barack Obama was elected.

Justice Louis Brandeis famously remarked that sunlight is the best of disinfectants. Universities and all university systems should be perfectly transparent, including labs, study groups, and individual researchers. Transparency can be difficult to establish, but I doubt that, once established, it’s difficult to maintain. Transparency makes us seem responsible, and it also makes us actually be responsible, as well as accountable. Transparency is particularly important at public institutions. I am usually uncomfortable when I recall that I’m an employee of the state of Indiana, especially when the state legislature is in session or the governor is in the news. Whether you have a similar reaction or not, the fact is that you are employees of the state of Colorado and you are both beholden and accountable to the citizens of Colorado, as well as to the Federal government, with all of its flaws.

Transparency will increasingly be required of us, so we may as well be proactive and embrace it on our own terms. Remember, too, that aside from being an obligation, transparency brings real benefits. I’ll name just three. First, transparency can provide metrics showing success and competence – like preliminary data – that can be leveraged to secure additional resources. Second, transparency provides protection. When you are open about what you do, there are no rude surprises. Third, transparency provides great PR at every level, which can also be used to gain additional resources.

ii. Formal reward systems

It should go without saying that lab directors should spend time overseeing the work of the postdoctoral fellows, graduate students, undergraduate students, and technicians for whom they are responsible. A recent study suggests that a common factor in cases of research misconduct by a subordinate researcher may be lack of oversight. The study “found that almost three quarters of the mentors had not reviewed the source data and two thirds had not set [specific research] standards” (Wright et al. 2008:323).

It may be argued that lab directors or other research supervisors do not have time to take on this responsibility. If that is the case, I suggest that universities should take responsibility for recognizing the demands on their researchers’ time. I strongly suspect that the reward systems at universities can and should be renovated to be more in line with today’s research environment than with the research environment of, say, 1965. Promotion, tenure, and other rewards ought to be based on more than the number of publications, and should reward collaborative work and responsible oversight.

Using the number of a researcher’s publications, the number of her grants that are funded, and the amount of grant money she brings in as the primary measures of success is ill-conceived. All forms of piecework – payment per unit of work accomplished, whether garments sewn or patients seen – encourage quick and shoddy work. Of course we cannot change the practices elsewhere, but I feel sure that some aspects of the reward system at CSU could be improved.

We know that people who think the reward system is unfair find it easier to justify their own cheating and are more likely to cheat. It would be best to have a truly fair reward system, but at the very least we should find a way to make our reward system seem fair to most people, most of the time. Insofar as possible, actual unfairness in the system should be rectified, and merely perceived unfairness should be targeted by education – we should explain what makes the system fair in fact, if not in appearance. This, of course, would require knowing what perceptions people have about the fairness of the system.

iii. Fixing problems, not blame

There are a number of reasons people are hesitant to report suspected incidents of research misconduct and other unethical research practices. One is that people don’t like tattletales. But we can talk about some things in a way that doesn’t amount to tattling, especially in an atmosphere of transparency. When good research practices are well known, out in the open, published – transparent – deviations from them can be identified, and it is easier to say, “Excuse me, but I thought we did it this way, not that way. Am I missing something?”

Another reason, though, is that the formal and informal punishments for research misconduct can be severe and may be perceived as unjust. Most of us don’t enjoy getting other people into trouble – even if it’s their own behavior that’s really to blame – and few of us enjoy seeing our colleagues punished unfairly or too harshly.

One model of dealing with bad events that has been wildly successful is that of the National Transportation Safety Board, the NTSB. If an airplane crashes, the first move, obviously, is to deal with the wreckage – taking care of survivors, if any, and securing the scene. The next step is to analyze the data very thoroughly to find out what happened. This usually takes twelve to eighteen months. The NTSB is not responsible for establishing guilt or administering punishment, but it makes safety recommendations (NTSB 2004) that are taken very seriously.

I’d like to see this kind of approach – find out what went wrong and figure out how to make it less likely in the future – adopted in cases of suspected research misconduct and other kinds of damaging behavior or events, at both the university and the Federal level. Finding guilt and handing out punishment can be handled separately.

iv. Administrative Liaisons

I’d like to see the creation of a new kind of boundary worker; the position might be called an Administrative Liaison. This would be a person who is responsible both to a particular research group and to administration, who is at home in both places, has experience in both areas, and is known by both researchers and research administrators. This person would translate and mediate between research and administration, smooth out wrinkles, and generally make life easier for everyone. At IU we have Local Support Providers (LSPs) for computers and networks. A big unit might have several LSPs; a small unit, like mine, might share an LSP with several other units. They are people who have a name and a face. They are helpful to us and simultaneously help promote good computing behavior – protecting our systems from viruses, encouraging us to change our passwords, and so forth.

I encourage you to look around for successful programs or offices that have a positive, ongoing impact on classroom teaching, research oversight, securing funding, getting a patent or starting a business – services and units that faculty want to turn to for assistance. When you find them, think about how they can be emulated in the service of the ethical conduct of research and research oversight.

v. Large-scale collaborative research

It’s my impression that at most universities, one area that needs institutional reform if teaching in research ethics is to be workable is collaborative research at the interdepartmental and inter-institutional levels. I have observed and heard of cases in which a PI heading a research group – for example, a center or an institute – receives funding for a Research Experience for Undergraduates (REU) program, or an Integrative Graduate Education and Research Traineeship (IGERT), with promises from various departments to provide some kind of RCR training. When the grant is funded, the promises wither away. The structure of the university is such that it’s hard to keep these promises: there is no strong reward, there is no palpable negative consequence, and there is no real structure to help make it happen. All of these should be addressed. One approach might rely on the Administrative Liaisons I mentioned earlier, or on some other research administrator dedicated to coordinating large-scale collaborative efforts, including efforts to provide research ethics education and oversight.

It’s obviously important to make it possible for research administrators charged with overseeing a collaborative grant to do their job. Sometimes faculty researchers get frustrated with the red tape and ask their admins to find a way around the rules, or to ignore them. It’s proper for the PI to have authority in her or his office, but it isn’t productive for admins to be caught between an imposing bureaucracy and a bullying PI. One simple fix would be to make it clear to everyone that it is acceptable for an admin to ask for instructions in writing. If the admin says we have to do things one way because the rule says so and the PI disagrees, it might be a legitimate case of conflicting judgment, but the admin should not be put out on a limb and forced to act against her or his own best judgment. So the admin should be able to write something like a brief, describing the situation, her or his own interpretation, and the PI’s interpretation. Both the admin and the PI should sign it and send a copy to OVPR. But this will not work if the PI sees it as an attack on her or his integrity. Professionalism and transparency have to be fostered and expected across the university.

5. Conclusion

I would like to see large-scale reform at every research university and funding agency of the kind I have envisioned here, and in other ways that have not occurred to me, reform that would make it easier to get along with each other and to be ethical, reform that would make research more pleasant and restore some of the fun, and that might even make research more productive and efficient. I don’t think this is a zero-sum game; I believe that honest efforts and minor sacrifices by many players can result in greater gains for all.

In some ways, the training of new scientists still resembles an apprenticeship model. It must be wonderful to be an apprentice to a skilled craftsman or scientist, but the apprenticeship system was notoriously riddled with abuse. Masters tend to be like kings, who tend to become tyrants if their power is not checked. The days of the independent researcher are over, so perhaps counterproductive vestiges of the apprenticeship model should be abandoned as well.

It’s said that it takes a village to raise a child. A village, not a city. One difference between a village and a city could be called transparency; everyone knows everyone else in a village, and no one can hide bad behavior for long. Could the university become a village that embraces diversity, including tolerance for eccentricity; that believes in correcting behavior without condemning character; and that is committed to professionalism and procedural justice so that innocent differences and personality conflicts do not readily escalate into accusations of serious wrongdoing? I think so. I think you can do it.

Thank you for your attention.

Acknowledgments

I am indebted to Vivian Livesay for her contributions to this paper as a research assistant. Her probing curiosity, native intelligence, creativity, and thirst for knowledge are impressive.

I would also like to thank Sally Todd for sharing her observations and insights on the life of a research administrator.

Jennifer Livesay provided keen insights into an earlier draft. She, Gwen Livesay, and Vivian Livesay all gave me (most of) the space and time in the last rush to finish this paper, at some cost to the quality of their own lives. I am deeply grateful to them for this, and for everything else that they do and are.

Citations

Gunsalus, C. K. 1998. “How to blow the whistle and still have a career afterwards.” Science and Engineering Ethics 4:51-64.

NTSB. 2004. “The investigative process.” http://www.ntsb.gov/Abt_NTSB/invest.htm (confirmed June 2, 2009)

Keith-Spiegel, Patricia, Joan Sieber, and Gerald P. Koocher. 2004. “Responding to research wrongdoing: A user-friendly guide.” http://www.ethicsresearch.com/freeresources/rrwresearchwrongdoing.html (confirmed June 2, 2009)

Pimple, Kenneth D. 2008. “Unintended consequences of RCR education, instruction, and training mandates.” Presented at the first biennial conference on Responsible Conduct of Research (RCR) Education, Instruction, and Training, St. Louis, Missouri, April 17 and 18, 2008, sponsored by the Office of Research Integrity and the Washington University School of Medicine. http://mypage.iu.edu/~pimple/ (confirmed June 2, 2009)

Pronin, Emily. 2008. “How we see ourselves and how we see others.” Science 320 (May 30):1177-1180.

Regan, Tom. 2004. The Case for Animal Rights. 2004 ed., updated with a new preface. University of California Press.

Rowling, J. K. 2003. Harry Potter and the Order of the Phoenix. New York: Scholastic Press.

Swazey, Judith P. 2008. “Integrity: How to measure breaches effectively.” Nature 454 (July 31):575.

Titus, Sandra L., James A. Wells, and Lawrence J. Rhoades. 2008. “Repairing research integrity.” Nature 453 (June 19):980-982.

Wright, David E., Sandra L. Titus, and Jered B. Cornelison. 2008. “Mentoring and research misconduct: An analysis of research mentoring in closed ORI cases.” Science and Engineering Ethics 14:323-336.

1. I recently learned of a similar resource that looks good, but I haven’t had time to read it in full (Keith-Spiegel et al. 2009).
2. I wish I could make this case at greater length and in greater detail, because I feel passionately about it, but it is outside of my area of expertise.
Notes

Presented at “Building a Research Environment that Promotes Best Practices,” a workshop held at Colorado State University, June 9, 2009, sponsored by CSU’s Vice President for Research and Office for Undergraduate Research and Artistry. Copyright © 2009, Kenneth D. Pimple, Ph.D., all rights reserved.

Also available at the TeachRCR.us site. 

Citation
Kenneth D. Pimple. “A Best-Case Scenario for the Organization of University Research.” Online Ethics Center. https://onlineethics.org/cases/ken-pimple-collection/best-case-scenario-organization-university-research.