These feelings seem heartfelt and widely shared. What's worrisome is how unscientific they appear.
There is no basis for this claim, despite the air of scientific precision conferred by the four digits following the decimal point. Then (as now) we had no direct data on the accuracy of the scientific literature. We simply do not know whether a lot or just a little untruthful information is published. In fact, many scientists vehemently objected a few years ago to a proposed experiment to gather anonymous data on the prevalence of gross misconduct in biomedical research. In the absence of such data, scientists are not exempt from the normal requirement that they be accurate in their public statements.
Moreover, think about the implications of the argument that because scientific misconduct is rare, government does not need regulations and an apparatus to respond. How could the public react to the thesis that because counterfeiting is rare, laws against it and facilities for testing suspect currency cannot be justified?
It's also unscientific to make repeated assertions about the causes of scientific misconduct. Here, too, we lack data. Yet the literature is awash with pronouncements. Typical is a report in Chemical & Engineering News of a session at the 1996 meeting of the American Chemical Society in which one panelist asserted: "But 'fraud in science' is not a real problem. That is because of the psychology of the perpetrators of fraud, and the self-checking nature of the system. The psychopathology of fraud is such that its perpetrators hardly ever contain themselves to manufacturing routine data. Instead, they doctor something important."
What are "routine data"? How does a chemist understand the psychological mindset of perpetrators of fraud without conducting research into the issue? Why are accomplished scientists speaking without evidence to support their assertions? The answer, I believe, is that some structural aspects of universities lead top scientists to minimize the existence of problems and to ignore the possibilities for misconduct that are inherent in research.
For the most part, this system operates as intended, so that working scientists can, in fact, remain naive about the realities of day-to-day problems outside their labs. So it's natural that they fail to appreciate the need for rules and systems to deal with those problems. But it doesn't mean those rules and systems aren't necessary.
The second structural issue can be called the bias of the best. In their professional lives, the best people in an institution, particularly the best scientists with exemplary standards of conduct, typically associate only with other top scientists and outstanding students. They normally don't deal much with more-ordinary colleagues, including those whose work ethics or standards may be problematic.
They also have the power, when they do encounter misconduct, to handle problems efficiently. Consider a recent, well-publicized case. When Francis S. Collins, the highly respected director of the National Center for Human Genome Research, found last year that a junior researcher had concocted data, he promptly retracted five published papers on leukemia. The length of the formal procedures to pin down the fraud and respond to it can be measured in months in that case, compared to years in other cases.
The combined effect of the structural features I've noted is to shield productive scientists--the sort who tend to become opinion leaders in science--from encountering whole categories of problems. As a result, many of them believe that problems are rare, that the few that occur can be easily handled, and, thus, that no money need be spent to develop procedures and train people to deal with misconduct.
Some scientists also contend that government procedures for responding to misconduct are superfluous because science is "self-correcting." Indeed, the chemist quoted earlier in Chemical & Engineering News on the "fact" that misconduct is not a problem also said: "And there are extraordinarily efficient self-correcting features in the system of science--the more interesting the discovery or creation, the more likely it is to be repeated and tested."
Recall what Francis Collins encountered. He found that for two years, one of his graduate students had published data that were systematically manufactured. The deception came to light, Collins said, when a reviewer of the sixth manuscript in the series questioned whether the data were fabricated. Note that it was two years before the misconduct was discovered. Does fabrication that takes two years to discover in a major project, headed by one of our pre-eminent scientists, demonstrate the efficient operation of a "self-correcting" scientific system?
In short, I believe that the leaders of science need to be more realistic about the nature of the enterprise that they supervise and defend. This includes recognizing the changes wrought by the explosive growth in the number of scientists and graduate students over the last 20 to 30 years. Too much of today's thinking about the internal workings of research is rooted in the mythology of the wise mentor standing side-by-side with the apprentice, inculcating scientific standards and traditions.
If this ever was an accurate depiction of science, it is not now. Today, faculty members run laboratories in major universities that routinely involve 20 or more people--graduate students, postdoctoral fellows, and technicians. How well are the traditions and ethical practices of science being transmitted to students in such situations?
All institutions with research-training grants from the National Institutes of Health now must provide instruction in the responsible conduct of research. Some institutions have pioneered efforts such as the "group mentoring" program run by Michael Zigmond and Beth Fischer at the University of Pittsburgh. The best such programs, like theirs, offer students guidance on a broad range of professional conduct, including writing scientific papers, dealing ethically with human and animal subjects of research, and finding jobs. This information would (or should) have been transmitted directly from mentor to apprentice in a smaller system. Many institutions, however, do not offer comprehensive training; some simply arrange for one lecture on ethics each term.
If scientists and their institutions do not develop the tools (either internally or at the federal level) to deal effectively with misconduct, it seems inevitable that scandal will follow, and that more external regulation will ensue. And rules imposed by outsiders are likely to be more onerous than rules devised by scientists themselves.
Scientists also should realize that a startling number of legal claims questioning internal decision making are filed against universities these days. As a result, a conclusion among colleagues that serious scientific malfeasance has occurred may not hold up legally. The university's penalties against the malefactor may wind up being rescinded or reduced.
What happens to the scientific environment when people violate generally held concepts of right and wrong, and yet nothing happens to them, either because their institution chooses not to act or because it is powerless to act, as a result of inadequate rules and procedures? What happens when allegations of misconduct are poorly handled or whitewashed, or when an innocent scientist is wrongly accused by a malicious colleague and yet the investigation languishes for years, or when a whistle-blower is vindicated but still suffers retaliation?
Cynicism flourishes, morale erodes, and the cohesiveness of the scientific enterprise suffers, all because of a failure to honor the scientific principle of an unbiased search for the truth. The effects are particularly devastating for students, who are supposed to be learning to act according to the highest scientific and personal standards.
In light of all this, it becomes even more important for scientists to base their opinions and actions upon factual understanding of how our current system works--and doesn't. Right now, many scientists agree that the system of dealing with misconduct charges desperately needs overhauling, but they are resisting adoption of a revised federal definition that would clarify when serious misconduct has occurred and how that should be determined.
The recommendations for the revised definition were based on 15 months of public hearings and on the examination of thousands of pages of material documenting past cases of misconduct. Yet some scientists seem to fear that every scientific dispute or disagreement would be transformed into the proverbial federal case if the definition of misconduct were changed. Some object to the "legalistic" tone of the proposed definition or argue that some acts, such as vandalism, already are covered by other regulations or laws.
Scant attention is paid, however, to the fact that the legal shortcomings of the current definition--its complete lack of specificity--have subjected universities to unreasonable obstacles in administering it. Nor is attention paid to the reality that state and local laws do not, in practice, cover vandalism to research equipment and experiments.
I recently led a three-day workshop for university administrators who investigate charges of research misconduct. Over and over, I heard the current definition summed up this way: "It doesn't work. It just doesn't work."
The current definition does not give enough guidance as to what conduct is covered: Does plagiarism encompass only stolen words, or ideas, too? How should investigators assess intent? How should they attempt to prove that data have been fabricated, and how conclusive must the proof be? What should be done in cases involving labs in which the records are so poor that one really can't tell whether the published data were ever collected, or when?
Similarly, researchers must understand that finding the truth about charges of misconduct is a paramount obligation, and that charges must be investigated according to established procedures that are fair to the accuser and to the accused. Probes must rely upon facts--not personalities or reputations--as the basis for decisions.
We also must create environments in which questions about the responsible conduct of research are discussed freely. Students cannot become professionals entirely by osmosis or by taking a single ethics course. The gray areas that exist in the norms (and there are many) must be legitimate and common topics of conversation.
Institutions also should adopt and enforce higher standards of professional conduct than the bare minimum required under any federal definition of misconduct.
Together, all these actions can help produce the scholarly climate we need. Such changes clearly will require new leadership from within, however. The conservatism of academic senates has meant that little has been done, past or present, to regulate the conduct of university scientists in the absence of an external requirement.
Scientists' current stand against a new definition of misconduct to replace the existing inadequate one is dangerous. Scandals happen. We do not have adequate tools to deal effectively with the next ones, which are sure to come.
C.K. Gunsalus is associate provost at the University of Illinois at Urbana-Champaign. This article is adapted from a presentation at a symposium, "Science in Crisis at the Millennium," at George Washington University.