Rethinking Unscientific Attitudes About Scientific Misconduct

C.K. Gunsalus

© Chronicle of Higher Education, March 28, 1997, page B4

What's unscientific
Structural issues
Paradox of the university
Bias of the best
Need for greater realism
Declining trust in scientists
Paying more attention to criticism
Recommendations

Many American scientists are fed up with press reports and questions from Congress and the public about scientific misconduct. The concern is drastically overblown, they say, and the government should spend less time and money investigating the few bad apples and concentrate on expanding appropriations for research. After all, some of the most highly publicized charges of misconduct eventually have been dismissed, these scientists note. Relatively few scientists have been found guilty of misconduct, so no elaborate investigative apparatus or intrusive federal rules are needed.

These feelings seem heartfelt and widely shared. What's worrisome is how unscientific they appear.

What's unscientific?

Well, it's unscientific to make repeated assertions that scientific misconduct is an extremely small or non-existent problem when we have few or no reliable data supporting those claims. In an extreme example, a 1987 editorial in Science said: "99.9999% of all published reports are truthful and accurate, often in rapidly advancing frontiers where accurate data are difficult to collect."

There is no basis for this claim, despite the air of scientific precision conferred by the four digits following the decimal point. Then (as now), we had no direct data on the accuracy of the scientific literature. We simply do not know whether a lot or just a little untruthful information is published. In fact, many scientists vehemently objected a few years ago to a proposed experiment to gather anonymous data on the prevalence of gross misconduct in biomedical research. In the absence of such data, scientists are not exempt from the normal requirement that they be accurate in their public statements.

Moreover, think about the implications of the argument that because scientific misconduct is rare, government does not need regulations and an apparatus to respond. How could the public react to the thesis that because counterfeiting is rare, laws against it and facilities for testing suspect currency cannot be justified?

It's also unscientific to make repeated assertions about the causes of scientific misconduct. Here, too, we lack data. Yet the literature is awash with pronouncements. Typical is a report in Chemical & Engineering News of a session at the 1996 meeting of the American Chemical Society in which one panelist asserted: "But 'fraud in science' is not a real problem. That is because of the psychology of the perpetrators of fraud, and the self-checking nature of the system. The psychopathology of fraud is such that its perpetrators hardly ever confine themselves to manufacturing routine data. Instead, they doctor something important."

What are "routine data"? How does a chemist understand the psychological mindset of perpetrators of fraud without conducting research into the issue? Why are accomplished scientists speaking without evidence to support their assertions? The answer, I believe, is that some structural aspects of universities lead top scientists to minimize the existence of problems and to ignore the possibilities for misconduct that are inherent in research.

Structural issues

The first structural issue is what I call the paradox of the university. A good one is organized so that active scientists are insulated from what it takes to run it, freeing them to think creatively and do science. Productive scientists complain that they are plagued with administrative work and committees, but most of that work is focused on matters directly related to their professional lives--selecting their students and colleagues, and supervising research facilities. Very little is focused on the nitty-gritty of running a large enterprise: what it takes to turn the lights on every day, do the paperwork required by government agencies and foundations, pay the bills, dispose of hazardous wastes, or respond to the odd conduct of troubled individuals.

For the most part, this system operates as intended, so that working scientists can, in fact, remain naive about the realities of day-to-day problems outside their labs. So it's natural that they fail to appreciate the need for rules and systems to deal with those problems. But it doesn't mean those rules and systems aren't necessary.

The second structural issue can be called the bias of the best. In their professional lives, the best people in an institution, particularly the best scientists with exemplary standards of conduct, typically associate only with other top scientists and outstanding students. They normally don't deal much with more-ordinary colleagues, including those whose work ethics or standards may be problematic.

They also have the power, when they do encounter misconduct, to handle problems efficiently. Consider a recent, well-publicized case. When Francis S. Collins, the highly respected director of the National Center for Human Genome Research, found last year that a junior researcher had concocted data, he promptly retracted five published papers on leukemia. The formal procedures to pin down the fraud and respond to it took months in that case, compared with years in other cases.

The combined effect of the structural features I've noted is to shield productive scientists--the sort who tend to become opinion leaders in science--from encountering whole categories of problems. As a result, many of them believe that problems are rare, that the few that occur can be easily handled, and, thus, that no money need be spent to develop procedures and train people to deal with misconduct.

Need for greater realism

Many leading scientists are more concerned that rules about scientific conduct will be (or have been) used to penalize creative and novel science. I have been unable to find a single instance of that happening, though, and I have been searching for some time, including directly querying those who frequently voice this concern. In fact, with Drummond Rennie, deputy editor of the Journal of the American Medical Association, I noted the lack of examples in an article in the journal in 1993. Not one example has been drawn to our attention since.

Some scientists also contend that government procedures for responding to misconduct are superfluous because science is "self correcting." Indeed, the chemist quoted earlier in Chemical & Engineering News on the "fact" that misconduct is not a problem also said: "And there are extraordinarily efficient self-correcting features in the system of science--the more interesting the discovery or creation, the more likely it is to be repeated and tested."

Recall what Francis Collins encountered. He found that for two years, one of his graduate students had published data that were systematically manufactured. The deception came to light, Collins said, when a reviewer of the sixth manuscript in the series questioned whether the data were fabricated. Note that it was two years before the misconduct was discovered. Does fabrication that takes two years to discover in a major project, headed by one of our pre-eminent scientists, demonstrate the efficient operation of a "self-correcting" scientific system?

In short, I believe that the leaders of science need to be more realistic about the nature of the enterprise that they supervise and defend. This includes recognizing the changes wrought by the explosive growth in the number of scientists and graduate students over the last 20 to 30 years. Too much of today's thinking about the internal workings of research is rooted in the mythology of the wise mentor standing side-by-side with the apprentice, inculcating scientific standards and traditions.

If this ever was an accurate depiction of science, it is not now. Today, faculty members at major universities run laboratories that routinely involve 20 or more people--students, postdoctoral fellows, and technicians. How well are the traditions and ethical practices of science being transmitted to students in such situations?

All institutions with research-training grants from the National Institutes of Health now must provide instruction in the responsible conduct of research. Some institutions have pioneered efforts such as the "group mentoring" program run by Michael Zigmond and Beth Fischer at the University of Pittsburgh. The best such programs, like theirs, offer students guidance on a broad range of professional conduct, including writing scientific papers, dealing ethically with human and animal subjects of research, and finding jobs. This information would (or should) have been transmitted directly from mentor to apprentice in a smaller system. Many institutions, however, do not offer comprehensive training; some simply arrange for one lecture on ethics each term.

Declining trust in scientists

Scientists need to realize that they are not accorded as much trust as they once were. Our society is significantly more cynical and less trusting than it was before the Vietnam War and Watergate. Universities, like almost every other sector of society, are much more heavily regulated than in the past; many of those regulations were adopted after scandals broke. Rules for protecting human research subjects are a perfect example: Congressional attention was drawn by the Tuskegee syphilis study and by a 1966 New England Journal of Medicine article by Henry Beecher, a Harvard Medical School professor, describing unethical treatment of human subjects in published research projects.

If scientists and their institutions do not develop the tools (either internally or at the federal level) to deal effectively with misconduct, it seems inevitable that scandal will follow, and that more external regulation will ensue. And rules imposed by outsiders are likely to be more onerous than rules devised by scientists themselves.

Scientists also should realize that a startling number of legal claims questioning internal decision making are filed against universities these days. As a result, a conclusion among colleagues that serious scientific malfeasance has occurred may not hold up legally. The university's penalties against the malefactor may wind up being rescinded or reduced.

What happens to the scientific environment when people violate generally held concepts of right and wrong, and yet nothing happens to them, either because their institution chooses not to act or because it is powerless to act, as a result of inadequate rules and procedures? What happens when allegations of misconduct are poorly handled or whitewashed, or when an innocent scientist is wrongly accused by a malicious colleague and yet the investigation languishes for years, or when a whistle-blower is vindicated but still suffers retaliation?

Cynicism flourishes, morale erodes, and the cohesiveness of the scientific enterprise suffers, all because of a failure to honor the scientific principle of an unbiased search for the truth. The effects are particularly devastating for students, who are supposed to be learning to act according to the highest scientific and personal standards.

In light of all this, it becomes even more important for scientists to base their opinions and actions upon factual understanding of how our current system works--and doesn't. Right now, many scientists agree that the system of dealing with misconduct charges desperately needs overhauling, but they are resisting adoption of a revised federal definition that would clarify when serious misconduct has occurred and how that should be determined.

Paying more attention to criticism

The findings last year of the Congressionally mandated Commission on Research Integrity (on which I served) have been roundly criticized. The panel proposed expanding the current federal definition of misconduct--fabrication, falsification, and plagiarism--to include intentional theft of, or damage to, research equipment or experiments. It also would cover misconduct by scientists when they review the research proposals and manuscripts of other scientists. Finally, it would add sub-definitions of each type of misconduct. For example, it would define plagiarism as "the presentation of the documented words or ideas of another as his or her own, without attribution appropriate for the medium of presentation."

The recommendations were based on 15 months of public hearings and on the examination of thousands of pages of material documenting past cases of misconduct. Yet some scientists seem to fear that every scientific dispute or disagreement would be transformed into the proverbial federal case if the definition of misconduct were changed. Some object to the "legalistic" tone of the proposed definition or argue that some acts, such as vandalism, already are covered by other regulations or laws.

Scant attention is paid, however, to the fact that the legal shortcomings of the current definition--its complete lack of specificity--have created unreasonable obstacles for universities trying to administer it. Nor is attention paid to the reality that state and local laws do not, in practice, cover vandalism to research equipment and experiments.

I recently led a three-day workshop for university administrators who investigate charges of research misconduct. Over and over, I heard the current definition summed up this way: "It doesn't work. It just doesn't work."

The current definition does not give enough guidance as to what conduct is covered: Does plagiarism encompass only stolen words, or ideas, too? How should investigators assess intent? How should they attempt to prove that data have been fabricated, and how conclusive must the proof be? What should be done in cases involving labs in which the records are so poor that one really can't tell whether the published data were ever collected, or when?

Recommendations

Federal rules on misconduct are not going to disappear; Congress will see to that. So it is time for scientific leaders to respond realistically to efforts to improve the federal rules. Researchers must be willing to support the adoption of a workable federal definition of misconduct: one inclusive enough to cover the existing range of misconduct, fair to all scientists involved, and able to withstand legal challenges to investigators' conclusions.

Similarly, researchers must understand that finding the truth about charges of misconduct is a paramount obligation, and that charges must be investigated according to established procedures that are fair to the accuser and to the accused. Probes must rely upon facts--not personalities or reputations--as the basis for decisions.

We also must create environments in which questions about the responsible conduct of research are discussed freely. Students cannot become professionals entirely by osmosis or by taking a single ethics course. The gray areas that exist in the norms (and there are many) must be legitimate and common topics of conversation.

Institutions also should adopt and enforce higher standards of professional conduct than the bare minimum required under any federal definition of misconduct.

Together, all these actions can help produce the scholarly climate we need. Such changes clearly will require new leadership from within, however. The conservatism of academic senates has meant that, in the absence of an external requirement, little has been done to regulate the conduct of university scientists.

Scientists' current stand against a new definition of misconduct to replace the existing inadequate one is dangerous. Scandals happen. We do not have adequate tools to deal effectively with the next ones, which are sure to come.

C.K. Gunsalus is associate provost at the University of Illinois at Urbana-Champaign. This article is adapted from a presentation at a symposium, "Science in Crisis at the Millennium," at George Washington University.

