Is science too soft on its miscreants? That could be read as the implication of a study published in Science this week [1], which shows that 43% of a small sample of scientists found guilty of misconduct remained employed in academia, and half of them continued to turn out a paper a year.

What's the purpose of a penal system for science? Credit: Punchstock

Scientists have been doing a lot of hand-wringing recently about misconduct in their ranks. A commentary in Nature [2] proposed that many such incidents go unreported, and suggested ways to improve that woeful state of affairs, such as adopting a ‘zero-tolerance culture’.

Misconduct potentially tarnishes the whole community, damaging the credibility of science in the eyes of the public. Whether the integrity of the scientific literature suffers seriously is less clear – the more important the false claim, the more likely it is to be uncovered quickly as others scrutinize the results or fail to reproduce them. This has been the case, for example, with the high-profile scandals and controversies over the work of Jan Hendrik Schön in nanotechnology, Hwang Woo-suk in cloning, and Rusi Taleyarkhan in bench-top nuclear fusion.

Grey areas

But the discussion needs to move beyond expressions of stern disapproval. For one thing, it isn’t clear what ‘zero tolerance’ should mean when misconduct is such a grey area. Everyone can agree that fabrication of data is beyond the pale; but as a study three years ago revealed [3], huge numbers of scientists routinely engage in practices that are questionable without being blatantly improper: using another’s ideas without credit, say, or overlooking others’ use of flawed data. Papers that inflate their apparent novelty by failing to acknowledge the extent of previous research are tiresomely common.

And austere calls for penalizing scientific misconduct rarely indicate what such penalties are meant to achieve. This would be inconceivable in discussions about a civic penal system. Although there is no consensus about the relative weights that should be accorded to punishment, public protection, deterrence and rehabilitation, these are at least universally recognized as the components of the debate. In comparison, discussions of scientific misconduct seem all too often to stop at the primitive notion that it is a bad thing.

For example, the US Office of Research Integrity (ORI) provides ample explanation of its commendable procedures for handling allegations of misconduct, while the Office of Science and Technology Policy outlines the responsibilities of federal agencies and research institutions to conduct their own investigations. But where is the discussion of desired outcomes, beyond establishing the facts in a fair, efficient and transparent way?

This is why the latest study in Science, by Barbara Redman and Jon Merz of the University of Pennsylvania, Philadelphia, is so useful. As they say, "little is known about the consequences of being found guilty of misconduct." The common presumption, they say, is that such a verdict effectively spells the end of the perpetrator’s career.

Their conclusions, based on a study of 43 individuals deemed guilty by the ORI between 1994 and 2001, reveal a quite different picture. Of the 28 scientists Redman and Merz could trace, 10 were still working in academic positions. Those who agreed to be interviewed – just 7 of the 28 – were publishing an average of 1.3 papers a year, while 19 of the 37 for whom publication data were available published at least a paper a year.

Road to recovery

Is this good or bad? Redman and Merz feel that the opportunity for redemption is important, not just from a liberal but also a pragmatic perspective. "The fact that some of these people retain useful scientific careers is sensible, given that they are trained as scientists," says Merz. "They just slipped up in some fundamental way, and many can rebuild a scientific career." Besides, he adds, everyone they spoke to "paid a substantial price". All reported financial and personal hardships, and some became physically ill. But on another level, says Merz, these data undermine the idea that the threat of banishment from the academic community acts as a deterrent.

Scientists have so far lacked much enthusiasm for confronting questions about the objectives of punishing misconduct, perhaps because such transgressions are felt to be uniquely embarrassing to an enterprise that considers itself in pursuit of objective truths.

But the time has come to face the issue, ideally with more data to hand. In formulating civic penal policy, for example, one would like to know how the severity of sentencing affects crime rates; how different prison regimes influence recidivism, the rate at which offenders relapse into crime; and whether imprisonment is primarily for punishment or public protection.

The same sorts of considerations should apply to scientific misconduct; otherwise the decisions that result can have a worryingly ad hoc flavour. Last week, the South Korean national committee on bioethics rejected an application by Hwang Woo-suk to resume research on stem cells. Why? Because "he engaged in unethical and wrongful acts in the past", according to one source.

But that's just a statement of fact, not intent. Does the committee fear that Hwang would do it again, despite the intense scrutiny that would be given to his every move? Do they think he hasn’t been sufficiently punished yet? Or perhaps that approval would have raised doubts about the rigour of the country’s bioethics procedures? Each of these reasons might be defensible – but there’s no telling which, if any, applies.

One reason why it matters is that, by all accounts, Hwang is an extremely capable scientist. If he and others like him are to be excluded from making further contributions to their fields because of past transgressions, we need to be clear about why that is being done. It's time for a rational debate on the motivations and objectives of a scientific penal code.