Nature 453, 957 (19 June 2008) | doi:10.1038/453957a; Published online 18 June 2008

Solutions, not scapegoats

Scientific misconduct may be more prevalent than most researchers would like to admit. The solution needs to be wide-ranging yet nuanced.

Many researchers would like to believe that scientific misconduct is very rare. But news reported in this issue (see page 969), and the survey results reported by Sandra Titus and her colleagues on page 980, challenge that comfortable assumption. Titus's team found that almost 9% of the respondents in their survey, mainly biomedical scientists, had witnessed some form of scientific misconduct in the past three years, and that 37% of those incidents went unreported.

The results suggest a research climate in which scientific misconduct, although uncommon, is certainly not an anomaly. Titus et al. outline a number of measures to address this situation, including better protection for whistleblowers, and promotion of a 'zero tolerance' culture in which scientists have just as much responsibility to report others' misconduct as they have for their own behaviour.

However, although these proposals have much to recommend them, they are, at best, a beginning. A more radical change of perspective may be in order — one in which misconduct is no longer viewed as a problem that can be solved by identifying and banishing a few unethical individuals. Instead, the problem calls for approaches that are both more nuanced and more far-reaching.

Consider, for example, that not all cases of misconduct are equally egregious, and not all perpetrators deserve to be branded as cheaters for the rest of their careers. There is often room for honest mistakes and differences of opinion. Yes, institutions should develop strong guidelines for what is and is not permissible, but officials should also have the flexibility to compare individual situations to these guidelines, and to develop unique solutions as needed. In some cases — for example, a young researcher who simply yielded to temptation once — a system of warnings might be used to both correct the problem and educate the researcher. Within individual labs, moreover, airing complex matters — such as decisions about when data can be justifiably excluded from analysis, or how images can be ethically adjusted to improve their quality — may reduce the chance that any single investigator's decision will later lead to accusations of misconduct.

Meanwhile, misconduct investigations all too often focus solely on an individual offender, and fail to diagnose the environment that has allowed misconduct to flourish. Instead, institutions should seize the opportunity to learn from the experience, and to address the bigger questions. For example, did the atmosphere in the lab create the pressure to cut corners? Or did the intensity of the tenure chase contribute? One way to address such questions might be through internal departmental discussions, in which everyone is free to admit mistakes, and discuss how to fix the problems instead of apportioning the blame.

More-formal misconduct investigations may need to be kept private, as a necessary safeguard for the falsely accused. Nonetheless, institutions can and should share the lessons they have learned from the process. Officials at an institution may learn, for example, that mentoring needs to be improved, or that their system for reporting misbehaviour is flawed. Unfortunately, some institutions may instead feel pressure to bury or cover up their findings for fear of negative press. But doing so trades a short-term reprieve for a long-term loss: such institutions are doomed to repeat past mistakes. This means turning attention away from scapegoats, and focusing on solutions.
