Scientific misconduct comes in many shades of grey. At the light end of the spectrum, some authors recycle their data over several publications, often without citing the previous ones properly, so that each can be presented as new work. Although such recycling is not fraud, no one can deny that it is bad practice. And when actual fraud does occur, authors, scientists, editors, the media and whoever else is involved are often left pointing an accusing finger at each other. How can we, as a community, prevent such papers from being published?

Scientific integrity should be taught in the lab. Yet we all know someone who has removed a data point here, or shifted another one there, with the aim of improving presentation. Not all data manipulation is so innocent, however.

Recent high-profile fraud cases have shocked the scientific world. Five years ago, Jan Hendrik Schön of Bell Labs was thought to have the Midas touch. He used the humble field-effect transistor to demonstrate all of the main findings of twentieth-century condensed-matter physics. An independent inquiry found that he had fabricated most of the results, and his papers were duly retracted. The criminal trial of stem-cell researcher Hwang Woo-suk — for embezzlement, misuse of research funds and violation of South Korea's bioethics laws — has just entered the final stages. Although Hwang has apologized for fraud on behalf of his group, he continues to blame his junior collaborators for having duped him into believing the cloning results.

In both of these cases, some of the co-authors were innocent of fraud — but how is one to tell? When the experiments are complicated, it isn't always possible for every co-author to understand all aspects of the work down to the last detail. For this reason, Nature Physics encourages authors to publish a breakdown of individual author contributions: who conceived and designed the experiments, who performed them, who wrote the paper, and so on. Transparency of this kind, giving credit where it is due, is also useful to funding bodies and potential employers when it comes to rating candidates. Roughly a quarter of our published papers currently include a statement of author contributions in the acknowledgements. We hope that this proportion will increase — perhaps the community would even support a move to make such a statement compulsory for all papers.

What else can journals do to ensure good scientific practice? Peer review isn't perfect, but it's the best process we've got. We look for referees we can trust, and listen to them if they have serious concerns. But if referees do smell a rat, they often stop short of making direct allegations. It's simply not part of the culture. Instead, they may point to inconsistencies and other technical problems, such that the journal subsequently rejects the paper without further ado. But then the authors are free to resubmit elsewhere. They may clean up the obvious problems so that, at the next journal, they might be able to fool a different set of referees.

One way to curb such bad practice would be for journals to share information, but that would compromise the cornerstones of editorial independence and author confidentiality. Moreover, editors would then become both judge and jury, and the level of proof required would be astronomical. Most people simply don't have the time to get involved, especially in cases that fall into the grey area. But that is not to say we have to live with it.

Currently, there is no formal punishment for lapses in authors' judgement. Many authors of questionable integrity doubtless publish papers that go undetected. But for papers published in high-profile journals, the requirement of reproducibility ensures that errors, whether unintentional or deliberate, are eventually corrected. It is a self-healing process, although one that can take a long time — especially if a mistake can only be revealed with the aid of new technology. Such was the case for the Piltdown Man, hailed in 1912 as the missing link between humans and apes, and only proved 40 years later to be a collection of modern human and orang-utan bones and some fossil teeth. Last month, Nature published a corrigendum to a paper published 13 years earlier: improved image-analysis techniques enabled experts to show that different sets of data taken on the same sample were not treated to the same background subtraction, despite the authors' assurances to the contrary at the time (Nature 444, 129; 2006).

Fortunately, only a minority of papers need to be corrected or retracted. Existing safeguards such as peer review and replication do preserve the accuracy of the scientific record in the long term. Even so, we should always be thinking about ways to make the system more robust: requiring a statement of each author's contribution might be one of them. But the single most important factor on which progress relies is still the training of each individual to respect the honesty and integrity that the scientific process requires.