The recent examples of papers retracted from high-impact journals, as well as the resignation of a cancer researcher at Duke University after suspicions of scientific misconduct, have highlighted yet again the need to promote ethical standards in scientific research. Misconduct is detrimental to scientific progress in many ways: it can result in honest researchers being disfavoured in the competition for grants, and careers may be tainted through association with fraudulent research. Moreover, it creates a bottleneck, as published fabricated or falsified data must be identified and disregarded. Cases of researcher misconduct, particularly in areas with potential societal implications, such as stem cells, cancer research or climate change, can also lead to extensive negative coverage in the press and can reduce confidence in the scientific process.

The extent of scientific misconduct, and whether it is increasing, is difficult to estimate. A meta-analysis published in 2009 covering 18 fraud surveys, six of which targeted biomedical scientists, suggests that it may be more common than we suspect. Although only 2% of those participating admitted to having falsified, fabricated or modified data at least once themselves, an alarming 14% reported having observed this behaviour in colleagues (D. Fanelli, PLoS ONE 4, e5738; 2009).

Still, cherry-picking data to selectively report only those results that support a desired outcome may be more common than the deliberate fabrication of data during experimental set-up and data acquisition. Admittedly, given the constraints of data presentation in a typical molecular or cell biology paper, an effort must be made to construct a narrative and present only the findings that are directly relevant to the central claims of a study. But massaging the data so that they support a favoured hypothesis straddles the fine line between sloppy science and scientific misconduct. Poor experimental design and inadequately controlled or statistically questionable experiments can produce erroneous data, but if there is no discernible intent to deceive, these deficiencies would not be considered misconduct.

How do we deal with cases of scientific misconduct and potential data manipulation? The peer review process is instrumental as a first roadblock to publishing sloppy science, but reviewers can also detect more serious problems, and editors at this journal will follow up on concerns of potential data manipulation with the authors. Nature journals also have clear guidelines on data presentation that should allow authors to avoid some of the common pitfalls associated with overzealous data beautification (see Guide to Authors). Examples of image processing that our guidelines deem unacceptable can also be revealed during the checks that manuscripts undergo before formal acceptance, as well as during the production process itself. In all such cases, authors are called on for clarification (Guide to Authors: calling all authors!). In cases of extreme and rampant beautification, for example if a study contains multiple instances of data from distinct experiments patched together to create more convincing western blot panels or immunofluorescence images, the journal reserves the right not to consider the study further if our confidence in the core conclusions has been eroded. Nature journals have a general policy of following up on suspicions of data manipulation and fraud, even when contacted by anonymous sources. Considering the potential for damage to authors' careers, these matters are handled with complete confidentiality. If our initial investigations confirm suspicions of fraud, the matter can be referred to the relevant institutional body for research integrity for further investigation.

Although journals can do much to promote integrity and transparency in the practice and presentation of research by developing and implementing rigorous standards for data presentation, author contributions and statements of authors' conflicts of interest, it is not our mission to police the research community. National organizations, such as the Office of Research Integrity (ORI) in the US and the UK Research Integrity Office, work to support research integrity, and some funding bodies, such as the ESF and the HHMI, have detailed policies on scientific misconduct. The ESF has produced a code of practice for Europe, and a similar document with brief international guidelines was agreed on at the 2nd World Conference on Research Integrity (www.singaporestatement.org). This document is meant to raise awareness of science ethics and to challenge governments and organizations to develop policies that promote research integrity. It stresses the importance of reporting and responding to irresponsible research practices, but also the need to create a research environment that fosters academic honesty.

In the US, institutions are obliged to report confirmed instances of misconduct in government-funded research to the ORI, although, worryingly, a report from 2008 suggests that most cases go unreported. Research institutes should have procedures for responding to researcher misconduct, and many in the US and Europe already do, but it is equally important to support good scientific practice through clear policies and education. At many universities in Europe and the US, a course in research ethics is mandatory for PhD students, and one would hope that such courses become a fixture of all PhD programme curricula.

It is ultimately also the responsibility of the senior investigator to create a laboratory environment that provides a strong foundation in best practice in research and research ethics, and to be a compelling role model for trainees. Encouraging lab members to compete with each other or to undercut each other's efforts is counterproductive. Most importantly, the scientists at the bench who generate and process the data must keep their wits about them under the pressure to 'publish or perish', and remember that irreproducible findings will be disregarded in the long run.