Thanks to a string of well-publicized, high-profile scientific fraud cases involving image manipulation, most biologists are now sensitized to the issue. Indeed, these days the conversation almost invariably turns to the topic at any conference or in any lab we visit. This is a good thing: we hope it will lead principal investigators to a better appreciation of their duty to teach PhD students about these matters, and to monitor experiments and check all data that leave their laboratories for publication against the original records. It should also reaffirm a fundamental premise of the scientific literature: that published data must represent the raw experimental observations in an unbiased manner. We have discussed reasons for the apparent proliferation of image manipulation before (see editorials, February 2006 and March 2006): most manipulations we come across are not outright fraud (that is, the fabrication of data that never existed), but are ill-conceived attempts to 'beautify' data to address a perceived need to present a clear-cut, 'black-and-white' story.

We have engaged in discussion with other journals, the community and the National Academy of Sciences about what, if anything, journals should do about the problem. In fact, the Academy recently set up a committee, chaired by Phillip Sharp and Dan Kleppner, charged with 'assuring the integrity of research data in the era of E-science', and convened a public workshop this month to discuss the matter. Many agree that journals do have a responsibility to maintain the accuracy of the scientific record. There is consensus, however, that it is the senior author's responsibility to guarantee that the data submitted for publication accurately represent the primary observations made. All Nature journals agree that it is neither the editor's nor the referee's role to act as 'data police'. Importantly, any attempt to fill such a role would be futile: digital-image manipulation can be rendered effectively untraceable with readily available tools in programs such as Photoshop, and it is equally unrealistic for journals to verify the primary data underlying graphs and tables.

Data misrepresentation can be committed just as easily at the experimentation stage as at the image-processing stage, and this is untraceable from outside the lab without independent reproduction of the experiments. It is also noteworthy that the deterrent effect of a data-police role remains unproven: The Journal of Cell Biology, one of the few journals to routinely screen all images before publication, found that implementing a screening procedure with a strict ban on the publication of manipulated images did not curb the rate of manipulation. An astonishing quarter of papers that are in principle acceptable for publication show signs of illicit manipulation in at least one panel, although the vast majority of cases are readily correctable.

Nevertheless, manipulated data continue to surface regularly in high-profile journals, and a more proactive approach at the editorial level is warranted. One step in this direction was the recent publication of our enhanced 'guide to authors', which contains explicit information about what is admissible and what is not (http://www.nature.com/authors/editorial_policies/image.html). Furthermore, we are actively engaging the community to educate authors about these standards and to refine them further. We encourage voluntary author-contribution declarations, which aim to enhance transparency and accountability (see editorial, March 2005). We have also investigated efficient methods to screen routinely for image manipulation, and have decided to implement a system of spot-checking conditionally accepted manuscripts across the Nature journals. The checks will cover randomly selected papers, as well as papers that editors and referees perceive as 'high risk'. The screening will be performed by our production staff, and any problems identified will be discussed in detail with the handling editor, who will consult the authors as appropriate. Finally, we will ask the corresponding author to confirm explicitly at the conditional acceptance stage that the figures in the submitted manuscript accurately represent the original data.

We are confident that this will help save a good number of manuscripts from damaging corrections or retractions. Indeed, we would hope that this new system would have caught the unfortunate manipulations that resulted in this month's retraction of two papers published in this journal in 2003. A formal investigation found that the numerous manipulations identified all fall into the 'beautification' category. As original data were supplied to support the conclusions of the papers, and as several independent publications have confirmed elements of the conclusions, the findings essentially seem to stand. Nevertheless, the frequency and severity of the manipulations uncovered ultimately necessitated a full retraction of both papers by the authors. Perhaps one legacy of these papers will be to serve as an educational tool.