With human nature as it is, the only surprising thing about scientific misconduct should be that it continues to surprise us. After all, scientists are human — with the same range of creativity, hopes, fears, anxieties and weaknesses as anyone else. Why should we be more surprised when scientists behave unethically than, say, those in business or politics? Surprised or not, we should acknowledge that scientific misconduct is happening, will always happen, and probably always has happened. With an increased awareness, however, we can all be more vigilant and perhaps better equipped to prevent it happening.

In the physical sciences, the rise and fall of Jan Hendrik Schön should serve as a compelling case study in how not to conduct research1. Put briefly, Schön fabricated or re-used data on a staggering scale, which resulted in the retraction of eight papers from Science and seven from Nature, among other journals. Less headline-grabbing, but still worrying, were the 60 falsified structures published in Acta Crystallographica E. In this case2, two groups from Jinggangshan University used the same bona fide crystallographic data to generate swathes of bogus 'structures' by, for example, 'swapping' copper atoms for zinc, or CH2 for NH groups. And completing the trinity of research misconduct (fabrication, falsification and plagiarism) are the 70 questionable articles published by Pattium Chiranjeevi, from Sri Venkateswara University3. These papers, published in a range of journals, included work purportedly done on instruments that did not exist, and were on occasion even submitted to four journals at once.

These are just three relatively well-known examples. For more, the interested reader is directed to the Retraction Watch blog4, which, contrary to its moderators' initial concerns that they would struggle to find enough examples to cover, has averaged around six posts per week since its inception in mid-2010. Of course, many of these retracted papers are not the result of unethical behaviour, but a worrying proportion are.

Very often the reaction to the discovery of these cases is 'Why on earth did journal X publish THAT?'. However, when it comes to outright fabrication or falsification, editors and peer reviewers must take the data they are presented with at face value. Reviewers certainly require a healthy scepticism — further experiments are often requested to back up those already reported, for example — but they cannot be expected to assess whether the data itself is real or not. There are analytical tools available that can help identify suspicious data: images that have been tampered with can be identified using forensic software5, and it is worth noting that the falsified crystal structures mentioned previously were detected using automated routines increasingly common in crystallography. It is, however, hard to see how these would have helped in the case of the determined fabrication that Schön engaged in.

Journals have a much greater stake in cases of plagiarism, and sophisticated tools are being aimed against it. Nature Chemistry and the other journals in Nature Publishing Group are part of CrossCheck6, and can use this tool to check the text of submitted articles against a large database of published papers. As the publication ethics section of our author guidelines7 clearly states, “[...] when large chunks of text have been cut-and-pasted, [s]uch manuscripts would not be considered for publication in a Nature journal.”

Of course, journals also have an important role in many other cases beyond plagiarism and cannot disclaim all responsibility. Publishers should ensure that data is made as widely available as possible so that any interested party can examine it closely, and should also respond quickly to requests from readers. The outcome of any action a journal does take — such as correcting or retracting a paper — should be transparent, freely available and disseminated in the same way as the original paper. Investigations into data fabrication or manipulation are beyond the remit of publishers, and should be conducted by the institutions at which the research was carried out, in conjunction with the agencies that funded the work. Nevertheless, publishers should actively support such proceedings.

One of the fundamental tenets of science is that experiments should be reproducible (to within an acceptable degree of error). 'Peer review' is broader than the pre-publication assessment that most people are referring to when they use the phrase. The true test comes once every aspect of a discovery can be scrutinized by one's peers — and then built on. In spite of automated data-checkers and text-comparison tools, physically and independently recreating an experiment remains the best way to validate data.

So what should be done to deter misconduct? As with lab safety, this is something that is best dealt with by researchers themselves — a shared awareness of correct research ethics needs to be fostered and passed on to the next generation. This should be reinforced by formal training from departments and institutions, which must have their own policies and guidelines for handling allegations of misconduct, as well as for expected ethical behaviour. But most of all, it needs to be put into everyday practice, with mentors setting an example of high standards.

When so much of academic success is measured by publications, the motivation behind these transgressions is relatively clear. Even more explicit than influencing hiring, tenure or promotion decisions are the financial rewards offered by universities in China, which can exceed $100,000 for publishing papers in Nature or Science8. No matter what the incentives, it is less clear how anyone expects to get away with large-scale scientific fraud for any length of time.

Ultimately, science and the scientific record are self-correcting — errors are either spotted and put right or play no part in future understanding or discoveries — but only at the expense of much unnecessary work and potential anguish for those prepared to stand up and put things straight. No-one should have to put their career on the line — or on hold — to investigate and report deliberately incorrect results. It is surely far better to act preventatively by insisting on higher standards at every step of research.