The scientific community is sunk in one of its periodic bouts of angst over research fraud. After the Korean researcher Woo Suk Hwang's cloning work turned out to be spectacularly false, recent weeks have brought revelations that range from fabricated patients in a Norwegian cancer study to doubts over RNA work in Japan (see page 514).

With high-profile cases putting the reputations of journals — and science as a whole — on the line, is there more that editors can or should do to prevent such embarrassments?

Whenever a published paper is exposed as a fake, editors are wont to repeat that peer review is not capable of catching fraudulent science. After a Norwegian study published in The Lancet was found to be based on imaginary patients (see Nature 439, 248–249; 2006), the journal's editor, Richard Horton, pointed out: “Short of me flying to Oslo and checking out every entry on the computer, there is really no way for me to detect the fraud.”

And other groups seem to agree that the primary responsibility for determining whether a paper ought to be shelved in fiction or non-fiction should not rest with journals. This opinion is based partly on the patchy staffing and funding of most journals, many of which are volunteer-run society publications. “Journals don't have the resources or the expertise,” says Mary Scheetz, director of extramural research at the Maryland-based Office of Research Integrity, which investigates ethical violations in work funded by the US National Institutes of Health.

In addition, journals have very little disciplinary power — the worst they can do is refuse to publish work, or publish a retraction. The responsibility for investigating allegations lies mainly with institutions and funding agencies that pay for the work, points out James Kroll, who examines misconduct at the US National Science Foundation.

But in the past few years, journal editors have been taking a more proactive approach to dealing with fraud, and exploring what they can do with the resources they have.

Examining every paper submitted for fabrication would be pretty much impossible. “It would be an astronomically expensive and difficult thing,” says Drummond Rennie, deputy editor of the Journal of the American Medical Association. “It would take months; we are talking hundreds of thousands, and sometimes millions of dollars.”

Stephen Evans, a statistician at the London School of Hygiene and Tropical Medicine, occasionally analyses papers in which the raw data are suspect. Tricks include looking for ‘digit preference’ (the human tendency to round towards 0s and 5s) or checking whether the data show a plausible amount of variance. “It is very difficult to invent data that has the right variability,” says Evans. But he agrees that the time and expense make checking every study “totally impractical”.
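
To make the idea concrete, here is a minimal sketch of what a digit-preference screen might look like. It is not Evans's actual method; the function name, the significance threshold and the use of SciPy's chi-square test are illustrative assumptions. Fabricated or heavily rounded measurements tend to over-represent terminal 0s and 5s, which a test against a uniform distribution of last digits can flag.

```python
# Hypothetical sketch of a 'digit preference' screen (not any journal's
# actual tool). Assumes the raw data arrive as plain decimal numbers.
from collections import Counter
from scipy.stats import chisquare

def last_digit_test(values, alpha=0.01):
    """Chi-square test of the terminal digits against a uniform distribution.

    Returns (statistic, p-value, suspicious), where suspicious is True if
    the digits deviate from uniformity at the chosen alpha level.
    """
    # Take the last digit of each value (ignoring sign and decimal point).
    digits = [int(str(abs(v)).replace('.', '')[-1]) for v in values]
    counts = Counter(digits)
    observed = [counts.get(d, 0) for d in range(10)]
    stat, p = chisquare(observed)  # default expectation is uniform
    return stat, p, p < alpha

# Example: readings that all end in 0 or 5 betray human rounding.
readings = [120, 125, 130, 120, 115, 125, 120, 130, 125, 120,
            115, 125, 120, 125, 130, 120, 125, 115, 120, 125]
print(last_digit_test(readings))  # tiny p-value -> flagged as suspicious
```

A real screen would of course need domain knowledge: genuinely rounded clinical measurements (blood pressure, say) legitimately cluster on round numbers, so a flag is a prompt for scrutiny, not proof of fraud.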

Paper trail: statistical tests may detect fake data, but checking every submission is impractical.

Instead, journals are investigating the potential of automated computer searches on submitted data, which could be incorporated into the review process with minimal time and effort. One idea catching on is the introduction of screens to catch unacceptable image manipulation (see ‘Forensic software traces tweaks to images’). Editors are also exploring text-comparison software to help pick up plagiarism (see Nature 435, 258–259; 2005).
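
The text-comparison idea can be illustrated with a toy example. The sketch below scores the overlap between a submission and a prior paper using word five-gram ‘shingles’; the function names and the threshold of what counts as ‘high’ are assumptions for illustration, and production plagiarism screens work over large indexed corpora with hashing rather than pairwise comparison.

```python
# Toy illustration of text-comparison screening: Jaccard similarity of
# word 5-grams between two documents. Not a description of any real product.
import re

def shingles(text, n=5):
    """Return the set of n-word 'shingles' (overlapping word tuples)."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=5):
    """Fraction of shared shingles; 0.0 = disjoint, 1.0 = identical."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

submission = ("We report a novel mechanism by which small RNAs "
              "silence gene expression in vivo.")
prior_work = ("Here we report a novel mechanism by which small RNAs "
              "silence gene expression in vivo.")
print(f"5-gram overlap: {jaccard(submission, prior_work):.2f}")
# A score near 1.0 would flag the pair for an editor to inspect.
```

Long shingles make the measure robust: two papers can legitimately share common five-word phrases, but sharing a large fraction of them is hard to explain innocently.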

“As information technology becomes more sophisticated, I think you are going to see more journals adding new tools to their screening processes,” says Scheetz.


Another shift is that journal editors are now more likely to challenge papers that they think are suspicious, rather than quietly rejecting them. According to Harvey Marcovitch, head of the London-based Committee on Publication Ethics (COPE), the old way was to “find some excuse not to publish it”. But the 200 or so journals that have signed up to COPE's code of conduct are now committed to going further. “Even if you wouldn't accept a paper if it was completely clean, you have an absolute duty to inquire,” says Marcovitch. “If you are not satisfied with the authors' responses, you have an absolute duty to go to the institution and ask them to investigate it.”

Most would agree with that sentiment, but there are practical problems. In Britain, for example, editors are particularly loath to accuse anyone of fraud because the country's tough libel laws mean that they risk being taken to court. “Often what you do is ask for more and more clarification in the hope that something will turn up,” says Marcovitch. A common step is to request the raw data, although this can be a headache for journals. “What do you do if you are suspicious about a paper, you ask to see the data and you get 25 cardboard boxes, 4 CDs and would have to hire a biostatistician for three months?” asks Marcovitch.

One simple step — which Scheetz says is taken surprisingly seldom — is to include misconduct policies in a journal's instructions to authors. “If something is suspicious, it puts the journal in a stronger position. Editors can request the raw data if it says they can in the instructions to authors,” she explains.

Journals are also starting to request that researchers carry out their own checks before even submitting a paper. Nature now advises authors to include independent verification for certain cloning papers, for example. And the Journal of the American Medical Association requires that industry-funded trials go through independent data analysis.

Journals are increasingly developing policies together, through programmes such as COPE, or using common policies drafted by groups such as the World Association of Medical Editors, based in Chicago, Illinois. The US Council of Science Editors, in Reston, Virginia, is currently working on a report on publication ethics. And Nature, Science and Cell are thinking about working up policies between them, according to Linda Miller, US executive editor of Nature.

Ultimately, if a journal does uncover evidence of fraud, it has to rely on the researchers' institution or funding agency to investigate fully. But this depends on such bodies having the will and authority to do so. When the British Medical Journal tried to get someone to investigate the work of cardiologist Ram Singh of Halberg Hospital and Research Institute in Moradabad, India, for example, no institution or scientific body could be persuaded to make a judgment on the case. Singh went on to publish similar work in The Lancet. In the end, both journals published expressions of concern, but did not feel able to retract the papers. And in an ongoing case involving RNA researcher Kazunari Taira, the University of Tokyo seems unlikely to get to the bottom of whether suspicious data were faked, because it does not have the authority to make a full inquiry.