Nature Neuroscience 9, 149 (2006)

Can peer review police fraud?

Woo Suk Hwang made headlines late last year when his stem cell research was found to have been falsified. Hwang reported in Science that his group had successfully created 11 embryonic stem cell lines using DNA from patients. An investigation by Seoul National University concluded that the paper's data could not be verified, and Hwang has resigned. Science has since retracted two of his papers, and Nature has reexamined the accuracy of another paper from his group that was published there.

Media coverage of these events has suggested that high-profile journals encourage scientific fraud by cutting corners in peer review or by overruling referees to publish newsworthy results. Although it is understandable to conclude that, by accepting a paper, a journal's editors confer their authority to the findings and that, therefore, a portion of the responsibility for the work shifts from the author to the editor, we do not agree that the peer review system can or should detect deliberate fraud.

Science is a communal enterprise built on trust. Referees and editors generally take data at face value and assume that the authors have honestly reported and analyzed their results. Reviewers are asked to judge whether a report's conclusions are solid based on the data, not whether the data themselves are fraudulent. The system is not set up to work any other way; if editors and referees distrusted all authors and assumed that every result was potentially fake, few papers would be published. There are not enough known cases of outright fraud to justify the expense (in time, money or experimental animals) of requiring that all data be duplicated in independent laboratories.

In addition to these philosophical considerations, detecting well-executed fraud during peer review is nearly impossible in practice. Science is designed to be self-correcting, and because progress hinges so critically on earlier work, incorrect results of any sort (fabricated or just wrong) will be exposed over time. However, although replication is an essential part of the scientific enterprise, in practice few experiments are replicated exactly. This problem is compounded in the biological sciences, where biological variability can explain away discrepant results. Replication of results can help validate the science when it is suspect, but failure to replicate can rarely be used as proof of fraud; it is common for two honest scientists to get different results from the same experiment.

Reviewers often do spot things that do not seem quite right, but such difficulties commonly result from innocent mistakes. Moreover, as Eve Marder, the editor of the Journal of Neurophysiology points out, "Self-deception is very easy with our growing reliance on image processing." Authors may genuinely be misled into thinking that they have found a specific result, only to find that this is an artifact of the way they analyzed their data. Reviewers also frequently spot instances of self-plagiarism (for example, where a particular piece of data was reported in a previous publication by the authors), and the peer-review process tends to work well to correct poor scholarship.

Even when referees point out findings that seem too good to be true, unearthing the truth can be thorny. Gary Westbrook, the editor-in-chief of the Journal of Neuroscience, believes that journal editors have a responsibility to tackle the problem head on, asking the authors for an explanation and referring the matter back to the institution if questions remain, rather than just rejecting the paper. This, however, is a gray area, and it is very easy to smear someone's reputation. Because journal editors have no legal power and no access to the raw data, it is nearly impossible for them to determine whether fraud has occurred without the assistance of university or funding authorities.

Ultimately, clever fraud can be detected only by people working in the lab who have access to the raw data, or by other labs that later try to replicate the work. Most refutations are never published: a claim that cannot be replicated is often ignored, and few scientists bother to refute the authors publicly. Lowering the bar for publication of negative results, perhaps in online archives, would help to encourage scientists to publish such data.

As in any enterprise, it is impossible to prevent fraud in science entirely. Given the enormous career pressure that scientists face and the fierce competition for scarce resources, it is unsurprising that some would be tempted to cheat. Our society has glorified scientists as 'truth-seekers' above all else, and so we are outraged when they behave as mere mortals and falsify data for their own purposes. This honor system has generally worked; compared to the cases of corruption and misinformation that plague other fields such as government or journalism, there are very few instances of outright fraud in science. It is also a testament to the scientific process that when fraud is suspected, it is quickly exposed—Hwang's false results were exposed within a few months of the first allegation.

Journals can report authors to their institutions' ethics boards and retract papers found to be fraudulent, but ultimate responsibility lies with individual scientists who are authors of the publication and members of the laboratory. Senior scientists are perhaps the best line of defense—by educating students about research ethics and setting clear rules for data collection and analysis, they can help train future scientists to avoid the temptations of falsifying data. Good mentorship should include training all members of the lab to keep track of data from their labs and from their collaborators, evaluating raw data before publication and maintaining a culture in which reporting suspicious data is encouraged. Scientists should detail their individual contributions to a paper, as the Nature journals encourage them to do. All these approaches can help limit fraud but in the end, science cannot move forward without trust in other people's data.

View background material on Connotea at http://www.connotea.org/user/NatNeurosci/tag/editorialfeb06.
