Peer review is ubiquitous in the scientific process, having a crucial role in decisions about grants, promotions and publications. The basic idea is simple: decision makers such as funding agency staff, department heads and journal editors ask independent experts for advice on the strengths and weaknesses of a grant proposal, promotion candidate or manuscript so that they can decide whether to say yes or no. “Peer review has an almost mythical significance in the community of scientists,” as one physicist has written1, although that does not mean it is perfect.

Peer review is also one of those processes that is meant to happen in the background, while the real business of doing research takes centre stage. But occasionally peer review moves into the spotlight, as happened recently when e-mails sent by researchers at the Climatic Research Unit (CRU) at the University of East Anglia (UEA) were hacked and published on the web in the run-up to the international Climate Change Conference in Copenhagen at the end of 2009. The 'climategate' controversy has resulted in millions of words being published on topics as diverse as tree-ring data and the Freedom of Information Act, but here we will focus on the debate about peer review at scientific journals that has been prompted by some of the e-mails.


Almost all scientific journals use peer review, but there are important differences between them. Many journals send most of the manuscripts they receive to external referees for peer review, only rejecting without review those that are clearly outside the scope of the journal or manifestly inappropriate. Other journals, including Nature Nanotechnology, employ professional editors to decide which manuscripts are sent to external referees: in general, these journals send only a minority of the manuscripts they receive (perhaps a third or fewer) to referees.

Peer review figured in climategate because a number of the e-mails concerned the publication process — both the publication of original research results in scientific journals and the compilation of an Intergovernmental Panel on Climate Change (IPCC) report that summarized the available research on climate change. In one of the most memorable phrases from the e-mails, CRU director Phil Jones, who was a lead author for a chapter in the IPCC report, wrote to a colleague that “I can't see either of these papers being in the next IPCC report. Kevin [the other lead author for this chapter] and I will keep them out somehow — even if we have to redefine what the peer review literature is!” (More of this e-mail can be read in ref. 2, along with other claims that CRU scientists had abused the peer review process to prevent certain papers from being published.)

In an equally memorable phrase3, a committee appointed by the UEA to examine the conduct of the CRU scientists concluded that the e-mails “reflect the rough and tumble of interaction in an area of science that has become heavily contested and where strongly opposed and aggressively expressed positions have been taken up on both sides.” This committee concluded that the rigour and honesty of the CRU scientists were not in doubt, and that there was no evidence that the conclusions of the IPCC had been undermined, although it criticized both the CRU and the UEA for a lack of openness.


The UEA report also contains a thoughtful essay on peer review by Richard Horton, Editor of The Lancet (a leading medical journal). “Peer review is a human process,” he concludes, “and so will always contain flaws, produce errors, and occasionally mislead.” And because the final decision on whether a paper is published or not rests with the editor, he explains that what editors seek from referees is a “powerful critique of the manuscript — testing each assumption, probing every method, questioning all results, and sceptically challenging interpretations and conclusions.” Even then, he continues, “peer review does not prove that a piece of research is true. The best it can do is say that, on the basis of a written account of what was done and some interrogation of the authors, the research seems on the face of it to be acceptable for publication.” As Horton points out, this is very different from the way that peer review is viewed by those outside the scientific community, although he is being overly pessimistic when he refers to peer review as “the main decision aid used by journals”.

Horton goes on to make a number of sensible recommendations on how peer review could be improved: develop ways to reduce unwanted bias; give formal training in peer review to all young scientists; share referee reports between journals to reduce the workload on referees and editors (see ref. 4 for an example of this in neuroscience); introduce new methods to resolve disputes; and carry out more research into peer review itself.

Given how difficult it is to prove that a piece of research is true without actually repeating it, perhaps peer review as practised today is the best way of ensuring that what is published conforms to the highest standards possible. One can discuss whether the present system requires minor or major revisions — although none of the recommendations made by Horton are trivial, they are not show-stoppers either — but there is no need to reject peer review as it stands.