Winston Churchill famously said that democracy is the worst system of government, except for all the others that have been tried. We take a similar attitude toward peer review. As scientists, we welcome experiments aimed at improving the process, but so far, these new approaches do not seem to be achieving better—or even much different—results than does traditional peer review.

The case against the system is familiar. Peer review can take a long time (and it can seem even longer). Some scientists feel it does not materially improve papers, and may even aid dissemination of bad science by giving authors a roadmap to help them 'paper over' the defects. “I'd rather read a paper, warts and all, and judge for myself how serious the problems are,” says Charles Stevens of the Salk Institute, who is an editor for the Proceedings of the National Academy of Sciences. Also, if a flawed paper is published in a high-profile journal, some complain that it is an uphill battle to make their criticisms heard.

Worst of all, many scientists are justifiably concerned about the excessive influence of the publication process on their careers. Publication record is often treated as a convenient proxy for scientific achievement in decisions on funding, hiring and promotion. At Nature Neuroscience, we have argued repeatedly against the use of journal impact factor in evaluating individual papers or careers1,2, and we encourage hiring committees and funding institutions to find better ways to assess achievement.

Is there a better way to evaluate scientific results? Some researchers, including Stevens, believe biologists should adopt the model used by physicists, in which drafts are deposited in preprint servers for peer commentary before being submitted for publication. (All Nature journals permit posting in preprint archives, which is not considered prior publication under our policies.) This approach, they say, would level the playing field and allow everyone to publicize their findings quickly, and to post critical commentary just as quickly. Once a critical mass of useful comments had accumulated, authors could improve the papers—or defend why they should stand as is.

Such a system could succeed only if scientists were willing and able to critically evaluate this mass of information shortly after it is posted. With over 200 journals publishing neuroscience-related research, there is a daunting amount to sift through. “Like looking for a needle in a haystack,” says Gordon Fishell, of New York University. For example, a PubMed search for “neurogenesis” brings up 478 papers published in 2004; “LTP” or “long-term potentiation” returns 529 hits. Faced with the demands of their own research, and without editors nagging them to return reviews, many scientists are likely to find they have more urgent priorities than commenting on the unfiltered output of their field.
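For readers who want to repeat this kind of tally, the following is a minimal sketch using Biopython's Entrez interface to NCBI's E-utilities. The query strings and the 2004 date restriction follow the text; the contact address is a placeholder, and the counts returned today will differ from the figures quoted above because PubMed is continually updated.

```python
# Rough illustration: count PubMed records for a query in a given year
# via NCBI's E-utilities (Biopython). Not the editorial's own method.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact address


def pubmed_count(query: str, year: int) -> int:
    """Return the number of PubMed records matching `query` published in `year`."""
    handle = Entrez.esearch(db="pubmed", term=f"({query}) AND {year}[pdat]", retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])


if __name__ == "__main__":
    for query in ("neurogenesis", "LTP OR long-term potentiation"):
        print(query, pubmed_count(query, 2004))
```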

Also unknown is how much papers would improve if revisions were no longer a requirement for publication. Under the current system, editors are responsible for insisting that authors address the concerns raised by experts, and virtually all papers reviewed at Nature Neuroscience are revised before publication. In one published study of the effectiveness of open commentary3, only 25% of authors voluntarily revised their papers in response to peer comments.

Some would argue that revisions are not always necessary, and that it would be adequate to have papers 'tagged' with comments about their potential shortcomings. Whether this is scientifically responsible is another matter: how would people outside the field evaluate the validity of the papers, or of the commentary? For biomedical research, this issue is particularly important for topics with clinical or social implications. The danger of unconfirmed or flawed findings being publicized as scientific fact would surely increase.

Although commentary cannot substitute for peer review, continuing discussion can be a valuable addition to a paper. The Faculty of 1000, an online subscription service, is a partial test of a peer commentary system. Selected experts contribute written and numerical evaluations of papers they deem of high quality, regardless of source. Among 2,500 recent recommendations in neuroscience, over 200 journals were represented, an indication that good papers are being highlighted wherever they are published. Closer examination, however, shows that two-thirds of these papers appeared in just 11 journals, each of which had 50 or more recommended papers. Within this subset, papers from high-profile journals tended to be rated more highly by the faculty; there was a tight correlation (R² = 0.93) between average score and the 2003 impact factor of the journal (see Supplementary Note online). Therefore, this form of post-publication commentary does not reveal any substantial mismatch between the results of peer-reviewed publication and the scientific opinion of the field.
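The R² quoted here is obtained in the conventional way, as the squared correlation between the two quantities. The sketch below shows the calculation on made-up numbers; the impact factors and mean scores are hypothetical placeholders, not the Faculty of 1000 data, and serve only to illustrate how such a value is computed.

```python
# Illustration only: the (impact factor, mean score) pairs are invented,
# not the actual 2003 impact factors or Faculty of 1000 evaluations.
import numpy as np

impact_factor = np.array([2.9, 4.7, 7.9, 14.1, 16.0, 27.0, 30.5])
mean_score = np.array([3.1, 3.3, 3.6, 4.3, 4.5, 5.4, 5.6])

# For a simple linear fit, R² is the square of the Pearson correlation.
r = np.corrcoef(impact_factor, mean_score)[0, 1]
print(f"R² = {r ** 2:.2f}")
```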

The peer-review process is not without its faults. Some papers that later turn out to be influential are rejected, and the need to make binary distinctions along a continuum may lead similar papers to be treated differently. However, there is no compelling evidence that another system would be more efficient or effective. Peer review can certainly be augmented with commentary, and we encourage such efforts. But even more importantly, we encourage those who make decisions about scientists' careers to evaluate the information in each paper, and any accompanying discussion, rather than simply noting the journal where it was published.