This past November, a ferocious ongoing argument about the merits of a paper by Chen et al., published earlier in Cell1, surfaced in the popular science press, first in the British Times Higher Education (THE)2 and then in a blog hosted by The Scientist (http://www.the-scientist.com/blog/display/55240/). The paper is concerned with planar cell polarity (PCP) signaling, a pathway that regulates the organization of epithelial tissues. Several competing groups protest that it largely confirms previous work and does not appropriately cite the earlier literature. Predictably, the Chen et al. controversy has again ignited a debate on the flaws of editor-managed anonymous peer review in the web feedback pages of both THE and The Scientist. We maintain, however, that despite occasional unfortunate lapses, anonymous peer review remains the best quality-control process that we have.

Few would claim that peer review, as currently practiced, is without drawbacks, and the brouhaha surrounding the Chen et al. controversy highlights several of these problems. The corresponding author of the Chen et al. paper, according to THE, dodges any substantive criticism by stating that the study passed the strict peer review process at Cell and that any concerns about the review process should be directed to the journal. The same story quotes a Cell editor assuring readers that all the peer referees were indeed experts on the topic. Regardless of the merits of this particular case, such controversies leave readers wondering how a panel of expert referees could possibly have missed glaring problems in a paper. The answer is simply that, as in any human endeavor, peer review cannot be perfect; mistakes do happen.

What, then, can we do to minimize the occurrence and fallout of such mistakes? One of the Chen et al. critics asks for an 'ombudsman' to serve as an avenue of appeal for those who feel wronged by a publication. Such an ombudsman would also need to solicit advice from experts, essentially acting as another editor for the appeal. Other participants in the discussion suggest, once again, that journals ought to facilitate post-publication commenting as a mechanism for flagging shoddy papers. Indeed, several journals are experimenting with such online reader input, but participation varies substantially. Overall, it remains quite low, and our own efforts to invite post-publication feedback have not been encouraging. We trust that the scientific community will eventually learn to take web-based discussion seriously, but, for the time being, post-publication commenting can neither replace nor even reliably supplement traditional pre-publication peer review. Tellingly, the most strident critics of Chen et al. declined to make their case in online comments directly linked to the offending paper, preferring instead to publish a rebuttal in another journal3.

Anonymous peer review remains the best practical option that we have, and it seems to be here to stay. How, then, can we optimize it? A good review serves two purposes: first and foremost, to enable the journal editors to make a fair decision regarding a manuscript, and second, to explain to the authors whether and how their study should be improved to the point where it may become acceptable. As far as possible, referees should explain the weaknesses of a paper to its authors, so that rejected authors understand the basis for the decision. If, for example, a referee considers a manuscript to be insufficiently novel, they should support this opinion with a citation from the literature. The editors conduct a first-approximation triage of which manuscripts might meet the bar for the journal, but we rely on the far greater expertise of our peer reviewers to tell us whether a particular study really moves the field forward, whether it is technically and statistically sound, and whether it cites preceding work appropriately. Good peer reviewers may recommend more experiments to bolster a manuscript's hypothesis and conclusions, and should also tell us how much these experiments are likely to improve the paper and how difficult they are likely to be. Finally, a good referee has to be fair. Referees should decline to review if they feel any conflict of interest, be it scientific, financial or personal.

Reviewing well requires a substantial amount of time and energy, and it is generally acknowledged that being a good referee brings no tangible benefit for career advancement. Usually, referees are willing to invest considerable unrecognized effort as a civic duty; they owe their colleagues the same kind of feedback that they expect for their own manuscripts. It is not easy to be a good referee, nor are good reviewing skills ever taught in any systematic manner. We provide a guide to referees on our website (http://www.nature.com/neuro/referees/index.html) and welcome any suggestions on how we could further improve the process.

Critically, all of the players in the system (authors, editors and referees) share a collective responsibility for improving manuscripts submitted for publication. The daily pursuit of data, results, funds and jobs can sometimes obscure the larger view, but of course we all know that we draw from deep wells of preceding literature, build on the achievements of previous generations and bear a solemn responsibility to maintain the highest possible level of honesty and quality in our work. Shoddy authorship, editorship or peer review pollutes the scientific record, causes colleagues to waste time and money trying to replicate findings, and can do serious damage to public trust in science. We take this opportunity to extend a heartfelt 'Thank you!' to the many conscientious expert referees without whose help we could not function effectively.