Nature (2006) | doi:10.1038/nature04995

Quality and value: How can we get the best out of peer review?

A recipe for good peer review

Improving peer review depends on making its human aspects more humane. Journals need to ask the right reviewers to review the right articles, help them to do it quickly and thoroughly, make them feel happy to sign their reports, thank them, tell them how they did, and encourage wide recognition of what's too often a thankless task.

Spend three years on a research project – stay up all night writing the last draft of the paper – send it to a journal chosen by your boss. You hope that the peer reviewers will understand your work, be fair, and know what they're doing. But will they?

Peer review is the only quality control we have for approving research and publicizing its results in journals and at conferences. But it's a largely amateur process: too often poor at detecting errors, slow, expensive and unreliable. It's not much good at picking up ethics problems or scientific fraud. At its worst, it blocks innovation, is unreasonably biased and is open to abuse. All this is because peer review is a human process, an art rather than a science. The likelihood of two reviewers agreeing on an article is only slightly better than chance. It might take at least six reviewers to get a reliable decision, but few journals have the time or money for that.

How can journals improve the peer-review process? Being honest, helpful and reasonable to authors and reviewers gets the best out of them. At the BMJ (previously the British Medical Journal), we make the whole peer-review and publication process as transparent as possible: asking authors and reviewers to declare conflicts of interest; letting reviewers see research protocols as well as completed manuscripts; asking authors to provide supporting documents, links to their own websites and sometimes raw data; and even asking authors how they've responded to reviews from other journals that have previously rejected the work. Most important, we use an entirely open peer-review system in which authors and reviewers know each other's names and addresses, and reviewers cannot make separate comments to the editor or easily succumb to bias – for instance against unknown authors from non-prestigious institutions, or against women. Reviewers and authors like this system; only a handful of our many thousands of reviewers have stopped reviewing for us since we began it, and we have good evidence from a randomized controlled trial that it has no adverse effect on the quality of the reviews.

Journal editors need to pick the right reviewers the first time round to avoid wasting the reviewers', editors' and authors' time. A good online manuscript-tracking system, a large and well-managed database of reviewers, e-mailed invitations and strict deadlines for reviewers help a lot. So does choosing reviewers for their knowledge, expertise and currency, rather than their eminence.

You may say that it's all very well for a prestigious journal to insist on transparency and open peer review. It's harder for smaller specialist journals, whose editors rely on the goodwill of professors they don't want to upset. One fair and less threatening alternative to the widespread Kafkaesque system of unsigned reviews is closed ('double-blind') peer review, where neither authors' nor reviewers' names are revealed.

There are other obvious and feasible things that will improve any journal's peer-review system. Tell authors and reviewers what you want from them. Publish comprehensive and detailed instructions to authors, to help them provide everything reviewers need to see. Give reviewers clear briefs, including guidance on what to include in the review, how much effort to put into searching the literature to support their opinions, and what the editor finds most helpful.

Better still, ask reviewers what they want. Many of the BMJ's reviewers have asked for training in peer review, so we now offer a workshop that covers the evidence on peer review's pitfalls and provides exercises on what BMJ editors and authors need from reviewers. It isn't a critical-appraisal course: many of those already exist, and all reviewers should take one.

Reviewers have also told us they want feedback on their performance so that they can learn and improve. Any journal can send a reviewer the other reviewers' reports on the same article, along with information about the decision on publication. Journals could go further: at the BMJ we don't yet give reviewers a formal editorial rating of the quality of their reviews – perhaps we should.

What about rewards? For many years the BMJ has paid reviewers a token £50 for each report. Some reviewers say this is an administrative hassle, and that they'd rather have a free subscription or the much more lasting and meaningful reward of academic and public recognition for the work they do. Like many other journals, we publish reviewers' names annually. We also invite them to an annual party, give an award to the best reviewer of the year, and offer certificates that reviewers can give to their bosses and prospective employers to show how much reviewing they've done. But shouldn't the many hours of important work peer reviewers do each year be recognized more formally by interview panels and research-assessment exercises?

RELATED LINKS:

BMJ transparency policy

BMJ's training package for peer reviewers

The Connotea 'peer review debate' tag contains links to studies discussed in this article.

Trish Groves is deputy editor of the British Medical Journal, BMA House, Tavistock Square, London WC1H 9JR, UK. She can be contacted by e-mail.

