Most scientists have horror stories to tell about how a journal brutally rejected their landmark paper. Now researchers have taken a more rigorous approach to evaluating peer review, by tracking the fate of more than 1,000 papers that were submitted ten years ago to the Annals of Internal Medicine, the British Medical Journal and The Lancet.

Using subsequent citations as a proxy for quality, the team found that the journals' peer-review processes weeded out dross and published solid research. But they failed, quite spectacularly, to pick up the papers that went on to garner the most citations.


“The shocking thing to me was that the top 14 papers had all been rejected, one of them twice,” says Kyle Siler, a sociologist at the University of Toronto in Canada, who led the study1. The work was published on 22 December in the Proceedings of the National Academy of Sciences.

Siler and his team tapped into a database of manuscripts and reviewer reports held by the University of California, San Francisco, that had been used in previous studies of the peer-review process. They found that out of 1,008 submitted manuscripts, just 62 were published in one of the three journals. Of the rejected papers, 757 were eventually published elsewhere, and the remaining 189 either underwent radical transformation or disappeared without a trace.

By giving reviewers’ reports a score representing their level of enthusiasm, the researchers found that papers that received better appraisals generally got more citations. “The gatekeepers did a good job on the whole,” says Siler.

A closer look

But the team also found that 772 of the manuscripts were ‘desk rejected’ by at least one of the journals — meaning they were not even sent out for peer review — and that 12 out of the 15 most-cited papers suffered this fate. “This raises the question: are they scared of unconventional research?” says Siler. Given the time and resources involved in peer review, he suggests, top journals that accept just a small percentage of the papers they receive can afford to be risk averse.

“The market dynamics that are at work right now tend to a certain blandness,” agrees Michèle Lamont, a sociologist at Harvard University in Cambridge, Massachusetts, whose book How Professors Think explores how academics assess the quality of others’ work2. “And although editors may be well informed about who to turn to for reviews, they don’t necessarily have a good nose for what is truly creative.”

Fiona Godlee, editor-in-chief of the British Medical Journal, points out that these desk rejections were not necessarily mistakes. A paper that reports an excellent biotechnology study, for example, might be rejected simply because it falls outside the journal’s clinical focus. “The decision-making is very much about relevance to readers,” she says. “And I fear chasing citations as a way forward.”

Siler acknowledges that using citations as a proxy for quality poses some problems. A recent survey by Nature found that the world’s most-cited scientific papers tended to be about widely used methods rather than paradigm-shifting breakthroughs.

One alternative approach would be to assess the quality of the published papers by conducting a fresh round of peer review, and perhaps even find out whether they were replicated or translated into the clinic successfully, suggests Daniele Fanelli, an evolutionary biologist currently at the University of Montreal in Canada who studies publication bias. “But that's a lot of work,” he says ruefully.

For now, peer review is clearly here to stay. “Many people think the system is full of weaknesses,” says Lamont. “It’s not perfect, but it’s the best we have.”