Some researchers expect that grant proposals with high scores should have better odds of acceptance, but the MRC says that is not necessarily how it works. Credit: David Gould/Getty

Many researchers do little more than grumble if a funding agency declines their grant applications. But when the UK Medical Research Council (MRC) turned down two proposals from biochemist Ian Eperon — despite high scores from peer reviewers — he decided to find out how closely the agency follows the evaluations it invites from experts. After much chasing and a freedom-of-information request, he has received figures that surprised him: in 2013–14, the scores given to grant proposals in his field by outside reviewers had little impact on whether a proposal got funded.


The data provide a rare glimpse into the connection between peer review scores and grant success at the agency. But whether they illuminate any wider issue is up for debate. To Eperon, at the University of Leicester, UK, the data suggest that the MRC is disregarding the advice of outside experts, and that the agency needs to be fairer and more transparent in the way it assigns grants. Some researchers agree with him. But the MRC and other researchers say that the data show nothing of the sort and are not surprising. Because reviewers’ written critiques of grant proposals are weighed together with their scores, knowing the scores alone gives a very incomplete view of the way the agency uses expert opinion, they say.

Like many agencies, the MRC, which had a gross research expenditure of £845 million (US$1.3 billion) in 2013, asks outside experts to review and score grant proposals. It then dismisses some proposals at a ‘triage’ stage and passes the trimmed selection to a board — a small panel of scientists — to discuss which to fund. The scores are revealed to the applicants but otherwise kept confidential.

Data from 302 applications in the field of molecular and cellular medicine, which Eperon shared with Nature (see ‘MRC scores’), show that some highly rated proposals were dismissed at the triage stage. And at the board stage, granted and declined proposals had a nearly identical spread of scores. To Eperon, this is evidence that outside reviewers’ scores have little bearing on whether a proposal is ultimately successful. “I had some confidence that if work was scored highly by experts, there should be a good chance of it being funded,” he says.

Eperon says that the numbers raise questions about the selection process, which, in his view, should be more transparent.

David Bates, an oncologist at the University of Nottingham, agrees. He currently holds four project grants from the agency and says that the data made him “angry and disappointed”, although they chimed with his experience of the way his own proposals have been considered by MRC boards.

“Of the nine applications I've put in under the current scoring scheme, the three funded grants actually ranked in the bottom four for scores,” Bates says. “My five best-scoring grants didn't get funded, but three of my four worst did.”

Nathan Richardson, head of the molecular and cellular medicine portfolio at the MRC, says that the data do not show a disregard for external peer review, which he says the agency sees as “hugely valuable”. The MRC places greatest emphasis on reviewers’ written critiques, rather than their numerical grades, he says, and there is often a “disconnect” between a reviewer’s comments and his or her score. He says he is not surprised to see that the high-quality proposals that make it to the board cannot be distinguished by peer-review score alone.

Douglas Kell, a biologist at the University of Manchester and former head of the UK Biotechnology and Biological Sciences Research Council, says that in his view, there is nothing negative in the discovery that awarded grants do not follow referee scores. “This is because not every referee makes sensible comments (any academic will attest to that), nor gives scores that match their written comments — and the job of board members is precisely to moderate this. So looking at scores alone without context is likely to prove misleading,” Kell wrote in an email to Nature.

Richardson says that the MRC does not release reviewers’ comments, to preserve the confidentiality of the peer-review system.

A spokesperson for the UK Engineering and Physical Sciences Research Council (EPSRC) says that the council would expect review scores to correlate with funding. At the EPSRC, boards are explicitly told that their decisions cannot be fundamentally out of line with the advice of reviewers and their scores — a system adopted in 1994 “because we believed it to be inherently fairer”, the spokesperson says.

Data for the EPSRC and the other UK research councils were not immediately available, as Eperon's freedom-of-information request applied only to the MRC.

At the US National Institutes of Health, which uses a two-stage selection process similar to the MRC’s, peer reviewers’ scores are “clearly better” for the successful applications than for the ones that get rejected, says Ferric Fang, a microbiologist at the University of Washington School of Medicine in Seattle, who has studied the effectiveness of peer-review systems.

More broadly, says Fang, grant funding is something of a lottery. Studies[1,2] into these systems have found that reviewer scores cannot predict which projects will turn out to be more highly cited. “The bigger picture is that many applications judged to be highly meritorious by external reviewers are failing to receive funding,” he says. “That is a recipe for unhappiness in the scientific community, and a sad indicator of missed opportunities for society.”