Top of the stack: studies have disagreed about whether higher-ranked grant applications reliably lead to work that accrues more citations and patents. Credit: Altrendo images/Getty

Grant applications that are ranked more highly by peer reviewers go on to yield more patents and highly cited papers, an analysis of more than 130,000 funded grants finds.

The study contradicts the widely held belief — supported by some previous findings — that there is scant connection between reviewers’ scores of a research proposal and its subsequent impact.

According to the results, published on 23 April in the journal Science [1], the connection holds even when controlling for other factors that might explain the link, such as whether highly rated applicants were already big names in their field. This suggests that peer reviewers do not just rate well-known scientists more highly (which could explain both highly rated grant applications and future well-cited papers), but also ‘add value’ by giving higher scores to better ideas.

“This doesn’t say that bad luck and dumb reviewers never happen, but it does say the reviewers are making meaningful decisions,” says Adam Jaffe, director of the Motu Economic and Public Policy Research Institute in Wellington, New Zealand.

But Michael Lauer, director of the cardiovascular-sciences division at the US National Heart, Lung, and Blood Institute, says the effects are “modest”. “Despite our best efforts, our ability to predict how well a grant is going to do is not that good,” he says. Lauer has published three analyses that found no relation between peer-review scores and later citations [2,3,4].

Top grant today, impact tomorrow?

The study authors — economists Danielle Li at Harvard University in Cambridge, Massachusetts, and Leila Agha at Boston University in Massachusetts — obtained peer-review scores for 137,215 grants funded by the US National Institutes of Health (NIH) from 1980 to 2008, as well as the patents, publications and citations that followed those grants.

Each grant was given a score that reflected its percentile ranking among the other grant applications reviewed at the time. The researchers found that a drop of one standard deviation in ranking (just over 10 percentile points) was associated with 19% fewer high-impact publications, 15% fewer citations and 14% fewer patents.
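For a rough sense of what those effect sizes mean in practice, the sketch below applies them to some invented baseline figures; the baselines are illustrative assumptions, not numbers from the study.

```python
# Back-of-the-envelope illustration of the reported effect sizes.
# The baseline outcome counts below are invented for illustration;
# only the percentage drops come from the study.
effects = {
    "high-impact publications": 0.19,
    "citations": 0.15,
    "patents": 0.14,
}

baseline = {  # assumed outcomes for a grant at a reference ranking
    "high-impact publications": 10.0,
    "citations": 100.0,
    "patents": 2.0,
}

# A grant ranked one standard deviation lower (~10 percentile points)
# would be expected to yield proportionally fewer of each outcome.
for outcome, drop in effects.items():
    expected = baseline[outcome] * (1 - drop)
    print(f"{outcome}: {baseline[outcome]:g} -> {expected:g} ({drop:.0%} fewer)")
```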

The researchers wondered whether the connection between ranking and results arose because peer reviewers gave better scores to applicants with impressive research records, and those applicants then tended to produce better research anyway. But when they adjusted for applicants’ track records and other such effects, the connection between peer-review ranking and later outcomes still held.
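In spirit, that adjustment resembles regressing grant outcomes on peer-review rank while controlling for measures of the applicant’s prior record. Here is a minimal sketch with toy data and hypothetical column names, not the authors’ actual specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: one row per funded grant. Column names and values are
# illustrative, not taken from the study.
grants = pd.DataFrame({
    "citations":       [120, 45, 300, 60, 210, 15],
    "percentile_rank": [2, 30, 1, 25, 5, 40],    # lower = better score
    "prior_pubs":      [50, 10, 80, 12, 40, 5],  # applicant track record
    "prior_citations": [900, 150, 2000, 200, 700, 60],
})

# Regress an outcome on peer-review rank, controlling for the
# applicant's prior record.
model = smf.ols(
    "citations ~ percentile_rank + prior_pubs + prior_citations",
    data=grants,
).fit()
print(model.params)
```

If the coefficient on percentile_rank stays negative after the controls are added, the scores carry information beyond the applicant’s reputation.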

Li says that the results show that human peer review does seem to generate insight about the future prospects of grant proposals. She thinks that other studies did not find such correlations because they examined fewer grants over a shorter period.

That does not mean that the peer-review process is perfect, says Agha. “We are not saying that this is the best of all possible worlds or that peer review never makes a mistake or can’t improve.”

No good predictors

Still, others think that peer review is too imprecise to ferret out the best research. There is “tremendous overlap” between peer-review scores in a given percentile and later measures of impact in the analysis, says Ferric Fang, a microbiologist at the University of Washington in Seattle who has studied the effectiveness of peer-review systems. What is more, he says, there is no way to accurately assess the potential impact of unsuccessful grant applications: even measures such as citations are not ideal. “We’re measuring what we can measure,” says Fang. “Just because a grant has a small number of citations arising from it doesn’t mean it’s not important.”

The analysis also may not bear much resemblance to the current funding situation at the NIH, where success rates are low and researchers worry that review committees cannot detect slight differences in quality among the most impressive grants. “There are a lot of meritorious grants that could be highly productive, and it ends up being the luck of the draw,” Fang says. The grants in the analysis were mostly submitted before NIH grant success rates started plunging around 2003, he points out.

Jeremy Berg, a biologist at the University of Pittsburgh in Pennsylvania who is a former director of the NIH National Institute of General Medical Sciences, says that the correlations should not be extended to conclude that any one highly ranked grant is better than a lower-ranked one.

Lauer adds that the differing results from different studies pale beside what he considers the true advance: developing the science of research funding. “A really critical message here is that it’s wonderful to see the issue of grant peer review subject to rigorous analysis,” he says. “The way we’re going to get better at research funding is by using the rigorous tools of science.”