A little-known algorithm that scores the influence of research articles has become an important grant-management tool at the world’s largest biomedical funding agency, the US National Institutes of Health (NIH).

In 2015, the NIH’s Office for Portfolio Analysis (OPA) in Bethesda, Maryland, devised the tool to compare the performance of articles from different fields more fairly. Now, one of the NIH’s biggest institutes is using the metric — the Relative Citation Ratio, or RCR — to identify whether some types of grant deliver more bang for their buck. Other funders have adopted the RCR, which the agency offers freely online. In the United Kingdom, biomedical charity the Wellcome Trust is using the RCR to analyse its grant outcomes; in Italy, Fondazione Telethon, a charity that supports research into genetic diseases, is testing the RCR as a way to evaluate its funding initiatives.

“It’s getting a very good reception both inside and outside the NIH,” says George Santangelo, director of the OPA. Santangelo’s team, an 18-strong group of scientists, statisticians and data wranglers, was founded five years ago to devise tools to analyse NIH funding opportunities.

Asked to measure which NIH research has had the most influence, the team chose not to judge articles simply by the journal in which they were published. That approach gives articles in highly cited journals higher scores, but it has acknowledged flaws. An important study might be underestimated because it was not published in an elite journal, for instance. Simply counting citations, meanwhile, fails to capture the idea that articles should be judged relative to similar papers: an algebra paper with a few dozen citations, for example, may have a greater impact in mathematics than a widely cited cancer study would have in oncology.

Algorithms that compare articles with others in their field are offered by commercial analysis firms such as Elsevier, but Santangelo’s team argues that its metric is technically as good, if not better — and, importantly, more accessible. (The NIH has posted help files and its full code online.) “No other metric we’ve seen is as transparent as RCR,” Santangelo says.

The algorithm is complex. It defines an article’s research ‘field’ as the cluster of papers with which it has been co-cited: a dynamic cohort that grows all the time. It then calculates the field’s background citation rate — the average number of citations that the field’s journals attract each year. After a few months of accrued citations, an article’s actual performance can be benchmarked against this background — although in some cases one has to wait a year, says Santangelo.
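As a rough illustration of that field-rate step (this is not the iCite implementation; the papers, journals and citation rates below are invented), a minimal Python sketch might look like this:

```python
from statistics import mean

def field_citation_rate(co_cited_papers, journal_citation_rates):
    """Approximate the background citation rate of an article's 'field'.

    co_cited_papers: the article's co-citation cluster, i.e. papers that
        have appeared alongside it in other reference lists.
    journal_citation_rates: average citations per article per year for
        each journal (illustrative values, not real data).
    """
    rates = [journal_citation_rates[paper["journal"]] for paper in co_cited_papers]
    return mean(rates)

# Hypothetical three-paper co-citation cluster; the cluster keeps growing
# as new papers co-cite the article, so this rate is recomputed over time.
cluster = [
    {"pmid": 111, "journal": "Journal A"},
    {"pmid": 222, "journal": "Journal B"},
    {"pmid": 333, "journal": "Journal C"},
]
journal_rates = {"Journal A": 4.1, "Journal B": 5.0, "Journal C": 3.6}

print(round(field_citation_rate(cluster, journal_rates), 2))  # 4.23
```

The real metric works from PubMed citation data rather than hand-entered journal rates, and the cluster is rebuilt as new co-citations accrue.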

To put this benchmarking in context, the team compares it to how NIH-funded papers in the same field and year performed (B. I. Hutchins et al. PLOS Biol. 14, e1002541; 2016). This boils everything down to a simple number, the RCR. An RCR of 1.0 means that an article has had exactly as much influence as the median NIH-funded paper in the same year and field; 2.0 means a paper has had twice as much influence, and so on (see ‘A measure of influence’). Anyone can upload a list of PubMed papers to a website called iCite to find out their RCR scores.
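A deliberately simplified sketch of that final ratio is shown below, under the same caveats: the published method benchmarks against NIH-funded papers from the same year in a more careful way, which is collapsed here into a single median for brevity, and all numbers are invented.

```python
from statistics import median

def relative_citation_ratio(article_citations_per_year, field_rate, benchmark_ratios):
    """Toy RCR: the article's field-normalized citation rate divided by the
    median of the same ratio for NIH-funded papers from the same year.
    """
    article_ratio = article_citations_per_year / field_rate
    return article_ratio / median(benchmark_ratios)

# Invented inputs: 8 citations a year against a background field rate of 4,
# benchmarked to an NIH cohort whose median field-normalized rate is 1.0.
nih_benchmark = [0.6, 0.9, 1.0, 1.3, 2.1]
print(relative_citation_ratio(8.0, 4.0, nih_benchmark))  # 2.0
```

With these invented inputs the article is cited at twice its field’s background rate, which corresponds to an RCR of 2.0, in line with the scale described above.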

Figure: A measure of influence. Source: ÜberResearch

The new metric has critics. “Our analysis shows that it is not better than other indicators,” says Lutz Bornmann, a bibliometric specialist at Germany’s Max Planck Society in Munich. The society has been using at least three other field-normalized metrics for several years to evaluate its institutions, but has no plans to adopt the RCR. It says that the metric is too complicated, and too restrictive because it has been applied only to the PubMed database, which contains largely biomedical papers, so it doesn’t work for analyses in the physical sciences.

The RCR, however, is starting to gain ground as an analysis tool. At the US National Institute of General Medical Sciences (NIGMS) in Bethesda, a team used the metric to compare the impact of large, multimillion-dollar ‘programme project’ grants — which fund teams of researchers — with that of smaller grants for individual principal investigators. Papers produced under both types of grant had similar scores. “It has helped us take a very hard look at our support for team science,” says NIGMS director Jon Lorsch.

Another question that the NIGMS asked was whether scientists who get more money produce better outcomes than those who get less funding. Again, when the RCR numbers were tallied, it turned out that more NIH money didn’t lead to higher-RCR papers. “So maybe we shouldn’t fund scientists who are already well-funded,” says Michael Lauer, deputy director for extramural research at the NIH.

The tool is also catching on outside the NIH. Jonathon Kram, a research analyst at the Wellcome Trust, says that his group uses the RCR to analyse grants, and to benchmark the performance of the trust’s funding schemes against other funders’ grants. Unlike other normalized metrics, he says, the RCR “has a transparent methodology and is available free”.

Software firm ÜberResearch in Cologne, Germany, has built a database of grants awarded by some 200 funders, and has begun publishing RCR scores for each publication in its grant database. “We use RCR to better judge the history of the researchers listed in our database,” says Stephen Leicht, a co-founder of ÜberResearch. (It is part-owned by Digital Science, a firm operated by the Holtzbrinck Publishing Group, which has a share in Nature’s publisher.)

And Fondazione Telethon, the Italian charity, says that it is testing the RCR and hopes to adopt it. “We are not going to use it to help decide funding decisions but more as a tool for analysis,” says Lucia Monaco, the charity’s chief scientific officer. “We want to make sure that every euro we invest is in excellent research.”