The tide is turning against the journal impact factor, one of publishing’s most contentious metrics, and its outsized influence on science.

Calculated by various companies and promoted by publishers, journal impact factors (JIFs) are a measure of the average number of citations that articles published by a journal in the previous two years have received in the current year.
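
In practice, the calculation is a simple ratio. As a rough illustration only (the real figure depends on Thomson Reuters’ counts of citations and of ‘citable items’, and the numbers below are invented), a 2015 impact factor amounts to:

```python
# Illustrative sketch of a journal impact factor; all numbers are hypothetical.
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """JIF for year Y: citations received in Y to articles the journal published
    in Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 38,100 citations in 2015 to 1,000 articles from 2013-14.
print(impact_factor(38_100, 1_000))  # -> 38.1
```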

They were designed to indicate the quality of journals, but researchers often use the metric to assess the quality of individual papers — and even, in some cases, their authors.

Now, a paper posted to the preprint server bioRxiv on 5 July (ref. 1), authored by senior employees at several leading science publishers (including Nature’s owner, Springer Nature), calls on journals to downplay the figure in favour of a metric that captures the range of citations that a journal’s articles attract.

And in an editorial that will appear on 11 July in eight of its journals, the American Society for Microbiology in Washington DC will announce plans to remove the impact factor from its journals and website, as well as from marketing and advertising.

“To me, what’s essential is to purge the conversation of the impact factor,” says ASM chief executive Stefano Bertuzzi, a prominent critic of the metric. “We want to make it so tacky that people will be embarrassed just to mention it.”

Bertuzzi was formerly the executive director of the American Society for Cell Biology, which banned the mention of impact factors from its annual meeting.

Brace for impact

Heidi Siegel, a spokesperson for business-analytics firm Thomson Reuters, which publishes the JIF, says that the measure is a broad-brush indicator of a journal’s output and should not be used as a proxy for the quality of any single paper or its authors. “We believe it is important to have a measure of the impact of the journal as a whole, and this is what the JIF does,” says Siegel.

But many scientists, funders and journals do not use it that way, notes Stephen Curry, a structural biologist at Imperial College London who is lead author on the bioRxiv preprint paper. Many researchers evaluate papers by the impact factor of the journals in which they appear, he worries, and impact factor can also influence decisions made by university hiring committees and funding agencies.

[Figure: distributions of citations to articles published in 2013–14 in 11 journals. Source: Figure 1 in Ref. 1]

Past research suggests that such uses are inappropriate. To emphasize some of the limitations of the impact factor, Curry’s team plotted the distribution of citations for articles published in 2013–14 in 11 journals, including Science, Nature, eLife and three Public Library of Science (PLoS) journals; these are the citations used to calculate the 2015 impact factors. Curry’s co-authors include senior employees at Springer Nature, eLife, PLoS, the Royal Society (which publishes several journals) and EMBO Press, as well as Marcia McNutt, who stepped down on 1 July from her role as editor-in-chief of Science.

Most of the papers garnered fewer citations than their journal’s impact factor: 74.8% of Nature articles were cited fewer times than its impact factor of 38.1, and 75.5% of Science papers were cited fewer than 35 times in two years (its impact factor was 34.7). PLoS Genetics had the lowest proportion of such papers, with 65.3% of its articles cited fewer times than its impact factor of 6.7.

A small number of highly cited papers explain this disconnect: because the impact factor is an average, a few outliers pull it well above the citation count of a typical paper. Nature’s most cited paper in the analysis was referenced 905 times and Science’s 694 times. PLoS ONE’s most cited paper accrued 114 citations, against the journal’s impact factor of 3.1.
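
The arithmetic behind that disconnect is easy to reproduce. A minimal sketch with invented citation counts (not data from the preprint) shows how a single heavily cited paper drags the mean, and hence the impact factor, above what a typical paper receives:

```python
# Invented citation counts for a hypothetical journal: one outlier inflates the mean.
import statistics

citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 8, 12, 120]

mean = statistics.mean(citations)      # ~13.4, pulled up by the 120-citation outlier
median = statistics.median(citations)  # 3.0, closer to a "typical" paper
share_below_mean = sum(c < mean for c in citations) / len(citations)

print(f"mean {mean:.1f}, median {median}, {share_below_mean:.0%} of papers below the mean")
```

With these made-up numbers, 92% of the papers sit below the mean, mirroring the pattern that Curry’s team found in real journals.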

A measure of change

Some journals, such as those published by the Royal Society and EMBO Press, already publicize their citation distributions. Curry and his fellow authors explicitly recommend that other publishers play down their impact factors and instead emphasize citation distribution curves such as those that his team generated, because these provide a more informative snapshot of a journal’s standing. The preprint includes step-by-step instructions for journals to calculate their own distributions.
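
In essence, that recipe amounts to binning each article’s two-year citation count and plotting a histogram. A minimal sketch, assuming the per-article counts have already been exported from a citation index (the preprint describes how to obtain them), might look like:

```python
# Minimal citation-distribution plot; the counts below are invented for illustration.
import matplotlib.pyplot as plt

citation_counts = [0, 0, 1, 2, 2, 3, 5, 7, 9, 14, 33, 120]  # hypothetical journal

plt.hist(citation_counts, bins=range(0, max(citation_counts) + 2))
plt.xlabel("Citations per article (two-year window)")
plt.ylabel("Number of articles")
plt.title("Citation distribution (illustrative data)")
plt.show()
```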

A spokesperson for Nature says that the journal will soon update its websites “to cover a broader range of metrics”, and a representative of Science says that the journal will consider the proposal once the preprint is published in a peer-reviewed journal.

Ludo Waltman, a bibliometrics researcher at Leiden University in the Netherlands, says that citation distributions are more relevant than impact factors for high-stakes decisions, such as hiring and promotion. But he is wary of doing away with impact factors entirely; they can be useful for researchers who are trying to decide which among a pile of papers to read, for instance.

“Denying the value of impact factors in this situation essentially means that we deny the value of the entire journal publishing system and of all the work done by journal editors and peer reviewers to carry out quality control,” Waltman says. “To me, this doesn’t make sense.”

Anti-impact-factor crusaders say that it will take time to diminish the influence of the figure, let alone exile it. “This is a cultural thing,” says Bertuzzi, “and it takes pressure from multiple points to change behaviour”.