What’s wrong with the journal impact factor in 5 graphs

Scholars love to hate the journal impact factor, but how flawed is it?

  • Dalmeet Singh Chawla

Credit: Fanatic Studio / Alamy Stock Photo

3 April 2018


Since its launch around 40 years ago, the journal impact factor (JIF) has shaped academic behaviour. Originally presented by Eugene Garfield, founder of the Institute for Scientific Information (now Clarivate Analytics), to assist librarians in choosing which journals to subscribe to, the metric is also used to evaluate researchers and their work.

In recent years, many in the scholarly publishing community have voiced concerns about the metric’s shortcomings for this purpose. Information scientists Vincent Larivière and Cassidy Sugimoto quantitatively describe these flaws in a preprint, which will appear as a chapter of an upcoming book published by Springer Nature. Below are their top five concerns.

While Sugimoto and Larivière’s paper points out valid flaws in the JIF, the impact metric “should not be generally damned for its use in research evaluations,” says sociologist Lutz Bornmann of the Max Planck Society. The JIF remains a useful tool if one wants to know how successfully researchers or institutions publish in reputable journals, he says.

1. Front matter counts

In addition to research and review articles, which Clarivate calls “citable items”, journals also typically publish editorials, letters to the editor, news and obituaries. Only citable items enter the denominator of the impact factor calculation, but citations to these other article types still count towards the numerator, inflating journal citation counts when scholars choose to cite them. This asymmetry is the impact factor’s most critical flaw, says Larivière, who is based at the University of Montreal.
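The arithmetic behind this asymmetry is simple to sketch. The figures below are hypothetical, chosen only to show the effect of front-matter citations landing in the numerator while front matter itself is excluded from the denominator:

```python
# Hypothetical citation counts for one journal over a two-year window.
citations_to_citable_items = 300  # citations to research and review articles
citations_to_front_matter = 60    # citations to editorials, letters, news, obituaries
citable_items = 100               # only these enter the denominator

# JIF if only citations to citable items were counted:
jif_strict = citations_to_citable_items / citable_items

# JIF as computed, with front-matter citations included in the numerator:
jif_actual = (citations_to_citable_items + citations_to_front_matter) / citable_items

print(jif_strict, jif_actual)  # 3.0 3.6
```

With these made-up numbers, front-matter citations lift the score by 20% without a single additional citable item being cited.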


2. Short citation window

The impact factor is based on a two-year citation window, which makes it “ill-suited across all disciplines, as it covers only a small fraction of citations received over time,” Larivière and Sugimoto note in their analysis.

The average biomedical research paper, for example, takes around eight years after publication to accumulate 50% of its citations, with even longer lead times of 13 or more years for psychology and social science papers. (Clarivate’s annual Journal Citation Reports, which publish the JIF, also include impact factor values calculated over five-year windows, but the two-year window remains the gold standard among journal publishers and scholars.)


3. Not all papers are cited equally

Another problem is that a few popular papers can skew the calculation, says Larivière.

A journal’s impact factor is simply the average number of citations that papers published in the previous two years attracted in a given year. In the majority of journals indexed in 2016, some 70% of papers received fewer citations than the journal’s average.

Éric Archambault, founder of the bibliometrics firms Science-Metrix and 1science in Montreal, agrees that a few papers can create a “false impression” that a journal is highly cited. “Proper safeguards” should be in place when using the metric to compare journals, so as not to conflate the average with the typical score of articles, he says.


4. Disciplinary differences

The impact factor is also not effective for inter-field comparisons, since average JIF values vary across fields.

The mean impact factor for biology, for instance, is only 1.683, compared with 3.526 for biomedical research and 2.768 for chemistry.

Archambault argues that the JIF should not be used to compare fields without accounting for these differences.


5. Overall inflation

The average JIF has increased over the years, but this inflation is not acknowledged by journals that boast of increases in their value, notes Larivière.

In 1975, the top-scoring journal was the Journal of Experimental Medicine, with a JIF of 11.874. By 2016, the highest score, 187.040, belonged to CA: A Cancer Journal for Clinicians.

Several factors have contributed to the progressive rise, says Archambault, including increases in the number of scholarly journals and in the number of references per paper. “But this is a reflection of growth in quantity, not a growth in quality,” he adds.


Nature Index is editorially independent of its publisher, Springer Nature.