The allure of the journal impact factor holds firm, despite its flaws

“Science needs a healthy dose of its own medicine, yet refuses to take the pill.”

29 August 2019

Jon Brock

Credit: Getty Images

Many researchers still see the journal impact factor (JIF) as a key metric for promotions and tenure, despite concerns that it’s a flawed measure of a researcher’s value.

A journal’s impact factor measures the average number of citations its recently published articles receive. As critics have noted, it is often driven by a small number of highly cited articles, is vulnerable to being gamed by editorial policy, and is not calculated in a transparent way. Nonetheless, it remains an integral part of the Review, Promotion and Tenure (RPT) process at many academic institutions.
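For reference, the standard two-year formula (this is Clarivate’s published definition, not one spelled out in the studies discussed here) is:

\[
\mathrm{JIF}_{Y} = \frac{\text{citations in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{citable items published in years } Y-1 \text{ and } Y-2}
\]

As a worked example, a journal whose 2016–2017 articles were cited 500 times in 2018, and which published 200 citable items across those two years, would have a 2018 impact factor of 500 / 200 = 2.5. The transparency criticism lies partly in the denominator: which items count as “citable” is not independently verifiable.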

A recent survey of 338 researchers from 55 universities in the United States and Canada showed that more than one-third (36%) consider JIFs to be “very valued” for promotions and tenure, and 27% said they were “very important” when deciding where to submit articles.

The survey was led by Meredith Niles, assistant professor in the Department of Nutrition and Food Sciences at the University of Vermont, and was part of a larger study, published as a preprint on bioRxiv, investigating how researchers feel about the JIF.

It found that respondents’ age and career status had no bearing on how valuable they perceived the JIF to be in the RPT process. But non-tenured and younger researchers, for whom RPT matters most, put more weight on JIFs when deciding where to publish.

The respondents also believed that their peers placed more importance on the JIF than they did themselves. Niles describes this as a form of “illusory superiority”, whereby people tend to view themselves more favourably than they view others.

This result indicates the need for “honest conversations” about what really matters when communicating academic research, Niles says.

“If we don’t actually care about the JIF as much as factors such as readership and sharing the results of our work with people who can most advance our field, then let’s stop pretending we care and treating it as the gold standard.”

A call for research assessment reform

The survey follows a study from the same project, published in eLife last month, which analysed the text of 864 RPT documents from 129 North American universities.

Overall, 30 of the institutions (23%) referred to impact factors or related phrases such as “high impact journal” in at least one of their RPT documents. That figure rose to 40% for research-intensive institutions.

“Faculty often talk about impact factors as featuring heavily in evaluations, but we weren’t aware of any studies that had tried to quantify their use,” says lead author Erin McKiernan, a professor in the Biomedical Physics programme at the National Autonomous University of Mexico.

Among the 30 universities that mentioned impact factors, the majority (26, or 87%) supported their use. Just four (13%) cautioned against relying on them.

McKiernan notes that the analyses did not include possible indirect references to JIFs such as “top-tier journal”. “We may be seeing only the tip of the iceberg,” she says.

According to Björn Brembs, a neuroscientist at the University of Regensburg in Germany, who reviewed the study for eLife, the continuing deference to the JIF shows how scientists can be highly critical within their own subject domain, yet “gullible and evidence-resistant” when evaluating productivity.

“This work shows just how much science is in dire need of a healthy dose of its own medicine, and yet refuses to take the pill,” he says.

Anna Hatch, community manager of the San Francisco Declaration on Research Assessment (DORA), which cautions against the use of journal-level metrics in academic evaluations, adds that the results provide an important benchmark by which to measure progress in research assessment reform.

“I hope the findings inspire faculty, department chairs, and other university administrators to examine their RPT documents and, if necessary, have frank discussions about how to best evaluate researchers without relying on proxy measures of quality and impact,” she says.
