WORLD VIEW

How will you judge me if not by impact factor?

Stop saying that publication metrics don’t matter, and tell early-career researchers what does, says John Tregoning.
John Tregoning is a senior lecturer at Imperial College London, where he studies the immune response to viral infections.

Rumours among junior faculty members are that reports of the death of the impact factor are greatly exaggerated. In a round of funding earlier this year, my research output was described as being in “high-impact journals” by one reviewer and in “middle-tier journals” by another, with knock-on effects on the scores each gave my grant. It is not unheard of for people to be told that the only articles that count are the ones in journals whose impact factor exceeds some arbitrary value. Or, worse, that publishing in low-tier journals pollutes their CVs.

That’s true even at institutions that have signed on to the San Francisco Declaration on Research Assessment (DORA), which advocates replacing journal impact factors (JIFs) with something better and fairer.

Actually, pretty much everyone agrees that the use of the journal impact factor as the sole tool to evaluate research is a bad thing.

But for all the invective heaped on the JIF as a metric, no alternative has emerged. The activation energy to find something else is just too high. The JIF is wrong in so many ways, but it is so easy: a single number that lets you rank scientists and their output in the same way as experimental data. It is also quick — scanning a list of journals takes very little time — and deeply ingrained. And, viewed macroscopically, it is not entirely wrong: papers published in journals with higher impact factors tend, on average, to be better and more important than those in journals with lower ones.

We are told that the impact factor should no longer be used, but not told what to use instead. So where does that leave the early-career researcher eyeing the conventional academic track? Straddling uncertainty and the status quo. And stressed out and less productive as a result.

Ideally, just putting our research out there should be enough for people to descry our brilliance and promote it accordingly. But that is not how the system works.

My peers who have focused on getting articles in high-impact journals seem to have outperformed those with better social-media presence. But I am judging their success in part by their ability to publish papers in high-impact journals!

It’s no secret that doing great science does not necessarily overlap with having a great career. The current system masquerades as a meritocracy, but it is subjective, biased, built on personal networks and laced with blind luck. To succeed, we need to leverage our reputation, and the main tool we have for this is our research output. So we need to be strategic about where we place our work, to ensure that the right people notice it. To stay competitive, we need a map and time to navigate it.

The impact factor used to provide that map. For people with few publications, the nice thing about JIFs is that they are prospective rather than retrospective. JIFs give instant validation; h-indices and citation counts only build up over time, and early-career researchers have not yet had that time. With the old system, if you worked hard, you got your first-author papers into journals with a high impact factor. That brought tenure, keys to the executive toilet and (an ancient principal investigator once promised) lifelong happiness. Sure, this was a fickle route that favoured trainees lucky enough to find the rare laboratory with an on-ramp to the fast track. But at least we spent less time feeling lost.

Now, who knows what counts? What about that abstract that I naively submitted to a predatory journal when trying to get someone to pay for a trip to the United States? How does that tot up against a full paper in this journal? Or this spunky essay?

Although DORA is in my heart, impact factors are still on my mind.

Of course, I have some broad-brush suggestions to throw into the mix. There should be more than one route to your destination. There should also be more than one destination. We need to find ways to rate and recognize our broader contribution to the community (including public engagement, internal committees and teaching). In fact, the rising generation of scientists has a unique set of strengths that could make for a stronger scientific enterprise in the long term, if hiring committees thought to reward it. Those attributes include a more socially networked approach to doing science, plus a facility to use information technology to share data, methods and credit. Hiring and reviewing committees need to recognize that the old system is not ideal for selecting future scientific leadership.

Right now, however, I don’t think there is a need for more ideas about how to overhaul the game. I want more clarity on what the game actually is. It doesn’t have to be universal; it does have to be transparent. If different institutions play by different rules, that’s okay — I’ll work out a way to play to my strengths. But it is difficult to play when you don’t know the rules, harder still when the rules change each time you look for a new position. At this point, I would settle for impact factor as the least-bad option; at least it’s something.

Maybe the DORA advocates will figure out a fantastically fair way to gauge scientific output in five years, or ten. Maybe it will be holistic, broadly accepted, supportive and simple. That would be great, when it happens. In the meantime, confusion over how to judge scientific productivity is sapping scientific productivity. We need a quick fix, and the quickest fix is clarity.

Nature 558, 345 (2018)

doi: 10.1038/d41586-018-05467-5