Metrics are intrinsically reductive and, as such, can be dangerous. Relying on them as a yardstick of performance, rather than as a pointer to underlying achievements and challenges, usually leads to pathological behaviour. The journal impact factor is just such a metric.

During a talk just over a decade ago, its co-creator, Eugene Garfield, compared his invention to nuclear energy. “I expected it to be used constructively while recognizing that in the wrong hands it might be abused,” he said. “It did not occur to me that ‘impact’ would one day become so controversial.”

As readers of Nature probably know, a journal's impact factor measures the average number of citations received in one year by the papers it published over the preceding two years. Journals do not calculate their impact factor directly — it is calculated and published by Thomson Reuters.
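
In other words, the calculation is a simple ratio. The sketch below, with invented paper names and citation counts, is illustrative only; the published figures also depend on editorial decisions about which items count as 'citable'.

```python
# Illustrative only: invented citation counts for a handful of papers
# published in 2013 and 2014, counting the citations they received in 2015.
citations_in_2015 = {
    "paper_2013_a": 40,
    "paper_2013_b": 3,
    "paper_2014_a": 120,
    "paper_2014_b": 0,
}

# The two-year impact factor is the arithmetic mean of these counts.
impact_factor_2015 = sum(citations_in_2015.values()) / len(citations_in_2015)
print(f"Two-year impact factor: {impact_factor_2015:.2f}")  # 40.75
```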

Publishers have long celebrated strong impact factors. The metric is, after all, one measure of their output’s significance — as far as it goes.

But the impact factor is crude and misleading. It effectively undervalues papers in disciplines that are slow-burning or have lower characteristic citation rates. Being an arithmetic mean, it gives disproportionate weight to a few very highly cited papers, and it falsely implies that papers with only a few citations are relatively unimportant.

These shortcomings are well known, but that has not prevented scientists, funders and universities from relying too heavily on impact factors, or publishers (Nature’s included, in the past) from promoting them excessively. As a result, researchers use the impact factor to help them decide which journals to submit their work to — to an extent that is undermining good science. The resulting pressures and disappointments are nothing but demoralizing, and in badly run labs they can encourage sloppy research that, for example, fails to test assumptions thoroughly or to take all of the data into account before making big claims.

The most pernicious aspect of this culture, as Nature has pointed out in the past, has been the practice of using journal impact factors to assess individual researchers’ achievements. For example, when compiling a shortlist from several hundred job applicants, how easy it is to rule out anyone without a paper in a high-impact-factor journal on their CV.

What can be done to counter such a metrics-obsessed culture?

First, an approach that some have applied in the past and whose time has surely come. Applicants for any job, promotion or funding should be asked to include a short summary of what they consider their achievements to be, rather than just to list their publications. This may sound simplistic, but some who have tried it find that it properly focuses attention on the candidate rather than on journals.

Second, journals need to be more diverse in how they display their performance. Accordingly, Nature has updated its online journal metrics page to include an array of additional bibliometric data.

As part of this update, for Nature, the Nature journals and Scientific Reports, we have calculated the two-year median — the median number of citations that articles published in 2013 and 2014 received in 2015. Unlike the mean, the median is not distorted by outliers. (For Nature, for example, the two-year median is 24, compared with a two-year impact factor of 38.) For details, see go.nature.com/2arq7om.

Providing these extra metrics will not address the problem, noted above, of differing citation characteristics between disciplines. Nor will it make much of a dent in impact-factor obsessions. But we hope that it will at least provide a better means of assessing our output, and put the impact factor into better perspective.

However, whether you are assessing journals or researchers, nothing beats reading the papers and forming your own opinion.