As some readers may have seen, the 2014 impact factor (IF) for Nature Methods, released last month, moved up six points from the year before. We take the opportunity here to reflect on this metric.

The IF was devised decades ago as a way of measuring the scholarly influence of the research published in a journal. That the IF is subject to bias has been discussed in many contexts, including in the recent UK report on the use of metrics in research assessment. The journal IF for a particular year reports the average number of citations received that year by papers the journal published in the previous two years. The denominator of that average counts only the items deemed citable by Thomson Reuters, the organization that calculates the metric, so the IF depends on that judgment. The IF also varies by field, is affected by editorial policies—publishing a lot of reviews can have a positive effect, for example—and reflects citation practices good and ill (such as citing a paper just because others have done so, contributing to a positive feedback effect, rather than because the citation is apt).
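As a worked illustration of this arithmetic, the sketch below computes an IF from entirely invented counts; the real numerator and denominator, including the judgment of which items count as citable, come from Thomson Reuters' database.

```python
# Hypothetical illustration of the impact-factor arithmetic.
# All counts are invented; the real figures come from Thomson Reuters' database.

citations_2014_to_2012_papers = 3100  # citations received in 2014 by items published in 2012
citations_2014_to_2013_papers = 2900  # citations received in 2014 by items published in 2013
citable_items_2012 = 180              # items from 2012 deemed 'citable'
citable_items_2013 = 170              # items from 2013 deemed 'citable'

if_2014 = (citations_2014_to_2012_papers + citations_2014_to_2013_papers) / (
    citable_items_2012 + citable_items_2013
)
print(f"hypothetical 2014 IF: {if_2014:.1f}")  # ~17.1 with these made-up numbers
```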

Importantly, the IF is skewed by very highly cited papers, as illustrated by our move upward this year. Of the papers that contribute to our 2014 IF, a very small number received orders of magnitude more citations than the rest. Because the IF is an arithmetic mean, those few papers pull the figure well above what the typical paper receives. They are followed by a healthy cohort of well-but-not-exorbitantly-cited papers and then a long tail of papers with many fewer citations. The degree of skew can vary, but this distribution essentially holds for most journals.
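A toy calculation shows why such a distribution matters for an arithmetic mean like the IF; the citation counts below are invented solely to mimic this kind of skew, and the comparison with the median makes the point.

```python
# Invented citation counts for a two-year window: a couple of extreme outliers,
# a cohort of solidly cited papers, and a long tail with very few citations.
from statistics import mean, median

citations = [950, 400] + [40] * 20 + [8] * 60 + [1] * 40

print(f"mean (what the IF reports):   {mean(citations):.1f}")    # ~21.9
print(f"median (the 'typical' paper): {median(citations):.1f}")  # 8.0
```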

An immediate conclusion that should be drawn from this, as pointed out before, is that the IF of the journals in which a scientist publishes should not be the criterion on which his or her scientific contributions are judged, for instance when making hiring or funding decisions. Nor is the IF a reliable guide to the citations of any individual paper in the journal. What the IF can do is capture, with caveats, whether a journal has within the past two years published papers that are interesting and influential for many scientists. And for all its flaws, the IF is reported to be a good predictor of a journal's five-year median number of citations, a measure that is far less sensitive to outliers.

At Nature Methods, we are not apologetic about trying to publish the best and most widely interesting methods papers. If that translates into high citations, so be it. We are certainly proud of the papers at the top of our 2014 citation list. But we cannot emphasize enough that we are no less proud of the (far more numerous) papers that fall elsewhere within the distribution. A paper may not be highly cited in the two years following its publication for reasons that have little to do with its quality or importance: to name just a few, it may apply to a relatively small community, not be trendy, or even be ahead of its time.

Methods papers may well have additional quirks in their citation patterns. We try to strike a balance between publishing papers that push the boundary of what is possible and those that have immediate practical value, and to do this across many fields. This reflects the fact that methods development is a multistage process: early conceptual breakthroughs that open up a new regime—the ability to image at super-resolution, for instance—must be followed by iterative refinement and democratization into a robust, generally usable method.

In rough correspondence to these stages, methods papers may be cited either to illustrate what is possible in a given field or because the method is actually used in later research. The latter type of citation is arguably at least as important as the former, but the IF makes no distinction between them. Furthermore, it may well take longer than two years to implement a method, use it for a biological study and then publish the results. Though early citation is reportedly a good indicator of long-term citation in some fields, the two-year IF will not fully capture the impact of methods papers that actually get used. The IF also does not report on other aspects of impact—whether a method is commercialized, for instance, or whether it has other societal effects.

Many additional metrics are now available. The five-year Eigenfactor and SCImago's three-year SJR indicator, for example, analyze the citation network and weight the links to papers published in a journal. A flurry of 'altmetrics' tracks the online attention a paper receives immediately upon publication. The h-index and its variants are more apposite, as metrics go, for measuring the output of an individual scientist. Although these may avoid some of the problems of the IF, they are still just tools, each with its own flaws.
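For concreteness, the h-index is conventionally defined as the largest number h for which a scientist has h papers with at least h citations each; the sketch below, run on invented citation counts, shows that calculation.

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Invented example: these five citation counts give an h-index of 3.
print(h_index([25, 8, 5, 3, 0]))  # -> 3
```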

It is a truism, which nonetheless bears repeating, that no metric should be wielded without judgment. This depends, in turn, on knowing what the metric reports and what its assumptions and biases are. Just as for any other method.