So much science, so little time. Amid an ever-increasing mountain of research articles, data sets and other output, hard-pressed research funders and employers need shortcuts to identify and reward the work that matters. They have plenty of options: research impact is now recognized as a multidimensional affair.

The conventional measures of scholarly importance — citation metrics, publication in influential journals and the opinion of peers as expressed in letters and interviews — still loom large. But to those are now added metrics such as article downloads and views, and measures of importance beyond the academic realm, including influence on policy-makers or health and environment officials, effects on industry and the economy, and public outreach.

Researchers at the Center for the Study of Interdisciplinarity, part of the University of North Texas in Denton, this year came up with 56 measures of impact (see Nature 497, 439; 2013), including influence on curriculum creation, authorship of textbooks and success in surveys of colleagues’ esteem. Some of these measures are a little fanciful, but they demonstrate that it has never been easier for scientists to show off the various ways in which their work deserves attention — and funds.

That variety is worth celebrating, but it can lead to dizzying confusion. How are researchers and evaluators to choose between measures? In this issue, Nature looks at some traditional and emerging ways to track research quality (see page 287). Ultimately, it is for institutions and funders to choose their preferences, but in doing so they should take two important considerations into account.

First, it is important to be aware of the positive and negative effects of privileging certain measures.

For example, emphasizing that research is considered especially important if it is published in one of a few historically influential journals — Cell, Nature, Science — could be a laudable attempt to get scientists to think ambitiously about their research goals. But it can also result in excessive pressure to publish big claims, leading, for example, to problems of irreproducibility. (Nature’s position is that it has been publishing research using essentially the same criteria for decades; it is up to the scientific community and evaluators to decide how much importance they want to place on papers that appear in the journal.)

It is a mistake to consider a research paper important because it is published in a journal with a good citation record, as measured by its impact factor. As this publication has highlighted many times (see in particular Nature 435, 1003–1004; 2005), two articles in the same journal may have very different citation records. It is much better to focus on the citations, views or downloads of an individual article — and to recognize that these metrics vary between research disciplines.

In another example, emphasizing the economic impacts of research may usefully prompt scientists to think about how to justify their taxpayer-funded work, but it also risks distracting them with the lure of meaningless patents and ill-considered spin-out companies.

The second important consideration is the need for research evaluators to be explicit about the methods they use to measure impact. Openness is an essential part of earning trust. Evaluators should publish worked examples showing how they score assessments, and the reasoning behind such scores; even better would be, where possible, to publish the full data. Otherwise, researchers might rightfully feel suspicious (see, for example, writer Colin Macilwain’s scepticism towards performance metrics: Nature 500, 255; 2013).

When scientists rail against the ‘impact agenda’, their arguments sometimes founder on a confusion of terms: too often, such discussion devolves into attacks on misuse of the impact factor rather than into consideration of the full range of possible metrics. The journal citation measure gains misleading prominence simply because its name happens to include the word impact — a coincidence of terminology that can cloud the debate.

Arguments against impact metrics are strongest when they cite cases in which evaluators ignore the considerations outlined above: choosing metrics blindly, without sufficient thought for their pernicious effects, or being secretive or inconsistent about their methodologies. If evaluators are to earn the acceptance — rather than the scorn — of the scientists whose work they want to fund, they must pay attention to these concerns.