To develop and apply adequate metrics (Nature 464, 488–489; 2010), a proper understanding of the methodology of measuring and of the phenomenon to be measured is essential.

Statisticians and historians of science may therefore be key contributors to the analysis of science metrics. Both groups urge caution in applying such metrics (see, for example, B. Lightman et al. Isis 100, 1–3; 2009).

When substantiating claims about the prominence of researchers, science historians draw on publication numbers, citation numbers, invitations, editorial duties, awards, promotions, grant funding, membership of academies, honorary titles, institutional affiliations and links to other prominent scientists. But they rarely use these measures alone: rather, they serve as indicators that supplement and support thorough analysis (H. Kragh An Introduction to the Historiography of Science Cambridge Univ. Press, 1987).

Statisticians would add that most of the currently popular measures lack a properly defined model of the relation between variables, pay little attention to confounding factors, and ignore the uncertainty of the measures and how that uncertainty affects the rankings derived from them (R. Adler et al. Statist. Sci. 24, 1–14; 2009).

In addition, the feedback mechanisms that arise when scientists change their publishing and citing behaviour to maximize their metric scores will pose a major challenge to the development of realistic models. These challenges will intensify when past performance is used to predict future success.

Awareness of these shortcomings is crucial for any endeavour that aims to improve science metrics.