I question the worth of many of the altmetrics ('alternative metrics') that Heather Piwowar discusses for evaluating a researcher's total output (Nature 493, 159; 2013).

Altmetrics include, for example, the number of downloads, 'likes' or shares for a research-related YouTube video, tweet or blog post. Although these may provide insight into how such research 'products' have influenced the community and the public, they lack authority and credibility as a performance measure, not least because it is easy to cheat by creating multiple accounts.

Grant reviewers therefore cannot rely on such data as indicators of the value of the associated research products. In fact, the US National Science Foundation's new funding policy, on which Piwowar bases her arguments, requires research products to be in citable and accessible form, implying that only the number of citations in scientific journals will be taken into account when assessing the value of these products.

I believe that it is premature to integrate most altmetrics, apart from citation statistics, into research-assessment schemes at a time when the merits of publication quantity versus quality are still being debated (N. Haslam and S. M. Laham Eur. J. Soc. Psychol. 40, 216–220; 2010).