Today, a growing frustration among researchers is that the impact of their contribution to science is assessed mostly on the basis of out-of-date mechanisms such as the impact factor and citation counts. This discontent comes as we reach a turning point in the history of science publishing, at which the essence of the peer-review process has been called into question.

Indeed, the drive to find alternative metrics is a symptom of a community in which research evaluation is not functioning well. A new movement called altmetrics — eloquently described in a manifesto1 published in 2010, and arguably a variation on the theme of what is referred to as webometrics or social-media metrics — revisits how a scientist's worth is measured. Rather than relying solely on peer-reviewed journal articles, alternative metrics range from other types of research output to a researcher's reputation built through their footprint on the social web.

A flurry of tracking tools applied to research information has already emerged. For example, a software solution called Total-Impact includes social-media factors as well as preprints, datasets, presentation slides and other output formats, in addition to PLoS-style article-level metrics that reflect how an article is being read, discussed and cited. In parallel, a reputation-based tool called Plum Analytics promises to profile researchers in relation to their institution, their work and those who engage with their findings. Meanwhile, altmetric.com tracks online conversations around scientific articles.

These tools claim to exploit the tantalizing possibilities offered by web 2.0 technology. Ultimately, they could help to monitor in almost real time how new research findings are being read, cited, used and transformed into practical applications. By adjusting the research-evaluation process to the technological reality of web 2.0, important research results could, in principle, be replicated and confirmed much more quickly than before.

Yet, two years after the publication of the altmetrics manifesto, little has changed in academia. Social media has not been taken up as an integral part of the academic measurement of scientific achievement, and there remains considerable scepticism about its real value. Resistance to such change is rooted in the fact that these new evaluation methods have not been sufficiently validated to be ready for adoption, as outlined in a recent report from the SURF Foundation2.

Figure 1: The emergence of altmetrics is a welcome addition to existing scholarly evaluation.

Indeed, social-media-output metrics may need further refinement before they can identify truly significant research results. By giving as much prominence to amateurs as to experts, as argued by internet entrepreneur Andrew Keen3, the web exerts distorting effects on our cultural standards, and research does not escape this general distortion.

As a result, altmetrics seem to fall short of established research-evaluation standards. It is worth, however, distinguishing between shorter-term and longer-term metrics. The former, embodied by blog posts, news pieces, tweets and likes, have a short lifespan and decay fast, reflecting the transient nature of popularity. By contrast, longer-term online metrics such as download, reader and comment numbers, albeit accumulated at a much slower pace, may be more meaningful. They could act as a proxy for quality while going beyond the traditional h-index or impact-factor metrics.

What is more, using popularity as an evaluation criterion may have some merit. Research that is popular online could be useful, for instance, in suggesting which subjects are topical, or in communicating scientific progress to wider society.

But popularity in social-media circles is not a reflection of quality. With choice made easy at the click of a button, the web induces a herd behaviour whereby users can be swayed by fashions in research topics more easily than they would be offline. And such trends develop at an exponential pace compared with the way fashionable topics shape a journal editor's choices. Worse, there can be negative reasons why regular chatter turns into virtual clamour, particularly around controversial papers, which typically receive very few citations because of their limited scientific quality.

Taking a step back, any new metrics introduced today may not have time to be validated and gain acceptance. Instead, they would constantly need to evolve in line with the rapid way in which research information is managed4. A few years from now, scientific findings may be found in specialized archives attended only by field specialists. Comments from readers of such archives could be considered sufficiently refined metrics to form part of an alternative assessment system. They would have the advantage of emanating from a select subgroup of peers with a strong interest in the topic; a filter that existing social-media sites cannot offer.

Although spontaneous reviews from readers and novel altmetrics are welcome complementary evaluation tools, they will not soon replace a thorough assessment of the scientific quality of papers and scientists through peer review by selected experts.