Ideas and discoveries make research the exciting endeavour that it is, but probably the most satisfying reward for many scientists is recognition from their peers. However, impact and reputation are more than just an intangible bonus for years of hard work; when considered in tenure and funding decisions they can make or break academic careers.

To make such decisions more transparent and objective, some have called for scientific output to be measured. It may be debatable whether the value of scientific findings can be quantified at all. However, since Eugene Garfield founded the Institute for Scientific Information (ISI), at least one aspect of the academic process can be captured by numbers: the publication and citation of scientific work.

On the occasion of the International Year of Chemistry, analysts at Thomson Reuters — the company behind the ISI — recently tapped their citation index to extract a list of the top 100 materials scientists of the past decade1. At first glance, the numerical analysis seems capable of revealing superior performance: the top 10 researchers have each published a significant number of papers, which have received, on average, an impressive 100 citations or more. On closer investigation, however, it seems peculiar that 78% of the top 100 scientists work in nanotechnology. Materials science is certainly more diverse than that, and, in our view, there are many outstanding researchers who do not appear on the list. Another endeavour at Thomson Reuters — the annual citation-guided prediction of Nobel Prize winners2 — also suggests that citation analyses hardly reflect scientific acumen and impact across entire fields. Although the analysts correctly foresaw Novoselov's and Geim's 2010 prize for work on graphene, the total number of successful predictions since 2005 amounts to only 6 out of more than 50 'citation Nobelists' in chemistry and physics.

Citation network around the journal Nature. The visualization is based on Eigenfactor and Article Influence scores, which are determined using a ranking algorithm8. Credit: © CARL BERGSTROM/MORITZ STEFANER

The exercise makes clear that the complex process of nominating, evaluating and selecting researchers on the basis of their scientific impact — not only for the Nobel Prize but also for less prestigious awards or research grants — cannot be replaced by mere number crunching. As Pavel Exner, the recently elected Vice President of the European Research Council (ERC), told Nature Materials (page 478), peer review remains the best instrument that the €7.5-billion funding agency has for identifying outstanding researchers and proposals. However, he also acknowledged a growing interest in metrics, and their usefulness appears to be under discussion in the ERC's panels.

Until recently, the scientific output of individuals was evaluated based on various metrics, such as the total number of publications and citations or the impact factor of the journals they published in. In 2005, physicist Jorge Hirsch introduced the h-index to capture both the productivity and the impact of scientists in a single number3. To reach an h-index of h, a researcher needs h publications with at least h citations each, regardless of the journals in which they appeared.

Since its introduction, the h-index has received enormous attention, but Hirsch does not think this has affected the way in which scientists work: “In my view one positive development of the introduction and dissemination of the h-index would be that scientists become less concerned about publishing in the journals of highest impact factor, such as Nature or Science, as the h-index recognizes highly cited papers in any journal,” he told us by e-mail, “but I haven't seen evidence that this is happening.”

Even so, this metric is far from perfect. It may need to be refined further to discriminate between researchers working alone and those who publish with larger teams4. Furthermore, significant differences exist in the h-indices of scientists at different career stages and in different disciplines. The variation between fields, which results not only from their size but also from different scholarly practices, can likewise be corrected for5, but it remains doubtful whether citation metrics can ever fully reflect a researcher's impact.

On a practical level, any such metric will only be as comprehensive as the publication index it is based on, and citations accumulate only long after a paper has been published. In the age of digital publishing, more timely metrics might be required, such as online usage statistics. Data collected on the physics e-print archive arXiv.org suggest that the number of times a paper is downloaded correlates with its later success in getting cited6. Moreover, other forms of scholarly web content — such as preprints, published datasets or even blog entries and comments — could be evaluated alongside citations.

One point seems certain: the impact and reputation of scientists are defined by others building on their findings and appraising their work. Such 'citation' and 'peer review' mechanisms, whatever form they may take, will remain central to the scientific process. Social-media tools such as commenting and highlighting may not only serve as powerful filters for the scientific community; they may also offer ample opportunity to develop a refined, scalable system in which scholarly impact is defined and tracked online, and where it transcends simple single-number metrics. First steps in this direction have been taken by various publishers and the online service Faculty of 1000 (ref. 7) through download statistics and post-publication peer review, respectively. It will be exciting to see how this system evolves.