In an ideal world, scientists applying for grants or jobs would be judged holistically — balancing quantitative measures such as their publication record against indications of their potential from recommendation letters, personal interactions and other activities. So even if a candidate had not generated many papers, it would count in their favour if the few they had published had received positive post-publication review (comments, tweets and blog posts, for instance). Also favourable would be a tendency to ask insightful questions at talks that lead to valuable discussions and new experiments, or a willingness to share reagents and expertise with their colleagues. That would be ideal. But that is not the world in which most scientists live.

Instead, hiring committees and grant reviewers sweat through hundreds of applications, often with only enough time to give each submission a cursory glance. A Nature poll found that most administrators believe that metrics — quantifiable measures of scientists' achievements — matter less in job decisions than scientists often think (see Nature 465, 860–862; 2010). But with so little time per application, thorough peer review is often simply not possible.

As a result, evaluators are increasingly turning to metrics, such as total citation count and the h-index, a measure of both the quality and quantity of papers (a scientist has an h-index of 12 if they have published 12 papers that have each received at least 12 citations). Naturally, many scientists object to such cold quantification of their contribution. Plus, all metrics have obvious flaws — a paper may gather many citations not because of its importance, but because it is in a large field that publishes frequently, so generates more opportunities for citations. Review articles, which may not add much to the research, count the same as original research papers, which contribute a great deal. And all existing metrics capture only what a scientist has done, not what he or she might be capable of. Clearly, there is a need for more and better measures.
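
For readers unfamiliar with the calculation, the h-index is easy to compute from a list of per-paper citation counts. The following is a minimal Python sketch; the function name and the example citation counts are ours, for illustration only.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    # Rank papers by citations, descending; h is the largest rank at
    # which the paper in that position still has at least that many
    # citations.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Twelve papers with at least 12 citations each: an h-index of 12.
print(h_index([50, 40, 33, 25, 20, 18, 15, 14, 13, 12, 12, 12, 3, 1]))
```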

On page 201, Daniel Acuna, Stefano Allesina and Konrad Kording suggest an alternative: the future h-index. Unlike other metrics, this index estimates a scientist's publication prowess five years or so into the future — a useful timescale for tenure decisions.

Using publicly available data on the publication, citation and funding histories of thousands of neuroscientists, Drosophila researchers and evolutionary biologists, the authors constructed an algorithm that converts information on a typical scientist's CV — the number of journals published in and the number of articles in top journals, for instance — into a number that represents their probable h-index in the years that follow.
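
The authors' algorithm is described in full on page 201. One simple form such a predictor could take is a weighted combination of CV features; the Python sketch below illustrates that form only. The feature set mirrors the kinds of variables mentioned above, but the coefficients are invented for the example and are not the fitted values from Acuna and colleagues' model.

```python
from math import sqrt

def predicted_future_h(current_h, n_articles, years_publishing,
                       n_distinct_journals, n_top_journal_articles):
    # Hypothetical coefficients, chosen only to show the shape of a
    # linear CV-based predictor; they are not the published weights.
    return (0.8
            + 1.0 * current_h                  # present standing carries forward
            + 0.4 * sqrt(n_articles)           # output, with diminishing returns
            - 0.1 * years_publishing           # early careers grow faster
            + 0.02 * n_distinct_journals       # breadth of venues
            + 0.03 * n_top_journal_articles)   # articles in top journals

# A fifth-year researcher with h = 10, 25 articles, 8 journals and
# 2 top-journal papers would project, under these made-up weights,
# to roughly h = 12.5.
print(round(predicted_future_h(10, 25, 5, 8, 2), 1))
```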

Outraged? Please send complaints to the usual address. Interested? Calculate your own future h-index here: go.nature.com/z4rroc.

Nature receives thousands of submissions a year, some of which point out the flaws in existing metrics and propose alternatives. We accepted the piece by Acuna et al. after submitting it to peer review. The reviewers and our editors felt that the authors had used appropriate methods to obtain their algorithm, and its predictions seemed realistic. Furthermore, the authors are cautious about its value, pointing out that it is probably less accurate for scientists in other disciplines and should not be considered a replacement for peer review. At the very least, the future h-index should help to address some problems with the current h-index, which tends to favour established scientists because they have had more time to accrue citations. A forward-looking metric may give a leg up to promising early-career scientists who don't yet have impressive CVs.

Nevertheless, no one wants their career potential to be reduced to a number. Nature publishes many scientific gems that attract few citations; there is no substitute for examining the research itself to appreciate its value. We know that the idea of a new metric published in these pages will raise some anxieties, and a few hackles. But metrics are already being used, so it is important that they paint the most accurate picture possible of someone's potential. And they do hold some advantages over peer review, helping to eliminate the unconscious biases that can creep into personal evaluations.

In that vein, researchers should continue to hunt for metrics that capture a scientist's true value, including aspects such as teaching, reviewing and public-speaking ability, as well as online responses to publications in blogs and comments — 'alt-metrics'. We may not live in an ideal world, but we can still improve recruitment, rewards and opportunities for scientists.