How should one judge the performance of researchers? Whether one is assessing individuals or their institutions, everyone knows that most citation measures, while alluring, are overly simplistic. Unsurprisingly, most researchers prefer an explicit peer assessment of their work. Yet those same researchers also know how time-consuming peer assessment can be.

Against that background, two new efforts to tackle the challenge deserve readers' attention and feedback. One, a citations metric, has the virtue of focusing explicitly on a researcher's cumulative citation achievements. The other, the next UK Research Assessment Exercise, is rooted in a deeper, more qualitative assessment, but feeds into a numerical rating of university departments, the results of which hang around the necks of the less successful for years.

Can there be a fair numerical measure of a researcher's achievements? Jorge Hirsch, a physicist at the University of California, San Diego, believes there can. He has thought about the weaknesses of current attempts to use citations — total counts of citations, averaged or peak citations, or counts of papers above certain citation thresholds — and has come up with the ‘h-index’. This is the largest number h such that a scientist has written h papers that have each received at least h citations; an h-index of 50, for example, means someone has written 50 papers that have each had at least 50 citations. The citations are counted using the tables of citations-to-date provided by Thomson ISI of Philadelphia. Within a discipline, the approach generates a scale of comparison that does seem to reflect an individual's achievement thus far, and it has already attracted favourable comment (see ‘Index aims for fair ranking of scientists’). The top ten physicists on this scale have h values exceeding 70, and the top ten biologists have h values of 120 or more, the difference reflecting the citation characteristics of the two fields.
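For readers who want to see the arithmetic, a minimal sketch of the calculation in Python follows. It assumes only that a researcher's per-paper citation counts are available as a list of integers; the function name and the sample figures are illustrative, not part of Hirsch's proposal or of the Thomson ISI tables.

    # Sketch: compute an h-index from a list of per-paper citation counts.
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank   # this paper still clears the threshold
            else:
                break      # every later paper has fewer citations than its rank
        return h

    # Illustrative figures only: three of these five papers have at least
    # three citations each, so the h-index is 3.
    print(h_index([10, 8, 5, 2, 1]))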

The author placed his proposal on a preprint server last week (http://www.arxiv.org/abs/physics/0508025), thereby inviting comment before publication. Given the potential for such indicators to be seized upon by administrators, readers should examine the suggestion and provide the author with some peer assessment.

Whatever its virtues, any citation analysis raises as many questions as it answers and tracks just one dimension of scientific output. Nature has consistently advocated caution in deploying the impact factor, in particular, as a criterion of achievement (an index that Hirsch's h indicator happily ignores). Wisely, the UK Research Assessment Exercise (RAE) has long committed itself to a broader view, and the organizers of the next RAE, to take place in 2008, have prohibited assessment panels from judging papers by the impact factors of the journals in which they appeared. What that will cost in panel members' time remains to be seen.

The common approach of the RAE's disciplinary panels is to assess up to four submitted outputs (typically research papers or patents) per researcher, a proportion of which will be examined in some detail (25% for the biologists, 50% for the physicists). Taking into account that a typical publication has several co-authors will no doubt pose something of a challenge.

These outputs will sit alongside indicators of the research environment such as funds and infrastructure, and of esteem, such as personal awards and prestige lectures. The specific indicators to be considered and the weightings applied are now open for public consultation (see http://www.rae.ac.uk/pubs/2005/04/docs/consult.doc). Given that the RAE is so influential both nationally and, as a technique, internationally, there is a lot at stake. Stakeholders should express any concerns they may have by the deadline of 19 September.