Konrad Hinsen said:


Two fundamental problems with metrics in science are that quantity does not imply quality, and that short-term impact does not imply long-term significance. The real value of many scientific discoveries often becomes apparent only many years later. It would be interesting to evaluate metrics by applying them to research that is a few decades old. Would they have identified ideas and discoveries that we now recognize as breakthroughs?

Long-term services to the scientific community are undervalued by current metrics, which simply count visible signs of activity. Take the development of scientific software: a new piece of software can be the subject of a publication, but the years of maintenance and technical support that usually follow remain invisible.

e-mail: research@khinsen.fastmail.net

Martin Fenner said:

Another important motivation for improving science metrics is to reduce the burden that research evaluation places on researchers and administrators. For many active scientists, the ratio of time spent doing research to time spent applying for funding, submitting manuscripts, filling out evaluation forms, undertaking peer review and the rest has become absurd.

Science metrics are not only important for evaluating scientific output; they are also valuable discovery tools, a role that may turn out to be even more useful. Traditional ways of discovering science (such as keyword searches in bibliographic databases) are increasingly being superseded by newer approaches that rely on social networking tools for awareness, evaluation and popularity measurement of research findings.

e-mail: fenner.martin@mh-hannover.de

Luigi Foschini said:

In the same issue, you run a News Feature on large collaborations in high-energy physics (Z. Merali Nature 464, 482; 2010) — some 10,000 researchers in the case of the Large Hadron Collider (enough to fill a small city). People who build enormous instruments of course do great work that enables important parameters to be measured.

But the practice of listing as an author anyone who merely tightens a bolt or brings in money is killing the concept of authorship, and with it any chance of measuring the productivity of individuals. Should I include Steve Jobs on my papers simply because I use a Mac to analyse data and to write articles and books?

e-mail: luigi.foschini@brera.inaf.it

Björn Brembs said:

No matter how complex and sophisticated, any system can be gamed. Even in an ideal world, in which we had the most comprehensive and advanced system for reputation-building and automated assessment of the huge scientific enterprise in all its diversity, wouldn't the evolutionary dynamics set in motion by the selection pressures within such a system demand that we keep randomly shuffling the weights and rules of these future metrics, faster than the population can adapt?

e-mail: bjoern@brembs.net
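
Brembs's question can be made concrete with a toy simulation. The Python sketch below is purely illustrative: the population model, the "metric weights" and every parameter value are invented for the example, not taken from any published model. Agents shift their effort towards whatever activity the metric currently rewards, while (by assumption) only one activity produces real scientific output; comparing fixed weights with frequently reshuffled ones shows what it means to shuffle the rules faster than the population can adapt.

import random

N_AGENTS = 100     # size of the research population (hypothetical)
N_ACTIVITIES = 5   # effort can be split across five activities
N_ROUNDS = 200     # number of evaluation rounds
ADAPT_RATE = 0.1   # how quickly agents shift effort towards what pays

# Assumption of this sketch: activity 0 is the only one that produces
# real scientific value; the others are pure metric-chasing.
TRUE_VALUE = [1.0, 0.0, 0.0, 0.0, 0.0]

def normalise(v):
    s = sum(v)
    return [x / s for x in v]

def mean_true_output(shuffle_every=None, seed=0):
    """Average real output per round when the metric's weights are
    reshuffled every `shuffle_every` rounds (None = never)."""
    rng = random.Random(seed)
    weights = normalise([rng.random() for _ in range(N_ACTIVITIES)])
    efforts = [normalise([rng.random() for _ in range(N_ACTIVITIES)])
               for _ in range(N_AGENTS)]
    history = []
    for t in range(N_ROUNDS):
        if shuffle_every is not None and t % shuffle_every == 0:
            weights = normalise([rng.random() for _ in range(N_ACTIVITIES)])
        # Gaming step: every agent drifts towards the single activity
        # that the metric currently rewards most.
        best = max(range(N_ACTIVITIES), key=lambda i: weights[i])
        for effort in efforts:
            for i in range(N_ACTIVITIES):
                target = 1.0 if i == best else 0.0
                effort[i] += ADAPT_RATE * (target - effort[i])
        # Real scientific output, which the metric cannot see directly.
        output = sum(sum(e * v for e, v in zip(effort, TRUE_VALUE))
                     for effort in efforts) / N_AGENTS
        history.append(output)
    return sum(history) / len(history)

print("fixed weights:      ", round(mean_true_output(None), 3))
print("reshuffled every 5: ", round(mean_true_output(5), 3))

In this crude model, fixed weights let the population converge completely on whichever activity the metric happens to reward, whereas reshuffling faster than the adaptation rate keeps effort perpetually dispersed. Neither regime reliably recovers true quality, which is the force of the objection.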