The use of metrics to measure and assess scientific performance is a subject of deep concern, especially among younger scientists. In this issue, Nature begins what we hope will be an ongoing conversation on such measures and how they should be developed and used. All the metrics-related articles are collected at http://www.nature.com/metrics and are available for online comment.

A poll of Nature's readers suggests that feelings about metrics are mixed (see page 860). Many researchers say that, in principle, they welcome the use of quantitative performance metrics because of the potential for clarity and objectivity. Yet they also worry that the hiring, promotion and tenure committees that control their fate will ignore crucial but hard-to-quantify aspects of scientific performance, such as mentorship and collaboration-building, and instead focus exclusively on a handful of easy-to-measure numbers related mostly to their publication and citation rates.

Academic administrators contacted by Nature suggest that this fear may be exaggerated. Most institutions seem to take a gratifyingly nuanced approach to hiring and tenure decisions, relying less on numbers and more on wide-ranging, qualitative assessments of a candidate's performance made by experts in the relevant field.

Yet such enlightened nuance cannot be taken for granted. Numbers can be surprisingly seductive, and evaluation committees need to guard against letting their superficial precision undermine the time-consuming assessment of a scientist's full body of work. This is particularly true in countries such as Britain, where metrics-heavy national assessments of universities can trickle down, so that individuals feel more rewarded for quantity than for quality — and change their behaviour to match.

New measures of scientific impact are being developed all the time (see page 864), driven in part by government agencies looking to quantify the results they are getting for their investment. Such innovation is to be encouraged. But researchers who develop metrics must be mindful of how and why their measures are being used. One metrics researcher interviewed by Nature was keen to read our News Feature on how, and to what extent, metrics are used to assess individuals, because he had no evidence of his own with which to answer those questions. That is not an optimal situation, to put it mildly. There needs to be much more discussion among specialists such as social scientists, economists and scientometricians, to ensure that the development of metrics goes hand in hand with a discussion of what the metrics are for, and how they are affecting people. Only then can good suggestions be made about how to improve the system (see page 870).

Academic administrators, conversely, need to understand what the various metrics can and cannot tell them. Many measures — including the classic 'impact factor', which attempts to describe a journal's influence — were not designed to assess individual scientists. Yet people still sometimes try to apply them in that way. Given that scientometricians continue to devise metrics of ever-increasing sophistication, universities and scientific societies need to help decision-makers keep abreast of them. Setting a good example is the European Summer School for Scientometrics, a programme that is being inaugurated in Berlin on 16–18 June and will run in Vienna from 2011. It promises a science-based approach to tutoring on the merits and pitfalls of various metrics.
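A brief illustration of the mismatch, using the standard two-year definition of the impact factor rather than any formula given in this editorial: a journal's impact factor in year $y$ is an average taken over everything the journal recently published,

$$\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},$$

where $C_y(t)$ is the number of citations received in year $y$ by items the journal published in year $t$, and $N_t$ is the number of citable items it published in year $t$. Because citation counts are highly skewed, this journal-wide average says little about any individual paper in the journal, let alone about the scientist who wrote it.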

Institutions must also ensure that they give their researchers a clear and complete picture of how assessments are made. This can be awkward — one dean said that he was reluctant to list the qualities he looked for in tenure applications because doing so could encourage list-ticking behaviour rather than innovation among his faculty. But transparency is essential: no matter how earnestly evaluation committees say that they are assessing the full body of a scientist's work, a lack of openness about the criteria breeds the impression that a fixed number of publications is a strict requirement, that teaching is undervalued and that service to the community is worthless. Such impressions do more than breed discontent — they alter the way that scientists behave. To promote good science, the doors of the assessment process must be opened wide.