As in any profession, researchers are assessed on the merit of their contributions at every step of their careers, and these evaluations influence funding, job opportunities, promotions, and peer recognition. Scientists are acutely aware of how much rides on these evaluation systems, so their fairness and sound operation are of real consequence. The issues involved are undoubtedly complex, which is why researchers are worried about an increasingly prevalent and discomforting trend: the tendency of those who evaluate scientific merit to sum up years of labour, achievements and contributions in abstract numbers.

Much has been said about impact factors and their pervasive influence on a scientist’s career. Although impact factors are used ever more frequently by universities and research institutions to evaluate their staff, most of those being assessed would agree that their significance is overrated. A journal’s impact factor, for instance, certainly does not reflect the quality of an individual paper, which is the same regardless of where it is published.

The new merit systems

One recent development is a merit system introduced — sometimes on the advice of outside consultancy firms — at several medical schools to translate scientific merit into cold numerical values. The system requires a scientist to fill out a form every few months, plus a more comprehensive form once a year, to assess scientific excellence in a number of categories. The data compiled include the number of publications, the impact factors of the journals in which they appeared, the total amount of grant money received, involvement in organizing conferences, invitations to speak at meetings, professional awards, activities perceived as profile-raising, teaching load and so on. ‘Dollars per square metre’, the funding and grants received by a scientist divided by the size of his or her laboratory, is unquestionably the strongest contender for most absurd value in the system, and is a figure most people would associate with real estate, not science. Numbers are assigned within each category and are used to compare and rank peers within a department or an institution, making it possible to base the allocation of laboratory space, funding and promotions on a simple hierarchy of numbers.
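To make the arithmetic concrete, here is a minimal sketch of the kind of calculation such a system reduces a career to. It is purely hypothetical: the schemes described above publish no formula, so every weight, category and field name below is invented for illustration; only the dollars-per-square-metre ratio follows the definition given in the text.

```python
# Hypothetical illustration only: weights and categories are invented,
# not taken from any actual institutional merit system.

def merit_score(record, weights):
    """Collapse a researcher's yearly record into a single number."""
    score = sum(weights[k] * record[k] for k in weights)
    # The 'dollars per square metre' value: total funding divided by lab size.
    score += record["funding_dollars"] / record["lab_area_m2"]
    return score

researchers = {
    "A": {"papers": 4, "mean_impact_factor": 12.0, "invited_talks": 3,
          "funding_dollars": 400_000, "lab_area_m2": 80},
    "B": {"papers": 9, "mean_impact_factor": 3.5, "invited_talks": 1,
          "funding_dollars": 150_000, "lab_area_m2": 120},
}
weights = {"papers": 10, "mean_impact_factor": 50, "invited_talks": 20}

# Rank peers by their single abstract number, as such systems do.
for name, rec in sorted(researchers.items(),
                        key=lambda kv: merit_score(kv[1], weights),
                        reverse=True):
    print(name, round(merit_score(rec, weights), 1))
```

Note how the ranking is dominated by whichever categories happen to carry large weights or large raw numbers, such as funding: exactly the bias towards well-financed fields discussed below.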

While the aims of such systems — to motivate researchers to excel, to raise the reputation of the institution and to reward the sometimes neglected contribution of those who teach — may be noble, the maximization of revenue often seems to be the greatest incentive behind their introduction. The disadvantages and dangers, such as a bias towards popular and financially privileged fields of research, are glaringly obvious. Many scientists would agree that such figures can never accurately reflect or measure scientific merit, and that they are open to deliberate manipulation and misuse. Assessment at short intervals also seems short-sighted, given the nature of scientific research, and tends to favour short-term trends over solid, long-term contributions and success.

Are we, then, heading towards a system of ‘personal impact factors’? Will everyone receive a more or less impressive number at the end of the year, in a scientists’ hit parade? As absurd and scary as this sounds, things do seem to be moving in that direction. One can only hope that common sense will prevail, and that the independence and creativity of scientific endeavour will not be drowned in a system of abstract, and often meaningless, numbers.