Sir

I expected to read some robust criticism of Peter A. Lawrence's Commentary “The politics of publication” (Nature 422, 259–261; 2003), so I was surprised at the chorus of approval in Correspondence (Nature 423, 479–480 & 585; 2003, and Nature 424, 14; 2003). These views and proposals require rebuttal. I believe that the present system of evaluation is the only one possible, and that Lawrence's apparently utopian proposals would do more harm than good.

It is a cliché that modern societies can hardly function without science, and that science has become very expensive and highly specialized, and hence requires a system of evaluation. There are two socially justifiable reasons for supporting science. First, scientists make discoveries that increase our knowledge, understanding and predictive power. However, many well-educated people in all fields of science are needed to translate these discoveries into progress. Universities can produce such people and can test their skill and knowledge, but they cannot test the skill and knowledge of their own teachers: that can only be done through the teachers' own engagement in scientific activity.

The second important reason for supporting science, therefore, is to teach students and to maintain a group of specialists in different fields who can adapt the newest scientific achievements to their society. Politicians and others who fund science need a tool to identify these people.

In the best laboratories, the first reason for maintaining science is considered the only one that matters. But the second reason is vital to all modern societies, including those unable to produce Nobel prizewinners. Scientists maintain the polite fiction that all of them are equal and do equally good science, but this is not the case. The best laboratories make the most important scientific discoveries. A little lower are those in which less important discoveries are made, but whose researchers fully understand what others are doing and can apply this knowledge. At the bottom are places where people only pretend to do science and are unable to follow progress in their field.

The system of rewards in science must ensure the promotion of the best laboratories, the improvement of the decent ones and the denial of public funds to the worst. Neither international congresses nor big international programmes can make this objective distinction between good and poor science, so some other means of evaluation is required.

The appearance of the Science Citation Index (SCI) in the 1960s was a breakthrough in the development of objective numerical methods for the evaluation of science and scientists. This can be seen by comparing now with then, and by looking at places where numerical methods of evaluation are unknown. In countries far behind the scientific leaders, scientists are no less numerous, and many universities and scientific journals are supported by public funds. These journals publish many papers but have very low circulations and an insignificant impact on other scientists. This is a waste for the society supporting such research, as the scientists cannot make important discoveries, convey or build on discoveries made by others, or follow developments in their own field.

Sometimes this can be seen in rich countries too. In the 1960s and 1970s it was a waste of time to browse German and French journals on ecology and evolutionary biology. This state of affairs changed completely after young researchers started to be rewarded for publishing in journals with a high impact factor: now German and French researchers in these fields write papers that are well worth reading.

Evaluation of scientists on the basis of the impact factor and other indices is like the market economy: the system is flawed and unjust, but all the alternatives are much worse. Thousands of books have been written on the evils of capitalism, and now we have articles on the evils of evaluations derived from citation indices. The authors of these articles ignore the global effect of applying the system and concentrate instead on particular cases: a paper that received many citations despite being published in a journal with a low impact factor, or a poor paper that was cited many times. Evaluation based on citations is a statistical method that has to be applied to large samples, and applied carefully to avoid pitfalls. Arguments against the system should therefore be statistical, not anecdotal.

Critics of this evaluation system propose a utopia with high moral standards. They want science managers and journal editors to be not narrow specialists but people able to evaluate scientists across all the different fields within, say, ecology or molecular biology. With the present extent of specialization this seems hardly possible. In this utopia, managers and editors would also have to be absolutely honest and not guided by their own scientific interests, predilections or aversions. Such a system would rely entirely on the best side of human nature.

Abandonment of objective methods of science evaluation derived from the SCI would be most dangerous in developing countries and others where science is not first-rate. It would keep their societies from knowing how far behind their scientific institutions are. Worse, it would remove a tool for rewarding researchers who attempt to do good science and for eliminating those who do not.