Sir

I have read with interest your debate on the impact of scientific work (for example, Nature 422, 259–261; 2003, Nature 423, 479–480 & 585; 2003 and Nature 424, 14; 2003), but I do not agree with the position taken by Adam Łomnicki (Nature 424, 487; 2003).

The problem is that Łomnicki and others established their careers at a time when competition among scientists had a different meaning. I am a young scientist and, like everyone else, I would like to discover something interesting and new. However, when my colleagues and I discuss biological problems, we always think about impact factors.

Recently, my friends wondered where to send their new paper — to journal X, with an impact factor of 1.4, or to journal Y, with an impact factor of 1.8. If the impact factor approximates the number of citations a paper receives per year, and that rate stays constant (which it does not, of course), the average paper will accumulate about 14 citations over ten years in journal X and 18 in journal Y. Thus we compete furiously for just a few extra citations, as the impact factor of most journals does not exceed three.

As Łomnicki states, citation is a statistical process, but even very good papers may be cited only a few times. The question is whether the difference between 10 and 20 citations can really change our knowledge and understanding of nature.

Journals, as well as scientists, compete for impact factors. A journal that wants a higher impact factor must encourage authors to submit to it, because only with more papers coming in can it reject more and become more selective. Yet authors generally send their work to the journal with the highest possible impact factor first, so most journals find it very difficult to improve: much of what is submitted to them has already been rejected by better journals.

Impact factors provide an easy way to assess our achievements. But we do not know whether small differences in citation counts are valid indicators of our work, or how citations relate to the real world and to solving its problems.