This Correspondence is in response to a Commentary by Peter Lawrence in Nature on 20 March 2003.

Sir – Peter A. Lawrence's Commentary “The politics of publication” (Nature 422, 259–261; 2003), about journal mania and the tyranny of impact factors, was like a breath of fresh air. How did this self-inflicted misery arise, and how on Earth are we going to get rid of it?

Clearly it arises largely from laziness and poor homework on the part of senior scientists and bureaucrats (the dividing line is sometimes thin). It is amazing how unscientific some of them can be once outside their own narrow field. Eugene Garfield, who invented the wretched impact factor, himself said that it is not appropriate for ranking individuals (see http://www.garfield.library.upenn.edu/papers/derunfallchirurg_v101(6)p413y1998english.html).

Astonishingly, these facts are not known (or are ignored) by some selection committees. Serious studies have been done, such as that of P. O. Seglen (Br. Med. J. 314, 498–502; 1997), which shows that the citation rate for individual papers is essentially uncorrelated with the impact factor of the journal in which they were published. This happens because of the very skewed distribution of citation rates, which means that high-impact journals get most of their citations from a few articles.

The distribution for Nature is shown in Fig. 1. Far from being Gaussian, it is even more skewed than a geometric distribution; the mean number of citations is 114, but 69% of papers have fewer than the mean, and 24% have fewer than 30 citations. One paper has 2,364 citations but 35 have 10 or fewer. (The Institute for Scientific Information, ISI, is guilty of the unsound statistical practice of characterizing a distribution by its mean only, with no indication of its shape or even its spread.) ISI data for citations in 2001 of the 858 papers published in Nature in 1999 show that the 80 most-cited papers (16% of all papers) account for half of all the citations.

Figure 1: Distribution of the number of citations in five years for 500 biomedical papers published in Nature: 100 papers published in each of 1981, 1984, 1988, 1992 and 1996 were chosen at random, and for each paper the number of citations in the subsequent five years was counted.

Data provided by Grant Lewison (Department of Information Science, City University, London EC1V 0HB, UK).
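For readers who wish to see the arithmetic, the short Python sketch below is purely illustrative (it is not part of the analysis above): it assumes a geometric distribution of citation counts, which Fig. 1 shows is actually less skewed than the real Nature data, and demonstrates that even in this milder case most papers fall below the mean while a minority of papers supply half of all citations.

# Illustrative sketch only: simulate a skewed citation distribution to show
# why the mean is a misleading summary. The geometric distribution is an
# assumption for illustration; the real Nature data are even more skewed.
import numpy as np

rng = np.random.default_rng(0)
mean_citations = 114                     # mean reported for the Nature sample
p = 1.0 / mean_citations                 # geometric parameter giving that mean
citations = rng.geometric(p, size=858)   # 858 papers, as in the 1999 ISI data

mean = citations.mean()
below_mean = (citations < mean).mean()   # fraction of papers cited less than the mean
top = np.sort(citations)[::-1]
half_share = np.searchsorted(np.cumsum(top), top.sum() / 2) + 1

print(f"mean citations: {mean:.0f}")
print(f"papers below the mean: {below_mean:.0%}")
print(f"papers supplying half of all citations: {half_share} of {len(citations)}")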

In my own work, for example, I have published a Nature (impact factor 27.9) article with only 57 citations, and an article in Philosophical Transactions of the Royal Society (impact factor 3.1) with more than 400 citations. Gratifyingly, this represents roughly my own assessment of the relative worth of the articles, but it emphasizes the ludicrousness of the current obsession with journal impact factors, especially at a time when, because of the Web, it matters less than ever before where an article is published.

Perhaps one way to cope with the problem is to turn it on its head. Candidates can judge institutions by the questions they ask, rather than the other way round. Any selection or promotion committee that asks you for impact factors is probably a second-rate organization. A good place will want to know about the quality of what you have written, not where you published it — and will be aware that the two things are uncorrelated. A useful method for job interviews that has been used in our department is to ask candidates to nominate their best three or four papers, then question them on the content of those papers. This selects against publication of over-condensed reports in high-impact journals (unless the report is one of the relatively few genuinely important papers of this type). It also selects against 'salami slicing', and is a wonderful way to root out guest authors, another problem of the age. Experience has shown that candidates can have astonishingly little knowledge of the papers on which their names appear.