Sir

Journal impact factors (IFs) were introduced in the 1970s to rank journals by citation analysis1. The IF is defined as the mean number of citations received in a given year by articles published in the journal during the preceding two years.
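The definition amounts to a simple ratio, which can be sketched as follows; all figures here are invented for illustration and do not come from any real journal:

```python
# Impact factor for year Y, with invented example figures.
citations_received = 500   # citations in Y to items the journal published in Y-1 and Y-2
source_articles = 250      # countable ('source') articles published in Y-1 and Y-2

impact_factor = citations_received / source_articles
print(impact_factor)  # 2.0
```

The distinction between what counts toward the numerator and what counts toward the denominator is the crux of the findings below.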

It has long been recognized that, for journals of comparable quality, absolute IF values vary widely among disciplines (for example, biology, chemistry and engineering), and that review (secondary-publication) journals have much higher IFs than the primary-publication journals carrying original research articles. Nevertheless, the IF has become an important parameter helping librarians make difficult decisions about journal subscriptions, and consequently helping the journals themselves in their efforts to increase sales and advertising revenue.

We wished to determine recent changes in the editorial or publication policies of different journals in biomedical fields that may have a bearing on their IFs. Our search has been limited, but the following findings are noteworthy.

  1. Since the early 1990s, many primary-publication biology journals have introduced 'mini-reviews' or their equivalent. Review articles attract citations more rapidly, and in larger numbers, than primary articles.

  2. Research articles in medical journals often attract short comments that are published, with a response from the original authors, in a subsequent issue. In 1996, for the first time in its 175-year history, The Lancet altered its policy so that every published letter of comment now carries an explicit (that is, countable) citation to the original article; previously, such citations appeared only within the text of the letter and were not counted.

  3. Perhaps most important is the case of articles that are not counted as 'source' items in the denominator of the IF calculation. These include editorials, book reviews, letters to the editor and obituaries. Surprisingly, citations to such 'non-source' articles are nevertheless counted in the numerator. For the journals concerned, such items therefore represent a case of 'nothing lost, everything gained'. Examples of non-source articles include the numerous short clinical or laboratory study reports published as Letters to the Editor in The Lancet, some of which may even be considered citation classics; a pair2,3 in 1993 on non-O1 cholera received approximately 200 citations in the following two years.
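The asymmetry described in point 3 can be made concrete with a toy calculation (all figures invented): citations to non-source items enter the numerator, while the items themselves never enter the denominator.

```python
# Invented figures illustrating the source/non-source asymmetry.
source_articles = 200          # denominator: source articles only
citations_to_source = 400      # citations to source articles
citations_to_non_source = 100  # citations to letters, editorials, obituaries, etc.

# IF if only citations to source articles were counted:
if_source_only = citations_to_source / source_articles  # 2.0

# IF as actually computed, with non-source citations in the numerator:
if_as_computed = (citations_to_source + citations_to_non_source) / source_articles  # 2.5
```

Under these invented figures, the non-source citations inflate the IF by 25 per cent at no cost to the journal.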

Some categories of published items originally classified as source articles have been reclassified as non-source. For example, meeting abstracts published in FASEB Journal were reclassified as non-source articles from 1988, and the journal's IF leapt from 0.24 in 1988 to 18.3 in 1989. In 1983, Nature began publishing Scientific Correspondence which, together with the prestigious News and Views section, now constitutes a large repository of citable non-source articles. Many other journals appear to be following these trends. Comparing the number of pages devoted to source articles with those devoted to non-source items, the ratio for Nature has more than halved, from 3.5 in January 1977 to 1.6 twenty years later, even though the total number of pages in the journal was virtually unchanged over this period.

If nothing else, our findings support the case for a change in the present method of IF calculation, so that only citations to source articles are counted.

Finally, we have identified a loophole that could allow a less-than-scrupulous, perhaps obscure, journal to raise its IF from 0.1 to a healthy 2.1 by the simple expedient of adding two spurious self-citations (to its own articles from the preceding two years) to each of its source articles. Is it possible that this is happening already?
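The arithmetic behind this loophole is straightforward: because the denominator is unchanged, two qualifying self-citations per source article add exactly 2.0 to the IF. A sketch with a hypothetical journal, using figures chosen to match the 0.1 and 2.1 quoted above:

```python
# Hypothetical journal; two self-citations per source article,
# each pointing to one of its own articles from the preceding two years.
source_articles = 50
genuine_citations = 5                          # 5 / 50 gives an IF of 0.1
spurious_self_citations = 2 * source_articles  # two per source article

if_before = genuine_citations / source_articles
if_after = (genuine_citations + spurious_self_citations) / source_articles
print(if_before, if_after)  # 0.1 2.1
```

The gain is independent of the journal's size: any journal applying the same trick would add 2.0 to its IF.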