You may be forgiven for thinking that an impact factor is an absolute thing, automatically and unequivocally generated by some nondescript computer on the Eastern seaboard; in reality, the Journal Citation Reports require human intervention from a handful of ISI/Thomson database curators. Only now do we publish the 2004 ISI impact factor for Nature Cell Biology because, again this year, we were in discussion with ISI curators as to which types of article should be counted. This is not a trivial matter, as many journals publish much that is not intended for citation, such as historical perspectives, meeting reports or indeed this editorial — worthy articles, which would nevertheless decrease the journal impact factor if entered into the denominator of the calculation. It is heartening to see that our 2004 impact factor of 22.1 (for the official number, add 2s for decimal places two and three) is our highest yet. This figure again places Nature Cell Biology in the top quartet of journals that publish papers relevant to the cell biology and molecular biology communities.

Journal impact factors represent the average number of times that articles published in a given journal over a two-year period are cited during the following year. This measure is subject to a number of caveats: one immediate consideration is that citations are not evenly distributed across the articles published, and most of a journal's cumulative citations derive from a small fraction of its total published articles (Nature 435, 1003; 2005). This is not a fatal flaw, since inter-journal comparisons are not really affected, but it may encourage editors to hunt for papers that are likely to become 'citation classics', as a single such paper — be it a human genome paper, or a methods paper such as Laemmli's on gel electrophoresis — can elevate a journal's impact factor significantly. Furthermore, citation rates vary dramatically over time, so depending on when an article appeared relative to the 'counting window', it may contribute more, or less, to a journal impact factor. However, this evens out if one takes multiple years into account. More importantly, both review and primary research papers are counted, and in these days of high publication volumes and restricted reference lists, reviews tend to accumulate more citations — you will notice that review journals tend to do exceptionally well, and scientists are often keen to contribute to them, probably not always purely out of a desire to educate. Finally, an obvious point that nevertheless requires continuous reinforcement is that disciplines vary in size and publication culture, so average citation rates differ significantly between them. For this reason, comparisons across fields are of limited use at best, and journals with a broader scope tend to lose out relative to journals restricted to high-citation fields such as molecular biology. To address some of these concerns, we have previously argued for a separation of primary research and review impact factors (Nature Cell Biol. 5, 681; 2003). We also pushed for an improved journal ranking by subject area, and for dropping two of the three decimal places, which imply a precision the journal impact factor does not have. Alas, the system appears set to stay, and it will take competing citation systems such as 'Google Scholar' or 'Scopus' to reinvigorate the debate (Nature Cell Biol. 7, 1; 2005). Given the status quo, it is reassuring to see that journal impact factors correlate well with other bibliometric measures; indeed, there is an excellent correlation with 'Faculty of 1000' scores (Nature Neurosci. 8, 397; 2005). It is worth remembering that a key limitation of Faculty of 1000 scores is that they are rarely awarded by more than two named scientists. One assumes that their choice of papers represents the community at large and is not influenced by the name of the journal or the author. That said, the sizeable spectrum of journals, and authors, represented in Faculty of 1000 would support that notion.
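To make the arithmetic concrete, here is a minimal sketch, in Python, of the calculation described above. The citation total and the article counts are invented for illustration; they are not ISI's data, and the actual Journal Citation Reports procedure involves considerably more than this single division.

```python
# Minimal sketch of the journal impact factor arithmetic described above.
# The numbers and categories are invented for illustration; ISI's actual data,
# article classification and counting rules are more involved.

def impact_factor(citations_in_year, citable_items):
    """Citations received in one year to articles published in the two
    preceding years, divided by the number of 'citable' items (primary
    research and reviews) published in those two years."""
    return citations_in_year / citable_items

# Hypothetical journal: what is counted in the denominator matters a great deal.
citations_2004_to_2002_2003 = 6500   # citations in 2004 to 2002-2003 content
primary_and_reviews = 294            # items agreed to be 'citable'
editorials_and_meeting_reports = 60  # rarely cited, but would inflate the denominator

print(round(impact_factor(citations_2004_to_2002_2003, primary_and_reviews), 3))
# -> 22.109

print(round(impact_factor(citations_2004_to_2002_2003,
                          primary_and_reviews + editorials_and_meeting_reports), 3))
# -> 18.362
```

The second call illustrates why the negotiation over 'citable' items described at the outset is anything but trivial: the same citations spread over a larger denominator yield a markedly lower figure.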

It is highly likely that many a reader will find these endless impact factor discussions rather tedious. However, in numerous countries bibliometric assessment directly affects funding or indeed salaries. A researcher's cumulative citation count is a blunt instrument — not much more refined than the total number of papers published (even if a minimum citation threshold is applied). An algorithm that takes journal impact factors into account can be more informative, but it still relies on a measure with the caveats discussed above. So, are there alternative methods to judge a researcher's performance? Jorge Hirsch, a physicist at the University of California, San Diego, recently suggested the 'h-index': the largest number h such that h of a researcher's papers have each been cited at least h times (Nature 436, 900; 2005 and http://www.arxiv.org/abs/physics/0508025). This measure is designed to capture an individual's cumulative research contribution and incorporates both output volume and citation rate. Simple but beautiful, this measure has to our knowledge generated no unexpected outliers (if you have found one, please let us know).
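For readers who prefer code to prose, the sketch below computes the h-index from a list of per-paper citation counts. The publication record is hypothetical, and the function is our paraphrase of Hirsch's definition, not an implementation he provides.

```python
# A minimal sketch of Hirsch's h-index: the largest h such that a researcher
# has h papers with at least h citations each. The citation list is invented.

def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank   # the paper at this rank still has at least 'rank' citations
        else:
            break
    return h

# Hypothetical publication record: one highly cited paper does not lift the
# index on its own; volume and citation rate have to keep pace with each other.
print(h_index([120, 43, 30, 12, 9, 9, 7, 4, 2, 0]))  # -> 7
```

Because the index is capped both by the number of papers and by how well they are cited, neither sheer output nor a single blockbuster paper can inflate it by itself.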

Finally, we would like to encourage the institution of yet another non-redundant way to rank journals: the 'community appreciation factor' (CAF). An independent body could ask a sufficiently large number of scientists in a given community how they would rank a given list of journals in the discipline. The ranking could usefully differentiate between novelty, breadth, thoroughness of datasets and data quality. We suggest that this could provide an informative measure of how a given journal is perceived within a community — after all, that is what really counts.
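The aggregation scheme is deliberately left open above. Purely as an illustration, and assuming the simplest possible approach (averaging each respondent's scores, on a 1 to 10 scale, for each criterion), a CAF tally could look like the sketch below; the journals, respondents and scores are invented.

```python
# Purely illustrative sketch of how a 'community appreciation factor' poll
# might be tallied. The averaging scheme and the 1-10 scale are our
# assumptions, not part of the proposal; the journals and scores are invented.

from statistics import mean

CRITERIA = ("novelty", "breadth", "thoroughness of datasets", "data quality")

# survey[journal] holds one tuple of scores per respondent, one score per criterion.
survey = {
    "Journal A": [(9, 6, 7, 8), (8, 7, 7, 9)],
    "Journal B": [(6, 9, 8, 7), (7, 8, 9, 8)],
}

def caf(responses):
    """Average score per criterion, plus an overall mean across criteria."""
    per_criterion = {c: mean(r[i] for r in responses) for i, c in enumerate(CRITERIA)}
    per_criterion["overall"] = mean(per_criterion.values())
    return per_criterion

for journal, responses in survey.items():
    print(journal, caf(responses))
```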

View background material on Connotea: http://www.connotea.org/user/bpulverer/tag/Journal%20Citation%20Report