The classic impact factor is outmoded. Is there an alternative for assessing both a researcher's productivity and a journal's quality?
During the late spring and with much fanfare, ISI (now Thomson Reuters) releases the previous year's impact factors of scientific journals. Originally designed to assess the popularity of peer-reviewed journals, impact factors are now often used inappropriately to determine a researcher's prospects of employment, promotion and tenure, and occasionally even to grant awards. Government funding of institutions or universities sometimes even includes an assessment of impact factors. The use of this outmoded metric to assess a scientist's productivity and a journal's rank has become a ball and chain for researchers and editors alike.
The 2009 impact factor for Nature Immunology is 26.000, according to the Thomson Reuters Journal Citation Reports. This puts Nature Immunology in third place among 128 journals in the field of immunology and first among primary research journals in this field. The 2009 impact factor represents the number of citations in 2009 to papers published in 2007 and 2008 divided by the total number of papers published in 2007 and 2008. Although we are pleased that Nature Immunology has the continued support of the immunology community, we would like to deemphasize the importance of this metric, especially when the validity of the impact factor, its possible manipulation and its misuse have been highlighted by many different quarters of the scientific community.
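The two-year calculation described above is simple arithmetic. As a minimal sketch (the function name and the citation and paper counts below are invented for illustration, not Thomson Reuters figures):

```python
def impact_factor(citations_this_year: int, papers_published: int) -> float:
    """Two-year impact factor: citations received this year to papers
    from the previous two years, divided by the number of citable
    papers published in those two years."""
    return citations_this_year / papers_published

# Hypothetical example: 5,200 citations in 2009 to 200 papers
# published in 2007 and 2008 would yield an impact factor of 26.0.
print(impact_factor(5200, 200))  # 26.0
```

Note that everything hinges on which papers count in the denominator, which is exactly the transparency problem discussed below.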
Because the impact factor is discipline dependent, with total citations varying wildly between subdisciplines within the first two years of publication, comparing all ranked journals is akin to comparing apples and oranges. Likewise, comparing impact factors within a subdiscipline such as immunology is also potentially flawed. The denominator of a journal's impact factor is decided by Thomson Reuters, but how the company decides which articles are deemed citable is not transparent. For Nature Immunology, our essays have suddenly been included in the denominator, despite the fact that they are historical commentaries (more journalistic than scholarly in style) that lack abstracts or a full reference list. A quick citation analysis shows that these articles are cited rarely, if at all. It is also unclear what 'front-half' material from other immunology journals, such as Immunity, counts toward the denominator, which makes comparison of impact factors even fuzzier.
Because the calculation is based on the mean, papers that receive huge numbers of citations will skew the impact factor. An extreme example that illustrates this point well is that of Acta Crystallographica Section A, whose impact factor rose more than 20-fold in 2009, to 49.926, due to one paper that was cited more than 6,600 times. Review articles, which are often cited more than primary research articles, have become a common fixture in many journals as a means of bolstering the impact factor. As a consequence, trendy areas such as TH17 cells have seen a flood of review articles. Perhaps a better way to measure the impact factor for primary research journals would be to exclude such secondary material from the calculation.
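The sensitivity of a mean-based metric to a single outlier, as in the Acta Crystallographica example, is easy to demonstrate. In this sketch the citation counts are invented purely for illustration:

```python
import statistics

# Hypothetical journal: 99 papers cited twice each, plus one paper
# cited 6,600 times (loosely echoing the Acta Crystallographica case).
citations = [2] * 99 + [6600]

print(statistics.mean(citations))    # 67.98 -- dominated by the outlier
print(statistics.median(citations))  # 2.0   -- barely reflects it
```

A median-based (or outlier-trimmed) calculation would describe the typical paper in such a journal far better than the mean does.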
A journal's self-citations can also bolster its impact factor, although at least these are transparent and can be seen by citation analysis on the ISI Web of Knowledge site. A much greater bone of contention is that citations to retracted articles are not excluded from the calculation. Because of the nature of research, such papers are often highly cited, as many researchers publish articles refuting their findings. Given the many caveats about the calculation of the impact factor, and these are just a few, it is clear that when metrics are used to gauge an author's influence and a journal's quality, other measures should be used alongside the classic impact factor.
The 'h-index' has been touted as a reasonable metric for assessing a scientist's standing. As defined by the Web of Science, “The h-index is based on a list of publications ranked in descending order by the Times Cited. The value of h is equal to the number of papers (N) in the list that have N or more citations.” However, this system is not without its problems, as a person's h-index can reflect longevity as well as quality and cannot decrease even if a scientist's output does. Another metric gaining popularity is the 'evaluative informetric', which resembles the PageRank algorithm of the Google internet search engine. Essentially, with this system, a citation from a popular article or journal is weighted more heavily. Thomson Reuters and Elsevier offer this metric in the form of the Eigenfactor and the SCImago Journal Rank, respectively. No standards yet exist for applying this approach to individual researchers and, like the impact factor, it is difficult to use this metric to compare different fields.
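The Web of Science definition quoted above translates directly into a short computation. A minimal sketch, with an invented citation list for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations,
    per the ranked-list definition quoted from the Web of Science."""
    ranked = sorted(citations, reverse=True)  # rank by times cited, descending
    h = 0
    for rank, cited in enumerate(ranked, start=1):
        if cited >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: four of these five papers have >= 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The sketch also makes the criticism above concrete: adding papers or citations can only raise h, never lower it, so the index rewards longevity regardless of current output.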
In this age of digital publishing, analyzing the number of times a research paper is accessed or downloaded online could be a more fruitful way of evaluating scientific clout. Although this approach is more up to date than citation counts, reaches a wider audience than researchers alone and focuses on individual articles, no global standards for reporting such statistics yet exist. Nevertheless, for authors who are interested, Nature Immunology now provides access on our Manuscript Tracking System to statistics that monitor views and downloads of published articles.
The system for assessing both an author's output and a journal's impact needs to change and is clearly evolving. In the meantime, the classic impact factor will probably remain at center stage. Nature Immunology, however, will not place an emphasis on this metric but instead will continue to strive to publish high-quality immunology research of broad interest to the community.
Ball and chain. Nat Immunol 11, 873 (2010). https://doi.org/10.1038/ni1010-873