Thomson Reuters vows to be clearer about how science's most misused metric is calculated.
The most misused metric in science is getting a makeover — although many researchers would like it to disappear altogether.
Information firm Thomson Reuters says that it will become more transparent about how it calculates impact factors — annual rankings of more than 10,900 scientific journals — which it published on 29 July, along with the names of 39 journals that it is barring from the list.
The firm, which is headquartered in New York, is also revamping its commercial analysis product, InCites, to add metrics based on individual articles, and to allow users to make their own calculations. But critics say that more change is needed.
The impact factor was invented to help libraries decide which journals to purchase: roughly speaking, a journal with a higher impact factor attracts more citations. But it has become a seductive yardstick by which to judge the quality of researchers and their papers — angering scientists who say that they are judged by where they publish, rather than what they publish.
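Roughly speaking, a journal's two-year impact factor for a given year is the number of citations received that year to items published in the previous two years, divided by the number of citable items published in those two years. A minimal sketch of that arithmetic, using made-up figures for a hypothetical journal:

```python
def impact_factor(citations_this_year, citable_items):
    """Classic two-year impact factor: citations received this year to
    items from the previous two years, divided by the number of citable
    items published in those two years."""
    return citations_this_year / citable_items

# Hypothetical journal: 200 citations in 2014 to articles published in
# 2012-13, which together comprised 80 citable items.
print(impact_factor(200, 80))  # 2.5
```

Much of the dispute described in this article turns on what counts in the numerator and denominator — which is exactly the detail that critics say the firm has not made transparent.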
The result is a race to get into journals with high impact factors, and almost everyone is unhappy with this situation, says Stefano Bertuzzi, executive director of the American Society for Cell Biology in Bethesda, Maryland.
Thomson Reuters says that the problem lies in how the impact factor is being used, not in the metric itself. But even librarians and journal editors are not content, because they say that the firm is not clear about how it calculates the metric. “We’re not sure how reliable their data are,” says Bernd Pulverer, chief editor of The EMBO Journal in Heidelberg, Germany, who says that he has struggled to get his scores to match the firm's.
Last year, Bertuzzi coordinated a statement signed by hundreds of research organizations and more than 11,000 scientists, the San Francisco Declaration on Research Assessment (DORA), which deplored the abuse of the impact factor and called for better ways to evaluate research. But he and Pulverer also sent a private letter to Thomson Reuters asking it to improve the way in which it calculates impact factors. That letter was never answered, they say, so on Friday 25 July they made it public on the DORA website.
Now Thomson Reuters, which says that it did answer the letter, says that it is “taking significant steps to increase transparency around the calculation of Journal Impact Factors”. For example, it will allow (paying) users to view every item included in the calculation.
It is also providing citation metrics for articles, not just journals. It should now be possible for any subscriber to calculate the impact of any collection of articles — whether for a journal, a university or a scientist — and to normalize these counts, which is important because some disciplines cite more heavily than others, so it is unfair to compare biology articles with mathematics articles, for instance.
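One common way to normalize, sketched below with illustrative numbers (the baselines are assumptions for this example, not Thomson Reuters' actual figures), is to divide each article's citation count by the average citation rate of its discipline before comparing or averaging:

```python
from statistics import mean

def normalized_impact(papers, field_baselines):
    """Field-normalize a collection of articles: divide each article's
    citation count by its discipline's average citation rate, then
    average the resulting ratios. Baselines are illustrative only."""
    ratios = [cites / field_baselines[field] for cites, field in papers]
    return mean(ratios)

# Hypothetical collection: a biology paper is judged against a biology
# baseline, a mathematics paper against a (lower) mathematics baseline,
# so both can be compared on the same scale.
papers = [(30, 'biology'), (6, 'mathematics')]
baselines = {'biology': 20.0, 'mathematics': 4.0}
print(normalized_impact(papers, baselines))  # 1.5
```

On raw counts the biology paper looks five times as influential as the mathematics paper; after normalization both sit at 1.5 times their field average, which is the kind of comparison the new InCites tools are meant to allow.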
Is this enough to head off the criticisms levelled at Thomson Reuters? “We appreciate these new capabilities, but Thomson Reuters puts the onus on the user,” says Bertuzzi. That is a problem, he says, because researchers will still prefer an 'official' number. He wants the firm to improve its published metrics — by, for example, excluding review articles, which attract many more citations than research articles and so inflate a journal's score.
Thomson Reuters also announced that 39 journals will not receive an impact factor this year — a record number for journals barred in a single year — because of excessive self-citation or ‘citation stacking’ from papers in other journals.
One journal that has now been caught out in two successive years for citation stacking is the International Journal of Sensor Networks (IJSN). Thomson Reuters found that it was heavily cited in articles published in the Proceedings of the 2013 IEEE Consumer Communications and Networking Conference. And two articles in those proceedings that cited the IJSN heavily were co-authored by the IJSN's editor-in-chief, Yang Xiao. The IEEE (Institute of Electrical and Electronics Engineers) says that it is assessing the situation and "will take appropriate action as deemed necessary".
Xiao, a computer scientist at the University of Alabama in Tuscaloosa, had already seen the IJSN censured for the same practice last year, when Thomson Reuters found a 2011 paper in the Journal of Parallel and Distributed Computing that contained bursts of references to the IJSN. Again, Xiao had co-authored the paper. In February this year, it was retracted by its publisher, Elsevier, which said that it violated its policy on citation manipulation. Xiao had not responded to an e-mail by the time this article went to press.
To coincide with the Thomson Reuters announcements, a group of physics journal editors also launched an attempt to ditch their journals' reliance on the impact factor altogether, in favour of their own measure based on an open citations database.
Last year, Thomson Reuters denied the Journal of Instrumentation an impact factor because it had been heavily cross-cited by electronics engineer Ryszard Romaniuk in a series of papers in SPIE Proceedings. After much argument, the journal, which is published by the London-based Institute of Physics and the Italian International School for Advanced Studies (SISSA), was reinstated. But “the delay damaged the reputation of the journal and that of its authors, not least for the lack of clarity with which the issue was handled”, says Enrico Balli, chief executive of SISSA Medialab, a non-profit company owned by the SISSA.
Instead, Balli has led the development of a parallel journal impact factor, called the Jfactor, that is based on open data collected by INSPIRE — a system of information about high-energy-physics articles and citations built by Fermilab, CERN and other labs. If physics journals adopt it, then Thomson Reuters' proprietary metric will not be needed, he notes.
Bertuzzi hopes that other metrics will gain popularity for assessing individuals. And at a DORA webpage updated on 29 July, he and others are collecting examples of good practices in research assessment that avoid impact factors altogether. "We can discuss all the metrics that you want, but ultimately, it is what is in a paper that really matters," he says.
Van Noorden, R. Transparency promised for vilified impact factor. Nature (2014). https://doi.org/10.1038/nature.2014.15642