Elsevier’s CiteScore uses a larger database — and provides different results for the quality of journals.
One of science’s most contentious metrics has a flashy new rival. On 8 December, publishing giant Elsevier launched the CiteScore index to assess the quality of academic journals.
Although the index ranks journals with a formula that largely mimics the influential Journal Impact Factor (JIF), it covers twice as many journals — 22,000 to the JIF’s 11,000 — and its formula includes tweaks that produce some notably different results, including lower scores for some high-JIF journals.
If CiteScore becomes popular, these quirks could change the behaviour of journals hoping to maximize their score, say analysts. But CiteScore comes at a challenging time for such metrics. It is not obvious that there is an appetite for a close competitor to the JIF, and scientists note that whatever CiteScore does differently, it will face the same criticisms that are lobbed at its rival. Most notably, the JIF is so commonly promoted by publishers as a yardstick of ‘quality’ that researchers are judged by the impact factor of the journal in which their work appears, rather than by what they actually write.
“In my view, journal metrics should always be accompanied by health warnings that are at least as prominent as the ones you see on cigarette packets,” says Stephen Curry, a structural biologist at Imperial College London. “Such metrics are at the root of many of the current evils in research assessment.”
Amsterdam-based Elsevier has for many years provided a suite of analytical indicators, including journal metrics that have never become as popular as the JIF. It says that it has launched CiteScore owing to “overwhelming demand” from authors and editors.
The publisher is uniquely placed to challenge the JIF’s hegemony. It owns the Scopus database, a record of article abstracts and their reference lists. Aside from Web of Science, on which the JIF is based, it is the world’s only reasonably comprehensive and carefully curated citation database. But Scopus is bigger, enabling scientists, librarians and funders to check the popularity of many more journals. Furthermore, unlike the JIF, which is available only to subscribers, CiteScore figures will be free online for anyone to view and analyse, although full details of the documents included in the calculations are visible only to subscribers.
When it comes to their underlying formulae, CiteScore and the JIF are near-doppelgängers. To score a journal in a given year, both tally the citations received that year by documents published in preceding years, then divide by the number of those documents. The most popular version of the JIF counts research articles published in the previous two years, whereas CiteScore stretches back over the previous three.
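The two averages can be sketched as follows. This is an illustrative simplification, not Elsevier’s or Clarivate’s actual calculation pipeline, and the journal numbers are invented for the example; the key difference shown is the denominator (citable research items only for the JIF, all documents for CiteScore).

```python
# Sketch of the two journal metrics, with made-up numbers (not real data).
# JIF-style (2-year window): citations to *citable items* (research
# articles, reviews), divided by the count of those citable items.
# CiteScore-style (3-year window): citations to *all* documents
# (including editorials, letters, news), divided by all documents.

def impact_factor(citations_to_citable: int, num_citable_items: int) -> float:
    """JIF-style average: only citable research items in the denominator."""
    return citations_to_citable / num_citable_items

def citescore(citations_to_all: int, num_all_documents: int) -> float:
    """CiteScore-style average: every document counts in the denominator."""
    return citations_to_all / num_all_documents

# Hypothetical journal: 200 heavily cited research articles in the window,
# plus 300 editorials and news items that attract few citations.
research_citations = 8000
front_matter_citations = 300

jif_like = impact_factor(research_citations, 200)
cs_like = citescore(research_citations + front_matter_citations, 200 + 300)

print(f"JIF-style score:       {jif_like:.1f}")   # 40.0
print(f"CiteScore-style score: {cs_like:.1f}")    # 16.6
```

Because the rarely cited front matter swells the denominator far more than the numerator, the CiteScore-style figure for the same journal comes out much lower — the dilution effect described for The Lancet below.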
But one significant difference leads some high-JIF journals, such as Nature, Science and The Lancet, to do worse in CiteScore. The new metric counts all documents as potentially citable, including editorials, letters to the editor, corrections and news items. Scholars cite these less often, so they drag down the average. The Lancet, for instance, drops from a healthy average of 44 in the JIF — putting it in 4th position overall — to a mere 7.7 in CiteScore, where it sits outside the top 200.
Such a distinction could have major consequences for the behaviour of publishers. “As there is intense competition among top-tier journals, adoption of CiteScore will push editors to stop publishing non-research documents, or to shunt them into a marginal publication or their society website,” predicts Phil Davis, a publishing consultant in Ithaca, New York.
The Lancet, Nature and other journals declined to comment on CiteScore. But Jeremy Berg, the editor-in-chief of Science, says that the journal is “very proud of our content that lies outside traditional research reports and articles” and that “any metric that is based on citation data alone will undervalue the impact of such non-research content”.
“The portfolio performance of all publishers may look a bit different using CiteScore metrics, including Elsevier, but all publishers gain in that they can explore the performance of more of their titles because of the broader coverage of Scopus,” says Lisa Colledge, director of research metrics at Elsevier. She says that CiteScore should be used only to compare related journals, not to compare raw scores across different fields. For example, the index ranks The Lancet 25th out of 1,549 ‘general medicine’ journals — placing it in the 98th percentile for that subject category.
After the index was released, scientists at the Eigenfactor project, a research group at the University of Washington in Seattle, published a preliminary calculation finding that Elsevier’s portfolio of journals gains a 25% boost relative to others if CiteScore is used instead of the JIF.
That is because any publisher whose journals produce lesser-cited ‘non-research’ documents will see its portfolio drop under CiteScore, says Colledge, whereas publishers whose journals mainly carry research articles — such as Wiley, the American Chemical Society and Elsevier — will see a relative gain. “Scopus has included all document types in CiteScore metrics because this is the most simple and transparent approach that also acknowledges every item’s potential to cite and be cited,” she says.
Clarivate Analytics in Philadelphia, which bought the JIF and the Web of Science earlier this year from Thomson Reuters, says that it doesn’t see any new insights in CiteScore. Other, more complex metrics — including several published by Elsevier and Thomson Reuters — have been developed to rank journals in the past, but none has yet proved as popular as the JIF. “If anything, another, different metric will reinforce the status that the JIF has as the definitive assessment of journal impact,” says Clarivate spokesperson Heidi Siegel.
Some even wonder whether Elsevier, which publishes more than 2,500 journals, should be producing CiteScore at all. The JIF has always been owned by non-publishers. “I question the appropriateness of a publisher getting involved with the metrics that evaluate the very content that it publishes,” says Joseph Esposito, a publishing consultant in New York City. But Elsevier says that it is “a provider of information solutions as well as a publisher”, and treats all the publishers it analyses equally.
Van Noorden, R. Controversial impact factor gets a heavyweight rival. Nature 540, 325–326 (2016). https://doi.org/10.1038/nature.2016.21131