We are sometimes asked by readers and authors what the impact of our content is and which were the ‘best’ articles we published. These are difficult questions because, for reviews in particular, research metrics are crude and often misleading measures of impact. We need to move away from the simplistic notion of high–low impact and acknowledge that there are many more dimensions to consider.

Research metrics are quantitative measures that attempt to capture some of the impact of a research output. They can be bibliometrics, which are citation-based, or altmetrics, which are web-based and measure attention from news outlets, blogs, forums and social media, as well as mentions on sites such as Wikipedia, in policy documents or in patents. Broadly speaking, bibliometrics capture some of the impact within the academic community, through the lens of scholarly publications, whereas altmetrics give an idea of the level of broader interest. These metrics can be used together (though they are not necessarily correlated) to evaluate impact, but it’s essential to interpret them in context and understand their limitations.

The use of citation-based metrics has often been criticized [1]. For reviews in particular, citation volumes (related to the relative size of the field or topic), citation patterns (shaped by discipline-specific citation habits and various biases [2], including the inertia of citing outdated sources when newer ones exist) and publication timescales (slow-moving versus fast-moving fields) dramatically influence these metrics. It is therefore not meaningful to compare, in terms of citations or even downloads, the performance of different reviews published in a multidisciplinary or broad-scope journal.

It’s more interesting to consider where citations are coming from. Some reviews are cited predominantly from within their own field, or occasionally by closely related areas, whereas others are cited across disciplines, signalling broader interest in the topic, even from outside the physical sciences. The distribution of citation sources can thus be interpreted as a measure of the breadth of academic impact, or of interdisciplinary relevance, and this distribution will look very different for different articles published in a broad-scope journal.
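
To make this idea concrete, here is a minimal sketch (a hypothetical illustration, not a metric we compute or endorse) of one way to turn a distribution of citation sources into a single breadth number: the normalized Shannon entropy of citation counts grouped by citing field. The field labels and counts below are invented for the example.

```python
from math import log

def citation_breadth(citations_by_field: dict[str, int]) -> float:
    """Normalized Shannon entropy of a review's citing-field counts.

    Returns a value between 0 (all citations from one field) and
    1 (citations spread evenly across all citing fields).
    """
    counts = [c for c in citations_by_field.values() if c > 0]
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0  # no citations, or a single citing field: no breadth
    probs = [c / total for c in counts]
    entropy = -sum(p * log(p) for p in probs)
    return entropy / log(len(counts))  # normalize by maximum entropy

# Hypothetical citing-field counts for two reviews:
narrow = {"condensed matter": 95, "materials science": 5}
broad = {"condensed matter": 30, "quantum information": 25,
         "chemistry": 25, "biophysics": 20}

print(f"narrow review: {citation_breadth(narrow):.2f}")  # ~0.29
print(f"broad review:  {citation_breadth(broad):.2f}")   # ~0.99
```

Any such single number inherits the caveats discussed above: it depends on how fields are labelled and is only meaningful alongside an understanding of the communities involved.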

Altmetrics have their own limitations. Some topics, such as astronomy or quantum computing, tend to be picked up by news media more often than topics in condensed-matter physics [3], for example. Review articles are rarely covered by news outlets because they are, naturally, old news. Even before last year’s shifts [4], engagement with social media across physics was uneven, with some communities very active and others completely absent. This makes comparisons of altmetric scores for individual articles across different fields meaningless. But altmetrics do provide interesting insights into non-traditional (at least for physics) forms of broader impact, such as mentions in policy documents, patents or Wikipedia.

High-quality review articles are particularly important for informing public policy, decision making and clinical practice. This has long been true for the life and clinical sciences and is now starting to be the case in the physical sciences too. There are other forms of impact particular to reviews, such as their extensive use as teaching material, their influence on thinking or their role in bridging communities, but these are not quantifiable beyond anecdotal feedback or editors delightedly spotting mentions of their journal at conferences.

So, how can we identify the best articles we published? We can’t. In fact, the question we should be asking is: who is using our content, and how? This cannot be answered solely by traditional metrics, and certainly not without an in-depth understanding of the characteristics and culture of each research subfield.

Some articles are highly cited, some are broadly cited, some have wider impact through mentions in policy documents, and some may even have been picked up by news outlets. You may recognize some of our figures, have seen quotes from our articles at conferences, or perhaps you used one of our reviews to learn about a new topic or start a collaboration; maybe one of our comments or editorials gave you food for thought. Each article has its own unique success story.