Cassidy Sugimoto and Vincent Larivière

Vincent Larivière and Cassidy Sugimoto want to help people use metrics responsibly. Credit: Vincent Larivière/S. Craig Finlay

Measuring Research: What Everyone Needs to Know. Cassidy R. Sugimoto and Vincent Larivière. Oxford University Press: 2018.

Information scientists Cassidy Sugimoto and Vincent Larivière crunch data to explore the changing nature of research — from uncovering science’s gender disparities to charting the impact of migration on citations. Now, they have written a guidebook, Measuring Research. They talk here about the misuse of citation metrics to judge individual researchers, the Wild West of indicators and the cultural bias of databases.

Why did you write this book?

CS: Seeing the gross misuse of bibliometrics, we both felt a need for an accessible manual to help people use them more responsibly. For scientists, it’s an overview of the ways their output and impact are measured. For those who manage science, the book provides the tools needed to interpret bibliometric data, along with guidelines for the responsible use of indicators.

What are the main lessons about indicators?

CS: For any indicator, we have to make sure that using it does not cause mass distortions in what we are trying to measure. The key thing is to know where and when it can be usefully applied. Critics sometimes point out the flaws of citations or impact factors and say we should rely on peer judgement alone. If you are evaluating one scholar’s work, or one article, absolutely use peer judgement. But if you are evaluating the output of an entire country, that can’t work at scale. And indicators might have benefits in some contexts that fade in others. For example, standardized indicators can be extremely useful in countries with high degrees of cronyism, serving as objective measurements to counter the old-boy network.

VL: Indicators are just one perspective on underlying concepts such as research impact, and they do not represent a single ‘ground truth’. It is vital to know when it makes sense to apply them.

In your book, you point to an increase in the number of science indicators and data sources in recent years. Is that a sign of sophistication in measurement, or an alarm bell of chaos?

CS: When people construct a new indicator, they are stating that they want to pay attention to new values: attention on social media, for instance. This starts a useful conversation about the values we want to incentivize in science. But if we are to use indicators to rank or evaluate, they have to be standardized: everyone has to use them in the same way.

VL: Yes, right now it’s a Wild West of indicators and data sources. If you’re trying to ascertain impact, you can find hundreds of answers based on different indicators and data sources. From a data point of view, an organization such as Google could change that: it has the means to crawl everything and make the data available. Unfortunately, Google gives away its search tools but keeps all of the data, so we don’t have open, collective, standardized tools yet.

You discuss problems with the “economy of research measurement”. What’s that?

VL: Just as in most areas of the digital economy, we depend at the moment on corporations for our indicators — in particular, on Clarivate’s Web of Science and Elsevier’s Scopus databases, as well as Google and Microsoft. Firms can put restrictions on the use or openness of data, and that is what they are all doing, albeit in different ways. This is why we’re pushing for unrestricted, or open, metadata — so that the research community has full access to the data it needs to assess itself.

And you note that our key citation databases are culturally biased — what kinds of problems does that cause?

CS: Major sources of data on the scientific workforce systematically under-represent some languages, countries and disciplines. So it can be hard to distinguish between disparities that reflect real differences — for instance, in production or impact — and disparities caused by under-representation. If we fail to appreciate this, we risk perpetuating thematic, linguistic and cultural inequalities. Encouraging publication only in particular international journals, for example, could steer researchers away from certain topics or from publishing in national journals with high local impact. The research community must remain vigilant to ensure that metrics serve research, rather than enslave it.