Bibliometrics: The citation game

Nature 510, 470–471
doi:10.1038/510470a

Jonathan Adams takes the measure of the uses and misuses of scholarly impact.

Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact

Eds Blaise Cronin and Cassidy R. Sugimoto. MIT Press: 2014. ISBN: 9780262525510


Visualization: Moritz Stefaner, with Martin Rosvall, Jevin West and Carl Bergstrom (Bergstrom Lab, Univ. Washington)

Citation patterns chart, from left to right, the emergence of neuroscience as a discipline distinct from cell and molecular biology and neurology.

Bibliometrics has a problem. In fact, the field — which tracks scholarly impact in everything from journal papers to citations, data sets and tweets — has a lot of problems. Its indicators need better interpretation, its tools are widely misused, its relevance is often questioned and its analyses lack timeliness. The expert contributors to Beyond Bibliometrics, edited by information scientists Blaise Cronin and Cassidy Sugimoto, rehearse the field's history and the theory behind those problems; critique both good and bad practices; and set out ideas about new indicators, including those from innovative 'altmetric' data sources. The result is a milestone for the field and a reference for practitioners. But the book misses a trick. It will not reach the non-experts — such as reviewers, hiring committees and grant panels — who use and often misuse bibliometrics to make decisions.

Bibliometrics developed during the post-1945 boom in research, expansion of publishing and development of structured electronic databases. It hit the research market 50 years ago, with Eugene Garfield's Science Citation Index, which was launched in 1964 and within years had been adopted by the US National Science Foundation. But it was in Europe, particularly in the Netherlands and Hungary in the 1980s, that citation analysis developed into research-management information. From the mid-1990s, on the back of analyses of UK research-assessment processes, bibliometrics began to enter policy-making as a way to benchmark institutions and researchers. Its commercial potential led Elsevier to create the Scopus bibliographic database, and the fundamental value of citation networks for search and discovery made Google Scholar a favourite with many researchers.

Bibliometrics has had many intriguing outcomes — for instance, it has revealed the speed of China's growth and the real distribution of scientific wealth between and within nations. It helped to put research policy and management on an evidence-based footing that could challenge the assumptions and prejudices of established scientists: most countries turn out to have a smaller core of excellence than many believed, and small, agile institutions have been shown to be excellent at research, alongside older, larger centres. Rooted in the basic needs of the researcher to find the literature most pertinent to their work, this history is nicely reviewed by Nicola de Bellis, who reminds us that, for example, it is bibliometric theory — partly from Garfield and explicitly from Francis Narin and Gabriel Pinski — that led to Google's key PageRank algorithm.

Because of that historical depth of analysis, the book's handling of methods and tools goes well beyond mere numbers. For example, Jevin West and Daril Vilhena discuss research-network conceptualizations such as eigenvectors, which show how fields depend on one another. Loet Leydesdorff's comprehensive visualization of the relatedness of research disciplines reveals that variations in levels of success may depend on interdisciplinary differences in team and institutional structure. Michael Kurtz and Edwin Henneken provide a detailed analysis of the 'recommender systems' that astrophysicists use to find key literature.
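The eigenvector idea behind such network measures is essentially the PageRank recurrence: a paper (or field) matters if it is cited by papers that matter. A minimal power-iteration sketch on a toy citation graph — the function name, graph and parameters are illustrative, not taken from the book:

```python
def citation_rank(adj, damping=0.85, iters=100):
    """Eigenvector-style centrality on a citation network via power iteration.

    adj[i] is the list of papers cited by paper i.
    Returns a score per paper; scores sum to 1.
    """
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n  # uniform "teleport" share
        for i, targets in enumerate(adj):
            if targets:
                share = damping * rank[i] / len(targets)
                for j in targets:
                    new[j] += share
            else:
                # paper with no references: spread its weight uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Three papers; papers 0 and 1 both cite paper 2,
# so paper 2 ends up with the highest score.
scores = citation_rank([[2], [2], []])
```

The same recurrence, applied to journal-to-journal citation matrices rather than paper-to-paper ones, underlies eigenvector-based field maps of the kind West and Vilhena discuss.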

One big set of problems in the field relates to careless use. The h-index, which captures the distribution of a scientist's citations, is innately flawed because it is not responsive to field or career stage, but its simplicity has led to a plethora of derivative indices. Impact factors tell us about journals, not individual publications, but research managers use them to pick staff. Engineers publish conference proceedings, but submit only journal papers for assessment. Mathematicians publish infrequently and cite deep into the past; molecular biologists, quite the opposite. Economists and particle physicists have utterly different norms of authorship assignment. Few users of bibliometrics appreciate the implications of these nuances, yet many think they can read the numbers, or persuade others to read them their way.
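The definition driving that flaw is simple enough to state in a few lines: a researcher's h-index is the largest h such that they have h papers with at least h citations each. A minimal sketch (the function name and example data are illustrative):

```python
def h_index(citations):
    """Largest h such that the author has h papers
    with at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations give h = 4:
# four of them have at least 4 citations each.
h_index([10, 8, 5, 4, 3])  # → 4
```

Note what the number cannot see: a junior mathematician and a mid-career molecular biologist with the same h-index sit at very different points in their fields' citation distributions, which is exactly the objection raised above.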

Bibliometrics now underpins research policy and management and is an established part of programme evaluation for the European Commission and national agencies including the UK Research Councils. But Cronin and Sugimoto urge caution: even after decades of use, do we really understand what citation data are and what we do with them? Do those who use bibliometrics have clear criteria for how they employ and interpret them? If a tool that influences so much research is used carelessly, these arguments need to be accessible to a wide audience.

A second set of problems arises from limitations in use. Julia Lane, Mark Largent and Rebecca Rosen remind us that political paymasters and research managers want timely indicators so that they can influence what is happening rather than reflect on what has happened. But citation data pertain to work funded years before. So, in its final third, the book goes “beyond bibliometrics” to alternative metrics that may address these issues, drawn partly from social media such as Twitter and blogs (discussed by Jason Priem), and partly from the business of research, including data downloads (Kayvan Kousha, Mike Thelwall and Stefanie Haustein) and esteem indicators (such as academic genealogy, discussed by Sugimoto). Whereas citation analysis builds on decades of work, we have less than ten years of social media to analyse. So there is a tension: if thoughtful commentators dare not advance answers, instant experts rush in to satisfy policy stakeholders. If we do produce analyses built on shifting sand, and promise too much, then we could undermine what might, with care, be very informative.

With caution and care, decisions about the future directions, funding and staffing of science will be better informed by the approaches and developments discussed in Beyond Bibliometrics. What we now need from this book's expert group is an accessible working narrative to guide the rest of us in our day jobs.
