The world of published science has become crowded and confusing. Impact factors provide rough-and-ready guidance, as long as they are understood in context.
As the pursuit of science has evolved from a pastime of the genteel upper classes to a full-blown industry, citations have become an obsession. Researchers worry about their places in the web of references — whom they should cite and who should cite them; research councils and university departments distribute money according to citation records; and editors anxiously anticipate the annual announcement of their journal's impact factor.
The central importance of citations to both a scientist's and a journal's success guarantees the popularity of publications on this topic. A recent citation analysis, which suggests that the move of journals online has narrowed the range of citations both in breadth of sources and in depth of time (Science 321, 395–399; 2008), will be no exception. The paper bemoans the loss of the broad reading that comes with leafing through a journal in the library: online search facilities lead efficiently straight to the target paper, bypassing the potentially stimulating intellectual detours along the way.
But in the face of publication overload, researchers are forced to refine their strategies for selecting the reading material most relevant to their own work, even though such strategies, just like spam filters, will occasionally overlook a message of interest. In their quest to keep up to date with new publications in the online age, scientists apparently follow three main routes: scanning a small number of core journals, searching by topic in online databases, and following hyperlinks from one paper to another. Of these, only the first has the potential to introduce unexpected content into a community, making the selection of core journals particularly important.
When choosing their core journals, scientists consult citation statistics. The appeal of these statistics lies in the ease of ranking journals, papers or researchers through simple indices that count citations from one paper to another. And it works: after all, a very similar logic has made the Google search engine the ruler of the internet.
Heuristics have always been used to assess science and scientists. Affiliation with an Ivy League university, famous collaborators or a Nobel-prize-winning PhD supervisor has certainly helped many careers. Citation numbers are yet another way of judging research. But nobody would claim that they are a perfect and objective measure of scientific quality. The cliquishness and politics that can go into compiling a reference list, the inevitably frequent citations of data sets that may not contain any new insights, and the need to refer to papers in order to disprove them all work against this measure as an egalitarian indicator of quality. Nevertheless, the system is immensely practical and quite successful.
The ripples of the revolution in science evaluation have long since reached the relatively uncompetitive backwaters of the geosciences. Indeed, Nature Geoscience received questions regarding its likely future impact factor before it was even accepted into Thomson Scientific's Web of Science in April this year. So here are a few thoughts on the topic from us, long before our own impact factor (due in 2010) can skew our perspective.
Citation patterns vary hugely between disciplines. The impact factors of Nature and Science have ranged between 26 and 32 in the past few years. But a quick estimate, based on a sample of papers, suggests that the geoscience papers in these journals would score an impact factor of around 15 if evaluated on their own. This is high, considering that the impact factors of journals publishing exclusively geoscience research have not exceeded 5 in the past several years. It is the far higher citation counts in the biological sciences that drive up the statistics of journals publishing across disciplines.
The timescales of the publication cycle in a field also shape a journal's impact factor. The impact factor for a given year is calculated from all citations in that year to citable content published in the two preceding years, divided by the number of citable items published in those years; citations to anything more than two years old are excluded. This can be problematic for the slower-moving sciences. For example, the ten most cited papers published in Geology in 2004 were collectively referred to about 1.5 times more often in 2007 than in 2006; those later citations never entered the index.
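Spelled out as a formula (with hypothetical numbers purely for illustration), the two-year impact factor of a journal for a year $Y$ is

$$\mathrm{IF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},$$

where $C_Y(y)$ denotes the citations received in year $Y$ by items the journal published in year $y$, and $N_y$ the number of citable items it published in year $y$. A journal that, say, published 100 citable items across 2006 and 2007, and whose 2006–2007 items were cited 500 times during 2008, would receive a 2008 impact factor of $500/100 = 5.0$; citations in 2008 to a heavily used paper from 2004 would contribute nothing.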
For geoscientists, taking guidance from impact factors alone would mean favouring interdisciplinary journals (whereas many biologists would, for the same reason, favour journals within their own discipline). It would also mean preferentially reading short-lived, quickly cited papers over those whose influence develops more slowly, which is not necessarily a good idea. Other, more time-consuming ways of assessing quality are therefore needed to supplement the quick and easy number check.
Researchers face a difficult balance in allocating their precious reading time between focused study in their field of interest and browsing more broadly for new ideas. At Nature Geoscience, we aim to bring the broader picture to our audience. One, but not the only, way of judging whether we are succeeding will be our impact factor.