Published online 20 October 2008 | Nature | doi:10.1038/news.2008.1169

News

Is physics better than biology?

Citation statistics now comparable across disciplines.

Filing system: cross-discipline research indicators could show what to read first. (Image: imagebroker / Alamy)

Is the physics department at your university performing better than the biology department? Answering such questions objectively has been hard, because citation statistics and other bibliometric indicators can't be directly compared across disciplines. But now a team in Italy has found a way to do just that.

Claudio Castellano at the Sapienza University of Rome and his colleagues have found that the statistics of citations follow a virtually identical pattern in disciplines as varied as nuclear physics, haematology and agricultural economics.

As a result, indices of scientific performance based on citations can be normalized to account for differences in the overall citation rates of different disciplines, allowing for more meaningful comparisons.

This doesn't just apply to departments. Castellano says it can be used to grade scientists, whole countries or just about any group you like. In principle, the technique could be used to draw up a league table of the most influential scientists of all time, based on how much their work has been cited.

Co-author Santo Fortunato of the Institute for Scientific Interchange in Turin, Italy, says that they would have loved to include such a table in their paper, but they couldn't get hold of the necessary data. Applying their normalization procedure is straightforward but laborious, and has to be done by hand. But "it's worth the effort", says Fortunato. "It would give you so much more [than previous metrics]." The technique is described in Proceedings of the National Academy of Sciences [1].

Clean relationship

The researchers collected data on how many papers were cited a certain number of times in a range of scientific fields, and found that the raw numbers varied widely from one discipline to another. For example, for papers published in 1999, articles with 100 citations are 50 times more common in developmental biology than in aerospace engineering.

But if the citation counts are divided by the average number of citations per paper for the discipline in that year, the resulting statistical distributions are remarkably similar. All fit very precisely on a single curve, corresponding to what statisticians call a 'log-normal' distribution. This relationship is "amazingly clean", says Fortunato. The method allows direct comparison between disciplines — for example, an article published in aerospace engineering in 1999 that gained 20 citations had more impact in its field than an article with 100 citations in developmental biology.
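To make the rescaling concrete, here is a minimal Python sketch of the normalization described above. The field averages (the c0 values) are invented for illustration; they are not figures from the study.

    # Sketch of the rescaling: divide a paper's citation count c by c0,
    # the mean citations per paper for its field and publication year.
    # The c0 values below are hypothetical, for illustration only.
    field_mean_citations_1999 = {
        "developmental biology": 38.7,
        "aerospace engineering": 5.1,
    }

    def relative_citations(citations, field):
        """Return c / c0, the field- and year-normalized citation count."""
        return citations / field_mean_citations_1999[field]

    # The article's example: 20 citations in aerospace engineering can
    # outweigh 100 in developmental biology once rescaled.
    print(relative_citations(20, "aerospace engineering"))   # ~3.9
    print(relative_citations(100, "developmental biology"))  # ~2.6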

The same mathematical relation also holds from year to year. For example, the statistical distributions of citations for 1990 and 2004 are indistinguishable (in every field) once normalized in this way to the averages for the respective year.
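A toy way to see what "indistinguishable once normalized" means: draw citation counts for two hypothetical fields (or years) from the same log-normal shape scaled by different averages, rescale by each sample's mean, and compare the samples statistically. This is a self-check of the claim under invented parameters, not the authors' analysis.

    # Toy check of the universal-curve claim: two samples share a log-normal
    # shape but have very different averages; after dividing by the sample
    # mean they should be statistically indistinguishable.
    # Parameters are illustrative, not fitted values from the paper.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    shape = rng.lognormal(mean=0.0, sigma=1.1, size=(2, 50_000))
    c0 = np.array([[38.7], [5.1]])        # hypothetical field averages
    citations = c0 * shape                # raw counts differ widely

    rescaled = citations / citations.mean(axis=1, keepdims=True)
    print(ks_2samp(rescaled[0], rescaled[1]))  # high p-value: same curve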

Sidney Redner, a specialist in citation statistics at Boston University in Massachusetts, says that the work "represents a good first step in a methodology to make useful bibliometric comparisons across diverse intellectual units".

But he cautions that not all academic disciplines might match the 'universal' citation curve that the team has found. "In the humanities," Redner says, "monographs seem to be the primary intellectual expression and I would not be surprised if their citation statistics can't be scaled in the same ways as articles in archival journals."

How big is your h?

One of the most popular bibliometric indices in current use is the h-index, devised in 2005 by physicist Jorge Hirsch of the University of California, San Diego [2]. A scientist's h-index is the largest number n such that n of their publications have each received at least n citations. Scientists, particularly in the physical sciences, now routinely include their h-index on CVs, and these are being used informally to assess applications for academic posts.
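For readers unfamiliar with the index, here is the standard computation in Python; this follows the general definition, not code from Hirsch's paper.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(ranked, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with >= 4 citations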

It was acknowledged from the outset that h-indices can't be compared across disciplines. But Castellano's technique also permits a performance comparison between scientists of any discipline. This could be particularly valuable when, say, two candidates from very different branches of physics are applying for the same vacant post. "It's important to have an unbiased way of being evaluated", says Fortunato.
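The article does not spell out how the normalization is combined with the h-index, but one plausible sketch is to rank a scientist's papers by the relative count c / c0 rather than by raw citations. Everything below, including the field averages and the two candidates, is hypothetical.

    # Hypothetical "rescaled" h-index: rank papers by c / c0 instead of raw
    # citations. One plausible reading of the comparison described above,
    # not the authors' published procedure.
    def generalized_h(papers):
        """papers: list of (citations, field_mean_c0) pairs."""
        ratios = sorted((c / c0 for c, c0 in papers), reverse=True)
        return sum(1 for rank, r in enumerate(ratios, start=1) if r >= rank)

    alice = [(120, 38.7), (90, 38.7), (60, 38.7)]  # developmental biologist
    bob = [(25, 5.1), (18, 5.1), (16, 5.1)]        # aerospace engineer
    print(generalized_h(alice), generalized_h(bob))  # -> 2 3

On this toy measure the aerospace engineer comes out ahead despite far lower raw citation counts, echoing the cross-discipline comparison the article describes.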


All the same, it is widely acknowledged that automatically generated indices such as this should not be the sole judge of academic achievement — not least because they tend to measure the popularity of a paper rather than its intrinsic intellectual value, says Redner.

Castellano's team are now hoping to investigate why citation statistics appear to be so universal across scientific fields, given that referencing practices sometimes seem quite different. 

  • References

    1. Radicchi, F., Fortunato, S. & Castellano, C. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0806977105 (in the press).
    2. Hirsch, J. E. Proc. Natl Acad. Sci. USA 102, 16569–16572 (2005).