Journal rankings should measure quality, not just quantity, say researchers who are proposing a new way to assess the status of science publications. Whereas the commonly used impact factor simply measures the number of citations per paper, the researchers say their ranking scheme also measures the significance of those citations, giving a truer measure of a journal's standing in the community.

Ranking journals and publications is not just an academic exercise. Such schemes are increasingly used by funding agencies to assess the research of individuals and departments. They also serve as a guide for librarians choosing which journals to subscribe to. All this puts pressure both on researchers to publish in journals with high rankings and on journal editors to attract papers that will boost their journal's profile.

The most popular index of a journal's status is the ISI Impact Factor (IF), produced by Thomson Scientific. It counts the total number of citations a journal's papers receive and divides that by the number of papers the journal publishes. But the rise of online journals, coupled with sophisticated search engines that can rank web resources, is triggering a wave of other measures. Last year, for example, physicist Jorge Hirsch of the University of California, San Diego, proposed a metric called the h-index for assessing the quality of researchers' publications (see Nature 436, 900; 2005).
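
The arithmetic behind such a citations-per-paper measure is straightforward. As a minimal sketch, using made-up journal names and counts (the published IF also applies rules, such as a fixed two-year citation window, that are omitted here), the calculation looks like this in Python:

```python
# Sketch of a citations-per-paper metric in the spirit of the Impact Factor:
# total citations received divided by number of papers published.
# Journal names and counts are invented for illustration.

journals = {
    "Journal A": {"citations": 12_400, "papers": 310},
    "Journal B": {"citations": 3_100, "papers": 95},
}

def citations_per_paper(stats):
    """Crude 'popularity' measure: citations divided by papers."""
    return stats["citations"] / stats["papers"]

for name, stats in journals.items():
    print(f"{name}: {citations_per_paper(stats):.1f} citations per paper")
```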

Now Johan Bollen and his colleagues at the Research Library of Los Alamos National Laboratory in New Mexico are focusing on Google's PageRank (PR) algorithm. The algorithm provides a kind of peer assessment of the value of a web page by counting not just the number of pages that link to it, but also the number of pages that link to those pages, and so on. So a link from a popular page is given a higher weighting than one from an unpopular page.

The algorithm can be applied to research publications by analysing how many times those who cite a paper are themselves cited. Whereas the IF measures crude ‘popularity’, PR is a measure of prestige, says Bollen. He predicts that metrics such as the PR ranking may come to be more influential in the perception of a journal's status than the traditional IF. “Web searchers have collectively decided that PageRank helps them separate the wheat from the chaff,” he says.
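
As a rough illustration of the idea, rather than Bollen's own implementation, the sketch below runs a weighted PageRank calculation over a tiny, invented journal-to-journal citation matrix; the real analysis draws on far larger citation databases.

```python
import numpy as np

# Toy weighted PageRank over a journal citation graph (power iteration).
# The citation counts are invented: cites[i, j] is the number of times
# journal i cites journal j.
journals = ["Journal A", "Journal B", "Journal C"]
cites = np.array([
    [0.0, 10.0, 2.0],
    [5.0,  0.0, 1.0],
    [8.0,  4.0, 0.0],
])

def pagerank(weights, damping=0.85, tol=1e-9, max_iter=1000):
    """Prestige flows along citations, so a citation from a highly
    ranked journal counts for more than one from a lowly ranked one."""
    n = weights.shape[0]
    # Each journal spreads its 'vote' over the journals it cites;
    # a journal that cites nothing spreads its vote evenly.
    row_sums = weights.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)
    transition = np.where(row_sums == 0, 1.0 / n, weights / safe)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * transition.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

for name, score in zip(journals, pagerank(cites)):
    print(f"{name}: PageRank {score:.3f}")
```

The damping factor of 0.85 is the value commonly quoted for the web version of PageRank; whether Bollen's group uses the same setting is an assumption here.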

Bollen, however, proposes combining the two metrics. “One can more completely evaluate the status of a journal by comparing and aggregating the different ways it has acquired that status,” he says. Some journals, he points out, can have high IFs but low PRs (perhaps indicating a popular but less prestigious journal), and vice versa (for a high-quality but niche publication). Using information from different metrics would also make the rankings harder to manipulate, he adds. So Bollen and his colleagues propose ranking journals according to the product of the IF and PR, a measure they call the Y-factor.
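
The combination itself is simple arithmetic. A minimal sketch, using invented IF and PageRank values, shows how the product can pull the ranking towards a journal that scores modestly on one metric but strongly on the other:

```python
# Y-factor as described by Bollen and colleagues: the product of a
# journal's Impact Factor and its weighted PageRank. All numbers here
# are invented for illustration only.

example_journals = {
    #                  (impact_factor, pagerank)
    "Review journal":  (28.0, 0.0004),   # high IF, lower PR
    "Letters journal": (7.0, 0.0050),    # lower IF, higher PR
}

def y_factor(impact_factor, pagerank):
    """Combine 'popularity' (IF) and 'prestige' (PR) into one score."""
    return impact_factor * pagerank

ranking = sorted(example_journals.items(),
                 key=lambda item: y_factor(*item[1]),
                 reverse=True)

for name, (impact, pr) in ranking:
    print(f"{name}: Y-factor {y_factor(impact, pr):.4f}")
```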

Whereas the top ten list by IF includes many journals that publish only review articles, or that serve primarily as data resources, the Y-factor ranking pushes up journals widely regarded as publishing prestigious original research (see table). For example, among physics journals, the IF places Reviews of Modern Physics at the top of the list, but the Y-factor shifts the emphasis to rapid-publication journals. Physical Review Letters is the most influential, with a Y-factor of 5.91 × 10⁻². (Declaration of interest: Nature receives a very high Y-factor.)

Table 1 Top 10 journals as rated by different metrics (data from 2003)

Reinhardt Schuhmann, an editor at Physical Review Letters, calls the proposal “an interesting idea”, but thinks that such metrics aren't really needed to prove status. “We don't pay much attention to impact factors,” he says. But for Bollen, ranking journals more effectively by combining different ranking systems could help protect the integrity of science. He warns that scientists and funding agencies already use rankings such as the IF well beyond their intended purpose. “We've heard horror stories from colleagues who have been subjected to evaluations by their departments or national funding agencies that they felt were strongly influenced by their personal IF,” he says. “Many fear this may eventually reduce the healthy diversity of viewpoints and research subjects that we would normally hope to find in the scholarly community.”

Jim Pringle, vice-president of development at Thomson Scientific, is also keen on the idea. “We have always advocated that research evaluation should be derived not only from metrics such as the IF but also from a thorough knowledge of research content,” he says. “Journal status metrics such as this, used in combination with our data, should be encouraged.”