In sport, the best players publicize their most impressive and memorable statistics. They might have the highest career points-per-game average in basketball (30.1; Michael Jordan), the most major titles in tennis (16; Roger Federer) or the most goals scored in a single football season (53; Lionel Messi): these numbers define their careers.

Scientists have plenty of statistics of their own. Performance metrics of various sorts are popular, in some cases increasingly so. And although metrics are flawed, scientists should be aware of how they are used and how they can be bent to aid careers. Conventional tools such as reference letters and CVs are neither redundant nor passé, but it would be a mistake for any scientist to disregard quantitative bibliometric indices.


The idea of devising quantitative measures to rank scientists' performance originated in the early twentieth century. But it was Eugene Garfield, founder of the Science Citation Index, who pioneered bibliometrics and showed the value of tracking citations. He found, for example, that Nobel laureates publish five times as many papers as most researchers, and that their work is cited 30–50 times more often1. Such revelations helped to popularize metrics. Among the measures in use today are the journal impact factor, which tracks the average number of citations received by a journal's recent articles, and the h-index, which combines an author's productivity with the citation frequency of his or her papers.
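
To make the first of those definitions concrete, the sketch below computes an impact-factor-like average from hypothetical citation counts; the official figure is calculated by the indexing service over a fixed two-year citation window, so this is only an illustration of the underlying idea.

```python
# Hypothetical citation counts for the articles a journal published in the
# relevant window (illustrative only; the official impact factor uses a
# two-year window and the indexer's own article counts).
citations_per_article = [0, 2, 3, 5, 7, 12, 40]

impact_factor_like = sum(citations_per_article) / len(citations_per_article)
print(f"Average citations per article: {impact_factor_like:.1f}")  # 9.9
```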

Mixed feelings

Surveys of department heads, researchers and administrators reveal a variety of views on the use of such metrics2. Some use them frequently; others all but ignore them. Metrics are taken into account, to varying degrees, in decisions on hiring, awarding tenure or promotions, adjusting salaries and allocating resources. A highly cited paper in an intermediate-impact journal (impact factor 5–10) may be viewed more favourably by a hiring committee than a poorly cited paper in a top-impact journal.

Metrics have some powerful advantages: they are objective and quantitative. But they also suffer from some major deficiencies that, if not taken into account, can make them misleading. For example, citation counts and other metrics typically assign equal credit to all authors of a collaborative paper. So it is not unusual to find technology specialists (often middle authors who make technical contributions) who have more citations than some department chiefs. One of our own technicians has 1,734 lifetime citations and an h-index of 26 (that is, 26 papers with at least 26 citations each) — scores comparable to those of a 50-year-old university professor. Technologists are certainly important, but there should be some way to distinguish the different levels of contribution made by different individuals.
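
For readers unfamiliar with how that h-index figure is derived, here is a minimal sketch (using made-up citation counts, not the technician's actual record): rank an author's papers by citation count and take the largest h for which h papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([10, 8, 5, 4, 3, 1, 0]))  # 4: four papers cited at least 4 times
```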

An author with a few highly cited papers can garner many citations, despite never pursuing a career or even a degree in science. For example, one of the papers reporting the original methodology for the polymerase chain reaction was published3 in 1987 with two authors: Kary Mullis (Nobel Prize in Chemistry, 1993) and Fred Faloona, a supporting staff member with five papers (all with Mullis) and no publications since 1992. Faloona has more than 10,000 lifetime citations from just two papers. And then there's the 'bystander effect': sometimes highly cited individuals are average, but not extraordinary, researchers who happen to have collaborated with distinguished scientists.

Still, metrics can supplement a CV. Jorge Hirsch, a physicist at the University of California, San Diego, and inventor of the h-index4, suggests that the index could help guide tenure decisions at universities and membership decisions at professional societies. For example, for physicists, an h value of about 18 could be the threshold for a full professorship; 15–20 could merit a fellowship of the American Physical Society; and 45 or higher could merit membership of the US National Academy of Sciences.

Every year, one of us (E.P.D.) prepares a bibliometric-analysis booklet as a companion to his CV. The booklet includes graphics showing how many publications he has had each year, the distribution of his publications by journal impact factor, career citations by citing year, his h-index, his most highly cited papers, field rankings and the international rankings of his laboratory. But to underscore the limitations of such analyses, the cover page carries a disclaimer adapted from a saying attributed to Albert Einstein: “Many of the things you can count don't count. Many of the things you can't count do count.”

Know the enemy

Young scientists should familiarize themselves with the various indices, and prepare their own analyses of their scientific output. Understandably, they will start with low scores, but their portfolios will grow over their careers, and most of their metrics and indices will grow with them. Evaluators should certainly take age into account.

No single index, formula or description will capture the diverse contributions of scientists to society. Scientists are involved not only in discoveries and publications, but also in teaching, mentoring, organizing scientific meetings, serving on editorial boards and lecturing. But, Einstein's caveat notwithstanding, bibliometric analyses are here to stay. And as long as their shortcomings are taken into account, they can be valuable, allowing observers to draw conclusions about a scientist's productivity, quality of research and impact in science.

Like professional athletes, young scientists should focus on their performance, their CVs and their relationships with advisers and colleagues — but they should also be aware of their metrics, and how they can improve the statistics. Otherwise, they might not measure up to the competition.