Published online 16 June 2010 | Nature 465, 860-862 (2010) | doi:10.1038/465860a

News Feature

Metrics: Do metrics matter?

Many researchers believe that quantitative metrics determine who gets hired and who gets promoted at their institutions. With an exclusive poll and interviews, Nature probes the extent to which metrics really are used that way.

No scientist's career can be summarized by a number. He or she spends countless hours troubleshooting experiments, guiding students and postdocs, writing or reviewing grants and papers, teaching, preparing for and organizing meetings, participating in collaborations, advising colleagues, serving on editorial boards and more — none of which is easily quantified.

But when that scientist is seeking a job, promotion or even tenure, which of those duties will be rewarded? Many scientists are concerned that decision-makers put too much weight on the handful of things that can be measured easily — for example, the number of papers they have published, the impact factor of the journals they have published in, how often their papers have been cited, the amount of grant money they have earned, or measures of published output such as the h-index.
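The h-index illustrates just how easy such measures are to compute: it is the largest number h such that a researcher has at least h papers that have each been cited at least h times. Below is a minimal sketch of that calculation in Python; the function name and example citation counts are invented for illustration and are not drawn from Nature's poll.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still supports a larger h
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4:
# four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```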

Last month, 150 readers responded to a Nature poll designed to gauge how researchers believe such metrics are being used at their institutions, and whether they approve of the practice. Nature also contacted provosts, department heads and other administrators at nearly 30 research institutions around the world to see what metrics are being used, and how heavily they are relied on. The results suggest that there may be a disconnect between the way researchers and administrators see the value of metrics.

Three-quarters of those polled believe that metrics are being used in hiring and promotion decisions, and almost 70% believe that they are being used in tenure decisions and performance reviews (see 'Metrics perceptions'). When asked to rate how heavily they thought administrators relied on specific criteria for evaluation, poll respondents indicated that the most important measures were grants and income, number of publications, publication in high-impact journals and citations of published research.

A majority (63%) are unhappy about the way in which some of these measures are used (see 'No satisfaction'). "Too much emphasis is paid to these flawed, seemingly objective measures to assess productivity," wrote a biologist from the United States. Respondents also doubted that traditional, qualitative review counts for much. From a field of 34 criteria, "Review of your work by peers outside your department or institution" and "Letters of recommendation from people in your field" ranked tenth and twelfth, respectively — with 20–30% of respondents stating that their institutions placed no emphasis on these factors at all.

Yet in Nature's interviews, most administrators insisted that metrics don't matter nearly as much for hiring, promotion and tenure as the poll respondents seem to think. Some administrators said that they ignore citation-based metrics altogether when making such decisions, and instead rely largely on letters of recommendation solicited from outside experts in a candidate's field. "Outside letters basically trump everything," says Robert Simoni, chairman of the biology department at Stanford University in California.

That sentiment was echoed by academic administrators worldwide. "Metrics are not used a great deal," says Alex Halliday, head of the Mathematical, Physical and Life Sciences Division at the University of Oxford, UK. "The most important things are the letters, the interview and the CV, and our opinions of the papers published," he says.

"I don't look at impact factors" of the journals a candidate publishes in, says Kenichi Yoshikawa, dean of the Graduate School of Science at Japan's Kyoto University. "These usually highlight trendy papers, boom fields and recently highlighted topics. We at Kyoto don't want to follow boom."

Metrics are not wholly excluded, of course. Those 'qualitative' letters of recommendation sometimes bring in quantitative metrics by the back door. "We do not look at publication records or tell the reviewers to," says Yigong Shi, dean of the School of Life Sciences at Tsinghua University in Beijing. "But in reality, they do have an impact, because the reviewers will look at them."

Mixed messages

Administrators may also send mixed signals: metrics don't matter, except that they do. "Each year we collect the average performances of people across various different things: student evaluations of lectures, teaching loads, research income, paper output, h-indices," says Tom Welton, head of the chemistry department at Imperial College London. Welton insists that this information is reported back to researchers as a guideline, "not a hurdle that has to be leapt over to get a promotion". Nevertheless, the mere fact that such measurements are being made could give the impression that they are being relied on heavily.

At the Massachusetts Institute of Technology in Cambridge, Claude Canizares, vice-president for research and associate provost, says that "we pay very little attention, almost zero, to citation indices and counting numbers of publications". But, he says, "if someone has multiple publications in a higher-impact journal, it's like getting another set of letters — the peers that reviewed that paper gave it high marks".

A separate reason for the disparity is that the use of metrics can vary markedly between countries (see 'Around the world with metrics') — or even between disciplines.

Poll respondents and administrators agree that metrics have potential pitfalls. For example, 71% of respondents said that they were concerned that individuals at their institutions could manipulate the metrics, for instance by publishing several papers on the same basic work. Most deans and provosts seemed less concerned about that possibility, arguing that such practices were unlikely to slip past reviewers. But they were wary of the more insidious effects of using metrics.

"If you decide that publishing a large number of papers is important, then you've decided that's what quality is," says Gregory Taylor, dean of the Science Faculty at the University of Alberta in Edmonton, Canada. "That's always a very dangerous route to go down, because then you get people working to achieve by the formulae, which isn't a very good way to encourage people to use their imagination." Indeed, half the poll respondents said that they shaped their research behaviours on the basis of the metrics being used at their university. Although many of the altered behaviours given were fairly innocuous — for example, "work harder" — some had the potential to compromise scientific ideals. "It discourages me from doing important research work that may be of null association," said one respondent, a US postdoctoral fellow.

Breaking the old-boys' networks

Despite general dissatisfaction with the way in which metrics are being used, some poll respondents welcome them. Many said that they appreciated the transparency and objectivity that quantitative metrics could provide. "I prefer this to qualitative metrics," wrote one, a department head in chemistry and engineering from Europe. Others who were dissatisfied with the use of metrics at their institution said they felt that the metrics weren't being used enough or weren't being used consistently. "The metrics can be nullified at the college or provost level," complained a US professor of neuroscience.

If nothing else, says Welton, the use of quantitative measures can reassure young researchers that the institution is not perpetuating an old-boys' network, in which personal connections are valued over actual achievement.

Administrators who say that they do consider metrics in the decision-making process stress that they recognize the limitations of such measures in defining the career of an individual. Researchers in different fields and different specialities publish and cite at different rates. An intimate understanding of the fields — and, more importantly, the individuals being assessed — is crucial, they say. This ultimately makes the use of metrics, by necessity, more subjective.

Surprisingly, when poll respondents desire change, it is not necessarily a move away from quantitative metrics. When Nature gave respondents a list and asked them to choose the five criteria that they thought should be used to evaluate researchers, the most frequently chosen was "Publication in high-impact journals", followed by "Grants earned", "Training and mentoring students" and "Number of citations of published research". In other words, what respondents think they are being measured on roughly matches what they want to be measured on.

The challenge for administrators, it seems, is not to reduce their reliance on metrics, but to apply them with more clarity, consistency and transparency. "The citation index is one of those things that is interesting to look at, but if you use it to make hiring decisions or use it as a sole or main criterion, you're simply abrogating a responsibility to some arbitrary assessment," says Jack Dixon, vice-president and chief scientific officer of the Howard Hughes Medical Institute in Chevy Chase, Maryland. While he says that the institute eschews such metrics, he recognizes that they will continue to be used. "All decisions are based on various criteria. The thing you hope for is that the decisions are fair, and are based upon criteria that the reviewers know and understand." 

See Editorial, page 845, and metrics special at http://www.nature.com/metrics. Full results of the survey are available at http://go.nature.com/em7auj.
