Almost half of research-intensive universities consider journal impact factors when deciding whom to promote, a survey of North American institutions has found.
About 40% of institutions with a strong focus on research mention impact factors in documents used in the review, promotion and tenure process, according to the analysis, which examined more than 800 documents across 129 institutions in the United States and Canada.
The data imply that many universities are evaluating the performance of their staff using a metric that has been widely criticized as a crude and misleading proxy for the quality of scientists’ work.
“It suggests that those organizations may not have properly thought through what they are looking for in their faculty,” says Elizabeth Gadd, a research-policy manager at Loughborough University, UK.
The journal impact factor is a measure of the average number of citations that articles published in a specific journal have garnered over the previous two years. Publishers often promote the number as a signal of a journal’s quality. And many academics and review panels have turned to impact factors as a quick way of judging the quality, importance and reputation of a piece of research, or of the scientist who published it.
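The arithmetic behind the metric is simple: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch, using hypothetical numbers rather than any real journal’s figures:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received this year to
    articles from the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2019 to articles published
# in 2017-18, which totalled 300 citable items.
print(impact_factor(1200, 300))  # → 4.0
```

Note that the result describes the journal as a whole; it says nothing about how often any individual article, or author, is cited — which is the crux of the criticism.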
This irks many academics, who say that impact factors propagate an unhealthy research culture that is detrimental to science, and who want universities to move away from using the metric in the hiring and promotion process. Studies have shown that the impact factor is not very good at predicting a scientist’s performance, but it is not known how often employers use the metric in this way.
To get a handle on its prevalence, Erin McKiernan, a neurophysiologist at the National Autonomous University of Mexico in Mexico City, and colleagues collected and analysed 864 review, promotion and tenure documents from North American institutions, they report in a preprint published on 9 April in PeerJ Preprints1.
They ran the documents through software designed to flag up specific terms related to impact factors, and they read relevant passages in a subset of the documents to get an idea of how and why institutions used the metric.
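A term-flagging pass of this kind can be approximated with a regular-expression search over each document; the sketch below is illustrative only, and the search terms are an assumption rather than the study’s actual list or software.

```python
import re

# Hypothetical impact-factor-related terms, similar in spirit to the
# phrases the study searched for (the exact list is an assumption).
TERMS = [r"impact factor", r"high[- ]impact journal", r"journal impact"]
PATTERN = re.compile("|".join(TERMS), re.IGNORECASE)

def flag_mentions(document_text):
    """Return each sentence in the document that mentions a flagged term,
    so a reader can review the relevant passages in context."""
    sentences = re.split(r"(?<=[.!?])\s+", document_text)
    return [s for s in sentences if PATTERN.search(s)]

doc = ("Candidates should publish in high-impact journals. "
       "Teaching is evaluated separately.")
print(flag_mentions(doc))
# → ['Candidates should publish in high-impact journals.']
```

Returning whole sentences rather than bare match counts mirrors the study’s second step: reading the surrounding passage to judge how the institution actually uses the metric.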
Less than one-quarter of the institutions mentioned impact factor or a closely related term such as “high impact journal” in their documents. But this proportion rose to 40% for the 57 research-intensive universities included in the survey. By contrast, just 18% of universities that focused on master’s degrees mentioned journal impact factors (see ‘High impact’).
In more than 80% of the mentions at research-heavy universities, the language in the documents encouraged the use of the impact factor in academic evaluations. Only 13% of mentions at these institutions came with any cautionary words about the metric. The language also tended to imply that high impact factors were associated with better research: 61% of the mentions portrayed the impact factor as a measure of the quality of research, for example, and 35% stated that it reflected the impact, importance or significance of the work.
Tip of the iceberg
“We now have the numbers to show what is happening in academic evaluations,” says McKiernan. She says that she expected the proportion of institutions explicitly using the impact factor in these documents to be higher, but warns that the results might be only the “tip of the iceberg”.
There could be a larger set of terms used during evaluations that indirectly refer to the impact factor, she adds — phrases such as “top-tier journal” or “high-ranking journal”.
Stephen Curry, a structural biologist at Imperial College London, says that it is crucial for universities to come up with other ways of assessing staff. “Researchers deserve to be judged on the basis of what they have done, not simply where they have published — and to be given credit for the many contributions they make above and beyond the publication of research papers,” he says.