When seeking promotion, defining success is everything
For academic researchers, demonstrating ‘quality’ and ‘impact’ is crucial, yet few agree on what these terms mean.
5 October 2021
An academic’s chance of promotion depends largely on the success of their research and its perceived quality, impact and prestige. But these terms mean different things to different researchers.
As a result, the people who decide on promotions are likely to apply different standards from those seeking them, undermining the validity of the process, concludes a new preprint study posted on the bioRxiv server.
In the survey, which has yet to be peer reviewed, Lesley Schimanski and colleagues asked 338 faculty members at 55 universities in Canada and the United States to define ‘high impact’, ‘high quality’ and ‘prestigious’ in their own words.
Almost half of those surveyed equated high-impact research with publishing in a journal with a high impact factor. (An impact factor quantifies the overall reach of a journal by the number of citations its articles tend to receive.) A paper’s influence on future research was cited by just 16% of respondents, while a similar proportion took high impact to mean scientific rigour.
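To make the parenthetical concrete: the most widely used version is the two-year journal impact factor, which can be sketched as below. The figures are made up for illustration and are not drawn from the study.

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor: citations received this year to
    items the journal published in the previous two years, divided by
    the number of citable items it published in that same window."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2021 to articles it published
# in 2019-2020, during which it published 400 citable articles.
print(impact_factor(1200, 400))  # 3.0
```

Note that this is a property of the journal as a whole: two papers in the same journal share one impact factor regardless of how often each is actually cited, which is the heart of the objection raised later in the article.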
Some 42.4% of respondents defined high-quality research as a paper that had gone through peer review. But, as with ‘high impact’, many other definitions were offered: impact factors came up again, with 11.5% of respondents making them part of their definition of quality.
For prestige, name recognition of the publishing journal ranked highest, cited by 42.7% of participants, followed again by the journal’s impact factor and its review process.
In fact, there was quite a bit of overlap between the three terms, says Schimanski, a neuroscientist at Capilano University in Canada. “Quality shouldn’t be confused with impact and prestige, but it seems to be.” The results suggest that academics aren’t singing from the same hymn sheet when deciding on who deserves to be promoted, she says.
Old-fashioned attitudes persist
Some of these findings are concerning, says David Moher, an epidemiologist at the University of Ottawa in Canada, and an author of the Hong Kong Principles, which offer five ways to assess research that don’t rely on narrow metrics.
“Impact factors aren’t a measure of a specific paper’s success,” he says. “They tell us nothing about prestige, quality or impact. It may be that we haven’t disseminated this message well. Why are people holding on to these outdated measures of prestige?”
Moher was surprised that when Schimanski and her colleagues stratified survey responses by demographic groups – such as gender, age, discipline and career stage – they found no real differences in attitude.
“That’s so unusual,” says Moher. “It’s possible they haven’t surveyed enough people to make this claim.”
Katharina Richter, a biologist at the University of Adelaide in Australia, was recently promoted from early-career fellow to senior lecturer. After working at universities in Australia, Belgium, Denmark, Germany, Switzerland and New Zealand, she says they all shared a generational divide in how prestige and impact are perceived.
On impact, for example, “the older generations only want to know how many papers you’ve published and what the journals’ impact factors were,” she says. “I enjoy talking to the media because I think outreach is important, but some of the older generations don’t get it. They say talking to the public doesn’t make you a better scientist and it’s a waste of time.”
Just 10.7% of scholars included in the new survey counted influence outside academia as part of their definition of impact. “I was disappointed to see that the definition of impact didn’t include much in the real world,” Schimanski says. “It’s quite disturbing.”
Schimanski says revising institutional guidelines to counter the lack of consensus on terms used in assessing research may not be the solution. “It may be about culture. Faculty and administrators should be talking about it, to make sure we all know what each other is thinking.”
She says part of the problem may be that metrics such as the impact factor are used as a shortcut by overloaded faculty members when evaluating the work of candidates for tenure. Workload pressures are widespread: in a survey of 5,888 academics in the United Kingdom, published in Studies in Higher Education in January 2020, 57% said they wanted professional help for anxiety and depression.
Moher, meanwhile, calls for open science to be taken more seriously. “There’s almost nothing in these results about open-science practices, such as registering clinical trials, transparent reporting of results and full data sharing,” he says. “You can easily measure these things as quality indicators.”