When the director of a research institute asked his Twitter followers for a practical way to dig out promising candidates from the hundreds of applications sitting on his desk, the community responded in spades. The lively online discussion produced an abundance of ideas about the measures used to evaluate the quality of researchers' work, and about how those measures affect everything from funding to career paths.
Ewan Birney, co-director of the European Molecular Biology Laboratory’s European Bioinformatics Institute in Hinxton, UK, admitted on Twitter that he was procrastinating over how to prepare a shortlist from the applications, which together listed around 2,500 research papers. (Because the process is ongoing, Birney would not say exactly what the researchers were applying for.) He tweeted:
Yoav Gilad, a human geneticist at the University of Chicago in Illinois, tweeted:
Birney responded: “as nice as that sounds I just don’t think it’s practical.”
Birney was tasked with sifting applications that span a broad range of disciplines, from imaging to genomics. Many researchers judge papers on the basis of the journal in which they are published. But others, says Birney, frown on that practice because a journal's impact factor (the average number of citations that its articles receive) doesn't always reflect the quality of any individual article. He tweeted:
In a subsequent tweet, he noted: “Of course, even if I was using journal as proxy here it wouldn’t help me - everyone here has published “well”. <sigh>.”
Stephen Curry, a structural biologist at Imperial College London, suggested in a tweet: “Ask candidates to submit their best 4 papers & to explain the choice on one side of A4?”
Yet Richard Sever, co-founder of the preprint server bioRxiv and assistant director of Cold Spring Harbor Laboratory Press, responded by pointing out a pitfall with that suggestion:
He added in a later tweet that he thought Curry’s idea was still a good one.
Other academics suggested using metrics such as an article's citation count or a researcher's h-index, the largest number h for which that researcher has h papers cited at least h times each. Birney says that although he did use these tools to some extent, he was wary because citation practices vary across fields in ways that could skew comparisons in a cross-disciplinary sample.
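The h-index has a simple mechanical definition, which a short sketch makes concrete. This is an illustrative Python snippet, not a tool any of the researchers quoted here used; the function name and the example citation counts are invented for the example:

```python
def h_index(citations):
    """Return the largest h such that the researcher has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times: four papers have
# at least 4 citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The sketch also hints at why Birney was cautious: the same citation counts can mean very different things in fields with different publication and citation rates.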
Hugo Hilton, an immunologist at Stanford University in California, says that extra requirements on applicants create additional difficulties. “Application packets will become ever more bloated, and still the criteria for selection are likely to remain vague and subject to old-fashioned biases such as the number and impact factor of journals in which the candidate has published work.” He raised his concerns from the candidate’s perspective in a pair of tweets responding to Birney’s thread:
Birney adds that he thinks referees should have a degree of autonomy, and that it is not necessarily a problem if reviewers do not all follow the same procedures in their assessments. "I would prefer subjective but unbiased opinions, and five of them with different criteria than trying to unify the criteria so we all agree with the same answers." However, says Birney, transparency about those different procedures is essential.
One of the take-home messages from the discussion, Birney says, is that online tools for assessing research and job applicants remain underexplored. Given that lack of alternatives, Sever said in an interview, the use of journal names as a proxy for quality will be inevitable to some extent, because reviewers simply have too much to read.
As he waded through the applications, Birney offered some tips to applicants: “Just listing a bunch of journal titles in resume is redundant with publication list and definitely not du jour,” he suggested, adding: