Peer reviewers are four times more likely to give a grant application an “excellent” or “outstanding” score than a “poor” or “good” one when they have been chosen by the grant’s applicants, an analysis of Swiss funding applications has found.
The study, conducted at the Swiss National Science Foundation (SNSF), was completed in 2016, and the SNSF acted quickly on its findings by barring grant applicants from recommending referees.
The authors, who are affiliated with the SNSF, posted their results online at PeerJ Preprints on 19 March, and in their paper they call on other funders to reconsider their review processes.
“I think this practice should be abolished altogether,” says study co-author Anna Severin, a sociologist who studies peer review at the University of Bern. Other experts are also wary of the problems that author-picked peer reviewers might cause, but some question whether banning them altogether is the right step.
The study examined more than 38,000 reviews of nearly 13,000 SNSF grant applications, written by about 27,000 peer reviewers across all disciplines between 2006 and 2016. The authors found that reviewers nominated by applicants were more likely to award those applicants high evaluation scores than were referees chosen by the SNSF.
The study found that reviewers affiliated with non-Swiss institutions gave higher evaluation scores, on average, than those based in the country. Male reviewers gave higher scores than female reviewers did, and male applicants received higher scores than female applicants, although the difference was small. Academics aged over 60 received the best feedback, regardless of their gender.
Liz Allen, who is the director of strategic initiatives at the open-access publisher F1000, says that the latest study is robust, but notes that making a policy change based solely on its data is questionable. “This almost automatically assumes that the scores must be ‘too high’ and therefore biased instead of perhaps testing out who the reviewers were and whether there were reasons why the scores might have been higher,” says Allen, who is also the former head of evaluation at the UK biomedical funder Wellcome Trust.
Johan Bollen, who studies complex computer systems and networks at Indiana University Bloomington, says he sees benefits to both sides of the argument. Grant applicants or study authors “have important information with respect to the experts that are most suited to provide an in-depth and knowledgeable review of their proposal”. But it might create an opportunity for authors to bias the reviewing process, he adds.
A new system
Bollen has previously argued for a system in which all researchers are guaranteed some money, provided they anonymously allocate a fraction of their funding to researchers of their own choice. The goal would be to shift the focus from funding projects to funding people.
Funding agencies around the world take different approaches to choosing grant reviewers. The US National Science Foundation does consider nominated reviewers, as well as those whom applicants flag as unfit to evaluate their work. Applicants to the US National Institutes of Health, however, are not allowed to suggest potential reviewers.
A spokesperson for UK Research and Innovation, Britain’s central research funder, told Nature that the organization’s individual, topic-based research councils invite applicants to nominate prospective peer reviewers, but that suggested reviewers are not always used. When they are, the process also includes at least one additional referee, the spokesperson said.
Finding reviewers willing to referee papers or grant applications can also be a struggle, notes study co-author João Martins, a data scientist at the European Research Council Executive Agency in Brussels. A 2018 survey of more than 11,000 researchers worldwide found growing “reviewer fatigue”: journal editors must now invite, on average, more peer reviewers per manuscript to get each review completed.