In many scientific fields, women publish fewer papers than men, are less likely to be listed as first authors1 and are less likely to receive glowing letters of recommendation from their advisers2. These disparities have decreased over time, but they persist. Now, a study finds that some journal editors might be inadvertently taking gender into account when selecting reviewers for papers.

The study’s authors found that, on average, male editors were much more likely to pick male reviewers, whereas female editors were more likely to pick other women. The bias was stronger among male editors, the researchers report in a study3 published on 21 March in eLife.

Previous papers have looked at gender bias in peer review, but most have focused on a single field. The latest study, by contrast, analysed 142 journals in the Frontiers family of publications, spanning science, health, engineering and the social sciences.

“The quality of scientific work is not determined by gender,” says Markus Helmer, a computational neuroscientist and the study’s lead author, who performed most of the work while at the Max Planck Institute in Göttingen, Germany. “So if gender is impacting which reviewers are chosen, that means journals are not getting the highest-quality reviewers.”

Jennifer Glass, a sociologist at the University of Texas at Austin, says this is similar to what happens on corporate boards. By limiting board members, or journal reviewers, to one gender, these groups can overlook some of the top candidates.

Helmer, now at Yale University in New Haven, Connecticut, was surprised to see that gender bias in peer review existed across the fields of science that he and his colleagues surveyed.

Gender gap

Because Frontiers journals make public the identities of their editors and reviewers, Helmer’s team was able to examine more than 9,000 editors and 43,000 reviewers of studies published between 2007 and 2015. The overall pattern among journal editors held even after the team controlled for the number of men and women who have published in each field, and the data also allowed the researchers to assess the bias of individual editors.

Helmer and his colleagues found that the bias was widespread among male editors, but for women, the overall effect seemed to be driven by just a few female editors. When the researchers removed those outliers from the data set, female editors’ preference for female reviewers disappeared.

Marcia McNutt, president of the US National Academy of Sciences and former editor-in-chief of Science, thinks that the data are solid, and she is happy to see the disparity documented. But she also sees a major gap in the study’s design: the data set shows only the numbers of men and women who actually reviewed papers, not how many were asked to perform a review. A previous study of geophysical journals found that women between the ages of 20 and 80 decline invitations to review papers more often than men do4.

Dana Britton, a sociologist at Rutgers University in New Brunswick, New Jersey, also points to this gap. “They leave out any consideration of people’s willingness to respond to review requests,” she says. “So, it could be that the initial pool of choices is more diverse than the ultimate pool of reviewers.”

The people you know

Helmer and his colleagues suggest that the editors’ preference for reviewers of their own gender could stem from differences in the way that men and women construct their social networks, or from a supposed innate human tendency to associate with people similar to themselves. They also suggest that some female editors might be attempting to make their field more egalitarian by deliberately picking female reviewers.

McNutt thinks that such bias might have less to do with human nature and more to do with social networks. “I have my network of go-to scientists, and most of them are women,” she says. “Women scientists also tend to mentor women students, and that expands their network.”

Britton agrees, noting that full professors in the United States and Europe tend to be men. “These men are more likely to know each other, and to consider each other experts in their field,” she explains. So biases in peer review could be the result of existing disparities in academia.