Grant reviewers award lower scores to proposals from women than to those from men, even when they don’t know the gender of the applicant, an analysis of thousands of submissions to the Bill & Melinda Gates Foundation has found1.
That’s because male and female scientists use different types of word on grant applications, according to the study, published by the US National Bureau of Economic Research.
The study finds that women are more likely to choose words specific to their field to describe their science, whereas men tend to use less precise terms. These broader terms seem to be preferred by the reviewers who decide how to distribute the cash, says the analysis — even though proposals containing those words don’t lead to better research outcomes.
The findings aren’t surprising, says Kuheli Dutt, who works in academic affairs and diversity at Columbia University in New York City. Dutt sees parallels with research showing that men are more likely to boast and overstate their performance in tests, whereas women are more likely to be cautious in their statements2. Using broad words might lead to sweeping claims, but narrow words might imply more cautious claims, she says.
Previous research has highlighted how differences in the way men and women use language can drive bias. For example, some studies show that the wording of some job adverts can put women off applying, and women in the geosciences are less likely than their male counterparts to receive a recommendation letter whose tone suggests that they are outstanding candidates3.
But this is the first time that ‘gendered’ language has been explored in grant applications, says Julian Kolev, who studies entrepreneurship at Southern Methodist University in Texas and led the work.
Kolev’s analysis looked at almost 7,000 proposals submitted to the Grand Challenges Explorations programme of the Bill & Melinda Gates Foundation between 2008 and 2017. The fund awards grants of between $100,000 and $1 million to address challenges in global health and is open to anyone through a two-page online application. Reviewers are blind to the gender of the applicants.
The researchers singled out the applications from US researchers and sought information from the Gates Foundation on applicants’ gender, discipline and where they work. The group also looked at each scientist’s publication record and grant history before and after the application.
The team found that women received significantly lower scores from reviewers than men did. This couldn’t be explained by the applicants’ experience, publication record or the gender of the reviewers. Instead, it seemed to be down to their communication style in the proposal.
The researchers found that men tended to use ‘broad’ words, such as “control”, “detection” and “bacteria”, more often. These were defined as words that appeared at the same rate in proposals regardless of the topic. By contrast, women favoured ‘narrower’ or more topic-specific terms, such as “community”, “oral” and “brain” (see ‘Broad language’). The authors linked broad words to higher review scores and narrow ones to lower scores.
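The study’s distinction — a word is ‘broad’ if its usage rate is roughly the same across topic areas, and ‘narrow’ if it clusters in particular topics — can be sketched as a simple frequency-variance calculation. The function below is an illustrative toy, not the paper’s actual text-analysis method (which is not detailed here); the name `topic_specificity` and the toy proposals are hypothetical.

```python
from collections import Counter

def topic_specificity(proposals_by_topic):
    """Score each word by how much its usage rate varies across topics.

    Under the study's definition, 'broad' words appear at roughly the
    same rate regardless of topic, so their across-topic variance is
    low; 'narrow', topic-specific words show high variance.

    proposals_by_topic: dict mapping topic name -> list of proposal texts.
    Returns: dict mapping word -> population variance of its per-topic
    relative frequency (higher = more topic-specific).
    """
    # Relative frequency of each word within each topic
    topic_freqs = {}
    for topic, texts in proposals_by_topic.items():
        counts = Counter(w for text in texts for w in text.lower().split())
        total = sum(counts.values())
        topic_freqs[topic] = {w: c / total for w, c in counts.items()}

    vocab = {w for freqs in topic_freqs.values() for w in freqs}
    n_topics = len(topic_freqs)
    specificity = {}
    for w in vocab:
        rates = [topic_freqs[t].get(w, 0.0) for t in topic_freqs]
        mean = sum(rates) / n_topics
        specificity[w] = sum((r - mean) ** 2 for r in rates) / n_topics
    return specificity
```

On a toy corpus where “control” appears in both a neuroscience and a microbiology topic but “brain” only in the first, “control” scores near zero (broad) while “brain” scores higher (narrow) — matching the example words the study reports.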
But funded applications that contained many broad words didn’t result in work that led to more publications and future grants, the researchers found. And when women secured funding, they generally outperformed men on these measures.
Closing the gap
The Gates Foundation says that it is committed to ensuring gender equality and that its grand-challenges programme uses blind reviews in an attempt to eliminate reviewer bias. It is also reviewing the results of this study.
Kolev suggests that grant reviewers could be trained to limit their sensitivity to communication styles. The make-up of the review panel also seems important. “We consistently show that female reviewers’ scores do not favour proposals from male applicants in the way that male reviewers’ scores do,” he notes. “So increasing the number of female reviewers is one potential way to mitigate the effects we find.”