A preliminary study suggests that code edits by female software developers are more successful — except when their gender is known.
Female software developers have their code contributions accepted more frequently by the open-source software repository GitHub than men do, according to a preprint that attracted much attention last week on social media. But this held only when the contributor’s gender was not obvious from their GitHub profile page. When gender was made clear, the acceptance rate for women fell to slightly below that for men, say researchers who analysed data on the activity of more than 1.4 million users of GitHub. Morgan Ernest, an ecologist at the University of Florida in Gainesville, was among those who shared a link to the paper on Twitter.
But other researchers questioned whether the study, posted at PeerJ PrePrints on 9 February [1], definitively demonstrates gender bias.
On GitHub, software developers can propose modifications to existing software by submitting ‘pull requests’. Emerson Murphy-Hill, a computer scientist at North Carolina State University in Raleigh, and his co-authors extracted public information from more than 1.4 million GitHub user profiles that had associated Google+ accounts. The authors used names, reported genders and profile pictures from the Google+ accounts to infer the gender of coders. The researchers then compared the pull-request acceptance rates of users whose genders were obvious from their GitHub profile pages with those of users whose genders were unclear.
Murphy-Hill and his colleagues found that women’s pull requests are accepted at a higher rate than men’s across the ten most common programming languages, and that women’s changes usually consist of more lines of code. Whether the team compared GitHub beginners making their first pull request or more experienced users, women’s acceptance rates remained higher than men’s. The data suggest that women coders on GitHub are more competent than men, says Murphy-Hill.
On GitHub, pull requests can be proposed by authorized collaborators, or ‘insiders’, who are probably known to the gatekeepers of each software package. The authors found little difference in acceptance rates between gender-neutral and gender-identifiable insiders. “Bias probably plays less of a role when people already know you,” says Murphy-Hill.
However, for ‘outsiders’ — GitHub users who are not authorized collaborators on a software package — the data showed a different trend. The acceptance rate for women coders was 71.8% when they used gender-neutral profiles, but dropped to 62.5% when their gender was identifiable. Men with gender-neutral profiles had an acceptance rate of around 69%, which declined to 63.3% for those whose gender was clear.
“The difference is statistically significant, but whether the difference is substantial is another question that’s open for interpretation,” notes Murphy-Hill.
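Murphy-Hill’s distinction between statistical and practical significance can be illustrated with a standard two-proportion z-test. The sample sizes below are hypothetical (the actual counts are not reported here); the sketch only shows how, with large samples, even a modest gap in acceptance rates becomes statistically significant while remaining small in absolute terms.

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical sample sizes, chosen only so the rates match those reported
# for women ‘outsiders’: 71.8% acceptance when gender-neutral,
# 62.5% when gender is identifiable.
z, p = two_proportion_z(7180, 10000, 6250, 10000)
print(f"z = {z:.1f}, significant = {p < 0.001}")
```

With ten thousand pull requests per group the 9.3-percentage-point gap is overwhelmingly significant, which is exactly why significance alone does not settle whether the difference is substantial.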
Cassidy Sugimoto, an information scientist at Indiana University Bloomington, who studies gender inequality in research and publishing, says that the study does not really demonstrate gender bias because it is not clear whether the people rejecting pull requests know the gender of the contributors. “If the authors want to demonstrate bias due to gender, they must show that gender was known or inferred by those accepting the pull request,” she says. “None of the initial analyses demonstrate this.”
Murphy-Hill admits that it is difficult to know whether those considering the code contributions were aware of the contributors’ gender. “Did they actually perceive those signals?” he says. “We assume they did.”
A commenter on the PeerJ PrePrints website noted that there are many reasons why a pull request could be rejected: the code could be a duplicate, or it may not be appropriate for the project. Murphy-Hill says that it is impossible to isolate and account for all the reasons why a pull request might be accepted or rejected. “Only an expert human can judge [this], and even then, experts’ interpretation can be wrong. And in the interpretation there’s room for bias,” he adds.
Randal Olson, an artificial-intelligence researcher at the University of Pennsylvania in Philadelphia, spotted a different problem with the paper, which he pointed out on Twitter.
Olson was criticizing a bar chart in the paper showing the differences in pull-request acceptance rates between men and women, in which the y-axis begins at 60% instead of zero. Olson said in an interview that bar charts that do not start at zero visually distort data and exaggerate differences in the displayed values, calling this a “common mistake” in data visualization that misleads viewers.
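The distortion Olson describes can be quantified: when the axis is truncated, the ratio of the drawn bar heights no longer matches the ratio of the underlying values. A minimal sketch, using the outsider acceptance rates reported above:

```python
def apparent_ratio(value_a, value_b, baseline=0.0):
    """Ratio of drawn bar heights when the y-axis starts at `baseline`."""
    return (value_a - baseline) / (value_b - baseline)

# Outsider acceptance rates (%) when gender is identifiable.
men, women = 63.3, 62.5

true_ratio = apparent_ratio(men, women)               # axis starting at 0
truncated = apparent_ratio(men, women, baseline=60)   # axis starting at 60

print(f"true ratio of values: {true_ratio:.3f}")   # bars nearly equal
print(f"ratio of drawn bars:  {truncated:.3f}")    # one bar looks far taller
```

With a zero baseline the bars differ by about 1%; with the baseline at 60%, one bar is drawn roughly a third taller than the other, even though the underlying values are almost identical.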
Murphy-Hill, however, says: “It’s nice to start at [zero] if you can, but then it makes it difficult to determine what the difference actually is.”
Still, researchers are encouraged that women’s code seems to be accepted at a higher rate than men’s. “To see that women do better than men in a field where there is a bias against women is very exciting because it questions social norms around this field,” says Adrian Letchford, a data scientist at the University of Warwick in Coventry, UK.
1. Terrell, J. et al. Preprint at PeerJ PrePrints http://doi.org/bchz (2016).
Singh Chawla, D. Researchers debate whether female computer coders face bias. Nature 530, 257 (2016). https://doi.org/10.1038/530257f