Artificial intelligence systems copy and amplify existing societal biases, a problem that is by now widely acknowledged and studied. But is current research on gender bias in natural language processing actually moving towards a resolution, asks Marta R. Costa-jussà.
Acknowledgements
This work is supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (MINECO/ERDF, EU) through the Ramón y Cajal programme.
Cite this article
Costa-jussà, M.R. An analysis of gender bias studies in natural language processing. Nat Mach Intell 1, 495–496 (2019). https://doi.org/10.1038/s42256-019-0105-5