Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.
This is a preview of subscription content, access via your institution
Relevant articles
Open Access articles citing this article.
-
Potential to perpetuate social biases in health care by Chinese large language models: a model evaluation study
International Journal for Equity in Health Open Access 15 July 2025
-
Large language models versus traditional textbooks: optimizing learning for plastic surgery case preparation
BMC Medical Education Open Access 01 July 2025
-
A survey on multilingual large language models: corpora, alignment, and bias
Frontiers of Computer Science Open Access 03 April 2025
Access options
Access Nature and 54 other Nature Portfolio journals
Get Nature+, our best-value online-access subscription
27,99 € / 30 days
cancel any time
Subscribe to this journal
Receive 12 digital issues and online access to articles
121,22 € per year
only 10,10 € per issue
Buy this article
- Purchase on SpringerLink
- Instant access to full article PDF
Prices may be subject to local taxes which are calculated during checkout


References
Mikolov, T., Chen, K., Corrado, G. & Dean, J. in Proc. International Conference on Learning Representations (ICLR, 2013).
Dai, A. M. & Le, Q. V. in Advances in Neural Information Processing Systems Vol. 28, 3079–3087 (NeurIPS, 2015).
Brown, T. et al. in Advances in Neural Information Processing Systems Vol. 33, 1877–1901 (NeurIPS, 2020).
Kitaev, N., Kaiser, L. & Levskaya, A. in Proc. International Conference on Learning Representations (ICLR, 2020).
Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V. & Kalai, A. T. in Advances in Neural Information Processing Systems Vol. 29, 4349–4357 (NeurIPS, 2016).
Nadeem, M., Bethke, A. & Reddy, S. Preprint at https://arxiv.org/abs/2004.09456 (2020).
Sheng, E., Chang, K.-W., Natarajan, P. & Peng, N. in Proc. Conference on Empirical Methods in Natural Language Processing 3407–3412 (ACL, 2019).
Bordia, S. & Bowman, S. R. in Proc. Conference of the North American Chapter of the Association for Computational Linguistics (ACL, 2019).
Lu, K., Mardziel, P., Wu, F., Amancharla, P. & Datta, A. in Logic, Language, and Security (eds Nigam, V. et al.) 189–202 (Springer, 2020).
Lewis, M. et al. in Proc. 58th Annual Meeting of the Association for Computational Linguistics 7871–7880 (ACL, 2020).
Wallace, E., Feng, S., Kandpal, N., Gardner, M. & Singh, S. in Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP) 2153–2162 (ACL, 2019).
Qian, Y., Muaz, U., Zhang, B. & Hyun, J. W. Preprint at https://arxiv.org/abs/1905.12801 (2019).
Bender, E. M., Gebru, T., McMillan-Major, A. & Mitchell, S. in ACM Conference on Fairness, Accountability, and Transparency 610–623 (ACM, 2021).
Li, X. L. & Liang, P. Preprint at https://arxiv.org/abs/2101.00190 (2021).
Acknowledgements
We thank A. Abid, A. Abdalla, D. Khan, and M. Ghassemi for the helpful feedback on the manuscript and experiments. J.Z. is supported by NSF CAREER 1942926.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Machine Intelligence thanks Arvind Narayaran for their contribution to the peer review of this work.
Supplementary information
Supplementary Information
Supplementary discussions A–C
Rights and permissions
About this article
Cite this article
Abid, A., Farooqi, M. & Zou, J. Large language models associate Muslims with violence. Nat Mach Intell 3, 461–463 (2021). https://doi.org/10.1038/s42256-021-00359-2
Published:
Issue Date:
DOI: https://doi.org/10.1038/s42256-021-00359-2
This article is cited by
-
Large language models versus traditional textbooks: optimizing learning for plastic surgery case preparation
BMC Medical Education (2025)
-
Potential to perpetuate social biases in health care by Chinese large language models: a model evaluation study
International Journal for Equity in Health (2025)
-
Bias, machine learning, and conceptual engineering
Philosophical Studies (2025)
-
Manifestations of xenophobia in AI systems
AI & SOCIETY (2025)
-
The use of large language models as scaffolds for proleptic reasoning
Asian Journal of Philosophy (2025)