Skip to main content

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

You are viewing this page in draft mode.

Large language models associate Muslims with violence

Large language models, which are increasingly used in AI applications, display undesirable stereotypes such as persistent associations between Muslims and violence. New approaches are needed to systematically reduce the harmful bias of language models in deployment.

Access options

Rent or Buy article

Get time limited or full article access on ReadCube.

from$8.99

All prices are NET prices.

Fig. 1: GPT-3 exhibits Muslim–violence bias.
Fig. 2: Debiasing GPT-3 completions.

References

  1. 1.

    Mikolov, T., Chen, K., Corrado, G. & Dean, J. in Proc. International Conference on Learning Representations (ICLR, 2013).

  2. 2.

    Dai, A. M. & Le, Q. V. in Advances in Neural Information Processing Systems Vol. 28, 3079–3087 (NeurIPS, 2015).

  3. 3.

    Brown, T. et al. in Advances in Neural Information Processing Systems Vol. 33, 1877–1901 (NeurIPS, 2020).

  4. 4.

    Kitaev, N., Kaiser, L. & Levskaya, A. in Proc. International Conference on Learning Representations (ICLR, 2020).

  5. 5.

    Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V. & Kalai, A. T. in Advances in Neural Information Processing Systems Vol. 29, 4349–4357 (NeurIPS, 2016).

  6. 6.

    Nadeem, M., Bethke, A. & Reddy, S. Preprint at https://arxiv.org/abs/2004.09456 (2020).

  7. 7.

    Sheng, E., Chang, K.-W., Natarajan, P. & Peng, N. in Proc. Conference on Empirical Methods in Natural Language Processing 3407–3412 (ACL, 2019).

  8. 8.

    Bordia, S. & Bowman, S. R. in Proc. Conference of the North American Chapter of the Association for Computational Linguistics (ACL, 2019).

  9. 9.

    Lu, K., Mardziel, P., Wu, F., Amancharla, P. & Datta, A. in Logic, Language, and Security (eds Nigam, V. et al.) 189–202 (Springer, 2020).

  10. 10.

    Lewis, M. et al. in Proc. 58th Annual Meeting of the Association for Computational Linguistics 7871–7880 (ACL, 2020).

  11. 11.

    Wallace, E., Feng, S., Kandpal, N., Gardner, M. & Singh, S. in Proc. Conference on Empirical Methods in Natural Language Processing (EMNLP) 2153–2162 (ACL, 2019).

  12. 12.

    Qian, Y., Muaz, U., Zhang, B. & Hyun, J. W. Preprint at https://arxiv.org/abs/1905.12801 (2019).

  13. 13.

    Bender, E. M., Gebru, T., McMillan-Major, A. & Mitchell, S. in ACM Conference on Fairness, Accountability, and Transparency 610–623 (ACM, 2021).

  14. 14.

    Li, X. L. & Liang, P. Preprint at https://arxiv.org/abs/2101.00190 (2021).

Download references

Acknowledgements

We thank A. Abid, A. Abdalla, D. Khan, and M. Ghassemi for the helpful feedback on the manuscript and experiments. J.Z. is supported by NSF CAREER 1942926.

Author information

Affiliations

Authors

Corresponding author

Correspondence to James Zou.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Machine Intelligence thanks Arvind Narayaran for their contribution to the peer review of this work.

Supplementary information

Supplementary Information

Supplementary discussions A–C

Rights and permissions

Reprints and Permissions

About this article

Verify currency and authenticity via CrossMark

Cite this article

Abid, A., Farooqi, M. & Zou, J. Large language models associate Muslims with violence. Nat Mach Intell 3, 461–463 (2021). https://doi.org/10.1038/s42256-021-00359-2

Download citation

Search

Quick links