  • Comment

To protect science, we must use LLMs as zero-shot translators

Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.
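
To make the distinction concrete, the sketch below contrasts the two patterns of use in Python. It is illustrative only and not taken from the Comment: call_llm is a hypothetical stand-in for whichever chat-completion client is used, and the prompts are assumptions for demonstration. In the risky pattern the model is asked to supply facts itself; in the zero-shot translation pattern the verified content is supplied by the user and the model is asked only to convert its form.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to an LLM API; returns the reply text."""
    raise NotImplementedError("Wire this to your preferred LLM client.")

def ask_for_facts(question: str) -> str:
    # Risky use: the model is the source of the facts, so a fluent but
    # false answer is indistinguishable from a correct one.
    return call_llm(f"Answer the following question:\n{question}")

def translate_source(source_text: str, target_form: str) -> str:
    # Zero-shot translation: accurate source material is supplied in the
    # prompt; the model is asked only to convert it from one form to another.
    prompt = (
        f"Rewrite the source text below as {target_form}. "
        "Use only information contained in the source text and "
        "do not add facts that are not present in it.\n\n"
        f"Source text:\n{source_text}"
    )
    return call_llm(prompt)

# Example: convert a verified abstract into a plain-language summary,
# rather than asking the model what the study found.
# summary = translate_source(verified_abstract, "a plain-language summary")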

Fig. 1: Two hypothetical use cases for LLMs based on real prompts and responses demonstrate the effect of inaccurate responses on user beliefs.

Acknowledgements

This work has been supported through research funding provided by the Wellcome Trust (grant no. 223765/Z/21/Z), Sloan Foundation (grant no. G-2021-16779), the Department of Health and Social Care, and Luminate Group to support the Trustworthiness Auditing for AI project and Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford. The funders had no role in the decision to publish or the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Brent Mittelstadt.

Ethics declarations

Competing interests

B.M. and S.W. declare no competing interests. C.R. was also an employee of Amazon Web Services during part of the writing of this article. He did not contribute to this article in his capacity as an Amazon employee.

About this article

Cite this article

Mittelstadt, B., Wachter, S. & Russell, C. To protect science, we must use LLMs as zero-shot translators. Nat Hum Behav 7, 1830–1832 (2023). https://doi.org/10.1038/s41562-023-01744-0
