Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.
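In practice, "zero-shot translation" means grounding the model entirely in vetted source material and instructing it only to change the form, not the content. Below is a minimal sketch of how such a prompt might be framed; the function name, prompt wording, and example text are our own illustration and are not taken from the article or any particular API.

```python
def build_translation_prompt(source_text: str, target_format: str) -> list[dict]:
    """Frame the task as zero-shot translation: the model may only
    restructure the supplied source material, never add new claims."""
    system = (
        "You are a translator, not an oracle. Rewrite the user's source "
        f"text as {target_format}. Use only information present in the "
        "source; if something is missing, say that it is missing rather "
        "than inventing it."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": source_text},
    ]

# Example: converting a verified technical statement into lay language.
messages = build_translation_prompt(
    "Aspirin irreversibly inhibits cyclooxygenase-1.",
    "a plain-language summary for patients",
)
```

The resulting message list could be passed to any chat-style LLM endpoint; the point is that the accurate source text travels with the request, so the model's job is conversion rather than open-ended generation.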
Relevant articles
Open Access articles citing this article:
- Large language models as decision aids in neuro-oncology: a review of shared decision-making applications. Journal of Cancer Research and Clinical Oncology (19 March 2024)
Acknowledgements
This work has been supported through research funding provided by the Wellcome Trust (grant no. 223765/Z/21/Z), Sloan Foundation (grant no. G-2021-16779), the Department of Health and Social Care, and Luminate Group to support the Trustworthiness Auditing for AI project and Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford. The funders had no role in the decision to publish or the preparation of this manuscript.
Ethics declarations
Competing interests
B.M. and S.W. declare no competing interests. C.R. was also an employee of Amazon Web Services during part of the writing of this article. He did not contribute to this article in his capacity as an Amazon employee.
About this article
Cite this article
Mittelstadt, B., Wachter, S. & Russell, C. To protect science, we must use LLMs as zero-shot translators. Nat Hum Behav 7, 1830–1832 (2023). https://doi.org/10.1038/s41562-023-01744-0