Generative AI tools can quickly translate or summarize large volumes of complex information. This technology could revolutionize the way we communicate science, but there are many reasons for caution. We asked six experts about the potential and pitfalls of generative AI for science communication.
Acknowledgements
S.S.H. is part of the research programme DesCartes, which is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
Competing interests
J.W. is on the board of Consensus.app. The other authors declare no competing interests.
Cite this article
Alvarez, A., Caliskan, A., Crockett, M. J. et al. Science communication with generative AI. Nat. Hum. Behav. 8, 625–627 (2024). https://doi.org/10.1038/s41562-024-01846-3