
Viewpoint

Generative AI and science communication in the physical sciences

Advances in generative AI could democratize science communication by providing scientists with easy-to-use tools to help them communicate their work to different audiences. However, these tools are imperfect, and their output must be checked by experts. They can also be used maliciously to produce misinformation and disinformation. Seven researchers and science communicators weigh up the potential benefits of generative AI for science communication against its risks.



Acknowledgements

K.D. thanks her students on the MSc in Science Communication for their insightful comments in their essay assignments on the limitations of large language models. M.S.S. thanks M. Bischofberger and S.H. Kessler for valuable input.

Author information


Corresponding authors

Correspondence to Sibusiso Biyela, Kanta Dihal, Katy Ilonka Gero, Daphne Ippolito, Filippo Menczer, Mike S. Schäfer or Hiromi M. Yokoyama.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The contributors

Sibusiso Biyela is a science communicator and journalist who uses digital tools to translate complex research topics for different audiences in South Africa. He advocates for the decolonization of science and works on translating science into indigenous South African languages.

Kanta Dihal is a Lecturer in Science Communication at Imperial College London. Her research intersects science communication, literature and science, and science fiction. She focuses on stories about science and technology across cultures, and how they help us think about ethics and bias in new technologies.

Katy Ilonka Gero is a postdoctoral researcher at Harvard University specializing in the study of human–AI interaction, with a focus on technology for impactful writing and understanding the limits and capabilities of large language models.

Daphne Ippolito is an Assistant Professor at Carnegie Mellon University investigating the limitations and vulnerabilities of language model systems. She also explores ways in which human writers can use generative text models as creative tools.

Filippo Menczer is the Luddy Distinguished Professor of Informatics and Computer Science at Indiana University and Director of the Observatory on Social Media. He works on analysing and modelling the spread of information and misinformation in social networks and detecting and countering the manipulation of social media.

Mike S. Schäfer is a Professor of Science Communication at the University of Zurich and Director of the Center for Higher Education and Science Studies (CHESS). His work examines how science is communicated to the public, and he investigates public perceptions of science and technology, science communication and AI.

Hiromi Yokoyama is a Professor and Deputy Director of the Kavli Institute for the Physics and Mathematics of the Universe. With a research background in experimental particle physics, she investigates a broad range of topics concerning science and society, including science communication and the ethics of AI.


About this article


Cite this article

Biyela, S., Dihal, K., Gero, K.I. et al. Generative AI and science communication in the physical sciences. Nat Rev Phys 6, 162–165 (2024). https://doi.org/10.1038/s42254-024-00691-7

