
  • Comment

Research can help to tackle AI-generated disinformation

Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and that may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.



Fig. 1: Role of behavioural science in the era of AI-generated disinformation.


Author information


Corresponding author

Correspondence to Stefan Feuerriegel.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Jennifer Stromer-Galley and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article


Cite this article

Feuerriegel, S., DiResta, R., Goldstein, J.A. et al. Research can help to tackle AI-generated disinformation. Nat Hum Behav 7, 1818–1821 (2023). https://doi.org/10.1038/s41562-023-01726-2
