
Volume 7 Issue 11, November 2023

Navigating the AI frontier

The rapid development of generative AI has brought about a paradigm shift in content creation, knowledge representation and communication. This hot generative AI summer has created a lot of excitement, as well as disruption and concern. This issue features a Focus on the new opportunities that AI tools offer for science and society. Our authors also confront the numerous challenges that intelligent machines pose and explore strategies to tackle them.


Cover image: Bethany Vukomanovic. Cover design: Bethany Vukomanovic.

Editorial

  • Although artificial intelligence (AI) was already ubiquitous, the recent arrival of generative AI has ushered in a new era of possibilities as well as risks. This Focus explores the wide-ranging impacts of AI tools on science and society, examining both their potential and their pitfalls.

    Editorial



Comment & Opinion

  • Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.

    • Stefan Feuerriegel
    • Renée DiResta
    • Nicolas Pröllochs
    Comment
  • Algorithms are designed to learn user preferences by observing user behaviour. As a result, algorithms fail to reflect true user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.

    • Carey K. Morewedge
    • Sendhil Mullainathan
    • Jens O. Ludwig
    Comment
  • Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.

    • Brent Mittelstadt
    • Sandra Wachter
    • Chris Russell
    Comment
  • If mistakes are made in clinical settings, patients suffer. Artificial intelligence (AI) generally — and large language models specifically — are increasingly used in health settings, but the way that physicians use AI tools in this high-stakes environment depends on how information is delivered. AI toolmakers have a responsibility to present information in a way that minimizes harm.

    • Marzyeh Ghassemi
    Comment
  • State-of-the-art generative artificial intelligence (AI) can now match humans in creativity tests and is at the cusp of augmenting the creativity of every knowledge worker on Earth. We argue that enriching generative AI applications with insights from the psychological sciences may revolutionize our understanding of creativity and lead to increasing synergies in human–AI hybrid intelligent interfaces.

    • Janet Rafner
    • Roger E. Beaty
    • Jacob Sherson
    Comment
  • The rise of generative AI requires a research agenda grounded in the African context to determine locally relevant strategies for its development and use. With a critical mass of evidence on the risks and benefits that generative AI poses to African societies, the scaled use of this new technology might help to reduce rising global inequities.

    • Rachel Adams
    • Ayantola Alayande
    • Davy K. Uwizera
    Comment
  • The current debate surrounding the use and regulation of artificial intelligence (AI) in Brazil has social and political implications. We summarize these discussions, advocate for balance in the current debate around AI and fake news, and caution against preemptive AI regulation.

    • Cristina Godoy B. de Oliveira
    • Fabio G. Cozman
    • João Paulo C. Veiga
    Comment

Reviews

  • Artificial intelligence tools and systems are increasingly influencing human culture. Brinkmann et al. argue that these ‘intelligent machines’ are transforming the fundamental processes of cultural evolution: variation, transmission and selection.

    • Levin Brinkmann
    • Fabian Baumann
    • Iyad Rahwan
    Perspective
