The rapid development of generative AI has brought about a paradigm shift in content creation, knowledge representation and communication. This hot generative AI summer has created a lot of excitement, as well as disruption and concern. This issue features a Focus on the new opportunities that AI tools offer for science and society. Our authors also confront the numerous challenges that intelligent machines pose and explore strategies to tackle them.
Although artificial intelligence (AI) was already ubiquitous, the recent arrival of generative AI has ushered in a new era of possibilities as well as risks. This Focus explores the wide-ranging impacts of AI tools on science and society, examining both their potential and their pitfalls.
Large language models are capable of impressive feats, but the job of scientific review requires more than the statistics of published work can provide.
In Japan, people express gratitude towards technology, and this helps them to achieve balance. Yet dominant narratives teach us that anthropomorphizing artificial intelligence (AI) is unhealthy. Our attitudes towards AI should not be built upon overarching universal models, argues Shoko Suzuki.
Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard for humans to detect and that may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.
Algorithms are designed to learn user preferences by observing user behaviour. As a result, algorithms fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.
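As a hypothetical sketch of this failure mode (an illustration, not an analysis from the paper), consider a learner that treats every click as revealed preference: when clicking is driven by an impulsive bias rather than by what the user actually values, the inferred preference tracks the bias. The item names and probabilities below are invented for the example.

```python
# Minimal sketch: a click-counting preference learner converges on the
# user's impulsive clicking bias, not on their true preference.
import random

random.seed(0)

TRUE_PREFERENCE = {"news": 0.8, "clickbait": 0.2}   # what the user values
IMPULSE_BIAS = {"news": 0.3, "clickbait": 0.7}      # what drives clicks

clicks = {"news": 0, "clickbait": 0}
for _ in range(10_000):
    item = random.choice(list(clicks))              # item shown to the user
    if random.random() < IMPULSE_BIAS[item]:        # click follows the bias
        clicks[item] += 1

total = sum(clicks.values())
learned = {k: round(v / total, 2) for k, v in clicks.items()}
print("inferred from behaviour:", learned)          # ~{'news': 0.3, 'clickbait': 0.7}
print("true preference:        ", TRUE_PREFERENCE)
```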
Large language models can be construed as ‘cognitive models’, scientific artefacts that help us to understand the human mind. If made openly accessible, they may provide a valuable model system for studying the emergence of language, reasoning and other uniquely human behaviours.
Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.
Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.
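The zero-shot-translator pattern can be sketched in a few lines: the verified facts travel inside the prompt, and the model is asked only to change their form, not to supply new ones. The openai client, the model name and the sample source text below are assumptions chosen for the example; any LLM endpoint would serve the same role.

```python
# Minimal sketch of using an LLM as a zero-shot translator: reformat
# accurate source material rather than asking the model for facts.
# Assumes `pip install openai` and an API key in the environment.
from openai import OpenAI

client = OpenAI()

source = (
    "The trial enrolled 412 participants; the treatment group showed a "
    "12% reduction in symptom scores versus placebo (p = 0.03)."
)  # hypothetical, already-verified source text

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Rewrite the user's text as a plain-language summary. "
                    "Use only facts present in the text; add nothing."},
        {"role": "user", "content": source},
    ],
)
print(response.choices[0].message.content)
```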
If mistakes are made in clinical settings, patients suffer. Artificial intelligence (AI) generally — and large language models specifically — are increasingly used in health settings, but the way that physicians use AI tools in this high-stakes environment depends on how information is delivered. AI toolmakers have a responsibility to present information in a way that minimizes harm.
State-of-the-art generative artificial intelligence (AI) can now match humans in creativity tests and is on the cusp of augmenting the creativity of every knowledge worker on Earth. We argue that enriching generative AI applications with insights from the psychological sciences may revolutionize our understanding of creativity and lead to increasing synergies in human–AI hybrid intelligent interfaces.
The rise of generative AI requires a research agenda grounded in the African context to determine locally relevant strategies for its development and use. With a critical mass of evidence on the risks and benefits that generative AI poses to African societies, the scaled use of this new technology might help to reduce rising global inequities.
The current debate surrounding the use and regulation of artificial intelligence (AI) in Brazil has social and political implications. We summarize these discussions, advocate for balance in the current debate around AI and fake news, and caution against preemptive AI regulation.
In this Perspective, the authors examine the psychological factors that shape attitudes towards AI tools, while also investigating strategies to overcome resistance when AI systems offer clear benefits.
Artificial intelligence tools and systems are increasingly influencing human culture. Brinkmann et al. argue that these ‘intelligent machines’ are transforming the fundamental processes of cultural evolution: variation, transmission and selection.
In this study of bird biodiversity data from across 195 US cities, Ellis-Soto et al. show that historical redlining is associated with increasing inequality in sampling. Historically redlined neighbourhoods remain the most undersampled areas.
In a study of 28 European Union member states, Wolfowicz et al. found that increased levels of terrorism-related arrests and convictions are associated with decreases in terrorism. However, evidence concerning the role of more severe punishment was mixed.
Using a set of experiments, the authors show that discrimination reduces the work effort both of those who are disadvantaged by it and of those who are advantaged by it.
This meta-analysis of the relationship between economic inequality and prosocial behaviour finds that estimates range from negative to positive but that, on average, higher economic inequality is associated with lower prosocial behaviour.
Ferguson et al. test the effectiveness of messages designed to increase rates of repeat blood donation and find that warm-glow feelings as a motivation for cooperation cool over time but can be reactivated.
The authors find that psychological responses towards representations of robots fall into three dimensions: positive, negative and competence. They also examine the individual differences that predict these responses.
Giron et al. provide empirical evidence that human development has much in common with the stochastic optimization algorithms widely used in machine learning, resolving ambiguities around commonly used analogies in developmental psychology.
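To make the analogy concrete, here is a minimal stochastic-optimization sketch (an illustration of the algorithmic idea, not the authors' analysis): a hill-climber whose random exploration decays over time, loosely mirroring a developmental shift from broad early exploration to later exploitation. The reward function and decay schedule are arbitrary choices for the example.

```python
# Minimal sketch: stochastic hill climbing with annealed exploration noise.
import random

random.seed(1)

def reward(x):
    return -(x - 3.0) ** 2          # a single peak at x = 3.0

x = -5.0                            # start far from the optimum
for step in range(1, 2001):
    noise = 5.0 / step ** 0.5       # exploration shrinks with 'age'
    candidate = x + random.gauss(0.0, noise)
    if reward(candidate) > reward(x):   # keep only improvements
        x = candidate

print(f"final estimate: {x:.2f} (optimum at 3.00)")
```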
In a series of human functional MRI studies, Zhang et al. find that the activation of two brain areas typically involved in language comprehension reflects working memory of social semantics rather than general semantic or syntactic processing.
Kutter et al. show that neurons in the human brain encode small numbers (up to 4) more precisely than large numbers, indicating a distinction between a small-number subitizing system and a large-number estimation system.
Fjell et al. analysed multiple large-scale longitudinal MRI datasets and found no evidence for an association between sleep duration and brain atrophy, suggesting that normal brains promote adequate sleep.
Producing a high-resolution global net migration dataset for 2000–2019, Niva et al. analyse how migration affects urban and rural population growth and show that socioeconomic factors are more strongly associated with migration than climatic ones.