The Editors at Communications Psychology invite submissions on the topic of AI and Human Decision Making.
Recent developments in AI, in particular the abilities of Large Language Models (LLMs), have astounded the public and led to speculation about the degree to which AI may inform, advise, or even replace human decision making. Questions surrounding the use of LLMs in human decision making span basic and applied research. How comparable is human decision making to that displayed by LLMs, and what can the differences tell us about human cognition? How can human information seeking and decision making be influenced, or even optimized, by LLMs? On a pragmatic or meta-scientific level, how might LLMs be used to facilitate research in the domain of decision making?
This curated Collection will bring together research from the computational cognitive sciences and social psychology that advances our understanding of the role LLMs can play in explaining human decision making, in informing it, or in identifying the conditions under which human–LLM interactions improve decisions.
When people receive advice written by large language models, they rate the competence of the source lower when they know the source is not human. Their preference for receiving advice from large language models increases with positive experience.
Large language models (LLMs), which can generate and score text in human-like ways, have the potential to advance psychological measurement, experimentation and practice. In this Perspective, Demszky and colleagues describe how LLMs work, concerns about using them for psychological purposes, and how these concerns might be addressed.
Koster, Balaguer et al. show that an AI mechanism can learn to produce a redistribution policy that humans prefer to alternatives in an incentivized game.