Comment

  • We all care about effect sizes. Yet, traditional ways of evaluating them (P < 0.05 and generic benchmarks) are failing us. We propose two paths forward: setting better, contextualized benchmarks or — more radically — letting go of benchmarks altogether. Both paths point to adjusted expectations, more detailed reporting and slow science.

    • Friedrich M. Götz
    • Samuel D. Gosling
    • Peter J. Rentfrow
  • Large language models can generate sophisticated text or code with little input from a user, which has the potential to impoverish our own writing and thinking skills. We need to understand the effect of this technology on our cognition and to decide whether this is what we want.

    • Richard Heersmink
  • Given the increasing sophistication of virtual reality systems in providing immersive nature experiences, there is the potential for analogous health benefits to those that arise from real nature experiences. We call for research to better understand the human–nature–technology interaction to overcome potential pitfalls of the technology and design tailored virtual experiences that can deliver health outcomes and wellbeing across society.

    • Violeta Berdejo-Espinola
    • Renee Zahnow
    • Richard A. Fuller
  • Effectively engaging with large language models is becoming increasingly vital as they proliferate across research landscapes. This Comment presents a practical guide for understanding their capabilities and limitations, along with strategies for crafting well-structured queries, to extract maximum utility from these artificial intelligence tools.

    • Zhicheng Lin
  • Being able to deliver a persuasive and informative talk is an essential skill for academics, whether speaking to students, experts, grant funders or the public. Yet formal training on how to structure and deliver an effective talk is rare. In this Comment, we give practical tips to help academics to give great talks to a range of different audiences.

    • Veronica M. Lamarche
    • Franki Y. H. Kung
    • Thalia Wheatley
  • Much well-designed and preregistered research is conducted but never published. The reasons for these studies ending up in the ‘file drawer’ are varied. Making this research public would help us all to do better science.

    • Daniël Lakens
    • Eline N. F. Ensinck
  • Belonging is an essential part of human identity. But with belonging comes ‘otherness’ — the tendency to label ‘others’ on the basis of gender, race, ethnicity, religion, ability or some other dimension. To advance science, we need to recognize how otherness affects research and implement interventions to overcome the biases that it creates.

    • Jane L. Delgado
    • Rueben C. Warren
  • The importance of reproducible scientific practices is widely acknowledged. However, limited resources and lack of external incentives have hindered their adoption. Here, we explore ways to promote reproducible science in practice.

    • Josefina Weinerova
    • Rotem Botvinik-Nezer
    • Roni Tibon
  • Political polarization leads to distrust. In universities, this can lead to conflict or silence in classes and hinder learning and engagement. Faculty members and leaders can promote depolarization by encouraging constructive dialogue in and out of class, cultivating viewpoint diversity within boundaries and expanding civic spaces.

    • Sigal Ben-Porath
  • Biobanks have emerged as valuable resources for studying behavioural and social genomics, but are not representative of global populations. Thus, current research findings do not generalize across populations and exacerbate knowledge and health inequalities. We call on researchers, publishers and funders to address barriers to biobank diversity.

    • Yixuan He
    • Alicia R. Martin
  • The use of typological conceptions of race in science is not based in evidence. A recent report from the National Academies of Sciences, Engineering, and Medicine, USA clarifies how human populations should be described in genetics and genomics research. It makes twelve recommendations that are highly relevant to behavioural genetics.

    • Joseph Graves Jr
  • Most scientific prizes and medals are named after men, and most of these are also awarded to men. The very few awards named after women, or not named after a person at all, are more frequently awarded to women, although gender parity among recipients is still not achieved. We call on the scientific community to rethink the naming of academic awards, medals and prizes and their nomination and selection criteria, and to diversify awarding committees and procedures to ensure greater inclusivity.

    • Katja Gehmlich
    • Stefan Krause
  • Large language models can be construed as ‘cognitive models’, scientific artefacts that help us to understand the human mind. If made openly accessible, they may provide a valuable model system for studying the emergence of language, reasoning and other uniquely human behaviours.

    • Michael C. Frank
  • Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.

    • Anthony Chemero
  • Algorithms are designed to learn user preferences by observing user behaviour. As a result, algorithms fail to reflect user preferences when psychological biases affect user decision-making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.

    • Carey K. Morewedge
    • Sendhil Mullainathan
    • Jens O. Ludwig
  • The current debate surrounding the use and regulation of artificial intelligence (AI) in Brazil has social and political implications. We summarize these discussions, advocate for balance in the current debate around AI and fake news, and caution against preemptive AI regulation.

    • Cristina Godoy B. de Oliveira
    • Fabio G. Cozman
    • João Paulo C. Veiga
  • Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. To ensure our use of LLMs does not degrade science, we must use them as zero-shot translators: to convert accurate source material from one form to another.

    • Brent Mittelstadt
    • Sandra Wachter
    • Chris Russell
  • State-of-the-art generative artificial intelligence (AI) can now match humans in creativity tests and is at the cusp of augmenting the creativity of every knowledge worker on Earth. We argue that enriching generative AI applications with insights from the psychological sciences may revolutionize our understanding of creativity and lead to increasing synergies in human–AI hybrid intelligent interfaces.

    • Janet Rafner
    • Roger E. Beaty
    • Jacob Sherson
  • Generative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.

    • Stefan Feuerriegel
    • Renée DiResta
    • Nicolas Pröllochs