Leading by example in open science

When I joined Nature Human Behaviour in 2020, one of my main motivations for swapping an academic career for professional editing was the journal’s dedication to publishing robust science and supporting open science practices. With a background in social psychology — arguably one of the fields most affected by the replication crisis — I was disillusioned with the state of the discipline. Here, I therefore want to highlight a positive example of open and replicable social psychological research that we published in 20211.

During the first peak of the COVID-19 pandemic, many governments issued lockdown mandates and social distancing policies, urging their citizens to avoid social gatherings, keep physical distance and wear masks to mitigate the spread of the SARS-CoV-2 virus. A key determinant of adherence to these guidelines is citizens’ trust in their political leaders.

In their Registered Report, Everett and colleagues1 presented participants from 22 countries with fictional leaders’ moral decisions to test how these increased or eroded trust, depending on the context. The authors found that leaders who endorsed instrumental harm (sacrificing some people to save many) were trusted less. By contrast, leaders who showed impartial beneficence (maximizing welfare for all) were trusted more.

The article by Everett et al.1 demonstrates the great value of Registered Reports2. They prevent ‘quick and dirty’ research that is condemned to end up in the file drawer if it yields null results or if unaddressable shortcomings are discovered during peer review. In stage 1, researchers receive referee feedback on their introduction, methodology and analysis plan, and can revise the design in response to reviewer comments; once editors and reviewers are satisfied with the revisions, the stage 1 protocol receives acceptance in principle, and only then does data collection begin. Importantly, publication of the stage 2 Registered Report does not depend on the direction of the results — whether a hypothesis is supported or not is irrelevant to the decision. Instead, it will be published as long as the authors adhered to the planned methodology and analysis, and the data meet the necessary quality checks.

Registered Reports are just one way to practise open science, and it is important to show that they work for large-scale projects. They prevent valuable research resources from being wasted, and avoidable confounds can be addressed before data collection. As a strong proponent of open science, I hope that examples such as this one inspire more researchers to see the value of Registered Reports and to use the format.

Samantha Antusch has been an editor at Nature Human Behaviour since 2020.

The power of private data

Early in my editorial career, I received a paper that immediately caught my attention owing to the unusual data involved3. In this work, the authors received access to detailed personnel records from the Metropolitan Police Service in London, UK, including information on allegations of misconduct and performance scores. The authors used this information to identify the effect of peer misconduct allegations by tracing employee moves among work groups.

Two things struck me about the dataset used in this paper: it was very valuable, and it was very sensitive. First, the study was only possible because the Metropolitan Police Service was willing to share the data for research purposes. This level of transparency is often lacking in studies of policing, and I hope that papers such as this can help to shift the norm towards more transparency. Second, the dataset contains individual-level data on sensitive topics and could not be ethically published alongside the paper. In light of this tension, we discussed with the authors how to strike a balance between ensuring transparency in research and protecting the individuals in the dataset, while also conforming to their agreement with the Metropolitan Police. The data are not public, but arrangements to view them locally can be made.

As a journal, we are committed to increasing transparency and openness in science. Sometimes, though, full public accessibility is neither possible nor desirable. Publishing only papers with fully public data risks excluding important, but sensitive, research. Research tackling difficult questions can be strengthened by bringing in unusual data sources, and I’m happy to highlight this paper as an example of what we can learn from nonpublic data.

Aisha Bradshaw has been an editor at Nature Human Behaviour since 2018.

Inheriting the wheel

I was privileged to work on some excellent papers from the fascinating field of cultural evolution — one of which stands out for me because of its combination of a genuinely interesting insight, an innovative design and an experiment that sounds, dare I say, fun4.

Two of the key things that have driven human success as a species are the use of refined tools and the passing of knowledge between generations. In their paper, Maxime Derex and colleagues4 tested two hypotheses as to how humans developed highly optimized tools, such as the bow and arrow or kayak. The cognitive niche hypothesis posits that humans perfected these tools because of their excellent ability to causally reason about a problem and find an optimal solution. The cultural niche hypothesis suggests instead that optimized tools can be created through many small improvements being passed from generation to generation — with no need for our forebears to accurately understand why those improvements worked.

The authors devised a neat experiment in which willing French university students were given five chances each to minimize the time it took for a wheel to travel down a 1-m-long track, by positioning weights on each of its four spokes. The participants knew that their final two attempts at configuring the wheel would be passed to the next person in their group. Participants were also shown pictures of different wheel configurations and asked which they thought would be quickest, to assess their causal understanding of the problem.

In line with the cultural niche hypothesis, the participants did improve their designs and increase the speed of the wheel across generations, but their causal understanding did not improve significantly. Even in a second condition in which participants formulated a written theory about the optimal wheel design for their successor, the outcome was the same. Intriguingly, inheriting an explicit theory may even have hindered participants’ experimentation with the wheel, and caused them to ignore other fruitful design options.

When looking at periods of human innovation, there is a tendency to focus on individual brilliance and cognitive leaps — but cultural accumulation and transmission may be equally important drivers.

John Carson was an editor at Nature Human Behaviour from 2016 to 2019.

Social tipping dynamics

The concept of social tipping has gained traction among scholars working on climate and other large-scale collective action problems, in part because it offers the hope of achieving bottom-up change in the face of government inaction. However, the potential for social tipping to initiate widespread change depends on nuanced contextual and sociocultural features. Two early papers in Nature Human Behaviour shed light on these dynamics, with important implications for policy-makers.

The tendency to conform to social norms can sustain or entrench the status quo, but it can also act as a powerful engine of change. When the size of a minority group committed to a counter-normative behaviour surpasses a certain threshold (a tipping point), it can trigger widespread behavioural change and the emergence of new social norms. Using an agent-based model to describe the management of a groundwater aquifer, Castilla-Rho et al.5 show that conformity with social norms can increase compliance with conservation policies and the maintenance of the common resource. They also show that the integrity of the aquifer is sensitive to tipping points in norms around water use and policy compliance. In particular, tipping points mask fragility: conservation near the threshold obscures how quickly the system could tip towards overuse, while overuse near the threshold gives the false impression that substantial policy efforts would be needed to restore sustainable use. In fact, while large efforts are needed when societies are far from tipping points and norms are entrenched, small changes in policies and values can tip communities near the threshold towards (or away from) sustainable practices.
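
To illustrate the tipping-point logic in its simplest form, here is a toy threshold model of norm adoption (my own minimal sketch, not the agent-based groundwater model of Castilla-Rho et al.5; the threshold distribution and all parameter values are arbitrary choices made for illustration):

```python
# Toy threshold model of norm tipping (illustrative only; not the model of ref. 5).
# Each agent adopts a counter-normative behaviour once the share of adopters
# exceeds its personal threshold; a committed minority adopts unconditionally.
import random

def eventual_adoption(committed_share, n=10_000, rounds=200, seed=0):
    random.seed(seed)
    committed = int(committed_share * n)
    # Personal thresholds clustered around 0.4 (an arbitrary illustrative choice)
    thresholds = [min(max(random.gauss(0.4, 0.1), 0.0), 1.0) for _ in range(n - committed)]
    share = committed / n
    for _ in range(rounds):
        adopters = committed + sum(1 for t in thresholds if t <= share)
        share = adopters / n
    return share

for cs in (0.05, 0.15, 0.25, 0.35):
    print(f"committed minority {cs:.0%} -> eventual adoption {eventual_adoption(cs):.0%}")
# Below the tipping point the norm barely spreads beyond the committed minority;
# above it, the same dynamics carry the whole population to the new norm.
```

The sharp jump between the lower and higher committed shares in this toy example is the tipping point; Castilla-Rho et al.5 find analogous threshold behaviour in norms around water use and policy compliance.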

The observation that circumscribed interventions can reverberate through a community has prompted researchers to consider how pro-environmental norms might be strategically seeded. However, the effectiveness of such approaches depends on whether tipping points exist. Efferson et al.6 use models to show that mundane features of a society — heterogeneity in preferences or in the tendency to conform, the proclivity of like-minded individuals to interact, and the relationship between group identities and existing norms — can dampen the potential for rapid social change. These observations have implications for policy design. For example, a localized intervention is unlikely to engender social tipping when a population is resistant to change. When the distribution of preferences neither favours nor disfavours change, decisions about intervention size and targeting matter. Interventions that target a random sample, as opposed to amenable or resistant individuals, may more reliably trigger tipping when preferences are heterogeneous. And, perhaps of greatest relevance for the polarized context of climate change, when social identities are tied to behaviours or when a behaviour is adopted by one group to differentiate itself from another, the link between identity and behaviour may need to be weakened before tipping can occur. While these findings may temper some of the excitement around social tipping, they also suggest approaches to policy-making that better account for real-world social complexity.

Sara Constantino was an editor at Nature Human Behaviour from 2016 to 2018.

Psychiatry in the wild

As I write this, the world is still mourning the loss of Dr Aaron Beck (18 July 1921–1 November 2021), known as the father of cognitive behavioural therapy. It thus seems appropriate that I am celebrating here a paper that offered a unique test of some of his ideas.

A key tenet in cognitive behavioural therapy theory is that emotional problems such as depression are associated with cognitive distortions — tendencies in thinking that can reinforce the emotional state. In particular, depression is thought to be associated with several unhelpful patterns of thinking, such as overgeneralization (the tendency to draw broad conclusions from a small number of examples).

Cognitive distortions have mostly been studied either in the clinic or in the laboratory. What Bathina et al. did in their 2021 paper was to reveal these distortions ‘in the wild’: on Twitter7.

The authors sampled people with depression on Twitter by searching for accounts that stated that they had been medically diagnosed with depression. A control group of matched accounts was also identified.

Bathina et al.7 then sought evidence of cognitive distortions by counting occurrences of word sequences (N-grams) chosen to be indicative of distortions. For overgeneralization, for instance, the terms included “all of them”, “all the time” and “I always”. Crucially, the N-grams were chosen not to be negative in themselves, and none of them referred to depression.
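
The counting step itself is easy to sketch. Below is a minimal, hypothetical illustration in Python; the phrase list is a tiny made-up subset rather than the curated set used by Bathina et al.7, and the published analysis covered many more distortion categories and compared groups statistically.

```python
# Minimal, hypothetical sketch of the counting step (not the authors' code).
# The phrase list is a tiny illustrative subset of overgeneralization N-grams.
import re
from collections import Counter

OVERGENERALIZATION_NGRAMS = ["all of them", "all the time", "i always"]

def count_distortion_ngrams(tweets, ngrams=OVERGENERALIZATION_NGRAMS):
    """Count how often each distortion-indicative N-gram appears in a set of tweets."""
    counts = Counter({phrase: 0 for phrase in ngrams})
    for tweet in tweets:
        text = tweet.lower()
        for phrase in ngrams:
            # Word-boundary match so that phrases are counted only as whole words.
            counts[phrase] += len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    return counts

tweets = ["They ignore me all the time.", "I always mess things up."]
print(count_distortion_ngrams(tweets))
# Counter({'all the time': 1, 'i always': 1, 'all of them': 0})
```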

The results showed that people with depression used more of the cognitive-distortion N-grams in their tweets. The fact that people with a self-reported diagnosis of depression overused these phrases was thus a nontrivial demonstration of Aaron Beck’s fundamental insight: that depression is not only a disorder of emotion, but also of thought.

Jamie Horder has been an editor at Nature Human Behaviour since 2020.

Opinions that make a difference

When I took on the role of launching Nature Human Behaviour, I wanted to create a multidisciplinary journal that stood for rigorous science that makes a difference in the real world. Over the years, we’ve published several important research papers that embody this vision. Equally valuable, however, has been the publication of commissioned nonresearch content that specifically supports these goals.

Two Perspectives helped to set the tone for the journal from our very first issue. “A manifesto for reproducible science”8, the product of a collaboration among ten metascientists, has become a textbook reference for the open science community since its publication. The manifesto went beyond enumerating the ways in which science has been failing: it distilled the steps and initiatives required to support credible science. These views and ideas are at the heart of the journal’s identity and have been key to the transformation of a reproducibility crisis into a credibility revolution in science.

In a second Perspective9, Duncan Watts argued that the way forward for the social sciences is to become more solution-oriented. Instead of developing irreconcilable ‘pet theories’, Watts advocated making the solution of practical problems the starting point for scientific enquiry in the social sciences, with theories then built to address them. Solution-oriented social science would require multidisciplinary collaboration at scale and would encourage greater investment in social scientific research.

Fast-forward four years: another Perspective10 demonstrated the power of multidisciplinary collaboration in addressing an unprecedented societal challenge of the new millennium — the COVID-19 pandemic. The Perspective brought together psychologists, sociologists, political scientists, economists, and communication and public policy experts, who captured key insights from the existing literature that could be used to inform the policy response to COVID-19. Written, peer reviewed and published in record time (soon after COVID-19 was declared a pandemic), this Perspective encapsulated the value of collaborative science in response to an urgent real-world crisis.

Taken together, these pieces embody for me the power of transformative thought leadership. I feel extremely privileged to have worked with their authors — as well as several others — over the past five years and I can’t wait to see what the next five years will bring.

Stavroula Kousta has been Chief Editor of Nature Human Behaviour since 2016.

Science that evokes joy

In July 2020, amidst a tempest of pandemic publishing, a thing of beauty arrived on my (virtual, home-office) desk. It was a manuscript11 that contained unforgettable images of ancient art, each painted by artists who lived hundreds of generations ago in northwest Australia, at a time of rising sea levels and shrinking coastlines.

Please do take a look at the paper — the motifs are stunning. Most are freehand drawings, and appear to be naturalistic, near-life-size representations of animals that we know today, including a possible kangaroo (DR016_01) and wallaby (DR015_04). The painting style is similar to that of older figurative animal motifs found in Indonesia12, suggesting cultural links. Traditional Owners of the site and surrounding area participated in the research, gave their consent for sampling, analysis and publication, and requested that site locations not be disclosed. We consider this evidence of respectful collaboration, which is crucial in science — particularly where heritage is concerned.

The dates of these paintings are exciting, and the dating method is intriguing, as it involves ancient wasps. The authors sampled mud-wasp nests overlying and underlying the paintings, and used radiocarbon dating of the nests to constrain the ages of some motifs to intervals of a few hundred years. The oldest image is the kangaroo (DR016_01): thanks in part to the hard work of wasp colonies many thousands of years ago, we know that it was painted between 17,100 and 17,500 years ago, making this the oldest known rock painting in Australia.

There are many ways that science can change the world for the better. This paper tells a story of human creativity at the end of the Last Glacial Maximum, and of continued connection with the landscape, including through participatory research. For me, it inspired optimism at a time when I was often overwhelmed by world events. I very much hope it has a similar impact on our readers.

Charlotte Payne has been an editor at Nature Human Behaviour since 2019.

Abstract models with real impact

By June 2020, large swathes of the world were in lockdown. Policy-makers were growing wary of the undesirable effects of lockdowns: loneliness, deteriorating mental health and economic slowdown. Social bubbles, pods or ‘quaranteams’ then emerged as a middle-ground solution for keeping the curve flat while doing away with the more draconian distancing measures.

In a 2020 article13, a team of sociologists modelled three ways to tame an outbreak by rearranging social network ties. In computer simulations, their three interventions slowed viral spread without harsh social isolation. And social bubbling stood out as the most effective approach.
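
To make the intuition behind bubbling concrete, here is a toy simulation of my own (an illustrative sketch, not the network model of ref. 13): infection spreads either through fresh random contacts drawn each day or through contacts confined to a small, fixed bubble. All parameter values are arbitrary.

```python
# Illustrative toy simulation (not the model of ref. 13). Compares epidemic
# spread under daily random mixing versus contacts confined to small fixed bubbles.
import random

def simulate(n=1000, contacts_per_day=5, p_transmit=0.05,
             days=120, infectious_days=7, bubbles=False, seed=1):
    random.seed(seed)
    # 0 = susceptible, >0 = days of infectiousness remaining, -1 = recovered
    state = [0] * n
    state[0] = infectious_days  # a single initial case
    if bubbles:
        # Strict partition into fixed groups of size contacts_per_day + 1
        group = [i // (contacts_per_day + 1) for i in range(n)]
        members = {}
        for i, g in enumerate(group):
            members.setdefault(g, []).append(i)
    total_infected = 1
    for _ in range(days):
        new_cases = []
        for i in range(n):
            if state[i] <= 0:
                continue  # only infectious individuals transmit
            for _ in range(contacts_per_day):
                j = random.choice(members[group[i]]) if bubbles else random.randrange(n)
                if j != i and state[j] == 0 and random.random() < p_transmit:
                    new_cases.append(j)
        for i in range(n):
            if state[i] > 0:
                state[i] -= 1
                if state[i] == 0:
                    state[i] = -1  # recovered and immune
        for j in new_cases:
            if state[j] == 0:
                state[j] = infectious_days
                total_infected += 1
    return total_infected

print("random mixing:", simulate(bubbles=False), "of 1000 ever infected")
print("fixed bubbles:", simulate(bubbles=True), "of 1000 ever infected")
```

In this deliberately extreme simplification, the bubble strategy confines the outbreak to the index case’s own group, whereas random mixing lets it percolate through the population. The real model is far richer, but the qualitative intuition is the same: repeatedly meeting the same few people starves the virus of new susceptible hosts.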

The researchers did not collect real-world data, and they did not test model predictions in real life. This is common in modelling, an approach that strips away the complexities of the real world to aid mechanistic understanding. But it also means that the study does not tell us the exact numbers of cases avoided or lives saved. This, alongside the inherent uncertainty of models and their sensitivity to assumptions, is why it is so difficult to translate computational insights into tangible policies.

But the study had a combination of strengths that are rarely found in infectious disease modelling. The qualitative insights were convincing and consistent across many types of networks — a sign of robustness. The model assumptions were few and well-justified. And the proposed guidelines were simple, intuitive and easy to explain, which is critical in public health messaging and implementation.

Last November, I caught up with Professor Melinda Mills, the senior author, by email. In the months since the authors made their findings public, the article has appeared in UK parliamentary briefings and guidance documents introducing the UK’s ‘support bubbles’ policy. Businesses, schools, universities and governments have approached the team, citing the study’s impact on their decisions, Mills said.

The qualitative insights of a computational model were enough to act on: a sure sign that abstract ‘dataless’ models can make a real difference, and a reminder of their great value to human behavioural research.

Arunas Radzvilavicius has been an editor at Nature Human Behaviour since 2021.

Finding gold in the 12th dimension

What are the defining characteristics of a great scientific article? Its novelty, its breadth of appeal or its scientific rigour?

These are questions that editors consider daily. As a team, we must agree on our answers as the basis of a fair and reliable editorial decision process. A key part — and perk — of the job is to leave the office and meet scientists, to explain what we think defines a good research article and to learn from them what they think makes a great paper.

My last in-person site visit before the pandemic took me to Leipzig, where I had a chance encounter with a researcher, Martin Hebart, at lunch. Martin asked me whether I would have time to look at one of his projects14, which he had thought might be a good candidate for Nature Human Behaviour. As he explained the research and showed me the data, I had little doubt that the project would be one I would send out to peer review.

The research question was fundamental: what are the defining characteristics by which we perceive the objects that surround us? The authors took a comprehensive approach to amass human judgments that would allow them to identify the dimensions underlying how we establish what is similar and what is not — in other words, what defines an object.

Despite my optimism, I wasn’t prepared for the paper’s reception in peer review. The referees’ comments, available in the transparent peer review file accompanying the published article, were so positive that we decided to issue acceptance-in-principle after a single round of review, with only small revisions required. A memorable occasion.

Science is an unpredictable, fortuitous combination of standards, processes and serendipity. As editors, we get to share in the occasions on which researchers strike gold, which you will find resides in the 12th dimension14.

Marike Schiffer has been an editor at Nature Human Behaviour since 2017.

The value of replication

The paper that impacted me most personally during my time as an editor at Nature Human Behaviour was also one that has had a great impact on our readership: “Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015”15.

At first glance, it seems that the main contribution of this paper is empirically demonstrating which results from studies published in two high-profile outlets, Nature and Science, are replicable and which are not. However, there is much more to this paper. From the perspective of an editor reading it, the paper provided an empirical test (and, luckily, a validation) of my very role: does all the work that we do as editors in the Nature portfolio result in more rigorous, reproducible and solid papers? The answer is promising: there was a significant effect in the same direction as the original for 13 of the 21 papers (62%) that met the selection criteria. This isn’t perfect, but considering that the corresponding figure was only 36% in the Reproducibility Project16, 62% is actually very good.

But the main reason for this paper’s impact isn’t just that it was a big, collaborative study on replicability at a time when replicability is an important topic for the social sciences, or that it demonstrated the importance of my own editorial role. Rather, it is that we were able to provide the context necessary to show the community that these replication projects are neither the final word nor the ultimate stamp of approval on the strength of an effect. Instead, they represent an important step in the self-examination that is essential to science.

The context took the form of a collection (https://www.nature.com/collections/nfkchhxllx) comprising the original paper15 and short companion letters from each of the authors whose studies did not replicate, offering their own perspectives on why. These nine correspondences17,18,19,20,21,22,23,24,25 taught me much more about what replication really means and why it is important, as well as about its limitations. By publishing the paper by Camerer et al.15 in this context, we actively fostered a community that uses replication studies to generate new hypotheses and hone the scientific method, rather than seeing them as demonstrations of failure. This is how I now understand replication studies, and I’m very happy to see that this effort has had such an impact on the community.

Mary Elizabeth Sutherland was an editor at Nature Human Behaviour from 2018 to 2019.