Data Descriptor | Open

Data from a pre-publication independent replication initiative examining ten moral judgement effects

  • Scientific Data volume 3, Article number: 160082 (2016)
  • doi:10.1038/sdata.2016.82


We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory’s research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes the data from not only the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.

Design Type(s)
  • parallel group design
Measurement Type(s)
  • Reproducibility
Technology Type(s)
  • survey method
Factor Type(s)
  • study design
  • laboratory
Sample Characteristic(s)
  • Homo sapiens

Background & Summary

The replicability of findings from scientific research has garnered enormous popular and academic attention in recent years1–3. Results of replication initiatives attempting to reproduce previously published findings reveal that the majority of independent studies do not produce the same significant effects as the original investigation1–5.

There are many reasons why a scientific study may fail to replicate besides the original finding representing a false positive due to publication bias, questionable research practices, or error. These include meaningful population differences between the original and replication samples (e.g., cultural, subcultural, and demographic variability), overly optimistic estimates of study power based on initially published results, study materials that were carefully pre-tested in the original population but are not as well suited to the replication sample, a lack of replicator expertise, and errors in how the replication was carried out. Nonetheless, the low reproducibility rate has contributed to a crisis of confidence in science, in which the truth value of even many well-established findings has suddenly been called into question6.

The present line of research introduces a collaborative approach to increasing the robustness and reliability of scientific research, in which findings are replicated in independent laboratories before, rather than after, they are published7,8. In the Pre-Publication Independent Replication (PPIR) approach, original authors volunteer their own findings and select expert replication labs with subject populations they expect to show the effect. PPIR increases the informational value of unsuccessful replications, since common alternative explanations for failures to replicate, such as a lack of replicator expertise and theoretically anticipated population differences, are addressed. Sample sizes are also much larger than is common in the field, and the analysis plan is pre-registered9, allowing for more accurate effect size estimates and identification of unexpected population differences. An effect that consistently fails to replicate in PPIRs has been overestimated and is quite possibly a false positive. Pre-publication independent replication also has the benefit of ensuring published findings are reliable before they are widely disseminated, rather than only checking after the fact.

In this first crowdsourced Pre-Publication Independent Replication (PPIR) initiative, 25 laboratories attempted to replicate 10 unpublished moral judgment findings in the research ‘pipeline’ of the last author and his collaborators (see Table 1). The original authors selected replication laboratories with directly relevant expertise (e.g., moral judgment researchers) and access to subject populations theoretically expected to show the effect. A pre-set list of replication criteria was applied10: whether the original and replication effects were in the same direction, whether the replication effect was statistically significant, whether the effect size was significant when meta-analyzing the original and replication studies, whether the original effect size fell within the confidence interval of the replication effect size, and finally the small telescopes criterion (a replication effect size large enough to be reliably captured by the original study11). Of the 10 original findings, six replicated according to all criteria, two failed to replicate entirely, one replicated but with a smaller effect size than the original study, and one replicated in United States samples but not outside the United States (see ref. 7 for a full empirical report).
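As a rough illustration of how these five criteria operate, the sketch below applies all of them to a single original/replication pair. It is written in Python for readability (the project's own analysis scripts are SPSS syntax), uses standard normal-approximation formulas for Cohen's d, and the summary numbers at the bottom are hypothetical; it is a conceptual sketch, not the project's exact pipeline.

```python
# Illustrative sketch: the five pre-registered replication criteria for one
# original/replication pair, assuming two-group designs summarized as
# Cohen's d with per-group sample sizes. Not the project's SPSS pipeline.
import numpy as np
from scipy import stats

def se_of_d(d, n1, n2):
    # Normal-approximation standard error of Cohen's d (two-group design)
    return np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

def replication_criteria(d_orig, n_orig, d_rep, n_rep, alpha=0.05):
    # Assumes a positive original effect; flip signs first if d_orig < 0.
    se_o = se_of_d(d_orig, *n_orig)
    se_r = se_of_d(d_rep, *n_rep)
    z_crit = stats.norm.ppf(1 - alpha / 2)

    same_direction = np.sign(d_orig) == np.sign(d_rep)
    rep_significant = abs(d_rep) / se_r > z_crit

    # Fixed-effect meta-analysis of the original and replication estimates
    w_o, w_r = se_o ** -2, se_r ** -2
    d_meta = (w_o * d_orig + w_r * d_rep) / (w_o + w_r)
    meta_significant = abs(d_meta) * np.sqrt(w_o + w_r) > z_crit

    # Original effect size inside the replication's 95% confidence interval
    orig_in_rep_ci = abs(d_orig - d_rep) <= z_crit * se_r

    # Small telescopes: d_33% is the effect the original study had 33% power
    # to detect; the replication passes if its estimate is not significantly
    # smaller than d_33% (one-sided test at the 5% level)
    d_33 = se_o * (z_crit + stats.norm.ppf(0.33))
    small_telescopes = (d_rep - d_33) / se_r > stats.norm.ppf(0.05)

    return {"same_direction": same_direction,
            "replication_significant": rep_significant,
            "meta_analysis_significant": meta_significant,
            "original_within_replication_ci": orig_in_rep_ci,
            "small_telescopes": small_telescopes}

# Hypothetical numbers: original d = 0.60 (50 per cell),
# replication d = 0.35 (250 per cell)
print(replication_criteria(0.60, (50, 50), 0.35, (250, 250)))
```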

Table 1: Overview of Replications.

Unique among the replication initiatives thus far, the pipeline project corpus includes the data not only from the replications but also from all of the original studies targeted for replication. This creates a unique opportunity for future analysts to better understand reproducibility and irreproducibility in science, since the data from the original studies can be reanalyzed to better understand why a particular effect did or did not prove reliable. The dataset is complemented by socioeconomic and demographic information on the research participants, and contains data from 6 countries (the United States, Canada, the Netherlands, France, Germany, and China) and replications in 4 languages (English, French, German, and Chinese). The Pre-Publication Independent Replication Project dataset is publicly available on the Open Science Framework (Data Citation 1: Open Science Framework) and is accompanied by SPSS syntax which can be used to reproduce the analyses. This array of data will serve as a resource for researchers interested in research reproducibility, statistics, population differences, culturally moderated phenomena, meta-science, moral judgments, and the complexities of replicating studies. For example, the data can be re-analyzed using meta-regression techniques to examine whether certain study characteristics or demographics moderate effect sizes. A re-analyst could also try out different analytic techniques and see how robust particular effects are to different specifications.
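For example, a minimal meta-regression sketch in this spirit might look as follows. It uses Python's statsmodels rather than the provided SPSS syntax, and every number in it (per-lab effect sizes, standard errors, and a mean-age moderator) is made up for illustration.

```python
# Hypothetical meta-regression: do lab-level effect sizes vary with a
# lab-level moderator? Weighted least squares with inverse-variance weights.
import numpy as np
import statsmodels.api as sm

d = np.array([0.42, 0.31, 0.55, 0.12, 0.48])        # per-lab effect sizes (made up)
se = np.array([0.10, 0.12, 0.09, 0.15, 0.11])       # their standard errors (made up)
mean_age = np.array([19.8, 21.3, 35.2, 23.5, 20.1]) # candidate moderator (made up)

X = sm.add_constant(mean_age)                       # intercept + moderator
fit = sm.WLS(d, X, weights=1 / se ** 2).fit()
print(fit.summary())                                # slope tests moderation
```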



Methods

The Pre-Publication Independent Replication Project corpus includes three datasets. The first dataset (PPIR 1.sav: Data Citation 1: Open Science Framework) contains data from 3 original studies and their replications, the second (PPIR 2.sav: Data Citation 1: Open Science Framework) contains data from 3 original studies and their replications, and the third (PPIR 3.sav: Data Citation 1: Open Science Framework) contains data from 4 original studies and their replications. In total, data were collected from 11,805 participants: 3,944 in the first file (including 514 from the original studies), 3,919 in the second (including 351 from the original studies), and 3,829 in the third (including 582 from the original studies). An additional replication dataset collected in France contained 113 participants. No participants were removed from either the original or replication studies. All participants agreed to the informed consent form, and the studies were conducted in accordance with the ethics regulations of the respective universities.
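For analysts working outside SPSS, a short sketch like the following can verify the participant counts above. It assumes pandas with the pyreadstat backend installed, and that the three .sav files have been downloaded from the OSF into the working directory under the names used there.

```python
# Load the three SPSS files and confirm the participant counts reported above.
import pandas as pd

files = ["PPIR 1.sav", "PPIR 2.sav", "PPIR 3.sav"]
frames = [pd.read_spss(f) for f in files]   # requires the pyreadstat package

for name, df in zip(files, frames):
    print(name, len(df))                    # expected: 3944, 3919, 3829

# 11,692 across the three files; the separate French dataset adds 113,
# for the 11,805 total reported in the text
print("total:", sum(len(df) for df in frames))
```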

Testing procedure

The data were collected using both online and paper-and-pencil surveys administered by the respective laboratories. The replications used the same materials and measurements as the original studies, with the exception that the materials were translated into multiple languages. In the online version of the replications, Qualtrics was used to collect the data; this online platform allowed us to randomize the order in which the studies were presented. To prevent participant fatigue, studies were administered in one of three batches; each batch contained three to four studies, and study order was counterbalanced between subjects (see the sketch following the example vignettes below). Once subjects agreed to participate, they read vignettes (see below for an example vignette from the Cold-Hearted Prosociality study) and completed survey questions assessing their reactions. Thereafter, participants were thanked for their participation and debriefed.

Karen works as an assistant in a medical center that does cancer research. The laboratory develops drugs that improve survival rates for people stricken with breast cancer. As part of Karen’s job, she places mice in a special cage, and then exposes them to radiation in order to give them tumors. Once the mice develop tumors, it is Karen’s job to give them injections of experimental cancer drugs.

Lisa works as an assistant at a store for expensive pets. The store sells pet gerbils to wealthy individuals and families. As part of Lisa’s job, she places gerbils in a special bathtub, and then exposes them to a grooming shampoo in order to make sure they look nice for the customers. Once the gerbils are groomed, it is Lisa’s job to tie a bow on them.
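The counterbalancing scheme flagged above can be illustrated with a short sketch. The actual randomization was performed within Qualtrics, so this Python version is only a conceptual stand-in; the study labels are taken from the first dataset.

```python
# Conceptual sketch of between-subjects counterbalancing of study order.
# The project randomized order within Qualtrics; this is an illustrative
# stand-in using study labels from the first dataset.
from itertools import permutations

batch = ["moral inversion", "intuitive economics", "burn in hell"]
orders = list(permutations(batch))   # all 3! = 6 presentation orders

def assign_order(participant_id: int) -> tuple:
    # Cycle through the orders so each occurs equally often across subjects
    return orders[participant_id % len(orders)]

for pid in range(6):
    print(pid, assign_order(pid))
```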

Although the majority of the data were collected as described above, there were some exceptions. Specifically, rather than counterbalancing study order, participants at Northwestern University were randomly allocated to a survey that contained either one longer study or three shorter studies presented in a fixed order. Participants at Yale University did not complete one study because the researchers felt that participants might be offended by it. There was also a translation error in one study run at the INSEAD Paris laboratory, which required that study to be re-run separately. Finally, study order for participants at HEC Paris was fixed rather than counterbalanced. Table 2 outlines the number of replications and conditions, gives a brief synopsis of each study, and provides instructions for creating the study variables. Detailed reports of each original study and the complete replication materials are available on the OSF in Supplementary File 1 (00.Supplemental_Materials_Pipeline_Project_Final_10_24_2015.pdf: Data Citation 1: Open Science Framework). Supplementary File 2 (PPIR _Codebook.xlsx: Data Citation 1: Open Science Framework) outlines all the variable names and measurement details used in the studies.

Table 2: Technical Validation and Study Synopsis.

Data Records

All data records listed in this section are available from the Open Science Framework (Data Citation 1: Open Science Framework) and can be downloaded without an OSF account. The datasets were anonymized to remove any information that could identify participant responses, such as identification numbers from Amazon’s Mechanical Turk. The analysis was conducted with SPSS version 20, and detailed SPSS syntax (including comments) is provided to help with data analysis. In total there are 3 datasets and 11 syntax files available. The datasets are also accompanied by a codebook which describes the variables, the coding transformations necessary to replicate the analyses, and a synopsis of the respective studies.

First dataset

Location: (PPIR 1.sav: Data Citation 1: Open Science Framework)

File format: SPSS Statistic Data Document file (.sav)

This file contains basic demographic information and responses to the items measured in the Moral Inversion study (SPSS Syntax files/PPIR 1–2 moral inversion.sps: Data Citation 1: Open Science Framework), the Intuitive Economics study (SPSS Syntax files/PPIR 1–4 intuitive economics.sps: Data Citation 1: Open Science Framework), and the Burn in Hell study (SPSS Syntax files/PPIR 1–7 burn in hell.sps: Data Citation 1: Open Science Framework).

Second dataset

Location: (PPIR 2.sav: Data Citation 1: Open Science Framework)

File format: SPSS Statistic Data Document file (.sav)

This file contains basic demographic information and responses to the items measured in the Presumption of Guilt study (SPSS Syntax files/PPIR 2–1 presumption of guilt.sps: Data Citation 1: Open Science Framework), the Moral Cliff study (SPSS Syntax files/PPIR 2–3 moral cliff.sps: Data Citation 1: Open Science Framework), and the Bad Tipper study (SPSS Syntax files/PPIR 2–9 bad tipper.sps: Data Citation 1: Open Science Framework).

Third dataset

Location: (PPIR 3.sav: Data Citation 1: Open Science Framework)

File format: SPSS Statistic Data Document file (.sav)

This file contains basic demographic information and responses to the items measured in the Higher Standard Effect study (SPSS Syntax files/PPIR 3–5 higher standard—Charity.sps: Data Citation 1: Open Science Framework; SPSS Syntax files/PPIR 3–5 higher standard—Company.sps: Data Citation 1: Open Science Framework), the Cold Hearted Prosociality study (SPSS Syntax files/PPIR 3–6 cold-hearted.sps: Data Citation 1: Open Science Framework), the Bigot-Misanthrope study (SPSS Syntax files/PPIR 3–8 bigot misanthrope.sps: Data Citation 1: Open Science Framework), and the Belief-Act Inconsistency study (SPSS Syntax files/PPIR 3–10 belief-act inconsistency.sps: Data Citation 1: Open Science Framework).


Codebook

Location: (Data descriptor—Codebook/PPIR _Codebook.xlsx: Data Citation 1: Open Science Framework)

File format: Microsoft Excel Worksheet (.xlsx)

Introduction to the PPIR project, an outline of the transformations, and descriptions and labels for the variables in the three datasets.

Technical Validation

The studies include an array of original measures which must be computed to test the concepts of interest. These range from single items to composite measures aggregating multiple items, some of which must be reverse coded (see Fig. 1 for an example of the items measuring candidate evaluations in the Higher Standards study; note that item 5 must be reverse coded prior to averaging the items into a composite). Instructions for creating the study variables, the relevant conditions, and a synopsis of what concepts the variables measure can be found in Table 2.
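As a concrete illustration, a minimal sketch of this reverse-coding and compositing step follows. It uses Python rather than the provided .sps files, and the item names, response values, and the assumed 1–7 response scale are placeholders modeled on Figure 1.

```python
# Sketch of reverse coding item 5 and averaging into a composite, assuming
# a 1-7 response scale; item names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "item1": [6, 2], "item2": [5, 3], "item3": [7, 1],
    "item4": [6, 2], "item5": [2, 6],          # item 5 is reverse-keyed
})

SCALE_MIN, SCALE_MAX = 1, 7
df["item5_r"] = SCALE_MAX + SCALE_MIN - df["item5"]   # 1 -> 7, 7 -> 1, etc.

# Composite candidate evaluation: mean of the items after reverse coding
items = ["item1", "item2", "item3", "item4", "item5_r"]
df["evaluation"] = df[items].mean(axis=1)
print(df["evaluation"])
```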

Figure 1: Example of the items measuring a typical moral judgement effect, in this instance candidate evaluations in the Higher Standards study.

Figure 1 shows a typical questionnaire administered to subjects to assess their attitudes and beliefs toward the characters depicted in the vignettes. Subjects were required to write next to each statement the number that best indicated how representative they believed the statement was of Lisa’s or Karen’s characteristics.


Additional Information

How to cite this article: Tierney, W. et al. Data from a pre-publication independent replication initiative examining ten moral judgement effects. Sci. Data 3:160082 doi: 10.1038/sdata.2016.82 (2016).


References

  1. Begley, C. G. & Ellis, L. M. Drug development: Raise standards for preclinical cancer research. Nature 483, 531–533 (2012).
  2. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, aac4716 (2015).
  3. Ebersole, C. R. et al. Many Labs 3: Evaluating participant pool quality across the academic semester via replication. J. Exp. Soc. Psychol. 67, 68–82 (2016).
  4. Prinz, F., Schlange, T. & Asadullah, K. Believe it or not: how much can we rely on published data on potential drug targets? Nat. Rev. Drug Discov. 10, 712–713 (2011).
  5. Pashler, H. & Wagenmakers, E.-J. Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspect. Psychol. Sci. 7, 528–530 (2012).
  6. Klein, R. A. et al. Investigating variation in replicability: A ‘many labs’ replication project. Soc. Psychol. 45, 142–152 (2014).
  7. Schweinsberg, M. et al. The pipeline project: Pre-publication independent replications of a single laboratory’s research pipeline. J. Exp. Soc. Psychol. 66, 55–67 (2016).
  8. Schooler, J. W. Metascience could rescue the ‘replication crisis’. Nature 515, 9 (2014).
  9. Wagenmakers, E.-J. et al. An agenda for purely confirmatory research. Perspect. Psychol. Sci. 7, 627–633 (2012).
  10. Brandt, M. J. et al. The replication recipe: What makes for a convincing replication? J. Exp. Soc. Psychol. 50, 217–224 (2014).
  11. Simonsohn, U. Small telescopes: Detectability and the evaluation of replication results. Psychol. Sci. 26, 559–569 (2015).


Data Citations

  1. Tierney, W., Schweinsberg, M. & Uhlmann, E. L. Open Science Framework (2016)


Acknowledgements

The authors gratefully acknowledge financial support from an R&D grant from INSEAD.

Author information


Affiliations

  1. INSEAD, Fontainebleau 77305, France and Singapore 138676, Singapore

    • Warren Tierney
    • , Martin Schweinsberg
    • , Nico Thornley
    • , Nikhil Madan
    • , Eliza Bivolaru
    • , Michael Schaerer
    • , Lynn Wong
    • , Sophie-Charlotte Darroux
    •  & Eric Luis Uhlmann
  2. IMD, Lausanne 1001, Switzerland

    • Jennifer Jordan
  3. University of Washington Bothell, Bothell 98011, USA

    • Deanna M. Kennedy
  4. IE Business School, IE University, Madrid 28006, Spain

    • Israr Qureshi
  5. HEC Paris, Jouy-en-Josas 78351, France

    • S. Amy Sommer
    •  & Anne-Laure Sellier
  6. University of Padova, Padova 35131, Italy

    • Michelangelo Vianello
  7. University of Washington, Seattle 98195, USA

    • Eli Awtrey
    • , Sapna Cheryan
    •  & Lily Jiang
  8. University of Manitoba, Winnipeg R3T 5V4, Canada

    • Luke Lei Zhu
  9. University of Chicago, Chicago 60637, USA

    • Daniel Diermeier
  10. University of Michigan, Ann Arbor 48109, USA

    • Justin E. Heinze
    •  & Tatiana Sokolova
  11. Harvard University, Cambridge 02138, USA

    • Malavika Srinivasan
    •  & Fiery Cushman
  12. University of Utah, Salt Lake City 84112, USA

    • David Tannenbaum
  13. Yale University, New Haven 06511, USA

    • Jason Dana
    • , Victoria Brescoll
    •  & George Newman
  14. University of Missouri, Columbia 65211, USA

    • Clintin P. Davis-Stober
  15. Rotterdam School of Management, Erasmus University, Rotterdam 3000 DR, The Netherlands

    • Christilene du Plessis
  16. University of Amsterdam, Amsterdam 1001 NK, The Netherlands

    • Quentin F. Gronau
    • , Alexander Ly
    • , Maarten Marsman
    •  & Eric-Jan Wagenmakers
  17. UCP—Católica Lisbon School of Business & Economics, Lisbon 1649-023, Portugal

    • Andrew C. Hafenbrack
  18. Hang Seng Management College, Hong Kong, Hong Kong

    • Eko Yi Liao
  19. Roosevelt University, Chicago 60605, USA

    • Toshio Murase
  20. University of Illinois at Urbana-Champaign, Champaign 61820, USA

    • Christina M. Tworek
    •  & Daniel Storage
  21. Illinois Institute of Technology, Chicago 60616, USA

    • Tabitha Anderson
    • , Andrew Canavan
    • , Diana Cordon
    • , Alice Amell
    • , Kristie Hein
    • , Tehlyr Kellogg
    • , Nicole Legate
    • , Heidi Maibeucher
    •  & Carlos T. Wilson
  22. University of California, Irvine 92697, USA

    • Christopher W. Bauman
    • , Peter H. Ditto
    • , Rebecca Hofstein Grady
    •  & Jennifer Miles
  23. University of South Florida, Tampa 33620, USA

    • Wendy L. Bedwell
    • , Sarah E. Frick
    •  & P. Scott Ramsay
  24. Institute for Social Research, University of Michigan, Ann Arbor 48104, USA

    • Jesse J. Chandler
  25. University of Massachusetts Amherst, Amherst 01003, USA

    • Erik Cheries
  26. Washington University in St Louis, St Louis 63130, USA

    • Felix Cheung
  27. University of Hong Kong, Hong Kong, Hong Kong

    • Felix Cheung
    •  & Harvey Packham
  28. Department of Psychology, New York University, New York 10003, USA

    • Andrei Cimpian
    • , Jennifer L. Ray
    •  & Jay J. Van Bavel
  29. American University, Washington 20016, USA

    • Mark A. Clark
    •  & Alexandra Mislin
  30. Northwestern University, Evanston 60208, USA

    • Monica Gamez-Djokic
    •  & Daniel C. Molden
  31. University of Southern California, Los Angeles 90089, USA

    • Jesse Graham
  32. Monash University, Melbourne 3145, Australia

    • Jun Gu
  33. Social Cognition Center Cologne, University of Cologne, Koeln 50931, Germany

    • Adam Hahn
    • , Nicole J. Hartwich
    •  & Timo P. Luoma
  34. University of Illinois at Chicago, Chicago 60607, USA

    • Brittany E. Hanson
    • , Matt Motyl
    •  & Anthony N. Washburn
  35. University of Toronto, Toronto ON M5S, Canada

    • Yoel Inbar
  36. University of Pennsylvania, Philadelphia 19104, USA

    • Peter Meindl
  37. Université Paris Ouest Nanterre La Défense, Nanterre 92000, France

    • Hoai Huong Ngo
  38. University of St Thomas, St Paul 55105, USA

    • Aaron M. Sackett
  39. Centre for Psychiatry and Neuroscience, Walter Reed Army Institute of Research (WRAIR), Silver Spring 20910, USA

    • Walter Sowden
  40. Beijing Normal University, Beijing 100875, China

    • Xiaomin Sun
    •  & Cong Wei
  41. Stockholm School of Economics, Stockholm 11383, Sweden

    • Erik Wetter




Contributions

Prepared the dataset for publication: Warren Tierney, Martin Schweinsberg.

Wrote the data descriptor: Warren Tierney, Martin Schweinsberg, Eric Luis Uhlmann.

Analysis co-pilots for data publication: Warren Tierney, Jennifer Jordan, Deanna M. Kennedy, Israr Qureshi, Martin Schweinsberg, Amy Sommer, Nico Thornley.

Designed the pre-publication independent replication project and wrote the original project proposal: Eric Luis Uhlmann.

Coordinators for the pre-publication independent replication project: Martin Schweinsberg, Nikhil Madan, Michelangelo Vianello, Amy Sommer, Jennifer Jordan, Warren Tierney, Eli Awtrey, Luke (Lei) Zhu, Eric Luis Uhlmann.

Contributed original studies for replication: Daniel Diermeier, Justin E. Heinze, Malavika Srinivasan, David Tannenbaum, Eric Luis Uhlmann, Luke Zhu.

Translated study materials: Adam Hahn, Nicole Hartwich, Timo Luoma, Hoai Huong Ngo, Sophie-Charlotte Darroux.

Analyzed data from the replications: Michelangelo Vianello, Jennifer Jordan, Amy Sommer, Eli Awtrey, Eliza Bivolaru, Jason Dana, Clintin P. Davis-Stober, Christilene du Plessis, Quentin F. Gronau, Andrew C. Hafenbrack, Eko Yi Liao, Alexander Ly, Maarten Marsman, Toshio Murase, Israr Qureshi, Michael Schaerer, Warren Tierney, Nico Thornley, Christina M. Tworek, Eric-Jan Wagenmakers, Lynn Wong.

Carried out the replications: Eli Awtrey, Jennifer Jordan, Amy Sommer, Tabitha Anderson, Christopher W. Bauman, Wendy L. Bedwell, Victoria Brescoll, Andrew Canavan, Jesse J. Chandler, Erik Cheries, Sapna Cheryan, Felix Cheung, Andrei Cimpian, Mark A. Clark, Diana Cordon, Fiery Cushman, Peter Ditto, Alice Amell, Sarah E. Frick, Monica Gamez-Djokic, Rebecca Hofstein Grady, Jesse Graham, Jun Gu, Adam Hahn, Brittany E. Hanson, Nicole J. Hartwich, Kristie Hein, Yoel Inbar, Lily Jiang, Tehlyr Kellogg, Deanna M. Kennedy, Nicole Legate, Timo P. Luoma, Heidi Maibuecher, Peter Meindl, Jennifer Miles, Alexandra Mislin, Daniel Molden, Matt Motyl, George Newman, Hoai Huong Ngo, Harvey Packham, Philip S. Ramsay, Jennifer Lauren Ray, Aaron M. Sackett, Anne-Laure Sellier, Tatiana Sokolova, Walter Sowden, Daniel Storage, Xiaomin Sun, Christina M. Tworek, Jay J. Van Bavel, Anthony N. Washburn, Cong Wei, Erik Wetter, Carlos T. Wilson.

Competing interests

The authors declare no competing financial interests. Warren Tierney had full access to all of the data and takes responsibility for the integrity and accuracy of the analysis.

Corresponding authors

Correspondence to Warren Tierney or Martin Schweinsberg or Eric Luis Uhlmann.

Supplementary information

This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0. Metadata associated with this Data Descriptor is released under the CC0 waiver to maximize reuse.