We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.

The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on ‘statistically significant’ findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (for example, multiple testing, P-hacking, publication bias and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: statistical standards of evidence for claiming new discoveries in many fields of science are simply too low. Associating statistically significant findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems.

For fields where the threshold for defining statistical significance for new discoveries is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called significant but do not meet the new threshold should instead be called suggestive. While statisticians have long recognized the relative weakness of using P ≈ 0.05 as a threshold for discovery, and the proposal to lower it to 0.005 is not new1,2, a critical mass of researchers now endorse this change.

We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (for example, genomics and high-energy physics research; see the ‘Potential objections’ section below).

We also restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P values. However, changing the P value threshold is simple, aligns with the training undertaken by many researchers, and might quickly achieve broad acceptance.

Strength of evidence from P values

In testing a point null hypothesis H0 against an alternative hypothesis H1 based on data x_obs, the P value is defined as the probability, calculated under the null hypothesis, that a test statistic is as extreme as or more extreme than its observed value. The null hypothesis is typically rejected — and the finding is declared statistically significant — if the P value falls below the (current) type I error threshold α = 0.05.

From a Bayesian perspective, a more direct measure of the strength of evidence for H1 relative to H0 is the ratio of their probabilities. By Bayes’ rule, this ratio may be written as:

$$\frac{\Pr(H_1 \mid x_{\mathrm{obs}})}{\Pr(H_0 \mid x_{\mathrm{obs}})} \;=\; \frac{f(x_{\mathrm{obs}} \mid H_1)}{f(x_{\mathrm{obs}} \mid H_0)} \times \frac{\Pr(H_1)}{\Pr(H_0)} \;=\; \mathrm{BF} \times \text{prior odds} \qquad (1)$$

where BF is the Bayes factor that represents the evidence from the data, and the prior odds can be informed by researchers’ beliefs, scientific consensus, and validated evidence from similar research questions in the same field. Multiple-hypothesis testing, P-hacking and publication bias all reduce the credibility of evidence. Some of these practices reduce the prior odds of H1 relative to H0 by changing the population of hypothesis tests that are reported. Prediction markets3 and analyses of replication results4 both suggest that for psychology experiments, the prior odds of H1 relative to H0 may be only about 1:10. A similar number has been suggested in cancer clinical trials, and the number is likely to be much lower in preclinical biomedical research5.

There is no unique mapping between the P value and the Bayes factor, since the Bayes factor depends on H1. However, the connection between the two quantities can be evaluated for particular test statistics under certain classes of plausible alternatives (Fig. 1).

Figure 1: Relationship between the P value and the Bayes factor.

The Bayes factor (BF) is defined as f(x_obs | H1) / f(x_obs | H0). The figure assumes that observations are independent and identically distributed (i.i.d.) according to x ~ N(μ, σ²), where the mean μ is unknown and the variance σ² is known. The P value is from a two-sided z-test (or, equivalently, a one-sided χ² test with 1 degree of freedom) of the null hypothesis H0: μ = 0. Power (red curve): BF obtained by defining H1 as putting ½ probability on μ = ±m for the value of m that gives 75% power for the test of size α = 0.05. This H1 represents an effect size typical of that which is implicitly assumed by researchers during experimental design. Likelihood ratio bound (black curve): BF obtained by defining H1 as putting ½ probability on μ = ±x̄, where x̄ is approximately equal to the mean of the observations. These BFs are upper bounds among the class of all H1 terms that are symmetric around the null, but they are improper because the data are used to define H1. UMPBT (blue curve): BF obtained by defining H1 according to the uniformly most powerful Bayesian test2, which places ½ probability on μ = ±w, where w is the alternative hypothesis value that corresponds to a one-sided test of size 0.0025. This curve is indistinguishable from the ‘Power’ curve that would be obtained if the power used in its definition were 80% rather than 75%. Local-H1 bound (green curve): BF = 1/(−e p ln p), where p is the P value; this is a large-sample upper bound on the BF across all unimodal alternative hypotheses that have a mode at the null and satisfy certain regularity conditions15. The red numbers on the y axis indicate the range of Bayes factors that are obtained for P values of 0.005 or 0.05. For more details, see the Supplementary Information.

A two-sided P value of 0.05 corresponds to Bayes factors in favour of H1 that range from about 2.5 to 3.4 under reasonable assumptions about H1 (Fig. 1). This is weak evidence from at least three perspectives. First, conventional Bayes factor categorizations6 characterize this range as ‘weak’ or ‘very weak’. Second, we suspect many scientists would guess that P ≈ 0.05 implies stronger support for H1 than a Bayes factor of 2.5 to 3.4. Third, using equation (1) and prior odds of 1:10, a P value of 0.05 corresponds to at least 3:1 odds (that is, the reciprocal of the product (1/10) × 3.4) in favour of the null hypothesis!
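To make these orders of magnitude concrete, the short sketch below (our illustration under the normal model of Fig. 1, not the code behind the figure; the function names are ours) evaluates the local-H1 bound and the likelihood ratio bound at both thresholds and then applies equation (1) with prior odds of 1:10.

```python
# A minimal sketch, assuming i.i.d. observations x ~ N(mu, sigma^2) with known
# sigma and a two-sided z-test of H0: mu = 0, as in Fig. 1.
import numpy as np
from scipy.stats import norm

def bf_likelihood_ratio_bound(p):
    """BF when H1 puts 1/2 probability on mu = +/- the observed mean."""
    z = norm.isf(p / 2)                      # z-score matching a two-sided P value
    return 0.5 * (np.exp(z**2 / 2) + np.exp(-3 * z**2 / 2))

def bf_local_h1_bound(p):
    """Large-sample bound 1/(-e p ln p) over unimodal H1 with mode at the null (ref. 15)."""
    return 1.0 / (-np.e * p * np.log(p))

for p in (0.05, 0.005):
    print(f"P = {p}: local-H1 bound ~ {bf_local_h1_bound(p):.1f}, "
          f"likelihood ratio bound ~ {bf_likelihood_ratio_bound(p):.1f}")
# P = 0.05  -> roughly 2.5 and 3.4;  P = 0.005 -> roughly 14 and 26

# Equation (1): posterior odds = BF x prior odds. With prior odds of 1:10, even
# the more generous bound at P = 0.05 leaves the odds favouring the null:
posterior_odds = bf_likelihood_ratio_bound(0.05) * (1 / 10)
print(f"posterior odds of H1 over H0 ~ {posterior_odds:.2f}")  # ~0.34, i.e. ~3:1 for H0
```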

Why 0.005?

The choice of any particular threshold is arbitrary and involves a trade-off between type I and type II errors. We propose 0.005 for two reasons. First, a two-sided P value of 0.005 corresponds to Bayes factors between approximately 14 and 26 in favour of H1. This range represents ‘substantial’ to ‘strong’ evidence according to conventional Bayes factor classifications6.

Second, in many fields the P < 0.005 standard would reduce the false positive rate to levels we judge to be reasonable. If we let φ denote the proportion of null hypotheses that are true, 1 – β the power of tests in rejecting false null hypotheses, and α the type I error/significance threshold, then as the population of tested hypotheses becomes large, the false positive rate (that is, the proportion of true null effects among the total number of statistically significant findings) can be approximated by:

$$\text{False positive rate} \;\approx\; \frac{\alpha\,\phi}{\alpha\,\phi + (1-\beta)(1-\phi)} \qquad (2)$$

For different levels of the prior odds that there is a true effect, (1 − φ)/φ, and for significance thresholds α = 0.05 and α = 0.005, Fig. 2 shows the false positive rate as a function of power 1 − β.

Figure 2: Relationship between the P value threshold, power, and the false positive rate.

Calculated according to equation (2), with the prior odds defined as (1 − φ)/φ = Pr(H1)/Pr(H0). For more details, see the Supplementary Information.

In many studies, statistical power is low7. Figure 2 demonstrates that low statistical power and α = 0.05 combine to produce high false positive rates.

For many, the calculations illustrated by Fig. 2 may be unsettling. For example, the false positive rate is greater than 33% with prior odds of 1:10 and a P value threshold of 0.05, regardless of the level of statistical power. Reducing the threshold to 0.005 would reduce this minimum false positive rate to 5%. Similar reductions in false positive rates would occur over a wide range of statistical powers.
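Equation (2) is straightforward to evaluate directly. The sketch below (a hypothetical illustration, not taken from the Supplementary Information) computes the approximate false positive rate for prior odds of 1:10; at 100% power it reproduces the 33% and 5% figures quoted above.

```python
# Approximate false positive rate from equation (2):
#   FPR ~ alpha*phi / (alpha*phi + (1 - beta)*(1 - phi)),
# where phi is the proportion of tested null hypotheses that are true.

def false_positive_rate(alpha, power, prior_odds_h1):
    """prior_odds_h1 = Pr(H1)/Pr(H0) = (1 - phi)/phi."""
    phi = 1.0 / (1.0 + prior_odds_h1)        # proportion of true nulls
    return alpha * phi / (alpha * phi + power * (1.0 - phi))

for alpha in (0.05, 0.005):
    for power in (0.2, 0.5, 1.0):
        fpr = false_positive_rate(alpha, power, prior_odds_h1=1 / 10)
        print(f"alpha = {alpha}, power = {power:.0%}: FPR ~ {fpr:.0%}")
# With prior odds of 1:10, the FPR never falls below ~33% for alpha = 0.05
# (and reaches ~71% at 20% power), but drops to ~5% for alpha = 0.005 at 100% power.
```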

Empirical evidence from recent replication projects in psychology and experimental economics provides insights into the prior odds in favour of H1. In both projects, the rate of replication (that is, significance at P < 0.05 in the replication in a consistent direction) was roughly double for initial studies with P < 0.005 relative to initial studies with 0.005 < P < 0.05: 50% versus 24% for psychology8, and 85% versus 44% for experimental economics9. Although based on relatively small samples of studies (93 in psychology, and 16 in experimental economics, after excluding initial studies with P > 0.05), these numbers are suggestive of the potential gains in reproducibility that would accrue from the new threshold of P < 0.005 in these fields. In biomedical research, 96% of a sample of recent papers claim statistically significant results with the P < 0.05 threshold10. However, replication rates were very low5 for these studies, suggesting a potential for gains by adopting this new standard in these fields as well.

Potential objections

We now address the most compelling arguments against adopting this higher standard of evidence.

The false negative rate would become unacceptably high

Evidence that does not reach the new significance threshold should be treated as suggestive, and where possible further evidence should be accumulated; indeed, the combined results from several studies may be compelling even if any particular study is not. Failing to reject the null hypothesis does not mean accepting the null hypothesis. Moreover, the false negative rate will not increase if sample sizes are increased so that statistical power is held constant.

For a wide range of common statistical tests, transitioning from a P value threshold of α = 0.05 to α = 0.005 while maintaining 80% power would require an increase in sample sizes of about 70%. Such an increase means that fewer studies can be conducted using current experimental designs and budgets. But Fig. 2 shows the benefit: false positive rates would typically fall by factors greater than two. Hence, considerable resources would be saved by not performing future studies based on false premises. Increasing sample sizes is also desirable because studies with small sample sizes tend to yield inflated effect size estimates11, and publication and other biases may be more likely in an environment of small studies12. We believe that efficiency gains would far outweigh losses.
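The roughly 70% figure can be verified with a standard normal-approximation sample size calculation for a two-sided z-test (our back-of-the-envelope check, not the authors' derivation):

```python
# Required n for a two-sided z-test scales with (z_{alpha/2} + z_{power})^2, so
# the relative sample size at the two thresholds, holding 80% power fixed, is:
from scipy.stats import norm

def n_scale(alpha, power=0.80):
    return (norm.isf(alpha / 2) + norm.ppf(power)) ** 2

print(n_scale(0.005) / n_scale(0.05))   # ~1.70, i.e. about a 70% larger sample
```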

The proposal does not address multiple-hypothesis testing, P-hacking, publication bias, low power, or other biases (for example, confounding, selective reporting, and measurement error), which are arguably the bigger problems

We agree. Reducing the P value threshold complements — but does not substitute for — solutions to these other problems, which include good study design, ex ante power calculations, pre-registration of planned analyses, replications, and transparent reporting of procedures and all statistical analyses conducted.

The appropriate threshold for statistical significance should be different for different research communities

We agree that the significance threshold selected for claiming a new discovery should depend on the prior odds that the null hypothesis is true, the number of hypotheses tested, the study design, the relative cost of type I versus type II errors, and other factors that vary by research topic. For exploratory research with very low prior odds (well outside the range in Fig. 2), even lower significance thresholds than 0.005 are needed. Recognition of this issue led the genetics research community to move to a ‘genome-wide significance threshold’ of 5 × 10⁻⁸ over a decade ago. And in high-energy physics, the tradition has long been to define significance by a ‘5-sigma’ rule (roughly a P value threshold of 3 × 10⁻⁷). We are essentially suggesting a move from a 2-sigma rule to a 3-sigma rule.
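As a quick check of these correspondences (our arithmetic, using normal tail areas):

```python
# Translating sigma rules into P value thresholds and back (rough check).
from scipy.stats import norm

print(norm.isf(0.05 / 2))   # ~1.96: the two-sided 0.05 threshold is roughly "2 sigma"
print(norm.isf(0.005 / 2))  # ~2.81: the two-sided 0.005 threshold is roughly "3 sigma"
print(norm.sf(5))           # ~2.9e-7: one-sided tail beyond 5 sigma, i.e. ~3 x 10^-7
```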

Our recommendation applies to disciplines with prior odds broadly in the range depicted in Fig. 2, where use of P < 0.05 as a default is widespread. Within those disciplines, it is helpful for consumers of research to have a consistent benchmark. We feel the default should be shifted.

Changing the significance threshold is a distraction from the real solution, which is to replace null hypothesis significance testing (and bright-line thresholds) with a greater focus on effect sizes and confidence intervals, treatment of the P value as a continuous measure, and/or Bayesian methods.

Many of us agree that there are better approaches to statistical analyses than null hypothesis significance testing, but as yet there is no consensus regarding the appropriate choice of replacement. For example, a recent statement by the American Statistical Association addressed numerous issues regarding the misinterpretation and misuse of P values (as well as the related concept of statistical significance), but failed to make explicit policy recommendations to address these shortcomings13. Even after the significance threshold is changed, many of us will continue to advocate for alternatives to null hypothesis significance testing.

Concluding remarks

Ronald Fisher understood that the choice of 0.05 was arbitrary when he introduced it14. Since then, theory and empirical evidence have demonstrated that a lower threshold is needed. A much larger pool of scientists are now asking a much larger number of questions, possibly with much lower prior odds of success.

For research communities that continue to rely on null hypothesis significance testing, reducing the P value threshold for claims of new discoveries to 0.005 is an actionable step that will immediately improve reproducibility. We emphasize that this proposal is about standards of evidence, not standards for policy action or for publication. Results that do not reach the threshold for statistical significance (whatever it is) can still be important and merit publication in leading journals if they address important research questions with rigorous methods. This proposal should not be used as a reason to reject publication of novel findings with 0.005 < P < 0.05 that are properly labelled as suggestive evidence. We should reward quality and transparency of research as we impose these more stringent standards, and we should monitor how researchers’ behaviours are affected by this change. Otherwise, science runs the risk that the more demanding threshold for statistical significance will be met to the detriment of quality and transparency.

Journals can help transition to the new statistical significance threshold. Authors and readers can themselves take the initiative by describing and interpreting results more appropriately in light of the new proposed definition of statistical significance. The new significance threshold will help researchers and readers to understand and communicate evidence more accurately.

References

1. Greenwald, A. G. et al. Psychophysiology 33, 175–183 (1996).
2. Johnson, V. E. Proc. Natl Acad. Sci. USA 110, 19313–19317 (2013).
3. Dreber, A. et al. Proc. Natl Acad. Sci. USA 112, 15343–15347 (2015).
4. Johnson, V. E. et al. J. Am. Stat. Assoc. 112, 1–10 (2016).
5. Begley, C. G. & Ioannidis, J. P. A. Circ. Res. 116, 116–126 (2015).
6. Kass, R. E. & Raftery, A. E. J. Am. Stat. Assoc. 90, 773–795 (1995).
7. Szucs, D. & Ioannidis, J. P. A. PLoS Biol. 15, e2000797 (2017).
8. Open Science Collaboration. Science 349, aac4716 (2015).
9. Camerer, C. F. et al. Science 351, 1433–1436 (2016).
10. Chavalarias, D. et al. JAMA 315, 1141–1148 (2016).
11. Gelman, A. & Carlin, J. Perspect. Psychol. Sci. 9, 641–651 (2014).
12. Fanelli, D., Costas, R. & Ioannidis, J. P. A. Proc. Natl Acad. Sci. USA 114, 3714–3719 (2017).
13. Wasserstein, R. L. & Lazar, N. A. Am. Stat. 70, 129–133 (2016).
14. Fisher, R. A. Statistical Methods for Research Workers (Oliver & Boyd, Edinburgh, 1925).
15. Sellke, T., Bayarri, M. J. & Berger, J. O. Am. Stat. 55, 62–71 (2001).


Acknowledgements

We thank D. L. Lormand, R. Royer and A. T. Nguyen Viet for excellent research assistance.

Author information

Affiliations

  1. Center for Economic and Social Research and Department of Economics, University of Southern California, Los Angeles, CA, 90089-3332, USA

    • Daniel J. Benjamin
  2. Department of Statistical Science, Duke University, Durham, NC, 27708-0251, USA

    • James O. Berger
    • , Merlise Clyde
    •  & Robert L. Wolpert
  3. Department of Economics, Stockholm School of Economics, Stockholm, SE-113 83, Sweden

    • Magnus Johannesson
    •  & Anna Dreber
  4. University of Virginia, Charlottesville, VA, 22908, USA

    • Brian A. Nosek
  5. Center for Open Science, Charlottesville, VA, 22903, USA

    • Brian A. Nosek
  6. Department of Psychology, University of Amsterdam, Amsterdam, 1018 VZ, The Netherlands

    • E.-J. Wagenmakers
  7. School of Arts and Sciences and Department of Criminology, University of Pennsylvania, Philadelphia, PA, 19104-6286, USA

    • Richard Berk
  8. Department of Psychology and Neuroscience, Department of Sociology, University of North Carolina Chapel Hill, Chapel Hill, NC, 27599-3270, USA

    • Kenneth A. Bollen
  9. Institute of Zoology — Neurogenetics, Universität Regensburg, Universitätsstrasse 31, 93040, Regensburg, Germany

    • Björn Brembs
  10. Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, PA, 19104, USA

    • Richard Berk
    • , Lawrence Brown
    •  & Edward I. George
  11. Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, 91125, USA

    • Colin Camerer
  12. Department of Economics, New York University, New York, NY, 10012, USA

    • David Cesarini
  13. The Research Institute of Industrial Economics (IFN), Stockholm, SE-102 15, Sweden

    • David Cesarini
  14. Cardiff University Brain Research Imaging Centre (CUBRIC), Cardiff, CF24 4HQ, UK

    • Christopher D. Chambers
  15. Northwestern University, Evanston, IL, 60208, USA

    • Thomas D. Cook
  16. Mathematica Policy Research, Washington, DC, 20002-4221, USA

    • Thomas D. Cook
  17. Department of Psychology, Quantitative Program, Ohio State University, Columbus, OH, 43210, USA

    • Paul De Boeck
  18. School of Psychology, University of Sussex, Brighton, BN1 9QH, UK

    • Zoltan Dienes
    •  & Andy P. Field
  19. Department of Philosophy, Texas A&M University, College Station, TX, 77843-4237, USA

    • Kenny Easwaran
  20. Department of Psychology, Royal Holloway University of London, Egham Surrey, TW20 0EX, UK

    • Charles Efferson
  21. Department of Economics, University of Zurich, 8006, Zurich, Switzerland

    • Ernst Fehr
  22. School of BioSciences and School of Historical & Philosophical Studies, University of Melbourne, Parkville, VIC, 3010, Australia

    • Fiona Fidler
  23. Department of Philosophy, University of Wisconsin — Madison, Madison, WI, 53706, USA

    • Malcolm Forster
  24. Department of Psychology, University of Michigan, Ann Arbor, MI, 48109-1043, USA

    • Richard Gonzalez
  25. Stanford University, General Medical Disciplines, Stanford, CA, 94305, USA

    • Steven Goodman
  26. Department of Ecology, Evolution and Natural Resources SEBS, Rutgers University, New Brunswick, NJ, 08901-8551, USA

    • Edwin Green
  27. Department of Political Science, Columbia University in the City of New York, New York, NY, 10027, USA

    • Donald P. Green
  28. Department of Psychology, University of Washington, Seattle, WA, 98195-1525, USA

    • Anthony G. Greenwald
  29. Institute of Evolutionary Biology School of Biological Sciences, The University of Edinburgh, Edinburgh, EH9 3JT, UK

    • Jarrod D. Hadfield
  30. Weinberg College of Arts & Sciences Department of Statistics, Northwestern University, Evanston, IL, 60208, USA

    • Larry V. Hedges
  31. Epidemiology, Biostatistics and Prevention Institute (EBPI), University of Zurich, 8001, Zurich, Switzerland

    • Leonhard Held
  32. National University of Singapore, Singapore, 119077, Singapore

    • Teck Hua Ho
  33. Department of Methods and Statistics, Universiteit Utrecht, Utrecht, 3584 CH, The Netherlands

    • Herbert Hoijtink
  34. School of Human Evolution and Social Change, Arizona State University, Tempe, AZ, 85287-2402, USA

    • Daniel J. Hruschka
  35. Department of Politics and Center for Statistics and Machine Learning, Princeton University, Princeton, NJ, 08544, USA

    • Kosuke Imai
  36. Stanford University, Stanford, CA, 94305-5015, USA

    • Guido Imbens
  37. Departments of Medicine, of Health Research and Policy, of Biomedical Data Science, and of Statistics and Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, 94305, USA

    • John P. A. Ioannidis
  38. Advanced Quantitative Methods, Social Research Methodology, Department of Education, Graduate School of Education & Information Studies, University of California, Los Angeles, CA, 90095-1521, USA

    • Minjeong Jeon
  39. Department of Life Sciences, Imperial College London, Ascot, SL5 7PY, UK

    • James Holland Jones
  40. Department of Earth System Science, Stanford, CA, 94305-4216, USA

    • James Holland Jones
  41. Department of Banking and Finance, University of Innsbruck and University of Gothenburg, Innsbruck, A-6020, Austria

    • Michael Kirchler
  42. Department of Economics, Harvard University, Cambridge, MA, 02138, USA

    • David Laibson
  43. Department of Economics, University of Chicago, Chicago, IL, 60637, USA

    • John List
  44. Department of Biostatistics, University of Michigan, Ann Arbor, MI, 48109-2029, USA

    • Roderick Little
  45. Department of Political Science, University of Michigan, Ann Arbor, MI, 48109-1045, USA

    • Arthur Lupia
  46. Department of History and Philosophy of Science, University of Pittsburgh, Pittsburgh, PA, 15260, USA

    • Edouard Machery
  47. Department of Psychology, University of Notre Dame, Notre Dame, IN, 46556, USA

    • Scott E. Maxwell
  48. School of BioSciences, University of Melbourne, Parkville, VIC, 3010, Australia

    • Michael McCarthy
  49. Haas School of Business, University of California at Berkeley, Berkeley, CA, 94720-1900A, USA

    • Don A. Moore
  50. Johns Hopkins University, Baltimore, MD, 21218, USA

    • Stephen L. Morgan
  51. MRC Integrative Epidemiology Unit, University of Bristol, Bristol, BS8 1TU, UK

    • Marcus Munafó
  52. UK Centre for Tobacco and Alcohol Studies, School of Experimental Psychology, University of Bristol, Bristol, BS8 1TU, UK

    • Marcus Munafó
  53. Evolution & Ecology Research Centre and School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW, 2052, Australia

    • Shinichi Nakagawa
  54. Department of Government, Dartmouth College, Hanover, NH, 03755, USA

    • Brendan Nyhan
  55. Department of Biology, Whitman College, Walla Walla, WA, 99362, USA

    • Timothy H. Parker
  56. Department of Mathematics, University of Puerto Rico, Rio Piedras Campus, San Juan, PR, 00936-8377, Puerto Rico

    • Luis Pericchi
  57. Department of Psychology, University of Milan-Bicocca, Milan, 20126, Italy

    • Marco Perugini
  58. Department of Cognitive Sciences, University of California, Irvine, CA, 92617, USA

    • Jeff Rouder
  59. Université Paris Dauphine, 75016, Paris, France

    • Judith Rousseau
  60. Department of Psychology, The University of British Columbia, Vancouver, V6T 1Z4, BC, Canada

    • Victoria Savalei
  61. Department Psychology, Ludwig-Maximilians-University Munich, Leopoldstraβe 13, 80802, Munich, Germany

    • Felix D. Schönbrodt
  62. Department of Statistics, Purdue University, West Lafayette, IN, 47907-2067, USA

    • Thomas Sellke
  63. Department of Political Science, Washington University in St. Louis, St. Louis, MO, 63130-4899, USA

    • Betsy Sinclair
  64. Government Department, Harvard University, Cambridge, MA, 02138, USA

    • Dustin Tingley
  65. Department of Psychology, Ohio State University, Columbus, OH, 43210, USA

    • Trisha Van Zandt
  66. Department of Psychology, University of California, Davis, CA, 95616, USA

    • Simine Vazire
  67. Microsoft Research, 641 Avenue of the Americas, 7th Floor, New York, NY, 10011, USA

    • Duncan J. Watts
  68. Department of Sociology, Harvard University, Cambridge, MA, 02138, USA

    • Christopher Winship
  69. Department of Sociology, Princeton University, Princeton, NJ, 08544, USA

    • Yu Xie
  70. Department of Sociology, Stanford University, Stanford, CA, 94305-2047, USA

    • Cristobal Young
  71. Department of Economics, Dartmouth College, Hanover, NH, 03755-3514, USA

    • Jonathan Zinman
  72. Department of Statistics, Texas A&M University, College Station, TX, 77843, USA

    • Valen E. Johnson

Competing interests

One of the 72 authors, Christopher Chambers, is a member of the Advisory Board of Nature Human Behaviour. Christopher Chambers was not a corresponding author and did not communicate with the editors regarding the publication of this article. The other authors declare no competing interests.

Corresponding authors

Correspondence to Daniel J. Benjamin or Magnus Johannesson or Valen E. Johnson.

Electronic supplementary material

Supplementary Information: Supplementary Methods