Letter

Modelling the effects of subjective and objective decision making in scientific peer review

Nature volume 506, pages 93–96 (06 February 2014)

Abstract

The objective of science is to advance knowledge, primarily in two interlinked ways: circulating ideas, and defending or criticizing the ideas of others. Peer review acts as the gatekeeper to these mechanisms. Given the increasing concern surrounding the reproducibility of much published research [1], it is critical to understand whether peer review is intrinsically susceptible to failure, or whether other, extrinsic factors that distort scientists’ decisions are responsible. Here we show that even when scientists are motivated to promote the truth, their behaviour may be influenced, and even dominated, by information gleaned from their peers’ behaviour rather than by their personal dispositions. This phenomenon, known as herding, subjects the scientific community to an inherent risk of converging on an incorrect answer and raises the possibility that, under certain conditions, science may not be self-correcting. We further demonstrate that exercising some subjectivity in reviewer decisions, which serves to curb the herding process, can help the scientific community to process the available information and estimate the truth more accurately. By examining the impact of different models of reviewer decisions on the dynamic process of publication, and thereby on the eventual aggregation of knowledge, we provide a new perspective on the ongoing discussion of how the peer-review process may be improved.
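The herding process described in the abstract follows the logic of the informational-cascade models cited in the reference list (refs 24 and 28). As a rough illustration only, and not the authors' peer-review model, the Python sketch below simulates sequential Bayesian agents who each receive a noisy private signal about which of two hypotheses is true, observe their predecessors' public choices, and pick the hypothesis with the higher posterior probability. Once the public record outweighs any single private signal, later agents rationally ignore their own evidence and the community can lock in on the wrong answer. The function name run_cascade and the parameters n_agents, signal_accuracy and truth are illustrative assumptions introduced here.

```python
# Minimal sketch of an informational-cascade ("herding") model in the spirit of
# refs 24 and 28 -- not the peer-review model analysed in the paper.
import math
import random

def run_cascade(n_agents=50, signal_accuracy=0.7, truth=1, seed=None):
    """Simulate agents choosing between hypotheses 0 and 1; return their public choices."""
    rng = random.Random(seed)
    step = math.log(signal_accuracy / (1 - signal_accuracy))  # log-odds weight of one private signal
    public = 0.0  # accumulated public evidence, as log-odds in favour of hypothesis 1
    choices = []
    for _ in range(n_agents):
        # Each private signal matches the truth with probability signal_accuracy.
        signal = truth if rng.random() < signal_accuracy else 1 - truth
        posterior = public + (step if signal == 1 else -step)
        if posterior > 0:
            choice = 1
        elif posterior < 0:
            choice = 0
        else:
            choice = signal  # exactly indifferent: follow own signal
        choices.append(choice)
        # Observers update on a choice only while it still reveals the private signal;
        # once |public| exceeds one signal's weight, a cascade has formed and beliefs freeze.
        if abs(public) <= step:
            public += step if choice == 1 else -step
    return choices

if __name__ == "__main__":
    wrong = sum(run_cascade(truth=1, seed=s)[-1] == 0 for s in range(1000))
    print(f"simulations ending in a cascade on the wrong hypothesis: {wrong}/1000")
```

In this toy setting a non-trivial fraction of runs ends with everyone endorsing the false hypothesis, which is the qualitative risk the paper attributes to purely objective, history-driven decisions.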

References

  1. Ioannidis, J. P. A. Why most published research findings are false. PLoS Med. 2, e124 (2005)
  2. Ioannidis, J. P. A. Scientific inbreeding and same-team replication: Type D personality as an example. J. Psychosom. Res. 73, 408–410 (2012)
  3. Ioannidis, J. P. A. Contradicted and initially stronger effects in highly cited clinical research. J. Am. Med. Assoc. 294, 218–228 (2005)
  4. Ioannidis, J. P. A. & Trikalinos, T. A. Early extreme contradictory estimates may appear in published research: the Proteus phenomenon in molecular genetics research and randomized trials. J. Clin. Epidemiol. 58, 543–549 (2005)
  5. In Biopsychosocial Medicine: An Integrated Approach to Understanding Illness (ed.) 77–102 (Oxford Univ. Press, 2005)
  6. Tatsioni, A., Bonitsis, N. G. & Ioannidis, J. P. A. Persistence of contradicted claims in the literature. J. Am. Med. Assoc. 298, 2517–2526 (2007)
  7. In Intergenerational Caregiving (eds) 145–177 (Urban Institute Press, 2008)
  8. Ioannidis, J. P. A. Why science is not necessarily self-correcting. Perspect. Psychol. Sci. 7, 645–654 (2012)
  9. Sterling, T. D. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J. Am. Stat. Assoc. 54, 30–34 (1959)
  10. Barnes, B., Bloor, D. & Henry, J. Scientific Knowledge: A Sociological Analysis (Univ. Chicago Press, 1996)
  11. Kuhn, T. S. The Structure of Scientific Revolutions (Univ. Chicago Press, 1962)
  12. The data detective. Nature 487, 18–19 (2012)
  13. Martinson, B. C., Anderson, M. S. & de Vries, R. Scientists behaving badly. Nature 435, 737–738 (2005)
  14. Pfeiffer, T. & Hoffmann, R. Large-scale assessment of the effect of popularity on the reliability of research. PLoS ONE 4, e5996 (2009)
  15. Brembs, B., Button, K. & Munafò, M. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7, 291 (2013)
  16. Button, K. S. et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Rev. Neurosci. 14, 365–376 (2013)
  17. Greenberg, S. A. How citation distortions create unfounded authority: analysis of a citation network. Br. Med. J. 339, b2680 (2009)
  18. Murphy, S. E. et al. The effect of the serotonin transporter polymorphism (5-HTTLPR) on amygdala function: a meta-analysis. Mol. Psychiatry 18, 512–520 (2013)
  19. Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366 (2011)
  20. Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 86, 638–641 (1979)
  21. More on the too-good-to-be-true paradox and Gregor Mendel. J. Hered. 77, 138 (1986)
  22. Boutron, I., Dutton, S., Ravaud, P. & Altman, D. G. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. J. Am. Med. Assoc. 303, 2058–2064 (2010)
  23. Gøtzsche, P. C. Believability of relative risks and odds ratios in abstracts: cross sectional study. BMJ 333, 231–234 (2006)
  24. Banerjee, A. V. A simple model of herd behavior. Q. J. Econ. 107, 797–817 (1992)
  25. Asch, S. E. Studies of independence and conformity. Psychol. Monogr. 70, 1–70 (1956)
  26. An experiment on prediction markets in science. PLoS ONE 4, e8500 (2009)
  27. Bayes, T. & Price, R. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775) 53, 370–418 (1763)
  28. Bikhchandani, S., Hirshleifer, D. & Welch, I. A theory of fads, fashion, custom, and cultural change as informational cascades. J. Polit. Econ. 100, 992–1026 (1992)
  29. Billingsley, P. Probability and Measure 3rd edn (John Wiley & Sons, 1995)

Acknowledgements

This research was supported by an Economics and Social Research Council UK PhD studentship to M.W.P. M.R.M. is a member of the UK Centre for Tobacco and Alcohol Studies, a UKCRC Public Health Research Centre of Excellence. Funding from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged. The authors are grateful to S. Murphy for her assistance in coding the meta-analysis study abstracts, and to A. Bird and G. Huxley for their comments on earlier drafts of this manuscript.

Author information

Affiliations

  1. Department of Economics, University of Bristol, Bristol BS8 1TN, UK

    • In-Uck Park
    • Mike W. Peacey
  2. Department of Economics, Sungkyunkwan University, Seoul 110-745, South Korea

    • In-Uck Park
  3. Department of Economics, University of Bath, Bath BA2 7AY, UK

    • Mike W. Peacey
  4. MRC Integrative Epidemiology Unit (IEU), University of Bristol, Bristol BS8 1BN, UK

    • Marcus R. Munafò
  5. UK Centre for Tobacco and Alcohol Studies, University of Bristol, Bristol BS8 1TU, UK

    • Marcus R. Munafò
  6. School of Experimental Psychology, University of Bristol, Bristol BS8 1TU, UK

    • Marcus R. Munafò

Authors

  1. In-Uck Park
  2. Mike W. Peacey
  3. Marcus R. Munafò

Contributions

All authors contributed equally to the design and analysis of the models and the writing of the manuscript. The project was conceived by I.-U.P. and M.R.M., and the computer program was written by M.W.P.

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to Marcus R. Munafò.

Supplementary information

PDF files

  1. Supplementary Information

    This file contains Supplementary Sections 1–4. Section 1 contains the computer program used to produce the numerical results presented in the main article. Section 2 presents an example of de-herding occurring in the M1 model. Section 3 discusses the effect of relaxing the assumption that rejections are common knowledge and analyses alternative models for comparison. Section 4 considers the case in which scientists are motivated simply to publish (regardless of the truthfulness of the defended theme) and shows that immediate and complete herding may occur; a toy sketch of this last point follows below.
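The Section 4 result can be caricatured in a few lines. The sketch below is an illustrative toy, not the model analysed in the Supplementary Information: it simply assumes that agreeing with the existing literature maximises the chance of acceptance, so every submission after the first copies the first published claim regardless of the submitter's own evidence. The function name publish_only_dynamics, its parameters and the acceptance rule are all assumptions introduced here.

```python
# Toy caricature of publication-motivated authors: once anything is published,
# copying the prevailing claim is the surest route to acceptance, so the record
# herds immediately and completely on the first claim, true or not.
import random

def publish_only_dynamics(n_scientists=30, signal_accuracy=0.7, truth=1, seed=0):
    """Each scientist submits whichever claim the existing literature already favours."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_scientists):
        signal = truth if rng.random() < signal_accuracy else 1 - truth
        if published:
            # Agreeing with the majority of prior publications is assumed to guarantee
            # acceptance, so a purely publication-motivated scientist copies it.
            claim = max((0, 1), key=lambda c: sum(p == c for p in published))
        else:
            claim = signal  # the first mover has no literature to copy and defends own evidence
        published.append(claim)
    return published

print(publish_only_dynamics(seed=3))  # after the first paper, every later entry matches it
```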

About this article

DOI: https://doi.org/10.1038/nature12786
