
Modelling the effects of subjective and objective decision making in scientific peer review

Abstract

The objective of science is to advance knowledge, primarily in two interlinked ways: circulating ideas, and defending or criticizing the ideas of others. Peer review acts as the gatekeeper to these mechanisms. Given the increasing concern surrounding the reproducibility of much published research (ref. 1), it is critical to understand whether peer review is intrinsically susceptible to failure, or whether extrinsic factors that distort scientists’ decisions are responsible. Here we show that even when scientists are motivated to promote the truth, their behaviour may be influenced, and even dominated, by information gleaned from their peers’ behaviour, rather than by their personal dispositions. This phenomenon, known as herding, subjects the scientific community to an inherent risk of converging on an incorrect answer and raises the possibility that, under certain conditions, science may not be self-correcting. We further demonstrate that exercising some subjectivity in reviewer decisions, which serves to curb the herding process, can be beneficial for the scientific community in processing available information to estimate truth more accurately. By examining the impact of different models of reviewer decisions on the dynamic process of publication, and thereby on eventual aggregation of knowledge, we provide a new perspective on the ongoing discussion of how the peer-review process may be improved.
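The herding dynamic described above can be illustrated with a minimal sequential-choice simulation in the spirit of the informational-cascade models of refs 24 and 28. This is a hedged sketch, not the authors' actual model (their paper compares three reviewer-behaviour variants, M1–M3): the function name, the single accuracy parameter `signal_accuracy`, and the Bayesian tie-breaking rule are all illustrative assumptions. Each agent receives a noisy private signal about a binary truth, observes all predecessors' public choices, and picks the state favoured by the combined log-likelihood; once public evidence outweighs any single private signal, choices become uninformative and lock in, which is the cascade.

```python
import math
import random

def simulate_cascade(n_agents=50, signal_accuracy=0.7, truth=1, seed=0):
    """Sequential binary choice with social learning (illustrative sketch).

    Each agent observes a private signal that matches the truth with
    probability `signal_accuracy`, plus the public history of choices,
    and selects the state with the higher posterior log-likelihood.
    """
    rng = random.Random(seed)
    # Log-likelihood ratio contributed by one private signal for state 1.
    llr_signal = math.log(signal_accuracy / (1 - signal_accuracy))
    public_llr = 0.0  # accumulated public evidence for state 1
    choices = []
    for _ in range(n_agents):
        signal = truth if rng.random() < signal_accuracy else 1 - truth
        private_llr = llr_signal if signal == 1 else -llr_signal
        total = public_llr + private_llr
        # Choose the better-supported state; when indifferent, follow
        # the private signal (the standard tie-breaking convention).
        choice = 1 if total > 0 else 0 if total < 0 else signal
        choices.append(choice)
        # A choice forced by public evidence alone reveals nothing about
        # the private signal, so the public belief stops updating: from
        # that point on, every agent herds on the same choice.
        if abs(public_llr) <= llr_signal + 1e-12:
            public_llr += llr_signal if choice == 1 else -llr_signal
    return choices
```

Running the simulation over many seeds shows that the community sometimes cascades onto the wrong state even though every individual signal favours the truth in expectation, which is the sense in which herding can prevent self-correction.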


Figure 1: Three models of peer-review behaviour.
Figure 2: Expected misperception in a generalized version of the M1 model.
Figure 3: Empirical evidence of discrepancy between claims and results.

References

1. Ioannidis, J. P. A. Why most published research findings are false. PLoS Med. 2, e124 (2005).
2. Ioannidis, J. P. A. Scientific inbreeding and same-team replication: Type D personality as an example. J. Psychosom. Res. 73, 408–410 (2012).
3. Ioannidis, J. P. A. Contradicted and initially stronger effects in highly cited clinical research. J. Am. Med. Assoc. 294, 218–228 (2005).
4. Ioannidis, J. P. & Trikalinos, T. A. Early extreme contradictory estimates may appear in published research: the Proteus phenomenon in molecular genetics research and randomized trials. J. Clin. Epidemiol. 58, 543–549 (2005).
5. Davey Smith, G. in Biopsychosocial Medicine: An Integrated Approach to Understanding Illness (ed. White, P.) 77–102 (Oxford Univ. Press, 2005).
6. Tatsioni, A., Bonitsis, N. G. & Ioannidis, J. P. A. Persistence of contradicted claims in the literature. J. Am. Med. Assoc. 298, 2517–2526 (2007).
7. Freese, J. in Intergenerational Caregiving (eds Crouter, A. C., Booth, A., Bianchi, S. M. & Seltzer, J. A.) 145–177 (Urban Institute Press, 2008).
8. Ioannidis, J. P. A. Why science is not necessarily self-correcting. Perspect. Psychol. Sci. 7, 645–654 (2012).
9. Sterling, T. D. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J. Am. Stat. Assoc. 54, 30–34 (1959).
10. Barnes, B., Bloor, D. & Henry, J. Scientific Knowledge: A Sociological Analysis (Univ. Chicago Press, 1996).
11. Kuhn, T. S. The Structure of Scientific Revolutions (Univ. Chicago Press, 1962).
12. Yong, E. & Simonsohn, U. The data detective. Nature 487, 18–19 (2012).
13. Martinson, B. C., Anderson, M. S. & de Vries, R. Scientists behaving badly. Nature 435, 737–738 (2005).
14. Pfeiffer, T. & Hoffmann, R. Large-scale assessment of the effect of popularity on the reliability of research. PLoS ONE 4, e5996 (2009).
15. Brembs, B., Button, K. & Munafò, M. R. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7, 291 (2013).
16. Button, K. S. et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Rev. Neurosci. 14, 365–376 (2013).
17. Greenberg, S. A. How citation distortions create unfounded authority: analysis of a citation network. Br. Med. J. 339, b2680 (2009).
18. Murphy, S. E. et al. The effect of the serotonin transporter polymorphism (5-HTTLPR) on amygdala function: a meta-analysis. Mol. Psychiatry 18, 512–520 (2013).
19. Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366 (2011).
20. Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 86, 638–641 (1979).
21. Edwards, A. W. F. More on the too-good-to-be-true paradox and Gregor Mendel. J. Hered. 77, 138 (1986).
22. Boutron, I., Dutton, S., Ravaud, P. & Altman, D. G. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. J. Am. Med. Assoc. 303, 2058–2064 (2010).
23. Gøtzsche, P. C. Believability of relative risks and odds ratios in abstracts: cross sectional study. BMJ 333, 231–234 (2006).
24. Banerjee, A. V. A simple model of herd behavior. Q. J. Econ. 107, 797–817 (1992).
25. Asch, S. E. Studies of independence and conformity. Psychol. Monogr. 70, 1–70 (1956).
26. Almenberg, J., Kittlitz, K. & Pfeiffer, T. An experiment on prediction markets in science. PLoS ONE 4, e8500 (2009).
27. Bayes, T. & Price, R. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775) 53, 370–418 (1763).
28. Bikhchandani, S., Hirshleifer, D. & Welch, I. A theory of fads, fashion, custom, and cultural change as informational cascades. J. Polit. Econ. 100, 992–1026 (1992).
29. Billingsley, P. Probability and Measure 3rd edn (John Wiley & Sons, 1995).

Acknowledgements

This research was supported by an Economics and Social Research Council UK PhD studentship to M.W.P. M.R.M. is a member of the UK Centre for Tobacco and Alcohol Studies, a UKCRC Public Health Research Centre of Excellence. Funding from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged. The authors are grateful to S. Murphy for her assistance in coding the meta-analysis study abstracts, and to A. Bird and G. Huxley for their comments on earlier drafts of this manuscript.

Author information

Contributions

All authors contributed equally to the design and analysis of the models and the writing of the manuscript. The project was conceived by I.-U.P. and M.R.M., and the computer program was written by M.W.P.

Corresponding author

Correspondence to Marcus R. Munafò.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Supplementary information

Supplementary Information

This file contains Supplementary Sections 1–4. Section 1 contains the computer program used to obtain the numerical results presented in the main article. Section 2 presents an example of de-herding occurring in the M1 model. Section 3 discusses the effect of relaxing the common-knowledge assumption about rejections, and analyses alternative models for comparison. Section 4 considers the case in which scientists are motivated simply to publish (regardless of the truthfulness of the defended theme), and shows that immediate and complete herding may occur. (PDF 526 kb)

About this article

Cite this article

Park, I.-U., Peacey, M. W. & Munafò, M. R. Modelling the effects of subjective and objective decision making in scientific peer review. Nature 506, 93–96 (2014). https://doi.org/10.1038/nature12786
