Abstract
The objective of science is to advance knowledge, primarily in two interlinked ways: circulating ideas, and defending or criticizing the ideas of others. Peer review acts as the gatekeeper to these mechanisms. Given the increasing concern surrounding the reproducibility of much published research (ref. 1), it is critical to understand whether peer review is intrinsically susceptible to failure, or whether extrinsic factors that distort scientists' decisions are responsible. Here we show that even when scientists are motivated to promote the truth, their behaviour may be influenced, and even dominated, by information gleaned from their peers' behaviour, rather than by their personal dispositions. This phenomenon, known as herding, subjects the scientific community to an inherent risk of converging on an incorrect answer and raises the possibility that, under certain conditions, science may not be self-correcting. We further demonstrate that exercising some subjectivity in reviewer decisions, which serves to curb the herding process, can be beneficial for the scientific community in processing available information to estimate truth more accurately. By examining the impact of different models of reviewer decisions on the dynamic process of publication, and thereby on the eventual aggregation of knowledge, we provide a new perspective on the ongoing discussion of how the peer-review process may be improved.
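The herding dynamic described in the abstract is closely related to the classic informational-cascade models of Banerjee (ref. 48) and Bikhchandani, Hirshleifer and Welch (ref. 52). The sketch below simulates that textbook mechanism, not the paper's own reviewer model; the signal accuracy `q`, the tie-breaking rule, and all names are illustrative assumptions. Each agent privately observes a noisy binary signal about the true state, sees only the choices of its predecessors, and picks the state its Bayesian posterior favours.

```python
import math
import random


def simulate_cascade(truth, q, n_agents, rng):
    """Sequential Bayesian agents with private binary signals.

    Each agent receives a signal that matches `truth` with probability
    q > 0.5, observes all previous agents' *actions* (not their signals),
    and chooses the state its posterior favours (ties broken by its own
    signal). Returns the list of actions taken.
    """
    llr = math.log(q / (1 - q))   # log-likelihood ratio carried by one signal
    public = 0.0                  # public log-odds in favour of state 1
    actions = []
    for _ in range(n_agents):
        signal = truth if rng.random() < q else 1 - truth
        private = llr if signal == 1 else -llr
        total = public + private
        if total > 1e-12:
            action = 1
        elif total < -1e-12:
            action = 0
        else:
            action = signal       # tie: follow own signal
        actions.append(action)
        # Observers update the public belief only while the action is
        # informative, i.e. while the agent's own signal could still have
        # changed its choice. Once |public| exceeds one signal's weight,
        # everyone imitates and actions carry no information: a cascade.
        if abs(public) <= llr + 1e-12:
            public += llr if action == 1 else -llr
    return actions


if __name__ == "__main__":
    rng = random.Random(0)
    runs = 2000
    wrong = sum(
        1 for _ in range(runs)
        if simulate_cascade(truth=1, q=0.7, n_agents=30, rng=rng)[-1] == 0
    )
    print(f"fraction of runs herding on the wrong answer: {wrong / runs:.3f}")
```

With q = 0.7 and this tie-breaking rule, on the order of 15% of runs lock into a cascade on the wrong answer, illustrating the abstract's point: individually rational use of peers' behaviour can still leave the community converged on a falsehood.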
References
Ioannidis, J. P. A. Why most published research findings are false. PLoS Med. 2, e124 (2005)
Ioannidis, J. P. A. Scientific inbreeding and same-team replication: Type D personality as an example. J. Psychosom. Res. 73, 408–410 (2012)
Ioannidis, J. P. A. Contradicted and initially stronger effects in highly cited clinical research. J. Am. Med. Assoc. 294, 218–228 (2005)
Ioannidis, J. P. & Trikalinos, T. A. Early extreme contradictory estimates may appear in published research: the Proteus phenomenon in molecular genetics research and randomized trials. J. Clin. Epidemiol. 58, 543–549 (2005)
Davey Smith, G. in Biopsychosocial Medicine: An Integrated Approach to Understanding Illness (ed. White, P.) 77–102 (Oxford Univ. Press, 2005)
Tatsioni, A., Bonitsis, N. G. & Ioannidis, J. P. A. Persistence of contradicted claims in the literature. J. Am. Med. Assoc. 298, 2517–2526 (2007)
Freese, J. in Intergenerational Caregiving (eds Crouter, A. C., Booth, A., Bianchi, S. M. & Seltzer, J. A.) 145–177 (Urban Institute Press, 2008)
Ioannidis, J. P. A. Why science is not necessarily self-correcting. Perspect. Psychol. Sci. 7, 645–654 (2012)
Sterling, T. D. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J. Am. Stat. Assoc. 54, 30–34 (1959)
Barnes, B., Bloor, D. & Henry, J. Scientific Knowledge: A Sociological Analysis (Univ. Chicago Press, 1996)
Kuhn, T. S. The Structure of Scientific Revolutions (Univ. Chicago Press, 1962)
Yong, E. & Simonsohn, U. The data detective. Nature 487, 18–19 (2012)
Martinson, B. C., Anderson, M. S. & de Vries, R. Scientists behaving badly. Nature 435, 737–738 (2005)
Pfeiffer, T. & Hoffmann, R. Large-scale assessment of the effect of popularity on the reliability of research. PLoS ONE 4, e5996 (2009)
Brembs, B., Button, K. & Munafò, M. R. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7, 291 (2013)
Button, K. S. et al. Power failure: why small sample size undermines the reliability of neuroscience. Nature Rev. Neurosci. 14, 365–376 (2013)
Greenberg, S. A. How citation distortions create unfounded authority: analysis of a citation network. Br. Med. J. 339, b2680 (2009)
Murphy, S. E. et al. The effect of the serotonin transporter polymorphism (5-HTTLPR) on amygdala function: a meta-analysis. Mol. Psychiatry 18, 512–520 (2013)
Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366 (2011)
Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 86, 638–641 (1979)
Edwards, A. W. F. More on the too-good-to-be-true paradox and Gregor Mendel. J. Hered. 77, 138 (1986)
Boutron, I., Dutton, S., Ravaud, P. & Altman, D. G. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. J. Am. Med. Assoc. 303, 2058–2064 (2010)
Gøtzsche, P. C. Believability of relative risks and odds ratios in abstracts: cross sectional study. BMJ 333, 231–234 (2006)
Banerjee, A. V. A simple model of herd behavior. Q. J. Econ. 107, 797–817 (1992)
Asch, S. E. Studies of independence and conformity. Psychol. Monogr. 70, 1–70 (1956)
Almenberg, J., Kittlitz, K. & Pfeiffer, T. An experiment on prediction markets in science. PLoS ONE 4, e8500 (2009)
Bayes, T. & Price, R. An essay towards solving a problem in the doctrine of chances. Philosophical Transactions (1683–1775) 53, 370–418 (1763)
Bikhchandani, S., Hirshleifer, D. & Welch, I. A theory of fads, fashion, custom, and cultural change in informational cascades. J. Polit. Econ. 100, 992–1026 (1992)
Billingsley, P. Probability and Measure 3rd edn (John Wiley & Sons, 1995)
Acknowledgements
This research was supported by an Economics and Social Research Council UK PhD studentship to M.W.P. M.R.M. is a member of the UK Centre for Tobacco and Alcohol Studies, a UKCRC Public Health Research Centre of Excellence. Funding from the British Heart Foundation, Cancer Research UK, Economic and Social Research Council, Medical Research Council, and the National Institute for Health Research, under the auspices of the UK Clinical Research Collaboration, is gratefully acknowledged. The authors are grateful to S. Murphy for her assistance in coding the meta-analysis study abstracts, and to A. Bird and G. Huxley for their comments on earlier drafts of this manuscript.
Author information
Contributions
All authors contributed equally to the design and analysis of the models and the writing of the manuscript. The project was conceived by I.-U.P. and M.R.M., and the computer program was written by M.W.P.
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Supplementary information
Supplementary Information
This file contains Supplementary Sections 1–4. Section 1 contains the computer program used to produce the numerical results presented in the main article. Section 2 presents an example of de-herding occurring in the M1 model. Section 3 discusses the effect of relaxing the common-knowledge assumption for rejections and analyses alternative models for comparison. Section 4 considers the case in which scientists are motivated simply to publish (regardless of the truthfulness of the defended theme), and shows that immediate and complete herding may occur. (PDF 526 kb)
About this article
Cite this article
Park, IU., Peacey, M. & Munafò, M. Modelling the effects of subjective and objective decision making in scientific peer review. Nature 506, 93–96 (2014). https://doi.org/10.1038/nature12786
This article is cited by
- Study results from journals with a higher impact factor are closer to "truth": a meta-epidemiological study. Systematic Reviews (2023)
- Comparing paper level classifications across different methods and systems: an investigation of Nature publications. Scientometrics (2022)
- Adherence to reporting guidelines increases the number of citations: the argument for including a methodologist in the editorial process and peer-review. BMC Medical Research Methodology (2019)
- The impact of conference ranking systems in computer science: a comparative regression analysis. Scientometrics (2018)
- Zur Lösung der wirklich bedeutenden Probleme: Was von Maschinen erwartet wird [On solving the truly significant problems: what is expected of machines]. Informatik-Spektrum (2018)