
A solution to the single-question crowd wisdom problem


Once considered provocative [1], the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business [2,3]. Recent applications include political and economic forecasting [4,5], evaluating nuclear safety [6], public policy [7], the quality of chemical probes [8], and possible responses to a restless volcano [9]. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment [10]. However, democratic methods have serious limitations. They are biased for shallow, lowest-common-denominator information, at the expense of novel or specialized knowledge that is not widely shared [11,12]. Adjustments based on measuring confidence do not solve this problem reliably [13]. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions [14–20].
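The selection principle stated above can be illustrated in a few lines of code. This is a minimal sketch, assuming each respondent reports both an answer and a predicted share for each answer; the function name and the toy data are illustrative, not the authors' implementation.

```python
# 'Surprisingly popular' selection: pick the answer whose actual
# popularity most exceeds its popularity as predicted by respondents.
from collections import Counter

def surprisingly_popular(votes, predictions):
    """votes       -- list of chosen answers, one per respondent
    predictions -- list of dicts mapping answer -> predicted vote share
    Returns the answer with the largest (actual - predicted) vote share."""
    n = len(votes)
    actual = {a: c / n for a, c in Counter(votes).items()}
    # Average each answer's predicted share across respondents.
    predicted = {
        a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
        for a in actual
    }
    return max(actual, key=lambda a: actual[a] - predicted[a])

# Toy version of the Philadelphia question: most respondents answer
# 'yes' (incorrectly), but everyone predicts 'yes' to be even more
# popular than it is, so 'no' is the surprisingly popular answer.
votes = ["yes"] * 6 + ["no"] * 4
predictions = [{"yes": 0.8, "no": 0.2}] * 10
print(surprisingly_popular(votes, predictions))  # prints: no
```

Note that a simple majority vote over the same data would return ‘yes’; the principle overrides the majority precisely when an answer attracts more support than the crowd itself anticipated.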


Figure 1: Two example questions from Study 1c, described in text.
Figure 2: Why ‘surprisingly popular’ answers should be correct, illustrated by simple models of Philadelphia and Columbia questions with Bayesian respondents.
Figure 3: Selection of stimuli from Study 4 in which respondents judged the market price of 20th century artworks.
Figure 4: Results of aggregation algorithms on studies discussed in the text.
Figure 5: Logistic regressions showing the probability that an artwork is judged expensive (above $30,000) as function of actual market price.


  1. Galton, F. Vox populi. Nature 75, 450–451 (1907)

  2. Sunstein, C. Infotopia: How Many Minds Produce Knowledge (Oxford University Press, 2006)

  3. Surowiecki, J. The Wisdom of Crowds (Anchor, 2005)

  4. Budescu, D. V. & Chen, E. Identifying expertise to extract the wisdom of crowds. Manage. Sci. 61, 267–280 (2014)

  5. Mellers, B. et al. Psychological strategies for winning a geopolitical forecasting tournament. Psychol. Sci. 25, 1106–1115 (2014)

  6. Cooke, R. M. & Goossens, L. L. TU Delft expert judgment data base. Reliab. Eng. Syst. Saf. 93, 657–674 (2008)

  7. Morgan, M. G. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc. Natl Acad. Sci. USA 111, 7176–7184 (2014)

  8. Oprea, T. I. et al. A crowdsourcing evaluation of the NIH chemical probes. Nat. Chem. Biol. 5, 441–447 (2009)

  9. Aspinall, W. A route to more tractable expert advice. Nature 463, 294–295 (2010)

  10. Lorenz, J., Rauhut, H., Schweitzer, F. & Helbing, D. How social influence can undermine the wisdom of crowd effect. Proc. Natl Acad. Sci. USA 108, 9020–9025 (2011)

  11. Chen, K., Fine, L. & Huberman, B. Eliminating public knowledge biases in information-aggregation mechanisms. Manage. Sci. 50, 983–994 (2004)

  12. Simmons, J. P., Nelson, L. D., Galak, J. & Frederick, S. Intuitive biases in choice versus estimation: implications for the wisdom of crowds. J. Consum. Res. 38, 1–15 (2011)

  13. Hertwig, R. Tapping into the wisdom of the crowd–with confidence. Science 336, 303–304 (2012)

  14. Batchelder, W. & Romney, A. Test theory without an answer key. Psychometrika 53, 71–92 (1988)

  15. Lee, M. D., Steyvers, M., de Young, M. & Miller, B. Inferring expertise in knowledge and prediction ranking tasks. Top. Cogn. Sci. 4, 151–163 (2012)

  16. Yi, S. K., Steyvers, M., Lee, M. D. & Dry, M. J. The wisdom of the crowd in combinatorial problems. Cogn. Sci. 36, 452–470 (2012)

  17. Lee, M. D. & Danileiko, I. Using cognitive models to combine probability estimates. Judgm. Decis. Mak. 9, 259–273 (2014)

  18. Anders, R. & Batchelder, W. H. Cultural consensus theory for multiple consensus truths. J. Math. Psychol. 56, 452–469 (2012)

  19. Oravecz, Z., Anders, R. & Batchelder, W. H. Hierarchical Bayesian modeling for test theory without an answer key. Psychometrika 80, 341–364 (2015)

  20. Freund, Y. & Schapire, R. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55, 119–139 (1997)

  21. Goldstein, D. G. & Gigerenzer, G. Models of ecological rationality: the recognition heuristic. Psychol. Rev. 109, 75–90 (2002)

  22. Cooke, R. Experts in Uncertainty: Opinion and Subjective Probability in Science (Oxford University Press, 1991)

  23. Koriat, A. When are two heads better than one and why? Science 336, 360–362 (2012)

  24. Prelec, D. A Bayesian truth serum for subjective data. Science 306, 462–466 (2004)

  25. John, L. K., Loewenstein, G. & Prelec, D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23, 524–532 (2012)

  26. Arrow, K. J. et al. The promise of prediction markets. Science 320, 877–878 (2008)

  27. Lebreton, M., Abitbol, R., Daunizeau, J. & Pessiglione, M. Automatic integration of confidence in the brain valuation signal. Nat. Neurosci. 18, 1159–1167 (2015)

Acknowledgements


We thank M. Alam, A. Huang and D. Mijovic-Prelec for help with designing and conducting Study 3, and D. Suh for help with designing and conducting Study 4b. This work was supported by NSF SES-0519141, the Institute for Advanced Study (Prelec), and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11PC20058. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright annotation thereon. The views and conclusions expressed herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the US Government.

Author information




All authors contributed extensively to the work presented in this paper.

Corresponding author

Correspondence to Dražen Prelec.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Additional information

Reviewer Information Nature thanks A. Baillon, D. Helbing and the other anonymous reviewer(s) for their contribution to the peer review of this work.

Extended data figures and tables

Extended Data Figure 1 Performance of all methods across all studies, shown with respect to the Matthews correlation coefficient.

Error bars are bootstrapped standard errors. Details of studies are given in Fig. 4 of the main text.

Extended Data Figure 2 Performance of all methods across all studies, shown with respect to the macro-averaged F1 score.

Error bars are bootstrapped standard errors. Details of studies are given in Fig. 4 of the main text.

Extended Data Figure 3 Performance of all methods across all studies, shown with respect to percentage of questions correct.

Error bars are bootstrapped standard errors. Details of studies are given in Fig. 4 of the main text.

Extended Data Figure 4 Performance of aggregation methods on simulated datasets of binary questions, under uniform sampling assumptions.

One draws a pair of coin biases (that is, signal distribution parameters) and a prior over worlds, each from independent uniform distributions. Combinations of coin biases and prior that result in recipients of both coin tosses voting for the same answer are discarded. An actual coin is sampled according to the prior, and tossed a finite number of times to produce the votes, confidences, and vote predictions required by different methods (see Supplementary Information for simulation details). As well as showing how sample size affects different aggregation methods, the simulations also show that majorities become more reliable as consensus increases. A majority of 90% is correct about 90% of the time, while a majority of 55% is not much better than chance. This is not due to sampling error, but reflects the structure of the model and simulation assumptions. According to the model, an answer with x% endorsements is incorrect if counterfactual endorsements for that answer exceed x% (Theorem 2), and the chance of sampling such a problem diminishes with x.
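The sampling procedure described above can be sketched as follows. This is a hedged reconstruction under the stated assumptions (uniform draws of the prior and the two signal probabilities, rejection of configurations where both toss outcomes induce the same vote, Bayesian voters); parameter names, the number of voters and trials, and the accuracy comparison are illustrative, not the authors' simulation code.

```python
import random

def posterior_A(heads, q, pA, pB):
    """Bayesian posterior P(world = A | one coin toss).
    q  -- prior probability of world A
    pA -- P(heads | world A); pB -- P(heads | world B)"""
    like_A = pA if heads else 1.0 - pA
    like_B = pB if heads else 1.0 - pB
    return q * like_A / (q * like_A + (1.0 - q) * like_B)

def simulate_question(n_voters=100, rng=random):
    # Draw the prior and coin biases uniformly, discarding draws where
    # heads- and tails-recipients would vote for the same answer.
    while True:
        q, pA, pB = rng.random(), rng.random(), rng.random()
        vote_if_heads = posterior_A(True, q, pA, pB) > 0.5
        vote_if_tails = posterior_A(False, q, pA, pB) > 0.5
        if vote_if_heads != vote_if_tails:
            break
    world_is_A = rng.random() < q
    p_heads = pA if world_is_A else pB

    votes_A, predicted_A = 0, 0.0
    for _ in range(n_voters):
        heads = rng.random() < p_heads  # the voter's private signal
        votes_A += (vote_if_heads if heads else vote_if_tails)
        # The voter's Bayesian prediction of the share voting A:
        post = posterior_A(heads, q, pA, pB)
        exp_heads = post * pA + (1.0 - post) * pB
        predicted_A += exp_heads if vote_if_heads else 1.0 - exp_heads

    actual_A = votes_A / n_voters
    predicted_A /= n_voters
    majority_A = actual_A > 0.5          # democratic vote
    sp_A = actual_A > predicted_A        # surprisingly popular
    return world_is_A, majority_A, sp_A

random.seed(0)
trials = [simulate_question() for _ in range(2000)]
maj_acc = sum(w == m for w, m, _ in trials) / len(trials)
sp_acc = sum(w == s for w, _, s in trials) / len(trials)
print(f"majority accuracy: {maj_acc:.2f}, surprisingly popular: {sp_acc:.2f}")
```

With enough voters per question, the surprisingly popular criterion approaches the model's theoretical guarantee, while majority accuracy varies with how far the majority share sits from 50%, consistent with the pattern described above.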

Supplementary information

Supplementary Information

This file contains Supplementary Text and Data sections 1-3 – see contents page for details. (PDF 207 kb)



About this article


Cite this article

Prelec, D., Seung, H. & McCoy, J. A solution to the single-question crowd wisdom problem. Nature 541, 532–535 (2017).


