
  • Article

Intermediate levels of scientific knowledge are associated with overconfidence and negative attitudes towards science


Overconfidence is a prevalent problem and it is particularly consequential in its relation with scientific knowledge: being unaware of one’s own ignorance can affect behaviours and threaten public policies and health. However, it is not clear how confidence varies with knowledge. Here, we examine four large surveys, spanning 30 years in Europe and the United States and propose a new confidence metric. This metric does not rely on self-reporting or peer comparison, operationalizing (over)confidence as the tendency to give incorrect answers rather than ‘don’t know’ responses to questions on scientific facts. We find a nonlinear relationship between knowledge and confidence, with overconfidence (the confidence gap) peaking at intermediate levels of actual scientific knowledge. These high-confidence/intermediate-knowledge groups also display the least positive attitudes towards science. These results differ from current models and, by identifying specific audiences, can help inform science communication strategies.
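The metric sketched in the abstract (confidence inferred from incorrect versus 'don't know' responses) can be illustrated with a short, hypothetical computation. The answer labels, the per-respondent framing and the exact normalization below are assumptions for illustration only, not the paper's published definitions:

```python
def confidence_metrics(answers):
    """Toy knowledge/confidence scores for one survey respondent.

    `answers` is a list of "correct", "incorrect" or "dk" labels, one per
    question. Knowledge is the fraction of correct answers; confidence is
    the fraction of non-correct answers that were wrong guesses rather
    than 'don't know' -- i.e. the tendency to answer despite not knowing.
    (Illustrative definitions, not the paper's exact ones.)
    """
    n = len(answers)
    knowledge = answers.count("correct") / n
    incorrect = answers.count("incorrect")
    non_correct = incorrect + answers.count("dk")
    # With no non-correct answers there is nothing to misjudge; report 0.
    confidence = incorrect / non_correct if non_correct else 0.0
    return knowledge, confidence

# Hypothetical respondent: 6/10 correct, and 3 of the 4 misses were wrong
# guesses rather than 'don't know' -> intermediate knowledge, high confidence.
knowledge, confidence = confidence_metrics(
    ["correct"] * 6 + ["incorrect"] * 3 + ["dk"]
)
```

Aggregating scores of this kind across knowledge bins is what would let a nonlinear knowledge-confidence relationship, like the one reported here, become visible.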

Fig. 1: Possible relationships between confidence and knowledge.
Fig. 2: Overconfidence is higher at intermediate-knowledge levels.
Fig. 3: Nonlinear relationship between confidence and knowledge is masked by binning.
Fig. 4: Negative attitudes toward science peak at intermediate-knowledge levels.

Data availability

Surveys EB, Pew and GSS are publicly available and data and details can be found in refs. 27,28,29, respectively. The Fernbach study was published in ref. 9 and the authors made the data available. Lackner survey data are available at:

Code availability

All code used for the analysis is available at:

References


  1. Plous, S. The Psychology of Judgment and Decision Making (McGraw-Hill, 1993).

  2. Lichtenstein, S. & Fischhoff, B. Do those who know more also know more about how much they know? Organ. Behav. Hum. Perf. 20, 159–183 (1977).

  3. Johnson, D. D. P. & Levin, S. A. The tragedy of cognition: psychological biases and environmental inaction. Curr. Sci. 97, 1593–1603 (2009).

  4. Tuchman, B. W. The March of Folly: From Troy to Vietnam (Alfred A. Knopf, 1984).

  5. Johnson, D. D. P. & Tierney, D. R. The Rubicon theory of war: how the path to conflict reaches the point of no return. Int. Secur. 36, 7–40 (2011).

  6. Akerlof, G. A. & Shiller, R. J. Animal Spirits: How Human Psychology Drives the Economy and Why it Matters for Global Capitalism (Princeton Univ. Press, 2009).

  7. Kruger, J. & Dunning, D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J. Pers. Soc. Psychol. 77, 1121–1134 (1999).

  8. Motta, M., Callaghan, T. & Sylvester, S. Knowing less but presuming more: Dunning–Kruger effects and the endorsement of anti-vaccine policy attitudes. Soc. Sci. Med. 211, 274–281 (2018).

  9. Fernbach, P. M., Light, N., Scott, S. E., Inbar, Y. & Rozin, P. Extreme opponents of genetically modified foods know the least but think they know the most. Nat. Hum. Behav. 3, 251–256 (2019).

  10. Al-Harthy, I. S., Was, C. A. & Hassan, A. S. Poor performers are poor predictors of performance and they know it: can they improve their prediction accuracy. J. Glob. Res. Educ. Soc. Sci. 4, 93–100 (2015).

  11. Händel, M. & Fritzsche, E. S. Unskilled but subjectively aware: metacognitive monitoring ability and respective awareness in low-performing students. Mem. Cognit. 44, 229–241 (2016).

  12. Miller, T. M. & Geraci, L. Unskilled but aware: reinterpreting overconfidence in low-performing students. J. Exp. Psychol. Learn. 37, 502–506 (2011).

  13. Urban, M. & Urban, K. Unskilled but aware of it? Cluster analysis of creative metacognition from preschool age to early adulthood. J. Creat. Behav. 55, 937–945 (2021).

  14. Sanchez, C. & Dunning, D. Cultural patterns explain the worldwide perception/performance paradox in student self-assessments of math and science skill. Soc. Psychol. Pers. Sci. 10, 935–945 (2019).

  15. McIntosh, R. D., Fowler, E. A., Lyu, T. & Della Sala, S. Wise up: clarifying the role of metacognition in the Dunning–Kruger effect. J. Exp. Psychol. Gen. 148, 1882–1897 (2019).

  16. Gignac, G. E. & Zajenkowski, M. The Dunning–Kruger effect is (mostly) a statistical artefact: valid approaches to testing the hypothesis with individual differences data. Intelligence 80, 101449 (2020).

  17. Jansen, R. A., Rafferty, A. N. & Griffiths, T. L. A rational model of the Dunning–Kruger effect supports insensitivity to evidence in low performers. Nat. Hum. Behav. 5, 756–763 (2021).

  18. Nederhof, A. J. Methods of coping with social desirability bias: a review. Eur. J. Soc. Psychol. 15, 263–280 (1985).

  19. Larson, R. B. Controlling social desirability bias. Int. J. Mark. Res. 61, 534–547 (2019).

  20. Cole, J. S. & Gonyea, R. M. Accuracy of self-reported SAT and ACT test scores: implications for research. Res. High. Educ. 51, 305–319 (2010).

  21. Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L. & Banaji, M. R. Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. J. Pers. Soc. Psychol. 97, 17–41 (2009).

  22. Bishop, G. F., Tuchfarber, A. J. & Oldendick, R. W. Opinions on fictitious issues: the pressure to answer survey questions. Public Opin. Quart. 50, 240–250 (1986).

  23. Sedikides, C. & Alicke, M. D. in The Oxford Handbook of Human Motivation (ed. Ryan, R. M.) Ch. 17 (Oxford Univ. Press, 2012).

  24. De Neys, W. Bias and conflict: a case for logical intuitions. Perspect. Psychol. Sci. 7, 28–38 (2012).

  25. Williams, E. F., Dunning, D. & Kruger, J. The hobgoblin of consistency: algorithmic judgment strategies underlie inflated self-assessments of performance. J. Pers. Soc. Psychol. 104, 976–994 (2013).

  26. Light, N., Fernbach, P. M., Rabb, N., Geana, M. V. & Sloman, S. A. Knowledge overconfidence is associated with anti-consensus views on controversial scientific issues. Sci. Adv. 8, eabo0038 (2022).

  27. Bauer, M. W., Shukla, R. & Kakkar, P. Public Understanding of Science in Europe 1989–2005—A Eurobarometer Trend File (GESIS, 2012);

  28. Smith, T. W., Davern, M., Freese, J. & Morgan, S. L. General Social Surveys, 1972–2018 (NORC, 2019);

  29. Funk, C., Kennedy, B., Johnson, C., Hefferon, M. & Thigpen, C. L. American Trends Panel Wave 42 (Pew Research Center, 2019);

  30. Maki, R. H. in Metacognition in Educational Theory and Practice (eds Hacker, D. J. et al.) Ch. 6 (Routledge, 1998).

  31. Bauer, M. Socio-demographic correlates of DK-responses in knowledge surveys: self-attributed ignorance of science. Soc. Sci. Inform. 35, 39–68 (1996).

  32. Hamilton, L. C. Self-assessed understanding of climate change. Clim. Change 151, 349–362 (2018).

  33. Durant, J. R., Evans, G. A. & Thomas, G. P. The public understanding of science. Nature 340, 11–14 (1989).

  34. Bauer, M. W., Allum, N. & Miller, S. What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda. Public Understand. Sci. 16, 79–95 (2007).

  35. Wynne, B. Knowledges in context. Sci. Technol. Hum. Values 16, 111–121 (1991).

  36. Evans, G. & Durant, J. The relationship between knowledge and attitudes in the public understanding of science in Britain. Public Understand. Sci. 4, 57–74 (1995).

  37. Pardo, R. & Calvo, F. Attitudes toward science among the European public: a methodological analysis. Public Understand. Sci. 11, 155–195 (2002).

  38. Hamilton, L. C. Education, politics and opinions about climate change evidence for interaction effects. Clim. Change 104, 231–242 (2011).

  39. McCright, A. M. Political orientation moderates Americans’ beliefs and concern about climate change. Clim. Change 104, 243–253 (2011).

  40. Drummond, C. & Fischhoff, B. Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proc. Natl Acad. Sci. USA 114, 9587–9592 (2017).

  41. Oskamp, S. Overconfidence in case-study judgments. J. Consult. Psychol. 29, 261–265 (1965).

  42. Sanchez, C. & Dunning, D. Overconfidence among beginners: is a little learning a dangerous thing? J. Pers. Soc. Psychol. 114, 10–28 (2018).

  43. Quattrociocchi, W., Scala, A. & Sunstein, C. R. Echo Chambers on Facebook (SSRN, 2016);

  44. Science and Engineering Indicators 2014 Ch. 7 (National Science Foundation, 2014).

  45. Iyengar, S. & Krupenkin, M. The strengthening of partisan affect. Polit. Psychol. 39, 201–218 (2018).

  46. Klein, E. Why We’re Polarized (Profile Books, 2020).

  47. Bauer, M. in Handbook of Public Communication of Science and Technology (eds Bucchi, M. & Trench, B.) Ch. 8 (Routledge, 2008).

  48. Nickerson, R. S. Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220 (1998).

  49. Fernbach, P. M., Rogers, T., Fox, C. R. & Sloman, S. A. Political extremism is supported by an illusion of understanding. Psychol. Sci. 24, 939–946 (2013).

  50. Meyers, E. A., Turpin, M. H., Białek, M., Fugelsang, J. A. & Koehler, D. J. Inducing feelings of ignorance makes people more receptive to expert (economist) opinion. Judgm. Decis. Mak. 15, 909–925 (2020).

  51. Rozenblit, L. & Keil, F. The misunderstood limits of folk science: an illusion of explanatory depth. Cogn. Sci. 26, 521–562 (2002).

  52. Walters, D. J., Fernbach, P. M., Fox, C. R. & Sloman, S. A. Known unknowns: a critical determinant of confidence and calibration. Manag. Sci. 63, 4298–4307 (2017).

  53. Campbell, T. H. & Kay, A. C. Solution aversion: on the relation between ideology and motivated disbelief. J. Pers. Soc. Psychol. 107, 809–824 (2014).

  54. Feinberg, M. & Willer, R. The moral roots of environmental attitudes. Psychol. Sci. 24, 56–62 (2013).

  55. Feinberg, M. & Willer, R. From gulf to bridge: when do moral arguments facilitate political influence? Pers. Soc. Psychol. B. 41, 1665–1681 (2015).

  56. Hornsey, M. J. Why facts are not enough: understanding and managing the motivated rejection of science. Curr. Dir. Psychol. Sci. 29, 583–591 (2020).

  57. Campbell, W. K., Goodie, A. S. & Foster, J. D. Narcissism, confidence and risk attitude. J. Behav. Decis. Mak. 17, 297–311 (2004).

  58. Littrell, S., Fugelsang, J. & Risko, E. F. Overconfidently underthinking: narcissism negatively predicts cognitive reflection. Think. Reason. 26, 352–380 (2020).

  59. Macenczak, L. A., Campbell, S., Henley, A. B. & Campbell, W. K. Direct and interactive effects of narcissism and power on overconfidence. Pers. Indiv. Differ. 91, 113–122 (2016).

  60. Sukenik, S., Reizer, A. & Koslovsky, M. Direct and indirect effects of agreeableness on overconfidence. J. Individ. Differ. 39, 174–181 (2018).

  61. Rodríguez-Ferreiro, J. & Barberia, I. Believers in pseudoscience present lower evidential criteria. Sci. Rep. 11, 24352 (2021).

  62. Sanchez, C. & Dunning, D. Jumping to conclusions: implications for reasoning errors, false belief, knowledge corruption and impeded learning. J. Pers. Soc. Psychol. 120, 789–815 (2021).

  63. Kuhn, S. A. K., Lieb, R., Freeman, D., Andreou, C. & Zander-Schellenberg, T. Coronavirus conspiracy beliefs in the German-speaking general population: endorsement rates and links to reasoning biases and paranoia. Psychol. Med. 52, 4162–4176 (2022).

  64. Pytlik, N., Soll, D. & Mehl, S. Thinking preferences and conspiracy belief: intuitive thinking and the jumping to conclusions-bias as a basis for the belief in conspiracy theories. Front. Psychiatry 11, 568942 (2020).

  65. Beyer, S. Gender differences in the accuracy of self-evaluations of performance. J. Pers. Soc. Psychol. 59, 960–970 (1990).

  66. Hill, P. W. H. et al. Science possible selves and the desire to be a scientist: mindsets, gender bias and confidence during early adolescence. Soc. Sci. 6, 55 (2017).

  67. Johns, R. Likert Items and Scales Survey Question Bank: Methods Fact Sheet 1 (UK Data Service, 2010).

Acknowledgements


We thank C. Souto-Mayor, M. West and J. Nolasco for initial extraction and analysis of the EB dataset, members of the SPAC group for valuable discussions, Fernbach et al.9 for making their survey data available and M. Bauer, T. Paixão, M. Entradas and J. Lobo Antunes for critical reading of the manuscript. We also thank L. Hamilton for independently testing the robustness of our metric and confirming some of our findings. This project was partially funded by Welcome DFRH WIIA 60 2011 and ERC-Starting Grant FARE-853566, both to J.G.S. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

Authors and Affiliations



J.G.S. conceived of this work. All authors contributed to the methodology. S.L., F.F., C.M. and J.G.S. were involved in investigation. A.M. and J.G.S. undertook supervision. All authors wrote the manuscript.

Corresponding author

Correspondence to Joana Gonçalves-Sá.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Ian Brunton-Smith, Lawrence Hamilton and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Model comparison.

Different expectations for the proportions of correct (yellow), incorrect (purple) and ‘Don’t Know’ (green) answers per knowledge bin (a, c, e, g, i), or for the proportions of incorrect (purple) and ‘Don’t Know’ (green) answers within non-correct answers only, per knowledge bin (b, d, f, h, j), depending on the assumed relationship between confidence and knowledge (k). Perfect metacognition (a, b; yellow solid line in k) expects all non-correct answers to be of the ‘Don’t Know’ type. Random answering (c, d; dotted blue line in k) expects a constant, even proportion of ‘Don’t Know’ and incorrect answers regardless of knowledge bin. If overconfidence decreases with knowledge (e, f; green lines in k), the proportion of incorrect answers should decrease as knowledge increases. If overconfidence increases with knowledge (i, j; solid purple line in k), the proportion of incorrect answers should increase as knowledge increases. If respondents ‘guess’ only when they do not know the answer, the distribution of incorrect answers may vary with baseline knowledge, and the fraction of incorrect answers should grow nonlinearly with knowledge (g, h; large-dash grey line in k).

Extended Data Fig. 2 Knowledge distributions.

Knowledge distributions for EB (a), GSS (b), Pew (c), Lackner (d) and Fernbach (e). Absolute frequencies for the first bin in each dataset were: 1179, 107, 165, 2, 42, respectively. Absolute frequencies for the last bin in each dataset were: 2753, 556, 685, 64, 48, respectively.

Extended Data Fig. 3 Alternative calibration models.

Alternative representations of calibration errors, with different null models. The left axis shows answer proportions: green bars represent the observed proportion of ‘Don’t Know’ answers per knowledge bin, and purple bars the observed proportion of incorrect answers per knowledge bin, out of all non-correct answers, for EB (a–d), GSS (e–h), Pew (i–l) and Lackner (m–p). In all plots, solid lines show the expected proportion of incorrect answers (null model) and dashed lines show the calibration error, calculated as the difference between the observed proportion and the corresponding null. Note that, because different null models allow for different expectations, the right axis can vary between 0 and 1 or between −1 and 1. In (a, e, i, m), the null model is the perfect metacognitive model (yellow lines), in which any incorrect answer represents a calibration error. In (b, f, j, n), the null model is random guessing (blue lines), such that an equal proportion of incorrect and ‘Don’t Know’ answers is expected regardless of knowledge level. In (c, g, k, o), the null model expects confidence to increase in tandem with knowledge (purple lines). In (d, h, l, p), the null model is the result of the simulations with 25% guessers (dark-grey lines).
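The calibration errors in this figure are per-bin differences between observed incorrect-answer proportions and a null model's expectation. A minimal sketch, using made-up observed values and simplified stand-ins for two of the null models:

```python
import numpy as np

def calibration_error(observed_incorrect, null_expected):
    """Per-bin calibration error: observed proportion of incorrect answers
    (out of non-correct answers) minus the null model's expectation."""
    return np.asarray(observed_incorrect) - np.asarray(null_expected)

n_bins = 5
observed = np.array([0.4, 0.7, 0.8, 0.7, 0.5])  # hypothetical data, not survey values

# Perfect metacognition expects no incorrect answers in any bin, so every
# observed incorrect answer counts as calibration error.
err_metacognition = calibration_error(observed, np.zeros(n_bins))

# Random guessing expects an even split between incorrect and 'Don't Know'
# answers regardless of knowledge level.
err_random = calibration_error(observed, np.full(n_bins, 0.5))
```

Under the random-guessing null the error can be negative (fewer wrong guesses than expected), which is why the right axis in some panels spans −1 to 1.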

Extended Data Fig. 4 Demographic analyses for the EB and Lackner surveys.

(a–d) EB data; (e–g) Lackner data. (a, e) Box plots show the fraction of female (orange) and male (blue) respondents who never say ‘Don’t know’ across (a) 31 territories or (e) 3 countries. In both datasets, normality was rejected using the normaltest function of scipy’s stats module (α = 0.001), so groups were compared with a two-tailed Mann-Whitney U test. Three black asterisks indicate statistical significance with p < 0.001 in (a); in (e), no significant difference was found. (b, f) Box plots show the fraction of respondents in different age-group bins who never say ‘Don’t know’ across (b) all 31 territories or (f) the 3 Lackner-surveyed countries. The diamond indicates an outlier (values in the panel). A two-tailed Kruskal-Wallis H-test with post hoc Tukey’s tests found all pairwise comparisons significant except (b) 25–39 vs. 40–49 and 40–49 vs. 55+, with no evidence of significance in (f) (p = 0.042). (c, g) Box plots show the fractions of respondents, binned by age at time of completing their education, who never say ‘Don’t know’ across (c) all 31 territories or (g) the 3 Lackner-surveyed countries. Diamonds mark outliers (values in the panel). A two-tailed Kruskal-Wallis H-test with post hoc Tukey’s tests found all pairwise comparisons significant except ‘Up to 15’ vs. ‘Still studying’ and ‘16–19’ vs. ‘20+’ in (c), with no evidence of significance in (g) (p = 0.036). (d) Scatter plot shows, for each territory, the fraction of respondents who never say ‘Don’t know’, sorted by latitude of the territory. The black line shows the linear regression, with low correlation (R2 = 0.21). (h) Table with values for all whiskers (low, 3rd column; high, 7th column) and quartiles (Q1, median and Q3).
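The testing pipeline in this caption (normality check, then a non-parametric group comparison) can be sketched with SciPy. The group values below are randomly generated stand-ins for per-territory fractions, not survey results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-territory fractions of respondents who never say
# 'Don't know', for two demographic groups (synthetic, for illustration).
group_a = rng.beta(2, 5, size=31)
group_b = rng.beta(3, 5, size=31)

# D'Agostino-Pearson normality test; rejection at alpha = 0.001 motivates
# a non-parametric comparison rather than a t-test.
_, p_norm_a = stats.normaltest(group_a)
_, p_norm_b = stats.normaltest(group_b)

# Two-tailed Mann-Whitney U test comparing the two groups' distributions.
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
```

With real bounded fractions like these, normality is often rejected, which is consistent with the caption's use of rank-based tests.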

Extended Data Fig. 5 Answer distributions to the ‘How Informed’ questions and calibration errors.

(a, c, e, g, i) Stacked bar plots showing fraction of respondents who answered ‘Poorly’ (yellow), ‘Moderately well’ (light blue) and ‘Very well’ (dark green) when questioned how informed they were about (a) new inventions and technologies, (c) new medical discoveries, (e) new scientific discoveries, (g) politics and (i) sports news, per knowledge level. In all panels, black solid lines with squares indicate mean fraction of respondents who answered ‘Moderately well’ or ‘Very well’ per quartile, while solid grey line shows average knowledge rank per quartile. (b, d, f, h, j) Plot showing the difference between average fraction of respondents who answered ‘Moderately well’ or ‘Very well’ per quartile and average knowledge rank per quartile, each represented by a circle marking the average and a vertical line marking the variation in average between bins of the same quartile.

Extended Data Fig. 6 EB attitudinal data.

(a, c, e, g, i, k, m) show stacked bar plots with fractions of Agree (orange), Neutral (yellow) and Disagree (red) answers in response to 7 EB attitude questions. Order of stacked bars is inverted in (e, k) as, in those two items, a negative attitude could be revealed by the Agree answer, while the reverse might be true for (a, g, m). (c) and (i) show a more nuanced response. Figures in (b, d, f, h, j, l, n) show the mean fractions across 34 EU territories with standard error of the mean.

Extended Data Table 1 Knowledge questions
Extended Data Table 2 Attitude questions

Supplementary information

Supplementary Information

Supplementary Methods, Tables 1–5, Figs. 1–9 and Annex.

Reporting Summary

Peer Review File

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lackner, S., Francisco, F., Mendonça, C. et al. Intermediate levels of scientific knowledge are associated with overconfidence and negative attitudes towards science. Nat Hum Behav 7, 1490–1501 (2023).
