Abstract
Humans can meaningfully express their confidence about uncertain events. Normatively, these beliefs should correspond to Bayesian probabilities. However, it is unclear whether the normative theory provides an accurate description of the human sense of confidence, partly because the self-report measures used in most studies hinder quantitative comparison with normative predictions. To measure confidence objectively, we developed a dual-decision task in which the correctness of a first decision determines the correct answer of a second decision, thus mimicking real-life situations in which confidence guides future choices. While participants were able to use confidence to improve performance, they fell short of the ideal Bayesian strategy. Instead, behaviour was better explained by a model with a few discrete confidence levels. These findings question the descriptive validity of normative accounts, and suggest that confidence judgments might be based on point estimates of the relevant variables, rather than on their full probability distributions.
Data availability
The data that support the findings of this study are available at https://osf.io/w74cn/.
Code availability
The code for models and analyses that support the findings of this study is available at https://osf.io/w74cn/.
Acknowledgements
This work was partially supported by funding from the French National Research Agency (grant ANR-12-BSH2-0005 to A.G.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. We thank J. A. Solomon and M. J. Morgan for providing facilities. This paper is dedicated to the memory of Andrei Gorea, whose creative, untameable and questioning mind inspired this project and made it possible.
Author information
Contributions
M.L., A.G. and G. Mongillo conceived the study. M.L. and G. Mongillo developed the computational models. M.L. programmed the experiments. M.L. and G. Milne collected the data. M.L. analysed the data. M.L., G. Mongillo and T.D. interpreted the results. M.L. wrote the paper with feedback from all authors.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information: Primary Handling Editor, Marike Schiffer.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 Differences between observed and predicted choice probabilities.
These plots show, for each participant, the difference between the observed and predicted probability of choosing ‘right’ in the second decision, expressed as standardized Pearson residuals. Positive values indicate that the model underestimates the probability; negative values indicate overestimation. Panel a shows the average residuals after a wrong first decision, and panel b after a correct first decision. Blue bars represent the discrete model (with two confidence levels) and orange bars the biased-Bayesian model. Error bands represent bootstrapped standard errors. To facilitate interpretation, blue dots at the bottom denote participants for whom the average residuals of the discrete model are smaller than those of the biased-Bayesian model, indicating that the discrete model made predictions that were on average closer to the observed probabilities. Note that the individual data display the same pattern seen in the group data (Fig. 2, main text): the biased-Bayesian model provides a poorer fit because it more severely underestimates the probability of choosing ‘right’ after a wrong response.
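A standardized Pearson residual of the kind plotted here compares an observed binomial count with the count predicted by a model, scaled by the binomial standard deviation. A minimal sketch (the counts and predicted probability below are illustrative, not taken from the study's data):

```python
import numpy as np

def pearson_residual(observed_count, n_trials, p_predicted):
    """Standardized Pearson residual for a binomial outcome:
    (observed - expected) / binomial standard deviation."""
    expected = n_trials * p_predicted
    sd = np.sqrt(n_trials * p_predicted * (1.0 - p_predicted))
    return (observed_count - expected) / sd

# e.g. 42 'right' choices in 60 trials where the model predicts p = 0.55:
r = pearson_residual(42, 60, 0.55)
# r is positive here, meaning the model underestimates
# the probability of choosing 'right'
```

Averaging such residuals across trial bins, as in the panels above, shows where a model systematically over- or underestimates choice probabilities.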
Extended Data Fig. 2 Individual AIC values.
AIC differences (alternative model minus discrete model) are plotted for each participant. In all panels, positive values (coloured in blue) indicate that the discrete model with two confidence levels provides a better fit to the data.
Extended Data Fig. 3 Model recovery analysis.
To ensure that the models were distinguishable, we performed a model recovery analysis for our three main computational models: ideal Bayesian, biased-Bayesian and discrete. We generated synthetic data (20 simulated observers, 600 trials each), with parameters randomly sampled from the (multivariate) Gaussian distribution of the parameters fitted to our empirical data. Each panel corresponds to a different generative model, while the lines show the mean and standard error of the models fitted to the synthetic dataset. In each case, the model with the highest relative likelihood (computed by transforming the AIC onto a likelihood scale) is the one that generated the data, indicating that the models were correctly recovered and confirming that they are distinguishable.
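The transformation of AIC values onto a likelihood scale can be sketched as follows (this computes Akaike weights from AIC differences; the AIC values below are illustrative, not the study's actual fits):

```python
import numpy as np

def akaike_weights(aic):
    """Transform a set of AIC values onto a relative-likelihood scale
    (Akaike weights): exp(-0.5 * delta_AIC), normalized to sum to 1."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()          # AIC difference from the best model
    rel_lik = np.exp(-0.5 * delta)   # relative likelihood of each model
    return rel_lik / rel_lik.sum()   # normalized weights

# e.g. hypothetical AICs for the ideal-Bayesian, biased-Bayesian
# and discrete models fitted to one synthetic dataset:
w = akaike_weights([812.4, 805.1, 798.3])
# the model with the lowest AIC (here the third) receives
# the highest weight
```

In a recovery analysis, the generative model is considered recovered when it receives the highest weight for the data it generated.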
Supplementary information
Supplementary Information
Supplementary Methods, Supplementary Results and Supplementary Figs. 1–3.
Cite this article
Lisi, M., Mongillo, G., Milne, G. et al. Discrete confidence levels revealed by sequential decisions. Nat Hum Behav 5, 273–280 (2021). https://doi.org/10.1038/s41562-020-00953-1
This article is cited by
- Prior information differentially affects discrimination decisions and subjective confidence reports. Nature Communications (2023)
- Confidence guides priority between forthcoming tasks. Scientific Reports (2021)