Discrete confidence levels revealed by sequential decisions

Abstract

Humans can meaningfully express their confidence about uncertain events. Normatively, these beliefs should correspond to Bayesian probabilities. However, it is unclear whether the normative theory provides an accurate description of the human sense of confidence, partly because the self-report measures used in most studies hinder quantitative comparison with normative predictions. To measure confidence objectively, we developed a dual-decision task in which the correctness of a first decision determines the correct answer of a second decision, thus mimicking real-life situations in which confidence guides future choices. While participants were able to use confidence to improve performance, they fell short of the ideal Bayesian strategy. Instead, behaviour was better explained by a model with a few discrete confidence levels. These findings question the descriptive validity of normative accounts, and suggest that confidence judgments might be based on point estimates of the relevant variables, rather than on their full probability distributions.
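The logic of the paradigm and of the two strategies contrasted in the abstract can be caricatured in a few lines of code. The sketch below assumes a simple signal-detection setup (signal means at ±1, unit noise, an arbitrary two-level quantization threshold); none of these values or names come from the paper, and the task structure is deliberately schematic. It only illustrates the key idea: confidence in the first decision acts as the prior for the second decision, and a discrete strategy replaces the continuous Bayesian posterior with a few fixed levels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters -- not the values used in the actual experiment
SIGMA1 = 1.0             # internal noise, first decision (signal means at +/-1)
MU2, SIGMA2 = 1.0, 1.0   # second-stimulus mean magnitude and noise

def simulate(n_trials=2000, strategy="bayes", levels=(0.6, 0.9), thresh=1.0):
    """Fraction of correct second decisions under a given confidence strategy."""
    n_correct2 = 0
    for _ in range(n_trials):
        # first decision: left/right judgement on a noisy signal
        s1 = rng.choice([-1.0, 1.0])
        x1 = s1 + rng.normal(0.0, SIGMA1)
        correct1 = (np.sign(x1) == s1)
        # Bayesian confidence: posterior probability that choice 1 was correct
        conf = 1.0 / (1.0 + np.exp(-2.0 * abs(x1) / SIGMA1**2))
        if strategy == "discrete":
            # quantize confidence into two fixed levels (hypothetical values)
            conf = levels[1] if abs(x1) > thresh else levels[0]
        # second stimulus: its mean encodes whether decision 1 was correct
        x2 = (MU2 if correct1 else -MU2) + rng.normal(0.0, SIGMA2)
        # ideal rule: combine the sensory log-likelihood ratio with confidence,
        # which plays the role of the log prior odds for the second decision
        llr = 2.0 * MU2 * x2 / SIGMA2**2
        report_correct1 = (llr + np.log(conf / (1.0 - conf))) > 0
        n_correct2 += (report_correct1 == correct1)
    return n_correct2 / n_trials
```

Under this toy setup, both strategies perform well above chance, and the gap between them shrinks as the discrete levels approximate the continuous posterior, which is why distinguishing the models requires the fine-grained choice-probability analyses reported in the paper.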


Fig. 1: Dual-decision paradigm and results.
Fig. 2: Ideal Bayesian model and comparison with human behaviour.
Fig. 3: Alternative sub-optimal models.

Data availability

The data that support the findings of this study are available at https://osf.io/w74cn/.

Code availability

The code for models and analyses that support the findings of this study is available at https://osf.io/w74cn/.


Acknowledgements

This work was partially supported by funding from the French National Research Agency (grant ANR-12-BSH2-0005 to A.G.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. We thank J. A. Solomon and M. J. Morgan for providing facilities. This paper is dedicated to the memory of Andrei Gorea, whose creative, untameable and questioning mind inspired this project and made it possible.

Author information

Affiliations

Authors

Contributions

M.L., A.G. and G. Mongillo conceived the study. M.L. and G. Mongillo developed the computational models. M.L. programmed the experiments. M.L. and G. Milne collected the data. M.L. analysed the data. M.L., G. Mongillo and T.D. interpreted the results. M.L. wrote the paper with feedback from all authors.

Corresponding author

Correspondence to Matteo Lisi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Primary Handling Editor: Marike Schiffer

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Differences between observed and predicted choice probabilities.

These plots show, for each participant, the difference between the observed and predicted probability of choosing ‘right’ in the second decision, expressed as standardized Pearson residuals. Positive values indicate that the model underestimates the probability; negative values indicate overestimation. Panel a shows the average residuals after a wrong first decision and panel b after a correct first decision. Blue bars represent the discrete model (with two confidence levels) and orange bars represent the biased-Bayesian model. Error bands represent bootstrapped standard errors. To facilitate interpretation, blue dots at the bottom denote participants for whom the average residuals of the discrete model are smaller than those of the biased-Bayesian model, indicating that the discrete model's predictions were on average closer to the observed probabilities. Note that the individual data display the same pattern seen in the group data (Fig. 2, main text): the biased-Bayesian model provides a poorer fit because it more severely underestimates the probability of choosing ‘right’ after a wrong response.
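For a binomial choice variable, the standardized Pearson residual used in these plots can be computed as below. This is a generic sketch; the function and variable names are ours, not taken from the analysis code.

```python
import numpy as np

def pearson_residual(k, n, p_hat):
    """Standardized Pearson residual for a binomial observation.

    k: observed count of 'right' choices out of n trials;
    p_hat: model-predicted probability of choosing 'right'.
    Positive values mean the model underestimates that probability.
    """
    return (k - n * p_hat) / np.sqrt(n * p_hat * (1.0 - p_hat))
```

For example, observing 8 'right' choices in 10 trials against a predicted probability of 0.5 yields a positive residual of about 1.9, flagging underestimation by the model.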

Extended Data Fig. 2 Individual AIC values.

AIC differences (alternative model minus discrete model) are plotted for each participant. In all panels, positive values (coloured blue) indicate that the discrete model with two confidence levels provides a better fit to the data.
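The comparison behind these plots is a standard AIC difference, AIC = 2k − 2 log L, where k is the number of free parameters and log L the maximized log-likelihood. The numbers below are purely illustrative, not fitted values from the study.

```python
def aic(log_lik, n_params):
    """Akaike information criterion; lower values indicate a better fit."""
    return 2.0 * n_params - 2.0 * log_lik

# Hypothetical fits for one participant (illustrative numbers only):
# the alternative model reaches a higher raw likelihood, but pays an
# AIC penalty for its extra parameters.
aic_discrete = aic(log_lik=-350.0, n_params=4)
aic_alt = aic(log_lik=-348.0, n_params=9)
delta = aic_alt - aic_discrete   # > 0 => the discrete model is preferred
```

This is why a simpler model can win the comparison even when a more flexible one fits the raw data slightly better.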

Extended Data Fig. 3 Model recovery analysis.

To ensure that the models were distinguishable, we performed a model recovery analysis for our three main computational models: ideal Bayesian, biased-Bayesian and discrete. We generated synthetic data (20 simulated observers, 600 trials each) with parameters randomly sampled from the (multivariate) Gaussian distribution of the parameters fitted to our empirical data. Each panel corresponds to a different generative model, and the lines represent the mean and standard error of the models fitted to the synthetic dataset. In each case the model with the highest relative likelihood (computed by transforming the AIC onto a likelihood scale) is the one that generated the data, indicating that each model was correctly recovered and confirming that the models are distinguishable.
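The transformation of AIC values onto a likelihood scale mentioned in this caption is the usual relative-likelihood transform, exp((AIC_min − AIC_i)/2). A minimal sketch, with illustrative AIC values only:

```python
import numpy as np

def relative_likelihoods(aics):
    """Map AIC values onto a likelihood scale: exp((AIC_min - AIC_i) / 2).

    The best-fitting model (lowest AIC) gets 1; every other model gets a
    value in (0, 1) that shrinks exponentially with its AIC difference.
    """
    aics = np.asarray(aics, dtype=float)
    return np.exp((aics.min() - aics) / 2.0)

# e.g. three candidate models for one synthetic dataset (made-up AICs)
rl = relative_likelihoods([708.0, 714.0, 720.0])
```

In a recovery analysis, the check is simply that the generating model ends up with the highest relative likelihood in its own panel.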

Supplementary information

Supplementary Information

Supplementary Methods, Supplementary Results and Supplementary Figs. 1–3.

Reporting Summary


About this article

Cite this article

Lisi, M., Mongillo, G., Milne, G. et al. Discrete confidence levels revealed by sequential decisions. Nat Hum Behav (2020). https://doi.org/10.1038/s41562-020-00953-1
