Article

Rational quantitative attribution of beliefs, desires and percepts in human mentalizing

Nature Human Behaviour volume 1, Article number: 0064 (2017)

Abstract

Social cognition depends on our capacity for ‘mentalizing’, or explaining an agent’s behaviour in terms of their mental states. The development and neural substrates of mentalizing are well studied, but its computational basis is only beginning to be probed. Here we present a model of core mentalizing computations: jointly inferring an actor’s beliefs, desires and percepts from how they move in the local spatial environment. Our Bayesian theory of mind (BToM) model is based on probabilistically inverting artificial-intelligence approaches to rational planning and state estimation, which extend classical expected-utility agent models to sequential actions in complex, partially observable domains. The model accurately captures the quantitative mental-state judgements of human participants in two experiments, each varying multiple stimulus dimensions across a large number of stimuli. Comparative model fits with both simpler ‘lesioned’ BToM models and a family of non-mentalistic motion features reveal the value contributed by each component of our model.
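The core computation described above — inverting a generative model of approximately rational action to recover hidden mental states — can be illustrated with a deliberately tiny sketch. This is not the paper’s POMDP-based BToM model; it is a hypothetical one-dimensional world with two candidate goals, a softmax-rational (noisily optimal) agent, and Bayes’ rule over observed moves. All names, goal positions and the rationality parameter are illustrative assumptions.

```python
import math

# Hypothetical toy setup: an agent on a 1-D line, desiring one of two goals.
GOALS = {"left": 0, "right": 4}   # candidate goal positions (assumed)
ACTIONS = (-1, +1)                # step left or step right
BETA = 2.0                        # rationality: higher = more deterministic

def action_likelihood(pos, action, goal, beta=BETA):
    """P(action | position, goal) under a softmax over negative distance-to-goal."""
    def value(a):
        return -abs((pos + a) - goal)  # actions that approach the goal score higher
    num = math.exp(beta * value(action))
    den = sum(math.exp(beta * value(a)) for a in ACTIONS)
    return num / den

def posterior_over_goals(start, actions, prior=None):
    """P(goal | observed trajectory) by Bayes' rule: prior times action likelihoods."""
    prior = prior or {name: 1.0 / len(GOALS) for name in GOALS}
    scores = {}
    for name, goal in GOALS.items():
        pos, score = start, prior[name]
        for a in actions:            # accumulate likelihood of each observed step
            score *= action_likelihood(pos, a, goal)
            pos += a
        scores[name] = score
    z = sum(scores.values())         # normalize to a proper posterior
    return {name: s / z for name, s in scores.items()}

# Two rightward steps from the midpoint strongly implicate the right-hand goal.
post = posterior_over_goals(start=2, actions=[+1, +1])
```

The same inversion logic scales up, in the full model, to partially observable worlds where beliefs and percepts must be inferred alongside desires.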



Acknowledgements

This work was supported by the Center for Brains, Minds & Machines (CBMM), under NSF STC award CCF-1231216; by NSF grant IIS-1227495 and by DARPA grant IIS-1227504. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

Affiliations

  1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139, USA

    • Chris L. Baker
    • , Julian Jara-Ettinger
    • , Rebecca Saxe
    •  & Joshua B. Tenenbaum


Contributions

C.L.B., R.S. and J.B.T. designed Experiment 1. C.L.B. ran Experiment 1, implemented the models and performed the analyses of Experiment 1. J.J.-E., C.L.B. and J.B.T. designed Experiment 2. J.J.-E. and C.L.B. ran Experiment 2, implemented the models and performed the analyses of Experiment 2. C.L.B. and J.B.T. wrote the manuscript.

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Joshua B. Tenenbaum.

Supplementary information

Supplementary Information (PDF)

Supplementary Methods, Supplementary Figures, Supplementary References.

About this article

DOI

https://doi.org/10.1038/s41562-017-0064
