Bayesian brain theories suggest that perception, action and cognition arise as animals minimise the mismatch between their expectations and reality. This principle could unify cognitive science with the broader natural sciences, but leave key elements of cognition and behaviour unexplained.
In everyday life, we tend to explain the behaviour of ourselves and other creatures in terms of beliefs and desires. For example, we might say that a rat pulls a lever or a scientist runs an experiment because they believe that certain outcomes will ensue (e.g. a piece of food or a piece of data) and because these are outcomes they desire (e.g. because they are hungry or curious).
The idea that action is motivated by belief-like and desire-like representations—respectively defining which states of the world are most probable and most valuable (Box 1)—is also a key feature of theories across the cognitive sciences. For example, cognitive models suggest goal-directed action depends on separate associations between actions and outcomes (instrumental beliefs) and outcomes and values (incentives)1,2. A similar distinction is fundamental to models of economic choice, where decisions are thought to reflect a combination of utilities (how good is this option?) and probabilities (how certain am I to obtain it?)3.
However, recently cognitive scientists have explored the possibility that the familiar double act of beliefs and desires can be replaced by theories that explain behaviour using only one kind of internal state: prediction (Fig. 1)4. These predictive processing accounts based on the free energy principle5 assume that the brain acts as a model of the extracranial world, optimised to fit information arriving at the senses. According to this view, the brain is structured in a hierarchical way such that higher cortical areas embody hypotheses about the activity expected in lower areas, which in turn send information up the processing hierarchy signalling the mismatch or ‘error’ between prediction and reality. This structure allows the brain to optimise its fit to the outside world through two kinds of process or ‘inference’. The first is perceptual inference, where incoming sensory signals are used to adjust hypotheses at higher levels, such that the hypotheses more closely match the outside world. The second is active inference, where strong top-down predictions engage muscles and organs to drive action, changing states of the body and the world such that they conform with the prior predictions. More simply put, the brain can either revise its predictions to match the world or change the world to make the predictions come true.
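The two routes to error minimisation described above can be caricatured in a few lines of code. The following is a deliberately minimal sketch of our own, not part of any formal active inference model: a single scalar stands in for the world, another for the brain's prediction, and the function names and update rates are purely illustrative.

```python
# Toy illustration: prediction error between one 'world' value and one
# 'prediction' can be reduced in two ways.

def perceptual_inference(prediction, observation, rate=0.5):
    """Revise the prediction so it better matches the sensed world."""
    error = observation - prediction
    return prediction + rate * error  # the belief moves toward the world

def active_inference(world_state, prediction, rate=0.5):
    """Act on the world so it better matches the prediction."""
    error = prediction - world_state
    return world_state + rate * error  # the world moves toward the belief

# Start with a mismatch between brain and world.
world, belief = 0.0, 1.0

# Route 1: perceptual inference -- update the belief until it fits the world.
b = belief
for _ in range(20):
    b = perceptual_inference(b, world)
assert abs(b - world) < 1e-3

# Route 2: active inference -- change the world until it fits the belief.
w = world
for _ in range(20):
    w = active_inference(w, belief)
assert abs(w - belief) < 1e-3
```

In both routes the same quantity (the mismatch) is driven to zero; the difference lies only in which side of the equation is allowed to change.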
Proponents of this view4 suggest that these models leave us with a desert landscape view of cognition, where mental states once thought to be crucial in explaining behaviour—such as goals, drives and desires—are reduced to predictions. Under this account “there is no essential difference between goals or desires and beliefs or predictions”6 and “desired outcomes [are] simply…those that an agent believes, a priori, it will obtain”7. According to this view, the hungry rat presses the lever because it expects itself to press, since it expects not to be hungry in the future. Neuroscientists and philosophers defending these models have recently reaffirmed that desires emerge as webs of prior beliefs8, dissolving the distinction between beliefs and desires: “from motor control to expected utility theory…as each of these constructs is absorbed…the landscape of explanations becomes progressively deserted. Is this something to be celebrated or resisted?”9
The predictive processing scheme has the potential to unify cognitive science with other life and social sciences through a common set of principles. For example, it can be shown that any plausible biological system—whether brain, bacterium or birch tree—behaves as though it possesses a predictive model of its environment, and acts in ways that improve the fit between this model and the outside world10. It has also been suggested that the same mathematical principles can explain cultural evolution11. These models are useful to scientists who seek continuity between the principles explaining human and animal behaviour and those explaining the rest of the natural world.
However, the unifying potential of such predictive processing models may come at a cost to explanatory power, and there may still be good reasons for the cognitive scientist to retain the concepts of belief-like and desire-like states in their theoretical arsenal. For example, predictive processing models of active inference assume that we act by generating (false) predictions about the states of our body (e.g. my hand is over there) and enslaving peripheral reflexes to make the prediction come true (i.e. move it). This formulation provides an elegant account of how motor commands are generated and unpacked in the spinal cord, and there would be little dispute that goals are achieved through error-minimisation processes12. However, a key component of the scheme is the assumption that agents suspend perception of their actions until their predictions are realised, reducing the weight or ‘precision’ afforded to incoming sensory signals13 (Box 2). This assumption is required because a single state must play the role of both belief and desire: I cannot simultaneously represent with one state that my hand is by my side and that I would like it to be grasping the mug. These assumptions are difficult to reconcile with evidence that agents can simultaneously act and perceptually monitor their actions as they unfold, for example, when adapting to unexpected perturbations in a visually guided reaching movement12. It is unclear whether there is a straightforward solution to this problem. This kind of sensory-guided goal-directed action is compatible with some levels of the hierarchy not distinguishing between belief-like and desire-like information1,11, but not with the absence of this distinction at all levels.
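The role of precision in this argument can be made concrete with another toy sketch of our own devising (the function, values and precision weights are illustrative assumptions, not taken from any published model). When sensory precision is high, incoming evidence revises the goal-like prediction away; only by attenuating that precision can the prediction persist long enough to drive action.

```python
# Toy sketch of precision-weighted belief updating. A single prediction
# (e.g. 'my hand is at the mug') is combined with sensory evidence in
# proportion to the precision (inverse variance) assigned to the senses.

def update(prediction, sensed, sensory_precision, prior_precision=1.0):
    """Precision-weighted average of the prior prediction and sensory input."""
    total = prior_precision + sensory_precision
    return (prior_precision * prediction + sensory_precision * sensed) / total

goal = 1.0        # desire-like prediction: hand at the mug
hand_now = 0.0    # what the senses currently report: hand by my side

# With normal sensory precision, the evidence drags the prediction back
# toward the sensed hand position -- the 'goal' is revised away.
revised = update(goal, hand_now, sensory_precision=1.0)
assert revised == 0.5

# With attenuated sensory precision, the prediction barely moves, so it
# can persist long enough to enslave the reflexes that fulfil it.
attenuated = update(goal, hand_now, sensory_precision=0.01)
assert attenuated > 0.99
```

The tension noted above is visible here: the same attenuation that protects the goal also mutes exactly the sensory signal an agent would need to monitor and correct an unfolding movement.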
Retaining the distinction between belief-like and desire-like states may also help clinical scientists explain atypical aspects of action. For example, studies of drug addiction have shown that individuals can expect substances to be unrewarding, yet still feel strong compulsions to consume them, with expectations about the pleasantness of consumption (‘liking’) and about one’s future actions (‘wanting’) subserved by dissociable mechanisms14. A similar distinction may be important in obsessive-compulsive disorder, where individuals feel strong urges to perform actions they believe to be causally impotent15. Such experiences are difficult to explain without distinguishing desire-like and belief-like mechanisms (Box 1).
The predictive processing framework is used by many scientists, and it may be that some are implicitly committed to the belief-desire distinction despite the ‘desert landscape’ view emphatically defended by some of the framework’s key architects6. We propose it is important to retain a clear distinction between beliefs and desires when explaining cognition and behaviour. Intriguingly, this distinction could be explicitly reintroduced into predictive processing via the concept of deep temporal models16. These accounts propose that agents can act in ways that minimise future prediction errors, possessing separate predictions about states of the world and predictions about plausible actions they could perform. However, while it may be tempting to identify the former and latter types of prediction as beliefs and desires respectively, theorists have not yet taken explicit steps in this direction. We would welcome such steps, but they would imply that the aim of unifying scientific explanation via the concept of error-minimisation can be only partially achieved. The desert landscape of cognition is not as featureless as it seems, and we must accept that there is a discontinuity between different types of mental state, and between error-minimising systems that possess predictions about the future (e.g. animals) and those that do not (e.g. viruses).
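The deep temporal idea sketched above can also be caricatured in code. This is a hypothetical toy of our own: real deep temporal models score probabilistic policies by expected free energy, whereas here an agent simply holds a preferred future state and separate predicted outcomes for each candidate action, choosing the action expected to leave the least future error.

```python
# Toy sketch: an agent holds a prediction about a preferred future state
# and separate predictions about where each candidate action would lead.

def expected_error(predicted_outcome, preferred_state):
    """Squared mismatch between a predicted outcome and the preferred state."""
    return (predicted_outcome - preferred_state) ** 2

def select_action(action_outcomes, preferred_state):
    """Pick the action predicted to minimise future prediction error."""
    return min(action_outcomes,
               key=lambda a: expected_error(action_outcomes[a], preferred_state))

# Hypothetical rat: predicted outcomes (probability of being fed) for each
# action, alongside a prior prediction of being fed (1.0).
outcomes = {"press_lever": 0.9, "groom": 0.1, "sleep": 0.0}
assert select_action(outcomes, preferred_state=1.0) == "press_lever"
```

Note that the two ingredients are representationally separate here: `preferred_state` plays a desire-like role and `action_outcomes` a belief-like one, which is precisely the distinction the desert landscape view dissolves.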
In conclusion, prominent predictive processing models have suggested it is possible to abandon traditional concepts of belief and desire, explaining all cognition and behaviour in terms of predictions. This account holds promise for uniting the study of the mind with the study of the natural world, but discarding these concepts may limit cognitive science’s ability to explain the subtleties of motivated action in health and disease. Though both beliefs and desires could be crafted from the sands of a desert landscape, the cognitive scientist may still find them to be as different as concrete and glass.
1. Dickinson, A. & Balleine, B. Motivational control of goal-directed action. Animal Learn. Behav. 22, 1–18 (1994).
2. Hommel, B., Müsseler, J., Aschersleben, G. & Prinz, W. The Theory of Event Coding (TEC): a framework for perception and action planning. Behav. Brain Sci. 24, 849–878 (2001).
3. Kahneman, D. & Tversky, A. Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291 (1979).
4. Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204 (2013).
5. Friston, K. The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138 (2010).
6. Van de Cruys, S., Friston, K. & Clark, A. Controlled optimism: Reply to Sun and Firestone on the Dark Room Problem. Trends Cogn. Sci. https://doi.org/10.1016/j.tics.2020.05.012 (2020).
7. FitzGerald, T. H. B., Dolan, R. J. & Friston, K. Dopamine, reward learning, and active inference. Front. Comput. Neurosci. 9, 136 (2015).
8. Clark, A. Beyond desire? Agency, choice, and the predictive mind. Aust. J. Philos. 0, 1–15 (2019).
9. Friston, K. J. Beyond the Desert Landscape in Andy Clark and His Critics (eds Colombo, M., Irvine, E. & Stapleton, M.) (Oxford University Press, 2019).
10. Friston, K. J. Life as we know it. J. R. Soc. Interface 10, 20130475 (2013).
11. Ramstead, M. J. D., Badcock, P. B. & Friston, K. J. Answering Schrödinger’s question: a free-energy formulation. Phys. Life Rev. 24, 1–16 (2018).
12. Desmurget, M. & Grafton, S. Forward modeling allows feedback control for fast reaching movements. Trends Cogn. Sci. 4, 423–431 (2000).
13. Brown, H., Adams, R. A., Parees, I., Edwards, M. & Friston, K. Active inference, sensory attenuation and illusions. Cogn. Process 14, 411–427 (2013).
14. Robinson, T. E. & Berridge, K. C. The incentive sensitization theory of addiction: some current issues. Philos. Trans. R. Soc. Lond. B Biol. Sci. 363, 3137–3146 (2008).
15. Gillan, C. M. & Robbins, T. W. Goal-directed learning and obsessive-compulsive disorder. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130475 (2014).
16. Friston, K. J., Rosch, R., Parr, T., Price, C. & Bowman, H. Deep temporal models and active inference. Neurosci. Biobehav. Rev. 77, 388–402 (2017).
17. Shea, N. Perception versus action: the computations may be the same but the direction of fit differs. Behav. Brain Sci. 36, 228–229 (2013).
18. Klein, C. What do predictive coders want? Synthese 195, 2541–2557 (2018).
19. Yon, D., de Lange, F. P. & Press, C. The predictive brain as a stubborn scientist. Trends Cogn. Sci. 23, 6–8 (2019).
20. Feldman, H. & Friston, K. J. Attention, uncertainty, and free-energy. Front. Hum. Neurosci. 4, 215 (2010).
We are grateful to Karl Friston for useful discussions on these topics and comments on the manuscript. We also thank Richard Ivry for helpful discussions.
The authors declare no competing interests.
Peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Yon, D., Heyes, C. & Press, C. Beliefs and desires in the predictive brain. Nat Commun 11, 4404 (2020). https://doi.org/10.1038/s41467-020-18332-9