Abstract
A standard assumption in neuroscience is that low-effort model-free learning is automatic and continuously used, whereas more complex model-based strategies are only used when the rewards they generate are worth the additional effort. We present evidence refuting this assumption. First, we demonstrate flaws in previous reports of combined model-free and model-based reward prediction errors in the ventral striatum that probably led to spurious results. More appropriate analyses yield no evidence of model-free prediction errors in this region. Second, we find that task instructions generating more correct model-based behaviour reduce rather than increase mental effort. This is inconsistent with cost–benefit arbitration between model-based and model-free strategies. Together, our data indicate that model-free learning may not be automatic. Instead, humans can reduce mental effort by using a model-based strategy alone rather than arbitrating between multiple strategies. Our results call for re-evaluation of the assumptions in influential theories of learning and decision-making.
Data availability
The behavioural and eye-tracking data can be found at https://github.com/carolfs/fmri_magic_carpet and the fMRI images can be found at https://openneuro.org/datasets/ds004455.
Code availability
The code used to run the task and the analyses can be found at https://github.com/carolfs/fmri_magic_carpet and makes use of PsychoPy v.1.90.3, SPM v.12, FSL v.6.0.5, Python v.3.8.13, R v.4.1.3, Julia v.1.7.2, MATLAB v.R2019b and MACS v.1.3.
References
Daw, N. D., Niv, Y. & Dayan, P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 8, 1704–1711 (2005).
Akam, T., Costa, R. & Dayan, P. Simple plans or sophisticated habits? State, transition and learning interactions in the two-step task. PLoS Comput. Biol. 11, e1004648 (2015).
Kool, W., Cushman, F. A. & Gershman, S. J. When does model-based control pay off? PLoS Comput. Biol. 12, e1005090 (2016).
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P. & Dolan, R. J. Model-based influences on humans’ choices and striatal prediction errors. Neuron 69, 1204–1215 (2011).
Wunderlich, K., Smittenaar, P. & Dolan, R. J. Dopamine enhances model-based over model-free choice behavior. Neuron 75, 418–424 (2012).
Dezfouli, A. & Balleine, B. W. Actions, action sequences and habits: evidence that goal-directed and habitual action control are hierarchically organized. PLoS Comput. Biol. 9, e1003364 (2013).
Otto, A. R., Raio, C. M., Chiang, A., Phelps, E. A. & Daw, N. D. Working-memory capacity protects model-based learning from stress. Proc. Natl Acad. Sci. USA 110, 20941–20946 (2013).
Smittenaar, P., FitzGerald, T. H., Romei, V., Wright, N. D. & Dolan, R. J. Disruption of dorsolateral prefrontal cortex decreases model-based in favor of model-free control in humans. Neuron 80, 914–919 (2013).
Eppinger, B., Walter, M., Heekeren, H. R. & Li, S.-C. Of goals and habits: age-related and individual differences in goal-directed decision-making. Front. Neurosci. https://doi.org/10.3389/fnins.2013.00253 (2013).
Dezfouli, A., Lingawi, N. W. & Balleine, B. W. Habits as action sequences: hierarchical action control and changes in outcome value. Philos. Trans. R. Soc. B: Biol. Sci. 369, 20130482 (2014).
Otto, A. R., Skatova, A., Madlon-Kay, S. & Daw, N. D. Cognitive control predicts use of model-based reinforcement learning. J. Cogn. Neurosci. 27, 319–333 (2014).
Friedel, E. et al. Devaluation and sequential decisions: linking goal-directed and model-based behavior. Front. Human Neurosci. https://doi.org/10.3389/fnhum.2014.00587 (2014).
Economides, M., Kurth-Nelson, Z., Lübbert, A., Guitart-Masip, M. & Dolan, R. J. Model-based reasoning in humans becomes automatic with training. PLoS Comput. Biol. 11, e1004463 (2015).
Deserno, L. et al. Ventral striatal dopamine reflects behavioral and neural signatures of model-based control during sequential decision making. Proc. Natl Acad. Sci. USA 112, 1595–1600 (2015).
Voon, V. et al. Disorders of compulsivity: a common bias towards learning habits. Mol. Psychiatry 20, 345–352 (2015).
Gillan, C. M., Otto, A. R., Phelps, E. A. & Daw, N. D. Model-based learning protects against forming habits. Cogn., Affect., Behav. Neurosci. 15, 523–536 (2015).
Doll, B. B., Bath, K. G., Daw, N. D. & Frank, M. J. Variability in dopamine genes dissociates model-based and model-free reinforcement learning. J. Neurosci. 36, 1211–1222 (2016).
Decker, J. H., Otto, A. R., Daw, N. D. & Hartley, C. A. From creatures of habit to goal-directed learners: tracking the developmental emergence of model-based reinforcement learning. Psychol. Sci. 27, 848–858 (2016).
Konovalov, A. & Krajbich, I. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning. Nat. Commun. 7, 12438 (2016).
Gillan, C. M., Kosinski, M., Whelan, R., Phelps, E. A. & Daw, N. D. Characterizing a psychiatric symptom dimension related to deficits in goal-directed control. eLife https://elifesciences.org/articles/11305 (2016).
Sharp, M. E., Foerde, K., Daw, N. D. & Shohamy, D. Dopamine selectively remediates ‘model-based’ reward learning: a computational approach. Brain 139, 355–364 (2016).
Miller, K. J., Botvinick, M. M. & Brody, C. D. Dorsal hippocampus contributes to model-based planning. Nat. Neurosci. 20, 1269–1276 (2017).
Shahar, N. et al. Credit assignment to state-independent task representations and its relationship with model-based decision making. Proc. Natl Acad. Sci. USA 116, 15871–15876 (2019).
Shahar, N. et al. Improving the reliability of model-based decision-making estimates in the two-stage decision task with reaction-times and drift-diffusion modeling. PLoS Comput. Biol. 15, e1006803 (2019).
Grosskurth, E. D., Bach, D. R., Economides, M., Huys, Q. J. M. & Holper, L. No substantial change in the balance between model-free and model-based control via training on the two-step task. PLoS Comput. Biol. 15, e1007443 (2019).
Sebold, M. et al. When habits are dangerous: alcohol expectancies and habitual decision making predict relapse in alcohol dependence. Biol. Psychiatry 82, 847–856 (2017).
Nebe, S. et al. No association of goal-directed and habitual control with alcohol consumption in young adults. Addiction Biol. 23, 379–393 (2018).
Feher da Silva, C. & Hare, T. A. Humans primarily use model-based inference in the two-stage task. Nat. Hum. Behav. 4, 1053–1066 (2020).
Seow, T. X. F. et al. Model-based planning deficits in compulsivity are linked to faulty neural representations of task structure. J. Neurosci. 41, 6539–6550 (2021).
Doll, B. B., Simon, D. A. & Daw, N. D. The ubiquity of model-based reinforcement learning. Curr. Opin. Neurobiol. 22, 1075–1081 (2012).
Chen, H. et al. Model-based and model-free control predicts alcohol consumption developmental trajectory in young adults: a 3-year prospective study. Biol. Psychiatry 89, 980–989 (2021).
Sharp, P. B., Dolan, R. J. & Eldar, E. Disrupted state transition learning as a computational marker of compulsivity. Psychol. Med. https://doi.org/10.1017/S0033291721003846 (2021).
Dromnelle, R. et al. in Biomimetic and Biohybrid Systems (eds Vouloutsi, V. et al.) 68–79 (Springer International Publishing, 2020).
Wise, R. A. Dopamine, learning and motivation. Nat. Rev. Neurosci. 5, 483–494 (2004).
Gläscher, J., Daw, N., Dayan, P. & O’Doherty, J. P. States versus rewards: dissociable neural prediction error signals underlying model-based and model-free reinforcement learning. Neuron 66, 585–595 (2010).
Lee, S. W., Shimojo, S. & O’Doherty, J. P. Neural computations underlying arbitration between model-based and model-free learning. Neuron 81, 687–699 (2014).
Donoso, M., Collins, A. G. E. & Koechlin, E. Foundations of human reasoning in the prefrontal cortex. Science 344, 1481–1486 (2014).
Charpentier, C. J., Iigaya, K. & O’Doherty, J. P. A neuro-computational account of arbitration between choice imitation and goal emulation during human observational learning. Neuron 106, 687–699.e7 (2020).
Daw, N. D., O’Doherty, J. P., Dayan, P., Seymour, B. & Dolan, R. J. Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006).
Raja Beharelle, A., Polania, R., Hare, T. A. & Ruff, C. C. Transcranial stimulation over frontopolar cortex elucidates the choice attributes and neural mechanisms used to resolve exploration-exploitation trade-offs. J. Neurosci. 35, 14544–14556 (2015).
Kahneman, D. & Beatty, J. Pupil diameter and load on memory. Science 154, 1583–1585 (1966).
Poock, G. K. Information processing vs pupil diameter. Percept. Mot. Skills 37, 1000–1002 (1973).
Jepma, M. & Nieuwenhuis, S. Pupil diameter predicts changes in the exploration-exploitation trade-off: evidence for the adaptive gain theory. J. Cogn. Neurosci. 23, 1587–1596 (2011).
Reimer, J. et al. Pupil fluctuations track fast switching of cortical states during quiet wakefulness. Neuron 84, 355–362 (2014).
Richer, F. & Beatty, J. Contrasting effects of response uncertainty on the task-evoked pupillary response and reaction time. Psychophysiology 24, 258–262 (1987).
Urai, A. E., Braun, A. & Donner, T. H. Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias. Nat. Commun. 8, 14637 (2017).
O’Reilly, J. X. et al. Dissociable effects of surprise and model update in parietal and anterior cingulate cortex. Proc. Natl Acad. Sci. USA 110, E3660–E3669 (2013).
Grueschow, M., Kleim, B. & Ruff, C. C. Role of the locus coeruleus arousal system in cognitive control. J. Neuroendocrinol. 32, e12890 (2020).
Kool, W., Gershman, S. J. & Cushman, F. A. Cost-benefit arbitration between multiple reinforcement-learning systems. Psychol. Sci. https://doi.org/10.1177/0956797617708288 (2017).
Kool, W., Gershman, S. J. & Cushman, F. A. Planning complexity registers as a cost in metacontrol. J. Cogn. Neurosci. 30, 1391–1404 (2018).
Daw, N. D. Are we of two minds? Nat. Neurosci. 21, 1497 (2018).
Collins, A. G. & Cockburn, J. Beyond dichotomies in reinforcement learning. Nat. Rev. Neurosci. 21, 576–586 (2020).
Bennett, D., Niv, Y. & Langdon, A. J. Value-free reinforcement learning: Policy optimization as a minimal model of operant behavior. Curr. Opin. Behav. Sci. 41, 114–121 (2021).
Heo, S., Sung, Y. & Lee, S. W. Effects of subclinical depression on prefrontal-striatal model-based and model-free learning. PLoS Comput. Biol. 17, e1009003 (2021).
Bromberg-Martin, E. S., Matsumoto, M., Hong, S. & Hikosaka, O. A pallidus-habenula-dopamine pathway signals inferred stimulus values. J. Neurophysiol. 104, 1068–1076 (2010).
Sadacca, B. F., Jones, J. L. & Schoenbaum, G. Midbrain dopamine neurons compute inferred and cached value prediction errors in a common framework. eLife https://elifesciences.org/articles/13665 (2016).
Sharpe, M. J. et al. Dopamine transients are sufficient and necessary for acquisition of model-based associations. Nat. Neurosci. 20, 735–742 (2017).
Feher da Silva, C., Lombardi, G., Edelson, M. & Hare, T. Is model-based learning related to dietary self-control? (Centre for Open Science, 2018); osf.io/wkcvx
Esteban, O., Markiewicz, C.J., Blair, R.W. et al. fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat. Methods 16, 111–116 (2019).
Esteban, O. et al. fMRIPrep 1.2.5 (2018).
Lewandowski, D., Kurowicka, D. & Joe, H. Generating random correlation matrices based on vines and extended onion method. J. Multivar. Anal. 100, 1989–2001 (2009).
Stan modeling language users guide and reference manual, version 2.16.0 (Stan Development Team, 2017).
Carpenter, B. et al. Stan: a probabilistic programming language. J. Statist. Softw. http://www.jstatsoft.org/v76/i01/ (2017).
PyStan: the Python interface to Stan (Stan Development Team, 2017); http://mc-stan.org
Vehtari, A., Gelman, A. & Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat. Comput. https://doi.org/10.1007/s11222-016-9696-4 (2016).
McElreath, R. in Statistical Rethinking 2nd edn, Ch. 12, Monsters and Mixtures, 369–397 (CRC Press, 2020).
Gorgolewski, K. et al. Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in Python. Front. Neuroinform. 5, 13 (2011).
Gorgolewski, K. J. et al. Nipype (2018).
Tustison, N. J. et al. N4ITK: improved N3 bias correction. IEEE Trans. Med. Imaging 29, 1310–1320 (2010).
Fonov, V., Evans, A., McKinstry, R., Almli, C. & Collins, D. Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. NeuroImage 47, S102 (2009).
Avants, B., Epstein, C., Grossman, M. & Gee, J. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12, 26–41 (2008).
Zhang, Y., Brady, M. & Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 20, 45–57 (2001).
Wang, S. et al. Evaluation of field map and nonlinear registration methods for correction of susceptibility artifacts in diffusion MRI. Front. Neuroinform. http://journal.frontiersin.org/article/10.3389/fninf.2017.00017/full (2017).
Huntenburg, J. M. Evaluating Nonlinear Coregistration of BOLD EPI and T1w Images. Master’s thesis, Freie Univ., Berlin (2014).
Treiber, J. M. et al. Characterization and correction of geometric distortions in 814 diffusion weighted images. PLoS ONE 11, e0152472 (2016).
Jenkinson, M. & Smith, S. A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5, 143–156 (2001).
Greve, D. N. & Fischl, B. Accurate and robust brain image alignment using boundary-based registration. NeuroImage 48, 63–72 (2009).
Jenkinson, M., Bannister, P., Brady, M. & Smith, S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17, 825–841 (2002).
Cox, R. W. & Hyde, J. S. Software tools for analysis and visualization of fMRI data. NMR Biomed. 10, 171–178 (1997).
Power, J. D. et al. Methods to detect, characterize, and remove motion artifact in resting state fMRI. NeuroImage 84, 320–341 (2014).
Behzadi, Y., Restom, K., Liau, J. & Liu, T. T. A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. NeuroImage 37, 90–101 (2007).
Lanczos, C. Evaluation of noisy data. J. Soc. Ind. Appl. Math. Ser. B Numer. Anal. 1, 76–85 (1964).
Abraham, A. et al. Machine learning for neuroimaging with scikit-learn. Front. Neuroinform. https://www.frontiersin.org/articles/10.3389/fninf.2014.00014/full (2014).
Gorgolewski, K. J. Confounds from fmriprep: which one would you use for GLM? (2017); https://neurostars.org/t/confounds-from-fmriprep-which-one-would-you-use-for-glm/326/2
Virtanen, P. et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272 (2020).
Bürkner, P.-C. brms: an R package for Bayesian multilevel models using Stan. J. Stat. Softw. 80, 1–28 (2017).
Acknowledgements
We thank G.M. Parente for the illustrations used in the experimental tasks, K. Treiber and E. Silingardi for helping with the fMRI data collection, S. Gobbi for helping with the fMRI preprocessing and analysis as well as reviewing our calculations, and N.D. Daw, P. Dayan, M. Grueschow, A. Konovalov, I. Krajbich and S. Nebe for helpful comments on early drafts of this manuscript. Our acknowledgement of their feedback does not imply that these individuals fully agree with our conclusions or opinions in this paper. This work was supported by the CAPES Foundation (grant no. 88881.119317/2016-01), awarded to C.F.S., and the European Union’s Seventh Framework programme for research, technological development and demonstration under grant agreement no. 607310 (Nudge-it), awarded to T.A.H. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
Author information
Authors and Affiliations
Contributions
C.F.S. and T.A.H. conceived the project. All authors designed the experiments. C.F.S. and G.L. collected and analysed the data with input from M.E. and T.A.H. C.F.S. and T.A.H. wrote the first draft of the manuscript. All authors revised the manuscript for submission.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Human Behaviour thanks Mehdi Khamassi, Jan Gläscher and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 Mean estimated coefficients from the combined-RPE and separated-RPE GLMs within the nucleus accumbens.
Mean estimated coefficients from the (a, c) combined-RPE and (b, d) separated-RPE GLMs within the nucleus accumbens for the abstract (N = 48) and story (N = 46) conditions. Each black dot represents the coefficient from a single participant. The box and whisker plots show the distribution across the entire sample. The box extends from the first quartile to the third quartile of the distribution, with a line at the median. The whiskers extend from the box by 1.5 times the inter-quartile range. These coefficients were obtained using hybrid parameter estimates from the previous sample4.
Supplementary information
Supplementary Information
Supplemental Material.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Feher da Silva, C., Lombardi, G., Edelson, M. et al. Rethinking model-based and model-free influences on mental effort and striatal prediction errors. Nat Hum Behav 7, 956–969 (2023). https://doi.org/10.1038/s41562-023-01573-1
Received:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1038/s41562-023-01573-1