
Mice alternate between discrete strategies during perceptual decision-making

Abstract

Classical models of perceptual decision-making assume that subjects use a single, consistent strategy to form decisions, or that decision-making strategies evolve slowly over time. Here we present new analyses suggesting that this common view is incorrect. We analyzed data from mouse and human decision-making experiments and found that choice behavior relies on an interplay among multiple interleaved strategies. These strategies, characterized by states in a hidden Markov model, persist for tens to hundreds of trials before switching, and often switch multiple times within a session. The identified decision-making strategies were highly consistent across mice and comprised a single ‘engaged’ state, in which decisions relied heavily on the sensory stimulus, and several biased states in which errors frequently occurred. These results provide a powerful alternate explanation for ‘lapses’ often observed in rodent behavioral experiments, and suggest that standard measures of performance mask the presence of major changes in strategy across trials.


Data availability

The raw data studied in this paper are publicly available. The IBL data associated with ref. 19 can be accessed at https://doi.org/10.6084/m9.figshare.11636748. The Odoemene et al. data associated with ref. 20 can be accessed at https://doi.org/10.14224/1.38944. Finally, the human data associated with ref. 27 can be accessed at https://doi.org/10.6084/m9.figshare.4300043.

Code availability

We contributed code to version 0.0.1 of the Bayesian State Space Modeling framework of ref. 72, and we used this code base to perform GLM-HMM inference. The code to analyze the resulting model fits and to produce the figures in this paper is available at https://github.com/zashwood/glm-hmm.
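For readers who want to understand the model class without consulting the full code base, the following is a minimal, self-contained NumPy sketch (an illustration, not taken from our repository) of the GLM-HMM likelihood computation via the forward algorithm, for binary choices with Bernoulli-GLM observations; all variable names and dimensions are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glm_hmm_loglik(choices, inputs, W, A, pi0):
    """Log-likelihood of binary choices under a GLM-HMM (forward algorithm).

    choices: (T,) array of 0/1 choices
    inputs:  (T, D) design matrix (stimulus, bias, history regressors, ...)
    W:       (K, D) per-state GLM weights
    A:       (K, K) state transition matrix
    pi0:     (K,) initial state distribution
    """
    p_right = sigmoid(inputs @ W.T)                            # (T, K)
    # per-state Bernoulli likelihood of each observed choice
    lik = np.where(choices[:, None] == 1, p_right, 1.0 - p_right)
    alpha = pi0 * lik[0]                                       # forward message
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c                                          # normalize each step
    for t in range(1, len(choices)):
        alpha = (alpha @ A) * lik[t]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik
```

With K = 1 state this reduces to the log-likelihood of an ordinary logistic-regression GLM, which provides a useful sanity check.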

References

1. Gomez-Marin, A., Paton, J. J., Kampff, A. R., Costa, R. M. & Mainen, Z. F. Big behavioral data: psychology, ethology and the foundations of neuroscience. Nat. Neurosci. 17, 1455–1462 (2014).

2. Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A. & Poeppel, D. Neuroscience needs behavior: correcting a reductionist bias. Neuron 93, 480–490 (2017).

3. Berman, G. J. Measuring behavior across scales. BMC Biol. 16, 23 (2018).

4. Wiltschko, A. B. et al. Mapping sub-second structure in mouse behavior. Neuron 88, 1121–1135 (2015).

5. Sharma, A., Johnson, R., Engert, F. & Linderman, S. Point process latent variable models of larval zebrafish behavior. In: Bengio, S. et al. (eds) Advances in Neural Information Processing Systems 31, 10919–10930 (MIT Press, 2018).

6. Calhoun, A. J., Pillow, J. W. & Murthy, M. Unsupervised identification of the internal states that shape natural behavior. Nat. Neurosci. 22, 2040–2049 (2019).

7. Batty, E. et al. BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos. In: Wallach, H. et al. (eds) Advances in Neural Information Processing Systems 32, 15706–15717 (MIT Press, 2019).

8. Johnson, R. E. et al. Probabilistic models of larval zebrafish behavior reveal structure on many scales. Curr. Biol. 30, 70–82 (2020).

9. Green, D. M. & Swets, J. A. Signal Detection Theory and Psychophysics (Wiley, 1966).

10. Klein, S. A. Measuring, estimating, and understanding the psychometric function: a commentary. Percept. Psychophys. 63, 1421–1455 (2001).

11. Bogacz, R., Brown, E., Moehlis, J., Holmes, P. & Cohen, J. D. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychol. Rev. 113, 700–765 (2006).

12. Gold, J. I. & Shadlen, M. N. The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574 (2007).

13. Ratcliff, R. & McKoon, G. The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput. 20, 873–922 (2008).

14. Iigaya, K., Fonseca, M. S., Murakami, M., Mainen, Z. F. & Dayan, P. An effect of serotonergic stimulation on learning rates for rewards apparent after long intertrial intervals. Nat. Commun. 9, 2477 (2018).

15. Urai, A. E., de Gee, J. W., Tsetsos, K. & Donner, T. H. Choice history biases subsequent evidence accumulation. eLife 8, e46331 (2019).

16. Lak, A. et al. Reinforcement biases subsequent perceptual decisions when confidence is low, a widespread behavioral phenomenon. eLife 9, e49834 (2020).

17. Pisupati, S., Chartarifsky-Lynn, L., Khanal, A. & Churchland, A. K. Lapses in perceptual decisions reflect exploration. eLife 10, e55490 (2021).

18. Beron, C., Neufeld, S., Linderman, S. & Sabatini, B. Efficient and stochastic mouse action switching during probabilistic decision making. Preprint at https://www.biorxiv.org/content/10.1101/2021.05.13.444094v1 (2021).

19. The International Brain Laboratory et al. A standardized and reproducible method to measure decision-making in mice. eLife 10, e63711 (2021).

20. Odoemene, O., Pisupati, S., Nguyen, H. & Churchland, A. K. Visual evidence accumulation guides decision-making in unrestrained mice. J. Neurosci. 38, 10143–10155 (2018).

21. Pinto, L. et al. An accumulation-of-evidence task using visual pulses for mice navigating in virtual reality. Front. Behav. Neurosci. 12, 36 (2018).

22. Wichmann, F. A. & Hill, N. J. The psychometric function: I. Fitting, sampling, and goodness of fit. Percept. Psychophys. 63, 1293–1313 (2001).

23. Prins, N. The psychometric function: the lapse rate revisited. J. Vis. 12, 25 (2012).

24. Carandini, M. & Churchland, A. K. Probing perceptual decisions in rodents. Nat. Neurosci. 16, 824–831 (2013).

25. Lindsay, B. G. Mixture Models: Theory, Geometry and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics 5 (Institute of Mathematical Statistics, 1995).

26. Law, C.-T. & Gold, J. I. Reinforcement learning can account for associative and perceptual learning on a visual-decision task. Nat. Neurosci. 12, 655–663 (2009).

27. Urai, A. E., Braun, A. & Donner, T. H. Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias. Nat. Commun. 8, 14637 (2017).

28. Busse, L. et al. The detection of visual contrast in the behaving mouse. J. Neurosci. 31, 11351–11361 (2011).

29. Fründ, I., Wichmann, F. A. & Macke, J. H. Quantifying the effect of intertrial dependence on perceptual decisions. J. Vis. 14, 9 (2014).

30. Lueckmann, J.-M., Macke, J. H. & Nienborg, H. Can serial dependencies in choices and neural activity explain choice probabilities? J. Neurosci. 38, 3495–3506 (2018).

31. Escola, S., Fontanini, A., Katz, D. & Paninski, L. Hidden Markov models for the stimulus–response relationships of multistate neural systems. Neural Comput. 23, 1071–1132 (2011).

32. Bengio, Y. & Frasconi, P. An input output HMM architecture. In: Tesauro, G., Touretzky, D. S. & Leen, T. K. (eds) Advances in Neural Information Processing Systems 7, 427–434 (MIT Press, 1995).

33. Burgess, C. P. et al. High-yield methods for accurate two-alternative visual psychophysics in head-fixed mice. Cell Rep. 20, 2513–2524 (2017).

34. Friedman, J., Hastie, T. & Tibshirani, R. The Elements of Statistical Learning (Springer, 2001).

35. Roy, N. A., Bak, J. H., Akrami, A., Brody, C. & Pillow, J. W. Efficient inference for time-varying behavior during learning. In: Advances in Neural Information Processing Systems 31, 5695–5705 (MIT Press, 2018).

36. Roy, N. A. et al. Extracting the dynamics of behavior in sensory decision-making experiments. Neuron 109, 597–610 (2021).

37. Heitz, R. P. The speed–accuracy tradeoff: history, physiology, methodology, and behavior. Front. Neurosci. 8, 150 (2014).

38. Ratcliff, R., Smith, P. L., Brown, S. D. & McKoon, G. Diffusion decision model: current issues and history. Trends Cogn. Sci. 20, 260–281 (2016).

39. Shevinsky, C. A. & Reinagel, P. The interaction between elapsed time and decision accuracy differs between humans and rats. Front. Neurosci. 13, 1211 (2019).

40. Ding, L. & Gold, J. I. Neural correlates of perceptual decision making before, during, and after decision commitment in monkey frontal eye field. Cereb. Cortex 22, 1052–1067 (2012).

41. Ding, L. & Gold, J. I. The basal ganglia’s contributions to perceptual decision making. Neuron 79, 640–649 (2013).

42. Engel, T. A. et al. Selective modulation of cortical state during spatial attention. Science 354, 1140–1144 (2016).

43. Zhou, Y. & Freedman, D. J. Posterior parietal cortex plays a causal role in perceptual and categorical decisions. Science 365, 180–185 (2019).

44. Steinmetz, N. A., Zatka-Haas, P., Carandini, M. & Harris, K. D. Distributed coding of choice, action and engagement across the mouse brain. Nature 576, 266–273 (2019).

45. Zoltowski, D., Pillow, J. & Linderman, S. A general recurrent state space framework for modeling neural dynamics during decision-making. In: Proceedings of the 37th International Conference on Machine Learning, 11680–11691 (PMLR, 2020).

46. Hennig, J. A. et al. Learning is shaped by abrupt changes in neural engagement. Nat. Neurosci. 24, 727–736 (2021).

47. Bolkan, S. S. et al. Strong and opponent contributions of dorsomedial striatal pathways to behavior depends on cognitive demands and task strategy. Preprint at https://www.biorxiv.org/content/10.1101/2021.07.23.453573v1 (2021).

48. Carter, M. E. et al. Tuning arousal with optogenetic modulation of locus coeruleus neurons. Nat. Neurosci. 13, 1526–1533 (2010).

49. Harris, K. D. & Thiele, A. Cortical state and attention. Nat. Rev. Neurosci. 12, 509–523 (2011).

50. Reimer, J. et al. Pupil fluctuations track fast switching of cortical states during quiet wakefulness. Neuron 84, 355–362 (2014).

51. Vinck, M., Batista-Brito, R., Knoblich, U. & Cardin, J. A. Arousal and locomotion make distinct contributions to cortical activity patterns and visual encoding. Neuron 86, 740–754 (2015).

52. Ashwood, Z., Roy, N. A., Bak, J. H. & Pillow, J. W. Inferring learning rules from animal decision-making. In: Advances in Neural Information Processing Systems 33, 3442–3453 (MIT Press, 2020).

53. Daw, N. D., O’Doherty, J. P., Dayan, P., Seymour, B. & Dolan, R. J. Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006).

54. Dayan, P. & Daw, N. D. Decision theory, reinforcement learning, and the brain. Cogn. Affect. Behav. Neurosci. 8, 429–453 (2008).

55. Behrens, T. E. J., Woolrich, M. W., Walton, M. E. & Rushworth, M. F. S. Learning the value of information in an uncertain world. Nat. Neurosci. 10, 1214–1221 (2007).

56. Fiser, J., Berkes, P., Orbán, G. & Lengyel, M. Statistically optimal perception and learning: from behavior to neural representations. Trends Cogn. Sci. 14, 119–130 (2010).

57. Mendonça, A. G. et al. The impact of learning on perceptual decisions and its implication for speed-accuracy tradeoffs. Nat. Commun. 11, 2757 (2020).

58. Gershman, S. J. Origin of perseveration in the trade-off between reward and complexity. Cognition 204, 104394 (2020).

59. Weilnhammer, V., Chikermane, M. & Sterzer, P. Bistable perception alternates between internal and external modes of sensory processing. iScience 24, 102234 (2021).

60. Weilnhammer, V. A., Stuke, H., Eckert, A.-L., Standvoss, K. & Sterzer, P. Humans and mice fluctuate between external and internal modes of sensory processing. Preprint at https://www.biorxiv.org/content/10.1101/2021.08.20.457079v2 (2021).

61. Dempster, A. P., Laird, N. M. & Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Series B Stat. Methodol. 39, 1–22 (1977).

62. McLachlan, G. J. & Krishnan, T. The EM Algorithm and Extensions (Wiley, 2007).

63. Bishop, C. M. Pattern Recognition and Machine Learning (Springer, 2006).

64. Baum, L. E., Petrie, T., Soules, G. & Weiss, N. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Stat. 41, 164–171 (1970).

65. Broyden, C. G. The convergence of a class of double-rank minimization algorithms 1. General considerations. IMA J. Appl. Math. 6, 76–90 (1970).

66. Fletcher, R. A new approach to variable metric algorithms. Comput. J. 13, 317–322 (1970).

67. Goldfarb, D. A family of variable-metric methods derived by variational means. Math. Comput. 24, 23–26 (1970).

68. Shanno, D. F. Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24, 647–656 (1970).

69. Virtanen, P. et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272 (2020).

70. Salakhutdinov, R., Roweis, S. T. & Ghahramani, Z. Optimization with EM and expectation-conjugate-gradient. In: Proceedings of the 20th International Conference on Machine Learning (ICML-03), 672–679 (Association for Computing Machinery, 2003).

71. The International Brain Laboratory et al. Data architecture for a large-scale neuroscience collaboration. Preprint at https://www.biorxiv.org/content/10.1101/827873v3 (2020).

72. Linderman, S., Antin, B., Zoltowski, D. & Glaser, J. SSM: Bayesian learning and inference for state space models https://github.com/lindermanlab/ssm (2020).

73. Yu, S.-Z. Hidden semi-Markov models. Artificial Intelligence 174, 215–243 (2010).

Acknowledgements

We are grateful to M. Wells, R. Terry, L. Funnell and the Cortexlab at University College London for providing us with the data for the four mice plotted in Extended Data Fig. 3. We are grateful to S. Linderman for developing the beautiful Bayesian State Space Modeling framework of ref. 72; as described in our Methods section, we built our code on top of this framework. We thank members of the Pillow Laboratory, the International Brain Laboratory (IBL) and specifically the Behavior Analysis Working Group within the IBL for helpful feedback throughout the project. We thank P. Dayan, S. Bruijns and L. Paninski for acting as the IBL Review Board for this paper. We thank E. Dennis for feedback at various points during the project. We thank A. Russo and M. Whiteway for providing feedback on drafts of this manuscript. We thank H. Bayer for help and advice as we were preparing to submit this paper. Finally, we thank the anonymous reviewers for their insightful comments; our manuscript is greatly improved as a result of their input.

This work was supported by grants SCGB AWD543027 (J.W.P.) and SCGB AWD543011 (A.P.) from the Simons Collaboration on the Global Brain; grants NS104899 (J.W.P.), R01EB026946 (J.W.P.) and R01EY022979 (A.K.C.) from the National Institutes of Health BRAIN initiative; grant U19 NIH-NINDS BRAIN Initiative Award 5U19NS104648 (J.W.P.); grant 315230_197296 from the Swiss National Fund (A.P.); and Wellcome Trust grants 209558 (the IBL) and 216324 (the IBL). A.E.U. was supported by the German National Academy of Sciences Leopoldina and the International Brain Research Organization. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

Authors

Contributions

Conceptualization: Z.C.A. and J.W.P.; Methodology: Z.C.A. and J.W.P.; Additional Technical and Analysis Support: N.A.R., I.R.S., A.E.U., A.K.C. and A.P.; Implementation: Z.C.A.; Data Collection and Curation: A.E.U., A.K.C. and the International Brain Laboratory (IBL); Writing—Original Draft: Z.C.A. and J.W.P.; Writing—Review and Editing: Z.C.A., N.A.R., I.R.S., the IBL, A.E.U., A.K.C., A.P. and J.W.P.; Visualization: Z.C.A. and J.W.P.; Supervision: A.P. and J.W.P.; Project Administration: Z.C.A. and J.W.P.; Funding Acquisition: the IBL, A.K.C., A.P. and J.W.P.

Corresponding authors

Correspondence to Zoe C. Ashwood or Jonathan W. Pillow.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Neuroscience thanks Timothy Behrens and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Cross-validation for Model Selection.

(a,b) Here we show the cross-validated train and test loglikelihood, in units of bits per trial, for two humans performing the task of ref. 27 as a function of the number of states. While the number of parameters of the GLM-HMM increases with the number of states (a two state GLM-HMM has more parameters than both the classic lapse model and the single state GLM), only the loglikelihood of the training dataset (red) is guaranteed to increase as the number of parameters increases. Indeed, as the number of states increases, the GLM-HMM may begin to overfit the training dataset and consequently fit the test dataset poorly; this is what we see here when the grey curves in each of the two panels begin to decrease as the number of states increases. Thus, by evaluating the model on a held-out test dataset, we can appropriately trade off predictive performance against model complexity.
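As a concrete illustration of the units used here, a helper of the following form converts a test set loglikelihood into bits per trial relative to a baseline model; this is a sketch, and the choice of baseline (for example, a model that predicts each choice with the empirical choice frequency) is an assumption for illustration.

```python
import numpy as np

def bits_per_trial(test_loglik, baseline_loglik, n_trials):
    """Test loglikelihood in bits per trial, relative to a baseline model.

    Both loglikelihoods are natural-log values summed over the same
    n_trials held-out trials; dividing by log(2) converts nats to bits.
    """
    return (test_loglik - baseline_loglik) / (n_trials * np.log(2))
```

A positive value means the model predicts held-out choices better than the baseline; comparing this quantity across models with different numbers of states is what panels a and b display.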

Extended Data Fig. 2 Retrieved state dwell times are approximately geometrically distributed.

With the solid line, we show the predicted dwell times (according to the retrieved transition matrix) in each of the three states for the example animal of Figs. 2 and 3. Because state dwell times in a hidden Markov model are geometrically distributed, the predicted dwell times can be obtained from the transition matrix as $$p(\text{dwell time}=t)=(1-A_{kk})A_{kk}^{t-1}.$$ We then use the posterior state probabilities to assign states to trials and calculate the dwell times actually observed in the real data (shown with the dashed line); we also show the 68% confidence intervals associated with these empirical probabilities (n is between 36 and 86, depending on state). We find that the empirical dwell times for the biased leftward and rightward states are well approximated by a geometric distribution. For the engaged state, because there are some entire sessions (each session is 90 trials) during which the animal remains engaged, the empirical dwell times associated with this state are less well described by a geometric distribution. A future direction may be to allow non-geometric state dwell times by replacing the hidden Markov model with, for example, a hidden semi-Markov model (ref. 73).
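The predicted dwell-time distribution can be computed directly from any transition matrix; the following sketch (using an illustrative, not fitted, transition matrix) shows the calculation, along with the implied mean dwell time of $1/(1-A_{kk})$ trials.

```python
import numpy as np

def predicted_dwell_distribution(A, k, t_max):
    """Geometric dwell-time distribution implied by a transition matrix.

    p(dwell = t) = (1 - A[k, k]) * A[k, k]**(t - 1), for t = 1, 2, ...
    """
    a = A[k, k]  # self-transition probability of state k
    t = np.arange(1, t_max + 1)
    return (1.0 - a) * a ** (t - 1)

# Illustrative 'sticky' 3-state transition matrix (values are made up)
A = np.array([[0.98, 0.01, 0.01],
              [0.02, 0.96, 0.02],
              [0.01, 0.01, 0.98]])
p = predicted_dwell_distribution(A, k=0, t_max=500)
mean_dwell = (np.arange(1, 501) * p).sum()  # approaches 1 / (1 - 0.98) = 50 trials
```

The long mean dwell times follow directly from diagonal entries near 1, which is exactly the signature of the sticky transition matrices we recover from the data.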

Extended Data Fig. 3 GLM-HMM application to 4 mice not exposed to bias blocks in IBL task.

We confirm that mice that have never been exposed to bias blocks in the IBL task continue to show state-dependent decision-making. This is a sister figure to Fig. 4, and each panel can be interpreted in the same way as in Fig. 4.

Extended Data Fig. 4 Additional comparisons with PsyTrack model.

(a) Copy of the left panel of Fig. 4f showing the difference in test set loglikelihood between the 3 state GLM-HMM and the PsyTrack model of refs. 35,36 for 37 IBL animals. Black indicates the mean across animals. The 3 state GLM-HMM explained the choice data of all 37 animals better than the PsyTrack model with continuously evolving states. (b) Analogous figure for the 4 additional IBL animals studied in Extended Data Fig. 3. All 4 animals’ data were better explained by the GLM-HMM than by PsyTrack. (c) Same as panel a for the Odoemene et al. animals shown in Fig. 5, although the comparison now uses 4 state GLM-HMM fits. All 15 animals’ data were better explained by the GLM-HMM than by PsyTrack.

Extended Data Fig. 5 State switching in Odoemene et al. task.

(a) Posterior state probabilities for three example sessions for three example mice (different mice are shown in each row). (b) Histogram giving number of state changes (identified with posterior state probabilities) per session for all sessions across all animals. For visibility, state changes are censored above 60. (c) Different sessions have different numbers of trials, so we normalize the histogram of b to give the number of state changes per 500 trials for each session (the median session length is 683 trials). Again, for visibility, state changes are censored above 60. Left: we use all data from all trials to generate the normalized histogram. Right: we plot the normalized histogram when we exclude the first 100 trials of a session. As can be observed, the left and right normalized histograms are very similar (p-value = 0.96 using KS-test). While the GLM-HMM is able to capture ‘warm-up’ effects (as described in the main text), this test reveals that the GLM-HMM is able to capture more than this, and state switching occurs much later in the session too (as also indicated by the posterior state probabilities shown in a).
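The state changes counted here are identified from the posterior state probabilities; a minimal sketch of that bookkeeping (assuming each trial is assigned to its most probable state, which is one reasonable convention) is:

```python
import numpy as np

def count_state_changes(posterior):
    """Count state switches in a session from posterior state probabilities.

    posterior: (n_trials, n_states) array; each trial is assigned to its
    most probable state, and switches between consecutive trials counted.
    """
    states = np.argmax(posterior, axis=1)
    return int(np.sum(states[1:] != states[:-1]))

def changes_per_500(posterior):
    """Normalize the switch count by session length, per 500 trials."""
    n_trials = posterior.shape[0]
    return count_state_changes(posterior) / n_trials * 500.0
```

The second function corresponds to the normalization used in panel c, which makes sessions of different lengths comparable.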

Extended Data Fig. 6 GLM-HMM captures ‘warm-up’ effect for Odoemene et al. animals.

Average (across 20 sessions) posterior state probabilities for the first 200 trials of a session for each animal in the Odoemene et al. dataset. Orange corresponds to the engaged state, green to the biased left, blue to the biased right and pink to the win-stay state from Fig. 5. Error bars represent standard errors.

Extended Data Fig. 7 GLM-HMM posterior state probabilities at end of session.

Average (across 20 sessions) posterior state probabilities for the last 200 trials of a session for each animal in the Odoemene et al. dataset. Orange corresponds to the engaged state, green to the biased left, blue to the biased right and pink to the win-stay state from Fig. 5. Error bars represent standard errors.

Extended Data Fig. 8 Simulated data from GLM-HMM captures statistics of real choice data.

(a) Definition of choice run-length. Shown are the choices that an IBL mouse made over the course of 30 trials (red, bottom), as well as the choices it should have made during that same time course if the mouse performed the task perfectly (grey, top). Choice run-length is defined as the number of trials during which a mouse repeated the same decision (example choice run-lengths of 2, 3 and 9 trials are highlighted). (b) Red: fraction of trials in choice run-lengths of between 1 and 30 trials when calculated from all trials for all mice. Grey: distribution of choice run-lengths that would have been obtained if IBL mice performed the task perfectly. (c) Difference in choice run-length distribution for simulated data (from three different models) compared to the red distribution shown in (b). Models used to simulate data were a lapse model with only stimulus intensity and bias regressors, a lapse model that also included history regressors (previous choice and win-stay-lose-switch), and a 3 state GLM-HMM (also with history regressors). We simulated 100 example choice sequences from each model and calculated the mean histogram of choice run-lengths across the 100 simulations. This was then subtracted from the red histogram of (b). (d) Number of choice run-lengths with more than 5 trials for each model simulation used in (c). In the 181,530 trials of real choice data, there were 6111 run-lengths lasting more than 5 trials (as shown with the dashed line). When we simulated choice data according to each of the models shown in (c), we found that only the GLM-HMM could generate simulations with as many run-lengths lasting more than 5 trials as in the real data (15/100 simulations had 6111 or more run-lengths lasting more than 5 trials for the GLM-HMM compared to 0/100 for both of the lapse models).
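Choice run-lengths as defined in panel a can be computed in a few lines of NumPy; the following sketch is illustrative rather than our exact analysis code.

```python
import numpy as np

def run_lengths(choices):
    """Lengths of consecutive runs of identical choices.

    choices: sequence of 0/1 decisions; returns a list of run lengths
    whose sum equals the number of trials.
    """
    choices = np.asarray(choices)
    # indices where the choice differs from the previous trial
    change_points = np.flatnonzero(np.diff(choices)) + 1
    bounds = np.concatenate(([0], change_points, [len(choices)]))
    return np.diff(bounds).tolist()
```

Applying this to the real and simulated choice sequences, and histogramming the results, yields the distributions compared in panels b-d.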

Extended Data Fig. 9 GLM-HMM Recovery Analysis 1.

For dataset sizes comparable to those of real animals, we can recover the IBL and Odoemene global parameters in simulated data. (a) Dataset sizes for each of the 37 IBL animals studied (left) and 15 mice from Odoemene et al. (right). The dashed vertical line indicates the number of trials that we used in simulation data in panels b, c and d (3240 for the IBL parameter regime and 12000 for the Odoemene regime simulation). (b) Test set loglikelihood for each of 5 simulations is maximized at 3 states (blue vertical line) after we simulate according to the (IBL regime) parameters shown in panel c. Similarly, in the right panel, test set loglikelihood is maximized at 4 states when we simulate choice data with the (Odoemene regime) parameters shown in panel d. The thick black line marked as ‘ex. sim.’ (example simulation) indicates the simulation whose generative and recovered parameters we show in panels c and d. (c) Left: the generative and recovered GLM weights (for the simulation marked as ‘ex. sim.’ in panel b) when we simulate choice data in the IBL parameter regime. Middle and right: the generative and recovered transition matrices. (d) The generative and recovered parameters in the Odoemene et al. parameter regime.
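Simulating choice data from generative GLM-HMM parameters, as in these recovery analyses, amounts to alternately sampling a state from the transition matrix and a choice from that state's Bernoulli GLM. A minimal sketch (parameter shapes are illustrative, not our exact implementation):

```python
import numpy as np

def simulate_glm_hmm(inputs, W, A, pi0, rng):
    """Simulate states and binary choices from a GLM-HMM.

    inputs: (T, D) design matrix; W: (K, D) per-state GLM weights;
    A: (K, K) transition matrix; pi0: (K,) initial state distribution.
    """
    T, K = inputs.shape[0], A.shape[0]
    states = np.empty(T, dtype=int)
    choices = np.empty(T, dtype=int)
    z = rng.choice(K, p=pi0)
    for t in range(T):
        if t > 0:
            z = rng.choice(K, p=A[z])     # Markov state transition
        states[t] = z
        p_right = 1.0 / (1.0 + np.exp(-(inputs[t] @ W[z])))
        choices[t] = rng.random() < p_right
    return states, choices
```

Fitting the GLM-HMM to such simulated datasets and comparing the recovered weights and transition matrices with the generative ones is the essence of panels b-d.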

Extended Data Fig. 10 GLM-HMM Recovery Analysis 2: We can recover lapse behavior.

(a) We simulate 5 datasets, each with 3240 trials, according to the best fitting lapse model for IBL animals. We then fit these simulated datasets with GLM-HMMs, as well as a lapse model (a constrained 2 state GLM-HMM). The test set loglikelihood is highest for the lapse model in all simulations, indicating that lapse behavior can be distinguished from the long-enduring multi-state behavior that best described the real data. The thick black line marked as ‘ex. sim.’ (example simulation) indicates the simulation whose generative and recovered parameters we show in panels b and c. (b) Left: the generative and recovered weights when recovery is with a lapse model. Right: the generative weights are the same as in the left panel, but we now recover with an unconstrained 2 state GLM-HMM (thus the stimulus, previous choice and w.s.l.s. weights for the second state can be non-zero). (c) The generative (left) transition matrix and the recovered transition matrices when we recover with a lapse model (middle) and an unconstrained 2 state GLM-HMM (right). While the lapse model and 2 state GLM-HMM results do not perfectly agree, if mice were truly lapsing, the transition matrix would not have the large diagonal entries that we observe in the real data.
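The lapse model referred to here can be written as a logistic psychometric function mixed with random guessing; the following sketch assumes symmetric lapses with rate gamma per side (total lapse probability 2*gamma), which is one common parameterization.

```python
import numpy as np

def lapse_psychometric(x, w, b, gamma):
    """Probability of a rightward choice under a symmetric lapse model.

    With probability 2*gamma the subject lapses and guesses at random;
    otherwise the choice follows a logistic function of the stimulus:
        p(right | x) = gamma + (1 - 2*gamma) * sigmoid(w*x + b)
    so the psychometric curve asymptotes at gamma and 1 - gamma rather
    than at 0 and 1.
    """
    return gamma + (1.0 - 2.0 * gamma) * 1.0 / (1.0 + np.exp(-(w * x + b)))
```

Because the lapse probability is constant across trials, this model cannot produce the long runs of state-dependent behavior captured by the sticky diagonal entries of the GLM-HMM transition matrix.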

Supplementary information

Supplementary Information

Supplementary Figs. 1–14


Ashwood, Z.C., Roy, N.A., Stone, I.R. et al. Mice alternate between discrete strategies during perceptual decision-making. Nat Neurosci 25, 201–212 (2022). https://doi.org/10.1038/s41593-021-01007-z

