# Distributed coding of choice, action and engagement across the mouse brain

## Abstract

Vision, choice, action and behavioural engagement arise from neuronal activity that may be distributed across brain regions. Here we delineate the spatial distribution of neurons underlying these processes. We used Neuropixels probes (refs. 1, 2) to record from approximately 30,000 neurons in 42 brain regions of mice performing a visual discrimination task (ref. 3). Neurons in nearly all regions responded non-specifically when the mouse initiated an action. By contrast, neurons encoding visual stimuli and upcoming choices occupied restricted regions in the neocortex, basal ganglia and midbrain. Choice signals were rare and emerged with indistinguishable timing across regions. Midbrain neurons were activated before contralateral choices and were suppressed before ipsilateral choices, whereas forebrain neurons could prefer either side. Brain-wide pre-stimulus activity predicted engagement in individual trials and in the overall task, with enhanced subcortical but suppressed neocortical activity during engagement. These results reveal organizing principles for the distribution of neurons encoding behaviourally relevant variables across the mouse brain.


## Data availability

The behavioural and neural datasets generated and analysed in this study are available as downloadable files at https://figshare.com/articles/steinmetz/9598406 and via the Open Neurophysiology Environment interface at https://figshare.com/articles/steinmetz/9974357.

## Code availability

The code used to analyse the data is available at https://github.com/nsteinme/steinmetz-et-al-2019.

## References

1. Jun, J. J. et al. Fully integrated silicon probes for high-density recording of neural activity. Nature 551, 232–236 (2017).
2. Steinmetz, N. A., Koch, C., Harris, K. D. & Carandini, M. Challenges and opportunities for large-scale electrophysiology with Neuropixels probes. Curr. Opin. Neurobiol. 50, 92–100 (2018).
3. Burgess, C. P. et al. High-yield methods for accurate two-alternative visual psychophysics in head-fixed mice. Cell Rep. 20, 2513–2524 (2017).
4. Cisek, P. & Kalaska, J. F. Neural mechanisms for interacting with a world full of action choices. Annu. Rev. Neurosci. 33, 269–298 (2010).
5. Romo, R. & de Lafuente, V. Conversion of sensory signals into perceptual decisions. Prog. Neurobiol. 103, 41–75 (2013).
6. Ding, L. & Gold, J. I. The basal ganglia’s contributions to perceptual decision making. Neuron 79, 640–649 (2013).
7. Hikosaka, O., Takikawa, Y. & Kawagoe, R. Role of the basal ganglia in the control of purposive saccadic eye movements. Physiol. Rev. 80, 953–978 (2000).
8. Felsen, G. & Mainen, Z. F. Midbrain contributions to sensorimotor decision making. J. Neurophysiol. 108, 135–147 (2012).
9. Gao, Z. et al. A cortico-cerebellar loop for motor planning. Nature 563, 113–116 (2018).
10. Chabrol, F. P., Blot, A. & Mrsic-Flogel, T. D. Cerebellar contribution to preparatory activity in motor neocortex. Neuron 103, 506–519.e4 (2019).
11. Shadlen, M. N. & Newsome, W. T. Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J. Neurophysiol. 86, 1916–1936 (2001).
12. Niell, C. M. & Stryker, M. P. Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65, 472–479 (2010).
13. Sachidhanandam, S., Sreenivasan, V., Kyriakatos, A., Kremer, Y. & Petersen, C. C. H. Membrane potential correlates of sensory perception in mouse barrel cortex. Nat. Neurosci. 16, 1671–1677 (2013).
14. Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893 (2019).
15. Siegel, M., Buschman, T. J. & Miller, E. K. Cortical information flow during flexible sensorimotor decisions. Science 348, 1352–1355 (2015).
16. Allen, W. E. et al. Global representations of goal-directed behavior in distinct cell types of mouse neocortex. Neuron 94, 891–907.e6 (2017).
17. Poort, J. et al. Learning enhances sensory and multiple non-sensory representations in primary visual cortex. Neuron 86, 1478–1490 (2015).
18. Britten, K. H., Newsome, W. T., Shadlen, M. N., Celebrini, S. & Movshon, J. A. A relationship between behavioral choice and the visual responses of neurons in macaque MT. Vis. Neurosci. 13, 87–100 (1996).
19. Yang, H., Kwon, S. E., Severson, K. S. & O’Connor, D. H. Origins of choice-related activity in mouse somatosensory cortex. Nat. Neurosci. 19, 127–134 (2016).
20. van Vugt, B. et al. The threshold for conscious report: signal loss and response bias in visual and frontal cortex. Science 360, 537–542 (2018).
21. Hernández, A. et al. Decoding a perceptual decision process across cortex. Neuron 66, 300–314 (2010).
22. Shuler, M. G. & Bear, M. F. Reward timing in the primary visual cortex. Science 311, 1606–1609 (2006).
23. Gonzalez-Castillo, J. et al. Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis. Proc. Natl Acad. Sci. USA 109, 5487–5492 (2012).
24. Vickery, T. J., Chun, M. M. & Lee, D. Ubiquity and specificity of reinforcement signals throughout the human brain. Neuron 72, 166–177 (2011).
25. Svoboda, K. & Li, N. Neural mechanisms of movement planning: motor cortex and beyond. Curr. Opin. Neurobiol. 49, 33–41 (2018).
26. Jacobs, E. A. K., Steinmetz, N. A., Carandini, M. & Harris, K. D. Cortical state fluctuations during sensory decision making. Preprint at bioRxiv https://doi.org/10.1101/348193 (2018).
27. Mackworth, N. H. The breakdown of vigilance during prolonged visual search. Q. J. Exp. Psychol. 1, 6–21 (1948).
28. Makeig, S. & Inlow, M. Lapses in alertness: coherence of fluctuations in performance and EEG spectrum. Electroencephalogr. Clin. Neurophysiol. 86, 23–35 (1993).
29. Allen, W. E. et al. Thirst regulates motivated behavior through modulation of brainwide neural population dynamics. Science 364, eaav3932 (2019).
30. Raichle, M. E. et al. A default mode of brain function. Proc. Natl Acad. Sci. USA 98, 676–682 (2001).
31. Sridharan, D., Steinmetz, N. A., Moore, T. & Knudsen, E. I. Distinguishing bias from sensitivity effects in multialternative detection tasks. J. Vis. 14, 2194077 (2014).
32. Pachitariu, M., Steinmetz, N. A., Kadir, S., Carandini, M. & Harris, K. D. Fast and accurate spike sorting of high-channel count probes with KiloSort. Adv. Neural Inf. Process. Syst. 29, 1–9 (2016).
33. Rossant, C. et al. Spike sorting for large, dense electrode arrays. Nat. Neurosci. 19, 634–641 (2016).
34. Musall, S., Kaufman, M. T., Juavinett, A. L., Gluf, S. & Churchland, A. K. Single-trial neural dynamics are dominated by richly varied movements. Nat. Neurosci. 22, 1677–1686 (2019).
35. Aertsen, A. M., Gerstein, G. L., Habib, M. K. & Palm, G. Dynamics of neuronal firing correlation: modulation of “effective connectivity”. J. Neurophysiol. 61, 900–917 (1989).
36. Gerstein, G. L. & Perkel, D. H. Simultaneously recorded trains of action potentials: analysis and functional interpretation. Science 164, 828–830 (1969).
37. Semedo, J. D., Zandvakili, A., Machens, C. K., Yu, B. M. & Kohn, A. Cortical areas interact through a communication subspace. Neuron 102, 249–259 (2019).
38. Ahrens, M. B. et al. Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485, 471–477 (2012).
39. Crapse, T. B. & Sommer, M. A. Corollary discharge across the animal kingdom. Nat. Rev. Neurosci. 9, 587–600 (2008).
40. Horwitz, G. D., Batista, A. P. & Newsome, W. T. Representation of an abstract perceptual decision in macaque superior colliculus. J. Neurophysiol. 91, 2281–2296 (2004).
41. Zatka-Haas, P., Steinmetz, N. A., Carandini, M. & Harris, K. D. Distinct contributions of mouse cortical areas to visual discrimination. Preprint at bioRxiv https://doi.org/10.1101/501627 (2018).
42. Vinck, M., Batista-Brito, R., Knoblich, U. & Cardin, J. A. Arousal and locomotion make distinct contributions to cortical activity patterns and visual encoding. Neuron 86, 740–754 (2015).
43. Shimaoka, D., Harris, K. D. & Carandini, M. Effects of arousal on mouse sensory cortex depend on modality. Cell Rep. 22, 3160–3167 (2018).
44. Steinmetz, N. A. et al. Aberrant cortical activity in multiple GCaMP6-expressing transgenic mouse lines. eNeuro 4, 0207-17.2017 (2017).
45. Guo, Z. V. et al. Flow of cortical activity underlying a tactile decision in mice. Neuron 81, 179–194 (2014).
46. Bhagat, J. et al. Rigbox: an open-source toolbox for probing neurons and behavior. Preprint at bioRxiv https://www.biorxiv.org/content/10.1101/672204v3 (2019).
47. Ludwig, K. A. et al. Using a common average reference to improve cortical neuron recordings from microelectrode arrays. J. Neurophysiol. 101, 1679–1689 (2009).
48. Park, I. M., Meister, M. L. R., Huk, A. C. & Pillow, J. W. Encoding and decoding in parietal cortex during sensorimotor decision-making. Nat. Neurosci. 17, 1395–1403 (2014).
49. Ashe, J. & Georgopoulos, A. P. Movement parameters and neural activity in motor cortex and area 5. Cereb. Cortex 4, 590–600 (1994).
50. Paninski, L., Shoham, S., Fellows, M. R., Hatsopoulos, N. G. & Donoghue, J. P. Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J. Neurosci. 24, 8551–8561 (2004).
51. Izenman, A. J. Reduced-rank regression for the multivariate linear model. J. Multivariate Anal. 5, 248–264 (1975).
52. Mathis, A. et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 21, 1281–1289 (2018).
53. McGinley, M. J., David, S. V. & McCormick, D. A. Cortical membrane potential signature of optimal states for sensory signal detection. Neuron 87, 179–192 (2015).
54. Treves, A. & Rolls, E. T. What determines the capacity of autoassociative memories in the brain? Netw. Comput. Neural Syst. 2, 371–397 (1991).
55. Shamash, P., Carandini, M., Harris, K. D. & Steinmetz, N. A. A tool for analyzing electrode tracks from slice histology. Preprint at bioRxiv https://doi.org/10.1101/447995 (2018).

## Acknowledgements

We thank A. J. Peters, M. Pachitariu and P. Shamash for software tools; H. Forrest for help with data preprocessing; C. Reddy, M. Wells and L. Funnell for help with mouse husbandry, training and histology; R. Raghupathy for help with histology; C. P. Burgess for help with experimental apparatus; T. Harris and B. Karsh for support with Neuropixels recordings; M. Häusser, A. J. Peters and S. Schroeder for feedback on the manuscript. This project was funded by the European Union’s Marie Skłodowska-Curie program (656528), the Human Frontier Sciences Program (LT001071/2015-L), the Wellcome Trust (205093, 102264), the European Research Council (694401), the Gatsby Foundation (GAT3531) and the Simons Foundation (325512). M.C. holds the GlaxoSmithKline / Fight for Sight Chair in Visual Neuroscience.

## Author information

N.A.S., M.C. and K.D.H. conceived and designed the study. N.A.S. collected data. N.A.S., P.Z.-H. and K.D.H. analysed data. N.A.S., M.C. and K.D.H. wrote the manuscript.

Correspondence to Nicholas A. Steinmetz.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Extended data figures and tables

### Extended Data Fig. 1 Behavioural performance as psychometric curves for each subject, and analysis of wheel movements.

a, Psychometric curves for mouse Cori, showing the probability of choosing left (blue), right (orange) or NoGo (black) as a function of stimulus contrasts on the left and right screens. Each row corresponds to a pedestal contrast (the minimum contrast on the left and right screens). The horizontal axis encodes the relative contrast from the pedestal value, positive numbers indicating higher contrast on the right screen, and negative numbers for higher contrast on the left screen (for example, at pedestal = 50%, a ΔContrast of +50% corresponds to trials with 50% contrast on the left screen and 100% contrast on the right screen). Dots and vertical lines indicate the empirical fraction of choices made and 95% binomial confidence intervals for the fraction estimate, pooling data over sessions. Curves indicate the fit of a multinomial logistic model: $$\mathrm{ln}\,\frac{p(Left)}{p(NoGo)}={b}_{L}+{s}_{L}{c}_{L}^{n}$$; $$\mathrm{ln}\,\frac{p(Right)}{p(NoGo)}={b}_{R}+{s}_{R}{c}_{R}^{n}$$, in which cL and cR are the contrast on the left and right, and parameters bL, sL, n, bR and sR are fit by maximum likelihood estimation to the data for each subject3. b–j, As in a, for the remaining subjects. k, The model fit for all subjects overlaid, for left choices (blue) and right choices (orange), in both cases for pedestal = 0%. l, Summary of performance on high-contrast trials. Dots reflect the session-pooled proportion correct of each mouse for trials with 100% versus 0% contrast, with 95% binomial confidence interval. m, Example segment of wheel position data showing wheel movements detected as left turns (blue), right turns (orange) or incidental movements (black). Detected onsets (green circles) and offsets (red circles) marked for each movement. y-axis scale: distance moved at the circumference of the wheel (that is, 2πRθ/360, in which R is the wheel radius and θ is its angular position in degrees). n, Wheel velocity trace for the same segment of data as in m.
o, Example wheel turns aligned to the detected onset time. The dashed box indicates the region expanded in p. p, Example wheel turns aligned to detected onset time, zoomed to show the moment of takeoff, illustrating that the wheel had moved by less than 0.5 mm by onset. The step-like appearance of the trace reflects the resolution of the rotary encoder (each step unit is 0.135 mm at the surface of the wheel). q, Decoding the eventual direction of the wheel movement using the instantaneous velocity at different times relative to detected movement onset reveals that the direction only starts to be decodable around 20 ms before detected onset, and is not reliably (>80%) decoded until the time of onset. Error bars represent s.d. across sessions (n = 39).
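The two log-odds equations of the multinomial logistic model in a can be evaluated directly to obtain choice probabilities. A minimal sketch, assuming nothing beyond the equations in the legend; the function name and the parameter values in the usage note are illustrative, not the paper's fitted estimates:

```python
import numpy as np

def choice_probabilities(c_left, c_right, b_l, s_l, b_r, s_r, n):
    """Choice probabilities under the multinomial logistic model:
    ln(p(Left)/p(NoGo))  = b_L + s_L * c_L**n
    ln(p(Right)/p(NoGo)) = b_R + s_R * c_R**n
    Contrasts c_left, c_right are in [0, 1]."""
    z_l = b_l + s_l * c_left ** n    # log-odds of Left versus NoGo
    z_r = b_r + s_r * c_right ** n   # log-odds of Right versus NoGo
    denom = 1.0 + np.exp(z_l) + np.exp(z_r)
    return np.exp(z_l) / denom, np.exp(z_r) / denom, 1.0 / denom

# Illustrative symmetric parameters: a 100% left-only stimulus
# shifts probability mass towards Left choices.
p_left, p_right, p_nogo = choice_probabilities(1.0, 0.0, 0.0, 3.0, 0.0, 3.0, 0.6)
```

With symmetric biases and sensitivities, increasing the left contrast raises p(Left) at the expense of both p(Right) and p(NoGo), reproducing the qualitative shape of the curves in a.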

### Extended Data Fig. 2 Method for histological alignment.

a, Before insertion, probes were coated with DiI. The brain was sliced and imaged, and locations of each probe’s DiI spots were manually identified on the Allen CCF atlas (15.1 ± 6.9 spots per probe). When multiple penetrations were performed in a single brain, their tracks were sufficiently far apart to avoid confusion. b, A vector is fit to the probe track using total least squares linear regression. The median distance of individual points from this vector is 39.3 µm, providing an estimate of lateral displacement error. c, To fit the longitudinal mapping from recording sites to brain locations, we used landmarks that were easily detectable by their electrophysiological signatures (arrows, left), linearly interpolating the location of sites between these landmarks. d, Visual receptive fields served as a post hoc check on correct alignment, but were not used to estimate track location. Each horizontally elongated plot with two vertical black lines indicates the responsiveness of all spikes recorded in an 80-µm-depth bin to flashed white squares at varying locations on the three screens (see Methods, Receptive field mapping). Colour map indicates spike rate, independently scaled for each map. e, Areas assigned for each recording site. Right, example DiI traces in slices corresponding to these locations. f, Example of cross-validation procedure to assess error in longitudinal alignment. For each point, the longitudinal mapping was recomputed excluding this point, and the distance from this point to the mapping fit to other points provides an estimate of longitudinal alignment error. Brain diagrams were derived from the Allen Mouse Brain Common Coordinate Framework (v.3 (2017); downloaded from http://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/).

### Extended Data Fig. 3 Examples of DiI tracks showing recording sites from the depicted sub-surface brain regions in aligned histology.

Visual inspection of the DiI tracks confirms that the probe indeed passed through that region at some point along the recording span. Thick white lines outline the region given in the plot title; grey lines outline other regions; and white arrows point to places where probe tracks were found within the given region. Blue: DAPI; green: GCaMP; red: DiI. Brain diagrams were derived from the Allen Mouse Brain Common Coordinate Framework (v.3 (2017); downloaded from http://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/).

### Extended Data Fig. 4 Global neuronal activity and distribution.

a, Activity of example neurons in VISp and VISam, showing the neuron’s waveform and anatomical location (top), rasters sorted by contralateral contrast (middle), and trial-averaged firing rates (smoothed with 30 ms causal half-Gaussian) for each of the four contralateral contrasts (bottom). Shaded regions show the s.e.m. across trials. b, Colour map showing trial-averaged firing rates of all highly activated neurons (P < 10−4 compared with pre-trial activity), vertically sorted by firing latency. Latency sorting was cross-validated: latencies for each neuron were determined from odd-numbered trials, and activity from even-numbered trials is depicted in the plot. The grey scale represents average normalized firing rate across even-numbered trials with contralateral visual stimuli and contralateral choice. c–e, Curves showing mean firing rate across responsive neurons in each area, aligned to visual stimulus onset (c), movement onset (d) or reward onset (e). Trials were included in e when a reward was earned for keeping the wheel still following zero-contrast stimuli. Shaded regions show the s.e.m. across neurons. f, The focality index, defined as $$\sum _{a}{p}_{a}^{2}/{\left(\sum _{a}{p}_{a}\right)}^{2}$$, in which pa is the proportion of neurons in area a selective for the kernel in question, measures how widely versus focally distributed a representation is, with a floor of 0.0238 for a uniform distribution (across 42 brain regions) and a maximum of 1.0 if all selective neurons were found in a single brain region. This focality index was 0.079 for Choice, 0.069 for Vision kernels and 0.040 for Action kernels; the differences between Choice and Action, as well as contralateral Vision and Action, were statistically significant (P < 0.05; bias-corrected bootstrap). Dots represent the true value and error bars represent bias-corrected bootstrap-estimated 95% confidence intervals.
Brain diagrams were derived from the Allen Mouse Brain Common Coordinate Framework (v.3 (2017); downloaded from http://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/).
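The focality index in f is straightforward to compute from the per-area proportions of selective neurons. A sketch, assuming only the formula in the legend (function name ours):

```python
import numpy as np

def focality_index(p):
    """Focality index sum_a(p_a^2) / (sum_a p_a)^2, where p[a] is the
    proportion of selective neurons in area a. Equals 1/len(p) for a
    uniform distribution and 1.0 if one area holds all selective neurons."""
    p = np.asarray(p, dtype=float)
    return np.sum(p ** 2) / np.sum(p) ** 2
```

A uniform distribution across 42 regions gives 1/42 ≈ 0.0238, matching the floor quoted in the legend, and any one-hot distribution gives exactly 1.0.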

### Extended Data Fig. 5 Comparison of the reduced-rank kernel regression method to other methods for spike train prediction.

a, Example fit of spiking data for an individual neuron with the kernel model. Green trace shows spike data smoothed with a causal filter, black shows the model’s prediction, and other coloured traces show the components of the prediction from each kernel. Data between trials are omitted from the fitting and from this plot. b, Cartoon of the three methods evaluated. In the Toeplitz and Cosine models, a predictor matrix X of size N_timepoints × N_predictors is constructed from task events (illustrated, transposed, in d). A linear fit from predictors X to spike counts Y is estimated using elastic net regularization. In the reduced-rank regression method, the predictor matrix X is the same as the Toeplitz model, but predicts Y after passing through a low-rank bottleneck (X·b), which is optimized using reduced-rank regression. c, Relationship between Action and Choice kernels, which are added or subtracted together to give the shapes for left and right choice trials. This allows separation of neurons with choice from action correlates, while still allowing for arbitrary-shaped responses on left and right trials. d, Structure of predictor matrices (shown transposed). The Toeplitz predictor has rows for each variable and time offset, which take non-zero values for time points (columns) corresponding to the appropriate time offset from the given event. The cosine model has similar structure but with rows replaced by smooth raised cosine functions, allowing a smaller number of basis functions. The reduced-rank regression model has learned a small number of dense basis functions optimized to predict spike counts. e, Density scatterplot of cross-validated variance explained for each neuron under the Toeplitz model against the reduced-rank model (top), and for the cosine model versus the reduced-rank model (bottom). Each point represents one cell, coloured to show density when they overlap. Right, magnified view of the densest region of the plot.
These comparisons show that the reduced-rank model consistently outperforms the other two (points lie below the diagonal), and that it overfits fewer neurons (fewer points with CV variance explained <0). f, The proportion of overfit-explained variance, that is, (CV_train − CV_test)/CV_train, in which CV_train is the train-set variance explained and CV_test is the test-set variance explained. Smaller values for the reduced-rank model show that it overfits less. g, Left, population decoding of contralateral visual stimulus contrast from residual population activity in each area after subtracting the prediction of a model including all other kernels. The other panels depict the same analysis for decoding of ipsilateral visual stimulus contrast, action and direction of choice. h, Distributions of the cross-validated proportion variance explained for each neuron when shuffling left and right trial choice labels (orange) together with the distribution for the original data (blue). After shuffling, a small number (14, 0.33%) of neurons are false positives by this threshold. The dashed line represents the 2% CV variance explained threshold used. The y axis is clipped.
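The low-rank bottleneck of panel b can be illustrated with the classical reduced-rank regression solution (ref. 51): fit ordinary least squares, then truncate the singular value decomposition of the fitted values to the desired rank. This is a minimal unregularized sketch, not the paper's kernel-regression implementation:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression: OLS fit followed by projection
    of the fitted values onto their top `rank` right singular vectors.
    Returns a coefficient matrix of rank at most `rank`."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)  # full-rank OLS solution
    Y_hat = X @ B_ols                              # fitted values
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V_r = Vt[:rank].T                              # rank-r output basis
    return B_ols @ V_r @ V_r.T                     # rank-constrained coefficients

# Noiseless rank-1 targets are recovered exactly by a rank-1 fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = (X @ rng.normal(size=(5, 1))) @ rng.normal(size=(1, 4))
B = reduced_rank_regression(X, Y, rank=1)
```

The rank constraint forces all output channels to share a small number of temporal basis functions, which is what lets the model use far fewer effective parameters than the Toeplitz design while predicting the same spike counts.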

### Extended Data Fig. 6 Summary of variance explained by the kernel model and population average responses on Go, Miss, and Passive trials.

a, The unique contribution of each predictor variable as assessed by nested prediction. Each panel depicts the distribution of variance explained across neurons of a single brain region, using various reduced-rank kernel regression models (compare with Figs. 3c, e, 4b). Each bar shows the 10th, 25th, 50th, 75th and 90th percentiles of the distribution for a single prediction model, colour-coded by model identity. The numbers in the subplot title indicate the number of all neurons analysed with the full model (that is, the distribution shown with the grey bar), and the number of neurons included for nested model analysis (that is, cells with ≥2% variance explained with the full model). The black bar shows distribution of variance explained by the full model in this subset; coloured bars show the unique contribution of each predictor. Note that the unique contributions need not sum to the variance of the full model, as predictor variables are correlated. Variance explained by the Action kernel (yellow) is essentially global, whereas contralateral Vision variance explained is distinctly restricted, and Choice is rare enough to be difficult to see in these plots. b, Population average firing rates across neurons for each brain region in Go, Miss, and passive trials, selected to have matched contralateral visual stimulus contrasts. The patterns characteristic of engagement can be seen in pre-stimulus activity (that is, before time 0): the pre-stimulus firing rate of midbrain, basal ganglia, and hippocampal regions is in the order Go > Miss > passive; whereas pre-stimulus activity in neocortical areas instead is arranged as passive > Miss and Go. Thalamic regions can exhibit either pattern; notably, visual thalamic regions (LGd, LP and LD) follow the pattern of neocortical areas.
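The nested-prediction logic of a — a predictor group's unique contribution is the drop in cross-validated variance explained when its columns are removed from the full design matrix — can be sketched as follows. Ridge regression stands in for the paper's reduced-rank kernel regression, and the function names are ours:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def cv_variance_explained(X, y):
    """Cross-validated proportion of variance explained by a ridge fit."""
    pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

def unique_contribution(X_full, drop_cols, y):
    """Unique contribution of a predictor group: full-model CV variance
    explained minus that of the nested model lacking those columns."""
    keep = [i for i in range(X_full.shape[1]) if i not in set(drop_cols)]
    return cv_variance_explained(X_full, y) - cv_variance_explained(X_full[:, keep], y)
```

As the legend notes, unique contributions computed this way need not sum to the full model's variance explained when predictors are correlated, because shared variance is credited to neither group.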

### Extended Data Fig. 7 Choice probability and detect probability analysis.

a, The percentage of neurons with significant ccCP (that is, neurons whose rate differed significantly between left and right choices in response to the same stimulus; left two columns) and ccDP (that is, neurons whose rate differed significantly between Go and NoGo trials in response to the same stimulus; right two columns) as a function of time aligned to visual stimulus onset (left) and movement onset (right). The horizontal dashed line represents the value expected by chance given the statistical threshold alpha = 0.05. b, Percentage of neurons in each area with significant detect probability between −50 and +50 ms from movement onset, replicating the finding from Fig. 2d, e, 3e that non-selective action signals are distributed widely. Asterisks indicate brain regions for which 95% confidence intervals for the percentage of significant neurons (black error bars) did not include the chance value (5%, horizontal dashed line). c, As in b, for choice probability in the same window. The number of trials usable in this analysis is limited, meaning that some sessions (n = 6 of 39) had to be excluded; nevertheless, this analysis broadly replicates the finding from Fig. 4b that around the time of movement onset, choice-selective neurons are restricted to frontal cortex, basal ganglia, midbrain and certain thalamic nuclei. d, As in a, for choice probability in a window +150 to 250 ms after movement onset, showing that by this time choice-related signals are distributed more widely, including visual and parietal cortex. These signals are too late to have participated in generating the choice but could reflect either corollary discharge or sensory reafference. However, they cannot reflect movement of the visual stimulus on the screen, as it is fixed during this time period. 
e, The percentage of neurons with significant choice probability, as a function of time relative to movement onset for selected areas (zoom and overlay of certain traces from a), replicating that choice-related activity is first seen in the final 50–100 ms relative to movement onset and with similar timing across multiple areas. Note that six sessions were excluded from ccCP analysis for having too few trials; these six sessions included 20.8% of the MOs neurons determined to have choice-selective responses with kernel regression. f, The pre-stimulus detect probability (after subtracting 0.5, so that positive values indicate higher rates on Go trials, and negative values the reverse) versus the mean Go − Miss firing rate difference for each area (used in Fig. 5d), demonstrating that these two quantities identify essentially the same factor. The pre-stimulus detect probability was correlated with the engagement index (that is, task − passive difference) similarly to the Go − Miss difference (with r = 0.48, P = 0.001; not shown).
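Choice probability is conventionally computed as the area under the ROC curve comparing a neuron's single-trial firing-rate distributions on the two choice types (ref. 18); equivalently, it is the probability that a rate drawn from one distribution exceeds a rate drawn from the other, counting ties as half. A sketch assuming that convention (the paper's exact significance test is described in its Methods):

```python
import numpy as np

def choice_probability(rates_a, rates_b):
    """ROC area between two firing-rate distributions: P(rate_a > rate_b)
    with ties counted as 0.5. Returns 0.5 when the distributions carry
    no choice information, 1.0 when rates_a always exceed rates_b."""
    a = np.asarray(rates_a, dtype=float)
    b = np.asarray(rates_b, dtype=float)
    greater = np.sum(a[:, None] > b[None, :])
    ties = np.sum(a[:, None] == b[None, :])
    return (greater + 0.5 * ties) / (a.size * b.size)
```

Detect probability (ccDP) has the same form, with the two distributions taken from Go versus NoGo trials at matched stimulus contrast rather than from left versus right choices.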

### Extended Data Fig. 8 jPECC analysis for determining whether correlations occur with a temporal offset between a pair of regions.

a, Canonical correlation analysis is applied to firing rates at every pair of time points relative to a behavioural event (illustration shows 0.1 s after stimulus onset in VISp and 0.15 s after in MOs). Canonical correlation analysis is applied to the pair of matrices containing each cell’s firing rate at selected times on each training set trial (90% of the total) to find dimensions in each population maximally correlated with each other. (For regularization purposes, this is applied after dimensionality reduction using principal component analysis). The strength of population correlation is summarized by the correlation of test set activity projected onto the first canonical dimension. b, Results on an example session, showing relationships between visual cortex, midbrain and frontal cortex relative to stimulus onset (top) and movement onset (bottom). Visual cortical activity leads frontal and midbrain activity, as can be seen from the below-diagonal bias in correlations. However, no lead/lag relationship is seen between midbrain and frontal cortex. Grey, P > 0.05. c, Average across all recording sessions that contained each pair of areas, showing similar relationships to the example in each case. d, e, Summary of lead–lag interactions, obtained by subtracting the averages of the jPECC coefficients over inter-area time ranges of −50 to 0 and 0 to 50 ms, as a function of time relative to stimulus onset (d) or movement onset (e). Grey region, 2 × s.e.m. across experiments. Visual cortex reliably leads frontal cortex and midbrain at around 100 ms after the stimulus; and over a range −200 to −50 ms relative to movement.

## Supplementary information

### Supplementary Information

This file contains the Supplementary Discussion, Supplementary Tables 1-3 and Supplementary Figures 1-4.

## Rights and permissions


Steinmetz, N.A., Zatka-Haas, P., Carandini, M. et al. Distributed coding of choice, action and engagement across the mouse brain. Nature (2019) doi:10.1038/s41586-019-1787-x
