Abstract
When experts are immersed in a task, do their brains prioritize task-related activity? Most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. We wondered whether task-performing animals explore a broader movement landscape and how this impacts neural activity. We characterized movements using video and other sensors and measured neural activity using widefield and two-photon imaging. Cortex-wide activity was dominated by movements, especially uninstructed movements not required for the task. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity often reflected utterly different combinations of cognitive and movement variables. Other movements occurred idiosyncratically, accounting for trial-by-trial fluctuations that are often considered ‘noise’. This held true throughout task-learning and for extracellular Neuropixels recordings that included subcortical areas. Our observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity.
Data availability
The data from this study will be stored on a dedicated, backed-up repository maintained by Cold Spring Harbor Laboratory. A link to the repository can be found at http://churchlandlab.labsites.cshl.edu/code/.
Code availability
The MATLAB code used for the data analysis in this study is available online at http://churchlandlab.labsites.cshl.edu/code/.
Acknowledgements
We thank O. Odoemene, S. Pisupati and H. Nguyen for technical assistance and scientific discussions; H. Zeng for providing Ai93 mice; J. Tucciarone and F. Marbach for breeding assistance; A. Mills and P. Shrestha for providing GFP mice; T. Harris, S. Caddick and the Allen Institute for Brain Science for assistance with the Neuropixels probes; and N. Steinmetz, M. Pachitariu and K. Harris for widefield analysis code. Financial support was received from the Swiss National Science Foundation (S.M., grant no. P2ZHP3_161770), the Pew Charitable Trusts (A.K.C.), the Simons Collaboration on the Global Brain (A.K.C., M.T.K.), the NIH (grant no. R01EY022979) and the Army Research Office under contract no. W911NF-16-1-0368 as part of the collaboration between the US DOD, the UK MOD and the UK Engineering and Physical Sciences Research Council under the Multidisciplinary University Research Initiative (A.K.C.).
Author information
Contributions
S.M., M.T.K. and A.K.C. designed the experiments. S.M. and S.G. trained animals and recorded widefield data. S.M. performed surgeries. M.T.K. and S.M. acquired two-photon data, designed the linear model and performed data analysis. A.L.J. recorded and spike-sorted Neuropixels data. A.K.C., M.T.K. and S.M. wrote the paper with assistance from S.G. and A.L.J. S.M. and M.T.K. contributed equally.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Neuroscience thanks Mackenzie Mathis, Mala Murthy and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Integrated supplementary information
Supplementary Figure 1 Overview of cortical areas.
Shown are cortical areas based on the Allen common coordinate framework v.3. 1: Olfactory bulb (combined); 2: Frontal pole, cerebral cortex; 3: Prelimbic area; 4: Anterior cingulate area, dorsal part; 5: Secondary motor area; 6: Primary motor area; 7: Primary somatosensory area, mouth; 8: Primary somatosensory area, upper limb; 9: Primary somatosensory area, nose; 10: Primary somatosensory area, lower limb; 11: Primary somatosensory area, unassigned; 12: Supplemental somatosensory area; 13: Primary somatosensory area, trunk; 14: Primary somatosensory area, barrel field; 15: Ventral auditory area; 16: Anterior visual area; 17: Retrosplenial area, dorsal part; 18: Anteromedial visual area; 19: Rostrolateral visual area; 20: Dorsal auditory area; 21: Primary auditory area; 22: Retrosplenial area, lateral agranular part; 23: Posteromedial visual area; 24: Primary visual area; 25: Anterolateral visual area; 26: Posterior auditory area; 27: Lateral visual area; 28: Laterointermediate area; 29: Temporal association areas; 30: Postrhinal area; 31: Posterolateral visual area.
Supplementary Figure 2 Visual sign maps for all mice.
Shown are visual field sign maps for all trained animals, aligned to the Allen CCF. Mapped areas largely agreed with the corresponding locations of visual areas in the CCF.
Supplementary Figure 3 Auto- and cross-correlation and impact of sampling rate.
(a) Autocorrelation of the imaging data at different time shifts between −1.5 and 1.5 seconds at 30 Hz. Explained variance falls off relatively quickly with a time constant of 150 ms. High autocorrelations when shifting by a single frame also indicate that imaging at sampling rates above 30 Hz would not provide much additional information, potentially due to the kinetics of the GCaMP6f indicator. (b) Cross-correlation between the imaging data and the model prediction for time shifts between −1.5 and 1.5 seconds. Explained variance of the behavioral prediction falls off with a time constant of 245 ms, indicating that the model mostly fits slower fluctuations on the order of 100–200 ms instead of fast frame-to-frame changes. (c) cvR2 for a model consisting only of the previous frame (blue) or the behavioral model (red) at different sampling rates. At 30 Hz, the previous frame can predict a high degree of variance. However, predictive power is much lower at lower sampling rates. In contrast, cvR2 for the behavioral model increases at lower sampling rates. This demonstrates that the model does not benefit from autocorrelations in the data but benefits from averaging out fast fluctuations in the imaging data. The box shows the first and third quartiles, the inner line is the median over 22 recordings. Whiskers represent minimum and maximum values.
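The shift analysis in panels (a, b) amounts to sliding one signal against the other and measuring the squared correlation at each lag. A minimal sketch, assuming frame-sampled traces (an illustrative Python reimplementation; the published analysis used MATLAB, and the sine-wave signals in the usage example below are hypothetical stand-ins for imaging data and model predictions):

```python
import numpy as np

def shifted_r2(y, y_hat, max_shift):
    """R^2 between signal y and prediction y_hat at frame shifts
    from -max_shift to +max_shift (positive shift = prediction leads)."""
    n = len(y)
    shifts = list(range(-max_shift, max_shift + 1))
    r2 = []
    for s in shifts:
        if s >= 0:
            a, b = y[s:], y_hat[:n - s]
        else:
            a, b = y[:n + s], y_hat[-s:]
        # squared Pearson correlation as the explained-variance measure
        r2.append(np.corrcoef(a, b)[0, 1] ** 2)
    return shifts, r2
```

Plotting `r2` against `shifts` for data versus its own past (panel a), or data versus the model prediction (panel b), yields the fall-off time constants described above.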
Supplementary Figure 4 Unique explained variance for different expert groups.
Shown are cortical maps of unique model contributions for the right vision, right handle and nose variables. The left column shows the average over all recordings from 11 animals. All maps identified specific cortical areas that sensibly corresponded to their respective model variable. Maps were highly robust when averaging over all visual experts (6 mice, 12 recordings, middle column) or auditory experts (5 mice, 10 recordings, right column), respectively.
Supplementary Figure 5 GFP and GCaMP6s controls.
(a) Remaining variance in fluorescence after subtracting hemodynamic signals. Only ~10% of the variance remained in GFP controls (GFP), 38% in Ai93 mice (Ai93) and 89% in GCaMP6s-expressing animals (G6s). This demonstrates that the hemodynamic correction accurately rejects most intrinsic activity while leaving calcium-related signals intact. The box shows the first and third quartiles, the inner line is the median. Box whiskers represent minimum and maximum values. (b) Absolute amount of remaining variance for individual pixels. Remaining variance in GFP controls was much lower than in GCaMP-expressing mice, across the dorsal cortex. (c) Cross-validated R2 of the full linear model. Conventions as in (a). Explained variance was lowest in GFP controls and highest in GCaMP6s animals, demonstrating that the widespread predictive power of the linear model is not explained by predicting hemodynamic signals. However, the model still accounted for ~21% of the variance in GFP animals, indicating that the hemodynamic correction is imperfect and the remaining fluorescence still contains a small but predictable component. (d) Absolute amount of predicted variance for individual pixels. Comparing absolute explained variance (instead of the percentages in (a) and (c)) shows that, although a smaller percentage of fluorescence in GFP animals could be predicted, the absolute amount of predicted variance is extremely small compared to Ai93 mice. Absolute predicted variance was also much higher in GCaMP6s-expressing animals, further demonstrating that the model's success is due to accurate prediction of neural dynamics instead of intrinsic signals. (e) Unique model contribution maps for the right visual stimulus variable. No specific unique contribution was apparent in GFP mice but was clearly visible for Ai93 and GCaMP6s mice. Unique contributions were also well-localized to visual areas. (f) β-weights in V1. Visual responses were strongest in GCaMP6s animals and absent in GFP controls. Shading denotes the SEM over sessions. (g, h) Same as (e, f) for the right handle variable. (a–h) n = 4 GFP recordings, 22 Ai93 recordings and 4 GCaMP6s recordings.
Supplementary Figure 6 Auditory rate discrimination task.
(a) Schematic of the rate discrimination task. Mice are presented with auditory click sounds on both sides and identify the target side that contains more clicks to obtain a water reward. Stimulus sequences were 1-s long and clicks were randomly distributed. (b) Discrimination performance of an example animal. Right choice probability increases with the number of rightward pulses. Animals performed the task with high accuracy and were between 90–95% correct with the easiest stimuli. Shown are means ± 95% confidence intervals. n = 8000 trials (2000 trials per animal). (c) Psychophysical reverse correlation revealed the time points at which stimuli most strongly influenced the animals' decisions. Positive weights predict rightward and negative weights leftward decisions. Weights were non-zero for the entire stimulus duration, demonstrating that animals integrated sensory evidence over time to perform the task. Shown are means ± SEM for 4 mice. (d) Maps of unique model contribution for different variable groups, similar to Fig. 4e. As in the main results, unique contributions from uninstructed movements were highest across dorsal cortex. Shown are averages over 40 recordings from 4 mice.
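Psychophysical reverse correlation of the kind shown in panel (c) can be computed as a logistic regression of choices onto per-time-bin evidence. The sketch below is an illustrative Python reimplementation under simplified assumptions (the published analysis used MATLAB; the evidence format and all names here are hypothetical):

```python
import numpy as np

def psychophysical_kernel(evidence, choices, lr=0.1, n_iter=3000):
    """Fit logistic-regression weights relating per-time-bin evidence
    (e.g., right-minus-left clicks; shape trials x bins) to rightward
    choices (0/1). Positive weights predict rightward decisions."""
    n_trials, n_bins = evidence.shape
    w = np.zeros(n_bins)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-evidence @ w))          # P(choose right) per trial
        w += lr * evidence.T @ (choices - p) / n_trials  # gradient ascent on log-likelihood
    return w
```

Time bins with large fitted weights are those in which the stimulus most strongly influenced the decision, so a kernel that stays non-zero across the whole stimulus indicates temporal integration of evidence.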
Supplementary Figure 7 Model performance without analog predictors.
(a) Cross-validated explained variance for a reduced model without any analog regressors. The model's performance was lower than the full model in Fig. 3a but still predicted a large amount of variance. Averaged across cortex, the event kernel-only model predicted 30.8 ± 0.2% (mean ± SEM, n = 22 sessions) of all variance. (b) Unique model contribution map for each variable group. (c) Explained variance for variable groups, averaged across cortical maps. Shown is either cvR2 (light green) or ∆R2 (dark green). The box shows the first and third quartiles, the inner line is the median over 22 sessions. Box whiskers represent minimum and maximum values. Even after removing all analog predictors, uninstructed movements made the highest unique contributions across cortex. This demonstrates that their importance for predicting cortical activity is not merely due to the inclusion of analog predictors such as the video variables.
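The two measures used throughout these legends are cvR2, the cross-validated explained variance of a model, and ∆R2, the unique contribution of a variable group, obtained by comparing the full model against a reduced model lacking that group's regressors. A minimal sketch of both, assuming simulated data and hypothetical names (illustrative Python only; the published MATLAB analysis used ridge regression with a marginal-likelihood-tuned penalty):

```python
import numpy as np

def cv_r2(X, y, lam=1.0, n_folds=5):
    """Cross-validated R^2 of a ridge regression of y on the design matrix X."""
    n = len(y)
    folds = np.array_split(np.arange(n), n_folds)
    y_hat = np.empty(n)
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        Xt, yt = X[train], y[train]
        # closed-form ridge solution, fit on the training folds only
        w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
        y_hat[test] = X[test] @ w
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def delta_r2(X, y, group_cols, lam=1.0):
    """Unique contribution of one variable group: full-model cvR^2 minus
    the cvR^2 of a reduced model without the group's regressors."""
    keep = np.setdiff1d(np.arange(X.shape[1]), group_cols)
    return cv_r2(X, y, lam) - cv_r2(X[:, keep], y, lam)
```

A group with a large ∆R2 carries information that no other regressor can substitute for, which is why uninstructed movements remain dominant even in this reduced model.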
Supplementary Figure 8 Controlling for interictal events.
(a) Scatter plots show the distribution of peaks in cortical activity, averaged over cortex. Left: Example animal, raised on a DOX diet. Peaks were of variable length and remained below 5% prominence. Right: Example animal, raised on a standard diet. Peaks of short latency and high prominence are clearly visible (red dots). (b) Interictal event probability for all mice. Circles show individual sessions (two per animal). Four out of five mice that were raised on a standard (non-DOX) diet show potential interictal activity. (c) Example trace for removal of interictal activity using autoregressive interpolation. (d, e) Modeling results for all DOX-raised animals. Similar to Fig. 4c, d. The box shows the first and third quartiles, the inner line is the median. Box whiskers represent minimum and maximum values. n = 7 mice. (f, g) Modeling results for all non-DOX-raised animals. Modeling results between DOX-raised and non-DOX-raised mice were highly similar, demonstrating that our results are not due to potential interictal activity in some of the mice. n = 4 mice.
Supplementary Figure 9 Predicted single-neuron variance for individual model variables.
Shown is either all explained variance (cvR2, light green) or unique model contributions (∆R2, dark green) for individual model variables in each cortical area. The box shows the first and third quartiles, the inner line is the median over 5 animals per area. Box whiskers represent minimum and maximum values. Prev.: previous.
Supplementary Figure 10 Depth comparison and motion control for 2-photon data.
(a) Explained variance for groups of model variables at different cortical depths. Shown is either all explained variance (cvR2, light green) or unique model contributions (∆R2, dark green). Superficial recordings were made from 150–350 µm, infragranular recordings between 350–450 µm. All infragranular recordings were performed in areas ALM (2655 neurons) or MM (3907 neurons). The box shows the first and third quartiles, the inner line is the median over 5 animals. Box whiskers represent minimum and maximum values. (b) Explained variance of variable groups for individual neurons, sorted by full-model performance (light gray trace). Conventions as in Fig. 6e but excluding imaging frames that were translated more than 2 pixels in either X- or Y-direction, relative to a reference image. As in Fig. 6e, uninstructed movements were most important to predict single-cell variance, demonstrating that this is not due to motion of the imaging plane with animal movement. (c) Explained variance for individual model variables averaged over all neurons after excluding translated imaging frames as described above. Shown is either all explained variance (light green) or unique model contributions (dark green) for 10 animals. Conventions as in (a).
Supplementary Figure 11 Revealing hidden task dynamics with PETH partitioning.
(a) Histogram of modulation indices for either the task (left), uninstructed (middle) or instructed (right) movement groups. Dashed lines indicate the respective indices for the three example cells in (b). In contrast to Fig. 7h, all cells were best explained by a combination of all three model groups. (b) Three example cells with distinct task-related dynamics (left column, green) that were uncovered by accounting for uninstructed (middle column, black) and instructed movements (right column, blue). In all cases, task-related dynamics were clearly distinct from the uncorrected PETHs (gray traces) and revealed different response features, especially during the stimulus and delay period. (c) Contribution of individual model variables to the PETH reconstruction. Shown is the absolute PETH modulation for each variable as a percentage of the total modulation by all model variables. Example cells are the same as cells 1–3 in Fig. 7. While the task cell has PETH contributions from many variables, the instructed cell is mostly affected by rightward licks. (d) Number of variables required to reach 25% of the total PETH contributions. Most cells require at least 3 variables to reach the threshold (red bars in (c)). This indicates that the PETH of most cells is modulated by a combination of model variables instead of being dominated by a single variable.
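The partitioning logic can be sketched compactly: because the linear model is a sum over variable groups, the trial-averaged prediction (PETH) decomposes into per-group PETHs, and each group's share of the total modulation yields its index. The following is an illustrative Python sketch with synthetic traces (all names and trace shapes are hypothetical, not the published MATLAB implementation):

```python
import numpy as np

def partition_peth(group_traces, trial_len):
    """Trial-average each variable group's predicted contribution separately.
    group_traces: dict mapping group name -> concatenated prediction
    of length n_trials * trial_len."""
    return {name: trace.reshape(-1, trial_len).mean(axis=0)
            for name, trace in group_traces.items()}

def modulation_index(peths, group):
    """Fraction of total PETH modulation (summed absolute deviation
    from each PETH's mean) attributable to one variable group."""
    mods = {g: np.sum(np.abs(p - p.mean())) for g, p in peths.items()}
    return mods[group] / sum(mods.values())
```

A cell whose uncorrected PETH is dominated by movement contributions will show a small task index here even if its trial average looks strongly task-modulated, which is the effect illustrated in panel (b).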
Supplementary information
Supplementary Video 1
Averaged cortical activity for all visual trials (n = 22 recordings). Colors show change in neural activity relative to the first second in the baseline period. Lines indicate location of cortical areas based on the Allen CCF.
Supplementary Video 2
Cortical maps of unique contribution for different model groups. Maps were generated for all frames at each time point in the trial. Bottom text denotes different episodes of the trial.
About this article
Cite this article
Musall, S., Kaufman, M.T., Juavinett, A.L. et al. Single-trial neural dynamics are dominated by richly varied movements. Nat Neurosci 22, 1677–1686 (2019). https://doi.org/10.1038/s41593-019-0502-4