To increase computational flexibility, the processing of sensory inputs changes with behavioural context. In the visual system, active behavioural states characterized by motor activity and pupil dilation [1,2] enhance sensory responses, but typically leave the preferred stimuli of neurons unchanged [2,3,4,5,6,7,8,9]. Here we find that behavioural state also modulates stimulus selectivity in the mouse visual cortex in the context of coloured natural scenes. Using population imaging in behaving mice, pharmacology and deep neural network modelling, we identified a rapid shift in colour selectivity towards ultraviolet stimuli during an active behavioural state. This was exclusively caused by state-dependent pupil dilation, which resulted in a dynamic switch from rod to cone photoreceptors, thereby extending their role beyond night and day vision. The change in tuning facilitated the decoding of ethological stimuli, such as aerial predators against the twilight sky [10]. For decades, studies in neuroscience and cognitive science have used pupil dilation as an indirect measure of brain state. Our data suggest that, in addition, state-dependent pupil dilation itself tunes visual representations to behavioural demands by differentially recruiting rods and cones on fast timescales.
The stimulus images and neuronal data used in this paper are stored at https://gin.g-node.org/cajal/Franke_Willeke_2022.
Our coding framework uses general tools such as PyTorch, NumPy, scikit-image, matplotlib, seaborn, DataJoint [70], Jupyter and Docker. We also used the following custom libraries and code: neuralpredictors (https://github.com/sinzlab/neuralpredictors) for torch-based custom functions for model implementation; nnfabrik (https://github.com/sinzlab/nnfabrik) for automatic model training pipelines using DataJoint; nndichromacy (https://github.com/sinzlab/nndichromacy) for utilities; and mei (https://github.com/sinzlab/mei) for stimulus optimization.
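The mei library synthesizes most exciting inputs (MEIs) by gradient ascent on the model input. As a minimal, self-contained sketch of that idea, here is a hypothetical toy neuron in pure Python with a hand-derived gradient and a per-step norm constraint; it is not the actual PyTorch pipeline, and all names are illustrative:

```python
import math
import random

def toy_response(weights, image):
    """Toy 'model neuron': linear filter followed by a softplus nonlinearity."""
    drive = sum(w * p for w, p in zip(weights, image))
    return math.log1p(math.exp(drive))

def optimize_mei(weights, steps=300, lr=0.1, norm=1.0, seed=0):
    """Gradient ascent on the input to maximise the toy neuron's response,
    renormalising the image after every step (a crude stand-in for the norm
    constraint used during MEI synthesis)."""
    rng = random.Random(seed)
    image = [rng.gauss(0, 0.1) for _ in weights]
    for _ in range(steps):
        drive = sum(w * p for w, p in zip(weights, image))
        slope = 1.0 / (1.0 + math.exp(-drive))  # derivative of softplus
        image = [p + lr * slope * w for p, w in zip(image, weights)]
        scale = norm / math.sqrt(sum(p * p for p in image))
        image = [p * scale for p in image]
    return image
```

For a linear-nonlinear neuron like this, the unit-norm optimum is the (normalised) filter itself, so the optimized image should align with the neuron's weights.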
Reimer, J. et al. Pupil fluctuations track fast switching of cortical states during quiet wakefulness. Neuron 84, 355–362 (2014).
Niell, C. M. & Stryker, M. P. Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65, 472–479 (2010).
Vinck, M., Batista-Brito, R., Knoblich, U. & Cardin, J. A. Arousal and locomotion make distinct contributions to cortical activity patterns and visual encoding. Neuron 86, 740–754 (2015).
Treue, S. & Maunsell, J. H. Attentional modulation of visual motion processing in cortical areas MT and MST. Nature 382, 539–541 (1996).
Erisken, S. et al. Effects of locomotion extend throughout the mouse early visual system. Curr. Biol. 24, 2899–2907 (2014).
Reimer, J. et al. Pupil fluctuations track rapid changes in adrenergic and cholinergic activity in cortex. Nat. Commun. 7, 13289 (2016).
Bennett, C., Arroyo, S. & Hestrin, S. Subthreshold mechanisms underlying state-dependent modulation of visual responses. Neuron 80, 350–357 (2013).
Liang, L. et al. Retinal inputs to the thalamus are selectively gated by arousal. Curr. Biol. 30, 3923–3934.e9 (2020).
McAdams, C. J. & Maunsell, J. H. Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. J. Neurosci. 19, 431–441 (1999).
Qiu, Y. et al. Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations. Curr. Biol. 31, 3233–3247.e6 (2021).
Rowell, C. H. Variable responsiveness of a visual interneurone in the free-moving locust, and its relation to behaviour and arousal. J. Exp. Biol. 55, 727–747 (1971).
Chiappe, M. E., Seelig, J. D., Reiser, M. B. & Jayaraman, V. Walking modulates speed sensitivity in Drosophila motion vision. Curr. Biol. 20, 1470–1475 (2010).
Busse, L. The influence of locomotion on sensory processing and its underlying neuronal circuits. eNeuroforum 24, A41–A51 (2018).
Schneider, D. M. Reflections of action in sensory cortex. Curr. Opin. Neurobiol. 64, 53–59 (2020).
Gerl, E. J. & Morris, M. R. The causes and consequences of color vision. Evol. Educ. Outreach 1, 476–486 (2008).
Szél, A. et al. Unique topographic separation of two spectral classes of cones in the mouse retina. J. Comp. Neurol. 325, 327–342 (1992).
Baden, T. et al. A tale of two retinal domains: near-optimal sampling of achromatic contrasts in natural scenes through asymmetric photoreceptor distribution. Neuron 80, 1206–1217 (2013).
Walker, E. Y. et al. Inception loops discover what excites neurons most using deep predictive models. Nat. Neurosci. 22, 2060–2065 (2019).
Lurz, K.-K. et al. Generalization in data-driven models of primary visual cortex. In Proc. International Conference on Learning Representations (2021).
Bashivan, P., Kar, K. & DiCarlo, J. J. Neural population control via deep image synthesis. Science 364, eaav9436 (2019).
Franke, K. et al. An arbitrary-spectrum spatial visual stimulator for vision research. eLife 8, e48779 (2019).
Liu, R. et al. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (2018).
Rhim, I., Coello-Reyes, G., Ko, H.-K. & Nauhaus, I. Maps of cone opsin input to mouse V1 and higher visual areas. J. Neurophysiol. 117, 1674–1682 (2017).
Denman, D. J., Siegle, J. H., Koch, C., Reid, R. C. & Blanche, T. J. Spatial organization of chromatic pathways in the mouse dorsal lateral geniculate nucleus. J. Neurosci. 37, 1102–1116 (2017).
Rhim, I., Coello-Reyes, G. & Nauhaus, I. Variations in photoreceptor throughput to mouse visual cortex and the unique effects on tuning. Sci. Rep. 11, 11937 (2021).
Fu, Y. et al. A cortical circuit for gain control by behavioral state. Cell 156, 1139–1152 (2014).
Schröder, S. et al. Arousal modulates retinal output. Neuron 107, 487–495.e9 (2020).
Eggermann, E., Kremer, Y., Crochet, S. & Petersen, C. C. H. Cholinergic signals in mouse barrel cortex during active whisker sensing. Cell Rep. 9, 1654–1660 (2014).
Tikidji-Hamburyan, A. et al. Retinal output changes qualitatively with every change in ambient illuminance. Nat. Neurosci. 18, 66–74 (2015).
Grimes, W. N., Schwartz, G. W. & Rieke, F. The synaptic and circuit mechanisms underlying a change in spatial encoding in the retina. Neuron 82, 460–473 (2014).
Pennesi, M. E., Lyubarsky, A. L. & Pugh, E. N. Jr Extreme responsiveness of the pupil of the dark-adapted mouse to steady retinal illumination. Invest. Ophthalmol. Vis. Sci. 39, 2148–2156 (1998).
Safarani, S. et al. Towards robust vision by multi-task learning on monkey visual cortex. In Advances in Neural Information Processing Systems (2021).
Bialek, W., Rieke, F., de Ruyter van Steveninck, R. R. & Warland, D. Reading a neural code. Science 252, 1854–1857 (1991).
Froudarakis, E. et al. Object manifold geometry across the mouse cortical visual hierarchy. Preprint at bioRxiv https://doi.org/10.1101/2020.08.20.258798 (2020).
Dadarlat, M. C. & Stryker, M. P. Locomotion enhances neural encoding of visual stimuli in mouse V1. J. Neurosci. 37, 3764–3775 (2017).
Spitzer, H., Desimone, R. & Moran, J. Increased attention enhances both behavioral and neuronal performance. Science 240, 338–340 (1988).
Wiersma, C. A. & Oberjat, T. The selective responsiveness of various crayfish oculomotor fibers to sensory stimuli. Comp. Biochem. Physiol. 26, 1–16 (1968).
Maimon, G., Straw, A. D. & Dickinson, M. H. Active flight increases the gain of visual motion processing in Drosophila. Nat. Neurosci. 13, 393–399 (2010).
Bezdudnaya, T. et al. Thalamic burst mode and inattention in the awake LGNd. Neuron 49, 421–432 (2006).
de Gee, J. W. et al. Mice regulate their attentional intensity and arousal to exploit increases in task utility. Preprint at bioRxiv https://doi.org/10.1101/2022.03.04.482962 (2022).
Andermann, M. L., Kerlin, A. M., Roumis, D. K., Glickfeld, L. L. & Reid, R. C. Functional specialization of mouse higher visual cortical areas. Neuron 72, 1025–1039 (2011).
Cronin, T. W. & Bok, M. J. Photoreception and vision in the ultraviolet. J. Exp. Biol. 219, 2790–2801 (2016).
Hulburt, E. O. Explanation of the brightness and color of the sky, particularly the twilight sky. J. Opt. Soc. Am. 43, 113–118 (1953).
Storchi, R. et al. Measuring vision using innate behaviours in mice with intact and impaired retina function. Sci. Rep. 9, 10396 (2019).
Meyer, A. F., Poort, J., O’Keefe, J., Sahani, M. & Linden, J. F. A head-mounted camera system integrates detailed behavioral monitoring with multichannel electrophysiology in freely moving mice. Neuron 100, 46–60.e7 (2018).
Wald, G. Human vision and the spectrum. Science 101, 653–658 (1945).
Lamb, T. D. Why rods and cones? Eye 30, 179–185 (2016).
Larsen, R. S. & Waters, J. Neuromodulatory correlates of pupil dilation. Front. Neural Circuits 12, 21 (2018).
Douglas, R. H. The pupillary light responses of animals; a review of their distribution, dynamics, mechanisms and functions. Prog. Retin. Eye Res. 66, 17–48 (2018).
Eberhardt, L. V., Grön, G., Ulrich, M., Huckauf, A. & Strauch, C. Direct voluntary control of pupil constriction and dilation: exploratory evidence from pupillometry, optometry, skin conductance, perception, and functional MRI. Int. J. Psychophysiol. 168, 33–42 (2021).
Froudarakis, E. et al. Population code in mouse V1 facilitates readout of natural scenes through increased sparseness. Nat. Neurosci. 17, 851–857 (2014).
Mathis, A. et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 21, 1281–1289 (2018).
Garrett, M. E., Nauhaus, I., Marshel, J. H. & Callaway, E. M. Topography and areal organization of mouse visual cortex. J. Neurosci. 34, 12587–12600 (2014).
Sofroniew, N. J., Flickinger, D., King, J. & Svoboda, K. A large field of view two-photon mesoscope with subcellular resolution for in vivo imaging. eLife 5, e14472 (2016).
Pnevmatikakis, E. A. et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89, 285–299 (2016).
Henriksson, J. T., Bergmanson, J. P. G. & Walsh, J. E. Ultraviolet radiation transmittance of the mouse eye and its individual media components. Exp. Eye Res. 90, 382–387 (2010).
Schmucker, C. & Schaeffel, F. A paraxial schematic eye model for the growing C57BL/6 mouse. Vision Res. 44, 1857–1867 (2004).
Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).
Grozdanic, S. et al. Characterization of the pupil light reflex, electroretinogram and tonometric parameters in healthy mouse eyes. Curr. Eye Res. 26, 371–378 (2003).
Szatko, K. P. et al. Neural circuits in the mouse retina support color vision in the upper visual field. Nat. Commun. 11, 3481 (2020).
Yoshimatsu, T., Schröder, C., Nevala, N. E., Berens, P. & Baden, T. Fovea-like photoreceptor specializations underlie single UV cone driven prey–capture behavior in zebrafish. Neuron 107, 320–337.e6 (2020).
Perlin, K. An image synthesizer. SIGGRAPH Comput. Graph. 19, 287–296 (1985).
Schwartz, O., Pillow, J. W., Rust, N. C. & Simoncelli, E. P. Spike-triggered neural characterization. J. Vis. 6, 484–507 (2006).
Ioffe, S. & Szegedy, C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In Proc. 32nd International Conference on Machine Learning (2015).
Clevert, D.-A., Unterthiner, T. & Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). In Proc. International Conference on Learning Representations (2016).
Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proc. 30th IEEE Conference on Computer Vision and Pattern Recognition (2017).
Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In Proc. International Conference on Learning Representations (2015).
Pospisil, D. A. & Bair, W. The unbiased estimation of the fraction of variance explained by a model. PLoS Comput. Biol. 17, e1009212 (2021).
Wood, S. N. Generalized Additive Models: An Introduction with R (Chapman and Hall/CRC, 2006).
Yatsenko, D. et al. DataJoint: managing big scientific data using MATLAB or Python. Preprint at bioRxiv https://doi.org/10.1101/031658 (2015).
Tan, Z., Sun, W., Chen, T.-W., Kim, D. & Ji, N. Neuronal representation of ultraviolet visual stimuli in mouse primary visual cortex. Sci. Rep. 5, 12597 (2015).
Mouland, J. W. et al. Extensive cone-dependent spectral opponency within a discrete zone of the lateral geniculate nucleus supporting mouse color vision. Curr. Biol. 31, 3391–3400.e4 (2021).
We thank G. Horwitz, T. Euler, M. Mathis, T. Baden, L. Höfling and Y. Qiu for feedback on the manuscript and D. Kim, D. Sitonic, D. Tran, Z. Ding, K. Lurz, M. Bashiri, C. Blessing and E. Walker for technical support and helpful discussions. We also thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Konstantin F. Willeke. This work was supported by the Carl-Zeiss-Stiftung (to F.H.S.), the DFG Cluster of Excellence ‘Machine Learning—New Perspectives for Science’ (to F.H.S.; EXC 2064/1, project number 390727645), an AWS Machine Learning research award (to F.H.S.), the Intelligence Advanced Research Projects Activity (IARPA) through the Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003 (to A.S.T.), grant R01 EY026927 (to A.S.T.), grant U01 UF1NS126566 (to A.T.), a NEI/NIH Core Grant for Vision Research (P30EY002520) and an NSF NeuroNex grant 1707400 (to A.S.T.). The US Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the US Government.
The authors declare no competing interests.
Peer review information
Nature thanks Najib Majaj, Nathalie Rochefort and Aman Saleem for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
Extended Data Fig. 1 Selection of coloured naturalistic scenes and pupil changes with monitor intensity.
a, Mean intensity in 8-bit pixel space of the green and blue channels of randomly sampled ImageNet images (light gray; n=6,000) and selected images (dark gray; n=6,000). Images were selected such that the distributions of mean intensities of the blue and green image channels were not significantly different. Selected images can be downloaded from the online repository (see Data Availability in the Methods section). b, Distribution of correlation and mean squared error (MSE) across green and blue image channels. To increase chromatic content, only images with MSE > 85 were selected for visual stimulation. c, Mean screen intensity (top) and pupil size changes (bottom) for n=50 trials. Dotted lines in the bottom plot indicate the 5th and 95th percentiles, respectively. d, Screen-intensity-triggered pupil traces (top) for n=3 scans performed in different animals. The vertical dotted line indicates the time point of the screen intensity increase. Bottom shows the mean change in pupil size (black; s.d. shading in gray) upon an increase in screen intensity. Compared to pupil dilation induced by behavioural state, changes in monitor intensity over time elicited only minor changes in pupil size.
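The channel-based selection in (a, b) can be sketched as follows; this is a stdlib-only illustration in which the function names and the toy (green, blue) channel-pair image format are hypothetical, with the MSE > 85 threshold taken from the legend:

```python
def channel_stats(green, blue):
    """Per-channel mean intensity and pixel-wise MSE between the two channels."""
    mean_g = sum(green) / len(green)
    mean_b = sum(blue) / len(blue)
    mse = sum((g - b) ** 2 for g, b in zip(green, blue)) / len(green)
    return mean_g, mean_b, mse

def select_chromatic(images, mse_threshold=85.0):
    """Keep only images whose green/blue channel MSE exceeds the threshold,
    mirroring the chromatic-content criterion of panel (b)."""
    return [img for img in images if channel_stats(*img)[2] > mse_threshold]
```

An image with identical channels has MSE 0 and is dropped, while one whose channels differ by 10 intensity levels per pixel has MSE 100 and is kept.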
Extended Data Fig. 2
a, Response reliability plotted versus test correlation (left) and correlation to average (right) for the data shown in Fig. 2 (n=1,759 cells, n=3 scans, n=1 mouse). b, Mean Poisson loss (lower is better) for different models trained on the dataset from (a). The default model is used for all analyses, while models 1–3 are shown for comparison. The dotted line marks the mean Poisson loss of the default model. The default model had significantly lower Poisson loss values compared to all three alternative models (Wilcoxon signed-rank test (two-sided), n=1,759: p < 10^−288 (model 1), 10^−200 (model 2), 10^−18 (model 3)). Error bars show the 95% confidence interval. c, Mean response reliability, test correlation and correlation to average across neurons (error bars: s.d. across neurons; n=478 to n=1,160 neurons per recording) for n=10 models, with control and drug condition indicated below. d, Pupil size and locomotion speed trace of an example animal, with active trials indicated by red dots. Trials were considered active if pupil size > 60th percentile and/or locomotion speed > 90th percentile. Plots on the right show mean pupil size across trials versus mean locomotion speed across trials. Dotted lines indicate the 60th and 90th percentiles of pupil size and locomotion speed, respectively. e, Example frames of the eye camera for a quiet and an active behavioural period for the control and dilated condition. For the dilated condition, the eye was often squinted during quiet periods. f, Same as (e), but for the control and constricted condition. Right plots show pupil size versus locomotion speed of trials used for model training for the control and constricted condition.
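The active/quiet trial definition in (d) amounts to simple percentile thresholding on the behavioural traces. A minimal stdlib-only sketch, with illustrative names rather than the actual pipeline's code:

```python
def percentile(values, q):
    """Linear-interpolation percentile (stdlib only)."""
    s = sorted(values)
    idx = (len(s) - 1) * q / 100.0
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

def label_active_trials(pupil, speed):
    """Label a trial 'active' if pupil size exceeds its 60th percentile
    and/or locomotion speed exceeds its 90th percentile (panel d)."""
    p60 = percentile(pupil, 60)
    s90 = percentile(speed, 90)
    return [p > p60 or s > s90 for p, s in zip(pupil, speed)]
```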
Extended Data Fig. 3
a, MEIs of 21 exemplary neurons illustrate structural similarity across colour channels. b, Distribution of correlation across colour channels for the dataset shown in Fig. 2. MEIs on top show example cells with relatively low correlation across colour channels. c, Schematic illustrating the paradigm of a 10 Hz full-field binary white noise stimulus and the corresponding response of an exemplary neuron. d, Temporal kernels estimated from responses to the full-field noise stimulus from (c) of three exemplary neurons, and distribution of kernel correlations (n=924 neurons, n=1 scan, n=1 mouse; scan 1 from (e)). The dotted line indicates the correlation threshold of −0.25; cells with a kernel correlation below this threshold were considered colour-opponent. A fraction of neurons (<5%) exhibited colour-opponent temporal receptive fields (see also ref. 71) in response to this full-field binary noise stimulus, in line with recent retinal work (ref. 60). e, Neurons recorded in 3 consecutive scans at different positions within V1, colour-coded based on colour-opponency (red: opponent). f, Temporal kernels in response to a full-field coloured noise stimulus of three exemplary neurons (left) and MEIs of the same neurons. Neurons were anatomically matched across recordings by alignment to the same 3D stack. This indicates that colour-opponency of mouse V1 neurons depends on stimulus condition, similar to neurons in mouse dLGN (ref. 72), which might be due to, for example, differences in activation of the neurons' surround or static versus dynamic stimuli.
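The opponency criterion in (d) is a correlation threshold on a neuron's green and UV temporal kernels. Sketched here with a stdlib Pearson correlation; the helper names are hypothetical, with the −0.25 threshold taken from the legend:

```python
import math

def pearson(x, y):
    """Pearson correlation between two temporal kernels."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def is_colour_opponent(green_kernel, uv_kernel, threshold=-0.25):
    """A neuron counts as colour-opponent if its green and UV kernels are
    anti-correlated beyond the threshold used in panel (d)."""
    return pearson(green_kernel, uv_kernel) < threshold
```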
Extended Data Fig. 4
a, We simulated neurons with Gabor receptive fields (RFs) of varying size, orientation, spectral contrast and colour-opponency (correlation across colour channels). Responses of the simulated neurons were then generated by multiplying the RFs with the natural images also used during experiments. The resulting responses were passed through a nonlinearity and a Poisson process before model training. Model predictions and optimized MEIs closely matched the simulated responses and Gabor RFs, respectively. b, Gabor RFs and corresponding MEIs of four example neurons, some of them with colour-opponent RFs and MEIs. c, Spectral contrast of Gabor RFs plotted versus spectral contrast of computed MEIs. The model faithfully recovered the simulated neurons' colour preference. Only extreme colour preferences were slightly underestimated by our model, which is likely due to correlations across colour channels of natural scenes. This also suggests that the low number of colour-opponent MEIs (Extended Data Fig. 3) is unlikely to be a modelling artifact. d, Correlation of the MEI with the ground-truth Gabor RF.
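The simulation pipeline of panel (a) can be sketched in miniature. Here a 1-D Gabor stands in for the 2-D RFs, the nonlinearity is a simple rectifier, and the Poisson sampler inverts the CDF by hand; these are all simplifying assumptions, not the exact parameters or code used in the paper:

```python
import math
import random

def gabor_rf(n, freq=0.5, phase=0.0, sigma=3.0):
    """1-D stand-in for a Gabor receptive field (the paper's RFs are 2-D)."""
    c = (n - 1) / 2.0
    return [math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2)) * math.cos(freq * (x - c) + phase)
            for x in range(n)]

def simulate_response(rf, image, rng):
    """RF-image dot product -> rectifying nonlinearity -> Poisson spike count,
    mirroring the simulation pipeline of panel (a)."""
    drive = max(0.0, sum(w * p for w, p in zip(rf, image)))
    # Poisson sample via CDF inversion (stdlib only; fine for small rates)
    k, p, cum, u = 0, math.exp(-drive), math.exp(-drive), rng.random()
    while u > cum:
        k += 1
        p *= drive / k
        cum += p
    return k
```

A zero-contrast image gives zero drive and therefore always zero spikes, while presenting the RF itself as the image yields a positive mean spike count.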
Extended Data Fig. 5
a, MEIs optimized for a quiet (top row of each sub-panel) and active (bottom row) behavioural state of 18 example neurons illustrate structural similarity of MEIs across states. b, MEIs of two exemplary neurons with low correlation across behavioural states. c, Distribution of MEI correlation across states (n=1,759 neurons, n=3 scans, n=1 mouse). d, MEI activation for incongruent behavioural state (n=1,759 neurons, n=3 scans, n=1 mouse). Gray: model activation of the MEI optimized for a quiet state presented to the model for an active state, relative to model activation of the MEI optimized and presented for the active state (activation=1). Red: model activation of the MEI optimized for an active state presented to the model for a quiet state, relative to model activation of the MEI optimized and presented for the quiet state (activation=1). This suggests that MEIs optimized for different behavioural states lead to similar activations in the model and thus share similar tuning properties for the majority of neurons.
Extended Data Fig. 6
a, MEIs optimized for the quiet and active state of an exemplary neuron and corresponding colour tuning curves. b, Neurons recorded in posterior V1, colour-coded based on the spectral contrast of their quiet state MEI (top), and distribution of spectral contrast along the posterior-anterior axis of V1 in an additional example animal. The black line corresponds to the binned average (n=10 bins), with s.d. shading in gray. c, Like (b), but for the active state. d, Mean of colour tuning curves of neurons from (b, c), aligned with respect to the peak position of the quiet state tuning curves. Shading: s.d. across neurons from this scan. Top shows higher model activation for active state tuning curves, in line with gain modulation of visual responses. Bottom shows peak-normalized tuning curves, illustrating (i) a shift towards lower spectral contrast values for the peak response, (ii) lower activation relative to peak for green-biased stimuli in an active state and (iii) stronger activation relative to peak for UV-biased stimuli in an active state. This suggests that during an active state, the increase in UV-sensitivity is accompanied by a decrease in green-sensitivity. e, Density plot of model activation in response to MEIs optimized for a quiet versus an active behavioural state, for n=6,770 neurons from n=7 mice. f, Mean of peak-normalized colour tuning curves of the quiet (black) and active state (red), aligned with respect to the peak position of the quiet state tuning curves for n=3 scans from n=3 mice. Shading: s.d. across neurons.
Extended Data Fig. 7 Behavioural shift of colour preference of mouse V1 neurons in the context of a coloured sparse noise paradigm.
a, Activity of n=50 exemplary V1 neurons in response to UV and green On and Off dots (10° visual angle) flashed for 0.2 s, and simultaneously recorded locomotion speed and pupil size. Horizontal dashed lines indicate thresholds for quiet (black; < 50th percentile of pupil size) and active trials (red; > 75th percentile of pupil size). We adjusted the definition of quiet and active state compared to our in silico analysis to ensure a sufficient number of trials in each state despite the shorter recording time (25 minutes for sparse noise versus 120 minutes for naturalistic images). Shading below in red and gray highlights trials above or below these thresholds. Bottom images show single stimulus frames. b, Spike-triggered average (STA) of 4 example neurons estimated from quiet and active trials, separated by posterior and anterior recording position. STAs estimated based on On and Off stimuli were combined to yield one STA per cell and pupil size. c, Neurons recorded in three consecutive experiments along the posterior-anterior axis of V1 (n=981 neurons, n=3 scans, n=1 mouse), colour-coded based on the spectral contrast of their STA estimated for quiet (left) and active trials (right). Bottom shows spectral contrast along the posterior-anterior axis of V1 of cells from (c, top), with binned average (black, n=10 bins) and s.d. shading (gray). Spectral contrast varied only slightly, but significantly, along the anterior-posterior axis of V1 for quiet periods (n=981, p=10^−7 for the smooth term on cortical position of a Generalized Additive Model (GAM); see Supplementary Methods). The small change in spectral contrast across the anterior-posterior axis of V1 is likely due to the fact that we pooled data from a wider range of pupil sizes.
For an active state, optimal spectral contrast also changed with behavioural state (n=981, p=10^−16 for the behavioural state coefficient of the GAM), with a significant interaction between cortical position and behavioural state modulation (p=10^−7; see Supplementary Methods). d, Mean STA spectral contrast of quiet versus active state for n=6 scans from n=3 mice. Error bars: s.d. across neurons recorded in one scan that passed the quality threshold. Marker shape and filling indicate mouse ID and cortical position along the posterior-anterior axis, respectively. STA spectral contrast was significantly shifted (p=10^−101/3.68×10^−51/10^−59/10^−303, Wilcoxon signed-rank test (two-sided)) towards UV for posterior and medial scan fields. The shift was not evident in anterior V1. This was likely due to the different definitions of quiet and active state in the model compared to the sparse noise recordings: for pupil size thresholds more similar to the ones used in the model (20th and 85th percentiles), we observed a stronger UV-shift in STA colour preference with behaviour, also for anterior V1. e, Top: pupil size trace with state changes from quiet to active indicated by vertical dashed lines. Red dots show selected trials using a 3-second read-out window. Bottom: difference in STA spectral contrast of quiet versus active state for different read-out times after state change. All: all trials, with quiet and active trials defined as < 20th and > 85th percentile of pupil size. Shuffle: all trials with behaviour parameters shuffled relative to neuronal responses. Dashed horizontal line indicates delta spectral contrast = 0. Data show mean and s.d. across neurons (n=996/702/964 cells, n=3 scans, n=3 animals).
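The STA and spectral contrast computations behind (c, d) can be sketched as follows. The STA is the response-weighted stimulus average; the Michelson-style contrast formula here is an assumed form (see the paper's Methods for the exact definition), and the helper names are illustrative:

```python
def spike_triggered_average(frames, responses):
    """Response-weighted average of stimulus frames (the STA)."""
    total = sum(responses)
    sta = [0.0] * len(frames[0])
    for frame, r in zip(frames, responses):
        for i, s in enumerate(frame):
            sta[i] += r * s
    return [v / total for v in sta]

def spectral_contrast(green_amp, uv_amp):
    """Michelson-style contrast between green and UV STA amplitudes; an
    assumed form, with +1 = pure green preference and -1 = pure UV preference."""
    return (green_amp - uv_amp) / (green_amp + uv_amp)
```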
Extended Data Fig. 8 Pharmacological pupil dilation replicates shift in colour selectivity with sparse noise stimulus.
a, STAs of three example neurons, estimated for quiet trials in control condition (black) and dilated condition (red). b, Neurons recorded in three consecutive experiments across the posterior-anterior axis of V1 (n=1,079 neurons, n=3 scans, n=1 mouse), colour coded based on STA estimated for quiet trials in the dilated condition. See Extended Data Fig. 7 for STAs estimated for the control condition of the same animal. c, Spectral contrast of STAs of neurons from (b) along the posterior-anterior axis of V1 (red dots), with binned average (n=10 bins; red line) and s.d. shading. Black line and gray shading correspond to the binned average and s.d. of neurons recorded at the same cortical positions in control condition (cf. Extended Data Fig. 7). Spectral contrast significantly varied across the anterior-posterior axis of V1 for the dilated condition (n=1,079, p=10^−16 for the smooth term on cortical position of the GAM). Optimal spectral contrast changed with pupil dilation (n=1,079 (dilated) and n=943 (control), p=10^−16 for the condition coefficient of the GAM), with a significant interaction between cortical position and behavioural state modulation (see Supplementary Methods). d, Mean spectral contrast of quiet state STAs in control condition versus spectral contrast of quiet state STAs in dilated condition (n=10 scans, n=3 mice). Error bars: s.d. across neurons. Two-sample t-test (two-sided): p=10^−135/10^−20/10^−29/10^−194/0.0006.
Extended Data Fig. 9 Reconstructions of coloured naturalistic scenes predict colour tuning shift for a population of neurons.
a, Schematic illustrating the reconstruction paradigm. As the receptive fields of neurons recorded within one of our scans only covered a fraction of the screen, we used an augmented version of our CNN model for image reconstruction, where the receptive field of each model neuron was copied to each pixel position of the image except the image margins. For a given target input image (image 1), this results in a predicted response vector (R1) of length number of neurons times number of pixels. During image reconstruction, a novel image (image 2) is optimized such that its corresponding response vector (R2) matches the response vector of the target image as closely as possible. b, Green and UV image channels of an exemplary test image (top) and reconstructions of this image for a quiet (middle) and active state (bottom). For reconstructions, neurons from scan 1 in Fig. 2 were used. c, Spectral contrasts of reconstructed test images (n=100) in quiet state versus active state for n=3 models trained on scans from n=3 animals. Wilcoxon signed-rank test (two-sided): p=10^−18/10^−18/10^−18.
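The response-matching optimization in (a) reduces to minimizing the distance between predicted and target response vectors. A toy linear-model sketch with the gradient written out by hand; this stands in for, and is much simpler than, the actual CNN-based reconstruction:

```python
def responses(weights, image):
    """Toy linear population: each row of weights is one model neuron's RF."""
    return [sum(w * p for w, p in zip(row, image)) for row in weights]

def reconstruct(weights, target_resp, n_pixels, steps=500, lr=0.05):
    """Optimise a fresh image so its predicted response vector (R2) matches
    the target response vector (R1), the core idea of panel (a)."""
    image = [0.0] * n_pixels
    for _ in range(steps):
        err = [p - t for p, t in zip(responses(weights, image), target_resp)]
        # gradient of 0.5 * ||err||^2 with respect to each pixel
        grad = [sum(e * row[i] for e, row in zip(err, weights)) for i in range(n_pixels)]
        image = [p - lr * g for p, g in zip(image, grad)]
    return image
```

With an identity "population" the reconstruction should recover the target image itself, which makes the sketch easy to sanity-check.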
Extended Data Fig. 10
a, Exemplary frames of the stimulus condition with lower object contrast than in Fig. 5c, due to a gray background in the object colour channel. Right: scatter plot of decoding discriminability of green versus UV objects for quiet (gray) and active (red) trials for n=3 animals. Each marker represents the decoding performance of the SVM decoder trained on all neurons of the respective scan. The decoding performances for the two behavioural states are connected with gray lines, with slopes larger than one for all animals, corresponding to a larger increase in decoding performance for UV versus green objects. P-values obtained from a one-sided permutation test: < 0.012 (Mouse 1), < 0.032 (Mouse 2), < 0.112 (Mouse 3). b, Like (a), but for the stimulus condition with objects as dark silhouettes and noise in the other colour channel. P-values obtained from a one-sided permutation test: < 0.02 (Mouse 1), < 0.1 (Mouse 2), < 0.038 (Mouse 3). c, Like (a), but for the stimulus condition with high contrast objects and no noise in the other colour channel. P-values obtained from a one-sided permutation test (see Methods for detail): 0.44 (Mouse 1), 0.404 (Mouse 2), 0.024 (Mouse 3). The observed variability in (a) and (b) across animals might be related to different recording positions along the anterior-posterior axis of V1 and differences in the animals' behaviour, i.e., the time spent in a quiet versus active behavioural state. For the stimulus condition in (c), we might also observe a ceiling effect caused by the fact that these stimuli are relatively easy to discriminate, as indicated by high object discriminability even during quiet behavioural periods.
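The one-sided permutation tests reported here follow the generic recipe of shuffling condition labels and recomputing the statistic. A sketch under the assumption of a difference-of-means statistic (the paper's exact statistic and procedure are described in its Methods and may differ):

```python
import random

def permutation_test_one_sided(quiet, active, n_perm=2000, seed=0):
    """Generic one-sided permutation test on the difference of means
    (active - quiet). Returns the fraction of label shuffles whose
    difference is at least as large as the observed one, with the
    standard +1 small-sample correction."""
    rng = random.Random(seed)
    observed = sum(active) / len(active) - sum(quiet) / len(quiet)
    pooled = list(quiet) + list(active)
    n_active = len(active)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n_active]) / n_active
                - sum(pooled[n_active:]) / (len(pooled) - n_active))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

A clearly separated pair of samples yields a small p-value, while identical samples yield a large one.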
Cite this article
Franke, K., Willeke, K.F., Ponder, K. et al. State-dependent pupil dilation rapidly shifts visual feature selectivity. Nature 610, 128–134 (2022). https://doi.org/10.1038/s41586-022-05270-3