Cerebellar granule cells, which constitute half the brain's neurons, supply Purkinje cells with contextual information necessary for motor learning, but how they encode this information is unknown. Here we show, using two-photon microscopy to track neural activity over multiple days of cerebellum-dependent eyeblink conditioning in mice, that granule cell populations acquire a dense representation of the anticipatory eyelid movement. Initially, granule cells responded to neutral visual and somatosensory stimuli as well as periorbital airpuffs used for training. As learning progressed, two-thirds of monitored granule cells acquired a conditional response whose timing matched or preceded the learned eyelid movements. Granule cell activity covaried trial by trial to form a redundant code. Many granule cells were also active during movements of nearby body structures. Thus, a predictive signal about the upcoming movement is widely available at the input stage of the cerebellar cortex, as required by forward models of cerebellar control.
The authors thank L. Lynch for expert laboratory assistance, A.C.H.G. Ijpelaar for technical assistance and Dr. H. Boele of the Neuroscience department at the Erasmus Medical Center for input on the eyeblink recordings, M.J. Berry II for discussion of the information calculation, D. Dombeck, J.P. Rickgauer and C. Domnisoru for experimental advice, and D. Pacheco-Pinedo, J.L. Verpeut, P. Sanchez-Jauregui and I. Witten for comments and suggestions. This work was supported by National Institutes of Health grants R01 NS045193 (S.W.) and R01 MH093727 (J.M.), New Jersey Council on Brain Injury Research fellowship CBIR12FEL031 (A.G.), the Searle Scholars program (J.M.), DARPA N66001-15-C-4032 (L.P.), National Science Foundation Graduate Research Fellowship DGE-1148900 (T.P.), the Nancy Lurie Marks Family Foundation (S.W.), the Netherlands Organization for Scientific Research (Innovational Research Incentives Scheme Veni; A.B. and Z.G.), the Dutch Fundamental Organization for Medical Sciences (ZonMW; C.I.D.Z.), Life Sciences (NWO-ALW; C.I.D.Z.) and Social and Behavioral Sciences (NWO-MAGW; C.I.D.Z.), as well as ERC-adv and ERC-POC (C.I.D.Z.).
Integrated supplementary information
A B6.Cg-Tg(NeuroD1-Cre).GN135.Gsat mouse was injected with 200 nL of AAV1.CAG.Flex.GCaMP6f.WPRE.SV40.
Spontaneous activity in parallel-fiber boutons of lobule VI was recorded in a mouse walking on a cylindrical treadmill. Images were acquired at 512 × 128 pixels (130 × 30 μm), 10 ms per frame. Playback speed, 20 Hz (5× slower than acquisition).
Raw Data. Original movie after motion correction (horizontal black bands arise from motion correction). The red circles indicate four granule cells identified by the factorization procedure. Denoised. Data with noise removed and synchronized neuropil activity retained. No Neuropil. Denoised movie with neuropil contribution to each pixel removed. Neuropil synchronized activity. Activity of the neuropil displayed separately from neurons. Representative spatial components. Time course of the four example granule cells. Raw data patches. Corresponding patches of raw data for the spatial components.
Left. A model of the wheel was constructed from physical measurements and manually fitted to the behavioral recording of the animal. Top-right. The parameters of the model defined a projective transform to remap the image pixels to the perspective of the surface normal of the wheel. Bottom-right. The tracked vertical displacement of the reprojected wheel pixels was rescaled to physical units to estimate the true instantaneous locomotor velocity.
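The wheel-velocity pipeline above (projective remap of the image to the wheel's surface normal, then tracking of vertical pixel displacement rescaled to physical units) can be sketched as follows. This is a minimal numpy-only illustration, not the authors' implementation: the homography `H`, the calibration constants `CM_PER_PIXEL` and `FPS`, and the use of row-profile cross-correlation to estimate displacement are all assumptions made for the example.

```python
import numpy as np

def remap_projective(img, H):
    """Reproject a frame through homography H (inverse mapping,
    nearest-neighbor sampling) to the wheel-surface perspective."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts           # output pixel -> source pixel
    sx = np.round(src[0] / src[2]).astype(int).clip(0, w - 1)
    sy = np.round(src[1] / src[2]).astype(int).clip(0, h - 1)
    return img[sy, sx].reshape(h, w)

def vertical_shift(frame_a, frame_b, max_shift=20):
    """Estimate vertical displacement (pixels) between two frames by
    cross-correlating their row-intensity profiles."""
    pa = frame_a.mean(axis=1) - frame_a.mean()
    pb = frame_b.mean(axis=1) - frame_b.mean()
    shifts = np.arange(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(pa, s), pb) for s in shifts]
    return shifts[int(np.argmax(scores))]

# Hypothetical calibration constants (would come from the physical
# model of the wheel and the acquisition frame rate).
CM_PER_PIXEL = 0.05
FPS = 30.0

def wheel_velocity(frames, H):
    """Instantaneous locomotor velocity trace (cm/s) from raw frames."""
    warped = [remap_projective(f, H) for f in frames]
    return np.array([vertical_shift(a, b) * CM_PER_PIXEL * FPS
                     for a, b in zip(warped, warped[1:])])
```

With an identity homography and a frame pair shifted by a known number of rows, `wheel_velocity` recovers the imposed displacement scaled to cm/s.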
Left. The outline of the shape of the snout was manually traced to provide a set of seed points that was displaced when the animal generated facial movements. Top-right. The images were thresholded and a robust point set registration algorithm was applied to track the trajectories of the seed points even when segmentation was made difficult by background artifacts. Bottom-right. The magnitude of the seed point velocities was used as a measure of facial movements.
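The snout-tracking scheme above (thresholded images, seed points displaced by a robust point-set registration, seed-point speed as a facial-movement index) can be sketched in the same spirit. Note the robust registration algorithm itself is not reproduced here; as a stand-in, this sketch uses trimmed nearest-neighbor matching, which discards the largest displacements as likely background artifacts. The threshold, trim fraction, and frame rate are illustrative assumptions.

```python
import numpy as np

def track_seeds(seeds, target_pts, trim_frac=0.2):
    """Move each seed point to its nearest thresholded target point.
    Trimmed nearest-neighbor matching is a simplified stand-in for a
    robust point-set registration: the largest candidate displacements
    (likely background artifacts) are rejected and those seeds stay put."""
    d = np.linalg.norm(seeds[:, None, :] - target_pts[None, :, :], axis=2)
    moved = target_pts[d.argmin(axis=1)]
    disp = np.linalg.norm(moved - seeds, axis=1)
    keep = disp <= np.quantile(disp, 1 - trim_frac)
    new = seeds.copy()
    new[keep] = moved[keep]
    return new

def motion_index(frame, seeds, thresh, fps=30.0):
    """Threshold one frame, update the seed points, and return the new
    seeds plus the mean seed speed (pixels/s) as a facial-movement index."""
    ys, xs = np.nonzero(frame > thresh)
    target = np.stack([xs, ys], axis=1).astype(float)  # (x, y) points
    new = track_seeds(seeds, target)
    speed = np.linalg.norm(new - seeds, axis=1) * fps
    return new, speed.mean()
```

Running `motion_index` over successive frames yields a per-frame scalar trace of facial movement magnitude, analogous to the measure described in the legend.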