Integration of visual motion and locomotion in mouse visual cortex

Abstract

Successful navigation through the world requires accurate estimation of one's own speed. To derive this estimate, animals integrate visual speed gauged from optic flow and run speed gauged from proprioceptive and locomotor systems. The primary visual cortex (V1) carries signals related to visual speed, and its responses are also affected by run speed. To study how V1 combines these signals during navigation, we recorded from mice that traversed a virtual environment. Nearly half of the V1 neurons were reliably driven by combinations of visual speed and run speed. These neurons performed a weighted sum of the two speeds. The weights were diverse across neurons, and typically positive. As a population, V1 neurons predicted a linear combination of visual and run speeds better than either visual or run speeds alone. These data indicate that V1 in the mouse participates in a multimodal processing system that integrates visual motion and locomotion during navigation.


Figure 1: Neuronal responses in V1 are strongly influenced by speed.
Figure 2: Tuning of V1 neurons for run speed in the dark.
Figure 3: V1 neurons are tuned for a weighted sum of run speed and virtual speed.
Figure 4: V1 neuron population activity encodes positive combinations of virtual and run speeds.



Acknowledgements

We thank M. Schölvinck for help with pilot experiments, L. Muessig and F. Cacucci for sharing open-field behavioral data, B. Haider, D. Schulz and other members of the laboratory for helpful discussions, J. O'Keefe, N. Burgess and K.D. Longden for comments on the project. This work was supported by the Medical Research Council and by the European Research Council. M.C. and K.D.H. are jointly funded by the Wellcome Trust. K.J.J. is supported by the Biotechnology and Biological Sciences Research Council. M.C. is supported as a GlaxoSmithKline/Fight for Sight Chair in Visual Neuroscience.

Author information

Authors and Affiliations

Authors

Contributions

All the authors contributed to the design of the study and to the interpretation of the data. A.B.S. and A.A. carried out the experiments, A.B.S. analyzed the data, and A.B.S., M.C. and K.D.H. wrote the paper.

Corresponding author

Correspondence to Aman B Saleem.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Integrated supplementary information

Supplementary Figure 1 Distribution of running speeds.

a. Fraction of time spent running at different speeds while animals were in the virtual reality environment. The distributions are shown with run speed sampled on a linear scale (left) or a log scale (right). Data are shown as median ± median absolute deviation (n = 10). b. Same as in a for animals exploring a real open-field environment (details below). c. Comparison of the kurtosis of the speed distributions on linear and log scales. Lower kurtosis implies that a distribution is closer to uniform. Because the kurtoses of run speed distributions were consistently lower when sampled on a logarithmic scale, we sampled speed on the log scale throughout this study. Although the high kurtosis on the linear scale is an interesting aspect of the data, the log scale allowed us to sample the data more evenly and hence to reduce any bias due to unequal sampling. Open-field methods: 26 wild-type (C57BL/6) mice were placed in a walled circular environment for 30 minutes. The environment was imaged with an overhead camera at 50 frames/s, and mouse movements were detected based on the change in overall body position; the resolution at low speeds is therefore limited (~1 cm/s), unlike in the virtual reality environment. Data courtesy of Laurenz Muessig and Francesca Cacucci, UCL Institute of Behavioural Neuroscience.
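The kurtosis comparison in panel c can be sketched in a few lines. This is an illustrative Python sketch (the original analysis was not in Python); the lognormal sample stands in for real run-speed data, so all values here are hypothetical:

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    z = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    return float(np.mean(z ** 4) - 3.0)

# Hypothetical run-speed sample: roughly lognormal, so heavy-tailed on a
# linear scale but near-Gaussian (low kurtosis) after taking logs.
speeds = np.random.default_rng(0).lognormal(mean=2.0, sigma=0.8, size=5000)

k_lin = excess_kurtosis(speeds)          # high: peaked, heavy-tailed
k_log = excess_kurtosis(np.log(speeds))  # near 0: far more even sampling
```

A lower kurtosis on the log scale is exactly the criterion the legend uses to justify log-spaced speed bins.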

Supplementary Figure 2 Raw traces from example recording session.

a. Example filtered (0.3 - 5 kHz) recording from 11 channels of the recording array. Spikes isolated across a group of channels (Chn 10-13) are highlighted with shaded regions. b. The waveforms of spikes on the group of channels highlighted in a.

Supplementary Figure 3 Interpreting the measure of fit quality.

a-b. Example tuning curves of two neurons for stimulus direction. Stimuli were drifting sinusoidal gratings (0.05 cycles/deg, 2 Hz). Curves are best fits of a double Gaussian (Carandini M, Ferster D, J Neurosci 20:470-484, 2000). Unlike runs through the virtual reality environment, these stimuli can be repeated multiple times. Error bars indicate ± s.e. (n = 10 repeats). c. Relationship between cross-validated explained variance, Q (calculated over individual trials), and explained variance (variance of the mean response explained). Each point corresponds to a cell (36 cells with Q>0). Cells in a and b are shown with the appropriate colors. Q>0.1 gives explained variance of 0.75 to 0.97 (0.87±0.07, mean ± s.d.). We can calculate the latter for direction tuning experiments but not for responses measured in virtual reality, because runs through the virtual reality environment are not exact repetitions of the same stimulus.
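The cross-validated measure Q can be illustrated with a minimal sketch: predict each held-out repeat from the tuning curve of the remaining repeats, then compare residual to total variance. This is a simplified stand-in for the estimator described in the Methods, written in Python with hypothetical names:

```python
import numpy as np

def cross_validated_Q(responses):
    """responses: array of shape (n_repeats, n_stimuli).
    Leave-one-repeat-out prediction quality: 1 minus the ratio of
    held-out residual variance to total variance. Equals 1 for
    perfectly repeatable tuning, and is near or below 0 for noise."""
    responses = np.asarray(responses, float)
    sse, sst = 0.0, 0.0
    grand_mean = responses.mean()
    for r in range(responses.shape[0]):
        train = np.delete(responses, r, axis=0)
        pred = train.mean(axis=0)            # tuning curve from other repeats
        sse += np.sum((responses[r] - pred) ** 2)
        sst += np.sum((responses[r] - grand_mean) ** 2)
    return 1.0 - sse / sst
```

On perfectly repeatable responses this returns 1, while unstructured noise drives it toward (or below) zero, which is why Q > 0.1 already corresponds to a large fraction of explainable variance.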

Supplementary Figure 4 Accurate estimation of firing rate as a function of position and speed.

To verify that our methods could accurately estimate how a neuron's firing rate depended on speed and position, we tested their ability to reproduce the known firing rate function of simulated neurons. Simulated spike trains were generated by running the trajectories taken by animals (e.g., Fig. 1d) during each experimental session (identity labeled above the map) through the response map of the simulated neuron (shown on the left) and generating spikes from an inhomogeneous Poisson process with the corresponding rate function. We tested the methods used to calculate response maps (Fig. 1e,f) by using them to calculate the response map for each simulated session. We recovered accurate estimates of response maps in all our experimental sessions, indicating sufficient sampling of the stimulus space.
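The simulation procedure described above, passing a trajectory through a response map and drawing spikes from an inhomogeneous Poisson process, can be sketched as follows. The rate map and trajectory here are hypothetical stand-ins, not the paper's actual data:

```python
import numpy as np

def simulate_spikes(rate_map, run_speed, virtual_speed, dt=0.05, seed=0):
    """Draw spike counts per time bin from an inhomogeneous Poisson process
    whose instantaneous rate is read off a (run, virtual) speed response map."""
    rng = np.random.default_rng(seed)
    rates = rate_map(run_speed, virtual_speed)   # Hz, one value per time bin
    return rng.poisson(rates * dt)               # Poisson counts per bin

# Hypothetical model neuron: rate is a weighted sum of the two speeds (cm/s -> Hz)
rate_map = lambda run, virtual: 2.0 + 0.5 * run + 0.3 * virtual

t = np.arange(0, 60, 0.05)                       # 60-s session, 50-ms bins
run = 10 + 5 * np.sin(0.1 * t)                   # toy run-speed trajectory
virtual = run + np.random.default_rng(1).normal(0, 1, t.size)  # closed-loop-like
spikes = simulate_spikes(rate_map, run, virtual)
```

Feeding these simulated spikes back through the response-map estimator should recover `rate_map`, which is the logic of the verification in this figure.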

Supplementary Figure 5 Dependence of closed-loop responses on position and speed.

a-b. Responses of two example neurons (similar to Fig. 1e, f). c. Predictive power using position alone, compared to using the combination of position and speed. Solid dots indicate well-isolated units. d. Predictive power using speed alone, compared to using the combination of position and speed.

Supplementary Figure 6 Modulations in firing rate by different kinds of speed.

We defined the modulation in firing rate as 2(Rmax − Rmin) / (Rmax + Rmin), where Rmax and Rmin are the maximum and minimum firing rates measured in a tuning curve. a. Modulation in firing rate with respect to closed-loop speed for units considered in Fig. 1 (n = 110 with QPS>0.1). The black bars indicate well-isolated units, the gray bars the remainder of the units. Arrows in this and other panels indicate the values for cells used in the corresponding main text figures. b. Modulation of firing rate with respect to run speed in the dark condition (n = 70). c. Modulation of firing rate with respect to run speed for units considered in Fig. 3 (open-loop condition, n = 73 with QRV>0.1). d. Modulation of firing rate with respect to virtual speed for units considered in Fig. 3 (open-loop condition, n = 73 with QRV>0.1). e. Comparison of the modulation with respect to virtual speed and run speed in the open-loop condition (n = 73). Filled symbols indicate well-isolated single units. Red dots indicate the three example cells in Fig. 3.
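The modulation index 2(Rmax − Rmin) / (Rmax + Rmin) ranges from 0 for a flat tuning curve to 2 when the minimum rate is zero. A minimal implementation:

```python
def modulation_index(tuning_curve):
    """2*(Rmax - Rmin) / (Rmax + Rmin): 0 for a flat tuning curve,
    2 when the minimum firing rate is zero."""
    r_max, r_min = max(tuning_curve), min(tuning_curve)
    return 2.0 * (r_max - r_min) / (r_max + r_min)
```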

Supplementary Figure 7 Examples of band-pass tuning for run speed in the dark.

Additional examples of band-pass tuned neurons (similar to Fig. 2b).

Supplementary Figure 8 Consistency of closed-loop and open-loop responses.

a. The part of the open-loop response region where virtual speed matches run speed, as in the closed-loop condition, is highlighted in cyan. b-c. Dependence of firing rate on speed in the open and closed-loop conditions. Each panel is from an example neuron. Solid and dashed red curves show closed-loop responses from two periods of closed-loop stimulation in the same recording session (run immediately before and after the open-loop condition); the pink shaded area therefore indicates the range of measurement variability expected. The cyan curve shows the part of the open-loop response region indicated in a. d. Prediction of closed-loop responses from tuning curves measured in a second closed-loop session (QCL1CL2) compared to prediction from open-loop response maps (QCL1OL) (n=120). Solid dots indicate well-isolated units (n=31). e. Bar chart showing mean and standard error of prediction quality over recording sessions. Prediction of closed-loop spiking from the open-loop speed map was not significantly different from the prediction using speed tuning calculated in another closed-loop condition.

Supplementary Figure 9 Tuning for run speed is not altered by eye movements.

Eye position was measured by tracking the pupil with an infrared camera (DMK 21BU04.H, Imaging Source) equipped with a zoom lens (MVL7000, Thor Labs) and acquired at 60 frames/s using custom software. We used custom image processing software written in MATLAB to detect eye movements. Briefly, within a region of interest, the contrast of the image was enhanced, the image was smoothed, and the negative image was thresholded to detect the pupil area. The detected pupil area was fit with an ellipse on every image and we tracked the center of this ellipse. a. Example image from the eye tracking camera. b. Pupil center position for an example session, overlaid on a representative image of a region of interest. Eye movements were larger in the horizontal direction than in the vertical direction. c. Horizontal position as a function of time. Each row represents a run through the virtual reality environment. Eye movement events were detected as movements of the pupil center by more than 2°, and the times of such events are marked as magenta dots. d-f. The responses of three example neurons as a function of running speed, showing no effect of eye movements. To calculate the response of neurons in the absence of eye movements (magenta), the periods from -33 to +294 ms around all eye movement events were removed from the analysis. g. Comparison of the peak position (i.e., the speed of maximum firing rate) estimated with and without excluding eye movement events. The examples shown in d-f are highlighted. The peak speeds are jittered randomly (<1 cm/s) to reveal overlapping points. The strong relationship between the two measures indicates no effect of eye movements on speed tuning. h. Histogram showing the correlation between fits of the rate-speed response of each neuron with and without excluding eye movement events. The large peak at 1 indicates no effect of eye movements on speed tuning.
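The tracking steps above (threshold the dark pupil, locate its center, flag jumps larger than 2°) can be approximated in a few lines. This sketch substitutes a plain thresholded centroid for the ellipse fit actually used, so it is an illustration of the logic, not the analysis code:

```python
import numpy as np

def pupil_center(frame, thresh=50):
    """Estimate the pupil center as the centroid of pixels darker than
    `thresh` in a grayscale frame (a simplified stand-in for the
    ellipse fit used in the actual analysis)."""
    ys, xs = np.nonzero(frame < thresh)
    return xs.mean(), ys.mean()

def eye_movement_events(x_deg, min_jump=2.0):
    """Indices of frames where horizontal pupil position jumps by more
    than `min_jump` degrees, mirroring the 2-degree event criterion."""
    return np.flatnonzero(np.abs(np.diff(np.asarray(x_deg))) > min_jump) + 1
```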

Supplementary Figure 10 Responses to virtual speed and run speed in open-loop.

Responses of 70 units (all units with QRV>0.1 except examples shown in Fig. 3b-d), to virtual speed (blue) and run speed (red) in the open-loop condition. Similar to the marginal responses shown in Fig. 3b-d.

Supplementary Figure 11 Prediction of speed from population activity using multiple decoders.

All curves show population decoding performance as a function of interaction angle θ using three different decoders: a random forest decoder (number of trees <35; Breiman L, Mach Learn 45:5-32, 2001; Huber D, et al., Nature 484:473-478, 2012), the linear population decoder used in the main text (described in Methods) and a neural-network decoder (feed-forward network with a single hidden layer; layer size <35, trained with the Levenberg-Marquardt algorithm and implemented with the Neural Networks toolbox of MATLAB). Speeds < 1 cm/s were considered stationary for these tests. a-f. Performance of the three algorithms for example sessions with 20 (a-c) or 11 (d-f) recorded units. Error bars are cross-validation standard errors. g-i. Decoding performance across all sessions (error bars are ± s.e. across 11 sessions).
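A minimal version of a cross-validated linear population decoder, the second of the three decoders compared here, might look as follows. This is a least-squares sketch in Python with synthetic data; the estimator in the paper's Methods may differ in its details:

```python
import numpy as np

def decode_speed(rates, speed, n_folds=5, seed=0):
    """Cross-validated linear readout of speed from population firing rates.
    Least-squares weights are fit on training folds and evaluated on the
    held-out folds; returns the correlation of predicted vs. true speed."""
    X = np.column_stack([rates, np.ones(len(speed))])  # add intercept column
    idx = np.random.default_rng(seed).permutation(len(speed))
    preds = np.empty(len(speed))
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        w, *_ = np.linalg.lstsq(X[train], speed[train], rcond=None)
        preds[test] = X[test] @ w
    return np.corrcoef(preds, speed)[0, 1]

# Synthetic population whose rates grow linearly with a common speed signal
rng = np.random.default_rng(2)
speed = rng.uniform(1, 30, 400)                      # cm/s, hypothetical
gains = rng.uniform(0.2, 1.0, 20)                    # one gain per neuron
rates = np.outer(speed, gains) + rng.normal(0, 1, (400, 20))
r_cv = decode_speed(rates, speed)
```

With positively weighted, speed-driven units like these, the held-out correlation is high; the supplementary comparison shows that the qualitative decoding profile does not depend on which of the three decoder families is used.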

Supplementary Figure 12 Simulation of populations with various distributions of interaction angles.

a. Simulation method. The run speed and virtual speed were passed through the speed-tuning map of each model neuron, and independent Gaussian noise was added to generate the output. b. Examples of the tuning of model cells selective for different interaction angles (θ = 0, 45°, 90° and 135°). The tuning curve along any direction was modeled as a monotonically increasing function. c-i. The distribution of interaction angles of the simulated population (left) and the performance of a population decoder as a function of the interaction angle (right). c-f. Decoder performance (tree-based decoder) for a simulated population sharing an identical interaction angle. The decoder performs best for this angle, and performance drops to 0 for the orthogonal interaction angle; for example, when the population of neurons all encode virtual speed (θ = 0; panel c), the decoder gives no information about run speed (θ = 90°). g. When the population consists of a mixture of neurons encoding either virtual speed (θ = 0) or run speed (θ = 90°), the population decoder performs equally well for all interaction angles. h. The same is true when the population has neurons encoding a uniform distribution of interaction angles. i. A population of neurons that are diverse but biased toward equal weighting (θ = 45°) shows a performance profile that is most similar to that of the real population. For all panels, the mean ± s.e. were calculated across experimental sessions, because the simulation was run on the virtual speed and run speed distributions of each session individually. j. Distribution of optimal interaction angles (left, same as Fig. 3j) and decoder performance (right, similar to Fig. 4d) for an actual recorded population. The lower speed limit was set at 0.03 cm/s, as in all the simulated models of this figure.
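The weighted-sum model cells of panel b can be sketched as rate = f(cos θ · V + sin θ · R), so that θ = 0 reads out virtual speed only and θ = 90° reads out run speed only. The saturating nonlinearity `f` below is a hypothetical choice standing in for the monotonically increasing function in the legend:

```python
import numpy as np

def model_rate(virtual, run, theta_deg,
               f=lambda s: 20.0 * s / (s + 10.0)):
    """Firing rate of a model cell tuned to a weighted sum of virtual and
    run speed (cm/s). theta = 0 -> pure virtual-speed tuning; theta = 90
    degrees -> pure run-speed tuning. f is a hypothetical saturating,
    monotonically increasing nonlinearity (output in Hz)."""
    theta = np.deg2rad(theta_deg)
    s = np.cos(theta) * virtual + np.sin(theta) * run
    return float(f(np.maximum(s, 0.0)))
```

Sweeping θ over a simulated population and decoding along each direction reproduces the logic of panels c-i: a population with a single θ supports decoding only along that direction, while a diverse θ distribution supports decoding of any combination of the two speeds.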

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–12 (PDF 2877 kb)


About this article

Cite this article

Saleem, A., Ayaz, A., Jeffery, K. et al. Integration of visual motion and locomotion in mouse visual cortex. Nat Neurosci 16, 1864–1869 (2013). https://doi.org/10.1038/nn.3567

