
Brief Communication

Hexadirectional coding of visual space in human entorhinal cortex

Abstract

Entorhinal grid cells map the local environment, but their involvement beyond spatial navigation remains elusive. We examined human functional MRI responses during a highly controlled visual tracking task and show that entorhinal cortex exhibited a sixfold rotationally symmetric signal encoding gaze direction. Our results provide evidence for a grid-like entorhinal code for visual space and suggest a more general role of the entorhinal grid system in coding information along continuous dimensions.


Fig. 1: Hypothesis and task design.
Fig. 2: Gaze-dependent hexadirectional coding in EC.


References

  1. Hafting, T., Fyhn, M., Molden, S., Moser, M.-B. & Moser, E. I. Nature 436, 801–806 (2005).
  2. Jacobs, J. et al. Nat. Neurosci. 16, 1188–1190 (2013).
  3. Stensola, H. et al. Nature 492, 72–78 (2012).
  4. Heys, J. G., Rangarajan, K. V. & Dombeck, D. A. Neuron 84, 1079–1090 (2014).
  5. Killian, N. J., Potter, S. M. & Buffalo, E. A. Proc. Natl. Acad. Sci. USA 112, 15743–15748 (2015).
  6. Killian, N. J., Jutras, M. J. & Buffalo, E. A. Nature 491, 761–764 (2012).
  7. Wandell, B. A., Dumoulin, S. O. & Brewer, A. A. Neuron 56, 366–383 (2007).
  8. Doeller, C. F., Barry, C. & Burgess, N. Nature 463, 657–661 (2010).
  9. Constantinescu, A. O., O’Reilly, J. X. & Behrens, T. E. J. Science 352, 1464–1468 (2016).
  10. Ross, J., Morrone, M. C., Goldberg, M. E. & Burr, D. C. Trends Neurosci. 24, 113–121 (2001).
  11. Rolls, E. T. Hippocampus 9, 467–480 (1999).
  12. Ekstrom, A. D. Hippocampus 25, 731–735 (2015).
  13. Chen, G., King, J. A., Burgess, N. & O’Keefe, J. Proc. Natl. Acad. Sci. USA 110, 378–383 (2013).
  14. Klier, E. M. & Angelaki, D. E. Neuroscience 156, 801–818 (2008).
  15. Kravitz, D., Saleem, K., Baker, C. & Mishkin, M. Nat. Rev. Neurosci. 12, 217–230 (2011).
  16. Epstein, R. & Kanwisher, N. Nature 392, 598–601 (1998).
  17. Summerfield, J. J., Lepsien, J., Gitelman, D. R., Mesulam, M. M. & Nobre, A. C. Neuron 49, 905–916 (2006).
  18. Bellmund, J. L. S., Deuker, L., Navarro Schröder, T. & Doeller, C. F. eLife https://doi.org/10.7554/eLife.16534 (2016).
  19. Aronov, D., Nevers, R. & Tank, D. W. Nature 543, 719–722 (2017).
  20. Voss, J. L., Bridge, D. J., Cohen, N. J. & Walker, J. A. Trends Cogn. Sci. 21, 577–588 (2017).

Acknowledgements

Our research is funded by the Netherlands Organisation for Scientific Research (NWO-Vidi 452-12-009; NWO-Gravitation 024-001-006; NWO-MaGW 406-14-114; NWO-MaGW 406-15-291), the Kavli Foundation, the Centre of Excellence scheme of the Research Council of Norway – Centre for Biology of Memory and Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, the National Infrastructure scheme of the Research Council of Norway – NORBRAIN, and the European Research Council (ERC-StG RECONTEXT 261177; ERC-CoG GEOCOG 724836).

Author information


Contributions

M.N., T.N.S., J.L.S.B. and C.F.D. designed the study; M.N. ran the experiments and analyzed the data; M.N. and C.F.D. wrote the manuscript; and all authors discussed the results and revised the manuscript.

Corresponding authors

Correspondence to Matthias Nau or Christian F. Doeller.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1 Visual tracking task.

Typical trial: each trial started with the fixation target (black cross) stationary (1 s), then moving (6–10 s) to sample 3–5 eye-movement directions sequentially (in 2-s blocks), followed by a waiting period (1 s), before jumping to a random position and continuing to move from there. When the fixation target moved over predefined locations, objects were shown (represented by black circles). After 3 pseudorandomly chosen trials per run, participants reported object locations by moving a cursor to the remembered position. Typical run: each run consisted of 9 trials, together testing each of 12 directions twice. Half of the trials tested a visual motion control. The condition sequence was counterbalanced. Full experiment: the full experiment consisted of 3 blocks of 3 runs each per participant. Each run tested a different set of 12 directions defined by trajectory rotation (−10°, 0°, 10°). Each run block thus tested all directions equally at 10° angular resolution. All trajectories tested in this experiment: the arena was presented in two orientations (0° and 180°) and was surrounded by two external landmarks depicted as black squares (at 0° and 150° in arena orientation 0°). Objects were cued at 4 predefined locations (black circles), which allowed balanced sampling across trajectory rotations.
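The counterbalancing logic is that each run samples a coarse set of directions while the three run blocks jointly tile the full circle. A minimal sketch of that logic, assuming 12 base directions spaced 30° apart per run (the exact spacing is our assumption, not stated explicitly in the caption):

```python
import numpy as np

# Assumed base sampling: 12 directions per run, 30 degrees apart.
base = np.arange(0, 360, 30)

# The -10/0/+10 degree trajectory rotations across run blocks then
# jointly cover all 36 directions at 10-degree resolution.
runs = {rot: (base + rot) % 360 for rot in (-10, 0, 10)}
all_dirs = np.sort(np.concatenate(list(runs.values())))

assert len(np.unique(all_dirs)) == 36   # full 10-degree coverage
assert np.all(all_dirs % 10 == 0)
```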

Supplementary Figure 2 Stimuli and spatial memory performance.

a Main paradigm: virtual arena from a bird’s-eye view; the fixation target moved. b Visual motion control: the fixation target remained at the screen center while the arena moved. Objects (colored circles) were shown only when the fixation target moved across them. c First-person virtual navigation task: participants navigated a virtual arena via button presses and reported object locations by navigating to them. White arrows indicate movement of the fixation disc (a), the virtual arena (b) or the first-person agent (c) and were not shown during the experiment. d Spatial memory performance for objects in both the viewing and the first-person virtual navigation task. We plot the average Euclidean distance between memorized and true location, normalized by the total size of the environment in the respective coordinate system (pixels vs. Unreal coordinates), separately for the four objects across individual participants, overlaid on box-whisker plots (center: median; box: 25th to 75th percentile; whiskers: 1.5 × IQR). No differences between objects were observed (repeated-measures ANOVA; viewing task: F(3, 81) = 0.64, p = 0.593; navigation task: F(3, 81) = 1.9, p = 0.136; n = 28).
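The memory measure in d is a simple normalized distance. A minimal sketch, assuming reported and true positions are 2D coordinates and that "total size" enters as a single scalar per coordinate system; all names are illustrative:

```python
import numpy as np

def normalized_memory_error(reported_xy, true_xy, env_size):
    """Euclidean distance between reported and true object location,
    normalized by the total environment size so that the viewing task
    (pixels) and the navigation task (Unreal coordinates) are comparable."""
    reported_xy, true_xy = np.asarray(reported_xy), np.asarray(true_xy)
    return np.linalg.norm(reported_xy - true_xy, axis=-1) / env_size

# Example: three reports for one object in a 1000-pixel-wide environment.
err = normalized_memory_error([[100, 200], [110, 190], [95, 205]],
                              [100, 200], env_size=1000)
```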

Supplementary Figure 3 Entorhinal cortex.

Binary region-of-interest mask for entorhinal cortex overlaid on the single-participant normalized mean EPI image on which the ROI was drawn.

Supplementary Figure 4 Analysis of hexadirectional activity.

For each voxel, we extracted beta estimates (β) for each direction in three data partitions within which directional sampling was balanced. From each beta estimate we subtracted the mean across all beta estimates obtained from the same run (normalization) and averaged across two of the three partitions (training set). We then fitted regressors for the sine and cosine of viewing direction with 60° periodicity (plus a constant regressor) to the training set and used the resulting beta estimates (βsin and βcos) to estimate the voxel’s putative grid orientation. All directions in steps of 60° from that orientation (0 modulo 60°) are considered aligned; those in between (30 modulo 60°) are misaligned. In the third data partition (testing set), we then contrasted directions aligned with the putative grid orientation against directions misaligned with it. This process was iterated until every data partition had served as testing set once and as training set twice. In each iteration, contrast coefficients were transformed into z-scores based on a bootstrapped null distribution. Averaging across iterations and ROI voxels yielded one threefold cross-validated z-score per participant, which was taken to the group level.
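In code, the pipeline amounts to a sine/cosine fit in 60° space followed by a cross-validated contrast. The following is a minimal sketch, assuming direction-wise beta estimates per partition are already available as NumPy arrays; the ±15° binning of aligned versus misaligned directions and all function names are our assumptions, not taken from the original analysis code:

```python
import numpy as np

def estimate_grid_orientation(betas, directions_deg):
    """Fit sine and cosine regressors with 60-degree periodicity (plus a
    constant) and convert beta_sin/beta_cos into a putative grid orientation."""
    theta = np.deg2rad(directions_deg) * 6                # map to 60-deg space
    X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    b_sin, b_cos, _ = np.linalg.lstsq(X, betas, rcond=None)[0]
    return np.rad2deg(np.arctan2(b_sin, b_cos)) / 6       # orientation in deg

def aligned_vs_misaligned(betas, directions_deg, orientation_deg):
    """Contrast directions aligned with the putative grid orientation
    (0 mod 60 deg, binned here at +/-15 deg) against misaligned ones."""
    betas = np.asarray(betas)
    offset = (np.asarray(directions_deg) - orientation_deg) % 60
    aligned = (offset < 15) | (offset >= 45)
    return betas[aligned].mean() - betas[~aligned].mean()

def crossvalidated_contrast(betas_by_partition, directions_deg):
    """Threefold cross-validation: estimate the orientation on two data
    partitions, contrast aligned vs. misaligned betas on the held-out one."""
    scores = []
    for test in range(3):
        train = [p for p in range(3) if p != test]
        train_betas = np.mean([betas_by_partition[p] for p in train], axis=0)
        ori = estimate_grid_orientation(train_betas, directions_deg)
        scores.append(aligned_vs_misaligned(betas_by_partition[test],
                                            directions_deg, ori))
    return np.mean(scores)                                # one value per voxel
```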

Supplementary Figure 5 Visualization of the main results when the analysis is performed on threefold cross-validated contrast coefficients for the aligned-versus-misaligned contrast instead of z-scores.

a Sixfold rotational symmetry in EC. We plot single-participant data (n = 29) overlaid on box-whisker plots (center: median; box: 25th to 75th percentile; whiskers: 1.5 × IQR), together with the mean and SEM across participants. b Regions of interest on the SPM single-participant T1 template (MNI coordinates: X = −8, Y = −11, Z = −20). c Control symmetries for EC and sixfold rotational symmetry in control regions. We plot single-participant data overlaid on box-whisker plots.

Supplementary Figure 6 Results for ROIs matched in size to entorhinal cortex.

a Frontal lobe mask obtained by thresholding the corresponding probability map (SPM Anatomy Toolbox) at 99% probability to approximate the average size of the entorhinal cortex. Mask shown on a single-participant normalized mean EPI image. b Aligned-versus-misaligned contrast for sixfold rotational symmetry in control regions with sizes approximating the average size of entorhinal cortex. We plot single-participant data overlaid on box-whisker plots (center: median; box: 25th to 75th percentile; whiskers: 1.5 × IQR). We found a weak hexadirectional signal in the prefrontal cortex (one-sided Wilcoxon signed-rank test, z = 1.92, +p = 0.027, n = 29), in line with previous reports8,9, which did not survive Bonferroni correction.
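Constructing the size-matched control mask is a single thresholding step. A minimal sketch using nibabel, assuming a probabilistic frontal-lobe map (values 0–1) already resampled to the EPI space; the file names are illustrative:

```python
import numpy as np
import nibabel as nib

# Load the probability map and keep voxels with >= 99% probability.
prob = nib.load("frontal_lobe_probability.nii.gz")   # illustrative name
mask = (prob.get_fdata() >= 0.99).astype(np.uint8)

# Save the binary mask and report its size for comparison with the EC ROI.
nib.save(nib.Nifti1Image(mask, prob.affine), "frontal_mask.nii.gz")
print("mask size (voxels):", int(mask.sum()))
```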

Supplementary Figure 7 Putative grid orientation coherence and lateralization.

a Putative grid orientations across voxels and hemispheres. Smoothed (upper left panel) and unsmoothed data (lower left panel) are depicted for all voxels as well as for the 10% most strongly and 10% most weakly hexadirectionally modulated voxels. Each plot depicts the percentage of voxels found for each possible putative grid orientation (10° binning in 60° space). The mean putative grid orientation of each subject was set to zero. The 10% most strongly hexadirectionally modulated voxels had a more similar putative grid orientation (smaller variance) than the 10% weakest voxels for both smoothed (one-sided t-test, t(28) = −6.67, p = 1.5 × 10−7, CI [−Inf, −0.19]) and unsmoothed data (one-sided t-test, t(28) = −2.74, p = 5.3 × 10−3, CI [−Inf, −0.02]). Middle panel: across-voxel coherence of putative grid orientations within each hemisphere compared with across hemispheres. We plot unsmoothed individual-participant data overlaid on box-whisker plots of the average absolute angular difference between the putative grid orientation of each voxel and those of all voxels in the same hemisphere (within) and in the other hemisphere (across). Voxels in the same hemisphere had a more similar putative grid orientation than voxels in different hemispheres (two-sided t-test, t(28) = −3.67, p = 0.001, CI [−0.03, −0.01]). b Main analysis for the left and right hemisphere. Hexadirectional modulation was strong in the right hemisphere (one-sided Wilcoxon signed-rank test, z = 2.85, *p = 2.2 × 10−3, n = 29). Box-whisker plots show the median and the 25th to 75th percentile; whiskers represent 1.5 × IQR.
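Because grid orientations live in 60°-periodic space, the largest possible angular difference between two orientations is 30°. A minimal sketch of the within- versus across-hemisphere coherence measure, with placeholder orientation data standing in for one participant's voxels; names are illustrative:

```python
import numpy as np

def mean_abs_angular_diff_60(a, b):
    """Mean absolute pairwise angular difference between two sets of
    putative grid orientations in 60-degree-periodic space (range 0-30)."""
    d = (np.asarray(a)[:, None] - np.asarray(b)[None, :]) % 60
    return np.minimum(d, 60 - d).mean()

# Placeholder orientations (degrees) for the two hemispheres of one subject.
rng = np.random.default_rng(0)
left = rng.normal(20, 3, 200) % 60
right = rng.normal(25, 3, 180) % 60

within = (mean_abs_angular_diff_60(left, left)
          + mean_abs_angular_diff_60(right, right)) / 2  # includes self-pairs
across = mean_abs_angular_diff_60(left, right)
```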

Supplementary Figure 8 Gaze-dependent hexadirectional pattern cannot be explained by fixation error or eye velocity.

a Euclidean distance between fixation target and gaze (fixation error). Left: fixation error for all 36 directions and 24 participants in degrees of visual angle (radial axis). Across-participant mean in petrol green, SEM in black, single-participant data in purple. Middle: R-squared statistics for all participants and all model symmetries tested. Data for individual participants (white dots) overlaid on box-whisker plots. We fitted linear models with regressors for the sine and cosine of viewing direction with different periodicities (4-, 5-, 6-, 7- and 8-fold symmetry) to the fixation error (shown left) of each participant. If fixation accuracy were modulated at one of these symmetries, the corresponding model would be expected to produce higher R-squared values and hence a higher goodness of fit. There was no difference in R-squared statistics between model symmetries (repeated-measures ANOVA, F(4, 92) = 1.7, p = 0.156). Right: fixation error for the main paradigm and the visual motion control. We plot data for single participants next to box-whisker plots (center: median; box: 25th to 75th percentile; whiskers: 1.5 × IQR). b Average velocity of eye movements in degrees per second (radial axis) across all 36 directions. The fixation target moved at a constant speed of 7.5°/s; average eye velocity across directions and participants was 7.75 ± 0.183°/s. There was no difference in eye velocity across directions (repeated-measures ANOVA, F(35, 805) = 1.1, p = 0.313). Across-participant mean in petrol green, SEM in black, single-participant data in purple.
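The symmetry control in a is a model comparison across periodicities. A minimal sketch, with placeholder fixation-error data standing in for one participant's 36 direction-wise values:

```python
import numpy as np

def symmetry_r2(error_by_direction, directions_deg, n_fold):
    """R-squared of a sine/cosine model with n-fold rotational symmetry
    fitted to direction-wise fixation error."""
    theta = np.deg2rad(directions_deg) * n_fold
    X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(X, error_by_direction, rcond=None)
    resid = error_by_direction - X @ coef
    ss_tot = np.sum((error_by_direction - error_by_direction.mean()) ** 2)
    return 1 - np.sum(resid ** 2) / ss_tot

directions = np.arange(0, 360, 10)                       # 36 tested directions
rng = np.random.default_rng(1)
fixation_error = rng.normal(0.5, 0.05, directions.size)  # placeholder data

# Compare goodness of fit across the candidate symmetries (4- to 8-fold).
r2 = {k: symmetry_r2(fixation_error, directions, k) for k in range(4, 9)}
```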

Supplementary information


About this article


Cite this article

Nau, M., Navarro Schröder, T., Bellmund, J.L.S. et al. Hexadirectional coding of visual space in human entorhinal cortex. Nat Neurosci 21, 188–190 (2018). https://doi.org/10.1038/s41593-017-0050-8


