Entorhinal grid cells map the local environment, but their involvement beyond spatial navigation remains elusive. We examined human functional MRI responses during a highly controlled visual tracking task and show that entorhinal cortex exhibited a sixfold rotationally symmetric signal encoding gaze direction. Our results provide evidence for a grid-like entorhinal code for visual space and suggest a more general role of the entorhinal grid system in coding information along continuous dimensions.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Our research is funded by the Netherlands Organisation for Scientific Research (NWO-Vidi 452-12-009; NWO-Gravitation 024-001-006; NWO-MaGW 406-14-114; NWO-MaGW 406-15-291), the Kavli Foundation, the Centre of Excellence scheme of the Research Council of Norway – Centre for Biology of Memory and Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, the National Infrastructure scheme of the Research Council of Norway – NORBRAIN, and the European Research Council (ERC-StG RECONTEXT 261177; ERC-CoG GEOCOG 724836).
Integrated supplementary information
Typical trial: each trial started with the fixation target (black cross) being stationary (1 s), then moving (6–10 s) to sample 3–5 eye-movement directions sequentially (in 2-s blocks), followed by a waiting period (1 s), before finally jumping to a random position and continuing to move from there. When the fixation target moved over predefined locations, objects were shown (represented by a black circle). After 3 pseudorandomly chosen trials per run, participants reported object locations by moving a cursor to the remembered position. Typical run: each run consisted of 9 trials, together testing 12 directions twice. Half of the trials tested a visual motion control. The condition sequence was counterbalanced. Full experiment: the full experiment consisted of 3 blocks of 3 runs each per participant. Each run tested a different set of 12 directions defined by trajectory rotation (−10°, 0°, 10°). Each run block tested all directions equally with 10° angular resolution. All trajectories tested in this experiment: the arena was presented in two orientations (0° and 180°) and was surrounded by two external landmarks depicted as black squares (at 0° and 150° in arena orientation 0°). Objects were cued at 4 predefined locations (black circles) that allowed balanced sampling across trajectory rotations.
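The three trajectory rotations tile direction space: 12 directions at 30° spacing per run, shifted by −10°, 0° and 10° across the three runs of a block, together cover all 36 directions at 10° resolution. A minimal sketch of this sampling scheme (our reconstruction, not the authors' code):

```python
import numpy as np

# Each run samples 12 directions spaced 30 deg apart; the three runs of a
# block apply trajectory rotations of -10, 0 and +10 deg to that base set.
base = np.arange(0, 360, 30)                         # 12 directions per run
run_sets = [np.sort((base + rot) % 360) for rot in (-10, 0, 10)]

# Across a run block, the union covers all 36 directions at 10 deg spacing.
covered = np.unique(np.concatenate(run_sets))
```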
a Main paradigm: virtual arena from a bird's-eye view; the fixation target moved. b Visual motion control: the fixation target remained at the screen center while the arena moved. Objects (colored circles) were shown only when the fixation target moved across them. c First-person virtual navigation task. Participants navigated a virtual arena via button presses and reported object locations by navigating to them. White arrows indicate movement of either the fixation disc (a), the virtual arena (b) or the first-person agent (c) and were not shown during the experiment. d Spatial memory performance for objects in both the viewing and the first-person virtual navigation task. We plot the average Euclidean distance between memorized and true location, normalized by the total size of the environment in the respective coordinate system (pixels vs. unreal coordinates), separately for the four objects across individual participants, overlaid on box-whisker plots (center: median, box: 25th to 75th percentile, whiskers: 1.5×IQR). No differences between objects were observed (repeated-measures ANOVA, viewing task: F(3, 81) = 0.64, p = 0.593; navigation task: F(3, 81) = 1.9, p = 0.136; n = 28).
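The memory-performance measure in d can be written as a normalized Euclidean distance; the helper below is hypothetical and only illustrates the metric described above:

```python
import numpy as np

def normalized_error(remembered, true_pos, env_size):
    """Euclidean distance between remembered and true object location,
    divided by the total size of the environment (pixels for the viewing
    task, virtual-world units for the navigation task). Hypothetical
    helper mirroring the metric described in the legend."""
    diff = np.asarray(remembered, float) - np.asarray(true_pos, float)
    return np.linalg.norm(diff) / env_size

# A 30-pixel misplacement in a 600-pixel environment gives an error of 0.05.
err = normalized_error((130, 200), (100, 200), env_size=600)
```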
Binary region-of-interest mask for the entorhinal cortex overlaid on the single-participant normalized mean EPI image on which the ROI was drawn.
For each voxel we extracted beta estimates (β) for each direction in three data partitions within which directional sampling was balanced. From each beta estimate we subtracted the mean across all beta estimates obtained from the same run (normalization) and averaged across two of the three partitions (training set). We then fitted regressors for the sine and cosine of viewing direction with 60° periodicity (plus a constant regressor) to the training set and used the resulting beta estimates (βsin and βcos) to estimate the voxel's putative grid orientation. All directions in steps of 60° relative to this orientation (0° modulo 60°) are considered aligned; those in between (30° modulo 60°) are misaligned. In the third data partition (testing set), we then contrasted directions aligned with the putative grid orientation against directions misaligned with it. This process was iterated until every data partition had served as testing set once and as training set twice. In each iteration, contrast coefficients were transformed into z-scores based on a bootstrapped null distribution. By averaging across iterations and ROI voxels we obtained one threefold cross-validated z-score per participant that was taken to the group level.
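The orientation-estimation and contrast steps can be sketched as follows. This is a simplified reconstruction under the definitions above (function names are ours), not the published analysis code:

```python
import numpy as np

def putative_orientation(dirs_deg, betas, period=6):
    """Fit sine/cosine regressors with 60-deg periodicity (plus a constant)
    to training-set betas and recover the putative grid orientation in
    0-60 deg space, as described in the legend."""
    theta = np.deg2rad(np.asarray(dirs_deg, float)) * period
    X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    b_sin, b_cos, _ = np.linalg.lstsq(X, np.asarray(betas, float), rcond=None)[0]
    return np.rad2deg(np.arctan2(b_sin, b_cos)) / period % (360 / period)

def aligned_contrast(dirs_deg, betas, orientation, period=6):
    """Contrast test-set betas for directions aligned with the putative
    orientation (0 deg modulo 60 deg) vs. misaligned (30 deg modulo 60 deg)."""
    betas = np.asarray(betas, float)
    rel = (np.asarray(dirs_deg, float) - orientation) % (360 / period)
    aligned = np.isclose(rel, 0) | np.isclose(rel, 360 / period)
    misaligned = np.isclose(rel, 180 / period)
    return betas[aligned].mean() - betas[misaligned].mean()
```

For a synthetic sixfold signal with orientation 20°, `putative_orientation` recovers 20° and the aligned-versus-misaligned contrast is positive.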
Supplementary Figure 5 Visualization of main results when the analysis is performed on threefold cross-validated contrast coefficients for the aligned-versus-misaligned contrast instead of z-scores.
a Sixfold rotational symmetry in EC. We plot single-participant data (n = 29) overlaid on box-whisker plots (center: median, box: 25th to 75th percentile, whiskers: 1.5×IQR) together with the mean and SEM across participants. b Regions of interest on the SPM single-participant T1 template (MNI coordinates: X = −8, Y = −11, Z = −20). c Control symmetries for EC and sixfold rotational symmetry in control regions. We plot single-participant data overlaid on box-whisker plots.
a Frontal lobe mask obtained by thresholding the corresponding probability map (SPM Anatomy toolbox) at 99% probability to approximate the average size of the entorhinal cortex. Mask shown on a single-participant normalized mean EPI image. b Aligned-versus-misaligned contrast for sixfold rotational symmetry in control regions with sizes approximating the average size of the entorhinal cortex. We plot single-participant data overlaid on box-whisker plots (center: median, box: 25th to 75th percentile, whiskers: 1.5×IQR). We found a weak hexadirectional signal in the prefrontal cortex (one-sided Wilcoxon signed-rank test, z = 1.92, +p = 0.027, n = 29), in line with previous reports8,9, which did not survive Bonferroni correction.
a Putative grid orientations across voxels and hemispheres. Smoothed (upper left panel) and unsmoothed data (lower left panel) are depicted for all voxels as well as for the 10% most strongly and 10% most weakly hexadirectionally modulated voxels. Each plot depicts the percentage of voxels found for each possible putative grid orientation (10° binning in 60° space). The mean putative grid orientation of each subject was set to zero. The 10% most strongly hexadirectionally modulated voxels had more similar putative grid orientations (smaller variance) than the 10% weakest voxels for both smoothed (one-sided t-test, t(28) = −6.67, p = 1.5 × 10⁻⁷, CI [−Inf, −0.19]) and unsmoothed data (one-sided t-test, t(28) = −2.74, p = 5.3 × 10⁻³, CI [−Inf, −0.02]). Middle panel: across-voxel coherence of putative grid orientations within each hemisphere compared with across hemispheres. We plot unsmoothed individual-participant data overlaid on box-whisker plots of the average absolute angular difference between the putative grid orientation of each voxel and those of all voxels in the same hemisphere (within) and all voxels in the other hemisphere (across). Voxels in the same hemisphere had more similar putative grid orientations than voxels in different hemispheres (two-sided t-test, t(28) = −3.67, p = 0.001, CI [−0.03, −0.01]). b Main analysis for the left and right hemisphere. Hexadirectional modulation was significant in the right hemisphere (one-sided Wilcoxon signed-rank test, z = 2.85, *p = 2.2 × 10⁻³, n = 29). Box-whisker plots show the median and the 25th to 75th percentile; whiskers represent 1.5×IQR.
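The coherence measure relies on angular differences computed in 60° space, where orientations of 5° and 55° differ by 10°, not 50°. A minimal sketch with hypothetical helper names (our reconstruction, not the authors' code):

```python
import numpy as np

def angdiff_60(a, b):
    """Absolute angular difference between two putative grid orientations
    in 60-deg space (the circular space of a sixfold-symmetric signal)."""
    d = np.abs(np.asarray(a, float) - np.asarray(b, float)) % 60
    return np.minimum(d, 60 - d)

def mean_pairwise_coherence(orients_a, orients_b):
    """Mean absolute angular difference between every orientation in
    orients_a and every orientation in orients_b; smaller values mean
    more coherent putative grid orientations."""
    a = np.asarray(orients_a, float)[:, None]
    b = np.asarray(orients_b, float)[None, :]
    return angdiff_60(a, b).mean()
```

Comparing this quantity for voxel pairs within one hemisphere against pairs spanning both hemispheres gives the within-versus-across contrast plotted in the middle panel.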
Supplementary Figure 8 Gaze-dependent hexadirectional pattern cannot be explained by fixation error or eye velocity.
a Euclidean distance between fixation target and gaze (fixation error). Left: fixation error for all 36 directions and 24 participants in degrees of visual angle (radial axis). Across-participant mean in petrol green, SEM in black, single-participant data in purple. Middle: R-square statistics for all participants and all model symmetries tested. Data for individual participants (white dots) overlaid on box-whisker plots. We fitted linear models with regressors for the sine and cosine of viewing direction with different periodicities (4-, 5-, 6-, 7- and 8-fold symmetry) to the fixation error (shown left) of each participant. If fixation accuracy were symmetrical, the corresponding models would be expected to produce higher R-square values and hence a higher goodness of fit. There was no difference in R-square statistics between model symmetries (repeated-measures ANOVA, F(4, 92) = 1.7, p = 0.156). Right: fixation error for the main paradigm and the visual motion control. We plot data for single participants next to box-whisker plots (center: median, box: 25th to 75th percentile, whiskers: 1.5×IQR). b Average velocity of eye movements in degrees per second (radial axis) across all 36 directions. The fixation target moved at a constant speed of 7.5°/s; average eye velocity across directions and participants was 7.75 ± 0.183°/s. There was no difference in eye velocity across directions (repeated-measures ANOVA, F(35, 805) = 1.1, p = 0.313). Across-participant mean in petrol green, SEM in black, single-participant data in purple.
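The symmetry control in the middle panel fits sine/cosine regressors of several periodicities to the per-direction fixation error and compares their R-square values. A toy reconstruction under those definitions (simulated data, not the recorded eye traces; function name is ours):

```python
import numpy as np

def r_square(dirs_deg, y, symmetry):
    """R-square of a linear model with sine and cosine regressors of the
    given rotational symmetry (plus a constant), fitted to per-direction
    data, as in the control analysis described in the legend."""
    theta = np.deg2rad(np.asarray(dirs_deg, float)) * symmetry
    X = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ coef) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

dirs = np.arange(0, 360, 10)
# Toy fixation-error profile with a deliberate sixfold component plus noise;
# a symmetric error profile would inflate R-square for the matching symmetry.
rng = np.random.default_rng(0)
err = 1 + 0.3 * np.cos(np.deg2rad(6 * dirs)) + 0.05 * rng.standard_normal(dirs.size)
r2 = {k: r_square(dirs, err, k) for k in (4, 5, 6, 7, 8)}
```

In the actual data, no symmetry fit better than any other, arguing against a fixation-error confound of the hexadirectional fMRI signal.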