Hippocampal neurons encode physical variables[1–7] such as space[1] or auditory frequency[6] in cognitive maps[8]. In addition, functional magnetic resonance imaging studies in humans have shown that the hippocampus can also encode more abstract, learned variables[9–11]. However, how such learned variables are integrated into existing neural representations of physical variables[12,13] is unknown. Here, using two-photon calcium imaging, we show that individual neurons in the dorsal hippocampus jointly encode accumulated evidence and spatial position in mice performing a decision-making task in virtual reality[14–16]. Nonlinear dimensionality reduction[13] showed that population activity was well described by approximately four to six latent variables, suggesting that neural activity is constrained to a low-dimensional manifold. Within this low-dimensional space, both physical and abstract variables were jointly mapped in an orderly manner, creating a geometric representation that we show is similar across mice. The existence of conjoined cognitive maps suggests that the hippocampus performs a general computation: the creation of task-specific low-dimensional manifolds that contain a geometric representation of learned knowledge.
The datasets generated in this study are available from the corresponding authors on reasonable request. Source data are provided with this paper.
The code used for all analyses in this study is available on GitHub (https://github.com/BrainCOGS/HPC_manifolds). All other code is available on reasonable request.
1. O’Keefe, J. & Dostrovsky, J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971).
2. Frank, L. M., Brown, E. N. & Wilson, M. Trajectory encoding in the hippocampus and entorhinal cortex. Neuron 27, 169–178 (2000).
3. Wood, E. R., Dudchenko, P. A., Robitsek, R. J. & Eichenbaum, H. Hippocampal neurons encode information about different types of memory episodes occurring in the same location. Neuron 27, 623–633 (2000).
4. Eichenbaum, H., Kuperstein, M., Fagan, A. & Nagode, J. Cue-sampling and goal-approach correlates of hippocampal unit activity in rats performing an odor-discrimination task. J. Neurosci. 7, 716–732 (1987).
5. Herzog, L. E. et al. Interaction of taste and place coding in the hippocampus. J. Neurosci. 39, 3057–3069 (2019).
6. Aronov, D., Nevers, R. & Tank, D. W. Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature 543, 719–722 (2017).
7. Taxidis, J. et al. Differential emergence and stability of sensory and temporal representations in context-specific hippocampal sequences. Neuron 108, 984–998.e9 (2020).
8. O’Keefe, J. & Nadel, L. The Hippocampus as a Cognitive Map (Clarendon, 1978).
9. Schuck, N. W. & Niv, Y. Sequential replay of nonspatial task states in the human hippocampus. Science 364, eaaw5181 (2019).
10. Tavares, R. M. et al. A map for social navigation in the human brain. Neuron 87, 231–243 (2015).
11. Park, S. A., Miller, D. S., Nili, H., Ranganath, C. & Boorman, E. D. Map making: constructing, combining, and inferring on abstract cognitive maps. Neuron 107, 1226–1238.e8 (2020).
12. Rubin, A. et al. Revealing neural correlates of behavior without behavioral measurements. Nat. Commun. 10, 4745 (2019).
13. Low, R. J., Lewallen, S., Aronov, D., Nevers, R. & Tank, D. W. Probing variability in a cognitive map using manifold inference from neural dynamics. Preprint at https://doi.org/10.1101/418939 (2018).
14. Pinto, L. et al. An accumulation-of-evidence task using visual pulses for mice navigating in virtual reality. Front. Behav. Neurosci. 12, 36 (2018).
15. Koay, S. A., Thiberge, S., Brody, C. D. & Tank, D. W. Amplitude modulations of cortical sensory responses in pulsatile evidence accumulation. eLife 9, e60628 (2020).
16. Engelhard, B. et al. Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons. Nature 570, 509–513 (2019).
17. MacDonald, C. J., Lepage, K. Q., Eden, U. T. & Eichenbaum, H. Hippocampal “time cells” bridge the gap in memory for discontiguous events. Neuron 71, 737–749 (2011).
18. Pastalkova, E., Itskov, V., Amarasingham, A. & Buzsáki, G. Internally generated cell assembly sequences in the rat hippocampus. Science 321, 1322–1327 (2008).
19. Tolman, E. C. Cognitive maps in rats and men. Psychol. Rev. 55, 189–208 (1948).
20. Bellmund, J. L. S., Gärdenfors, P., Moser, E. I. & Doeller, C. F. Navigating cognition: spatial codes for human thinking. Science 362, eaat6766 (2018).
21. Eichenbaum, H. in Behavioral Neuroscience of Learning and Memory (eds Clark, R. E. & Martin, S. J.) 101–117 (Springer, 2018).
22. Constantinescu, A. O., O’Reilly, J. X. & Behrens, T. E. J. Organizing conceptual knowledge in humans with a gridlike code. Science 352, 1464–1468 (2016).
23. Gallego, J. A., Perich, M. G., Miller, L. E. & Solla, S. A. Neural manifolds for the control of movement. Neuron 94, 978–984 (2017).
24. Russo, A. A. et al. Motor cortex embeds muscle-like commands in an untangled population response. Neuron 97, 953–966.e8 (2018).
25. Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A. & Fiete, I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat. Neurosci. 22, 1512–1520 (2019).
26. Eichenbaum, H. & Cohen, N. J. Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron 83, 764–770 (2014).
27. Recanatesi, S. et al. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat. Commun. 12, 1417 (2021).
28. Aronov, D. & Tank, D. W. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system. Neuron 84, 442–456 (2014).
29. Dombeck, D. A., Harvey, C. D., Tian, L., Looger, L. L. & Tank, D. W. Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nat. Neurosci. 13, 1433–1440 (2010).
30. Harvey, C. D., Coen, P. & Tank, D. W. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).
31. Brunton, B. W., Botvinick, M. M. & Brody, C. D. Rats and humans can optimally accumulate evidence for decision-making. Science 340, 95–98 (2013).
32. Gold, J. I. & Shadlen, M. N. The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574 (2007).
33. Gill, P. R., Mizumori, S. J. Y. & Smith, D. M. Hippocampal episode fields develop with learning. Hippocampus 21, 1240–1249 (2011).
34. McKenzie, S. et al. Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron 83, 202–215 (2014).
35. Howard, M. W., Luzardo, A. & Tiganj, Z. Evidence accumulation in a Laplace domain decision space. Comput. Brain Behav. 1, 237–251 (2018).
36. Pnevmatikakis, E. A. et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron 89, 285–299 (2016).
37. Pnevmatikakis, E. A. & Giovannucci, A. NoRMCorre: an online algorithm for piecewise rigid motion correction of calcium imaging data. J. Neurosci. Methods 291, 83–94 (2017).
38. Skaggs, W. E., McNaughton, B. L. & Gothard, K. M. An information-theoretic approach to deciphering the hippocampal code. In Advances in Neural Information Processing Systems 5 (eds Hanson, S. J. et al.) 1030–1037 (Morgan-Kaufmann, 1993).
39. Grassberger, P. & Procaccia, I. Measuring the strangeness of strange attractors. Physica D 9, 189–208 (1983).
40. Stachenfeld, K. L., Botvinick, M. M. & Gershman, S. J. The hippocampus as a predictive map. Nat. Neurosci. 20, 1643–1653 (2017).
41. Gauthier, J. L. & Tank, D. W. A dedicated population for reward coding in the hippocampus. Neuron 99, 179–193.e7 (2018).
42. Pinto, L. et al. Task-dependent changes in the large-scale dynamics and necessity of cortical regions. Neuron 104, 810–824.e9 (2019).
43. Domnisoru, C., Kinkhabwala, A. A. & Tank, D. W. Membrane potential dynamics of grid cells. Nature 495, 199–204 (2013).
44. Rich, P. D., Liaw, H.-P. & Lee, A. K. Large environments reveal the statistical structure governing hippocampal representations. Science 345, 814–817 (2014).
45. Tenenbaum, J. B., de Silva, V. & Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000).
46. Yu, B. M. et al. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 102, 614–635 (2009).
We thank A. Song and S. Thiberge for assistance with two-photon imaging, S. Stein and S. Baptista for technical support with animal training, M. L. Ioffe for providing code and E. M. Diamanti and B. Engelhard for discussions. This work was supported by NIH grants U01NS090541, U19NS104648 and F32MH119749, as well as the Simons Collaboration on the Global Brain.
The authors declare no competing interests.
Peer review information Nature thanks the anonymous reviewers for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
a, Each heat map represents one neuron and the trial-by-trial activity of that neuron in the towers task for left-choice trials. Each row in each heat map is the ΔF/F (normalized within each neuron) of the neuron in that trial. b, Same as in a, but for the alternation task. Note that the single-trial activity appears more variable in the towers task and more reliable in the alternation task, consistent with the finding that evidence is also represented by neurons in the towers task. c, Neural activity (ΔF/F normalized within each neuron) of cells significantly encoding evidence, sorted by activity in half the trials (top) and plotted using the same sorting in the other half of the trials (bottom).
a, Each heat map shows the mean ΔF/F of a neuron with significant mutual information in E × Y space. b, Scatterplot of the mutual information in RE × Y space versus E × Y space for each cell with significant information in E × Y space (n = 917 neurons). RE is randomized evidence. c, Same as in b, but for E × RY space versus E × Y space. RY is randomized position. d, In total, 29% of imaged neurons had significant mutual information in E × Y space, whereas 16% had significant mutual information only for position and 6% had significant mutual information only for evidence. e, Of the cells with significant mutual information in E × Y space, 89.9% had significantly more information in E × Y space than just place or evidence information alone, whereas 9.8% could not be differentiated from place cells and 0.3% could not be differentiated from evidence cells (Methods). f, The probability of a cell having significant mutual information in E × Y space is significantly greater than the joint probability of a cell being a place cell and a cell being an evidence cell. Two-tailed Wilcoxon signed-rank test, *P = 0.016; n = 7 mice; data are mean ± s.e.m. g, Cells with significant mutual information in E × Y space had 1.7 ± 0.03 (mean ± s.e.m.) firing fields (n = 917 cells).
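The joint mutual-information analysis above asks whether a neuron carries more information about the combined evidence-by-position (E × Y) state than about either variable alone. A minimal sketch of a Skaggs-style information estimate over binned joint states is shown below; the function and variable names are illustrative and this is not the authors' exact implementation (which is in their GitHub repository).

```python
import numpy as np

def mutual_information_2d(activity, evidence, position, n_bins=10):
    """Estimate information (bits) between a neuron's activity and binned
    evidence-by-position (E x Y) states, in the spirit of the Skaggs
    information measure. Illustrative sketch, not the paper's exact code."""
    # Discretize the two task variables and form a joint E x Y state index
    e_edges = np.linspace(evidence.min(), evidence.max(), n_bins + 1)[1:-1]
    y_edges = np.linspace(position.min(), position.max(), n_bins + 1)[1:-1]
    state = np.digitize(evidence, e_edges) * n_bins + np.digitize(position, y_edges)

    # Occupancy probability of each visited joint state
    states, counts = np.unique(state, return_counts=True)
    p_state = counts / counts.sum()

    # Skaggs-style sum: p_i * (r_i / r_mean) * log2(r_i / r_mean)
    mean_rate = activity.mean()
    info = 0.0
    for s, p in zip(states, p_state):
        r = activity[state == s].mean()
        if r > 0 and mean_rate > 0:
            info += p * (r / mean_rate) * np.log2(r / mean_rate)
    return info
```

Comparing this value against the same computation with one variable randomized (the RE × Y and E × RY controls in panels b and c) indicates whether the joint representation is genuine.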
During training on the towers task, mice proceed through several stages. In one of these stages, mice perform a task nearly identical to the towers task, except that visual cues appear on only one side of the maze. a, The intrinsic dimensionality of the one-side cues task is approximately 4.2 (4.0–4.5; 95% bootstrapped confidence interval). n = 4 mice; data are mean ± 95% bootstrapped confidence intervals. Grey dashed lines illustrate the slope expected for a four-dimensional manifold. b, Intrinsic dimensionality of the one-side cues task is significantly lower than the dimensionality of the towers task. Two-tailed Wilcoxon rank-sum test, *P = 0.042; n = 7 mice (towers task) and n = 4 mice (one-side cues task); data are mean ± s.e.m. c, Choice-specific place cell sequences in the one-side cues task, similar to Fig. 2a. Sequences are divided into left-choice-preferring (top), right-choice-preferring (middle) and non-preferring (bottom) cells. Data are split between left-choice trials (left) and right-choice trials (right). Cells are shown in the same order within each row group. ΔF/F was normalized within each neuron.
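Intrinsic dimensionality estimates of the kind reported above are often derived from the scaling of neighbour counts with radius, as in the Grassberger–Procaccia correlation dimension (cited in the Methods references). The sketch below illustrates the idea on a point cloud: the slope of log C(r) versus log r approximates the dimensionality of the underlying manifold. This is a conceptual stand-in, not the paper's exact estimator.

```python
import numpy as np

def correlation_dimension(X, radii):
    """Grassberger-Procaccia-style estimate: C(r) is the fraction of
    distinct point pairs closer than r; the log-log slope of C(r) vs r
    approximates intrinsic dimensionality. Illustrative sketch."""
    # Pairwise Euclidean distances (upper triangle, distinct pairs only)
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    d = dist[np.triu_indices(len(X), k=1)]

    # Correlation sum at each radius
    C = np.array([(d < r).mean() for r in radii])

    # Slope of log C(r) vs log r over radii with nonzero counts
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope
```

For data sampled from a two-dimensional surface embedded in a higher-dimensional space, the recovered slope is close to 2, mirroring how a roughly four-dimensional slope appears in panel a.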
Extended Data Fig. 4 Cross-validation methods and results demonstrating how neural activity from single neurons is captured by coordinated population activity.
a, Illustration of the cross-validation method to calculate the decoding index in Fig. 3c. Data are split for training (solid colours) and testing (shaded colours). With the training data, a map is obtained from ΔF/F to latent dimensions and back. This map is then evaluated on the test data. b, To assess the performance of the map, we concatenate the neuron × time data in the test block and reconstructed test block into two vectors and calculate the correlation coefficient from the elementwise pairwise comparison of the vectors. The correlation coefficient was averaged across 10 individually held-out trials to yield the decoding index. c, Illustration of a similar analysis in which the activity of a single cell is decoded from a manifold fit to the rest of the neural population. One neuron (red) is removed before using MIND to obtain a set of latents. Next, in the training data (solid green), a map is calculated from the manifold to the activity of the held-out neuron. The map is then used to predict the test data (shaded green). The correlation coefficient is calculated as in b and averaged over five folds across 10 individually held-out neurons as the decoding index. d, Example of neural activity from 40 individually reconstructed neurons, in which the activity of each neuron was decoded from the five-dimensional manifold fit to the other cells following procedures in c (comparable to Fig. 3b, for which the method in a and b was used). ΔF/F is normalized to the maximum ΔF/F in the window shown. e, Cross-validated correlation coefficients between the activity of individual neurons in the real and reconstructed data, in which the reconstruction was accomplished with d-dimensional embeddings of the neural manifold. The decoding index is the correlation coefficient between the predicted and real ΔF/F of the held-out ROIs. n = 7 mice; data are mean ± s.e.m.
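The held-out-neuron analysis in panel c can be sketched in a few lines. The paper uses MIND to obtain nonlinear latents; as a hedged stand-in, the example below uses PCA for the embedding and a linear map for the reconstruction, then reports the same kind of decoding index (correlation between predicted and real held-out activity). All names are illustrative.

```python
import numpy as np

def leave_one_out_decoding_index(F, held_out, d=5):
    """Sketch of the held-out-neuron decoding index. F is a
    (timepoints x neurons) activity matrix. PCA stands in for MIND and a
    linear regression stands in for the nonlinear map; the paper's actual
    pipeline differs. Returns corr(predicted, real) on held-out data."""
    rest = np.delete(F, held_out, axis=1)
    target = F[:, held_out]

    # Split timepoints into train/test halves (the paper held out trials)
    n = len(F)
    train, test = np.arange(n) < n // 2, np.arange(n) >= n // 2

    # d-dimensional latents fit on the training block only
    mu = rest[train].mean(0)
    _, _, Vt = np.linalg.svd(rest[train] - mu, full_matrices=False)
    lat_train = (rest[train] - mu) @ Vt[:d].T
    lat_test = (rest[test] - mu) @ Vt[:d].T

    # Map from latents to the held-out neuron, fit on training data
    A = np.column_stack([lat_train, np.ones(train.sum())])
    w, *_ = np.linalg.lstsq(A, target[train], rcond=None)
    pred = np.column_stack([lat_test, np.ones(test.sum())]) @ w

    # Decoding index: correlation between predicted and real activity
    return np.corrcoef(pred, target[test])[0, 1]
```

A high index for low d indicates that single-neuron activity is well captured by a small number of coordinated population latents, as in panel e.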
a, The visual space of the accumulating towers task across a representative session. Top, the mean luminance of the virtual-reality visual field as a function of position in the T-maze. Bottom, four example frames. Note the high variability of luminance during the cue period, where bright towers are randomly presented on the left and right walls. b, Performing dimensionality reduction on the time series of the pixel values in the raw video stream using MIND reveals a low-dimensional manifold, reflecting the visual sensory structure of the accumulating towers task. Plotting luminance (top) and evidence (bottom) on the manifold reveals that luminance is represented as a smooth gradient, whereas evidence requires memory and is thus absent on the task manifold. c, Same as in b, but showing the neural manifold obtained from the mouse that ran the session (Fig. 3f). Note the absence of a luminance representation, but the emergence of evidence.
a, Similar to Fig. 3f, the view angle is plotted as colour on the three-dimensional embedding of the manifold. b, The five latent variables of the neural manifold embedded in a five-dimensional Euclidean space are better predicted by GPR from view angle and evidence values than from view angle and shuffled evidence values. Two-tailed Wilcoxon signed-rank test, *P = 0.016; n = 7 mice; data are mean ± s.e.m. Decoding index is the correlation coefficient between the predicted manifold values and true manifold values, averaged over the five dimensions of the manifold. c, Same as in b, but for decoding manifold values using position and velocity. The addition of velocity to position information significantly improves the decoding of manifold values. Two-tailed Wilcoxon signed-rank test, *P = 0.016; n = 7 mice; data are mean ± s.e.m. d, Same as in b, but for decoding using position and time. The addition of time information does not significantly increase how well manifold values are decoded. Two-tailed Wilcoxon signed-rank test, P = 0.30 (ns, not significant); n = 7 mice; data are mean ± s.e.m. e, We used PCA to separate the correlated and orthogonal dimensions between evidence and view angle and decoded both PC1 (correlated) and PC2 (orthogonal) from the five-dimensional embedding of the manifold. n = 7 mice; data are mean ± s.e.m. The decoding index is the correlation coefficient between the predicted and true principal component values. f, The view angle is better decoded from the neural manifold (five-dimensional embedding) in the towers task (‘Tow’), when evidence is also present, than in the alternation task (‘Alt’) when evidence is not present. Two-tailed Wilcoxon rank-sum test, P = 0.07; n = 7 mice (towers task) and n = 7 mice (alternation task); data are mean ± s.e.m. The decoding index is the correlation coefficient between the predicted and true view angle values. 
g, Average view angle trajectories, separated between left- and right-choice trials, for the mice in the towers task (n = 7 mice; blue, thin lines) and the alternation task (n = 7 mice; red, thin lines). Thick lines represent averages across mice. h, Average view angle values in the towers task (n = 7 mice; blue, thin lines) and the alternation task (n = 7 mice; red, thin lines) over all trials. Thick lines and shaded area are mean ± 95% bootstrapped confidence interval. i, Accuracy in predicting the upcoming choice (left), the choice of the mouse in the previous trial (centre) and whether the previous trial was rewarded (right) from d-dimensional embeddings of the neural manifold. n = 7 mice; data are mean ± s.e.m.
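The GPR decoding analyses in panels b–d predict manifold latents from behavioural variables and compare decoding indices between real and shuffled predictors. A minimal Gaussian-process regression with an RBF kernel, written from scratch so it is self-contained, is sketched below; hyperparameters and names are illustrative and do not reproduce the paper's exact fit.

```python
import numpy as np

def gpr_decode(X_train, y_train, X_test, length_scale=1.0, noise=0.1):
    """Minimal GP regression (RBF kernel, fixed hyperparameters) used to
    illustrate predicting a manifold latent from behavioural variables
    such as view angle and evidence. Illustrative sketch only."""
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * length_scale ** 2))

    # Posterior mean prediction: k(X*, X) (K + sigma^2 I)^-1 (y - mean)
    K = rbf(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train - y_train.mean())
    return rbf(X_test, X_train) @ alpha + y_train.mean()
```

The decoding index is then the correlation between predicted and true latent values; shuffling one predictor (for example, evidence) and observing a drop in this index is the logic behind panel b.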
a, Schematic to describe how doublets were defined. Orange and green are calcium traces of the cells under consideration. Grey is the calcium trace of a third cell. b, Twenty-five examples of doublets in a single session from one mouse. Each panel shows traces for trials in which the doublet was present. Orange traces are the neural activity from the first cell in the doublet; green traces are the neural activity from the second cell in the doublet. Heat maps represent the normalized neural activity of each cell across all trials in the session.
a, Trajectories through evidence and position in one session of the task. Each thin line represents a fit with a cubic spline to a single trial; thick lines represent fits over all trials in which the mouse was supposed to turn left or right. b, The average change in position and evidence over time across trials in a single session for a set of representative states in evidence and position space. c, Conceptual diagram showing four trajectories through the neural manifold in right-choice trials. Two different doublets are activated because the trajectories pass through their firing fields. d, Shuffling trial IDs within right-choice trials will disrupt doublet activity while maintaining trial-averaged place and choice preferences of each cell.
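The trial-ID shuffle in panel d has a simple structure: each cell's single-trial responses are reassigned to random trials independently per cell, which preserves every cell's trial-averaged place and choice tuning but destroys trial-by-trial co-fluctuations between cells, and hence doublets. A hedged sketch (array layout and names are illustrative):

```python
import numpy as np

def shuffle_trial_ids(activity, rng):
    """Within-condition trial-ID shuffle: permute trials independently for
    each cell. Trial-averaged tuning is preserved exactly; coordinated
    single-trial activity across cells (doublets) is disrupted.
    activity: array of shape (trials x cells x timepoints)."""
    n_trials, n_cells, _ = activity.shape
    shuffled = np.empty_like(activity)
    for c in range(n_cells):
        order = rng.permutation(n_trials)   # a fresh permutation per cell
        shuffled[:, c, :] = activity[order, c, :]
    return shuffled
```

Because each cell keeps its own set of single-trial responses, any statistic computed per cell (place field, choice preference) is unchanged; only cross-cell, same-trial structure is removed, which is exactly the null needed for the doublet and triplet comparisons.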
a, Distribution of the values in Fig. 4b. b, Distribution of the values in Fig. 4c. c, Distribution of the values in Fig. 4f. d, ROC curves for sequential activity predicted from the five-dimensional embedding of the manifold compared to sequential activity in real data. n = 7 mice. e, Similar to a, but for triplets. Inset shows that triplets are significantly more likely to appear in the real data than in the shuffled dataset in which trial IDs were shuffled. Two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001; n = 34,737 triplets. f, Similar to c, but for triplets, showing that left- and right-choice-predictive triplets from real data are more predictive than triplets obtained from the shuffled dataset in which trial IDs were shuffled. Left inset, left-predictive, n = 1,135 triplets, two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001; right inset, right-predictive, n = 1,755 triplets, two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001. g, Left-choice-predictive triplets are significantly more predictive than instances in which the first two cells in the triplet fire, but the third does not, or when the third cell fires alone. n = 1,135 triplets; two-tailed paired Student’s t-tests, Bonferroni corrected; 1 → 2 → 3 versus 1 → 2 → not 3, ****P < 0.0001; 1 → 2 → 3 versus not 1 → not 2 → 3, ****P < 0.0001; 1 → 2 → not 3 versus not 1→ not 2 → 3, P = 0.78. h, Notably, for left-choice-predictive triplets, in trials in which cells 1 and 2 fire, but cell 3 does not, significantly more trials end with the mouse turning right than the same instances in the shuffled dataset. n = 1,135 triplets, two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001. i, Same as in g, but for right-choice-predictive triplets. 
n = 1,755 triplets; two-tailed paired Student’s t-tests, Bonferroni corrected; 1 → 2 → 3 versus 1 → 2 → not 3, ****P < 0.0001; 1 → 2 → 3 versus not 1 → not 2 → 3, ****P < 0.0001; 1 → 2 → not 3 versus not 1 → not 2 → 3, P = 1.0. j, Same as in h, but for right-choice-predictive triplets. n = 1,755 triplets; two-tailed paired Student’s t-test, real versus shuffled data, ****P < 0.0001. For box plots, boundaries: 25–75th percentiles; midline, median; whiskers, minimum–maximum.
This file contains the Supplementary Discussion and Supplementary References.
Videos from the viewpoint of a mouse showing behavioural trials from the accumulating towers task. The first trial shows an example of the navigational component of the task: the animal initially turns too early and must correct itself before entering the correct arm.
Neural activity from 25 example cells recorded in the same imaging session, plotted on the three-dimensional embedding of the manifold. Similar to Fig. 3d, we show only ΔF/F values that are three standard deviations above the mean activity of each cell.
Two trajectories (left-choice trial, blue; right-choice trial, red) through manifold space in one imaging session from one animal. Position (left) and evidence (right) are plotted as colour on the three-dimensional embedding of the manifold. The bar at the bottom represents the time elapsed in the trial (cue and delay periods).
Nieh, E.H., Schottdorf, M., Freeman, N.W. et al. Geometry of abstract learned knowledge in the hippocampus. Nature 595, 80–84 (2021). https://doi.org/10.1038/s41586-021-03652-7