Article

Precise temporal memories are supported by the lateral entorhinal cortex in humans

Nature Neuroscience, volume 22, pages 284–288 (2019)


There is accumulating evidence that the entorhinal-hippocampal network is important for temporal memory. However, relatively little is known about the precise neurobiological mechanisms underlying memory for time. In particular, whether the lateral entorhinal cortex (LEC) is involved in temporal processing remains an open question. During high-resolution functional magnetic resonance imaging (fMRI) scanning, participants watched a ~28-min episode of a television show. During a subsequent test, they viewed still-frames and indicated on a continuous timeline the precise time at which each still-frame had appeared during the episode. This procedure allowed us to measure error in seconds for each trial. We analyzed fMRI data from retrieval and found that high temporal precision was associated with increased blood-oxygen-level-dependent fMRI activity in the anterolateral entorhinal (a homolog of the LEC in rodents) and perirhinal cortices, but not in the posteromedial entorhinal and parahippocampal cortices. This suggests a previously unknown role for the LEC in processing of high-precision, minute-scale temporal memories.


The association of temporal and spatial contextual information with an experience is a critical component of episodic memory1,2,3. A rich literature has examined how spatial properties are encoded by hippocampal–entorhinal circuitry, including spatially selective cells in both the hippocampus4 and the medial entorhinal cortex (MEC)5,6,7. Temporal coding properties in the same network have only recently been examined. The discovery of ‘time cells’ in hippocampal CA1 and MEC8,9,10,11 suggests that the medial temporal lobes (MTL) may employ similar mechanisms and shared circuitry to encode both space and time10,12,13. In contrast to the MEC, the LEC appears to code for several elements of the sensory experience14, including item information15 and locations of objects in space14. Human fMRI studies have similarly shown that the LEC is preferentially selective for object identity information (that is, ‘what’), whereas the MEC is preferentially selective for spatial locations (that is, ‘where’)16,17. Whether the LEC provides temporal information to, or receives information from, the hippocampus to become integrated in episodic representations remains an open question. While the temporal coding properties of ‘time cells’ offer a suitable mechanism by which short timescales (milliseconds to seconds) may be encoded, it is not clear how the longer timescale of episodes (minutes) is encoded by these mechanisms. Additionally, episodic memory involves unique ‘one-shot’ encoding that is incidental in nature, while most studies assessing temporal coding properties involve explicit tasks and/or extensive training (for example, sequence learning). We address both of these challenges using a 28-min incidental viewing framework of a complex naturalistic stimulus (an episode of a television sitcom) and a continuous evaluation of the precision of subsequent temporal memory judgments (in the order of seconds to minutes). 
Here we demonstrate that the LEC plays a prominent role in temporal processing in a task involving a timescale of minutes. These results suggest that there may be multiple distinct mechanisms supporting temporal memory in the MTL and that timescale may be a critical variable that should be considered in future work.


Temporal judgments generate a range of accuracies between 1 and 3 min

During fMRI scanning, subjects watched a ~28 min television episode of a sitcom (Curb Your Enthusiasm, Home Box Office), and were asked during a later test to determine, on a continuous timeline, when still-frames extracted from the episode appeared during incidental viewing (Fig. 1). All analyses discussed were performed on data at retrieval. To ensure that subjects were able to accomplish the task and that behavioral performance would reflect a range of different accuracies, we quantified error in seconds on each trial. Average error was 155.54 s (2.6 min), with a s.d. of 163.58 s (Fig. 2a).

Fig. 1: Task parameters and description.
Fig. 1

During encoding, participants passively viewed a ~28-min television episode of Curb Your Enthusiasm, separated into three scans of 9 min and 26 s each. A 5-min resting-state fMRI scan took place before and after each scan for a total encoding and storage time of ~45 min. Subsequent testing blocks were divided into two scans of 7 min and 10 s each. During testing, participants were shown still-frames that appeared during the episode and were asked to indicate on a timeline, using an MR-compatible dial, when the event in question occurred. Each testing trial lasted 9 s to allow participants to home in on the temporal frame and enable more comprehensive indexing of temporal signals during retrieval. Perceptual baseline trials were also included, in which participants were asked to indicate which of the two circles on the screen was brighter.

Fig. 2: Behavioral performance.
Fig. 2

Error was calculated per trial as the time (in s) between subject placement and the actual time of appearance. a, The average error across all subjects was 153.3 s. Still-frames shown at retrieval were taken from the middle 1,545 s of the episode to avoid primacy and recency effects. b, Behavioral performance compared to chance. Participants in the fMRI experiment had significantly lower error than a separate group of participants who performed the same task but had never seen the episode (n = 19 participants, two-tailed Kolmogorov–Smirnov D = 0.4991, P < 0.001).

For each subject, we divided retrieval trials into thirds: ‘high-precision’, ‘medium-precision’ and ‘low-precision’. Across subjects, high-precision trials were associated with error <74 s and low-precision trials with error >170 s, indicating that the differences in absolute time were not drastic; the comparison is akin to examining differences in being accurate within approximately one vs three minutes. Trials with error exceeding 5 min were rare across all subjects and did not contribute substantially to any condition. Additionally, we ascertained that all participants were attentive to the episode and evaluated their semantic knowledge of it using a post-scan true–false test; average accuracy was 96%.
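The per-subject tertile split can be sketched as follows (an illustrative Python sketch; the function and variable names are ours, not taken from the study's analysis code):

```python
import numpy as np

def split_by_precision(errors_s):
    """Split one subject's trial errors (in seconds) into precision tertiles.

    Lower error means higher temporal precision, so trials are ranked by
    error and divided into three roughly equal groups.
    """
    order = np.argsort(np.asarray(errors_s, dtype=float))  # smallest error first
    high, medium, low = np.array_split(order, 3)
    return {"high": high, "medium": medium, "low": low}

# Example: nine trials with hypothetical errors in seconds
groups = split_by_precision([12, 300, 45, 80, 210, 150, 60, 400, 95])
# groups["high"] holds the indices of the three most precise trials
```

Because the split is relative to each subject's own error distribution, the error cutoffs for the tertiles (here, the <74 s and >170 s averages reported above) emerge from the data rather than being fixed in advance.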

To determine whether comparable accuracy could be driven by response biases (for example, a preference for specific portions of the timeline) or other factors unrelated to temporal memory, we conducted a separate control experiment in an independent sample. Subjects in this experiment did not watch the episode but were still asked to place the still-frames on the timeline. Because they had no memory of the episode, their performance provided a chance-level distribution. We compared the distributions of accuracy (absolute value of the trial-by-trial error in seconds) in the experimental fMRI sample and the control sample that did not view the episode, using a nonparametric two-tailed Kolmogorov–Smirnov test. The difference between the two distributions was significant (Kolmogorov–Smirnov D = 0.4991, P < 0.0001; Fig. 2b), confirming that performance in the fMRI participants did not merely reflect behavioral biases related to assessment via the continuous timeline. We also conducted a one-way repeated-measures analysis of variance (ANOVA) comparing trials that were a short (2–107 s), medium (108–186 s) or long (200–277 s) distance from a boundary; this was not statistically significant (F(2,18) = 3.29, P > 0.05), indicating that error did not differ significantly based on the distance of a trial from a segment boundary (Supplementary Fig. 1). Additionally, we found no evidence for regional modulation by vividness of recall. We asked 12 participants to provide vividness ratings after the scanner-based recall and compared high- and low-vividness trials; no differences surpassed our threshold of P < 0.05, Bonferroni–Holm corrected (Supplementary Fig. 2).
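The chance-level comparison can be sketched with SciPy's two-sample Kolmogorov–Smirnov test. The error distributions below are simulated placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated absolute errors (s): viewers' errors cluster low, while the
# no-view control group spreads roughly uniformly over the timeline.
fmri_errors = rng.gamma(shape=2.0, scale=80.0, size=500)
control_errors = rng.uniform(0.0, 1500.0, size=500)

# Nonparametric two-tailed test comparing the two error distributions
D, p = stats.ks_2samp(fmri_errors, control_errors)
```

The KS statistic D is the maximum distance between the two empirical cumulative distributions, which makes the test sensitive to any difference in shape or location without assuming normality.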

Anterolateral, but not posteromedial, entorhinal cortex is selectively engaged for precise temporal memory

Recent work using fMRI functional connectivity has clarified the boundaries of the LEC and MEC regions in the human brain and demonstrated that, consistent with nonhuman primate anatomical studies18, the human analog of rodent LEC is anterolateral (alEC), whereas the human analog of rodent MEC is posteromedial (pmEC)19,20. We used anatomical masks for alEC and pmEC to contrast the level of engagement as a function of temporal precision in these two particular regions. Contrasting high- and low-precision trials allowed us to examine the sensitivity of MTL regions to the temporal accuracy of recall. Voxel beta-coefficients were averaged within the regions of interest (ROIs) as an overall indicator of the degree of model fit with the underlying hemodynamic signal. We found significant temporal precision-related modulation in the alEC (t = 4.537, d.f. = 18, two-tailed P = 0.0003, Cohen’s d = 0.8808, Fig. 3a) but not in the pmEC (t = 0.3504, d.f. = 18, two-tailed P = 0.7301, Fig. 3b). To determine whether this difference across subregions of the entorhinal cortex was significant, we calculated the difference in beta-coefficients between high- and low-precision conditions in the alEC and pmEC (that is, a modulation score). We found that the difference in modulation score was also significant (t = 4.794, d.f. = 18, two-tailed P = 0.0001, Cohen’s d = 1.0886, Fig. 3c), suggesting that high-precision trials preferentially engaged the alEC but not the pmEC. To determine whether this selective engagement might extend upstream of the entorhinal cortex, we additionally averaged voxel activity in the perirhinal (PRC) and parahippocampal (PHC) cortices. As expected from the entorhinal cortex results, upstream cortices reflected a similar effect. We found a significant difference between high- and low-precision trials in the PRC (t = 4.331, d.f. = 18, two-tailed P = 0.0004, Cohen’s d = 0.8936, Fig. 3d) but not in the PHC (t = 0.1464, d.f. = 18, two-tailed P = 0.8852, Fig. 3e).
Modulation scores across the two regions were also significantly different (t = 3.193, d.f. = 18, P = 0.0005, Cohen’s d = 0.7213, Fig. 3f). Together, these results suggest that the extension of the ventral visual stream (PRC and alEC) is engaged in temporal processing on the scale of minutes, whereas the extension of the dorsal visual stream (PHC and pmEC) does not appear to show temporal precision-selective signals on the same scale.
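The modulation-score analysis (high- minus low-precision betas per subject, compared across ROIs with a paired t-test) can be sketched as follows; the beta values are illustrative placeholders, not data from the study:

```python
import numpy as np
from scipy import stats

def modulation_scores(beta_high, beta_low):
    """Per-subject modulation score: high- minus low-precision beta-coefficient."""
    return np.asarray(beta_high, dtype=float) - np.asarray(beta_low, dtype=float)

# Hypothetical ROI betas for five illustrative subjects (the study had n = 19):
# the alEC shows a consistent high > low difference, the pmEC does not.
alec_mod = modulation_scores([0.52, 0.61, 0.47, 0.55, 0.49],
                             [0.31, 0.38, 0.25, 0.36, 0.29])
pmec_mod = modulation_scores([0.40, 0.44, 0.38, 0.41, 0.39],
                             [0.41, 0.42, 0.39, 0.40, 0.40])

# Paired two-tailed t-test comparing modulation across the two regions
t, p = stats.ttest_rel(alec_mod, pmec_mod)
```

Using the paired difference of differences keeps each subject as their own control, so between-subject variability in overall BOLD amplitude does not inflate the region comparison.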

Fig. 3: Effects of precision on MTL regions.
Fig. 3

a,b,d,e,g,h, Comparing most precise (within 1 min) > least precise (>3 min) across hippocampal subfields and MTL cortical regions. Using two-tailed paired-samples t tests (n = 19 participants, Bonferroni–Holm corrected), we found significantly higher BOLD fMRI activity for high- vs. low-precision trials in alEC (t = 4.537, d.f. = 18, two-tailed P = 0.0003), PRC (t = 4.331, d.f. = 18, two-tailed P = 0.0004), DG/CA3 (t = 4.113, d.f. = 18, two-tailed P = 0.0007), and CA1 (t = 3.691, d.f. = 18, two-tailed P = 0.0017). No significant differences were found in pmEC (t = 0.3504, d.f. = 18, two-tailed P = 0.7301) and PHC (t = 0.1464, d.f. = 18, two-tailed P = 0.8852). n = 19 for all comparisons. c,f,i, Magnitude of modulation by precision. Difference metrics were calculated by subtracting beta-coefficients from the least precise condition from those of the most precise condition. Modulations were significantly higher in the alEC (t = 4.794, d.f. = 18, two-tailed P = 0.0001, minimum = −0.0751, 25th percentile = 0.0705, median = 0.1723, 75th percentile = 0.4288, maximum = 0.6932), PRC (t = 3.193, d.f. = 18, two-tailed P = 0.0005, minimum = −0.0535, 25th percentile = 0.0466, median = 0.1231, 75th percentile = 0.2884, maximum = 0.5558) and in hippocampal subfields (with a stronger effect in DG/CA3; t = 3.091, d.f. = 18, two-tailed P = 0.0063, minimum = −0.0615, 25th percentile = 0.0434, median = 0.1913, 75th percentile = 0.471, maximum = 0.8114) compared to the pmEC (minimum = −0.3783, 25th percentile = −0.1157, median = 0.0256, 75th percentile = 0.1032, maximum = 0.4919), PHC (minimum = −0.6703, 25th percentile = −0.1117, median = −0.0387, 75th percentile = 0.1733, maximum = 0.5688) and CA1 (minimum = −0.0905, 25th percentile = 0.0211, median = 0.1075, 75th percentile = 0.2832, maximum = 0.5437). n = 19 for all comparisons.

Hippocampal DG/CA3 is more engaged than CA1 for precise temporal memory

Next, we sought to examine whether hippocampal subfields show blood oxygen level-dependent (BOLD) fMRI signals modulated by the precision of temporal judgments. We used anatomical segmentations of the hippocampal dentate gyrus (DG) and CA3 (combined for a joint DG/CA3 label as in past fMRI studies), and CA1, to acquire regional averages of voxel-level activation during temporal memory judgments. We found precision-related modulations (high vs low) in both hippocampal subregions, with stronger effects in DG/CA3 (t = 4.113, d.f. = 18, two-tailed P = 0.0007, Cohen’s d = 0.622, Fig. 3g) compared to CA1 (t = 3.691, d.f. = 18, two-tailed P = 0.0017, Cohen’s d = 0.6871, Fig. 3h). Again, we calculated average modulation scores across the two subregions for all participants and found a significant difference across subfields (t = 3.091, d.f. = 18, two-tailed P = 0.0063, Cohen’s d = 0.4216, Fig. 3i), suggesting that modulation by temporal precision in DG/CA3 was stronger than in CA1.

Cortical regions preferentially engaged during precise temporal memory judgments

Since correct temporal memory judgments would be expected to engage circuitry involved in the experience of recollection and memory for rich contextual details, we examined how cortical regions outside of the MTL are modulated by temporal memory precision, focusing on regions previously implicated in recollection and detail memory21, including the angular gyrus (AG), retrosplenial cortex (RSC), precuneus (PreC), posterior cingulate cortex (PCC) and medial prefrontal cortex (mPFC). Using anatomical masks for these regions to average voxel-level activity during high- and low-precision trials, we found significant high vs low differences bilaterally in the mPFC (t = 3.851, d.f. = 18, P = 0.0017, Cohen’s d = 0.6469), AG (t = 3.41, d.f. = 18, P = 0.0031, Cohen’s d = 0.6471) and PCC (t = 2.75, d.f. = 18, P = 0.0132, Cohen’s d = 0.4547). We observed no significant modulation in the PreC (t = 1.937, d.f. = 18, P = 0.068) or RSC (t = 0.137, d.f. = 18, P = 0.8925). These results are summarized using modulation scores across cortical regions (Fig. 4). Collectively, analyses of cortical regions suggest that memories recollected with higher temporal precision engage some of the same cortical circuits and regions known to play a role in the representation of detail memory.

Fig. 4: Cortical reinstatement effects.
Fig. 4

Cortical temporal modulation scores across regions previously implicated in recollection and recall of contextual or detail memory, including the RSC (t = −0.0027, d.f. = 18, two-tailed P = 0.9979; minimum = −0.5316, 25th percentile = −0.2357, median = 0.0012, 75th percentile = 0.2478, maximum = 0.536); PreC (t = 1.685, d.f. = 18, two-tailed P = 0.1093; minimum = −0.2382, 25th percentile = −0.0648, median = 0.0345, 75th percentile = 0.1931, maximum = 0.4965); PCC (t = 2.7984, d.f. = 18, two-tailed P = 0.0119; minimum = −0.0851, 25th percentile = −0.0426, median = 0.0571, 75th percentile = 0.1635, maximum = 0.295); angular gyrus (AG: t = 3.3742, d.f. = 18, two-tailed P = 0.0034; minimum = −0.1062, 25th percentile = 0.0197, median = 0.0984, 75th percentile = 0.2662, maximum = 0.4543); mPFC (t = 2.899, d.f. = 18, two-tailed P = 0.0096; minimum = −0.1148, 25th percentile = −0.0118, median = 0.1211, 75th percentile = 0.2584, maximum = 0.0846); and the entire hippocampus (Hipp: t = 3.9518, d.f. = 18, two-tailed P = 0.0021; minimum = −0.0784, 25th percentile = −0.0077, median = 0.1245, 75th percentile = 0.3094, maximum = 0.6192), shown for comparison. Only modulation scores in the PCC, AG, mPFC and hippocampus are significantly different from zero (two-tailed one-sample t tests, Bonferroni–Holm corrected, n = 19 participants).


Results from this study suggest that temporal precision judgments in the order of minutes are associated with increased BOLD fMRI activity in the alEC and PRC, which is consistent with a broad role for these regions in the processing of external input including information about temporal context. The observation that the alEC–PRC network, but not the pmEC–PHC network, was significantly more engaged for trials with high temporal precision suggests that distinct mechanisms may be used to process and store spatial and longer-timescale temporal information. Past studies in rodents have demonstrated little spatial selectivity in the LEC but strong coding for object properties14,22. One study that used a similar timeline design asked participants to make retrospective estimates of the duration of time between audio clips from a radio story. These duration estimates correlated with BOLD fMRI pattern similarity in the right entorhinal cortex, though the authors did not segment the alEC and pmEC23. More recently, an examination of LEC firing properties during open exploration demonstrated strong temporal coding in the order of minutes, consistent with our results24.

The observation that the PRC was significantly more engaged for the most temporally precise trials is only partially consistent with previous studies. Inactivation of the PRC in rats has been associated with impaired temporal order memory for objects25, and a subset of neurons in the PRC alter their firing based on how recently an object was viewed26. In contrast, a number of studies have demonstrated a role for the PRC in object recognition but not the recall of contextual details per se27. Studies in humans using fMRI have reported signals linked to temporal context, operationalized in terms of the ordinal positions of items in a sequence, in the PHC but not the PRC28,29,30. It is worth noting that these prior studies used a short timescale of event proximity (seconds, not minutes), whereas the current study used a much longer timescale (minutes to tens of minutes). It is possible that coding for temporal relations on this longer timescale may involve distinct mechanisms that are more in line with the hypothesized functions for the alEC and PRC regions in semantic recall.

Consistent with the possibility that distinct neural mechanisms support short- and long-timescale temporal coding, we also found no temporally modulated signals in the PHC, a region that has been associated with fine temporal memory judgments29 on a short timescale. One previous study31 reported PHC engagement during retrieval of temporal order for events in a television show but that this activity was not associated with precision, and thus it is difficult to draw conclusions about whether the activity supported performance.

Another aspect of this work that differs significantly from the extant literature is that all fMRI data discussed are derived from retrieval, not encoding. Previous research investigating temporal memory and using a timeline23,29 found that fMRI activity at encoding predicted aspects of subsequent temporal memory. In contrast, our work sought to investigate networks that support retrieval of experiences to make temporal memory judgments. This difference in experimental design fills a gap in the literature and may partially explain the divergence between the reported results and those of previous studies.

One potential limitation is that the current study and other tasks using naturalistic stimuli are less able to control every aspect of encoding and retrieval. We tried to control for alternative explanations to the extent that this was possible. One possibility is that our results could have been driven by attention at encoding, with participants preferentially attending to objects in scenes for which they later had greater temporal precision. After they had completed the study, we asked 12 of our fMRI participants to rate how vividly they could recall the scene associated with each still-frame image from the experiment (Supplementary Fig. 2). We then used those ratings to perform a univariate analysis to test whether there was significantly higher BOLD fMRI activity for high vs low vividness trials in our regions of interest. We found no significant differences, indicating that the most vividly recalled scenes were not associated with higher alEC activity. It is possible that participants’ self-reports of vividness were imperfect or that, during encoding, participants preferentially attended to certain parts of the video that were later recalled more precisely.

Overall, naturalistic tasks and tightly controlled laboratory tasks have complementary strengths and weaknesses; tightly controlled laboratory experiments permit precise manipulation of stimuli but are less generalizable to real-life situations. We controlled for potential confounds as much as possible by choosing an episode from a television show that uses situational humor requiring an understanding of the characters and the narrative, has been used in the past by other investigators32, takes place in a relatively small number of physical locations and does not include a laugh track. Integrating evidence from both naturalistic and laboratory studies will advance understanding of memory systems.

It is important to consider the relative contributions of pure timing information vs sequence/event information in determining when events occurred. This is especially true for more naturalistic paradigms involving multisensory information, since events can be salient and have meaning. It is likely that both types of information are important for making temporal judgments. It would be useful for future studies to compare memory for events that occur in a meaningful order to events that have less of a sequential structure.

Our results demonstrate a prominent role for the alEC and PRC in temporal memory on the scale of minutes. This demonstration also brings timescale into consideration as a potentially critical variable in studying temporal memory that may affect which brain networks are recruited to support encoding and retrieval. Single MTL neurons fire at a preferred time during trials lasting a few seconds8. However, it is likely that a gradually changing pattern across many MTL neurons would be necessary to encode longer time periods (minutes to days). Experiences that span minutes to hours are probably associated with evolving internal states (wake/sleep cycles, hunger, and so on) that may help in distinguishing them from similar experiences that occurred at different times. Further work will be necessary to elucidate the specific molecular and synaptic mechanisms that underlie temporal storage and retrieval at these different timescales.



Twenty-six healthy adult volunteers were recruited from the University of California, Irvine and the surrounding community. This study was approved by the Institutional Review Board at the University of California, Irvine, and we complied with the study protocol as approved by that board. Participants gave informed consent in accordance with the board and received monetary compensation. All participants were right-handed and were screened for psychiatric disorders. Six were excluded due to excessive motion (>20% of repetition times excluded because the Euclidean norm of the motion derivative exceeded 0.3 mm), and one requested to leave the study after the first functional scan. Data from the remaining 19 participants (10 female, ages 18–29 years (mean = 21.42, s.d. = 2.85)) were analyzed. Sample size was calculated a priori based on power analyses demonstrating that, for high-resolution functional MRI studies, a minimum of 16 subjects is required to achieve 80% power at an alpha of 0.05.

Functional MRI task


Participants viewed an episode of Curb Your Enthusiasm (Season 2, Episode 9, ‘The Baptism’) while in the MRI scanner. This was presented using PsychoPy33 version 1.82.01. The episode was split into three equal parts, each 9 min and 26 s long (Fig. 1). Participants were instructed to pay attention to the videos and that they would be asked questions about them later. After each video segment, we collected a 5-min resting-state scan in which participants were instructed to look at a fixation cross in the middle of the screen.


Retrieval took place approximately 5 min after the final resting-state scan at encoding. During each of two runs, participants were presented with 72 still-frames from the video segments and were asked to indicate when during the episode they thought each still-frame occurred. Above each still-frame, a timeline appeared that ranged from 0:00 (the beginning of the episode) to 28:18 (the end of the episode). No still-frames from the first or last minute of the episode were used, to avoid primacy/recency effects. A cursor was visible and moved in synchronization with an MR-compatible scroll-click device similar to the scroll wheel on a mouse (Current Designs). On perceptual baseline trials, two gray circles appeared on the screen and participants were instructed to indicate which circle was brighter. Each of these trials was 9 s long, and they comprised 25% of total retrieval trials. Outside of the scanner, participants took a test about events that had occurred during the episode. All reported analyses were performed on retrieval data only.

Behavioral control experiment

To ensure that participants were performing the task adequately, we conducted a behavioral experiment on a separate group of participants. These participants did not watch the episode of Curb Your Enthusiasm; instead, they were asked to place the still-frames from the episode on a timeline. Because they were not able to use memory to guide their responses, their performance is considered to be at chance. We then performed a Kolmogorov–Smirnov test using GraphPad Prism to determine whether performance from this experiment was significantly different from that of the actual fMRI participants.

MRI acquisition

Neuroimaging data were acquired on a 3.0 Tesla Philips Achieva scanner, using a 32-channel sensitivity-encoding (SENSE) coil at the Neuroscience Imaging Center at the University of California, Irvine. A high-resolution three-dimensional (3D) magnetization-prepared rapid-gradient echo (MP-RAGE) structural scan (0.65 × 0.65 × 0.65 mm3) was acquired at the beginning of each session and used for co-registration. Each of two functional MRI scans consisted of a T2*-weighted echo planar imaging sequence using BOLD contrast: 2,500 ms repetition time, 26 ms echo time, 70° flip angle, 33 slices, 172 dynamics per run, 1.8 × 1.8 mm2 in-plane resolution, 1.8 mm slice thickness, 180 × 65.8 × 180 field of view. Slices were acquired as a partial axial volume and without offset or angulation. Four initial ‘dummy scans’ were acquired to ensure T1 signal stabilization.

Functional MRI analysis


Preprocessing and general linear model analyses were conducted using Analysis of Functional NeuroImages (AFNI) software35. First, data were brain extracted (3dSkullStrip); repetition time pairs in which the Euclidean norm of the motion derivative exceeded 0.3 mm were then excluded from the analysis. Functional data were slice-timing corrected (3dTshift), motion corrected (3dvolreg) and blurred to 2 mm (3dmerge). Each subject’s functional data were aligned to their anatomical scan (3dAllineate). We then used Advanced Normalization Tools (ANTs) software36 to align each subject’s data to a common template (0.65 mm isotropic).

General linear model

For each subject, retrieval trials were ordered by the amount of error in seconds (distance between the subject’s response and the correct answer). The ordered trials were then split into three conditions: high-, medium- and low-precision. These three conditions were entered into the general linear model using 3D deconvolution in AFNI (3dDeconvolve), in addition to six-dimensional motion regressors generated during motion correction. We restricted our analysis to task-activated voxels, which we obtained by thresholding the full F statistic containing all experimental conditions (thresholded at P = 0.35, cluster extent threshold = 20), which thus does not bias voxel selection toward any particular condition of interest. Subsequent analyses compared parameter estimates (beta-coefficients) from the most and least precise trials, relative to perceptual baseline trials. This was done using the AFNI 3dmaskave function to extract average beta-coefficients across the left and right components of each region.

Regions of interest were traced on the common template (0.65 mm isotropic) to which each subject’s data were aligned. Beta-coefficients were averaged across all voxels in each ROI (3dmaskave). For each ROI, paired t-tests were conducted on parameter estimates from the most precise and least precise trials. Bonferroni–Holm correction for multiple comparisons was applied within clusters of a priori ROIs: hippocampal and medial temporal lobe regions (CA1, DG/CA3, subiculum, alEC, pmEC, PRC and PHC) and other cortical regions (RSC, mPFC, AG, PCC and PreC). Cohen’s d was calculated for significant effects using the formula (Mean1 − Mean2)/pooled s.d.
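The effect-size and multiple-comparison steps described above can be sketched as follows (illustrative Python; the Holm implementation is a generic step-down procedure written for this sketch, not the authors' code):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation: (mean1 - mean2) / pooled s.d."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

def holm_correct(pvals, alpha=0.05):
    """Bonferroni-Holm step-down: returns a boolean array of rejected hypotheses."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)             # test smallest p-value first
    m = len(pvals)
    rejected = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            rejected[idx] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return rejected
```

Holm's procedure controls the family-wise error rate at the same level as plain Bonferroni but is uniformly more powerful, which matters when several ROI contrasts are tested together.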

Still-frame presentation was pseudo-randomized for each participant using PsychoPy33. Otherwise, high-, medium- and low-precision conditions were based on participant performance and therefore could not be randomized. Data collection and analysis were not performed blind to the conditions of the experiments.


We conducted the Kolmogorov–Smirnov test using GraphPad Prism. This software was also used for the following analyses: (1) to compare BOLD fMRI activity for high- and low-precision trials using two-tailed paired-samples t-tests; (2) to conduct a one-way repeated-measures ANOVA comparing trials with short, medium and long distances from video boundaries; and (3) to compare BOLD fMRI activity for high- and low-vividness trials using two-tailed paired-samples t-tests. To assess whether modulation scores (high- minus low-precision beta-coefficients) were significantly different from 0, we used RStudio (v.1.1.442) to conduct one-sample t-tests. Data distribution was assumed to be normal, but this was not formally tested. Individual data points are shown for key analyses. Sample size was calculated a priori based on power analyses demonstrating that, for high-resolution functional MRI studies, a minimum of 16 subjects is required to achieve 80% power at an alpha of 0.05. Histograms and scatterplots were generated using Matplotlib 2.0.237. Additional methodological details can be found in the Life Sciences Reporting Summary.
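The one-sample t-test against zero can be sketched with SciPy (the paper itself used RStudio for this step); the modulation scores below are illustrative placeholders, not data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical modulation scores (high- minus low-precision beta) for five
# illustrative subjects; the study itself had n = 19.
mod_scores = np.array([0.10, 0.20, 0.15, 0.05, 0.30])

# Two-tailed one-sample t-test against a population mean of zero
t, p = stats.ttest_1samp(mod_scores, popmean=0.0)
```

A significant positive t here would indicate that, across subjects, the region is reliably more active on high- than low-precision trials.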

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Code availability

The code used to collect and analyze data from this study is available from the corresponding author upon reasonable request.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


  1. Kesner, R. P. & Hunsaker, M. R. The temporal attributes of episodic memory. Behav. Brain Res. 215, 299–309 (2010).

  2. Ekstrom, A. D. & Bookheimer, S. Y. Spatial and temporal episodic memory retrieval recruit dissociable functional networks in the human brain. Learn. Mem. 14, 645–654 (2007).

  3. Ekstrom, A. D. & Ranganath, C. Space, time, and episodic memory: the hippocampus is all over the cognitive map. Hippocampus 28, 680–687 (2018).

  4. Hartley, T., Lever, C., Burgess, N. & O’Keefe, J. Space in the brain: how the hippocampal formation supports spatial cognition. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20120510 (2013).

  5. Hafting, T., Fyhn, M., Molden, S., Moser, M. B. & Moser, E. I. Microstructure of a spatial map in the entorhinal cortex. Nature 436, 801–806 (2005).

  6. Save, E. & Sargolini, F. Disentangling the role of the MEC and LEC in the processing of spatial and non-spatial information: contribution of lesion studies. Front. Syst. Neurosci. 11, 81 (2017).

  7. McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I. & Moser, M. B. Path integration and the neural basis of the ‘cognitive map’. Nat. Rev. Neurosci. 7, 663–678 (2006).

  8. MacDonald, C. J., Lepage, K. Q., Eden, U. T. & Eichenbaum, H. Hippocampal ‘time cells’ bridge the gap in memory for discontiguous events. Neuron 71, 737–749 (2011).

  9. MacDonald, C. J., Carrow, S., Place, R. & Eichenbaum, H. Distinct hippocampal time cell sequences represent odor memories in immobilized rats. J. Neurosci. 33, 14607–14616 (2013).

  10. Kraus, B. J. et al. During running in place, grid cells integrate elapsed time and distance run. Neuron 88, 578–589 (2015).

  11. Pastalkova, E., Itskov, V., Amarasingham, A. & Buzsáki, G. Internally generated cell assembly sequences in the rat hippocampus. Science 321, 1322–1327 (2008).

  12. Salz, D. M. et al. Time cells in hippocampal area CA3. J. Neurosci. 36, 7476–7484 (2016).

  13. Eichenbaum, H. On the integration of space, time, and memory. Neuron 95, 1007–1018 (2017).

  14. Deshmukh, S. S. & Knierim, J. J. Representation of non-spatial and spatial information in the lateral entorhinal cortex. Front. Behav. Neurosci. 5, 69 (2011).

  15. Knierim, J. J., Neunuebel, J. P. & Deshmukh, S. S. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local–global reference frames. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130369 (2013).

  16. Reagh, Z. M. & Yassa, M. A. Object and spatial mnemonic interference differentially engage lateral and medial entorhinal cortex in humans. Proc. Natl. Acad. Sci. USA 111, E4264–E4273 (2014).

  17. Reagh, Z. M., Noche, J. A., Tustison, N. J., Delisle, D., Murray, E. A. & Yassa, M. A. Functional imbalance of anterolateral entorhinal cortex and hippocampal dentate/CA3 underlies age-related object pattern separation deficits. Neuron 97, 1187–1198.e4 (2018).

  18. Suzuki, W. A. & Amaral, D. G. Perirhinal and parahippocampal cortices of the macaque monkey: cortical afferents. J. Comp. Neurol. 350, 497–533 (1994).

  19. Maass, A., Berron, D., Libby, L. A., Ranganath, C. & Düzel, E. Functional subregions of the human entorhinal cortex. eLife 4, e06426 (2015).

  20. Navarro Schröder, T., Haak, K. V., Zaragoza Jiménez, N. I., Beckmann, C. F. & Doeller, C. F. Functional topography of the human entorhinal cortex. eLife 4, e06738 (2015).

  21. Ranganath, C. & Ritchey, M. Two cortical systems for memory-guided behaviour. Nat. Rev. Neurosci. 13, 713–726 (2012).

  22. Knierim, J. J., Neunuebel, J. P. & Deshmukh, S. S. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local–global reference frames. Philos. Trans. R. Soc. Lond. B Biol. Sci. 369, 20130369 (2014).

  23. Lositsky, O. et al. Neural pattern change during encoding of a narrative predicts retrospective duration estimates. eLife 5, 1–40 (2016).

  24. Tsao, A. et al. Integrating time from experience in the lateral entorhinal cortex. Nature 561, 57–62 (2018).

  25. Hannesson, D. K., Howland, J. G. & Phillips, A. G. Interaction between perirhinal and medial prefrontal cortex is required for temporal order but not recognition memory for objects in rats. J. Neurosci. 24, 4596–4604 (2004).

  26. Brown, M. W. Neuronal responses and recognition memory. Semin. Neurosci. 8, 23–32 (1996).

  27. Eichenbaum, H., Yonelinas, A. P. & Ranganath, C. The medial temporal lobe and recognition memory. Annu. Rev. Neurosci. 30, 123–152 (2007).

  28. Hsieh, L. T., Gruber, M. J., Jenkins, L. J. & Ranganath, C. Hippocampal activity patterns carry information about objects in temporal context. Neuron 81, 1165–1178 (2014).

  29. Jenkins, L. J. & Ranganath, C. Prefrontal and medial temporal lobe activity at encoding predicts temporal context memory. J. Neurosci. 30, 15558–15565 (2010).

  30. Tubridy, S. & Davachi, L. Medial temporal lobe contributions to episodic sequence encoding. Cereb. Cortex 21, 272–280 (2011).

  31. Lehn, H. et al. A specific role of the human hippocampus in recall of temporal sequences. J. Neurosci. 29, 3475–3484 (2009).

  32. Furman, O., Dorfman, N., Hasson, U., Davachi, L. & Dudai, Y. They saw a movie: long-term memory for an extended audiovisual narrative. Learn. Mem. 14, 457–467 (2007).

  33. Peirce, J. W. PsychoPy--psychophysics software in Python. J. Neurosci. Methods 162, 8–13 (2007).

  34. GraphPad Prism v.7.00 (GraphPad Software, Inc., 2017).

  35. Cox, R. W. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput. Biomed. Res. 29, 162–173 (1996).

  36. Avants, B. B., Tustison, N. & Song, G. Advanced Normalization Tools (ANTS) (Sherbrooke Connectivity Imaging Lab, 2009).

  37. Hunter, J. D. Matplotlib: a 2D graphics environment. Comput. Sci. Eng. 9, 90–95 (2007).



Acknowledgements

We thank M. Tsai, J. Noche and A. Chun for assistance with data collection. We also thank C. Stark, N. Fortin and D. Huffman for helpful discussions. This work was supported by US NIH grants nos. P50AG05146, R01MH1023921 and R01AG053555 (PI: M.A.Y.), and Training Grant no. T32DC010775 (to M.E.M., PI: Metherate).

Author information


  1. Department of Neurobiology and Behavior, Center for the Neurobiology of Learning and Memory, University of California, Irvine, Irvine, CA, USA

    Maria E Montchal & Michael A Yassa

  2. Center for Neuroscience, University of California, Davis, Davis, CA, USA

    Zachariah M Reagh




Author contributions

M.E.M. and M.A.Y. designed the experiment. M.E.M. collected and analyzed the data with contributions from Z.M.R. M.E.M., Z.M.R. and M.A.Y. contributed substantially to the interpretation of results. M.E.M. and M.A.Y. drafted and revised the manuscript with support from Z.M.R.

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Michael A Yassa.

Integrated supplementary information

  1. Supplementary Fig. 1 Effect of distance from segment boundary on performance.

    A one-way repeated-measures ANOVA was conducted to determine whether performance differed as a function of each trial’s distance from a segment boundary at encoding (n = 19 participants). A segment boundary is defined as the beginning or end of a video segment at encoding (the episode was split into three segments). Error did not differ significantly between trials at short (2–107 s), medium (108–186 s) and long (200–277 s) distances from a segment boundary [F(2,18) = 3.29, p = 0.0506].

  2. Supplementary Fig. 2 Effect of vividness on MTL and cortical regions.

    After scanning, participants viewed the still-frames one more time and indicated how vividly they could recall the scene associated with each one on a 5-point scale (n = 12 participants). High-, medium- and low-vividness trials were entered into a GLM. Two-tailed paired t-tests were conducted on high- and low-vividness beta coefficients, and no comparison remained significant after correcting for multiple comparisons using the Bonferroni–Holm method: alEC [t = 0.4983, df = 11, p = 0.6281], pmEC [t = 1.947, df = 11, p = 0.0774], angular gyrus [t = 3.06, df = 11, p = 0.0109], MPFC [t = 2.956, df = 11, p = 0.0131; critical p = 0.0083], PRC [t = 0.4744, df = 11, p = 0.6445], PHC [t = 1.976, df = 11, p = 0.0738; critical p = 0.01], ACC [t = 0.5422, df = 11, p = 0.5985], PCC [t = 0.1654, df = 11, p = 0.8716], DG/CA3 [t = 0.7672, df = 11, p = 0.4591], CA1 [t = 0.6167, df = 11, p = 0.549], precuneus [t = 0.3441, df = 11, p = 0.7373], RSC [t = 0.703, df = 11, p = 0.4967].
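The Bonferroni–Holm step-down logic applied to these comparisons can be sketched as follows, using the twelve reported two-tailed p-values. This is a minimal illustration of the correction procedure, not the analysis code used in the study:

```python
# Holm-Bonferroni step-down over the twelve reported p-values
# (one per region of interest).
pvals = {
    "alEC": 0.6281, "pmEC": 0.0774, "angular gyrus": 0.0109,
    "MPFC": 0.0131, "PRC": 0.6445, "PHC": 0.0738,
    "ACC": 0.5985, "PCC": 0.8716, "DG/CA3": 0.4591,
    "CA1": 0.549, "precuneus": 0.7373, "RSC": 0.4967,
}

alpha = 0.05
# Sort p-values ascending; the i-th smallest is compared to alpha / (m - i).
ordered = sorted(pvals.items(), key=lambda kv: kv[1])
m = len(ordered)
significant = []
for i, (region, p) in enumerate(ordered):
    if p <= alpha / (m - i):
        significant.append(region)
    else:
        break  # the step-down procedure stops at the first failure

print(significant)  # prints [] -- no region survives correction
```

Here the smallest p-value (0.0109, angular gyrus) already exceeds its Holm threshold of 0.05/12 ≈ 0.0042, so the procedure stops immediately and no region is declared significant, consistent with the legend above.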
