Shared memories reveal shared structure in neural activity across individuals


Our lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? To address this question, participants viewed a 50-min movie and then verbally described its events during functional MRI, producing unguided, detailed descriptions lasting up to 40 min. As each person spoke, event-specific spatial patterns were reinstated in default-network, medial-temporal, and high-level visual areas. Individual event patterns were both highly discriminable from one another and similar across people, suggesting a consistent spatial organization. In many high-order areas, patterns were more similar between people recalling the same event than between recall and perception, indicating a systematic reshaping of percept into memory. These results reveal a common spatial organization for memories in high-level cortical areas, where encoded information is largely abstracted beyond sensory constraints, and show that neural patterns during perception are systematically altered across people into shared memory representations of real-life events.


Figure 1: Experiment design and behavior.
Figure 2: Pattern similarity between movie and recall.
Figure 3: Between-participants pattern similarity during spoken recall.
Figure 4: Classification accuracy.
Figure 5: Dimensionality of shared patterns.
Figure 6: Scene-level pattern similarity between individuals.
Figure 7: Alteration of neural patterns from perception to recollection.
Figure 8: Reinstatement in individual participants versus between participants.


References

1. Isola, P., Xiao, J., Torralba, A. & Oliva, A. What makes an image memorable? in 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 145–152 (2011). doi:10.1109/CVPR.2011.5995721
2. Halbwachs, M. The Collective Memory (Harper & Row Colophon, 1980).
3. Sperber, D. Explaining Culture: A Naturalistic Approach (Blackwell, 1996).
4. Coman, A. & Hirst, W. Cognition through a social network: the propagation of induced forgetting and practice effects. J. Exp. Psychol. Gen. 141, 321–336 (2012).
5. Roediger, H.L. III & Abel, M. Collective memory: a new arena of cognitive study. Trends Cogn. Sci. 19, 359–361 (2015).
6. Raichle, M.E. et al. A default mode of brain function. Proc. Natl. Acad. Sci. USA 98, 676–682 (2001).
7. Hasson, U., Nir, Y., Levy, I., Fuhrmann, G. & Malach, R. Intersubject synchronization of cortical activity during natural vision. Science 303, 1634–1640 (2004).
8. Jääskeläinen, I.P. et al. Inter-subject synchronization of prefrontal cortex hemodynamic activity during natural viewing. Open Neuroimag. J. 2, 14–19 (2008).
9. Wilson, S.M., Molnar-Szakacs, I. & Iacoboni, M. Beyond superior temporal cortex: intersubject correlations in narrative speech comprehension. Cereb. Cortex 18, 230–242 (2008).
10. Lerner, Y., Honey, C.J., Silbert, L.J. & Hasson, U. Topographic mapping of a hierarchy of temporal receptive windows using a narrated story. J. Neurosci. 31, 2906–2915 (2011).
11. Honey, C.J., Thompson, C.R., Lerner, Y. & Hasson, U. Not lost in translation: neural responses shared across languages. J. Neurosci. 32, 15277–15283 (2012).
12. Lahnakoski, J.M. et al. Synchronous brain activity across individuals underlies shared psychological perspectives. Neuroimage 100, 316–324 (2014).
13. Simony, E. et al. Dynamic reconfiguration of the default mode network during narrative comprehension. Nat. Commun. 7, 12141 (2016).
14. Regev, M., Honey, C.J., Simony, E. & Hasson, U. Selective and invariant neural responses to spoken and written narratives. J. Neurosci. 33, 15978–15988 (2013).
15. Wang, M. & He, B.J. A cross-modal investigation of the neural substrates for ongoing cognition. Front. Psychol. 5, 945 (2014).
16. Borges, J.L. Funes the Memorious. La Nación (Mitre, 1942).
17. Wheeler, M.E., Petersen, S.E. & Buckner, R.L. Memory's echo: vivid remembering reactivates sensory-specific cortex. Proc. Natl. Acad. Sci. USA 97, 11125–11129 (2000).
18. Danker, J.F. & Anderson, J.R. The ghosts of brain states past: remembering reactivates the brain regions engaged during encoding. Psychol. Bull. 136, 87–102 (2010).
19. Polyn, S.M., Natu, V.S., Cohen, J.D. & Norman, K.A. Category-specific cortical activity precedes retrieval during memory search. Science 310, 1963–1966 (2005).
20. Johnson, J.D., McDuff, S.G.R., Rugg, M.D. & Norman, K.A. Recollection, familiarity, and cortical reinstatement: a multivoxel pattern analysis. Neuron 63, 697–708 (2009).
21. Kuhl, B.A., Rissman, J., Chun, M.M. & Wagner, A.D. Fidelity of neural reactivation reveals competition between memories. Proc. Natl. Acad. Sci. USA 108, 5903–5908 (2011).
22. Buchsbaum, B.R., Lemire-Rodger, S., Fang, C. & Abdi, H. The neural basis of vivid memory is patterned on perception. J. Cogn. Neurosci. 24, 1867–1883 (2012).
23. Wing, E.A., Ritchey, M. & Cabeza, R. Reinstatement of individual past events revealed by the similarity of distributed activation patterns during encoding and retrieval. J. Cogn. Neurosci. 27, 679–691 (2015).
24. Bird, C.M., Keidel, J.L., Ing, L.P., Horner, A.J. & Burgess, N. Consolidation of complex events via reinstatement in posterior cingulate cortex. J. Neurosci. 35, 14426–14434 (2015).
25. Kriegeskorte, N., Mur, M. & Bandettini, P. Representational similarity analysis - connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4 (2008).
26. Buckner, R.L., Andrews-Hanna, J.R. & Schacter, D.L. The brain's default network: anatomy, function, and relevance to disease. Ann. NY Acad. Sci. 1124, 1–38 (2008).
27. Rugg, M.D. & Vilberg, K.L. Brain networks underlying episodic memory retrieval. Curr. Opin. Neurobiol. 23, 255–260 (2013).
28. Honey, C.J. et al. Slow cortical dynamics and the accumulation of information over long timescales. Neuron 76, 423–434 (2012).
29. Hasson, U., Malach, R. & Heeger, D.J. Reliability of cortical activity during natural stimulation. Trends Cogn. Sci. 14, 40–48 (2010).
30. Mitchell, T.M. et al. Learning to decode cognitive states from brain images. Mach. Learn. 57, 145–175 (2004).
31. Poldrack, R.A., Halchenko, Y.O. & Hanson, S.J. Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychol. Sci. 20, 1364–1372 (2009).
32. Shinkareva, S.V., Malave, V.L., Mason, R.A., Mitchell, T.M. & Just, M.A. Commonality of neural representations of words and pictures. Neuroimage 54, 2418–2425 (2011).
33. Kaplan, J.T. & Meyer, K. Multivariate pattern analysis reveals common neural patterns across individuals during touch observation. Neuroimage 60, 204–212 (2012).
34. Rice, G.E., Watson, D.M., Hartley, T. & Andrews, T.J. Low-level image properties of visual objects predict patterns of neural response across category-selective regions of the ventral visual pathway. J. Neurosci. 34, 8837–8844 (2014).
35. Charest, I., Kievit, R.A., Schmitz, T.W., Deca, D. & Kriegeskorte, N. Unique semantic space in the brain of each beholder predicts perceived similarity. Proc. Natl. Acad. Sci. USA 111, 14565–14570 (2014).
36. Wandell, B.A., Dumoulin, S.O. & Brewer, A.A. Visual field maps in human cortex. Neuron 56, 366–383 (2007).
37. Formisano, E. et al. Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron 40, 859–869 (2003).
38. Benson, N.C. et al. The retinotopic organization of striate cortex is well predicted by surface topology. Curr. Biol. 22, 2081–2085 (2012).
39. Moser, E.I., Kropff, E. & Moser, M.-B. Place cells, grid cells, and the brain's spatial representation system. Annu. Rev. Neurosci. 31, 69–89 (2008).
40. O'Keefe, J. & Conway, D.H. Hippocampal place units in the freely moving rat: why they fire where they fire. Exp. Brain Res. 31, 573–590 (1978).
41. Kriegeskorte, N. et al. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron 60, 1126–1141 (2008).
42. Hassabis, D. & Maguire, E.A. Deconstructing episodic memory with construction. Trends Cogn. Sci. 11, 299–306 (2007).
43. Ranganath, C. & Ritchey, M. Two cortical systems for memory-guided behaviour. Nat. Rev. Neurosci. 13, 713–726 (2012).
44. Ames, D.L., Honey, C.J., Chow, M.A., Todorov, A. & Hasson, U. Contextual alignment of cognitive and neural dynamics. J. Cogn. Neurosci. 27, 655–664 (2015).
45. Alba, J.W. & Hasher, L. Is memory schematic? Psychol. Bull. 93, 203–231 (1983).
46. Kurby, C.A. & Zacks, J.M. Segmentation in the perception and memory of events. Trends Cogn. Sci. 12, 72–79 (2008).
47. Baldassano, C. et al. Discovering event structure in continuous narrative perception and memory. Preprint at bioRxiv (2016).
48. Buzsáki, G. & Moser, E.I. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nat. Neurosci. 16, 130–138 (2013).
49. Hasson, U., Ghazanfar, A.A., Galantucci, B., Garrod, S. & Keysers, C. Brain-to-brain coupling: a mechanism for creating and sharing a social world. Trends Cogn. Sci. 16, 114–121 (2012).
50. Zadbood, A., Chen, J., Leong, Y.C., Norman, K.A. & Hasson, U. How we transmit memories to other brains: constructing shared neural representations via communication. Preprint at bioRxiv (2016).
51. McGuigan, P. A Study in Pink. Sherlock (BBC, 2010).
52. Stephens, G.J., Silbert, L.J. & Hasson, U. Speaker–listener neural coupling underlies successful communication. Proc. Natl. Acad. Sci. USA 107, 14425–14430 (2010).
53. Silbert, L.J., Honey, C.J., Simony, E., Poeppel, D. & Hasson, U. Coupled neural systems underlie the production and comprehension of naturalistic narrative speech. Proc. Natl. Acad. Sci. USA 111, E4687–E4696 (2014).
54. Desikan, R.S. et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980 (2006).
55. Shirer, W.R., Ryali, S., Rykhlevskaia, E., Menon, V. & Greicius, M.D. Decoding subject-driven cognitive states with whole-brain connectivity patterns. Cereb. Cortex 22, 158–165 (2012).
56. Kriegeskorte, N., Goebel, R. & Bandettini, P. Information-based functional brain mapping. Proc. Natl. Acad. Sci. USA 103, 3863–3868 (2006).
57. Chen, P.-H. et al. in Advances in Neural Information Processing Systems 28 (eds. Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M. & Garnett, R.) 460–468 (Curran Associates, 2015).
58. Naselaris, T., Kay, K.N., Nishimoto, S. & Gallant, J.L. Encoding and decoding in fMRI. Neuroimage 56, 400–410 (2011).
59. Mitchell, T.M. et al. Predicting human brain activity associated with the meanings of nouns. Science 320, 1191–1195 (2008).
60. Wild, F. lsa: latent semantic analysis. R package version 0.73.1 (2015).
61. Freeman, J., Heeger, D.J. & Merriam, E.P. Coarse-scale biases for spirals and orientation in human visual cortex. J. Neurosci. 33, 19695–19703 (2013).
62. Kamitani, Y. & Tong, F. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 8, 679–685 (2005).
63. Haxby, J.V. et al. A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron 72, 404–416 (2011).
64. Wang, L., Mruczek, R.E.B., Arcaro, M.J. & Kastner, S. Probabilistic maps of visual topography in human cortex. Cereb. Cortex 25, 3911–3931 (2015).
65. Yarkoni, T., Poldrack, R.A., Nichols, T.E., Van Essen, D.C. & Wager, T.D. Large-scale automated synthesis of human functional neuroimaging data. Nat. Methods 8, 665–670 (2011).
66. Cichy, R.M., Heinzle, J. & Haynes, J.-D. Imagery and perception share cortical representations of content and location. Cereb. Cortex 22, 372–380 (2012).
67. St-Laurent, M., Abdi, H. & Buchsbaum, B.R. Distributed patterns of reactivation predict vividness of recollection. J. Cogn. Neurosci. 27, 2000–2018 (2015).
68. Kosslyn, S.M. & Thompson, W.L. When is early visual cortex activated during visual mental imagery? Psychol. Bull. 129, 723–746 (2003).
69. Harrison, S.A. & Tong, F. Decoding reveals the contents of visual working memory in early visual areas. Nature 458, 632–635 (2009).
70. Serences, J.T., Ester, E.F., Vogel, E.K. & Awh, E. Stimulus-specific delay activity in human primary visual cortex. Psychol. Sci. 20, 207–214 (2009).

Acknowledgements


We thank M. Aly, C. Baldassano, M. Arcaro and E. Simony for scientific discussions and comments on earlier versions of the manuscript; J. Edgren for help with transcription; M. Arcaro for advice regarding visual area topography; P. Johnson for improving the classification analysis; P.-H. Chen and H. Zhang for development of the SRM code; and other members of the Hasson and Norman laboratories for their comments and support. This work was supported by the US National Institutes of Health (R01-MH094480, U.H.; 2T32MH065214-11, J.C.).

Author information

J.C., Y.C.L. and U.H. designed the experiment. J.C. and Y.C.L. collected and analyzed the data. J.C., U.H., Y.C.L., K.A.N. and C.J.H. designed analyses and wrote the manuscript. C.H.Y. produced the semantic labels.

Correspondence to Janice Chen.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Integrated supplementary information

Supplementary Figure 1 Overlap between default mode network (DMN) and movie/recall maps.

We defined the DMN for each individual using the posterior medial cortex ROI as a seed for functional connectivity during the first scan of the movie (23 minutes), thresholded at R > 0.4; a group-level DMN map was then created by averaging across participants. While the DMN is typically defined using resting state data, it has been previously demonstrated that this network can be mapped either during rest or during continuous narrative with largely the same results. See Table S3 for overlap calculations for all searchlight maps (from Figs. 2B, 2E, 3B, and 7B). Note that this DMN definition procedure is independent from the calculations of the searchlight maps, because functional connectivity is calculated across time (and during movie only), while the searchlight analyses were spatial pattern comparisons (between movie and recall). A) Overlap between the group DMN and the within-participant movie-recall searchlight map from Fig. 2B. 39.7% of this map falls within the DMN. B) Overlap between the group DMN and the between-participant recall-recall map from Fig. 3B. 50.7% of this map falls within the DMN.
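The seed-based connectivity step described in this caption (mean seed timecourse correlated with every voxel during the movie, thresholded at R > 0.4) can be sketched in a few lines. The function name, array shapes, and numpy-only implementation below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def seed_connectivity_map(bold, seed_voxels, threshold=0.4):
    """Seed-based functional connectivity, as sketched in the caption.

    bold        : (n_timepoints, n_voxels) BOLD time series (movie viewing).
    seed_voxels : indices of the posterior medial cortex seed ROI (assumed).
    Returns a boolean mask of voxels whose timecourse correlates with the
    mean seed timecourse at R > threshold (the caption's R > 0.4).
    """
    seed_tc = bold[:, seed_voxels].mean(axis=1)
    # Pearson correlation of every voxel with the seed timecourse,
    # computed as the mean product of z-scored signals
    seed_z = (seed_tc - seed_tc.mean()) / seed_tc.std()
    vox_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
    r = (vox_z * seed_z[:, None]).mean(axis=0)
    return r > threshold
```

Per the caption, such a mask would be computed for each individual and then averaged across participants to form the group-level DMN map.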

Supplementary Figure 2 Within-participant pattern reinstatement at a finer temporal scale.

While averaging at the scene level was effective for observing neural reinstatement, the recollection behavior we observed unfolded at a finer temporal scale than whole scenes. For example, participant 8 used 131 words over 67 seconds to describe scene 13. Here, we further examined reinstatement effects at individual timepoints. A) For each scene for a given participant, we compared the pattern of activity at each timepoint in the movie scene with the pattern from the first timepoint of recall of that scene in the posterior medial cortex ROI. These correlation values were averaged across all scenes and all participants. Correlations with the earliest timepoints of encoding scenes tended to be higher than correlations with later timepoints, suggesting sub-scene-level specificity of reinstatement. Error bars represent standard error across subjects. B) For each scene for a given participant, we compared the pattern of activity at each timepoint in the movie scene with the pattern from the last timepoint of recall of that scene in the posterior medial cortex ROI. These correlation values were averaged across all scenes and all participants. Error bars represent standard error across subjects.
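The timepoint-level comparison in panels A and B reduces to correlating each movie-scene timepoint with a single recall pattern. A minimal sketch, with a hypothetical function name and assumed array shapes (the original analysis used the posterior medial cortex ROI):

```python
import numpy as np

def timepoint_reinstatement(movie_scene, recall_pattern):
    """Correlate each timepoint of a movie scene with one recall pattern.

    movie_scene    : (n_timepoints, n_voxels) patterns during the movie scene.
    recall_pattern : (n_voxels,) pattern from a single recall timepoint
                     (e.g. the first or last TR of recalling that scene).
    Returns an (n_timepoints,) array of Pearson correlations, which can then
    be averaged across scenes and participants as in the caption.
    """
    m = movie_scene - movie_scene.mean(axis=1, keepdims=True)
    r = recall_pattern - recall_pattern.mean()
    num = m @ r
    denom = np.linalg.norm(m, axis=1) * np.linalg.norm(r)
    return num / denom
```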

Supplementary Figure 3 Pattern similarity between participants during the movie.

A) Schematic for the between-participant movie-movie analysis. BOLD data from the movie were divided into scenes, then averaged across time within each scene, resulting in one vector of voxel values for each movie scene and each recalled scene. Correlations were computed between matching pairs of movie scenes across participants. B) Searchlight map showing correlation values for across-participant pattern similarity during the movie. The searchlight was a 5x5x5 voxel cube. C) Correlation values for all 17 participants in the independently defined PMC (posterior medial cortex). Red circles show the average correlation of matching scenes and error bars show standard error across scenes; black squares show the average of the null distribution for that participant. At far right, the red circle shows the true participant average and error bars show standard error across participants; the black histogram shows the null distribution of the participant average; the white square shows the mean of the null distribution. D) Posterior medial cortex region of interest, a cluster in the “dorsal default mode network” set.
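The leave-one-out scene-matching analysis in panels A–C, together with its scene-shuffling null distribution, can be sketched as below. This is a simplified, numpy-only illustration with assumed array shapes, not the published code:

```python
import numpy as np

def between_participant_scene_similarity(scene_patterns, n_perm=1000, seed=0):
    """Leave-one-out between-participant pattern similarity with a null.

    scene_patterns : (n_subjects, n_scenes, n_voxels) scene-averaged patterns.
    For each subject, each scene's pattern is correlated with the average of
    the remaining subjects' patterns for the matching scene; a null
    distribution is built by shuffling scene labels.
    Returns (true mean correlation, null distribution of length n_perm).
    """
    rng = np.random.default_rng(seed)
    n_sub, n_scn, _ = scene_patterns.shape

    def mean_matching_corr(order):
        vals = []
        for s in range(n_sub):
            # average pattern of all other subjects, per scene
            others = scene_patterns[np.arange(n_sub) != s].mean(axis=0)
            for i in range(n_scn):
                a, b = scene_patterns[s, i], others[order[i]]
                vals.append(np.corrcoef(a, b)[0, 1])
        return float(np.mean(vals))

    true_val = mean_matching_corr(np.arange(n_scn))
    null = np.array([mean_matching_corr(rng.permutation(n_scn))
                     for _ in range(n_perm)])
    return true_val, null
```

The same skeleton applies to the movie-recall and recall-recall variants by swapping which patterns go into the two sides of the correlation.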

Supplementary Figure 4 Overlap of recall–recall map with visual areas.

To what extent did spoken recollection in this study engage visual imagery? Movie-recall reinstatement effects were not found in low-level visual areas, but instead were located in high-level visual areas, and extensively in higher-order brain regions outside of the visual system. Our observation of reinstatement in high-level visual areas is compatible with studies showing reinstatement in these regions during cued visual imagery. The lack of reinstatement effects in low-level areas may be due to the natural tendency of most participants to focus on the episodic narrative (the plot) when recounting the movie, rather than on fine visual details. See also Methods: Visual imagery. A) In gray, brain areas where recollection patterns were significantly similar across participants (map from Fig. 3B). In other colors, commonly studied visual areas. Retinotopic visual areas were taken from a published probabilistic atlas (Wang et al., 2014, Cereb. Cortex). Face-selective areas were generated using Neurosynth (Yarkoni et al., 2011, Nat. Methods). B) For each of the visual area ROIs shown in [A], similarity of scene-level recollection patterns was calculated between participants in the same manner as Fig. 3. Statistical significance was determined by shuffling scene labels to generate a null distribution of the participant average. For each region, the red circle shows the true participant average and error bars show standard error across participants; the black histogram shows the null distribution of the participant average; the white square shows the mean of the null distribution. In low-order visual regions, recall-recall pattern similarity was not different from chance; however, significant recall-recall pattern similarity was observed in higher-order visual regions (VO/PHC and face-selective areas).

Supplementary Figure 5 Between-participants pattern similarity in PMC, scene by scene.

A) Between-participants movie-movie correlation values for 50 individual scenes in the posterior medial cortex (PMC) ROI (same ROI as Fig. 2C, 2F, 3C). For each scene, each participant’s movie pattern from that scene was compared to the pattern from the corresponding movie scene averaged across the remaining participants. The bars show the average across participants for each scene. Error bars represent the standard error across participants. Striped bars indicate introductory video clips at the beginning of each functional scan (see Methods). B) Between-participants movie-recall correlation values for individual scenes in the PMC ROI (46 scenes were recalled by two or more participants). C) Between-participants recall-recall correlation values for individual scenes in the PMC ROI.

Supplementary Figure 6 Encoding model.

To explore what semantic information is represented in the shared neural patterns that support our ability to discriminate patterns of activity between scenes, we constructed an encoding model to predict neural activity patterns from semantic content. See Supp. Note 4 for additional details. A) Detailed semantic labels were generated by an independent coder: 1,000 time segments spanning the entire movie stimulus, and 10 labels for each segment. A score was derived for each of the 50 scenes for each label, creating 50-element predictor vectors. It should be noted that the list of 10 labels is by no means comprehensive, and is intended merely to serve as a starting point for future analyses. B) Predicted patterns in PMC were generated by regressing voxel activity on label values, and scene-level classification accuracy was assessed using a hold-2-out procedure validated across 100 combinations of independent groups (N=8 and N=9). Classification accuracy increased as predictors (labels) were added to the model, peaking at 69.5% with five predictors (chance level 50%). C) Predictors were ranked according to how much they improved accuracy for each of the 100 combinations; the most successful predictor was the proportion of time during a scene that speech was present (Speaking, ranked first for 80% of combinations), followed by the number of different locations visited during a scene (NumberLocations, ranked 2nd for 48%), arousal (Arousal, ranked 3rd for 31%), the proportion of time that written words were present (WrittenWords, ranked 4th for 51%), and valence (Valence, ranked 5th for 31%). The number of persons in a scene (NumberPersons) was ranked first for 10% of combinations. D) Confusion matrix for the 10 predictors. Note that when two predictors are correlated, one may dominate in the predictor rankings.
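The hold-2-out classification in panel B can be illustrated with a simplified loop: fit a linear map from label scores to voxels on all but two scenes, then check whether the two predicted patterns match their held-out scenes better than the swapped assignment (chance 50%). This sketch holds out random scene pairs and omits the 100 participant-group combinations (N=8 and N=9) used in the actual analysis; the function name and shapes are assumptions:

```python
import numpy as np

def hold2out_encoding_accuracy(labels, patterns, seed=0, n_folds=100):
    """Simplified hold-2-out encoding-model classification.

    labels   : (n_scenes, n_predictors) semantic label scores per scene.
    patterns : (n_scenes, n_voxels) scene-averaged neural patterns.
    On each fold, two scenes are held out, a linear map from labels to
    voxels is fit on the rest, and the two predicted patterns are matched
    to the two held-out actual patterns by correlation.
    """
    rng = np.random.default_rng(seed)
    n = len(labels)
    correct = 0
    for _ in range(n_folds):
        i, j = rng.choice(n, size=2, replace=False)
        train = np.setdiff1d(np.arange(n), [i, j])
        # least-squares fit: patterns ~ labels @ W
        W, *_ = np.linalg.lstsq(labels[train], patterns[train], rcond=None)
        pred_i, pred_j = labels[i] @ W, labels[j] @ W
        # correct assignment vs. swapped assignment of the two held-out scenes
        match = (np.corrcoef(pred_i, patterns[i])[0, 1]
                 + np.corrcoef(pred_j, patterns[j])[0, 1])
        mismatch = (np.corrcoef(pred_i, patterns[j])[0, 1]
                    + np.corrcoef(pred_j, patterns[i])[0, 1])
        correct += match > mismatch
    return correct / n_folds
```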

Supplementary Figure 7 Simulation of movie-to-recall pattern alteration.

In this simulation, five 125-voxel random patterns are created (five simulated subjects) and random noise is added to each one, such that the average inter-subject correlation is R=1.0 (red lines) or R=0.3 (blue lines). These are the “movie patterns”. Next, we simulate the change from movie pattern to recall pattern by 1) adding random noise (at different levels of intensity, y-axis) to every voxel in every subject to create the “recall patterns”, which are noisy versions of the movie pattern; and 2) adding a common pattern to each movie pattern to mimic the “systematic alteration” from movie pattern to recall pattern, plus random noise (at different levels of intensity, x-axis). We plot the average correlation among the five simulated subjects' recall patterns (Rec-Rec), as well as the average correlation between movie and recall patterns (Mov-Rec). A) Results when no common pattern is added, i.e., the recall pattern is merely the movie pattern plus noise (no systematic alteration takes place): even as noise varies at the movie-pattern stage and at the movie-to-recall change stage, similarity among recall patterns (Rec-Rec, solid lines) never exceeds the similarity of recall to movie (Mov-Rec, dotted lines). B) Results when a common pattern is added to each subject's movie pattern, in addition to the same levels of random noise, to generate the recall pattern. Now it becomes possible (even likely, under these simulated conditions) for the similarity among recall patterns (Rec-Rec, solid lines) to exceed the similarity of recall to movie (Mov-Rec, dotted lines). In short, when the change from movie to recall involves a systematic alteration across subjects, recall patterns may become more similar to each other than they are to the original movie pattern. Note that the similarity of the movie patterns to each other (movie-movie correlation) does not affect the results. See Methods: Simulation of movie-to-recall pattern alteration.
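The logic of this simulation can be reproduced in a few lines. The sketch below fixes the movie-stage inter-subject correlation at one level and varies only whether a common alteration pattern is added; the specific parameter values are illustrative, not the ones used for the figure:

```python
import numpy as np

def simulate_alteration(common_strength, noise, n_sub=5, n_vox=125, seed=0):
    """Toy version of the movie-to-recall alteration simulation.

    Each subject's "movie pattern" is a shared pattern plus subject noise.
    The "recall pattern" is the movie pattern plus (optionally) a common
    alteration pattern shared across subjects, plus fresh random noise.
    Returns (mean recall-recall corr, mean movie-recall corr).
    """
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(n_vox)
    movie = shared + 0.3 * rng.standard_normal((n_sub, n_vox))
    alteration = rng.standard_normal(n_vox)  # common across subjects
    recall = (movie + common_strength * alteration
              + noise * rng.standard_normal((n_sub, n_vox)))

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    rec_rec = np.mean([corr(recall[i], recall[j])
                       for i in range(n_sub) for j in range(i + 1, n_sub)])
    mov_rec = np.mean([corr(movie[i], recall[i]) for i in range(n_sub)])
    return rec_rec, mov_rec
```

With `common_strength=0`, recall-recall similarity stays below movie-recall similarity (panel A); with a sufficiently strong common alteration it can exceed it (panel B). Using more voxels than the figure's 125 reduces sampling noise in the toy comparison.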

Supplementary Figure 8 Scene-by-scene difference of recall–recall minus movie–recall in regions shown in Figure 7b.

A) In Fig. 7B we plotted brain regions in which participants’ recollection activity patterns were more similar to the recollection patterns in other individuals than they were to movie patterns (“neural alteration” effect). Here we show the results broken down scene-by-scene in the same regions. Error bars represent the standard error across participants. B) Recall-recall minus movie-recall difference values thresholded at 0.01.

Supplementary Figure 9 Subsequent memory analyses.

To examine how the systematic alteration of neural activity from movie to recall might be related to memorability, we divided scenes into remembered and forgotten for each participant. For each scene, the number of participants who had successfully recalled that scene was counted. We then extracted data from the PMC ROI and calculated the pairwise between-participants correlation during recall (same analysis as in Fig. 3A-C, except pairwise), the pairwise between-participants correlation between movie and recall (same analysis as in Fig. 2D-F, except pairwise), and used the difference (recall-recall similarity minus movie-recall similarity) as the degree of neural alteration, at the scene level. Pairwise comparisons were used because the mean value of pairwise correlations is not affected by the number of participants (which differed across data points in this analysis). A) We calculated Spearman's rank correlation between the number of participants who successfully recalled each scene and the average degree of neural alteration for each scene. The magnitude of neural alteration was significantly related to how many participants remembered that scene (R = 0.33, p = 0.03). In other words, the more a given movie scene pattern was altered in a systematic manner across subjects between perception and recall, the more likely that scene was to be remembered. B) A control analysis in PMC showing that between-participants movie-movie pattern similarity was not predictive of the likelihood of recall (R = -0.01, p > 0.9). C) A control analysis showing that the degree of neural alteration (i.e., recall-recall minus movie-recall) in early visual areas V1-V4 was not predictive of the likelihood of recall (R = 0.12, p = 0.43; same ROI as Fig. S4).
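Panel A's statistic is a Spearman rank correlation between per-scene recall counts and per-scene neural alteration. A numpy-only sketch (variable names are hypothetical, and the double-argsort ranking assumes no tied values):

```python
import numpy as np

def alteration_memorability_rho(n_recalled, rec_rec, mov_rec):
    """Spearman rank correlation between how many participants recalled
    each scene and that scene's degree of neural alteration
    (recall-recall minus movie-recall pattern similarity).

    All inputs are (n_scenes,) arrays. Ranks are computed by double
    argsort, which is valid when there are no ties.
    """
    alteration = np.asarray(rec_rec) - np.asarray(mov_rec)

    def ranks(x):
        return np.argsort(np.argsort(x)).astype(float)

    # Spearman's rho = Pearson correlation of the ranks
    return np.corrcoef(ranks(np.asarray(n_recalled)), ranks(alteration))[0, 1]
```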

Supplementary Figure 10 Hippocampal inter-subject correlation (ISC).

We examined hippocampal contributions to recall success. During movie viewing, we calculated the correlation between a given participant's hippocampal timecourse (using an anatomically defined whole-hippocampus ROI) and the average hippocampal timecourse of all other participants, for individual scenes (i.e., the inter-subject correlation (ISC) for each scene). For each participant, scenes were binned by whether they were later remembered or forgotten. ISC was significantly greater for remembered scenes than for forgotten scenes (left panel; 2-tailed paired t-test across participants, t = 2.17, p = 0.045), complementing previous results linking ISC in parahippocampal cortex to later recognition memory (Hasson et al., 2008, Neuron). The same analysis is shown for the hippocampus ROI split into anterior, middle, and posterior sections (second, third, and fourth panels from the left). A repeated-measures ANOVA with region (anterior, middle, posterior) and memory (remembered, forgotten) as factors revealed significant main effects of region, F(2,32) = 12.02, p < 0.0005, and of memory, F(1,16) = 4.98, p = 0.04, but no significant region × memory interaction, F(2,32) = 1.69, p = 0.2.

Supplementary Figure 11 No evidence of hippocampal sensitivity to the gap between part 1 and part 2 of the movie.

Evidence from time cells in the rodent hippocampus might predict that the hippocampus would be sensitive to the gap between the first segment and the second. In order to explore this question, we examined recall patterns in the hippocampus for the 3 scenes just before and 3 scenes just after the gap, specifically asking where the correlations of these patterns with their corresponding movie scenes fell in the distribution of all such movie-recall scene correlations. The left panel shows the distribution of movie-vs-recall pattern correlations for all 50 scenes (averaged across subjects), and the right panel shows the distribution of movie-vs-recall pattern correlations for the 3 scenes just before and 3 scenes just after the gap. There does not appear to be anything unusual about the scenes near the gap, in terms of their pattern similarity to the corresponding movie scenes (the near-gap values fall near the middle of the distribution). Thus, in this analysis, we did not find any evidence to support the hypothesis that the hippocampus is sensitive to the gap between part 1 and part 2 of the movie during recall.

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–11 and Supplementary Tables 1–3 (PDF 1888 kb)

Supplementary Methods Checklist

(PDF 503 kb)


About this article


Cite this article

Chen, J., Leong, Y., Honey, C. et al. Shared memories reveal shared structure in neural activity across individuals. Nat Neurosci 20, 115–125 (2017).
