Decoding individual identity from brain activity elicited in imagining common experiences

Everyone experiences common events differently. This leads to personal memories that presumably provide neural signatures of individual identity when events are reimagined. We present initial evidence that these signatures can be read from brain activity. In doing so, we progress beyond previous work that has deployed generic group-level computational semantic models to distinguish between neural representations of different events, but has not revealed interpersonal differences in event representations. We scanned 26 participants' brain activity using functional Magnetic Resonance Imaging as they vividly imagined themselves personally experiencing 20 common scenarios (e.g., dancing, shopping, a wedding). Rather than adopting a one-size-fits-all approach that models scenarios generically, we constructed personal models from participants' verbal descriptions and self-ratings of the sensory, motor, cognitive, spatiotemporal, and emotional characteristics of the imagined experiences. We demonstrate that participants' neural representations are better predicted by their own models than by other people's. This showcases how neuroimaging and personalized models can quantify individual differences in imagined experiences.

participants would have first-hand experience of the different scenarios, we estimated this by having participants rate each imaginary scenario on a Likert scale from 0 (fictitious) to 6 (it has happened). We also asked them to rate how vividly each scenario was imagined (0-6, 6 = vivid). See "Detailed protocol for rating experiential attributes and scenario vividness and likelihood" in Supplementary Table 6 for details of the rating protocol.

Supplementary Table 1. Complete listing of RSA results for all ROIs, serving as a companion to Figure 4. The leftmost columns (second to seventh) cover RSA results comparing each individual's fMRI data to their own personal multimodal model. The rightmost columns (ninth to fourteenth) cover RSA results comparing each individual's fMRI data to a group-average multimodal model built from the other participants (e.g., participant 1's fMRI data would be compared to a group model corresponding to participants 2 to 26). t>0 and p correspond to the t-statistic and uncorrected p-value associated with one-sample t-tests against zero (1-tailed).

Supplementary Table. Companion to Figures 3 and 6. The leftmost columns (second to seventh) cover partial RSA results, in which each individual's fMRI data was compared to their own personal multimodal model while controlling for a group-average multimodal model derived from the other participants (as in Figure 3). t>0 and p correspond to the t-statistic and uncorrected p-value associated with one-sample t-tests against zero (1-tailed). fdrP corresponds to False Discovery Rate (FDR) corrected p-values. d corresponds to Cohen's d, which was computed by dividing the t-statistic by 26^(1/2). The rightmost three columns list results arising from decoding tests in which individuals' personal models were used to identify their fMRI data (as in Figure 6). decode_acc indicates the proportion of times (0 to 1) that individuals were correctly identified (note that these values were transformed to percentages in Figure 6). decode_p indicates the associated p-values, computed using permutation testing. Note that the p-values tabulated below are not precisely the same as those in Figure 2.

Supplementary Figure. Both the verbal and attribute models contributed to predicting person-specific fMRI representations. We estimated whether both the verbal and attribute models had made independent contributions to predicting fMRI representational structure in the eight anatomical ROIs illustrated in Figure 3.

To test this, we ran a partial correlation-based RSA: fMRI similarity vectors were correlated with verbal similarity vectors whilst controlling for attribute similarity vectors, and vice versa. In both cases similarity vectors were person-specific. Partial correlation coefficients for each participant were r-to-z transformed. One-sample t-tests were then applied to test whether the set of values was greater than zero (1-tailed). The bar plots illustrate the results for each of the 8 ROIs identified in Figure 3.

Supplementary Figure. Alternative group-averaging strategy: the group-level similarity matrix was computed from group-averaged data, rather than by averaging personal similarity matrices as was illustrated in Figure 2. The entire analysis presented in Figures 3 and 4 was repeated from scratch using this alternative group-averaging strategy, and yielded broadly the same pattern of results. Source data are provided as a Source Data file.

Supplementary Figure. Replication of "…elicited during the imagination of common scenarios" when the analysis was performed using the verbal and attribute models in isolation. Tests were repeated for each pairwise combination of the 26 participants.
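The partial correlation-based RSA and the statistics described in these captions (partial correlation, r-to-z transform, one-sample t-test against zero, Cohen's d = t/26^(1/2)) can be sketched as follows. This is a minimal illustration run on invented similarity vectors, not the study's data or code; the residual-based partial correlation shown here is one standard way to implement the computation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_pairs = 26, 190  # 26 participants; 190 scenario pairs (20x20 upper triangle)

def partial_corr(x, y, control):
    """Correlation between x and y after linearly removing `control` from both."""
    def residualize(v):
        # Regress v on [intercept, control] and keep the residuals.
        design = np.column_stack([np.ones_like(control), control])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residualize(x), residualize(y)
    return np.corrcoef(rx, ry)[0, 1]

# Invented similarity vectors standing in for one ROI's person-specific data.
z_vals = np.empty(n_subj)
for s in range(n_subj):
    attribute_sim = rng.normal(size=n_pairs)                       # control model
    verbal_sim = 0.5 * attribute_sim + rng.normal(size=n_pairs)    # target model
    fmri_sim = 0.4 * verbal_sim + 0.3 * attribute_sim + rng.normal(size=n_pairs)
    r = partial_corr(fmri_sim, verbal_sim, attribute_sim)
    z_vals[s] = np.arctanh(r)  # r-to-z (Fisher) transform, as in the caption

# One-sample t-test against zero (1-tailed) and Cohen's d = t / sqrt(N).
t, p = stats.ttest_1samp(z_vals, 0.0, alternative="greater")
d = t / np.sqrt(n_subj)
print(f"t = {t:.2f}, p = {p:.2g}, d = {d:.2f}")
```

In the tabulated results, FDR correction (the fdrP column) would then be applied across the family of p-values; recent SciPy versions provide `scipy.stats.false_discovery_control` for the Benjamini-Hochberg procedure.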

Supplementary Figure. Each bar illustrates the percentage of times that participant-specific models better predicted the same participant's fMRI representations than another participant's fMRI data (see Figure 2 and main text for details). P-values were estimated using permutation tests (see Methods) and are uncorrected. The 8 ROIs illustrated were identified in Figure 3. Source data are provided as a Source Data file.

Supplementary Figure. Replication of "…elicited in imagining personal experiences" when the analysis was instead performed on the subset of 9 males. Results show a broadly similar trend to before, though statistical significance estimates are weaker, reflecting the lower power associated with testing fewer participants (9 rather than the 26 in Figure 6).
Each bar illustrates the percentage of times that participant-specific models better predicted the same participant's fMRI representations than another participant's fMRI data (see Figure 2 and main text for details).
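The permutation tests used to attach p-values to these identification percentages can be sketched generically: compute the observed rate at which each participant's own model fits their own fMRI data better than another participant's, then compare that rate to a null distribution obtained by shuffling the model-to-brain assignment. The similarity matrix below is invented example data, and this pairwise scoring scheme is an illustrative assumption, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 9  # e.g. the subset of 9 male participants

# Hypothetical model-to-brain similarity matrix: entry [i, j] is how well
# participant i's model predicts participant j's fMRI representations.
# The diagonal is inflated so "own model fits own brain best" tends to hold.
sim = rng.normal(size=(n, n)) + 1.5 * np.eye(n)

def pairwise_accuracy(s):
    """Fraction of ordered pairs (i, j), i != j, where model i fits
    brain i better than it fits brain j."""
    wins, total = 0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                wins += s[i, i] > s[i, j]
                total += 1
    return wins / total

observed = pairwise_accuracy(sim)

# Null distribution: shuffle which brain each model is matched to.
null = []
for _ in range(1000):
    perm = rng.permutation(n)
    null.append(pairwise_accuracy(sim[:, perm]))
p_value = (1 + sum(a >= observed for a in null)) / (1 + len(null))

print(f"accuracy = {observed:.2%}, permutation p = {p_value:.4f}")
```

The `(1 + count) / (1 + n_perms)` form avoids reporting an impossible p-value of exactly zero from a finite number of permutations.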
The eight ROIs illustrated were identified in Figure 3.

Supplementary Figure 19. Different participants were characterized by different hemodynamic response functions (HRFs). We conjectured that for the current episodic simulation task, HRFs would differ between people.
If true, this would challenge the validity of modeling the current fMRI data using the same canonical HRF for each person. To test this, we separately estimated HRFs for each individual using multiple regression to predict voxel activation based on a time-lagged stimulus representation*. The resultant beta-weights at different time lags estimate the HRF unfolding over time. We explored two ways of modeling the visual stimuli: (1) as an onset "spike" (top left), or (2) as a "boxcar" reflecting when the stimulus was on display (bottom left). HRFs were separately estimated for each of the 20 scenarios, within each run. First, a separate stimulus time series was created for each scenario, within each run, at the same sample rate as fMRI (2.5 s). Ones were entered to mark stimulus display (spike/boxcar); the rest of the vector was zeros. To account for hemodynamic delays, the vector was copied 6 times (reflecting the inter-stimulus interval), and each copy was temporally offset by one TR later than the previous. Thus, if vector 1 had a one in position three, then vector 2 would have a one in position four.
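The time-lagged design just described can be sketched as follows, with the ridge fit (penalty = 1) applied as described in the remainder of this caption. All data are simulated: the TR and the 6 lags come from the text, but the onsets, run length, and "true" HRF are invented, and the nuisance regressors (head motion, linear trend) are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
tr = 2.5          # seconds per volume, as in the text
n_vols = 120      # length of one hypothetical run, in volumes
onsets = np.arange(5, n_vols - 10, 15)  # invented stimulus onsets (volume indices)

# Stimulus "spike" vector: ones at onset volumes, zeros elsewhere.
spike = np.zeros(n_vols)
spike[onsets] = 1.0

# Time-lagged design: 6 copies, each shifted one TR later than the previous.
n_lags = 6
design = np.column_stack([np.roll(spike, lag) for lag in range(n_lags)])
for lag in range(n_lags):        # zero out the wrap-around from np.roll
    design[:lag, lag] = 0.0

# Simulated voxel time series: a response peaking ~2 volumes (5 s) post onset.
true_hrf = np.array([0.0, 0.6, 1.0, 0.7, 0.3, 0.1])
voxel = design @ true_hrf + rng.normal(scale=0.3, size=n_vols)

# Normalize each column to mean 0, SD 1, then ridge regression (penalty = 1).
def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

X, y = zscore(design), zscore(voxel)
penalty = 1.0
betas = np.linalg.solve(X.T @ X + penalty * np.eye(n_lags), X.T @ y)
print("estimated HRF (one beta per lag):", np.round(betas, 2))
```

The six beta-weights trace the hemodynamic response across the 6 volumes after stimulus onset; in the actual analysis these profiles would then be averaged across scenarios and voxels.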
The vectors were concatenated into a 6-column matrix. Six head-motion parameters and a linear trend were concatenated with this matrix. Voxel activation time series were also represented as column vectors. Both fMRI and stimulus matrices were normalized so that each column had mean 0 and SD 1. To estimate person-specific HRFs, each voxel's activity was separately regressed on the stimulus matrix. This was repeated for each stimulus (scenario per run). To counteract overfitting, ridge regression was used (penalty = 1). Beta-weights estimated the magnitude of each voxel's hemodynamic response across the 6 volumes post stimulus onset. Beta-weight profiles were averaged across all scenarios and then across all voxels in Left Precuneus (which yielded strong results in our main analyses, e.g. Figure 3).

Touch:
Example medium score: 3; Perceiving something may in some cases involve touching it. Example low score: 0; Touch does not play a role.

Audition:
Please rate to what degree each of your scenarios involves hearing something. Example high score: 6; When something beeps it makes a sound. Example medium score: 3; Perceiving something may involve sound. Example low score: 0; The scenario involves no sound.

Music:
Please rate to what degree each of your scenarios involves music or musical sounds. Example high score: 6; Singing creates musical sounds. Example medium score: 3; Chiming noises can be somewhat musical.
Example low score: 0; The scenario involves no musical sounds.

Speech:
Please rate the degree to which each of your scenarios involves human speech sounds. Example high score: 6; Talking involves human speech sounds. Example medium score: 3; Babbling often refers to human speech that is hard to understand. Example low score: 0; The scenario involves no human speech sounds.

Taste:
Please rate to what degree each of your scenarios involves tasting something. Example high score: 6; Sipping involves tasting a beverage. Example medium score: 3; Cooking is often accompanied by tasting.
Example low score: 0; Taste does not play a role in this scenario.

Head:
Please rate to what degree each of your scenarios involves the use of the face, mouth, or tongue. Example high score: 6; Smiling is an action involving the face and mouth. Example medium score: 3; Breathing involves the mouth or nose, although they don't actually move. Example low score: 0; The head does not play a role in this scenario.

Upper Limbs:
Please rate to what degree each of your scenarios involves the use of the arms, hands, or fingers.
Example high score: 6; Applauding is an action involving the arms and hands. Example medium score: 3; Jogging usually involves the arms to some degree. Example low score: 0; The upper limbs do not play a role in this scenario.

Lower Limbs:
Please rate to what degree each of your scenarios involves the use of the leg(s) or feet. Example high score: 6; Jumping requires using your legs and feet. Example medium score: 3; Sitting involves some minimal positioning of the legs. Example low score: 0; The lower limbs do not play a role in this scenario.

Path:
Please rate to what degree each of your scenarios involves someone or something moving from one location to another. Example high score: 6; Traveling involves going from one place to another. Example medium score: 3; Searching may or may not require you to change your location. Example low score: 0; The scenario does not involve moving around.

Landmark:
Please rate to what degree each of your scenarios involves an action or activity that occurs at a fixed location, as on a map. Example high score: 6; Libraries and other buildings have a very fixed location. Example medium score: 3; Bushes have a fixed location but are not distinctive enough to be marked on maps. Example low score: 0; The imagined scenario could happen anywhere.

Time:
Please rate to what degree each of your scenarios involves an occurrence at a typical or predictable time.
Example high score: 6; Waking up is something you do at a certain time of the day. Example medium score: 3; Cooking is something that often occurs in the evening. Example low score: 0; The scenario does not occur at a specific time.

Social:
Please rate to what degree each of your scenarios involves interactions between people. Example high score: 6; Collaborating requires interactions between people. Example medium score: 3; Driving in a car is often done with other people. Example low score: 0; The scenario does not involve other people.

Communication:
Please rate to what degree each of your scenarios involves communication or transmitting/receiving information. Example high score: 6; Explaining is when a person clarifies by communicating information. Example medium score: 3; Painting may be an artistic form of communication.
Example low score: 0; The scenario does not involve communication.

Cognition:
Please rate to what degree each of your scenarios involves a mental activity or state of mind that involves thinking. Example high score: 6; Considering something involves thinking about it. Example medium score: 3; Grieving is a state of mind that involves some degree of thinking. Example low score: 0; The scenario does not involve thinking.

Pleasant:
Please rate to what degree each of your scenarios involves something that is pleasant. Example high score: 6; Relaxing is probably something you find pleasant. Example medium score: 3; Conversing is probably something you find somewhat pleasant. Example low score: 0; The scenario does not involve anything pleasant.

Unpleasant:
Please rate to what degree each of your scenarios involves something that is unpleasant. Example high score: 6; Arguing is something you probably find unpleasant. Example medium score: 3; Waiting is probably