Abstract
Traversing sensory environments requires keeping relevant information in mind while simultaneously processing new inputs. Visual information is kept in working memory via feature-selective responses in early visual cortex, but recent work has suggested that new sensory inputs obligatorily wipe out this information. Here we show region-wide multiplexing abilities in classic sensory areas, with population-level response patterns in early visual cortex representing the contents of working memory alongside new sensory inputs. In a second experiment, we show that when people get distracted, this leads to both disruptions of mnemonic information in early visual cortex and decrements in behavioral recall. Representations in the intraparietal sulcus reflect actively remembered information encoded in a transformed format, but not task-irrelevant sensory inputs. Together, these results suggest that early visual areas play a key role in supporting high-resolution working memory representations that can serve as a template for comparison with incoming sensory information.
Data availability
We have uploaded all preprocessed fMRI and behavioral data, from each subject and ROI, to the Open Science Framework (OSF) at https://osf.io/dkx6y. An accompanying wiki at the same location provides an overview of all the data and code.
Code availability
The experiment code used during data collection and the analysis code used to generate the figures in the main manuscript and Supplementary Materials are available from the Open Science Framework (OSF) at https://osf.io/dkx6y.
References
Harrison, S. A. & Tong, F. Decoding reveals the contents of visual working memory in early visual areas. Nature 458, 632–635 (2009).
Serences, J. T., Ester, E. F., Vogel, E. K. & Awh, E. Stimulus-specific delay activity in human primary visual cortex. Psychol. Sci. 20, 207–214 (2009).
Riggall, A. C. & Postle, B. R. The relationship between working memory storage and elevated activity as measured with functional magnetic resonance imaging. J. Neurosci. 32, 12990–12998 (2012).
Christophel, T. B., Hebart, M. N. & Haynes, J. D. Decoding the contents of visual short-term memory from human visual and parietal cortex. J. Neurosci. 32, 12983–12989 (2012).
Ester, E. F., Anderson, D. E., Serences, J. T. & Awh, E. A neural measure of precision in visual working memory. J. Cogn. Neurosci. 25, 754–761 (2013).
Bettencourt, K. C. & Xu, Y. Decoding the content of visual short-term memory under distraction in occipital and parietal areas. Nat. Neurosci. 19, 150–157 (2016).
Mendoza-Halliday, D., Torres, S. & Martinez-Trujillo, J. C. Sharp emergence of feature-selective sustained activity along the dorsal visual pathway. Nat. Neurosci. 17, 1255–1262 (2014).
Stokes, M. G. ‘Activity-silent’ working memory in prefrontal cortex: a dynamic coding framework. Trends Cogn. Sci. 19, 394–405 (2015).
Ester, E. F., Rademaker, R. L. & Sprague, T. S. How do visual and parietal cortex contribute to visual short-term memory? eNeuro 3, ENEURO.0041-16.2016 (2016).
Nassi, J. J. & Callaway, E. M. Parallel processing strategies of the primate visual system. Nat. Rev. Neurosci. 10, 360–372 (2009).
Van Kerkoerle, T., Self, M. W. & Roelfsema, P. R. Layer-specificity in the effects of attention and working memory on activity in primary visual cortex. Nat. Commun. 8, 13804 (2017).
Miller, E. K., Li, L. & Desimone, R. Activity of neurons in anterior inferior temporal cortex during a short-term memory task. J. Neurosci. 13, 1460–1478 (1993).
Serences, J. T. Neural mechanisms of information storage in visual short-term memory. Vis. Res. 128, 53–67 (2016).
Brouwer, G. J. & Heeger, D. J. Decoding and reconstructing color from responses in human visual cortex. J. Neurosci. 29, 13992–14003 (2009).
Sprague, T. C., Saproo, S. & Serences, J. T. Visual attention mitigates information loss in small- and large-scale neural codes. Trends Cogn. Sci. 19, 215–226 (2015).
Rademaker, R. L., Bloem, I. M., De Weerd, P. & Sack, A. T. The impact of interference on short-term memory for visual orientation. J. Exp. Psychol. Hum. Percept. Perform. 41, 1650–1665 (2015).
Wildegger, T., Meyers, N. E., Humphreys, G. & Nobre, A. C. Supraliminal but not subliminal distracters bias working memory recall. J. Exp. Psychol. Hum. Percept. Perform. 41, 826–839 (2015).
Silver, M. A., Ress, D. & Heeger, D. J. Topographic maps of visual spatial attention in human parietal cortex. J. Neurophysiol. 94, 1358–1371 (2005).
Serences, J. T. & Yantis, S. Selective visual attention and perceptual coherence. Trends Cogn. Sci. 10, 38–45 (2006).
Poltoratski, S., Ling, S., McCormack, D. & Tong, F. Characterizing the effects of feature salience and top-down attention in the early visual system. J. Neurophysiol. 118, 564–573 (2017).
Sprague, T. C., Itthipuripat, S., Vo, V. A. & Serences, J. T. Dissociable signatures of visual salience and behavioral relevance across attentional priority maps in human cortex. J. Neurophysiol. 119, 2153–2165 (2018).
Murray, J. D. et al. Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex. Proc. Natl Acad. Sci. USA 114, 394–399 (2017).
DiCarlo, J. J., Zoccolan, D. & Rust, N. C. How does the brain solve visual object recognition? Neuron 73, 415–434 (2012).
Rademaker, R. L., Park, Y. E., Sack, A. T. & Tong, F. Evidence of gradual loss of precision for simple features and complex objects in visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 44, 925–940 (2018).
Bisley, J. W., Zaksas, D., Droll, J. A. & Pasternak, T. Activity of neurons in cortical area MT during a memory for motion task. J. Neurophysiol. 91, 286–300 (2004).
Zaksas, D. & Pasternak, T. Direction signals in the prefrontal cortex and in area MT during a working memory for visual motion task. J. Neurosci. 26, 11726–11742 (2006).
Gayet, S. et al. Visual working memory enhances the neural response to matching visual input. J. Neurosci. 37, 6638–6647 (2017).
Merrikhi, Y. et al. Spatial working memory alters the efficacy of input to visual cortex. Nat. Commun. 8, 15041 (2017).
Miller, E. K., Li, L. & Desimone, R. A neural mechanism for working and recognition memory in inferior temporal cortex. Science 254, 1377–1379 (1991).
Maunsell, J. H. R., Sclar, G., Nealey, T. A. & DePriest, D. D. Extraretinal representations in area V4 in the macaque monkey. Vis. Neurosci. 7, 561–573 (1991).
Miller, E. K. & Desimone, R. Parallel neuronal mechanisms for short-term memory. Science 263, 520–522 (1994).
Miller, E. K., Erickson, C. A. & Desimone, R. Neural mechanisms of visual working memory in prefrontal cortex of the macaque. J. Neurosci. 16, 5154–5167 (1996).
Jacob, S. N. & Nieder, A. Complementary roles for primate frontal and parietal cortex in guarding working memory from distractor stimuli. Neuron 83, 226–237 (2014).
Qi, X.-L., Elworthy, A. C., Lambert, B. C. & Constantinidis, C. Representation of remembered stimuli and task information in the monkey dorsolateral prefrontal and posterior parietal cortex. J. Neurophysiol. 113, 44–57 (2015).
Silver, M. A. & Kastner, S. Topographic maps in human frontal and parietal cortex. Trends Cogn. Sci. 13, 488–495 (2009).
Bressler, D. W. & Silver, M. A. Spatial attention improves reliability of fMRI retinotopic mapping signals in occipital and parietal cortex. Neuroimage 53, 526–533 (2010).
Deutsch, D. Tones and numbers: specificity of interference in immediate memory. Science 168, 1604–1605 (1970).
Deutsch, D. Interference in memory between tones adjacent in the musical scale. J. Exp. Psychol. 100, 228–231 (1973).
Magnussen, S., Greenlee, M. W., Asplund, R. & Dyrnes, S. Stimulus-specific mechanisms of visual short-term memory. Vis. Res. 31, 1213–1219 (1991).
Magnussen, S. & Greenlee, M. W. Retention and disruption of motion information in visual short-term memory. J. Exp. Psychol. Learn. Mem. Cogn. 18, 151–156 (1992).
Pasternak, T. & Zaksas, D. Stimulus specificity and temporal dynamics of working memory for visual motion. J. Neurophysiol. 90, 2757–2762 (2003).
Van der Stigchel, S., Merten, H., Meeter, M. & Theeuwes, J. The effects of a task-irrelevant visual event on spatial working memory. Psychon. Bull. Rev. 14, 1066–1071 (2007).
Huang, J. & Sekuler, R. Distortions in recall from visual memory: two classes of attractors at work. J. Vis. 10, 1–27 (2010).
Nemes, V. A., Parry, N. R., Whitaker, D. & McKeefry, D. J. The retention and disruption of color information in human short-term visual memory. J. Vis. 12, 1–14 (2012).
Bae, G. Y. & Luck, S. J. Interactions between visual working memory representations. Atten. Percep. Psychophys. 79, 2376–2395 (2017).
Lorenc, E. S., Sreenivasan, K. K., Nee, D. E., Vandenbroucke, A. R. E. & D’Esposito, M. Flexible coding of visual working memory representations during distraction. J. Neurosci. 38, 5267–5276 (2018).
Chunharas, C., Rademaker, R. L., Brady, T. F. & Serences, J. T. Adaptive memory distortion in visual working memory. Preprint at PsyArXiv https://psyarxiv.com/e3m5a/ (2019).
Sprague, T. C., Ester, E. F. & Serences, J. T. Restoring latent visual working memory representations in human cortex. Neuron 91, 694–707 (2016).
Christophel, T. B., Iamshchinina, P., Yan, C., Allefeld, C. & Haynes, J. D. Cortical specialization for attended versus unattended working memory. Nat. Neurosci. 21, 494–496 (2018).
Rose, N. S. et al. Reactivation of latent working memories with transcranial magnetic stimulation. Science 354, 1136–1139 (2016).
Brainard, D. H. The Psychophysics Toolbox. Spat. Vis. 10, 433–436 (1997).
Kleiner, M. et al. What’s new in Psychtoolbox-3. Perception 36, 1–16 (2007).
Tyler, C. W. & Nakayama, K. Grating induction: a new type of aftereffect. Vis. Res. 20, 437–441 (1980).
Goeleven, E., De Raedt, R., Leyman, L. & Verschuere, B. The Karolinska directed emotional faces: a validation study. Cogn. Emot. 22, 1094–1118 (2008).
Rovamo, J. & Virsu, V. An estimation and application of the human cortical magnification factor. Exp. Brain Res. 37, 495–510 (1979).
Andersson, J. L. R., Skare, S. & Ashburner, J. How to correct susceptibility distortions in spin-echo echo-planar images: application to diffusion tensor imaging. Neuroimage 20, 870–888 (2003).
Smith, S. M. et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23, 208–219 (2004).
Jenkinson, M., Beckmann, C. F., Behrens, T. E., Woolrich, M. W. & Smith, S. M. FSL. Neuroimage 62, 782–790 (2012).
Dale, A. M., Fischl, B. & Sereno, M. I. Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage 9, 179–194 (1999).
Greve, D. & Fischl, B. Accurate and robust brain image alignment using boundary-based registration. Neuroimage 48, 63–72 (2009).
Jenkinson, M. & Smith, S. M. A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5, 143–156 (2001).
Jenkinson, M., Bannister, P., Brady, J. M. & Smith, S. M. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17, 825–841 (2002).
Dale, A. M. Optimal experimental design for event‐related fMRI. Hum. Brain Mapp. 8, 109–114 (1999).
Smith, S. M. Fast robust automated brain extraction. Hum. Brain Mapp. 17, 143–155 (2002).
Woolrich, M. W., Ripley, B. D., Brady, M. & Smith, S. M. Temporal autocorrelation in univariate linear modeling of FMRI data. Neuroimage 14, 1370–1386 (2001).
Engel, S. A. et al. fMRI of human visual cortex. Nature 369, 525 (1994).
Swisher, J. D., Halko, M. A., Merabet, L. B., McMains, S. A. & Somers, D. C. Visual topography of human intraparietal sulcus. J. Neurosci. 27, 5326–5337 (2007).
Sprague, T. C. & Serences, J. T. Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices. Nat. Neurosci. 16, 1879–1887 (2013).
Wolff, M. J., Jochim, J., Akyürek, E. G. & Stokes, M. G. Dynamic hidden states underlying working-memory-guided behavior. Nat. Neurosci. 20, 864–871 (2017).
Haynes, J. D. A primer on pattern-based approaches to fMRI: principles, pitfalls, and perspectives. Neuron 87, 257–270 (2015).
Berens, P. CircStat: a MATLAB toolbox for circular statistics. J. Stat. Softw. 31, 1–21 (2009).
Acknowledgements
This work was supported by grant no. NEI R01-EY025872 to J.T.S., and by the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No 743941 to R.L.R. We thank A. Jacobson at the UCSD CFMRI for assistance with multiband imaging protocols. We also thank R. van Bergen for assistance setting up an FSL/FreeSurfer retinotopy pipeline, A. Chakraborty for collecting the behavioral data shown in Supplementary Fig. 9 and V. Vo for discussions on statistical analyses.
Author information
Authors and Affiliations
Contributions
This study was designed by R.L.R., C.C. and J.T.S. Data were collected by R.L.R., and C.C. and R.L.R. preprocessed the data. R.L.R. and J.T.S. did the main analyses and wrote the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information: Nature Neuroscience thanks Thomas Christophel and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Integrated supplementary information
Supplementary Figure 1 No systematic relationship between target and distractor orientations.
Both Experiments 1 (a) and 2 (b) included a condition where participants were shown a grating distractor during the memory delay. Here we plot the distractor orientation (y-axis) against the target orientation (x-axis) for each trial (dots) for all experimental subjects (subplots). To ensure relatively uniform sampling of target and distractor orientations across orientation space, both orientations were drawn pseudo-randomly from one of six orientation bins (each bin spanning 30º). The boundaries between these bins are indicated with dashed lines. Importantly, orientations were drawn from each bin equally often: of the 108 total trials in the grating distractor condition, the target orientation was randomly drawn from the first orientation bin (1º–30º) 18 times, from the second bin (31º–60º) 18 times, and so on. Thus, for each subject there are 18 points (that is, trials) in each of the six columns (defined by the vertical dashed lines) of the scatter plots. Similarly, distractor orientations were randomly drawn 18 times from each bin, so each row of the scatter plot (defined by the horizontal dashed lines) also contains 18 points. Moreover, we counterbalanced the orientation bins from which target and distractor orientations were drawn, so each bin combination (that is, each square defined by the dashed lines) contains a total of 3 points. We quantified the relationship between target and distractor orientations via circular correlation (rho) for Experiment 1 (a): 0.019, –0.021, 0.03, 0.052, 0.102, and 0.001 for all subjects in subplots from left-to-right (with p-values of 0.84, 0.833, 0.754, 0.588, 0.301, and 0.992, respectively). The same test was used for Experiment 2, yielding the following correlations (b): –0.016, 0.039, 0.007, –0.026, –0.036, –0.004, and 0.024 for all subjects in subplots from left-to-right (with p-values of 0.866, 0.689, 0.946, 0.791, 0.704, 0.966, and 0.808, respectively).
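The circular correlation statistic is not spelled out in this caption; a common choice (and the one implemented in the CircStat toolbox cited in the references) is the Jammalamadaka–SenGupta circular correlation. A minimal Python sketch, assuming that statistic, with orientations doubled to map the 180º orientation space onto the full circle:

```python
import numpy as np

def circ_mean(angles):
    """Circular mean of angles given in radians."""
    return np.angle(np.mean(np.exp(1j * angles)))

def circ_corr(a, b):
    """Jammalamadaka-SenGupta circular correlation of two angle arrays (radians)."""
    sa = np.sin(a - circ_mean(a))
    sb = np.sin(b - circ_mean(b))
    return np.sum(sa * sb) / np.sqrt(np.sum(sa**2) * np.sum(sb**2))

# Orientations span 180 degrees, so double them before converting to radians.
target = np.random.default_rng(0).uniform(0, 180, 108)      # simulated targets
distractor = np.random.default_rng(1).uniform(0, 180, 108)  # simulated distractors
rho = circ_corr(np.deg2rad(2 * target), np.deg2rad(2 * distractor))
```

With independently drawn orientations, as in the counterbalanced design above, rho should hover near zero.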
Supplementary Figure 2 Behavioral performance on the working memory task in the scanner for Experiments 1 and 2.
Behavioral performance in Experiment 1 (a) and Experiment 2 (b), with the top row showing distributions of recall error combined across participants (in degrees). Recall error was calculated by subtracting the target orientation from a participant’s response on every trial (that is, responseº – targetº). Each subplot shows a different distractor condition, and each inset is a cartoon version of the image on the screen during the delay in each condition (photo used with permission). The bottom row shows the within-subject mean signed errors (on the y-axis) across orientation space (on the x-axis, where 0º represents vertical, and larger numbers are degrees clockwise relative to 0º) for each distractor condition (in subplots). A characteristic “bias away from cardinal” (Wei, X.X. & Stocker, A.A. A Bayesian observer model constrained by efficient coding can explain ‘Anti-Bayesian’ percepts. Nat. Neurosci. 18, 1509–1517, 2015) can be observed irrespective of distractor condition. Shaded error areas represent bootstrapped 95% confidence intervals on the mean signed errors (for n=6 and n=7 independent subjects in a and b, respectively). The mean signed error at each degree was calculated within a window of ±6º, thus smoothing the data within that range.
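The recall-error computations described above are easy to make concrete. A Python sketch (function names are my own, for illustration) of the signed error wrapped into the 180º orientation space, and of the ±6º sliding-window mean behind the bias curves:

```python
import numpy as np

def signed_error(response_deg, target_deg):
    """Signed recall error (response - target, in degrees), wrapped into [-90, 90)."""
    return (np.asarray(response_deg) - np.asarray(target_deg) + 90) % 180 - 90

def smoothed_bias(target_deg, error_deg, window=6):
    """Mean signed error at each target orientation, averaged over all trials
    whose target lies within +/- `window` degrees (circularly) of that orientation."""
    target_deg, error_deg = np.asarray(target_deg), np.asarray(error_deg)
    bias = np.empty(180)
    for t in range(180):
        dist = np.abs((target_deg - t + 90) % 180 - 90)  # circular distance in degrees
        bias[t] = error_deg[dist <= window].mean()
    return bias
```

For example, `signed_error(5, 175)` returns 10, since reporting a 175º target as 5º is a 10º clockwise error once the 180º wrap-around is taken into account.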
Supplementary Figure 3 Hemodynamic response functions across all retinotopically defined ROIs.
(a) Experiment 1 BOLD responses for the no distractor, grating distractor, and noise distractor conditions in dark teal, mid teal, and light teal, respectively. Distractors presented during the delay effectively drove univariate responses in all ROIs (left-to-right subplots), with the Fourier-filtered noise distractor (light teal) yielding especially strong activation in V1 and V2. (b) BOLD in Experiment 2 when there was no distractor (darkest teal), a grating distractor (mid teal), or picture distractors (yellow). Both grating and picture distractors presented during the delay effectively drove univariate responses in all ROIs (left-to-right subplots), with pictures yielding the strongest activation overall, especially in extra-striate cortex. For both a and b, the three gray background panels within each subplot represent the target (0–0.5s), distractor (1.5–12.5s), and recall (13.5–16.5s) epochs of the working memory trial. Lines are group averaged BOLD responses, with shaded error areas representing ±1 within-subject SEM (for n=6 and n=7 independent subjects in a and b, respectively).
Supplementary Figure 4 Inverted Encoding Model (IEM).
(a) Estimating the encoding model is the first step in the IEM. Each voxel differs with respect to the size of the response evoked by each orientation, and showing many orientations over many trials allows this response profile to be quantified (left). Response profiles (R) for each voxel (j) are the weighted (w) sum of 9 hypothetical orientation channels (i), as shown on the right. (b) Inverting the encoding model is the second step in the IEM. Channel weights represent each voxel’s orientation selectivity, and when a new response is evoked (left), the combined selectivity of all voxels is used to generate a model-based reconstruction of the new orientation from the voxel pattern. These reconstructed channel response profiles describe the activation of modeled channels in response to either a remembered or a seen orientation, and the resulting response profile is in a stimulus-referred space.
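The two IEM steps can be sketched in a few lines. This is an illustrative Python version under assumed ingredients (half-rectified sinusoidal channel basis functions, ordinary least squares for step 1, a pseudoinverse for step 2); the exact basis and fitting details used in the paper are described in its Methods:

```python
import numpy as np

def make_basis(n_channels=9, n_points=180):
    """Hypothetical orientation channels: half-rectified sinusoids raised to a
    power, with angles doubled so the basis repeats over the 180-degree space."""
    centers = np.arange(n_channels) * (180 / n_channels)
    theta = np.arange(n_points)[:, None]                # orientation axis (degrees)
    d = np.deg2rad(2 * (theta - centers[None, :]))      # doubled angular distance
    return np.maximum(np.cos(d), 0) ** 8                # n_points x n_channels

def train_iem(C, B):
    """Step 1: estimate channel weights from training data.
    C: trials x channels (predicted channel responses); B: trials x voxels."""
    W, *_ = np.linalg.lstsq(C, B, rcond=None)           # solves B ~ C @ W
    return W.T                                          # voxels x channels

def invert_iem(W, b):
    """Step 2: reconstruct channel responses for a new voxel pattern b."""
    return np.linalg.pinv(W) @ b                        # length n_channels

# Tiny simulation: recover a known orientation from noisy synthetic voxels.
rng = np.random.default_rng(0)
basis = make_basis()                                    # 180 x 9
oris = rng.integers(0, 180, size=200)                   # trial orientations
C = basis[oris]                                         # trials x channels
W_true = rng.normal(size=(50, 9))                       # 50 simulated voxels
B = C @ W_true.T + 0.1 * rng.normal(size=(200, 50))     # noisy voxel responses
recon = basis @ invert_iem(train_iem(C, B), B[0])       # profile over 0-179 degrees
```

The reconstruction `recon` should peak near `oris[0]`, the orientation shown on the first simulated trial, illustrating how the combined selectivity of many voxels yields a stimulus-referred response profile.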
Supplementary Figure 5 Reconstructions based on independent training data, tested on working memory delay data.
In (a) and (b) we show model-based reconstructions for Experiments 1 and 2, respectively. Top rows show reconstructions for the orientation held in memory (in color), and bottom rows show reconstructions for the orientation that was physically on the screen during grating distractor trials (in grey). Shifts in the baseline offset for the different distractor conditions largely reflect differences in the mean BOLD response across all voxels. Note that V1 reconstructions are also shown in Fig. 1c and Fig. 3c but are included here as well for ease of comparison. Lines are group averaged reconstructions (based on delay data averaged across 5.6–13.6 seconds after stimulus onset), with shaded error areas representing ±1 within-subject SEM (for n=6 and n=7 independent subjects in a and b, respectively).
Supplementary Figure 6 Reconstruction fidelity over time based on independent training data, tested on each individual TR of the working memory trial.
(a) In Experiment 1, significant reconstruction fidelity for the remembered orientation arises about 3–4 seconds into the trial irrespective of distractor condition (in shades of teal), and persists throughout the delay in all early retinotopic areas (V1–V4) and LO1, but not in IPS0 and IPS1. On trials with a grating distractor, the physically present orientation is also represented throughout in V1–V4 and LO1 (though arising a little later, around 4–5 seconds into the delay, consistent with its delayed onset of 1.5 seconds after the target), and dissipates roughly 2 seconds after distractor offset. (b) When distractors were presented during the delay in Experiment 2, reconstruction fidelity for the remembered orientation is significant at some TRs, but not consistently significant throughout the trial (mid teal and yellow) in early retinotopic areas (V1–V4) and LO1. Again, there was no above-chance fidelity in IPS0 and IPS1. In both (a) and (b), the three gray panels in each subplot represent the target, distractor, and recall epochs of the working memory trial. Dots at the bottom of each subplot indicate the time points at which fidelity is significantly above zero. Small, medium, and large dot sizes indicate significance levels of p < 0.05, p < 0.01, and p < 0.001, respectively. Statistical tests were identical to those employed for Figs. 2b and 3e. Lines are group averaged fidelities, with shaded error areas representing ±1 within-subject SEM (for n=6 and n=7 independent subjects in a and b, respectively). Note that fidelity over time for V1 is also shown in Fig. 2b and Fig. 3e, but is included here for ease of comparison.
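The fidelity metric itself is not defined in this excerpt; a common formulation in this literature projects each reconstruction onto a cosine centered on the true orientation, so that positive values indicate a reconstruction pointing in the right direction. A sketch under that assumption (Python, with angles doubled for the 180º orientation space):

```python
import numpy as np

def fidelity(recon, true_ori_deg):
    """Project a reconstruction (sampled over 0-179 degrees) onto a cosine
    centered on the remembered orientation; positive values indicate that the
    reconstruction carries information about the true orientation."""
    theta = np.arange(recon.size) * (180 / recon.size)  # orientation axis (degrees)
    return np.mean(recon * np.cos(np.deg2rad(2 * (theta - true_ori_deg))))
```

A flat reconstruction yields a fidelity of ~0, while one peaked at the true orientation yields a positive value, which is why the time-point tests above ask whether fidelity exceeds zero.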
Supplementary Figure 7 Reconstructions during the grating distractor condition in Experiment 1, as a function of target and distractor orientation difference.
Data in the grating distractor condition were binned according to target-distractor orientation similarity (in columns) for each ROI (in rows). Bin centers were chosen in 30º steps between maximal (0º) and minimal (90º) target-distractor similarity (columns depicting maximum and minimum similarity are highlighted by gray panels). The solid-black and dashed-red vertical lines represent the memory target orientation and the bin center of the distractor orientation, respectively. For every participant, all model-based reconstructions within a bin were averaged together, after which we determined the circular mean of the reconstruction in that bin (Supplementary Fig. 8a) and the fidelity at that mean (Supplementary Fig. 8b). Shown here are the group averaged reconstructions in each bin, with black error areas indicating ±1 within-subject SEM for n=6 independent subjects.
Supplementary Figure 8 Reconstructions shift and have lower fidelity as target and grating distractor orientations become more dissimilar in Experiment 1.
(a) Circular mean (mu) of model-based reconstructions of the remembered orientation during the grating distractor condition, as a function of target-distractor similarity (see also Supplementary Fig. 7). Non-parametric one-way ANOVAs in each ROI (uncorrected) show that reconstruction mu values are significantly shifted as a function of target-distractor similarity in many early retinotopic ROIs (in subplots; from left-to-right F(5,25) = 12.709, 22.406, 13.527, 2.189, 7.498, 1.875, 1.229, and 1.364). Note that for this test (and plot) we excluded the 90º difference bin, which yielded mu estimates that were essentially noise due to the flatness of reconstructions in this bin. (b) Reconstruction fidelity similarly showed interdependencies between target and distractor in early retinotopic ROIs and LO1 (in subplots; from left-to-right F(5,25) = 10.492, 17.526, 7.119, 2.684, 4.113, 0.526, 0.486, and 2.985), as indicated by the same non-parametric test used in a, with the exception that the 90º difference bin was now included. These interdependencies make it hard to separately estimate information from the target and the distractor, as they co-vary, and the superimposed reconstructions cancel each other out. Nevertheless, this analysis shows that fidelity is enhanced when the target and distractor are more similar, and highlights local interactions between remembered and directly sensed information in early visual cortex. Bars in each subplot represent the average mu (a) and fidelity (b) at each of the target-distractor differences. In both a and b, unfilled circles represent individual participants. Note that in a, IPS1 fidelity in the –60º bin is missing two individual subject data points because their values exceeded the y-axis scale (that is, fidelities of –0.236 and 0.692, respectively). One, two, or three asterisks indicate significance in each ROI of p < 0.05, p < 0.01, and p < 0.001, respectively.
Error bars represent ±1 within-subject SEM for n=6 independent subjects.
Supplementary Figure 9 Better behavioral performance when target and distractor orientations are similar versus dissimilar.
A separate psychophysical experiment was conducted to explore the distractor grating condition in more detail. Participants (n=17) remembered a random target orientation, and an irrelevant distractor orientation was shown during the delay on 90% of trials. Target and distractor orientations were chosen independently and pseudo-randomly, in a manner identical to the two main imaging experiments (Experiments 1 and 2). Unlike the imaging experiments, trials in this behavioral experiment were shorter: A 200ms target was followed by a 3000ms delay (and a 200ms distractor presented during the central portion of the delay). After the delay participants provided an unspeeded response, followed by an 800–1000ms inter-trial interval. Furthermore, grating stimuli in this behavioral experiment were smaller (2º radius, 2 c/º, 20% contrast, phase jittered). Each participant performed a total of 1620 trials over the course of several days. The analyses and plots presented in both a and b show circular statistics at each target-distractor difference calculated within a window of ±5º. These data were normalized by first subtracting out individual-subject means, and the resultant within-subject average is depicted by the white lines. Black error areas represent bootstrapped 95% confidence intervals on the within-subject data (across all possible target-distractor differences). The single data points presented on the far right of each subplot are from the 10% of trials where no distractor was shown during the delay. (a) The normalized mean response to the target as a function of target-distractor similarity. Behavioral responses were attracted towards the irrelevant distractor orientation, and this attraction was most pronounced at target-distractor differences around ~22º. At its strongest, attraction had a magnitude of ~1º.
(b) The precision of participants’ responses (as indexed by the circular standard deviation) fluctuated as a function of target-distractor similarity: Memory was more precise for more similar orientations, and less precise for less similar orientations. The overall magnitude of this effect was ~1.5º. The findings in both (a) and (b) replicate previously published work looking at the impact of irrelevant distractors16, with the most notable difference between paradigms being that here we used entirely independent target and distractor orientations, whereas the previous work had built-in dependencies between the two. While these effects on the circular mean and standard deviation are small, they far exceed the JND for orientation, and provide evidence for an interaction between a memory target and irrelevant distractors.
Supplementary Figure 10 Decoding the picture distractor (face or gazebo) during the memory delay of Experiment 2.
A two-way classifier (linear support vector machine) was trained to distinguish between face and gazebo pictures using independent localizer data. This classifier proved highly successful at determining whether a face or a gazebo picture was shown during the working memory delay (decoding was based on the average activation patterns from 5.6–13.6 seconds after stimulus onset), with near-perfect decoding in all retinotopically defined ROIs. Note that this is not at all surprising given that the face and gazebo pictures were not controlled for low-level image statistics. In fact, these stimuli occupied different portions of visual space and therefore systematically activated different subsets of voxels. Statistics were based on one-sided randomization tests comparing decoding accuracy in each ROI to chance (that is, 0.5; see Methods). Three asterisks indicate significant decoding of p < 0.001 in all ROIs (the upper limit of resolvable p-values based on 1000 permutations). Dots indicate individual subject decoding in each ROI. Error bars represent ±1 within-subject SEM (for n=7 independent subjects).
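The decoding pipeline in this caption (train a linear SVM on localizer data, test on delay-period patterns, then compare accuracy to chance with a one-sided randomization test) can be sketched on toy data. This Python version uses scikit-learn as an assumed stand-in for whatever SVM implementation was actually used, and all data are simulated:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate(n_trials, n_voxels=60, shift=1.0):
    """Toy voxel patterns: class 1 ('gazebo') shifts half the voxels upward,
    mimicking category-selective responses."""
    y = rng.integers(0, 2, n_trials)          # 0 = face, 1 = gazebo
    X = rng.normal(size=(n_trials, n_voxels))
    X[y == 1, : n_voxels // 2] += shift       # category-selective voxels
    return X, y

X_loc, y_loc = simulate(200)                  # independent 'localizer' data
X_delay, y_delay = simulate(100)              # 'memory delay' data

clf = SVC(kernel="linear").fit(X_loc, y_loc)
accuracy = clf.score(X_delay, y_delay)

# One-sided randomization test against chance (0.5): retrain on shuffled labels.
null = np.array([SVC(kernel="linear").fit(X_loc, rng.permutation(y_loc))
                 .score(X_delay, y_delay) for _ in range(200)])
p = (np.sum(null >= accuracy) + 1) / (len(null) + 1)
```

With well-separated simulated categories, accuracy is near-perfect and p bottoms out at its resolvable limit, mirroring the near-perfect decoding and p < 0.001 results described above.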
Supplementary Figure 11 Reconstructions based on a leave-one-out procedure, where model training and testing was performed on data from the working memory delay.
In (a) and (b) we show model-based reconstructions for Experiments 1 and 2, respectively. Top rows show reconstructions for the orientation held in memory (in color), and bottom rows show reconstructions for the orientation that was physically on the screen during grating distractor trials (in grey). Lines are group averaged reconstructions (based on delay data averaged across 5.6–13.6 seconds after stimulus onset), with shaded error areas representing ±1 within-subject SEM (for n=6 and n=7 independent subjects in a and b, respectively).
Supplementary Figure 12 Reconstruction fidelity over time based on a leave-one-out procedure, where model training and testing was performed on data from individual TRs of the working memory trial.
In (a) and (b) we show reconstruction fidelity over time for Experiments 1 and 2, respectively. For both a and b, the leave-one-trial-out procedure meant that at each TR, we trained the IEM on data from all trials but one, and tested on the left-out trial. This was repeated until all trials had been left out once, after which the reconstructions were averaged, and a single fidelity value was calculated at that TR. This was then done for all TRs. The V1–V4 and LO1 data presented here in both a and b look roughly the same as those in Supplementary Fig. 6. However, there is one notable difference: Mnemonic representations in IPS0 and IPS1 are clearly present in this leave-one-out analysis, and remain fairly stable for all distractor conditions throughout trials of both Experiments 1 and 2. In both (a) and (b), the three gray panels in each subplot represent the target, distractor, and recall epochs of the working memory trial. Dots at the bottom of each subplot indicate the time points at which fidelity is significantly above zero. Small, medium, and large dot sizes indicate significance levels of p < 0.05, p < 0.01, and p < 0.001, respectively. Statistical tests were identical to those employed for Figs. 2b, 3e, and Supplementary Fig. 6. Lines are group averaged fidelities, with shaded error areas representing ±1 within-subject SEM (for n=6 and n=7 independent subjects in a and b, respectively).
Supplementary Figure 13 Memory fidelity in all voxels from retinotopically defined IPS0–IPS3 areas, analyzed without voxel selection based on visual sensitivity.
Our main analyses (Figs. 1e, 3d, and 4) might bias IPS results in favor of ‘sensory-like’ stimulus-driven codes by virtue of including only voxels with a significant sensory response. To avoid this potential bias, here we analyze data from all IPS voxels irrespective of their visual sensitivity. We again see that there is little mnemonic information represented in IPS when the IEM is trained on independent sensory data (subplots on the left). By contrast, when training and testing the IEM on data from within the memory epoch itself, the remembered orientation is robustly represented (subplots on the right). There are no differences in memory fidelity between the three distractor conditions when training on independent sensory data (left subplots; Experiment 1: all F(2,10) < 2.145, all p > 0.194; Experiment 2: all F(2,12) < 2.041, all p > 0.197), which is not surprising given the overall absence of information. However, even when trained on memory delay data (right subplots) there are no differences in memory fidelity between the three distractor conditions in Experiment 1 (all F(2,10) < 1.259, all p > 0.333) and Experiment 2 (all F(2,12) < 0.822, all p > 0.473; with the exception of IPS0, F(2,12) = 4.31, p = 0.032; a difference that did not hold up in post-hoc tests, with all t(6) < 2.494, and all p > 0.054). Note that the sensory distractor (in grey) is not represented in either analysis, implying that IPS is not representing visual inputs when they are task-irrelevant. Statistical testing was identical to that in Figs. 1e, 3d, and 4 (see also Methods). One, two, or three asterisks indicate significance levels of p < 0.05, p < 0.01, or p < 0.001, respectively. Dots indicate individual subject fidelities in each ROI and condition. Error bars represent ±1 within-subject SEM (for n=6 and n=7 independent subjects in Experiments 1 and 2, respectively).
Supplementary information
Rights and permissions
About this article
Cite this article
Rademaker, R.L., Chunharas, C. & Serences, J.T. Coexisting representations of sensory and mnemonic information in human visual cortex. Nat Neurosci 22, 1336–1344 (2019). https://doi.org/10.1038/s41593-019-0428-x