Abstract
Stimulus-dependent eye movements have been recognized as a potential confound in decoding visual working memory information from neural signals. Here we combined eye-tracking with representational geometry analyses to uncover the information in miniature gaze patterns while participants (n = 41) were cued to maintain visual object orientations. Although participants were discouraged from breaking fixation by means of real-time feedback, small gaze shifts (<1°) robustly encoded the to-be-maintained stimulus orientation, with evidence for encoding two sequentially presented orientations at the same time. The orientation encoding on stimulus presentation was object-specific, but it changed to a more object-independent format during cued maintenance, particularly when attention had been temporarily withdrawn from the memorandum. Finally, categorical reporting biases increased after unattended storage, with indications of biased gaze geometries already emerging during the maintenance periods before behavioural reporting. These findings disclose a wealth of information in gaze patterns during visuospatial working memory and indicate systematic changes in representational format when memory contents have been unattended.
Main
Working memory (WM) enables observers to actively keep stimulus information 'on the mind' for upcoming tasks. A key question in understanding WM function is which aspects of a task-relevant stimulus it retains and in which format(s). On removal of a stimulus from sight, sensory systems briefly retain a detailed sensory memory of the just-removed (for example, visual) input. Without active maintenance, these rich 'photographic' memories decay rapidly within a few hundred milliseconds1 (but see ref. 2). It is widely assumed that only a limited amount of information can be accurately maintained in WM3,4,5,6. However, despite intense research, the very nature of the information that WM maintains remains poorly understood.
In neuroscientific experiments examining the representation of visual WM information in the brain, often only a single stimulus feature needs to be reported after a delay (for example, the orientation of a visual grating)7,8,9,10. At one extreme, such tasks could be solved by sustaining a concrete visual memory of the stimulus and its visual details. Various human neuroimaging studies have shown that WM contents can be decoded from early visual cortices8,10,11, which seems consistent with storage in a sensory format (but see ref. 12). At the other extreme, many WM tasks can also be solved with a high-level abstraction of the task-relevant stimulus parameter only, such as its orientation, speed or colour12,13,14,15. Such abstractions may also be recoded into pre-existing categories (such as 'left', 'slow' or 'green')16,17, which may result in memory reports that are biased18 but still sufficient to achieve one's behavioural goals. Abstraction may render working memories more robust, afford transfer across tasks and massively reduce the amount of information that must be maintained17,19,20.
However, progress in understanding the temporal dynamics of WM abstraction has thus far been limited. A few studies have examined the extent to which neural WM representations generalize (or not) across different stimulus inputs12,14,15,21 and/or become categorically biased22,23,24. In humans, these studies used functional imaging, which lacks the temporal resolution to disclose rapid format changes, or electroencephalography (EEG)/magnetoencephalography, which often can decode the task-relevant stimulus information only during the first 1–2 seconds of unfilled WM delays22,25,26,27,28. Here, we used a different approach that leverages the finding that subtle ocular activity (for example, microsaccades)29,30 can reflect attentional orienting during visuospatial WM tasks31. Although traditionally considered a confound that experimenters seek to avoid, small gaze shifts can reflect certain types of visuospatial WM information with greater fidelity than EEG/magnetoencephalography recordings32,33 and even throughout prolonged WM delays, which opens new avenues for tracking dynamic format changes.
On the basis of previous behavioural and theoretical work, we hypothesized that the level of abstraction in WM may change when the to-be-maintained information has been temporarily unattended. While unattended, WM contents cannot easily be decoded with neuroimaging approaches (but see ref. 21), and the neural substrates of unattended storage remain disputed25,34,35. Behaviourally, however, temporary inattention renders working memories less precise36,37 and more categorically biased38, which may indicate increased abstraction of the WM content39. Physiological evidence for when and how such modifications may occur during WM maintenance is still lacking.
Here we recorded eye movements while participants memorized the orientations of rotated objects in a dual retro-cue task (Fig. 1a). With such a task layout, it is commonly assumed that the initially uncued information is unattended (or deprioritized) in WM, and the cued information is in the focus of attention28,40,41. Representational geometry analyses borrowed from neuroimaging42 allowed us to track with high temporal precision whether orientation encoding in gaze patterns was object-specific (indicating a concrete visual memory), object-independent (indicating more generalized/abstract task coordinates) and/or categorically biased throughout the different stages of the task. Participants were encouraged to keep fixation through online feedback (closed loop) to restrict eye movements to small and involuntary gaze shifts.
We found that despite this fixation monitoring, miniature gaze patterns clearly encoded the cued stimulus orientation throughout the WM delays. Although the orientation encoding was object-specific at first (indicating attentional focusing on concrete visual details), its format rapidly became object-independent (generalized/abstract) when another stimulus was encoded or maintained in the focus of attention. We further found that temporary inattention increased repulsive cardinal bias in subsequent memory reports, with some evidence for such biases already emerging during the delay periods in the geometry of gaze patterns. Together, our findings indicate adaptive format changes during WM maintenance within and outside the focus of attention and highlight the utility of detailed gaze analysis for future work.
Results
Participants (n = 41) performed a cued visual WM task (Fig. 1a) while their gaze position was tracked. On each trial, two randomly oriented stimuli (pictures of real-world objects) were sequentially presented (each for 0.5 s followed by a 0.5 s blank screen), after which an auditory 'retro'-cue (Cue 1) indicated which of the two orientations was to be remembered after a delay period (Delay 1, 3.5 s) at Test 1. On half the trials (randomly varied), Test 1 was followed by another retro-cue (Cue 2) and another delay period (Delay 2, 2.5 s), after which participants were required to also remember the orientation of the other, previously uncued stimulus (Test 2).
Behavioural accuracy
At each of the two memory tests, the probed stimulus was shown with a slightly altered orientation (±6.43°), and participants were asked to rerotate it to its previous orientation by means of button press (two-alternative forced choice, 2-AFC). As expected, the percentage of correct responses was descriptively higher on Test 1 (mean = 73.41%, standard deviation (s.d.) = 6.42%) than on Test 2 (mean = 66.62%, s.d. = 5.78%). Further, the second presented orientation (Stimulus 2) was remembered better (mean = 70.99%, s.d. = 7.07%) than the first presented orientation (Stimulus 1; mean = 69.04%, s.d. = 6.79%). A 2 × 2 repeated-measures analysis of variance (ANOVA) with the factors Test (1/2) and Stimulus (1/2) confirmed that both these effects were significant (F(1,40) = 95.396, P < 0.001, eta squared (η²) = 0.521 and F(1,40) = 13.319, P < 0.001, η² = 0.043), whereas there was no significant interaction between the two factors (F(1,40) = 3.681, P = 0.062, η² = 0.008).
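A useful sanity check for such designs: for a two-level within-participant factor, the repeated-measures ANOVA F statistic equals the square of the paired-samples t statistic for that factor's contrast. A minimal sketch with hypothetical accuracy values (not the study's data):

```python
import math
import statistics

def paired_t(x, y):
    """Paired-samples t statistic for two within-participant conditions."""
    d = [a - b for a, b in zip(x, y)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

# Hypothetical per-participant accuracies (proportion correct), averaged
# over the other factor of the 2 x 2 design (illustrative values only):
test1 = [0.74, 0.76, 0.71, 0.78, 0.72, 0.75]
test2 = [0.66, 0.69, 0.65, 0.70, 0.64, 0.68]

t = paired_t(test1, test2)
F = t ** 2  # equals F(1, n - 1) for this two-level main effect
```

This identity holds only for factors with exactly two levels; factors with more levels require the full sums-of-squares decomposition.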
The effects of presentation and testing order may both be attributed to task periods since stimulus presentation during which the other stimulus was to be attended, either for perceptual processing (Stimulus 2) or for cued maintenance and reporting (Delay 1 and Test 1). We may combine these two factors into the 'mnemonic distance' of a stimulus, which in our experiment had four levels (from shortest to longest: Stimulus 2 at Test 1, Stimulus 1 at Test 1, Stimulus 2 at Test 2 and Stimulus 1 at Test 2). The behavioural accuracy results were compactly described as a monotonic decrease across these distance levels (t(40) = −9.404, P < 0.001, Cohen's d = −1.469, 95% confidence interval (CI) (−0.038, −0.024); t-test of linear slope against zero), as would be expected if processing Stimulus 2 temporarily withdrew attention from the memory of Stimulus 1, similar (and additive) to the withdrawal of attention from the uncued item during Delay 1 and Test 1.
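The linear-slope test can be sketched as follows: fit a least-squares slope across the four mnemonic-distance levels for each participant, then t-test the slopes against zero. The accuracy values below are hypothetical, for illustration only:

```python
import math
import statistics

def linear_slope(y, x=(0, 1, 2, 3)):
    """Least-squares slope of y over equally spaced mnemonic-distance levels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def one_sample_t(values, mu=0.0):
    """One-sample t statistic of the mean against mu."""
    n = len(values)
    return (statistics.mean(values) - mu) / (statistics.stdev(values) / math.sqrt(n))

# Hypothetical per-participant accuracies (proportion correct) at the four
# mnemonic-distance levels, ordered from shortest to longest distance:
acc = [
    [0.78, 0.74, 0.70, 0.66],
    [0.75, 0.73, 0.69, 0.64],
    [0.72, 0.70, 0.68, 0.65],
]
slopes = [linear_slope(a) for a in acc]
t_slope = one_sample_t(slopes)  # negative t indicates a monotonic decrease
```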
Object orientation was reflected in miniature gaze patterns
We informed participants that their gaze would be monitored to ensure that they constantly fixated a centrally presented dot throughout the task. To enforce this, we provided real-time feedback (closed-loop) when fixation was lost (Methods). Figure 1c shows the participants' gaze distribution in relation to stimulus size after rotating the trial data to the respective object's real-world (upright) orientation (for similar approaches, see refs. 43,44). Despite this rotation alignment, the gaze density was concentrated narrowly (mostly within a <1° visual angle) at centre during both the stimulus and the delay periods (Fig. 1c). The instructions and online feedback thus proved effective in preventing participants from overtly gazing at the location of the objects' peripheral features (such as, for example, the spire of the lighthouse in Fig. 1a). However, inspecting the participants' average gaze positions for each stimulus orientation (without rotational alignment) disclosed miniature circle-like patterns (Fig. 1d), indicating that minuscule gaze shifts near fixation did carry information about the objects' orientation (for related findings with other stimulus materials, see refs. 32,33).
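The rotation alignment amounts to a standard 2D rotation of each gaze sample about fixation, by the angle that maps the trial's stimulus orientation onto real-world upright. A minimal sketch (function names are ours, not the authors'; we assume upright corresponds to 90°):

```python
import math

def rotate_to_upright(x, y, stim_deg):
    """Rotate a gaze sample (x, y) about fixation by the angle that maps the
    trial's stimulus orientation (stim_deg) onto real-world upright (90 deg)."""
    theta = math.radians(90.0 - stim_deg)
    c, s = math.cos(theta), math.sin(theta)
    return x * c - y * s, x * s + y * c

# A gaze deflection towards a stimulus shown at 0 deg ends up pointing "up":
gx, gy = rotate_to_upright(0.3, 0.0, stim_deg=0.0)
# A sample from a trial whose stimulus was already upright is unchanged:
ux, uy = rotate_to_upright(0.2, 0.1, stim_deg=90.0)
```

After this alignment, any orientation-following component of gaze should pile up in one direction, which is what Fig. 1c visualizes.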
For quantitative analysis of the orientation encoding in gaze, we used an approach based on representational similarity analysis (RSA)42. Specifically, we examined the extent to which the gaze patterns showed the characteristic Euclidean distance structure of evenly spaced points on a circle (Fig. 2a). We implemented RSA on the single-trial level (Fig. 2b) by correlating, for each trial, the model-predicted distances with the vector of gaze distances between the current trial and the trial average for each stimulus orientation. The procedure yields a cross-validated estimate of orientation encoding at each time point for every trial (Methods).
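In spirit, the single-trial RSA can be sketched as below: the model predicts chord distances between evenly spaced points on a circle, and each trial's gaze position is correlated against per-orientation average templates. This is a simplified, hypothetical implementation; the paper's exact pipeline, including cross-validation, is described in Methods:

```python
import math

orients = [i * 22.5 for i in range(16)]  # 16 evenly spaced orientations (deg)

def circular_chord(a_deg, b_deg, radius=1.0):
    """Euclidean (chord) distance between two points on a circle."""
    d = math.radians(a_deg - b_deg)
    return radius * math.sqrt(max(0.0, 2.0 - 2.0 * math.cos(d)))

def pearson(u, v):
    """Pearson correlation between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v))
    return num / den

def encoding_strength(trial_xy, templates, trial_orient):
    """Correlate model-predicted chord distances with the distances between
    this trial's gaze position and each orientation's average template."""
    model = [circular_chord(trial_orient, o) for o in orients]
    data = [math.dist(trial_xy, templates[o]) for o in orients]
    return pearson(data, model)

# Synthetic check: templates lying exactly on a tiny circle yield a perfect
# model correlation for a trial that also lies on that circle.
templates = {o: (0.1 * math.cos(math.radians(o)), 0.1 * math.sin(math.radians(o)))
             for o in orients}
r = encoding_strength(templates[0.0], templates, trial_orient=0.0)
```

Note how the model is scale-free: the correlation is high whenever the gaze pattern has the circular distance structure, however small its amplitude.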
We first inspected the mean time courses of orientation encoding (averaged over trials) during stimulus presentation. We observed robust encoding of stimulus orientation from about 500 ms after stimulus onset for both Stimulus 1 and Stimulus 2 (both Pcluster < 0.001, cluster-based permutation tests; Methods). The encoding of either stimulus orientation peaked at ~650 ms (that is, only after the stimuli's offset; Fig. 2c), after which it slowly decayed.
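Cluster-based permutation testing along these lines can be sketched as a generic sign-flipping procedure (not the authors' exact code; the threshold t_crit ≈ 2.02 is our assumption, corresponding to two-tailed P < 0.05 at 40 degrees of freedom):

```python
import math
import random

def t_stat(sample):
    """One-sample t statistic against zero (sample variance must be > 0)."""
    n = len(sample)
    m = sum(sample) / n
    var = sum((v - m) ** 2 for v in sample) / (n - 1)
    return m / math.sqrt(var / n)

def clusters(mask):
    """Index ranges of contiguous True runs in a boolean sequence."""
    runs, start = [], None
    for i, flag in enumerate(mask):
        if flag and start is None:
            start = i
        if not flag and start is not None:
            runs.append(range(start, i))
            start = None
    if start is not None:
        runs.append(range(start, len(mask)))
    return runs

def cluster_perm_test(data, t_crit=2.02, n_perm=500, seed=1):
    """data[p][t]: encoding time course per participant. Returns a
    (cluster mass, p-value) pair for each observed suprathreshold cluster."""
    rng = random.Random(seed)
    n_time = len(data[0])

    def masses(rows):
        tv = [t_stat([row[t] for row in rows]) for t in range(n_time)]
        return [sum(tv[i] for i in c) for c in clusters([v > t_crit for v in tv])]

    observed = masses(data)
    null = []
    for _ in range(n_perm):
        # Flip the sign of whole participants (exchangeable under H0: mean 0).
        flipped = [row if rng.random() < 0.5 else [-v for v in row] for row in data]
        perm = masses(flipped)
        null.append(max(perm) if perm else 0.0)
    return [(m, sum(nm >= m for nm in null) / n_perm) for m in observed]

# Synthetic data: 21 participants, 30 time points, a real effect at t = 10..19.
data = [[(0.5 if 10 <= t < 20 else 0.0) + 0.01 * ((p % 7) - 3)
         for t in range(30)] for p in range(21)]
result = cluster_perm_test(data)
```

Comparing each observed cluster mass against the permutation distribution of maximum cluster masses controls the family-wise error across time points.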
Concurrent encoding of Stimulus 1 and Stimulus 2
Although gaze data are only two-dimensional, we found that while encoding the second presented orientation (Stimulus 2), the gaze pattern also continued to carry information about the first-presented orientation (Stimulus 1; see Fig. 2c). Such a concurrency in the average time courses may have arisen if one of the orientations was encoded on some trials and the other orientation on others. Alternatively, however, the pattern may indicate that gaze encoded both orientations simultaneously (that is, additively, on the same trials). To shed light on this, we capitalized on our single-trial approach (Methods and Fig. 2b) and binned each participant's trials according to how strongly the orientation of Stimulus 2 was encoded between 250 and 1,000 ms after Stimulus 2 onset (Fig. 2e). If encoding of the two orientations had alternated between different trials, we would expect a negative relationship with the encoding of the orientation of Stimulus 1 in the same time window. However, we found no significant relationship (t(40) = −1.368, P = 0.18, d = −0.214, 95% CI (−0.006, 0.001); linear trend analysis). What is more, the encoding of Stimulus 1's orientation was significantly above chance even on those trials on which the encoding of Stimulus 2's orientation was maximally strong (t(40) = 3.656, P = 0.01, d = 0.571, 95% CI (0.029, 0.099); t-test against 0). Together, these results indicate that small shifts in 2D gaze space carried information about the two stimulus orientations simultaneously, on the same trials.
Encoding of the cued orientations throughout the delay periods
Our main interest was in how gaze patterns reflected information storage during the unfilled delay periods (Delay 1 and 2; Fig. 1a). During Delay 1, about 500 ms after auditory cueing (Cue 1), the encoding of the cued orientation ramped up and continuously increased in strength until the time of Test 1, whereas the encoding of the uncued orientation slowly returned to baseline (Fig. 2c). During Delay 2 (which occurred in half the trials), a similar ramping-up pattern was observed for the second-cued orientation (which was previously uncued; Fig. 2d; both Pcluster < 0.001). Thus, miniature gaze deflections robustly encoded the currently cued (or 'attended') memory information during the two delay periods in a ramp-up fashion that resembled the encoding of WM information in neural recordings (for example, in monkey prefrontal cortex)45,46.
Object-specific versus object-independent orientation encoding
We next examined more closely the format(s) in which the gaze patterns reflected the WM information. A priori, memory reports in our task could be based on a concrete visual memory of the presented stimulus, but they could also be based on a mental abstraction of orientation: for example, in terms of directional spatial coordinates. To the extent that the small eye movements during WM maintenance reflected mental focusing on concrete visual features (for example, the location of a specific point on the object's contour), we would expect the orientation encoding in gaze to be object-specific: that is, not fully transferable between different objects. In contrast, an abstraction of orientation (for example, in terms of a direction in which any object may point with its real-world top) should be reflected in gaze patterns that are object-independent and transferable.
We examined object specificity by comparing the orientation encoding in gaze distances within objects (Fig. 3a, left) with that in gaze distances between objects (Fig. 3a, right). On stimulus presentation, the orientation encoding in gaze patterns was object-specific, in that within-objects encoding clearly exceeded between-objects encoding (Fig. 3c; all Pcluster < 0.012). For Stimulus 1, the object specificity diminished abruptly after ~1,300 ms (when the gaze patterns began to also encode Stimulus 2) and changed to a more object-independent format for the remainder of the trial epoch (Fig. 3c, top). For Stimulus 2, in contrast, when cued for Test 1, the object specificity decayed less and was sustained throughout most of Delay 1 (Fig. 3c, bottom). Later, in Delay 2, no object specificity was evident for either stimulus (no Pcluster < 0.60). Figure 3e,f summarizes the temporal evolution of object independence in terms of Bayes factors, showing the swift change of Stimulus 1 encoding from object-specific (BF01 < 1/3) towards object-independent (BF01 > 3) at the time of Stimulus 2 encoding, whereas the encoding of Stimulus 2 retained object specificity during Delay 1 (Fig. 3e). In Delay 2, after unattended storage throughout Delay 1, the orientation encoding in gaze had become object-independent for both stimuli (Fig. 3f).
Focusing on the delay periods, we examined whether the object specificity of cued orientation encoding differed between the two delay periods (Delay 1 or 2) and/or between the first and second presented stimulus (Stimulus 1 or 2). A 2 × 2 repeated-measures ANOVA on the difference in encoding strength (within- minus between-objects, averaged across the respective delay periods) showed a main effect of delay period (Delay 1/2; F(1,40) = 6.204, P = 0.017, η² = 0.064), indicating greater object specificity during Delay 1, but no effect of presentation order (Stimulus 1/2; F(1,40) = 0.985, P = 0.327). There was also a moderate interaction between the two factors (F(1,40) = 4.466, P = 0.041, η² = 0.027), reflecting that the difference between the two delays was stronger for Stimulus 2. Again, we also inspected these results in terms of the mnemonic distance from stimulus presentation, that is, the time the orientation in question had been unattended while focusing on the other orientation (Fig. 3b). Indeed, this analysis confirmed a decrease in object specificity with increasing mnemonic distance (t(40) = −2.473, P = 0.018, d = −0.386, 95% CI (−0.010, −0.001); t-test of linear slope against zero).
Together, these results showed that unlike during perceptual processing, gaze patterns during the delay periods reflected WM information in more generalized (or abstract) coordinates and that the level of this abstraction increased after periods of temporary (or partial) inattention.
Cardinal repulsion bias in gaze patterns and behaviour
In studies of WM for stimulus orientation (for example, of Gabor gratings) it is commonly observed that behavioural reports are biased away from the cardinal (vertical and horizontal) axes18,22. We asked (1) whether such a repulsive cardinal bias also occurred with our rotated object stimuli, (2) whether the strength of bias was modulated by periods of inattention38 and (3) the extent to which such bias was already expressed in the geometry of the miniature gaze patterns observed during the delay periods.
To model bias in behaviour, we used a geometrical approach that quantifies bias as a mixture of a perfect (unbiased) circle (Fig. 4a, middle) with perfect (fully biased) square geometries (Fig. 4a, leftmost and rightmost; see Methods, 'Behavioural modelling' for details). Intuitively, the mixture parameter B quantifies the extent to which the reported orientations were repulsed away from the cardinal axes, with B > 0 indicating repulsion (that is, cardinal bias), B = 0 no bias and B < 0 attraction.
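One way to picture such a biased geometry is to warp each orientation's angle away from (or toward) the cardinal axes and place the points back on the circle. The sinusoidal warping below is a hypothetical parameterization for illustration only; the paper's exact mixture model is given in Methods, 'Behavioural modelling':

```python
import math

def biased_angle(theta_deg, B, k_deg=22.5):
    """Warp an orientation away from (B > 0) or toward (B < 0) the cardinal
    axes, with a 90-degree period so that 0/90/180/270 are the cardinals.
    Hypothetical sinusoidal warping, for illustration only."""
    return theta_deg + B * k_deg * math.sin(math.radians(4 * theta_deg))

def geometry(B, n=16):
    """2D point geometry for n evenly spaced orientations under bias B."""
    pts = []
    for i in range(n):
        th = math.radians(biased_angle(i * 360.0 / n, B))
        pts.append((math.cos(th), math.sin(th)))
    return pts

pts_circle = geometry(0.0)  # B = 0 reproduces the unbiased circle
```

With B = 1, the 22.5° orientation is pushed all the way onto the 45° diagonal, so the pairwise distance structure collapses toward the fully repulsed 'square' geometry; B = −1 collapses it toward the cardinals instead.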
Fitting the model to participants' behavioural responses (Fig. 4b, left and middle), we observed values of B > 0 (grand mean, 0.124, s.d. = 0.092) in both memory tests (Test 1 and 2) and for both orientations (Stimulus 1 and 2; all B > 0.071; all t(40) > 4, all P < 0.001; t-tests against 0). Thus, participants overall showed a repulsive cardinal bias, which replicates and extends previous work with simpler stimuli (such as gratings)18,22. A 2 × 2 repeated-measures ANOVA showed a main effect of test (Test 1/2; F(1,40) = 19.743, P < 0.001, η² = 0.144), indicating a stronger bias on Test 2, and a main effect of presentation order (Stimulus 1/2; F(1,40) = 4.669, P = 0.037, η² = 0.024), with no interaction between the two factors (F(1,40) = 1.083, P = 0.304). The overall pattern could again be described compactly as an increase in cardinal bias with increasing mnemonic distance from stimulus presentation (t(40) = 5.315, P < 0.001, d = 0.830, 95% CI (0.019, 0.043); t-test of linear slope against zero; Fig. 4b, left). Thus, we found robust cardinal repulsion in participants' overt memory reports, and this bias increased with periods of unattended storage.
Finally, we addressed the extent to which the cardinal bias was also reflected in the gaze patterns recorded throughout the two delay periods. To do so, we exploited the fact that our geometric model yields distinctive distance structures for extreme cardinal repulsion (B = 1; Fig. 4a, rightmost) and attraction (B = −1; Fig. 4a, leftmost), respectively. If the gaze patterns were unbiased, we would expect both these 'square' models to correlate less well with the data than the unbiased ('circle') model with B = 0 (Fig. 4a, middle). However, to the extent that the gaze patterns were repulsively biased, we would expect the repulsion model to outperform the attraction model, nearing (or, in the case of extreme bias, even exceeding) the circle model (dashed black in Fig. 4c). Contrasting repulsion and attraction models thus allowed us to quantify the extent of repulsive or attractive bias in the gaze patterns during the delay periods.
Descriptively, the three different models (repulsion, unbiased, attraction) showed only small differences in correlation with the data (Fig. 4c), indicating that the statistical power to detect bias in the gaze data was relatively low (see Methods, 'Model geometries'). Nevertheless, contrasting the repulsion model with the attraction model showed two small clusters (Pcluster = 0.02 and Pcluster = 0.035), indicating a repulsive bias, near the end of the delay periods for Stimulus 1 (Fig. 4c, top). A similar tendency for Stimulus 2 failed to reach significance in Delay 2 (Fig. 4c, lower right; Pcluster = 0.085, below display threshold) and was absent in Delay 1 (Fig. 4c, lower left; no cluster-forming time points). A 2 × 2 repeated-measures ANOVA on the difference between repulsion and attraction models (averaged across the last second of the delay periods) showed a main effect of presentation order (Stimulus 1/2; F(1,40) = 4.561, P = 0.039, η² = 0.026; main effect of Delay 1/2: F(1,40) = 1.650, P = 0.206; interaction: F(1,40) < 1), indicating a stronger repulsive bias for the first presented orientation (Stimulus 1). A complementary analysis in terms of mnemonic distance (Fig. 4b, right) showed a positive trend similar to that for behaviour, albeit only at the significance level of a one-tailed test (t(40) = 1.772, P = 0.042, d = 0.278, 95% CI lower bound −0.001; t-test of linear slope against zero; one-tailed, hypothesis derived from behavioural result). Together, although the differentiation of models (repulsive, unbiased, attractive) in the gaze data was not as clear-cut as in behaviour (cf. Fig. 4b, right and left), we found indications that the gaze patterns may have carried a repulsive cardinal bias, most evidently during the later portions of the WM delays and after temporary and/or partial inattention to the WM information.
Orientation-dependent microsaccades
Although our RSA-based approach was designed to characterize the time-varying geometries of aggregate gaze-position patterns (Figs. 2–4), we performed further analysis on request by reviewers to explore whether the findings may indeed be related to microsaccades: that is, small, 'jerk-like'47 eye movements. Figure 5 illustrates the directions of microsaccades detected after stimulus presentation (Stimulus 1, Stimulus 2) and during the two delay periods (Delay 1, Delay 2), respectively, for each of the 16 orientations of the currently relevant object. The saccade directions in the poststimulus periods correlated positively with stimulus orientation (circular correlation coefficients (R): Stimulus 1, R = 0.089; Stimulus 2, R = 0.077; both P < 0.001). Weakly positive correlations were also evident during the delay periods (Delay 1, R = 0.01; Delay 2, R = 0.03; both P < 0.001). For further inspection, we again rotated the trial data (analogous to Fig. 1b,c) to illustrate the saccade directions relative to the objects' real-world (upright) orientation. As expected if microsaccades reflected stimulus orientation, in all time windows the aligned distributions were not uniform (Rayleigh tests for uniformity: all z > 34.36, all P < 0.001) but appeared egg-shaped, with a main peak near the object's real-world top (at 90°) and another, smaller peak near the opposite angle (270°, which may reflect 'return' microsaccades to fixation). Together, these complementary results support the idea that the effects observed in our main analyses may have been related to microsaccadic activity during attempted fixation29,48.
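The two circular statistics used here can be sketched in a common textbook formulation (the Jammalamadaka-SenGupta circular-circular correlation and the Rayleigh z = n R̄²; not necessarily the exact toolbox the authors used):

```python
import math

def circ_mean(angles):
    """Circular mean direction of angles given in radians."""
    return math.atan2(sum(math.sin(a) for a in angles),
                      sum(math.cos(a) for a in angles))

def circ_corr(a, b):
    """Circular-circular correlation coefficient (Jammalamadaka-SenGupta)."""
    ma, mb = circ_mean(a), circ_mean(b)
    num = sum(math.sin(x - ma) * math.sin(y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum(math.sin(x - ma) ** 2 for x in a) *
                    sum(math.sin(y - mb) ** 2 for y in b))
    return num / den

def rayleigh_z(angles):
    """Rayleigh z statistic: large z indicates non-uniform (clustered) angles."""
    n = len(angles)
    rbar = math.hypot(sum(math.cos(a) for a in angles),
                      sum(math.sin(a) for a in angles)) / n
    return n * rbar ** 2

# Illustrative check: saccade directions that follow stimulus orientation with
# a fixed offset are perfectly circularly correlated (hypothetical values).
stim = [0.3 * i for i in range(20)]   # orientations in radians
sacc = [x + 0.7 for x in stim]        # directions shifted by a constant
r_cc = circ_corr(stim, sacc)
z_uniform = rayleigh_z([2 * math.pi * i / 20 for i in range(20)])
z_clustered = rayleigh_z([0.1 * i for i in range(10)])
```

Uniformly spread directions give z near zero, whereas directions clustered around one angle (as in the egg-shaped distributions above) give large z.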
Discussion
The processing of WM information during delay periods has been studied extensively using neural recordings (for reviews, see refs. 35,49,50,51,52,53). Here, using novel stimulus materials and tailored geometry analyses, we showed that miniature gaze deflections can disclose an array of WM-associated phenomena that to the best of our knowledge were previously only observed in neural signals, including (1) a sustained encoding of the task-relevant stimulus feature, which (2) shows a different format than during perception, can (3) persist while also encoding new perceptual information, (4) ramps up throughout delay periods when relevant for an upcoming test and (5) returns to baseline when uncued (or 'unattended'). Beyond this, the gaze geometries indicated that temporary inattention rendered the WM information more generalized (object-independent) and potentially more categorically biased. These format changes during maintenance were similarly observed when attention to the memorandum was withdrawn by explicit retro-cueing or by presenting further WM information.
Behaviourally, our results replicate and extend previous findings that temporary inattention renders working memories less precise and more biased37,38. The eye-tracking results shed light on the temporal unfolding of potentially underlying format changes during WM storage. The gaze patterns during perceptual processing were clearly object-specific, indicating a focus on concrete visual details. When the last-seen stimulus (Stimulus 2) was immediately cued (with an auditory retro-cue that offered no visual distraction), some of this object specificity was sustained throughout the ensuing WM delay. In contrast, for the first-presented stimulus (Stimulus 1), the object specificity dropped abruptly as soon as Stimulus 2 processing commenced. Of note, Stimulus 1 encoding did continue throughout Stimulus 2 processing. However, its format changed to object-independent (or 'abstract') during the object-specific (or 'concrete') encoding of Stimulus 2, as if the memory of Stimulus 1 was reformatted to 'evade' the format of the currently perceived stimulus. Subsequent cueing did not revert this effect, nor did we find any re-emergence of object specificity after unattended storage, for either stimulus, in the second delay. Together, these results support the idea that temporary (or partial) inattention may render the task-relevant WM information (here, orientation) increasingly less 'concrete' (visual-sensory) and more generalized or 'abstract'.
Further support for this idea comes from our analysis of the cardinal repulsion bias. In parallel with the object independence of gaze patterns, the repulsive cardinal reporting bias increased with the time a given stimulus had been temporarily (or partially) unattended. Repulsive orientation bias in WM tasks has been explained, for example, by efficient coding principles, in terms of relatively finer tuning to cardinal orientations, reflecting their relative prevalence in natural environments18. An alternative framing of the cardinal repulsion biases in our experiment with real-life objects could be in terms of more explicit semantic categorization (for example, 'left'/'right' and 'up'/'upside-down')17,54. The results may thus also reflect increased reliance on semantics39 and/or (pre)verbal labels when restoring information from unattended storage34, which would be in line with a higher level of abstraction. Although our geometrical analysis approach is agnostic to the mechanistic cause of cardinal biases, we found some indications that they were also evident in biased gaze geometries during the stimulus-free retention periods (for related findings in neuroimaging, see refs. 22,23,55). The latter result was statistically weak and should be revisited in future work, possibly under conditions that induce even stronger biases in behaviour (for example, higher WM loads)18.
A remarkable aspect of our results is the small amplitude of the eye movements that disclosed such rich information. The mass of the raw position samples in our analysis was within a <1° visual angle around fixation (Fig. 1c). A discernible 'circular' structure in averaged data points (Fig. 1d) measured only ~0.2–0.3° in diameter, which is near the eye-tracker's accuracy limit, and was only a fraction of the memory items' physical size. Together with our online fixation control (Methods), these descriptives render it unlikely that our results were attributable to reflexive saccades to the location of peripheral stimulus features. Further analysis (Fig. 5) indicated that the findings more likely reflect microsaccadic activity during attempted fixation. Systematic microsaccade patterns have previously been linked to covert spatial attention29,48, indicating that in the present context, they might have reflected mental orienting towards a spatial coordinate or direction30,31. Together, our results indicate that participants generally oriented attention towards the objects' real-life 'top', but with varying degrees of bias towards specific object features (resulting in object-specific orientation patterns) and/or away from cardinal axes (resulting in cardinal repulsion).
Under a view of the miniature gaze patterns reflecting covert spatial attention, our analysis tracked with high temporal resolution the time course of attention allocation to WM information in a dual retro-cue task. Before cueing, encoding a new stimulus (Stimulus 2) did not immediately eradicate or replace the attentional orienting to the previous stimulus (but did change its qualitative format; see above). At face value, the temporary simultaneity of both WM contents (Fig. 2e), in a putative index of attention, might seem to be at odds with the idea of an exclusive single-item focus of attention in WM56,57 (but see refs. 58,59). However, another possible interpretation is that the (re-)allocation of attention to different stimuli (or tasks) in WM may take time to complete. For instance, the encoding of the uncued stimulus fully returned to baseline only ~0.5–1.5 s after the cue, which is broadly consistent with previous behavioural and EEG work on the time course of WM-cueing effects27,60,61,62. Compared to this, the reformatting into a more generic, object-independent format was rapid, both for Stimulus 1 when encoding Stimulus 2 (see above) and for Stimulus 2 itself when it was uncued. Consistent with these results, a recent study found that low-level perceptual bias induced by concurrent WM information (cf. ref. 63) dissipated quickly with new visual input44. These findings are in line with adaptive format changes in WM, potentially providing fast protection from interference beyond the overall reallocation of attention between different stimuli and/or tasks.
Previous WM studies using retro-cues yielded mixed results about potential costs for the first of two successively presented stimuli. Using visual retro-cues, one study found no differences between visual gratings presented first or second in either behaviour or functional imaging-decoding during the WM delay8, which has been taken as evidence that intervening stimuli may cause little to no interference for visual WM representations9. Another study, using visual retro-cues with tactile WM stimuli, did find lower performance for the first stimulus27, a finding we replicated here with auditory cueing of visual WM information. One possibility is that different-modality cues (for example, auditory when the WM stimuli were visual) interfere less with the short-term memory of the last-presented stimulus than same-modality cues would (for example, visual cues with visual WM stimuli)38. Different-modality cues may thus leave the memory trace of the last-presented stimulus more intact compared to the first stimulus (which is always followed by the same-modality input of the second stimulus). This aside, the format changes induced by the intervening stimulus were qualitatively similar to those after unattended storage, in line with a common explanation in terms of temporarily withdrawn attention.
Our findings of increasingly more object-independent gaze geometries do not rule out that the brain may maintain detailed visual memories in ways that would not register in eye tracking. More generally, we can only speculate whether the minuscule eye movements observed in our experiment played a functional role or whether they were merely epiphenomena of other processes. We consider it possible that our paradigm promoted aspects of WM-related processing to become visible at the surface of ocular activity, but that the ocular activity itself may have had little or no direct role in the WM processing proper (for related discussion, see refs. 30,47,64; but see refs. 65,66 for a role of eye movements in episodic memory retrieval). This speculation also takes note of several recent failures to decode visuospatial WM information from eye tracking, most notably in control analyses supplemental to neural decoding, where systematic eye movements were ruled out as a potential confound12,67,68,69 (but see refs. 32,33,70). At the same time, our findings sound a cautionary note that stimulus-dependent eye movements in visual WM tasks can be very small, hard to prevent, persistent and, above all, informative.
In summary, despite discouraging participants from eye motion through closed-loop fixation control, we found the orientation of visual objects robustly reflected in miniature gaze patterns during cued WM maintenance. The geometry of the gaze patterns underwent systematic changes, indicating that temporary inattention increased the level of abstraction (and categorical bias) of the information in WM. Stimulus-dependent eye movements may not only pose a potential confound but also be a valuable source of information in studying visuospatial WM.
Methods
Participants
Fifty-five participants (31 female, 24 male, mean age 26.95 ± 3.98 years) took part in the experiment. Forty-four of the participants were recruited from a pool of external participants, and 11 were recruited internally at the Max Planck Institute for Human Development. All participants were blind to our research questions, and all of them received compensation of €10 per hour plus a bonus on the basis of task performance (€5 bonus if four out of five randomly selected memory reports were correct). Written informed consent was obtained from all participants, and all experiments were approved by the ethics committee of the Max Planck Institute for Human Development. Two participants (both wearing glasses) were excluded because of difficulties in acquiring a stable eye-tracking signal, and one participant was excluded because she reported feeling unwell during the experimental session. Of the remaining participants, we excluded n = 9 for failing to perform above chance level in each of the two memory tests (P < 0.05, binomial test against 50% correct responses). Finally, after preprocessing the eye-tracking data, we excluded n = 2 participants for whom more than 15% of the data had to be rejected because of blinks and other recording artefacts. After this, n = 41 participants remained for analysis.
Stimuli, task and procedure
Nine colour photographs of everyday objects from the BOSS database71 (candelabra, table, outdoor chair, crown, radio, lighthouse, lamppost, nightstand, gazebo) were used as stimuli. All objects were cropped (that is, background removed), and one object (gazebo) was slightly modified using GNU image manipulation software v.2.1 (http://www.gimp.org) to increase its mirror symmetry. We grouped the pictures into three different sets of three, always combining objects with different aspect ratios (width/height; see the example set in Fig. 3a). Each participant was assigned one of these sets, with each set being used similarly often across the participant sample (two sets were used 18 times and one set 19 times). As auditory cue stimuli, we prepared recordings of the words 'one', 'two' and 'thanks' spoken by a female lab member. The recordings were time-compressed to a common length of 350 ms using a pitch-preserving algorithm provided in Audacity v.2.3.0 (GNU software; https://www.audacityteam.org/).
Each trial started with a fixation dot (8 × 8 px, corresponding to a 0.17 × 0.17° visual angle) displayed at the centre of the screen for 500–1,000 ms (randomly varied), followed by sequential presentation of two objects, each in a random orientation (see below). Each stimulus was displayed for 500 ms (display size ~6.5° visual angle, see Fig. 1c) followed by a 500 ms blank screen. After this, an auditory retro-cue ('one' or 'two', 350 ms) indicated which of the two stimulus orientations was to be reported after a delay (Delay 1, 3,500 ms) in the upcoming memory test (Test 1). Test 1 started with the cued object reappearing on display, but with its previous orientation changed by ±6.43°. Participants were asked to indicate by means of key press (2-AFC) whether the object would need to be rotated clockwise or anticlockwise (right or left arrow key) to match its memorized orientation. On key press, the object rotated accordingly (by 6.43°), followed by a written feedback message ('correct' or 'incorrect') displayed in the upper part of the screen (500 ms). After another 500 ms, in half the trials, an auditory message ('thanks', 350 ms) signalled the end of the trial. In the other half of the trials (randomly varied), a second auditory retro-cue (Cue 2) was presented (for example, 'two', if the first retro-cue was 'one'), indicating that the thus-far-untested stimulus orientation would still need to be reported. In these trials, another delay period ensued (Delay 2, 2,500 ms), and participants' memory for the second-cued stimulus was tested (Test 2), using the same procedure as before for the first-cued stimulus in Test 1. Each participant performed 16 blocks of 32 trials, for a total of 512 trials (265 of which included a Test 2).
Stimulus presentation was pseudorandom across trials, with the following restrictions: (1) each pairing of objects from the participant's object set occurred equally often, (2) each object was equally often presented first (as Stimulus 1) and second (as Stimulus 2), and (3) Stimulus 1 and Stimulus 2 were equally often cued for Test 1. The orientations of the two objects on each trial were drawn randomly and independently from 16 equidistant values (11.25° to 348.75° in steps of 22.5°), which excluded the cardinal axes (0°, 90°, 180° and 270°).
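As a concrete illustration, the orientation grid described above can be constructed as follows (a minimal sketch; variable and function names are ours, and the trial-level sampler merely illustrates "drawn randomly and independently"):

```python
import numpy as np

# The 16 equidistant stimulus orientations: 11.25 deg to 348.75 deg in
# steps of 22.5 deg, which by construction excludes the cardinal axes.
orientations = np.arange(11.25, 360.0, 22.5)

def draw_trial_orientations(rng):
    """Draw the two orientations for one trial, independently and
    uniformly from the grid (illustrative sampler, names are ours)."""
    return rng.choice(orientations, size=2, replace=True)

rng = np.random.default_rng(0)
stim1, stim2 = draw_trial_orientations(rng)
```

Note that the grid is offset by half a step (11.25°) from the cardinal axes, so every orientation lies 11.25° from either a cardinal or a diagonal axis.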
The experiment was run using Psychophysics Toolbox v.3 (ref. 72) with the included Eyelink Toolbox73 in MATLAB 2017a (MathWorks). The visual stimuli were presented on a 60 × 34 cm screen with a 2,560 × 1,440 px resolution and a frame rate of 60 Hz. The auditory cue words were presented through desktop loudspeakers (Harman Kardon KH206). To minimize head motion, participants performed the experiment with their head positioned on a chin rest at a viewing distance of ~62 cm from the screen. Gaze position was monitored and recorded throughout the experiment at a sampling rate of 500 Hz using a desktop-mounted EyeLink 1000 eye-tracker (SR Research), with file and link/analogue filters set to 'EXTRA' and 'STD', respectively.
Participants were instructed to keep their gaze constantly on the fixation dot, which was displayed throughout the entire trial except for the test and feedback periods. Whenever a participant's gaze deviated more than 71 px (1.53° visual angle) from the centre of the fixation dot, either before object presentation or for longer than 500 ms during either of the two delay periods, a warning message ('Fixate') was displayed at the centre of the screen. On average, this occurred during less than 15% (mean 13.03%) of the trial epochs.
Behavioural modelling
To model participants' behavioural memory reports (2-AFC), we used a geometrical approach similar to that used in our eye-tracking analyses (see below). We first defined three prototypical geometries: (1) an unbiased 'circle' model (Mcircle) corresponding to the memory items' 16 original orientations (Fig. 4a, middle), (2) a cardinal repulsion model (Mrepulsion) that shifts the 16 orientations to the nearest diagonal orientation (that is, 45°, 135°, 225° or 315°; see Fig. 4a, rightmost) and (3) a cardinal attraction model (Mattraction) that shifts them to the nearest cardinal orientation (that is, 0°, 90°, 180° or 270°; see Fig. 4a, leftmost). The continuum from attraction to repulsion was formalized with a mixture parameter B (ranging from −1 to 1), yielding a mixed model Mmix that blends the circle model with the repulsion model for B ≥ 0 and with the attraction model for B < 0. Figure 4a illustrates the resulting model continuum from B = −1 (maximal attraction) over B = 0 (unbiased) to B = 1 (maximal repulsion). To simulate memory reports (clockwise or anticlockwise) for each trial, we computed the angular difference d between the orientation modelled in Mmix and the probe orientation displayed at test and transformed it into a probability of making a 'clockwise' response using a logistic choice function with noise parameter s, which relates inversely to memory strength or precision (see also ref. 74). For completeness, our model also allowed for greater memory precision near the cardinal axes (a so-called oblique effect18,75). This was implemented by a further parameter c, which up- or downregulated the noise s for those eight orientations in the stimulus set that were near the cardinal axes (Fig. 4a, middle) relative to the remaining eight orientations that were nearer the diagonal axes, such that values of c < 0 indicate relatively greater precision (lower noise) near the cardinal axes (that is, an oblique effect). The model was fitted to the memory reports of each participant individually using exhaustive grid search (B, −1…1; s, 0…1; c, −0.5…0.5; with a step size of 0.01 for each parameter) and least squares to identify the best-fitting parameter values.
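The fitting procedure can be sketched as follows. Because the display equations are not reproduced in this text, the linear blending of orientations, the logistic link p = 1/(1 + exp(−d/s)) and the multiplicative noise modulation s·(1 + c) near the cardinal axes are our assumptions, and the grid step is coarsened for brevity:

```python
import numpy as np

ORIS = np.arange(11.25, 360.0, 22.5)      # the 16 stimulus orientations (deg)
CARDINALS = np.arange(0.0, 360.0, 90.0)   # 0, 90, 180, 270
DIAGONALS = np.arange(45.0, 360.0, 90.0)  # 45, 135, 225, 315

def ang_diff(a, b):
    """Signed angular difference a - b, wrapped into [-180, 180)."""
    return (np.asarray(a) - b + 180.0) % 360.0 - 180.0

def nearest(axes, ori):
    """The axis orientation closest to ori."""
    return axes[np.argmin(np.abs(ang_diff(axes, ori)))]

def mixed_orientation(ori, B):
    """M_mix: shift ori towards the nearest diagonal (B > 0, repulsion)
    or nearest cardinal (B < 0, attraction); B = 0 is unbiased.
    Linear blending along the shortest arc is our assumption."""
    target = nearest(DIAGONALS, ori) if B >= 0 else nearest(CARDINALS, ori)
    return (ori + abs(B) * ang_diff(target, ori)) % 360.0

def p_clockwise(ori, probe, B, s, c):
    """Logistic probability of a 'clockwise' report; noise s is scaled
    by (1 + c) for the eight near-cardinal orientations (assumed form)."""
    near_cardinal = (np.abs(ang_diff(nearest(CARDINALS, ori), ori))
                     < np.abs(ang_diff(nearest(DIAGONALS, ori), ori)))
    s_eff = s * (1.0 + c) if near_cardinal else s
    d = ang_diff(probe, mixed_orientation(ori, B))
    z = np.clip(d / s_eff, -60.0, 60.0)   # numerical safety
    return 1.0 / (1.0 + np.exp(-z))

def fit_grid(oris, probes, responses, step=0.25):
    """Exhaustive least-squares grid search over (B, s, c); the study
    used a step of 0.01, coarsened here for brevity."""
    best, best_err = None, np.inf
    for B in np.arange(-1.0, 1.0 + 1e-9, step):
        for s in np.arange(step, 1.0 + 1e-9, step):
            for c in np.arange(-0.5, 0.5 + 1e-9, step):
                p = np.array([p_clockwise(o, pr, B, s, c)
                              for o, pr in zip(oris, probes)])
                err = np.sum((np.asarray(responses) - p) ** 2)
                if err < best_err:
                    best, best_err = (B, s, c), err
    return best
```

With B = 1, for example, an orientation of 11.25° is moved fully onto the nearest diagonal (45°), whereas B = −1 moves it onto the nearest cardinal axis (0°).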
Although our analysis focused on bias (B), we note for completeness that we also observed values of c significantly smaller than 0 (mean across conditions: c = −0.292, t(40) = −10.159, P < 0.001, d = −1.586, 95% CI (−0.350, −0.234), t-test against 0): that is, an oblique effect, which replicates and extends previous work18. The strength of this oblique effect tended to decrease with mnemonic distance (t(40) = 2.286, P = 0.028, d = 0.357; t-test of linear slope against zero) (see ref. 18 for related findings).
Eye-tracking analysis
The eye-tracking data were only minimally preprocessed. The data from each participant were zero-centred (using the overall mean over all trials), and data points with a Euclidean distance larger than 100 px (corresponding to a 2.17° visual angle) from the zero-centre were excluded from analysis (Fig. 1c and Fig. 5 show data before this exclusion). We analysed the data in two epochs of interest, one time-locked to Stimulus 1 onset (from −500 ms until the onset of Test 1 at 5,850 ms) and the other time-locked to Cue 2 onset (from −500 ms until the onset of Test 2 at 2,850 ms). After artefact exclusion, on average 97.87% (s.d. = 1.50%, first epoch) and 95.14% (s.d. = 3.67%, second epoch) of the data remained for analysis.
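The zero-centring and distance-based exclusion can be sketched in a few lines (a minimal sketch; the function name and the (n, 2) array layout are our choices):

```python
import numpy as np

def preprocess_gaze(samples, max_dist_px=100.0):
    """Zero-centre gaze samples on their overall mean and drop samples
    farther than max_dist_px from that centre.

    samples: (n, 2) array of x/y gaze positions in pixels.
    Returns the retained centred samples and the boolean keep mask."""
    centred = samples - samples.mean(axis=0, keepdims=True)
    keep = np.linalg.norm(centred, axis=1) <= max_dist_px
    return centred[keep], keep
```

In the study the centring was done per participant over all trials; here a single array stands in for one participant's data.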
Representational similarity analysis
RSA of the gaze-position data was performed separately for each participant using a single-trial approach. For each trial, we first obtained the trial average for each of the 16 orientations while leaving out the current trial. We then computed at each time point the 16 Euclidean distances between the gaze position in the current trial and the trial averages formed from the remaining data. This yielded a representational dissimilarity vector (RDV) of the distances between the (single-trial) gaze associated with the orientation in the current trial and the (trial-averaged) gaze associated with each of the 16 orientations (Fig. 2b). To examine orientation encoding, we computed at each time point and for each trial the Pearson correlation between the empirical RDV and the theoretical RDV predicted under a model of orientation encoding (see below) for the orientation on the current trial. When averaged over trials (and hence also across orientations), the procedure yields a leave-one-out cross-validated time course of orientation encoding, similar to more conventional RSA approaches with trial averages. However, the single-trial approach additionally retains the trial-by-trial variability in orientation encoding (Fig. 2b, right, and Fig. 2e).
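A minimal sketch of this single-trial RSA step at one time point (our variable names; gaze is an (n_trials, 2) array of positions and trial_oris holds orientation indices 0–15):

```python
import numpy as np

def single_trial_rdv(gaze, trial_oris, trial):
    """Euclidean distances between the held-out trial's gaze position
    and the 16 orientation averages computed from all remaining trials."""
    mask = np.ones(len(gaze), dtype=bool)
    mask[trial] = False                      # leave out the current trial
    rdv = np.empty(16)
    for k in range(16):
        sel = mask & (trial_oris == k)
        rdv[k] = np.linalg.norm(gaze[trial] - gaze[sel].mean(axis=0))
    return rdv

def encoding_strength(gaze, trial_oris, model_rdm):
    """Mean Pearson correlation between each trial's empirical RDV and
    the model RDV for that trial's orientation."""
    rs = [np.corrcoef(single_trial_rdv(gaze, trial_oris, t),
                      model_rdm[trial_oris[t]])[0, 1]
          for t in range(len(gaze))]
    return float(np.mean(rs))
```

In the study this computation was repeated at every time point; the sketch shows a single time slice.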
To examine orientation encoding within and between objects (Fig. 3a), we used the same approach but obtained the 16 trial averages separately for each of the three different objects in the participant's stimulus set. This yielded three empirical RDVs per trial (one within and two between objects) that were independently correlated (Pearson correlation) with the model RDV. The two between-objects correlations were then averaged.
All RSA results were obtained individually for each participant and examined statistically on the group level. We used cluster-based permutation testing76, where we first identified clusters of consecutive samples that showed an effect with Psample < 0.05 (uncorrected) and calculated the sum of t-values in a cluster as its test statistic. We then estimated the probability Pcluster that a cluster with a larger test statistic would emerge by chance, on the basis of 20,000 iterations in which the individual participant effects were randomly sign-flipped. Unless otherwise specified, all reported statistical tests were two-sided.
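The cluster-based permutation procedure can be sketched as follows (our simplified one-dimensional implementation: a fixed cluster-forming threshold of |t| > 2.0 stands in for the exact Psample < 0.05 criterion, positive and negative clusters are not separated, and far fewer iterations are used):

```python
import numpy as np

def tvals(data):
    """One-sample t-values across participants (data: n_subj x n_time)."""
    n = data.shape[0]
    return data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n))

def find_clusters(t, t_crit):
    """Contiguous runs with |t| > t_crit, each returned as
    (start, end_exclusive, summed t-value)."""
    runs, start = [], None
    above = np.append(np.abs(t) > t_crit, False)
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            runs.append((start, i, float(t[start:i].sum())))
            start = None
    return runs

def cluster_perm_test(data, t_crit=2.0, n_iter=2000, seed=0):
    """Cluster P-values from a max-statistic null distribution built by
    randomly sign-flipping each participant's whole time course."""
    rng = np.random.default_rng(seed)
    observed = find_clusters(tvals(data), t_crit)
    null = np.empty(n_iter)
    for i in range(n_iter):
        flips = rng.choice([-1.0, 1.0], size=(data.shape[0], 1))
        runs = find_clusters(tvals(data * flips), t_crit)
        null[i] = max((abs(r[2]) for r in runs), default=0.0)
    return [(s, e, stat, float((null >= abs(stat)).mean()))
            for s, e, stat in observed]
```

Comparing each observed cluster against the maximum cluster statistic over sign-flips is what corrects for multiple comparisons across time.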
Model geometries
Our basic orientation model was a perfect circle geometry (Fig. 2a, left), where the model RDVs reflected the pairwise Euclidean distances between 16 evenly spaced points on a circle (Fig. 2a, right; note that each line in the distance matrix corresponds to the model RDV for a given stimulus orientation (Fig. 2b)). The geometry of this model corresponds to our behavioural analysis model with B = 0 (that is, Mcircle, unbiased). To examine bias in the gaze patterns (Fig. 4), we used the Euclidean distance structures associated with our maximally biased models with B = −1 (Mattraction) and B = 1 (Mrepulsion), respectively. Comparing these two extreme models (which both have a square geometry) yields an estimate of the extent to which the gaze patterns were repulsively or attractively biased (Results). Note that the distance structures expected under the three different models (B = 0, B = 1 and B = −1) correlate with each other (r = 0.77 and 0.34). We thus did not expect very large differences in their fit of the data and report the results with a more liberal statistical threshold (Pcluster < 0.05).
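The model distance structures can be constructed as follows (a sketch under our reading of the Methods: 16 points on a unit circle for the unbiased model, collapsed onto the nearest diagonal or cardinal axis for the two maximally biased square models; function names are ours):

```python
import numpy as np

ORIS = np.arange(11.25, 360.0, 22.5)   # the 16 stimulus orientations (deg)

def circle_rdm(angles_deg):
    """Pairwise Euclidean distances between orientations placed on a
    unit circle; each row is the model RDV for one orientation."""
    a = np.deg2rad(angles_deg)
    pts = np.column_stack([np.cos(a), np.sin(a)])
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def snap(angles_deg, axes_deg):
    """Collapse each orientation onto the nearest axis in axes_deg."""
    d = (angles_deg[:, None] - axes_deg[None, :] + 180.0) % 360.0 - 180.0
    return axes_deg[np.abs(d).argmin(axis=1)]

m_circle = circle_rdm(ORIS)                                         # B = 0
m_repulsion = circle_rdm(snap(ORIS, np.arange(45.0, 360.0, 90.0)))  # B = 1
m_attraction = circle_rdm(snap(ORIS, np.arange(0.0, 360.0, 90.0)))  # B = -1
```

Collapsing the 16 orientations onto four axes leaves only three distinct distances (zero, the square's side and its diagonal), which is the square geometry noted above.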
Microsaccade detection
For complementary analysis of microsaccades (Fig. 5), we used a velocity-based detection algorithm established in previous work77,78,79. In brief, the gaze-position data were transformed into a velocity time course by calculating the Euclidean distances between consecutive samples and smoothing with a 7 ms Gaussian kernel. Saccade onsets and endpoints were inferred from when the gaze velocity exceeded a trial-specific threshold (5 times the median velocity in the trial) and when it returned to below threshold, with a minimum interval of 100 ms between successively detected saccades.
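A minimal sketch of this detection pipeline (only the steps named in the text are taken from it; the kernel construction, its standard deviation and the event bookkeeping are our choices):

```python
import numpy as np

def detect_saccades(gaze, fs=500.0, thresh_mult=5.0, min_isi_ms=100.0):
    """Velocity-based (micro)saccade detection on an (n, 2) array of
    x/y gaze positions sampled at fs Hz. Returns a list of
    (onset, offset) sample indices (offset exclusive)."""
    # velocity: Euclidean distance between consecutive samples
    step = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    # smooth with a ~7 ms Gaussian kernel (s.d. in samples is our choice)
    sd = 0.007 * fs
    x = np.arange(-int(3 * sd), int(3 * sd) + 1)
    kernel = np.exp(-x ** 2 / (2 * sd ** 2))
    vel = np.convolve(step, kernel / kernel.sum(), mode="same")
    thresh = thresh_mult * np.median(vel)       # trial-specific threshold
    events, start = [], None
    above = np.append(vel > thresh, False)
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            # enforce the minimum interval between successive saccades
            if not events or (start - events[-1][1]) >= min_isi_ms * fs / 1000.0:
                events.append((start, i))
            start = None
    return events
```

A rapid multi-sample gaze shift stands out sharply against the median-based threshold, whereas fixational jitter stays well below it.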
Reporting summary
Further information on the research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The data that support the findings of this study are available at https://gin.g-node.org/lindedomingo/mpib_memoreye.
Code availability
The experiment and analysis code is available at https://gin.g-node.org/lindedomingo/mpib_memoreye.
References
Sperling, G. The information available in brief visual presentations. Psychol. Monogr. 74, 1–29 (1960).
Brady, T. F., Konkle, T., Alvarez, G. A. & Oliva, A. Visual long-term memory has a massive storage capacity for object details. Proc. Natl Acad. Sci. 105, 14325–14329 (2008).
Bays, P. M., Catalao, R. F. G. & Husain, M. The precision of visual working memory is set by allocation of a shared resource. J. Vis. 9, 7 (2009).
Cowan, N. The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behav. Brain Sci. 24, 87–114 (2001).
Luck, S. J. & Vogel, E. K. The capacity of visual working memory for features and conjunctions. Nature 390, 279–281 (1997).
Ma, W. J., Husain, M. & Bays, P. M. Changing concepts of working memory. Nat. Neurosci. 17, 347–356 (2014).
Christophel, T. B., Iamshchinina, P., Yan, C., Allefeld, C. & Haynes, J.-D. Cortical specialization for attended versus unattended working memory. Nat. Neurosci. 21, 494–496 (2018).
Harrison, S. A. & Tong, F. Decoding reveals the contents of visual working memory in early visual areas. Nature 458, 632–635 (2009).
Rademaker, R. L., Chunharas, C. & Serences, J. T. Coexisting representations of sensory and mnemonic information in human visual cortex. Nat. Neurosci. 22, 1336–1344 (2019).
Serences, J. T., Ester, E. F., Vogel, E. K. & Awh, E. Stimulus-specific delay activity in human primary visual cortex. Psychol. Sci. 20, 207–214 (2009).
Riggall, A. C. & Postle, B. R. The relationship between working memory storage and elevated activity as measured with functional magnetic resonance imaging. J. Neurosci. 32, 12990–12998 (2012).
Kwak, Y. & Curtis, C. E. Unveiling the abstract format of mnemonic representations. Neuron 110, 1822–1828.e5 (2022).
Romo, R., Brody, C. D., Hernández, A. & Lemus, L. Neuronal correlates of parametric working memory in the prefrontal cortex. Nature 399, 470–473 (1999).
Spitzer, B. & Blankenburg, F. Supramodal parametric working memory processing in humans. J. Neurosci. 32, 3287–3295 (2012).
Vergara, J., Rivera, N., Rossi-Pool, R. & Romo, R. A neural parametric code for storing information of more than one sensory modality in working memory. Neuron 89, 54–62 (2016).
Bae, G.-Y., Olkkonen, M., Allred, S. R. & Flombaum, J. I. Why some colors appear more memorable than others: a model combining categories and particulars in color working memory. J. Exp. Psychol. Gen. 144, 744–763 (2015).
Hardman, K. O., Vergauwe, E. & Ricker, T. J. Categorical working memory representations are used in delayed estimation of continuous colors. J. Exp. Psychol. Hum. Percept. Perform. 43, 30–54 (2017).
Taylor, R. & Bays, P. M. Efficient coding in visual working memory accounts for stimulus-specific variations in recall. J. Neurosci. 38, 7132–7142 (2018).
Ricker, T. J. & Cowan, N. Loss of visual working memory within seconds: the combined use of refreshable and non-refreshable features. J. Exp. Psychol. Learn. Mem. Cogn. 36, 1355 (2010).
Vergauwe, E., Camos, V. & Barrouillet, P. The impact of storage on processing: how is information maintained in working memory? J. Exp. Psychol. Learn. Mem. Cogn. 40, 1072–1095 (2014).
Christophel, T. B., Allefeld, C., Endisch, C. & Haynes, J.-D. View-independent working memory representations of artificial shapes in prefrontal and posterior regions of the human brain. Cereb. Cortex 28, 2146–2161 (2018).
Bae, G. Y. Neural evidence for categorical biases in location and orientation representations in a working memory task: EEG decoding of categorical biases. Neuroimage 240, 118366 (2021).
Wolff, M. J., Jochim, J., Akyürek, E. G., Buschman, T. J. & Stokes, M. G. Drifting codes within a stable coding scheme for working memory. PLoS Biol. 18, 1–19 (2020).
Yu, Q., Panichello, M. F., Cai, Y., Postle, B. R. & Buschman, T. J. Delay-period activity in frontal, parietal, and occipital cortex tracks noise and biases in visual working memory. PLoS Biol. 18, e3000854 (2020).
Barbosa, J., Lozano-Soldevilla, D. & Compte, A. Pinging the brain with visual impulses reveals electrically active, not activity-silent, working memories. PLoS Biol. 19, e3001436 (2021).
King, J.-R., Pescetelli, N. & Dehaene, S. Brain mechanisms underlying the brief maintenance of seen and unseen sensory information. Neuron 92, 1122–1134 (2016).
Spitzer, B. & Blankenburg, F. Stimulus-dependent EEG activity reflects internal updating of tactile working memory in humans. Proc. Natl Acad. Sci. 108, 8444–8449 (2011).
Wolff, M. J., Jochim, J., Akyürek, E. G. & Stokes, M. G. Dynamic hidden states underlying working-memory-guided behavior. Nat. Neurosci. 20, 864–871 (2017).
Engbert, R. & Kliegl, R. Microsaccades uncover the orientation of covert attention. Vision Res. 43, 1035–1045 (2003).
Liu, B., Nobre, A. C. & van Ede, F. Functional but not obligatory link between microsaccades and neural modulation by covert spatial attention. Nat. Commun. 13, 3503 (2022).
van Ede, F., Chekroud, S. R. & Nobre, A. C. Human gaze tracks attentional focusing in memorized visual space. Nat. Hum. Behav. 3, 462–470 (2019).
Mostert, P. et al. Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro https://doi.org/10.1523/ENEURO.0401-17.2018 (2018).
Quax, S. C., Dijkstra, N., van Staveren, M. J., Bosch, S. E. & van Gerven, M. A. Eye movements explain decodability during perception and cued attention in MEG. Neuroimage 195, 444–453 (2019).
Beukers, A. O., Buschman, T. J., Cohen, J. D. & Norman, K. A. Is activity silent working memory simply episodic memory? Trends Cogn. Sci. 25, 284–293 (2021).
Stokes, M. G. 'Activity-silent' working memory in prefrontal cortex: a dynamic coding framework. Trends Cogn. Sci. 19, 394–405 (2015).
Bae, G.-Y. & Luck, S. J. Dissociable decoding of spatial attention and working memory from EEG oscillations and sustained potentials. J. Neurosci. 38, 409–422 (2018).
Emrich, S. M., Lockhart, H. A. & Al-Aidroos, N. Attention mediates the flexible allocation of visual working memory resources. J. Exp. Psychol. Hum. Percept. Perform. 43, 1454 (2017).
Bae, G. & Luck, S. J. What happens to an individual visual working memory representation when it is interrupted? Br. J. Psychol. 110, 268–287 (2019).
Kerrén, C., Linde-Domingo, J. & Spitzer, B. Prioritization of semantic over visuo-perceptual aspects in multi-item working memory. Preprint at http://biorxiv.org/lookup/doi/10.1101/2022.06.29.498168 (2022).
Lewis-Peacock, J. A. & Postle, B. R. Decoding the internal focus of attention. Neuropsychologia 50, 470–478 (2012).
Rose, N. S. et al. Reactivation of latent working memories with transcranial magnetic stimulation. Science 354, 1136–1139 (2016).
Kriegeskorte, N. & Kievit, R. A. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn. Sci. 17, 401–412 (2013).
Ester, E. F., Sutterer, D. W., Serences, J. T. & Awh, E. Feature-selective attentional modulations in human frontoparietal cortex. J. Neurosci. 36, 8188–8199 (2016).
Kang, Z. & Spitzer, B. Concurrent visual working memory bias in sequential integration of approximate number. Sci. Rep. 11, 1–12 (2021).
Barak, O., Tsodyks, M. & Romo, R. Neuronal population coding of parametric working memory. J. Neurosci. 30, 9424–9430 (2010).
Watanabe, K. & Funahashi, S. Prefrontal delay-period activity reflects the decision process of a saccade direction during a free-choice ODR task. Cereb. Cortex 17, i88–i100 (2007).
Rolfs, M. Microsaccades: small steps on a long way. Vision Res. 49, 2415–2441 (2009).
Hafed, Z. M. & Clark, J. J. Microsaccades as an overt measure of covert attention shifts. Vision Res. 42, 2533–2545 (2002).
Christophel, T. B., Klink, P. C., Spitzer, B., Roelfsema, P. R. & Haynes, J.-D. The distributed nature of working memory. Trends Cogn. Sci. 21, 111–124 (2017).
D'Esposito, M. & Postle, B. R. The cognitive neuroscience of working memory. Annu. Rev. Psychol. 66, 115–142 (2015).
Goldman-Rakic, P. S. Architecture of the prefrontal cortex and the central executive. Ann. N. Y. Acad. Sci. 769, 71–84 (1995).
Miller, E. K., Lundqvist, M. & Bastos, A. M. Working memory 2.0. Neuron 100, 463–475 (2018).
Wang, X.-J. 50 years of mnemonic persistent activity: quo vadis? Trends Neurosci. 44, 888–902 (2021).
Ricker, T., Souza, A. S. & Vergauwe, E. Feature identity determines representation structure in working memory. Preprint at https://osf.io/k7ptm (2022).
Ester, E. F., Sprague, T. C. & Serences, J. T. Categorical biases in human occipitoparietal cortex. J. Neurosci. 40, 917–931 (2020).
Oberauer, K. Access to information in working memory: exploring the focus of attention. J. Exp. Psychol. Learn. Mem. Cogn. 28, 411–421 (2002).
Olivers, C. N. L., Peters, J., Houtkamp, R. & Roelfsema, P. R. Different states in visual working memory: when it guides attention and when it does not. Trends Cogn. Sci. 15, 327–334 (2011).
Beck, V. M., Hollingworth, A. & Luck, S. J. Simultaneous control of attention by multiple working memory representations. Psychol. Sci. 23, 887–898 (2012).
Zhang, B., Liu, S., Doro, M. & Galfano, G. Attentional guidance from multiple working memory representations: evidence from eye movements. Sci. Rep. 8, 1–9 (2018).
LaRocque, J. J., Lewis-Peacock, J. A., Drysdale, A. T., Oberauer, K. & Postle, B. R. Decoding attended information in short-term memory: an EEG study. J. Cogn. Neurosci. 25, 127–142 (2013).
Souza, A. S. & Oberauer, K. In search of the focus of attention in working memory: 13 years of the retro-cue effect. Atten. Percept. Psychophys. 78, 1839–1860 (2016).
Spitzer, B., Gloel, M., Schmidt, T. T. & Blankenburg, F. Working memory coding of analog stimulus properties in the human prefrontal cortex. Cereb. Cortex 24, 2229–2236 (2014).
Teng, C. & Kravitz, D. J. Visual working memory directly alters perception. Nat. Hum. Behav. 3, 827–836 (2019).
Loaiza, V. M. & Souza, A. S. The eyes don't have it: eye movements are unlikely to reflect refreshing in working memory. PLoS ONE 17, e0271116 (2022).
Ferreira, F., Apel, J. & Henderson, J. M. Taking a new look at looking at nothing. Trends Cogn. Sci. 12, 405–410 (2008).
Johansson, R. & Johansson, M. Look here, eye movements play a functional role in memory retrieval. Psychol. Sci. 25, 236–242 (2014).
Brissenden, J. A. et al. Topographic cortico-cerebellar networks revealed by visual attention and working memory. Curr. Biol. 28, 3364–3372.e5 (2018).
Günseli, E. et al. Overlapping neural representations for dynamic visual imagery and stationary storage in spatial working memory. Preprint at https://www.biorxiv.org/content/10.1101/2022.09.24.509255v1 (2022).
Muhle-Karbe, P. S., Myers, N. E. & Stokes, M. G. A hierarchy of functional states in working memory. J. Neurosci. 41, 4461–4475 (2021).
Thielen, J., Bosch, S. E., van Leeuwen, T. M., van Gerven, M. A. J. & van Lier, R. Evidence for confounding eye movements under attempted fixation and active viewing in cognitive neuroscience. Sci. Rep. 9, 17456 (2019).
Brodeur, M. B., Guérard, K. & Bouras, M. Bank of Standardized Stimuli (BOSS) phase II: 930 new normative photos. PLoS ONE 9, e106953 (2014).
Brainard, D. H. The Psychophysics Toolbox. Spat. Vis. 10, 433–436 (1997).
Cornelissen, F. W., Peters, E. M. & Palmer, J. The Eyelink Toolbox: eye tracking with MATLAB and the Psychophysics Toolbox. Behav. Res. Methods Instrum. Comput. 34, 613–617 (2002).
Schurgin, M. W., Wixted, J. T. & Brady, T. F. Psychophysical scaling reveals a unified theory of visual memory strength. Nat. Hum. Behav. 4, 1156–1172 (2020).
Pratte, M. S., Park, Y. E., Rademaker, R. L. & Tong, F. Accounting for stimulus-specific variation in precision reveals a discrete capacity limit in visual working memory. J. Exp. Psychol. Hum. Percept. Perform. 43, 6 (2017).
Maris, E. & Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190 (2007).
De Vries, E., Fejer, G. & van Ede, F. No trade-off between the use of space and time for working memory. Preprint at http://biorxiv.org/lookup/doi/10.1101/2023.01.20.524861 (2023).
De Vries, E. & van Ede, F. Microsaccades track location-based object rehearsal in visual working memory. Preprint at http://biorxiv.org/lookup/doi/10.1101/2023.03.21.533618 (2023).
Liu, B., Alexopoulou, Z.-S. & van Ede, F. Jointly looking to the past and the future in visual working memory. Preprint at http://biorxiv.org/lookup/doi/10.1101/2023.01.30.526235 (2023).
Acknowledgements
We thank I. Padezhki, C. Wicharz, J. Hebisch, A. Faschinger, G. Inciuraite and A. Anouk Bielefeldt for their help with data collection and J. Wäscher for participant recruitment. We also thank J. Hebisch for recording the auditory stimuli, M. Rolfs for helpful comments and discussion, R. Hertwig for general support and T. Graham for editorial assistance. Part of this work was conducted at the Max Planck Dahlem Campus of Cognition of the Max Planck Institute for Human Development, Berlin, Germany. This research was supported by European Research Council Consolidator Grant ERC-2020-COG-101000972 (B.S.) and by DFG grant SP 1510/7-1 (B.S.). J.L.-D. received support from Ramón y Cajal fellowship RYC2021-033940-I from the Spanish Ministry of Science and Innovation. The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript.
Funding
Open access funding provided by Max Planck Society.
Author information
Contributions
J.L.-D. was responsible for data curation, formal analysis, investigation, validation and visualization. B.S. was responsible for funding acquisition, methodology, resources and supervision. Both authors contributed to study conceptualization, project administration, and writing, reviewing and editing the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Human Behaviour thanks Freek van Ede and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Additional information
Publisherās note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the articleās Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the articleās Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Linde-Domingo, J. & Spitzer, B. Geometry of visuospatial working memory information in miniature gaze patterns. Nat. Hum. Behav. 8, 336–348 (2024). https://doi.org/10.1038/s41562-023-01737-z