Gaze-pattern similarity at encoding may interfere with future memory

Human brains have a remarkable ability to separate streams of visual input into distinct memory-traces. It is unclear, however, how this ability relates to the way these inputs are explored via unique gaze-patterns. Moreover, it is yet unknown how motivation to forget or remember influences the link between gaze similarity and memory. In two experiments, we used a modified directed-forgetting paradigm and either showed blurred versions of the encoded scenes (Experiment 1) or pink noise images (Experiment 2) during attempted memory control. Both experiments demonstrated that higher levels of across-stimulus gaze similarity relate to worse future memory. Although this across-stimulus interference effect was unaffected by motivation, it depended on the perceptual overlap between stimuli and was more pronounced for scene–scene comparisons than for scene–pink-noise comparisons. Intriguingly, these findings echo the pattern similarity effects from the neuroimaging literature and pinpoint a mechanism that could aid the regulation of unwanted memories.

Figure 1. Sequence of events during a remember (green cue), control (white cue), and forget (purple cue) trial of Experiment 1 and Experiment 2. Both experiments entailed two experimental sessions separated by two or three days: each trial of session 1 consisted of a memory-encoding and a memory-regulation phase, while session 2 consisted of a recognition-memory test.

The other two comparisons (i.e., remember versus forget and control versus forget) did not reach statistical significance (both p's > .05). These results suggest that participants succeeded in enhancing (remember > control), but not suppressing (forget < control), their memory.

Results
Additional analyses of image memorability revealed that different participants forgot different pictures, indicating that any observed differences in gaze behavior between remembered (hits) and forgotten (misses) images are unlikely to be explained by actual differences between the images themselves (see online Supplementary Information). In addition to the image-memorability analyses, the online Supplementary Information also includes analyses of reaction times during the recognition test, subjective motivation and effort ratings, strategies reported for remembering or forgetting the scenes, and local versus global gaze-similarity analyses.
Gaze-similarity analyses. Gaze-pattern similarity was computed using the ScanMatch toolbox for Matlab 49 (The MathWorks, Natick, MA). Using this method, each sequence of fixations on an image was spatially (12 × 8 bin ROI grid) and temporally (50 ms) binned. Pairs of these sequences were then compared using the Needleman-Wunsch algorithm and the correspondence between pairs was expressed by a normalized similarity score (0 = no correspondence, 1 = identical; see Fig. 2). Mean similarity scores were analyzed with a Motivation (remember, control, forget) × Memory Accuracy (hits vs. misses) repeated measures ANOVA. Results are depicted in Fig. 3b.
Global encoding-regulation similarity. To further assess the relationship between across-stimulus similarity and memory, a more global type of similarity, similar to that used in previous neural reinstatement studies, was computed 15 . First, for each participant, stimuli were back-sorted according to subsequent memory (hits versus misses) 15,52,53 . Then, global similarity was computed separately for hits and misses. Specifically, the gaze pattern of each hit (or miss) in the memory-encoding phase was compared to the gaze patterns of all other hits (or misses) in the memory-regulation phase (see dashed arrows in Fig. 3a). The ANOVA on this similarity measure revealed a significant main effect of memory accuracy, F(1,33) = 21.64, f = .81 (95% CI = [.40, 1.29]), p < .001, BF Inclusion = 6.8 × 10^6, indicating higher similarity scores for misses compared to hits (across motivational conditions; see Fig. 3b). This result supports the idea that across-stimulus gaze similarity may interfere with memory. All other effects did not reach statistical significance (all p's > .05). Please see the online Supplementary Information for a direct comparison of (local) encoding-regulation and global encoding-regulation similarity scores.
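To make the back-sorting and averaging step concrete, it can be sketched as follows. This is a minimal illustration only, not the actual analysis code; the function name, matrix layout, and labels are hypothetical.

```python
import numpy as np

def global_similarity(sim_matrix, labels, target):
    """Average each trial's gaze similarity to all *other* trials sharing
    its subsequent-memory label (e.g., every hit vs. all other hits).

    sim_matrix: n_trials x n_trials array of gaze-similarity scores, where
                sim_matrix[i, j] compares trial i's encoding gaze pattern
                with trial j's regulation-phase gaze pattern.
    labels:     per-trial subsequent-memory labels ('hit' or 'miss').
    target:     which label to aggregate over.
    """
    idx = np.flatnonzero(np.asarray(labels) == target)
    scores = []
    for i in idx:
        others = idx[idx != i]              # exclude the trial's own comparison
        scores.append(sim_matrix[i, others].mean())
    return float(np.mean(scores))
```

The same routine, applied to an encoding-by-encoding similarity matrix, would yield the global encoding-encoding measure described below.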
Global encoding-encoding similarity. As another global measure, we also compared the gaze pattern of each hit (or miss) in the memory-encoding phase with the gaze patterns of all other hits (or misses) in the memory-encoding phase (see dotted arrows in Fig. 3a). The ANOVA on this global similarity measure also revealed a significant main effect of memory accuracy.

In summary, the results of Experiment 1 provide preliminary evidence that: (1) when different images are encoded with more similar gaze patterns, their subsequent memory is reduced, and (2) this similarity-memory relationship is unaffected by extrinsic motivation. For ease of discussion, we will refer to the former effect as the across-stimulus interference effect. It should be noted that this effect was observed both when comparing the gaze patterns during viewing of scene images and other scene images (global encoding-encoding similarity) and when comparing the gaze patterns during viewing of scene images and blurred scene images (global encoding-regulation similarity). The blurred images were, however, blurred versions of the encoded scene images. Hence, although they may have appeared different, there was clearly some "perceptual overlap". The visual distraction studies described in the introduction suggest that memory will be disrupted only when there is such perceptual overlap between stimuli 29,30 . Taking this into account, we set up Experiment 2 and changed the blurred scenes in the regulation phase to pink noise images.

Experiment 2. The experimental procedures of Experiment 2 were identical to those of Experiment 1, with the exception that blurred images were replaced with pink noise images in the memory-regulation phase (see Fig. 1). These pink noise images were randomly created and had no prior connection, nor perceptual overlap, with the encoded scene images.
Although pink noise images are known to elicit an overall less explorative scanning pattern than natural images, they induce a greater level of visual exploration than white noise images (i.e., larger saccades, higher fixation rates, and shorter fixation durations) 54,55 . In this experiment, we also analyzed the maximal increase in skin conductance during the 1-5 s after stimulus onset (i.e., the skin conductance response; SCR) 56,57 in all experimental phases.
www.nature.com/scientificreports/
As in Experiment 1, additional analyses of image memorability indicated that any observed differences in gaze behavior between remembered (hits) and forgotten (misses) images are unlikely to be explained by actual differences between the images themselves.
Gaze-similarity analyses. As with the SCR data, all gaze-similarity data were analyzed with a Motivation (remember, control, forget) × Memory Accuracy (hits vs. misses) repeated measures ANOVA. Results are depicted in Fig. 3c.
Encoding-test similarity The ANOVA revealed a significant main effect of memory accuracy, F(1,35) = 14.05, f = .63 (95% CI = [.26, 1.06]), p < .001, BF Inclusion = 8.3. This result indicates that, across motivational conditions, remembered images were scanned in a more similar manner during encoding and test, compared to forgotten images (which is in line with findings from Experiment 1 and previous studies 7,9 ). All other effects were not statistically significant (p's > .05).
Encoding-regulation similarity The ANOVA revealed a significant main effect of motivation, F(2,68) = 5.74, f = .41 (95% CI = [.13, .66]), ε = .72, p = .011, BF Inclusion = 97.6. Post hoc comparisons showed that similarity scores in the remember condition were significantly higher than similarity scores in the forget condition, t(35) = 2.66, p = .035, d = .46 (95% CI = [−.02, .93]), BF 10 = 3.7; the other two comparisons did not reach statistical significance (both p's > .05). These results suggest that participants explored the pink noise image and its preceding scene image more similarly when motivated to remember (versus forget) the scene image. All other effects were not statistically significant (all p's > .05). Hence, there is no support for the proposed across-stimulus interference effect (BF Inclusion for the main effect of Memory Accuracy = .4), nor for the idea that this effect may be modulated by extrinsic motivation (BF Inclusion for the Motivation × Memory Accuracy interaction = .1).
Global encoding-regulation similarity The ANOVA revealed no significant effects (all p's > .05). Thus, again there is no support for the suggested across-stimulus interference effect (BF Inclusion = 1.1). Please see the online Supplementary Information for a direct comparison of (local) encoding-regulation and global encoding-regulation similarity scores.
Global encoding-encoding similarity The ANOVA revealed a trend towards significance for the main effect of memory accuracy, F(1,35) = 3.90, f = .33 (95% CI = [.00, .70]), p = .056, with a BF Inclusion of 16.4, reflecting higher similarity scores for misses compared to hits (across motivational conditions). As in Experiment 1, this result supports the idea that across-stimulus similarity interferes with later memory. All other effects were not statistically significant (all p's > .05).
Taken together, the results of Experiment 2 are partly consistent with those reported in Experiment 1. We found strong (Bayesian) evidence for the suggested across-stimulus interference effect for 1 out of 3 comparison types: when gaze-scanning patterns were compared across scenes and other scene images (during encoding), more similar explorations were related to lower subsequent memory. On the other hand, when gaze-scanning patterns were compared across scenes and pink noise images, exploration similarity seemed unrelated to future memory success. These results provide some support for the hypothesis that there needs to be a certain degree of perceptual overlap between the encoded images, as was the case for the scene and blurred scene images in Experiment 1 but not for the scene and pink noise images in Experiment 2, for gaze similarity to affect later memory. In addition, as found in Experiment 1, although extrinsic motivation affected the way in which participants viewed the pink noise images (i.e., encoding-regulation similarity was highest when motivated to remember), it did not mediate the observed similarity-memory relationship.

Discussion
How we look at the world influences the way in which we remember it. Until now, studies on gaze-pattern similarity have focused on how we look at identical stimuli (e.g., images) during encoding and memory testing [7][8][9][10][11] . These studies showed that greater encoding-test gaze similarity relates to better memory (as replicated here). The present study extends these findings by examining how we look at different stimuli during encoding and whether this affects future memory success. Based on existing neural reinstatement work, we hypothesized that high levels of across-stimulus gaze similarity would relate to low levels of memory. Furthermore, we examined whether the motivation to either forget or remember affects the hypothesized similarity-memory relationship.
When considering the first aim of this study, our results provide initial evidence that higher levels of gaze similarity (during encoding) do indeed relate to lower levels of subsequent memory. Experiment 2, however, suggests that there may be a limit to this across-stimulus interference effect and that it depends on the perceptual similarity (overlap) between the stimuli: there needs to be some perceptual overlap between the relevant stimuli (as was true for the scene-blurred scene, but not for the scene-pink noise comparisons) for similar gaze patterns to disrupt memory. Although this idea fits with earlier findings from the visual distraction literature 29,30 (i.e., only similar distracters disrupt memory), it remains to be determined whether the memory disruption observed in these studies might be driven by underlying gaze-similarity effects (to the best of our knowledge, none of the related studies used eye tracking).
As the suggested across-stimulus interference effect seems to depend on the perceptual overlap between stimuli, one may wonder what happens when eye-movements are reinstated in the absence of a stimulus (i.e., "looking-at-nothing"). Previous studies showed that more similar exploration patterns during encoding and a blank delay may actually benefit memory [12][13][14] (for studies showing that memory is also improved for items that had appeared at a saccade target location, see Refs. [58][59][60] ). Interestingly, this suggests a non-linear effect of gaze reinstatement, with beneficial memory effects when no other stimulus is presented, no memory effects when the encoded stimuli are perceptually different, and disruptive memory effects when the encoded stimuli are perceptually similar. This could provide an intriguing avenue for future research.
Integrating the present findings with the neuroimaging literature on pattern similarity reveals an interesting commonality. Just as observed for gaze similarity here, activation similarity (i.e., reduced pattern separation) in the hippocampus (but not in other medial temporal lobe regions) has been found to interfere with memory [15][16][17][18] . This may however not be that surprising, considering that the hippocampus and the oculomotor system are anatomically well connected through an extensive set of polysynaptic pathways 25 . In line with this reasoning, individuals with amnesia whose damage includes the hippocampus, show alterations in their gaze patterns 61,62 . In addition, there is recent evidence that visual sampling during encoding can predict hippocampal activity in neurologically intact individuals [22][23][24] . Finally, hippocampus activation was shown to be linked to the expression of relational memory in viewing patterns even when explicit retrieval failed 63 . Taken together, without assuming causality, these findings suggest that the hippocampal and oculomotor networks are inherently linked and raise the possibility that the observed gaze similarity effects, in the present and previous studies, may be related to activation similarity in the hippocampus 19 .
An alternative explanation of the observed across-stimulus interference effect could be that the forgotten images were simply less perceptually engaging, less distinctive and less memorable, consequently inducing more similar gaze patterns. Two of our findings however argue against this explanation. First, our SCR analysis in Experiment 2 revealed that the forgotten images (i.e., misses) induced larger SCRs than the remembered images (i.e., hits) during encoding. This suggests that the forgotten images were encoded with a higher level of arousal, which would be unlikely if these images were less engaging and less memorable. Second, different participants forgot different pictures, suggesting that there were no consistent differences in the distinctiveness and memorability of the images.
If the across-stimulus interference effect proves to be stable in future replications (preferably with different types of stimuli), a potential mechanism for successful memory regulation might have been illuminated. A deeper understanding of such a mechanism may hold clinical implications for the treatment of different psychopathologies characterized by the intrusion of unwanted memories: e.g., posttraumatic-stress disorder, obsessive-compulsive disorder, and depression. Intrusions are typically vivid, detailed, unexpected and uncontrollable. To resist such intrusions, people often attempt to self-distract and avoid triggers, strategies that paradoxically increase thought frequency, hyper-vigilance, and negative appraisal of the intrusions 64 . If, however, there is a causal connection between gaze reinstatement and successful memory suppression, it could pave the way for alternative clinical interventions. These could, for instance, involve simple experimental manipulations of visual exploration patterns to manipulate gaze reinstatement. The ease with which eye movements can be measured, and controlled, presents an important advantage over the more covert neural measures. Nevertheless, considering again the close functional and anatomical connection between the visual and hippocampal systems [19][20][21][22][23][24][25] , gaze manipulations may possibly also affect the underlying neural processes.
Regarding the second aim of this study, our results suggest that the motivation to either forget or remember does not mediate the observed link between gaze-similarity and memory. Motivated memory did however affect (encoding-regulation) gaze similarity. Specifically, participants looked more similarly at either a blurred or pink noise image and a preceding scene image when motivated to remember the scene image. Interestingly, only 2-3 participants in each experiment reported having purposely changed their gaze behavior depending on their motivational state. This suggests that participants were unconsciously reinstating their gaze when trying to hold on to (and remember) an image. In any case, whether conscious or not, this gaze-reinstatement effect did not benefit memory, since across motivational conditions, higher encoding-regulation similarity scores were observed for misses compared to hits.
The absence of a memory suppression effect (memory performance in forget condition < memory performance in control condition) follows earlier inconsistent findings in the motivated memory literature. When considering previous research that used a directed forgetting paradigm (as in the present study), initial studies did not include a control condition and only compared memory in a remember condition with memory in a forget condition 37,38 . Clearly, if a difference is observed, this difference could have resulted from both a memory facilitation (remember > control) and a memory suppression (forget < control) effect. More recent studies have included a control condition and found evidence solely for memory facilitation (as in the present study) 40,41 . When considering studies that used a think-no-think paradigm, the evidence for memory suppression seems stronger 35,36,65 , although contradictory evidence exists 39,66 . In a typical think-no-think study, participants are presented with item-pairs during encoding, whereas during testing they are presented with only one item of a pair. Participants are then asked to try and think or not to think of the paired associate. Thus, while actual item-memories are suppressed in the directed forgetting paradigm, associative memories are suppressed in the think-no-think paradigm. This difference in the type of suppression could possibly explain the discrepancy in findings with the two paradigms 44 .
One limitation of the present study is that, in some participants, the blurred and pink noise images induced very little visual exploration. Hence, after excluding all trials with fewer than 3 fixations, the eye-tracking data of some participants had to be excluded. As mentioned earlier, pink noise images (and even more so white noise images) commonly induce more fixations to the center of the screen than natural images 54,55 . Future studies should therefore consider using other types of more complex stimuli (e.g., fractals) to stimulate eye movements and reduce the central fixation bias. Another limitation of this study is that no confidence ratings were obtained in the memory test. Thus, no claims can be made about the relationship between gaze similarity and memory confidence. This could be an interesting avenue for future research.
Taken together, the present study provides initial evidence for the idea that across-stimulus gaze similarity during encoding interferes with subsequent memory. In other words, when different stimuli are encoded with more similar gaze-scanning patterns, later memory reports are hampered. In addition, while this effect seems dependent upon the amount of perceptual overlap between the encoded stimuli, it seems unaffected by extrinsic motivation to either forget or remember. Although these findings parallel established neural reinstatement effects, the implications are divergent. Specifically, the ability to control eye movements raises the possibility that direct manipulations of gaze similarity (during encoding) would affect later recognition. If so, it could potentially serve as a mechanism to aid the control of unwanted memories.

Stimuli. The 180 images were randomly divided into two sets of 90 images, with an equal number of natural and urban scenes within each set. Importantly, during the memory-encoding phase (of session 1), only one picture-set was presented to participants (set 1 for odd participant numbers, set 2 for even participant numbers), whereas in the memory-test phase (of session 2), both picture-sets were presented to participants (see "Procedure" section below).

Participants.
During the memory-regulation phase (of session 1), participants were presented either with blurred, black-and-white versions of the encoded scene images (Experiment 1) or pink noise images (Experiment 2). Image blurring was accomplished using a free online blurring tool (https://pinetools.com/blur-image; stack blur, 100 radius). Pink noise images (90 in total) were created using a Matlab utility function (the function can be downloaded from: https://github.com/kendrickkay/knkutils). Please note that while the presentation of the blurred scenes depended on the preceding scene stimulus (Experiment 1), the presentation of pink noise images was randomly determined (Experiment 2). All images were presented on a Syncmaster monitor at a resolution of 1024 × 768 pixels (screen resolution was 1920 × 1080), at a viewing distance of approximately 60 cm.
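For illustration, pink (1/f) noise images of the kind used in Experiment 2 can be generated by scaling white noise in the Fourier domain so that amplitude falls off with spatial frequency. The sketch below is a Python approximation of this general technique, not the knkutils Matlab function actually used; the function name and normalization choice are ours.

```python
import numpy as np

def pink_noise_image(size=(768, 1024), seed=0):
    """Generate a 2-D pink-noise (1/f) image: white noise filtered in the
    Fourier domain so amplitude scales inversely with spatial frequency."""
    rng = np.random.default_rng(seed)
    h, w = size
    fy = np.fft.fftfreq(h)[:, None]          # vertical spatial frequencies
    fx = np.fft.fftfreq(w)[None, :]          # horizontal spatial frequencies
    f = np.sqrt(fy**2 + fx**2)               # radial frequency per coefficient
    f[0, 0] = 1.0                            # avoid division by zero at DC
    spectrum = np.fft.fft2(rng.standard_normal(size)) / f
    img = np.real(np.fft.ifft2(spectrum))
    # rescale to [0, 1] grayscale for display
    return (img - img.min()) / (img.max() - img.min())
```

A new random seed per image would yield the 90 mutually unrelated noise stimuli described above.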
Procedure. All participants underwent two testing sessions separated by 2-3 days (see Fig. 1). In both sessions, a short break was inserted halfway (splitting each session into two blocks) to allow participants a short rest and maintain their vigilance. The sessions were structured as follows: Session 1 Once informed consent was obtained, the experimenter identified the participant's dominant eye and attached the skin conductance electrodes. Next, skin conductance was measured during 1 min of rest (i.e., baseline period). When ready, participants were instructed about the exact experimental procedures: the experiment consisted of 90 trials, and each trial began with the presentation of 1 out of 90 colored scene-pictures for 3000 ms (i.e., memory-encoding phase). Immediately after each display, participants were presented with both an auditory message and a visual cue (for 650 ms) instructing them to either do nothing (white fixation dot), forget (purple fixation dot), or remember (green fixation dot) the previously presented picture. The cue condition of each picture was randomly determined and no more than 2 identical cues appeared consecutively. Following the cue, participants were presented either with a blurred version of the previously presented scene (Experiment 1) or a pink noise image (Experiment 2) for 3000 ms (i.e., memory-regulation phase). In order to enhance motivation, participants were asked to imagine that they were guilty of a crime and that all pictures followed by a forget cue were related to the crime, whereas all pictures followed by a remember cue were related to their alibi. Moreover, participants were told that in the next experimental session (2-3 days later), they would undergo a polygraph test in which they could win a 15 NIS (~ 4.3 USD) bonus if the polygraph test: (1) did not connect them to the crime-related pictures, but (2) did connect them to the alibi-related pictures.
After the experimenter ensured that the instructions were understood, the eye-tracker was set up and a standard nine-point calibration and validation procedure (Experiment Builder, SR Research, Ontario, Canada) was performed. Finally, before starting the actual experiment, all participants underwent a short practice phase (of 3 trials) to familiarize them with the procedure. Session 2 After the experimenter attached the skin conductance electrodes, participants were told that they would undergo a regular recognition-memory test, not a polygraph test (as told in session 1). In this memory test, participants were presented with the 90 studied pictures from session 1 of the experiment and 90 foils taken from the unstudied picture set (each picture was presented for 3000 ms; i.e., memory-test phase). After the picture disappeared, participants were asked whether or not it had been presented during session 1 (i.e., to provide a "yes" or "no" answer; see Fig. 1). Participants were not asked about the cues from session 1. Rather, they were told to disregard the previous forget and remember instructions (i.e., cues) and to endorse all previously presented stimuli with "yes", regardless of previous instructions. Participants were given unlimited time to make their responses, and accuracy was encouraged by promising a monetary bonus (i.e., 15 NIS) if at least 75% of their answers were correct (i.e., "yes" responses to studied items, "no" responses to foils). After the experimenter ensured that all experimental procedures were understood, the eye-tracker was set up and a standard nine-point calibration and validation procedure was performed. When the participant reported being ready, the experimental session started.
At the end of session 2, participants received a paper-and-pencil questionnaire in which they were asked to rate, on a scale from 1 to 6 (1 = not at all, 6 = very much), their overall motivation to remember versus forget the images (that were followed by remember versus forget cues, respectively), as well as their efforts to remember versus forget the images. Furthermore, participants were asked to verbally describe any strategies used to help them remember or forget. Analyses of the questionnaire data are presented in the online Supplementary Information. Finally, all participants were debriefed and compensated for their participation in the experiment.
Data acquisition and reduction. The experiment was conducted in a sound attenuated room with dedicated air-conditioning in order to keep the temperature stable. Behavioral responses (from session 2) were coded as either hits or misses. Specifically, "yes" answers to previously studied items (from session 1) were coded as hits, while "no" answers to studied items were coded as misses (see Fig. 1).
The apparatus included a Biopac MP160 system (BIOPAC Systems, Inc., Camino Goleta, CA) to measure skin conductance and a ThinkCentre M Series computer to save the relevant data. Skin conductance was obtained with a sampling rate of 1000 Hz and two Ag/AgCl electrodes (1.6-cm diameter) which were placed on the distal phalanges of the left index and left ring finger. Due to technical issues in Experiment 1, SCRs were analyzed only in Experiment 2. An EyeLink 1000 Plus table-mount setup was used to measure the eye-movement data and a SilverStone computer saved the relevant data. Eye-movement data were parsed into saccades and fixations using EyeLink's standard parser configuration: samples were defined as a saccade when the deviation of consecutive samples exceeded 30°/s velocity or 8000°/s² acceleration. Samples gathered from time intervals between saccades were defined as fixations.
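The velocity/acceleration criterion can be sketched as follows. This is a simplified illustration of the parsing logic only, not EyeLink's actual implementation (which operates on raw samples and applies additional heuristics); the function name and signature are hypothetical.

```python
import numpy as np

def parse_saccades(x, y, fs=1000.0, vel_thresh=30.0, acc_thresh=8000.0):
    """Label each gaze sample as saccade (True) or fixation (False) using
    velocity and acceleration thresholds in the spirit of EyeLink's parser.

    x, y: gaze position traces in degrees of visual angle.
    fs:   sampling rate in Hz (1000 Hz in the present setup).
    """
    vx = np.gradient(np.asarray(x, float)) * fs
    vy = np.gradient(np.asarray(y, float)) * fs
    vel = np.hypot(vx, vy)                # instantaneous speed, deg/s
    acc = np.abs(np.gradient(vel)) * fs   # instantaneous acceleration, deg/s^2
    return (vel > vel_thresh) | (acc > acc_thresh)
```

Runs of consecutive non-saccade samples would then be grouped into the fixations that feed the similarity analyses.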
In Experiment 1, after disqualifying trials with fewer than 3 fixations (2.3% of all trials), all eye-tracking data of two (out of thirty-six) participants were excluded entirely from analysis because more than 20% of their data (from the memory-regulation phase) were removed. Thus, all eye-movement analyses of Experiment 1 were based on data of 34 participants. An a priori power analysis revealed that this sample size allows for detecting a medium effect size (i.e., Cohen's d of .50) with a statistical power of at least .80.
In Experiment 2, after disqualifying trials with fewer than 3 fixations (2.3% of all trials), all eye-tracking data of three (out of forty-three) participants were excluded entirely from analysis because more than 20% of their data (from the memory-regulation phase) were removed. In addition, depending on the type of similarity scores analyzed (i.e., encoding-test, encoding-regulation, global encoding-regulation, global encoding-encoding), the eye-tracking data of either one or two additional participants were disqualified because no "misses" data remained in one of the experimental conditions. Finally, all data of three participants were disqualified either because they did not show up to the second part of the experiment or because of non-compliance. Thus, all eye-movement analyses were based on data of 35 to 36 participants.
In Experiment 2, we also analyzed SCRs, defined as the maximal increase in skin conductance during the 1-5 s after stimulus onset 56,57 . Although raw SCRs were analyzed, all SCR values were standardized to identify outliers (i.e., standard score larger than 5 or smaller than −5) as well as trials with excessive movements (i.e., standard score larger than 0 when a movement occurred). A total of 1.5% of SCRs were eliminated from the memory-encoding phase, 1.5% from the memory-regulation phase, and 1.1% from the memory-test phase. Importantly, the within-subject standardization was performed within experimental blocks (before and after the break), minimizing habituation effects 69,70 .
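The SCR scoring and outlier screening can be sketched as follows. This is an illustrative reading of the definitions above, not the actual analysis code: we assume here that "maximal increase" means the peak within the 1-5 s window minus the value at the window's start, and both function names are hypothetical.

```python
import numpy as np

def scr_amplitude(sc, onset, fs=1000.0, window=(1.0, 5.0)):
    """Maximal increase in skin conductance within 1-5 s after stimulus
    onset, taken as peak minus the value at window start (assumption)."""
    i0 = int(onset + window[0] * fs)
    i1 = int(onset + window[1] * fs)
    seg = np.asarray(sc[i0:i1], float)
    return float(max(seg.max() - seg[0], 0.0))   # no increase -> 0

def outlier_mask(scrs, z_limit=5.0):
    """Flag SCRs whose within-block standard score exceeds the ±5 bound."""
    scrs = np.asarray(scrs, float)
    z = (scrs - scrs.mean()) / scrs.std()
    return np.abs(z) > z_limit
```

In the actual pipeline, `outlier_mask` would be applied separately per participant and per experimental block, consistent with the within-block standardization described above.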
In addition, skin-conductance nonresponsivity was determined after the elimination of single items 56,57 . Specifically, participants for whom the standard deviation across trials in both the first and second block of a session was below .01 μS were considered nonresponders, and their SCR data were eliminated entirely from all analyses. In case of nonresponsivity in either the first or the second block, only the data from the respective trials were removed (please note that we did not pre-register the removal of SCR data based on nonresponsivity; nonetheless, similar results are observed when the 'nonresponding' blocks are included). For the memory-encoding phase, this led to the removal of all SCR data of 1 participant, the SCR data of the first block of 2 participants, and the SCR data of the second block of another 2 participants. For the memory-regulation phase, this led to the removal of the SCR data from the first block of 2 participants and the SCR data from the second block of another 3 participants. For the memory-test phase, this led to the removal of the SCR data of the second block of 2 participants. Finally, the skin conductance data of one additional participant were disqualified because no "misses" data remained in one of the experimental conditions. Thus, all skin conductance analyses were based on data of 38 to 39 participants.

Data analyses. Gaze similarity was computed using the ScanMatch toolbox for Matlab 49 (The MathWorks, Natick, MA). Using this method, each sequence of fixations on an image was spatially and temporally binned and then recoded to create a sequence of letters that retains fixation location, order, and time information. Pairs of these sequences were then compared using the Needleman-Wunsch algorithm (borrowed from the field of genetics) to find the optimal alignment between a pair.
The correspondence between two sequences is expressed by a normalized similarity score (0 = no correspondence, 1 = identical; see Fig. 2), which is inversely related to the number of actions needed to transform one sequence into the other. In the present study, ScanMatch was run using a 12 × 8 bin ROI grid (the default). Furthermore, for temporal binning we applied a value of 50 ms, which has been demonstrated to give the most accurate sampling across a wide variety of fixation durations 49 . Finally, a substitution-matrix threshold of 4 was used, which was 2 times the standard deviation of the 'gridded' saccade size (i.e., threshold = 2 × standard deviation(mean saccadic amplitude in pixels)/(Xres/Xbin), with Xres = X resolution of the stimuli and Xbin = number of bins horizontally). This means that the alignment algorithm aimed to align only regions that were a maximum of 4 bins apart.
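The alignment step can be illustrated with a bare-bones Needleman-Wunsch scorer over letter-coded fixation sequences. This is a toy sketch only: ScanMatch uses a distance-based substitution matrix (with the threshold described above) and a tuned gap penalty rather than the simple match/mismatch scores assumed here.

```python
import numpy as np

def nw_similarity(a, b, match=1.0, mismatch=-1.0, gap=0.0):
    """Globally align two letter-coded fixation sequences and return a
    score normalized by the longer sequence's maximum attainable score,
    so 0 = no correspondence and 1 = identical (ScanMatch-style)."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))          # dynamic-programming score table
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            D[i, j] = max(D[i - 1, j - 1] + sub,   # align the two letters
                          D[i - 1, j] + gap,       # gap in sequence b
                          D[i, j - 1] + gap)       # gap in sequence a
    return max(D[n, m], 0.0) / (match * max(n, m))
```

For example, identical sequences score 1, fully disjoint sequences score 0, and partially overlapping sequences fall in between, mirroring the normalized similarity scores analyzed above.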
Results were analyzed using Matlab R2016a (The MathWorks, Natick, MA) and R software (version 3.6.1) 51 . Mean recognition, SCR and gaze-similarity scores were subjected to repeated-measures ANOVAs; for ANOVAs involving more than one degree of freedom in the numerator, the Greenhouse-Geisser procedure was applied when the assumption of sphericity was violated. For all post hoc comparisons, the Bonferroni-corrected p value is reported. Both Cohen's ƒ and Cohen's d values were computed as effect size estimates 71 . In addition to frequentist statistical inference, we relied on Bayesian analyses and computed Jeffreys-Zellener-Siow (JZS) Bayes factors (BFs). Please note that the default prior settings (used by R) were left unchanged. For all t-tests (two-sided), either the BF 10 (quantifying the evidence favoring the alternative hypothesis) or the BF 01 (quantifying the evidence favoring the null hypothesis) is reported. For all ANOVA main and interaction effects, either the BF Inclusion or BF Exclusion is reported, reflecting a comparison of all models including (or excluding) a particular effect to those without (or with) the effect. In other words, the BF Inclusion can be interpreted as the evidence in the data for including an effect or interaction, similar to BF 10 in the case of simple comparisons 72 . Therefore, the conventions used to interpret substantial/moderate support for either the null or alternative hypothesis (BF 10 ≥ 3) 73 may also apply to BF Inclusion .
Preregistration and data availability. All analyses (as well as the experimental design and hypotheses) of Experiment 2 were preregistered on: https://aspredicted.org/k8re8.pdf. The original data and analysis files of both Experiments 1 and 2 can be accessed on: https://osf.io/gh7ba/.

Data availability
Data of all participants in the two experiments are publicly available on the OSF (https://osf.io/gh7ba/).

Code availability
Custom code that supports the findings of this study is publicly available on the OSF (https://osf.io/gh7ba/).