Dissociating different temporal stages of emotional word processing by feature-based attention

Negative emotional content is prioritized across different stages of information processing, as reflected by different components of the event-related potential (ERP). In this preregistered study (N = 40), we investigated how varying the attentional focus allows us to dissociate the involvement of specific ERP components in the processing of negative and neutral words. Participants had to discriminate the orientation of lines overlaid onto the words, the word type (adjective/noun), or the emotional content (negative/neutral). Thus, attention was focused either on a word-unrelated feature (distraction task), on non-emotional word aspects, or on the emotional relevance of the words. Regardless of the task, there were no significant differences between negative and neutral words for the P1, N1, or P2 components. In contrast, interactions between emotion and task were observed for the early posterior negativity (EPN) and late positive potential (LPP). EPN differences were absent during the distraction task but were present in the other two tasks. LPP emotion differences were found only when attention was directed to the emotional content of words. Our study adds to the evidence that early ERP components do not reliably separate negative and neutral words. However, the results show that mid-latency and late stages of emotion processing are separable by different attention tasks. The EPN represents a stage of attentional enhancement of negative words given sufficient attentional resources. Differential activations during the LPP stage are associated with more elaborative processing of the emotional meaning of words.

Concerning early emotion effects and their modulation by attention tasks, findings are mixed for the P1. Some studies found larger amplitudes for negative compared to neutral words 21, some only in male samples 22 or the left hemisphere 23, while other studies reported decreased P1 amplitudes for negative compared to neutral words, depending on target relevance or word frequency 3,24, and still other studies found no differences 4,12,25-31. No differential effects concerning negative valence were observed between an emotional vs. color judgment task 4, when attending to the lexical or emotional information 26, when performing a lexical decision task rather than reading 29, or when instructed to inhibit a word 32. For the N1, several studies reported larger N1 amplitudes for negative than neutral words 33,34, sometimes restricted to the left hemisphere 17,23. Nevertheless, other studies found emotion effects depending on word frequency 3 or target status 24. Some studies reported effects restricted to the right hemisphere, or only in response to positive words 26, or the absence of effects 12,25,30,35. The N1 to negative as compared to neutral words was not modulated by emotional vs. color judgment tasks 4,36,37, or by attending to lexical vs. emotional information 26,29. For the following P2, increased amplitudes for negative words have been observed 31,37,38, sometimes being left- or right-lateralized 15,36, or descriptively larger for concrete negative words 39. Other studies reported effects only for positive (concrete) words or no effects for negative words 1,40. Thus, early emotion effects differ considerably between studies and may reflect differences in the languages used, specific stimulus sets, and attention tasks. Systematic studies with large sample sizes, well-controlled stimuli, and variation of task conditions are strongly needed.
More reliably, effects for emotional words are found during mid-latency and late processing stages. The EPN arises at about 200 ms and is related to early lexical access 17, perceptual tagging 2, and attention processes 41. The LPP occurs from about 400 ms after word onset and reflects later stages of attention, stimulus evaluation, and episodic memory encoding 25,33,42. Nevertheless, several studies showed for negative words either no EPN 43-45 or no LPP effects 26,27,35. Effects seem to depend on attentional conditions. For example, EPN effects were present for tasks requiring emotional judgments but not when attending to the color of stimuli 4. Hinojosa et al. 44 showed a similar EPN pattern, although it was not significant for negative words. Following these studies, attention to the semantic meaning seems necessary to elicit mid-latency emotion effects in words. Hinojosa and colleagues 44 also showed LPP effects for negative words when participants had to identify a word among pseudowords (i.e., had to attend to the meaning) but not when words had to be identified among non-recognizable stimuli. In this regard, late emotion effects for negative words were absent in several studies during structural (font consistency) 46, color 37, lexical 3, or semantic 26 tasks. Further, for the LPP, a study reported increasing effects when negative words were target-relevant as compared to neutral words 47, but see 48. This pattern of findings would be in line with a recent meta-analysis across different visual stimuli (with smaller effect sizes for word stimuli) that reported no reliable late amplitude effects during non-emotional tasks (e.g., watching, reading, or classification according to non-emotional attributes), but reliable effects during explicit emotion decision tasks 49. However, as pointed out above, there are several conflicting findings, mostly from studies that report late emotion effects during color, lexical, or semantic tasks 1,4,40,46,50. This might be explained by attentional spillover to task-irrelevant word features.
To reduce the variability in experimental conditions and to better differentiate processing stages during emotion processing, we recently developed a design that systematically varies feature-based attention to emotional visual stimuli 51. Here, participants pay attention to a stimulus-unrelated feature (e.g., overlaid thin lines), to the stimulus itself (e.g., specific emotion-irrelevant stimulus features), or to the emotional meaning (e.g., negative or neutral content). Studies using faces or complex scenes showed dissociable modulations of the EPN and LPP across attention tasks 51-53. Emotional EPN effects were absent in the perceptual task but present in the other tasks, while LPP differences were only present when attention was directed to the emotional information 51-53. For pictures and faces, increased N1/N170 responses were found regardless of task 51-53. Thus, this kind of task allows the separation of more automatic (early) processing stages from subsequent mid-latency and late stages, which require sufficient attentional resources or task relevance during the processing of emotional stimuli. Importantly, the dissociation between ERP components requires brief presentation times to avoid attentional spillover to task-irrelevant stimulus features 54. It remains an open question whether a similar dissociation of processing stages can also be revealed while processing negative vs. neutral words.
Different stages of emotion processing can be dissociated by tasks that systematically increase attention to emotionally relevant stimulus features. To test how modulations depend on the attended feature in word stimuli, we used three different tasks during which negative and neutral words (adjectives and nouns) were presented, and we examined differential responses across the whole processing stream (P1, N1, P2, EPN, and LPP). Participants had to decide (1) whether the overlaid line orientation was horizontal or vertical, (2) whether the word was a noun or an adjective, or (3) whether the word valence was negative or neutral. We expected that the later the ERP component, the more strongly emotion effects would increase with attention directed to emotionally relevant information (for the detailed registration, see https://osf.io/nrmsb). We explored emotion effects and interactions with the task for earlier ERPs (P1, N1, and P2). Concerning registered effects, we expected increased EPN amplitudes for negative words in the grammatical (adjective/noun) and emotion decision tasks compared to the perceptual task. Finally, LPP emotion effects should be increased in the emotion decision task compared to the grammatical and perceptual decision tasks.

Event-related potentials
For mean amplitudes of all examined ERPs, see Table 2 below. For hemisphere effects for the P1, N1, and EPN, see the respective control analyses in the "Analyses of hemispheric differences in early emotion effects" section.

Discussion
This study investigated how different tasks affect different ERP components to negative vs. neutral words, testing whether the dissociation of early, mid-latency, and late emotional differentiation depends on the attended feature of the stimulus. The tasks varied the attentional focus on word-irrelevant features, emotion-irrelevant aspects, or the emotional meaning of words. Our findings reveal a systematic pattern of emotional sensitivity varying with the temporal hierarchy of different ERPs, comparable to that observed for other visual, emotional stimuli. We observed interaction effects between emotion and task for the EPN and LPP, showing an increase of EPN emotion differences during the grammatical and emotion tasks, while LPP emotion differences were restricted to the emotion task. For the P1, N1, and P2, we observed no main effects of emotion and no interactions between emotion and task, while unregistered control analyses showed reversed P1 emotion effects over right versus left sensors. These right-lateralized P1 effects should be interpreted cautiously, since other studies observed the opposite pattern 23, and most studies do not observe P1 increases for negative words 12,25,27,28,30,31, or effects of attention tasks 4,26,29. Taking these considerations into account, we did not find reliable emotion effects on early ERPs. Besides the P1, this concerns the N1. For the N1, effects in previous studies are mixed, with emotion effects in some studies 3,24,33, but not in others 12,25,30,35. The conflicting emotion effects across studies may be due to a combination of specific stimuli, task parameters, variable effects in smaller samples, or individual differences, such as differences in morphosyntactic processing 34. Our study focused on the effects of negative vs. neutral stimuli. Thus, it remains an open question whether there might be findings for positive stimuli, which also applies to the findings concerning the P2 component 1,15,29,55. While we did not observe early emotion effects with a typical word set used in the field, we do not rule out that specific word-by-emotion stimulus conditions (e.g., word frequency, word length, concreteness, stimulus presentation duration) might exist, which should be addressed in future high-powered studies 12. Furthermore, early effects might be evident with other analytical EEG/ERP data approaches.
We also found no effects for the P2. In the literature, both increased P2 amplitudes for negative words have been observed 31,37,38, as well as no differences between negative and neutral words 1,40. Emotion effects are often shown to be larger in concrete words 39 and have been reported selectively for positive words in some studies 1. Thus, similar to the P1 and N1 components, we might have missed specific valence effects in this study.
Concerning the generally increased N1 and P2 amplitudes in the grammatical and emotion tasks, this could reflect more elaborated processing of the word meaning through top-down instructions 56. This would align with the longer reaction times observed in the latter two tasks. Please note that an interpretation of these general amplitude changes was not preregistered.
In contrast to the earlier components of the ERP, we formulated preregistered hypotheses regarding the EPN and the LPP. Findings supported the outlined expectations. Our study found no EPN effects when participants attended to the lines. This might be surprising, given that EPN effects are frequently reported across various tasks 2,46,48. We reason that the absence of emotional differentiation is due to combining a brief presentation duration with attention directed to the line orientation. Longer durations might have enabled participants to also decode the emotional information. Brief stimulus durations are necessary to ensure the relevant task focus and avoid additional cognitive processes 51,52,54. In contrast to the perceptual task, when participants attended to the word meaning or were explicitly asked to evaluate the emotional meaning of the words, larger EPN amplitudes were observed for negative compared to neutral stimuli. Our findings suggest that such differential processing is attenuated or abolished when participants attend to perceptual information, in line with some previous findings 4,44, but see 45. Additional analyses with hemisphere as a factor showed a left-lateralization of EPN emotion modulations, in line with the literature 2,17. The EPN has been suggested to signal early attentional selection 41, which is typically increased by emotionally arousing stimuli 57,58 but also by other salient stimuli 59. While the EPN was originally thought to be generated in the primary and secondary visual cortex, picture-wise correlation approaches showed stronger correlations with subcortical structures, including the amygdala, anterior cingulate cortex, or the striatum 60,61. However, studies using separate or combined EEG/fMRI recordings that would enable the localization of EPN generators for word stimuli are missing.
In contrast to earlier ERPs, a differential LPP effect for negative words was only seen during the emotion task. In line with this finding, several studies report late effects during tasks that require the processing of the emotion 37,44,46-48. Further, late emotion effects were larger when participants attended to the emotion as compared to perceptual 37 or semantic (touchable/not touchable) features 45. While late effects are generally smaller for verbal stimuli, they are reliable during explicit emotion decision tasks but not during non-emotional tasks, such as watching, reading, or classification according to non-emotional attributes 49. Thus, these findings support the idea that evaluative and controlled attention processes occur during the LPP stage 20,62. The LPP likely results from multiple and distributed sources 63. It has been suggested that these include visual cortices, temporal cortices, the amygdala, the insula, and the orbitofrontal cortex 63. Stimulus-specific effects can be expected, as fMRI studies on emotional words show that different frontal regions (inferior frontal gyrus, dorsomedial prefrontal, and cingulate cortex) are involved 39,64. Studies with suited designs are needed to disentangle the differential involvement of brain areas in the generation of the EPN and the LPP.
Our findings further support the general notion that at least the dynamics and the functional significance of EPN and LPP effects are highly similar across different visual stimulus categories, such as scenes, faces, or words; for reviews, see 6,42,65-67. In most studies, EPN and LPP effects are highly correlated 60, while our attention task manipulation enables a clear dissociation. The findings are remarkably similar to our recent studies with other stimulus categories 51-53. For example, we found that increased N1/N170 modulations for negative faces and scenes were task-independent, while an EPN effect was not observed during the perceptual task but was found with similar amplitude in the other two tasks 51,52. LPP differences were only present when attention was directed to the emotional expression of the face 51,52. Thus, a similar task dependency can be shown across different categories of emotional stimuli, at least for the EPN and the LPP.

Limitations and future directions
Concerning our study's findings, some constraints have to be mentioned. We only focused on comparing ERPs to negative and neutral words. Future studies might investigate whether findings depend on valence and/or arousal. While positive words would be interesting to examine, several reasons led to the inclusion of only negative and neutral words. First, we used a design comparable to other recent studies focusing on negative versus neutral stimuli 51,52. Second, we aimed to have a similar two-alternative forced-choice task in all three tasks; an additional differentiation of positive words would likely increase the difficulty of the emotion task. However, future studies might also use other word stimulus sets or systematically vary stimulus features to better understand possible early effects and to include positive words. Furthermore, we would like to note that the dissociation between ERP components requires brief presentation times to avoid attentional spillover to task-irrelevant stimulus features. Here, the brief presentation durations ensured that the participants' attention was only directed at the specific task, showing successful modulations of emotional effects across tasks, similar to previous studies. Nevertheless, future studies might use tasks with varying presentation durations to test whether effects differ with longer presentation times 54. Finally, we used the collapsed conditions or collapsed differences to identify ERP components of interest. We decided to predefine our ERPs of interest (see methods section) based on studies with similar tasks but different visual stimuli 51,52. However, other methods can also be used to identify ERPs without biases and may result in different time windows or sensors.

Conclusion
We found no evidence of early (P1, N1, and P2) ERP differences between negative and neutral words across three different attention tasks. However, we observed task-sensitive mid-latency (EPN) and late (LPP) differential processing of emotional words. EPN effects required attention to the word's meaning, while the LPP effect was only seen during the emotion task. These findings reveal a systematic pattern of emotional sensitivity varying with the temporal hierarchy of different components of the ERP, showing a graded increase in processing steps depending on the participants' task set.

Participants
In total, forty-eight participants enrolled in this study and provided complete data. Eight participants were excluded due to EEG rejection criteria, defined as more than 10 interpolated electrodes or more than 40% of usable trials rejected. All participants provided written informed consent to participate in the study. Participants received 10 Euros per hour for participation. The final sample of forty participants (30 female) exhibited a mean age of 23.58 years (SD = 3.95, median = 23, min = 18, max = 35); all had normal or corrected-to-normal vision and were right-handed, native German speakers with no reported history of psychiatric disorders. We followed the updated data-sampling plan and collected 40 usable datasets (see https://osf.io/nrmsb), based on power calculations from a recent previous study using the same attention tasks for faces 51. Our sample size exhibited a power of > 99% to detect the previously observed large effect sizes (ηp² = 0.149 and 0.155) for the EPN and LPP interactions. Participants first performed an unrelated auditory attention task; participation in that task was optional. Data were uploaded to the attached OSF project (https://osf.io/eyndu/). The University of Münster medical ethics committee approved the study protocol. All experiments were performed in accordance with relevant guidelines and regulations of the University of Münster.

Stimuli
The words were taken from previously collected rating datasets, rated by students regarding valence and arousal, and matched for linguistic variables. For the experiment, 60 negative and 60 neutral words (30 nouns, 30 adjectives) were used, differing in valence and arousal (see Table 3). Word length and word frequency strongly affect word processing; shorter and more frequent words are processed more quickly 68-71. Further, many (high-frequency) orthographic neighbors have been argued to elicit lateral inhibitory mechanisms at a lexical level 72. Lines were overlaid on the words using Presentation (www.neurobehavioralsystems.org), showing three horizontal or vertical lines (horizontal lines 2 lengths; vertical lines 0.7 lengths; thickness 0.01; centered around x = 0, y = 0; RGB color of words 0,0,0; RGB color of lines 47,79,79).

Procedure
Participants were instructed to avoid eye movements and blinks during the stimulus presentation. While participants were prepared for the EEG, they responded to a demographic questionnaire. They started with either the perceptual decision, grammatical decision, or emotion decision task. Each task contained a block of 120 trials, comprising all 60 negative and 60 neutral words. The trial structure and presentation were identical across tasks (see Fig. 5). In each trial, a word was presented for 100 ms. Afterwards, a fixation cross was presented for a variable duration of 2300-2500 ms.
Responses were recorded within the first 1500 ms. Task order and response buttons (x and m) were counterbalanced. In each task, participants always had to decide in a two-alternative forced-choice task: (1) whether the line orientation was horizontal or vertical, (2) whether the word was a noun or adjective, or (3) whether the word was negative or neutral. All 60 negative and 60 neutral words were presented in each task, summing up to a total of 360 trials. Recorded eye movements were corrected using the automatic eye-artefact correction method implemented in BESA 73. Remaining artifacts were rejected based on an absolute amplitude threshold (120 µV), the signal gradient (75 µV/∂T), and a low-signal criterion (i.e., the SD of the gradient had to exceed 0.01 µV/∂T). Noisy EEG sensors were interpolated using a spline interpolation procedure. A 15 ms delay of the LCD screen used for stimulus presentation, measured by a photodiode, was corrected during epoching. We included only participants with at least 15 correct trials in each condition (see https://osf.io/nrmsb). On average, 47 trials were kept per single condition (Ms = 45 to 49, SDs = 7 to 8, Min = 29 to 32), with no differences between emotion or task conditions and no interaction (Fs < 2.33, ps > 0.122). Filtered data were segmented from 100 ms before until 1000 ms after stimulus onset. Baseline correction used the 100 ms before stimulus onset.
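The baseline-correction and threshold-based rejection steps described above can be sketched as follows. This is a minimal NumPy illustration, not the BESA implementation; the function and parameter names are ours, and we assume epochs in microvolts with the gradient computed sample-to-sample:

```python
import numpy as np

def preprocess_epoch(epoch, times, baseline=(-0.1, 0.0),
                     abs_thresh=120.0, grad_thresh=75.0, low_grad=0.01):
    """Baseline-correct one epoch and flag it for rejection.

    epoch: (n_channels, n_samples) array in microvolts.
    times: (n_samples,) array in seconds relative to stimulus onset.
    Threshold values follow the criteria described in the text;
    names and the exact gradient definition are illustrative.
    """
    # Baseline correction: subtract the mean of the 100 ms pre-stimulus window
    mask = (times >= baseline[0]) & (times < baseline[1])
    corrected = epoch - epoch[:, mask].mean(axis=1, keepdims=True)

    # Sample-to-sample gradient per channel
    grad = np.abs(np.diff(corrected, axis=1))

    reject = (np.abs(corrected).max() > abs_thresh      # absolute amplitude
              or grad.max() > grad_thresh               # steep gradient
              or grad.std(axis=1).min() < low_grad)     # flat/low signal
    return corrected, bool(reject)
```

In a full pipeline, this check would run per epoch after filtering and eye-artifact correction, before averaging the surviving trials per condition.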

Data analyses
All data were statistically analyzed with two (Emotion: negative, neutral) by three (Task: perceptual, grammatical, emotion decision) repeated-measures ANOVAs. We investigated the main effects of task and emotion and their interaction. Partial eta squared (ηp²) was used to describe effect sizes, where ηp² = 0.02 describes a small, ηp² = 0.13 a medium, and ηp² = 0.26 a large effect 74. Behavioral data (reaction times and accuracy) were analyzed with JASP (https://jasp-stats.org). We predefined expected time windows and sensor clusters, with a validation based on inspection of the waveforms. Using these time windows and scalp regions as priors, the P1, N1, and P2 were identified using a collapsed localizer across all conditions, for which we used the predefined sensors and time windows. Similarly, negative-neutral differences were used for the EPN and LPP (see Fig. 6 for the ERP identification results). Based on the scalp topography, we visually examined whether the sensors adequately captured the collapsed positivity/negativity; the scalp differences for the LPP led to a modification of the included sensors (see below). In a second step, we averaged the ERP waveform to visually judge whether the time windows lay symmetrically around the positive or negative peak or, in the cases of the EPN and LPP, captured the emotion differences. This led to slight deviations from the registration for the used time windows and sensors. We identified the P1 from 90 to 110 ms (registered 80-100 ms), the N1 from 140 to 180 ms (registered 140-190 ms), the P2 from 150 to 200 ms (registered), the EPN from 200 to 350 ms (registered), and the LPP from 380 to 800 ms (registered 400-650 ms). We could not clearly identify a centro-parietal P3. We averaged ERPs from all examined sensors in the above-defined time windows. We used occipital sensors for the P1 and occipito-temporal sensors for the N1 and the EPN (P1: P9, P7, PO7, P10, P8, PO8; N1 and EPN: TP7, P9, P7, PO7, TP8, P10, P8, PO8). The central cluster (P2, LPP) was examined over an extended sensor cluster (registered C1, Cz, C2, CP1, CPz, CP2, additionally including F1, Fz, F2, FC1, FCz, FC2, P1, Pz, P2). In addition, unregistered analyses tested for hemispheric differences in early emotion effects and possible lateralized interactions between emotion effects and the attention task (P1, N1, EPN).
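The reported effect sizes relate to the ANOVA F statistics via the standard identity ηp² = F · df_effect / (F · df_effect + df_error). A small sketch (our own illustrative helpers, using the small/medium/large benchmarks cited in the text):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an ANOVA F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

def effect_size_label(eta_p2):
    """Benchmarks used in the text: 0.02 small, 0.13 medium, 0.26 large."""
    if eta_p2 >= 0.26:
        return "large"
    if eta_p2 >= 0.13:
        return "medium"
    if eta_p2 >= 0.02:
        return "small"
    return "negligible"
```

For example, for the 2 × 3 interaction with N = 40 (df_effect = 2, df_error = 78), an F of about 6.83 corresponds to ηp² ≈ 0.149, the EPN interaction effect size referenced in the power calculation.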

Figure 1 .
Figure 1. Occipital cluster showing P1 effects. (a) Scalp topographies depict the differences between negative and neutral words. (b) ERP waveforms show the time course over highlighted sensors. For bar plots, error bars show 95% confidence intervals for amplitudes averaged across selected sensors. Lines connect individual data points. (c) Respective difference plots displayed below contain 95% bootstrap confidence intervals of intra-individual differences.
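The bootstrap confidence intervals for intra-individual (negative minus neutral) differences shown in the difference plots can be computed as in this sketch; the function name, seed handling, and resampling count are our choices, not taken from the paper's analysis code:

```python
import numpy as np

def bootstrap_diff_ci(neg, neu, n_boot=10000, ci=95, seed=0):
    """Percentile bootstrap CI for the mean within-participant difference.

    neg, neu: (n_participants,) mean amplitudes per condition, paired by
    participant. Returns the observed mean difference and the CI bounds.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(neg) - np.asarray(neu)
    # Resample participants with replacement and recompute the mean difference
    idx = rng.integers(0, diffs.size, size=(n_boot, diffs.size))
    boot_means = diffs[idx].mean(axis=1)
    lo, hi = np.percentile(boot_means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return diffs.mean(), (lo, hi)
```

An interval excluding zero indicates a reliable emotion difference in that condition.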

Figure 2 .
Figure 2. Occipito-temporal cluster showing N1 and EPN effects. (a) Scalp topographies depict the differences between negative and neutral words. (b) ERP waveforms show the time course over highlighted sensors. For bar plots, error bars show 95% confidence intervals for amplitudes averaged across selected sensors. Lines connect individual data points. (c) Respective difference plots displayed below contain 95% bootstrap confidence intervals of intra-individual differences.

Figure 3 .
Figure 3. Central cluster showing P2 and LPP effects. (a) Scalp topographies depict the differences between negative and neutral words. (b) ERP waveforms show the time course over highlighted sensors. For bar plots, error bars show 95% confidence intervals for amplitudes averaged across selected sensors. Lines connect individual data points. (c) Respective difference plots displayed below contain 95% bootstrap confidence intervals of intra-individual differences.

Figure 4 .
Figure 4. Occipito-temporal clusters showing P1, N1, and EPN emotion effects across tasks for the left and right hemispheres. (a) and (d) Scalp topographies depict the differences between negative and neutral words. (b) and (e) ERP waveforms show the time course over highlighted sensors. For bar plots, error bars show 95% confidence intervals for amplitudes averaged across selected sensors. Lines connect individual data points. (c) and (f) Respective difference plots displayed below contain 95% bootstrap confidence intervals of intra-individual differences. https://doi.org/10.1038/s41598-023-43794-4

Figure 5 .
Figure 5. Experiment and trial overview. (a) Overview of the three attention tasks. (b) Trial structure for each of the three attention tasks. Please note that screen proportions were adjusted to increase visibility.

Figure 6 .
Figure 6. ERP identification validation. (a) Collapsed conditions to identify the P1, N1, and P2 components. (b) Respective waveforms averaged across highlighted selected sensors. (c) Collapsed task conditions to identify the EPN and LPP components. (d) Respective waveforms averaged across highlighted selected sensors.
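The collapsed-localizer step illustrated here amounts to averaging the sensor-averaged waveforms across all conditions and centering an analysis window on the resulting peak within the preregistered prior window. A minimal sketch of that logic (our own illustrative implementation, not the authors' code):

```python
import numpy as np

def collapsed_localizer(erps, times, prior_window, polarity=1, half_width=0.02):
    """Derive a component window from the condition-collapsed waveform.

    erps: dict mapping condition name -> (n_samples,) sensor-averaged waveform.
    times: (n_samples,) array in seconds.
    prior_window: (start, end) preregistered prior window in seconds.
    polarity: +1 for positive components (P1/P2), -1 for negative (N1/EPN).
    Returns a window of +/- half_width seconds around the collapsed peak.
    """
    collapsed = np.mean(list(erps.values()), axis=0)   # collapse across conditions
    mask = (times >= prior_window[0]) & (times <= prior_window[1])
    peak_t = times[mask][np.argmax(collapsed[mask] * polarity)]
    return peak_t - half_width, peak_t + half_width
```

Because the collapse ignores condition labels, the resulting window selection does not bias subsequent condition comparisons.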

Table 1 .
Behavioral results across the three attention tasks. Hits are displayed as proportion correct. Reaction times are rounded to milliseconds.

Table 2 .
Mean amplitudes for all ERPs across the three attention tasks. Mean amplitudes are displayed in microvolts, averaged across the respective time windows and sensors of interest (see methods section below).

Table 3 .
Comparisons of negative and neutral words by one-way ANOVAs. All categories contained 30 adjectives/nouns each. ***p ≤ 0.001. Standard deviations appear in parentheses below means; means in the same row sharing the same superscript letter do not differ significantly from one another at p ≤ 0.05 based on LSD post-hoc comparisons.