
Neural mechanisms of affective matching across faces and scenes

Abstract

The emotional matching paradigm, introduced by Hariri and colleagues in 2000, is a widely used neuroimaging experiment that reliably activates the amygdala. In the classic version of the experiment, faces with negative emotional expressions and scenes depicting distressing events are compared with geometric shapes rather than with neutral stimuli of the same category (i.e. faces or scenes). This makes it difficult to attribute amygdala activation clearly to the emotional valence rather than to the social content. To improve this paradigm, we conducted a functional magnetic resonance imaging study in which emotionally neutral and, additionally, positive stimuli within each stimulus category (i.e. faces, social and non-social scenes) were included. These categories enabled us to differentiate the exact nature of observed effects in the amygdala. First, the main findings of the original paradigm were replicated. Second, we observed amygdala activation when comparing negative to neutral stimuli of the same category. However, for negative faces, the amygdala response habituated rapidly. Third, positive stimuli were associated with widespread activation including the insula and the caudate. This validated adaptation study enables more precise statements on the neural activation underlying emotional processing. These advances may benefit future studies on identifying selective impairments in emotional and social stimulus processing.

Introduction

Amygdala functioning is of high interest for clinical psychology, psychiatry and neuroscience, as heightened amygdala activation has been reported in various patient groups1,2,3,4,5. The emotional matching paradigm by Hariri et al.6 and its extended version7 are widely used as emotional reactivity measures, which reliably activate the amygdala8,9,10. Despite its current use in psychiatry, this paradigm has a potential drawback since faces with negative emotional expressions and negative social scenes are compared with simple geometric shapes. Thus, it compares pictures that differ in more than one domain: social content and emotional valence. It is therefore difficult to draw conclusions about which of the two different domains causes the increase in amygdala activation. This differentiation may arguably not be relevant for all purposes, but to study specific populations, such as patients with deficits in one or the other domain (e.g. those with autism spectrum disorder (ASD))11,12, it is crucial to distinguish the two.

A second issue is that negative emotions have been studied more widely than positive emotions, as exemplified by the original emotional matching paradigm, putatively because of their high functional value for action. For example, previous research suggests that threatening scenes, whether or not they contained faces or other human features, elicit activation in the extrastriate body area; this activity may reflect the capacity of the brain to associate certain situations with threat in order to prepare for fast reactions13. Positive emotions are, however, the other side of the coin, as they allow psychological growth and well-being14. Positive stimuli are most commonly used in the context of reward experiments, for example in performance-based feedback tasks15,16. A brain region that has been related to the processing of various reward types, ranging from primary reinforcers to more abstract social rewards17,18, is the ventral striatum19. Meta-analytically, the ventral striatum also elicits the strongest activation across different reward types such as monetary, food and erotic rewards20. The amygdala has also been found to respond to positive stimuli. In direct comparisons of positive and negative faces, not all studies found amygdala activation differences21, but a meta-analysis of positive and negative affect revealed that the amygdala is more strongly activated by negative stimuli22. Other brain regions involved in positive emotion processing include the ventrolateral prefrontal cortex23, the medial prefrontal cortex, the insula, the superior temporal sulcus and the inferior frontal gyrus24, as well as the cuneus, the inferior occipital lobule, the inferior parietal gyrus, the caudate and the temporal gyrus20.

Although probing mean brain activation is the most straightforward way to analyze brain data, another important characteristic of the amygdala is its quick habituation25. Habituation effects in the amygdala have been found to have stronger test-retest reliability than mean activation. Plichta and colleagues26 found strong retest reliability in the bilateral amygdala for negative faces across two sessions on the group level but not on the within-subject level, and they demonstrated in a follow-up article that amygdala habituation provides a better within-subject measure than mean activation amplitudes, because it is associated with the highest retest reliability27. For amygdala activation to be treated as a potential biomarker for psychiatric disorders, its activation measures must be stable over time in the sense of reflecting a characteristic of the individual. Furthermore, differences in habituation may be more informative than mean activation itself28, because habituation is an important adaptive process that reduces responding to irrelevant stimuli29. In healthy populations, amygdala habituation to faces occurs very quickly, whereas in patients with posttraumatic stress disorder, who display an increased amygdala response to fearful faces in comparison to healthy controls (HCs), amygdala habituation to fearful faces is reduced30. In ASD patients, amygdala habituation to emotional faces is absent31. Furthermore, in the healthy population, increased (physiological) habituation is more strongly associated with viewing negative than neutral stimuli32.

To gain a broader understanding of what causes amygdala activation (i.e., emotional valence, salience, or social versus non-social stimulus content)7, we expanded the original task version and included more categories and carefully chosen control stimuli. Specifically, we presented three different stimulus categories, distinguishing faces from social and non-social scenes, and we presented negative, positive and neutral stimuli within each of these three categories. All neutral stimuli act as appropriate controls for the respective condition. Comparable to the low-level control condition in the original matching task, the neutral non-social scenes additionally serve as control stimuli for all emotional conditions in the analyses that aim to replicate the original findings. This full set of stimuli allows us to evaluate whether typically observed amygdala activations are due to faces, the social versus non-social content of the stimuli, or their respective emotional valence. Participants' task was to decide which of the two pictures presented at the bottom of the screen was identical to a third picture simultaneously presented at the top.

We analyzed behavioral as well as functional magnetic resonance imaging data, with a particular focus on the amygdala. The region of interest (ROI) approach is based on the original Hariri paradigm, which focused exclusively on the amygdala, and on the strong meta-analytic evidence for amygdala involvement in negative emotion processing. In a first step, we aimed to replicate the findings of the original emotional matching task by computing contrasts comparable to the ones reported by Hariri et al.7. These contrasts compare negative faces and negative social scenes to neutral non-social pictures, and we expected to find amygdala activation in both. Second, we expected elevated amygdala activation for negative faces and negative scenes compared to their respective within-category control conditions: neutral faces and neutral social and non-social scenes. This would indicate that negative emotion, not social content, is the driving force of the observed amygdala activity. Regarding the more recent discussions of habituation effects, we hypothesized that amygdala habituation would be strongest for negative face pictures, as the amygdala shows stronger sensitivity to negative affect22 and habituates specifically to faces27. Third, concerning positive stimuli, we predicted stronger neural activation in the ventral striatum compared to neutral or negative stimuli. Amygdala activity may also be increased for positive stimuli, but to a smaller degree than for negative stimuli.

Results

Behavioral data

All hit rates, describing the rate of correctly matched pictures, were above chance level (Ps < 0.001); it can therefore be assumed that the task was carried out correctly. The mean reaction times (RTs) and hit rates for each category are presented in Table 1. The Shapiro-Wilk test revealed that all RTs were normally distributed (all Ps > 0.05).

Table 1 Reaction times and hit accuracy per condition.

Even though hit rates were generally very high, a Friedman test revealed a statistically significant difference in accuracy depending on sociality and emotion, χ2(2) = 106.764, P < 0.001, as well as on specific category, χ2(2) = 163.644, P < 0.001. Post hoc analyses with Wilcoxon signed-rank tests resulted in significant differences between the social and non-social conditions (Z = −4.457, P < 0.001), between the non-social and face conditions (Z = −3.431, P = 0.001) and between the social scenes and face conditions (Z = −3.535, P < 0.001). Further, significant differences between negative and positive conditions (Z = −3.024, P = 0.002), between neutral and negative conditions (Z = −4.544, P < 0.001) and between neutral and positive conditions (Z = −4.527, P < 0.001) were observed.
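The non-parametric analysis pipeline above (omnibus Friedman test, then pairwise Wilcoxon signed-rank post hoc tests) can be sketched as follows. The hit rates below are synthetic illustration data, not the study's values; only the test structure mirrors the reported analysis.

```python
# Sketch: Friedman omnibus test across three within-subject conditions,
# followed by Wilcoxon signed-rank post hoc comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 27  # sample size of the study

# Synthetic per-subject hit rates for the three stimulus categories.
faces = rng.uniform(0.85, 0.95, n_subjects)
social = rng.uniform(0.90, 0.99, n_subjects)
non_social = rng.uniform(0.93, 1.00, n_subjects)

# Omnibus test: non-parametric repeated-measures analysis of accuracy.
chi2, p_friedman = stats.friedmanchisquare(faces, social, non_social)
print(f"Friedman chi2 = {chi2:.3f}, P = {p_friedman:.4f}")

# Post hoc pairwise signed-rank tests between conditions.
pairs = {
    "social vs non-social": (social, non_social),
    "faces vs social": (faces, social),
    "faces vs non-social": (faces, non_social),
}
for name, (a, b) in pairs.items():
    w, p = stats.wilcoxon(a, b)
    print(f"{name}: W = {w:.1f}, P = {p:.4f}")
```

In practice a Bonferroni or similar correction would typically be applied to the post hoc P values; the paper does not specify its correction procedure, so none is shown here.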

For reaction times, an ANOVA was calculated, which revealed two main effects. How quickly the stimulus pictures were matched correctly depended on social content (F(2,52) = 82.92, P < 0.001, ηp2 = 0.761) as well as on emotional valence (F(2,52) = 33.45, P < 0.001, ηp2 = 0.563). Non-social scenes were matched significantly faster than social scenes, and social scenes were matched faster than faces (Ps < 0.05). Neutral pictures were matched faster than negative and positive pictures. A significant social content × emotional valence interaction was found (F(4,104) = 15.12, P < 0.001, ηp2 = 0.368). For social scenes, neutral stimuli were matched faster than negative and positive stimuli, whereas for non-social scenes, neutral pictures were matched faster than negative pictures, which in turn were matched faster than positive non-social scenes.
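The reported effect sizes can be recovered directly from the F statistics and degrees of freedom via the standard identity ηp² = (F · df_effect) / (F · df_effect + df_error), as a quick consistency check:

```python
# Partial eta squared from an F value and its degrees of freedom.
def partial_eta_squared(f_value, df_effect, df_error):
    return (f_value * df_effect) / (f_value * df_effect + df_error)

print(round(partial_eta_squared(82.92, 2, 52), 3))   # social content -> 0.761
print(round(partial_eta_squared(33.45, 2, 52), 3))   # emotional valence -> 0.563
print(round(partial_eta_squared(15.12, 4, 104), 3))  # interaction -> 0.368
```

All three values match the ηp² figures reported above.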

fMRI Results

Validation of paradigm

To validate the paradigm, contrasts comparable to those from the original emotional matching paradigm were calculated, that is, negative faces and negative social scenes versus non-social neutral stimuli (cf. Hariri et al. 2002). As presented in Table 2, these contrasts yielded the expected significant activation in the bilateral amygdala within the anatomically pre-defined regions of interest (ROIs).

Table 2 Validation contrasts to the original emotional matching paradigm.

Amygdala responses to negative emotions versus adjusted control conditions

In this section, we report the comparisons within the social categories. Significant bilateral amygdala activation was found for negative social and non-social scenes compared to neutral social and neutral non-social scenes, respectively. For negative faces versus neutral faces, however, no significant difference was observed. Additional whole-brain analyses showed several further activation clusters, including activity in parts of the occipital cortex. The amygdala as well as whole-brain findings for negative faces versus neutral faces and for negative social and non-social scenes versus their respective neutral control conditions are presented in Table 3.

Table 3 Basic comparisons of negative > neutral stimuli within the same social category.

Brain activation to positive emotions versus the adjusted control conditions

No increased brain activation for positive as opposed to neutral or negative emotions was found in the ventral striatum. However, the caudate nucleus, as part of the dorsal striatum, showed stronger activation for positive versus negative pictures at the whole-brain level. Details on the neural activation for these findings are presented in Table 4.

Table 4 Mean brain activation for positive emotion.

Amygdala activation was also explored for the contrasts of positive versus neutral conditions for all social content categories, but did not reach significance in any social domain. Comparisons of negative social and non-social scenes versus positive social and non-social scenes did show stronger amygdala activation for the negative conditions. Results can be found in Table S1 (Supplementary Material).

Amygdala habituation effects

Significant habituation effects for negative and neutral faces were found in the amygdala in a first-minus-last block analysis (see Table 5), as previously applied by Plichta et al.27. As the activation time course in the amygdala for negative and neutral faces shows (see Fig. 1), amygdala activation is highest in the first block of negative faces; the comparison was significant within the first blocks only, with no significant effect when comparing the last blocks (see Table 5). Furthermore, an interaction between block (first versus last) and emotional category was found: amygdala habituation to negative faces was significantly stronger than amygdala habituation to neutral faces.

Table 5 First > last block amygdala habituation findings for fearful and neutral faces.
Figure 1
figure1

The time course of amygdala activation for negative (upper left) and neutral (upper right) faces, as well as for negative non-social (lower left) and neutral non-social scenes (lower right) across the different blocks is presented. Block 1 is shown in red, block 2 in green, block 3 in dark blue, block 4 in light blue, block 5 in pink and block 6 in yellow.

No amygdala habituation effects were found for any of the other categories. The respective time course series are presented in the supplementary material (Supplementary Figure S3). Whole brain habituation findings for the first minus last block analysis are presented in the Supplementary Table S4.

In our study, neither linear nor logarithmic parametric modulation (Pmod)27 could explain the course of amygdala habituation across blocks for negative faces (for details on the other conditions, please see the Habituation Analysis section in the supplementary material and Tables S2 and S3).

Discussion

The present study successfully validated and extended the well-known and widely used emotional matching fMRI paradigm. The new neutral control stimuli, the additional positive emotional condition and the habituation analysis of the amygdala response to negative faces provide further insights into the relationship of (social) emotion processing and amygdala functioning.

The validation contrasts that are comparable to those in the original publication (negative faces and scenes > non-social neutral stimuli) resulted in bilateral amygdala activation. Comparisons within the social categories also yielded significant bilateral amygdala activation for negative social and non-social scenes compared to neutral social and non-social scenes, respectively. We observed no overall amygdala activation for negative versus neutral faces, but a habituation analysis revealed that the amygdala is activated during the first block and then habituates. Concerning the positive stimuli, we did not find activation in the ventral striatum, but increased activation was observed in various other areas including inferior occipital and parietal regions, the insula, the caudate and the temporal lobe.

The additional stimulus categories of neutral and positive emotional pictures, as well as social and non-social scenes, allow further insights into the relationships between the different stimulus categories and can explain more precisely which stimulus characteristics contribute to increased amygdala activation.

The data convincingly show that negative social and non-social scenes elicit stronger amygdala activation than neutral control scenes of the same category, confirming previous claims that negative valence elicits amygdala activation. Negative valence therefore seems to have a strong influence on amygdala activation, even though additional explanatory influences such as salience or arousal cannot be ruled out. Against expectations, the comparison of negative versus neutral faces did not result in significant overall amygdala activation. This was surprising, given the vast amount of literature showing that the amygdala responds particularly strongly to negatively valenced facial expressions33,34,35,36,37. However, other studies comparing fearful versus neutral faces38 or fearful versus neutral body postures39 also did not find amygdala activation. The results of the habituation analysis do, however, offer a reasonable explanation for the missing overall amygdala activation for negative emotional faces. The analysis of the first versus last block revealed that the amygdala habituates significantly more to negative than to neutral faces; the negative versus neutral contrast was only significant in the first, not the last blocks. In contrast to the first versus last block findings, parametric modulation effects did not explain amygdala habituation to negative faces: neither the linear nor the logarithmic habituation approach captured the relationship. This suggests very rapid habituation to negative faces, which reaches a plateau already after the first block and does not decrease further. The first versus last block habituation results are consistent with previous findings of significant amygdala habituation to negative face stimuli27.
Our finding of an interaction in the amygdala for the face conditions, with activation to negative faces habituating more strongly than to neutral faces, is at odds, however, with the results of Tam et al.31. The lack of an interaction in their study may be explained by the experimental task they applied: an n-back task in which three types of emotional faces (neutral, angry, happy) were presented in addition to scrambled images as distractors. Memory was tested not for the emotional faces themselves, but for a letter presented between two identical faces. Our task used qualitatively different negative faces (fearful instead of angry), and these faces, as well as the other stimuli, were the direct focus of the participants' attention. Moreover, rapid amygdala habituation to negative faces has some plausibility when considering evolutionary explanations. Stimuli that initially result in aversive responses are no longer perceived as aversive once the participant has learned that these stimuli do not pose any threat, whereas stimuli that are not threatening at any time do not require re-evaluation and consequently no habituation29,40. The specific habituation to fearful faces as compared to threatening scenes might be explained by the higher similarity of the fearful faces to one another. Threatening scenes are more diverse and might pose different threats to the observer, requiring a different type of habituation for each scene. This would be an interesting and important question for future studies to investigate.

Positive affective stimuli did not result in amygdala activation when compared to neutral stimuli, which suggests that the amygdala responds most strongly to negative affective stimuli. The neural activation observed for positive stimuli did not accord with our expectations, because we did not find ventral striatal activation. The function of ventral striatal activation is strongly debated, but it is most frequently linked to reward processes, either in the context of reward prediction (errors) or reward value41,42. The previously mentioned meta-analysis20 revealed that the strongest ventral striatal activation resulted from monetary rewards as compared to erotic and food rewards. The authors suggested that this might be due to the nature of the applied monetary reward paradigms, rather than to monetary rewards themselves, because monetary reward paradigms usually involve stimulus-reward associations. The other positive stimuli investigated in that meta-analysis, such as erotic and food pictures, were mostly used in passive viewing paradigms. Additionally, stronger ventral striatal activation was elicited when stimuli were presented unexpectedly43 or during the anticipation of a reward44. The study at hand is more closely related to the passive viewing paradigms, which contributed less to ventral striatal activation. Moreover, the block design prevents unexpectedness, at least to some extent, because four pictures of the same category are always presented in a row. These experimental characteristics might explain the missing ventral striatal activation for the positive versus neutral stimuli in our study. However, even though we did not find ventral striatal activity, we did find activation in other regions, such as the cuneus, inferior occipital and parietal regions, the insula, part of the dorsal striatum (caudate) and the temporal lobe, that have also been reported in prior studies investigating reward processing20.
The large network of brain areas that was activated when positive stimuli were shown is consistent with meta-analytic findings, suggesting that our results are plausible. Even though the insula is predominantly associated with negative stimuli22, it also mediates approach and avoidance behavior when social affective stimuli are presented45, and the positive stimuli in this study were chosen for their affective and affiliative nature.

There are several other adaptations of the original Hariri paradigm (2002). Paulus and colleagues46, for example, focused on emotional matching by presenting three facial identities, two of which expressed the same emotion, while continuing to use geometric shapes as control stimuli. Other studies used different designs, presenting dynamic faces47 or negative and neutral pictures with human or nonhuman content48, to investigate the brain systems involved in social emotion perception and regulation. In this study, we stayed very close to the original Hariri paradigm to ensure comparability between the two experimental designs while simultaneously eliminating important confounds, which we think was successful. However, two major limitations of the study should be addressed. First, a block design is not the best approach to measure habituation effects. We chose the block design nonetheless, because our main goal was to validate and extend an existing paradigm, which was originally built as a block design and which has already been successfully studied with respect to habituation effects27. Furthermore, the different arousal ratings for the three affective stimulus categories might be a potential confound and in part account for the amygdala activation49. However, we chose low-arousal positive stimuli on purpose, because we wanted to investigate affectionate neural activations.

This study closes an important gap by providing more appropriate control stimuli for differentiating emotional valence from social content effects on amygdala activation. Our adapted paradigm was validated by replicating the contrasts of the original emotional matching paradigm, and it successfully addresses several shortcomings. These advances may benefit future studies aiming to identify selective impairments in social emotional processing in psychiatric conditions.

Furthermore, we would like to note that the sample size was relatively small and included only male participants. This was a conscious choice, since this study served as a pilot study for clinical trials with male participants only11,12. Future studies should, however, investigate this extended paradigm with a larger sample that also includes female participants.

Methods

Participants

Thirty-two men participated in this study after giving written informed consent. Five participants had to be excluded because of missing data, so that data from 27 men aged between 22 and 35 years (M = 28.78, SD = 3.41) could be analyzed. The study was approved by the ethics committee of the University of Leipzig and was conducted in accordance with the Declaration of Helsinki. This study served as a pilot study for clinical trials involving male ASD patients and neurotypical males; therefore, only male participants were included. For a detailed description of the clinical trials, please see11,12.

Acquisition of fMRI Data

Functional magnetic resonance imaging (fMRI) data were acquired in two runs, using a 3 T Siemens Verio scanner (Siemens Medical Systems, Erlangen) equipped with a 32-channel head coil. T2*-weighted echoplanar images with blood-oxygen-level-dependent contrast were obtained (TR = 2 s, TE = 27 ms, matrix size = 70 × 70, number of slices = 37, slice thickness = 3 mm, FoV = 210 mm, flip angle = 90°). A high-quality T1-weighted image was available for each participant from the in-house database (TR = 2.3 s, TE = 2.98 ms, slices = 176, slice thickness = 1 mm, voxel volume = 1 × 1 × 1 mm, FoV = 256 mm, flip angle = 9°), so that no additional T1-weighted image for anatomical reference was acquired.

Procedure

After completing a practice trial to ensure that the instructions were correctly understood, the participant was placed in the scanner. Stimuli were projected onto a screen, which the participant viewed through a mirror mounted on the head coil. Stimuli were presented using Presentation software (Neurobehavioral Systems, Albany, CA).

The picture stimuli were taken from several databases, the International Affective Picture System (IAPS)50, the Emotional Picture System (EmoPics)51 and the Radboud Faces Database52, as well as from the internet. Prior to the scanning sessions, these pictures were divided into 9 categories: negative (fearful) faces, neutral faces, positive (happy) faces, negative social scenes (e.g. a crying baby), neutral social scenes (e.g. a man reading a newspaper), positive social scenes (e.g. a smiling couple), negative non-social scenes (e.g. a rubbish dump), neutral non-social scenes (e.g. mugs on a table) and positive non-social scenes (e.g. delicious food). Stimuli were chosen and placed into categories based on their provided valence ratings (social and non-social scenes) or their percentage of agreement on emotion categorization (faces), respectively. Pictures belonged to the social categories when people were depicted; the people in the social scenes never looked directly into the camera.

Within each three-picture alignment, the two stimulus pictures resembled each other as much as possible in their scenery set-up to increase matching difficulty. Participants viewed each three-picture alignment and indicated by button press which of the two pictures presented at the bottom of the screen was identical to the one presented centrally at the top. Four three-picture alignments (henceforth: stimulus pictures) of the same category (e.g. positive social scenes) were presented per block.

Experimental Paradigm

During the fMRI session, participants performed an adapted version of the emotional matching task7. Most importantly, the adaptation entailed different picture choices. In this block design, stimuli differed in social content (faces, social and non-social scenes) as well as in emotional valence (negative, neutral, positive), and blocks were presented in a pseudorandomized fashion. Each participant completed two experimental runs, each presenting a different stimulus picture set so that no stimulus picture was viewed in both runs. The two runs also differed in block duration: one run presented 16-second blocks and the other 12-second blocks. Four blocks per category were presented in each run, thus eight blocks per category in total if no data were missing (for details on missing data, please refer to Table S5a–c in the supplementary methods). Each block consisted of four pictures of the same category, e.g. positive social scenes. The order of stimulus set and block duration was balanced. In total, 9 categories were shown in both runs of this block design. Due to equipment failure, some data were lost, so that each category block was presented 6–8 instead of 8 times (for details see Supplementary Table S5a–c). The order in which stimulus blocks were presented in each run was alternated to avoid sequence effects. More details on the procedure can be found in the supplementary material.
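One way to implement the pseudorandomization described above is rejection sampling: shuffle the pool of category blocks until no two consecutive blocks share a category. The paper does not specify its exact randomization scheme, so the constraint and seed below are illustrative assumptions.

```python
# Sketch: pseudorandomized block order for 9 categories x 4 blocks,
# constrained so that no category appears twice in a row.
import random

CATEGORIES = [f"{valence}_{content}"
              for content in ("faces", "social", "non_social")
              for valence in ("negative", "neutral", "positive")]

def pseudorandom_order(blocks_per_category=4, seed=42):
    pool = CATEGORIES * blocks_per_category
    rng = random.Random(seed)
    while True:  # rejection sampling: reshuffle until constraint holds
        rng.shuffle(pool)
        if all(a != b for a, b in zip(pool, pool[1:])):
            return list(pool)

order = pseudorandom_order()
print(len(order))  # 36 blocks per run: 9 categories x 4 blocks
```

For 36 blocks of 9 categories the rejection loop terminates quickly; for longer sequences a constructive algorithm would be preferable.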

MRI Data Analysis

The MRI data were analyzed using SPM8 software (Wellcome Trust Centre for Neuroimaging, London, UK; http://www.fil.ion.ucl.ac.uk/spm) implemented in Matlab 8 (The MathWorks, Natick, MA). All volumes were coregistered to the SPM single-subject canonical EPI image, slice-time corrected and realigned to the mean image volume. The high-resolution anatomical image of each subject was first coregistered to the SPM single-subject canonical T1 image and then to the average functional image. The transformation matrix obtained by normalizing the anatomical image was used to normalize the functional images to MNI space. The normalized images were spatially smoothed with an 8-mm full width at half maximum Gaussian kernel. A high-pass temporal filter with a cutoff of 432 s or 512 s, for the short and long sessions respectively, was applied to remove low-frequency drifts from the data.

Statistical fMRI analysis

Statistical analysis was carried out using a general linear model. Onsets and durations of the 9 conditions of the block design were modeled with a boxcar function and convolved with a hemodynamic response function53. To reduce potential noise artifacts, we used the rWLS toolbox, in which a restricted maximum likelihood algorithm estimates the noise variance of each image and weights images according to their variance54. Paired t-tests were calculated between the two runs to determine whether the data from both runs could be analyzed together. This comparison was made for the two different stimulus sets, as well as for the two different block durations. The main objective of this study was to validate our task by testing whether the amygdala activation observed by Hariri et al.7 can be replicated with our adapted control stimuli. Therefore, we first calculated the contrasts of interest on the first level for each participant (here: negative faces – neutral non-social scenes); afterwards, the parameter estimates for each contrast were entered into a one-sample t-test on the second level to investigate amygdala activation for this contrast. The ROI analyses of the amygdala were performed using atlas-based, structurally defined masks based on the amygdala classification of the automatic anatomical labeling (AAL) and Talairach Daemon (TD) templates55,56 in the WFU PickAtlas57. The threshold was P < 0.05, small-volume family-wise error (FWE) corrected. The second aim was to investigate whether amygdala activation can be observed for negative faces and negative social scenes versus their respective control conditions; thus, one-sample t-tests were conducted for the respective contrasts in the same way as described above. Our third aim was to test whether positive stimuli elicit activation in the ventral striatum.
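The boxcar-and-HRF modeling of one condition can be sketched as follows. This is not the SPM implementation; the double-gamma shape parameters and the block onsets are illustrative assumptions, with TR = 2 s and 16-s blocks taken from the acquisition and paradigm descriptions.

```python
# Sketch: one GLM block regressor = boxcar over block duration
# convolved with a canonical double-gamma HRF.
import numpy as np
from scipy.stats import gamma

TR = 2.0          # repetition time in seconds (as acquired)
n_scans = 200
t = np.arange(n_scans) * TR

def double_gamma_hrf(t):
    # Positive peak around ~5-6 s minus a late undershoot (~15-16 s).
    peak = gamma.pdf(t, 6)
    undershoot = gamma.pdf(t, 16)
    h = peak - undershoot / 6.0
    return h / h.sum()

# Boxcar for 16-s blocks at illustrative onsets (in seconds).
onsets = [20, 80, 140, 220]
boxcar = np.zeros(n_scans)
for onset in onsets:
    boxcar[int(onset // TR): int((onset + 16) // TR)] = 1.0

# Convolve and truncate to the scan length; this column would enter
# the design matrix alongside the other 8 conditions.
regressor = np.convolve(boxcar, double_gamma_hrf(t))[:n_scans]
print(regressor.shape)  # (200,)
```

Contrasts such as negative faces minus neutral non-social scenes are then linear combinations of the fitted regression weights for these columns.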
A ventral striatum ROI from the WFU PickAtlas was used for this analysis, and t-tests were conducted in the same way as described above for the amygdala and the respective contrasts. Our fourth goal was to test which stimuli the amygdala habituates to. To investigate this, we used a first-minus-last-block analysis as well as a parametric modulation (pMod) approach. The first-minus-last-block analysis tests the difference in activation amplitude between the first and the last block; a significant difference indicates that habituation has taken place from the first to the last block. The pMod approach examines the linear relationship between block number and amygdala activation across the whole experiment; a significant result indicates that the reduction in brain activation is systematic over time, such that the response amplitude can be predicted from the block number. Both approaches follow the descriptions by Plichta et al.27 and Tam et al.31.
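The logic of the two habituation measures can be illustrated with simulated block-wise amplitudes (the values below are invented for illustration, not data from the study): the first approach is a simple difference, while the pMod approach fits a linear trend over the mean-centered block number.

```python
import numpy as np

# Simulated block-wise amygdala response amplitudes: six blocks with a
# linear decline of -0.15 per block plus small jitter (illustrative only).
block = np.arange(1, 7, dtype=float)
amplitude = 1.2 - 0.15 * block + np.array([0.02, -0.01, 0.03, -0.02, 0.01, -0.03])

# First-minus-last-block analysis: a simple amplitude difference; a value
# above zero suggests habituation from the first to the last block.
first_minus_last = amplitude[0] - amplitude[-1]

# pMod-style analysis: regress amplitude on the mean-centered block number;
# a negative slope indicates a systematic decline over the whole experiment.
slope = np.polyfit(block - block.mean(), amplitude, 1)[0]
```

The two measures can dissociate: a response that drops abruptly after the first block yields a large first-minus-last difference but a weaker linear trend, which is why both are reported.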

The whole-brain analysis used one-sample t-tests of direct comparisons between two categories of interest (for example, positive > neutral and positive > negative stimuli) at a P < 0.05 FWE-corrected threshold.
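At the second level, such a comparison reduces to a one-sample t-test on per-participant contrast estimates at each voxel; a minimal sketch for a single hypothetical voxel (the sample size and effect values are assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical second-level data: one contrast estimate per participant
# (e.g. positive > neutral) at a single voxel, for 20 participants.
rng = np.random.default_rng(1)
contrast_values = rng.normal(loc=0.4, scale=0.5, size=20)

# One-sample t-test against zero, as used for the whole-brain comparisons.
# FWE correction then tightens the voxel-wise threshold to account for the
# many voxels tested (in SPM via random field theory rather than Bonferroni).
t_stat, p_value = stats.ttest_1samp(contrast_values, popmean=0.0)
```

Small-volume correction, as used for the amygdala ROI analyses, applies the same FWE logic but only within the voxels of the anatomical mask, which makes the corrected threshold less conservative than a whole-brain correction.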

Data Availability

The dataset described in the current study is available from the corresponding author on reasonable request.

References

  1. Shin, L. M. & Liberzon, I. The neurocircuitry of fear, stress, and anxiety disorders. Neuropsychopharmacology 35, 169–191, https://doi.org/10.1038/npp.2009.83 (2010).

  2. Kanske, P., Schonfelder, S., Forneck, J. & Wessa, M. Impaired regulation of emotion: neural correlates of reappraisal and distraction in bipolar disorder and unaffected relatives. Transl Psychiatry 5, e497, https://doi.org/10.1038/tp.2014.137 (2015).

  3. Stein, M. B., Goldin, P. R., Sareen, J., Zorrilla, L. T. & Brown, G. G. Increased amygdala activation to angry and contemptuous faces in generalized social phobia. Archives of General Psychiatry 59, 1027–1034 (2002).

  4. Phan, K. L., Fitzgerald, D. A., Nathan, P. J. & Tancer, M. E. Association between amygdala hyperactivity to harsh faces and severity of social anxiety in generalized social phobia. Biological Psychiatry 59, 424–429, https://doi.org/10.1016/j.biopsych.2005.08.012 (2006).

  5. Pitman, R. K. et al. Biological studies of post-traumatic stress disorder. Nature Reviews Neuroscience 13, 769–787, https://doi.org/10.1038/nrn3339 (2012).

  6. Hariri, A. R., Bookheimer, S. Y. & Mazziotta, J. C. Modulating emotional responses: effects of a neocortical network on the limbic system. Neuroreport 11, 43–48, https://doi.org/10.1097/00001756-200001170-00009 (2000).

  7. Hariri, A. R., Tessitore, A., Mattay, V. S., Fera, F. & Weinberger, D. R. The amygdala response to emotional stimuli: A comparison of faces and scenes. NeuroImage 17, 317–323, https://doi.org/10.1006/nimg.2002.1179 (2002).

  8. Contreras-Rodriguez, O. et al. Disrupted neural processing of emotional faces in psychopathy. Soc Cogn Affect Neurosci 9, 505–512, https://doi.org/10.1093/scan/nst014 (2014).

  9. Foland-Ross, L. C. et al. Amygdala reactivity in healthy adults is correlated with prefrontal cortical thickness. Journal of Neuroscience 30, 16673–16678, https://doi.org/10.1523/Jneurosci.4578-09.2010 (2010).

  10. Thomason, M. E. & Marusak, H. A. Within-subject neural reactivity to reward and threat is inverted in young adolescents. Psychol Med 47, 1549–1560, https://doi.org/10.1017/S0033291716003111 (2017).

  11. Preckel, K., Kanske, P., Singer, T., Paulus, F. M. & Krach, S. Clinical trial of modulatory effects of oxytocin treatment on higher-order social cognition in autism spectrum disorder: a randomized, placebo-controlled, double-blind and crossover trial. BMC Psychiatry 16, 1–10 (2016).

  12. Kamp-Becker, I. et al. Study protocol of the ASD-Net, the German research consortium for the study of Autism Spectrum Disorder across the lifespan: from a better etiological understanding, through valid diagnosis, to more effective health care. BMC Psychiatry 17, https://doi.org/10.1186/s12888-017-1362-7 (2017).

  13. Sinke, C. B. A., Van den Stock, J., Goebel, R. & De Gelder, B. The constructive nature of affective vision: Seeing fearful scenes activates extrastriate body area. PLoS ONE 7, https://doi.org/10.1371/journal.pone.0038118 (2012).

  14. Fredrickson, B. L. The role of positive emotions in positive psychology - The broaden-and-build theory of positive emotions. Am Psychol 56, 218–226, https://doi.org/10.1037//0003-066x.56.3.218 (2001).

  15. Drueke, B. et al. Neural correlates of positive and negative performance feedback in younger and older adults. Behav Brain Funct 11, 17, https://doi.org/10.1186/s12993-015-0062-z (2015).

  16. Rademacher, L. et al. Dissociation of neural networks for anticipation and consumption of monetary and social rewards. NeuroImage 49, 3276–3285, https://doi.org/10.1016/j.neuroimage.2009.10.089 (2010).

  17. Paulus, F. M., Rademacher, L., Schafer, T. A., Muller-Pinzler, L. & Krach, S. Journal impact factor shapes scientists' reward signal in the prospect of publication. PLoS ONE 10, e0142537, https://doi.org/10.1371/journal.pone.0142537 (2015).

  18. Spreckelmeyer, K. N. et al. Anticipation of monetary and social reward differently activates mesolimbic brain structures in men and women. Soc Cogn Affect Neurosci 4, 158–165, https://doi.org/10.1093/scan/nsn051 (2009).

  19. Daniel, R. & Pollmann, S. A universal role of the ventral striatum in reward-based learning: Evidence from human studies. Neurobiol Learn Mem 114, 90–100, https://doi.org/10.1016/j.nlm.2014.05.002 (2014).

  20. Sescousse, G., Caldu, X., Segura, B. & Dreher, J. C. Processing of primary and secondary rewards: a quantitative meta-analysis and review of human functional neuroimaging studies. Neuroscience and Biobehavioral Reviews 37, 681–696, https://doi.org/10.1016/j.neubiorev.2013.02.002 (2013).

  21. Santos, A., Mier, D., Kirsch, P. & Meyer-Lindenberg, A. Evidence for a general face salience signal in human amygdala. NeuroImage 54, 3111–3116, https://doi.org/10.1016/j.neuroimage.2010.11.024 (2011).

  22. Lindquist, K. A., Satpute, A. B., Wager, T. D., Weber, J. & Barrett, L. F. The brain basis of positive and negative affect: Evidence from a meta-analysis of the human neuroimaging literature. Cereb Cortex 26, 1910–1922, https://doi.org/10.1093/cercor/bhv001 (2015).

  23. Greening, S. G., Osuch, E. A., Williamson, P. C. & Mitchell, D. G. The neural correlates of regulating positive and negative emotions in medication-free major depression. Soc Cogn Affect Neurosci 9, 628–637, https://doi.org/10.1093/scan/nst027 (2014).

  24. Perry, D., Hendler, T. & Shamay-Tsoory, S. G. Can we share the joy of others? Empathic neural responses to distress vs joy. Soc Cogn Affect Neurosci 7, 909–916, https://doi.org/10.1093/scan/nsr073 (2012).

  25. Breiter, H. C. et al. Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17, 875–887 (1996).

  26. Plichta, M. M. et al. Test-retest reliability of evoked BOLD signals from a cognitive-emotive fMRI test battery. NeuroImage 60, 1746–1758, https://doi.org/10.1016/j.neuroimage.2012.01.129 (2012).

  27. Plichta, M. M. et al. Amygdala habituation: A reliable fMRI phenotype. NeuroImage 103, 383–390, https://doi.org/10.1016/j.neuroimage.2014.09.059 (2014).

  28. Kleinhans, N. M. et al. Reduced neural habituation in the amygdala and social impairments in autism spectrum disorders. Am J Psychiat 166, 467–475, https://doi.org/10.1176/appi.ajp.2008.07101681 (2009).

  29. Rankin, C. H. et al. Habituation revisited: An updated and revised description of the behavioral characteristics of habituation. Neurobiol Learn Mem 92, 135–138, https://doi.org/10.1016/j.nlm.2008.09.012 (2009).

  30. Shin, L. M. et al. A functional magnetic resonance imaging study of amygdala and medial prefrontal cortex responses to overtly presented fearful faces in posttraumatic stress disorder. Archives of General Psychiatry 62, 273–281, https://doi.org/10.1001/archpsyc.62.3.273 (2005).

  31. Tam, F. I. et al. Altered behavioral and amygdala habituation in high-functioning adults with autism spectrum disorder: an fMRI study. Sci Rep 7, 13611, https://doi.org/10.1038/s41598-017-14097-2 (2017).

  32. Pace-Schott, E. F. et al. Napping promotes inter-session habituation to emotional stimuli. Neurobiol Learn Mem 95, 24–36, https://doi.org/10.1016/j.nlm.2010.10.006 (2011).

  33. Morris, J. S., Ohman, A. & Dolan, R. J. A subcortical pathway to the right amygdala mediating "unseen" fear. Proceedings of the National Academy of Sciences of the United States of America 96, 1680–1685, https://doi.org/10.1073/pnas.96.4.1680 (1999).

  34. Morris, J. S. et al. A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain 121, 47–57, https://doi.org/10.1093/brain/121.1.47 (1998).

  35. Whalen, P. J. et al. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience 18, 411–418 (1998).

  36. Morris, J. S. et al. A differential neural response in the human amygdala to fearful and happy facial expressions. Nature 383, 812–815, https://doi.org/10.1038/383812a0 (1996).

  37. Morris, J. S., Ohman, A. & Dolan, R. J. Conscious and unconscious emotional learning in the human amygdala. Nature 393, 467–470, https://doi.org/10.1038/30976 (1998).

  38. Van den Stock, J., Vandenbulcke, M., Sinke, C. B. A., Goebel, R. & de Gelder, B. How affective information from faces and scenes interacts in the brain. Soc Cogn Affect Neurosci 9, 1481–1488, https://doi.org/10.1093/scan/nst138 (2014).

  39. Van den Stock, J., Vandenbulcke, M., Sinke, C. B. A. & De Gelder, B. Affective scenes influence fear perception of individual body expressions. Human Brain Mapping 35, 492–502, https://doi.org/10.1002/hbm.22195 (2014).

  40. Ramaswami, M. Network plasticity in adaptive filtering and behavioral habituation. Neuron 82, 1216–1229, https://doi.org/10.1016/j.neuron.2014.04.035 (2014).

  41. O'Doherty, J. P. Reward representations and reward-related learning in the human brain: insights from neuroimaging. Curr Opin Neurobiol 14, 769–776, https://doi.org/10.1016/j.conb.2004.10.016 (2004).

  42. Haber, S. N. In Neurobiology of Sensation and Reward (ed. Gottfried, J. A.) Ch. 11 (CRC Press/Taylor & Francis, 2011).

  43. Sescousse, G., Redoute, J. & Dreher, J. C. The architecture of reward value coding in the human orbitofrontal cortex. Journal of Neuroscience 30, 13095–13104, https://doi.org/10.1523/Jneurosci.3501-10.2010 (2010).

  44. Diekhof, E. K., Kaps, L., Falkai, P. & Gruber, O. The role of the human ventral striatum and the medial orbitofrontal cortex in the representation of reward magnitude - an activation likelihood estimation meta-analysis of neuroimaging studies of passive reward expectancy and outcome processing. Neuropsychologia 50, 1252–1266, https://doi.org/10.1016/j.neuropsychologia.2012.02.007 (2012).

  45. Rogers-Carter, M. M. et al. Insular cortex mediates approach and avoidance responses to social affective stimuli. Nature Neuroscience 21, 404–414, https://doi.org/10.1038/s41593-018-0071-y (2018).

  46. Paulus, M. P., Feinstein, J. S., Castillo, G., Simmons, A. N. & Stein, M. B. Dose-dependent decrease of activation in bilateral amygdala and insula by lorazepam during emotion processing. Archives of General Psychiatry 62, 282–288, https://doi.org/10.1001/archpsyc.62.3.282 (2005).

  47. Vrticka, P. et al. Neural substrates of social emotion regulation: a fMRI study on imitation and expressive suppression to dynamic facial signals. Front Psychol 4, https://doi.org/10.3389/fpsyg.2013.00095 (2013).

  48. Fang, Z. Y., Li, H., Chen, G. & Yang, J. J. Unconscious processing of negative animals and objects: Role of the amygdala revealed by fMRI. Front Hum Neurosci 10, https://doi.org/10.3389/fnhum.2016.00146 (2016).

  49. Nielen, M. M. A. et al. Distinct brain systems underlie the processing of valence and arousal of affective pictures. Brain Cognition 71, 387–396, https://doi.org/10.1016/j.bandc.2009.05.007 (2009).

  50. Lang, P. J., Bradley, M. M. & Cuthbert, B. N. International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual. Technical Report A-8 (2008).

  51. Wessa, M. et al. Emotional Picture Set (EmoPicS) (2010).

  52. Langner, O. et al. Presentation and validation of the Radboud Faces Database (RaFD). Cognition & Emotion 24, 1377–1388, https://doi.org/10.1080/02699930903485076 (2010).

  53. Friston, K. J. et al. Statistical parametric maps in functional imaging: A general linear approach. Human Brain Mapping 2, 189–210 (1995).

  54. Diedrichsen, J. & Shadmehr, R. Detecting and adjusting for artifacts in fMRI time series data. NeuroImage 27, 624–634, https://doi.org/10.1016/j.neuroimage.2005.04.039 (2005).

  55. Tzourio-Mazoyer, N. et al. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 15, 273–289, https://doi.org/10.1006/nimg.2001.0978 (2002).

  56. Lancaster, J. L. et al. Automated Talairach Atlas labels for functional brain mapping. Human Brain Mapping 10, 120–131, 10.1002/1097-0193(200007)10:3<120::Aid-Hbm30>3.0.Co;2-8 (2000).

  57. Maldjian, J. A., Laurienti, P. J., Kraft, R. A. & Burdette, J. H. An automated method for neuroanatomic and cytoarchitectonic atlas-based interrogation of fMRI data sets. NeuroImage 19, 1233–1239, https://doi.org/10.1016/S1053-8119(03)00169-1 (2003).


Acknowledgements

We would like to thank the support staff of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig for their help with participant recruitment and data acquisition. We also thank all participants who took part in this study.

Author information


Contributions

K.P. designed the new stimuli, acquired and analyzed the data, and wrote the manuscript. F.-M.T. gave valuable advice and support for data analysis. P.K., S.K., F.M.P., T.S. and Pe.K. contributed to the design of the study, and F.-M.T., F.M.P., S.K., T.S. and Pe.K. edited the manuscript. All authors read and approved the manuscript.

Corresponding author

Correspondence to Katrin Preckel.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Preckel, K., Trautwein, FM., Paulus, F.M. et al. Neural mechanisms of affective matching across faces and scenes. Sci Rep 9, 1492 (2019). https://doi.org/10.1038/s41598-018-37163-9
