The human amygdala parametrically encodes the intensity of specific facial emotions and their categorical ambiguity

The human amygdala is a key structure for processing emotional facial expressions, but it remains unclear which aspects of emotion it processes. We investigated this question with three different approaches: behavioural analysis of 3 amygdala lesion patients, neuroimaging of 19 healthy adults, and single-neuron recordings in 9 neurosurgical patients. The lesion patients showed a shift in behavioural sensitivity to fear, and amygdala BOLD responses were modulated by both fear and emotion ambiguity (the uncertainty with which a facial expression is categorized as fearful or happy). We found two populations of neurons: one whose response correlated with increasing degree of fear or happiness, and a second whose response primarily decreased as a linear function of emotion ambiguity. Together, our results indicate that the human amygdala processes both the degree of emotion in facial expressions and the categorical ambiguity of the emotion shown, and that these two aspects of amygdala processing are most clearly distinguished at the level of single neurons.

Ratings were made on a 1 to 10 scale. For valence we asked 'how pleasant is the emotion that the face shows?', with 1 for very unpleasant and 10 for very pleasant. For intensity we asked 'how intense is the emotion that the face shows?', with 1 for very mild/calm and 10 for very intense/excited. Valence ratings decreased with morph level whereas intensity ratings increased with morph level. Error bars denote ±SEM across subjects. g,j,m,p, Valence (g,m) and intensity (j,p) ratings shown separately by subject gender. h,k,n,q, Valence (h,n) and intensity (k,q) ratings shown separately for each identity shown in a. Shaded area denotes ±SEM across subjects.

neurosurgical, and k-n, control subjects. a,b, Although fMRI subjects gave no explicit confidence ratings, they also showed inverted-U-shaped reaction times with respect to morph level, consistent with the lesion, neurosurgical, and healthy behavioral controls. b-n, Subjects judged facial emotions faster when they subsequently indicated higher confidence (c,g,k), and they tended to report confidence faster when confidence was higher, especially the lesion and control subjects (d,h,l). However, reaction times for reporting confidence did not differ across morph levels (e,i,m) or ambiguity levels (f,j,n). The behavioral patterns of all three subject groups were comparable. Error bars denote one SEM across subjects/sessions.
a, Morph levels. b, Ambiguity levels. c-n, Summary of the effect size across all runs. Effect size was computed in a 1.5-s window starting 250 ms after stimulus onset (a single fixed window, not a moving window) and was averaged across all neurons for each run. Gray and red vertical lines indicate the chance mean effect size and the observed effect size, respectively.

Logistic mixed model
We
Both explicit confidence ratings (one-way repeated-measure ANOVA of ambiguity levels; lesion
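The fear/happy categorizations underlying these analyses are conventionally summarised by a logistic psychometric function of morph level, whose midpoint and slope quantify the behavioural sensitivity shift reported for lesion patients. A minimal sketch of such a fit; the morph levels, response proportions, and parameter names below are illustrative assumptions, not the study's actual data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric curve: probability of a 'fear' judgment vs. morph level."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical data: 7 morph levels (0 = fully happy, 1 = fully fearful)
# and the proportion of 'fear' responses at each level for one subject.
morph = np.linspace(0.0, 1.0, 7)
p_fear = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.98])

(x0, k), _ = curve_fit(logistic, morph, p_fear, p0=[0.5, 10.0])
# x0 is the point of subjective equality (50% 'fear' responses); k is the
# slope, i.e. behavioural sensitivity at the fear/happy category boundary.
print(f"PSE = {x0:.2f}, slope = {k:.1f}")
```

A lateral shift of x0 or a shallower k after amygdala damage would correspond to the kind of sensitivity change described in the main text.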

Fusiform face area (FFA) also tracks emotion degree and ambiguity
We first identified a functional ROI within the FFA sensitive to faces using the face localizer task.

Control analysis for emotion-tracking neurons and ambiguity-coding neurons
We carried out several control analyses to confirm the emotion tracking (Supplementary Fig. 7e,f). When we randomly assigned morph levels for each trial, we observed chance selection. Using the same selection procedure (linear regression) as for emotion-tracking neurons, but with only those trials on which patients classified the face as "fear" or "happy", respectively, we could select 32 neurons. We also carried out a permutation test to confirm the ambiguity coding (Supplementary Fig. 7h-j). When we randomly assigned ambiguity levels for each trial, we observed chance selection of ambiguity-coding neurons.
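The permutation logic described here, randomly reassigning trial labels and re-running the per-neuron regression to estimate how many neurons would be selected by chance, can be sketched as follows. All data, neuron counts, and thresholds below are synthetic assumptions for illustration, not the recorded data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: trial-wise firing rates for n_neurons, with a per-trial
# ambiguity level (3 levels, coded 0/1/2). Ten neurons carry a real effect.
n_neurons, n_trials = 50, 120
ambiguity = rng.integers(0, 3, size=n_trials)
rates = rng.normal(5.0, 1.0, size=(n_neurons, n_trials))
rates[:10] -= 0.8 * ambiguity  # these neurons decrease with ambiguity

def n_selected(levels):
    """Count neurons whose firing rate regresses significantly on levels."""
    return sum(stats.linregress(levels, r).pvalue < 0.05 for r in rates)

observed = n_selected(ambiguity)

# Null distribution: shuffle the ambiguity labels and repeat the selection.
null = [n_selected(rng.permutation(ambiguity)) for _ in range(200)]
p_perm = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(observed, np.mean(null), p_perm)
```

With shuffled labels the selection count falls to the ~5% chance level, so a genuine population effect shows up as an observed count far in the upper tail of the null distribution.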

Model comparisons
Did emotion-tracking neurons differentiate continuously between levels of fear/happy in a face, or might they respond only once a certain threshold level of "fear" was present in a face? To test whether our data were better described by a continuous linear relationship than by a step-like thresholded model, we compared the fit of the linear model against two more complex models using the Akaike Information Criterion (AIC).
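The linear-versus-threshold comparison via AIC can be sketched as follows. The firing rates are synthetic, and the specific alternative models used in the study may differ from the simple step model assumed here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical firing rates of one emotion-tracking neuron: 10 trials at each
# of 7 morph levels (0 = happy anchor, 1 = fear anchor), rising linearly.
morph = np.repeat(np.linspace(0, 1, 7), 10)
rate = 4.0 + 3.0 * morph + rng.normal(0, 0.5, morph.size)

def aic(rss, n, k):
    """Gaussian-error AIC from the residual sum of squares, k parameters."""
    return n * np.log(rss / n) + 2 * k

# Linear model: rate = a + b * morph (2 parameters)
b, a = np.polyfit(morph, rate, 1)
rss_lin = np.sum((rate - (a + b * morph)) ** 2)

# Step (thresholded) model: grid-search the threshold, fit the two plateau
# levels as group means (3 parameters: threshold, low rate, high rate)
rss_step = min(
    np.sum((rate - np.where(morph < thr,
                            rate[morph < thr].mean(),
                            rate[morph >= thr].mean())) ** 2)
    for thr in np.unique(morph)[1:]
)

aic_lin, aic_step = aic(rss_lin, morph.size, 2), aic(rss_step, morph.size, 3)
print(aic_lin < aic_step)  # lower AIC wins: the linear model fits better here
```

Because AIC penalizes each extra parameter, the step model loses on truly graded data despite its extra flexibility; on genuinely thresholded data the comparison would flip.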

Valence and intensity
Could the decreased neural responses to ambiguity be explained by systematic variability in valence and intensity? To test this, we acquired valence and intensity ratings on our stimuli from 23 additional subjects (Supplementary Fig. 1f-q). Ten subjects were Eastern Asians (Supplementary Fig. 1f-k) and 13 were Western Caucasians (Supplementary Fig. 1l-q). As expected, there was a relationship between decreasing valence and increasing intensity as a function of the fearfulness of the morphed faces (Supplementary Fig. 1f,i,l,o). This is consistent with the general evaluation of happiness and fear within a two-dimensional structure of affect 2 . In addition, this control experiment demonstrates that the subtle and gradual changes of facial emotion could be resolved by subjects, a result we found for both genders (Supplementary Fig. 1g,j,m,p) and face identities (Supplementary Fig. 1h,k,n,q). Crucially, the valence/intensity ratings did not exhibit the U-shaped pattern that we found for ambiguity-coding signals.
In particular, note that stimuli with the smallest and largest values on both valence and intensity were equally ambiguous. We therefore conclude that the ambiguity signal was not driven by the valence or intensity dimensions. Notably, Asians (Supplementary Fig. 1f-k) and Caucasians (Supplementary Fig. 1l-q) gave similar ratings.
In addition, when adding the mean intensity rating from Western Caucasians for each face as a covariate into our regression model, we could still select 28 ambiguity-coding neurons (12.0%; binomial P=7.36×10 −6 ; 5 neurons increased firing rate as a function of ambiguity and 23 neurons decreased firing rate as a function of ambiguity), and these neurons had a similar pattern of response to that shown in Fig. 4. Furthermore, using the same selection as for the ambiguity-coding neurons, we found that only 10 neurons had a significant trial-by-trial correlation with intensity ratings (4.27%; binomial P=0.63), and only 1 of these 10 neurons was an ambiguity-coding neuron. Again, similar results were obtained when excluding the neuron that also encoded emotion intensity. Together, our results suggest that the response of ambiguity-coding neurons cannot be explained by emotion intensity.
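Population-level significance of a selected-neuron count against the 5% per-neuron false-positive rate can be checked with an exact binomial test. A sketch; the total of n = 234 neurons is inferred from the reported percentages (28/234 ≈ 12.0%, 10/234 ≈ 4.27%) and should be treated as approximate:

```python
from scipy.stats import binomtest

# 28 of ~234 recorded neurons passed a per-neuron test run at alpha = 0.05;
# under the null, only ~12 neurons (5%) would be expected to pass by chance.
result = binomtest(k=28, n=234, p=0.05, alternative="greater")
print(result.pvalue)  # far below 0.05: more neurons selected than chance
```

The same test with k=10 would be far from significant, matching the reported P=0.63 for the intensity-correlated count.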

Emotion degree and ambiguity coding by each facial identity
Subsets of amygdala neurons have been reported in prior studies to be sensitive to facial identity. When only using anchor trials, we found that there were 10 neurons (4.27%, binomial P=0.63) selective for facial identity; none of these neurons were emotion-tracking neurons and only 2 were ambiguity-coding neurons. Second, we separately analyzed each facial identity. Face Model F1 (Supplementary Fig. 1a) showed an opposite trend. Such an interaction between stimulus and response indicates that these neurons track how well decisions match the stimulus.

Response aligned to button presses and confidence ratings
To provide an initial analysis that might partly separate perceptual and decision processes, we also aligned trials to button presses and quantified the response of each neuron as the number of spikes in preparation for the button press (1-s window, starting 1000 ms before the button press). We first found 16 neurons (…), suggesting that the representations of emotion degree and emotion ambiguity were still robust even when using this later time window for analysis. In another analysis of a different temporal window, we aligned responses to confidence ratings and quantified the response of each neuron as the number of spikes in response to the confidence rating (1-s window, starting at confidence-rating onset). Since we omitted confidence neurons selected by correlation also encoded decisions. These results suggest that emotion-tracking and ambiguity-coding representations are no longer evident at this later time window.
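Aligning responses to an event and counting spikes in a fixed window, as in the 1-s pre-button-press window used here, reduces to a simple windowed count per trial. A minimal sketch with made-up spike and event times:

```python
import numpy as np

def spike_count(spike_times, event_times, start, width):
    """Spikes per trial in the window [event + start, event + start + width)."""
    spike_times = np.asarray(spike_times)
    return np.array([
        np.sum((spike_times >= t + start) & (spike_times < t + start + width))
        for t in event_times
    ])

# Hypothetical spike train and button-press times (seconds)
spikes = np.array([0.9, 1.1, 1.4, 4.2, 4.6, 4.9, 5.3])
presses = np.array([1.5, 5.0])

# 1-s window ending at the button press (i.e. starting 1000 ms before it)
counts = spike_count(spikes, presses, start=-1.0, width=1.0)
print(counts)  # → [3 3]
```

The same function with start=0.0 aligned to confidence-rating onset gives the counts for the second analysis window described above.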

Supplementary Discussion
Neuroimaging studies in humans have identified brain areas that co-vary with some parameters of faces in a continuous manner. In particular, using faces with varying levels of trustworthiness, it has been shown that regions in the amygdala track both how untrustworthy a face appeared (i.e., negative-linear responses) and the overall strength of a face's trustworthiness signal (i.e., nonlinear responses), despite faces not being subjectively perceived 5 . In our present study, we found similar parametric effects, not only with neuroimaging but also in direct electrophysiological responses.
Although most studies find activation within the amygdala that is highest for fearful faces 6-8 , there are also studies showing that the amygdala responds to neutral or happy faces 9 , as well as, to some extent, to all facial expressions 10 . Such general coding of facial expressions is also evident at the single-neuron level 11-14 . Notably, even using the same faces, the amygdala BOLD response increases for fearful vs. happy faces when a face mask is used, whereas it decreases for fearful vs. happy faces when a pattern mask is used 15 . Therefore, the sign of the amygdala's BOLD response to facial emotions may largely depend on the task. In the present study, we found both fear-tracking and happy-tracking neurons. There were overall more fear-tracking than happy-tracking neurons (21 vs. 12; χ2-test: n.s.), whereas we found a greater BOLD signal for happy faces in the left amygdala. However, these findings are not discrepant, since 11 of the 12 neurons showing increasing firing rate with the degree of happiness were in the left amygdala.
The most ambiguous faces lie in the middle of a face continuum whereas anchor faces lie at its extremes. Thus, another possible explanation of the amygdala responses to emotion ambiguity that we observed is that the amygdala encodes the absolute distance from the average face, consistent with a previous finding that the amygdala responds more strongly to the extremes of face dimensions than to faces near the average face 16 . Furthermore, adaptation studies seek a graded recovery from neural adaptation with ever greater dissimilarity between pairs of stimuli 1 . In the present study, anchor faces are more distinctive than ambiguous faces and are thus less subject to adaptation from perceptual neighbors. Therefore, more pronounced adaptation for stimuli in the middle than at the extremes of the face continuum could in principle be one mechanism explaining the amygdala's response to ambiguity found in our study. However, we used a sufficient number of distinct stimuli, and their order was completely randomized, making face adaptation unlikely in our protocol. Moreover, adaptation to facial emotion was likely further reduced because facial identity also changed on a trial-by-trial basis in the present study.
In this study, we showed that two closely related variables carrying meta-information about the decision itself (fear/happy) are represented in the amygdala: ambiguity, based on the objective discriminability of the stimuli, and confidence, based on the subjective judgment of that discriminability. This is interesting because a judgment of confidence is thought to be a direct consequence of an assessment of uncertainty 17 . The mechanisms by which such confidence judgments are made, however, remain poorly understood. It has been suggested that confidence judgments rely on a modified "race to threshold" process that integrates the evidence for and against a hypothesis separately; the difference between the two quantities of integrated evidence is proportional to the subjective confidence in the decision 17 . In a recognition memory task, we recently showed that the activity of a specific subset of human amygdala neurons is compatible with this model: the stronger the integrated difference between the familiarity and novelty signals carried by individual neurons, the larger the subjective confidence 18 . Together, this raises the important question of whether the amygdala provides a general ambiguity signal supplying the information necessary to judge confidence in decisions about internal states in general. This hypothesis remains to be investigated by comparing the activity of the same neurons across several different decision-making tasks.
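The "race to threshold" account can be illustrated with a toy simulation in which two accumulators integrate evidence for and against, and confidence is read out as the gap between them at decision time. The threshold, drift values, noise level, and rectified increments below are our illustrative assumptions, not the model parameters of the cited work:

```python
import numpy as np

rng = np.random.default_rng(2)

def race_trial(drift, threshold=30.0, noise=1.0):
    """Two rectified accumulators race to a bound; the winner determines the
    choice, and the gap between them at decision time gives the confidence."""
    a = b = 0.0
    while a < threshold and b < threshold:
        a += max(0.0, rng.normal(drift, noise))    # evidence for 'fear'
        b += max(0.0, rng.normal(-drift, noise))   # evidence for 'happy'
    choice = "fear" if a >= threshold else "happy"
    return choice, abs(a - b)

# Confidence should be higher for an unambiguous stimulus (large |drift|)
# than for an ambiguous one (drift near zero):
conf_clear = np.mean([race_trial(0.8)[1] for _ in range(200)])
conf_ambig = np.mean([race_trial(0.05)[1] for _ in range(200)])
print(conf_clear > conf_ambig)
```

Ambiguous stimuli drive both accumulators at similar rates, so they finish close together and the read-out confidence is low, mirroring the inverse relationship between ambiguity and confidence described in the text.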
Many neurons in the macaque amygdala change their firing rate in response to rewards or to stimuli predicting the later delivery of rewards or punishments. For example, amygdala neurons differentiate between cues that predict delivery of positively- or negatively-valued rewards 19 and they respond to reinforcements that are unexpected 20 . Reward-related coding of uncertainty has also been identified in macaque midbrain dopamine 21 and septal neurons 22 , which signal after cues that predict unreliable rather than reliable rewards 23 . In these studies, individual visual stimuli are associated with the probability of obtaining reward after extensive conditioning. In contrast, in our task no trial-by-trial feedback or reward is delivered, no training on the ambiguity associated with each stimulus is provided, and the neurons we identified have a stimulus-evoked response unrelated to reward delivery or expectation. We thus show a reward- and reward-value-independent decreasing response to ambiguity (an increasing response to certainty) in the amygdala. Human single-neuron recordings are uniquely suited to test this, because no training before the recording is required and patients perform the task without rewards or punishments to incentivize correct performance.