
# The impact of facemasks on emotion recognition, trust attribution and re-identification

## Introduction

The impact of facemasks on emotion recognition, trustworthiness and face identity, however, is not necessarily of the same degree. The most prominent theory of face perception holds that the recognition of emotional expressions and of face identity are distinct perceptual processes encoded by independent psychological8 and neural9,10 mechanisms, with emotions and other social attributes relying heavily on highly mobile facial regions, and facial identity mainly based on invariant, static traits of the face. More specifically, as concerns emotion recognition and trust attribution, several experimental studies have investigated the amount and type of social information conveyed by specific regions of the face, revealing that the mouth is pivotal in recognizing emotions, especially happiness11,12,13,14. Similarly, faces are judged as more trustworthy when the contrast of the mouth (and eye) regions is increased by means of experimental manipulations15. While this may suggest that facemasks could impair emotion recognition and trust attribution, by the time we ran this experiment almost every existing study had employed explicit experimental manipulations of the mouth (and other facial regions) rather than ecological stimuli such as actual facemasks. As for identity recognition, in contrast, recent findings suggest that the recognition of faces is not necessarily tied to internal features16. A recent study shows that a mix of internal and external features seems to weigh more in the recognition of both familiar and unfamiliar faces17; the same study shows that the mouth seems to be the least relevant feature. Identity recognition is known to be based on configural processing, which gets easily disrupted when the face percept is presented upside-down18 or its integrity is compromised by other experimental manipulations19.
How deeply the emotion-identity dissociation runs, and whether it depends upon different visual information and neural pathways, is still a matter of contention20,21. Only very recently have some studies on face perception in the presence of surgical facemasks highlighted the possible impact of facemasks on emotion recognition, trustworthiness and identity22,23,24. However, to the best of our knowledge, no study has tested whether these processes are differently impaired by different mask types, e.g. comparing standard facemasks and masks with a transparent window that uncovers the mouth region.

On a practical note, it is worth stressing that the social costs of using facemasks should not be considered as a reason against their adoption. Rather, a deeper understanding of the mechanisms underpinning the processing of emotion, trustworthiness and identity might lead—from an applicative perspective—to the development of new methods to mitigate the loss of social information and, at the same time, to maximize socio-sanitary benefits.

## Results

### Emotion recognition in masked and unmasked faces

Results concerning the emotion recognition task were based on the scores given by each participant to the Karolinska Directed Emotional Faces (KDEF)25,26 stimuli (n = 40), posing Happiness, Sadness, Fear or Neutral expressions. We found a significant main effect of Condition (TM, SM, NM), indicating that the presence/type of mask affected the ability to recognize the posed emotions, χ2(2, N = 122) = 40.53, p < 0.001, pMC < 0.001, 99%CI [0.000, 0.001], ε2 = 0.335. Interestingly, post-hoc analysis (Mann–Whitney) showed that recognition was significantly worse in SM (N = 40, Mdn = 0.81, 95%CI [0.79, 0.83]) than in both NM (N = 41, Mdn = 0.93, 95%CI [0.93, 0.93]) (U = 179, p < 0.001, r = 0.67) and TM (N = 41, Mdn = 0.93, 95%CI [0.93, 0.93]) (U = 303, p < 0.001, r = 0.54). In contrast, no difference was found between NM and TM (U = 802, p = 0.722, r = 0.03), suggesting that the effect of TM was comparable to the NM condition, and that accuracy in SM was significantly lower than in the other conditions (see Fig. 2A).

Since facemasks could have a different impact on the four Emotions (Fear, Sadness, Happiness and Neutral), we ran four Kruskal–Wallis tests to verify whether each emotion was significantly affected by the presence of the mask. We found a significant main effect of Condition (NM, SM, TM) for Happiness, Sadness and Fear (H: χ2(2, N = 122) = 28.69, p < 0.001, ε2 = 0.237; S: χ2(2, N = 122) = 37.52, p < 0.001, ε2 = 0.310; F: χ2(2, N = 122) = 19.64, p < 0.001, ε2 = 0.162). In contrast, no effect was found for Neutral faces (N: χ2(2, N = 122) = 1.11, p = 0.573, ε2 = 0.009) (Fig. 2B). As concerns Happiness, Sadness and Fear, subsequent post-hoc analysis showed a significant drop for SM with respect to both TM and NM (p < 0.001; Mann–Whitney tests).

To further investigate which emotions were more or less affected by the presence/type of mask, we ran, for each Condition, three Friedman tests on the emotion recognition scores. Consistent with our hypotheses, the test for NM failed to show a main effect of Emotion (F, S, H, N: χ2(3, N = 41) = 3.81, p = 0.282, W = 0.031), suggesting that participants recognized all the unmasked expressions to the same degree (Fig. 2C). In contrast, in both TM and SM, we found a significant effect of Emotion (F, S, H, N) (TM: χ2(3, N = 41) = 10.36, p = 0.016, W = 0.084; SM: χ2(3, N = 41) = 23.22, p < 0.001, W = 0.193). More specifically, as concerns TM, the ability to correctly recognize emotions was significantly better preserved in the case of Happiness, compared to Neutral (p < 0.05), Sadness (p < 0.05) and Fear (p < 0.01) expressions. As concerns SM, in contrast, the recognition of the Neutral expression was significantly better preserved than that of all emotional expressions (p < 0.05 for H; p < 0.005 for S and F), and Sadness was the most affected expression (p < 0.001 for N and H; p < 0.05 for F; see Fig. 2C and Table 1 for the whole pattern and the post-hoc analyses).

The analysis of the direction of errors, performed via a chi-square test comparing, for each Emotion (N, H, S, F), the actual responses with the corresponding expected values, showed a significant effect in all conditions (p < 0.0001). The emotions whose actual values significantly exceeded the expected ones (i.e., whose chi-square value exceeded the average value for that emotion) were the following. In both SM and TM, Neutral expressions were mistaken for Sad expressions. In addition, in TM both negative emotions (Sadness and Fear) were mistaken for Neutral expressions. SM, in contrast, showed a different pattern, with the two negative emotions (Sadness and Fear) reciprocally confused. In addition, Happiness was frequently mistaken for Neutral (see Table 2).

### Trust attribution in masked and unmasked faces

The effect of masks on trustworthiness was studied by means of two distinct analyses, targeting stimuli of the Chicago Face Database (CFD)27 validated for trustworthiness (n = 8), and KDEF stimuli posing emotional expressions (n = 40), respectively.

The first analysis investigated to what extent the ratings of trustworthiness attributed to the CFD pictures were influenced by the presence/type of mask. A Kruskal–Wallis test applied to all CFD stimuli across the three Conditions (NM, SM, TM) showed no effect (χ2(2, N = 122) = 3.64, p = 0.161, ε2 = 0.030). We then subdivided the stimuli into two sets, i.e. untrustworthy and trustworthy faces, in accord with previous results29,30, confirming that the untrustworthy stimuli obtained significantly lower scores (Z = − 9.23, p < 0.001, r = 0.836; Wilcoxon signed-rank test). The same analysis applied to the two sets of stimuli showed that, while no significant effect emerged for the faces rated as trustworthy (χ2(2, N = 122) = 0.52, p = 0.770, ε2 = 0.004), the untrustworthy faces showed a significant effect of Condition (NM, SM, TM), χ2(2, N = 122) = 13.16, p = 0.001, pMC = 0.002, 99%CI [0.000, 0.004], ε2 = 0.109 (Fig. 3A). Interestingly, Mann–Whitney post-hoc analysis showed a significant increase of the trust scores assigned to the untrustworthy faces in the SM condition (Mdn = 30%, 95%CI [27.5, 30]) compared to the NM condition (Mdn = 20%, 95%CI [15, 25]) (U = 429, p < 0.001, r = 0.41) and, albeit only marginally significant, compared to the TM condition (Mdn = 20%, 95%CI [20, 30]) (U = 636, p = 0.057, r = 0.21) – indicating that untrustworthy faces are rated as “less untrustworthy” when wearing TM and, even more so, when wearing SM. The same procedure applied to the KDEF stimuli gave no significant results (χ2(2, N = 122) = 2.32, p = 0.313, ε2 = 0.019).

The second analysis investigated the effect of different emotions on trust attribution across the three conditions (NM, SM, TM) by analyzing the trust scores obtained by the KDEF stimuli. A Friedman test applied to the three Conditions separately showed a main effect of Emotion in all Conditions (NM: χ2(3, N = 41) = 16.32, p = 0.001, W = 0.133; TM: χ2(3, N = 41) = 24.46, p < 0.001, W = 0.199; SM: χ2(3, N = 40) = 45.72, p < 0.001, W = 0.381) (Fig. 3B). In particular, Wilcoxon signed-rank tests for multiple comparisons showed that, in all conditions, Happy faces obtained a higher degree of trust than both negative (Fear, Sadness) and Neutral expressions (p < 0.001). In addition, we found that Sad expressions were scored as more trustworthy than Fearful ones in NM (p < 0.05), and a similar trend was also observed in TM (p = 0.051; see Table 1).

### Re-identification of masked and unmasked faces

The re-identification task was aimed at investigating the capability to correctly re-identify unmasked faces previously observed either masked (TM, SM) or unmasked (NM). Each participant was presented with pictures of unmasked faces (n = 12), some of which (n = 4) had already been presented in one of the three Conditions (SM, TM, NM) during the first session. For each picture, participants were required to judge whether or not they had seen the face before (Fig. 1C).

Lastly, all previous analyses were controlled for gender, age, residence area, and level of education. None of these factors showed significant differences (all ps n.s.). Moreover, when we checked the experiment duration in the whole sample (Mdn = 583 s, 95% C.I. [554, 608]), we did not find any response pattern alterations (outliers) – also when taking into account the features of the stimuli (i.e., trustworthy/untrustworthy; unseen/seen faces) – indicating that our manipulations did not affect response strategies.

## Discussion

In the present study we tested to what extent observing an individual wearing a standard (SM) or a transparent facemask (TM), rather than no mask (NM), alters emotion recognition and trust attribution, as well as incidental episodic memory of previously observed faces. We found that, as expected, standard masks (a) interfere with emotion recognition and trust attribution, and (b) make it harder to re-identify an already encountered face. More interestingly, we found that transparent masks (c) exert minimal to no effect on emotion recognition and trust attribution, but (d) hinder re-identification as much as standard masks do. In the following sub-sections we briefly discuss each of these aspects along with some possible implications.

### Emotion recognition and facemasks

Observing emotional expressions in individuals wearing different types of masks alters the observer’s processing of emotion in a different manner. In particular, while standard masks impair the detection of facial displays, transparent masks—which restore visual access to the mouth region—have virtually no effects on emotion recognition, leading to results that are comparable to those obtained when the face is fully visible. Of note, this effect was particularly strong in the case of the three emotional expressions, but virtually absent in the case of the neutral expression, which was indeed correctly recognized in all conditions.

The evidence that transparent masks do not impair the recognition of emotions suggests that emotional displays are largely detected on the basis of specific individual details – and the mouth in particular – rather than on a holistic processing of the whole face. This hypothesis is in line with a large body of data highlighting the role of the mouth region in the recognition of many emotional expressions, in particular happiness13,28,29,30, thus suggesting that transparent masks provide a workable alternative to standard masks to face the Covid-19 emergency while allowing individuals to share emotions and to convey face-mediated social intentions and non-verbal communication in a standard fashion.

Another interesting finding is that, consistently with a previous study22, masks make no difference for identifying a face as neutral rather than emotional. While people do seem to treat “neutral face” as a proper category39, the emotional meaning of neutral faces may be influenced by the context40. However, previous literature strongly suggests that the emotional neutrality of a face can be easily decoded from the eyes41,42. On a more practical note, the data suggest that transparent masks almost entirely avoid the “emotional screening” effect of standard masks. Indeed, the accuracy of emotion recognition for faces wearing transparent masks is almost comparable to that obtained with unmasked faces, and significantly better than that obtained with faces wearing standard masks, for all emotions.

### Trustworthiness and facemasks

The effect of masks on trustworthiness has been studied by means of two distinct analyses. First, we established to what extent the ratings of trustworthiness attributed to the CFD pictures—where scores for trustworthiness of unmasked faces are validated—are influenced by the presence/type of mask. While the perceived trustworthiness of faces validated as “trustworthy” in the CFD remains stable between NM, TM and SM conditions, things go differently once we consider those faces that, according to the CFD, are “untrustworthy”. In other words, the low trust judgments on untrustworthy faces in the “no mask” condition are consistent with those of the CFD, but their scores are less negative in faces wearing transparent masks, and even less so in faces wearing standard masks—albeit they never reach the score of trustworthy faces. In a sense, it looks like “untrustworthiness” gets screened by masks.

If trust is based on both valence and dominance, which dimension drives this screen-off? Tentative as it may be, the evidence about emotion recognition at least suggests that valence perception is not screened by facemasks. Moreover, the trust judgments of emotional faces discussed above reveal the same pattern of positive correlation between valence and trustworthiness across all three conditions. It is then reasonable to assume that masks screen untrustworthiness by partially obstructing cues relevant for dominance estimation. These findings are in line with the positive correlation between perceived dominance and facial width-to-height ratio (fWHR)54. Since the width of the face is measured on the basis of the distance between the cheekbones, which are partially covered by the masks, it is possible that the untrustworthiness-screening effect of masks is mediated by the obstruction of the zygomatic region, which yields dominance-related cues.

### Re-identification and facemasks

Barring future studies showing that standard and transparent facemasks exert a different impact on familiar faces, this dissociation between emotion and re-identification seems in line with dual-route models of face perception positing that facial identity and emotional expressions are processed by separate cognitive mechanisms, triggered by distinct visual features8,20,57. It is widely accepted that, in normal conditions and in healthy observers, identity recognition relies on the processing of the whole face rather than on individual parts19,20. In contrast, emotion recognition is largely based on specific information from the mouth, or the eye, region, depending on which emotion is expressed.

Neuroscientific dual-route models of face perception suggest that emotion and identity recognition are processed by two different sectors of the temporal cortex10,57. Emotion recognition, relying on the identification of the changeable, dynamic aspects of the face, is processed in the “dorsal stream” for faces, encompassing the visual motion area MT and the STS. Face identity, in contrast, mainly relies on those aspects of the face structure that are invariant across changes (static), and is processed in the “ventral stream” for faces, in the inferotemporal region58,59. On the basis of this perspective, we speculate that, during the presentation of the stimuli, the dorsal stream was minimally affected by the transparent mask, since the mouth was fully visible. In contrast, the reduced capability to re-identify previously observed (masked) faces points to a more dramatic impairment of the ventral stream. This reduced functioning can be accounted for by two alternative explanations. First, the capability to recognize the actor’s identity could depend on holistic processing, which is compromised by both types of masks. While this interpretation is in line with previous hypotheses on identity recognition, one could expect that the disruption, via masking, of the holistic processing of the face should lead to a much more pronounced reduction in accuracy than the one observed in our study. An alternative, and more tempting, interpretation is that the capability to re-identify the actor relies on different types of information, not limited to the mouth/eye regions, but also including cues from the lower half of the face, such as the contrast of the jaw and cheek regions, small freckles and wrinkles, which are screened by both types of masks. To disentangle these alternative hypotheses, further studies may compare the effect of semi-transparent vs. fully transparent masks, to investigate whether the latter recover not only the capability to recognize emotions and trustworthiness, but also the capability to re-identify previously observed (masked) individuals.

Given the dissociation between dynamic and static features encoded by the dorsal and ventral streams respectively, one could argue that such a model cannot account for our results, since all our stimuli were static. However, although the dorsal stream for faces is indeed typically triggered by dynamic facial expressions, Furl and colleagues58 demonstrated that the presentation of static emotional expressions – like the ones used in our study – activated the same STS sectors typically activated by dynamic expressions, and hypothesized that static emotional expressions determine an “implied motion”, hence activating the same neuronal populations that encode dynamic expressions.

### Implications

To the best of our knowledge, this study represents the first systematic enquiry into social readouts from faces wearing standard and transparent masks. Many more analyses will be needed to get a full grasp of the complex, often context-mediated, interaction between various types of masks and social information based on face perception. The present study could be fruitfully complemented by further within-subject designs. Moreover, while for the sake of simplicity we have treated masks only as if they subtract social information by obstructing the face, it is likely that they also add social information of some sort. Fischer and colleagues29 demonstrated that the emotional meaning ascribed to women’s faces covered by a digital manipulation differs slightly from that of the same faces covered by a niqab (a traditional Muslim veil). Closer to the object investigated here, i.e. the medical facemask, social sciences such as anthropology60 and semiotics61 offer precious insights into how its meaning may change across cultures and times. Nevertheless, we think that some tentative implications may legitimately be drawn from our data.

First, we have seen that standard, but not transparent, masks compromise the capability to recognize emotions (albeit probably not valence) on the basis of facial cues. Being able to see one’s facial movements is not only useful for the sake of knowing mental states. As mentioned above, emotional decoding is likely to involve facial mimicry, which, beside its role in emotion recognition, is also thought to play a role in fostering empathy2,3,4,5,6,7. These expectations seem supported by a study conducted in Hong Kong after the SARS pandemic62, reporting that primary care doctors visiting patients with a medical facemask were perceived on average as less empathic, especially when subjects had been patients of the same doctor for a long time. It is thus safe to assume that the possible benefits of transparent masks extend beyond enabling verbal communication with sign language, which originally inspired their design, by also favoring empathy mediated by facial mimicry. Consequently, as the social impairments brought about by facemasks partially explain why some people refuse to wear them, by partially re-enabling social communication transparent masks could mitigate the skepticism toward wearing them. Moreover, as it has been shown that empathy is pivotal in promoting compliant behaviors toward physical distancing and mask wearing63, by restoring the emotional display that scaffolds empathy, transparent facemasks may indirectly promote the diffusion of mask wearing itself.

However, we should refrain from the simplistic conclusion that transparent masks are always preferable to standard ones. Recall that facial first impressions profoundly affect observers’ behavior, often by perpetuating prejudices64,65. For instance, it has been recently shown that, during the triages aimed at establishing the severity and hence the priority of patients in the emergency unit of a hospital, the perceived untrustworthiness of faces predicted less severe categorization66. As this outcome is likely to embed some inequalities, based on our finding that masks reduce perceived untrustworthiness, it would be interesting to investigate whether masked patients would have received fairer treatment.

A final implication is that masked faces are harder to recognize, even when their mouth region is observable. Trivial as it may seem, further investigation of this matter will prove paramount in contexts such as forensics. Indeed, as the face is the most visible hallmark of personal identity, it is not by chance that in many countries the law forbids covering the face in public spaces without necessity.

## Materials and methods

### Participants

The experiment was an on-line test (see below) carried out on 122 Italian native speakers (47 females; age = 33 ± 8), recruited through different social media platforms. Before the experiment, participants provided some basic demographic information (available upon request to the corresponding author), read the main instructions, and gave informed consent. By accessing a single non-reusable link, each participant could run the experiment directly from home on a laptop, smartphone, or tablet. An anti-ballot-box-stuffing mechanism was employed to avoid multiple participations from the same device. Informed consent was requested before the experiment started, and the whole procedure was approved by the Institutional Review Board (IRB) of Sapienza University of Rome (ID 0001261 - 31.07.2020). All methods were carried out in accordance with relevant guidelines and regulations.

### Experimental procedure

The experiment consisted of an on-line test (Qualtrics.com) composed of two distinct sessions. The first session (“emotion recognition and trust attribution”) was aimed at evaluating the impact of SM and TM on emotion recognition and trust attribution. The second session (“re-identification task”) was aimed at investigating the impact of SM and TM on the capability to re-identify faces. The entire study lasted 10 ± 4 min. Response times for each task and condition gave no significant results and were discarded from further analyses (Emotion Recognition: Mdn = 1.71 s, 95% C.I. [1.63, 1.81]; Trust Attribution: Mdn = 1.61 s, 95% C.I. [1.52, 1.70]; Recall: Mdn = 2.11 s, 95% C.I. [1.92, 2.23]).

(a) Emotion recognition and trust attribution

(b) Re-identification task. The second session was aimed at evaluating the capability to recognize an unmasked face previously presented in the NM, TM or SM condition. All faces in this session showed neutral expressions (from the Chicago Face Database; see below), and were displayed without masks, regardless of the condition the participant was assigned to in the “emotion recognition and trust attribution” session. For each participant, 12 faces were shown, 4 of which had already been presented in the previous session, in the NM, SM or TM condition. Previously presented faces were alternated, in random order, with 8 brand-new faces. The unmasked face, either previously presented or new, was shown on the screen and participants were asked whether or not they had already seen the face in the first session. To minimize both priming and recency effects, the items used in the re-identification task were always shown in the middle blocks (2 and 3) of the first session.

### Experimental stimuli

Original, unmasked versions of the stimuli (NM) were retrieved from two datasets: the Karolinska Directed Emotional Faces (KDEF)25,26 and the Chicago Face Database (CFD)27. More specifically:

(a) Emotion recognition. To evaluate the effects of facemasks on the recognition of emotional expressions, we used 40 images of faces from the KDEF database. This database includes 70 (unmasked) faces depicting 7 emotional expressions from 5 different perspectives. For the current study, we selected the following four expressions, from the frontal perspective: fear, sadness, joy and neutral (Fig. 1A). For each of the four expressions, we selected 10 faces (5 males, 5 females) among those with the highest emotion recognition ratings. The selection of two negative emotions (fear and sadness) was aimed at obtaining results for distinct categories of emotion sharing the same (negative) valence. In our paradigm, each selected face (N = 10) was shown 4 times (1 per facial expression) in a within-subjects fashion.

(b) Trust attribution and re-identification task. Since one of the goals of our study was to investigate the effect of facemasks on perceived trustworthiness, we also included 16 images of faces from the CFD, for which a validation of the degree of trust assigned to each face is already available for an Italian sample67. The CFD consists of 158 high-resolution, standardized, frontal photographs of (unmasked) males and females between 18 and 40 years old. For the current study, we selected 16 faces (8 males, 8 females) among the most trustworthy and untrustworthy ones, in a balanced fashion. Images were cropped to match the size of those from the KDEF (964 × 678 pixels). As regards the trust attribution task, we selected 8 faces (4 males and 4 females) among those with the highest and lowest trustworthiness scores. Each face showed a neutral expression and was shown only once during the first session.

In the re-identification task, we presented our participants with 4 CFD faces (2 males and 2 females) among those of the first session and 8 new (i.e., previously unseen) CFD faces (4 males and 4 females). In the whole design, trustworthy and untrustworthy faces were shown in a balanced manner.

All the original databases used in the current study are publicly available. To obtain visual stimuli for the masked TM and SM conditions, a professional graphic designer edited each unmasked stimulus, creating two versions of the same stimulus by superimposing two different masks: a standard medical mask and a transparent one, in which the mouth could be seen through the transparency of a plastic part (Fig. 1A).

### Measurements and Statistical analysis

(a) Emotion recognition

The first analysis was conducted on the stimuli from the KDEF, and was aimed at verifying a difference in emotion recognition across the three conditions (NM, SM, TM), regardless of the emotion type. For each participant, we assigned 1 point for each correctly recognized emotion and 0 points for each error. A composite score was obtained by averaging the 40 ratings obtained from the KDEF faces (confidence intervals of the median were computed via bias-corrected and accelerated bootstrap – BCa: 1000 samples). Given the violation of the normality assumption, we ran a (non-parametric) Kruskal–Wallis test with Monte-Carlo exact tests (of which we report the significance level (pMC; 5000 sampled tables) as well as its 99% confidence interval) considering the average scores for the three Conditions (NM, SM, TM). In case of significant effects, Mann–Whitney tests (Bonferroni corrected) were applied as post-hoc tests. Effect sizes for Mann–Whitney tests were computed using the following equation: $$r = \frac{Z}{\sqrt{N}}$$.
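The pipeline above – omnibus Kruskal–Wallis test, Bonferroni-corrected Mann–Whitney post-hocs with the effect size r = Z/√N, and a BCa bootstrap interval for the median – can be sketched with SciPy. The accuracy values below are synthetic placeholders loosely patterned after the reported medians, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-participant accuracy scores for the three conditions
# (placeholder values only -- NOT the study's actual data).
scores = {
    "NM": np.clip(rng.normal(0.93, 0.03, 41), 0, 1),
    "TM": np.clip(rng.normal(0.93, 0.03, 41), 0, 1),
    "SM": np.clip(rng.normal(0.81, 0.05, 40), 0, 1),
}

# Omnibus (non-parametric) Kruskal-Wallis test across the three conditions
h_stat, p_omnibus = stats.kruskal(*scores.values())

# BCa bootstrap confidence interval for the median of one condition
boot = stats.bootstrap((scores["SM"],), np.median, n_resamples=1000,
                       confidence_level=0.95, method="BCa")
ci = boot.confidence_interval

def mann_whitney_r(x, y):
    """Mann-Whitney U with effect size r = Z / sqrt(N); Z is obtained
    from the normal approximation of the U distribution."""
    u, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, p, abs(z) / np.sqrt(n1 + n2)

# Post-hoc pairwise comparisons, Bonferroni-corrected for 3 comparisons
for a, b in [("NM", "SM"), ("TM", "SM"), ("NM", "TM")]:
    u, p_raw, r = mann_whitney_r(scores[a], scores[b])
    print(f"{a} vs {b}: U={u:.0f}, p(corr)={min(p_raw * 3, 1.0):.4f}, r={r:.2f}")
```

Note that SciPy does not perform the Monte-Carlo exact variant of the Kruskal–Wallis test reported in the paper; the asymptotic chi-square approximation shown here is a reasonable substitute at these sample sizes.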

The second analysis considered the effect of facemasks on the different emotions. We performed four Kruskal–Wallis tests considering the average scores for the three Conditions (NM, SM, TM), for each emotion. Post-hoc tests were conducted as in the previous analysis.

The third analysis considered which emotions were more affected by the presence/type of mask. For each of the three Conditions (NM, SM, TM), due to the repeated-measures design, we performed a Friedman test considering the scores attributed to the four different Emotions (N, H, F, S) as dependent variables. In case of significant effects, Wilcoxon signed-rank tests were applied as post-hoc tests to detect differences between emotions.
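The repeated-measures step can be sketched as follows: a Friedman test over the four emotion scores within one condition, the Kendall's W effect size reported in the Results (computed as χ²/(n·(k−1))), and Wilcoxon signed-rank post-hocs. The scores are synthetic, chosen only to make the contrast visible:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 41  # participants in one condition

# Synthetic per-participant recognition scores for the four expressions
# within a single condition (illustrative values, not the study's data)
emotions = {
    "N": np.clip(rng.normal(0.90, 0.05, n), 0, 1),
    "H": np.clip(rng.normal(0.93, 0.05, n), 0, 1),
    "S": np.clip(rng.normal(0.75, 0.08, n), 0, 1),
    "F": np.clip(rng.normal(0.78, 0.08, n), 0, 1),
}

# Friedman test: repeated-measures comparison of the four emotions
chi2, p = stats.friedmanchisquare(*emotions.values())

# Kendall's W effect size for the Friedman statistic: W = chi2 / (n * (k - 1))
w_kendall = chi2 / (n * (len(emotions) - 1))

# If significant, Wilcoxon signed-rank tests as post-hoc pairwise comparisons
posthoc = {}
if p < 0.05:
    names = list(emotions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            _, p_pair = stats.wilcoxon(emotions[a], emotions[b])
            posthoc[(a, b)] = p_pair
```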

The direction of errors in the emotion categorization task was assessed by a chi-square test comparing the actual scores with the expected values for correct answers and for errors. Expected values for correct answers and for errors were calculated from the average value of all correct answers and errors, respectively.
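The logic of this test can be illustrated with `scipy.stats.chisquare` on a hypothetical confusion row. As a simplification (an assumption of this sketch, not the paper's exact scheme), errors are expected to spread evenly over the three wrong labels; cells whose observed count exceeds expectation flag the dominant confusions:

```python
import numpy as np
from scipy import stats

# Hypothetical counts of labels given to Sad stimuli in one condition
# (order: Neutral, Happy, Sad, Fear). Numbers are made up for illustration.
labels = ["Neutral", "Happy", "Sad", "Fear"]
observed = np.array([120, 15, 250, 95])

# Expected counts: keep the observed number of correct "Sad" answers and
# spread the remaining errors evenly across the three wrong labels.
n_errors = observed.sum() - observed[2]
expected = np.array([n_errors / 3, n_errors / 3, observed[2], n_errors / 3])

chi2, p = stats.chisquare(observed, f_exp=expected)

# Labels chosen more often than expected mark the dominant confusions
confusions = [lab for lab, o, e in zip(labels, observed, expected)
              if o > e and lab != "Sad"]
```

With these illustrative counts, Sad stimuli are confused above chance with Neutral and Fear, mirroring the kind of pattern summarized in Table 2.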

(b) Trust attribution

The first analysis was conducted on the stimuli from the CFD, and was aimed at investigating trust attribution across the three conditions (NM, SM, TM), namely whether the presence/type of mask affects both the trust assigned to a specific face and the consistency between (un)trustworthiness scores assigned to masked and unmasked faces. Analyses were performed on three distinct sets of data: (a) all trust scores obtained from all CFD stimuli, regardless of the trust scores stored in the dataset; (b) the trust scores obtained from the CFD stimuli validated as highly trustworthy in an Italian sample67; and (c) the trust scores obtained from the CFD stimuli validated as highly untrustworthy in the same Italian sample. The difference between the scores obtained by trustworthy and untrustworthy faces was assessed by means of a Wilcoxon signed-rank test. In all the previous tests, we applied a (non-parametric) Kruskal–Wallis test considering the average scores for the three Conditions (NM, SM, TM). In case of significant effects, Mann–Whitney tests (Bonferroni corrected) were applied as post-hoc tests. Effect sizes for Mann–Whitney tests were computed as before. The same statistical procedure was conducted on the stimuli from the KDEF.

The second analysis, also conducted on the stimuli from the KDEF, was aimed at identifying which emotions were most affected by the presence/type of mask in terms of trust attribution. For each of the three Conditions (NM, SM, TM), given the repeated-measures design, we performed a Friedman test considering the scores attributed to the four Emotions (N, H, F, S) as dependent variables. In case of significant effects, Wilcoxon signed-rank tests were applied as post-hoc tests to detect differences between emotions.

(c)

Analyses were performed to investigate an effect of Condition (NM, SM, TM) on the re-identification scores, with the aim of assessing the impact of the mask on the ability to remember a previously displayed face. For each participant, we evaluated the percentage of correct answers by scoring each trial on a binary scale (1 and 0 points for correct and incorrect responses, respectively). We ran a Kruskal–Wallis test considering the average scores for the three Conditions (NM, SM, TM). In case of significant effects, Mann–Whitney tests (Bonferroni corrected) were applied as post-hoc tests.
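The scoring-and-comparison step can be sketched as follows, with simulated binary responses and assumed per-condition accuracy rates:

```python
# Sketch of the re-identification scoring: each trial scored 1 (correct) or
# 0 (incorrect), averaged into a per-participant accuracy percentage, and
# the three conditions compared with a Kruskal–Wallis test. Responses and
# accuracy rates are simulated/assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants, n_trials = 30, 20
rates = {"NM": 0.90, "SM": 0.75, "TM": 0.70}  # assumed accuracy rates

accuracy = {
    cond: rng.binomial(1, rate, size=(n_participants, n_trials)).mean(axis=1) * 100
    for cond, rate in rates.items()
}
h, p = stats.kruskal(*accuracy.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```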

Lastly, in order to control for subject and trial variability across the whole experiment, we incorporated (i) subjects, (ii) trials (i.e., stimuli) and (iii) a subjects × faces interaction as random effects within three generalized linear mixed models (GLMMs), one for each task (emotion recognition, trust attribution, re-identification), given the possible heterogeneity of preferences across participants and trials. These models confirmed all previous significance levels (p < 0.05).
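A crossed-random-effects model of this kind can be sketched as below. The paper fitted GLMMs; here a Gaussian mixed model via statsmodels `MixedLM` stands in as a simplified illustration, using the variance-components-with-a-single-dummy-group encoding of crossed subject and stimulus effects. Data, effect sizes and the Gaussian link are all assumptions:

```python
# Hedged sketch of the random-effects control analysis: a mixed model with
# crossed random effects for subjects and stimuli, encoded as variance
# components over one dummy group. Simulated data; Gaussian approximation
# of the GLMMs described in the text.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_stim = 15, 12
subj_eff = rng.normal(0, 0.5, n_subj)   # per-subject random shift
stim_eff = rng.normal(0, 0.5, n_stim)   # per-stimulus random shift

rows = []
for s in range(n_subj):
    for t in range(n_stim):
        cond = ["NM", "SM", "TM"][t % 3]
        base = {"NM": 7.0, "SM": 6.0, "TM": 5.5}[cond]  # assumed fixed effects
        rows.append({
            "subject": s, "stimulus": t, "condition": cond,
            "score": base + subj_eff[s] + stim_eff[t] + rng.normal(0, 1),
        })
df = pd.DataFrame(rows)

model = smf.mixedlm(
    "score ~ C(condition)", df,
    groups=np.ones(len(df)),  # single dummy group: random effects live in vc
    vc_formula={"subject": "0 + C(subject)", "stimulus": "0 + C(stimulus)"},
)
result = model.fit()
print(result.fe_params)
```

With treatment coding, the fixed effects are the NM intercept plus the SM and TM contrasts, so a significant condition contrast surviving the random effects mirrors the confirmation reported in the text.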

## References

1. Oosterhof, N. N. & Todorov, A. Shared perceptual basis of emotional expressions and trustworthiness impressions from faces. Emotion 9, 128–133 (2009).

2. Palagi, E., Celeghin, A., Tamietto, M., Winkielman, P. & Norscia, I. The neuroethology of spontaneous mimicry and emotional contagion in human and non-human animals. Neurosci. Biobehav. Rev. 111, 149–165 (2020).

3. Hess, U. & Fischer, A. Emotional mimicry as social regulation. Personal. Soc. Psychol. Rev. 17, 142–157 (2013).

4. Hess, U. & Fischer, A. in Oxford Research Encyclopedia of Communication (2017). https://doi.org/10.1093/acrefore/9780190228613.013.433

5. Tramacere, A. & Ferrari, P. F. Faces in the mirror, from the neuroscience of mimicry to the emergence of mentalizing. J. Anthropol. Sci. 94, 113–126 (2016).

6. Dimberg, U., Andréasson, P. & Thunberg, M. Emotional empathy and facial reactions to facial expressions. J. Psychophysiol. 25, 26–31 (2011).

7. Mancini, G., Ferrari, P. F. & Palagi, E. In play we trust. Rapid facial mimicry predicts the duration of playful interactions in geladas. PLoS ONE 8, e66481 (2013).

8. Bruce, V. & Young, A. Understanding face recognition. Br. J. Psychol. 77, 305–327 (1986).

9. Haxby, J. V., Hoffman, E. A. & Gobbini, M. I. The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233 (2000).

10. Adolphs, R. Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behav. Cogn. Neurosci. Rev. 1, 21–62 (2002).

11. Blais, C., Roy, C., Fiset, D., Arguin, M. & Gosselin, F. The eyes are not the window to basic emotions. Neuropsychologia 50, 2830–2838 (2012).

12. Roberson, D., Kikutani, M., Döge, P., Whitaker, L. & Majid, A. Shades of emotion: What the addition of sunglasses or masks to faces reveals about the development of facial expression processing. Cognition 125, 195–206 (2012).

13. Schurgin, M. W. et al. Eye movements during emotion recognition in faces. J. Vis. 14, (2014).

14. Wegrzyn, M., Vogt, M., Kireclioglu, B., Schneider, J. & Kissler, J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS One 12, (2017).

15. Robinson, K., Blais, C., Duncan, J., Forget, H. & Fiset, D. The dual nature of the human face: There is a little Jekyll and a little Hyde in all of us. Front. Psychol. 5, (2014).

16. Logan, A. J., Gordon, G. E. & Loffler, G. Contributions of individual face features to face discrimination. Vis. Res. 137, 29–39 (2017).

17. Abudarham, N. & Yovel, G. Same critical features are used for identification of familiarized and unfamiliar faces. Vision Res. 157, 105–111 (2019).

18. Yin, R. K. Looking at upside-down faces. J. Exp. Psychol. 81, 141–145 (1969).

19. Young, A. W., Hellawell, D. & Hay, D. C. Configurational information in face perception. Perception 42, 1166–1178 (2013).

20. Calder, A. J. & Young, A. W. Understanding the recognition of facial identity and facial expression. Nat. Rev. Neurosci. 6, 641–651 (2005).

21. Bernstein, M. & Yovel, G. Two neural pathways of face processing: a critical evaluation of current models. Neurosci. Biobehav. Rev. 55, 536–546 (2015).

22. Carbon, C. C. Wearing face masks strongly confuses counterparts in reading emotions. Front. Psychol. 11, 566886 (2020).

23. Olivera-La Rosa, A., Chuquichambi, E. G. & Ingram, G. P. D. Keep your (social) distance: pathogen concerns and social perception in the time of COVID-19. Pers. Individ. Dif. 166, 110200 (2020).

24. Carragher, D. J. & Hancock, P. Surgical face masks impair human face matching performance for familiar and unfamiliar faces. PsyArXiv Prepr. https://doi.org/10.31234/osf.io/n9mt5 (2020).

25. Lundqvist, D., Flykt, A. & Öhman, A. The Karolinska directed emotional faces—KDEF (Dep. Clin. Neurosci. Psychol. Sect., Karolinska Institutet, 1998).

26. Goeleven, E., De Raedt, R., Leyman, L. & Verschuere, B. The Karolinska directed emotional faces: a validation study. Cogn. Emot. 22, 1094–1118 (2008).

27. Ma, D. S., Correll, J. & Wittenbrink, B. The Chicago face database: a free stimulus set of faces and norming data. Behav. Res. Methods 47, 1122–1135 (2015).

28. Calvo, M. G. & Nummenmaa, L. Detection of emotional faces: salient physical features guide effective visual search. J. Exp. Psychol. Gen. 137, 471–494 (2008).

29. Fischer, A. H., Gillebaart, M., Rotteveel, M., Becker, D. & Vliek, M. Veiled emotions: the effect of covered faces on emotion perception and attitudes. Soc. Psychol. Personal. Sci. 3, 266–273 (2012).

30. Nestor, M. S., Fischer, D. & Arnold, D. “Masking” our emotions: Botulinum toxin, facial expression, and well-being in the age of COVID-19. J. Cosmet. Dermatol. (2020).

31. Bombari, D. et al. Emotion recognition: the role of featural and configural face information. Q. J. Exp. Psychol. 66, 2426–2442 (2013).

32. Nummenmaa, L. & Calvo, M. G. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis. Emotion 15, 243–256 (2015).

33. Elfenbein, H. A. & Ambady, N. On the universality and cultural specificity of emotion recognition: a meta-analysis. Psychol. Bull. 128, 203–235 (2002).

34. Ekman, P., Sorenson, E. R. & Friesen, W. V. Pan-cultural elements in facial displays of emotion. Science 164, 86–88 (1969).

35. Blais, C., Fiset, D., Roy, C., Régimbald, C. S. & Gosselin, F. Eye fixation patterns for categorizing static and dynamic facial expressions. Emotion 17, 1107–1119 (2017).

36. Russell, J. A. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol. Bull. 115, 102–141 (1994).

37. Widen, S. C. Children’s interpretation of facial expressions: the long path from valence-based to specific discrete categories. Emot. Rev. 5, 72–77 (2013).

38. Gendron, M., Crivelli, C. & Barrett, L. F. Universality reconsidered: diversity in making meaning of facial expressions. Curr. Dir. Psychol. Sci. 27, 211–219 (2018).

39. Etcoff, N. L. & Magee, J. J. Categorical perception of facial expressions. Cognition 44, 227–240 (1992).

40. Carrera-Levillain, P. & Fernandez-Dols, J. M. Neutral faces in context: their emotional meaning and their function. J. Nonverb. Behav. 18, 281–299 (1994).

41. Smith, M. L., Cottrell, G. W., Gosselin, F. & Schyns, P. G. Transmitting and decoding facial expressions. Psychol. Sci. 16, 184–189 (2005).

42. Duncan, J. et al. Orientations for the successful categorization of facial expressions and their link with facial features. J. Vis. 17, (2017).

43. Dunn, J. R. & Schweitzer, M. E. Feeling and believing: the influence of emotion on trust. J. Pers. Soc. Psychol. 88, 736–748 (2005).

44. Winston, J. S., Strange, B. A., O’Doherty, J. & Dolan, R. J. Automatic and intentional brain responses during evaluation of trustworthiness of faces. Nat. Neurosci. 5, 277–283 (2002).

45. Caruana, F. et al. Mirroring other’s laughter. Cingulate, opercular and temporal contributions to laughter expression and observation. Cortex 128, 35–48 (2020).

46. Wood, A. & Niedenthal, P. Developing a social functional account of laughter. Soc. Personal. Psychol. Compass 12, e12383 (2018).

47. Martin, J., Rychlowska, M., Wood, A. & Niedenthal, P. Smiles as multipurpose social signals. Trends Cogn. Sci. 21, 864–877 (2017).

48. Dunbar, R. I. M. Bridging the bonding gap: the transition from primates to humans. Philos. Trans. R. Soc. London B Biol. Sci. 367, (2012).

49. Rymarczyk, K., Żurawski, Ł., Jankowiak-Siuda, K. & Szatkowska, I. Neural Correlates of Facial Mimicry: Simultaneous Measurements of EMG and BOLD Responses during Perception of Dynamic Compared to Static Facial Expressions. Front. Psychol. 9, (2018).

50. Dimberg, U. Facial reactions to facial expressions. Psychophysiology 19, 643–647 (1982).

51. Caruana, F. et al. A mirror mechanism for smiling in the anterior cingulate cortex. Emotion 17, 187–190 (2017).

52. Oosterhof, N. N. & Todorov, A. The functional basis of face evaluation. Proc. Natl. Acad. Sci. U. S. A. 105, 11087–11092 (2008).

53. Zebrowitz, L. A. & Montepare, J. M. Social psychological face perception: why appearance matters. Soc. Personal. Psychol. Compass 2, 1497–1517 (2008).

54. Geniole, S. N., Denson, T. F., Dixson, B. J., Carré, J. M. & McCormick, C. M. Evidence from meta-analyses of the facial width-to-height ratio as an evolved cue of threat. PLoS ONE 10, e0132726 (2015).

55. De Gelder, B., Vroomen, J., Pourtois, G. & Weiskrantz, L. Non-conscious recognition of affect in the absence of striate cortex. NeuroReport 10, 3759–3763 (1999).

56. Tamietto, M. et al. Unseen facial and bodily expressions trigger fast emotional reactions. Proc. Natl. Acad. Sci. U. S. A. 106, 17661–17666 (2009).

57. Pitcher, D. & Ungerleider, L. G. Evidence for a third visual pathway specialized for social perception. Trends Cogn. Sci. 25, 100–110 (2021).

58. Furl, N., Hadj-Bouziane, F., Liu, N., Averbeck, B. B. & Ungerleider, L. G. Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey. J. Neurosci. 32, 15952–15962 (2012).

59. Gerbella, M., Caruana, F. & Rizzolatti, G. Pathways for smiling, disgust and fear recognition in blindsight patients. Neuropsychologia 128, 6–13 (2019).

60. Siu, J. Y. M. Qualitative study on the shifting sociocultural meanings of the facemask in Hong Kong since the severe acute respiratory syndrome (SARS) outbreak: Implications for infection control in the post-SARS era. Int. J. Equity Health 15, (2016).

61. Leone, M. The semiotics of the medical face mask: east and west. Signs and Media, forthcoming. http://www.facets-erc.eu/wp-content/uploads/2020/05/Massimo-LEONE-2020-The-Semiotics-of-the-Medical-Face-Mask-Final-Version.pdf (2020).

62. Wong, C. K. M. et al. Effect of facemasks on empathy and relational continuity: A randomised controlled trial in primary care. BMC Fam. Pract. 14, (2013).

63. Pfattheicher, S., Nockur, L., Böhm, R., Sassenrath, C. & Petersen, M. B. The emotional path to action: empathy promotes physical distancing and wearing of face masks during the COVID-19 pandemic. Psychol. Sci. 31, (2020).

64. Olivola, C. Y., Funk, F. & Todorov, A. Social attributions from faces bias human choices. Trends Cogn. Sci. 18, 566–570 (2014).

65. Todorov, A., Olivola, C. Y., Dotsch, R. & Mende-Siedlecki, P. Social attributions from faces: determinants, consequences, accuracy, and functional significance. Annu. Rev. Psychol. 66, 519–545 (2015).

66. Bagnis, A. et al. Judging health care priority in emergency situations: patient facial appearance matters. Soc. Sci. Med. 260, 113180 (2020).

67. Felletti, S. & Paglieri, F. Trust your peers! How trust among citizens can foster collective risk prevention. Int. J. Disaster Risk Reduct. 36, 101082 (2019).

## Acknowledgements

The authors wish to thank Roberto Gamboni for the photoediting. MV has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 819649 - FACETS, P.I. Massimo Leone), which also supported the photoediting and the publication.

## Author information


### Contributions

A.A., M.M., F.C., F.P. and M.V. together designed the experiment and interpreted the results. A.A. and M.M. performed data acquisition and analyses and drafted the sections “Methods” and “Results”. M.V. drafted the sections “Introduction” and “Discussion”. F.P. provided comments on trustworthiness. F.C. reworked the entire article. All authors have contributed to, seen, reviewed and approved the manuscript.

### Corresponding author

Correspondence to Fausto Caruana.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Marini, M., Ansani, A., Paglieri, F. et al. The impact of facemasks on emotion recognition, trust attribution and re-identification. Sci Rep 11, 5577 (2021). https://doi.org/10.1038/s41598-021-84806-5

