Abstract
During the COVID-19 pandemic, the use of face masks has become a daily routine. Studies have shown that face masks increase the ambiguity of facial expressions which not only affects (the development of) emotion recognition, but also interferes with social interaction and judgement. To disambiguate facial expressions, we rely on perceptual (stimulus-driven) as well as preconceptual (top-down) processes. However, it is unknown which of these two mechanisms accounts for the misinterpretation of masked expressions. To investigate this, we asked participants (N = 136) to decide whether ambiguous (morphed) facial expressions, with or without a mask, were perceived as friendly or unfriendly. To test for the independent effects of perceptual and preconceptual biases we fitted a drift–diffusion model (DDM) to the behavioral data of each participant. Results show that face masks induce a clear loss of information leading to a slight perceptual bias towards friendly choices, but also a clear preconceptual bias towards unfriendly choices for masked faces. These results suggest that, although face masks can increase the perceptual friendliness of faces, people have the prior preconception to interpret masked faces as unfriendly.
Introduction
During the COVID-19 pandemic, wearing a face mask has become part of our daily life as it restricts the spread of the SARS-CoV-2 virus1,2. Since facial expressions play an important role in our social communication3,4,5,6, wearing such a mask might also affect social conduct. For example, when we accidentally step on someone’s toes in the supermarket, apologizing with a friendly smile might not be sufficient to save the situation. Indeed, recent studies show that facial masks affect face perception, recognition and identification7,8,9,10,11,12,13,14,15,16, interfere with social interaction and social judgments7,13,14 and might even hamper the development of emotion recognition in children12,17.
Importantly, face masks not only reduce the amount of sensory information, but potentially also influence the classification of facial expressions in a more systematically biased way, by adding perceptual information (e.g., masks could make people look more angry or sad) and/or by evoking preconceptions about obscured emotional expressions. In other words, the detrimental effects face masks have on our social interactions7,13,14 can stem from perceptual aspects of emotion recognition, and potentially from pre-existing biases driven by previously held negative connotations associated with facial masks. Given the general social importance of facial emotion recognition, it is vital to disentangle such perceptual from preconceptual biases. Especially since face masks have become part of our daily routine, awareness of how they can impact our social communication might help to counter the potential negative consequences.
Evidence showing misinterpretation of emotional expressions due to face masks is in line with previous research into the effects of occluding the lower part of the face in several emotional expressions (e.g.18,19,20,21). This especially holds for the identification of happy expressions, for which people rely more on the mouth region; in contrast, for identifying angry expressions, the eye region seems to be the most prominent diagnostic cue21,22,23,24,25,26. However, most of the studies on the effects of facial masks used facial stimuli with full prototypical emotional expressions (e.g., happy, angry, surprise, fear, sadness, disgust), ignoring the fact that emotional expressions in daily life are often less intense and not profoundly demarcated. As such, facial expressions are often ambiguous, making their interpretation more susceptible to a perceptual (stimulus-driven) or preconceptual (top-down) bias21,27,28,29,30,31,32,33,34. Given this ambiguity of emotional expressions in daily life, the question arises how facial masks affect the interpretation of facial expressions. For example, the occlusion of a moderately friendly smile might result in an interpretation of the expression based on the eyes only, which might especially be problematic when the smile is not completely sincere and only used as a social gesture, or even used to mask negative feelings35,36. In this sense, occlusion of the mouth results in a loss of sensory information, which may produce a perceptual (stimulus-driven) bias away from smile-driven friendliness37,38.
On the other hand, as already introduced above, masks may also add (unintended) perceptual information. For instance, although wearing a facial mask during the Covid-19 pandemic is mostly accepted9, facial masks can still elicit a negative association due to occlusion of important parts of the face9,39,40,41,42,43, which might in turn result in a tendency to interpret an ambiguous emotional expression as a negative, unfriendly appearance. Such contextual (goal-directed) effects have proven to affect the interpretation of emotional expression as well, resulting in a preconceptual (top-down) bias27,28,38.
In sum, both the perceptual (stimulus-driven) and the preconceptual (top-down) perspectives predict that masks will elicit a stronger tendency (bias) to classify facial expressions more often as negative (e.g., unfriendly). To investigate whether facial masks elicit such perceptual and/or preconceptual biases in the interpretation of ambiguous emotional expressions, we conducted an experiment in which participants were asked whether ambiguous happy or angry expressions, with and without facial masks, were perceived as friendly or unfriendly.
Perceptual and preconceptual biases are hard to separate, as both result in faster and more frequent choices for a favored alternative. To distinguish between a possible perceptual and a preconceptual bias, we use the drift–diffusion model (DDM)44,45,46. The DDM allows us to decompose the underlying choice process and quantify a possible perceptual or preconceptual bias by utilizing both accuracy and reaction time data46,47,48. The model has been successfully applied to distinguish between preconceptual and perceptual biases in various social and motivational decision-making paradigms (e.g.46,49,50,51,52) as well as in studies investigating biases in fundamental perceptual processes (e.g.47,53,54,55,56,57). The DDM assumes that, during a perceptual choice, noisy sensory evidence accumulates until a decision threshold is hit (Fig. 1B; for reviews see refs.45,48,58,59,60,61). For instance, when a facial mask affects the uptake of friendly information (e.g., a smile) at the stimulus level, the accumulation process will change in favor of the unfriendly alternative, resulting in a perceptual bias, with faster and more ‘unfriendly’ choices (see Fig. 1C). This process is thus sensitive to stimulus-driven biases, but at the same time the starting point of evidence accumulation can differ based on top-down priors. For example, a preconception towards the unfriendly choice may lead to a lower decision threshold for the unfriendly option, positioning it closer to the starting point. This asymmetric positioning of the two decision thresholds is equivalent to an asymmetric positioning of the starting point and results in a choice bias, generating more and faster responses for the unfriendly alternative, as less evidence is required to reach the unfriendly decision threshold (see Fig. 1D).
Even though these biases produce similar behavioral changes, drift–diffusion modelling allows us to disentangle them and to answer the question of whether the mask-driven impairments in the interpretation of ambiguous facial expressions are due to perceptual and/or preconceptual biases.
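To make this logic concrete, the two bias types can be illustrated with a small simulation (a sketch with illustrative parameter values, not the fitting procedure used in the study):

```python
import numpy as np

def simulate_ddm(drift, start=0.0, threshold=1.0, noise=1.0,
                 dt=0.002, max_t=3.0, n_trials=1000):
    """Euler simulation of a two-boundary drift-diffusion process.

    Evidence starts at `start` (0 = midway between thresholds) and
    accumulates until it reaches +threshold ('unfriendly') or
    -threshold ('friendly'). Returns choices (+1/-1) and RTs for
    trials that finished before the deadline.
    """
    rng = np.random.default_rng(0)
    n_steps = int(max_t / dt)
    choices, rts = [], []
    for _ in range(n_trials):
        increments = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_steps)
        path = start + np.cumsum(increments)
        crossed = np.flatnonzero(np.abs(path) >= threshold)
        if crossed.size:  # threshold reached before the deadline
            choices.append(np.sign(path[crossed[0]]))
            rts.append((crossed[0] + 1) * dt)
    return np.array(choices), np.array(rts)

# A perceptual bias shifts the drift; a preconceptual bias shifts the
# starting point. For a fully ambiguous (zero-evidence) stimulus, both
# produce more 'unfriendly' (+1) choices, which is why choice
# proportions alone cannot separate the two mechanisms.
c_drift, _ = simulate_ddm(drift=0.5)             # perceptual bias
c_start, _ = simulate_ddm(drift=0.0, start=0.3)  # preconceptual bias
print(np.mean(c_drift == 1), np.mean(c_start == 1))  # both clearly above 0.5
```

In the fitted model, the two mechanisms leave different joint signatures on choice proportions and RT distributions, which is what the DDM exploits to disentangle them.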
To test for a possible perceptual or preconceptual bias, we fitted the DDM to participants’ performance on judging emotionally ambiguous faces, with and without a mask, on friendliness. Overall, we expect lower drift-rates for masked facial expressions, reflecting less sensory information to make the correct decision and resulting in slower and more error-prone choices. In addition, our methodology allows us to disentangle a perceptual from a preconceptual bias in the assessment of the friendliness of masked and unmasked ambiguous facial expressions. We expect that, if such a preconceptual bias is indeed present when assessing masked faces, the distance between the start and end point of the accumulation process will be smaller for the unfriendly compared to the friendly alternative, due to the negative connotations typically associated with face masks40,42,43. Furthermore, we will test whether facial masks induce a perceptual (stimulus-driven) bias due to the occlusion of the mouth region and possibly due to the diagnostic cues in the visual features of the mask itself.
Results
Below we will first report the effects of stimulus emotion and mask on choice and response time data. Next, we will disentangle perceptual and preconceptual biases using DDM analyses that explain the descriptive results in terms of parameter changes.
Descriptive results
To quantify the effect of facial masks on the interpretation of ambiguous facial expressions, a logistic function was fit to the choice data (Eq. 1; Fig. 2A). For both masked and unmasked ambiguous facial expressions, the proportion of unfriendly choices increased as a function of stimulus emotion (from happy to angry: see Fig. 2A). For masked faces, there was a small but significant negative choice bias (b0), reflecting a tendency to choose friendly more often (b0mdn = − 0.22, one-sample Wilcoxon signed-rank test for b0mdn = 0, V = 2793, P < 0.01). No significant bias was found for unmasked faces. Sensitivity (b1) to the stimulus was significantly lower for masked (b1mdn = 6.47) vs unmasked (b1mdn = 10.91) facial expressions (Wilcoxon signed-rank test, W = 9151, P < 0.01).
We tested for significant effects in response times using a 2 (masked vs. unmasked) × 2 (happy vs. angry) × 3 (stimulus emotion) repeated measures analysis of variance (ANOVA). For response times, there was a significant main effect of emotional ambiguity of the facial expression, with increasing response times for higher ambiguity levels (stimulus emotion − 10 and 10), symmetrical around zero intensity, F(1,135) = 366.4, P < 0.01. The main effect of mask was also significant, with slower response times for masked stimuli, F(5,675) = 290.5, P < 0.01. In addition, there was a significant interaction between the stimulus emotion of the facial expression and mask, indicating that the effect of mask was not equally distributed across ambiguity levels, F(5,675) = 44.1, P < 0.01. More specifically, the difference between response times for masked and unmasked facial expressions became smaller for the high (− 10, 10) ambiguity levels (see Fig. 2B). Furthermore, post-hoc t-tests show significantly slower response times for masked happy than for masked angry facial expressions with a low (− 60 vs 60) or moderate (− 40 vs 40) ambiguity in stimulus emotion (both ts(135) > 5.4, P < 0.01). No such difference was found for the facial stimuli without a mask. Instead, participants were slower for angry vs happy facial expressions without a mask at high emotional ambiguity (10 vs − 10), t(135) = 3.82, P < 0.01.
Overall, these analyses of choice and response times show that there are small, asymmetrical effects of a facial mask on the interpretation of ambiguous emotional expressions. To further quantify these effects, we fitted the DDM to the data, allowing us to decompose the effects into the underlying choice parameters.
DDM analyses
The RT results in Fig. 2 partly suggest a bias towards unfriendly choices for masked stimuli, showing faster choices for easy (60) and moderate (40) angry masked faces. In contrast, the psychometric data reflect a general loss of sensitivity combined with an unexpected bias towards friendly choices in the mask condition. These contradictory findings suggest that facial masks might affect the interpretation of ambiguous emotional faces via different underlying mechanisms. To identify whether bias effects are driven by a preconceptual (top-down) or perceptual (stimulus-driven) process, the diffusion model was fitted to both the RT and choice data simultaneously, allowing us to disentangle these different types of bias.
For the diffusion-model fits (see the methods section for model selection and Fig. 4 for goodness-of-fit) we found that facial masks lead to a reduced drift-rate slope across stimulus emotion (mean(SD) v-slopemask = − 1.78(1.1), Wilcoxon signed-rank test, V > 9294, P < 0.001; see Table 1 and Fig. 3A), relative to the slope for the unmasked facial expressions (mean(SD) v-slope0 = 5.68(0.11)). This indicates that drift-rate increases less steeply with emotional change for masked facial expressions compared to unmasked ones.
To test for a perceptual bias, we fitted an additional parameter (vcmask) to capture a possible change in the drift-rate criterion (vc0) due to the facial masks. We found a significant negative effect of mask, showing a perceptual bias towards the friendly alternative (mean(SD) vcmask = − 0.28(0.53), Wilcoxon signed-rank test, V > 2107, P < 0.001; see Table 1 and Fig. 3B) for masked facial expressions, relative to unmasked facial expressions with mean(SD) vc0 = 0.17(0.36).
In addition to a perceptual bias, we tested for a preconceptual bias by adding an additional parameter (zmask) to the model that captures possible shifts in the starting point (z0). We found that masks increased the starting point of the decision process by mean(SD) zmask = 0.10(0.11) (Wilcoxon signed-rank test, V > 8331, P < 0.001) relative to the unmasked condition (z0 = − 0.06(0.08)). This shift in the starting point of the accumulation process indicates a significant preconceptual bias towards the unfriendly alternative for masked facial expressions (see Table 1 and Fig. 3B).
In addition to drift-rate slope (v), starting point (z) and drift-rate criterion (vc), we tested whether masks affected the early sensory processes prior to the accumulation process, represented by non-decision time (Ter; see Fig. 1). Facial masks increased non-decision time by mean(SD) Termask = 14(26) ms relative to the unmasked facial expressions (mean(SD) Ter0 = 380(61) ms).
In sum, our descriptive analyses suggest that masked faces are judged as more friendly than unmasked faces, but that judging masked friendly faces takes more time. Our DDM analyses show that masking a face results in a loss of sensory information and in an unfriendly preconception towards facial expressions, yet also in a friendly perceptual bias. This suggests that, although diagnostic cues in masked faces bias our participants towards friendliness via a stimulus-driven process, our participants also have the preconception that masked faces are unfriendly.
Discussion
To investigate whether face masks induce a perceptual or preconceptual bias in the interpretation of ambiguous facial expressions, participants performed a task in which they had to decide whether ambiguous facial expressions, with or without a mask, were perceived as friendly or unfriendly. We fitted a drift–diffusion model (DDM) to their performance data to test for the independent effects of perceptual and preconceptual biases in these decisions. As expected, the analyses of descriptive data showed a lower sensitivity for the masked emotional expressions, generally resulting in slower and less correct responses for the masked compared to the unmasked facial expressions. This was also supported by our DDM analysis. Here, we found that mask decreased the strength of the relationship between stimulus emotion and drift rate (v-slopemask), suggesting that less information was available during the decision process. These effects are in line with studies showing that covering the mouth decreases the amount of information available to correctly recognize and identify an emotional expression7,8,9,10,11,12,13,14,15,16,18,19,20,21.
In addition, quantification of the choice data using the psychometric function shows a small but significant bias toward friendly choices for masked but not for unmasked facial expressions. This is unexpected, since the mouth is often considered more important for the recognition of happy than of angry facial expressions21,22,23,24,25,26. As such, based on perceptual processes alone, we expected that wearing a facial mask would especially hamper the identification of happy facial expressions, resulting in a bias away from friendly choices for masked stimuli. Instead, the small perceptual bias towards friendly choices suggests that covering the mouth with a facial mask has a larger effect on the misinterpretation of angry than happy facial expressions, which seems to be particularly the case for expressions with a high emotional ambiguity (i.e., − 10 and 10; see Fig. 2A). In contrast, analyses of average response times show slower response times for happy than for angry masked facial expressions with low (− 60 vs 60) or moderate (− 40 vs 40) emotional ambiguity. These contradictory findings in the descriptive data underscore the importance of fitting a computational model that considers both choice and response time data, allowing us to disentangle the underlying biasing mechanisms.
To measure possible systematic perceptual (stimulus-driven) or preconceptual (top-down) biases in the interpretation of masked and unmasked facial expressions, we fitted the drift–diffusion model to each participant's choice and response time data. Results show that facial masks affect both the perceptual and preconceptual processes, in opposite directions, with a preconceptual bias towards unfriendly and a perceptual bias towards friendly choices.
As expected, we found a small but significant shift in the starting point towards the unfriendly alternative for masked faces, relative to the unmasked condition. This bias in starting point suggests that participants start the decision process with asymmetrical decision thresholds (i.e., smaller for unfriendly vs friendly, in masked choices), resulting in faster and more frequent choices for the unfriendly alternative. Such a lower threshold might indicate a top-down preconception in which the alternatives already have a different representation for masked versus unmasked stimuli, prior to the initial choice. This might be due to the somewhat threatening connotation of the mask, providing a context which might bias the interpretation of ambiguous facial expressions27,28.
In addition to the preconceptual (top-down) bias observed for masked faces, we identified a perceptual (stimulus-driven) bias favoring friendly sensory information for masked faces, relative to the unmasked condition. One explanation for this unexpected perceptual bias may be related to diagnostic features of the mask itself. Studies investigating the impact of the emotional intensity of the facial expression show that happy expressions are more easily detected, even at low intensities63 and resolutions64. Given the difference in the detectability of happy vs angry expressions at low emotional intensity, it might be the case that the low-level visual features of the mask are closer to a happy than an angry mouth expression. This might particularly be the case for early perceptual processes that are primarily affected by low-level visual features38,65,66, in which the mask features might act as a surrogate smile for the face, biasing the effect away from unfriendliness. As such, the drift-rates might be biased towards sensory evidence in favor of the friendly alternative for masked faces, due to the asymmetry in happy/angry sensitivity, where the effect of happy information in the mask itself is stronger than that of the angry diagnostic features in the eyes of masked faces. Furthermore, it has been shown that low spatial frequencies of facial expressions are processed faster and earlier in the information stream than high spatial frequencies38,65,67,68. In light of this explanation, our choices might be biased first by the salient low-level spatial features (stimulus-driven), while the semantic (top-down) categorization based on the (asymmetric) decision thresholds is processed later in time65,66. Conceptually, the classic view of the DDM states that starting point effects are determined prior to the decision process. However, it is possible that the bias resulting from the asymmetric decision thresholds becomes more prominent in a later stage of the decision process.
Note that, in addition to the biasing effects of facial masks, both the baseline starting point (z0) and drift-rate criterion (vc0) demonstrated a slight but significant shift opposite to the effects of the masked condition (see Table 1). The negative initial starting point values could imply that our participants have the preconception that emotionally ambiguous faces are friendly, but this positive bias could also reflect a reversed effect on faces when the mask is not visible, resulting in a more positive connotation than usual. On the other hand, the positive drift-rate criterion towards unfriendly choices for unmasked faces might suggest that emotionality is not symmetrically distributed across the morphed dimension between happy and angry facial stimuli. This asymmetry seems in turn to disappear after adding a mask to the faces, suggesting that this asymmetry is driven by the mouth region. Future research that includes a condition with a mouth that is covered, but not by a mask, can resolve this issue by showing whether the perceptual bias towards friendliness is due to the additive effect of the mask, or due to the reduction of asymmetry in ambiguity by covering the mouth.
Several limitations must be noted in interpreting our findings. One limitation is the inconsistency observed in the model selection process. While both AIC and BIC were used for model selection, their values point to different winning models. The fullvc,z model with both a variable starting point (z) and drift-rate criterion (vc), for instance, yielded the lowest average AIC. Furthermore, this model was the most suitable one for the largest fraction of participants (32%). Conversely, the average BIC was lowest for the model that only had the variable drift-rate criterion (vc), implying a better fit by this metric. Despite the lower average BIC values for the vc-only (reducedvc) model, it is noteworthy that for more than half of the participants (54%) the null model fitted their data best. This discrepancy between criteria suggests a certain degree of uncertainty in the model selection and calls for careful interpretation of the models' outcomes. Given the substantial variability in model selection among participants, a reasonable argument could be made for fitting the fullvc,z model, as it would capture all effects, including null effects, thus also accounting for participants who did not exhibit any biasing results. However, it is important to acknowledge that the effects of the facial masks on the starting point were relatively small, which could potentially be attributed to overfitting of the model. Although our model comparison approach is aimed at addressing this possible issue, the divergence between AIC and BIC results highlights the need for further investigation.
In sum, to investigate whether face masks induce a loss of information and perceptual or preconceptual biases, we asked participants to decide whether masked or unmasked ambiguous facial expressions were perceived as friendly or unfriendly. Results show that wearing a face mask causes a loss of sensory information, a preconceptual bias towards unfriendly choices, and a perceptual (stimulus-driven) bias towards friendly choices for masked faces. These results suggest that people have a prior top-down tendency to interpret masked faces as unfriendly, regardless of the friendly (stimulus-driven) effects of the facial mask itself.
Methods
Participants
Participants (n = 145, mean(std) age = 22.3(4.4), 109 female) were invited via online media or Utrecht University’s Sona Systems (https://www.sona-systems.com/) to participate in an online experiment in exchange for course credit. Nine participants were excluded based on insufficient performance on the task (see descriptive analyses below). Informed consent was obtained from all participants. The experiment was approved by, and was in accordance with the guidelines and ethical standards of, the Ethics Committee of Utrecht University (EC-FETC18-129).
Materials and stimuli
Face stimuli were adapted from the Averaged Karolinska Directed Emotional Faces (AKDEF62). First, to control for possible sex differences in facial expressions69,70, we created an angry and a happy non-binary face by morphing the average (resp. angry and happy) male and female faces onto each other using WinMorph (version 3.01). Next, emotionally ambiguous faces were created by morphing the happy face towards the angry face in 41 incremental steps of 2.5% each. From this range of morphed non-binary facial stimuli with different angry/happy ratios, eight ambiguous expressions were chosen. For each face, a masked version was created by adding a surgical mask to the face using Adobe Photoshop (version 22.2). A normal surgical mask was chosen, as these were commonly seen in public at the time of data collection. The color of the masks was adjusted to resemble the black-and-white coloring of the face images. The edges of the mask were softened to incorporate it more naturally into the image.
Six of the facial expressions with ratios (angry/happy%) of 80/20%, 70/30%, 55/45%, 45/55%, 30/70% and 20/80%, with and without a mask, were used as main stimuli (see Fig. 1B). Two facial expressions (60/40% and 40/60%), with and without a mask served as filler trials to add more variance to the stimuli, reducing predictability of the six main facial stimuli. Stimulus emotion was expressed as the difference between the percentage happy and angry facial expressions (assuming 50/50 to have 0% evidence for either a happy or angry expression and thus full ambiguity) resulting in 6 (signed) emotion levels of − 60%, − 40%, − 10% for happy and 10%, 40% and 60% for angry facial expressions.
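For reference, the mapping from morph ratios to signed stimulus emotion can be expressed as a one-line computation (a minimal sketch; the variable names are ours):

```python
# Signed stimulus emotion = %angry - %happy, so a 50/50 morph gives 0
# (full ambiguity); negative values denote happy, positive values angry.
main_ratios = [(80, 20), (70, 30), (55, 45), (45, 55), (30, 70), (20, 80)]
emotion_levels = [angry - happy for angry, happy in main_ratios]
print(emotion_levels)  # → [60, 40, 10, -10, -40, -60]
```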
Procedure
A two-alternative forced-choice (2AFC) task was set up and hosted on Gorilla Experiment Builder (www.gorilla.sc)71. After consent was given, general demographic information was collected, after which the participant was assigned to one of the two (counterbalanced) versions of the 2AFC task. In the 2AFC task, participants were asked to decide as quickly as possible whether the facial expression was perceived as friendly or unfriendly, for a total of 608 trials. These trials consisted of 96 trials (48 masked) for each of the six stimulus emotions (− 60%, − 40%, − 10%, 10%, 40%, 60%) and 32 (16 masked) filler trials (stimulus emotions − 20% and 20%). To keep the participants engaged, the experiment was divided into 8 blocks of 76 trials each. Each block contained a random alternation of all possible conditions (mask × stimulus emotion).
Each trial started with a fixation cross that was presented for a randomly chosen duration between 600 and 1200 ms to prevent anticipatory responses to the stimulus. Next, the stimulus was shown on the screen, during which the participant was required to respond with the ‘C’- or ‘M’-key to indicate their choice. Stimulus display was terminated after a button press or a time-out of 2300 ms. Choice associations with these responses (‘friendly’ or ‘unfriendly’) were counterbalanced between participants. We chose to use the labels friendly and unfriendly since the created images were not fully ‘angry’ or ‘happy’. For example, the ambiguous facial expression with 10% stimulus emotion might not be perceived as angry per se, but still has a mildly ‘unfriendly’ expression. Subsequently, the participant’s response feedback was shown for 400 ms (a green check for correct and a red cross for false responses). Whenever a response was made during the fixation-cross period, an icon with the words “too fast” appeared. If participants did not respond within the given response time (2300 ms), the word “miss” was shown. Missed trials were excluded from analyses, as response times of > 2300 ms fell well beyond the upper bound of the interquartile range of RTs across the group (1228 ms).
Analyses
Descriptive data
For each participant, response times were log transformed, after which, for each condition, response times more than three standard deviations from the average response time were removed (on average, 4.7% of the data). Next, median response times were calculated for each condition separately. A 2 (masked vs. unmasked) × 2 (happy vs. angry) × 3 (stimulus emotion) repeated measures ANOVA was used to test for effects of mask, choice and emotion on response times.
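The preprocessing described above can be sketched as follows (an assumed implementation; the hypothetical RT values are illustrative):

```python
import numpy as np

def clean_and_summarize(rts, n_sd=3.0):
    """Log-transform RTs, drop values more than n_sd standard deviations
    from the mean (in log space), and return the median of the surviving
    RTs on the original scale. A sketch mirroring the preprocessing
    described in the text, not the authors' exact pipeline."""
    log_rt = np.log(np.asarray(rts, dtype=float))
    mu, sd = log_rt.mean(), log_rt.std()
    keep = np.abs(log_rt - mu) <= n_sd * sd
    return np.median(np.exp(log_rt[keep]))

# Hypothetical RTs in ms for one condition; 4900 ms is an extreme outlier
rts = [540, 560, 580, 600, 610, 620, 640, 655, 670,
       700, 720, 740, 780, 810, 850, 4900]
print(round(clean_and_summarize(rts), 1))  # → 655.0 (outlier removed)
```

Working in log space makes the trimming less sensitive to the right skew typical of RT distributions.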
To quantify effects of mask and stimulus emotion on choice performance, a logistic function (Eq. 1) was fit to the choice data for each participant:

P(‘unfriendly’) = 1 / (1 + e^−(b0 + b1 · k)),  (1)

where k is the (signed) stimulus emotion.
This function included two terms: an emotion-dependent term that reflected sensitivity to the stimulus and an emotion-independent term that reflected a choice bias towards either ‘friendly’ or ‘unfriendly’ choices. Non-parametric Wilcoxon signed-rank tests were used to test for a difference in sensitivity (b1) between masked and unmasked faces and for a possible choice bias (b0) in the masked and unmasked conditions. Based on these initial analyses, nine participants were excluded because of a significantly large deviance between the empirical data and the fit of the psychometric function (all nine deviances > 310, exceeding the critical χ2 values (< 242.2 for dfs < 280) at P = 0.05).
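Under this parameterization (bias term b0 plus sensitivity b1, which we assume matches Eq. 1), the psychometric fit can be sketched as a maximum-likelihood estimation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_psychometric(k, n_unfriendly, n_total):
    """Maximum-likelihood fit of a logistic psychometric function,
    P(unfriendly) = 1 / (1 + exp(-(b0 + b1 * k))),
    with b0 the choice bias and b1 the sensitivity. A sketch under the
    assumption that Eq. 1 has this parameterization."""
    def nll(params):
        b0, b1 = params
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * k)))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0)
        return -np.sum(n_unfriendly * np.log(p)
                       + (n_total - n_unfriendly) * np.log(1 - p))
    res = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-8, "fatol": 1e-9})
    return res.x

# Hypothetical 'unfriendly' counts out of 96 trials per emotion level,
# with k expressed as a fraction (-0.60 ... 0.60)
k = np.array([-0.60, -0.40, -0.10, 0.10, 0.40, 0.60])
n_total = np.full(6, 96)
n_unfriendly = np.array([3, 10, 35, 60, 88, 94])
b0, b1 = fit_psychometric(k, n_unfriendly, n_total)
print(round(b0, 2), round(b1, 2))  # bias near zero, sensitivity well above zero
```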
DDM analysis
In order to examine a possible perceptual or preconceptual bias in the behavioral data, we fitted the drift–diffusion model (DDM44; see Fig. 1B) to each participant's choice and response time data simultaneously using the pyDDM Python package72. Our aim was to determine which model parameters could account for a potential choice bias for masked facial expressions. Consequently, we performed model selection by fitting four models to each participant's data: (1) a null model, (2) a reducedvc model including a perceptual bias (Fig. 1C), (3) a reducedz model incorporating a preconceptual bias (Fig. 1D), and (4) a fullvc,z model that integrates both preconceptual and perceptual biases (see also49,50,51,53,55,56,57). Models were fitted to the data employing maximum likelihood estimation. Furthermore, we calculated the Akaike Information Criterion (AIC) and the more conservative Bayesian Information Criterion (BIC) to select the model exhibiting the best goodness-of-fit, as indicated by the lowest AIC or BIC value.
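The two criteria differ only in their complexity penalty, which is why they can disagree; a minimal sketch with hypothetical likelihood values:

```python
import numpy as np

def aic_bic(neg_log_lik, n_params, n_obs):
    """AIC and BIC from a fitted model's negative log-likelihood.
    BIC penalizes extra parameters more heavily once n_obs > ~7,
    which is why it tends to favor simpler models."""
    aic = 2 * n_params + 2 * neg_log_lik
    bic = n_params * np.log(n_obs) + 2 * neg_log_lik
    return aic, bic

# Hypothetical fits for one participant (608 trials): the fuller models
# gain a little likelihood at the cost of extra free parameters.
models = {"null": (700.0, 6), "reduced_vc": (696.0, 7),
          "reduced_z": (696.5, 7), "full_vc_z": (694.0, 8)}
for name, (nll, n_par) in models.items():
    aic, bic = aic_bic(nll, n_par, n_obs=608)
    print(f"{name:10s} AIC={aic:7.1f} BIC={bic:7.1f}")
# With these (made-up) numbers AIC prefers full_vc_z while BIC prefers
# reduced_vc, mirroring the kind of divergence reported in the text.
```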
For the null model, we assumed a linear relationship between drift-rate and stimulus emotion k (with values − 60, − 40, − 10, 10, 40, 60). The drift-rate incorporated an intercept (vc0), a slope (v-slope0), and an additional term (v-slopemask) to capture the effect of facial masks. Consequently, the drift-rate was defined as follows:

v = vc0 + (v-slope0 + C · v-slopemask) · k,  (2)
where C = 1 for masked and C = 0 for unmasked facial expressions. To capture possible changes in early sensory and late motor processes, we fitted non-decision time Ter0 with an additional term Termask for masked faces. Non-decision time Ter was defined as:

Ter = Ter0 + C · Termask,
where C = 1 for masked and C = 0 for unmasked facial expressions.
In addition to drift and non-decision time parameters, we included a starting point z0 and an exponentially collapsing decision threshold (a, with tau reflecting the rate of collapse). The collapsing decision threshold was added to the model to account for urgency effects due to the decision deadline55,56,57,73,74,75. Both z0 and a were fixed across the masked and unmasked conditions. Variability parameters (sz, st0 and sv) were not fitted and set to zero, as fitting these parameters can bias the estimation of the main parameters76,77,78. However, since variability in starting point (sz) can account for fast errors, which in turn can explain the (opposite) effects of the mask on choice and RT data, we repeated our model-selection procedure with sz added to each model (see Supplementary Material). Adding sz to the four models changed neither the outcome of our model-selection procedure nor the starting-point differences (see Table S1).
The reduced_vc model, which incorporates a perceptual bias, extends the null model by adding a drift-rate term vc_mask to the drift-rate intercept (vc_0), accounting for a possible bias in the drift criterion. Drift rate v is then defined as:

v = vc_0 + vc_mask · C + (vslope_0 + vslope_mask · C) · k
where k is stimulus emotion and C = 1 for masked and C = 0 for unmasked facial expressions.
The reduced_z model, which incorporates a preconceptual bias, extends the null model by including an additional starting-point term z_mask to account for a possible bias in the starting point of the accumulation process relative to the unmasked condition (z_0). The starting point was thus defined as:

z = z_0 + z_mask · C
where C = 1 for masked and C = 0 for unmasked facial expressions.
Finally, the full_vc,z model included all parameters of the null model plus both additional terms, vc_mask and z_mask, to account for a possible perceptual and preconceptual bias, respectively.
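The full model's parameterization can be sketched by combining both bias terms; setting vc_mask = 0 or z_mask = 0 recovers the reduced models, and setting both to zero recovers the null model. Parameter values in the test below are illustrative only:

```python
def drift_rate_full(k, masked, vc_0, vc_mask, vslope_0, vslope_mask):
    """Full-model drift rate: a perceptual-bias term (vc_mask) shifts the
    drift intercept for masked faces, on top of the null model's
    mask-dependent slope."""
    C = 1 if masked else 0
    return vc_0 + vc_mask * C + (vslope_0 + vslope_mask * C) * k

def starting_point(masked, z_0, z_mask):
    """Starting point with a preconceptual-bias term z_mask for masked faces."""
    C = 1 if masked else 0
    return z_0 + z_mask * C
```

For a fully ambiguous stimulus (k = 0), only vc_mask and z_mask can push masked faces toward one response: vc_mask biases the rate of evidence accumulation (perceptual), z_mask biases where accumulation starts (preconceptual).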
Model selection shows that the full_vc,z model outperforms the null, reduced_vc and reduced_z models, as reflected in both the average AIC values and the proportion of participants for whom this model fit best (see Table 2). For BIC, the picture is less distinct: average BIC values are lowest for the reduced_vc (perceptual bias) model, but only slightly lower than for the reduced_z (preconceptual bias) model (difference reduced_vc − reduced_z = 0.21). Notably, for a large fraction of participants (54%) the null model described the behavioral data best. Given the substantial variability in model selection across participants and the better fit of the full_vc,z model according to the less conservative AIC, we continued our analyses with the full_vc,z model to capture individual differences in parameter values (see Fig. 4 for a visual representation of the goodness of fit).
Finally, one-sample Wilcoxon signed-rank tests were used to test whether the additional bias effects z_mask and vc_mask differed significantly from zero.
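As a rough illustration of the statistic involved, a minimal one-sample Wilcoxon signed-rank computation (W+, the sum of ranks of positive values) can be sketched as follows. In practice one would use a statistics library such as scipy.stats.wilcoxon; the p-value computation (exact or normal approximation) is omitted here:

```python
def wilcoxon_w_plus(x):
    """W+ for a one-sample Wilcoxon signed-rank test against zero:
    drop exact zeros, rank absolute values (midranks for ties), and sum
    the ranks belonging to positive values. Sketch only; no p-value."""
    d = [v for v in x if v != 0]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        # find the run of tied absolute values and assign the midrank
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        midrank = (i + j) / 2 + 1
        for m in range(i, j + 1):
            ranks[order[m]] = midrank
        i = j + 1
    return sum(r for r, v in zip(ranks, d) if v > 0)
```

If most participants' z_mask values are negative (a bias toward the "unfriendly" boundary for masked faces), W+ falls well below its null expectation of n(n+1)/4, which is what the signed-rank test formalizes.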
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Eikenberry, S. E. et al. To mask or not to mask: Modeling the potential for face mask use by the general public to curtail the COVID-19 pandemic. Infect. Dis. Model. 5, 293–308 (2020).
van der Sande, M., Teunis, P. & Sabel, R. Professional and home-made face masks reduce exposure to respiratory infections among the general population. PLoS ONE 3, e2618 (2008).
Crivelli, C. & Fridlund, A. J. Facial displays are tools for social influence. Trends Cogn. Sci. 22, 388–399 (2018).
Gosselin, P., Kirouac, G. & Doré, F. Y. Components and recognition of facial expression in the communication of emotion by actors. J. Pers. Soc. Psychol. 68, 83–96 (1995).
Haxby, J. V., Hoffman, E. A. & Gobbini, M. I. Human neural systems for face recognition and social communication. Biol. Psychiatry 51, 59–67 (2002).
Nusseck, M., Cunningham, D. W., Wallraven, C. & Bülthoff, H. H. The contribution of different facial regions to the recognition of conversational expressions. J. Vis. 8, 1 (2008).
Calbi, M. et al. The consequences of COVID-19 on social interactions: An online study on face covering. Sci. Rep. 11, 2601 (2021).
Carbon, C.-C. Wearing face masks strongly confuses counterparts in reading emotions. Front. Psychol. 11, 566886 (2020).
Carbon, C.-C. About the acceptance of wearing face masks in times of a pandemic. i-Perception 12, 204166952110211 (2021).
Carragher, D. J. & Hancock, P. J. B. Surgical face masks impair human face matching performance for familiar and unfamiliar faces. Cogn. Res. 5, 59 (2020).
Freud, E., Stajduhar, A., Rosenbaum, R. S., Avidan, G. & Ganel, T. The COVID-19 pandemic masks the way people perceive faces. Sci. Rep. 10, 22344 (2020).
Gori, M., Schiatti, L. & Amadeo, M. B. Masking emotions: Face masks impair how we read emotions. Front. Psychol. 12, 669432 (2021).
Grundmann, F., Epstude, K. & Scheibe, S. Face masks reduce emotion-recognition accuracy and perceived closeness. (2020).
Marini, M., Ansani, A., Paglieri, F., Caruana, F. & Viola, M. The impact of facemasks on emotion recognition, trust attribution and re-identification. Sci. Rep. 11, 5577 (2021).
Nestor, M. S., Fischer, D. & Arnold, D. “Masking” our emotions: Botulinum toxin, facial expression, and well-being in the age of COVID-19. J. Cosmet. Dermatol. 19, 2154–2160 (2020).
Pazhoohi, F., Forby, L. & Kingstone, A. Facial masks affect emotion recognition in the general population and individuals with autistic traits. PLoS ONE 16, e0257740 (2021).
Spitzer, M. Masked education? The benefits and burdens of wearing face masks in schools during the current Corona pandemic. Trends Neurosci. Educ. 20, 100138 (2020).
Kotsia, I., Buciu, I. & Pitas, I. An analysis of facial expression recognition under partial facial image occlusion. Image Vis. Comput. 26, 1052–1067 (2008).
Neta, M. et al. All in the first glance: First fixation predicts individual differences in valence bias. Cogn. Emot. 31, 772–780 (2017).
Pell, P. J. & Richards, A. Cross-emotion facial expression aftereffects. Vis. Res. 51, 1889–1896 (2011).
Wegrzyn, M., Vogt, M., Kireclioglu, B., Schneider, J. & Kissler, J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS ONE 12, e0177239 (2017).
Calder, A. J. et al. Caricaturing facial expressions. Cognition 76, 105–146 (2000).
Calvo, M. G., Fernández-Martín, A., Gutiérrez-García, A. & Lundqvist, D. Selective eye fixations on diagnostic face regions of dynamic emotional expressions: KDEF-dyn database. Sci. Rep. 8, 17039 (2018).
Eisenbarth, H. & Alpers, G. W. Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion 11, 860–865 (2011).
Schurgin, M. W. et al. Eye movements during emotion recognition in faces. J. Vis. 14, 14–14 (2014).
Smith, M. L., Cottrell, G. W., Gosselin, F. & Schyns, P. G. Transmitting and decoding facial expressions. Psychol. Sci. 16, 184–189 (2005).
Bublatzky, F., Kavcıoğlu, F., Guerra, P., Doll, S. & Junghöfer, M. Contextual information resolves uncertainty about ambiguous facial emotions: Behavioral and magnetoencephalographic correlates. NeuroImage 215, 116814 (2020).
Hassin, R. R., Aviezer, H. & Bentin, S. Inherently ambiguous: Facial expressions of emotions. Context. Emotion Rev. 5, 60–65 (2013).
Kaminska, O. K. et al. Ambiguous at the second sight: Mixed facial expressions trigger late electrophysiological responses linked to lower social impressions. Cogn. Affect. Behav. Neurosci. 20, 441–454 (2020).
Kinchella, J. & Guo, K. Facial expression ambiguity and face image quality affect differently on expression interpretation bias. Perception 50, 328–342 (2021).
Niedenthal, P. M., Halberstadt, J. B., Margolin, J. & Innes-Ker, S. H. Emotional state and the detection of change in facial expression of emotion. Eur. J. Soc. Psychol. 30, 211–222 (2000).
Niedenthal, P. M., Brauer, M., Robin, L. & Innes-Ker, Å. H. Adult attachment and the perception of facial expression of emotion. J. Person. Soc. Psychol. 82, 419–433 (2002).
Olszanowski, M., Kaminska, O. K. & Winkielman, P. Mixed matters: Fluency impacts trust ratings when faces range on valence but not on motivational implications. Cogn. Emot. 32, 1032–1051 (2018).
Sylvester, C. M., Hudziak, J. J., Gaffrey, M. S., Barch, D. M. & Luby, J. L. Stimulus-driven attention, threat bias, and sad bias in youth with a history of an anxiety disorder or depression. J. Abnorm. Child Psychol. 44, 219–231 (2016).
Ipser, A. & Cook, R. Inducing a concurrent motor load reduces categorization precision for facial expressions. J. Exp. Psychol. Hum. Percep. Perform. 42, 706–718 (2016).
Niedenthal, P. M., Mermillod, M., Maringer, M. & Hess, U. The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behav. Brain Sci. 33, 417–433 (2010).
Calvo, M. G., Fernández-Martín, A. & Nummenmaa, L. Perceptual, categorical, and affective processing of ambiguous smiling facial expressions. Cognition 125, 373–393 (2012).
Elsherif, M. M., Saban, M. I. & Rotshtein, P. The perceptual saliency of fearful eyes and smiles: A signal detection study. PLoS ONE 12, e0173199 (2017).
Grundmann, F., Epstude, K. & Scheibe, S. Face masks reduce emotion-recognition accuracy and perceived closeness. PLOS ONE 16, e0249792 (2021).
Kret, M., Stekelenburg, J., Roelofs, K. & De Gelder, B. Perception of face and body expressions using electromyography, pupillometry and gaze measures. Front. Psychol. 4, 56 (2013).
Kret, M. E. & Fischer, A. H. Recognition of facial expressions is moderated by Islamic cues. Cogn. Emot. 32, 623–631 (2018).
Lane, J. et al. Impacts of impaired face perception on social interactions and quality of life in age-related macular degeneration: A qualitative study and new community resources. PLoS One 13, e0209218 (2018).
Wong, C. K. M. et al. Effect of facemasks on empathy and relational continuity: A randomised controlled trial in primary care. BMC Family Pract. 14, 200 (2013).
Ratcliff, R. A theory of memory retrieval. Psychol. Rev. 85, 59–108 (1978).
Voss, A., Rothermund, K. & Voss, J. Interpreting the parameters of the diffusion model: An empirical validation. Mem. Cognit. 32, 1206–1220 (2004).
Voss, A., Rothermund, K. & Brandtstädter, J. Interpreting ambiguous stimuli: Separating perceptual and judgmental biases. J. Exp. Soc. Psychol. 44, 1048–1056 (2008).
Mulder, M. J., Wagenmakers, E.-J., Ratcliff, R., Boekel, W. & Forstmann, B. U. Bias in the brain: A diffusion model analysis of prior probability and potential payoff. J. Neurosci. 32, 2335–2343 (2012).
Ratcliff, R. & McKoon, G. The diffusion decision model: Theory and data for two-choice decision tasks. Neural Comput. 20, 873–922 (2008).
Pleskac, T. J., Cesario, J. & Johnson, D. J. How race affects evidence accumulation during the decision to shoot. Psychonomic Bull. Rev. 25, 1301–1330 (2018).
Gesiarz, F., Cahill, D. & Sharot, T. Evidence accumulation is biased by motivation: A computational account. PLOS Comput. Biol. 15, e1007089 (2019).
Leong, Y. C., Hughes, B. L., Wang, Y. & Zaki, J. Neurocomputational mechanisms underlying motivated seeing. Nat. Hum. Behav. 3, 962–973 (2019).
Zhao, W. J., Walasek, L. & Bhatia, S. Psychological mechanisms of loss aversion: A drift-diffusion decomposition. Cogn. Psychol. 123, 56 (2020).
White, C. N. & Poldrack, R. A. Decomposing bias in different types of simple decisions. J. Exp. Psychol. Learn. Mem. Cogn. 40, 385–398 (2014).
Shinn, M., Ehrlich, D. B., Lee, D., Murray, J. D. & Seo, H. Confluence of timing and reward biases in perceptual decision-making dynamics. J. Neurosci. 40, 7326–7342 (2020).
de Gee, J. W. et al. Pupil-linked phasic arousal predicts a reduction of choice bias across species and decision domains. eLife 9, e54014 (2020).
Tardiff, N., Suriya-Arunroj, L., Cohen, Y. E. & Gold, J. I. Rule-based and stimulus-based cues bias auditory decisions via different computational and physiological mechanisms. PLOS Comput. Biol. 18, e1010601 (2022).
Urai, A. E., de Gee, J. W., Tsetsos, K. & Donner, T. H. Choice history biases subsequent evidence accumulation. eLife 8, e46331 (2019).
Gold, J. I. & Shadlen, M. N. The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574 (2007).
Mulder, M. J., van Maanen, L. & Forstmann, B. U. Perceptual decision neurosciences-a model-based review. Neuroscience 277, 872–884 (2014).
Ratcliff, R., Smith, P. L., Brown, S. D. & McKoon, G. Diffusion decision model: Current issues and history. Trends Cogn. Sci. 20, 260–281 (2016).
Wagenmakers, E.-J. Methodological and empirical developments for the Ratcliff diffusion model of response times and accuracy. Eur. J. Cogn. Psychol. 21, 641–671 (2009).
Lundqvist, D. & Litton, J. E. The Averaged Karolinska Directed Emotional Faces - AKDEF, CD ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet. (1998).
Guo, K. Holistic Gaze strategy to categorize facial expression of varying intensities. PLoS ONE 7, e42585 (2012).
Guo, K., Soornack, Y. & Settle, R. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vis. Res. 157, 112–122 (2019).
Du, S. & Martinez, A. M. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion. J. Vis. 13, 13–13 (2013).
Morris, J. S., Öhman, A. & Dolan, R. J. A subcortical pathway to the right amygdala mediating “unseen” fear. Proc. Natl. Acad. Sci. U.S.A. 96, 1680–1685 (1999).
Smith, F. W. & Schyns, P. G. Smile through your fear and sadness: Transmitting and identifying facial expression signals over a range of viewing distances. Psychol. Sci. 20, 1202–1208 (2009).
Du, S. & Martinez, A. M. The resolution of facial expressions of emotion. J. Vis. 11, 24–24 (2011).
Dores, A. R., Barbosa, F., Queirós, C., Carvalho, I. P. & Griffiths, M. D. Recognizing emotions through facial expressions: A large-scale experimental study. IJERPH 17, 7420 (2020).
Wright, D. B. & Sladden, B. An own gender bias and the importance of hair in face recognition. Acta Psychologica 114, 101–114 (2003).
Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N. & Evershed, J. K. Gorilla in our midst: An online behavioral experiment builder. Behav. Res. Methods 52, 388–407 (2020).
Shinn, M., Lam, N. H. & Murray, J. D. A flexible framework for simulating and fitting generalized drift-diffusion models. eLife 9, e56938 (2020).
Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N. & Pouget, A. The cost of accumulating evidence in perceptual decision making. J. Neurosci. 32, 3612–3628 (2012).
Hawkins, G. E., Forstmann, B. U., Wagenmakers, E.-J., Ratcliff, R. & Brown, S. D. Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. J. Neurosci. 35, 2476–2484 (2015).
Murphy, P. R., Boonstra, E. & Nieuwenhuis, S. Global gain modulation generates time-dependent urgency during perceptual choice in humans. Nat. Commun. 7, 13526 (2016).
Boehm, U. et al. Estimating across-trial variability parameters of the Diffusion Decision Model: Expert advice and recommendations. J. Math. Psychol. 87, 46–75 (2018).
Lerche, V. & Voss, A. Retest reliability of the parameters of the Ratcliff diffusion model. Psychol. Res. 81, 629–652 (2017).
Tillman, G., Van Zandt, T. & Logan, G. D. Sequential sampling models without random between-trial variability: The racing diffusion model of speeded decision making. Psychon. Bull. Rev. 27, 911–936 (2020).
Author information
Authors and Affiliations
Contributions
M.J.M.: wrote manuscript, designed experiment, collected data, analysed data, prepared figures; F.P.: wrote manuscript, piloted experiment, prepared stimuli, collected data; D.T.: wrote manuscript; J.L.K.: wrote manuscript. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mulder, M.J., Prummer, F., Terburg, D. et al. Drift–diffusion modeling reveals that masked faces are preconceived as unfriendly. Sci Rep 13, 16982 (2023). https://doi.org/10.1038/s41598-023-44162-y