Introduction

The same object can project an almost infinite number of images onto our retina. It can be seen from afar or near, from the left or right, from the top or bottom, occluded by other objects, in different backgrounds, in bright sunshine or twilight. When compared pixel-by-pixel, such images might have less in common than images of two completely different objects, such as two people seen from the same viewpoint or two words such as CAT vs. OAT (compare to cat and oat). These challenges are often collectively grouped under the term high-level vision and are generally thought to be solved by later stages of the ventral visual stream1. The ventral stream supports the visual perception and recognition of complex forms and objects2,3,4,5,6, including visually presented faces and words.

Several studies have focused on the domain-generality or domain-specificity of visual word and face processing, both behaviorally and in terms of their neural substrates in the ventral stream (e.g.7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24). On the surface, faces and words have little in common. Accordingly, as noted by Robotham and Starrfelt21, the double dissociation of word and face processing is textbook knowledge. Farah famously reviewed dozens of case studies on people with visual associative agnosias for words, faces, or other objects12,25,26. The patterns of co-occurrences among these agnosias were consistent with the existence of two underlying visual recognition abilities, one highly important for words and not needed for faces, and another of high importance for faces and not needed for words26. Faces and words also consistently evoke high activity in relatively anatomically separable neural patches of the high-level ventral stream2, often interpreted in support of their domain-specificity.

However, links between face and word processing have more recently been proposed. For example, Dehaene et al.11 suggested that literacy, like other forms of expertise, leads to cortical competition effects in regions of the ventral visual stream (see also27,28). More specifically, literacy seems to induce competition with the representation of faces in the left fusiform gyrus, leading the authors to speculate that our face perception abilities suffer in proportion to our reading skills. Behrmann and Plaut7,29 also offer an alternative to the traditional view that higher levels of the ventral visual stream consist of independent domain-specific regions dedicated to the processing of particular categories. They acknowledge that faces and words have the strongest claim of all object classes to domain-specificity, with the potential for distinct cortical regions specialized for their high-level visuoperceptual analysis. They however argue that face and word representations are not independent, and that functional specialization of brain regions is graded, and cite the partial co-mingling of face and word processing, the association between the acquisition of word and face recognition skills, and their related neural mechanisms. In a critical response to Behrmann and Plaut, Susilo and Duchaine23 suggest that at least some of the mechanisms involved in face and word processing are independent and cite neuropsychological cases showing a double dissociation between face and word recognition (see also Behrmann and Plaut7 for their response to Susilo and Duchaine23). Robotham and Starrfelt21 also provide convincing evidence that face and word recognition abilities can be selectively affected.

An almost entirely independent large body of work concerns the possible causes of developmental dyslexia, and visual factors are not generally thought to play a role (but see e.g.30,31; for reviews, see e.g.32,33). However, according to our new high-level visual dysfunction hypothesis, reading problems could in some cases be a salient manifestation of a deficit of visual cognition stemming from disrupted functioning within the ventral visual stream. For a recent review of this hypothesis, where we discuss work on functional neuroimaging, structural imaging, electrophysiology, and behavior that provides evidence for a link between high-level visual impairment and dyslexia, see34. Supporting the hypothesis, hypoactivity of ventral stream areas, particularly in the left hemisphere, appears to be a universal marker for dyslexia, as it is found both for dyslexic children and adults, and across deep and shallow orthographies35,36,37. As hypoactivation in the left fusiform and bilateral occipitotemporal gyri is already present in preliterate children with a familial risk for dyslexia38 (see also39), functional abnormalities of ventral stream regions are unlikely to reflect only reading failure, might not be specific to print, and may play a causal role in dyslexia.

Accordingly, our previous studies indicate that some people with dyslexia have problems with tasks thought to rely on high-level ventral stream regions, including the visual perception and recognition of faces40,41,42,43. Studies on the face perception abilities of dyslexic readers are however quite inconsistent: some report abnormalities14,39,41,42,43,44,45,46,47,48, others find no such differences49,50,51,52,53, and yet others report mixed results54,55 (for details, see41). A possible reason for this discrepancy is that asking whether faces and words are associated or dissociated may be the wrong question, as the answer could be: both and neither, depending on what visual characteristics or neural mechanisms are important for the task at hand.

What types of visual characteristics and neural mechanisms might these be? As regions hypoactive in dyslexic readers37 may overlap with face-selective regions of the left ventral visual stream43, a starting point is to briefly go over known characteristics of visual face processing in the left hemisphere (for a review on laterality effects in face perception, see56). While right hemisphere regions appear to be automatically recruited by faces, left hemisphere regions seem to be flexibly recruited based on context, task, or attentional demands57,58,59. Left hemisphere face processing is however not just a poor replica of that of the right, as it excels in some types of face analysis. The left hemisphere shows an advantage in a same-different task for faces when the faces can only be distinguished based on a feature (e.g., different nose60,61,62). Later neuroimaging studies have also indicated that left hemisphere regions are relatively more involved in part- or feature-based face processing while the right hemisphere regions are more important for processing whole faces63.

This is a particularly interesting pattern, as configural (or holistic/global) and feature-based processing might provide a dual route to recognition (12,64; concept use varies64,65,66 but configural processing is sometimes used interchangeably with holistic or global processing, and feature-based processing is sometimes referred to as featural, componential, part-based, local, or analytical processing). Although holistic or configural processing of words contributes to reading to a degree, recognition by parts is generally thought to be of much greater importance67,68,69,70 (results on featural vs. configural word processing deficits in dyslexic readers are mixed, see e.g.71,72). As an illustrative example, changing a single feature (assuming that letters are features) in the word CAT to BAT leads to a change in identity, while changing the distance between features from CAT to C A T—a global or configural change—preserves the word’s identity. Holistic processing of faces appears to be intact in dyslexic readers14,43,71, as evidenced by normal face inversion and composite face effects, leading us to suggest that they may instead be “…specifically impaired at the part-based [i.e. feature-based] processing of words, faces, and other objects, consistent with their primarily left-lateralized dysfunction of the fusiform gyrus.”43. This would be expected to have serious consequences for visual word recognition and less severe yet detectable consequences for other visual tasks that partially rely on such processing. This prediction is tested here.

In the current study, adults with varying degrees of reading abilities, ranging from expert readers to severely impaired dyslexic readers, completed both a feature-based and a global form face matching task. We predicted a dissociation between word and face processing in cases where a face task could effectively be solved by processing the global form of faces (minimal part decomposition), whereas we expected to see an association when a task could most effectively be solved by additionally or instead relying on the feature-based processing of faces (extensive part decomposition). Establishing the association of reading problems with one type of face processing (feature-based) but their dissociation from another type of face processing (global form) provides important information on domain-specificity vs. domain-generality of visual word and face processing and for our high-level visual dysfunction hypothesis of developmental dyslexia.

Method

Procedure

The study was approved by the National Bioethics Committee of Iceland (protocol 14-027) and reported to the Icelandic Data Protection Authority. The study was performed in accordance with the Declaration of Helsinki and Icelandic guidelines/regulations on scientific studies. Participants were tested in a sound-attenuated chamber. All participants gave informed consent. All tasks were computerized (Dell OptiPlex 760 computer, 17-inch monitor, 1024 × 768 pixels, 85 Hz) using PsychoPy73,74. Participants filled out questionnaires on background variables, their history of reading problems, and current and childhood symptoms of ADHD. Participants then completed face perception tasks followed by visual search tasks; viewing distance was set to 57 cm by the use of a chinrest. Participants completed a lexical decision task and were finally asked to read out loud. Data from visual search and lexical decision are analyzed in detail elsewhere75 (see also Supplementary Material S4. Regression Models Accounting for Visual Search).

The Adult Reading History Questionnaire (ARHQ) is a 23-item self-report questionnaire designed to measure people’s history of reading problems76 (e.g. “Which of the following most nearly describes your attitude toward school when you were a child?”, “How much difficulty did you have learning to read in elementary school?”). Questions are answered on a 5-point Likert scale ranging from 0 to 4. In this study, the Icelandic version of the ARHQ (ARHQ-Ice77) was used. As recommended77, only 22 items were analyzed in the current study, resulting in a raw score between 0 and 88; these were rescaled to range from 0 to 1. Higher scores are associated with greater reading difficulties, and a score above 0.43 is a suggested cut-off point when screening for dyslexia77. The Icelandic adaptation of the ARHQ is a valid and reliable (Cronbach’s alpha 0.92) screening instrument for dyslexia77.
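The scoring procedure described above can be sketched as follows. The function name and example responses are ours, for illustration only; they are not part of the questionnaire.

```python
# Sketch of ARHQ scoring as described above: 22 analyzed items, each
# answered on a 0-4 Likert scale, summed to a raw score (0-88) and
# rescaled to range from 0 to 1.
def score_arhq(item_responses):
    assert len(item_responses) == 22
    assert all(0 <= r <= 4 for r in item_responses)
    raw = sum(item_responses)      # raw score between 0 and 88
    return raw / 88                # rescaled score between 0 and 1

score = score_arhq([2] * 22)       # uniform mid-scale responses
print(score, score > 0.43)         # → 0.5 True (above the screening cut-off)
```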

Behavioral Evaluation Questionnaire for Adults I and II

Two separate questionnaires regarding ADHD symptoms as defined by the DSM-IV were administered79 (e.g. “Fails to give close attention to details or makes careless mistakes in work or schoolwork”, “Fidgets with hands or feet or squirms in seat”). Questionnaires were self-reports of behavior in the past six months (ADHD-I) and childhood symptoms from ages 5 to 12 years (ADHD-II). Participants answered on a 4-point Likert scale, resulting in a total score from 0 to 54 on each list, where higher scores imply more ADHD-related symptoms. These questionnaires are reliable and valid screening tools for ADHD79.

Face matching

The stimulus set, developed by Van Belle and colleagues78, has been used to measure global or configural as well as feature-based processing of faces. As described in Van Belle et al.78, the stimulus set was developed from 15 pairs of Caucasian male faces, all with identical skin structure and color and no extra-facial cues (e.g. hair, clothing, or makeup). From each pair of faces, A and B, two new faces were created, one of which had the global form (the form of the skull, muscles and fat structure) of face A and the internal features (e.g. the eyes, nose, and mouth) of face B, and the other whose global form was taken from face B but whose internal features came from face A (see Fig. 1). This resulted in a total of 60 face stimuli. For every face in the stimulus set there was thus one face that differed from it only by its global form and another face that differed from it exclusively by its features.

A trial started with the appearance of a dark gray bar on a light gray background. The bar reached all the way from the top to the bottom of the screen (width approximately 6°). A light gray oval hole or window (approximately 3° × 4°) was shown in the middle of the bar. After 1000 ms, a sample face was shown in the middle of the hole along with two choice faces (match and foil) approximately 5° to the left and right of screen center (Fig. 1). Choice face size was approximately 70% of sample face size. The participant’s task was to determine which of the choice faces was more similar to the sample face. Size differences and the oval window were introduced to minimize possible usage of low-level image-based matching, and to keep accuracy off ceiling. The stimuli stayed onscreen until response. The participants pushed the left arrow key to indicate that the face on the left resembled the sample face, and the right arrow key if the face on the right was deemed more similar to the sample face.

Participants listened to prerecorded instructions, completed two practice trials with cartoon faces, and then completed six blocks of experimental trials with 60 trials per block, 360 trials in total. Global form and feature-based trials were intermixed within blocks. All three faces (sample, match, and foil) had the same orientation in each trial. Trial type (feature-based or global form face matching), orientation (facing 30° left, straight ahead, or 30° right), and location of the match face (left or right of screen center) were fully crossed (30 trials of each combination). Trials appeared in the same randomized order for each subject. Trial order was randomized until there was no correlation between trial order and trial type (R2 = 8E−05).
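The constrained randomization of trial order can be sketched as below. This is a simplification limited to trial type (the actual design also crossed orientation and match location), and all names and thresholds other than the 360-trial count and the near-zero R2 criterion are our own.

```python
import random

# Sketch of the trial-order randomization described above: reshuffle the
# trial list until trial position is essentially uncorrelated with trial
# type. Feature-based trials are coded 1, global form trials 0.
def make_trial_order(n_trials=360, r2_threshold=1e-4, seed=0):
    rng = random.Random(seed)
    trials = [1] * (n_trials // 2) + [0] * (n_trials // 2)
    while True:
        rng.shuffle(trials)
        order = range(1, n_trials + 1)
        # Squared Pearson correlation between trial position and trial type.
        mean_x = sum(order) / n_trials
        mean_y = sum(trials) / n_trials
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(order, trials))
        var_x = sum((x - mean_x) ** 2 for x in order)
        var_y = sum((y - mean_y) ** 2 for y in trials)
        r2 = cov ** 2 / (var_x * var_y)
        if r2 < r2_threshold:
            return trials, r2

trials, r2 = make_trial_order()
print(len(trials), r2 < 1e-4)
```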

Poor readers may compensate for a deficit in a lower-level process, such as word recognition/decoding, by increasingly relying on context80. For this reason, the reading tests used are context-free by design. In IS-FORM, participants cannot guess the next to-be-read word based on previously read words, nor can they easily guess the entire word after having read its first few letters as Icelandic is an inflected language so the same word can have many endings (e.g. the word for “reading” can be “lestur”, “lestri”, “lestrar”, “lesturinn”, “lestrinum” etc. depending on context). IS-PSEUDO only includes phonologically valid nonsense words, which by definition do not mean anything, yet dyslexic readers have problems in reading such pseudowords81,82.

IS-FORM and IS-PSEUDO reading tests measure (pseudo)words read per minute and percentage of correctly read (pseudo)word forms40,43. Dyslexic readers’ performance on both tests has been shown to be markedly poorer than that of typical readers40,41,43. IS-FORM includes two lists of 128 words each. One contains common Icelandic word forms and the other uncommon word forms. IS-PSEUDO contains one list of 128 pseudowords. The participants were instructed to read each word list aloud as fast as possible, while making as few errors as possible, in the following order: IS-FORM common, IS-FORM uncommon, and IS-PSEUDO.

Results

In the analyses to follow, we estimate to which degree ADHD measures (current ADHD symptoms, childhood ADHD symptoms, ADHD diagnosis) can account for other patterns in our data. For comparison of data with and without the exclusion of participants with a previous ADHD diagnosis, see Supplementary Information.

Other disorders

No participants reported a previous diagnosis of autism spectrum disorders or language disorders other than dyslexia. One typical reader reported poor hearing, and two dyslexic readers reported being dyscalculic. These participants were included in the sample but excluding them would have minimal impact on our analyses.

Face matching

Overall group differences and correlations

As seen in Fig. 2, dyslexic readers as a group were less accurate than typical readers on feature-based face matching but not on global form face matching (feature-based: dyslexic readers M = 66.6%, SD = 5.98; typical readers M = 69.3%, SD = 4.33; t(58) = 2.029, p = 0.047, d = 0.517; global form: dyslexic readers M = 83.7%, SD = 5.87; typical readers M = 84.6%, SD = 4.55; t(58) = 0.733, p = 0.466, d = 0.188). The null result for global form but not feature-based face matching could not be explained by a difference in task reliability: the global form face matching task was in fact slightly more reliable (α = 0.778) than the feature-based face matching task (α = 0.664), and it is generally easier to detect a group difference with a more reliable measure.
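The feature-based effect size can be reproduced from the summary statistics above, assuming a pooled-SD Cohen's d and two equal groups of 30 participants (consistent with df = 58); the function itself is our own sketch.

```python
import math

# Pooled-SD Cohen's d computed from summary statistics (means, SDs, ns).
# Our own sketch; group sizes of 30 each are an assumption implied by df = 58.
def cohens_d(m1, sd1, n1, m2, sd2, n2):
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return abs(m1 - m2) / math.sqrt(pooled_var)

# Feature-based face matching: dyslexic vs. typical readers.
d_feature = cohens_d(66.6, 5.98, 30, 69.3, 4.33, 30)
print(round(d_feature, 3))  # → 0.517, matching the reported effect size
```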

Specific effects: regression models

The logistic regression model at stage 1 was significant, χ2(3) = 23.739, p < 0.001, R2Nagelkerke = 0.442. At stage 1 of the linear regression models, ADHD measures also explained a significant amount of the variance in ARHQ scores (F(3,56) = 9.301, p < 0.001, R2 = 0.333, R2adjusted = 0.297) and reading accuracy (F(3,56) = 3.268, p = 0.028, R2 = 0.149, R2adjusted = 0.103), but not reading speed (F(3,56) = 2.383, p = 0.079, R2 = 0.113, R2adjusted = 0.066). As expected, ADHD measures were therefore highly predictive of dyslexia and of reading problems in general.

The addition of global form face matching accuracy and face matching response times at stage 2 did not improve any models (model change for group membership: χ2(2) = 0.786, p = 0.675, R2Nagelkerke change = 0.012; for ARHQ: F(2,54) = 0.268, p = 0.766, R2 change = 0.007; for reading speed: F(2,54) = 0.351, p = 0.706, R2 change = 0.011; for reading accuracy: F(2,54) = 1.316, p = 0.277, R2 change = 0.040).

Adding the feature-based face matching accuracy at stage 3 significantly improved all models (model change for group membership: χ2(1) = 7.559, p = 0.006, R2Nagelkerke change = 0.106; for ARHQ: F(1,53) = 9.114, p = 0.004, R2 change = 0.097; for reading speed: F(1,53) = 6.002, p = 0.018, R2 change = 0.089; for reading accuracy: F(1,53) = 5.319, p = 0.025, R2 change = 0.074). When all other variables were held constant, lower feature-based face matching accuracy was associated with an increased likelihood of being dyslexic, a greater history of reading problems, and slower and less accurate reading. Poorer task-specific performance for feature-based face matching was therefore associated with poorer reading-specific measures. The final stage 3 models are summarized in Table 1 (see also Supplementary Material S4. Regression Models Accounting for Visual Search).
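The stage-wise logic of these hierarchical models can be sketched on synthetic data. Everything below is simulated for illustration; the variables, coefficients, and noise are ours, not the study's data or exact model specification. Only the degrees of freedom mirror the reported linear models (F tests on 2,54 and 1,53 df).

```python
import numpy as np
from scipy import stats

# Sketch of a three-stage hierarchical regression on synthetic data:
# stage 1 enters ADHD measures, stage 2 adds global form face matching
# accuracy and response times, stage 3 adds feature-based accuracy.
# Each stage is evaluated by the incremental F-test on the R-squared change.
rng = np.random.default_rng(0)
n = 60
adhd = rng.normal(size=(n, 3))            # stage 1: three ADHD measures
global_face = rng.normal(size=(n, 2))     # stage 2: global accuracy + RT
feature_face = rng.normal(size=(n, 1))    # stage 3: feature-based accuracy
# Simulated outcome depends on ADHD and feature-based accuracy only.
y = adhd @ [0.5, 0.3, 0.2] + 0.6 * feature_face[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def f_change(r2_full, r2_reduced, n, k_full, k_added):
    df2 = n - k_full - 1
    f = ((r2_full - r2_reduced) / k_added) / ((1 - r2_full) / df2)
    return f, stats.f.sf(f, k_added, df2)

r2_1 = r_squared(adhd, y)
r2_2 = r_squared(np.hstack([adhd, global_face]), y)
r2_3 = r_squared(np.hstack([adhd, global_face, feature_face]), y)
f2, p2 = f_change(r2_2, r2_1, n, 5, 2)   # stage 2 change, F(2,54)
f3, p3 = f_change(r2_3, r2_2, n, 6, 1)   # stage 3 change, F(1,53)
print(p2, p3)
```

With this data-generating process, the stage 2 change should typically be non-significant and the stage 3 change significant, mirroring the pattern of results reported above.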

Laterality effects

Exploratory analyses (reported without p values, as these lose their meaning outside confirmatory hypothesis testing84,85; for a partial rebuttal, see86) revealed opposite laterality effects (Fig. 3) for feature-based and global face processing based on facing direction (see also Supplementary Information: Laterality Effects), which to the best of our knowledge have not previously been documented in the literature, even for typical readers. As described in the Methods section, the three faces shown on each trial were all oriented in the same direction: 30° leftward, forward, or 30° rightward. In feature-based face matching, average performance was noticeably better on right-facing (M = 70.5%) than left-facing (M = 65.1%) trials (d = 0.778). The opposite was true for global form face matching, where people tended to perform better on left-facing (M = 85.6%) than right-facing (M = 83.3%) trials (d = 0.431). Both laterality effects were consistently seen, as evidenced by their moderate-to-large effect sizes. The effect size estimate for the difference between these laterality effects across the two types of face matching trials was even larger (d = 0.968). There was however no strong correlation between the two effects (left minus right accuracy difference for feature-based vs. for global form face matching, r = 0.180), which could indicate that they are independent of each other.

Discussion

The current study indicates that dyslexic readers tend to be worse at feature- or part-based processing of faces compared to typical readers, while no group differences were found in global or configural processing of faces. Establishing such a specific feature-based processing deficit was the main reason for conducting this study as it is of theoretical importance for theories on the domain-specificity vs. domain-generality of visual word and face processing as well as for our high-level visual dysfunction hypothesis of developmental dyslexia.

The study shines a light on the codependence versus independence of visual word and face processing, and more generally on domain-specificity vs. domain-generality within the visual system. Traditionally, words and faces are thought to be independently processed, perhaps even in independent cortical regions of the two hemispheres, words in the left hemisphere and faces in the right (see e.g.17,88). According to the many-to-many hypothesis7, no single brain region is however responsible for the visual recognition of objects such as faces or words. Instead, overlapping, distributed, bilateral brain circuits mediate the recognition of both object classes. Our results support the many-to-many view that faces and words share common neural resources within the ventral visual stream.

However, the many-to-many view should be further constrained by the type of processing involved. Here we show that processing the global form of faces apparently shares minimal—if any—resources with visual word processing, while word and face perception are associated when the latter requires the processing of fine-grained visual features of a face (for related work on brain damaged patients, see e.g.89,90).

The current results are at odds with the prediction of Dehaene et al.11 of an inverse relationship between face and word recognition, as reading skills either have no relationship with face perception abilities (global form face matching) or a positive relationship with face perception abilities (feature-based face matching). More generally, they go against the destructive-competition version of the neuronal recycling hypothesis11,27 which suggests that words encroach on cortical space and computational resources that otherwise would have been dedicated to objects such as faces to the detriment of their processing; for similar conclusions based on research on illiterates, see van Paridon et al.91.

The fact that reading problems are associated with a specific feature-based face processing deficit can be compared with acquired and developmental prosopagnosia. The face specificity of prosopagnosia, like the word specificity of acquired and developmental reading problems, has long been debated. A particular disruption of global/configural/holistic processing has however been reported in prosopagnosia (although the specificity of this effect might be better established for acquired prosopagnosia92,93,94,95,96,97,98,99). The current results are consistent with the intriguing possibility that prosopagnosia and our hypothesized high-level visual dysfunction subtype of developmental dyslexia are essentially mirror versions of each other (see7). This needs further validation.

Our results on laterality effects were exploratory and need to be interpreted with caution. As it has long been debated whether faces and words are primarily processed in opposite hemispheres, these laterality effects nonetheless warrant further discussion (see also Supplementary Information: S2 Laterality Effects). Furubacke et al.13 have already called for a modification of the many-to-many hypothesis so as to take laterality of function into account. They report that visual face and word processing share resources only when tasks rely on the same hemisphere—focusing on face identity thus shares some resources with focusing on handwriting, as both rely on right hemisphere processing, and focusing on word identity shares resources with focusing on facial speech sounds/lip reading, as they tap into left hemisphere processing (for more information on left hemisphere processing of lip reading and audio-visual integration of speech, see e.g.100,101,102). While a large body of research suggests that the right hemisphere is highly important for identifying faces and the left hemisphere for identifying words, face-responsive and word-responsive visual regions are nonetheless found bilaterally2 and unilateral lesions can lead to simultaneous face and word recognition deficits19. In accordance with some other literature (see Supplementary Information: S2 Laterality Effects for further discussion), the current results suggest that both hemispheres support the discrimination of faces but to a different degree depending on the type of processing.

Global form face processing laterality effects were consistent with a right hemisphere lateralization, and we found no evidence for overall group differences (dyslexic vs. typical readers) in lateralization for this task. This can be contrasted with ideas of the joint development of hemispheric lateralization for words and faces, where the general left visual field (right hemisphere) superiority for faces is reportedly associated with greater reading abilities and has been suggested to be driven by left hemisphere word lateralization103 (see also7,11,29,104,105). This also seems somewhat at odds with previous work where a left visual field (right hemisphere) advantage for faces was reported for typical readers while no apparent face lateralization was found for dyslexic readers14, or where no consistent face lateralization was found in either group48. Further exploratory analysis does however hint at a possible difference between typical and dyslexic readers in the relationship between reading and the lateralization of global form face processing; see Supplementary Information: S2 Laterality Effects and Supplementary Fig. S3. This needs to be independently replicated.

For feature-based face matching, both groups showed laterality effects consistent with left hemisphere lateralization. This effect was particularly strong in the dyslexic group (d > 1). The replicability of this exploratory analysis should be independently verified, but it is possible that weaknesses in reading are associated with greater left hemisphere lateralization of feature-based face processing. This could be consistent with a weaker left-side bias for Chinese character recognition of dyslexic readers in Hong Kong, likely indicating lessened right hemisphere lateralization/greater left hemisphere lateralization106. The authors suggest that dyslexic readers may not form appropriate part-based representations in the right hemisphere.

From the beginning, our ideas have been guided by the possibility that high-level visual problems associated with dyslexia might be feature-based and left-lateralized (for further discussion, see43). The current results fit well with our suggestion of the former but not the latter. The dyslexic group had specific weaknesses in feature-based processing of faces. However, overall group differences in feature-based face processing accuracy were seemingly independent of any differences in laterality effects and were, if anything, smallest for rightward-oriented faces (assumed left hemisphere processing). These results fit better with reports on people with a posterior cerebral artery stroke, where patients with word recognition difficulties could also have problems in face recognition independently of the affected hemisphere109 (see also19). There are also reports of abnormal processing of faces in the bilateral ventral stream of impaired readers110, primarily in the right hemisphere105, and in the left hemisphere39. A possible bilateral processing deficit does not necessarily go against the idea of a feature-based processing deficit as the right hemisphere has been suggested to flexibly switch between holistic and part-based representations depending on the type of information being used111. For example, expert readers of Chinese characters process them less holistically than novices—consistent with the importance of featural information in Chinese character recognition—yet show hints of increased recruitment of right hemisphere regions for these characters112,113, while experts in recognizing Greebles (computer-generated novel objects) show increased holistic processing of these objects as well as increased recruitment of right hemisphere regions (in the fusiform face area)114. It is possible that the right hemisphere becomes sensitive to whatever information is most diagnostic for a particular object class.
Regardless of possible hemispheric differences, our results are consistent with unusual or faulty high-level visual mechanisms in dyslexia, which we suggest here are specifically feature-based and not global or configural.

While dyslexia was originally considered a visual deficit115,116,117,118, the focus of dyslexia research moved from perceptual-based theories to language-based theories, particularly to the processing of phonological information (e.g.119; for an overview, see33). The evidence for phonological problems in dyslexia is strong, and it is not our intent to suggest that a visual theory of reading problems should replace the phonological view or other evidence-based views. However, several factors could contribute to reading problems, and interest in the contribution of visual factors has recently resurfaced. Our high-level visual dysfunction hypothesis is a new idea on the causes of reading problems and its empirical testing is thus greatly needed.

The current study suggests that reading problems are independent of the processing of global form but associated with weaknesses in feature-based processing, generally believed to play a much larger role in reading. This is consistent with a high-level visual dysfunction subtype of developmental dyslexia characterized by weaknesses in feature-based processing. It should nonetheless be explicitly stated that we found an association and not direct evidence for a causal role in developmental dyslexia. Finding such a group effect does not indicate that all dyslexic readers have “ventral visual stream problems” nor does it indicate that those who potentially do would have crippling face processing deficits in everyday life. It also does not indicate that high-level visual problems go hand in hand with all reading problems. Reading is a complex skill which can be broken down into several subskills, only some of which might be expected to have anything to do with visual cognition. Our reading measures were specifically focused on visual word decoding, and not on reading comprehension, as visual perception mechanisms are more likely to partake in the former than in the latter. This is also consistent with the fact that people with developmental dyslexia have difficulties with accurately and fluently recognizing and decoding words, while people with specific reading comprehension deficits have poor reading comprehension despite adequate word recognition and decoding abilities, and only the former group but not the latter shows abnormal functioning in high-level regions of the ventral visual stream120.

There are some reasons to believe that problems with feature-based face processing might be underestimated in the current study. First, the sample included only current or former university students. Dyslexic university students might have less profound difficulties in reading compared to those who do not pursue a university degree, and have more experience with written words, making them distinct from dyslexic readers in general. Our previous research indicates that face-processing deficits might be most pronounced for less educated dyslexic readers42. Second, while the ARHQ is an excellent screening tool for dyslexia, it is always possible that some people who screened positive for dyslexia in this study would not meet formal diagnostic criteria; such group misplacements could attenuate group differences. Third, the reliability of the feature-based face matching task was questionable, so estimates of individual differences in feature-based processing were noisy, which would be expected to diminish measured effect sizes. Finally, stimuli were computer-generated images of faces which are arguably less detailed than real faces, which could diminish the feature-based processing of these faces as compared to real faces or have other unforeseen effects such as making it harder to apply previous visual experience with real faces (then again, the journal’s quality check flagged the face images as identifying participants and wanted them removed, so they seem real enough). It would be good to replicate the current study in a more diverse sample and with other stimuli.