
Left hemisphere enhancement of auditory activation in language impaired children

Abstract

Specific language impairment (SLI) is a developmental disorder linked to deficient auditory processing. In this magnetoencephalography (MEG) study we investigated a specific prolonged auditory response (N250m) that has been reported predominantly in children and is associated with level of language skills. We recorded auditory responses evoked by sine-wave tones presented alternately to the right and left ear of 9–10-year-old children with SLI (n = 10) and children with typical language development (n = 10). Source analysis was used to isolate the N250m response in the left and right hemisphere. In children with language impairment left-hemisphere N250m responses were enhanced compared to those of controls, while no group difference was found in the right hemisphere. Consequently, language impaired children lacked the typical rightward asymmetry that was found in control children. Furthermore, left but not right hemisphere N250m responses correlated positively with performance on a phonological processing task in the SLI group exclusively, possibly signifying a compensatory mechanism for delayed maturation of language processing. These results suggest that enhanced left-hemisphere auditory activation reflects a core neurophysiological manifestation of developmental language disorders, and emphasize the relevance of this developmentally specific activation pattern for competent language development.

Introduction

Although the maturing brain is pre-eminently suitable to acquire language, some children have difficulties in learning to fluently speak or understand their native tongue for no apparent reason. Approximately 5% of primary school children (6–11 years) are estimated to have specific language impairment (SLI, also known as developmental language disorder, DLD1,2,3). Cognitive impairments in SLI include deficits in speech perception4, working memory and phonological short-term memory5,6,7. Its causes are still unknown, although it has been suggested that it is a heterogeneous, heritable neurodevelopmental disorder that can affect auditory processing8. Indeed, children with SLI have demonstrated altered processing of auditory information and atypical evoked brain responses to sounds9,10,11,12.

The sequence of brain responses to passive auditory stimulation has originally been characterized using EEG scalp recordings as a waveform with positive and negative peaks, with the nomenclature focused on the order of the peaks (P1-N1-P2-N2) or their latency (e.g. N100, N250). The peaks that dominate the mature and the developing auditory evoked response differ substantially. In short, whereas the adult waveform is typically dominated by the short-lived P1-N1-P2 responses, the child waveform is characterized by a peak around 100 ms (referred to as P1 in EEG and P1m in MEG recordings)13,14 and one robust peak around 250 ms after stimulus presentation (N250/N250m or N2/N2m)15,16,17,18. In primary school children (~6–11 years), the emerging N1(m) overlaps in space and time with both the P1(m) and the N250(m). This complicates the isolation of the N250m and emphasizes the need to include source information to reliably separate and extract neurophysiological signatures that reflect distinct processes. When the underlying neural generators of these main components in the child waveform are modelled with equivalent current dipoles (ECDs), they reflect currents with an anterosuperior direction (P1(m)) and an inferior-posterior direction (N250(m) and N1(m))14,15,16,19.

The developmental changes in the N250(m) suggest it is an important signature of (auditory) brain maturation. The N250(m) starts to decrease gradually in amplitude around the age of 10–11 years until it is no longer or barely visible in the adult waveform16,17,18,20. This decrease has been attributed to cortical reorganization, as more efficient cortical networks are established during development17,18. Nevertheless, the N250(m) has been less intensively studied, arguably because the N1(m) is the most dominant response in adults21. Similarly, the P1(m) in children has received more attention, possibly because it is argued to be the most dominant response in children22, especially during the early years.

Even though the child N250(m) shows a similar source configuration as the adult N1, they most likely reflect functionally distinct processes18,19,23. For example, the N1 and N250 are differentially affected by inter-stimulus intervals (ISIs) and thus have different refractory properties. Shortening the ISI attenuates the N1(m), while the N250(m) is enhanced or unaffected18,24,25. By changing the experimental design one can emphasize either component.

The buildup of N250m signal strength with shorter ISIs suggests it has a role in neural models of learning25. The idea that processing at this time-window reflects increased receptiveness to learn new items fits well with recent studies that have related prolonged or stronger activity in this time-window, particularly in the left hemisphere, to poorer performance on language related tasks19,26. This evidence suggests that left hemisphere auditory cortex activity around 250 ms plays a crucial role in processing language until more efficient cortical networks are established. Its potential role in language learning makes it especially interesting for SLI. However, to our knowledge there are no earlier studies focusing on the source activity in this time-window in children with SLI.

Earlier studies on auditory processing in SLI and dyslexia suggest deviances in P1-N1-P2 complex to simple speech and non-speech sounds9,10,11,12,27,28,29,30,31, but the results are mixed. The few studies focusing on N250 in dyslexia reported either enhanced activation in dyslexics26,31 or no difference to controls32,33.

Hemispheric differences are likely to clarify discrepancies between studies and may provide pivotal information for understanding atypical language development. Typically developing children generally show a hemispheric preference for auditory brain responses13,14,34, and it has been proposed that atypical auditory lateralization is the core underlying neural deficit of dyslexia34. Studies using EEG are, however, limited in their spatial sensitivity and less sensitive to hemispheric differences, possibly leading to a failure to consistently show a role for hemisphere-specific changes in (language) development. MEG can readily distinguish between sources in the auditory cortices of the left and right hemisphere and can utilize the components’ source information to separate functionally distinct processes that mature differently35. Indeed, a longitudinal MEG study of auditory evoked responses and language development in typically developing children reported a positive correlation between an increase in P1m amplitude in the left hemisphere and linguistic tests14. Nevertheless, the functional significance of having atypical auditory cortical responses for language development has not been established.

The aim of the present experiment was to map typical and atypical N250m responses and to study its functional significance for auditory language skills by correlating the N250m to behavioral performances. Using MEG, we compared the auditory evoked dipole source activity in the N250m time window (~250 ms post-stimulation) of children with SLI and with typical language development in response to passively listening to sine-wave tones presented alternately to the right and left ear. The use of alternating tones allowed us to look at ipsi- and contralateral stimulation and to investigate possible differences between the two hemispheres in more detail. Based on the previous literature we hypothesized stronger neural activation approximately 250 ms after auditory stimulation in the left auditory cortex of children with impaired language development compared to typically developing children. We had no hypotheses pertaining to the behavioral performances, which were used to analyze post-hoc correlations.

Materials and Methods

Subjects

The original source of the data reported here is a larger study by Helenius and colleagues, but only the behavioral results reported in Table 1 overlap with the original study36. Eleven children with SLI (mean age 9 years 8 months; age range from 106 to 127 months) and ten typically developing (TD) children (mean age 9 years 6 months; age range from 110 to 118 months) participated in that study. One child did not complete this particular passive listening task, resulting in a group of ten children with SLI (3 females) and ten TD children (3 females).

Table 1 Cognitive profiles of the typically developing (TD) and language impaired (SLI) children.

All participants were contacted through a larger study aiming to elucidate the etiology, linguistic development and prognosis of SLI in the City of Vantaa, Finland37,38. The children in the SLI group had been diagnosed at the Helsinki University Central Hospital prior to school entry. All subjects were native Finnish speakers; one SLI child had a bilingual background. Informed consent was obtained from all subjects and/or their legal guardians, in agreement with the prior approval of the Helsinki and Uusimaa Ethics Committee at the Helsinki University Hospital. The experiments were approved by the Helsinki and Uusimaa Ethics Committee and the methods were carried out in accordance with the relevant guidelines and regulations. The present study reports on the passive listening task not reported in the earlier articles. The behavioral results have been published before36,37.

Behavioral testing and analysis

All subjects were tested on a concise neuropsychological test battery tapping non-linguistic reasoning39 (Block design), vocabulary39, verbal short-term memory and reading-related skills (Table 1). In the block design test, the subject is required to copy a pattern from a figure using colored blocks, in order to assess their ability to understand complex visual information. Verbal short-term memory was tested using the digit span forward subtest39 and the sentence repetition test40, and phonological encoding/decoding with the pseudoword repetition test (NEPSY). In these tests, the subjects have to repeat a sequence of numbers, pseudowords or complete sentences. A measure of oral reading speed was obtained from silent reading of sentences41 and reading aloud a narrative passage (the number of words read in 1 min). The sentence reading test (ALLU)41 consists of 20 trials, each composed of a picture and four written sentences, one of which matches the picture. The task is to identify as many correct picture-sentence pairs as possible in 2 min, and the total score is the number of correctly identified sentences. Naming speed was estimated as the time to name color squares and digits42 (RAN) or color squares, letters and digits in a 5 × 10 matrix43 (RAS). Phonemic awareness was assessed using the phonological processing subtest of NEPSY40. The main purpose of the behavioral testing was to provide cognitive profiles for both groups (Table 1), not to diagnose SLI, as the SLI subjects had been diagnosed earlier. However, in order to study the functional significance of the auditory response, the scores were also used in post-hoc correlation analyses between the behavioral tests that showed differences between groups (p < 0.10, Table 1) and the neural responses of interest (i.e. N250m). To this end, we used Kendall’s tau non-parametric correlation because it is more appropriate for small data sets and/or for tied scores, as in the current study44.
We controlled the false discovery rate using the Benjamini-Hochberg procedure45. Hannus and colleagues37 provide an interpretation of the different cognitive profiles in an earlier paper. In short, we expect the different p-values of the tests to reflect their sensitivity and specificity for diagnosing SLI in Finnish children.
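The correlation and FDR procedure described above can be sketched as follows. The original analysis was run in R; this Python sketch uses scipy and statsmodels on simulated, illustrative data (the variable names and scores are hypothetical, not the study's):

```python
# Sketch of the post-hoc correlation analysis: Kendall's tau between
# behavioral scores and N250m amplitude, with Benjamini-Hochberg FDR
# correction across the tested measures. All data below are simulated.
import numpy as np
from scipy.stats import kendalltau
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n250m_left = rng.normal(30, 10, size=9)          # dipole moments (nAm), fake
scores = {                                       # hypothetical test scores
    "phonological_processing": n250m_left + rng.normal(0, 5, 9),
    "digit_span": rng.normal(10, 2, 9),
    "pseudoword_repetition": rng.normal(20, 4, 9),
}

pvals = []
for name, score in scores.items():
    tau, p = kendalltau(score, n250m_left)       # rank-based, handles ties
    pvals.append(p)
    print(f"{name}: tau_b = {tau:.3f}, p = {p:.3f}")

# Benjamini-Hochberg procedure controls the false discovery rate
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```

Kendall's tau-b is used because it remains well-behaved with small samples and tied scores, matching the rationale given above.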

Stimuli and MEG recordings

The stimulus was created using Sound Edit (Macromedia, San Francisco, CA, USA) and consisted of a monaural 50-ms (15-ms rise/fall time) 1-kHz sine-wave tone at 65 dB HL. Stimuli were presented alternately to the left and right ear in order to probe ipsi- and contralateral auditory pathways in each hemisphere. The inter-stimulus interval (ISI) varied randomly between 0.8 and 1.2 s.
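A minimal numpy sketch of this stimulus follows. The sampling rate and linear ramps are assumptions for illustration; amplitude calibration to 65 dB HL depends on the sound-delivery hardware and is omitted:

```python
# Sketch of the stimulus: a 50-ms, 1-kHz sine tone with 15-ms rise/fall
# ramps, with the ISI drawn uniformly between 0.8 and 1.2 s and the ear
# alternating on successive trials. Sampling rate and ramp shape assumed.
import numpy as np

fs = 44100                                       # sampling rate (Hz), assumed
dur, ramp = 0.050, 0.015                         # tone and ramp durations (s)
t = np.arange(int(fs * dur)) / fs
tone = np.sin(2 * np.pi * 1000 * t)              # 1-kHz sine wave

n_ramp = int(fs * ramp)
envelope = np.ones_like(tone)
envelope[:n_ramp] = np.linspace(0, 1, n_ramp)    # linear rise
envelope[-n_ramp:] = np.linspace(1, 0, n_ramp)   # linear fall
tone *= envelope

rng = np.random.default_rng(1)
isis = rng.uniform(0.8, 1.2, size=10)            # randomized ISIs (s)
ears = ["left", "right"] * 5                     # alternating presentation
```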

During the measurement, the child and one accompanying adult were seated in a magnetically shielded room and instructed to avoid excessive head movements. Stimuli were controlled with the Presentation program (Neurobehavioral Systems Inc., San Francisco, CA, USA) running on a PC and delivered to the subject through plastic tubes and earpieces. The children were asked to ignore the tones and they watched a silent cartoon during the whole recording.

The auditory cortical responses were recorded using a 306-channel whole-head system (Vectorview™, Elekta Neuromag Oy, Helsinki, Finland). This system measures magnetic field strength at 102 locations over the scalp, with two orthogonally oriented planar gradiometers and one magnetometer at each location. Prior to the measurement, four head-position indicator (HPI) coils were attached to the participant’s scalp. The HPI coil locations were digitized with a 3-D digitizer in order to determine their position in relation to three anatomical landmarks: the preauricular points and the nasion. At the start of the measurement, the HPI coil locations with respect to the MEG helmet were measured. Finally, eye blinks and movements were monitored by placing electro-oculogram (EOG) electrodes directly below and above the right eye and on the outer canthi of each eye.

MEG analysis

The MEG signals were bandpass filtered at 0.1–200 Hz and sampled at 600 Hz. The raw data were processed using the spatio-temporal signal space separation method46. Offline, responses were averaged from −0.2 to 0.8 s relative to stimulus onset. Epochs contaminated by vertical or horizontal eye movements were rejected. To minimize the effect of heartbeat artifacts, the MEG signals were offline averaged with respect to the heart signal and principal component analysis was used over this average to project out the resulting magnetic field component47. Finally, the data were manually checked to exclude epochs with major artifacts. On average, 107 artifact-free epochs were retained in the TD group and 111 in the SLI group.
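The filtering, epoching, EOG-based rejection and averaging steps can be sketched as below. This is a simplified illustration on simulated data; the actual analysis additionally used signal space separation and heartbeat-artifact projection, and the rejection threshold here is arbitrary:

```python
# Minimal sketch of the epoching pipeline: bandpass filter the continuous
# data, cut epochs from -0.2 to 0.8 s around each stimulus, reject epochs
# with large EOG deflections, baseline-correct and average the rest.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 600                                         # sampling rate (Hz)
n_channels, n_samples = 306, fs * 20             # 20 s of simulated raw data
rng = np.random.default_rng(2)
raw = rng.normal(0, 1, (n_channels, n_samples))
eog = rng.normal(0, 1, n_samples)                # simulated EOG trace
events = np.arange(fs, n_samples - fs, fs)       # one stimulus per second

# 0.1-200 Hz bandpass (second-order sections for numerical stability)
sos = butter(4, [0.1, 200], btype="bandpass", fs=fs, output="sos")
raw = sosfiltfilt(sos, raw, axis=1)

pre, post = int(0.2 * fs), int(0.8 * fs)
kept = []
for onset in events:
    seg = raw[:, onset - pre:onset + post]
    if np.ptp(eog[onset - pre:onset + post]) < 8.0:   # arbitrary threshold
        # baseline-correct using the pre-stimulus interval (-0.2 to 0 s)
        kept.append(seg - seg[:, :pre].mean(axis=1, keepdims=True))

evoked = np.mean(kept, axis=0)                   # averaged evoked response
```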

The active source areas were modelled from the averaged data using equivalent current dipoles48 (ECDs). Averages were filtered with a 40-Hz lowpass filter and baseline corrected (−0.2 to 0 s). Xfit software (Elekta Oy, Helsinki, Finland) was used to estimate the location of the current sources. In each subject, the same 20 planar sensor pairs that best covered the dipolar field pattern were selected in each hemisphere. To identify the cortical response around 250 ms after auditory stimulation, ECDs were accepted when (i) they occurred in the time window of interest (175–325 ms), (ii) they had a goodness-of-fit value of >80% and (iii) they had a predominantly inferior-posterior direction. These criteria were based on the pattern of activation that is most reliably repeatable in this specific time-window. ECD locations and orientations were fixed, while their amplitudes were allowed to vary. In each subject, the magnetic field patterns were visually inspected to identify local dipolar fields in each stimulus condition (i.e. ear and hemisphere). From the resulting four ECDs, the one in each hemisphere that best fit the data in all conditions was selected. As individual MR images of the subjects were not available, a spherical volume conductor model was used with the default center defined as the origin (0, 0, 40). Dipole moment amplitudes were defined as the average of the peak (175–325 ms). Data points around the peak were included as long as they exceeded two standard deviations above the mean activation of the whole epoch.
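The amplitude definition in the last two sentences can be made concrete with a short sketch; the dipole-moment waveform below is synthetic, assuming an N250m-like peak at 250 ms:

```python
# Sketch of the dipole-amplitude definition: within the 175-325 ms window,
# average all points exceeding two standard deviations above the mean
# activation of the whole epoch. The waveform is simulated for illustration.
import numpy as np

fs = 600
t = np.arange(-0.2, 0.8, 1 / fs)                 # epoch: -0.2 to 0.8 s
rng = np.random.default_rng(3)
moment = rng.normal(0, 1, t.size)                # noisy dipole moment (nAm)
moment += 25 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))  # synthetic peak

threshold = moment.mean() + 2 * moment.std()     # epoch-wide criterion
window = (t >= 0.175) & (t <= 0.325)             # time window of interest
above = window & (moment > threshold)            # supra-threshold points

n250m_amplitude = moment[above].mean()           # average of the peak
```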

Statistical model and analysis

The data were analyzed using R49 and the packages lme4 and pbkrtest50,51. Amplitude values were extracted for each combination of ear (2) and hemisphere (2) for each participant in each group (2), resulting in four amplitude values per participant.

In order to assess the effect of impaired language development on auditory evoked source activity we used a linear mixed model (LMM) or, more specifically, a random intercept model44,52,53. A random intercept model is, in our case, more suitable than ANOVAs because it resolves the non-independence of multiple responses from the same subject by assuming a different baseline value for each subject (i.e. a random intercept). In addition, it offers better control over problems that may arise from our small sample size (i.e. reduced power and an inflated type 1 error rate). LMMs also have fewer assumptions than ANOVAs54, violations of which affect the type 1 error rate and power of ANOVA F tests55.

In the estimation of the best model for the covariance structure (compound symmetry), we used a backward method with the maximum likelihood (ML) approach. Nested models were compared using chi²-tests based on the likelihood ratio test (LRT), following a backward selection heuristic. For small sample sizes, this approach has been reported to be more conservative than the Akaike information criterion (AIC), maintaining the type 1 error rate of a maximum model56 while increasing power substantially57. The final model is a collection of fixed and random effects. Here, we fit it with the restricted maximum likelihood (REML) approach to reduce the bias of the estimators of the variances of the random intercept and the residual. REML is less affected by small sample sizes and has consistently shown lower type 1 error rates58,59.
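The modelling strategy can be illustrated as follows. The original analysis used R's lme4; this sketch substitutes statsmodels' MixedLM in Python and runs on a simulated data frame (factor levels and effect sizes are invented for illustration):

```python
# Illustrative sketch: fit a random-intercept model with ML for backward
# selection via likelihood-ratio tests, then refit the final model with
# REML for less biased variance estimates. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(4)
rows = []
for subj in range(18):
    group = "SLI" if subj < 9 else "TD"
    base = rng.normal(30, 8)                     # subject-specific intercept
    for ear in ("left", "right"):
        for hemi in ("LH", "RH"):
            amp = base + (5 if (group == "SLI" and hemi == "LH") else 0)
            rows.append(dict(subject=subj, group=group, ear=ear,
                             hemi=hemi, amp=amp + rng.normal(0, 3)))
data = pd.DataFrame(rows)

# Full model and a nested model without the group:hemi interaction,
# both fit with ML so their likelihoods are comparable
full = smf.mixedlm("amp ~ group * hemi + ear * hemi", data,
                   groups=data["subject"]).fit(reml=False)
reduced = smf.mixedlm("amp ~ group + ear * hemi", data,
                      groups=data["subject"]).fit(reml=False)

lrt = 2 * (full.llf - reduced.llf)               # chi2-distributed under H0
p = chi2.sf(lrt, df=1)                           # one dropped parameter

# Final model refit with REML
final = smf.mixedlm("amp ~ group * hemi + ear * hemi", data,
                    groups=data["subject"]).fit(reml=True)
```

With the simulated group-by-hemisphere effect built in, the LRT retains the interaction, mirroring the backward selection logic described above.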

In model diagnostics, we first confirmed the normality of the residuals using a qq-plot and a scatter plot for the groups. Furthermore, we established the normality of the random intercepts using a qq-plot. Using the final model, we defined contrasts for separate sets of regression coefficients. To test whether a contrast was zero, we used the Kenward-Roger (KR) approximation for the F-test by Halekoh and Højsgaard51, as this method produces acceptable type 1 error rates even for smaller samples59,60. In the contrast calculations from KRmodcomp, the numerator degrees of freedom were 1; the F statistic then equals the squared t statistic, and for simplicity contrasts are reported using t-statistics, degrees of freedom and p-values.

Results

Source analysis

In 90% of the participants (18/20) we were able to select at least one dipole in each hemisphere in the time-window of interest. In the other two (one in SLI group and one in TD group) we were unable to find dipoles meeting our criteria and thus they were excluded from further analysis. In Fig. 1, the gradiometer butterfly plots of the auditory evoked fields of one participant are shown; the time window used for source localization is marked by a window. The corresponding field distributions and dipole orientations are depicted on the bottom of Fig. 1.

Figure 1
figure1

Butterfly plot of signals recorded by gradiometer sensors to left and right ear stimulation of one participant (top). ECDs were selected in the time-window of interest (window). The bottom figure shows the typical field distribution and dipole orientation (arrows).

Figure 2 shows the location of the selected dipoles (x, y coordinates in axial plane) of each individual and the grand average location of the two groups. Figure 3 depicts the resulting grand average waveforms. There were no significant differences between the groups on any of the (x, y, z) coordinates (p ≥ 0.28).

Figure 2
figure2

Dipole x and y coordinates in axial plane of each participant (thin lines) in SLI (grey) and TD group (black) as well as their averages (thick lines).

Figure 3
figure3

Grand average time-course of activation of the dipolar sources in the left and right hemisphere plotted separately for contralateral (thick lines) and ipsilateral (thin lines) responses for SLI (grey) and TD (black) group.

Modelling

In the first, most inclusive model, we included all variables (group, ear, and hemi), their pairwise interactions and the three-way interaction. Using a cut-off of α = 0.05, we first dropped the three-way interaction group*ear*hemi, chi²(1) = 0.085, p = 0.78, and then the pairwise interaction group*ear, chi²(1) = 0.006, p = 0.94. Furthermore, the equality of variance in the SLI and TD groups was checked and could be assumed, chi²(1) = 1.263, p > 0.26. The final random intercept model was calculated using restricted maximum likelihood (Tables 2 and 3). In Table 2, the estimates, standard errors and their ratios (t-values) are shown. Estimates and confidence intervals of the random effects are shown in Table 3.

Table 2 Fixed effects of the model: estimate (standard error(s.e.)), degrees of freedom, t-value and p-value.
Table 3 Approximate 95% confidence intervals for the standard deviation of random effects.

Effect of ipsilateral vs. contralateral stimulation

The mixed effects model revealed an interaction between ear and hemi, t(50) = −2.752, p = 0.008. Generally, contralateral stimulation showed greater amplitudes compared to ipsilateral stimulation. In the right hemisphere this estimated difference (ED) was clearer (ED = 8.34, SE = 3.52) and significant, t(50) = 2.367, p = 0.022. In the left hemisphere this difference was smaller (ED = −5.37, SE = 3.52) and not significant, t(50) = 1.524, p = 0.134. Figure 4 depicts the individual (top) and averaged (bottom) strength of activation resulting from ipsi- and contralateral stimulation in the left (left) and right (middle) hemisphere for both groups. The ipsi-contralateral effect did not seem to differ between groups, since neither the three-way interaction (ear*hemi*group) nor the two-way interaction (ear*group) was significant (chi²(1) = 0.085, p = 0.771 and chi²(1) = 0.006, p = 0.937, respectively).

Figure 4
figure4

Individual (top) and averaged (bottom) strength of activation in the left hemisphere (LH; left) and right hemisphere (RH; middle) in response to ipsi- and contralateral auditory stimulation of children with SLI (grey) and typical language development (black). Hemispheric differences (right) are plotted as the difference in activation strength to contralateral stimulation (i.e. right ear for left hemisphere and vice versa). Whiskers in the bottom figures represent the standard error of the mean (SEM).

Group differences in the two hemispheres

The mixed effects model revealed a significant interaction between group and hemisphere, t(50) = 2.648, p = 0.011 (Table 1). This effect was limited to the left hemisphere, as the difference between groups (ED = −12.74, SE = 5.52) was significant in this hemisphere for both the ipsi- and contralateral stimulation, t(24.70) = 2.306, p = 0.03. In contrast, the difference between groups in the right hemisphere was negligible (ED = 0.45, SE = 5.52), t(24.70) = 0.082, p = 0.935. Figure 4 (left vs. middle) depicts the plots corresponding to this difference. Finally, TD children showed significantly higher amplitudes in the right compared to the left hemisphere (ED = 8.35, SE = 3.52, t(50) = 2.370, p = 0.022), indicating a cortical asymmetry in this group (Fig. 4; right). Children in the SLI group showed the opposite pattern, with stronger activation in the left hemisphere than in the right (ED = −4.84, SE = 3.52), but this difference was not significant, t(50) = −1.374, p = 0.176.

Correlation between behavioral skills and brain responses

Post-hoc correlations were performed between the amplitude of the N250m to contralateral stimulation and the behavioral measures for each group. Data were checked for outliers, and none were found (all individual values < 1.8 SD). In the TD group, no significant correlations were found. However, in the SLI group, we found a significant positive correlation between phonological processing scores and N250m amplitude in the left hemisphere τb = 0.774, p = 0.006, but not the right hemisphere τb = 0.278, p = 0.321. In the SLI group, those with higher N250m amplitudes in the left hemisphere performed better on the phonological processing task (Fig. 5). When corrected for the other behavioral tests that showed differences between groups (i.e. vocabulary, digit span, pseudoword repetition and sentence repetition), the corrected p-value was 0.03.

Figure 5
figure5

Scatterplot representing the correlation between phonological processing (raw score) and N250m amplitude in the left hemisphere, to contralateral stimulation, for the SLI (grey) and TD (black) group.

Discussion

In this study we assessed typical and atypical variation in the N250m response and examined its functional significance for language processing. As was hypothesized, auditory processing in the cortical time-window of the N250m was altered in children with impaired language development and this alteration was limited to the left hemisphere; N250m dipole moment in the left hemisphere was stronger in SLI children. In our view these findings illustrate the association between maturation of the auditory cortex in the left hemisphere and language development, with relevance for neurodevelopmental disorders.

Our results provide further support for the hypothesis that stronger or more sustained activation in the cortical timing of the N250(m)31, especially in the left hemisphere19,26, is indicative of less developed language skills. This cortical response is observed to be specific to the developing brain16,17,19 and weaker neural activation in this time window has been related to better reading skills in typically developing children19,26,31. Indeed, the decrease in amplitude of the N250(m) (and increase in N1(m)) has been speculated to reflect more automatized auditory processing19,61. Paradoxically, in our clinical group, higher N250m amplitudes in the left hemisphere were related to better performance on a phonological processing task; a core deficit in SLI and a crucial component in learning to read62,63. Presumably, children with SLI rely more strongly on neural sources in the left hemisphere as a possible compensatory mechanism for delayed maturation of language processing. However, this correlation should be interpreted with care, as correlations typically only stabilize at considerably larger sample sizes64. Therefore, we do not expect the current correlation coefficient to accurately represent the true value in the SLI population and acknowledge that this might be an over-estimation of the effect size65 or a type-1 error. Nevertheless, our claim is substantiated by an EEG study that identified an enhanced N250 response as a compensatory mechanism for phonological processing deficits in dyslexic children but not in typically developing children31.

The simplest account of our data is an enhanced auditory brain response in the left hemisphere of children with SLI. Several studies have already observed the relationship between language skills and auditory evoked responses in the left hemisphere14,36,66 and some have focused on the N250(m)19,26. However, the source activity of the N250m has not been contrasted between children with typical and impaired language development. By using MEG ECD source modelling techniques, we were able to show hemisphere-specific alterations (i.e. an increase in the left hemisphere exclusively) in the auditory evoked responses of children with impaired language development. In our view, this illustrates the different roles of the two hemispheres in developmental language disorders and emphasizes the need to include spatial information to properly distinguish between activation patterns possibly varying in time and between hemispheres. For estimating the detailed location of activation in the two hemispheres, information on individual brain anatomy should be used, which was not available in the present study. Importantly, group differences in source strength could not be explained by differences in dipole locations.

Although it is not possible to draw strong conclusions on hemispheric asymmetry based on our data, given the recent debate on the role of lateralization and asymmetry in developmental language disorders67,68,69,70, we will discuss findings we think are relevant to this discussion. Furthermore, we will speculate on how the interaction between hemispheres could be affected by developmental language disorders.

We used monaural stimulation in order to probe ipsi- and contralateral pathways, allowing us to investigate hemispheric differences (left vs right), laterality effects (ipsi vs contra) and their interactions. The data showed a contralaterality effect in both groups; a greater amplitude in the hemisphere contralateral to the stimulated ear compared to the ipsilateral hemisphere. Typically developing children showed overall higher amplitudes in the right compared to the left hemisphere. Both results are in good agreement with previous literature on hemispheric asymmetry in pure tone processing and contralaterality effects in children13,71 and adults13,72,73,74,75. In the present study, however, children with impaired language development showed an opposite, but not statistically significant, asymmetry pattern, indicating a lack of typical asymmetry similar to what was found in dyslexic children34.

Given that speech vs. nonspeech processing typically reflect opposite asymmetry patterns (i.e. leftward vs. rightward, respectively), it is important to distinguish between studies looking at auditory and language lateralization. In addition to opposite asymmetry patterns of speech and nonspeech processing, the theory of asymmetric sampling in time (AST) proposes that cerebral asymmetries relate more to the temporal features of auditory information. In this view, the right hemisphere samples slow (syllabic) rate auditory input (~3–7 Hz) and the left hemisphere fast (phonemic) rate auditory input (~12–50 Hz)76,77,78. For certain language processes (e.g. phonological processing), both temporal features must be integrated. This dynamic nature of cerebral asymmetry needs to be considered when discussing asymmetries and hemispheric differences in relation to language and auditory processing, and makes it likely that interhemispheric connections play a crucial role.

In addition to functional hemispheric differences, anatomical hemispheric differences might also explain the differences between our two groups. Indeed, studies reporting white and grey matter structural differences in children with developmental language disorder are numerous79,80,81. However, of special interest for M/EEG studies is a report demonstrating that a more convoluted auditory cortex produces stronger cancellation effects, resulting in lower measured EEG and MEG signals82. The authors argued that the left hemisphere is typically more convoluted, resulting in the rightward bias in pure-tone processing. In the present study, the enhanced auditory responses in the left hemisphere of the SLI group could be explained by a less convoluted left auditory cortex, or more focal cortical activity in the left hemisphere compared to controls82. Importantly however, a recent study investigating the neuroanatomical basis of developmental dyslexia identified an atypical sulcal pattern with more convolutions in left hemispheric perisylvian regions compared to controls as a biomarker of dyslexia83. Assuming this result can be extrapolated to our subjects, one would expect lower amplitudes in the left hemisphere in the SLI group. Future studies combining neuroanatomical and functional (MEG) data are needed to clarify the laterality of auditory and language processing in developmental language disorders.

Even though it appears inevitable that an abnormal neural activity pattern in the left hemisphere disrupts the cerebral asymmetry of language processes, the question remains whether interhemispheric auditory connections are affected or whether the pattern only reflects a primary dysfunction in the left hemisphere. Given the design of the present study and the complexity of the auditory system, we cannot answer this conclusively. Based on our results, it is tempting to conclude that auditory pathway interactions are unaffected by impaired language development, as group differences in the right hemisphere were negligible. It should be noted, however, that during monaural stimulation there is no competition between the ears. Others have argued that the stronger the competition between the ears (e.g. in a dichotic listening task), the stronger the interactions between the auditory pathways84.

To examine interaural interaction in developmental language disorders, the ‘frequency tagging’ method can be used. With this method, the auditory input to each ear is ‘tagged’ with amplitude modulation at a distinct frequency, which can later be decoded from the cortical responses. This has proven a useful tool for evaluating the central auditory pathways in more detail85. Indeed, one study utilizing this method observed weaker ipsilateral suppression (a measure of interaural interaction) in dyslexic individuals, depending on the strength of expression of ROBO1 (a known dyslexia gene)86: the weaker the gene expression, the weaker the interaural interaction. Interestingly, this gene has also been implicated in the neuronal migration underlying brain lateralization in healthy subjects, with a specific function in supporting a short-term buffer for arbitrary phonological strings87. These results indicate that impaired language development is associated with weaker interaction between the auditory pathways, which may be especially detrimental for phonological processing.
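As a rough illustration of the frequency-tagging idea (not the paradigm of refs 85,86 — the carrier frequency, tag frequencies, and mixing weight below are arbitrary choices), this pure-Python sketch tags two inputs with different amplitude-modulation frequencies and shows that the dominant tag can be read out of a simulated response's envelope spectrum:

```python
import math

def am_tone(carrier_hz, tag_hz, fs, dur_s):
    """A carrier tone whose amplitude is modulated at `tag_hz` (the 'tag')."""
    n = int(fs * dur_s)
    return [(1 + math.cos(2 * math.pi * tag_hz * i / fs))
            * math.sin(2 * math.pi * carrier_hz * i / fs) for i in range(n)]

def envelope_power(signal, freq_hz, fs):
    """Single-bin DFT power of the rectified signal (a crude envelope) at `freq_hz`."""
    env = [abs(s) for s in signal]
    n = len(env)
    re = sum(e * math.cos(2 * math.pi * freq_hz * i / fs) for i, e in enumerate(env))
    im = sum(e * math.sin(2 * math.pi * freq_hz * i / fs) for i, e in enumerate(env))
    return (re ** 2 + im ** 2) / n ** 2

fs = 8000
left = am_tone(500, 41.0, fs, 1.0)   # left-ear input tagged at 41 Hz (arbitrary values)
right = am_tone(500, 37.0, fs, 1.0)  # right-ear input tagged at 37 Hz
# Simulated cortical response dominated by the left-ear input (the weaker
# right-ear contribution mimics ipsilateral suppression):
response = [l + 0.2 * r for l, r in zip(left, right)]
print(envelope_power(response, 41.0, fs) > envelope_power(response, 37.0, fs))  # → True
```

Because each ear's tag occupies its own frequency bin, the relative strength of the two tags in one hemisphere's response indexes how strongly each ear's pathway drives that cortex — the quantity compared across ROBO1 expression levels in ref. 86.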

Two issues regarding the increased N250m response amplitude and the atypical hemispheric balance still require clarification. First, this study’s design was not well suited to determine whether they are a cause, correlate or consequence of developmental language disorder. As in many comparable studies, not all of our SLI children showed an increased N250m and atypical hemispheric balance; thus, atypical hemispheric balance (or indeed an increased N250m) should not be seen as a critical cause of SLI. We suggest it is more likely a consequence, as we argue that the increased N250m can (partly) compensate for the language deficit. It is also possible that the processing differences in the left hemisphere cause problems in language-related functions, or that the auditory and language deficits are both markers of an underlying neurodevelopmental disorder9.

Second, we are left with an apparent dichotomy in which the N250m is suggested to be indicative of both poorer (in the TD group) and superior (in the SLI group) language skills. We do not consider it impossible that processing in this time window is both an indicator of language and auditory development and a useful tool for the developing brain. The fact that this neural process is present in most children suggests it is beneficial for development; the fact that in adults it typically is not suggests the brain develops a more efficient way of processing auditory stimuli. We surmise that neural processing in this time window is exceptionally flexible, which should be a useful property, and indeed a necessity, in the learning environment of the child brain.

This study’s main limitation is its sample size. Small-sample studies have raised considerable debate65,88,89,90,91,92 and we agree that they deserve additional scrutiny. We strove for maximal power by using methods with specific advantages concerning type I error rates and statistical power in small-sample studies, namely: (i) a statistical model with fewer assumptions (LMM), (ii) model selection (a backward heuristic based on likelihood-ratio tests, LRT), (iii) model fitting with restricted maximum likelihood (REML), and (iv) evaluation of significance with the Kenward–Roger (KR) approximation. Furthermore, several authors have defended small-N designs, mainly for their inferential validity88,90,91,92,93. Nevertheless, we caution against taking our findings, especially the correlation, at face value.
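The likelihood-ratio test at the heart of the backward model-selection heuristic can be sketched in miniature. The toy example below (pure Python, simulated amplitudes, fixed effects only — the actual analysis used linear mixed models in R with lme4 and pbkrtest, which additionally handle random effects, REML fitting, and the KR approximation) compares a grand-mean model against a model with separate group means:

```python
import math
import random

def gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood of a linear model, given its residuals
    (residual variance estimated by maximum likelihood)."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

def lrt_pvalue(ll_reduced, ll_full):
    """p-value of a likelihood-ratio test for one dropped parameter (df = 1),
    using the chi-square(1) survival function expressed via erfc."""
    stat = 2 * (ll_full - ll_reduced)
    return math.erfc(math.sqrt(max(stat, 0.0) / 2))

random.seed(0)
# Toy data: 10 'SLI' and 10 'control' response amplitudes with a built-in group effect.
sli = [12 + random.gauss(0, 2) for _ in range(10)]
ctrl = [9 + random.gauss(0, 2) for _ in range(10)]
y = sli + ctrl

# Reduced model: grand mean only.  Full model: separate group means.
grand = sum(y) / len(y)
ll_reduced = gaussian_loglik([v - grand for v in y])
m_sli, m_ctrl = sum(sli) / 10, sum(ctrl) / 10
ll_full = gaussian_loglik([v - m_sli for v in sli] + [v - m_ctrl for v in ctrl])

# A backward heuristic would drop the group term only if this p-value were large.
print(round(lrt_pvalue(ll_reduced, ll_full), 6))
```

The backward heuristic repeats this comparison for each candidate term, removing the one whose deletion least worsens the likelihood until every remaining term is significant.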

To conclude, we provide evidence that neural activation at ~250 ms is functionally meaningful for the integrity of language skills, and we substantiate the claim that enhanced left-hemisphere auditory activation reflects a core neurophysiological manifestation of developmental language disorders. We found significantly stronger activation in the left hemisphere of the SLI group than in controls, a group that clearly differed in language skills. We suggest this might reflect a compensatory mechanism for language processes. The effect was isolated to the language-dominant left hemisphere and is thus in agreement with other studies associating altered neural responses in the left hemisphere with language skills and impaired language development.

Data Availability

The dataset analyzed during the current study is not publicly available due to legal restrictions but is available from the research group on reasonable request.

References

1. McArthur, G. M., Atkinson, C. M. & Ellis, D. Can Training Normalize Atypical Passive Auditory ERPs in Children with SRD or SLI? Dev. Neuropsychol. 35(6), 656–678, https://doi.org/10.1080/87565641.2010.508548 (2010).

2. Richards, S. & Goswami, U. Auditory Processing in Specific Language Impairment (SLI): Relations With the Perception of Lexical and Phrasal Stress. J. Speech Lang. Hear. Res. 58(4), 1292–1305, https://doi.org/10.1044/2015_JSLHR-L-13-0306 (2015).

3. Bishop, D. V. M., Snowling, M. J., Thompson, P. A., Greenhalgh, T. & the CATALISE-2 consortium. Phase 2 of CATALISE: a multinational and multidisciplinary Delphi consensus study of problems with language development: Terminology. J. Child Psychol. Psychiatry. 58(10), 1068–1080, https://doi.org/10.1111/jcpp.12721 (2017).

4. Burlingame, E., Sussman, H. M., Gillam, R. B. & Hay, J. F. An Investigation of Speech Perception in Children With Specific Language Impairment on a Continuum of Formant Transition Duration. J. Speech Lang. Hear. Res. 48(4), 805–816, https://doi.org/10.1044/1092-4388(2005/056) (2005).

5. Bishop, D. V. M. et al. Different origin of auditory and phonological processing problems in children with language impairment: Evidence from a twin study. J. Speech Lang. Hear. Res. 42(1), 155–168, https://doi.org/10.1044/jslhr.4201.155 (1999).

6. Montgomery, J. W., Magimairaj, B. M. & Finney, M. C. Working Memory and Specific Language Impairment: An Update on the Relation and Perspectives on Assessment and Treatment. Am. J. Speech Lang. Pathol. 19(1), 78–94, https://doi.org/10.1044/1058-0360(2009/09-0028) (2010).

7. Jackson, E., Leitao, S. & Claessen, M. The relationship between phonological short-term memory, receptive vocabulary, and fast mapping in children with specific language impairment. Int. J. Lang. Commun. Disord. 51(1), 61–73, https://doi.org/10.1111/1460-6984.12185 (2016).

8. Bishop, D. V. M., Hardiman, M. J. & Barry, J. G. Auditory Deficit as a Consequence Rather than Endophenotype of Specific Language Impairment: Electrophysiological Evidence. PLoS One. 7(5), e35851, https://doi.org/10.1371/journal.pone.0035851 (2012).

9. Bishop, D. V. M. & McArthur, G. M. Immature cortical responses to auditory stimuli in specific language impairment: evidence from ERPs to rapid tone sequences. Dev. Sci. 7(4), F11–F18, https://doi.org/10.1111/j.1467-7687.2004.00356.x (2004).

10. Pihko, E. et al. Language impairment is reflected in auditory evoked fields. Int. J. Psychophysiol. 68(2), 161–169, https://doi.org/10.1016/j.ijpsycho.2007.10.016 (2008).

11. Bishop, D. V. M., Hardiman, M., Uwer, R. & Von Suchodoletz, W. Atypical long-latency auditory event-related potentials in a subset of children with specific language impairment. Dev. Sci. 10(5), 576–587, https://doi.org/10.1111/j.1467-7687.2007.00620.x (2007).

12. McArthur, G. M., Atkinson, C. M. & Ellis, D. Atypical brain responses to sounds in children with specific language and reading impairments. Dev. Sci. 12(5), 768–783, https://doi.org/10.1111/j.1467-7687.2008.00804.x (2009).

13. Orekhova, E. V. et al. Auditory Magnetic Response to Clicks in Children and Adults: Its Components, Hemispheric Lateralization and Repetition Suppression Effect. Brain Topogr. 26(3), 410–427, https://doi.org/10.1007/s10548-012-0262-x (2013).

14. Yoshimura, Y. et al. A longitudinal study of auditory evoked field and language development in young children. NeuroImage. 101, 440–447, https://doi.org/10.1016/j.neuroimage.2014.07.034 (2014).

15. Paetau, R., Ahonen, A., Salonen, O. & Sams, M. Auditory evoked magnetic fields to tones and pseudowords in healthy children and adults. J. Clin. Neurophysiol. 12, 177–185, https://doi.org/10.1097/00004691-199503000-00008 (1995).

16. Ponton, C. W., Eggermont, J. J., Kwong, B. & Don, M. Maturation of human central auditory system activity: evidence from multi-channel evoked potentials. Clin. Neurophysiol. 111(2), 220–236, https://doi.org/10.1016/S1388-2457(99)00236-9 (2000).

17. Čeponienė, R., Rinne, T. & Näätänen, R. Maturation of cortical sound processing as indexed by event-related potentials. Clin. Neurophysiol. 113(6), 870–882, https://doi.org/10.1016/S1388-2457(02)00078-0 (2002).

18. Takeshita, K. et al. Maturational change of parallel auditory processing in school-aged children revealed by simultaneous recording of magnetic and electric cortical responses. Clin. Neurophysiol. 113(9), 1470–1484, https://doi.org/10.1016/S1388-2457(02)00202-X (2002).

19. Parviainen, T., Helenius, P., Poskiparta, E., Niemi, P. & Salmelin, R. Speech perception in the child brain: Cortical timing and its relevance to literacy acquisition. Hum. Brain Mapp. 32(12), 2193–2206, https://doi.org/10.1002/hbm.21181 (2011).

20. Wunderlich, J. L. & Cone-Wesson, B. K. Maturation of CAEP in infants and children: A review. Hear. Res. 212(1–2), 212–223, https://doi.org/10.1016/j.heares.2005.11.008 (2006).

21. Näätänen, R. & Picton, T. The N1 Wave of the Human Electric and Magnetic Response to Sound: A Review and an Analysis of the Component Structure. Psychophysiol. 24, 375–425, https://doi.org/10.1111/j.1469-8986.1987.tb00311.x (1987).

22. Orekhova, E. V. et al. Auditory Cortex Responses to Clicks and Sensory Modulation Difficulties in Children with Autism Spectrum Disorders (ASD). PLoS One. 7(6), e39906, https://doi.org/10.1371/journal.pone.0039906 (2012).

23. Johnstone, S. J., Barry, R. J., Anderson, J. W. & Coyle, S. F. Age-related changes in child and adolescent event-related potential component morphology, amplitude and latency to standard and target stimuli in an auditory oddball task. Int. J. Psychophysiol. 24(3), 223–238, https://doi.org/10.1016/S0167-8760(96)00065-7 (1996).

24. Picton, T. W., Hillyard, S. A., Krausz, H. I. & Galambos, R. Human auditory evoked potentials: I. Evaluation of components. Electroencephalogr. Clin. Neurophysiol. 36, 179–190, https://doi.org/10.1016/0013-4694(74)90155-2 (1974).

25. Karhu, J. et al. Dual cerebral processing of elementary auditory input in children. NeuroReport. 8, 1327–1330, https://doi.org/10.1097/00001756-199704140-00002 (1997).

26. Hämäläinen, J. A. et al. Auditory Event-Related Potentials Measured in Kindergarten Predict Later Reading Problems at School Age. Dev. Neuropsychol. 38(8), 550–566, https://doi.org/10.1080/87565641.2012.718817 (2013).

27. Tonnquist-Uhlén, I. Topography of auditory evoked cortical potentials in children with severe language impairment: the P2 and N2 components. Ear Hear. 17, 314–326, https://doi.org/10.1097/00003446-199608000-00003 (1996).

28. Ors, M. et al. Auditory event-related brain potentials in children with specific language impairment. Europ. J. Paediat. Neurol. 6(1), 47–62, https://doi.org/10.1053/ejpn.2001.0541 (2002).

29. Rinker, T. et al. Abnormal frequency discrimination in children with SLI as indexed by mismatch negativity (MMN). Neurosci. Lett. 413(2), 99–104, https://doi.org/10.1016/j.neulet.2006.11.033 (2007).

30. Hämäläinen, J. A., Leppänen, P. H. T., Guttorm, T. K. & Lyytinen, H. N1 and P2 components of auditory event-related potentials in children with and without reading disabilities. Clin. Neurophysiol. 118(10), 2263–2275, https://doi.org/10.1016/j.clinph.2007.07.007 (2007).

31. Lohvansuu, K. et al. Enhancement of brain event-related potentials to speech sounds is associated with compensated reading skills in dyslexic children with familial risk for dyslexia. Int. J. Psychophysiol. 94(3), 298–310, https://doi.org/10.1016/j.ijpsycho.2014.10.002 (2014).

32. Lovio, R., Näätänen, R. & Kujala, T. Abnormal pattern of cortical speech feature discrimination in 6-year-old children at risk for dyslexia. Brain Res. 1335(Supplement C), 53–62, https://doi.org/10.1016/j.brainres.2010.03.097 (2010).

33. Lovio, R., Halttunen, A., Lyytinen, H., Näätänen, R. & Kujala, T. Reading skill and neural processing accuracy improvement after a 3-hour intervention in preschoolers with difficulties in reading-related skills. Brain Res. 1448(Supplement C), 42–55, https://doi.org/10.1016/j.brainres.2012.01.071 (2012).

34. Johnson, B. W. et al. Lateralized auditory brain function in children with normal reading ability and in children with dyslexia. Neuropsychologia. 51(4), 633–641, https://doi.org/10.1016/j.neuropsychologia.2012.12.015 (2013).

35. Hansen, P. C., Kringelbach, M. L. & Salmelin, R. MEG: An Introduction to Methods. (Oxford University Press, 2010).

36. Helenius, P. et al. Abnormal functioning of the left temporal lobe in language-impaired children. Brain Lang. 130, 11–18, https://doi.org/10.1016/j.bandl.2014.01.005 (2014).

37. Hannus, S., Kauppila, T., Pitkäniemi, J. & Launonen, K. Use of Language Tests when Identifying Specific Language Impairment in Primary Health Care. Folia Phoniatr. Logop. 65(1), 40–46, https://doi.org/10.1159/000350318 (2013).

38. Isoaho, P. Kielellinen erityisvaikeus (SLI) ja sen kehitys ensimmäisinä kouluvuosina [Specific language impairment (SLI) and its development during the first school years]. Doctoral dissertation, University of Helsinki, Finland, http://urn.fi/URN: ISBN: 978-952-10-8054 (2012).

39. Wechsler, D. Wechsler intelligence scale for children (3rd ed.): Manual. (Psykologien Kustannus Oy, 1999).

40. Korkman, M., Kirk, U. & Kemp, S. L. NEPSY. Lasten neuropsykologinen tutkimus [NEPSY. A developmental neuropsychological assessment]. (Psykologien kustannus, 1997).

41. Lindeman, J. ALLU: Ala-asteen Lukutesti [ALLU: Reading test for primary school]. (University of Turku, Center for Learning Research, 1998).

42. Denckla, M. B. & Rudel, R. Rapid “automatized” naming (RAN): dyslexia differentiated from other learning disabilities. Neuropsychologia. 14, 471–479, https://doi.org/10.1016/0028-3932(76)90075-0 (1976).

43. Wolf, M. Rapid alternating stimulus naming in the developmental dyslexias. Brain Lang. 27, 360–379, https://doi.org/10.1016/0093-934X(86)90025-8 (1986).

44. Field, A. Discovering statistics using SPSS (3rd ed.). (SAGE Publications Inc, 2009).

45. Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Series B. 57, 289–300, https://doi.org/10.2307/2346101 (1995).

46. Taulu, S. & Simola, J. Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements. Phys. Med. Biol. 51(7), 1759, https://doi.org/10.1088/0031-9155/51/7/008 (2006).

47. Uusitalo, M. A. & Ilmoniemi, R. J. Signal-space projection method for separating MEG or EEG into components. Med. Biol. Eng. Comput. 35(2), 135–140, https://doi.org/10.1007/BF02534144 (1997).

48. Hämäläinen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J. & Lounasmaa, O. V. Magnetoencephalography–theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev. Mod. Phys. 65(2), 413–497, https://doi.org/10.1103/RevModPhys.65.413 (1993).

49. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/ (2013).

50. Bates, D., Maechler, M., Bolker, B. & Walker, S. Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Softw. 67(1), 1–48, https://doi.org/10.18637/jss.v067.i01 (2015).

51. Halekoh, U. & Højsgaard, S. A Kenward–Roger approximation and parametric bootstrap methods for tests in linear mixed models—the R package pbkrtest. J. Stat. Softw. 59(9), 1–30, https://doi.org/10.18637/jss.v059.i09 (2014).

52. Hox, J. J. Multilevel Analysis: Techniques and Applications. (Routledge, 2010).

53. Brown, H. & Prescott, R. Applied Mixed Models in Medicine (3rd ed.). (John Wiley & Sons, Ltd, 2014).

54. Smith, P. F. A Guerilla Guide to Common Problems in ‘Neurostatistics’: Essential Statistical Topics in Neuroscience. J. Undergrad. Neurosci. Educ. 16(1), R1–R12 (2017).

55. Rutherford, A. Introducing ANOVA and ANCOVA: a GLM approach. (Sage, 2001).

56. Barr, D. J., Levy, R., Scheepers, C. & Tily, H. J. Random effects structure for confirmatory hypothesis testing: Keep it maximal. J. Mem. Lang. 68(3), https://doi.org/10.1016/j.jml.2012.11.001 (2013).

57. Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H. & Bates, D. Balancing Type I error and power in linear mixed models. J. Mem. Lang. 94, 305–315, https://doi.org/10.1016/j.jml.2017.01.001 (2017).

58. Pinheiro, J. C. & Bates, D. M. Linear mixed-effects models: basic concepts and examples. Mixed-effects models in S and S-Plus. (Springer, 2000).

59. Luke, S. G. Evaluating significance in linear mixed-effects models in R. Behav. Res. Methods. 49(4), 1494–1502, https://doi.org/10.3758/s13428-016-0809-y (2017).

60. Schaalje, G. B., McBride, J. B. & Fellingham, G. W. Adequacy of approximations to distributions of test statistics in complex mixed linear models. J. Agric. Biol. Environ. Stat. 7(4), 512–524 (2002).

61. Albrecht, R., Suchodoletz, W. V. & Uwer, R. The development of auditory evoked dipole source activity from childhood to adulthood. Clin. Neurophysiol. 111(12), 2268–2276, https://doi.org/10.1016/S1388-2457(00)00464-8 (2000).

62. Bishop, D. V. M. & Snowling, M. J. Developmental Dyslexia and Specific Language Impairment: Same or Different? Psychol. Bull. 130(6), 858–886, https://doi.org/10.1037/0033-2909.130.6.858 (2004).

63. Melby-Lervåg, M., Lyster, S.-A. H. & Hulme, C. Phonological skills and their role in learning to read: A meta-analytic review. Psychol. Bull. 138(2), 322–352, https://doi.org/10.1037/a0026744 (2012).

64. Schönbrodt, F. D. & Perugini, M. At what sample size do correlations stabilize? J. Res. Pers. 47(5), 609–612, https://doi.org/10.1016/j.jrp.2013.05.009 (2013).

65. Button, K. S. et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14(5), 365–376, https://doi.org/10.1038/nrn3475 (2013a).

66. Helenius, P., Parviainen, T., Paetau, R. & Salmelin, R. Neural processing of spoken words in specific language impairment and dyslexia. Brain. 132(7), 1918–1927, https://doi.org/10.1093/brain/awp134 (2009).

67. Whitehouse, A. J. O. & Bishop, D. V. M. Cerebral dominance for language function in adults with specific language impairment or autism. Brain. 131(12), 3193–3200, https://doi.org/10.1093/brain/awn266 (2008).

68. de Guibert, C. et al. Abnormal functional lateralization and activity of language brain areas in typical specific language impairment (developmental dysphasia). Brain. 134(10), 3044–3058, https://doi.org/10.1093/brain/awr141 (2011).

69. Bishop, D. V. M. Cerebral Asymmetry and Language Development: Cause, Correlate, or Consequence? Science. 340(6138), 1230531, https://doi.org/10.1126/science.1230531 (2013).

70. Wilson, A. C. & Bishop, D. V. M. Resounding failure to replicate links between developmental language disorder and cerebral lateralisation. PeerJ. 6, e4217, https://doi.org/10.7717/peerj.4217 (2018).

71. Parviainen, T., Helenius, P. & Salmelin, R. Children show hemispheric differences in the basic auditory response properties. Hum. Brain Mapp. Advance online publication, https://doi.org/10.1002/hbm.24553 (2019).

72. Pantev, C., Ross, B., Berg, P., Elbert, T. & Rockstroh, B. Study of the human auditory cortices using a whole-head magnetometer: left vs. right hemisphere and ipsilateral vs. contralateral stimulation. Audiol. Neurotol. 3(2–3), 183–190, https://doi.org/10.1159/000013789 (1998).

73. Salmelin, R. et al. Native language, gender, and functional organization of the auditory cortex. Proc. Natl. Acad. Sci. USA 96(18), 10460–10465, https://doi.org/10.1073/pnas.96.18.10460 (1999).

74. Jin, C. Y., Ozaki, I., Suzuki, Y., Baba, M. & Hashimoto, I. Dynamic movement of N100m current sources in auditory evoked fields: Comparison of ipsilateral versus contralateral responses in human auditory cortex. Neurosci. Res. 60(4), 397–405, https://doi.org/10.1016/j.neures.2007.12.008 (2008).

75. Howard, M. F. & Poeppel, D. Hemispheric asymmetry in mid and long latency neuromagnetic responses to single clicks. Hear. Res. 257(1–2), 41–52, https://doi.org/10.1016/j.heares.2009.07.010 (2009).

76. Poeppel, D. The analysis of speech in different temporal integration windows: cerebral lateralization as “asymmetric sampling in time”. Speech Commun. 41(1), 245–255, https://doi.org/10.1016/S0167-6393(02)00107-3 (2003).

77. Poeppel, D., Idsardi, W. J. & van Wassenhove, V. Speech perception at the interface of neurobiology and linguistics. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 363(1493), 1071–1086, https://doi.org/10.1098/rstb.2007.2160 (2008).

78. Goswami, U. A temporal sampling framework for developmental dyslexia. Trends Cogn. Sci. 15(1), 3–10, https://doi.org/10.1016/j.tics.2010.10.001 (2011).

79. Lee, J. C., Nopoulos, P. C. & Bruce Tomblin, J. Abnormal subcortical components of the corticostriatal system in young adults with DLI: A combined structural MRI and DTI study. Neuropsychologia. 51(11), 2154–2161, https://doi.org/10.1016/j.neuropsychologia.2013.07.011 (2013).

80. Herbert, M. R. et al. Brain asymmetries in autism and developmental language disorder: a nested whole-brain analysis. Brain. 128(1), 213–226, https://doi.org/10.1093/brain/awh330 (2005).

81. Jäncke, L., Siegenthaler, T., Preis, S. & Steinmetz, H. Decreased white-matter density in a left-sided fronto-temporal network in children with developmental language disorder: Evidence for anatomical anomalies in a motor-language network. Brain Lang. 102(1), 91–98, https://doi.org/10.1016/j.bandl.2006.08.003 (2007).

82. Shaw, M. E., Hämäläinen, M. S. & Gutschalk, A. How anatomical asymmetry of human auditory cortex can lead to a rightward bias in auditory evoked fields. NeuroImage. 74, 22–29, https://doi.org/10.1016/j.neuroimage.2013.02.002 (2013).

83. Płoński, P. et al. Multi-parameter machine learning approach to the neuroanatomical basis of developmental dyslexia. Hum. Brain Mapp. 38(2), 900–908, https://doi.org/10.1002/hbm.23426 (2017).

84. Penna, S. D. et al. Lateralization of Dichotic Speech Stimuli is Based on Specific Auditory Pathway Interactions: Neuromagnetic Evidence. Cereb. Cortex. 17(10), 2303–2311, https://doi.org/10.1093/cercor/bhl139 (2007).

85. Fujiki, N., Jousmäki, V. & Hari, R. Neuromagnetic Responses to Frequency-Tagged Sounds: A New Method to Follow Inputs from Each Ear to the Human Auditory Cortex during Binaural Hearing. J. Neurosci. 22(3), RC205 (2002).

86. Lamminmäki, S., Massinen, S., Nopola-Hemmi, J., Kere, J. & Hari, R. Human ROBO1 Regulates Interaural Interaction in Auditory Pathways. J. Neurosci. 32(3), 966–971, https://doi.org/10.1523/JNEUROSCI.4007-11.2012 (2012).

87. Bates, T. C. et al. Genetic Variance in a Component of the Language Acquisition Device: ROBO1 Polymorphisms Associated with Phonological Buffer Deficits. Behav. Genet. 41(1), 50–57, https://doi.org/10.1007/s10519-010-9402-9 (2011).

88. Friston, K. Ten ironic rules for non-statistical reviewers. NeuroImage. 61(4), 1300–1310, https://doi.org/10.1016/j.neuroimage.2012.04.018 (2012).

89. Button, K. S. et al. Confidence and precision increase with high statistical power. Nat. Rev. Neurosci. 14(8), 585–586, https://doi.org/10.1038/nrn3475-c4 (2013b).

90. Quinlan, P. T. Misuse of power: in defence of small-scale science. Nat. Rev. Neurosci. 14(8), 585, https://doi.org/10.1038/nrn3475-c1 (2013).

91. Ashton, J. C. Experimental power comes from powerful theories — the real problem in null hypothesis testing. Nat. Rev. Neurosci. 14(8), 585, https://doi.org/10.1038/nrn3475-c2 (2013).

92. Bacchetti, P. Small sample size is not the real problem. Nat. Rev. Neurosci. 14(8), 585, https://doi.org/10.1038/nrn3475-c3 (2013).

93. Smith, P. L. & Little, D. R. In defense of small-N design. Psychon. Bull. Rev. 25, 2083–2101, https://doi.org/10.3758/s13423-018-1451-8 (2018).

Acknowledgements

We are grateful to Päivi Sivonen, Leena Isotalo and Mia Illman for assistance in the MEG recordings, to Mari Laine for behavioral testing of the children, to Riitta Salmelin and Timo Kauppila for valuable comments when planning the experiments and Twan Hendrikx for fruitful discussion on the statistical analysis. This work was supported by EU project ChildBrain (Horizon2020 Marie Skłodowska-Curie Action (MSCA) Innovative Training Network (ITN) – European Training Network (ETN), grant agreement no. 641652) and the Academy of Finland (grant 114794 to PH).

Author information

Contributions

T.P. and P.H. designed the research and performed the experiments, S.B. analyzed the data and wrote the main manuscript, S.B. and S.K. did the statistical analysis. All authors reviewed the manuscript.

Corresponding author

Correspondence to Sam van Bijnen.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

van Bijnen, S., Kärkkäinen, S., Helenius, P. et al. Left hemisphere enhancement of auditory activation in language impaired children. Sci Rep 9, 9087 (2019). https://doi.org/10.1038/s41598-019-45597-y
