An acoustic key to eight languages/dialects: Factor analyses of critical-band-filtered speech

The peripheral auditory system functions as a frequency analyser, often modelled as a bank of non-overlapping band-pass filters called critical bands; about 20 bands are needed to simulate the frequency resolution of the ear over the ordinary frequency range of speech (up to 7,000 Hz). A far smaller number of filters has proved sufficient, however, to re-synthesise intelligible sentences from the power fluctuations of the speech signals passing through them; nevertheless, the number and frequency ranges of the bands needed for efficient speech communication remain unknown. We derived four common frequency bands, covering approximately 50–540, 540–1,700, 1,700–3,300, and above 3,300 Hz, from factor analyses of spectral fluctuations in eight different spoken languages/dialects. The analyses robustly yielded three factors common to all the languages investigated: the low & mid-high factor, related to the two separate frequency ranges of 50–540 and 1,700–3,300 Hz; the mid-low factor, related to the range of 540–1,700 Hz; and the high factor, related to the range above 3,300 Hz. This consistency across different languages/dialects suggests a language universal.


Results
Four blocks of critical bands, i.e., four frequency bands, consistently appeared in both the three-factor (Fig. 1a) and the four-factor (Fig. 1b) results: one of the factors obtained in the three-factor analysis was bimodal, so both the three- and four-factor analyses yielded four frequency bands. Two-, five-, and six-factor analyses gave rather obscure results that were inconsistent among the languages/dialects (Supplementary Fig. S2). The boundary frequencies dividing the whole frequency range into the four frequency bands are represented by the vertical orange lines in Fig. 1. Shifting the cut-off frequencies of the filter bank upwards by half a critical band (see Methods: Signal processing and analyses) had negligible effects on the three-factor results (Fig. 1a; compare the broken vs. continuous curves of the same colours). The three-factor results (Fig. 1a) exhibited greater agreement across the different languages/dialects than the four-factor results (Fig. 1b). The cumulative contributions, representing the proportions of variance explained by the combinations of specified factors, were about 7% higher in the four-factor analysis (Fig. 1b), but the locations of the factor peaks were very similar between the three- and four-factor analyses. The discrepancies between languages/dialects observed in the lowest frequency band of the four-factor analysis are likely to have been caused by the inclusion of samples from speakers with relatively high fundamental frequencies, which could make the frequency components too sparse in the spectra. Including more than four factors resulted in cumulative contributions larger than 50%; however, the added factors were mainly consumed in capturing resolved harmonics in the low frequency region (Supplementary Fig. S2d,e), a region already covered by the peak on the lower frequency side of the bimodal factor (the low & mid-high factor) in the three-factor results. It therefore seems optimal to adopt the three-factor results for our present purpose, which is to determine the number and frequency ranges of the frequency bands for efficient speech communication.

Figure 1 caption: The cumulative contributions ranged from 33–41% (a) and from 40–47% (b), depending on the analysed data set and the utilised filters. One division of the horizontal axis corresponds to 0.5 critical bandwidth, with the two sets of centre frequencies alternating. Orange vertical lines represent schematic frequency boundaries estimated from the crossover frequencies of the curves.
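The boundary frequencies in Fig. 1 are estimated from the crossover points of the factor-loading curves. The step can be sketched as follows; this is a minimal illustration with made-up loading values and centre frequencies, and the function name is ours, not taken from the study:

```python
import numpy as np

def crossover_frequency(freqs, load_a, load_b):
    """Estimate the frequency at which two factor-loading curves cross,
    by linear interpolation at the first sign change of their difference."""
    d = np.asarray(load_a) - np.asarray(load_b)
    idx = np.where(np.diff(np.sign(d)) != 0)[0]
    if idx.size == 0:
        return None  # the curves never cross in the analysed range
    i = idx[0]
    # interpolate between the two bracketing centre frequencies
    t = d[i] / (d[i] - d[i + 1])
    return freqs[i] + t * (freqs[i + 1] - freqs[i])

# toy loading curves on hypothetical critical-band centre frequencies
freqs = np.array([150.0, 250.0, 350.0, 450.0, 570.0, 700.0])
low_factor = np.array([0.8, 0.7, 0.6, 0.4, 0.2, 0.1])
mid_low_factor = np.array([0.1, 0.2, 0.3, 0.5, 0.7, 0.8])
print(crossover_frequency(freqs, low_factor, mid_low_factor))  # ~425 Hz here
```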

Discussion
It is worth noting that spoken sentences can be recognised even when they are conveyed only by the power fluctuations of four frequency bands without any temporal fine structure, i.e., through noise-vocoded speech 12–18. The number and location of these frequency bands (Fig. 1) are supported both by the present physical analysis and by perceptual studies showing high intelligibility of noise-vocoded speech filtered into nearly the same 18 or very similar 12–14 frequency bands (Supplementary Audios S1 and S2, and Fig. S3). The four-band division must have some value in speech processing if it can be applied to several languages/dialects of different language families. Our own observations showed that the frequency boundaries or factors derived with the present statistical technique were suitable for synthesising noise-vocoded speech in Japanese 18,19 and German 18. There seems to be a connection between the present frequency boundaries and past results of speech-filtering investigations. The second boundary frequency, 1,700 Hz, is located near the centre of the range of the crossover frequency (typically from 1,550 to 1,900 Hz) 20–23, which had been derived as the balancing point of intelligibility between highpass- and lowpass-filtered speech. It is also to be noted that the frequency response of the telephone system is standardised to cover the range from 300 to 3,400 Hz. This frequency range covers at least a part of each frequency band in Fig. 1, presumably enabling the analogue telephone line to convey speech sounds all over the world with minimum cost and reasonable intelligibility.
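The principle of noise-vocoded speech mentioned above can be sketched in code: split the signal into bands, extract each band's slowly varying power envelope, and use that envelope to modulate band-limited noise. This is an illustrative reconstruction with Butterworth filters and a 45-Hz envelope smoother, not the exact processing used in the cited studies:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, edges):
    """Replace each band of `signal` with noise carrying that band's envelope."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        # power envelope: square, lowpass at 45 Hz, back to amplitude
        smooth = butter(2, 45, fs=fs, output="sos")
        envelope = np.sqrt(sosfiltfilt(smooth, band**2).clip(min=0))
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += envelope * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# a crude speech-like test tone: 220-Hz carrier with a 4-Hz envelope
speechlike = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 4 * t))
edges = [50, 540, 1700, 3300, 7000]  # the four bands from the present analysis
vocoded = noise_vocode(speechlike, fs, edges)
print(vocoded.shape)  # (16000,)
```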
We designated the factors obtained in the three-factor analysis as the low & mid-high factor, which appeared in two frequency ranges, around 300 and around 2,200 Hz; the mid-low factor, which appeared around 1,100 Hz; and the high factor, which appeared in the range above 3,300 Hz. These factors appeared with surprising resemblance across the eight different languages/dialects of three different language families, and thus they are strong candidates for universal components of spoken languages/dialects, i.e., an acoustic language universal. An initial extension of the present analysis to infant utterances has been explored by a research team including the present authors 24. One way to examine how the factors relate to speech perception is to test the correspondence between factor scores and phonemic categories. This line of investigation on speech sounds in British English has been started, as described in a separate paper.

Methods
The following facts justify the use of the PCA-based technique in the present investigation. In order to recognise speech in quiet, it is not always necessary to fully utilise the frequency-resolution properties of the basilar membrane: it is possible to accurately recognise speech consisting of power fluctuations in only four frequency bands (noise-vocoded speech 12). Although this finding has been replicated in a number of studies 13–17, the frequency cut-offs used to create such frequency bands have not been derived from systematic research. One of the goals of the present study was therefore to provide the characteristics of the frequency channels that best represent the speech signal.
Speech samples. Speech samples were extracted from a speech database 11 (16-kHz sampling and 16-bit linear quantisation), upon the condition that the same set of sentences was spoken by all the speakers within each language/dialect. The samples were edited to eliminate irrelevant silent periods and noises. The details of the samples are shown in Table 1.
Signal processing and analyses. Two banks (A and B) of 20 critical-band filters were constructed (Supplementary Table S1). Their centre frequencies ranged from 75 to 5,800 Hz (bank A) and from 100 to 6,400 Hz (bank B), and their overall passbands were 50–6,400 and 50–7,000 Hz, respectively. These two specific filter banks were made in order to check whether the analyses suffered from any artefact caused by the choice of cut-off frequencies. The cut-off frequencies of each filter in bank A were determined according to Zwicker and Terhardt 6, except for the lowest cut-off frequency (50 Hz). The cut-off frequencies in bank B were shifted by half a critical band from those in bank A, again except for the lowest cut-off frequency. All subsequent analyses were performed separately for the two filter banks. Each filter was constructed as a concatenated convolution of an upward frequency glide and its temporal reversal. Transition regions were 100 Hz wide, with out-of-band attenuations of 50–60 dB. Each filter output was squared, smoothed with a Gaussian window of σ = 5 ms (equivalent to lowpass filtering with a 45-Hz cut-off), and sampled every millisecond. Because our analyses primarily focused on relatively slow movements of the vocal tract (amplitude envelopes) rather than fast movements of the vocal folds (temporal fine structure), power fluctuations were calculated by squaring and smoothing the filter outputs instead of using the outputs (amplitudes) themselves. Determining correlation coefficients for every possible combination of the power fluctuations yielded a correlation matrix for each data set, which was fed into the PCA. That is, a correlation-based (normalised) analysis was selected rather than a covariance-based one, in order to prevent the influence of unbalanced weighting between frequency bands of unequal power levels. After the PCA was performed, the first 2–6 principal components were rotated with varimax rotation to yield the factors shown in Supplementary Fig. S2 (the terminology is based on convention).
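The analysis chain described above (band power fluctuations, correlation matrix, PCA, varimax rotation) can be sketched on synthetic data. The varimax routine below is a standard textbook implementation, and the toy "power fluctuations" stand in for the database samples; neither reproduces the study's actual code:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Varimax rotation of a loadings matrix (bands x factors), Kaiser's criterion."""
    L = loadings.copy()
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        B = L @ R
        U, s, Vt = np.linalg.svd(
            L.T @ (B**3 - B @ np.diag((B**2).sum(axis=0)) / p)
        )
        R = U @ Vt  # orthogonal rotation maximising the varimax criterion
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return L @ R

# toy "power fluctuations": 20 critical bands x 3000 one-ms frames,
# generated from 3 latent envelopes plus noise
rng = np.random.default_rng(1)
latent = rng.standard_normal((3, 3000))
mixing = rng.standard_normal((20, 3))
power = mixing @ latent + 0.1 * rng.standard_normal((20, 3000))

# correlation-based (normalised) PCA: eigendecomposition of the 20 x 20
# correlation matrix of the band power fluctuations
corr = np.corrcoef(power)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1][:3]            # keep the first 3 components
loadings = eigvec[:, order] * np.sqrt(eigval[order])
rotated = varimax(loadings)
print(rotated.shape)  # (20, 3)
```

Peaks in the columns of `rotated` across the 20 bands would correspond to the factor curves plotted per band in Fig. 1.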