Letter

Selective cortical representation of attended speaker in multi-talker speech perception

Nature volume 485, pages 233–236 (10 May 2012)

Abstract

Humans possess a remarkable ability to attend to a single speaker’s voice in a multi-talker background1,2,3. How the auditory system manages to extract intelligible speech under such acoustically complex and adverse listening conditions is not known, and, indeed, it is not clear how attended speech is internally represented4,5. Here, using multi-electrode surface recordings from the cortex of subjects engaged in a listening task with two simultaneous speakers, we demonstrate that population responses in non-primary human auditory cortex encode critical features of attended speech: speech spectrograms reconstructed based on cortical responses to the mixture of speakers reveal the salient spectral and temporal features of the attended speaker, as if subjects were listening to that speaker alone. A simple classifier trained solely on examples of single speakers can decode both attended words and speaker identity. We find that task performance is well predicted by a rapid increase in attention-modulated neural selectivity across both single-electrode and population-level cortical responses. These findings demonstrate that the cortical representation of speech does not merely reflect the external acoustic environment, but instead gives rise to the perceptual aspects relevant for the listener’s intended goal.
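
The abstract compresses two analysis steps: spectrograms are reconstructed from population responses through an optimal linear mapping (refs 17, 18, 19), and attended words and speaker identity are decoded with a regularized least-squares classifier trained only on single-speaker examples (ref. 21). The sketch below illustrates both steps under simplifying assumptions; it is not the authors' code, and the lag window, regularization constants, array shapes and function names are all illustrative choices.

    import numpy as np

    def lagged_design(R, lags):
        """Stack time-lagged copies of neural responses R (n_electrodes x
        n_times) into a design matrix of shape (n_times, n_elec * n_lags)."""
        n_elec, n_t = R.shape
        X = np.empty((n_t, n_elec * len(lags)))
        for j, lag in enumerate(lags):
            # Circular shift; a crude approximation at the trial edges.
            X[:, j * n_elec:(j + 1) * n_elec] = np.roll(R, lag, axis=1).T
        return X

    def fit_reconstruction_filters(R, S, lags, alpha=1.0):
        """Ridge-regularized linear mapping from lagged responses to the
        spectrogram S (n_freqs x n_times): G = (X'X + aI)^-1 X'S'."""
        X = lagged_design(R, lags)
        return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ S.T)

    def reconstruct_spectrogram(R, G, lags):
        """Apply fitted filters to new (e.g. mixture-driven) responses;
        returns an n_freqs x n_times spectrogram estimate."""
        return (lagged_design(R, lags) @ G).T

    class RLSClassifier:
        """One-vs-all regularized least-squares classifier: ridge regression
        onto +/-1 targets, decision by argmax over classes (cf. ref. 21)."""
        def __init__(self, lam=1.0):
            self.lam = lam

        def fit(self, X, y):
            # X: n_samples x n_features; y: n_samples class labels.
            self.classes_ = np.unique(y)
            Y = np.where(y[:, None] == self.classes_[None, :], 1.0, -1.0)
            self.W = np.linalg.solve(
                X.T @ X + self.lam * np.eye(X.shape[1]), X.T @ Y)
            return self

        def predict(self, X):
            return self.classes_[np.argmax(X @ self.W, axis=1)]

Applied in the spirit of the paper, the reconstruction filters and the classifier would be fitted on single-speaker trials and then applied to responses recorded during the two-speaker mixture; the central finding is that the resulting reconstructions resemble the attended speaker heard alone.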


References

  1. Cherry, E. C. Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979 (1953).

  2. Shinn-Cunningham, B. G. Object-based auditory and visual attention. Trends Cogn. Sci. 12, 182–186 (2008).

  3. Bregman, A. S. Auditory Scene Analysis: The Perceptual Organization of Sound (MIT Press, 1994).

  4. Kerlin, J. R., Shahin, A. J. & Miller, L. M. Attentional gain control of ongoing cortical speech representations in a "cocktail party". J. Neurosci. 30, 620–628 (2010).

  5. Besle, J. et al. Tuning of the human neocortex to the temporal dynamics of attended events. J. Neurosci. 31, 3176–3185 (2011).

  6. Bee, M. A. & Micheyl, C. The cocktail party problem: what is it? How can it be solved? And why should animal behaviorists study it? J. Comp. Psychol. 122, 235–252 (2008).

  7. Shinn-Cunningham, B. G. & Best, V. Selective attention in normal and impaired hearing. Trends Amplif. 12, 283–299 (2008).

  8. Scott, S. K., Rosen, S., Beaman, C. P., Davis, J. P. & Wise, R. J. S. The neural processing of masked speech: evidence for different mechanisms in the left and right temporal lobes. J. Acoust. Soc. Am. 125, 1737–1743 (2009).

  9. Elhilali, M., Xiang, J., Shamma, S. A. & Simon, J. Z. Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biol. 7, e1000129 (2009).

  10. Chang, E. F. et al. Categorical speech representation in human superior temporal gyrus. Nature Neurosci. 13, 1428–1432 (2010).

  11. Crone, N. E., Boatman, D., Gordon, B. & Hao, L. Induced electrocorticographic gamma activity during auditory perception. Clin. Neurophysiol. 112, 565–582 (2001).

  12. Steinschneider, M., Fishman, Y. I. & Arezzo, J. C. Spectrotemporal analysis of evoked and induced electroencephalographic responses in primary auditory cortex (A1) of the awake monkey. Cereb. Cortex 18, 610–625 (2008).

  13. Scott, S. K. & Johnsrude, I. S. The neuroanatomical and functional organization of speech perception. Trends Neurosci. 26, 100–107 (2003).

  14. Hackett, T. A. Information flow in the auditory cortical network. Hear. Res. 271, 133–146 (2011).

  15. Bolia, R. S., Nelson, W. T., Ericson, M. A. & Simpson, B. D. A speech corpus for multitalker communications research. J. Acoust. Soc. Am. 107, 1065–1066 (2000).

  16. Brungart, D. S. Informational and energetic masking effects in the perception of two simultaneous talkers. J. Acoust. Soc. Am. 109, 1101–1109 (2001).

  17. Mesgarani, N., David, S. V., Fritz, J. B. & Shamma, S. A. Influence of context and behavior on stimulus reconstruction from neural activity in primary auditory cortex. J. Neurophysiol. 102, 3329–3339 (2009).

  18. Bialek, W., Rieke, F., de Ruyter van Steveninck, R. R. & Warland, D. Reading a neural code. Science 252, 1854–1857 (1991).

  19. Pasley, B. N. et al. Reconstructing speech from human auditory cortex. PLoS Biol. 10, e1001251 (2012).

  20. Garofolo, J. S. et al. TIMIT Acoustic-Phonetic Continuous Speech Corpus (Linguistic Data Consortium, 1993).

  21. Rifkin, R., Yeo, G. & Poggio, T. Regularized least-squares classification. Nato Science Series Sub Series III Computer and Systems Sciences 190, 131–154 (2003).

  22. Formisano, E., De Martino, F., Bonte, M. & Goebel, R. "Who" is saying "what"? Brain-based decoding of human voice and speech. Science 322, 970–973 (2008).

  23. Staeren, N., Renvall, H., De Martino, F., Goebel, R. & Formisano, E. Sound categories are represented as distributed patterns in the human auditory cortex. Curr. Biol. 19, 498–502 (2009).

  24. Shamma, S. A., Elhilali, M. & Micheyl, C. Temporal coherence and attention in auditory scene analysis. Trends Neurosci. 34, 114–123 (2011).

  25. Darwin, C. J. Auditory grouping. Trends Cogn. Sci. 1, 327–333 (1997).

  26. Warren, R. M. Perceptual restoration of missing speech sounds. Science 167, 392–393 (1970).

  27. Kidd, G. Jr, Arbogast, T. L., Mason, C. R. & Gallun, F. J. The advantage of knowing where to listen. J. Acoust. Soc. Am. 118, 3804–3815 (2005).

  28. Two protocols comparing human and machine phonetic discrimination performance in conversational speech. INTERSPEECH 1630–1633 (2008).

  29. Cooke, M., Hershey, J. R. & Rennie, S. J. Monaural speech separation and recognition challenge. Comput. Speech Lang. 24, 1–15 (2010).


Acknowledgements

The authors would like to thank A. Ren for technical help, and C. Micheyl, S. Shamma and C. Schreiner for critical discussion and reading of the manuscript. E.F.C. was funded by National Institutes of Health grants R00-NS065120, DP2-OD00862, R01-DC012379, and the Ester A. and Joseph Klingenstein Foundation.

Author information

Affiliations

  1. Departments of Neurological Surgery and Physiology, UCSF Center for Integrative Neuroscience, University of California, San Francisco, California 94143, USA

    • Nima Mesgarani
    •  & Edward F. Chang


Contributions

N.M. and E.F.C. designed the experiment, collected the data, evaluated results and wrote the manuscript.

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to Edward F. Chang.

Supplementary information

PDF files

  1. Supplementary Figures: This file contains Supplementary Figures 1–3.

About this article

DOI

https://doi.org/10.1038/nature11020
