Neuron 98, 630–644.e16 (2018)

One of the goals of neuroscience is to predict how the brain will respond to input. This aim can be achieved by building models of brain function and testing whether they behave as people do. Once validated in this way, such models can, in turn, generate insights into how the human brain responds to complex stimuli, such as speech and music.


In their recent paper, Alexander Kell, of the Massachusetts Institute of Technology, and colleagues present a computational model of how the human brain responds to everyday sounds. They find that their neural network ‘understands’ speech and music much as humans do; it recognizes words and genres as well as human listeners do, and it makes human-like errors. They then use this model to address an important question in auditory neuroscience: the extent to which the auditory system is organized hierarchically. Their model consists of a cascade of processing stages, and comparing these stages with the brain yields a quantitative signature of cortical hierarchy, with different model stages best replicating processing in different portions of the auditory cortex.
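
As a rough illustration of this stage-to-region comparison, the sketch below fits a cross-validated ridge regression from each model stage's activations to each brain voxel and records which stage predicts that voxel best. Everything in it, from the stage names and sizes to the responses themselves, is a random placeholder standing in for real network activations and fMRI data; it conveys the logic of the analysis, not the authors' exact pipeline.

```python
# Illustrative sketch only: which model stage best predicts each voxel?
# All data below are random placeholders; in practice the feature matrices
# would come from a trained audio network and the responses from fMRI.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_sounds = 165                                          # number of natural-sound stimuli (placeholder)
stage_dims = {"early": 64, "middle": 256, "late": 512}  # hypothetical stage sizes
n_voxels = 20                                           # small set of simulated voxels

# Hypothetical stage activations: one (sounds x units) feature matrix per stage.
activations = {name: rng.standard_normal((n_sounds, dim))
               for name, dim in stage_dims.items()}

# Simulated voxel responses (sounds x voxels); real data would be fMRI responses.
voxel_responses = rng.standard_normal((n_sounds, n_voxels))

best_stage = []
for v in range(n_voxels):
    scores = {}
    for name, feats in activations.items():
        # Cross-validated R^2 of a ridge regression from this stage's features
        # to the voxel's responses across sounds.
        scores[name] = cross_val_score(Ridge(alpha=1.0), feats,
                                       voxel_responses[:, v],
                                       cv=5, scoring="r2").mean()
    best_stage.append(max(scores, key=scores.get))

# If primary auditory voxels were best fit by early stages and non-primary
# voxels by late stages, that pattern would be the quantitative signature
# of hierarchy described in the text.
print(best_stage)
```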

In summary, this study uses a computational model to uncover principles of how the human brain processes the range of sounds we hear every day.