We, and other animals, can generally pinpoint the source of a sound in space regardless of how loud it is. A study involving experimentation and computer modelling reveals how our brains perform this clever task.
Information flows from one nerve cell to the next at highly specialized points of contact called synapses. All synapses exhibit depression1 — a decrease in the strength of the connection that occurs after rapid and repeated use — and this might seem like a bad idea, a design fault that prevents synapses from keeping up with the demands placed on them. But on page 66 of this issue, Cook and colleagues2 show that depression is a feature, not a bug, in auditory synaptic transmission. In fact, synaptic depression is used to compute sound levels and to correct neuronal signals, so that the brain's representation of the source of a sound in space is not confounded by the intensity of the sound.
Comprehending the contribution made by Cook et al.2 depends on understanding one of the main ways in which we determine the spatial source of sounds3. You may have noticed that when you hear a series of noises whose source you cannot locate — a bird chirping, for example — you automatically turn your head as you listen. This is because you determine the location of the sound by using the tiny differences in the time it takes for the sounds to reach your two ears. And you turn your head to maximize this difference. If you are facing the source of a sound, the signal arrives simultaneously at both ears, but if the sound originates from, say, your left side, the sound pressure wave arrives a little earlier at your left ear than at your right.
The maximum difference in arrival times depends on head size, but is less than a couple of milliseconds even for elephants, and less than half a millisecond for us. However, we have clever neural machinery to compute the sound's spatial source from this slight difference. This computation takes place in the medial olivary nucleus in the brains of mammals (in birds, the nucleus laminaris). Neurons stimulated directly by each ear feed into this brain region. There, the tiny time differences in the firing of the ear-stimulated neurons are turned into a 'place code', whereby interaural time differences are represented by which olivary neurons are responding and by how much3.
The trick the brain uses to change time differences to a place code involves delaying the neural signals from one ear relative to the other by different amounts for different olivary neurons, and then having these neurons fire an impulse only if the signals arrive simultaneously from both ears. That is, the medial olivary nucleus uses delay lines (conduction delays over nerve 'wires', or axons, of different lengths) and coincidence detection (firing of olivary neurons only when impulses arrive from both ears simultaneously) to construct a place code for submillisecond interaural time differences. This coincidence detection involves some clever circuitry4.
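The delay-line and coincidence-detection scheme can be sketched numerically. The snippet below is a toy illustration with made-up parameters, not a model of the real olivary circuitry: each hypothetical detector neuron adds a different fixed delay to the left-ear signal and responds within a Gaussian coincidence window, so the detector whose delay matches the interaural time difference responds most strongly, yielding a place code.

```python
import numpy as np

def place_code_response(itd_us, neuron_delays_us, window_us=100.0):
    """Response of an array of coincidence detectors. Each hypothetical
    detector delays the left-ear input by a different fixed amount and
    responds within a Gaussian coincidence window, so it fires most
    strongly when its delay matches the interaural time difference."""
    mismatch_us = np.abs(neuron_delays_us - itd_us)
    return np.exp(-((mismatch_us / window_us) ** 2))

# Eleven model neurons with delays spanning 0-500 microseconds
delays = np.linspace(0.0, 500.0, 11)
response = place_code_response(200.0, delays)   # a sound with a 200-us ITD
best = delays[np.argmax(response)]              # place code: peak at 200 us
```

Reading out which neuron responds most, rather than how much any one neuron responds, is what makes this a place code.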
In addition, impulses produced by the ear-stimulated neurons in response to a high-frequency tone are 'time-locked' to the tone, in the sense that each impulse is produced at a particular phase of the sound pressure wave. If this time-locking did not occur, the coincidence-detection scheme described above would not work, because the brain needs precise information about the relative arrival times of sounds at the two ears. But a nerve impulse is not generated on each cycle of the sound pressure wave; rather, many cycles are skipped in any particular nerve fibre, so that only by looking across a population of inputs can the brain know when each wave arrived. As sound intensity increases, however, nerves fire on more and more of the cycles. So, the average number of nerve impulses per second depends on sound intensity, whereas the timing of each impulse, relative to the phase of the sound wave, is independent of intensity.
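This time-locking can be simulated minimally (parameters here are illustrative, not physiological): spikes occur only at one fixed phase of the sound pressure wave, and each cycle fires with a probability that stands in for sound intensity, so louder sounds raise the spike rate without shifting spike timing.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_locked_spikes(freq_hz, duration_s, p_fire, phase=0.25):
    """Spike times locked to a fixed phase of each sound-wave cycle.
    Each cycle independently fires with probability p_fire (a stand-in
    for sound intensity); the remaining cycles are skipped."""
    cycle_s = 1.0 / freq_hz
    cycle_starts = np.arange(0.0, duration_s, cycle_s)
    fired = rng.random(cycle_starts.size) < p_fire
    return cycle_starts[fired] + phase * cycle_s

quiet = phase_locked_spikes(4000.0, 1.0, p_fire=0.1)  # soft: most cycles skipped
loud = phase_locked_spikes(4000.0, 1.0, p_fire=0.8)   # loud: most cycles fire
# The spike rate rises with intensity, but every spike still falls at
# the same phase (0.25) of the sound pressure wave.
```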
The fact that higher-intensity sounds produce more nerve impulses causes a potentially serious problem for the neuronal mechanisms used to locate the source of the sound. Each olivary neuron doing the coincidence detection receives inputs from many neurons stimulated by each ear, so that the system still works even when any particular axon fails to produce a nerve impulse on a specific cycle of the sound wave. But if too many axons from one ear produce impulses on a particular cycle — something that could happen with loud sounds — then the coincidence of the many inputs from just that ear would be capable of producing an output in the medial olivary nucleus (or the nucleus laminaris), and the localization mechanism would break down. Behaviourally, though, we and other animals can pinpoint a sound's spatial source pretty accurately irrespective of its intensity. How do the coincidence detectors keep from being overwhelmed by loud sounds? This is the question that Cook et al.2 asked.
Because the problem with loud sounds is that they cause nerve impulses to be produced on too many cycles of the sound pressure wave, Cook et al. reasoned that an easy way for the brain to compensate for this would be to decrease the amount of neurotransmitter released in response to each impulse. Neurotransmitters are the chemicals that pass signals at synapses from one neuron (here, an ear-stimulated neuron) to the next (an olivary neuron). In other words, if the synapses were correctly depressed with use, this could compensate for the greater number of impulses.
To test this idea, Cook et al. characterized the properties of depression at the input synapses of the nucleus laminaris in chicks, and found that these synapses do indeed decrease in strength with use. But can this depression actually compensate for the too-large inputs that result from loud sounds? To find out, the authors constructed a biophysically realistic computer model of neurons from the nucleus laminaris, and incorporated the synaptic depression as found in their experiments. They could then examine the effect of synaptic depression on coincidence detection by simply turning depression on or off in the model. Indeed, Cook et al. found that coincidence detection in their model works well, irrespective of sound loudness, when synaptic depression is present, but not when it is absent. Synaptic depression is a feature of neural computation, not a bug.
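The compensation can be sketched with a standard resource-depletion model of synaptic depression (an illustrative simplification, not Cook and colleagues' biophysical model): each impulse releases a fixed fraction of the available transmitter resources, which then recover exponentially between impulses. At steady state the total synaptic drive (impulse rate times transmitter per impulse) saturates near the reciprocal of the recovery time constant, so a much louder sound no longer delivers a proportionally stronger drive.

```python
import numpy as np

def steady_state_drive(rate_hz, U=0.5, tau_s=0.1):
    """Steady-state synaptic drive for a depressing synapse: each impulse
    releases a fraction U of the available transmitter resources, which
    recover exponentially with time constant tau_s between impulses."""
    dt = 1.0 / rate_hz                              # interspike interval
    decay = np.exp(-dt / tau_s)
    resources = (1.0 - decay) / (1.0 - (1.0 - U) * decay)
    return rate_hz * U * resources                  # rate x transmitter/impulse

low = steady_state_drive(50.0)     # soft sound: low input rate
high = steady_state_drive(500.0)   # loud sound: tenfold higher rate
# Without depression the drive would grow tenfold; with depression it
# saturates near 1/tau_s, so loudness barely changes the total drive.
```

In this simplified model a tenfold increase in input rate changes the steady-state drive by well under a factor of two, which is the flavour of compensation that keeps the coincidence detectors from being swamped by loud sounds.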
Short-term synaptic plasticity — the increase and decrease of synaptic strength that depends on the history of synaptic use — is a prominent feature of all synapses1, and can easily cause several-fold changes in synaptic strength. Although various authors have proposed that short-term plasticity provides a functionally important dynamic filter for arriving nerve impulses (see the first four citations in ref. 2), the idea is difficult to test and had remained largely speculative. Cook et al.2 have provided the first documented use for such plasticity. Now we need to understand the mechanisms through which the correct properties of synaptic plasticity are set up in development and then adjusted throughout life.
1. Zucker, R. S. Annu. Rev. Neurosci. 12, 13–31 (1989).
2. Cook, D. L., Schwindt, P. C., Grande, L. A. & Spain, W. J. Nature 421, 66–70 (2003).
3. Goldberg, J. M. & Brown, P. B. J. Neurophysiol. 32, 613–636 (1969).
4. Brand, A., Behrend, O., Marquardt, T., McAlpine, D. & Grothe, B. Nature 417, 543–547 (2002).