Noise limits the efficiency of information transfer. But correlations of the intervals between signal pulses can reduce low-frequency noise and thereby increase the transfer of information.
A nerve cell is an example of a system whose excitation codes and transmits dynamic information. When a stimulus exceeds a given threshold, the neuron responds by generating an action potential, so that over time it fires an irregular sequence of all-or-nothing spikes. The ionic mechanisms that generate a single spike are well understood, but there are many possible mechanisms for coding information in the spike train, and our knowledge of them ranges from the well established to the speculative. In Physical Review Letters, Chacron et al.1 show, using a simple action-potential model, that correlations between successive interspike intervals can shape the noise spectrum, and that this shaping can increase the transmission of information because it reduces the noise at low frequencies.
Information theory, as commonly applied to neural coding, is essentially linear2: the response is assumed to be directly proportional to the stimulus. So it might be expected that the constraints imposed by interval correlations would reduce the transmission of information. However, spike generation is a strongly nonlinear, excitable process with a threshold, and such systems can behave counterintuitively. A good example is stochastic resonance3, in which added noise can increase, rather than decrease, the efficiency of information transmission.
A spike train can be represented by the sequence of intervals between spikes; this is characterized by the interval statistics (in the time domain by probability distributions and correlations, and in the frequency domain by spectral densities). Chacron et al.1 consider two schemes of spike generation. The first produces a ‘renewal process’ that has no memory of the excitation because the system resets itself each time a spike is generated. Here, there is no correlation between successive spike intervals. The probability distributions for higher-order intervals (say, between one spike and the third spike following) and the ‘autocorrelation’ (the probability of a spike occurring after some other spike, irrespective of how many intervening spikes there were) can be calculated directly from the interspike-interval probability density.
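For a renewal process, this direct calculation rests on the fact that the density of the interval between a spike and the nth spike following is the n-fold convolution of the interspike-interval density. A minimal numerical sketch, assuming (purely for illustration) an exponential interval density, i.e. a Poisson-like spike train with an arbitrary 10-Hz rate:

```python
import numpy as np

dt = 0.001                    # time step in seconds (assumed discretisation)
t = np.arange(0, 5, dt)       # time grid long enough to hold the density
rate = 10.0                   # assumed mean firing rate, Hz
isi_density = rate * np.exp(-rate * t)   # illustrative exponential ISI density

def higher_order_density(p, n, dt):
    """Density of the nth-order interval of a renewal process:
    the n-fold convolution of the interspike-interval density p."""
    q = p.copy()
    for _ in range(n - 1):
        q = np.convolve(q, p)[: len(p)] * dt
    return q

# Interval between one spike and the third spike following it
p3 = higher_order_density(isi_density, 3, dt)
print(p3.sum() * dt)          # total probability, close to 1
```

With an exponential interspike-interval density the third-order density comes out as the familiar gamma (Erlang) density with mean three times the mean interval; for a non-renewal train, by contrast, no such convolution shortcut exists, because successive intervals are not independent.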
The second scheme generates a non-renewal spike train, with correlations between adjacent intervals. The spike trains of sensory neurons4 under constant stimulation often show dependencies between neighbouring intervals; these arise either from fluctuations in an oscillatory spike generator or from activity-dependent changes in threshold. When long intervals tend to be followed by short ones (and vice versa), the interval correlations are negative.
The essence of spike generation is that inputs too weak to trigger a response are summed, giving a potential; when the potential reaches a threshold, a spike is generated. Chacron et al.1 use a ‘perfect integrator model’5 for both spike-generation schemes, with a random threshold drawn from a uniform distribution. When the threshold is reached the potential is reset, either by an amount dependent on the magnitude of the threshold just reached (to produce serial correlation), or by a random amount (to generate a spike train with the same interspike-interval density but no correlation).
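Because the potential ramps up at a constant rate in this model, each interval is simply the gap between the starting potential and the threshold, divided by the input rate, so both schemes can be sketched in closed form. The following is an illustrative sketch, not the authors' exact model: the threshold range, the input rate and the particular threshold-dependent reset rule (resetting to the last threshold minus its mean) are all assumptions chosen to make the negative correlation easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spikes = 100_000
mu = 1.0                                    # assumed constant input (ramp rate)
theta = rng.uniform(0.8, 1.2, n_spikes)     # random thresholds, mean 1.0
theta_mean = 1.0

# Scheme 1: reset depends on the threshold just reached. A large threshold
# (long interval) leaves the potential with a head start, so the next
# interval tends to be short: negative serial correlation.
start_correlated = np.concatenate(([0.0], theta[:-1] - theta_mean))
isi_correlated = (theta - start_correlated) / mu

# Scheme 2: reset by an independent random amount drawn from the same
# distribution. The interspike-interval density is identical, but the
# memory is erased: a renewal process with no correlation.
start_renewal = rng.uniform(0.8, 1.2, n_spikes) - theta_mean
isi_renewal = (theta - start_renewal) / mu

def lag1_correlation(x):
    """Serial correlation coefficient between adjacent intervals."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_correlation(isi_correlated))   # close to -0.5
print(lag1_correlation(isi_renewal))      # close to 0
```

In this particular construction the threshold-dependent reset pins the adjacent-interval correlation near -0.5, while the random reset gives the same interval density with no serial correlation, which is exactly the controlled comparison the two schemes are designed to allow.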
The advantage of using this simple model is that the spectra, coherence and information transmission rates (mutual information rates) can all be calculated, as well as estimated through computer simulation. Chacron et al. demonstrate that a negative serial correlation between spikes reduces the spike-train spectrum and coherence at very low frequencies, although at middle-range frequencies the coherence is increased. Serial correlation also enhances the ability of the model to transmit information by reducing low-frequency noise compared with the renewal process. Such effects have been seen in more realistic neural models6, and can be quantified in real neural spike trains; but the complexity of both of these situations had meant that the mechanism for improved information transfer was unclear.
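The low-frequency effect can be glimpsed without computing full spectra: the spike-train spectrum at zero frequency is governed by the variance of the summed intervals (equivalently, the spike count) over long windows. A hedged sketch, using an illustrative perfect-integrator construction with uniform random thresholds and the two reset rules (all parameter values are assumptions, not those of Chacron et al.):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
theta = rng.uniform(0.8, 1.2, n)     # random thresholds, mean 1.0

# Correlated scheme: interval_i = theta_i - (theta_{i-1} - 1), i.e. reset
# depends on the threshold just reached. Renewal scheme: the same marginal
# density, but the previous threshold is replaced by an independent draw.
isi_corr = theta[1:] - (theta[:-1] - 1.0)
isi_ren = theta[1:] - (rng.uniform(0.8, 1.2, n - 1) - 1.0)

def sum_variance(isi, window):
    """Variance of the total time taken by `window` consecutive intervals."""
    m = (len(isi) // window) * window
    return np.sum(isi[:m].reshape(-1, window), axis=1).var()

for w in (1, 10, 100):
    print(w, sum_variance(isi_corr, w), sum_variance(isi_ren, w))
# The renewal variance grows in proportion to the window, while the
# negatively correlated train's variance stays bounded: its noise power
# is pushed out of the low frequencies.
```

For a single interval the two schemes are indistinguishable; the difference only appears over many intervals, which is why it shows up as a reshaping of the noise spectrum at low frequencies rather than as a change in the interval density itself.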
Whether or not this increase in the information transmission rate is exploited in neural systems is an open question, as biological evolution produces systems that work well enough and are robust, without necessarily being optimally efficient. The nervous system responds to a spike train in real time and does not process it as an indefinite sequence. But the effect of correlation on the transmission rate in a single spike train might transfer to correlations between multiple spike trains: variability between different neurons could be correlated, through common inputs and feedback7, and coupling within a population could lead to a similar reduction in variability by noise shaping8.
1. Chacron, M. J., Lindner, B. & Longtin, A. Phys. Rev. Lett. 92, 080601 (2004).
2. Borst, A. & Theunissen, F. E. Nature Neurosci. 2, 947–957 (1999).
3. Jaramillo, F. & Wiesenfeld, K. Nature Neurosci. 1, 384–388 (1998).
4. Ratnam, R. & Nelson, M. E. J. Neurosci. 20, 6672–6683 (2000).
5. Stein, R. B., French, A. S. & Holden, A. V. Biophys. J. 12, 295–322 (1972).
6. Chacron, M. J., Longtin, A. & Maler, L. J. Neurosci. 21, 5328–5343 (2001).
7. Azouz, R. & Gray, C. M. J. Neurosci. 19, 2209–2223 (1999).
8. Mar, D. J., Chow, C. C., Gerstner, W., Adams, R. W. & Collins, J. J. Proc. Natl Acad. Sci. USA 96, 10450–10455 (1999).
Holden, A. Neural coding by correlation?. Nature 428, 382 (2004). https://doi.org/10.1038/428382a