The physics of phase transitions beautifully describes the collective behaviour of large populations of inanimate particles, from water molecules to magnetic spins. But could it also help in understanding ensembles of living neurons?
In experiments with small networks of living neurons, bursts of activity are seen spreading in cascades through the network. The size distribution of these 'neuronal avalanches' can be approximated by a power law [1,2], a hint that the networks may be operating near the critical point: that is, in a regime where, on average, one neuron activates exactly one other, so that the overall activity neither grows nor dies out over time. Simulations of neural networks indicate that the critical point would be quite beneficial for information processing. Just as the correlation length and susceptibility of a ferromagnet are maximized at the critical point, the dynamic range [3], memory capacity [4] and computational power [5] of simulated neural networks are optimized when they are tuned to the critical point. This tantalizing picture has been missing an important piece, though. Criticality depends on the tuning of network parameters, but it has been unclear how neural networks could reach the critical point. On page 857 of this issue, Anna Levina and colleagues [6] provide an interesting clue that may help to solve this puzzle. Through analysis and simulations, they show that synaptic depression, a property commonly observed in synapses, can cause neural networks to self-organize robustly and operate at criticality.
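The notion of a critical point at which each neuron activates, on average, exactly one other can be illustrated with a toy branching process. This is a standard abstraction, not the authors' model: each active unit triggers a Poisson-distributed number of others with mean σ, and at the critical value σ = 1 the avalanche-size distribution follows a power law close to s^(−3/2).

```python
import numpy as np

def avalanche_sizes(sigma=1.0, n_trials=10_000, cap=10_000, seed=0):
    """Avalanche sizes in a branching process where each active
    neuron triggers a Poisson(sigma) number of others.
    sigma = 1 is the critical point: one neuron activates,
    on average, exactly one other."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n_trials):
        active, size = 1, 0
        while active and size < cap:
            size += active
            # total offspring of the whole current generation
            active = rng.poisson(sigma * active)
        sizes.append(size)
    return np.array(sizes)

sizes = avalanche_sizes()
# At criticality most avalanches are tiny, but the heavy s^(-3/2)
# tail means the largest ones dwarf the typical one.
```

Tuning σ slightly below 1 makes large avalanches exponentially rare; slightly above 1, activity tends to run away, which is why reaching σ = 1 without hand-tuning is the puzzle.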
To understand this important result, it helps first to summarize some properties of living neural networks. A typical excitatory neuron in the cerebral cortex makes and receives ∼10³ physical connections, called synapses, with other neurons, forming a highly parallel network. Excitatory neurons communicate by sending pulses through these synapses, which increase the membrane potential of recipient neurons. Many pulses arriving within a window of a few tens of milliseconds will drive the membrane potential of a recipient neuron over a threshold, at which point it initiates a pulse of its own that is in turn transmitted to thousands of other recipient neurons. There are also inhibitory neurons, which transmit pulses that decrease membrane potential, but even so, above-threshold activity at a single neuron can initiate an avalanche of activity that spreads throughout a network.
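The integrate-and-threshold behaviour just described can be sketched in a few lines. The threshold, leak factor and pulse sizes here are illustrative numbers, not values from the article:

```python
def membrane_step(v, pulses, threshold=15.0, leak=0.9):
    """One update of a toy leaky integrate-and-fire neuron.
    `pulses` are the synaptic amplitudes arriving within the current
    time window (excitatory positive, inhibitory negative).
    Returns the new membrane potential and whether the neuron fired."""
    v = leak * v + sum(pulses)   # leak toward rest, integrate inputs
    if v >= threshold:
        return 0.0, True         # fire a pulse of its own and reset
    return v, False
```

Twenty near-coincident unit pulses drive this toy neuron over its threshold of 15 and make it fire; five do not, and the potential simply decays away.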
So what keeps these networks from becoming continually over-excited? This is where synaptic depression comes into play. Synaptic pulses between neurons are not constant, but vary in strength with frequency of use. When a synapse that has been inactive for several seconds is newly activated, it produces a pulse at full strength (see Fig. 1). But if that synapse is called upon to transmit tens of times within one second, the strength of its pulses rapidly diminishes. This diminution is called synaptic depression, and it is observed at most connections between excitatory cortical neurons [7]. If the synapse is then left inactive for many seconds, it spontaneously recovers its full strength.
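A common way to capture this use-dependent weakening is a resource model in the spirit of Tsodyks and Markram. The sketch below uses illustrative parameters, not values from the article: each pulse consumes a fraction u of an available synaptic resource x, and x recovers exponentially during silent intervals.

```python
import math

def depressing_synapse(spike_times, u=0.5, tau_rec=0.8):
    """Relative strength of each pulse transmitted by a depressing
    synapse.  A fraction u of the available resource x is consumed
    per pulse; x recovers toward 1 with time constant tau_rec
    (seconds).  Parameters are illustrative, not measured values."""
    x, last_t, strengths = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery during the silent interval
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        strengths.append(u * x)   # pulse amplitude tracks the resource
        x -= u * x                # ... which is then partly depleted
        last_t = t
    return strengths
```

A rapid train at 20 Hz depresses within a few pulses, whereas a pulse arriving after a ten-second pause is back near full strength, reproducing the behaviour described above.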
Levina and colleagues show that this frequency-dependent modulation of synaptic strength can play a crucial role in tuning a network towards criticality. In their model, a single activated neuron initiates an avalanche of activity that spreads until it encounters depressed synapses (Fig. 2); depressed synapses therefore limit avalanche sizes. But when the network consistently produces many small avalanches, unused synapses have time to recover their strength, increasing the probability that large avalanches will occur later. In a similar manner, large avalanches depress most synapses, causing the network subsequently to produce more small avalanches. This interplay between synaptic depression and spontaneous recovery pushes slowly driven neural networks to a steady state, poised between a phase in which activity is damped and a phase in which it is amplified. At this critical point, avalanches of all sizes occur. When the probability of an avalanche is plotted against its size in log–log space, it produces a power law with an exponent of −3/2, matching what has been reported in experiments [1,2]. It is interesting to note that the dynamics of this model [6] bears a strong resemblance to that of forest fires, where freshly burned areas are less likely to ignite and recovery occurs as trees grow back. Forest fires have also been shown to obey a power law of event sizes [8].
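The feedback loop described above can be caricatured in a few lines. This is a minimal sketch, not the published model: the network size, depression fraction, recovery rate and drive strength are all assumed for illustration. Firing weakens a neuron's outgoing synapses multiplicatively, strengths creep back toward their maximum while the network is quiescent, and the estimated branching parameter ends up close to 1 without any external tuning.

```python
import numpy as np

def self_organizing_network(n=100, theta=1.0, u=0.2, rec=5e-4,
                            drive=0.01, n_avalanches=1000, seed=1):
    """Toy slowly driven network with depressing synapses
    (illustrative parameters, not the model of Levina et al.).
    Firing multiplies a neuron's outgoing strength by (1 - u);
    strengths recover toward j_max between avalanches.  Returns the
    avalanche sizes and an estimate of the branching parameter
    sigma = n * mean(j) / theta."""
    rng = np.random.default_rng(seed)
    j_max = 2.0 * theta / n           # supercritical if fully recovered
    h = rng.uniform(0.0, theta, n)    # membrane potentials
    j = np.full(n, j_max)             # outgoing synaptic strengths
    sizes = []
    for _ in range(n_avalanches):
        # slow external drive until some neuron reaches threshold
        while h.max() < theta:
            h[rng.integers(n)] += drive * theta
            j += rec * (j_max - j)    # synapses slowly recover
        size = 0
        while (idx := np.flatnonzero(h >= theta)).size and size < 50 * n:
            for i in idx:
                h += j[i]             # pulse to the other neurons
                h[i] -= theta + j[i]  # reset; no self-connection
                j[i] *= 1.0 - u       # synaptic depression on use
            size += idx.size
        sizes.append(size)
    return sizes, n * j.mean() / theta

sizes, sigma = self_organizing_network()
```

If the synapses are strong, a large avalanche depresses them; if they are weak, the long quiet spells let them recover, so sigma hovers near the critical value and avalanches of many different sizes occur.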
Yet models must ultimately make predictions and influence experiments. One prediction from the work of Levina and colleagues [6] is that large avalanches should occur more often after long recovery times: their model should generate a correlation between avalanche sizes and the intervals between avalanches, which could easily be evaluated with existing data. Another issue is whether this picture really applies to the intact brain, as the experiments that motivated Levina et al. were all performed in isolated neural networks. If their model survives these tests, then perhaps we will have moved one step closer to a statistical mechanics not of particles, but of neurons.
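The proposed test is simple to run on recorded data. A hypothetical analysis (the pairing of variables is an assumption about how one would set it up): pair each avalanche's size with the length of the quiet interval that preceded it, and compute their correlation; a reliably positive value would support the prediction.

```python
def pearson(x, y):
    """Pearson correlation coefficient, written out explicitly.
    To test the prediction, x would hold avalanche sizes and y the
    quiet intervals immediately preceding each avalanche."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

In practice one would also need a significance estimate, for example by shuffling the intervals, since avalanche sizes are heavy-tailed and a naive correlation can be dominated by a few large events.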
References
1. Beggs, J. M. & Plenz, D. J. Neurosci. 23, 11167–11177 (2003).
2. Mazzoni, A. et al. PLoS ONE 2, e439 (2007).
3. Kinouchi, O. & Copelli, M. Nature Phys. 2, 348–352 (2006).
4. Haldeman, C. & Beggs, J. M. Phys. Rev. Lett. 94, 058101 (2005).
5. Bertschinger, N. & Natschläger, T. Neural Comput. 16, 1413–1436 (2004).
6. Levina, A., Herrmann, J. M. & Geisel, T. Nature Phys. 3, 857–860 (2007).
7. Thomson, A. M. J. Physiol. 502, 131–147 (1997).
8. Malamud, B. D., Morein, G. & Turcotte, D. L. Science 281, 1840–1842 (1998).
Beggs, J. How to build a critical mind. Nature Phys 3, 835 (2007). https://doi.org/10.1038/nphys799