In experiments with small networks of living neurons, bursts of activity are seen spreading in cascades through the network. The size distribution of these 'neuronal avalanches' can be approximated by a power law1,2, a hint that the networks may be operating near the critical point, that is, in a regime where, on average, one neuron activates exactly one other neuron, so that the overall activity neither grows nor dies out over time. Simulations of neural networks indicate that the critical point would be quite beneficial for information processing. Just as the correlation length and susceptibility of a ferromagnet are maximized at the critical point, the dynamic range3, memory capacity4 and computational power5 of simulated neural networks are optimized when they are tuned to criticality. This tantalizing picture has been missing an important piece, though. Criticality depends on the tuning of network parameters, but it has been unclear how neural networks could reach the critical point. On page 857 of this issue, Anna Levina and colleagues6 provide an interesting clue that may help to solve this puzzle. Through analysis and simulations, they show that synaptic depression, a property commonly observed at synapses, can cause neural networks to robustly self-organize and operate at criticality.
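
To make the notion of criticality concrete, here is a minimal branching-process sketch (a toy illustration, not taken from the cited studies): each active unit triggers a Poisson-distributed number of new units with mean sigma, and at the critical value sigma = 1 the avalanche sizes acquire a heavy, roughly power-law tail.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma=1.0, max_size=10_000):
    """One avalanche: each active unit activates Poisson(sigma) new units."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(sigma * active)
        size += active
    return size

sizes = np.array([avalanche_size(sigma=1.0) for _ in range(20_000)])

# For a critical branching process, P(size = s) falls off roughly as s**(-3/2),
# so the fraction of avalanches reaching size s should fall off as s**(-1/2).
for s in (1, 10, 100, 1000):
    print(f"fraction of avalanches with size >= {s:4d}: {(sizes >= s).mean():.4f}")
```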

To understand this important result, it may help to first summarize some properties of living neural networks. A typical excitatory neuron in the cerebral cortex makes and receives on the order of 10^3 physical connections, called synapses, with other neurons, forming a highly parallel network. Excitatory neurons communicate by sending pulses through these synapses, which increase the membrane potential of recipient neurons. Many pulses arriving within a time window of a few tens of milliseconds will drive the membrane potential of a recipient neuron over a threshold, whereupon it will initiate a pulse of its own that is in turn transmitted to thousands of other recipient neurons. There are also inhibitory neurons that transmit pulses that decrease membrane potential, but it is easy to see how above-threshold activity at a single neuron can initiate an avalanche of activity that spreads throughout a network.
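
This threshold behaviour can be caricatured by a leaky integrate-and-fire unit; the sketch below uses purely illustrative numbers (the pulse size, time constant and threshold are assumptions, not measured values).

```python
import numpy as np

dt = 1.0           # time step, ms
tau = 20.0         # membrane time constant, ms
threshold = 1.0    # firing threshold (arbitrary units)
pulse = 0.2        # depolarization added by each incoming pulse

arrivals = list(range(0, 20, 2))   # ten pulses, one every 2 ms
v = 0.0                            # membrane potential

for t in np.arange(0, 40, dt):
    v *= np.exp(-dt / tau)         # leak: the potential decays between pulses
    if t in arrivals:
        v += pulse                 # each arriving pulse nudges the potential upward
    if v >= threshold:
        print(f"threshold crossed: neuron fires at t = {t:.0f} ms")
        v = 0.0                    # reset after the spike
```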

So what keeps these networks from becoming continually over-excited? This is where synaptic depression comes into play. Synaptic pulses between neurons are not constant in strength, but vary with frequency of use. When a synapse that has been inactive for several seconds is newly activated, it will produce a pulse at full strength (see Fig. 1). But if this synapse is called upon to transmit tens of times within one second, the strength of its pulses will rapidly diminish. This diminution is called synaptic depression, and is observed at most connections between excitatory cortical neurons7. If the synapse is not activated again for many seconds, it will spontaneously recover to full strength.
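
In computational terms, this behaviour can be captured by a single 'available strength' variable per synapse that is used up by each pulse and recovers slowly, in the spirit of standard synaptic-resource models; the parameters below are illustrative assumptions, not the formulation used by Levina and colleagues.

```python
import numpy as np

tau_rec = 5000.0   # recovery time constant, ms ("several seconds")
use = 0.5          # fraction of remaining strength consumed by each pulse
dt = 10.0          # time step, ms

strength = 1.0                          # 1.0 means a fully recovered synapse
spike_times = set(range(0, 500, 50))    # ten rapid pulses within half a second

for t in np.arange(0, 15_000, dt):
    strength += (1.0 - strength) * dt / tau_rec   # spontaneous recovery
    if t in spike_times:
        print(f"t = {t:6.0f} ms: pulse delivered at {strength:.2f} of full strength")
        strength *= (1.0 - use)                   # depression: the pulse uses up resources
print(f"t =  15000 ms: recovered to {strength:.2f} of full strength")
```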

Figure 1: Driven into depression.

Synaptic depression occurs when pulses are rapidly evoked from one neuron. At a recipient neuron, the first pulse is at full strength, but after many rapid stimulations, the synapse depresses and the received pulses become weaker and weaker. Pulse strength spontaneously recovers when the synapse is left unstimulated for several seconds.

Levina and colleagues show that this frequency-dependent modulation of synaptic strength can play a crucial role in tuning a network towards criticality. In their model, when a single neuron is activated it initiates an avalanche of activity that spreads until it encounters depressed synapses (Fig. 2). Depressed synapses therefore limit avalanche sizes. But if the network consistently produces many small avalanches, unused synapses have time to recover their strength, increasing the probability that large avalanches will occur later. Conversely, a large avalanche depresses most synapses, causing the network subsequently to produce more small avalanches. This interplay between synaptic depression and spontaneous recovery pushes slowly driven neural networks to a steady state, poised between a phase where activity is damped and a phase where it is amplified. At this critical point, avalanches of all sizes occur. When the probability of an avalanche is plotted against its size in log–log space, it produces a power law with an exponent of −3/2, matching what has been reported in experiments1,2. It is interesting to note that the dynamics of this model6 bears a strong resemblance to that of forest fires, where freshly burned areas are less likely to ignite, and where recovery occurs as trees grow back. Forest fires have also been shown to obey a power law of event sizes8.
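
To convey the flavour of this self-organization, here is a much-simplified sketch in the spirit of the model, not the authors' exact equations: all-to-all coupled threshold units whose outgoing synapses weaken with each spike and slowly recover, with no parameter tuned by hand to a critical value. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, threshold = 200, 1.0
J_max, u, recovery = 1.5, 0.2, 0.01    # resting strength, use fraction, recovery rate
h = rng.uniform(0.0, threshold, N)     # membrane potentials
J = np.full(N, J_max)                  # outgoing synaptic strength of each neuron

sizes = []
for _ in range(10_000):
    J += recovery * (J_max - J)        # slow spontaneous recovery
    h[rng.integers(N)] += 0.1          # weak external drive to one random neuron
    size = 0
    while (spiking := np.where(h >= threshold)[0]).size > 0:
        size += spiking.size
        for i in spiking:
            pulse = J[i] / N
            h += pulse                 # the spike delivers a pulse to every neuron...
            h[i] -= threshold + pulse  # ...except itself, which resets by the threshold
            J[i] *= (1.0 - u)          # the outgoing synapses depress after use
        if size > 20 * N:              # safety cut-off against runaway cascades
            break
    if size > 0:
        sizes.append(size)

sizes = np.array(sizes)
print(f"{sizes.size} avalanches, largest = {sizes.max()}")
for s in (1, 10, 100):
    print(f"fraction with size >= {s:3d}: {(sizes >= s).mean():.3f}")
```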

Figure 2: Setting off avalanches.

One stimulated neuron in a highly connected network can initiate an avalanche of activity, indicated by red neurons. Some pathways are blocked as a consequence of previous avalanches. Over time, spontaneous recovery re-opens pathways. Antagonism between depression and recovery leads to a steady state where avalanches of all sizes occur.

Yet models must ultimately make predictions and influence experiments. Perhaps one prediction from the work of Levina and colleagues6 would be that large avalanches should occur more often after long recovery times. Their model should generate a correlation between avalanche sizes and intervals between avalanches; this could be easily evaluated with existing data. Another issue concerns whether this picture really applies to the intact brain, as the experiments that motivated Levina et al. were all performed in isolated neural networks. If their model survives these tests, then perhaps we will have moved one step closer to a statistical mechanics not of particles, but of neurons.
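
Checking that prediction amounts to little more than correlating each avalanche's size with the quiet interval that preceded it; in the sketch below, the size and onset-time arrays are placeholders standing in for recorded data.

```python
import numpy as np

# Placeholder data: replace with measured avalanche sizes and onset times (in seconds).
avalanche_sizes = np.array([3, 12, 2, 45, 5, 120, 7, 4, 60, 9])
onset_times = np.array([0.1, 2.0, 2.3, 6.5, 6.9, 14.0, 14.5, 15.0, 21.0, 21.4])

intervals = np.diff(onset_times)       # quiet interval preceding each avalanche
later_sizes = avalanche_sizes[1:]      # size of the avalanche that follows each interval

r = np.corrcoef(intervals, later_sizes)[0, 1]
print(f"correlation between preceding quiet interval and avalanche size: r = {r:.2f}")
```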