Abstract
Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the critical boundary of the continuous transition, neuronal avalanches occur whose size and duration distributions are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains – a form of short-term plasticity probably located at the axon initial segment (AIS) – instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
Introduction
“Another simile would be an atomic pile of less than critical size: an injected idea is to correspond to a neutron entering the pile from without. Each such neutron will cause a certain disturbance which eventually dies away. If, however, the size of the pile is sufficiently increased, the disturbance caused by such an incoming neutron will very likely go on and on increasing until the whole pile is destroyed. Is there a corresponding phenomenon for minds, and is there one for machines? There does seem to be one for the human mind. The majority of them seems to be subcritical, i.e., to correspond in this analogy to piles of subcritical size. An idea presented to such a mind will on average give rise to less than one idea in reply. A smallish proportion are supercritical. An idea presented to such a mind may give rise to a whole “theory” consisting of secondary, tertiary and more remote ideas. (…) Adhering to this analogy we ask, “Can a machine be made to be supercritical?”” Alan Turing (1950)^{1}.
The Critical Brain Hypothesis^{2,3} states that (some) biological neuronal networks work near phase transitions because criticality enhances information processing capabilities^{4,5,6} and health^{7}. The first discussion of criticality in the brain – in the sense that subcritical, critical and slightly supercritical branching processes of thoughts could describe human and animal minds – was made in Turing's beautiful, speculative 1950 Imitation Game paper^{1}. In 1995, Herz & Hopfield^{8} noticed that self-organized criticality (SOC) models for earthquakes were mathematically equivalent to networks of integrate-and-fire neurons, and speculated that perhaps SOC would occur in the brain. In 2003, these theoretical conjectures found experimental support in a landmark paper by Beggs & Plenz^{9} and, by now, more than five hundred papers can be found on the subject; see some reviews^{2,3,10}. Although not consensual, the Critical Brain Hypothesis can be considered at least a very fertile idea.
The open question about neuronal criticality is which mechanisms are responsible for tuning the network towards the critical state. Up to now, the main mechanism studied is some dynamics in the links which, in the biological context, would occur at the synaptic level^{11,12,13,14,15,16,17}.
Here we propose a whole new mechanism: dynamic neuronal gains, related to the diminution (and recovery) of the firing probability, an intrinsic neuronal property. The neuronal gain is experimentally related to the well known phenomenon of firing rate adaptation^{18,19,20}. This new mechanism is sufficient to drive networks of stochastic neurons towards a critical boundary found, for the first time, for these models. The neuron model we use was proposed by Galves and Löcherbach^{21} as a stochastic model of spiking neurons inspired by the traditional integrate-and-fire (IF) model.
Introduced in the early 20th century^{22}, IF elements have been extensively used in simulations of spiking neurons^{20,23,24,25,26,27,28}. Despite their simplicity, IF models have successfully emulated certain phenomena observed in biological neural networks, such as firing avalanches^{12,13,29} and multiple dynamical regimes^{30,31}. In these models, the membrane potential V(t) integrates synaptic and external currents up to a firing threshold V_{T}^{32}. Then, a spike is generated and V(t) drops to a reset potential V_{R}. The leaky integrate-and-fire (LIF) model extends the IF neuron with a leakage current, which causes the potential V(t) to decay exponentially towards a baseline potential V_{B} in the absence of input signals^{24,26}.
LIF models are deterministic, but it has been claimed that stochastic models may be more adequate for simulation purposes^{33}. Some authors proposed to introduce stochasticity by adding noise terms to the potential^{24,25,30,31,33,34,35,36,37}, yielding the leaky stochastic integrate-and-fire (LSIF) models.
Alternatively, the Galves-Löcherbach (GL) model^{21,38,39,40,41}, and also the model used by Larremore et al.^{42,43}, introduces stochasticity in a different way. Instead of noise inputs, these models assume that the firing of the neuron is a random event, whose probability of occurrence in any time step is a firing function Φ(V) of the membrane potential V. By subsuming all sources of randomness into a single function, the GL neuron model simplifies the analysis and simulation of noisy spiking neural networks.
Brain networks are also known to exhibit plasticity: changes in neural parameters over time scales longer than the firing time scale^{27,44}. For example, short-term synaptic plasticity^{45} has been incorporated into models by assuming that the strength of each synapse is lowered after each firing, and then gradually recovers towards a reference value^{12,13}. This kind of dynamics drives the synaptic weights of the network towards critical values, a SOC state which is believed to optimize the network's information processing^{3,4,7,9,10,46}.
In this work, we first study the dynamics of networks of GL neurons by a very simple and transparent mean-field calculation. We find both continuous and discontinuous phase transitions, depending on the average synaptic strength and on the parameters of the firing function Φ(V). To the best of our knowledge, these phase transitions have never been observed in standard integrate-and-fire neurons. We also find that, at the second-order phase transition, the stimulated excitation of a single neuron causes avalanches of firing events (neuronal avalanches) that are similar to those observed in biological networks^{3,9}.
Second, we present a new mechanism for SOC based on a dynamics on the neuronal gains (a parameter of the neuron probably related to the axon initial segment – AIS^{32,47}), instead of the depression of coupling strengths (related to neurotransmitter vesicle depletion at synaptic contacts between neurons) proposed in the literature^{12,13,15,17}. This new activity-dependent gain model is sufficient to achieve self-organized criticality, as shown both by simulation evidence and by mean-field calculations. The great advantage of this new SOC mechanism is that it is much more efficient, since we have only one adaptive parameter per neuron, instead of one per synapse.
The Model
We assume a network of N GL neurons that change states in parallel at certain sampling times with a uniform spacing Δ. Thus, the membrane potential of neuron i is modeled by a real variable V_{i}[t] indexed by discrete time t, an integer that represents the sampling time tΔ.
Each synapse transmits signals from some presynaptic neuron j to some postsynaptic neuron i, and has a synaptic strength w_{ij}. If neuron j fires between discrete times t and t + 1, its potential drops to V_{R}. This event increments by w_{ij} the potential of every postsynaptic neuron i that does not fire in that interval. The potential of a nonfiring neuron may also integrate an external stimulus I_{i}[t], which can model signals received from sources outside the network. Apart from these increments, the potential of a nonfiring neuron decays at each time step towards the baseline voltage V_{B} by a factor μ ∈ [0, 1], which models the effect of a leakage current.
We introduce the Boolean variable X_{i}[t] ∈ {0, 1} which denotes whether neuron i fired between t and t + 1. The potentials evolve as:
This is a special case of the general GL model^{21}, with the filter function g(t − t_{s}) = μ^{t−t_{s}}, where t_{s} is the time of the last firing of neuron i. We have X_{i}[t + 1] = 1 with probability Φ(V_{i}[t]), which is called the firing function^{21,38,39,40,41,42}. We also have X_{i}[t + 1] = 0 if X_{i}[t] = 1 (refractory period). The function Φ is sigmoidal, that is, monotonically increasing, with limiting values Φ(−∞) = 0 and Φ(+∞) = 1, and with a single maximum of its derivative. We also assume that Φ(V) is zero up to some threshold potential V_{T} (possibly −∞) and is 1 starting at some saturation potential V_{S} (possibly +∞). If Φ is the shifted Heaviside step function Θ, Φ(V) = Θ(V − V_{T}), we have a deterministic discrete-time LIF neuron. Any other choice for Φ(V) gives a stochastic neuron.
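As a concrete illustration, one time step of this model can be sketched in a few lines (a minimal simulation, assuming V_{B} = V_{R} = 0, uniform weights w_{ij} = W/N as in the mean-field setup below, and, for concreteness, the linear saturating Φ(V) = min(max(ΓV, 0), 1) defined later; the function name and default parameters are ours):

```python
import numpy as np

def simulate_gl(N, T, W, Gamma, mu=0.0, I=0.0, seed=0):
    """Minimal simulation of a fully connected GL network (a sketch).

    Assumes V_B = V_R = 0, uniform weights w_ij = W/N, and the linear
    saturating firing function Phi(V) = min(max(Gamma*V, 0), 1) with V_T = 0.
    Returns rho[t], the fraction of neurons firing at each step.
    """
    rng = np.random.default_rng(seed)
    V = rng.random(N)                 # arbitrary initial potentials
    X = np.zeros(N, dtype=bool)       # who fired in the previous interval
    rho = np.empty(T)
    for t in range(T):
        # Firing between t and t+1: probability Phi(V[t]), with a one-step
        # refractory period for the neurons that have just fired.
        X_new = (rng.random(N) < np.clip(Gamma * V, 0.0, 1.0)) & ~X
        rho[t] = X_new.mean()
        # Firing neurons reset to V_R = 0; the others leak towards V_B = 0
        # and integrate the external input plus the mean synaptic input.
        V = np.where(X_new, 0.0, mu * V + I + (W / N) * X_new.sum())
        X = X_new
    return rho
```

For ΓW above the critical value the activity settles at a nonzero stationary ρ; below it, the activity decays to the absorbing state ρ = 0.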
The network’s activity is measured by the fraction (or density) ρ[t] of firing neurons:
The density ρ[t] can be computed from the probability density p[t](V) of potentials at time t:
where p[t](V)dV is the fraction of neurons with potential in the range [V, V + dV] at time t.
Neurons that fire between t and t + 1 have their potential reset to V_{R}. They contribute to p[t + 1](V) a Dirac impulse at potential V_{R}, with amplitude (integral) ρ[t] given by equation (3). In subsequent time steps, the potentials of all neurons will evolve according to equation (1). This process modifies p[t](V) also for V ≠ V_{R}.
Results
We will study only fully connected networks, where each neuron receives inputs from all the other N − 1 neurons. Since the zero of potential is arbitrary, we assume V_{B} = 0. We also consider only the case with V_{R} = 0, and uniform constant input I_{i}[t] = I. So, for these networks, equation (1) reads:
Mean-field calculation
In the mean-field analysis, we assume that the synaptic weights w_{ij} follow a distribution with average W/N and finite variance. The mean-field approximation disregards correlations, so the final term of equation (1) becomes:
Notice that the variance of the weights w_{ij} becomes immaterial when N tends to infinity.
Since the external input I is the same for all neurons and all times, every neuron i that does not fire between t and t + 1 (that is, with X_{i}[t] = 0) has its potential changed in the same way:
Recall that the probability density p[t](V) has a Dirac impulse at potential U_{0} = 0, representing all neurons that fired in the previous interval. This Dirac impulse is modified in later steps by equation (6). It follows that, once all neurons have fired at least once, the density p[t](V) will be a combination of discrete impulses with amplitudes η_{0}[t], η_{1}[t], η_{2}[t], …, at potentials U_{0}[t], U_{1}[t], U_{2}[t], …, such that Σ_{k} η_{k}[t] = 1.
The amplitude η_{k}[t] is the fraction of neurons with firing age k at discrete time t, that is, neurons that fired between times t − k − 1 and t − k, and did not fire between t − k and t. The common potential of those neurons, at time t, is U_{k}[t]. In particular, η_{0}[t] is the fraction ρ[t − 1] of neurons that fired in the previous time step. For this type of distribution, the integral of equation (3) becomes a discrete sum:
According to equation (6), the values η_{k}[t] and U_{k}[t] evolve by the equations
for all k ≥ 1, with η_{0}[t + 1] = ρ[t] and U_{0}[t + 1] = 0.
Stationary states for general Φ and μ
A stationary state is a density p[t](V) = p(V) of membrane potentials that does not change with time. In such a regime, quantities U_{k} and η_{k} do not depend anymore on t. Therefore, the equations (8) and (9) become the recurrence equations:
for all k ≥ 1.
Since equations (12) are homogeneous in the η_{k}, the normalization condition Σ_{k} η_{k} = 1 must be included explicitly. So, integrating over the density p(V) leads to a discrete distribution P(V) (see Fig. 1 for a specific Φ).
Equations (10, 11, 12, 13) can be solved numerically, e.g., by simulating the evolution of the potential probability density p[t](V) according to equations (8) and (9), starting from an arbitrary initial distribution, until a stable distribution is reached (the probabilities η_{k} should be renormalized to unit sum after each time step, to compensate for rounding errors). Notice that this can be done for any Φ function, so this numerical solution is very general.
The monomial saturating Φ with μ > 0
Now we consider a specific class of firing functions, the saturating monomials. This class is parametrized by a positive degree r and a neuronal gain Γ > 0. In all functions of this class, Φ(V) is 0 when V ≤ V_{T}, and 1 when V ≥ V_{S}, where the saturation potential is V_{S} = V_{T} + 1/Γ. In the interval V_{T} < V < V_{S}, we have:
Note that these functions can be seen as limiting cases of sigmoidal functions, and that we recover the deterministic LIF model Φ(V) = Θ(V − V_{T}) when Γ → ∞.
For any integer p ≥ 2, there are combinations of values of V_{T}, V_{S}, and μ that cause the network to behave deterministically. This happens if the stationary state defined by equations (12) and (13) is such that U_{p−2} ≤ V_{T} ≤ V_{S} ≤ U_{p−1}—that is, Φ(U_{k}) is either 0 or 1 for all k, so the GL model becomes equivalent to the deterministic LIF model. In such a stationary state, we have ρ = η_{k} = 1/p for all k < p; meaning that the neurons are divided into p groups of equal size, and each group fires exactly every p steps. If the inequalities are strict (U_{p−2} < V_{T} and V_{S} < U_{p−1}), then there are also many deterministic periodic regimes (p-cycles) where the p groups have slightly more or less than 1/p of all the neurons, but still fire regularly every p steps.
Note that, if V_{T} = 0, such degenerate (deterministic) regimes, stationary or periodic, occur only for p = 2 and W ≥ W_{B} where W_{B} = 2(I + V_{S}). The stationary regime has ρ = η_{0} = η_{1} = 1/2 and U_{1} = I + W/2. In the periodic regimes (2cycles) the activity ρ[t] alternates between two values ρ′ and ρ′′ = 1 − ρ′, with ρ_{1}(W) < ρ′ < 1/2 < ρ′′ < ρ_{2}(W), where:
All these 2-cycles are marginally stable, in the sense that, if a perturbed state ρ_{ε} = ρ + ε satisfies equation (15), then the new cycle ρ_{ε}[t + 1] = 1 − ρ_{ε}[t] is also marginally stable.
In the analysis that follows, the control parameters are W and Γ, and ρ(W, Γ) is the order parameter. We obtain numerically ρ(W, Γ) and the phase diagram (W, Γ) for several values of μ > 0, for the linear (r = 1) saturating Φ with I = V_{T} = 0 (Fig. 2). Only the first 100 peaks (U_{k}, η_{k}) were considered since, for the given μ and Φ, there was no significant probability density beyond that point. The same numerical method can be used for r ≠ 1, I ≠ 0, V_{T} ≠ 0.
Near the critical point, we obtain numerically ρ(W, μ) ≈ C(W − W_{C})/W, where W_{C}(Γ) = (1 − μ)/Γ and C(μ) is a constant. So, the critical exponent is α = 1, characteristic of the meanfield directed percolation (DP) universality class^{3,4}. The critical boundary in the (W, Γ) plane, numerically obtained, seems to be Γ_{C}(W) = (1 − μ)/W (Fig. 2b).
Analytic results for μ = 0
Below we give the results of a simple mean-field analysis in the limits N → ∞ and μ → 0. The latter implies that, at time t + 1, the neuron “forgets” its previous potential V_{i}[t] and integrates only the inputs I + Σ_{j} w_{ij}X_{j}[t]. This scenario is interesting because it enables analytic solutions, yet exhibits all the kinds of behavior and phase transitions that occur with μ > 0.
When μ = 0 and I_{i}[t] = I (uniform constant input), the density p[t](V) consists of only two Dirac peaks at potentials U_{0}[t] = V_{R} = 0 and U_{1}[t] = I + Wρ[t − 1], with fractions η_{0}[t] and η_{1}[t] that evolve as:
Furthermore, if the neurons cannot fire spontaneously, that is, Φ(0) = 0, then equation (16) reduces to:
In a stationary regime, equation (18) simplifies to:
since η_{0} = ρ, η_{1} = 1 − ρ, U_{0} = 0, and U_{1} = I + Wρ. Below, all the results refer to the monomial saturating Φs given by equation (14).
The case with r = 1, V_{T} = 0
When r = 1, we have the linear function Φ(V) = ΓV for 0 < V < V_{S} = 1/Γ, where V = I + Wρ. Equation (19) becomes:
with solution (Fig. 3a):
For zero input we have:
where W_{C} = 1/Γ and the order parameter critical exponent is β = 1. This corresponds to a standard mean-field continuous (second order) absorbing state phase transition. This transition will be studied in detail two sections below.
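For μ = 0 this fixed point can be checked directly: substituting Φ(V) = ΓV and U_{1} = Wρ into the stationary condition gives ρ = ΓWρ(1 − ρ), whose active solution is ρ = (W − W_{C})/W. A minimal sketch (function name ours):

```python
def rho_stationary(W, Gamma):
    """Stationary activity for the linear saturating Phi with mu = 0, I = 0.

    The self-consistency condition rho = Gamma*W*rho*(1 - rho) gives the
    absorbing state rho = 0 below W_C = 1/Gamma and, above it,
    rho = (W - W_C)/W, i.e. the critical exponent beta = 1.
    """
    W_C = 1.0 / Gamma
    return max(0.0, (W - W_C) / W)
```

Substituting the returned value back into ρ = ΓWρ(1 − ρ) verifies the self-consistency for any W in the continuous-transition range (W_{C}, W_{B}).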
A measure of the network sensitivity to inputs (which play here the role of external fields) is the susceptibility χ = dρ/dI, which is a function of Γ, W and I (Fig. 3b):
For zero external inputs, the susceptibility behaves as:
where we have the critical exponent γ = 1.
A very interesting result is that, for any I, the susceptibility is maximized at the critical line W_{C} = 1/Γ, with the values:
For I → 0 we have ρ ≈ (ΓI)^{1/2}. The critical exponent δ is defined by I ∝ ρ^{δ} for small I, so we obtain the mean-field value δ = 2. In analogy with Psychophysics, we may call m = 1/δ = 1/2 the Stevens's exponent of the network^{4}.
With two critical exponents it is possible to obtain others through scaling relations. For example, β, γ and δ are related by 2β + γ = β(δ + 1).
Notice that, at the critical line, the susceptibility diverges as χ ∝ I^{−1/2} when I → 0. We will comment on the importance of the fractional Stevens's exponent m = 1/2 (Fig. 3a) and of the diverging susceptibility (Fig. 3b) for information processing in the Discussion section.
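These exponents can be verified numerically: with μ = 0 the stationary condition ρ = Γ(I + Wρ)(1 − ρ) is quadratic in ρ, and at the critical line its physical root scales as (ΓI)^{1/2}. A sketch (function name ours):

```python
import math

def rho_of_I(W, Gamma, I):
    """Stationary rho for the linear Phi with input I (mu = 0; a sketch).

    Solves rho = Gamma*(I + W*rho)*(1 - rho), i.e. the quadratic
    Gamma*W*rho**2 + (1 - Gamma*W + Gamma*I)*rho - Gamma*I = 0,
    taking the physical (positive) root; valid below saturation.
    """
    a = Gamma * W
    b = 1.0 - Gamma * W + Gamma * I
    c = -Gamma * I
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# At the critical line Gamma*W = 1 the quadratic reduces to
# rho**2 = Gamma*I*(1 - rho), so rho ~ (Gamma*I)**0.5 for small I (m = 1/2).
```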
Isolated neurons
We can also analyze the behavior of the GL neuron model under the standard experiment where an isolated neuron in vitro is artificially injected with a current of constant intensity J. That corresponds to setting the external input signal I[t] of that neuron to a constant value I = JΔ/C where C is the effective capacitance of the neuron.
The firing rate of an isolated neuron can be written as:
where F_{max} is an empirical maximum firing rate (measured in spikes per second) of a given neuron and ρ is our previous neuron firing probability per time step. With W = 0 and I > 0 in equation (19), we get:
The solution for the monomial saturating Φ with V_{T} = 0 is (Fig. 3c):
which is less than ρ = 1/2 only if I < 1/Γ. For any I ≥ 1/Γ the firing rate saturates at ρ = 1/2 (the neuron fires at every other step, alternating between the potentials U_{0} = V_{R} = 0 and U_{1} = I). So, for I > 0, there is no phase transition. Interestingly, equation (29), known as the generalized Michaelis-Menten function, is frequently used to fit the firing response of biological neurons to DC currents^{48,49}.
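A minimal sketch of this input-output curve (function name ours; the saturation at ρ = 1/2 reflects the refractory period):

```python
def isolated_rate(I, Gamma, r=1.0):
    """Firing probability per time step of an isolated GL neuron (W = 0).

    A sketch for the monomial saturating Phi with V_T = 0: solving
    rho = Phi(I)*(1 - rho) gives the generalized Michaelis-Menten form
    rho = (Gamma*I)**r / (1 + (Gamma*I)**r), saturating at 1/2 when
    I >= 1/Gamma (the neuron then fires at every other step).
    """
    if I >= 1.0 / Gamma:
        return 0.5
    phi = (Gamma * I) ** r
    return phi / (1.0 + phi)
```

The empirical firing rate is then F = F_{max} ρ, with F_{max} the maximum firing rate of the given neuron.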
Continuous phase transitions in networks: the case with r = 1
Even with I = 0, spontaneous collective activity is possible if the network suffers a phase transition. With r = 1, the stationary state condition equation (19) is:
The two solutions are the absorbing state ρ_{0} = 0 and the nontrivial state:
with W_{C} = 1/Γ. Since we must have 0 < ρ ≤ 1/2, this solution is valid only for W_{C} < W ≤ W_{B} = 2/Γ (Fig. 4b).
This solution describes a stationary state where a fraction 1 − ρ of the neurons are at potential U_{1} = W − W_{C}. The neurons that will fire in the next step are a fraction Φ(U_{1}) of those, which are again a fraction ρ of the total. For any W > W_{C}, the state ρ_{0} = 0 is unstable: any small perturbation of the potentials causes the network to converge to the active stationary state above. For W < W_{C}, the solution ρ_{0} = 0 is stable and absorbing. In the ρ(W) plot, the locus of stationary regimes defined by equation (31) bifurcates at W = W_{B} into the two bounds of equation (15) that delimit the 2-cycles (Fig. 4b).
So, at the critical boundary W = 1/Γ, we have a standard continuous absorbing state transition ρ(W) ∝ (W − W_{C})^{α} with a critical exponent α = 1, which also can be written as ρ(Γ) ∝ (Γ − Γ_{C})^{α}. In the (Γ, W) plane, the phase transition corresponds to a critical boundary Γ_{C}(W) = 1/W, below the 2cycle phase transition Γ_{B}(W) = 2/W (Fig. 4c).
Discontinuous phase transitions in networks: the case with r > 1
When r > 1 and W ≤ W_{B} = 2/Γ, the stationary state condition is:
This equation has a nontrivial solution ρ^{+} only when 1 ≤ r ≤ 2 and W_{C}(r) ≤ W ≤ W_{B}, for a certain W_{C}(r) > 1/Γ. In this case, at W = W_{C}(r), there is a discontinuous (first-order) phase transition to a regime with activity ρ = ρ_{C}(r) ≤ 1/2 (Fig. 4d). It turns out that ρ_{C}(r) → 0 as r → 1, recovering the continuous phase transition in that limit. For r = 2, the solution to equation (32) is a single point ρ(W_{C}) = ρ_{C} = 1/2 at W_{C} = 2/Γ = W_{B} (Fig. 4f).
Notice that, in the linear case, the fixed point ρ_{0} = 0 is unstable for W > 1 (Fig. 4b). This occurs because the separatrix ρ_{−} (dashed lines, Fig. 4d) collapses, for r → 1, onto the point ρ_{0}, which then loses its stability.
Ceaseless activity: the case with r < 1
When r < 1, there is no absorbing solution ρ_{0} = 0 to equation (32). In the W → 0 limit we get ρ(W) = (ΓW)^{r/(1−r)}. This power law means that ρ > 0 for any W > 0, that is, W_{C}(r) = 0 (Fig. 4e). We recover the second order transition W_{C}(r = 1) = 1/Γ when r → 1 in equation (32). Interestingly, this ceaseless activity ρ > 0 for any W > 0 seems to be similar to that found by Larremore et al.^{42} with a μ = 0 linear saturating model. That ceaseless activity, observed even with r = 1, is perhaps due to the presence of inhibitory neurons in the Larremore et al. model.
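The three regimes r < 1, r = 1 and r > 1 can be explored with one numerical routine that finds the largest root of the stationary condition ρ = (ΓWρ)^{r}(1 − ρ). A sketch (the function name and the root-finding strategy are ours):

```python
import numpy as np

def stationary_rho(W, Gamma, r):
    """Largest stationary solution of rho = (Gamma*W*rho)**r * (1 - rho)
    for the monomial saturating Phi (mu = 0, I = 0, V_T = 0); a sketch,
    valid below saturation.  Scans rho downward from the 2-cycle bound
    rho = 1/2 and bisects at the first sign change of g(rho)."""
    def g(rho):
        return rho - (Gamma * W * rho) ** r * (1.0 - rho)

    xs = np.linspace(0.5, 1e-9, 100000)      # descending grid
    neg = np.nonzero(g(xs) <= 0.0)[0]
    if len(neg) == 0:
        return 0.0                           # only the absorbing state
    i = neg[0]
    if i == 0:
        return 0.5                           # active branch at the 2-cycle bound
    hi, lo = xs[i - 1], xs[i]                # g(hi) > 0 >= g(lo)
    for _ in range(60):                      # bisection
        mid = 0.5 * (hi + lo)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (hi + lo)
```

For r = 1 this reproduces ρ = (W − W_{C})/W; for r < 1 it returns a positive ρ for any W > 0 (ceaseless activity); for r > 1 it returns 0 below W_{C}(r) and a finite ρ_{C} just above it (the discontinuous jump).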
Discontinuous phase transitions in networks: the case with V_{T} > 0 and I > 0
The standard IF model has V_{T} > 0. If we allow this feature in our models, we find a new ingredient that produces first-order phase transitions. Indeed, in this case, if U_{1} = Wρ + I < V_{T}, then we have a single peak at U_{0} = 0 with η_{0} = 1, which means we have a silent state. When U_{1} = Wρ + I > V_{T}, we have a peak with height η_{1} = 1 − ρ and ρ = η_{0} = Φ(U_{1})η_{1}.
For the linear monomial model this leads to the equations:
with the solution:
where ρ^{+} is the nontrivial fixed point and ρ^{−} is the unstable fixed point (separatrix). These solutions only exist for ΓW values such that the discriminant in equation (35) is non-negative. This produces the condition:
which defines a first order critical boundary. At the critical boundary the density of firing neurons is:
which is nonzero (discontinuous) for any V_{T} > I. These transitions can be seen in Fig. 5. The solutions of equations (35) and (37) are valid only for ρ_{C} < 1/2 (the 2-cycle bifurcation). This implies the maximal value V_{T} = W_{C}/4 + I.
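A sketch of these solutions (the notation a = ΓW and c = Γ(V_{T} − I) is ours; the jump ρ_{C} at the boundary follows from the vanishing discriminant):

```python
import math

def rho_plus(W, Gamma, V_T, I=0.0):
    """Stable active root of rho = Gamma*(W*rho + I - V_T)*(1 - rho); a sketch.

    With a = Gamma*W and c = Gamma*(V_T - I) > 0 this is the quadratic
    a*rho**2 + (1 - a - c)*rho + c = 0.  Below the first-order boundary
    (negative discriminant) only the silent state rho = 0 survives; at the
    boundary the activity jumps discontinuously to a nonzero rho_C.
    """
    a = Gamma * W
    c = Gamma * (V_T - I)
    disc = (1.0 - a - c) ** 2 - 4.0 * a * c
    if disc < 0.0 or a + c <= 1.0:
        return 0.0
    return ((a + c - 1.0) + math.sqrt(disc)) / (2.0 * a)
```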
Neuronal avalanches
Firing avalanches in neural networks have attracted significant interest because of their possible connection to efficient information processing^{3,4,5,7,9}. Through simulations, we studied the critical point W_{C} = 1, Γ_{C} = 1 (with μ = 0) in search of neuronal avalanches^{3,9} (Fig. 6).
An avalanche that starts at discrete time t = a and ends at t = b has duration d = b − a and size s given by the total number of spikes in that interval (Fig. 6a). Using the notation S for a random variable and s for its numerical value, we observe a power law avalanche size distribution P_{S}(s) ∝ s^{−τ_{S}}, with the mean-field exponent τ_{S} = 3/2 (Fig. 6b)^{3,9,13}. Since the distribution P_{S}(s) is noisy for large s, for further analysis we use the complementary cumulative function C_{S}(s) = P(S ≥ s) ∝ s^{1−τ_{S}} (which gives the probability of having an avalanche with size equal to or greater than s), because it is very smooth and monotonic (Fig. 6c). Data collapse gives a finite-size scaling exponent c_{S} = 1 (Fig. 6d)^{15,17}.
We also observed a power law distribution of avalanche durations, with τ_{D} = 2 (Fig. 7a). The corresponding complementary cumulative distribution is C_{D}(d) ∝ d^{1−τ_{D}} = d^{−1}. From data collapse, we find a finite-size scaling exponent c_{D} = 1/2 (Fig. 7b), in accord with the literature^{13}.
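A minimal avalanche experiment can be sketched as follows (our own protocol: silent network, one seeded spike, count spikes until extinction; while the active fraction is small, the process is approximately a critical branching process, consistent with τ_{S} = 3/2):

```python
import numpy as np

def avalanche_sizes(N=1000, n_avalanches=10000, W=1.0, Gamma=1.0, seed=0):
    """Avalanche-size measurement at the critical point (mu = 0, Gamma*W = 1).

    A sketch: start from the silent state, force one neuron to spike, and
    count spikes until the activity dies out.  For small active fractions
    the offspring number is approximately Poisson(1), so a power law
    P_S(s) ~ s**(-3/2) is expected.
    """
    rng = np.random.default_rng(seed)
    sizes = np.empty(n_avalanches, dtype=np.int64)
    for a in range(n_avalanches):
        fired = np.zeros(N, dtype=bool)
        fired[0] = True                       # seeded spike
        size = 1
        while fired.any():
            # mu = 0: every eligible neuron carries only the last mean input
            p = min(Gamma * W * fired.sum() / N, 1.0)
            new = (rng.random(N) < p) & ~fired    # one-step refractory period
            size += int(new.sum())
            fired = new
        sizes[a] = size
    return sizes
```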
The model with dynamic parameters
The results of the previous section were obtained by fine-tuning the network at the critical point Γ_{C} = W_{C} = 1. Given the conjecture that the critical region presents functional advantages, a biological model should include some homeostatic mechanism capable of tuning the network towards criticality. Without such a mechanism, we cannot truly say that the network self-organizes toward the critical regime.
However, observing that the relevant quantity for criticality in our model is the product ΓW, with critical boundary Γ_{C}W_{C} = 1, we propose to work with dynamic gains Γ_{i}[t] while keeping the synapses W_{ij} fixed. The idea is to reduce the gain Γ_{i}[t] when the neuron fires, and to let the gain slowly recover towards a higher resting value after that:
Here, the factor τ is related to the characteristic recovery time of the gain, A is the asymptotic resting gain, and u ∈ [0, 1] is the fraction of gain lost due to the firing. This model is biologically plausible, and can be related to an activity-dependent decrease and recovery of the firing probability at the AIS^{47}. Our dynamic Γ_{i}[t] mimics the well known phenomenon of spike frequency adaptation^{18,19}.
Figure 8a shows a simulation of all-to-all coupled networks with N neurons and, for simplicity, W_{ij} = W. We observe that the average gain seems to converge toward the critical value Γ_{C}(W) = 1/W = 1, starting from different Γ[0] ≠ 1. As the network converges to the critical region, we observe power-law avalanche size distributions with exponent −3/2, leading to a cumulative function C_{S}(s) ∝ s^{−1/2} (Fig. 8b). However, we also observe supercritical bumps for large s and N, meaning that the network is in a slightly supercritical state.
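The whole self-organization loop can be sketched as follows (assuming the gain update Γ_{i}[t + 1] = Γ_{i}[t] + (A − Γ_{i}[t])/τ − uΓ_{i}[t]X_{i}[t] described above, the linear saturating Φ, and a single forced spike whenever the network falls silent; the parameter values are illustrative):

```python
import numpy as np

def simulate_dynamic_gains(N=2000, T=15000, W=1.0, mu=0.0, tau=500.0,
                           A=1.1, u=0.1, seed=0):
    """Self-organization by dynamic neuronal gains (a sketch).

    Gain update: Gamma_i[t+1] = Gamma_i[t] + (A - Gamma_i[t])/tau
                 - u*Gamma_i[t]*X_i[t].
    Returns the trajectory of the mean gain, expected to hover slightly
    above Gamma_C = 1/W (the SOSC state).
    """
    rng = np.random.default_rng(seed)
    V = rng.random(N)
    X = np.zeros(N, dtype=bool)
    Gamma = np.full(N, 2.0)                  # start well above Gamma_C = 1
    mean_gain = np.empty(T)
    for t in range(T):
        X_new = (rng.random(N) < np.clip(Gamma * V, 0.0, 1.0)) & ~X
        if not X_new.any():                  # reignite from the absorbing state
            X_new[rng.integers(N)] = True
        rho = X_new.mean()
        V = np.where(X_new, 0.0, mu * V + W * rho)
        Gamma = Gamma + (A - Gamma) / tau - u * Gamma * X_new
        X = X_new
        mean_gain[t] = Gamma.mean()
    return mean_gain
```

With these illustrative parameters, x = 1/(uτ) = 0.02 and the mean gain should settle near Γ^{*} ≈ 1.002, i.e. slightly supercritical.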
This empirical evidence is supported by a mean-field analysis of equation (38). Averaging over the sites, we have, for the average gain:
In the stationary state, we have Γ[t + 1] = Γ[t] = Γ^{*}, so:
But we have the relation
near the critical region, where C is a constant that depends on Φ(V) and μ; for example, C = 1 for the linear monomial Φ with μ = 0. So:
Eliminating the common factor Γ^{*}, and dividing by uC, we have:
Now, call x = 1/(uCτ). Then, we have:
The fine-tuning solution is to put by hand A = Γ_{C}, which leads to Γ^{*} = Γ_{C} independently of x. This fine-tuning solution should not be allowed in a true SOC scenario. So, suppose that A = BΓ_{C}. Then, we have:
Now we see that, to have a critical or supercritical state (where equation (41) holds), we must have B > 1; otherwise we fall into the subcritical state Γ^{*} < Γ_{C}, where ρ^{*} = 0 and our mean-field calculation is not valid. A first order approximation leads to:
This meanfield calculation shows that, if x → 0, we obtain a SOC state Γ^{*} → Γ_{C}. However, the strict case x → 0 would require a scaling τ = O(N^{a}) with an exponent a > 0, as done previously for dynamic synapses^{12,13,15,17}.
However, if we want to avoid the nonbiological scaling τ(N) = O(N^{a}), we can use biologically reasonable parameters like τ ∈ [10, 1000] ms, u ∈ [0.1, 1], C = 1 and A ∈ [1.1, 2]Γ_{C}. In particular, if τ = 1000, u = 1 and A = 1.1, we have x = 0.001 and:
Even the more conservative value τ = 100 ms gives Γ^{*} ≈ 1.001Γ_{C}. Although not perfect SOC^{10}, this result is fully sufficient to explain power law neuronal avalanches. We call this phenomenon self-organized supercriticality (SOSC), where the supercriticality can be very small. We have yet to determine the volume of the parameter space (τ, A, u) where the SOSC phenomenon holds. In the case of dynamic synapses W_{ij}[t], this parametric volume is very large^{15,17}, and we conjecture that the same occurs for the dynamic gains Γ_{i}[t]. This shall be studied in detail in another paper.
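The quoted values follow from the closed-form fixed point (a sketch of the mean-field derivation above; C = 1 for the linear monomial Φ with μ = 0, and the function name is ours):

```python
def gain_fixed_point(tau, u, A, Gamma_C=1.0, C=1.0):
    """Mean-field stationary gain (a sketch).

    From the stationary balance (A - G)/tau = u*C*(G - Gamma_C), with
    x = 1/(u*C*tau) and B = A/Gamma_C, the fixed point is
    G = Gamma_C*(1 + x*B)/(1 + x): slightly supercritical for any B > 1
    and finite tau, approaching Gamma_C as x -> 0.
    """
    x = 1.0 / (u * C * tau)
    B = A / Gamma_C
    return Gamma_C * (1.0 + x * B) / (1.0 + x)
```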
Discussion
Stochastic model
The stochastic neuron of Galves and Löcherbach^{21,41} is an interesting element for studies of networks of spiking neurons because it enables exact analytic results and simple numerical calculations. While the LSIF models of Soula et al.^{34} and Cessac^{35,36,37} introduce stochasticity in the neuron's behavior by adding noise terms to its potential, the GL model is agnostic about the origin of noise and randomness (which can be a good thing when several noise sources are present). All the random behavior is grouped into the single firing function Φ(V).
Phase transitions
Networks of GL neurons display a variety of dynamical states with interesting phase transitions. We looked for stationary regimes in such networks, for some specific firing functions Φ(V) with no spontaneous activity at the baseline potential (that is, with Φ(0) = 0 and I = 0). We studied the changes in those regimes as a function of the mean synaptic weight W and the mean neuronal gain Γ. We found basically three kinds of phase transition, depending on the behavior of Φ(V) ∝ V^{r} for low V:
r < 1: A ceaseless dynamic regime with no phase transitions (W_{C} = 0) similar to that found by Larremore et al.^{42};
r = 1: A continuous (second order) absorbing state phase transition in the Directed Percolation universality class usual in SOC models^{2,3,10,15,17};
r > 1: Discontinuous (first order) absorbing state transitions.
We also observed discontinuous phase transitions for any r > 0 when the neurons have a firing threshold V_{T} > 0.
The deterministic LIF neuron models, which do not have noise, do not seem to allow these kinds of transitions^{27,30,31}. The model studied by Larremore et al.^{42} is equivalent to the GL model with monomial saturating firing function with r = 1, V_{T} = 0, μ = 0 and Γ = 1. They did not report any phase transition (perhaps because of the effect of inhibitory neurons in their network), but found a ceaseless activity very similar to what we observed with r < 1.
Avalanches
In the case of second-order phase transitions (Φ(0) = 0, r = 1, V_{T} = 0), we detected firing avalanches at the critical boundary Γ_{C} = 1/W whose size and duration power law distributions present the standard mean-field exponents τ_{S} = 3/2 and τ_{D} = 2. We observed very good finite-size scaling and data collapse behavior, with finite-size exponents c_{S} = 1 and c_{D} = 1/2.
Maximal susceptibility and optimal dynamic range at criticality
Maximal susceptibility means maximal sensitivity to inputs, especially to weak inputs, which seems to be an interesting property in biological terms. So, this is a new example of optimization of information processing at criticality. We also observed, for small I, the behavior ρ(I) ∝ I^{m} with a fractional Stevens's exponent m = 1/δ = 1/2. Fractional Stevens's exponents maximize the network dynamic range since, outside criticality, we have only a proportional input-output behavior ρ(I) ∝ I, see ref. 4. As an example, in noncritical systems, an input range of 1–10000 spikes/s, arriving to the neurons through their extensive dendritic arbors, must be mapped onto a range also of 1–10000 spikes/s in each neuron, which is biologically impossible because neuronal firing does not span four orders of magnitude. However, at criticality, since ρ(I) ∝ I^{1/2}, a similar input range needs to be mapped only onto an output range of 1–100 spikes/s, which is biologically possible. Optimal dynamic range and maximal susceptibility to small inputs constitute prime biological motivations for neuronal networks to self-organize toward criticality.
Selforganized criticality
One way to achieve this goal is to use dynamical synapses W_{ij}[t], in a way that mimics the loss of strength after a synaptic discharge (presumably due to neurotransmitter vesicle depletion) and the subsequent slow recovery^{12,13,15,17}:
The parameters are the synaptic recovery time τ, the asymptotic value A, and the fraction u of synaptic weight lost after a firing. This synaptic dynamics has been examined in refs 12, 13, 15 and 17. For our all-to-all coupled network, we have N(N − 1) dynamic equations for the W_{ij}: a huge number, O(10^{8}) equations even for a moderate network of N = 10^{4} neurons^{15,17}. The possibility of well-behaved SOC in bulk dissipative systems with loading is discussed in refs 10, 13 and 50. Further considerations for systems that are conservative on average at the stationary state, as occurs in our model, are made in refs 15 and 17.
Inspired by the presence of the critical boundary, we proposed a new mechanism for short-term neural network plasticity, based on dynamic neuronal gains Γ_{i}[t] instead of the above dynamic synaptic weights. This new mechanism is biologically plausible, probably related to an activity-dependent firing probability at the axon initial segment (AIS)^{32,47}, and was found to be sufficient to self-organize the network near the critical region. We obtained good data collapse and finite-size behavior for the P_{S}(S) distributions.
The great advantage of this new SOC mechanism is its computational efficiency: when simulating N neurons with K synapses each, there are only N dynamic equations for the gains Γ_{i}[t], instead of NK equations for the synaptic weights W_{ij}[t]. Notice that, for the all-to-all coupled network studied here, this means O(N^{2}) equations for dynamic synapses but only O(N) equations for dynamic gains. This makes a huge difference for the network sizes that can be simulated.
We stress that, since we used finite τ, the criticality is not perfect (Γ^{*}/Γ_{C} ∈ [1.001; 1.01]). We therefore call it a self-organized supercriticality (SOSC) phenomenon. Interestingly, SOSC would be a concretization of Turing's intuition that the best brain operating point is slightly supercritical^{1}.
We speculate that this slight supercriticality could explain why humans are so prone to supercritical pathological states like epilepsy^{3} (prevalence 1.7%) and mania (prevalence 2.6% in the population). Our mechanism suggests that such pathological states arise from a small gain depression u or a small gain recovery time τ. These parameters are experimentally related to firing rate adaptation, and perhaps our proposal could be studied experimentally in normal and pathological tissues.
We also conjecture that this supercriticality of the whole network could explain the Subsampling Paradox in neuronal avalanches: since the initial experimental protocols^{9,10}, critical power laws have been seen when using arrays of N_{e} = 32–512 electrodes, very small numbers compared to the full biological network size of N = O(10^{6}–10^{9}) neurons. This situation N_{e} << N has been called subsampling^{51,52,53}.
The paradox occurs because models that present good power laws when avalanches are measured over the total number of neurons N show only exponential tails or lognormal behaviors under subsampling^{53}. No model, to the best of our knowledge, has solved this paradox^{10}. Our dynamic gains, which produce supercritical states like Γ^{*} = 1.01Γ_{C}, could be a solution to the paradox if the supercriticality of the whole network, described by a power law with a supercritical bump for large avalanches, turns out to be described by an apparent pure power law under subsampling. This possibility will be fully explored in another paper.
Directions for future research
Future research could investigate other network topologies and firing functions, heterogeneous networks, the effect of inhibitory neurons^{30,42}, and network learning. The study of self-organized supercriticality (and subsampling) with GL neurons and dynamic neuronal gains is particularly promising.
Methods
Numerical Calculations
All numerical calculations were done using MATLAB. Simulation procedures: simulation codes were written in Fortran90 and C++11. The avalanche statistics were obtained by simulating the evolution of finite networks of N neurons, with uniform synaptic strengths W_{ij} = W (W_{ii} = 0), monomial linear Φ(V) (r = 1) and critical parameter values W_{C} = 1 and Γ_{C} = 1. Each avalanche was started with all neuron potentials V_{i}[0] = V_{R} = 0 and by forcing the firing of a single random neuron i, setting X_{i}[0] = 1.
In contrast to standard integrate-and-fire^{12,13} or automata networks^{4,15,17}, stochastic networks can fire even after intervals with no firing (ρ[t] = 0), because the membrane voltages V[t] are not necessarily zero and Φ(V) can produce new delayed firings. So, our criterion for defining avalanches is slightly different from the previous literature: the network was simulated according to equation (1) until all potentials had decayed to values so low that further spontaneous firing would not be expected to occur for thousands of steps, which defines a stop time. Then, the total number of firings S is counted from the first firing up to this stop time.
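The stop-time criterion above can be sketched as a check on the expected number of firings in the next step. This is our own reading of the criterion for the general case (μ > 0), where potentials decay slowly and delayed firings remain possible; the concrete tolerance `eps` is an assumption, since the text only requires that further spontaneous firing be unlikely for thousands of steps.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Declare the avalanche over once the expected number of firings in the
// next step, sum_i Phi(V_i), falls below the tolerance eps. Phi is the
// monomial firing function with r = 1 and V_T = 0 used in the simulations.
bool avalanche_over(const std::vector<double> &V, double Gamma, double eps) {
    double expected = 0.0;
    for (double v : V)
        expected += std::min(1.0, Gamma * std::max(0.0, v));
    return expected < eps;
}
```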
The correct finitesize scaling for avalanche duration is obtained by defining the duration as D = D_{bare} + 5 time steps, where D_{bare} is the measured duration in the simulation. These extra five time steps probably arise from the new definition of avalanche used for these stochastic neurons.
Additional Information
How to cite this article: Brochini, L. et al. Phase transitions and selforganized criticality in networks of stochastic spiking neurons. Sci. Rep. 6, 35831; doi: 10.1038/srep35831 (2016).
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Turing, A. M. Computing machinery and intelligence. Mind 59, 433–460 (1950).
Chialvo, D. R. Emergent complex neural dynamics. Nature physics 6, 744–750 (2010).
Hesse, J. & Gross, T. Self-organized criticality as a fundamental property of neural systems. Frontiers in Systems Neuroscience 8 (2014).
Kinouchi, O. & Copelli, M. Optimal dynamical range of excitable networks at criticality. Nature physics 2, 348–351 (2006).
Beggs, J. M. The criticality hypothesis: how local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 366, 329–343 (2008).
Shew, W. L., Yang, H., Petermann, T., Roy, R. & Plenz, D. Neuronal avalanches imply maximum dynamic range in cortical networks at criticality. The Journal of Neuroscience 29, 15595–15600 (2009).
Massobrio, P., de Arcangelis, L., Pasquale, V., Jensen, H. J. & Plenz, D. Criticality as a signature of healthy neural systems. Frontiers in systems neuroscience 9 (2015).
Herz, A. V. & Hopfield, J. J. Earthquake cycles and neural reverberations: collective oscillations in systems with pulsecoupled threshold elements. Physical review letters 75, 1222 (1995).
Beggs, J. M. & Plenz, D. Neuronal avalanches in neocortical circuits. The Journal of neuroscience 23, 11167–11177 (2003).
Marković, D. & Gros, C. Power laws and self-organized criticality in theory and nature. Physics Reports 536, 41–74 (2014).
de Arcangelis, L., Perrone-Capano, C. & Herrmann, H. J. Self-organized criticality model for brain plasticity. Physical review letters 96, 028107 (2006).
Levina, A., Herrmann, J. M. & Geisel, T. Dynamical synapses causing self-organized criticality in neural networks. Nature physics 3, 857–860 (2007).
Bonachela, J. A., De Franciscis, S., Torres, J. J. & Muñoz, M. A. Self-organization without conservation: are neuronal avalanches generically critical? Journal of Statistical Mechanics: Theory and Experiment 2010, P02015 (2010).
De Arcangelis, L. Are dragon-king neuronal avalanches dungeons for self-organized brain activity? The European Physical Journal Special Topics 205, 243–257 (2012).
Costa, A., Copelli, M. & Kinouchi, O. Can dynamical synapses produce true self-organized criticality? Journal of Statistical Mechanics: Theory and Experiment 2015, P06004 (2015).
van Kessenich, L. M., de Arcangelis, L. & Herrmann, H. Synaptic plasticity and neuronal refractory time cause scaling behaviour of neuronal avalanches. Scientific Reports 6 (2016).
Campos, J., Costa, A., Copelli, M. & Kinouchi, O. Differences between quenched and annealed networks with dynamical links. arXiv:1604.05779 To appear in Physical Review E (2016).
Ermentrout, B., Pascal, M. & Gutkin, B. The effects of spike frequency adaptation and negative feedback on the synchronization of neural oscillators. Neural Computation 13, 1285–1310 (2001).
Benda, J. & Herz, A. V. A universal model for spikefrequency adaptation. Neural computation 15, 2523–2564 (2003).
Buonocore, A., Caputo, L., Pirozzi, E. & Carfora, M. F. A leaky integrateandfire model with adaptation for the generation of a spike train. Mathematical biosciences and engineering: MBE 13, 483–493 (2016).
Galves, A. & Löcherbach, E. Infinite systems of interacting chains with memory of variable length—stochastic model for biological neural nets. Journal of Statistical Physics 151, 896–921 (2013).
Lapicque, L. Recherches quantitatives sur l’excitation électrique des nerfs traitée comme une polarisation. J. Physiol. Pathol. Gen. 9, 620–635 (1907). Translation: Brunel, N. & van Rossum, M. C. Quantitative investigations of electrical nerve excitation treated as polarization. Biol. Cybernetics 97, 341–349 (2007).
Gerstein, G. L. & Mandelbrot, B. Random walk models for the spike activity of a single neuron. Biophysical journal 4, 41 (1964).
Burkitt, A. N. A review of the integrateandfire neuron model: I. homogeneous synaptic input. Biological cybernetics 95, 1–19 (2006).
Burkitt, A. N. A review of the integrateandfire neuron model: II. inhomogeneous synaptic input and network properties. Biological cybernetics 95, 97–112 (2006).
Naud, R. & Gerstner, W. The performance (and limits) of simple neuron models: generalizations of the leaky integrateandfire model. In Computational Systems Neurobiology 163–192 (Springer, 2012).
Brette, R. et al. Simulation of networks of spiking neurons: a review of tools and strategies. Journal of computational neuroscience 23, 349–398 (2007).
Brette, R. What is the most realistic singlecompartment model of spike initiation? PLoS Comput Biol 11, e1004114 (2015).
Benayoun, M., Cowan, J. D., van Drongelen, W. & Wallace, E. Avalanches in a stochastic model of spiking neurons. PLoS Comput Biol 6, e1000846 (2010).
Ostojic, S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature neuroscience 17, 594–600 (2014).
Torres, J. J. & Marro, J. Brain performance versus phase transitions. Scientific reports 5 (2015).
Platkiewicz, J. & Brette, R. A threshold equation for action potential initiation. PLoS Comput Biol 6, e1000850 (2010).
McDonnell, M. D., Goldwyn, J. H. & Lindner, B. Editorial: Neuronal stochastic variability: Influences on spiking dynamics and network activity. Frontiers in computational neuroscience 10 (2016).
Soula, H., Beslon, G. & Mazet, O. Spontaneous dynamics of asymmetric random recurrent spiking neural networks. Neural Computation 18, 60–79 (2006).
Cessac, B. A discrete time neural network model with spiking neurons. Journal of Mathematical Biology 56, 311–345 (2008).
Cessac, B. A view of neural networks as dynamical systems. International Journal of Bifurcation and Chaos 20, 1585–1629 (2010).
Cessac, B. A discrete time neural network model with spiking neurons: II. Dynamics with noise. Journal of mathematical biology 62, 863–900 (2011).
De Masi, A., Galves, A., Löcherbach, E. & Presutti, E. Hydrodynamic limit for interacting neurons. Journal of Statistical Physics 158, 866–902 (2015).
Duarte, A. & Ost, G. A model for neural activity in the absence of external stimuli. Markov Processes and Related Fields 22, 37–52 (2016).
Duarte, A., Ost, G. & Rodríguez, A. A. Hydrodynamic limit for spatially structured interacting neurons. Journal of Statistical Physics 161, 1163–1202 (2015).
Galves, A. & Löcherbach, E. Modeling networks of spiking neurons as interacting processes with memory of variable length. J. Soc. Franc. Stat. 157, 17–32 (2016).
Larremore, D. B., Shew, W. L., Ott, E., Sorrentino, F. & Restrepo, J. G. Inhibition causes ceaseless dynamics in networks of excitable nodes. Physical review letters 112, 138103 (2014).
Virkar, Y. S., Shew, W. L., Restrepo, J. G. & Ott, E. Metabolite transport through glial networks stabilizes the dynamics of learning. arXiv preprint arXiv:1605.03090 (2016).
Cooper, S. J. Donald O. Hebb’s synapse and learning rule: a history and commentary. Neuroscience & Biobehavioral Reviews 28, 851–874 (2005).
Tsodyks, M., Pawelzik, K. & Markram, H. Neural networks with dynamic synapses. Neural computation 10, 821–835 (1998).
Larremore, D. B., Shew, W. L. & Restrepo, J. G. Predicting criticality and dynamic range in complex networks: effects of topology. Physical review letters 106, 058101 (2011).
Kole, M. H. & Stuart, G. J. Signal processing in the axon initial segment. Neuron 73, 235–247 (2012).
Lipetz, L. E. The relation of physiological and psychological aspects of sensory intensity. In Principles of Receptor Physiology, 191–225 (Springer, 1971).
Naka, K.-I. & Rushton, W. A. S-potentials from luminosity units in the retina of fish (Cyprinidae). The Journal of physiology 185, 587 (1966).
Bonachela, J. A. & Muñoz, M. A. Self-organization without conservation: true or just apparent scale-invariance? Journal of Statistical Mechanics: Theory and Experiment 2009, P09009 (2009).
Priesemann, V., Munk, M. H. & Wibral, M. Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC neuroscience 10, 40 (2009).
Ribeiro, T. L. et al. Spike avalanches exhibit universal dynamics across the sleep-wake cycle. PloS one 5, e14129 (2010).
Ribeiro, T. L., Ribeiro, S., Belchior, H., Caixeta, F. & Copelli, M. Undersampled critical branching processes on small-world and random networks fail to reproduce the statistics of spike avalanches. PloS one 9, e94992 (2014).
Acknowledgements
This paper results from research activity on the FAPESP Center for Neuromathematics (FAPESP grant 2013/07699-0). OK and AAC also received support from Núcleo de Apoio à Pesquisa CNAIP-USP and FAPESP (grant 2016/00430-3). L.B., J.S. and A.C.R. also received CNPq support (grants 165828/2015-3, 310706/2015-7 and 306251/2014-0). We thank A. Galves for suggestions and revision of the paper, and M. Copelli and S. Ribeiro for discussions.
Author information
Authors and Affiliations
Contributions
L.B. and A.d.A.C. performed the simulations and prepared all the figures. O.K. and J.S. made the analytic calculations. O.K., J.S. and L.B. wrote the paper. M.A. and A.C.R. contributed with ideas, the writing of the paper and citations to the literature. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/