Phase transitions and self-organized criticality in networks of stochastic spiking neurons

Phase transitions and critical behavior are crucial issues in both theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks of general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the critical boundary of the continuous transition, neuronal avalanches occur whose size and duration distributions follow power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains – a form of short-term plasticity probably located at the axon initial segment (AIS) – instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call self-organized supercriticality (SOSC), in accordance with some intuitions of Alan Turing.


Introduction
The Critical Brain Hypothesis 2,3 states that (some) biological neuronal networks work near phase transitions because criticality enhances information processing capabilities [4][5][6] and health 7. The first discussion of criticality in the brain – in the sense that subcritical, critical and slightly supercritical branching processes of thoughts could describe human and animal minds – appeared in Turing's beautiful speculative 1950 Imitation Game paper 1. In 1995, Herz & Hopfield 8 noticed that self-organized criticality (SOC) models for earthquakes were mathematically equivalent to networks of integrate-and-fire neurons, and speculated that SOC might occur in the brain. In 2003, in a landmark paper, these theoretical conjectures found experimental support in the work of Beggs and Plenz 9, and by now more than five hundred papers on the subject can be found; see some reviews 2,3. Although not consensual, the Critical Brain Hypothesis can be considered at least a very fertile idea.
The open question about neuronal criticality is what mechanisms are responsible for tuning the network towards the critical state. Up to now, the main mechanism studied has been some dynamics in the links which, in the biological context, occurs at the synaptic level [10][11][12][13]. Here we propose a wholly new mechanism: dynamic neuronal gains, related to the diminution (and recovery) of the firing probability, an intrinsic neuronal property. The neuronal gain is experimentally related to the well known phenomenon of firing rate adaptation [14][15][16]. This new mechanism is sufficient to drive networks of stochastic neurons towards a critical boundary found, for the first time, for these models. The neuron model we use was proposed by Galves and Löcherbach 17 as a stochastic model of spiking neurons inspired by the traditional integrate-and-fire (IF) model.
Introduced in the early 20th century 18, IF elements have been extensively used in simulations of spiking neurons 16,[19][20][21][22][23][24]. Despite their simplicity, IF models have successfully emulated certain phenomena observed in biological neural networks, such as firing avalanches 10,11,25 and multiple dynamical regimes 26,27. In these models, the membrane potential V(t) integrates synaptic and external currents up to a firing threshold V_T 28. Then a spike is generated and V(t) drops to a reset potential V_R. The leaky integrate-and-fire (LIF) model extends the IF neuron with a leakage current, which causes the potential V(t) to decay exponentially towards a baseline potential V_B in the absence of input signals 20,22. LIF models are deterministic, but it has been claimed that stochastic models may be more adequate for simulation purposes 29. Some authors proposed to introduce stochasticity by adding noise terms to the potential 20,21,26,27,[29][30][31][32][33], yielding the leaky stochastic integrate-and-fire (LSIF) models.
Alternatively, the Galves-Löcherbach (GL) model 17,[34][35][36][37], and also the model used by Larremore et al. 38, introduce stochasticity in a different way. Instead of noise inputs, they assume that the firing of the neuron is a random event, whose probability of occurrence in any time step is a firing function Φ(V) of the membrane potential V. By subsuming all sources of randomness into a single function, the GL neuron model simplifies the analysis and simulation of noisy spiking neural networks. Brain networks are also known to exhibit plasticity: changes in neural parameters over time scales longer than the firing time scale 23,39. For example, short-term synaptic plasticity 40 has been incorporated in models by assuming that the strength of each synapse is lowered after each firing, and then gradually recovers towards a reference value 10,11. This kind of dynamics drives the synaptic weights of the network towards critical values, a phenomenon called self-organized criticality (SOC), which is believed to optimize the network's information processing 3,4,7,9,41.
In this work, we first study the dynamics of networks of GL neurons through a very simple and transparent mean-field calculation. We find both continuous and discontinuous phase transitions, depending on the average synaptic strength and the parameters of the firing function Φ(V). To the best of our knowledge, these phase transitions have never been observed in standard integrate-and-fire neurons. We also find that, at the second-order phase transition, the stimulated excitation of a single neuron causes avalanches of firing events (neuronal avalanches) similar to those observed in biological networks 3,9.
Second, we present a new mechanism for SOC based on a dynamics of the neuronal gains (a parameter of the neuron probably related to the axon initial segment, AIS 28,42), instead of the depression of coupling strengths (related to neurotransmitter vesicle depletion at synaptic contacts between neurons) proposed in the literature [10][11][12][13]. This new activity-dependent gain model is sufficient to achieve self-organized criticality, as shown both by simulation evidence and by mean-field calculations. The great advantage of this new SOC mechanism is its efficiency: we have only one adaptive parameter per neuron, instead of one per synapse.

The Model
We assume a network of N GL neurons that change states in parallel at sampling times with a uniform spacing ∆. Thus, the membrane potential of neuron i is modeled by a real variable V_i[t] indexed by a discrete time t, an integer that represents the sampling time t∆.
Each synapse transmits signals from some presynaptic neuron j to some postsynaptic neuron i, and has a synaptic strength w_ij. If neuron j fires between discrete times t and t + 1, its potential drops to V_R = 0. This event increments by w_ij the potential of every postsynaptic neuron i that does not fire in that interval. The potential of a non-firing neuron may also integrate an external stimulus I_i[t], which can model signals received from sources outside the network. Apart from these increments, the potential of a non-firing neuron decays at each time step towards the baseline voltage V_B by a factor µ ∈ [0, 1], which models the effect of a leakage current. Since the zero of potential is arbitrary, we assume V_B = 0. We introduce the Boolean variable X_i[t] ∈ {0, 1}, which denotes whether neuron i fired between t and t + 1. The potentials evolve as:

V_i[t+1] = 0 if X_i[t] = 1, and V_i[t+1] = µ V_i[t] + I_i[t] + Σ_j w_ij X_j[t] if X_i[t] = 0.   (1)

This corresponds to a GL neuron with a geometric leakage function g[t − t_s] = µ^(t−t_s), where t_s is the time of the last spike of neuron i, see 17. We have X_i[t+1] = 1 with probability Φ(V_i[t]), which is called the firing function 17,[34][35][36][37][38]. We also have X_i[t+1] = 0 if X_i[t] = 1 (refractory period). The function Φ is sigmoidal, that is, monotonically increasing, with limiting values Φ(−∞) = 0 and Φ(+∞) = 1, and with a single derivative maximum. We also assume that Φ(V) is zero up to some threshold potential V_T. If Φ is the shifted Heaviside step function Θ, Φ(V) = Θ(V − V_T), we have a deterministic discrete-time LIF neuron. Any other choice of Φ(V) gives a stochastic neuron.
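The update rule above can be sketched in a few lines of code. This is our own illustrative implementation, not the authors' code; the function names and parameter choices are ours, and we use the linear saturating firing function studied later in the paper:

```python
import numpy as np

def phi_linear(V, gamma=1.0):
    # Linear saturating firing function: Phi(V) = gamma*V, clipped to [0, 1]
    return np.clip(gamma * V, 0.0, 1.0)

def gl_step(V, X_prev, W, mu, I, phi, rng):
    """One parallel update of a network of GL neurons (a sketch).

    V      : membrane potentials
    X_prev : 0/1 spike indicators of the previous interval
    W      : matrix of synaptic strengths w_ij (row i = postsynaptic)
    """
    # Firing is a random event with probability Phi of the current
    # potential; a neuron that has just fired is refractory for one step.
    X = ((rng.random(V.size) < phi(V)) & (X_prev == 0)).astype(int)
    # Equation (1): firing neurons reset to V_R = 0, the others leak
    # by the factor mu and integrate synaptic plus external input.
    V = np.where(X == 1, 0.0, mu * V + I + W @ X)
    return V, X

rng = np.random.default_rng(0)
N = 200
W = np.full((N, N), 1.0 / N)      # w_ij = W_ij / N with W_ij = 1
V = rng.uniform(0.0, 1.0, N)
X = np.zeros(N, dtype=int)
for _ in range(50):
    V, X = gl_step(V, X, W, 0.5, 0.0, phi_linear, rng)
```

Each call advances the whole network by one time step ∆; the refractory mask forbids two consecutive firings of the same neuron.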
The network activity is measured by the fraction (or density) ρ[t] of firing neurons:

ρ[t] = (1/N) Σ_{i=1}^{N} X_i[t].   (2)

The density ρ[t] can be computed from the probability density p[t](V) of potentials at time t:

ρ[t+1] = ∫ Φ(V) p[t](V) dV,   (3)

where p[t](V) dV is the fraction of neurons with potential in the range [V, V + dV] at time t.
Neurons that fire between t and t + 1 have their potential reset to zero. They contribute to p[t + 1](V) a Dirac impulse at potential V_R = 0, with amplitude (integral) ρ[t] given by equation (3). In subsequent time steps, the potentials of all neurons evolve according to equation (1). This process modifies p[t](V) also for V ≠ V_R.

Results
In this paper we study only fully connected networks, that is, each neuron has N − 1 neighbors. In this case, to obtain sensible results, we must scale the synapses as w_ij = W_ij/N, where the random variables W_ij have finite mean W and finite variance. We also study only the case with V_R = 0 and I_i[t] = I (constant uniform input). So, for these networks, equation (1) reads:

V_i[t+1] = µ V_i[t] + I + (1/N) Σ_j W_ij X_j[t]   (4)

for a neuron with X_i[t] = 0, and V_i[t+1] = 0 otherwise.

Mean-field calculation
In the mean-field analysis, we assume that the synaptic weights W_ij follow a distribution with average W and finite variance. The mean-field approximation disregards correlations, so the final term of equation (4) becomes:

(1/N) Σ_j W_ij X_j[t] → W ρ[t].   (5)

Notice that the variance of the weights W_ij becomes immaterial when N tends to infinity.
For now, we assume that the external input I is zero for all neurons and all times. Therefore, every neuron i that does not fire between t and t + 1 (that is, with X_i[t] = 0) has its potential changed in the same way:

V_i[t+1] = µ V_i[t] + W ρ[t].   (6)

Recall that the probability density p[t](V) has a Dirac impulse at potential U_0 = 0, representing all neurons that fired in the previous interval. This Dirac impulse is modified in later steps by equation (6). It follows that, once all neurons have fired at least once, the density p[t](V) will be a combination of discrete impulses with amplitudes η_k[t], where η_k[t] is the fraction of neurons with firing age k at discrete time t, that is, neurons that fired between times t − k − 1 and t − k and did not fire between t − k and t. The common potential of those neurons at time t is U_k[t]. In particular, η_0[t] is the fraction ρ[t − 1] of neurons that fired in the previous time step. For this type of distribution, the integral of equation (3) becomes a discrete sum:

ρ[t+1] = Σ_{k=0}^{∞} η_k[t] Φ(U_k[t]).   (7)

According to equation (6), the values η_k[t] and U_k[t] evolve by the equations

η_k[t+1] = (1 − Φ(U_{k−1}[t])) η_{k−1}[t],   (8)
U_k[t+1] = µ U_{k−1}[t] + W ρ[t]   (9)

for all k ≥ 1, with η_0[t+1] = ρ[t] and U_0[t+1] = 0.

Stationary states for general Φ and µ
A stationary state is a density p[t](V) = p(V) of membrane potentials that does not change with time. In such a regime, the quantities U_k and η_k no longer depend on t. Therefore, equations (8)-(9) become the recurrence equations

η_k = (1 − Φ(U_{k−1})) η_{k−1},   (10)
U_k = µ U_{k−1} + W ρ   (11)

for all k ≥ 1.
The quantity W_∞ can be obtained analytically by imposing the condition U_m = 1 in equations (10)-(11).
Since equations (10) are homogeneous in the η_k, the normalization condition Σ_{k=0}^{∞} η_k = 1 must be included explicitly. So, integrating over the density p(V) leads to a discrete distribution P(V) (see Fig. 1 for a specific Φ).
Equations (10)-(11) can be solved numerically, e.g. by simulating the evolution of the potential probability density p[t](V) according to equations (8)-(9), starting from an arbitrary initial distribution, until a stable distribution is reached (the probabilities η_k should be renormalized to unit sum after each time step, to compensate for rounding errors).

The monomial saturating Φ with µ > 0

Now we consider a specific class of firing functions, the saturating monomials. This class is parametrized by a positive degree r and a neuronal gain γ > 0. In all functions of this class, Φ(V) is 0 when V ≤ V_T, and 1 when V ≥ V_S, where the saturation potential is V_S = V_T + 1/γ. In the interval V_T < V < V_S we have the monomial:

Φ(V) = (γ (V − V_T))^r.

Note that these functions can be seen as limiting cases of sigmoidal functions, and that we recover the deterministic LIF model in the limit γ → ∞, when Φ approaches the step function Θ(V − V_T). Notice that for such saturating monomials there is a value W_B = V_T + 2/γ above which the dynamics is deterministic, leading to 2-cycles; see the Appendix. The case of isolated neurons with the monomial saturating Φ is also studied in the Appendix.
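In code, the whole class can be written as a single helper (our sketch; the function name and defaults are ours):

```python
import numpy as np

def phi_monomial(V, r=1.0, gamma=1.0, V_T=0.0):
    """Saturating monomial firing function: Phi = 0 for V <= V_T,
    Phi = 1 for V >= V_S = V_T + 1/gamma, and the monomial
    (gamma * (V - V_T))**r in between."""
    x = np.clip(gamma * (np.asarray(V, dtype=float) - V_T), 0.0, 1.0)
    return x ** r
```

Taking γ → ∞ squeezes the stochastic window (V_T, V_S) to zero width, recovering the deterministic step function of the LIF model.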
In the analyses that follow, the control parameters are W and γ, and ρ(W, γ) is the order parameter. We obtain numerically ρ(W, γ) and the phase diagram (W, γ) for several values of µ > 0, for the linear (r = 1) saturating Φ with V_T = 0 (Fig. 2). Only the first 100 peaks (U_k, η_k) were considered since, for the given µ and Φ, there was no significant probability density beyond that point. The same numerical method can be used for r ≠ 1, V_T ≠ 0.

Analytic results for µ = 0

Below we give the results of a simple mean-field analysis in the limits N → ∞ and µ → 0. The latter implies that, at time t + 1, the neuron "forgets" its previous potential V_i[t] and integrates only the incoming inputs. This scenario is interesting because it enables analytic solutions, yet exhibits all the kinds of behavior and phase transitions that occur with µ > 0.
Continuous phase transitions in networks: the case with r = 1.
When r = 1, we have the linear function Φ(V) = γV for 0 < V < V_S = 1/γ. The stationary state condition, equation (16), then becomes:

ρ = γW ρ (1 − ρ).   (17)

The two solutions are the absorbing state ρ = 0 and the non-trivial state:

ρ = (W − W_C)/W,   (18)

with W_C = 1/γ. Since we must have 0 < ρ ≤ 1/2, this solution is valid only for W_C < W ≤ W_B = 2/γ. This solution describes a stationary state where a fraction 1 − ρ of the neurons are at potential U_1 = W − W_C. The neurons that will fire in the next step are a fraction Φ(U_1) of those, which is again a fraction ρ of the total. For any W > W_C, the state ρ = 0 is unstable: any small perturbation of the potentials causes the network to converge to the active stationary state above. For W < W_C, the solution ρ = 0 is stable and absorbing. In the ρ(W) plot, the locus of stationary regimes defined by equation (18) bifurcates at W = W_B into the two bounds of equation (31) that delimit the 2-cycles (Fig. 3b).
Discontinuous phase transitions in networks: the case with r > 1.
When r > 1 and W ≤ W_B = 2/γ, the stationary state condition is:

ρ = (1 − ρ)(γW ρ)^r.   (19)

This equation has a non-trivial solution ρ_+ only when 1 < r ≤ 2 and W_C(r) ≤ W ≤ W_B, for a certain W_C(r) > 1/γ. In this case, at W = W_C(r), there is a discontinuous (first-order) phase transition to a regime with activity ρ = ρ_C(r) ≤ 1/2 (Fig. 3d). It turns out that ρ_C(r) → 0 as r → 1, recovering the continuous phase transition in that limit. For r = 2, the solution to equation (19) is a single point ρ(W_C) = ρ_C = 1/2 at W_C = 2/γ = W_B (Fig. 3f).
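The discontinuity can be checked numerically by locating the roots of the stationary condition (a sketch with our own helper; the grid resolution and the test values of W and r are arbitrary choices):

```python
import numpy as np

def nontrivial_rho(W, r, gamma=1.0, n=200001):
    """Largest rho in (0, 1/2] solving rho = (1 - rho)*(gamma*W*rho)**r,
    or None if only the absorbing state rho = 0 exists."""
    rho = np.linspace(1e-9, 0.5, n)
    g = rho - (1.0 - rho) * (gamma * W * rho) ** r
    s = np.where(np.diff(np.sign(g)) != 0)[0]   # sign changes bracket roots
    return float(rho[s[-1]]) if s.size else None
```

For r = 1.5 (with γ = 1), the non-trivial branch is absent at W = 1.5 but already present with a finite ρ_+ at W = 1.9: the activity appears with a jump rather than continuously, which is the signature of a first-order transition.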
Discontinuous transitions also occur if the firing threshold V_T is nonzero. Analytic results for µ = 0, V_T > 0, I > 0 are given in the Appendix. In Fig. 3, the absorbing state ρ = 0 is now stable (solid line at zero). The non-trivial fixed point ρ_+ starts with the value ρ_C at W_C and bifurcates at W_B, creating the boundary curves (gray) that delimit possible 2-cycles. At W_C there also appears the unstable separatrix ρ_− (dashed line). Panel e) shows ceaseless activity (no phase transitions) for r = 0.25, 0.5 and 0.75; the activity approaches zero for W → 0 as a power law. In the limiting case r = 2 (panel f) there is no ρ > 0 fixed point, only the stable ρ = 0 (black), the 2-cycle region (gray) and the unstable separatrix (dashes).
Ceaseless activity: the case with r < 1.
When r < 1, there is no absorbing solution ρ = 0 of equation (19). In the W → 0 limit we get ρ(W) = (γW)^(r/(1−r)). This power law means that ρ > 0 for any W > 0, that is, W_C(r) = 0 (Fig. 3e). We recover the second-order transition, with W_C(r = 1) = 1/γ, when r → 1 in equation (19). Interestingly, this ceaseless activity (ρ > 0 for any W > 0) seems similar to that found by Larremore et al. 38 with a µ = 0 linear saturating model. Their ceaseless activity, even with r = 1, is perhaps due to the presence of inhibitory neurons in their network.

Neuronal avalanches

Firing avalanches in neural networks have attracted significant interest because of their possible connection to efficient information processing 3-5,7,9. Through simulations, we studied the critical point W_C = 1, γ_C = 1 (with µ = 0) in search of neuronal avalanches 3,9 (Fig. 4).
An avalanche that starts at discrete time t = a and ends at t = b has duration d = b − a and size s = N Σ_{t=a}^{b} ρ[t] (Fig. 4a). Using the notation S for a random variable and s for its numerical value, we observe a power law avalanche size distribution P_S(s) ≡ P(S = s) ∝ s^(−β), with the mean-field exponent β = 3/2 (Fig. 4b) 3,9,11. Since the distribution P_S(s) is noisy for large s, for further analysis we use the complementary cumulative function C_S(s) ≡ P(S ≥ s) = Σ_{k=s}^{∞} P_S(k) (which gives the probability of having an avalanche with size equal to or greater than s), because it is smooth and monotonic (Fig. 4c). Data collapse gives a finite-size scaling exponent c_S = 1 (Fig. 4d) 12,13.
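At the critical point, the fully connected model with µ = 0 reduces to a simple branching-like process: all non-refractory neurons share the potential W n/N, where n is the number of spikes in the previous step. The avalanche measurement can then be sketched as follows (our own simplified code, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

def avalanche_size(N, W=1.0, gamma=1.0):
    """Total number of spikes in one avalanche started by forcing a
    single neuron to fire, at the critical point gamma * W = 1."""
    n, size = 1, 0
    while n > 0:
        size += n
        p = min(1.0, gamma * W * n / N)   # Phi(V) with V = W * n / N
        n = rng.binomial(N - n, p)        # firing neurons are refractory
    return size

sizes = np.array([avalanche_size(1000) for _ in range(5000)])
# Complementary cumulative function C_S(s) = P(S >= s)
s_vals = np.arange(1, sizes.max() + 1)
C_S = np.array([(sizes >= s).mean() for s in s_vals])
```

On a log-log plot, C_S should follow s^(−1/2) up to a finite-size cutoff, consistent with P_S(s) ∝ s^(−3/2).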
We also observed a power law distribution for avalanche durations, P_D(d) ≡ P(D = d) ∝ d^(−δ), with δ = 2 (Figure 5a). The complementary cumulative distribution is C_D(d) ≡ Σ_{k=d}^{∞} P_D(k) ∝ d^(−δ+1) = d^(−1). From data collapse, we find a finite-size scaling exponent c_D = 1/2 (Fig. 5b), in accord with the literature 11.

The model with dynamic parameters
The results of the previous section were obtained by fine-tuning the network at the critical point γ_C = W_C = 1. Given the conjecture that the critical situation has functional advantages, a biological model should include some homeostatic mechanism capable of tuning the network towards criticality. Without such a mechanism, we cannot truly say that the network self-organizes toward the critical regime.
However, observing that the relevant condition for criticality in our model is the critical boundary γ_C W_C = 1, we propose to work with dynamic gains γ_i[t] while keeping the synapses W_ij fixed. The idea is to reduce the gain γ_i[t] when the neuron fires, and to let the gain slowly recover towards a higher resting value afterwards:

γ_i[t+1] = γ_i[t] + (A − γ_i[t])/τ − u γ_i[t] X_i[t].   (20)

Here the factor τ is the characteristic recovery time of the gain, A is the asymptotic resting gain, and u ∈ [0, 1] is the fraction of gain lost due to the firing. This model is biologically plausible, and can be related to a decrease and recovery, due to the neuron's activity, of the firing probability at the AIS 42. Our dynamic γ_i[t] mimics the well known phenomenon of firing rate adaptation 14,15.
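A minimal simulation sketch of this mechanism (our own code; the network size, the kick applied to a silent network, and all parameter values are our choices, with µ = 0 and Φ_i(V) = γ_i V for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
N, W = 1000, 1.0                   # critical gain: gamma_C = 1/W = 1
tau, A, u = 100.0, 1.1, 1.0        # recovery time, resting gain, loss
gamma = rng.uniform(0.0, 1.0, N)   # initial dynamic gains gamma_i[0]
V = np.zeros(N)
X = np.zeros(N, dtype=int)
trace = []
for t in range(20000):
    p = np.clip(gamma * V, 0.0, 1.0)             # Phi_i(V) = gamma_i * V
    X_new = ((rng.random(N) < p) & (X == 0)).astype(int)
    if X_new.sum() == 0:                         # silent network:
        X_new[rng.integers(N)] = 1               # kick one neuron
    V = np.where(X_new == 1, 0.0, (W / N) * X_new.sum())   # mu = 0
    # Equation (20): the gain drops by a fraction u at firing and
    # recovers towards the resting value A with time constant tau.
    gamma = gamma + (A - gamma) / tau - u * gamma * X_new
    X = X_new
    trace.append(gamma.mean())
mean_gain = np.mean(trace[-5000:])
```

Starting well below γ_C, the average gain climbs and then hovers slightly above γ_C = 1, as the mean-field analysis of equation (20) predicts.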
This approach is sufficient to achieve a state very similar to self-organized criticality. Fig. 6a shows a simulation of all-to-all coupled networks with N neurons and, for simplicity, W_ij = W. We observe that the average gain γ converges toward the critical value γ_C(W) = 1/W = 1, starting from different initial conditions γ[0]. As the network converges to the critical state, we observe power-law avalanche size distributions with exponent −3/2, leading to a cumulative function C_S(s) ∝ s^(−1/2) (Fig. 6b). Curiously, the finite-size scaling exponent is c_S = 2/3, which differs from that observed in the static model with fixed gains (c_S = 1, Fig. 4d). This empirical evidence is supported by a mean-field analysis of equation (20). Averaging over the sites, we have for the average gain:

γ[t+1] = γ[t] + (A − γ[t])/τ − u γ[t] ρ[t].   (21)

In the stationary state, we have γ[t+1] = γ[t] = γ* and ρ[t] = ρ*, so that:

(A − γ*)/τ = u γ* ρ*.   (22)

But near the critical region we have the relation:

ρ* = C (γ* − γ_C)/γ*,   (23)

where C is a constant that depends on Φ(V) and µ; for example, with µ = 0, we have C = 1 for the linear monomial model. So:

(A − γ*)/τ = u C (γ* − γ_C).

Eliminating the common factor γ* and dividing by uC, we have:

(A − γ*)/(uCτ) = γ* − γ_C.

Now call x = 1/(uCτ). Then we have:

γ* = (γ_C + xA)/(1 + x).

The fine-tuning solution is to put by hand A = γ_C, which leads to γ* = γ_C independently of x. This fine-tuning solution should not be allowed in a true SOC scenario. So, suppose that A = Bγ_C. Then we have:

γ* = γ_C (1 + Bx)/(1 + x).

Now we see that, to have a critical or supercritical state (where equation (23) holds), we must have B > 1; otherwise we fall into the subcritical state γ* < γ_C, where ρ* = 0 and this mean-field calculation is not valid. A first-order approximation leads to:

γ* ≈ γ_C [1 + (B − 1)x].

This mean-field calculation shows that, if x → 0, we obtain a SOC state γ* → γ_C. However, the strict case x → 0 would require a scaling τ = O(N^a) with an exponent a > 0, as done previously for dynamic synapses [10][11][12][13]. This scaling is necessary because all-to-all networks are pathological: the same pathology implies the scaling w_ij = W_ij/N, which is also non-biological since it depends on the non-local information given by N. This dependence on N is similar to the scaling J_ij/N for magnetic couplings in spin systems on all-to-all networks, which is also non-physical.
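The closed-form stationary gain can be checked directly (a small numerical sketch; the function names are ours):

```python
def gamma_star(B, x, gamma_c=1.0):
    """Stationary gain gamma* = gamma_C * (1 + B*x) / (1 + x), obtained
    from (A - gamma*)/tau = u*C*(gamma* - gamma_C) with A = B*gamma_C
    and x = 1/(u*C*tau)."""
    return gamma_c * (1.0 + B * x) / (1.0 + x)

# First-order approximation: gamma* ~ gamma_C * (1 + (B - 1) * x)
approx = lambda B, x, gamma_c=1.0: gamma_c * (1.0 + (B - 1.0) * x)
```

With B = 1.1, the choice x = 0.1 (for example u = C = 1, τ = 10) gives γ* ≈ 1.009 γ_C, the slightly supercritical value quoted in the text, while x = 0.001 gives γ* ≈ 1.0001 γ_C, experimentally indistinguishable from true SOC.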
Even a more conservative value τ = 10 ms gives γ* = 1.01 γ_C. Although not perfect SOC, this result is fully sufficient to explain power law neuronal avalanches. We call this phenomenon self-organized supercriticality (SOSC), where the supercriticality can be very small.
We have yet to determine the volume of the parameter space (τ, A, u) where the SOSC phenomenon holds. In the case of dynamic synapses W_ij[t], this parametric volume is very large 12,13 and we conjecture that the same occurs for the dynamic gains γ_i[t]. This shall be studied in detail in another paper.

Discussion
Stochastic model: The stochastic neuron introduced by Galves and Löcherbach 17,37 is an interesting element for studies of networks of spiking neurons because it enables exact analytic results and simple numerical calculations. While the LSIF models of Soula et al. 30 and Cessac [31][32][33] introduce stochasticity in the neuron's behavior by adding noise terms to its potential, the GL model is agnostic about the origin of noise and randomness (which can be a good thing when several noise sources are present). All the random behavior is grouped into the single firing function Φ(V).
Phase transitions: Networks of GL neurons display a variety of dynamical states with interesting phase transitions. We looked for stationary regimes in such networks, for some specific firing functions Φ(V) with no spontaneous activity at the baseline potential (that is, with Φ(0) = 0 and I = 0). We studied the changes in those regimes as a function of the mean synaptic weight W and the mean neuronal gain γ. We found basically three kinds of phase transition, depending on the behavior of Φ(V) ∝ V^r for low V: r < 1: a ceaseless dynamic regime with no phase transitions (W_C = 0), similar to that found by Larremore et al. (2014); r = 1: a continuous (second-order) absorbing state phase transition in the Directed Percolation universality class, usual in SOC models 2,3,12,13; r > 1: discontinuous (first-order) absorbing state transitions.

We also observed discontinuous phase transitions for any r > 0 when the neurons have a firing threshold V T > 0, see Appendix.
The deterministic LIF neuron models, which have no noise, do not seem to allow these kinds of transitions 23,26,27. The model studied by Larremore et al. 38 is equivalent to the GL model with a monomial saturating firing function with r = 1, V_T = 0 and γ = 1. They did not report any phase transition (perhaps because of the effect of inhibitory neurons in their network), but found a ceaseless activity very similar to what we observed with r < 1.
Avalanches: In the case of second-order phase transitions (Φ(0) = 0, r = 1, V_T = 0), we detected firing avalanches at the critical boundary γ_C = 1/W, whose size and duration power law distributions present the standard mean-field exponents β = 3/2 and δ = 2. We observed very good finite-size scaling and data collapse behavior, with finite-size exponents c_S = 1 and c_D = 1/2.

Self-organized criticality:
One way to achieve self-organized criticality is to use dynamical synapses W_ij[t], in a way that mimics the loss of synaptic strength after a discharge (presumably due to neurotransmitter vesicle depletion) and the subsequent slow recovery [10][11][12][13], with equations analogous to equation (20); here K_j is the number of neighbors of the presynaptic neuron j. The parameters are the synaptic recovery time τ, the asymptotic value A and the fraction u of synaptic strength lost after firing. This has been examined in [10][11][12][13]. For our all-to-all coupled network, we have K = N − 1 and N(N − 1) dynamic equations for the W_ij. This is a huge number of equations, O(10^8) even for a moderate network of N = 10^4 neurons 12,13. The possibility of well-behaved SOC in bulk dissipative systems with loading is discussed in 11,43. Further considerations for systems with conservation on average at the stationary state, as occurs in our model, are made in 12,13.
Inspired by the presence of the critical boundary, we proposed a new mechanism for short-time-scale neural network plasticity, based on dynamic neuronal gains γ_i[t] instead of the above dynamic synaptic weights. This new mechanism is biologically plausible, probably related to an activity-dependent firing probability at the AIS 28,42, and was found to be sufficient to obtain neuronal avalanches. We obtained good data collapse and finite-size behavior for the P_S(s) distributions but, in contrast with the static model, we get a finite-size exponent c_S = 2/3. The reason for this difference is not clear at present, but we notice that the exponent c_S = 2/3 has been found previously in the Pruessner-Jensen SOC model and explained by a field theory elaborated for such systems 43.
The great advantage of this new SOC mechanism is its computational efficiency: when simulating N neurons with K synapses each, there are only N dynamic equations for the gains γ_i[t], instead of NK equations for the synaptic weights W_ij[t]. Notice that, for the all-to-all coupled network studied here, this means O(N^2) equations for dynamic synapses but only O(N) equations for dynamic gains. This makes a huge difference for the network sizes that can be simulated.
We stress that, since we used finite τ, the criticality is not perfect (γ*/γ_C ∈ [1.001, 1.01]). So we call it a self-organized supercriticality (SOSC) phenomenon. If x = 1/(uCτ) ≈ 0.001, the stationary state is experimentally indistinguishable from true SOC. However, if x < 100, large avalanches can still be obtained. Interestingly, SOSC would be a concretization of Turing's intuition that the best brain operating point is slightly supercritical 1.
We speculate that this slight supercriticality could explain why humans are so prone to supercritical-like pathological states such as epilepsy (prevalence 1.7%) and mania (prevalence 2.6%) 3. In our mechanism, such pathological states arise from a small gain depression u or a small gain recovery time τ. These parameters are experimentally related to firing rate adaptation, and perhaps our proposal could be studied experimentally in normal and pathological tissues.
We also conjecture that this supercriticality of the whole network could explain the subsampling paradox in neuronal avalanches: since the initial experimental protocols 9, critical power laws have been seen when using arrays of N_e = 32–512 electrodes, a very small number compared to the full biological network with N = O(10^6–10^9) neurons. This situation N_e << N has been called subsampling [44][45][46].
The paradox occurs because models that present good power laws for avalanches measured over the total number of neurons N present only exponential tails or log-normal behaviors under subsampling 46. No model, to the best of our knowledge, has solved this paradox. Our dynamic gains, since they produce supercritical states like γ* = 1.01 γ_C, could offer a solution if the supercriticality of the whole network, described by a power law with a supercritical bump for large avalanches, turns into an apparently pure power law under subsampling. This possibility will be fully explored in another paper.
Directions for future research: Future research could investigate other network topologies and firing functions, heterogeneous networks, the effect of inhibitory neurons 26,38, and network learning. The study of self-organized supercriticality (and subsampling) with GL neurons and dynamic neuronal gains is particularly promising.

Figure 5. Avalanche duration statistics in the static model: simulations at the critical point W_C = 1, γ_C = 1 (µ = 0) for network sizes N = 1000, 2000, 4000, 8000, 16000 and 32000. a) Probability distribution P_D(d) ≡ P(D = d) of avalanche durations d. The dashed reference line is proportional to d^(−δ), with δ = 2. b) Data collapse of C_D(d) d versus d/N^(c_D), with the cutoff exponent c_D = 1/2. The complementary cumulative function C_D(d) ≡ Σ_{k=d}^{∞} P_D(k), being an integral of P_D(d), has power law exponent −δ + 1 = −1.

Figure 6. Self-organization with dynamic neuronal gains: simulations of a network of GL neurons with fixed W_ij = W = 1, u = 1, A = 1.1 and τ = 1000 ms. The dynamic gains γ_i[t] start with γ_i[0] uniformly distributed in [0, γ_max]. The average initial condition is γ[0] ≡ (1/N) Σ_{i}^{N} γ_i[0] ≈ γ_max/2, which produces the different initial conditions γ[0]. (a) Self-organization of the average gain γ[t] over time. The horizontal dashed line marks the value γ_C = 1. (b) Data collapse of C_S(s) s^(1/2) versus s/N^(c_S) for several N, with the cutoff exponent c_S = 2/3.

Figure 8. Phase transitions for V_T > 0: monomial model with µ = 0, r = 1, γ = 1 and thresholds V_T = 0, 0.05 and 0.1. Solid black lines represent stable fixed points, dashed black lines represent unstable fixed points, and grey lines correspond to the marginally stable boundaries of the 2-cycle regime. The discontinuity ρ_C goes to zero for V_T → 0.