Abstract
Neurons in the brain are wired into adaptive networks that exhibit collective dynamics as diverse as scale-specific oscillations and scale-free neuronal avalanches. Although existing models account for oscillations and avalanches separately, they typically do not explain both phenomena, are too complex to analyze analytically, or are intractable to infer rigorously from data. Here we propose a feedback-driven Ising-like class of neural networks that captures avalanches and oscillations simultaneously and quantitatively. In the simplest yet fully microscopic model version, we can analytically compute the phase diagram and make direct contact with human brain resting-state activity recordings via tractable inference of the model's two essential parameters. The inferred model quantitatively captures the dynamics over a broad range of scales, from single-sensor oscillations to collective behaviors of extreme events and neuronal avalanches. Importantly, the inferred parameters indicate that the coexistence of scale-specific (oscillations) and scale-free (avalanches) dynamics occurs close to a non-equilibrium critical point at the onset of self-sustained oscillations.
Main
Synchronization is a key organizing principle that leads to the emergence of coherent macroscopic behaviors across diverse biological networks^{1}. From Hebb's neural assemblies^{2} to synfire chains^{3}, synchronization has also strongly shaped our understanding of brain dynamics and function^{4}. The classic and arguably most prominent example of large-scale neural synchronization is brain oscillations, first reported about a century ago^{5}: periodic, large deflections in electrophysiological recordings such as electroencephalography (EEG), magnetoencephalography (MEG) or local field potential (LFP)^{4,5}. As oscillations are thought to play a fundamental role in brain function, their mechanistic origins have been the subject of intense research. According to the current view, the canonical circuit that generates prominent brain rhythms such as the alpha oscillations and the alternation of up- and down-states uses mutual coupling between excitatory (E) and inhibitory (I) neurons^{6}. Alternative circuits, including I–I population coupling, have been proposed to explain other brain rhythms such as high-frequency gamma oscillations^{7}. Setting biological details aside, the majority of research has predominantly focused on the emergence of synchronization at a preferred temporal scale: the oscillation frequency.
Yet brain activity also exhibits complex, large-scale cooperative dynamics with characteristics that are antithetic to those of oscillations. In particular, empirical observations of neuronal avalanches have shown that brain rhythms coexist with activity cascades in which neuronal groups fire in patterns with no characteristic time or spatial scale, suggesting that the brain may operate near criticality^{8,9,10,11,12,13,14}. In this context, the coexistence of scale-free neuronal avalanches with scale-specific oscillations suggests an intriguing dichotomy that is currently not understood. On the one hand, models of brain oscillations are very specific and seek to capture physiological mechanisms underlying particular brain rhythms. On the other hand, attempts to explain the emergence of neuronal avalanches almost exclusively focus on criticality-related aspects and ignore coexisting behaviors such as oscillations, even though such behaviors may themselves be essential for understanding the putative criticality. Among the few exceptions^{15,16,17,18,19}, Poil et al. proposed a probabilistic integrate-and-fire spiking model with E and I neurons, which generates long-range correlated fluctuations reminiscent of MEG oscillations in the resting state, with suprathreshold activity following power-law statistics consistent with neuronal avalanches and criticality^{15}. More recently, by adopting a coarse-grained Landau–Ginzburg approach to neural network dynamics, Di Santo et al. have shown that neuronal avalanches and related putative signatures of criticality co-occur at a synchronization phase transition, where collective oscillations may also emerge^{17}. These results were subsequently extended to a hybrid-type synchronization transition in a generalized Kuramoto model^{20}.
Although these and other proposed approaches show that neuronal avalanches may coexist with some form of network oscillations^{15,19} or network synchronization^{17,20}, they suffer from three major shortcomings. First, these models are neither simple (for example, in terms of parameters) nor analytically tractable, making an exhaustive exploration of their phase diagram out of reach. Second, neither of the two above-mentioned models simultaneously captures events at the microscopic (individual spikes) and macroscopic (collective variables) scales. Third, it is not clear how to rigorously connect these models to data, beyond relying on qualitative correspondences.
Here we propose a minimal, microscopic and analytically tractable model class that can capture a wide spectrum of emergent phenomena in brain dynamics, including neural oscillations, extreme event statistics and scale-free neuronal avalanches^{8}. Inspired by recent theoretical results on the emergence of self-oscillations in systems with distinct coexisting phases^{21}, these models are non-equilibrium extensions of the Ising model of statistical physics with an extra feedback loop that enables self-adaptation. As a consequence of feedback, neuronal dynamics is driven by the ongoing network activity, generating a rich repertoire of dynamical behaviors. The structure of the simplest model from this class permits microscopic network dynamics investigations as well as an analytical mean-field solution in the Landau–Ginzburg spirit and, in particular, allows us to construct the model's phase diagram.
The tractability of our model enables us to make direct contact with MEG data on the resting-state activity of the human brain. With its two free parameters inferred from data, the model closely captures brain dynamics across scales, from single-sensor MEG signals to collective behavior of extreme events and neuronal avalanches. Remarkably, the inferred parameters indicate that scale-specific (neural oscillations) and scale-free (neuronal avalanches) dynamics in brain activity coexist close to a non-equilibrium critical point that we proceed to characterize in detail.
Results
Adaptive Ising model
We consider a population of interacting neurons whose dynamics is self-regulated by a time-varying field that depends on the ongoing population activity level (Fig. 1a). The N spins s_{i} = ±1 (i = 1, 2, ... , N; N = 10^{4} in our simulations unless specified otherwise) represent excitatory neurons that are active when s_{i} = +1 or inactive when s_{i} = −1. In the simplest, fully homogeneous scenario described here, neurons interact with each other through synapses of equal strength J_{ij} = J = 1 (Methods). The ongoing network activity is defined as \(m(t)=\frac{1}{N}\mathop{\sum }\nolimits_{i = 1}^{N}{s}_{i}(t)\) (that is, as the magnetization of the Ising model) and each neuron experiences a uniform negative feedback h that depends on the network activity as \(\dot{h}=-cm\), with c determining the strength of the feedback. Neurons s_{i} are stochastically activated according to Glauber dynamics, where the new state of neuron s_{i} is drawn from the marginal Boltzmann–Gibbs distribution \(P({s}_{i})\propto \exp (\beta {\tilde{h}}_{i}{s}_{i})\), with \({\tilde{h}}_{i}={\sum }_{j\ne i}{J}_{ij}{s}_{j}+h\), where β is reminiscent of the inverse temperature of an Ising model (see Methods).
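For concreteness, the microscopic dynamics described above can be sketched in a few lines of Python. The 1/N normalization of the coupling (so that the local field reduces to m + h in the mean-field limit), the sequential single-spin updates and all names are illustrative choices rather than the authors' reference implementation:

```python
import numpy as np

def simulate_adaptive_ising(N=10_000, beta=0.99, c=0.01, steps=20_000, seed=0):
    """Sketch of the fully connected adaptive Ising model.

    One Glauber update per step; the feedback field integrates
    dh/dt = -c * m with dt = 1/N per single-spin update, so one time
    unit corresponds to N updates (an illustrative convention).
    """
    rng = np.random.default_rng(seed)
    dt = 1.0 / N
    s = rng.choice([-1, 1], size=N)
    h = 0.0
    m = s.mean()
    m_trace = np.empty(steps)
    for t in range(steps):
        i = rng.integers(N)
        # local field on spin i: 1/N-scaled coupling to all other spins plus feedback
        local = m - s[i] / N + h
        # Glauber: P(s_i = +1) = 1 / (1 + exp(-2*beta*local))
        new = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * local)) else -1
        m += (new - s[i]) / N        # incremental update of the magnetization
        s[i] = new
        h -= c * m * dt              # negative feedback dh/dt = -c m
        m_trace[t] = m
    return m_trace
```

Tracking m incrementally keeps each update O(1), so the sketch remains usable even for N = 10^4.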
Multiple interpretations of this model are possible. On the one hand, negative feedback can be identified with a mean-field approximation to the inhibitory neuron population that uniformly affects all excitatory neurons with a delay given by the characteristic time c^{−1} (Supplementary Section 1.4). On the other hand, feedback could be seen as intrinsic to excitatory neurons, mimicking, for example, spike-threshold adaptation^{22}. Exploration-worthy (and possibly more realistic) extensions within the same model class are accessible by considering two ways in which geometry and neural biology can enter the model. First, as in the standard Ising magnet, the interaction matrix J_{ij} can be used to model cell types (for example, inhibitory versus excitatory types; Supplementary Section 1.4), the spatial structure of the cortex, or empirically established topological features of real neural networks (Supplementary Section 1.5). Second, feedback h_{i} to neuron i could be derived from a local magnetization in a neighborhood around neuron i instead of the global magnetization; in the interesting limiting case in which \(\dot{h}_{i}=-c{s}_{i}\), each neuron would feed back only on its own past spiking history and the model would reduce to a set of coupled binary oscillators (see Supplementary Section 1.2 and Supplementary Fig. 2 for a discussion of this limiting case). Irrespective of the exact setting, the model's mathematical attractiveness stems from its tractable interpolation between stochastic (spiking of excitatory units) and deterministic (feedback) elements.
Here we consider the fully connected continuous-time limit of the model (Fig. 1 and Methods). Network behavior is determined by c and β. For c = 0, h = 0, the model reduces to the standard infinite-dimensional (mean-field) Ising model with a second-order phase transition at β = β_{c} = 1. At nonzero feedback, c > 0, the model is driven out of equilibrium and its critical point at β_{c} coincides with an Andronov–Hopf bifurcation^{21}. For β < β_{c} and c below the threshold value c* = (β − 1)^{2}/(4β), m(t) is well described by an Ornstein–Uhlenbeck process. More generally, for β < β_{c} the system is stable and shows a crossover from a stable node with exponential relaxation (two negative real eigenvalues) to a stable focus with oscillation-modulated exponential relaxation (two complex eigenvalues; the resonant regime) as c increases beyond c* (Supplementary Fig. 7). In the resonant regime, c > c*, oscillations become more prominent as β_{c} = 1 is approached, finally transitioning into self-sustained oscillations for β > β_{c} (Supplementary Fig. 8). At β = β_{c}, we have a line of Andronov–Hopf bifurcations where a focus loses stability and a limit cycle emerges. We find that this bifurcation is supercritical, with a negative first Lyapunov coefficient (λ_{1} = −(1 + c)β/8 < 0).
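The regime boundaries above can be checked numerically. The sketch below classifies a point (β, c) from the eigenvalues of the Jacobian of the linearized mean-field dynamics, written here as ṁ = (β − 1)m + βh, ḣ = −cm, which reproduces the threshold c* = (β − 1)²/(4β) quoted above; the labels and return format are our own:

```python
import numpy as np

def classify_regime(beta, c):
    """Classify (beta, c) via the Jacobian of the linearized dynamics
    dm/dt = (beta-1) m + beta h,  dh/dt = -c m  (illustrative sketch)."""
    J = np.array([[beta - 1.0, beta],
                  [-c, 0.0]])
    lam = np.linalg.eigvals(J)
    c_star = (beta - 1.0) ** 2 / (4.0 * beta)   # node/focus crossover
    if beta < 1.0:
        label = "stable focus (resonant)" if c > c_star else "stable node"
    elif beta == 1.0:
        label = "Andronov-Hopf bifurcation line"
    else:
        label = "limit cycle (self-sustained oscillations)"
    return label, lam
```

For β < 1, the discriminant (β − 1)² − 4βc changes sign exactly at c = c*, turning two negative real eigenvalues into a complex-conjugate pair with negative real part.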
We focus on the resonant regime below and at the critical point, and study the reversal times and zero-crossing areas of the total network activity m(t) (Fig. 1c). The reversal time, t, is defined as the time interval between two consecutive points in time at which a given signal crosses zero. Correspondingly, the zero-crossing area (a_{0}) is the area under the signal curve between two zero-crossing points. The distribution P(a_{0}) of the zero-crossing area follows a power-law behavior with an exponent τ = 1.227 ± 0.004 in the vicinity of the critical point. As β decreases, the scaling regime shrinks until it eventually vanishes for small enough β. Similar behavior is observed for the distribution P(t) of reversal times. This distribution also follows a power law with an exponent α_{t} = 1.378 ± 0.004 near the critical point (Fig. 1d). Both distributions have an exponential cutoff related to the characteristic time of the network activity oscillations, 1/c; this cutoff transforms into a hump as β → 1 and c ≫ c*(β), that is, as oscillations in m(t) become increasingly prominent (Supplementary Fig. 9). Importantly, for the non-interacting (J = 0) model, the distributions P(a_{0}) and P(t) follow a purely exponential behavior (Fig. 1d, inset), indicating that the coexistence of oscillatory bursts and power-law distributions for the network activity requires neuron interactions as well as the adaptive feedback (Supplementary Fig. 10).
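The two observables just defined can be extracted from a simulated or recorded m(t) as follows; this is a sketch in which zero crossings are located at sample resolution (no interpolation) and areas are computed with a simple Riemann sum:

```python
import numpy as np

def zero_crossing_stats(m, dt=1.0):
    """Reversal times t and zero-crossing areas a0 of a signal m(t):
    intervals between consecutive zero crossings, and the absolute area
    under the curve between them (sample-resolution sketch)."""
    m = np.asarray(m, dtype=float)
    sign = np.sign(m)
    sign[sign == 0] = 1                      # treat exact zeros as positive
    crossings = np.where(np.diff(sign) != 0)[0] + 1
    reversal_times = np.diff(crossings) * dt
    areas = np.array([np.abs(np.sum(m[a:b])) * dt
                      for a, b in zip(crossings[:-1], crossings[1:])])
    return reversal_times, areas
```

Applied to a pure sinusoid, every reversal time equals half a period and every area equals the half-wave area, which makes a convenient sanity check before turning to the stochastic m(t).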
Model inference from local resting-state brain dynamics
In the resonant regime below the critical point (c > c*, β < β_{c}), it is possible to analytically compute the autocorrelation function, C(τ), of m(t) in the linear approximation^{23} (Methods); C(τ) can be used to infer model parameters β and c from empirical data by moment matching (see Supplementary Section 1.6 for details on parameter inference), thereby locating the observed system in the phase diagram (Fig. 1b).
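As an illustration of such moment matching, the sketch below fits a generic damped-cosine ansatz to an empirical autocorrelation and inverts the linearized eigenvalues λ± = (β − 1)/2 ± (i/2)√(4βc − (β − 1)²), giving β = 1 − 2γ and c = (γ² + ω²)/β, where γ and ω are the fitted decay rate and frequency. This mapping follows from the linear approximation; the authors' exact fitting procedure (Supplementary Section 1.6) may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(tau, gamma, omega, A, phi):
    """Generic linear-regime autocorrelation ansatz (illustrative form)."""
    return A * np.exp(-gamma * tau) * (np.cos(omega * tau)
                                       + phi * np.sin(omega * tau))

def infer_beta_c(tau, C_emp, p0=(0.01, 0.08, 1.0, 0.1)):
    """Fit the damped cosine to an empirical autocorrelation and invert
    the linearized eigenvalues: beta = 1 - 2*gamma, c = (gamma^2 + omega^2)/beta.
    A sketch of moment matching, not the authors' exact procedure."""
    (gamma, omega, _, _), _ = curve_fit(damped_cosine, tau, C_emp,
                                        p0=p0, maxfev=20000)
    beta = 1.0 - 2.0 * gamma
    c = (gamma ** 2 + omega ** 2) / beta
    return beta, c
```

On a noiseless synthetic autocorrelation with γ and ω chosen to correspond to β ≈ 0.99 and c ≈ 0.01, the inversion recovers the generating parameters.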
We test the proposed approach on MEG recordings of the awake resting state of the human brain (Methods). We first analyze brain activity on individual MEG sensors. To this end, we compare the magnetic field recorded on individual MEG sensors with the magnetization m of the model (Fig. 1). This analogy relies on the nature of the brain magnetic fields captured by the MEG, which are generated by synchronous postsynaptic currents in cortical neurons, and on their relationship with collective neural fluctuations mimicked by m (ref. ^{24}).
During resting wakefulness the brain activity is largely dominated by oscillations in the alpha band (8−13 Hz; Fig. 2a), which have been the starting point of many investigations^{4,25,26} including ours reported below; similar results are also obtained for the broadband activity (Supplementary Fig. 11). After isolating the alpha band, we estimate β and c by fitting the empirical C(τ) to the analytical form of the autocorrelation (Methods). Figure 2b illustrates the typical quality of the fit and the qualitative resemblance between the model and MEG sensor signal dynamics.
As our model is fit to reproduce the second-order statistical structure in the signal, we next turn our attention to signal excursions above a threshold, a higher-order statistical feature routinely used to characterize bursting brain dynamics^{10,27,28,29}. To that end, we construct the distribution of (log) areas under the signal above a threshold ±e (Fig. 2c)^{26}; P(log a_{e}) is bell-shaped, featuring strongly asymmetric tails for MEG sensors as well as the model (Fig. 2c). Variability across subjects is mostly related to signal amplitude modulation, resulting in small horizontal shifts in P(log a_{e}) but no variability in the distribution shape. Importantly, the rescaled distribution is independent of the threshold e over a robust range of values, and is well described by a Weibull form, \({P}_{\mathrm{W}}(x;\lambda ,k)=\frac{k}{\lambda }{\left(\frac{x}{\lambda }\right)}^{k-1}{\mathrm{e}}^{-{(x/\lambda )}^{k}}\) (Fig. 2c, bottom panel inset; Supplementary Fig. 12). Taken together, these observations indicate that our model has the ability to capture nontrivial aspects of amplitude statistics in MEG signals, within and across different subjects (Supplementary Fig. 13).
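A minimal construction of the suprathreshold areas a_e, taking excursions of the absolute signal beyond the threshold (the exact thresholding conventions behind Fig. 2c may differ; this is an illustrative reading of the definition):

```python
import numpy as np

def suprathreshold_areas(x, e):
    """Areas a_e of contiguous excursions of |x| beyond the threshold e,
    one area per excursion (sample-resolution sketch)."""
    x = np.asarray(x, dtype=float)
    above = np.abs(x) > e
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:                       # excursion already open at the start
        starts = np.r_[0, starts]
    if above[-1]:                      # excursion still open at the end
        ends = np.r_[ends, len(x)]
    return np.array([np.abs(x[a:b]).sum() for a, b in zip(starts, ends)])
```

The distribution of log areas, P(log a_e), can then be histogrammed directly from the returned array.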
Parameters inferred across all sensors and subjects suggest baseline values of β = 0.99 and c = 0.01 that are well matched with the data, which we use for all subsequent analyses (unless stated otherwise). Specifically, we find that the best-fit β values strongly concentrate in a narrow range around β ≈ 0.99 (β = 0.986 ± 0.006; c = 0.012 ± 0.001), which is very close to the critical point (Fig. 2d and Supplementary Fig. 14). Although all analyzed signals are bandpass-limited to a central frequency of around 10 Hz by filtering, closeness to criticality seems to strongly correlate with the fraction of the total power in the raw signal in the alpha band (Fig. 2d; R^{2} = 0.21; P < 10^{−5}). This suggests that alpha oscillations may be closely related to critical brain tuning during the resting state^{11,25,30}.
A classic fingerprint of tuning to criticality is the emergence of long-range temporal correlations (LRTCs), which have been documented empirically^{25,29,30}. Long-range temporal correlations in the alpha band have been investigated primarily by applying detrended fluctuation analysis (DFA) to the amplitude envelope of MEG or EEG signals in the alpha band (Methods)^{15,25}. In brief, DFA estimates the scaling exponent α of the root-mean-square fluctuation function F in non-stationary signals with polynomial trends^{31}: the integrated signal is divided into windows of equal length, n, and the local trend is subtracted in each window. For signals exhibiting positive (or negative) LRTC, F scales as F ∝ n^{α} with 0.5 < α < 1 (or 0 < α < 0.5); α = 0.5 indicates the absence of long-range correlations. Notably, α also approaches unity for a number of known model systems as they are tuned to criticality^{32}.
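The DFA procedure summarized above can be sketched as follows, using order-1 detrending and non-overlapping windows (a common but not the only convention):

```python
import numpy as np

def dfa_exponent(x, windows):
    """Order-1 detrended fluctuation analysis: integrate the signal,
    split it into non-overlapping windows of length n, subtract the
    local linear trend, and fit F(n) ~ n^alpha (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in windows:
        k = len(y) // n
        segs = y[:k * n].reshape(k, n)
        t = np.arange(n)
        coef = np.polyfit(t, segs.T, 1)      # linear fit per window (vectorized)
        trend = coef[0][:, None] * t + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
    return alpha
```

As a sanity check, white noise yields α ≈ 0.5 (no long-range correlations), while its running integral (Brownian motion) yields α ≈ 1.5.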
To test for the presence of LRTC using DFA, we analyzed the scaling behavior of fluctuations and extracted their scaling exponent α. To avoid spurious correlations introduced by signal filtering, α was estimated over the range 2 s < n < 60 s (Fig. 2e)^{25}. We find that α is consistently between 0.5 and 1 for all MEG sensors and subjects, in agreement with previous analyses^{25}. Importantly, model-free α values measured across MEG sensors positively correlate with the inferred β values from the model (Fig. 2f), indicating that higher β values are diagnostic of the presence of long-range temporal correlations in the amplitude envelope. Furthermore, we find that inferred β values correlate with the fraction of total signal power in the alpha band (Fig. 2d), which in turn correlates with the inferred entropy production in brain signals (Supplementary Section 1.1)^{33}.
Taken together, our analyses so far show that the adaptive Ising model recapitulates single-MEG-sensor dynamics by matching their autocorrelation function and the distribution of amplitude fluctuations, and further suggest that the true MEG signals are best reproduced when the adaptive Ising model is tuned close to, but slightly below, its critical point (β ≲ 1).
Scale-invariant collective dynamics of extreme events
We now turn our attention to phenomena that are intrinsically collective: (1) coordinated suprathreshold bursts of activity, which emerge jointly with LRTC in alpha oscillations^{15}; and (2) neuronal avalanches, that is, spatiotemporal cascades of threshold-crossing sensor activity, which have been identified in the MEG of the resting state of the human brain^{11,30}. Both of these phenomena are generally seen as chains of extreme events that are diagnostic of the underlying brain dynamics^{10,34}.
We start by defining the instantaneous network excitation A_{ϵ}(t) as the number of extreme events co-occurring within time bins of size ϵ across the entire MEG sensor array (Methods). For each sensor, extreme events are the extreme points in that sensor's signal that exceed a set threshold e = ± n s.d. (Fig. 3a). For a given threshold, A_{ϵ} depends on the size of the time bin ϵ that we use to analyze the data (Fig. 3b). To make contact with the model, we parcel our simulated network into K equally sized disjoint subsystems of n_{sub} = N/K neurons each, and consider each subsystem activity m_{μ} (μ = 1, … , K) as the equivalent of a single MEG sensor signal (Methods); A_{ϵ} for the model then follows the same definition as for the data, allowing us to perform direct side-by-side comparisons of extreme event statistics.
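One possible reading of these definitions in code, with extreme events taken as local extrema exceeding the threshold and then pooled across sensors into bins of size ϵ (the peak-detection details are illustrative choices):

```python
import numpy as np

def extreme_event_times(signal, n_sd):
    """Indices of local extrema of a sensor signal whose magnitude
    exceeds n_sd standard deviations (illustrative sketch)."""
    signal = np.asarray(signal, dtype=float)
    d = np.diff(signal)
    extrema = np.where(d[:-1] * d[1:] < 0)[0] + 1   # slope sign change
    thr = n_sd * np.std(signal)
    return extrema[np.abs(signal[extrema]) > thr]

def network_excitation(all_event_idx, eps, T):
    """A_eps: number of extreme events, pooled over all sensors,
    falling into each time bin of size eps on [0, T]."""
    A, _ = np.histogram(all_event_idx, bins=np.arange(0, T + eps, eps))
    return A
```

In practice, `extreme_event_times` is applied per sensor and the resulting event times are concatenated before binning.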
We first study the distribution of the network excitation, P(A_{ϵ}). We use the same threshold value e = 2.9 s.d. for both the data and model analyses (see Methods). Extensive robustness analyses confirm that our key results are stable in the 2.7 s.d. < e < 3.1 s.d. range (Supplementary Figs. 19 and 20), which we detail result by result below.
Although P(A_{ϵ}) generally depends on ϵ, the distributions corresponding to different ϵ collapse onto a single, non-exponential master curve when A_{ϵ} is rescaled by the average instantaneous network excitation 〈A_{ϵ}〉 (Fig. 3c). The excitation distribution is thus invariant under temporal coarse-graining and the number of extreme events scales nontrivially with ϵ, in contrast to phase-shuffled surrogate data (Methods and Fig. 3c). Model simulations fully recapitulate this data collapse as well as the non-exponential extreme event statistics. Moreover, we show that model simulations reproduce P(A_{ϵ}) to within the variability observed among subjects (Fig. 3c, inset) for given values of ϵ. An analysis of the Kullback–Leibler divergence (Supplementary Section 2) shows that the model quantitatively reproduces the measured distributions to an expected degree given the natural variability in the data (Supplementary Table 1).
Periods of excitation (A_{ϵ} ≠ 0) are separated by periods of quiescence (A_{ϵ} = 0) of duration I_{ϵ} = nϵ, where n is the number of consecutive time bins with A_{ϵ} = 0. The distribution of quiescence durations, P(I_{ϵ}), is invariant under temporal coarse-graining when rescaled by the average quiescence duration, 〈I_{ϵ}〉, collapsing onto a single, non-exponential master curve (Fig. 3d). As was the case with the distribution of network excitation, the model-predicted distribution of quiescence durations also diverges from the data average distribution by an amount that is within the range of variability among subjects (Fig. 3d, upper inset and Supplementary Table 1).
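The quiescence durations I_ϵ follow directly from the binned excitation; a run-length sketch consistent with the definition above:

```python
import numpy as np

def quiescence_durations(A, eps):
    """Durations I_eps = n * eps of maximal runs of empty bins (A_eps = 0)."""
    quiet = (np.asarray(A) == 0).astype(int)
    edges = np.diff(np.r_[0, quiet, 0])   # pad so boundary runs are closed
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return (ends - starts) * eps
```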
We also show that the overall probability P_{0}(ϵ) of finding a quiescent time bin follows a non-exponential (stretched-exponential) relation \({P}_{0}(\epsilon )=\exp \left(-{r}_{0}{\epsilon }^{{\beta }_{I}}\right)\), with β_{I} ≃ 0.6 (Fig. 3d, lower inset), indicating that extreme events grouped into bins of increasing size are not independent^{35}. These results are robust to changes in N, so long as n_{sub} or the number of subsystems K is fixed or does not change considerably (Supplementary Figs. 15 and 16); otherwise, the value of e that defines an extreme event should be adjusted accordingly, in particular to closely reproduce the distribution P(I_{ϵ}) (Supplementary Fig. 17). Finally, we note that the quantities 〈A_{ϵ}〉 and 〈I_{ϵ}〉 scale as a power of the bin size ϵ (Supplementary Fig. 21), and are connected to each other by a relationship of the form \(\langle {A}_{\epsilon }\rangle \sim {\langle {I}_{\epsilon }\rangle }^{{b}_{AI}}\) (Supplementary Fig. 21). This implies that for a fixed value of e, both distributions P(A_{ϵ}) and P(I_{ϵ}) are controlled by a single quantity, for example, 〈A_{ϵ}〉.
We performed the data and model analyses using the same threshold value e = 2.9 s.d., which was fixed by comparing the amplitude distribution of MEG sensor signals and model subsystem signals m_{μ}. The distributions P(A_{ϵ}) and P(I_{ϵ}) follow a similar functional behavior in both the data and model for different values of e. The influence of thresholding on the analysis of continuous signals has been previously investigated^{36}. Here, for increasing values of e, we find that: (1) the probability of large (small) A_{ϵ} tends to decrease (increase); (2) the probability of large (small) I_{ϵ} tends to increase (decrease) (Supplementary Fig. 18). These effects are more pronounced for the distribution P(I_{ϵ}), particularly in its tail. Importantly, P(A_{ϵ}) and P(I_{ϵ}), as well as the exponent β_{I}, show a similar dependence on e in both MEG data and model simulations and, as a consequence, the agreement between the data and model is robust to changes in e (Supplementary Figs. 19 and 20).
In summary, our simple model at baseline parameters provides a robust account of the collective statistics of extreme events. We emphasize that the excellent match to the observed long-tailed distributions is only observed for the inferred value β ≃ 0.99, which is very close to criticality; for β = 0.98, we already observe considerable deviations from the data (Supplementary Figs. 22 and 23), demonstrating that excitation and quiescence distributions represent a powerful benchmark for collective brain activity.
Concomitant occurrence of scale-free neuronal avalanches and scale-specific oscillations
A neuronal avalanche is a maximal contiguous sequence of time bins populated with at least one extreme event per bin (Fig. 4a)^{8,11}; every avalanche thus starts after—and ends with—a quiescent time bin (A_{ϵ} = 0) (see Methods for details). Neuronal avalanches are typically characterized by their size s, defined as the total number of extreme events within the avalanche. Avalanche sizes have been reported to have a scale-free power-law distribution^{8,11,14,30}.
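Avalanche sizes and durations can be read off the binned excitation A_ϵ in the same run-length style; a sketch consistent with the definition above:

```python
import numpy as np

def avalanches(A, eps=1.0):
    """Avalanche sizes and durations from the binned excitation A_eps:
    maximal runs of non-empty bins, with size = total number of events
    in the run and duration = run length * eps."""
    A = np.asarray(A)
    active = (A > 0).astype(int)
    edges = np.diff(np.r_[0, active, 0])    # pad so boundary runs are closed
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    sizes = np.array([A[a:b].sum() for a, b in zip(starts, ends)])
    durations = (ends - starts) * eps
    return sizes, durations
```

The distribution P(s) and the scaling relation 〈s〉(d) discussed below can be estimated directly from the two returned arrays.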
We estimate the distribution of avalanche sizes P(s) in the resting-state MEG, and compare it with the distribution obtained from model simulations at the close-to-critical baseline parameter set (Fig. 4b). Both distributions are described by a power law with an exponential cutoff^{11} and show an excellent match across subjects and for individual subjects. Again, the Kullback–Leibler divergence between the mean empirical and model distribution is smaller than the mean Kullback–Leibler divergence estimated among MEG subjects (Supplementary Table 1). Phase-scrambled surrogate data strongly deviate from the power-law observations, as do model predictions when parameter β is moved even marginally below 0.99 (Supplementary Fig. 24). These results are independent of N so long as the subsystem size n_{sub} or the number of subsystems K is fixed or does not change considerably (Supplementary Figs. 15 and 16). Importantly, the model also reproduces the distribution of avalanche durations (Supplementary Fig. 26) and, in particular, the scaling relation 〈s〉(d) ∼ d^{ζ} that connects average avalanche sizes s and durations d. Unlike the power-law exponent of the avalanche size distribution, which typically depends on time bin size ϵ (refs. ^{8,30}), the exponent ζ does not depend on ϵ, as shown by the data collapse for both MEG data and model (Fig. 4b, inset). Although the scaling behavior is reproduced qualitatively, the empirical and model-derived values of ζ are not in quantitative agreement, probably due to the overly simplified mean-field connectivity assumed by our model.
As shown for P(A_{ϵ}) and P(I_{ϵ}), the distributions of avalanche sizes also moderately depend on e. This has been previously reported both in the resting human brain and in other systems^{8,30}. We find that simulated avalanche size distributions show a similar dependence to the data, and are thus in agreement with empirical distributions for a range of e values (Supplementary Fig. 25). Importantly, we observe that the relationship between avalanche sizes and durations is robust to changes in e, and the exponent ζ shows no substantial dependence on e (Supplementary Figs. 18 and 25).
Discussion
In this paper we put forward the adaptive Ising class of models for capturing largescale brain dynamics. To our knowledge, this is the simplest model class that reproduces the stylized coexistence of neuronal avalanches and oscillations—the two antithetic features of real brain dynamics. In this formulation, individual units are neither intrinsic oscillators themselves^{20,37}, nor are they mesoscopic units operating close to a Hopf bifurcation^{38}; the collective dynamics is therefore not a result of oscillator synchronization (even though this regime could also be captured by a different realization of an adaptive Ising model). Our proposal thus provides an analytically tractable alternative to, or perhaps a reformulation of, existing models^{15,17,19,39}, which typically implicate either particular excitation/inhibition or network resource balance, or ad hoc driving mechanisms to open up the regime in which oscillations and avalanches may coexist.
Starting with the seminal work of Hopfield^{40}, the functional aspects of neural networks have traditionally been studied with microscopic spin models or attractor neural networks. The associated inverse (maximum entropy) problem recently attracted great attention in connecting spin models to data^{41,42}, particularly with regards to criticality signatures^{43} and the structure of temporal correlations in the neural activity^{44,45}. However, the dynamical expressive power of maximum-entropy stationary, kinetic or latent-variable models has been limited, and the rhythmic behavior of brain oscillations was beyond the practical scope of these models. The adaptive Ising model class can be seen as a natural yet orthogonal extension to those previous works, as it enables oscillations and furthermore permits us to explore an interesting interplay of mechanisms, for example, by having self-feedback drive Hopfield-like networks (with memories encoded in the coupling matrix J) through sequences of stable states.
In contrast to past works^{15,17}, we do not make contact with existing data by qualitatively matching the phenomenology, but instead by proper parameter inference. The inferred parameters consistently place the model very close to its critical point, supporting the hypothesis that alpha oscillations represent brain tuning to criticality^{25}. Inference of parameters with methods that are not based on autocorrelation matching^{46} has confirmed this result (Supplementary Section 1.7 and Supplementary Figs. 3–6). Other models also predict adaptive parameters that are slightly subcritical^{47}. However, within our framework, the possibility of mapping empirical data to a defined region in the adaptive Ising model phase diagram through parameter inference paves the way for further quantification of the relationship between measures of brain criticality and healthy, developing or pathological brain dynamics along the lines developed recently^{48}.
Our inferred model provides a broad account of brain dynamics across spatial and temporal scales. Despite the successes, we openly acknowledge the quantitative failures of our model: first, at the single-sensor level, small deviations exist in the distributions of log activity (Fig. 2c), probably due to very long timescales or non-stationarities in the MEG signals^{11}; second, the scaling exponent governing the relation between the avalanche size and duration, ζ, is not reproduced quantitatively (Fig. 4b, inset). Despite these valid points of concern, we find it remarkable that such a simple and tractable model can quantitatively account for so much of the observed phenomenology.
Future work should, first, consider connectivity beyond the simple all-to-all mean-field version that we introduced here, probably leading to a better data fit and new types of dynamics, for example, cortical waves (Supplementary Section 1.5). Second, we strongly advocate for rigorous and transparent data analysis and quantitative—not only stylized—comparisons to data. To this end, care must be taken not only when inferring the essential model parameters beyond the linear approximation^{46,49}, but also when treating the hidden degrees of freedom related to the data analysis (specifically, subsampling, temporal discretization, thresholding and so on)^{8,30,36,50}. Third, it is important to confront the model with different types of brain recordings; a real success in this vein would be to account simultaneously for the activity statistics at the microscale (spiking of individual neurons) as well as at the mesoscale (coarse-grained activity probed with MEG, EEG or LFP).
Methods
Data acquisition and preprocessing
Ongoing brain activity was recorded from 14 healthy participants in the MEG core facility at the National Institute of Mental Health for a duration of 4 min (eyes closed). All of the experiments were performed in accordance with the NIH guidelines for human subjects. All participants gave written informed consent. The sampling rate was 600 Hz, and the data were bandpass-filtered between 1 and 150 Hz. Power-line interference was removed using a 60 Hz notch filter designed in Matlab (Mathworks). The sensor array consisted of 275 axial first-order gradiometers. Two dysfunctional sensors were removed, leaving 273 sensors in the analysis. The analysis was performed directly on the axial gradiometer waveforms. The data analyzed here were selected from a set of MEG recordings for a previously published study^{11}, in which further details can be found. For our analyses, we used the subjects showing the highest percentage of spectral power in the alpha band (8–13 Hz). Similar results were obtained for randomly selected subjects.
The adaptive Ising model
The model comprises a collection of N spins s_{i} = ±1 (i = 1, 2, ... , N) that interact with each other with a coupling strength J_{ij}. In our analysis, the N spins represent excitatory neurons that are active when s_{i} = +1 or inactive when s_{i} = −1, with J_{ij} > 0. Furthermore, we consider the fully homogeneous scenario in which neurons interact with each other through synapses of equal strength J_{ij} = J = 1. However, interesting generalizations with nonhomogeneous, negative, nonsymmetric J_{ij} are possible, which make it possible to include in the model, for example, the effect of an inhibitory neuronal population as well as structural and functional heterogeneity. The s_{i} are stochastically activated according to Glauber dynamics, where the state of a neuron is drawn from the marginal Boltzmann–Gibbs distribution
The spins experience an external field h, a negative feedback that depends on network activity according to the following equation,
where c is a constant that controls the feedback strength, and the sum runs over a neighborhood of neuron i specified by \({{{{\mathcal{N}}}}}_{i}\); the index j enumerates the elements of this neighborhood. Depending on the choice of \({{{{\mathcal{N}}}}}_{i}\), the feedback may depend on the activity of neuron i itself (self-feedback), its nearest neighbors, or the entire network—the case we considered in the main paper. In a more realistic setting including both excitatory (J_{ij} > 0) and inhibitory (J_{ij} < 0) neurons, one could then take into account the different structural and functional properties of excitatory and inhibitory neurons by considering different interaction and feedback properties.
In the fully connected continuous time limit, the model can be described with the following Langevin equations:
where ξ is unit-variance, uncorrelated Gaussian noise; the stochastic term thus has amplitude \(b=\sqrt{2/(\beta N)}\). This framework allows for a reparametrization of the spin variables s_{i} from {−1, 1} to {0, 1} by introducing a constant term, −cm_{0}, in the feedback equation (Supplementary Section 1.3). Equation (3) can be linearized around the stationary point (m* = 0, h* = 0) to calculate dynamical eigenvalues and construct a phase diagram (Fig. 1b, main text):
In the resonant regime below the critical point (c > c*, β < β_{c}), it is possible to analytically compute the autocorrelation function, C(τ), of the ongoing network activity m(t) in the linear approximation^{23}:
where γ = (1 − β)/2 is the relaxation rate of the system, and \(\omega =\sqrt{\beta c-{(1-\beta )}^{2}/4}\) is the characteristic angular frequency of the model.
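The boundary between the resonant (underdamped) and overdamped regimes follows directly from these expressions; a minimal sketch in Python (the function names are ours, not from the authors' code):

```python
import numpy as np

def linear_eigenvalues(beta, c):
    """Eigenvalues of the dynamics linearized around (m*, h*) = (0, 0):
    lambda = -gamma +/- i*omega, with gamma = (1 - beta)/2 and
    omega = sqrt(beta*c - (1 - beta)**2 / 4) real in the resonant regime."""
    gamma = (1.0 - beta) / 2.0
    disc = beta * c - (1.0 - beta) ** 2 / 4.0
    if disc > 0:
        # resonant regime: damped oscillations at angular frequency omega
        omega = np.sqrt(disc)
        return complex(-gamma, omega), complex(-gamma, -omega)
    # overdamped regime: two purely real relaxation rates
    r = np.sqrt(-disc)
    return complex(-gamma + r), complex(-gamma - r)

def is_resonant(beta, c):
    """True when the linearized dynamics has complex eigenvalues."""
    return beta * c > (1.0 - beta) ** 2 / 4.0
```

For example, β = 0.9, c = 1 lies in the resonant regime (βc = 0.9 exceeds (1 − β)²/4 = 0.0025), with decay rate γ = 0.05.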
In our simulations, one time step corresponds to one system sweep—that is, N spin flips—of Monte Carlo updates, and equation (2) is integrated using Δt = 1/N. Note that this choice of timescales for the deterministic versus stochastic dynamics is important, as it interpolates between the quasi-equilibrium regime, where spins fully equilibrate with respect to the field h, and the regime where the field is updated by feedback after each spin flip so that spins can remain constantly out of equilibrium; Δt is generally much smaller than the characteristic time of the adaptive feedback, which is controlled by the parameter c.
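This update scheme can be sketched as follows. The Glauber flip probability and the mean-field feedback form (field decremented in proportion to the network activity m) are our reading of the model description, not a verbatim transcription of equations (1)–(2); the published code^{51} is the authoritative reference:

```python
import numpy as np

def simulate(N=1000, beta=0.99, c=1.0, sweeps=5000, seed=0):
    """Monte Carlo sketch of the fully connected adaptive Ising model.

    One time step = one sweep of N Glauber spin updates; the feedback
    field h is integrated with dt = 1/N after every spin flip, so the
    spins need not fully equilibrate with respect to h (see text).
    Returns the network activity m(t) sampled once per sweep.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=N)
    h, dt = 0.0, 1.0 / N
    m_t = np.empty(sweeps)
    for t in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            m = s.mean()
            # Glauber rule: sample s_i from the Boltzmann-Gibbs marginal
            # in the local field m + h (all-to-all coupling, J = 1)
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * (m + h)))
            s[i] = 1 if rng.random() < p_up else -1
            h -= c * s.mean() * dt  # negative feedback on network activity
        m_t[t] = s.mean()
    return m_t
```

In the resonant regime (for example, β close to 1 with c = 1), the resulting m(t) shows noisy damped oscillations around zero.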
Detrended fluctuation analysis of the alpha-band amplitude envelope
The DFA^{31} consists of the following steps: (1) given a time series x_{i} (i = 1, ... , N), calculate the integrated signal \(I(k)=\mathop{\sum }\nolimits_{i = 1}^{k}(x(i)-\langle x\rangle )\), where 〈x〉 is the mean of x_{i}; (2) divide the integrated signal I(k) into boxes of equal length n and, in each box, fit I(k) with a first-order polynomial I_{n}(k), which represents the trend in that box; (3) for each n, detrend I(k) by subtracting the local trend, I_{n}(k), in each box and calculate the root-mean-square (r.m.s.) fluctuation \(F(n)=\sqrt{\mathop{\sum }\nolimits_{k = 1}^{N}{[I(k)-{I}_{n}(k)]}^{2}/N}\); (4) repeat this calculation over a range of box lengths n and obtain a functional relation between F(n) and n. For a power-law correlated time series, the average r.m.s. fluctuation function F(n) and the box size n are connected by a power-law relation F(n) ≈ n^{α}. The exponent α quantifies the long-range correlation properties of the signal. Values of α < 0.5 indicate the presence of anticorrelations in the time series x_{i}, α = 0.5 indicates the absence of correlations (white noise), and values of α > 0.5 indicate the presence of positive correlations in x_{i}. The DFA was applied to the alpha-band (8–13 Hz) amplitude envelope. Data were band filtered in the 8–13 Hz range using a finite impulse response (FIR) filter (second order) designed in Matlab. The scaling exponent α was estimated in the n range corresponding to 2–60 s to avoid spurious correlations induced by the signal filtering^{25}.
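The four steps above translate directly into code; a minimal sketch in Python (not the authors' implementation):

```python
import numpy as np

def dfa(x, box_sizes):
    """Detrended fluctuation analysis: return F(n) for each box size n."""
    x = np.asarray(x, dtype=float)
    I = np.cumsum(x - x.mean())            # step 1: integrated signal
    F = []
    for n in box_sizes:
        n_boxes = len(I) // n
        ss = 0.0
        for b in range(n_boxes):           # steps 2-3: local linear detrend
            seg = I[b * n:(b + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)
            ss += np.sum((seg - np.polyval(coeffs, t)) ** 2)
        F.append(np.sqrt(ss / (n_boxes * n)))
    return np.array(F)

def dfa_exponent(x, box_sizes):
    """Step 4: scaling exponent alpha from the log-log fit of F(n) vs n."""
    F = dfa(x, box_sizes)
    alpha, _ = np.polyfit(np.log(box_sizes), np.log(F), 1)
    return alpha
```

As a sanity check, white noise should yield α ≈ 0.5, matching the uncorrelated case described above.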
Extreme events, instantaneous network excitation and neuronal avalanches
Data
For each sensor, positive and negative excursions beyond a threshold e were identified. In each excursion beyond the threshold, a single event was identified at the most extreme value (the maximum for positive excursions and the minimum for negative excursions). Comparison of the signal distribution with the best-fit Gaussian indicates that the two distributions start to deviate from one another at around ±2.7 s.d. (ref. ^{11}). A Gaussian distribution of amplitudes is expected to be produced by a superposition of uncorrelated sources, and is not indicative of individual extreme events. For this reason, one needs to choose e ≥ 2.7 s.d. for the threshold. Higher values will reduce the number of false positives but increase the number of false negatives. In this study we set e to ±2.9 s.d. We performed an extensive robustness analysis to confirm that our key results are stable across a range of e values (Supplementary Figs. 19, 20 and 25).
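The event-detection step can be sketched as follows (our implementation of the described procedure, assuming the signal is z-scored per sensor):

```python
import numpy as np

def extreme_events(x, e=2.9):
    """Return sample indices of extreme events in a single-sensor signal x.

    An excursion is a maximal run of samples beyond +/- e standard
    deviations; each excursion contributes one event at its most extreme
    value (maximum for positive, minimum for negative excursions).
    """
    z = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    events = []
    for sign in (+1, -1):
        above = sign * z > e                          # beyond the threshold
        pad = np.r_[False, above, False].astype(int)  # pad to catch edge runs
        edges = np.flatnonzero(np.diff(pad))          # alternating start/stop
        for start, stop in zip(edges[::2], edges[1::2]):
            k = start + np.argmax(sign * z[start:stop])  # peak of excursion
            events.append(k)
    return np.array(sorted(events))
```

For a signal with one three-sample positive excursion and one single-sample negative spike, the function returns exactly the two peak indices.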
The raster of identified events was binned at a number of temporal resolutions ϵ, each a multiple of the sampling time T = 1.67 ms. The network excitation A_{ϵ} at a given temporal resolution ϵ is defined as the number of events occurring across all sensors in a time bin. An avalanche is defined as a continuous sequence of time bins in which there is at least one event on any sensor, ending with at least one time bin with no events (Fig. 4a). The size of an avalanche, s, is defined as the number of events in the avalanche. See refs. ^{11,30} for more details.
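Given the pooled event times, the binning and avalanche extraction can be sketched as:

```python
import numpy as np

def avalanche_sizes(event_times, eps, t_max):
    """Bin events (pooled over all sensors) at resolution eps and return
    avalanche sizes: total event counts of maximal runs of consecutive
    non-empty bins, each run terminated by at least one empty bin."""
    n_bins = int(np.ceil(t_max / eps))
    counts, _ = np.histogram(event_times, bins=n_bins,
                             range=(0.0, n_bins * eps))
    sizes, current = [], 0
    for a in counts:              # a = network excitation A_eps in this bin
        if a > 0:
            current += a
        elif current > 0:
            sizes.append(current)  # avalanche ends at the first empty bin
            current = 0
    if current > 0:
        sizes.append(current)      # avalanche still open at the end
    return sizes
```

For example, events at times 0.1, 0.2, 1.1 and 3.5 with ϵ = 1 give bin counts (2, 1, 0, 1, 0) and hence two avalanches of sizes 3 and 1.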
Model
The simulated network is parceled into K equally sized disjoint subsystems of n_{sub} = N/K neurons each, and each subsystem activity m_{μ} (μ = 1, …, K) is considered as the equivalent of a single MEG sensor signal. The number of neurons n_{sub} in each subsystem is fixed by matching the amplitude distribution of m_{μ} to the estimated MEG sensor amplitude distribution between ±2.7 s.d., which is the range over which amplitude distributions follow a Gaussian behavior^{11}. This procedure gives the number of neurons whose collective activity is sufficient to account for the Gaussian core of the empirical signal amplitude distribution, thus providing a common reference to consistently define extreme events in empirical data and model simulations. Extreme events, network excitation and neuronal avalanches for the model follow the same definitions as for the data.
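The parceling step amounts to averaging disjoint blocks of spins, one block per simulated "sensor"; a minimal sketch (the function name is ours):

```python
import numpy as np

def subsystem_activities(s, K):
    """Split N spins into K equal disjoint subsystems and return the
    K subsystem activities m_mu (block means), one 'sensor' per block."""
    s = np.asarray(s, dtype=float)
    assert len(s) % K == 0, "N must be divisible by K"
    return s.reshape(K, len(s) // K).mean(axis=1)
```

Each m_{μ} time series is then thresholded exactly like an MEG sensor signal to extract extreme events.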
Datamodel comparison
Beyond the two key model parameters (β, c) that are directly inferred from individual sensors, quantitative data analysis of extreme events requires additional parametric choices (time bin ϵ, threshold e, system size N and subsystem size n_{sub}), for both empirical data and model simulations. We demonstrate the scaling invariance of the relevant distributions with respect to ϵ, and the robustness of our results over a range of e values (Supplementary Figs. 19, 20 and 25). Moreover, we demonstrate robustness with respect to n_{sub} at fixed K = N/n_{sub}, and to K at fixed n_{sub}. However, if K (or n_{sub}) changes considerably, a close match to the data (in particular, P(I_{ϵ})) still requires adjusting one extra parameter (for example, the threshold e; Supplementary Fig. 17).
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The data analyzed in this study were collected at the MEG facility of the NIH for a previously published study^{11}. The data belong to NIH and are available from O.S. (shrikio@bgu.ac.il) on reasonable request. Source data are provided with this paper.
Code availability
The codes^{51} used in the current study are publicly available on GitHub (https://github.com/demartid/stat_mod_ada_nn).
References
Pikovsky, A., Rosenblum, M. & Kurths, J. Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge Univ. Press, 2001).
Hebb, D. O. The Organization of Behaviour (Wiley, 1949).
Abeles, M. Corticonics (Cambridge Univ. Press, 1991).
Buzsaki, G. & Draguhn, A. Neuronal oscillations in cortical networks. Science 304, 1926–1929 (2004).
Berger, H. Über das elektrenkephalogramm des menschen. Arch. Psychiatr. Nervenkr. 87, 527–570 (1929).
Wang, X.J. Neurophysiological and computational principles of cortical rhythms in cognition. Physiol. Rev. 90, 1195–1268 (2010).
Chow, C. C., White, J. A., Ritt, J. & Kopell, N. J. Frequency control in synchronized networks of inhibitory neurons. J. Comput. Neurosci. 5, 407–420 (1998).
Beggs, J. M. & Plenz, D. Neuronal avalanches in neocortical circuits. J. Neurosci. 23, 11167–11177 (2003).
Gireesh, D. E. & Plenz, D. Neuronal avalanches organized as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3. Proc. Natl. Acad. Sci. USA 105, 7576–7581 (2008).
Tagliazucchi, E., Balenzuela, P., Fraiman, D. & Chialvo, D. R. Criticality in largescale brain FMRI dynamics unveiled by a novel point process analysis. Front. Physiol., https://doi.org/10.3389/fphys.2012.00015 (2012).
Shriki, O. et al. Neuronal avalanches in the resting MEG of the human brain. J. Neurosci. 33, 7079–7090 (2013).
Lombardi, F., Herrmann, H. J., PerroneCapano, C., Plenz, D. & de Arcangelis, L. Balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys. Rev. Lett. 108, 228703 (2012).
Lombardi, F. & de Arcangelis, L. Temporal organization of ongoing brain activity. Euro. Phys. J. Special Topics 223, 2119–2130 (2014).
Fontenele, A. J. et al. Criticality between cortical states. Phys. Rev. Lett. 122, 208101 (2019).
Poil, S.-S., Hardstone, R., Mansvelder, H. D. & Linkenkaer-Hansen, K. Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. J. Neurosci. 32, 9817–9823 (2012).
Scarpetta, S., Giacco, F., Lombardi, F. & Candia, A. D. Effects of Poisson noise in an IF model with STDP and spontaneous replay of periodic spatiotemporal patterns, in absence of cue stimulation. Biosystems 112, 258–264 (2013).
Di Santo, S., Villegas, P., Burioni, R. & Munoz, M. A. Landau–Ginzburg theory of cortex dynamics: scale-free avalanches emerge at the edge of synchronization. Proc. Natl. Acad. Sci. USA 115, 1356–1365 (2018).
Costa, A. A., Brochini, L. & Kinouchi, O. Selforganized supercriticality and oscillations in networks of stochastic spiking neurons. Entropy 19, 399 (2017).
Kinouchi, O., Brochini, L., Costa, A. A., Campos, J. G. F. & Copelli, M. Stochastic oscillations and dragon king avalanches in selforganized quasicritical systems. Sci. Rep. 9, 3874 (2019).
Buendia, V., Villegas, P., Burioni, R. & Munoz, M. A. Hybrid-type synchronization transition: where incipient oscillations, scale-free avalanches, and bistability live together. Phys. Rev. Res. 3, 023224 (2021).
De Martino, D. Feedback-induced self-oscillations in large interacting systems subjected to phase transitions. J. Phys. A 52, 045002 (2019).
Azouz, R. & Gray, C. M. Dynamic spike threshold reveals a mechanism for synaptic coincidence detection in cortical neurons in vivo. Proc. Natl. Acad. Sci. USA 97, 8110–8115 (2000).
Gardiner, C. Stochastic Methods Vol. 4 (Springer, 2009).
da Silva, F. L. EEG and MEG: relevance to neuroscience. Neuron 80, 1112–1128 (2013).
Linkenkaer-Hansen, K., Nikouline, V. V., Palva, J. M. & Ilmoniemi, R. J. Long-range temporal correlations and scaling behavior in human brain oscillations. J. Neurosci. 21, 1370–1377 (2001).
Freyer, F., Aquino, K., Robinson, P. A., Ritter, P. & Breakspear, M. Bistability and non-Gaussian fluctuations in spontaneous cortical activity. J. Neurosci. 29, 8512–8524 (2009).
Lombardi, F., Chialvo, D. R., Herrmann, H. J. & de Arcangelis, L. Strobing brain thunders: functional correlation of extreme activity events. Chaos Solitons Fract. 55, 102 (2013).
Wang, J. W. J. L., Lombardi, F., Zhang, X., Anaclet, C. & Ivanov, P. C. Nonequilibrium critical dynamics of bursts in θ and δ rhythms as fundamental characteristic of sleep and wake microarchitecture. PLoS Comput. Biol. 15, 1007268 (2019).
Lombardi, F. et al. Critical dynamics and coupling in bursts of cortical rhythms indicate nonhomeostatic mechanism for sleepstage transitions and dual role of VLPO neurons in both sleep and wake. J. Neurosci. 40, 171–190 (2020).
Lombardi, F., Shriki, O., Herrmann, H. J. & de Arcangelis, L. Longrange temporal correlations in the broadband resting state activity of the human brain revealed by neuronal avalanches. Neurocomputing 461, 657–666 (2021).
Peng, C.-K. et al. Mosaic organization of DNA nucleotides. Phys. Rev. E 49, 1685–1689 (1994).
Eisler, Z., Bartos, I. & Kertész, J. Fluctuation scaling in complex systems: Taylor’s law and beyond. Adv. Phys. 57, 89–142 (2008).
Lynn, C. W., Cornblath, E. J., Papadopoulos, L., Bertolero, M. A. & Bassett, D. S. Broken detailed balance and entropy production in the human brain. Proc. Natl. Acad. Sci. USA 118, 1–7 (2021).
Fekete, T. et al. Critical dynamics, anesthesia and information integration: lessons from multiscale criticality analysis of voltage imaging data. NeuroImage 183, 919–933 (2018).
Meshulam, L., Gauthier, J. L., Brody, C. D., Tank, D. W. & Bialek, W. Coarse graining, fixed points, and scaling in a large population of neurons. Phys. Rev. Lett. 123, 178103 (2019).
Font-Clos, F., Pruessner, G., Moloney, N. R. & Deluca, A. The perils of thresholding. New J. Phys. 17, 043066 (2015).
Deco, G., Kringelbach, M. L., Jirsa, V. K. & Ritter, P. The dynamics of resting fluctuations in the brain: metastability and its dynamical cortical core. Sci. Rep. 7, 3095 (2017).
Cabral, J., Kringelbach, M. L. & Deco, G. Functional connectivity dynamically evolves on multiple timescales over a static structural connectome: models and mechanisms. Neuroimage 160, 84–96 (2017).
Pausch, J., Garcia Millan, R. & Pruessner, G. Time dependent branching processes: a model of oscillating neuronal avalanches. Sci. Rep. 10, 13678 (2020).
Hopfield, J. J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982).
Schneidman, E., Berry, M. J., Segev, R. & Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440, 1007–1012 (2006).
Tkačik, G. et al. Searching for collective behavior in a large network of sensory neurons. PLoS Comput. Biol. 10, 1003408 (2014).
Tkačik, G. et al. Thermodynamics and signatures of criticality in a network of neurons. Proc. Natl. Acad. Sci. USA 112, 11508–11513 (2015).
Marre, O., El Boustani, S., Frégnac, Y. & Destexhe, A. Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Phys. Rev. Lett. 102, 138101 (2009).
Nasser, H., Marre, O. & Cessac, B. Spatiotemporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method. J. Stat. Mech. Theory Exp. 3, 03006 (2013).
Ferretti, F., Chardès, V., Mora, T., Walczak, A. M. & Giardina, I. Building general Langevin models from discrete datasets. Phys. Rev. X 10, 031018 (2020).
Menesse, G., Marin, B., GirardiSchappo, M. & Kinouchi, O. Homeostatic criticality in neural networks. Chaos Solitons Fract. 156, 111877 (2022).
Fekete, T., Hinrichs, H., Sitt, J. D., Heinze, H.J. & Shriki, O. Multiscale criticality measures as generalpurpose gauges of proper brain function. Sci. Rep. 11, 14441 (2021).
Brückner, D. B., Ronceray, P. & Broedersz, C. P. Inferring the dynamics of underdamped stochastic systems. Phys. Rev. Lett. 125, 058103 (2020).
Levina, A. & Priesemann, V. Subsampling scaling. Nat. Commun. 8, 15140 (2017).
Lombardi, F. & De Martino, D. demartid/stat_mod_ada_nn: v1.1.1 (Zenodo, 2022); https://doi.org/10.5281/zenodo.7426504
Acknowledgements
This research was funded in whole, or in part, by the Austrian Science Fund (FWF) (grant no. PT1013M03318 to F.L. and no. P34015 to G.T.). For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The study was supported by the European Union Horizon 2020 research and innovation program under the Marie Skłodowska-Curie actions (grant agreement no. 754411 to F.L.).
Author information
Authors and Affiliations
Contributions
F.L., G.T. and D.D.M. designed the research and wrote the paper. F.L. and D.D.M. analyzed the data. All of the authors performed the research.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Computational Science thanks Cristiano Capone, Osame Kinouchi, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Handling editor: Ananya Rastogi, in collaboration with the Nature Computational Science team. Peer reviewer reports are available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Methods, Table 1, Figs. 1–26 and Kullback–Leibler divergence analysis.
Supplementary Source Data
Source Data for Supplementary Figs. 1–26.
Source data
Source Data Fig. 1
Statistical Source Data.
Source Data Fig. 2
Statistical Source Data.
Source Data Fig. 3
Statistical Source Data.
Source Data Fig. 4
Statistical Source Data.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lombardi, F., Pepić, S., Shriki, O. et al. Statistical modeling of adaptive neural networks explains coexistence of avalanches and oscillations in resting human brain. Nat. Comput. Sci. 3, 254–263 (2023). https://doi.org/10.1038/s43588-023-00410-9
DOI: https://doi.org/10.1038/s43588-023-00410-9