Abstract
The development of novel techniques to record wide-field brain activity enables the estimation of data-driven models from thousands of recording channels and hence across large regions of cortex. These in turn improve our understanding of the modulation of brain states and of the richness of traveling-wave dynamics. Here, we infer data-driven models from high-resolution in vivo recordings of mouse brain obtained from wide-field calcium imaging. We then assimilate experimental and simulated data through the characterization of the spatiotemporal features of cortical waves in experimental recordings. Inference is built in two steps: an inner loop that optimizes a mean-field model by likelihood maximization, and an outer loop that optimizes a periodic neuromodulation via direct comparison of observables that characterize cortical slow waves. The model reproduces most of the features of the non-stationary and nonlinear dynamics present in the high-resolution in vivo recordings of the mouse brain. The proposed approach offers new methods of characterizing and understanding cortical waves for experimental and computational neuroscientists.
Introduction
In the last decade, macroscale wide-field imaging, coupled with fluorescent activity indicators (e.g., Genetically Encoded Calcium Indicators, GECIs^{1,2,3,4}), has provided new insights into the study of brain activity patterns^{5,6,7,8,9,10,11,12,13}. Although this microscopy technique does not reach single-cell resolution and supports temporal sampling rates much lower than classical electrophysiology, it enables the recording of neural dynamics simultaneously across brain areas, with a signal-to-noise ratio and spatiotemporal resolution high enough to capture cortex-wide dynamics in anesthetized and behaving animals^{14,15}. Furthermore, this technique allows the mapping of spontaneous network activity and has recently provided important information about the spatiotemporal features of slow waves^{11,12,16,18}. In this work we consider raw imaging datasets made up of 10,000 pixels per frame at 50 μm × 50 μm spatial resolution and sampled every 40 ms. This poses the challenge of building a model capable of being descriptive and predictive of this large amount of information.
The building of dynamic models requires capturing in mathematical form the causal relationships between the variables needed to describe the system of interest; the value of the model is measured both by its ability to match actual observations and by its predictive power. We mention two possible approaches to model building: the classical direct approach and the more recent inverse one. In the former, the parameters appearing in the model are assigned through a mixture of insights from knowledge about the main architectural characteristics of the system to be modeled, experimental observations, and trial-and-error iterations. In the latter, instead, model parameters are inferred by maximizing some objective function, e.g., likelihood, entropy, or a similarity defined through some specific observable (e.g., functional connectivity). A small number of models following the inverse approach have so far been able to reproduce, from a single recording, the complexity of the observed dynamics. In^{19} the authors constrain the model dynamics to reproduce the experimental spectrum. The work presented in^{20} proposes a network of modules called epileptors, and infers local excitability parameters. Similarly, the authors in^{21} estimate parameters and states from a large dataset of epileptic seizures. Some models focus on fMRI resting-state recordings and on the reproduction of functional connectivity^{22}. However, few works center on the reliability of the temporal and spatial features generated by the inferred model; an exception is^{23}, where the spatiotemporal propagation of bursts in a culture of cortical neurons is accurately reproduced with a minimal spiking model. Indeed, the assessment of this kind of result on large datasets requires methods capable of extracting and characterizing the emerging activity.
A major difficulty in assessing the goodness of inferred models is the impossibility of directly comparing the set of parameters of the model (“internal”) with the set of those observable in the biological system, since the two sets typically do not coincide. In order to fill the gap between experimental probing and mathematical description, this paper proposes a method relying on a set of flexible observables capable of optimally performing model selection and of validating the quality of the produced prediction. The developed modular analysis procedure is able to extract relevant observables, such as the distributions of frequency of occurrence, speed, and direction of propagation of the detected slow waves.
Another common problem when inferring model parameters is that the dynamics can be only partially constrained. Usually, the average activity and the correlations between different sites^{24,25} are used. However, it remains difficult to constrain other aspects of the spontaneously generated dynamics^{26}, e.g., the variety of shapes of traveling wavefronts and the distribution of frequency of slow waves. Aware of this issue, we propose a two-step approach that we name inner loop and outer loop. In the inner loop, the model parameters are optimized by maximizing the likelihood of reproducing the dynamics displayed by the experimental data locally in time (i.e., the dynamics at a given time depends only on the previous iteration step). Conversely, in the outer loop, the outcome of the simulation and the experimental data are compared via observables based on the statistics of the whole time course of the spontaneously generated dynamics. Specifically, we demonstrate that the inclusion of a time-dependent cholinergic neuromodulation term in the model enables a better match between experimental recordings and simulations, inducing a variability in the expressed dynamics that would otherwise be stereotyped. This additional term influences the excitability of the network along the simulation time course. The outer loop thus enables a quantitative search for the optimal modulation.
Another aspect to be considered is the combination of a large number of parameters in the model and the typically short duration of recording sessions (in our case, only six recordings lasting 40 s each). Since the resulting inferred system could be underdetermined, it is important to use good anatomical and neurophysiological priors. Following this route, a minimal anatomical assumption is to decompose structural connectivity into a short-range lateral connectivity contribution (the exponential decay characterizing lateral, or intra-area, connectivity^{27}) and a long-range, white-matter-mediated connectivity. It is well known, and confirmed in our experimental data, that during deep sleep and deep and intermediate anesthesia, activity propagates as slow waves and can therefore be mainly mediated by lateral cortical connectivity (ref. ^{28}). Relying on this, we chose to model lateral connectivity using local elliptic kernels. Elliptical kernels are often used in neural field theories to predict wave properties as a function of the connectivity parameters^{29,30}; this is usually referred to as a direct approach, going from parameters to dynamics. In^{31} the authors propose equations that include several properties of the cerebral tissue (2D structure, temporal delays, nonlinearities, and others) while retaining mathematical tractability. This allows the analytical prediction of global mode properties (such as the stability and velocity of propagating wavefronts) for different geometries. Here, we propose a methodology to implement an inverse approach, from wave properties to model parameters. The choice of local connectivity kernels reduces the number of parameters to be inferred from N^{2} to N (with N the number of recording sites). The proposed approach prevents overfitting and keeps the computational cost of inference under control even for higher-resolution data.
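As an illustration of the prior just described, an exponentially decaying elliptical kernel can be sketched as follows. The parameter names (amplitude, decay length, eccentricity, orientation) mirror those used later in the Results; the exact functional form and the numerical values below are illustrative assumptions, not the fitted constants of this work.

```python
import numpy as np

def elliptic_kernel(dx, dy, k0, lam, ecc, phi):
    """Connection strength toward a pixel at displacement (dx, dy) in mm.

    Assumed form: exponential decay of an elliptically warped distance.
    k0: overall amplitude; lam: decay length (mm); ecc: eccentricity in
    [0, 1); phi: orientation of the major axis (radians).
    """
    # rotate the displacement into the frame of the ellipse
    u = np.cos(phi) * dx + np.sin(phi) * dy    # along the major axis
    v = -np.sin(phi) * dx + np.cos(phi) * dy   # along the minor axis
    # shrink the minor axis so iso-strength contours are ellipses
    r = np.sqrt(u**2 + (v / np.sqrt(1.0 - ecc**2))**2)
    return k0 * np.exp(-r / lam)

# kernel weights from one pixel to an 11 x 11 neighborhood (50 um pitch)
pitch = 0.05  # mm
offsets = np.arange(-5, 6) * pitch
DX, DY = np.meshgrid(offsets, offsets)
W = elliptic_kernel(DX, DY, k0=1.0, lam=0.3, ecc=0.8, phi=np.pi / 4)
```

With this parameterization, each pixel contributes only the handful of scalars (k0, lam, ecc, phi) rather than a full row of a connectivity matrix, which is the source of the N² to N reduction mentioned above.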
In summary, this paper addresses the understanding of the mechanisms underlying the emergence of the spatiotemporal features of cortical waves by leveraging the integration of two aspects: the knowledge coming from experimental data and the interpretation gained from simulations. We identified the essential ingredients needed to reproduce the main modes expressed by the biological system, providing a mechanistic explanation grounded in neuromodulation and in the spatial heterogeneity of connectivity and local parameters.
In the following, the Results section presents the main elements of this work (for methodological details, see the Methods section), and the Discussion section illustrates the limitations and potential of the approach we introduced, in particular in relation to experimental, theoretical, and modeling perspectives.
Results
We propose a two-step procedure to reproduce, in a data-constrained simulation, the spontaneous activity recorded from the mouse cortex during anesthesia. Here, we considered signals collected from the right hemisphere of a mouse anesthetized with a mix of Ketamine and Xylazine. The activity of excitatory neurons labeled with the calcium indicator GCaMP6f is recorded through wide-field fluorescence imaging, see Fig. 1a. In this anesthetized state, the cortex displays slow waves from ≃0.25 Hz to ≃5 Hz (i.e., covering a frequency range that corresponds to the full δ frequency band and extends a little towards higher frequencies). The waves travel across distant portions of the cortical surface, exhibiting a variety of spatiotemporal patterns.
The optical signal is highly informative about the activity of excitatory neurons. However, the GCaMP6f indicator, despite being “fast” compared to other indicators, has a slow response in time compared to the time scale of the spiking activity of the cells. The impact on the observed experimental signature can be described as a convolution of the original signal with a low-pass transfer function, Eq. (2), resulting in a recorded fluorescence signal that peaks about 200 ms after spike onset and exhibits a long-tailed decay^{7} (see Methods, Experimental Data and Deconvolution section for details). To take this effect into account, we performed a deconvolution of the fluorescence signal (Fig. 1b–top) to better estimate the firing rate of the population of excitatory neurons, i.e., the multi-unit activity, recorded by each single pixel (see Fig. 1b–bottom).
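The forward model and its inversion can be sketched as below: firing activity is convolved with a slow indicator kernel, and a regularized (Wiener-style) division in the Fourier domain recovers an estimate of the underlying rate. The difference-of-exponentials kernel and its time constants are rough illustrative assumptions, not the transfer function of Eq. (2) fitted in this work.

```python
import numpy as np

def calcium_kernel(t, tau_rise=0.05, tau_decay=0.5):
    """Illustrative GCaMP6f-like impulse response: difference of
    exponentials, normalized to unit peak (tau values are assumptions)."""
    h = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return h / h.max()

def wiener_deconvolve(f, h, snr=100.0):
    """Estimate the firing rate from fluorescence f by regularized
    division in the Fourier domain (a sketch, not the paper's method)."""
    n = len(f)
    H = np.fft.rfft(h, n)
    F = np.fft.rfft(f, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.fft.irfft(F * G, n)

dt = 0.04                      # 40 ms sampling, as in the recordings
t = np.arange(0.0, 4.0, dt)
rate = np.zeros_like(t)
rate[[20, 60]] = 1.0           # two activity events
h = calcium_kernel(t)
fluo = np.convolve(rate, h)[:len(t)]   # forward model: rate -> fluorescence
est = wiener_deconvolve(fluo, h)       # recovered rate estimate
```

The regularization term (1/snr) keeps the division stable at frequencies where the kernel transfers almost no power, at the cost of slight temporal smoothing.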
We considered two sets of six acquisitions (each one lasting 40 s) collected from two mice. We characterized the slow-wave activity by defining a set of macroscopic local observables describing the properties of each cortical site, as opposed to the cellular properties derived from the inference: local speed, direction, and inter-wave interval (IWI, see Methods for more details). We also used these observables to compare the spontaneous dynamics measured in experimental recordings with the one reproduced in simulated data. In the first step of the proposed method, the inner loop, the parameters of the model are inferred through likelihood maximization, Fig. 1c–left, exploiting a set of known, yet generic, anatomical connectivity priors^{27} and assuming that mechanisms of suppression of long-range connectivity are in action during the expression of global slow waves^{28}. Such anatomical priors correspond to probabilities of connection among neuronal populations, and take the shape of elliptical kernels exponentially decaying with distance. For each pixel, the inference procedure has to identify the best local values for the parameters of such connectivity kernels. In addition, for each acquisition period tn ∈ {t1, …, t6}, it identifies the best local (i.e., channel-wise) values for the spike-frequency-adaptation strength and the external current.
In the second step, the outer loop, we search (through a grid exploration) for the hyperparameters granting the best match between model and experiment by comparing simulations and data, Fig. 1c–right. This second step exploits acquired knowledge about the neuromodulation mechanisms in action during cortical slow-wave expression, implementing them as in^{32}.
In Fig. 1d, we provide a qualitative preview of the fidelity of our simulations by reporting three frames from the simulation (top) and the data (bottom). This example shows similarities in slow-wave propagation: the origin of the wave, the wave extension, and the activity are qualitatively comparable, as is the activation pattern. A quantitative comparison is detailed in the following sections.
Characterization of the cortical slow waves
In this work, we improved the methodology implemented by^{16} (see Methods for details), providing three local (i.e., channel-wise) quantitative measurements to be applied in order to characterize the identified slow-wave activity: local wave velocity, local wave direction, and local wave frequency. The high spatial resolution of the data allows not only the extraction of global information for each wave, but also the evaluation of the spatiotemporal evolution of the local observables within each wave.
For each channel participating in a wave event, the local velocity is evaluated as the inverse of the absolute value of the gradient of its passage function (see Eq. (21)), the local direction is computed from the weighted average of the components of the local wave velocities (see Eq. (22)), and the local wave frequency is expressed through its reciprocal, the inter-wave interval (IWI), evaluated as the time between two consecutive waves passing through the same channel (see Eq. (23)).
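The velocity and direction definitions above can be sketched on a regular grid using finite-difference gradients of a wave's passage-time map. This is a minimal sketch of the idea behind Eqs. (21)–(22), not the exact estimator of the analysis pipeline (which operates on detected wave events and applies weighting).

```python
import numpy as np

def local_wave_observables(T, pitch=0.1):
    """Local speed and direction from a wave's passage-time map T.

    T: 2D array of passage times (s), one entry per pixel;
    pitch: pixel spacing (mm). The local slowness is |grad T|, so the
    local speed is its inverse, and the direction is the angle of the
    gradient (a sketch, assuming a regular grid)."""
    gy, gx = np.gradient(T, pitch)        # dT/dy, dT/dx in s/mm
    slowness = np.sqrt(gx**2 + gy**2)     # |grad T|
    speed = 1.0 / slowness                # mm/s
    direction = np.arctan2(gy, gx)        # propagation angle (radians)
    return speed, direction

# synthetic planar wave moving along +x at 20 mm/s on a 2 mm patch
x = np.arange(20) * 0.1
T = np.tile(x / 20.0, (20, 1))
speed, direction = local_wave_observables(T)
```

For a planar wave the estimator returns the true speed everywhere; for the curved and colliding wavefronts seen in the data, the local maps reveal the spatial structure that global per-wave averages would hide.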
In Fig. 2a–c we show a summary of the data analysis performed on all the recordings from the two mice. Interestingly, the defined experimental observables are remarkably comparable between the two subjects, since they display similar distributions of detected wave velocity, direction, and IWI.
Further, we classified the waves into propagation modes with a Gaussian Mixture Model (GMM) classification approach applied in a downsampled channel space (44 and 41 informative channels for the two mice, respectively). The number of modes fitted by the GMM is automatically identified using the Bayesian Information Criterion (BIC), relying on a likelihood maximization protocol^{33}. For additional details see Supp. Mat. section Detecting the number of components in the GMM. In Fig. 2d we report the wave propagation modes identified through the GMM (see details in Methods, Gaussian Mixture Model section). Specifically, GMMs are fitted on a dataset composed of the observed spatiotemporal wave patterns represented in the subsampled channel space (i.e., for each entry in the dataset, the number of features is equal to the number of reduced channels). See Methods, Slow Waves and Propagation analysis section for a more detailed description of how traveling waves are identified from the raw signal. Consistently with what is expected from previous studies^{34}, in both mice two main propagation directions are identified: a more frequent antero-posterior and a less frequent postero-anterior direction.
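The BIC-based selection of the number of propagation modes can be sketched as below, where each row of the dataset is one wave represented by its passage times on the reduced channel set. This is a minimal sketch using scikit-learn on synthetic data; the covariance structure and search range used in the paper may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_modes_with_bic(X, max_components=8, seed=0):
    """Fit GMMs with 1..max_components components and keep the one with
    the lowest BIC (a sketch of the selection protocol in the text)."""
    best_gmm, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=seed).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm

# synthetic collection: two wave families in a 40-channel space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (60, 40)),
               rng.normal(1.0, 0.1, (60, 40))])
gmm = fit_modes_with_bic(X)
labels = gmm.predict(X)
```

The BIC penalty grows with the number of parameters, so adding a component is accepted only when it buys a substantial likelihood gain, which guards against fragmenting one coherent propagation mode into many.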
It is also worth noting that, as quantitatively shown in^{35}, the distributions of the quantitative wave observables change as a function of the downsampling factor. With decreasing spatial resolution, fewer waves are detected, and they appear more planar as some complex local patterns are no longer detected. In Fig. 2a, we show the distribution of local velocities measured over the original dataset at a spatial resolution of 0.1 mm (black) and over the one downsampled to a spatial resolution of 0.6 mm (orange). At the lower resolution, the distributions are sharper and have a lower mean, consistent with the global velocities of the typical identified waves.
Two-step inference method
The inner loop
The first step of our inference method can be seen as a one-way flow of information from data to model and consists of estimating the model parameters by likelihood maximization, see Fig. 3a.
We built a generative model as a network of interacting populations, each composed of 500 AdEx (Adaptive Exponential integrate-and-fire) neurons. Each population is modeled by a first-order mean-field equation^{29,36,37,38}, according to which all the neurons in the population are described by their average firing rate. The neurons are affected by spike-frequency adaptation, which is accounted for by an additional equation for each population (see Methods for details). We remark that we considered current-based neurons; however, the mean-field model might be extended to the conductance-based case^{39,40}. Each population models the group of neurons recorded by a single pixel (the number of considered pixels, i.e., populations, is obtained after a downsampling is performed, as illustrated in Methods). The parameters of the neuron model, such as the membrane potential, the threshold voltage, etc., have been set to the values reported in Table 1. The other parameters of the model (connectivity, external currents, and adaptation) are inferred from the data. This is achieved by defining the log-likelihood \(\mathcal{L}\) for the parameters {ξ} to describe a specific observed dynamics {S} as

$$\mathcal{L}(\{\xi\}\,|\,\{S\}) = -\sum_{i,t} \frac{\left[S_i(t+\Delta t) - F_i(t)\right]^2}{2c} + \mathrm{const},$$
where F_{i}(t) is the transfer function of population i and c is the variance of the Gaussian distribution from which the activity is extracted (more details can be found in the Methods section). We remark that in this work we consider populations composed of excitatory and inhibitory neurons. To model this, it is necessary to use an effective transfer function describing the average activity of the excitatory neurons and accounting for the effect of the inhibitory neurons (see Methods for details, Excitatory-Inhibitory module, the effective transfer function section).
The optimal values for the parameters are inferred by maximizing the likelihood with respect to the parameters themselves. The optimizer we used is the gradient-based iRprop algorithm^{41}. The parameters are not assumed to be isotropic in space, and each population (identified by a pixel in the image of the cortex) is left free to have different inferred parameters. The resulting parameters can thus be reported in a map, as illustrated in Fig. 3b–g. Each panel represents, color-coded, the spatial map of a different inferred parameter, describing its distribution over the cortex of the mouse. Specifically, λ, k_{0}, e, and a characterize the inferred shape of the elliptical, exponentially decaying connectivity kernels at a local level (see Methods, section Elliptic exponentially-decaying connectivity kernels); b is the strength of the spike-frequency adaptation (see section Generative Model and Eq. (4)); I_{ext} is the local external input current. Panel h in Fig. 3 presents, for a few exemplary pixels, the total amount of incoming connectivity k, and confirms the presence of a strong eccentricity (e) and prevalent orientations (ϕ) of the elliptic connectivity kernels, both of which contribute to the initiation and propagation of wavefronts along preferential axes, orthogonal to the major axis of the ellipses.
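The iRprop family of optimizers adapts a per-parameter step size from the sign of the gradient, which makes it robust to the very different scales of the inferred parameters. The sketch below implements the iRprop- variant on a toy quadratic; the hyperparameter values are the common defaults from the literature, not necessarily those used in this work.

```python
import numpy as np

def irprop_minus(grad_fn, x0, n_iter=200, eta_plus=1.2, eta_minus=0.5,
                 step0=0.1, step_min=1e-6, step_max=1.0):
    """iRprop-: sign-based descent with per-parameter adaptive steps.

    grad_fn(x) returns the gradient of the objective to minimize; for
    the model in the text this would be the gradient of the negative
    log-likelihood with respect to the local parameters."""
    x = np.asarray(x0, dtype=float).copy()
    step = np.full_like(x, step0)
    g_prev = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad_fn(x)
        sign_change = g * g_prev
        # grow the step while the gradient sign is stable, shrink on a flip
        step = np.where(sign_change > 0,
                        np.minimum(step * eta_plus, step_max),
                        np.where(sign_change < 0,
                                 np.maximum(step * eta_minus, step_min),
                                 step))
        # iRprop-: after a sign flip, suppress the update for this step
        g = np.where(sign_change < 0, 0.0, g)
        x -= np.sign(g) * step
        g_prev = g
    return x

# minimize f(x) = ||x - target||^2 from the origin
target = np.array([2.0, -3.0])
x_opt = irprop_minus(lambda x: 2.0 * (x - target), np.zeros(2))
```

Because only the gradient sign is used, a poorly scaled likelihood surface (e.g., currents vs. adaptation strengths) does not force a single global learning rate.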
In principle, such inferred parameters define the generative model that best reproduces the experimental data. A major risk while maximizing the likelihood (see Fig. 3i, blue line) is overfitting, which we avoid by monitoring the validation likelihood (see Fig. 3i, orange line), i.e., the likelihood evaluated on data expressing the same kind of dynamics but not used for the inference. A training-likelihood increase associated with a validation-likelihood decrease during the inference procedure is an indicator of overfitting. This is, however, not the case here.
Notably, there is no spatial regularization in the likelihood. The spatial smoothness of the parameters emerges naturally when inferring the model from data.
The outer loop
The likelihood maximization (inner loop) constrains the dynamics at the following time step, given the activity at the previous step. Therefore, we expect that running a simulation with the inferred parameters should give a good prediction of the activity of the network. However, it is not guaranteed that the spontaneous activity generated by the model will reproduce the desired variety of dynamical properties observed in the data. Indeed, the constraints imposed within the inner loop are local in time (i.e., the constraint at a certain time step t only depends on the state of the network at the previous time step t − 1). As a consequence, the model is not able to generate the long-term temporal richness observed in the data. In other words, when running the spontaneous mean-field simulation for the same duration as the original recording, we found that the parameters estimated in the inner loop can be adopted to obtain a generative model capable of reproducing oscillations and traveling wavefronts. However, the activity produced by the model inferred in the inner loop is much more regular than the experimental activity. For instance, the down-state duration and the wavefront shape express almost no variability when compared to experimental recordings (see Fig. 3, and Supp. Mat. Fig. S6).
For this reason, it is necessary to introduce the outer loop described here (see Fig. 4a). First, we analyze the spontaneous activity of the generative model and compare it to the data, in order to tune the parameters that cannot be optimized by direct likelihood differentiation (see Fig. 4a). Thus, we include an external oscillatory neuromodulation in the model (inspired by^{32}), and look for the optimal amplitude and period of the neuromodulation proxy, expected to affect both the values of I_{ext} and b (more details in the Methods section).
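A neuromodulation proxy of this kind can be sketched as a slow sinusoid that jointly shifts the external current and the adaptation strength. The sinusoidal form and the coupling signs below (higher cholinergic tone raising excitability while weakening adaptation) are illustrative assumptions consistent with the cited literature; the exact coupling used in the model is given in Methods.

```python
import numpy as np

def neuromodulation(t, A, T, b0, I0):
    """Oscillatory neuromodulation proxy of amplitude A and period T.

    Returns time courses of the adaptation strength b(t) and the
    external current I_ext(t) around their inferred baselines b0, I0
    (a sketch; the functional form is an assumption)."""
    m = A * np.sin(2.0 * np.pi * t / T)
    b_t = b0 * (1.0 - m)   # high tone -> weaker spike-frequency adaptation
    I_t = I0 * (1.0 + m)   # high tone -> stronger depolarizing drive
    return b_t, I_t

t = np.arange(0.0, 10.0, 0.04)   # 40 ms steps, as in the recordings
b_t, I_t = neuromodulation(t, A=0.3, T=1.0, b0=60.0, I0=0.4)
```

Only two hyperparameters (A, T) are added, which is what makes the grid search of the outer loop tractable.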
The identification of the optimal values for these parameters requires the definition of a quantitative score to evaluate the similarity between data and simulation. We computed, over the whole set of waves, the cumulative distributions of the three local observables characterizing the identified traveling waves (speed, direction, and IWI, already introduced in Section Characterization of the Cortical Slow Waves, Fig. 4c, d). As a distance between cumulative distributions we chose the Earth Mover’s Distance (EMD, see Methods section for more details). The EMD is separately evaluated over each of these observables (Fig. 4e) in a grid search. We then combined the three EMDs as in Eq. (20). The resulting “Combined Distance” is reported in Fig. 4d for a single trial, and is further constrained by an additional requirement that excludes the zone marked in gray: we reject those simulations with too-long down states and too-short up states compared to the experimental distributions (see purple and yellow lines in Fig. 4, respectively). Further details about the rejection criteria can be found in Methods section The Outer Loop: grid search and data-simulation comparison.
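The score just described can be sketched as below using the one-dimensional Wasserstein distance (the EMD for scalar observables). Combining the three EMDs as a weighted sum with per-observable normalization is an illustrative assumption standing in for Eq. (20).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def combined_distance(exp_obs, sim_obs, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the EMDs over the three local observables.

    exp_obs / sim_obs: dicts of 1D samples keyed by observable name.
    Each EMD is normalized by the experimental spread so the three
    terms are comparable (an assumption, not the paper's Eq. (20))."""
    total = 0.0
    for w, key in zip(weights, ("speed", "direction", "iwi")):
        scale = float(np.std(exp_obs[key])) or 1.0
        total += w * wasserstein_distance(exp_obs[key], sim_obs[key]) / scale
    return total

# synthetic check: a near-copy of the data scores better than a mismatch
rng = np.random.default_rng(1)
exp = {"speed": rng.normal(20.0, 5.0, 500),
       "direction": rng.normal(0.0, 0.5, 500),
       "iwi": rng.exponential(1.0, 500)}
good = {k: v + rng.normal(0.0, 0.01, 500) for k, v in exp.items()}
bad = {"speed": rng.normal(35.0, 5.0, 500),
       "direction": rng.normal(1.5, 0.5, 500),
       "iwi": rng.exponential(2.0, 500)}
```

In the grid search, this scalar score is evaluated once per (amplitude, period) pair, and the rejection criteria on state durations are applied before comparing scores.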
The combination of such distances as a function of the two hyperparameters (amplitude and period) leads to the identification of the optimal point (where the “Combined Distance” is minimal), reported as a green cross in Fig. 4d. The comparison between experimental and simulated cumulative macroscopic distributions for a “good” (optimal match) and a “bad” (non-optimal match) point in the grid search is depicted in panels 4b and 4c, respectively. We applied the procedure to both mice, 6 recorded trials per mouse, and calibrated the model on each trial independently. The optimal values of amplitude and period for each trial are depicted in Fig. 4f and g, respectively. The distributions of the inferred neuromodulation period (T) and amplitude (A) (Fig. 4f and g, respectively) are consistent between the two mice. The inferred period varies mildly around the value corresponding to a frequency of 1 Hz. We observe more variability in the inferred amplitudes; this is reasonable, because the level of anesthesia changes among trials. Panel 4h, on the other hand, shows the span of the combined EMD measurements for each trial in both mice (within the range of values for amplitude and period considered in the grid search), depicting the variety of the observed phenomena. The black points report the “Combined Distance” for the simulation resulting from the inner loop. Moreover, Fig. 4h shows that the best “Combined Distance” achieved in the grid search (Fig. 4h, bottom of the candle) is lower than the worst one (Fig. 4h, top of the candle), and also than the one obtained for the inner-loop simulation (Fig. 4h, black points). Indeed, looking at the grid-search results in Fig. 4d, the row corresponding to an amplitude A = 0 reports the results of simulations without neuromodulation, i.e., the inner-loop output. We observe that, according to the metrics we defined, this simulation outcome is much worse than the optimal point indicated with the green cross.
Validation of the simulation through GMM-based propagation modes
As shown in Fig. 5, the sequential application of the inner and the outer loop (i.e., the two-step inference model) results in an evident improvement of the simulation. Specifically, when looking at the raster plot, the neuromodulation (outer loop) appears to be a key ingredient, able to reduce the stereotypy of the wave collection and to introduce elements that mimic the richness and variability of the biological phenomenon.
To further highlight this aspect, we introduce a color code for labeling waves belonging to the different clusters identified as distinct propagation modes by the GMM when the model is applied to the collection of experimental and simulated waves (see Suppl. Mat. Detecting the number of components in the GMM for more details on how the algorithm detects the number of clusters). Indeed, besides illustrating the propagation features of the wave collections (as in Fig. 2), we use the GMM as a tool for posterior validation of the simulation outcome (Fig. 6). The idea behind this approach is that the GMM, acting on a wave collection, is able to identify distinct clusters, grouping wave events with comparable spatiotemporal features, provided that each identified cluster is significantly populated. This happens if the wave collection to which the GMM is applied for fitting the propagation modes is self-consistent, i.e., contains propagation events that represent instances of the same coherent phenomenon. It is worth noting that the features fitted by the GMM are the spatiotemporal propagation patterns of the waves (i.e., each wave is a point in the spatially downsampled channel space). These features differ from those used by the outer loop (i.e., the local velocity, direction, and IWI cumulative distributions), thus providing a validation of the experiment-simulation comparison.
The results reported in Fig. 6 support this interpretation: here, for each mouse, the wave collection to which the GMM is applied is assembled by putting together wave events from the six experimental trials and wave events from the corresponding simulated trials. Fig. 6a and e report the comparison between experimental and simulated distributions of wave velocity (left panel), direction (central panel), and IWI (right panel). The GMM acting on the entire collection (experiments + simulations) identifies 4 propagation modes; despite differences between the two mice, the results illustrated in the pie charts (Fig. 6b and f) show that the GMM is “fooled” by the simulation, meaning that the clustering process on the entire collection does not trivially separate experimental from simulated events. Rather, experimental and simulated waves are well mixed when events are grouped into the clusters identified by the GMM, implying a correspondence between data and simulations.
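The "fooled GMM" check can be sketched as a per-cluster mixing statistic: after clustering the pooled collection, one computes for each cluster the fraction of simulated events it contains. Balanced fractions indicate mixing; fractions near 0 or 1 indicate segregation, as in the shuffled control. This is a minimal sketch of the idea, not the paper's exact occupancy measure.

```python
import numpy as np

def cluster_mixing(labels, is_sim):
    """Fraction of simulated events in each cluster.

    labels: cluster index per wave (from the fitted GMM);
    is_sim: boolean flag per wave, True for simulated events.
    Returns {cluster: fraction of simulated waves in that cluster}."""
    labels = np.asarray(labels)
    is_sim = np.asarray(is_sim, dtype=bool)
    return {int(c): float(is_sim[labels == c].mean())
            for c in np.unique(labels)}

# toy example: two clusters with experimental and simulated waves mixed
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
is_sim = np.array([0, 1, 0, 1, 1, 0, 0, 1])
frac = cluster_mixing(labels, is_sim)
```

For a collection of 50% simulated events, fractions close to 0.5 in every populated cluster mean the clustering does not separate data from simulation.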
A validation of this is depicted in Fig. 6c and g. Specifically, we compared the optimal simulation with a control case obtained by “shuffling” the output of the simulation through a channel permutation applied equally to all the simulated waves. The GMM correctly reports a segregation between data and numerical events, suggesting that the GMM is deceived only by a simulation that properly approaches the data. Fig. 6c and g show the fraction of occupancy of the waves identified in the shuffled dataset. This case is explored in Supplementary Note 1.
In other words, we can claim that the data and the simulations obtained from the two-step inference model presented here express the same propagation modes (centroids), as shown in Fig. 6d, h, while the modes expressed by the “shuffled” simulation (used as a control) are orthogonal to the modes found in the data.
Discussion
In this paper, we have proposed a two-step inference method to automatically build a high-resolution mean-field model of a whole cortical hemisphere of the mouse.
In recent years, the field of statistical inference applied to neuronal dynamics has mainly focused on the reliable estimation of an effective synaptic structure of a network of neurons^{42,43,44}. In^{45} the authors demonstrated, on an in vitro population of ON and OFF parasol ganglion cells, the ability of a Generalized Linear Model (GLM) to accurately reproduce the dynamics of the network^{46}; in addition, they studied the response properties of lateral intraparietal area neurons at the single-trial, single-cell level. The capability of GLMs to capture a broad range of single-neuron response behaviors was analyzed in^{47}. However, in these works, the focus was on the response of neurons to stimuli of different spatiotemporal complexity. Moreover, recurrent connections, even when accounted for, did not provide a decisive correction to the network dynamics. To date, very few published studies have focused on the spontaneous dynamics of networks inferred from neuronal data. Most of them focused on the resting state recorded through fMRI and on the reproduction of its functional connectivity^{22}. In^{23} the temporal and spatial features of burst propagation were accurately reproduced, but on a simpler biological system, a culture of cortical neurons recorded with a 60-electrode array. Here we aim at reproducing the dynamics of a whole hemisphere of the mouse with approximately 1400 recording sites.
One of the major risks when inferring a model is the failure of the obtained generative model to reproduce dynamics comparable to the data. The reason for this is the difficulty of constraining some of the observables when inferring a model. A common example is offered by the Ising model: when inferring its couplings and external fields, only the magnetizations and the correlations between different spins are constrained^{24}; it is not possible to constrain the power spectrum displayed by the model or other temporal features. When this experiment is done on the Ising model itself, this is not a problem: if the dataset is large enough, the correct parameters are inferred and the temporal features are also correctly reproduced. However, if the original model contains some hidden variables (unobserved neurons, higher-order connectivities, other dynamical variables), the dynamics generated by the inferred model might not be representative of the original dynamics. This led us to introduce the two-step method, in order to adjust parameters beyond likelihood maximization.
Another obvious but persistent problem in the application of inference models to neuroscience is how to assess the value of the inferred parameters, as well as their biological interpretation. In the literature, a cautionary remark is usually included, recognizing that inferred couplings are to be understood as “effective”, leaving open, of course, the problem of what exactly this means.
One way to infer meaningful parameters is to introduce good priors. In this work, we assumed that under anesthesia the inter-areal connectivity is largely suppressed in favor of lateral connectivity, as demonstrated, e.g., in^{28} during a similar oscillatory regime. Also, the probability of lateral connectivity in the cortex is experimentally known^{27} to decay with distance according to an exponential shape, with a spatial scale in the range of a few hundred μm. Furthermore, the observation of preferential directions for the propagation of cortical slow waves^{48} suggested that we explicitly account for such anisotropy. For all of these reasons, we introduced exponentially decaying elliptical connectivity kernels in our model. We remark that inferring connections at this mesoscopic scale allows obtaining information complementary to that contained in inter-areal connectivity atlases, where connectivity is considered on a larger spatial scale and is carried by white matter rather than by lateral connectivity.
Another possible approach to obtain priors could be to make use of Structural Connectivity (SC) atlases that, however, ignore individual variations in anatomic structures and are usually based on ex-vivo samples whose structure can be modified by brain tissue processing^{49,50,51}. Also, SC atlases fail to track the dynamic organization of functional cortical modules in different cognitive processes^{52,53}. Indeed, a key feature of neural networks is degeneracy, the ability of structurally different systems to perform the same function^{49}. In other words, the same task can be achieved by different individuals with different neural architectures.
On the other hand, Functional Connectivity (FC) is an individual-specific, activity-based measure that remains highly dependent on physiological parameters like brain state and arousal, so that FC atlases might not be representative of the subject under investigation^{54}. E.g., state-dependent control of circuit dynamics by neuromodulatory influences including acetylcholine (ACh) strongly affects cortical FC^{55}. In addition, functional modules often vary across individual specimens and different studies^{52,56,57}.
In summary, generic atlases of Structural Connectivity (SC) and Functional Connectivity (FC) contain information averaged among different individuals. Though similar functions and dynamics can be sustained in each individual by specific SCs and FCs^{50,51}, detailed information about individual brain peculiarities is lost with these approaches. In the framework presented here, detailed individual connectivities can be directly inferred in vivo, therefore complementing the information coming from SC and FC, and contributing to the construction of brain-specific models descriptive of the dynamical features observed in one specific individual.
One of the values added by the modeling technique described here is that its dynamics relies on a mean-field theory, which describes populations of interconnected spiking neurons. This is particularly useful for setting up richer and more realistic dynamics in large-scale spiking simulations of slow waves and awake-like activity^{58}, which in the absence of such detailed inferred parameters behave according to rather stereotyped spatiotemporal features.
In the framework of the activity we are carrying out in the Human Brain Project, aiming at the development of a flexible and modular analysis pipeline for the study of cortical activity^{35} (see also^{59}), we developed for this study a set of analysis tools applicable to both experimental and simulated data, capable of identifying wavefronts and quantifying their features.
In perspective, we plan to integrate plastic cortical modules capable of incremental learning and sleep on small-scale networks with the methodology described here, demonstrating the beneficial effects of sleep on cognition^{38,60,61}.
This paper is based on data acquired under Ketamine-Xylazine anesthesia. An example of application with potential therapeutic impact is to contribute to the understanding of the mechanisms underlying the effect of ketamine treatments in the therapy of depression^{62}.
Generally speaking, there is strong evidence of oscillatory neuromodulation currents affecting the cortex and coming from deeper brain areas (such as the brainstem), with a sub-Hertz frequency^{63,64}. However, we acknowledge that this shape for the neuromodulation might have a limited descriptive power, and we plan to develop more general models. A possible choice is a combination of many oscillatory modes or noisy processes such as Ornstein-Uhlenbeck and Gaussian processes. However, the risk of this kind of model is overparametrization. This raises the need for a penalty discouraging too large a number of parameters, to avoid overfitting (see e.g. the Akaike Information Criterion^{65}).
Finally, we stress once again that in the current work we only considered spontaneous dynamics. We plan to go beyond spontaneous activity, aiming at modeling the dynamics of slow waves when the network receives input signals^{66,67}, ranging from simple (local) electrical perturbations to proper sensory stimuli. The methodology to achieve this would be very similar to that presented in^{68,69,70}. We present an example of the response of the inferred network to a pulse stimulation in Supp. Mat. Section Simulation with pulse stimulation.
Methods
Mouse widefield calcium imaging
In-vivo calcium imaging datasets have been acquired by LENS personnel (European Laboratory for Non-Linear Spectroscopy (LENS Home Page, http://www.lens.unifi.it/index.php)) in the Biophotonics laboratory of the Physics and Astronomy Department, Florence, Italy. All the procedures were performed in accordance with the rules of the Italian Minister of Health (Protocol Number 183/2016-PR). Mice were housed in clear plastic enriched cages under a 12 h light/dark cycle and were given ad libitum access to water and food. The transgenic mouse line C57BL/6J-Tg(Thy1-GCaMP6f)GP5.17Dkim/J (referred to as GCaMP6f mice; for more details, see The Jackson Laboratory, Thy1-GCaMP6f, https://www.jax.org/strain/025393; RRID:IMSR_JAX:025393) from Jackson Laboratories (Bar Harbor, Maine, USA) was used. This mouse model selectively expresses the ultrasensitive calcium indicator GCaMP6f in excitatory neurons.
Surgery procedures and imaging: 6-month-old male mice were anaesthetized with a mix of Ketamine (100 mg/Kg) and Xylazine (10 mg/Kg). The procedure for implantation of a chronic transcranial window is described in^{16,17}. Briefly, to obtain optical access to the cerebral mantle below the intact skull, the skin and the periosteum over the skull were removed following application of the local anesthetic lidocaine (20 mg/mL). Imaging sessions were performed right after the surgical procedure. Widefield fluorescence imaging of the right hemisphere was performed with a 505 nm LED light (M505L3, Thorlabs, New Jersey, United States) deflected by a dichroic mirror (DC FF 495-DI02, Semrock, Rochester, New York, USA) onto the objective (2.5x EC Plan Neofluar, NA 0.085, Carl Zeiss Microscopy, Oberkochen, Germany). The fluorescence signal was selected by a bandpass filter (525/50, Semrock, Rochester, New York, USA) and collected on the sensor of a high-speed complementary metal-oxide semiconductor (CMOS) camera (Orca Flash 4.0, Hamamatsu Photonics, NJ, USA). A 3D motorized platform (M-229 for the xy plane, M-126 for the z-axis movement; Physik Instrumente, Karlsruhe, Germany) allowed correct positioning of the subject in the optical setup.
Experimental data and deconvolution
We assumed that the optical signal X_{i}(t) acquired from each pixel i is proportional to the local average excitatory (somatic and dendritic) activity (see Fig. 1). The main issue is the slow (compared to the spiking dynamics) response of the calcium indicator (GCaMP6f). We estimated the shape of this response function from the single-spike response^{7} as a log-normal function:
where dt = 40 ms is the temporal sampling period, and μ = 2.2 and σ = 0.91 have been estimated in^{16}. We assumed a linear response of the calcium indicator (which is known to be not exactly true^{7}) and applied a deconvolution to obtain an estimate of the actual firing rate time course S_{i}(t). The deconvolution is achieved by dividing the signal by the log-normal in Fourier space:
where \({\hat{X}}_{i}(k)={fft}({X}_{i}(t))\) and \(\hat{LN}(k)={fft}(LN(t))\) (fft( ⋅ ) is the fast Fourier transform). Finally, the deconvolved signal is \({S}_{i}(t)={ifft}({\hat{S}}_{i}(k))\) (ifft( ⋅ ) is the inverse fast Fourier transform). The function Θ(k_{0} − k) is the Heaviside function and is used to apply a low-pass filter with a cutoff frequency k_{0} = 6.25 Hz.
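The deconvolution step above can be sketched as follows. The function name is ours, and we assume, for illustration, that the log-normal kernel is evaluated on a time axis in frame units with the μ and σ quoted above:

```python
import numpy as np

def deconvolve_calcium(X, dt=0.04, mu=2.2, sigma=0.91, f_cut=6.25):
    """Estimate the firing-rate time course S(t) from a fluorescence
    trace X(t): divide by the log-normal single-spike response in
    Fourier space and apply an ideal low-pass at f_cut (Hz)."""
    n = len(X)
    # Log-normal response kernel; the frame-unit time axis is an
    # assumption made here for illustration.
    t = np.arange(1, n + 1)
    LN = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) \
        / (t * sigma * np.sqrt(2 * np.pi))
    LN /= LN.sum()  # unit-area kernel, so the rate scale is preserved
    X_hat = np.fft.fft(X)
    LN_hat = np.fft.fft(LN)
    freqs = np.fft.fftfreq(n, d=dt)
    theta = np.abs(freqs) <= f_cut        # Heaviside low-pass mask
    S_hat = np.zeros(n, dtype=complex)
    S_hat[theta] = X_hat[theta] / LN_hat[theta]
    return np.real(np.fft.ifft(S_hat))
```

Since the kernel has unit area, a constant fluorescence trace deconvolves to a constant rate of the same amplitude, which provides a quick sanity check.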
Generative Model
We assumed a system composed of N_{pop} (approximately 1400, the number of pixels) interacting populations of neurons. The population j at time t has a level of activity (firing rate) defined by the variable S_{j}(t) (expressed in ms^{−1}). As anticipated, each population j is associated with a pixel j of the experimental optical acquisition, and contains N_{j} neurons; J_{ij} and C_{ij} are the average synaptic weight and the average degree of connectivity between populations i and j, respectively.
We defined the parameter k_{ij} = N_{j}J_{ij}C_{ij} as the relevant one in the inference procedure. We also considered that each population receives an input from a virtual external population composed of \({N}_{j}^{{\rm{ext}}}\) neurons, each injecting a current \({J}_{j}^{{\rm{ext}}}\) into the population with a frequency \({\nu }_{j}^{{\rm{ext}}}\). Similarly to the above, we defined the parameter \({I}_{j}^{{\rm{ext}}}={\nu }_{j}^{{\rm{ext}}}{N}_{j}^{{\rm{ext}}}{J}_{j}^{{\rm{ext}}}\).
In a common formalism, site i at time t is influenced by the activity of other sites through couplings k_{ik}^{23}, in terms of a local input current defined as
The term \(-\frac{{b}_{i}{W}_{i}(t)}{{C}_{m}}\) accounts for a feature that plays an essential role in modeling the emergence of cortical slow oscillations: state-dependent spike-frequency adaptation, that is, the progressive tendency of neurons to reduce their firing rates even in the presence of constant excitatory input currents. Spike-frequency adaptation is known to be much more pronounced in deep sleep and anesthesia than during wakefulness. From a physiological standpoint, it is associated with the calcium concentration in neurons and can be modeled by introducing a negative (hyperpolarizing) current on population i (the − b_{i} W_{i}(t) term in Eq. (4)).
W_{i}(t) is the (dimensionless) adaptation variable: it increases when a neuron i emits a spike and is gradually restored to its rest value when no spikes are emitted; its dynamics can be written as
where \({\alpha }_{w}=1-\exp (-dt/{\tau }_{w})\), and τ_{w} is the characteristic time scale of spike-frequency adaptation.
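A discretized update consistent with the quoted α_w can be sketched as below. We assume, for illustration, the standard relaxation form τ_w dW/dt = −W + τ_w S; the exact form of the paper's Eq. (5) may differ:

```python
import numpy as np

def update_adaptation(W, S, dt=0.04, tau_w=0.5):
    """One exponential-Euler step for the adaptation variable W_i:
    relax toward tau_w * S_i with alpha_w = 1 - exp(-dt / tau_w).
    The relaxation target tau_w * S is an assumed standard form,
    and tau_w = 0.5 s is an illustrative value."""
    alpha_w = 1.0 - np.exp(-dt / tau_w)
    return (1.0 - alpha_w) * W + alpha_w * tau_w * S
```

With this form, W decays exponentially when S = 0 and has fixed point W* = τ_w S for constant input, matching the qualitative behavior described in the text.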
It is also customary to introduce the adaptation strength b_{i}, which determines how much fatigue influences the dynamics: this factor changes depending on neuromodulation and explains the emergence of specific dynamic regimes associated with different brain states. Specifically, higher values of b (together with higher values of recurrent cortico-cortical connectivity) are used in models to induce Slow Oscillations.
It is possible to write down the dynamics of the activity variables by assuming that the level of activity S_{i}(t + dt) of population i at time t + dt is drawn from a Gaussian distribution:
For simplicity, we assumed c = 2 s^{−1} to be constant in time, and the same for all the populations. Its temporal dynamics might be accounted for by considering a secondorder meanfield theory^{36,71}.
The average activity S_{i}(t + dt) of the population i at time t + dt is computed as a function of its input by using a leaky-integrate-and-fire response function
where \(F[{\mu }_{i}(t),{\sigma }_{i}^{2}(t)]\) is the transfer function that, for an AdEx neuron under stationary conditions, can be analytically expressed as the flux of realizations (i.e., the probability current) crossing the threshold V_{spike} = θ + 5ΔV^{40,72}:
where \(f(v)=-(v(t)-{E}_{l})+\Delta V\exp \left\{(v(t)-\theta )/\Delta V\right\}\), τ_{m} is the membrane time constant, E_{l} is the reversal potential, θ is the threshold, C_{m} is the membrane capacitance, and ΔV is the exponential slope parameter. The parameter values are reported in Table 1. Here we considered a small and constant \({\sigma }_{\star }^{2}=1{0}^{-4}\,{{\rm{mV}}}^{2}/{\rm{s}}\), since we assumed small fluctuations, and fluctuations have a small effect on the population dynamics in any case. We verify this assumption a posteriori. It is actually straightforward to work out the same theory without making any assumption on the size of σ, the only drawback being the increased computational power required to evaluate a 2D function (in Eq. (8)) during the inference (and simulation) procedure.
In a first-order mean-field formulation, the gain function depends on the infinitesimal mean μ and variance σ^{2} of the noisy input signal. μ is proportional to K = NJp (number of neurons, synaptic weight, and connection probability, respectively). However, J and p can be rescaled in order to keep K fixed. On the other hand, σ^{2} is proportional to K^{2}/(Np), thus necessarily depending on N. However, this latter aspect is neglected in our current formulation (e.g. see Eq. (8)), as we assumed a constant small value for the infinitesimal variance, allowing us to choose N arbitrarily, with the proper rescaling. In order to be compliant with neuronal densities from the physiological literature (about 45 K neurons per mm^{2}), we took N = 500 in our model.
The inner loop: likelihood maximization
It is possible to write the log-likelihood of one specific history {S_{i}(t)} of the dynamics of the model, given the parameters {ξ_{k}}, as follows:
The optimal parameters, given a history of the system {S(t)}, can be obtained by maximizing this log-likelihood. When using a gradient-based optimizer (we used iRprop), it is necessary to explicitly compute the derivatives
where \(\frac{\partial {F}_{i}(t)}{\partial {\mu }_{i}}\) are computed numerically, while \(\frac{\partial {\mu }_{i}}{\partial {\xi }_{k}}\) can be easily evaluated analytically. The value of c^{2} is not relevant since it can be absorbed in the learning rate.
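Under the Gaussian transition model, the log-likelihood and its gradient reduce (up to additive constants) to sums of squared residuals between the observed activity and the transfer-function prediction. A minimal sketch, with array shapes and function names of our choosing:

```python
import numpy as np

def log_likelihood(S, F, c=2.0):
    """Gaussian log-likelihood (up to additive constants) of a rate
    history S (time x populations) given model predictions F, where
    F[t] is the transfer-function output computed from the state at
    time t; S[t+1] is compared against F[t]. c sets the Gaussian width."""
    resid = S[1:] - F[:-1]
    return -0.5 * np.sum(resid ** 2) / c ** 2

def grad_log_likelihood(S, F, dF_dxi, c=2.0):
    """Chain-rule gradient with respect to one parameter xi, given the
    precomputed sensitivity dF_dxi = (dF/dmu) * (dmu/dxi). As noted in
    the text, c^2 only rescales the gradient and can be absorbed in the
    learning rate of iRprop."""
    resid = S[1:] - F[:-1]
    return np.sum(resid * dF_dxi[:-1]) / c ** 2
```

At the maximum-likelihood point the residuals vanish on average, and a perfect prediction yields zero log-likelihood and zero gradient.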
The optimization procedure was performed independently on the 12 data chunks of 40 s considered here. For each chunk, parameters were optimized on the first 32 s, while the remaining 8 s were used for validation. In more detail, the likelihood evaluated on this validation dataset with the parameter values inferred on the training portion is referred to as the validation likelihood, and helps to prevent the risk of overfitting.
Excitatory-inhibitory module, the effective transfer function
In this paper, the single population (pixel) was assumed to be composed of two subpopulations of excitatory and inhibitory neurons; the mean of the input currents reads as:
It is not always possible to distinguish the excitatory firing rate S^{e} from the inhibitory one S^{i} in experimental recordings. Indeed, in electrical recordings, the signal is a composition of the two activities. However, in our case we can assume that the recorded activity, after the deconvolution, is a good proxy of the excitatory activity S^{e}. This is possible thanks to the fluorescent indicator considered in this work, which specifically targets excitatory neurons. In this way, it is possible to constrain the excitatory activity and to treat S^{i} as a hidden variable.
The transfer function depends both on the excitatory and the inhibitory activities F_{e}(S^{e}, S^{i}). However, S^{i} is not observable in our dataset, and it is necessary to estimate an effective transfer function for the excitatory population of neurons by making an adiabatic approximation on the inhibitory one^{73}. We defined the effective transfer function as
The inhibitory activity S^{i} is assumed to have a faster time scale, which is, to a first approximation, a biologically plausible assumption when comparing the activity of fast-spiking inhibitory neurons with that of regular-spiking excitatory neurons. As a consequence, S^{i}(S^{e}) can be evaluated as its stationary value for a fixed S^{e}. The likelihood then contains this new effective transfer function as a function of k^{ee} and k^{ei}, which are the weights to be optimized. The dependence on k^{ie}, k^{ii} disappears: they have to be suitably chosen (we set k^{ie} = k^{ii} = − 25 mV in our case) and cannot be optimized with this approach.
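The adiabatic elimination of the inhibitory population can be sketched as a fixed-point iteration over generic transfer functions. The AdEx-based F of Eq. (8) is not reproduced here; the toy linear functions in the usage below are purely illustrative:

```python
def effective_transfer(F_e, F_i, S_e, n_iter=100):
    """Effective excitatory transfer function obtained by relaxing the
    (hidden) inhibitory rate to its stationary value S_i = F_i(S_e, S_i)
    at fixed S_e (the adiabatic limit), then evaluating the excitatory
    gain F_e at that point. F_e, F_i are generic rate transfer functions."""
    S_i = 0.0
    for _ in range(n_iter):
        S_i = F_i(S_e, S_i)  # fixed-point iteration at fixed S_e
    return F_e(S_e, S_i)
```

For a contractive F_i the iteration converges geometrically, so a modest number of iterations suffices inside the likelihood evaluation.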
Elliptic exponentially decaying connectivity kernels
Having a large number of parameters, especially with small datasets, can cause convergence problems and increase the risk of overfitting. However, experimental constraints and priors can be used to reduce the number of parameters to be inferred. In our case, we exploited the information that lateral connectivity decays with distance, in first approximation according to a simple exponential law^{27}, with long-range inter-areal connections suppressed during the expression of slow waves^{28}. Furthermore, we supposed that a deformation of the connection probability from a circular symmetry to a more elliptical shape could facilitate the creation of the preferential directions of propagation observed in the experiments.
Therefore, we included this in the model as an exponential decay on the spatial scale λ. In this case, the average input current to the population i can be written as already presented in Eq. (4), with k_{ik} taking into account the above-discussed assumptions:
where d_{ik} is the distance between populations (i.e. pixels) i and k,
Parameters can be inferred by differentiating such a parametrization of the connectivity kernel. For example, the derivative with respect to the spatial scale λ_{k} to feed Eq. (10) would be:
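The elliptical, exponentially decaying kernel described above can be sketched as follows. The parametrization (names k0, eccentricity, angle) is ours; only the exponential-decay-with-elliptical-distance structure of the paper's kernel is reproduced:

```python
import numpy as np

def elliptic_exp_kernel(dx, dy, k0, lam, eccentricity=1.0, angle=0.0):
    """Exponentially decaying connectivity with an elliptical
    (anisotropic) metric. dx, dy: displacement between pixels i and k;
    lam: spatial decay scale; eccentricity stretches the major axis;
    angle rotates the ellipse. eccentricity = 1 recovers the isotropic
    exponential kernel k0 * exp(-d/lam)."""
    ca, sa = np.cos(angle), np.sin(angle)
    u = ca * dx + sa * dy           # displacement along the major axis
    v = -sa * dx + ca * dy          # displacement along the minor axis
    d = np.sqrt((u / eccentricity) ** 2 + v ** 2)  # elliptical distance
    return k0 * np.exp(-d / lam)
```

Because the kernel is differentiable in λ, the eccentricity and the angle, these shape parameters can enter the gradient-based inner loop alongside the amplitude k0.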
Gaussian mixture model
Having collected the transition times to the Up state in each of the L downsampled channels (44 and 41 informative channels for the two mice, respectively), each wave is described by a vector x in an L-dimensional space. The problem of classifying slow waves into typical spatiotemporal patterns of propagation can then be tackled as a clustering problem in this high-dimensional space. We chose an approach based on a Gaussian Mixture Model (GMM)^{74}.
We initially assumed K typical propagation patterns, each described by an L-dimensional multivariate Gaussian with mean μ_{k} and covariance matrix Σ_{k}. The probability of a wave x_{n} belonging to the kth category is hence given by the related multivariate Gaussian density function:
Ignoring which cluster each wave belongs to, one has to sum over all the possibilities, taking into account the relative weights π of the K Gaussians in the mixture. The total likelihood for the set X of waves hence reads:
in fact depending on the set of Gaussian means and covariances θ ≡ {μ_{k}, Σ_{k}}, p(X) = p(X; θ), as well as on the number K and the relative proportions π of propagating modes.
The scope of this clustering procedure is, in the first place, to infer the features of the typical propagation patterns, i.e. their means μ and covariances Σ. Then, each wave is assigned to one of these modes. To this aim, maximum-likelihood approaches are exploited, aimed at finding the optimal values of the parameters θ that maximize the likelihood \({\mathcal{L}}(\theta )\equiv p(X;\theta )\) from Eq. (17). This can be easily attained through the Python class sklearn.mixture.GaussianMixture^{33}, exploiting the Expectation-Maximization (EM) algorithm for likelihood maximization.
Notice that EM is guaranteed to converge to a local maximum of the likelihood. This implies that, for nonconvex problems, the solution to which the algorithm converges depends on the initial condition. To get rid of this dependence, a possible strategy is to run the maximization by reshuffling the data and finally averaging over the final configurations of clusters.
In fact, the number of typical propagation patterns K is not known a priori, so it has to be inferred as well. To face this problem, we ran the above procedure for different numbers K of clusters and compared the resulting performances, exploiting a Gaussianity test (scipy.stats.normaltest^{75}) as a quantitative criterion for comparison.
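The clustering and model-comparison steps can be sketched with the two libraries named above. The synthetic wave set below is a stand-in for the real data (in the paper L is 44 or 41):

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for the wave set X: vectors of Up-transition times
# in an L-dimensional channel space, drawn from two propagation modes.
L = 10
X = np.vstack([rng.normal(0.0, 1.0, size=(60, L)),
               rng.normal(5.0, 1.0, size=(60, L))])

# Fit GMMs with different numbers of components K.
models = {K: GaussianMixture(n_components=K, random_state=0).fit(X)
          for K in (1, 2, 3)}
for K, gmm in models.items():
    labels = gmm.predict(X)
    # Gaussianity of each cluster (first coordinate shown) as a
    # quantitative criterion for comparing different K, as in the text.
    pvals = [stats.normaltest(X[labels == k, 0]).pvalue
             for k in range(K) if np.sum(labels == k) >= 20]
    print(K, [round(p, 3) for p in pvals])
```

For K matching the true number of modes, each cluster is consistent with a single Gaussian; for too-small K, the merged clusters fail the normality test.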
Neuromodulation
The periodic neuromodulation we included in our network was modeled as a periodic oscillation of the parameters b and I^{ext} described by the following equations:
where \({I}_{i}^{{\rm{ext}},0}\) and \({b}_{i}^{0}\) are the parameters inferred in the inner loop, while A and T are the amplitude and the oscillation period optimized in the outer loop.
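A sinusoidal realization of such a modulation can be sketched as below. The exact functional form of Eq. (18) is not reproduced here; the in-phase sine shape and sign conventions are assumptions for illustration only:

```python
import numpy as np

def neuromodulated_params(t, b0, I0, A, T):
    """Periodic oscillation of the adaptation strength b and the
    external current I_ext around their inner-loop values b0 and I0,
    with amplitude A and period T (the outer-loop parameters).
    The sinusoidal shape is an assumed stand-in for Eq. (18)."""
    phase = np.sin(2.0 * np.pi * t / T)
    return b0 * (1.0 + A * phase), I0 * (1.0 + A * phase)
```

At t = 0 the inner-loop values are recovered, and the modulation sweeps the network periodically between more and less adapted regimes.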
The outer loop: grid search and datasimulation comparison
There can be different ways to define the outer loop. One option would be an iterative algorithm where, at each step, simulation and data are compared and parameters are updated according to some criterion.
Here we used a simpler strategy, implementing a grid search over the parameters to be optimized, in order to find the best match between simulations and data. The parameters we considered are A and T, namely the amplitude and the period of the neuromodulation (Eq. (18)). We ran a 21(T) × 22(A) grid search, with parameter A linearly ranging from 0.0 to 6 and parameter T linearly ranging from 0.37 s to 6.9 s. This range was chosen heuristically. However, it is possible to run a wider and denser grid search at the cost of a higher computational expense.
Thus, we processed the simulation run for each pair of values (A, T) in the grid, applying the same analysis pipeline applied to the experimental data, obtaining in output the distributions of the three previously defined local observables: wave speed, propagation direction and inter-wave interval (see Section Slow Waves and Propagation analysis for more details).
We quantitatively compared each of these distributions with the ones observed in experimental data by applying an Earth Mover’s Distance (EMD) measure. Specifically, given two distributions u and v, EMD can be seen as the minimum amount of work (i.e. distribution weight that needs to be moved, multiplied by the distance it has to be moved) required to transform u into v. More formally, EMD is also known as the Wasserstein distance between the two 1D distributions, namely
where Γ(u, v) is the set of (probability) distributions on \({\mathbb{R}}\times {\mathbb{R}}\) whose marginals are u and v on the first and second factors, respectively. U and V are the respective cumulative distribution functions (CDFs) of u and v. A proof of the equivalence of the two definitions can be found in^{76}.
Specifically, for the purposes of this work, we evaluated the CDFs of each distribution by sampling with a discrete binning. Moreover, in order to make this measure independent of the characteristic bin scale of the three studied observables, we evaluated the EMD for speed, direction and IWI in “bin units”: thanks to this approach, we can combine these measurements without incurring inconsistencies between non-homogeneous measurement units.
Then, to define a quantitative “score” describing the similarity between data and simulation, we chose to combine the three EMDs computed over the three macroscopic local observables in a Euclidean way
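The bin-unit EMD and the Euclidean combination can be sketched with scipy.stats.wasserstein_distance; the bin count n_bins is an illustrative choice:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_in_bin_units(sample_u, sample_v, n_bins=20):
    """EMD between two observable distributions, measured in bin units:
    both samples are histogrammed on a common binning and the
    Wasserstein distance is computed between the bin-index
    distributions, making the result independent of physical units."""
    lo = min(np.min(sample_u), np.min(sample_v))
    hi = max(np.max(sample_u), np.max(sample_v))
    edges = np.linspace(lo, hi, n_bins + 1)
    hu, _ = np.histogram(sample_u, bins=edges)
    hv, _ = np.histogram(sample_v, bins=edges)
    idx = np.arange(n_bins)
    return wasserstein_distance(idx, idx, u_weights=hu, v_weights=hv)

def similarity_score(emd_speed, emd_direction, emd_iwi):
    """Euclidean combination of the three bin-unit EMDs."""
    return np.sqrt(emd_speed ** 2 + emd_direction ** 2 + emd_iwi ** 2)
```

Identical distributions yield zero EMD, so the score vanishes exactly when the simulation reproduces all three experimental observable distributions.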
Finally, we imposed two constraints on the simulation output, in order to reject biologically meaningless values of A and T: 1) most of the down-state duration distribution and 2) most of the wave duration distribution need to be lower than the corresponding minimum values observed in experimental data (after having removed outliers; acceptance criteria: 98th and 95th percentile, respectively). This allowed us to exclude an entire region of non-interesting simulations and identify the optimal, biologically meaningful results.
Slow waves and propagation analysis
To compare the simulation outputs with experimental data, we implemented and applied the same analysis tools to both sets. Specifically, we improved the pipeline described by some of us in^{16,35} to make it more versatile.
First, experimental data were arranged to make them suitable both for the slow wave propagation analysis and for the inference method. Data were loaded and converted into a standardized format; the region of interest was selected by masking channels of low signal intensity; fluctuations were further reduced by applying a spatial downsampling: pixels were assembled in 2 × 2 blocks through the function scipy.signal.decimate^{75}, mitigating aliasing through a Hamming-windowed anti-aliasing filter. Then, data were ready to be analyzed or given as input to the inference method.
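The 2 × 2 spatial downsampling step can be sketched as follows. The stack layout (time, y, x) and function name are ours; scipy.signal.decimate with ftype='fir' applies a Hamming-windowed FIR anti-aliasing filter (firwin's default window) before subsampling:

```python
import numpy as np
from scipy.signal import decimate

def spatial_downsample(frames, q=2):
    """Downsample an image stack of shape (time, y, x) by a factor q
    along each spatial axis, with FIR anti-aliasing filtering applied
    forward and backward (zero_phase=True) to avoid phase distortion."""
    out = decimate(frames, q, ftype='fir', axis=1, zero_phase=True)
    out = decimate(out, q, ftype='fir', axis=2, zero_phase=True)
    return out
```

Note that the default FIR order grows with q, so the spatial extent of the frames must exceed the filter's padding length for the zero-phase filtering to apply.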
Both experimental data and simulation output can then be given as input to these analysis tools. The procedure we followed is articulated into four blocks:

1) Processing: the constant background signal is estimated for each pixel as the mean signal computed on the whole image set; it is then subtracted from the images, pixel by pixel. The resulting signal is then normalized by its maximum.

2) Trigger Detection: transition times from Down to Up states are detected. These are identified from the local minima of the signal. Specifically, the collection of transition times for each channel is obtained as the vertices of parabolic fits of the minima.

3) Wave Detection: the detection method used in^{16} and described in^{77} is applied. It consists of splitting the set of transition times into separate waves according to a Unicity Principle (i.e. each pixel can be involved in each wave only once) and a Globality Principle (i.e. each wave needs to involve at least 75% of the total number of pixels).

4) Wave Characterization: once the wave collection has been identified, the set of waves is characterized by measuring local wave velocities, local wave directions and local SO frequencies.
Specifically, we computed the local wave velocity as
where T(x, y) is the passage-time function of each wave. Since this function is accessible only on the discrete pixel domain, the partial derivatives have been calculated as finite differences
where d is the distance between pixels.
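The finite-difference estimate of the local speed can be sketched as below (the function name and grid layout are ours):

```python
import numpy as np

def local_wave_speed(T, d=0.1, eps=1e-12):
    """Local wave speed v = 1 / |grad T| from the passage-time map
    T(y, x) of a single wave. Partial derivatives are computed as
    finite differences on a pixel grid of spacing d (e.g. mm); eps
    guards against division by zero where the wavefront is flat."""
    Ty, Tx = np.gradient(T, d)                  # dT/dy, dT/dx
    grad_mag = np.sqrt(Tx ** 2 + Ty ** 2)
    return 1.0 / np.maximum(grad_mag, eps)
```

For a planar wave crossing the grid at constant speed, the passage-time map is linear and the estimator recovers the true speed everywhere, including at the edges where one-sided differences are used.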
Concerning local directions, for each wave transition we computed a weighted average of the local wave velocity components through a Gaussian filter w_{(μ, σ)} centered on the pixel involved in the transition, with σ = 2. The local direction associated with the transition is
Finally, the SO local frequency f_{SO} is computed as the inverse of the time lag between the transitions on the same pixel of two consecutive waves. Defining T^{(w)}(x, y) as the transition time of wave w on pixel (x, y), the SO frequency can thus be computed as
Statistics and reproducibility
The model is built upon twelve recordings from two different mice (six for each mouse). The obtained results are remarkably comparable between the two mice, indicating a good reproducibility of our method.
We performed validation of our model and ran long simulations in order to conduct the grid search for the optimal parameters. This supports the accuracy and reliability of our results.
Data availability
Experimental widefield calcium imaging recordings of anesthetized mice are a publicly available dataset on the EBRAINS Knowledge Graph platform at the link https://kg.ebrains.eu/search/instances/Dataset/28e65cf1ce134c1292dc743b0cb66862^{78}.
Code availability
The implementation of the inference method and data analysis is available on GitHub at the link https://github.com/APEgroup/CorticalSW_Inference, together with the source codes to reproduce the figures in this paper.
References
Lin, M. Z. & Schnitzer, M. J. Genetically encoded indicators of neuronal activity. Nat. Neurosci. 19, 1142–1153 (2016).
Sabatini, B. L. & Tian, L. Imaging neurotransmitter and neuromodulator dynamics in vivo with genetically encoded indicators. Neuron 108, 17–32 (2020).
Mohajerani, M. H. et al. Spontaneous cortical activity alternates between motifs defined by regional axonal projections. Nat. Neurosci. 16, 1426 (2013).
Greenberg, A., Abadchi, J. K., Dickson, C. T. & Mohajerani, M. H. New waves: Rhythmic electrical field stimulation systematically alters spontaneous slow dynamics across mouse neocortex. Neuroimage 174, 328–339 (2018).
Akemann, W. et al. Imaging neural circuit dynamics with a voltagesensitive fluorescent protein. J. Neurophysiol. 108, 2323–2337 (2012).
Grienberger, C. & Konnerth, A. Imaging calcium in neurons. Neuron 73, 862–885 (2012).
Chen, T.-W. et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature 499, 295–300 (2013).
Shimaoka, D., Song, C. & Knöpfel, T. Statedependent modulation of slow wave motifs towards awakening. Front. Cell. Neurosci. 11, 108 (2017).
Scott, G. et al. Voltage imaging of waking mouse cortex reveals emergence of critical neuronal dynamics. J. Neurosci. 34, 16611–16620 (2014).
Fagerholm, E. D. et al. Cortical entropy, mutual information and scalefree dynamics in waking mice. Cereb. cortex 26, 3945–3952 (2016).
Wright, P. W. et al. Functional connectivity structure of cortical calcium dynamics in anesthetized and awake mice. PloS one 12, e0185759 (2017).
Vanni, M. P., Chan, A. W., Balbi, M., Silasi, G. & Murphy, T. H. Mesoscale mapping of mouse cortex reveals frequencydependent cycling between distinct macroscale functional modules. J. Neurosci. 37, 7513–7533 (2017).
Xie, Y. et al. Resolution of high-frequency mesoscale intracortical maps using the genetically encoded glutamate sensor iGluSnFR. J. Neurosci. 36, 1261–1272 (2016).
Ren, C. & Komiyama, T. Characterizing cortexwide dynamics with widefield calcium imaging. J. Neurosci. 41, 4160–4168 (2021).
Montagni, E. et al. Widefield imaging of cortical neuronal activity with redshifted functional indicators during motor task execution. J. Phys. D: Appl. Phys. 52, 074001 (2019).
Celotto, M. et al. Analysis and model of cortical slow waves acquired with optical techniques. Methods Protoc. 3, 14 (2020).
Resta, F. et al. Largescale alloptical dissection of motor cortex connectivity shows a segregated organization of mouse forelimb representations. Cell Rep. 41, 111627 (2022).
Brier, L. M. et al. Separability of calcium slow waves and functional connectivity during wake, sleep, and anesthesia. Neurophotonics 6, 035002 (2019).
Van Albada, S., Kerr, C., Chiang, A., Rennie, C. & Robinson, P. Neurophysiological changes with age probed by inverse modeling of EEG spectra. Clin. Neurophysiol. 121, 21–38 (2010).
Jirsa, V. K. et al. The virtual epileptic patient: individualized wholebrain models of epilepsy spread. Neuroimage 145, 377–388 (2017).
Karoly, P. J. et al. Seizure pathways: A modelbased investigation. PLoS comput. Biol. 14, e1006403 (2018).
Aqil, M., Atasoy, S., Kringelbach, M. L. & Hindriks, R. Graph neural fields: A framework for spatiotemporal dynamical models on the human connectome. PLOS Comput. Biol. 17, 1–29 (2021).
Capone, C., Gigante, G. & Del Giudice, P. Spontaneous activity emerging from an inferred network model captures complex spatiotemporal dynamics of spike data. Sci. Rep. 8, 17056 (2018).
Schneidman, E., Berry, M. J., Segev, R. & Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440, 1007–1012 (2006).
Capone, C., Filosa, C., Gigante, G., Ricci-Tersenghi, F. & Del Giudice, P. Inferring synaptic structure in presence of neural interaction time scales. PloS one 10, e0118412 (2015).
Rostami, V., Mana, P. P., Grün, S. & Helias, M. Bistability, nonergodicity, and inhibition in pairwise maximumentropy models. PLoS comput. Biol. 13, e1005762 (2017).
Schnepel, P., Kumar, A., Zohar, M., Aertsen, A. & Boucsein, C. Physiology and Impact of Horizontal Connections in Rat Neocortex. Cereb. Cortex 25, 3818–3835 (2014).
Olcese, U. et al. Spikebased functional connectivity in cerebral cortex and hippocampus: Loss of global connectivity is coupled to preservation of local connectivity during nonrem sleep. J. Neurosci. 36, 7676–7692 (2016).
Capone, C. & Mattia, M. Speed hysteresis and noise shaping of traveling fronts in neural fields: role of local circuitry and nonlocal connectivity. Sci. Rep. 7, 1–10 (2017).
Coombes, S. Waves, bumps, and patterns in neural field theories. Biol. Cybern. 93, 91–108 (2005).
Robinson, P. A., Rennie, C. J. & Wright, J. J. Propagation and stability of waves of electrical activity in the cerebral cortex. Phys. Rev. E 56, 826 (1997).
Goldman, J. S. et al. Bridging single neuron dynamics to global brain states. Front. Syst. Neurosci. 13, 75 (2019).
Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
Pazienti, A., Galluzzi, A., Dasilva, M., SanchezVives, M. V. & Mattia, M. Slow waves form expanding, memoryrich mesostates steered by local excitability in fading anesthesia. Iscience 25, 103918 (2022).
Gutzen, R. et al. Comparing apples to apples – using a modular and adaptable analysis pipeline to compare slow cerebral rhythms across heterogeneous datasets. https://arxiv.org/abs/2211.08527 (2022).
El Boustani, S. & Destexhe, A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput. 21, 46–100 (2009).
Gigante, G., Mattia, M. & Del Giudice, P. Diverse population-bursting modes of adapting spiking neurons. Phys. Rev. Lett. 98, 148101 (2007).
Capone, C., Pastorelli, E., Golosio, B. & Paolucci, P. S. Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model. Sci. Rep. 9, 8990 (2019).
Di Volo, M., Romagnoni, A., Capone, C. & Destexhe, A. Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation. Neural Comput. 31, 653–680 (2019).
Capone, C., Di Volo, M., Romagnoni, A., Mattia, M. & Destexhe, A. State-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons. Phys. Rev. E 100, 062413 (2019).
Igel, C. & Hüsken, M. Empirical evaluation of the improved rprop learning algorithms. Neurocomputing 50, 105–123 (2003).
Roudi, Y., Dunn, B. & Hertz, J. Multineuronal activity and functional connectivity in cell assemblies. Curr. Opin. Neurobiol. 32, 38–44 (2015).
Tyrcha, J., Roudi, Y., Marsili, M. & Hertz, J. The effect of nonstationarity on models inferred from neural data. J. Stat. Mech.: Theory Exp. 2013, P03005 (2013).
Nghiem, T.-A., Telenczuk, B., Marre, O., Destexhe, A. & Ferrari, U. Maximum-entropy models reveal the excitatory and inhibitory correlation structures in cortical neuronal activity. Phys. Rev. E 98, 012402 (2018).
Pillow, J. W. et al. Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999 (2008).
Park, I. M., Meister, M. L., Huk, A. C. & Pillow, J. W. Encoding and decoding in parietal cortex during sensorimotor decisionmaking. Nat. Neurosci. 17, 1395–1403 (2014).
Weber, A. I. & Pillow, J. W. Capturing the dynamical repertoire of single neurons with generalized linear models. Neural Comput. 29, 3260–3289 (2017).
Capone, C. et al. Slow waves in cortical slices: how spontaneous activity is shaped by laminar structure. Cereb. Cortex 29, 319–335 (2019).
Tononi, G., Sporns, O. & Edelman, G. M. Measures of degeneracy and redundancy in biological networks. Proc. Natl Acad. Sci. 96, 3257–3262 (1999).
Melozzi, F. et al. Individual structural features constrain the mouse functional connectome. Proc. Natl Acad. Sci. 116, 26961–26969 (2019).
Marrelec, G., Messé, A., Giron, A. & Rudrauf, D. Functional connectivity’s degenerate view of brain computation. PLoS Comput. Biol. 12, e1005031 (2016).
Barson, D. et al. Simultaneous mesoscopic and two-photon imaging of neuronal activity in cortical circuits. Nat. Methods 17, 107–113 (2020).
Ren, C. & Komiyama, T. Wide-field calcium imaging of cortex-wide activity in awake, head-fixed mice. STAR Protoc. 2, 100973 (2021).
Cardin, J. A. Functional flexibility in cortical circuits. Curr. Opin. Neurobiol. 58, 175–180 (2019).
Makino, H. et al. Transformation of cortex-wide emergent properties during motor learning. Neuron 94, 880–890 (2017).
Pastorelli, E. et al. Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections. Front. Syst. Neurosci. 13, 33 (2019).
Slow Wave Analysis Pipeline (SWAP): Integrating multi-scale data and the output of simulations in a reproducible and adaptable pipeline. https://wiki.ebrains.eu/bin/view/Collabs/slowwaveanalysispipeline Accessed: 2021-04-14. RRID:SCR_022966.
Muller, L., Chavane, F., Reynolds, J. & Sejnowski, T. J. Cortical travelling waves: mechanisms and computational principles. Nat. Rev. Neurosci. 19, 255–268 (2018).
Golosio, B. et al. Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep. PLoS Comput. Biol. 17, e1009045 (2021).
Tort-Colet, N., Capone, C., Sanchez-Vives, M. V. & Mattia, M. Attractor competition enriches cortical dynamics during awakening from anesthesia. Cell Rep. 35, 109270 (2021).
Terzano, M. et al. The cyclic alternating pattern as a physiologic component of normal nrem sleep. Sleep 8, 137–145 (1985).
Hu, S. Akaike information criterion. Center for Research in Scientific Computation 93 (2007).
Muller, L., Reynaud, A., Chavane, F. & Destexhe, A. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave. Nat. Commun. 5, 1–14 (2014).
Burkitt, G. R., Silberstein, R. B., Cadusch, P. J. & Wood, A. W. Steadystate visual evoked potentials and travelling waves. Clin. Neurophysiol. 111, 246–258 (2000).
Muratore, P., Capone, C. & Paolucci, P. S. Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks. PLoS ONE 16, e0247014 (2021).
DePasquale, B., Cueva, C. J., Rajan, K., Escola, G. S. & Abbott, L. full-FORCE: A target-based method for training recurrent networks. PLoS ONE 13, e0191527 (2018).
Capone, C., Muratore, P. & Paolucci, P. S. Error-based or target-based? A unified framework for learning in recurrent spiking networks. PLoS Comput. Biol. 18, e1010221 (2022).
Tuckwell, H. C. Introduction to theoretical neurobiology: volume 2, nonlinear and stochastic theories, vol. 8 (Cambridge University Press, 1988).
Mascaro, M. & Amit, D. J. Effective neural response function for collective population states. Netw.: Comput. Neural Syst. 10, 351–373 (1999).
Reynolds, D. A. Gaussian mixture models. Encyclopedia of Biometrics 741, 202–210 (2009).
Virtanen, P. et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 17, 261–272 (2020).
Ramdas, A., Garcia, N. & Cuturi, M. On Wasserstein two sample testing and related families of nonparametric tests. https://arxiv.org/abs/1509.02237 (2015).
De Bonis, G. et al. Analysis pipeline for extracting features of cortical slow oscillations. Front. Syst. Neurosci. 13, 70 (2019).
Resta, F., Allegra Mascaro, A. L. & Pavone, F. Study of slow waves (SWs) propagation through wide-field calcium imaging of the right cortical hemisphere of GCaMP6f mice (v2) (2021). https://kg.ebrains.eu/search/instances/Dataset/28e65cf1ce134c1292dc743b0cb66862.
Li, P. et al. Measuring sharp waves and oscillatory population activity with the genetically encoded calcium indicator GCaMP6f. Front. Cell. Neurosci. 13, 274 (2019).
Igel, C. & Hüsken, M. Improving the rprop learning algorithm. In Proceedings of the second international ICSC symposium on neural computation (NC 2000), vol. 2000, 115–121 (Citeseer, 2000).
Acknowledgements
This work has been supported by the European Union Horizon 2020 Research and Innovation program under the FET Flagship Human Brain Project (grant agreement SGA3 n. 945539 and grant agreement SGA2 n. 785907) and by the INFN APE Parallel/Distributed Computing laboratory.
Author information
Authors and Affiliations
Contributions
C.C. contributed to conceptualization, methodology, software, investigation, data curation, formal analysis, supervision, validation, visualization; C.D.L. contributed to conceptualization, methodology, software, investigation, data curation, formal analysis, supervision, validation, visualization; G.D.B. contributed to conceptualization, methodology, investigation, data curation; R.G. contributed to methodology, software, computational resources; I.B. contributed to methodology; E.P. contributed to methodology, validation, visualization; F.S. contributed to computational resources; C.L. contributed to methodology, visualization; L.T. contributed to methodology; F.R. contributed to experiments, investigation, data curation; A.L.A.M. contributed to experiments, investigation, data curation; F.P. contributed to experiments, investigation, data curation; M.D. contributed to methodology, computational resources; P.S.P. contributed to conceptualization, methodology, investigation, project administration, supervision, funding acquisition. All authors contributed to writing and revising the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Communications Biology thanks Michael Breakspear and the other, anonymous, reviewer for their contribution to the peer review of this work. Primary Handling Editors: Enzo Tagliazucchi and Joao Valente. Peer reviewer reports are available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Capone, C., De Luca, C., De Bonis, G. et al. Simulations approaching data: cortical slow waves in inferred models of the whole hemisphere of mouse. Commun Biol 6, 266 (2023). https://doi.org/10.1038/s42003-023-04580-0