Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in vitro and in vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity.
Introduction
In recent years neuronal spike train data have been collected at an increasing pace, with the ultimate aim of unraveling how neural circuitry implements computations that underlie behavior. Often these data are acquired from extracellular electrophysiological recordings in vivo without knowledge of neuronal input and connections between neurons. To interpret such data, the recorded spike trains are frequently analyzed by fitting abstract parametric models that describe statistical dependencies in the data. A typical example consists in fitting generalized linear models (GLMs) to characterize the mapping between measured sensory input and neuronal spiking activity^{1,2,3,4,5}. These approaches are very useful for quantifying the structure in the data, and benefit from statistically principled parameter estimation and model selection methods. However, their interpretive power is limited as the underlying models typically do not incorporate prior biophysical constraints.
Mechanistic models of coupled neurons on the other hand involve biophysically interpretable variables and parameters, and have proven essential for analyzing the dynamics of neural circuits. At the top level of complexity for this purpose are detailed models of the Hodgkin–Huxley type^{6,7}. Comparisons between models of this type and recorded spike trains have revealed that multiple combinations of biophysical parameters give rise to the same firing patterns^{8,9}. This observation motivates the use of models at an intermediate level of complexity, and in particular integrate-and-fire (I&F) neurons, which implement in a simplified manner the key biophysical constraints with a reduced number of effective parameters and can be equipped with various mechanisms, such as spike initiation^{10,11,12}, adaptive excitability^{13,14}, or distinct compartments^{15,16} to generate diverse spiking behaviors^{17,18} and model multiple neuron types^{19,20}. I&F models can reproduce and predict neuronal activity with a remarkable degree of accuracy^{11,21,22}, essentially matching the performance of biophysically detailed models with many parameters^{17,23}; thus, they have become state-of-the-art models for describing neural activity in in vivo-like conditions^{11,19,20,24}. In particular, they have been applied in a multitude of studies on local circuits^{25,26,27,28,29,30,31}, network dynamics^{32,33,34,35,36}, learning/computing in networks^{37,38,39,40}, as well as in neuromorphic hardware systems^{37,41,42,43}.
I&F neurons can be fit in straightforward ways to membrane voltage recordings with knowledge of the neuronal input, typically from in vitro preparations^{11,17,19,20,24,44,45}. Having access only to the spike times, however, as in a typical in vivo setting, poses a substantial challenge for the estimation of parameters. Estimation methods that rely on numerical simulations to maximize a likelihood or minimize a cost function^{46,47,48} strongly suffer from the presence of intrinsic variability in this case.
To date, model selection methods based on extracellular recordings are thus much more advanced and principled for statistical/phenomenological models than for mechanistic circuit models. To bridge this methodological gap, here we present analytical tools to efficiently fit I&F circuits to observed spike times from a single trial. By maximizing analytically computed likelihood functions, we infer the statistics of hidden inputs, input perturbations, neuronal adaptation properties and synaptic coupling strengths, and evaluate our approach extensively using synthetic data. Importantly, we validate our inference methods for all of these features using ground truth in vitro and in vivo data from whole-cell^{49,50} and combined juxtacellular–extracellular recordings^{51}. Systematic comparisons with existing model-based and model-free methods on synthetic data and electrophysiological recordings^{49,50,51,52} reveal a number of advantages, in particular for the challenging task of estimating synaptic couplings from highly subsampled networks.
Results
Maximum likelihood estimation for I&F neurons
Maximum likelihood estimation is a principled method for fitting statistical models to observations. Given observed data D and a model that depends on a vector of parameters θ, the estimated value of the parameter vector is determined by maximizing the likelihood that the observations are generated by the model, p(D|θ), with respect to θ. This method features several attractive properties, among them: (1) the distribution of maximum likelihood estimates is asymptotically Gaussian with mean given by the true value of θ; (2) the variances of the parameter estimates achieve a theoretical lower bound, the Cramér–Rao bound, as the sample size increases^{53}.
Let us first focus on single neurons. The data we have are neuronal spike times, which we collect in the ordered set D := {t_{1},…,t_{K}}. We consider neuron models of the I&F type, which describe the membrane voltage dynamics by a differential equation together with a reset condition that simplifies the complex, but rather stereotyped, dynamics of action potentials (spikes). Here, we focus on the classical leaky I&F model^{54} but also consider a refined variant that includes a nonlinear description of the spike-generating sodium current at spike initiation and is known as the exponential I&F model^{10}. An extended I&F model that accounts for neuronal spike rate adaptation^{14,17,55} is included in Results section “Inference of neuronal adaptation”. Each model neuron receives fluctuating inputs described by a Gaussian white noise process with (time-varying) mean μ(t) and standard deviation σ (for details on the models see Methods section “I&F neuron models”).
We are interested in the likelihood p(D|θ) of observing the spike train D from the model with parameter vector θ. As spike emission in I&F models is a renewal process (except in presence of adaptation, see below) this likelihood can be factorized as
$$p(D|{\boldsymbol{\theta }}) = \mathop {\prod}\limits_{k = 1}^{K - 1} p (t_{k + 1}|t_k,\mu [t_k,t_{k + 1}],{\boldsymbol{\theta }}),$$(1)
where μ[t_{k}, t_{k+1}] := {μ(t) | t ∈ [t_{k}, t_{k+1}]} denotes the mean input time series across the time interval [t_{k}, t_{k+1}]. In words, each factor in Eq. (1) is the probability density value of a spike time conditioned on knowledge about the previous spike time, the parameters contained in θ and the mean input time series across the interspike interval (ISI). Below we refer to these factors as conditioned spike time likelihoods. We assume that μ[t_{k}, t_{k+1}] can be determined using available knowledge, which includes the parameters in θ as well as the observed spike times. Note that we indicate the parameter μ separately from θ due to its exceptional property of variation over time.
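As a concrete illustration, the factorization in Eq. (1) turns the log-likelihood into a sum of log conditioned spike time densities over ISIs. The sketch below uses a toy exponential ISI density as a stand-in for the Fokker–Planck-based densities of the actual methods; all function names are our own, not the paper's:

```python
import numpy as np

def spike_train_log_likelihood(spike_times, isi_log_density, theta):
    """Renewal-process log-likelihood: sum over inter-spike intervals of the
    log conditioned spike time density (cf. Eq. (1))."""
    isis = np.diff(spike_times)
    return sum(isi_log_density(isi, theta) for isi in isis)

def exp_isi_log_density(isi, rate):
    # Toy stand-in: exponential ISI distribution with a single rate parameter
    return np.log(rate) - rate * isi

spikes = np.array([0.0, 0.1, 0.35, 0.5])  # spike times in seconds
ll = spike_train_log_likelihood(spikes, exp_isi_log_density, 10.0)
```

Any conditioned ISI density can be dropped in for `exp_isi_log_density`; the renewal structure of the model is what makes this additive decomposition valid.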
For robust and rapid parameter estimation using established optimization techniques we need to compute p(Dθ) as accurately and efficiently as possible. Typical simulationbased techniques are not well suited because they can only achieve a noisy approximation of the likelihood that depends on the realization of the input fluctuations and is difficult to maximize. This poses a methodological challenge which can be overcome using analytical tools that have been developed for I&F neurons in the context of the forward problem of calculating model output for given parameters^{56,57,58,59,60}. These tools led us to the following methods that we explored for the inverse problem of parameter estimation:

Method 1 calculates the factors of the likelihood (Eq. (1), right hand side) by solving a Fokker–Planck PDE using suitable numerical solution schemes (for details see Methods section “Method 1: conditioned spike time likelihood”). In model scenarios where the mean input is expected to vary only slightly within [t_{k}, t_{k+1}] and contains perturbations with weak effects, for example, synaptic inputs that cause relatively small postsynaptic potentials, we can write \(\mu (t) = \mu _0^k + J\mu _1(t)\) with small J, where \(\mu _0^k\) may vary between ISIs. This allows us to employ the first order approximation
$$p(t_{k + 1}|t_k,\mu [t_k,t_{k + 1}],{\boldsymbol{\theta }}) \approx p_0(t_{k + 1}|t_k,{\boldsymbol{\theta }}) + J\,p_1(t_{k + 1}|t_k,\mu _1[t_k,t_{k + 1}],{\boldsymbol{\theta }}),$$(2)where θ contains parameters that remain constant within ISIs, including \(\mu _0^k\); J is indicated separately for improved clarity. p_{0} denotes the conditioned spike time likelihood in absence of input perturbations and p_{1} the first order correction. We indicate the use of this approximation explicitly by method 1a.

Method 2 involves an approximation of the spike train by an inhomogeneous Poisson point process. The spike rate r(t) of that process is effectively described by a simple differential equation derived from the I&F model and depends on the mean input up to time t as well as the other parameters in θ (for details see Methods section “Method 2: derived spike rate model”). In this case the factors in Eq. (1), right, are expressed as
$$ p(t_{k + 1}|t_k,\mu [t_k,t_{k + 1}],{\boldsymbol{\theta }}) \approx \\ r(t_{k + 1}|\mu [t_1,t_{k + 1}],{\boldsymbol{\theta }})\,{\mathrm{exp}}\left( { - \int_{t_k}^{t_{k + 1}} r (\tau |\mu [t_1,\tau ],{\boldsymbol{\theta }})d\tau } \right).$$(3)
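Numerically, the right-hand side of Eq. (3) only requires the rate along the ISI and one quadrature. A minimal sketch under an assumed rate function (the actual method derives r(t) from the I&F model; here we only verify the Poisson ISI density formula itself):

```python
import numpy as np

def poisson_isi_density(t_prev, t_next, rate_fn, n_grid=1001):
    """Eq. (3): density of the next spike at t_next given the previous spike
    at t_prev, for an inhomogeneous Poisson process with rate rate_fn(t)."""
    ts = np.linspace(t_prev, t_next, n_grid)
    r = rate_fn(ts)
    # trapezoidal rule for the integral of r over the inter-spike interval
    integral = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(ts))
    return rate_fn(np.array([t_next]))[0] * np.exp(-integral)

# Sanity check: for a constant rate r0 the density reduces to r0*exp(-r0*Δt)
r0 = 5.0
dens = poisson_isi_density(0.0, 0.2, lambda t: np.full(t.shape, r0))
```

For a time-varying rate the same function applies unchanged; only `rate_fn` needs to encode the dependence on the mean input history.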
For each of these methods the likelihood (Eq. (1)) is then maximized using wellestablished algorithms (see Methods section “Likelihood maximization”). Notably, for the leaky I&F model this likelihood is mathematically guaranteed to be free from local maxima^{61} so that any optimization algorithm will converge to its global maximum, which ensures reliable and tractable fitting.
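To make the fitting step concrete, here is a toy maximization of a spike train likelihood over a single parameter by dense grid search; the methods above use standard optimization algorithms instead, and the exponential ISI model is only a placeholder for the I&F-based likelihoods:

```python
import numpy as np

rng = np.random.default_rng(2)
isis = rng.exponential(1 / 8.0, 400)  # synthetic ISIs with true rate 8 Hz

def neg_log_likelihood(rate):
    # Negative log-likelihood of the toy exponential ISI model
    return -(isis.size * np.log(rate) - rate * isis.sum())

grid = np.linspace(1.0, 20.0, 1901)  # candidate rates, spacing 0.01 Hz
rate_hat = grid[np.argmin([neg_log_likelihood(g) for g in grid])]
```

Because this toy likelihood is unimodal in the rate, the grid minimum coincides (up to grid spacing) with the closed-form maximum likelihood estimate, the number of ISIs divided by their sum; the absence of local maxima is exactly the property cited above for the leaky I&F likelihood.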
The most accurate and advantageous method depends on the specific setting, as illustrated for different scenarios in the following sections. Several scenarios further allow us to benchmark our methods against a related approach from Pillow and colleagues, which also uses the Fokker–Planck equation^{61,62}. We compared the different methods in terms of estimation accuracy and computation time. For information on implementation and computational demands see Methods section “Implementation and computational complexity”.
Inference of background inputs
We first consider the spontaneous activity of an isolated recorded neuron. This situation is modeled with an I&F neuron receiving a stationary noisy background input with constant mean μ and standard deviation σ. For this scenario method 1 is more accurate than method 2 while similarly efficient, and therefore best suited. The parameters of interest for estimation are limited to μ and σ, together with the membrane time constant τ_{m} (for details see Methods section “I&F neuron models”).
An example of the simulated ground truth data, which consists of the membrane voltage time series including spike times, is shown in Fig. 1a together with ISI and membrane voltage histograms. Note that for estimation we only use the spike times. By maximizing the likelihood the true parameter values are well recovered (Fig. 1b) and we obtain an accurate estimate of the ISI density. In addition, we also obtain an accurate estimate for the unobserved membrane voltage density, which can be calculated using a slight modification of method 1 in a straightforward way once the parameter values are determined. Interestingly, the fits are accurate regardless of whether the membrane time constant τ_{m} is set to the true value (Fig. 1a, b), estimated or set to a wrong value within a biologically plausible range (Fig. 1c and Supplementary Fig. 1a).
We next evaluated the estimation accuracy for different numbers of observed spikes (Fig. 1d). As few as 50 spikes already lead to a good solution with a maximum average relative error of about 10%. Naturally, the estimation accuracy increases with the number of observed spikes. Moreover, the variance of the parameter estimates decreases as the number of spikes increases (see insets in Fig. 1d).
To further quantify how well the different parameters can be estimated from a spike train of a given length, we analytically computed the Cramér–Rao bound (see Methods section “Calculation of the Cramér–Rao bound”). This bound limits the variance of any unbiased estimator from below and is approached by the variance of a maximum likelihood estimator. For τ_{m} fixed and a reasonable range of values for μ and σ we consistently find that the variance of the estimates of both input parameters decreases with decreasing ISI variability (Fig. 1e) and the variance for μ is smaller than that for σ (Fig. 1d, e). If τ_{m} is included in the estimation, the variance of its estimates is by far the largest and that for σ the smallest (in relation to the range of biologically plausible values, Supplementary Fig. 1b, c). Together with the results from Fig. 1c and Supplementary Fig. 1a this motivates keeping τ_{m} fixed.
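The flavor of this bound can be reproduced with a toy renewal model, assuming exponential ISIs with rate λ: the Fisher information per ISI is 1/λ², so the Cramér–Rao bound for K ISIs is λ²/K, and the variance of the maximum likelihood estimate approaches it (this toy model stands in for the I&F likelihood used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, K, n_rep = 10.0, 500, 2000
# MLE of the rate from K exponential ISIs, repeated n_rep times
mles = np.array([K / rng.exponential(1 / lam, K).sum() for _ in range(n_rep)])
crb = lam**2 / K       # Cramér–Rao bound on the estimator variance
emp_var = mles.var()   # empirical variance of the MLE, close to crb
```

The empirical variance slightly exceeds the bound at finite K (the MLE is mildly biased here) and converges to it as K grows, mirroring the asymptotic efficiency property stated above.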
Comparisons of our method with the approach from Pillow et al.^{62} show clear improvements in estimation accuracy (Supplementary Fig. 1c) and computation time (Supplementary Fig. 1d: reduction by two orders of magnitude). Finally, we tested our method on the exponential I&F model, which involves an additional nonlinearity and includes a refractory period. For this model we also obtain accurate estimates (Supplementary Fig. 1e).
We validated our inference method using somatic whole-cell recordings of cortical pyramidal cells (PYRs)^{49} and fast-spiking interneurons (INTs) exposed to injected fluctuating currents. A range of stimulus statistics, in terms of different values for the mean μ_{I} and standard deviation σ_{I} of these noise currents, was applied and each cell responded to multiple different stimuli (examples are shown in Fig. 2a; for details see Methods section “In vitro ground truth data on neuronal input statistics”). We estimated the input parameters μ and σ of an I&F neuron from the observed spike train for each stimulus by maximizing the spike train likelihood.
Model fitting yielded an accurate reproduction of the ISI distributions (Fig. 2a). Importantly, the estimated input statistics captured well the true stimulus statistics (Fig. 2b, c). In particular, estimated and true mean input as well as estimated and true input standard deviations were strongly correlated for all cells (Fig. 2b, c). The correlation coefficients between estimated and ground truth values for INTs are larger than those for PYRs, as reflected by the concave shape of the estimated μ values as a function of μ_{I}. This shape indicates a saturation mechanism that is not included in the I&F model. Indeed, it can in part be explained by the intrinsic adaptation property of PYRs (see Results section “Inference of neuronal adaptation”). Furthermore, correlation coefficients are slightly increased for longer stimuli (15 s compared to 5 s duration) due to improved estimation accuracy for longer spike trains (Supplementary Fig. 2). Comparison with a Poisson point process showed that the I&F model is the preferred one across all cells and stimuli according to the Akaike information criterion (AIC), which takes into account both goodness of fit and complexity of a model (Fig. 2d).
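For reference, the AIC used for this comparison is AIC = 2k − 2 ln L̂, with k the number of parameters and L̂ the maximized likelihood; the model with the lower AIC is preferred. A minimal sketch with hypothetical log-likelihood values (not those of the recordings):

```python
def aic(n_params, max_log_likelihood):
    """Akaike information criterion: 2k - 2*ln(L_hat); lower is better."""
    return 2 * n_params - 2 * max_log_likelihood

# Hypothetical maximized log-likelihoods for one recorded cell
aic_if = aic(3, -1200.0)       # I&F model: mu, sigma, tau_m
aic_poisson = aic(1, -1350.0)  # homogeneous Poisson process: one rate
preferred = "I&F" if aic_if < aic_poisson else "Poisson"
```

The parameter-count penalty means the I&F model is preferred only when its likelihood gain outweighs its extra parameters, which is the comparison reported in Fig. 2d.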
Noise injections in vitro mimic in a simplified way the background inputs that lead to spontaneous neural activity in vivo, and certain dynamical aspects may not be well captured. Therefore, we additionally considered spike-train data obtained from extracellular multichannel recordings in primary auditory cortex of awake ferrets during silence^{52} (for details see Methods section “Estimating neuronal input statistics from in vivo data”). Examples of baseline data in terms of ISI histograms together with estimation results are shown in Fig. 2e, and estimated parameter values of the background input for all cells are visualized in Fig. 2f. The I&F model with fluctuating background input captures well a range of ISI histograms that characterize the baseline spiking activity of cortical neurons. Also here the I&F model appears to be the preferred one compared to a Poisson process for almost all cells according to the AIC (Fig. 2g).
Inference of input perturbations
We next focus on the effects of synaptic or stimulus-driven inputs at known times. Specifically, we consider μ(t) = μ_{0} + Jμ_{1}(t), where μ_{0} denotes the background mean input and μ_{1}(t) reflects the superposition of input pulses triggered at known times. For this scenario we apply and compare methods 1a and 2.
For an evaluation on synthetic data we model μ_{1}(t) by the superposition of alpha functions with time constant τ, triggered at known event times with irregular inter-event intervals. We estimate the perturbation strength J as well as τ, which determines the temporal extent over which the perturbation acts. Estimation accuracy for a range of perturbation strengths is shown in Fig. 3a, b. Note that the input perturbations are relatively weak, producing mild deflections of the hidden membrane voltage which are difficult to recognize visually in the membrane voltage time series in the presence of noisy background input (Fig. 3a). Both methods perform comparably well for weak input perturbations. As J increases the estimation accuracy of method 2 increases, whereas that of method 1a decreases (Fig. 3b) because it is based on a weak coupling approximation.
We further assessed the sensitivity of our estimation methods for the detection of weak input perturbations, and considered a model-free method based on cross-correlograms (CCGs) between spike trains and perturbation times (Fig. 3c, d) for comparison. Briefly, detection sensitivity was measured by the fraction of significant estimates of J and CCG extrema for positive lags, respectively, compared to the estimated values and CCGs without perturbations from large numbers of realizations (for details see Methods section “Modeling input perturbations”). The model-free approach estimates the probability that the input and the spike train are coupled, but does not provide additional information on the shape of that coupling. Both model-based estimation methods are more sensitive in detecting weak perturbations than the model-free approach, with method 1a performing best, as expected (Fig. 3d).
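The model-free baseline can be sketched as follows: histogram the lags between perturbation (event) times and nearby spikes, then look for an extremum at positive lags. This is a generic CCG implementation for illustration, not the exact analysis pipeline of the paper:

```python
import numpy as np

def ccg(event_times, spike_times, max_lag=0.05, n_bins=20):
    """Cross-correlogram: counts of spike-minus-event lags within ±max_lag."""
    edges = np.linspace(-max_lag, max_lag, n_bins + 1)
    diffs = spike_times[None, :] - event_times[:, None]  # all pairwise lags
    diffs = diffs[np.abs(diffs) <= max_lag]
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

# Toy data: three perturbation times, five spikes (times in seconds)
events = np.array([0.1, 0.5, 0.9])
spikes = np.array([0.102, 0.48, 0.51, 0.905, 2.0])
counts, edges = ccg(events, spikes)
```

A coupling then shows up as an excess of counts in the positive-lag bins relative to chance; unlike the model-based estimate of J, the CCG by itself does not separate coupling strength from coupling shape.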
For additional benchmarks we considered the approach from ref. ^{62} (Supplementary Fig. 3). We compared estimation accuracy for the parameters of perturbations (Supplementary Fig. 3a), background input and the membrane time constant (Supplementary Fig. 3b), detection sensitivity (Supplementary Fig. 3c) as well as computation time (Supplementary Fig. 3d). Both of our methods are clearly more accurate in terms of parameter estimation (Supplementary Fig. 3a, b) and thus, reconstruction (Supplementary Fig. 3e). As a result, detection sensitivity is improved (Supplementary Fig. 3c), while computation time is strongly reduced (Supplementary Fig. 3d).
Using our model we then examined how the statistics of the background input affect the ability to detect weak input perturbations. For this purpose we analytically computed the change in expected ISI caused by a weak, brief input pulse (Supplementary Fig. 4). That change depends on the time within the ISI at which the perturbation occurs (Supplementary Fig. 4a). By comparing the maximal relative change of expected ISI across parameter values for the background input we found that a reduction of spiking variability within biologically plausible limits (ISI CV ≳ 0.3) typically increases detectability according to this measure (Supplementary Fig. 4b).
We validated our inference approach using whole-cell recordings of cortical pyramidal cells that received injections of a fluctuating current and an additional artificial excitatory postsynaptic current (aEPSC)^{50}. The fluctuating current was calibrated for each cell to produce ~5 spikes/s and membrane voltage fluctuations of maximal ~10 mV amplitude. aEPSC bumps with 1 ms rise time and 10 ms decay time were triggered at simulated presynaptic spike times with rate 5 Hz, and their strength varied in different realizations (for details see Methods section “In vitro ground truth data on input perturbations”).
We applied methods 1a and 2 to detect these artificial synaptic inputs based on the presynaptic and postsynaptic spike times only, and compared the detection performance with that of a CCG-based method. Specifically, input perturbations were described by delayed pulses for method 1a (which yields improved computational efficiency) and alpha functions for method 2 (cf. evaluation above). We quantified how much data is necessary to detect synaptic input of a given strength. For this purpose we computed the log-likelihood ratio between the I&F model with (J ≠ 0) and without (J = 0) coupling in a cross-validated setting, similarly to the approach that has been applied to this dataset previously using a Poisson point process GLM^{50}. For the model-free CCG method we compared the CCG peak for positive lags to the peaks from surrogate data generated by distorting the presynaptic spike times with a temporal jitter in a large number of realizations. Detection time is then defined respectively as the length of data at which the model with coupling provides a better fit than the one without according to the likelihood on test data^{50} (Fig. 4a), and at which the CCG peak crosses the significance threshold (Fig. 4b; for details see Methods section “In vitro ground truth data on input perturbations”).
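The jitter-based surrogate test can be sketched as follows: jitter the presynaptic spike times, recompute the positive-lag coincidence count in each surrogate, and compare the observed count to a surrogate quantile. All names, window sizes and parameter values here are illustrative, not those of the dataset:

```python
import numpy as np

def coincidences(pre, post, window):
    """Number of (pre, post) spike pairs with post following pre within window."""
    diffs = post[None, :] - pre[:, None]
    return int(np.sum((diffs > 0) & (diffs <= window)))

def jitter_threshold(pre, post, window, jitter, n_surr, rng):
    """95th percentile of the coincidence count over jittered surrogates."""
    peaks = [coincidences(pre + rng.uniform(-jitter, jitter, pre.size),
                          post, window) for _ in range(n_surr)]
    return float(np.quantile(peaks, 0.95))

rng = np.random.default_rng(3)
pre = np.arange(100.0)   # presynaptic spikes, 1 s apart
post = pre + 0.003       # strongly coupled toy case: fixed 3 ms latency
obs = coincidences(pre, post, window=0.005)
thr = jitter_threshold(pre, post, window=0.005, jitter=0.02, n_surr=200, rng=rng)
detected = obs > thr
```

Jittering destroys millisecond-scale coupling while preserving slow comodulation, so a count above the surrogate quantile indicates a fast, putatively synaptic interaction.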
All three methods were able to successfully detect most aEPSC perturbations, and method 2 required the least data for detection (Fig. 4c; overall reduction in detection time, also compared to the GLM used previously: Fig. 2F, G in ref. ^{50} is directly comparable to Fig. 4a, c). We further considered an alternative detection criterion for the I&F model, based on surrogate data as used for the CCG method. This approach thus enables a more direct comparison. Comparing the detection performance for fixed recording duration across all perturbation strengths and available cells shows that method 1a is more sensitive than the CCG method (success rate 0.86 vs. 0.78, Fig. 4d). We conclude that both methods, 1a with delayed pulses and 2 with an alpha function kernel, are well suited to detect weak aEPSCs in this in vitro dataset, and their sensitivity is increased compared to a model-free CCG method.
Inference of synaptic coupling
In the previous section we showed that we can successfully estimate the perturbations in the spiking of an individual neuron that may be elicited by inputs from another neuron. We now turn to estimating synaptic couplings in a network. We consider the situation in which we have observed spike trains of N neurons. We fit these data using a network model in which each neuron i receives independent fluctuating background input with neuronspecific mean μ_{i}(t) and variance \(\sigma _i^2\), and neurons are coupled through delayed current pulses which cause postsynaptic potentials of size J_{i,j} with time delay d_{i,j}, for i, j ∈ {1, …, N}. The mean background input may vary over time to reflect large amplitude variations in the external drive (and thus, spiking activity) that are slow compared to the fast input fluctuations captured by the Gaussian white noise process. Our aim is therefore to estimate the coupling strengths in addition to the statistics of background inputs caused by unobserved neurons.
We collect the observed spike times of all N neurons in the set D and separate the matrix of coupling strengths J from all other parameters in θ for improved clarity. Since μ_{i}(t) is assumed to vary slowly we approximate it by one value across each ISI. The overall mean input for neuron i across the kth ISI can therefore be expressed as \(\mu _i^k + \mathop {\sum}\nolimits_{j = 1}^N {J_{i,j}}\; \mu _j^1(t)\), where \(J_{i,j}\;\mu _j^1(t)\) describes the synaptic input for neuron i elicited by neuron j taking the delay d_{i,j} into account. The likelihood p(D|θ, J) can be factorized into conditioned spike time likelihoods, where each factor is determined by the parameters in θ together with a specific subset of all coupling strengths and knowledge of the spike times that we have observed. Assuming reasonably weak coupling strengths, each of these factors can be approximated by the sum of the conditioned spike time likelihood in absence of input perturbations and a first order correction due to synaptic coupling (cf. Eq. (2)) to obtain
$$p(D|{\boldsymbol{\theta }},{\mathbf{J}}) \approx \mathop {\prod}\limits_{i = 1}^N \mathop {\prod}\limits_{k = 1}^{K_i - 1} \left[ {p_0(t_i^{k + 1}|t_i^k,{\boldsymbol{\theta }}) + \mathop {\sum}\limits_{j = 1,\,j \ne i}^N J_{i,j}\,p_1(t_i^{k + 1}|t_i^k,\mu _j^1[t_i^k,t_i^{k + 1}],{\boldsymbol{\theta }})} \right],$$(4)
where \(t_i^k\) denotes the kth of K_{i} observed spike times and θ contains all parameters, including \(\mu _i^k\) (except for J). Note that the mean input perturbations \(\mu _j^1\) depend on the spike times of neuron j taking the delay d_{i,j} into account. The approximation (4) implies that an individual synaptic connection on one neuron has a negligible effect for the estimation of a connection on another neuron, which is justified by the assumption of weak coupling. This allows for the application of method 1a, by which the likelihood can be calculated in an efficient way (for details see Methods section “Network model and inference details”).
We first evaluated our method on synthetic data for small (N = 10) as well as larger (N = 50) fully observed networks of neurons with constant background input statistics. The number of parameters inferred per network, which include the mean and standard deviation of background inputs as well as coupling strengths and delays (excluding self-couplings), amounts to 2N^{2}. The estimated parameter values show a remarkable degree of accuracy (Fig. 5a, b left and Supplementary Fig. 5a).
In a more realistic scenario, the N recorded neurons belong to a larger network that is subsampled through the measurement process. The unobserved neurons therefore contribute additional, hidden inputs. In the fitted model, the effect of these unobserved neurons on neuron i is absorbed in the estimated parameters (μ_{i}, σ_{i}) of the background noise. Specifically, the total external input to neuron i, originating from a large number M_{i} of unobserved neurons whose spike trains are represented by independent Poisson processes with rates r_{j}, can be approximated for reasonably small coupling strengths with a background noise of mean \(\mu _i = \mathop {\sum}\nolimits_{j = N + 1}^{N + M_i} {J_{i,j}} r_j\) and variance \(\sigma _i^2 = \mathop {\sum}\nolimits_{j = N + 1}^{N + M_i} {J_{i,j}^2} r_j\). This is the classical diffusion approximation^{63}. Because of shared connections from unobserved neurons, the inputs received by the different observed neurons are in general correlated, with correlation strengths that depend on the degree of overlap between the unobserved presynaptic populations. Although the model we fit to the observed data assumes uncorrelated background inputs, the effects of input correlations on the neuronal ISI distributions are approximated in the estimation of μ_{i} and σ_{i}.
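The diffusion approximation in this paragraph is directly computable: given (hypothetical) coupling strengths J_{i,j} and rates r_j of the unobserved presynaptic pool, the absorbed background statistics are μ_i = Σ J_{i,j} r_j and σ_i² = Σ J_{i,j}² r_j. A minimal sketch with made-up values:

```python
import numpy as np

# Hypothetical unobserved presynaptic pool for one observed neuron:
# coupling strengths (mV) and Poisson rates (spikes/s)
J = np.array([0.2, -0.1, 0.05])
r = np.array([5.0, 10.0, 20.0])

mu = np.sum(J * r)                 # mean of the effective background input
sigma = np.sqrt(np.sum(J**2 * r))  # its standard deviation
```

Note the asymmetry: excitatory and inhibitory weights can cancel in the mean while every connection, regardless of sign, adds to the variance, which is why a balanced hidden pool still produces substantial input fluctuations.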
To assess the influence of correlations due to unobserved common inputs, we first fitted our model to data generated from a network with correlated background inputs. The estimation accuracy of synaptic strengths is still good in case of weak correlations of the external input fluctuations (correlation coefficient c = 0.1 for each pair of observed neurons; Fig. 5b right). Empirical values for such noise correlations, measured in experimental studies by the correlation coefficient of pairwise spike counts on a short timescale, are typically very small^{36,64,65}.
We next tested our method on data generated by explicitly subsampling networks of randomly connected neurons, and compared the results with those from two classical approaches: a model-free CCG method (cf. Results section “Inference of input perturbations”) and a method based on a Poisson point process GLM that is well constrained for the synthetic data^{66} (for details see Methods section “Network model and inference details”). We chose a GLM that is tailored to capture the spiking dynamics of the (ground truth) I&F network model with a minimal number of parameters.
First, we considered classical networks of 800 excitatory and 200 inhibitory neurons^{33} with connection probabilities of 0.1 for excitatory connections, 0.4 for inhibitory connections, and a plausible range of coupling strengths. Inference results from partial observations (N = 20) show that the I&F method outperforms the other two approaches on accuracy of both estimated coupling strengths and detection of connections (Fig. 5c–g, for different recording lengths see Supplementary Fig. 5b).
To gain more insight into the challenges caused by shared input from unobserved neurons and how they affect the different methods, we then varied connection probability and delay. We considered networks with equal numbers of excitatory and inhibitory neurons to ensure equal conditions for the estimation of the two connection types (Fig. 5h–k and Supplementary Fig. 5c–f). For relatively sparse, subsampled networks (connection probability 0.1, N = 20 observed neurons out of 1000) all three methods perform well, and the I&F method shows only a slight improvement in terms of correlation between true and estimated coupling strengths and detection accuracy (Fig. 5h, for detailed inference results see Supplementary Fig. 5c). The inference task becomes increasingly difficult as connectivity increases (connection probability 0.3, see Fig. 5i and Supplementary Fig. 5d). In this case the correlations between spike trains of uncoupled pairs of neurons are clearly stronger on average, particularly at small time lags, which renders CCGs unsuitable to distinguish the effects of numerous synaptic connections from uncoupled pairs. As a consequence, the number of false positives and misclassified inhibitory connections increases. Hence, the accuracy of the CCG method is substantially reduced. This also causes strongly impaired accuracy of the GLM method, especially with respect to detection of synapses, whereas our I&F method appears to be much less affected. Increased synaptic delays in such networks lead to improved inference results for all three methods (Fig. 5j and Supplementary Fig. 5e). Intuitively, this shifts the effects of connections in the CCG to larger lag values and thus away from the peak at zero lag caused by correlated inputs (Fig. 5j left). Nevertheless, also in this case the I&F method remains the most accurate one. 
We further considered small spike train distortions (using a temporal jitter) to mimic strong measurement noise, which naturally caused an overall reduction in accuracy but most strongly affected the CCG method (Fig. 5k and Supplementary Fig. 5f).
Finally, we would like to note that the connectivity is not the only determinant of noise correlations in random networks; an increase of coupling strengths also caused increased spike train correlations for uncoupled pairs (Supplementary Fig. 6). For rather sparse networks (connection probability 0.1) the benefit of stronger couplings outweighs that disadvantage for inference (Supplementary Fig. 6a). However, for slightly increased connectivity (connection probability 0.2) the noise correlations are strongly amplified, leading to an increase in mainly false positives and misclassified inhibitory connections (Supplementary Fig. 6b, in comparison to Fig. 5i and Supplementary Fig. 5d).
In sum, our approach yields accurate inference results for subsampled networks as long as the correlations between the hidden inputs, due to shared connections from unobserved neurons, are not too large. In particular, it outperforms classical CCG-based and GLM-based methods.
We validated our inference of synaptic coupling using simultaneous extracellular recordings and juxtacellular stimulations of hippocampal neuronal ensembles in awake mice^{51}. Following the approach developed in ref. ^{51}, we estimated connectivity by applying the I&F method to spontaneous, extracellularly recorded spiking activity, and assessed the accuracy of our estimates by comparison with ground truth data. Ground truth connectivity was obtained by evoking spikes in individual pyramidal cells (PYRs) juxtacellularly using short current pulses, while recording extracellular spike trains of local interneurons (INTs) (for an example see Fig. 6a). Ground truth values for the presence and absence of synaptic connections were derived from spike train CCGs using the evoked presynaptic spikes, taking into account comodulation caused by common network drive (for details see Methods section “In vivo ground truth data on synaptic connections”).
An important aspect of these data is that spontaneous activity appeared to be highly nonstationary, so that the spike trains of the recorded neurons were typically comodulated. To infer synaptic couplings from the spontaneous spike trains with our model-based approach, we accounted for network comodulation in two ways: (1) through small temporal perturbations of the PYR spike times, used to compute coupling strength z-scores; (2) through estimated variations of the background mean input for the (potentially postsynaptic) INTs, that is, \(\mu _i^k\) in Eq. (4) varied between ISIs. These variations were inferred from the instantaneous spike rate, which typically varied at multiple timescales over the duration of the recordings that lasted up to ~2 h (Fig. 6a). We therefore estimated the variations of mean input at three different timescales separately and inferred synaptic couplings for each of these (see Methods section “In vivo ground truth data on synaptic connections”).
Although spontaneous activity was highly nonstationary, our inference of the connectivity appeared to be very accurate. Comparisons with ground truth estimates demonstrated accuracy of up to 0.95 (for the intermediate timescale variant; Fig. 6c). Moreover, reducing the number of spikes used for inference did not lead to an appreciable decrease in accuracy (Supplementary Fig. 7a). Nevertheless, using instead a model-free CCG method on the spontaneous spike trains yielded comparable detection accuracy (Supplementary Fig. 7b, for an example CCG see Fig. 6b). This fact, and the observation that the timescale affects detection performance only weakly (cf. Fig. 6c), may be explained by the large signal-to-noise ratio in this dataset (Supplementary Fig. 7c), as the focus in ref. ^{51} was on strong connections. We would also like to remark that a GLM-based method was previously applied and compared to the CCG-based method on these data^{51}, resulting in similar detection accuracy at strongly increased computational demands (200–400 s GPU computing time for a GLM fit, depending on the recording length, vs. 20 ms CPU computing time for the CCG analysis).
Inference of neuronal adaptation
We next extend the model neurons to account for spike rate adaptation, a property of many types of neurons, including pyramidal cells^{67,68,69}. It can be observed as a gradual change in spiking activity following an immediate response upon an abrupt change of input strength, as shown in Fig. 7a, d. This behavior is typically mediated by a calcium-activated, slowly decaying transmembrane potassium current, which rapidly accumulates when the neuron spikes repeatedly^{69,70}. In the extended I&F neuron model^{14,55} this adaptation current is represented by an additional variable w that is incremented at spike times by a value Δw, decays exponentially with slow time constant τ_{w} in between spikes, and subtracts from the mean input, acting as a negative feedback on the membrane voltage (Fig. 7a, see Methods section “Modeling spike rate adaptation”).
In contrast to classical I&F neurons, in the generalized model with adaptation, spiking is not a renewal process: given a spike time t_{k} the probability of the next spike depends on all previous spike times. That dependence is, however, indirect, as it is mediated through the effective mean input μ(t) − w(t) across the ISI [t_{k}, t_{k+1}]. This effective mean input can be explicitly expressed using the parameter values in θ together with the observed spike times, and then inserted in Eq. (1) for estimation. Here, method 1 is best suited and can be applied efficiently by exploiting the fact that w varies within ISIs in a rather stereotyped way (for details see Methods section “Modeling spike rate adaptation”).
We first evaluated the inference method using simulated ground truth data for constant statistics of the background inputs (cf. Results section “Inference of background inputs”). An example of the membrane voltage and adaptation current time series is depicted in Fig. 7b. The true values for the adaptation parameters (the strength Δw and time constant τ_{w}) are well recovered and we obtain an accurate estimate of the adaptation current as it evolves over time. Here, too, the estimation accuracy depends on the number of observed spikes (Fig. 7c) and relative errors are on average less than 10% for 500 spikes.
Comparisons of our method with the approach from ref. ^{62} on simultaneous inference of adaptation and background input parameters show clear improvements in terms of estimation accuracy (Supplementary Fig. 8a, b, d) and computation time (Supplementary Fig. 8c).
To validate our inference method for adaptation parameters, we used the recordings of neurons stimulated by noise currents that we examined in Results section “Inference of background inputs”. Several cells, predominantly PYRs, exhibited clear spike rate adaptation (for an example see Fig. 7d). Accordingly, the adaptive I&F model yielded a clearly improved fit compared to the nonadaptive model for all but one PYR, as shown by the AIC (Fig. 7e; for details see Methods section “In vitro ground truth data on neuronal input statistics”). On the other hand, for all but one INT the nonadaptive model turned out to be the preferred one, which is consistent with the observation that INTs generally exhibit little spike rate adaptation compared to PYRs^{71}.
We examined the mean input estimates from the adaptive model in comparison to the nonadaptive model for the neurons where the adaptive model was preferred (Fig. 7e, f). For all of those cells, including adaptation increased the correlation coefficient between estimated and empirical mean input. The remaining room for improvement of this correlation for PYRs indicates that there are likely multiple adaptation mechanisms (with different timescales) at work^{71}. Note that the intrinsic adaptation current effectively subtracts from the mean input (but does not affect the input standard deviation). Indeed, the presence of adaptation in the model barely affects the correlation coefficient between estimated and empirical input standard deviation (Supplementary Fig. 9). This can be explained by the fact that the adaptation variable varies slowly compared to the membrane voltage and the typical mean ISI (estimated τ_{w} > 4τ_{m} on average); therefore, it affects the ISI mean more strongly than the ISI variance, which is predominantly influenced by the fast input fluctuations (parameter σ).
Discussion
We presented efficient, statistically principled methods to fit I&F circuit models to single-trial spike trains, and we evaluated and validated them extensively using synthetic, in vitro and in vivo ground truth data. Our approach allows us to accurately infer hidden neuronal input statistics and adaptation currents as well as coupling strengths for I&F networks. We demonstrated that (1) the mean and variance of neuronal inputs are well recovered even for relatively short spike trains; (2) for a sufficient, experimentally plausible number of spikes, weak input perturbations triggered at known times are detected with high sensitivity; (3) coupling strengths are faithfully estimated even for subsampled networks; and (4) neuronal adaptation strength and timescale are accurately inferred. By applying our methods to suitable electrophysiological datasets, we could successfully infer the statistics of in vivo-like fluctuating inputs, detect input perturbations masked by background noise, reveal intrinsic adaptation mechanisms, and reconstruct in vivo synaptic connectivity.
Several likelihood-based methods related to ours have been proposed previously, considering uncoupled I&F neurons with^{61,62,72} or without adaptation^{73,74} for constant^{73} or time-varying^{61,62,72,74} input statistics. All of these methods employ the Fokker–Planck equation and numerically calculate the spike train likelihood with high precision^{73} or using approximations^{61,62,72,74}. We benchmarked our methods against that from refs. ^{61,62}, which was applicable to the estimation of single neuron parameters and for which an efficient implementation is available (cf. Supplementary Methods section 3). Our methods clearly outperformed the previous one in terms of estimation accuracy as well as computation time, owing to optimized numerical discretization/interpolation schemes (method 1) and effective approximations (method 2). Notably, our methods extend these previous approaches, centered on inference for single leaky I&F models, to the estimation of synaptic coupling in networks of generalized I&F neurons.
In the absence of likelihoods, methods for parameter fitting typically involve numerical simulations and distance measures that are defined on possibly multiple features of interest^{46,47,48,75,76}. Evolutionary algorithms^{46,47,75}, brute-force search^{48} or more principled Bayesian techniques^{76} are then used to minimize the distances between observed and model-derived features. While these likelihood-free, simulation-based methods can be applied to more complex models, they exhibit disadvantages: the distance measures usually depend on additional parameters, and their evaluation depends on the particular realization of noise or randomness considered in the model. Optimization can therefore be exceedingly time-consuming. Furthermore, the principled, likelihood-based tools for model comparison (such as the AIC and the log-likelihood ratio) are not applicable in this case.
Here, we directly estimated synaptic coupling strengths for leaky I&F networks with fluctuating external inputs from observed spike trains. Our method is conceptually similar to those presented in refs. ^{77,78}, but does not rely on an approximation of the spike train likelihood that assumes vanishing^{77} or small^{78} amplitudes of input fluctuations. Strongly fluctuating inputs are typically required to produce in vivo-like spiking statistics.
Our approach outperformed a straightforward, model-free method based on CCGs as well as an approach based on a phenomenological, point process GLM^{66}. This does not imply that GLM-based methods are generally less accurate in inferring synaptic connectivity: point process GLMs are flexible models which can be designed and optimized to fit the observed spike trains well^{2,3,79,80}. However, that approach is prone to overfitting unless strong constraints or regularization are enforced^{3,5,66,80,81}. An advantage of our approach in this respect is that the basic mechanistic principles included in I&F models provide a natural regularization and reduce the number of model parameters, which strongly lowers the risk of overfitting. Note that our method 2 is essentially based on mapping I&F models to simplified, constrained GLM-like models^{57,58,82}.
Alternative approaches to infer connectivity from spike trains, other than those addressed above, have employed models of sparsely and linearly interacting point processes^{83}, or have been designed in a model-free manner^{51,84,85}, for example, using CCGs^{51,84} similarly to our comparisons. A general challenge in subsampled networks arises from pairwise spike train correlations at small time lags generated by shared connections from unobserved neurons, regardless of whether a direct connection is present. These spurious correlations impair our ability to distinguish the effects of synaptic connections from those caused by correlated inputs, especially when coupling delays are small. One of the benefits of our approach is that it includes an explicit, principled mechanism to account for the effects of unobserved neurons, which are absorbed in the estimated statistics of the fluctuating background inputs. Correlated fast input fluctuations are not directly modeled; their effects are instead compensated for in the estimation of the background input parameters, whereas shared input dynamics on a longer timescale are explicitly captured by slow variations of the mean input for each neuron. This facilitates the isolation of pairwise synaptic interactions from common drive.
Several related studies have focused on a theoretical link between network structure and correlated spiking activity recorded from a large number of neurons, without attempting to explicitly estimate synaptic connections^{86,87,88,89,90,91,92,93}. Of major relevance in this regard is the extent to which effective interactions among observed neurons are reshaped by coupling to unobserved neurons^{79,94}. Current methods to estimate coupling strengths from observed spike trains may be further advanced using these theoretical insights.
Throughout this work we assumed that the mean input trajectory across an ISI can be determined using available knowledge (that is, the model parameters and observed spike times). In Results section “Inference of synaptic coupling” we extracted the variations of the mean input from estimates of the instantaneous neuronal spike rate at different timescales (cf. Fig. 6). A useful extension may be to consider a separate stochastic process that governs the evolution of the mean input, allowing the most appropriate timescale to be extracted from the data^{95}, which in turn could benefit the estimation of synaptic couplings using our approach.
I&F neurons are a popular tool for interpreting spiking activity in terms of simple circuit models (see, e.g., refs. ^{25,26,27,28,29,30,31}). Such approaches typically start by hypothesizing a structure for the underlying circuit based on available physiological information, and then examine the behavior of the resulting model as a function of the critical biophysical parameters. The model is then validated by qualitatively comparing the model output with experimental data. Specifically, the model activity is required to resemble key features of the experimental data in an extended region of the parameter space. If that is not the case, the model is rejected and a different one is sought.
An important benefit of this approach is that it provides a mechanistic interpretation and understanding of recorded activity in terms of biological parameters in a neural circuit. A major limitation is, however, that it typically relies on a qualitative comparison with the data to select or reject models. The methods presented here open the door to a more quantitative, datadriven approach, in which this class of spiking circuit models can be evaluated and compared based on their fitting performance (cf. Figs. 2d, g, 4a, c, and 7e for such comparisons) as is routinely the case for more abstract statistical models (see, e.g., ref. ^{4}).
Methods
I&F neuron models
We consider typical I&F models subject to fluctuating inputs. The dynamics of the membrane voltage V are governed by

\(\frac{{{\mathrm{d}}V}}{{{\mathrm{d}}t}} = f(V) + \mu (t) + \sigma \,\xi (t),\) (5)

\({\mathrm{if}}\,\,V(t) \ge V_{\mathrm{s}}\,\,{\mathrm{then}}\,\,V(t) \leftarrow V_{\mathrm{r}},\) (6)
where μ is the mean input, σ the standard deviation of the input, ξ a (unit) Gaussian white noise process, i.e., 〈ξ(t)ξ(t + τ)〉 = δ(τ) with expectation 〈⋅〉, V_{s} is the threshold (or spike) voltage and V_{r} the reset voltage. For the leaky I&F model the function f is given by
\(f(V) = - \frac{V}{{\tau _{\mathrm{m}}}},\)

where τ_{m} denotes the membrane time constant. It should be noted that for the methods used in this paper f can be an arbitrary real-valued function. For example, in the exponential I&F model, used for Supplementary Fig. 1e, f is a nonlinear function that includes an exponential term (for details see Supplementary Methods section 1). The parameter values are V_{s} = 30 mV, V_{r} = 0 mV, τ_{m} = 20 ms, μ = 1.75 mV/ms, σ = 2.5 mV/\(\sqrt {{\mathrm{ms}}}\) if not stated otherwise in figures or captions.
It is not meaningful to estimate all model parameters: a change of V_{s} or V_{r} in the leaky I&F model can be completely compensated in terms of spiking dynamics by appropriate changes of μ(t) and σ. This can be seen using the change of variables \(\tilde V: = (V  V_{\mathrm{r}})/(V_{\mathrm{s}}  V_{\mathrm{r}})\). Consequently, we may restrict the estimation to μ(t), σ, τ_{m} and set the remaining parameters to reasonable values.
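This invariance can be checked numerically. The following sketch (our own illustration, not code from the method itself) simulates the leaky I&F dynamics twice with a shared noise realization, once in the original parametrization and once after the change of variables \(\tilde V = (V - V_{\mathrm{r}})/(V_{\mathrm{s}} - V_{\mathrm{r}})\) with compensating input parameters \(\tilde \mu = (\mu - V_{\mathrm{r}}/\tau _{\mathrm{m}})/(V_{\mathrm{s}} - V_{\mathrm{r}})\) and \(\tilde \sigma = \sigma /(V_{\mathrm{s}} - V_{\mathrm{r}})\), and obtains identical spike trains:

```python
import numpy as np

def simulate_lif(mu, sigma, tau_m, V_s, V_r, noise, dt=0.05):
    """Euler-Maruyama integration of a leaky I&F neuron driven by a
    fixed noise sequence; returns spike times (in ms)."""
    V, spikes = V_r, []
    for i, xi in enumerate(noise):
        V += dt * (-V / tau_m + mu) + sigma * np.sqrt(dt) * xi
        if V >= V_s:
            spikes.append(i * dt)
            V = V_r
    return spikes

rng = np.random.default_rng(0)
noise = rng.standard_normal(100_000)  # shared noise realization, dt = 0.05 ms
tau_m, V_s, V_r, mu, sigma = 20.0, 30.0, 0.0, 1.75, 2.5

spikes = simulate_lif(mu, sigma, tau_m, V_s, V_r, noise)

# rescaled voltage with threshold 1 and reset 0; the compensating input
# parameters leave the spike train unchanged
mu_t = (mu - V_r / tau_m) / (V_s - V_r)
sigma_t = sigma / (V_s - V_r)
spikes_t = simulate_lif(mu_t, sigma_t, tau_m, 1.0, 0.0, noise)
```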
Method 1: conditioned spike time likelihood
It is useful to express the factors in Eq. (1), the conditioned spike time likelihoods, in terms of the ISI probability density p_{ISI},

\(p\left( {t_{k + 1}|t_k,\mu [t_k,t_{k + 1}],\theta } \right) = p_{{\mathrm{ISI}}}\left( {s_k|\mu _{{\mathrm{ISI}}}[0,s_k],\theta } \right),\)
where s_{k} := t_{k+1} − t_{k} is the length of the kth ISI and μ_{ISI} is the mean input across that ISI given by μ_{ISI}[0, s_{k}] = μ[t_{k}, t_{k+1}]. The technical advantage of this change of variables becomes most obvious for constant μ. In this case the density function p_{ISI} needs to be computed only once in order to evaluate the spike train likelihood, Eq. (1), for a given parametrization μ, θ.
Given a spike at t = t_{0}, the probability density of the next spike time is equal to the ISI probability density p_{ISI}(s), where s := t − t_{0} ≥ 0 denotes the time since the last spike. This quantity can be approximated by numerical simulation in an intuitive way: starting with initial condition V(t_{0}) = V_{r}, one follows the neuronal dynamics given by Eq. (5) in each of n realizations of the noise process until the membrane voltage crosses the value V_{s} and records that spike time t_{i} in the ith realization. The set of times {t_{i}} can then be used to compute p_{ISI}, where the approximation error decreases as n increases. We can calculate p_{ISI} analytically in the limit n → ∞ by solving the Fokker–Planck partial differential equation (PDE)^{96,97} that governs the dynamics of the membrane voltage probability density p_{V}(V, s),

\(\frac{{\partial p_V}}{{\partial s}} + \frac{{\partial q_V}}{{\partial V}} = 0,\qquad q_V(V,s): = \left[ {f(V) + \mu _{{\mathrm{ISI}}}(s)} \right]p_V(V,s) - \frac{{\sigma ^2}}{2}\frac{{\partial p_V}}{{\partial V}}(V,s),\)
with mean input μ_{ISI}(s) = μ(t), subject to the initial and boundary conditions

\(p_V(V,0) = \delta (V - V_{\mathrm{r}}),\qquad p_V(V_{\mathrm{s}},s) = 0,\qquad \mathop {{\lim }}\limits_{V \to - \infty } q_V(V,s) = 0.\)
The ISI probability density is then given by the probability flux at V_{s},

\(p_{{\mathrm{ISI}}}(s) = q_V(V_{\mathrm{s}},s).\)
In probability theory, p_{ISI} is also known as a first passage time density.
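The Monte Carlo approximation described above can be sketched as follows; the parameter values match those given earlier, while the simulation code itself is our own illustration:

```python
import numpy as np

def sample_isis(mu, sigma, tau_m, V_s, V_r, n, dt=0.05, seed=1):
    """Draw n ISI (first passage time) samples by simulating the leaky
    I&F dynamics from V(0) = V_r until the threshold crossing, for
    independent realizations of the noise process."""
    rng = np.random.default_rng(seed)
    sqdt = np.sqrt(dt)
    isis = np.empty(n)
    for r in range(n):
        V, i = V_r, 0
        while V < V_s:
            i += 1
            V += dt * (-V / tau_m + mu) + sigma * sqdt * rng.standard_normal()
        isis[r] = i * dt
    return isis

isis = sample_isis(mu=1.75, sigma=2.5, tau_m=20.0, V_s=30.0, V_r=0.0, n=200)

# a normalized histogram of the samples approximates p_ISI; the
# approximation error decreases as n grows
p_hat, edges = np.histogram(isis, bins=30, density=True)
```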
In method 1a we consider the first-order approximation, Eq. (2), for weak perturbations of the mean input, \(\mu (t) = \mu _0^k + J\mu _1(t)\) with small J during the kth ISI. In this case the kth spike time likelihood is expressed as

\(p\left( {t_{k + 1}|t_k,\mu [t_k,t_{k + 1}],\theta } \right) \approx p_{{\mathrm{ISI}}}^0\left( {s_k|\theta } \right) + J\,p_{{\mathrm{ISI}}}^1\left( {s_k|\mu _{{\mathrm{ISI}}}^1[0,s_k],\theta } \right),\)
where \(\mu _{{\mathrm{ISI}}}^1[0,s_k] = \mu _1[t_k,t_{k + 1}]\) and θ contains parameters that remain constant within ISIs, including \(\mu _{{\mathrm{ISI}}}^0 = \mu _0^k\).
Numerical solution schemes to compute p_{ISI} (method 1) or \(p_{{\mathrm{ISI}}}^0\) and \(p_{{\mathrm{ISI}}}^1\) (method 1a) in accurate and efficient ways are provided in Supplementary Methods section 2. It should be noted that these functions do not need to be computed for each observed ISI separately; instead, we precalculate them for a reasonable set of trajectories μ_{ISI}[0, s_{max}], where s_{max} is the largest observed ISI, and use interpolation for each evaluation of a spike time likelihood.
Method 2: derived spike rate model
Method 2 requires the (instantaneous) spike rate r(t) of the model neuron described by Eqs. (5) and (6), which can be calculated by solving a Fokker–Planck system similar to Eqs. (9–13),

\(\frac{{\partial p_V}}{{\partial t}} + \frac{{\partial q_V}}{{\partial V}} = 0,\qquad q_V(V,t): = \left[ {f(V) + \mu (t)} \right]p_V(V,t) - \frac{{\sigma ^2}}{2}\frac{{\partial p_V}}{{\partial V}}(V,t),\)
subject to the conditions

\(p_V(V_{\mathrm{s}},t) = 0,\qquad \mathop {{\lim }}\limits_{V \to - \infty } q_V(V,t) = 0,\qquad q_V(V_{\mathrm{r}}^ + ,t) - q_V(V_{\mathrm{r}}^ - ,t) = r(t): = q_V(V_{\mathrm{s}},t),\)
where Eq. (19) accounts for the reset condition (6). The steady-state solution of this system (for constant mean input) can be conveniently calculated^{56}. Obtaining the time-varying solution of Eqs. (17–19) is computationally more demanding and can be achieved, e.g., using a finite volume method as described in Supplementary Methods section 2 (see ref. ^{57}).
As an efficient alternative, reduced models have been developed to approximate the spike rate dynamics of this Fokker–Planck system by a low-dimensional ordinary differential equation (ODE) that can be solved much faster^{57,58,59,98}. Here, we employ a simple yet accurate reduced model from ref. ^{57} (the LNexp model, based on ref. ^{58}) adapted for leaky I&F neurons with constant input variance σ^{2}. This model is derived via a linear–nonlinear cascade ansatz, where the mean input is first linearly filtered and then passed through a nonlinear function to yield the spike rate. Both components are determined from the Fokker–Planck system and can be conveniently calculated without having to solve Eqs. (17)–(19) forward in time: the linear temporal filter is obtained from the first-order spike rate response to small-amplitude modulations of the mean input, and the nonlinearity is obtained from the steady-state solution^{57,58}. The filter is approximated by an exponential function and adapted to the input in order to allow for large deviations of μ. This yields a one-dimensional ODE for the filter application,

\(\frac{{{\mathrm{d}}\mu _{\mathrm{f}}}}{{{\mathrm{d}}t}} = \frac{{\mu (t) - \mu _{\mathrm{f}}}}{{\tau _\mu (\mu _{\mathrm{f}})}},\)
where μ_{f} is the filtered mean input and τ_{μ} is the (state-dependent) time constant. The spike rate is given by the steady-state spike rate of the Fokker–Planck system evaluated at μ = μ_{f},

\(r(t) = r_\infty \left( {\mu _{\mathrm{f}}(t)} \right).\)
In order to efficiently simulate this model we precalculate τ_{μ} and r_{∞} for a reasonable range of mean input values and use lookup tables during time integration. Note that this model is based on the derivation in ref. ^{58} with the filter approximation scheme proposed in ref. ^{57}, which leads to improved accuracy of spike rate reproduction for the sensitive low-input regime^{57}. For a given mean input time series μ[t_{0}, t] we calculate r(t|μ[t_{0}, t],θ) using the initial condition μ_{f}(t_{0}) = μ(t_{0}).
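A minimal sketch of this simulation scheme follows, with hypothetical lookup tables standing in for the quantities that, in the actual method, are precalculated from the Fokker–Planck system (the tabulated values below are illustrative only):

```python
import numpy as np

# Hypothetical lookup tables for illustration; in the actual method
# tau_mu and r_inf are precalculated from the Fokker-Planck system
# for a grid of mean input values
mu_grid = np.linspace(-1.0, 4.0, 201)
tau_mu_tab = 10.0 + 5.0 / (1.0 + np.exp(mu_grid))            # ms
r_inf_tab = 0.05 * np.log1p(np.exp(3.0 * (mu_grid - 1.0)))   # spikes/ms

def lnexp_rate(mu_series, mu_f0, dt=0.1):
    """Integrate d(mu_f)/dt = (mu(t) - mu_f)/tau_mu(mu_f) and read out
    the rate r(t) = r_inf(mu_f(t)) by table lookup (linear interpolation)."""
    mu_f = mu_f0
    r = np.empty_like(mu_series)
    for i, mu in enumerate(mu_series):
        tau = np.interp(mu_f, mu_grid, tau_mu_tab)
        mu_f += dt * (mu - mu_f) / tau
        r[i] = np.interp(mu_f, mu_grid, r_inf_tab)
    return r

# for constant mean input, mu_f relaxes to mu and the rate approaches
# the steady-state value r_inf(mu)
rates = lnexp_rate(np.full(5000, 2.0), mu_f0=0.0)
```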
Likelihood maximization
We maximized the logarithm of the likelihood (log-likelihood) for individual neurons, using Eq. (1), and similarly for networks using the logarithm of Eq. (4). Optimization was performed using a simplex algorithm^{99} as implemented in the Scipy package for Python. It should be noted that our method is not restricted to this algorithm; alternative, gradient-based optimization techniques, for example, may lead to reduced estimation times.
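To illustrate the optimization step, the following sketch maximizes a spike train log-likelihood with Scipy's Nelder–Mead simplex. For tractability it uses the closed-form ISI density of a perfect (non-leaky) IF neuron, an inverse Gaussian, as a stand-in for the numerically computed p_{ISI}; all parameter values are examples of our own choosing:

```python
import numpy as np
from scipy.optimize import minimize

def log_p_isi(s, mu, sigma, a=1.0):
    """Closed-form ISI log-density of a perfect IF neuron (drifted
    Brownian motion from 0 to threshold a): an inverse Gaussian."""
    return (np.log(a) - np.log(sigma) - 0.5 * np.log(2 * np.pi * s**3)
            - (a - mu * s)**2 / (2 * sigma**2 * s))

# ground truth parameters and simulated ISIs (Euler first passage times)
rng = np.random.default_rng(2)
mu_true, sigma_true, dt = 1.5, 0.5, 0.001
isis = []
for _ in range(300):
    x, t = 0.0, 0.0
    while x < 1.0:
        x += dt * mu_true + sigma_true * np.sqrt(dt) * rng.standard_normal()
        t += dt
    isis.append(t)
isis = np.array(isis)

# maximize the log-likelihood (sum over ISIs) with the simplex
# algorithm; a log-parametrization keeps sigma positive
nll = lambda q: -np.sum(log_p_isi(isis, q[0], np.exp(q[1])))
res = minimize(nll, x0=[1.0, 0.0], method='Nelder-Mead')
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```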
We would further like to remark that maximizing the likelihood p(D|θ) within plausible limits for the parameter values is equivalent to maximizing the posterior probability density for the parameters given the data, p(θ|D), without prior knowledge about the parameters except for the limits (i.e., assuming a uniform prior distribution of θ).
Calculation of the Cramer–Rao bound
We computed the Cramer–Rao bound for the variance of parameter estimates in Results section “Inference of background inputs”. This bound is approached by the variance of a maximum likelihood estimator as the number of realizations increases. Let θ denote the vector of parameters for estimation (contained in θ), e.g., θ = (μ, σ, τ_{m})^{T}. In case of a single (nonadapting) model neuron with constant input moments the Cramer–Rao bound for the variance of estimates of θ_{i} from spike trains with K spikes is then given by \([{\cal{I}}(\theta )]_{i,i}^{ - 1}/(K - 1)\), where \({\cal{I}}(\theta )\) is the Fisher information matrix per ISI, defined by

\([{\cal{I}}(\theta )]_{i,j} = {\int} {\frac{{\partial \log \,p_{{\mathrm{ISI}}}(s|\theta )}}{{\partial \theta _i}}\,\frac{{\partial \log \,p_{{\mathrm{ISI}}}(s|\theta )}}{{\partial \theta _j}}\,p_{{\mathrm{ISI}}}(s|\theta )\,{\mathrm{d}}s.} \)
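As an illustration, the Fisher information per ISI can be evaluated by numerical quadrature. The sketch below again uses the closed-form ISI density of a perfect IF neuron (a tractable stand-in of our own choosing, not the paper's numerically computed p_{ISI}), for which the Fisher information with respect to μ has the known value a/(μσ²):

```python
import numpy as np

def p_isi(s, mu, sigma, a=1.0):
    """Inverse Gaussian first passage time density (perfect IF neuron,
    drifted Brownian motion from 0 to threshold a)."""
    return a / (sigma * np.sqrt(2 * np.pi * s**3)) * np.exp(
        -(a - mu * s)**2 / (2 * sigma**2 * s))

mu, sigma, a = 1.5, 0.5, 1.0
s = np.linspace(1e-4, 20.0, 400_000)
ds = s[1] - s[0]
pdf = p_isi(s, mu, sigma, a)

# score with respect to mu: d/dmu log p_ISI = (a - mu*s)/sigma^2;
# its squared average over p_ISI is the Fisher information per ISI
I_mu = np.sum(((a - mu * s) / sigma**2)**2 * pdf) * ds

# Cramer-Rao bound on the variance of mu estimates from K spikes
K = 501
cr_bound = 1.0 / (I_mu * (K - 1))
```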
Modeling input perturbations
In Results section “Inference of input perturbations” we consider input perturbations of the form μ(t) = μ_{0} + Jμ_{1}(t), where μ_{1}(t) is described by the superposition of alpha functions with time constant τ, triggered at times \(\tilde t_1, \ldots ,\tilde t_L\),

\(\mu _1(t) = \mathop {\sum}\limits_{l = 1}^L {H(t - \tilde t_l)\,\frac{{t - \tilde t_l}}{\tau }\,\exp \left( {1 - \frac{{t - \tilde t_l}}{\tau }} \right),} \)
with Heaviside step function H. The alpha functions are normalized such that their maximum value is 1 when considered in isolation. As an alternative we also considered delayed delta pulses instead of alpha kernels,

\(\mu _1(t) = \mathop {\sum}\limits_{l = 1}^L {\delta (t - \tilde t_l - d),} \)
where d denotes the time delay. The perturbation onset (trigger) times were generated by randomly sampling successive separation intervals \(\tilde t_{l + 1}  \tilde t_l\) from a Gaussian distribution with 200 ms mean and 50 ms standard deviation.
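The alpha-kernel construction and the sampling of onset times can be sketched as follows (illustrative code; the chosen τ and number of onsets are examples):

```python
import numpy as np

def mu1_alpha(t, onsets, tau):
    """mu_1(t): superposition of alpha kernels, each normalized to a
    peak value of 1 attained tau after its onset (Heaviside cutoff)."""
    d = np.asarray(t)[:, None] - np.asarray(onsets)[None, :]
    d = np.where(d >= 0, d, 0.0)  # H(t - t_l); kernel value is 0 at d = 0
    return np.sum((d / tau) * np.exp(1.0 - d / tau), axis=1)

# onset separations as in the text: Gaussian, 200 ms mean, 50 ms sd
rng = np.random.default_rng(3)
onsets = 100.0 + np.cumsum(rng.normal(200.0, 50.0, size=10))
t = np.arange(0.0, onsets[-1] + 100.0, 0.1)
mu1 = mu1_alpha(t, onsets, tau=10.0)
```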
In Fig. 3 and Supplementary Fig. 3, we quantified the sensitivity to detect weak input perturbations using our estimation methods (1a and 2) in comparison with a detection method based on the generated data only. For a given parametrization N_{r} spike trains were simulated using different realizations of neuronal input noise and perturbation onset times. Detection sensitivity was quantified by comparing the inferred perturbation strengths from data generated with (J ≠ 0) and without (J = 0) perturbations.
For the I&F-based methods it was calculated as the fraction of N_{r} = 50 estimates of J for true J > 0 (J < 0) that exceeded the 95th percentile (fell below the 5th percentile) of estimates without perturbation. The model-free reference method was based on CCGs between the spike trains and perturbation times (in other words, spike density curves aligned to perturbation onset times). For each realization one such curve was calculated from the differences between spike times and the perturbation onset times using a Gaussian kernel with 3 ms standard deviation. Detection sensitivity was assessed by the fraction of N_{r} = 300 CCGs for which a significant peak (for J > 0) or trough (for J < 0) appeared in the interval [0, 100 ms]. Significance was achieved for true J > 0 (J < 0) if the curve maximum (minimum) exceeded the 95th percentile (fell below the 5th percentile) of maxima (minima) in that interval without perturbation.
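A sketch of the model-free reference curve follows, applied to a synthetic spike train whose spikes consistently follow the onsets by 10 ms, so that the expected peak location is known (our own illustration of the smoothing step):

```python
import numpy as np

def perturbation_triggered_density(spikes, onsets, lags, kernel_sd=3.0):
    """Spike density as a function of lag since perturbation onset,
    smoothed with a Gaussian kernel (3 ms standard deviation)."""
    d = (np.asarray(spikes)[:, None] - np.asarray(onsets)[None, :]).ravel()
    dens = np.zeros_like(lags)
    for x in d:
        dens += np.exp(-0.5 * ((lags - x) / kernel_sd)**2)
    return dens / (len(onsets) * kernel_sd * np.sqrt(2 * np.pi))

# synthetic check: spikes 10 ms after each onset should produce a
# peak near lag 10 ms within the interval [0, 100] ms
onsets = np.arange(0.0, 2000.0, 200.0)
spikes = onsets + 10.0
lags = np.arange(-50.0, 100.0, 0.5)
dens = perturbation_triggered_density(spikes, onsets, lags)
peak_lag = lags[np.argmax(dens)]
```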
Network model and inference details
In Results section “Inference of synaptic coupling” we consider networks of N_{tot} coupled leaky I&F neurons from which the spike trains of N ≤ N_{tot} neurons have been observed. These networks are given by

\(\frac{{{\mathrm{d}}V_i}}{{{\mathrm{d}}t}} = - \frac{{V_i}}{{\tau _{\mathrm{m}}}} + \mu _i(t) + \eta _i(t) + \mathop {\sum}\limits_{j = 1}^{N_{{\mathrm{tot}}}} {J_{i,j}\mathop {\sum}\limits_{k = 1}^{K_j} {\delta (t - t_j^k - d_{i,j})} ,} \)

\({\mathrm{if}}\,\,V_i(t) \ge V_{\mathrm{s}}\,\,{\mathrm{then}}\,\,V_i(t) \leftarrow V_{\mathrm{r}},\)
for i ∈ {1, …, N_{tot}}, where J_{i,j} denotes the coupling strength between presynaptic neuron j and postsynaptic neuron i, \(t_i^k\) is the kth of K_{i} spike times of neuron i, and d_{i,j} is the delay. η_{i} describes the fluctuations of external input received from unobserved neurons,

\(\eta _i(t) = \sigma _i\left( {\sqrt {1 - c} \,\xi _i(t) + \sqrt c \,\xi _c(t)} \right),\)
where ξ_{i}, ξ_{c} are independent unit Gaussian white noise processes, i.e., 〈ξ_{i}(t)ξ_{j}(t + τ)〉 = δ_{ij}δ(τ), i, j ∈ {1, …, N_{tot}, c}, and c is the input correlation coefficient. We considered uncorrelated or weakly correlated external input fluctuations, i.e., c = 0 or c = 0.1. Note that the input variation for neuron i caused by (observed or unobserved) neuron j across the interval \([t_i^k,t_i^{k + 1}]\), denoted by \(J_{i,j}\mu _j^1[t_i^k,t_i^{k + 1}]\) (as used in Eq. (4)), is determined by the spike times of neuron j that occur in the interval \([t_i^k - d_{i,j},t_i^{k + 1} - d_{i,j}]\). For simulated data we chose identical delays across the network, but this is not a restriction of our inference method (see below). Coupling strengths were uniformly sampled in [−0.75, 0.75] mV (Fig. 5a, b and Supplementary Fig. 5a), otherwise excitatory/inhibitory connections were randomly generated with specified probabilities and coupling strengths were then uniformly sampled with mean ±0.5 mV (Fig. 5c–k and Supplementary Fig. 5b–f) or mean ±1 mV (Supplementary Fig. 6), respectively. Autapses were excluded, i.e., J_{i,i} = 0. Network simulations for Fig. 5c–k, Supplementary Figs. 5b–f and 6 were performed using the Python-based Brian2 simulator^{100}.
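The external-input construction with a private and a common noise source can be illustrated by generating two discretized input traces and checking their empirical correlation (illustrative code; trace length and seed are arbitrary):

```python
import numpy as np

# each neuron mixes a private and a common Gaussian white noise source,
# yielding pairwise input correlation coefficient c between neurons
rng = np.random.default_rng(4)
c, sigma_i, n = 0.1, 2.5, 200_000
xi_1 = rng.standard_normal(n)   # private noise, neuron 1
xi_2 = rng.standard_normal(n)   # private noise, neuron 2
xi_c = rng.standard_normal(n)   # common noise
eta_1 = sigma_i * (np.sqrt(1 - c) * xi_1 + np.sqrt(c) * xi_c)
eta_2 = sigma_i * (np.sqrt(1 - c) * xi_2 + np.sqrt(c) * xi_c)

# the empirical correlation between the two input traces approaches c
corr = np.corrcoef(eta_1, eta_2)[0, 1]
```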
Our method fits an I&F network, described by Eqs. (26)–(29) with c = 0 for the N observed neurons (i.e., i ∈ {1, …, N}), to the spike train data by maximizing the likelihood (4). In the fitted model the effects of unobserved neurons are absorbed by the parameters μ_{i}(t) and σ_{i}, where μ_{i}(t) is discretized in an event-based way: the background mean input for neuron i during its kth ISI is represented by a constant value \(\mu _i^k\) (cf. Eq. (4)). In Fig. 5 and Supplementary Figs. 5 and 6, μ_{i}(t) was assumed to be constant over time, whereas in Fig. 6 and Supplementary Fig. 7 it varied between ISIs; for details on the estimation of these variations see Methods section “In vivo ground truth data on synaptic connections”.
The logarithm of the spike train likelihood (4) (cf. Methods section “Likelihood maximization”) was optimized in the following way, justified by the assumption of weak coupling. First, the parameters of the background input, \(\mu _i^k\) and σ_{i}, were estimated for each neuron in isolation (all J_{i,j} = 0). Consequently, effects of observed and unobserved neurons are reflected by these estimates. Then, the coupling strength J_{i,j} and delay d_{i,j} were estimated given \(\mu _i^k\) and σ_{i} for each i,j-pair. These two steps were performed in parallel over (postsynaptic) neurons. We estimated couplings in a pairwise manner to save computation time, assuming that the transient effects of an individual synaptic connection on the spiking probability of a neuron are negligible for the estimation of other synaptic connections. This is justified for weak coupling and network activity where synchronous spikes across the network occur sparsely.
We then corrected for a potential systematic bias in the estimated coupling strengths of a network in Fig. 5, Supplementary Figs. 5 and 6 as follows. We perturbed all presynaptic spike times by a temporal jitter (random values uniformly sampled in the interval [−10, 10] ms) to mask any of the transient effects caused by synaptic connections, and re-estimated the coupling strengths for multiple such realizations for each postsynaptic neuron i given \(\mu _i^k\) and σ_{i}. The averaged bias that was estimated across the network from this procedure was then subtracted from the original estimates.
For comparison we used a method that fits a point process GLM to the data. From the class of GLM-based approaches^{2,3,80} we chose one that is well suited for reconstruction from spike train data generated by an I&F network as specified above and for which an efficient implementation is available^{66}. In the GLM network model the incoming spike trains, after incurring transmission delays, are filtered by a leaky integrator with a time constant and a (constant) baseline activity parameter for each neuron. The resulting membrane potential is passed through an exponential link function, which transforms it into the time-varying rate of a Poisson point process that generates the output spike train of the neuron. The spike train is also fed back as an input to the neuron itself to model refractory, post-spike properties. The coupling terms in the GLM and I&F networks are equivalent: both models use delayed delta pulses.
Using maximum likelihood estimation this GLM method inferred N^{2} + 3N parameters per network with N neurons: the coupling strengths (including self-feedback) as well as time constant, baseline parameter and delay, one for each neuron. Note that for the simulated data only one (global) delay value was used for all connections in a network. Hence, the GLM method estimated fewer parameters compared to the I&F method (which inferred 2N^{2} parameters). For details on the elaborate inference technique, which includes regularized optimization and cross-validation, and an available Python implementation using the library Cython for accelerated program execution we refer to ref. ^{66}.
In the model-free, CCG-based method a connection strength was estimated by the z-score of the extremum in the spike train CCG across positive lags for each pair of neurons. Note that the lag denotes the time since a presynaptic spike. z-scores were obtained using estimates from surrogate data generated by perturbing the presynaptic spike times by a temporal jitter (random values between −10 ms and 10 ms) in a large number of realizations.
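A minimal sketch of such an estimator follows (our own simplified implementation, using the CCG peak and thus targeting excitatory connections; the trough would be used for inhibitory ones; all parameter values are examples):

```python
import numpy as np

def ccg_zscore(pre, post, max_lag=10.0, bin_w=1.0, n_surr=100, seed=5):
    """z-score of the CCG peak at positive lags (time since presynaptic
    spike) relative to surrogates with jittered presynaptic spike times
    (uniform in [-10, 10] ms)."""
    rng = np.random.default_rng(seed)
    bins = np.arange(0.0, max_lag + bin_w, bin_w)

    def ccg_peak(pre_times):
        d = post[None, :] - pre_times[:, None]  # pairwise lags
        counts, _ = np.histogram(d[(d >= 0) & (d < max_lag)], bins=bins)
        return counts.max()

    obs = ccg_peak(pre)
    surr = np.array([ccg_peak(pre + rng.uniform(-10.0, 10.0, pre.size))
                     for _ in range(n_surr)])
    return (obs - surr.mean()) / surr.std()

# synthetic pair: the postsynaptic neuron tends to fire 2 ms after the
# presynaptic one, on top of independent background activity
rng = np.random.default_rng(6)
pre = np.sort(rng.uniform(0.0, 100_000.0, 400))
post = np.sort(np.concatenate([pre[:200] + 2.0,
                               rng.uniform(0.0, 100_000.0, 200)]))
z = ccg_zscore(pre, post)
```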
For each of the three methods detailed in this section we assessed detection performance in the following way. A discrimination threshold J_{thresh} for the presence or absence of connections was applied to the estimated coupling strengths \(\hat J_{i,j}\). Accordingly, an estimated connection (for a given pair (i, j)) was considered present if \(\hat J_{i,j} \, > \, J_{{\mathrm{thresh}}}\). The true positive rate (sensitivity) was given by the number TP of connections for which the estimation satisfied \(\hat J_{i,j} \, > \, J_{{\mathrm{thresh}}}\) and a true connection was present (J_{i,j} > 0), divided by the number P of true connections. The true negative rate (specificity) was given by the number TN of connections for which \(\hat J_{i,j} \le J_{{\mathrm{thresh}}}\) and a true connection was absent (J_{i,j} = 0), divided by the number N of absent connections. Receiver operating characteristic (ROC) curves were generated from sensitivity and specificity as a function of J_{thresh}. Accuracy (ACC) and balanced accuracy (BACC) are defined as ACC = (TP + TN)/(P + N) and BACC = (TP/P + TN/N)/2, respectively.
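These detection metrics follow directly from true and estimated coupling arrays; a small sketch (array and function names are illustrative):

```python
import numpy as np

def detection_metrics(J_true, J_hat, thresholds):
    """Sensitivity, specificity, ACC and BACC for each discrimination
    threshold applied to estimated coupling strengths. Inputs are
    flattened arrays of true and estimated couplings."""
    J_true, J_hat = np.asarray(J_true, float), np.asarray(J_hat, float)
    present = J_true > 0            # true connections (P of them)
    absent = ~present               # absent connections (N of them)
    rows = []
    for th in thresholds:
        detected = J_hat > th
        tp = np.sum(detected & present)      # hits
        tn = np.sum(~detected & absent)      # correct rejections
        sens = tp / present.sum()
        spec = tn / absent.sum()
        acc = (tp + tn) / J_true.size
        bacc = (sens + spec) / 2.0
        rows.append((sens, spec, acc, bacc))
    return np.array(rows)
```

Sweeping the threshold and plotting sensitivity against 1 − specificity yields the ROC curve.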
Modeling spike rate adaptation
In Results section “Inference of neuronal adaptation” we consider an extended I&F neuron model that includes an additional adaptation (current) variable w that is incremented at spike times, slowly decays, and counteracts the input to the neuron^{14,55}: Eqs. (5) and (6) where the mean input μ(t) is replaced by an effective mean input μ(t) − w(t), with
τ_{w} dw/dt = −w, and w → w + Δw at each spike time.
Here, τ_{w} is the adaptation time constant and Δw denotes the spike-triggered increment.
For known spike times, contained in set D, the effective mean input can be written as μ(t) − Δw μ_{1}(t|D, τ_{w}), where μ_{1} between spike times t_{k} and t_{k+1} is explicitly expressed by
μ_{1}(t|D, τ_{w}) = Σ_{t_{j} ∈ D, t_{j} ≤ t_{k}} H(t − t_{j}) exp(−(t − t_{j})/τ_{w}),
t ∈ [t_{k}, t_{k+1}], with Heaviside step function H, assuming the adaptation current just prior to the first spike is zero, \(w(t_1^-) = 0\). This means, for given parameters μ, Δw, τ_{w}, the effective mean input time series is determined by the observed spike train (up to t_{k}).
Note that in general the mean input perturbations caused by adaptation vary from spike to spike, μ_{1}(t_{k}|D, τ_{w}) ≠ μ_{1}(t_{l}|D, τ_{w}) for t_{k} ≠ t_{l} ∈ D. To efficiently evaluate the likelihood p(D|θ) via method 1 (using Eqs. (1) and (8)) we calculate p_{ISI}(s|μ_{ISI}[0, s], θ) with μ_{ISI}(s) = μ(s) − w_{0} exp(−s/τ_{w}), s ≥ 0, for a reasonable range of values for w_{0} and interpolate to obtain p_{ISI}(s_{k}|μ_{ISI}[0, s_{k}], θ) with μ_{ISI}[0, s_{k}] = μ[t_{k}, t_{k+1}] − Δw μ_{1}[t_{k}, t_{k+1}] using Eq. (32). Methods 1a and 2 are less well suited for this scenario because the adaptation variable can accumulate to substantial values, thereby opposing the assumption of weak variations of the mean input; moreover, the spike train of an adapting neuron deviates strongly from a Poisson process.
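For given spike times, the effective mean input is a sum of exponentially decaying kernels and can be evaluated directly; a sketch under the stated assumption \(w(t_1^-) = 0\) (function names are illustrative, and the baseline mean input is taken constant for simplicity):

```python
import numpy as np

def mu1(t, spike_times, tau_w):
    """mu_1(t | D, tau_w): sum of exponentially decaying kernels triggered
    at the observed spike times, i.e. the adaptation variable for a unit
    spike-triggered increment, assuming w = 0 just before the first spike."""
    t = np.atleast_1d(np.asarray(t, float))
    s = np.asarray(spike_times, float)
    lags = t[:, None] - s[None, :]
    # clip before exponentiating to avoid overflow for negative lags,
    # then mask out spikes that lie in the future of t (Heaviside factor)
    kern = np.exp(-np.clip(lags, 0.0, None) / tau_w) * (lags >= 0)
    return kern.sum(axis=1)

def effective_mean_input(t, mu, spike_times, dw, tau_w):
    """Effective mean input mu - dw * mu_1(t | D, tau_w) of the adaptive
    I&F model, for a constant baseline mean input mu."""
    return mu - dw * mu1(t, spike_times, tau_w)
```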
Implementation and computational complexity
We have implemented our methods for parameter estimation (1, 1a, and 2) using the Python programming language, applying the libraries SciPy^{101} for optimization and Numba^{102} for low-level machine acceleration. The code for representative estimation examples from Results sections “Inference of background inputs” to “Inference of neuronal adaptation” is available at GitHub: https://github.com/neuromethods/inference-for-integrate-and-fire-models. Computation times for example inference problems are summarized in Supplementary Table 1.
In vitro ground truth data on neuronal input statistics
We used somatic whole-cell current clamp recordings from primary somatosensory cortex in acute brain slices (for details see ref. ^{49}). Layer 5 PYRs were recorded in wild-type mice^{49}, fast-spiking layer 5 INTs were selected among the fluorescing cells of a GAD67-GFP transgenic line^{103}. Only cells with an access resistance ≤25 MΩ (PYR: 18.3 ± 1.5 MΩ, n = 7; INT: 19.5 ± 4.0 MΩ, n = 6) and a drift in the resting membrane potential ≤7.5 mV (PYR: 3.2 ± 3.0 mV, n = 7; INT: 3.1 ± 3.7 mV, n = 6) throughout the recording were retained for further analysis. Seven PYRs and six INTs were stimulated with a fluctuating current I(t) generated according to an Ornstein–Uhlenbeck process
τ_{I} dI/dt = μ_{I} − I + σ_{I} √(2τ_{I}) ξ(t),
where τ_{I} denotes the correlation time, μ_{I} and σ_{I} are the mean and standard deviation of the stationary normal distribution, i.e., \({\mathrm{lim}}_{t \to \infty }I(t)\sim {\cal{N}}(\mu _{I},{\sigma_{I}^{2}})\), and ξ is a unit Gaussian white noise process. Somatic current injections lasted 5 s and were separated by inter-stimulus intervals of at least 25 s. Different values for μ_{I} and σ_{I} were used and each combination was repeated three times. The correlation time was set to 3 ms. Spike times were defined by the time at which the membrane voltage crossed 0 mV from below, which was consistent with a large depolarization rate dV/dt > 10 mV/ms^{49}. An absolute refractory period of 3 ms was assumed.
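This stimulus can be reproduced with an exact-discretization update of the Ornstein–Uhlenbeck process, which preserves the stationary mean and standard deviation for any step size (parameter defaults are illustrative; times in ms):

```python
import numpy as np

def ou_current(mu_I, sigma_I, tau_I=3.0, T=5000.0, dt=0.05, rng=None):
    """Ornstein-Uhlenbeck input current with mean mu_I, stationary
    standard deviation sigma_I and correlation time tau_I, simulated
    with the exact one-step propagator."""
    if rng is None:
        rng = np.random.default_rng(2)
    n = int(T / dt)
    I = np.empty(n)
    I[0] = mu_I
    a = np.exp(-dt / tau_I)                  # one-step decay factor
    noise_sd = sigma_I * np.sqrt(1 - a**2)   # keeps stationary s.d. at sigma_I
    for k in range(1, n):
        I[k] = mu_I + a * (I[k - 1] - mu_I) + noise_sd * rng.standard_normal()
    return I
```

Because τ_{I} is small, such a current is well approximated by the Gaussian white noise input assumed in the I&F model, as noted below.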
For each neuron we fitted a leaky I&F model with and without adaptation (cf. Methods sections “I&F neuron models” and “Modeling spike rate adaptation”). Note that the injected current I(t) can be well approximated by a Gaussian white noise process as considered in our model because of the small correlation time τ_{I}. In Results section “Inference of background inputs” we estimated the input parameters μ and σ for non-adaptive model neurons from each 5-s-long spike train recording as well as from each combined 3 × 5-s-long recording (using the three repetitions with identical stimulus parameters, which effectively yielded 15 s long stimuli). To exclude onset transients (i.e., increased spike rate upon stimulus onset) we used the central 90% of ISIs for each stimulus, ensuring that ISIs lasted >5 ms. For comparison we considered a Poisson process with constant rate. In Results section “Inference of neuronal adaptation” we additionally estimated the adaptation parameters Δw and τ_{w} per neuron across all available stimuli in the combined 15 s stimulus setting. Here we used all ISIs (including the short ones at stimulus onset) in order to unmask adaptation effects. Parameter estimation was accomplished using method 1. To compare the quality of the models and avoid overfitting we used the AIC^{53,104}, given by 2N_{θ} − 2max_{θ} log p(D|θ), where N_{θ} denotes the number of estimated parameters for a particular model (the estimated parameters form a subvector of θ). For the adaptive I&F model N_{θ} = 4, for the non-adaptive I&F model N_{θ} = 2, and for the Poisson model N_{θ} = 1. The preferred model from a set of candidate models is the one with the smallest AIC value.
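The AIC comparison reduces to a few lines; a sketch with hypothetical maximized log-likelihood values:

```python
def aic(max_log_likelihood, n_params):
    """Akaike information criterion: 2*N_theta - 2*max log-likelihood."""
    return 2 * n_params - 2 * max_log_likelihood

def preferred_model(candidates):
    """Pick the candidate with the smallest AIC.
    candidates: dict name -> (maximized log-likelihood, number of fitted params)."""
    return min(candidates, key=lambda m: aic(*candidates[m]))
```

The parameter-count penalty means a richer model (e.g., the adaptive I&F with N_θ = 4) is preferred only if its likelihood gain outweighs the extra parameters.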
Estimating neuronal input statistics from in vivo data
We used single unit spike trains from extracellular recordings of two adult female ferrets in an awake, spontaneous state. The animals were listening to acoustic stimuli separated by periods of silence lasting 0.4 s which we used for model fitting. Neural activity from primary auditory cortex was recorded using a 24-channel electrode and spikes were sorted using an automatic clustering algorithm followed by a manual adjustment of the clusters (for details see ref. ^{52}). Spike trains with >50 ISIs during silence periods were considered for fitting. Seventy-one single units passed that threshold in each of two behavioral conditions (passive listening vs. engaged in a discrimination task). Model neurons were fit in either behavioral condition separately, resulting in 142 sets of estimated parameters. We employed the leaky I&F model (Eqs. (5) and (6)) with constant background input mean μ. For robust estimation we used the central 95% of ISIs, ensuring that ISIs lasted >2.5 ms. For comparison we considered a Poisson process with constant rate and compared the quality of the models using the AIC.
In vitro ground truth data on input perturbations
We used whole-cell recordings of pyramidal neurons in slices of rat visual cortex. Ten neurons were stimulated with an input current that consisted of transient bumps reflecting an aEPSC immersed in background noise (for details see ref. ^{50}: experiment 1). The background noise was generated as in Methods section “In vitro ground truth data on neuronal input statistics” with τ_{I} = 5 ms, μ_{I} tuned to maintain ~5 spikes/s, and σ_{I} adjusted to produce membrane voltage fluctuations with ~15–20 mV peak-to-peak amplitude. aEPSC traces were generated by convolving a simulated presynaptic spike train with a synaptic kernel described by the difference of two exponentials (rise time 1 ms, decay time 10 ms). Presynaptic spikes were generated by a gamma renewal process (shape 2, scale 2.5) with 5 spikes/s on average; aEPSC amplitudes triggered by a single spike ranged from 0.1 σ_{I} to 1.5 σ_{I}. Current was injected in segments of 46 s length with at least 10 repetitions per aEPSC strength (except for one cell). The first and last 3 s of each segment were discarded from the analysis, as in ref. ^{50}.
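The aEPSC stimulus design can be sketched as follows. This is an illustration, not the experimental code: the gamma scale is derived here from the target rate rather than taken verbatim from the experiment, and all names and defaults are assumptions.

```python
import numpy as np

def aepsc_trace(T=40000.0, dt=0.1, rate=5.0, shape=2.0,
                tau_r=1.0, tau_d=10.0, amp=1.0, rng=None):
    """Artificial-EPSC current: gamma-renewal presynaptic spikes (scale
    chosen so the mean rate matches `rate` in spikes/s) convolved with a
    difference-of-exponentials kernel (rise tau_r, decay tau_d; times in ms).
    Returns the current trace and the presynaptic spike times."""
    if rng is None:
        rng = np.random.default_rng(3)
    mean_isi = 1000.0 / rate                       # ms
    isis = rng.gamma(shape, mean_isi / shape, size=int(3 * T / mean_isi) + 10)
    spk = np.cumsum(isis)
    spk = spk[spk < T]
    # presynaptic pulse train on the simulation grid
    pulses = np.zeros(int(T / dt))
    np.add.at(pulses, (spk / dt).astype(int), 1.0)
    # difference-of-exponentials kernel, normalized to unit peak amplitude
    t_k = np.arange(0.0, 10 * tau_d, dt)
    kern = np.exp(-t_k / tau_d) - np.exp(-t_k / tau_r)
    kern *= amp / kern.max()
    return np.convolve(pulses, kern)[:pulses.size], spk
```

Scaling `amp` between 0.1 σ_{I} and 1.5 σ_{I} and adding the Ornstein-Uhlenbeck background noise reproduces the overall stimulus structure.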
Using only the presynaptic and postsynaptic spike times, we fitted an I&F neuron where input perturbations were described using delta pulses or alpha functions (cf. Methods section “Modeling input perturbations”). For the former model we applied method 1a, for the latter method 2. Similarly to ref. ^{50} we defined detection time as the minimal total length of spike train data for which the model with input perturbations (J ≠ 0) yields a larger likelihood on test data compared to the respective model without input perturbations (J = 0); this was indicated by a positive log-likelihood ratio on test data from 10-fold cross-validation. In addition, we assessed detection performance on a fixed amount of data that consisted of five consecutive segments (i.e., 200 s recording duration), where adjacent five-segment blocks shared one segment. Coupling strength z-scores were computed using estimates from surrogate data generated by perturbing the presynaptic spike times by a temporal jitter (cf. Methods section “Network model and inference details”) in a large number of realizations.
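Under one reading of this criterion (an assumption, not necessarily the authors' exact procedure), detection time is the length of the first prefix of test segments whose summed log-likelihood ratio is positive:

```python
import numpy as np

def detection_time(ll_coupled, ll_uncoupled, seg_len):
    """Minimal total data length at which the coupled model (J != 0) wins:
    first prefix of test segments with a positive cumulative log-likelihood
    ratio. ll_* hold per-segment test log-likelihoods (e.g., from 10-fold
    cross-validation); seg_len is the segment duration. Returns None if
    the coupled model never wins."""
    llr = np.cumsum(np.asarray(ll_coupled) - np.asarray(ll_uncoupled))
    winners = np.nonzero(llr > 0)[0]
    return (winners[0] + 1) * seg_len if winners.size else None
```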
In vivo ground truth data on synaptic connections
We used combined juxtacellular–extracellular recordings of neuronal ensembles from the hippocampal CA1 region in awake mice (for details see ref. ^{51}). Neurons were separated into PYRs and INTs according to their spike waveform and spiking statistics. Spikes were evoked in single PYRs by short current pulses (50–100 ms) applied at intervals of variable length using juxtacellular electrodes while recording extracellular spikes of local INTs. PYR spikes which occurred during a stimulus were considered as evoked, and those which occurred at all other times were considered as spontaneous. All spikes that occurred during sharp-wave ripple events were discarded from the analyses and we only considered INTs that fired at least 3 spikes/s on average. A total of 78 PYR-INT pairs were included for estimation of synaptic couplings.
For each INT we fitted a leaky I&F neuron receiving background input and (potential) synaptic input from the recorded PYR such that each presynaptic spike causes a delayed postsynaptic potential with delay d and size J (cf. Eqs. (26–29) with c = 0, where we omit the indices i, j here for simplicity).
To account for changes in background input statistics over the recording duration, which lasted up to ~2 h, and to reflect low-frequency network co-modulation induced by common network drive, the background mean input was allowed to vary over time. The parameters to be estimated are thus μ(t), J, d, and σ. Estimation consisted of three steps. First, we inferred the statistics μ(t) and σ of background inputs for J = 0 in the following way. We computed the empirical instantaneous spike rate r(t) of the INT from the observed spike train via kernel density estimation using a Gaussian kernel with width σ_{G} ∈ {0.1, 0.5, 1} s. The estimated empirical spike rate varies over time much more slowly than the timescale at which changes of mean input translate to changes of spike rate in the I&F model. This justifies the approximation r(t) ≈ \({r_{\infty}}\)(μ(t)|θ) (cf. Methods section “Method 2: derived spike rate model”), which allowed us to efficiently evaluate the spike train likelihood for fixed σ by applying method 1 with mean input assumed constant within each ISI, given by \(\mu (t) = r_\infty ^{-1}(r(t)|{\boldsymbol{\theta }})\) (at the center between consecutive spike times). The likelihood was then maximized with respect to σ. Given the parameters for the background inputs (one value of μ per ISI and one for σ), in the second step we maximized the likelihood of the full model with respect to J and d using method 1a. In the third step we assessed the significance of synaptic coupling estimates using surrogate data, similarly to the previous section. We perturbed the presynaptic spike times by a small temporal jitter (random values between −5 and +5 ms) and re-estimated J and d. This was repeated 100 times and z-scores were computed from the estimated coupling strengths. Notably, since spike times are shifted by only small values, effects due to network co-modulation, which occurs on a slower timescale, are preserved in the surrogate data.
In this way we obtained a coupling strength z-score for each PYR-INT pair and for each of the three values of σ_{G}.
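The jitter-based significance test is generic in the coupling estimator; a sketch, where `estimate_J` stands for any function mapping (possibly jittered) presynaptic spike times to a coupling estimate (the estimator in the test below is a hypothetical toy, not the I&F likelihood fit):

```python
import numpy as np

def coupling_zscore(estimate_J, pre_spikes, jitter=5.0, n_surr=100, rng=None):
    """z-score a coupling estimate against surrogates in which presynaptic
    spike times are jittered by small uniform offsets in [-jitter, jitter] ms,
    which destroys fine-timescale coupling but preserves slow network
    co-modulation."""
    if rng is None:
        rng = np.random.default_rng(4)
    pre_spikes = np.asarray(pre_spikes, float)
    j_obs = estimate_J(pre_spikes)
    j_surr = np.array([
        estimate_J(pre_spikes + rng.uniform(-jitter, jitter, pre_spikes.size))
        for _ in range(n_surr)])
    return (j_obs - j_surr.mean()) / (j_surr.std() + 1e-12)
```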
We validated our results against ground truth connection labels obtained from juxtacellular evoked activity using a model-free method based on spike train CCGs^{51} (for details see Supplementary Methods section 4). Based on these labels we computed ROC curves as well as ACC and BACC (cf. Methods section “Network model and inference details”) using a classification (z-score) threshold value \(J_{{\mathrm{thresh}}}^z\). Accordingly, an estimated connection was considered present if \(\hat J^z \, > \, J_{{\mathrm{thresh}}}^z\), where \(\hat J^z\) denotes the connection strength (z-score) estimate for a given PYR-INT pair. Note that the ground truth labels indicate excitatory connections (positives) and absent connections (negatives).
To test the validity of our approach we estimated connectivity using only the first evoked PYR spikes of each stimulation pulse, which are maximally decoupled from network comodulation, and compared the results with the ground truth labels. This assessment yielded excellent agreement, with ACC and BACC values of up to 0.97 and 0.95, respectively (for σ_{G} = 0.1 s).
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
No experimental data were collected for this study. The study involved available datasets from previous experimental studies with ethical approval granted. These datasets are available either online (https://doi.org/10.6084/m9.figshare.1144467) or from the authors on reasonable request.
Code availability
Python code for our methods and estimation examples is available under a free license at https://github.com/neuromethods/inference-for-integrate-and-fire-models.
References
 1.
Chichilnisky, E. J. A simple white noise analysis of neuronal light responses. Network 12, 199–213 (2001).
 2.
Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P. & Brown, E. N. A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J. Neurophysiol. 93, 1074–1089 (2005).
 3.
Pillow, J. W. et al. Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999 (2008).
 4.
Latimer, K. W., Yates, J. L., Meister, M. L. R., Huk, A. C. & Pillow, J. W. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 349, 184–187 (2015).
 5.
Aljadeff, J., Lansdell, B. J., Fairhall, A. L. & Kleinfeld, D. Analysis of neuronal spike trains, deconstructed. Neuron 91, 221–259 (2016).
 6.
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544 (1952).
 7.
Gouwens, N. W. et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat. Commun. 9, 710 (2018).
 8.
Prinz, A. A., Billimoria, C. P. & Marder, E. Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J. Neurophysiol. 90, 3998–4015 (2003).
 9.
Marder, E. & Taylor, A. L. Multiple models to capture the variability in biological neurons and networks. Nat. Neurosci. 14, 133–138 (2011).
 10.
Fourcaud-Trocmé, N., Hansel, D., van Vreeswijk, C. & Brunel, N. How spike generation mechanisms determine the neuronal response to fluctuating inputs. J. Neurosci. 23, 11628–11640 (2003).
 12.
Badel, L. et al. Dynamic I–V curves are reliable predictors of naturalistic pyramidal-neuron voltage traces. J. Neurophysiol. 99, 656–666 (2008).
 12.
Platkiewicz, J. & Brette, R. A threshold equation for action potential initiation. PLoS Comput. Biol. 6, e1000850 (2010).
 13.
Richardson, M. J. E., Brunel, N. & Hakim, V. From subthreshold to firing-rate resonance. J. Neurophysiol. 89, 2538–2554 (2003).
 14.
Ladenbauer, J., Augustin, M. & Obermayer, K. How adaptation currents change threshold, gain and variability of neuronal spiking. J. Neurophysiol. 111, 939–953 (2014).
 15.
Ostojic, S. et al. Neuronal morphology generates high-frequency firing resonance. J. Neurosci. 35, 7056–7068 (2015).
 16.
Ladenbauer, J. & Obermayer, K. Weak electric fields promote resonance in neuronal spiking activity: analytical results from two-compartment cell and network models. PLoS Comput. Biol. 15, e1006974 (2019).
 17.
Brette, R. & Gerstner, W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol. 94, 3637–3642 (2005).
 18.
Naud, R., Marcille, N., Clopath, C. & Gerstner, W. Firing patterns in the adaptive exponential integrate-and-fire model. Biol. Cybern. 99, 335–347 (2008).
 19.
Harrison, P. M., Badel, L., Wall, M. J. & Richardson, M. J. E. Experimentally verified parameter sets for modelling heterogeneous neocortical pyramidal-cell populations. PLoS Comput. Biol. 11, e1004165 (2015).
 20.
Teeter, C. et al. Generalized leaky integrate-and-fire models classify multiple neuron types. Nat. Commun. 9, 709 (2018).
 21.
Jolivet, R. et al. The quantitative single-neuron modeling competition. Biol. Cybern. 99, 417–426 (2008).
 22.
Gerstner, W. & Naud, R. How good are neuron models? Science 326, 379–380 (2009).
 23.
Pospischil, M., Piwkowska, Z., Bal, T. & Destexhe, A. Comparison of different neuron models to conductance-based post-stimulus time histograms obtained in cortical pyramidal cells using dynamic-clamp in vitro. Biol. Cybern. 105, 167–180 (2011).
 24.
Pozzorini, C. et al. Automated high-throughput characterization of single neurons by means of simplified spiking models. PLoS Comput. Biol. 11, e1004275 (2015).
 25.
de Solages, C. et al. High-frequency organization and synchrony of activity in the Purkinje cell layer of the cerebellum. Neuron 58, 775–788 (2008).
 26.
Giridhar, S., Doiron, B. & Urban, N. N. Timescale-dependent shaping of correlation by olfactory bulb lateral inhibition. Proc. Natl Acad. Sci. USA 108, 5843–5848 (2011).
 27.
Litwin-Kumar, A., Chacron, M. J. & Doiron, B. The spatial structure of stimuli shapes the timescale of correlations in population spiking activity. PLoS Comput. Biol. 8, e1002667 (2012).
 28.
Potjans, T. C. & Diesmann, M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb. Cortex 24, 785–806 (2014).
 29.
Bendor, D. The role of inhibition in a computational model of an auditory cortical neuron during the encoding of temporal information. PLoS Comput. Biol. 11, e1004197 (2015).
 30.
Blot, A. et al. Time-invariant feedforward inhibition of Purkinje cells in the cerebellar cortex in vivo. J. Physiol. 10, 2729–2749 (2016).
 31.
Kanashiro, T., Ocker, G. K., Cohen, M. R. & Doiron, B. Attentional modulation of neuronal variability in circuit models of cortex. eLife 6, e23978 (2017).
 32.
Brunel, N. & Hakim, V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput. 11, 1621–1671 (1999).
 33.
Brunel, N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 8, 183–208 (2000).
 34.
Izhikevich, E. M. & Edelman, G. Large-scale model of mammalian thalamocortical systems. Proc. Natl Acad. Sci. USA 105, 3593–3598 (2008).
 35.
Litwin-Kumar, A. & Doiron, B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat. Neurosci. 15, 1498–1505 (2012).
 36.
Doiron, B., Litwin-Kumar, A., Rosenbaum, R., Ocker, G. K. & Josić, K. The mechanics of state-dependent neural correlations. Nat. Neurosci. 19, 383–393 (2016).
 37.
Schmuker, M., Pfeil, T. & Nawrot, M. P. A neuromorphic network for generic multivariate data classification. Proc. Natl Acad. Sci. USA 111, 2081–2086 (2014).
 38.
Gütig, R. Spiking neurons can discover predictive features by aggregate-label learning. Science 351, aab4113 (2016).
 39.
Gilra, A. & Gerstner, W. Predicting nonlinear dynamics by stable local learning in a recurrent spiking neural network. eLife 6, e28295 (2017).
 40.
Bellec, G., Salaj, D., Subramoney, A., Legenstein, R. & Maass, W. Long short-term memory and learning-to-learn in networks of spiking neurons. Adv. Neural Inf. Process. Syst. 31, 787–797 (2018).
 41.
Neftci, E. et al. Synthesizing cognition in neuromorphic electronic systems. Proc. Natl Acad. Sci. USA 110, E3468–E3476 (2013).
 42.
Nawrocki, R. A., Voyles, R. M. & Shaheen, S. E. A mini review of neuromorphic architectures and implementations. IEEE Trans. Electron Devices 63, 3819–3829 (2016).
 43.
Davies, M. et al. Loihi: a neuromorphic many-core processor with on-chip learning. IEEE Micro 38, 82–99 (2018).
 44.
Gerstner, W., Kistler, W. M., Naud, R. & Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. (Cambridge University Press, Cambridge, UK, 2014).
 45.
Hawrylycz, M. et al. Inferring cortical function in the mouse visual system through large-scale systems neuroscience. Proc. Natl Acad. Sci. USA 113, 7337–7344 (2016).
 46.
Druckmann, S. et al. A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front. Neurosci. 1, 7–18 (2007).
 47.
Rossant, C. et al. Fitting neuron models to spike trains. Front. Neurosci. 5, 1–8 (2011).
 48.
Stringer, C. et al. Inhibitory control of correlated intrinsic variability in cortical networks. eLife 5, e19695 (2016).
 49.
Mensi, S., Hagens, O., Gerstner, W. & Pozzorini, C. Enhanced sensitivity to rapid input fluctuations by nonlinear threshold dynamics in neocortical pyramidal neurons. PLoS Comput. Biol. 12, e1004761 (2016).
 50.
Volgushev, M., Ilin, V. & Stevenson, I. H. Identifying and tracking simulated synaptic inputs from neuronal firing: insights from in vitro experiments. PLoS Comput. Biol. 11, e1004167 (2015).
 51.
English, D. F. et al. Pyramidal cell-interneuron circuit architecture and dynamics in hippocampal networks. Neuron 96, 505–520 (2017).
 52.
Bagur, S. et al. Go/No-Go task engagement enhances population representation of target stimuli in primary auditory cortex. Nat. Commun. 9, 2529 (2018).
 53.
Millar, R. B. Maximum Likelihood Estimation and Inference (Wiley, 2011).
 54.
Brunel, N. & Van Rossum, M. C. Lapicque’s 1907 paper: from frogs to integrate-and-fire. Biol. Cybern. 97, 337–339 (2007).
 55.
Gigante, G., Mattia, M. & Del Giudice, P. Diverse population-bursting modes of adapting spiking neurons. Phys. Rev. Lett. 98, 148101 (2007).
 56.
Richardson, M. J. E. Spike-train spectra and network response functions for non-linear integrate-and-fire neurons. Biol. Cybern. 99, 381–392 (2008).
 57.
Augustin, M., Ladenbauer, J., Baumann, F. & Obermayer, K. Low-dimensional spike rate models derived from networks of adaptive integrate-and-fire neurons: comparison and implementation. PLoS Comput. Biol. 13, e1005545 (2017).
 58.
Ostojic, S. & Brunel, N. From spiking neuron models to linear-nonlinear models. PLoS Comput. Biol. 7, e1001056 (2011).
 59.
Mattia, M. & Del Giudice, P. Population dynamics of interacting spiking neurons. Phys. Rev. E 66, 051917 (2002).
 60.
Burkitt, A. N. A review of the integrateandfire neuron model: I. Homogeneous synaptic input. Biol. Cybern. 95, 1–19 (2006).
 61.
Paninski, L., Pillow, J. W. & Simoncelli, E. P. Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model. Neural Comput. 16, 2533–2561 (2004).
 62.
Pillow, J. W., Paninski, L., Uzzell, V. J., Simoncelli, E. P. & Chichilnisky, E. J. Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J. Neurosci. 25, 11003–11013 (2005).
 63.
Tuckwell, H. C. Introduction to Theoretical Neurobiology. (Cambridge University Press, Cambridge, UK, 1988).
 64.
Ecker, A. S. et al. Decorrelated neuronal firing in cortical microcircuits. Science 327, 584–587 (2010).
 65.
Cohen, M. R. & Kohn, A. Measuring and interpreting neuronal correlations. Nat. Neurosci. 14, 811–819 (2011).
 66.
Zaytsev, Y. V., Morrison, A. & Deger, M. Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity. J. Comput. Neurosci. 39, 77–103 (2015).
 67.
Madison, D. V. & Nicoll, R. A. Control of the repetitive discharge of rat CA1 pyramidal neurones in vitro. J. Physiol. 354, 319–331 (1984).
 68.
Schwindt, P. C., Spain, W. J., Foehring, R. C., Chubb, M. C. & Crill, W. E. Slow conductances in neurons from cat sensorimotor cortex in vitro and their role in slow excitability changes. J. Neurophysiol. 59, 450–467 (1988).
 69.
Stocker, M. Ca(2+)activated K+ channels: molecular determinants and function of the SK family. Nat. Rev. Neurosci. 5, 758–770 (2004).
 70.
Schwindt, P. C., Spain, W. J. & Crill, W. E. Calcium-dependent potassium currents in neurons from cat sensorimotor cortex. J. Neurophysiol. 67, 216–226 (1992).
 71.
La Camera, G. et al. Multiple time scales of temporal response in pyramidal and fast spiking cortical neurons. J. Neurophysiol. 96, 3448–3464 (2006).
 72.
Dong, Y., Mihalas, S., Russell, A., Etienne-Cummings, R. & Niebur, E. Estimating parameters of generalized integrate-and-fire neurons from the maximum likelihood of spike trains. Neural Comput. 23, 2833–2867 (2011).
 73.
Mullowney, P. & Iyengar, S. Parameter estimation for a leaky integrate-and-fire neuronal model from ISI data. J. Comput. Neurosci. 24, 179–194 (2008).
 74.
Kim, H. & Shinomoto, S. Estimating nonstationary input signals from a single neuronal spike train. Phys. Rev. E 86, 051903 (2012).
 75.
Carlson, K. D., Nageswaran, J. M., Dutt, N. & Krichmar, J. L. An efficient automated parameter tuning framework for spiking neural networks. Front. Neurosci. 8, 1–15 (2014).
 76.
Lueckmann, J.-M. et al. Flexible statistical inference for mechanistic models of neural dynamics. Adv. Neural Inf. Process. Syst. 30, 1289–1299 (2017).
 77.
Cocco, S., Leibler, S. & Monasson, R. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proc. Natl Acad. Sci. USA 106, 14058–14062 (2009).
 78.
Monasson, R. & Cocco, S. Fast inference of interactions in assemblies of stochastic integrate-and-fire neurons from spike recordings. J. Comput. Neurosci. 31, 199–227 (2011).
 79.
Vidne, M. et al. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. J. Comput. Neurosci. 33, 97–121 (2012).
 80.
Stevenson, I. H. et al. Functional connectivity and tuning curves in populations of simultaneously recorded neurons. PLoS Comput. Biol. 8, e1002775 (2012).
 81.
Gerhard, F., Deger, M. & Truccolo, W. On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs. PLoS Comput. Biol. 13, e1005390 (2017).
 82.
Mensi, S., Naud, R. & Gerstner, W. From stochastic nonlinear integrate-and-fire to generalized linear models. Adv. Neural Inf. Process. Syst. 24, 1377–1385 (2011).
 83.
Pernice, V. & Rotter, S. Reconstruction of sparse connectivity in neural networks from spike train covariances. J. Stat. Mech. 3, P03008 (2013).
 84.
Pastore, V. P., Massobrio, P., Godjoski, A. & Martinoia, S. Identification of excitatory-inhibitory links and network topology in large-scale neuronal assemblies from multielectrode recordings. PLoS Comput. Biol. 14, e1006381 (2018).
 85.
Casadiego, J., Maoutsa, D. & Timme, M. Inferring network connectivity from event timing patterns. Phys. Rev. Lett. 121, 054101 (2018).
 86.
Ostojic, S., Brunel, N. & Hakim, V. How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J. Neurosci. 29, 10234–10253 (2009).
 87.
Pernice, V., Staude, B., Cardanobile, S. & Rotter, S. How structure determines correlations in neuronal networks. PLoS Comput. Biol. 7, e1002059 (2011).
 88.
Trousdale, J., Hu, Y., SheaBrown, E. & Josić, K. Impact of network structure and cellular response on spike time correlations. PLoS Comput. Biol. 8, e1002408 (2012).
 89.
Tetzlaff, T., Helias, M., Einevoll, G. T. & Diesmann, M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput. Biol. 8, e1002596 (2012).
 90.
Rosenbaum, R., Smith, M. A., Kohn, A., Rubin, J. E. & Doiron, B. The spatial structure of correlated neuronal variability. Nat. Neurosci. 20, 107–114 (2017).
 91.
Ocker, G. K., Josić, K., SheaBrown, E. & Buice, M. A. Linking structure and activity in nonlinear spiking networks. PLoS Comput. Biol. 13, e1005583 (2017).
 92.
Huang, C. et al. Circuit models of low-dimensional shared variability in cortical networks. Neuron 101, 1–12 (2019).
 93.
Ocker, G. K. et al. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr. Opin. Neurobiol. 46, 109–119 (2017).
 94.
Brinkman, B. A., Rieke, F., SheaBrown, E. & Buice, M. A. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput. Biol. 14, e1006490 (2018).
 95.
Donner, C., Opper, M. & Ladenbauer, J. Inferring the dynamics of neural populations from single-trial spike trains using mechanistic models. Cosyne Abstract, Lisbon, PT. Full preprint at https://doi.org/10.1101/671909 (2019).
 96.
Risken, H. The Fokker-Planck Equation: Methods of Solution and Applications. (Springer, Berlin, 1996).
 97.
Ostojic, S. Interspike interval distributions of spiking neurons driven by fluctuating inputs. J. Neurophysiol. 106, 361–373 (2011).
 98.
Schaffer, E. S., Ostojic, S. & Abbott, L. F. A complex-valued firing-rate model that approximates the dynamics of spiking networks. PLoS Comput. Biol. 9, e1003301 (2013).
 99.
Nelder, J. A. & Mead, R. A simplex method for function minimization. Comput. J. 7, 308–313 (1965).
 100.
Stimberg, M., Goodman, D. F. M., Benichoux, V. & Brette, R. Equation-oriented specification of neural models for simulations. Front. Neuroinform. 8, 1–14 (2014).
 101.
Oliphant, T. E. Python for scientific computing. Comput. Sci. Eng. 9, 10–20 (2007).
 102.
Lam, S. K., Pitrou, A. & Seibert, S. Numba: a LLVM-based Python JIT compiler. In Proc. LLVM Compil. Infrastruct. HPC, 1–6 (2015).
 103.
Tamamaki, N. et al. Green fluorescent protein expression and colocalization with calretinin, parvalbumin, and somatostatin in the GAD67-GFP knock-in mouse. J. Comp. Neurol. 467, 60–79 (2003).
 104.
Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 19, 716–723 (1974).
Acknowledgements
We thank Dimitra Maoutsa for her support on the GLM implementation. This work was supported by Deutsche Forschungsgemeinschaft in the framework of Collaborative Research Center 910, the Programme Emergences of the City of Paris, ANR project MORSE (ANR-16-CE37-0016), and the program “Ecoles Universitaires de Recherche” launched by the French Government and implemented by the ANR, with the reference ANR-17-EURE-0017. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the paper.
Author information
Contributions
J.L. and S.O. designed the study. J.L. led the project, developed and implemented the methods, performed the evaluations, validations and benchmark tests, and wrote the paper. S.M., D.E., and O.H. provided electrophysiological data. S.O. supervised the study and edited the paper. S.M. and O.H. provided feedback on the paper.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information (41467_2019_12572_MOESM1_ESM.pdf)
Peer Review File (41467_2019_12572_MOESM2_ESM.pdf)
Reporting Summary (41467_2019_12572_MOESM3_ESM.pdf)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ladenbauer, J., McKenzie, S., English, D.F. et al. Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 10, 4933 (2019). https://doi.org/10.1038/s41467-019-12572-0
Further reading
Channel current fluctuations conclusively explain neuronal encoding of internal potential into spike trains. Physical Review E (2021).
Statistical Analysis of Decoding Performances of Diverse Populations of Neurons. Neural Computation (2021).
Estimating Transfer Entropy in Continuous Time Between Neural Spike Trains or Other Event-Based Data. PLOS Computational Biology (2021).
Model-based detection of putative synaptic connections from spike recordings with latency and type constraints. Journal of Neurophysiology (2020).
Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics. Entropy (2020).