Abstract
The recent increase in reliable, simultaneous high channel count extracellular recordings is exciting for physiologists and theoreticians because it offers the possibility of reconstructing the underlying neuronal circuits. We recently presented a method of inferring this circuit connectivity from neuronal spike trains by applying the generalized linear model to cross-correlograms. Although the algorithm can do a good job of circuit reconstruction, its parameters need to be carefully tuned for each individual dataset. Here we present another method using a Convolutional Neural Network for Estimating synaptic Connectivity from spike Trains (CoNNECT). After training on a large amount of simulated data, this method robustly captures the specific feature of a monosynaptic impact in a noisy cross-correlogram. There are no user-adjustable parameters. With this new method, we have constructed diagrams of neuronal circuits recorded in several cortical areas of monkeys.
Introduction
More than half a century ago, Perkel, Gerstein, and Moore^{1} pointed out that by measuring the influence of one neuron on another through a cross-correlogram, physiologists could infer the strength of the connection between the neurons. If this were done for many pairs of neurons, a map of the neuronal circuitry could be built. Now, with the advent of high-quality simultaneous recording from large arrays of neurons, it may at last be possible to map the structure of neuronal circuits.
The original cross-correlation method can give plausible inferences about connections. In many cases, however, it also tends to suggest connections that are spurious, i.e., false positives (FPs). There are many possible sources for this lack of reliability and specificity, such as large fluctuations produced by external signals or higher-order interactions among neurons. Over the years, there have been many attempts to minimize the presence of such spurious connections, by shuffling spike trains^{2}, by jittering spike times^{3,4,5,6}, or by taking fluctuating inputs into account^{7,8,9,10,11,12,13}. These, in general, helped eliminate the FPs, but they then tended to be conservative, giving rise to false negatives (FNs), i.e., missing existing connections.
Recently, we developed an estimation method by applying the generalized linear model (GLM) to each cross-correlogram^{14}. This method, which we call GLMCC, works well in balancing the conflicting demands of reducing FPs and reducing FNs, demonstrating that the cross-correlogram image actually contains sufficient information from which to infer the presence of monosynaptic connectivity. GLMCC nonetheless has a shortcoming: the estimation results are sensitive to the model parameters, and therefore the parameters need to be tuned for each spiking dataset.
Here, we develop another method: Convolutional Neural Network for Estimating synaptic Connectivity from spike Trains (CoNNECT). The premise is that a convolutional neural network is good at capturing the features important for distinguishing among different categories of images^{15,16,17,18}; we apply it to a cross-correlogram, expecting that it is capable of detecting the signature of a monosynaptic impact in the one-dimensional cross-correlogram image. Our new method, CoNNECT, is easy to use, and it works robustly with data arising from different cortical regions in non-human primates. The convolutional neural network has tens of thousands of internal parameters, which can be adjusted using hundreds of thousands of pairs of spike trains generated with a large-scale simulation of a circuitry of realistic model neurons. To reproduce the large fluctuations seen in real spike trains, we added external fluctuations to the model neurons in the simulation.
CoNNECT promptly provides reasonable inferences. It does not, however, give a rationale for how the result was derived, whereas our previous algorithm GLMCC does, because it fits an interaction kernel to the cross-correlogram. These methods, therefore, have different strengths and weaknesses and can be used in combination in a complementary manner. Namely, the inference given by CoNNECT can be used to guide GLMCC in searching for suitable parameters, and GLMCC can provide the interpretation.
We evaluated the accuracy of estimation by comparing the inference with the true connections, using synthetic data generated by simulating circuitries of model neurons, and compared the performance of CoNNECT with that of GLMCC, as well as the classical cross-correlogram method^{19,20}, the Jittering method^{4,5}, and an extended GLM method^{13}. After confirming the performance of the model, we applied CoNNECT to parallel spike signals recorded from three cortical areas of monkeys and obtained estimates of the local neuronal circuitry among many neurons. We have found that the connections among recorded units are sparse: less than 1% for all three datasets.
Results
Training and validating with synthetic data
CoNNECT infers the presence or absence of monosynaptic connections between a pair of neurons and estimates the amplitude of the postsynaptic potential (PSP) that one neuron would evoke in another. The estimation is performed by applying a convolutional neural network^{15,16,17,18} to a cross-correlogram obtained for every pair of spike trains (Fig. 1a). The network has an output layer consisting of two units. One unit indicates the presence or absence of connectivity with a real value \(z \in [0,1]\), thresholded at 0.5; the other gives the amplitude of the PSP in millivolts (mV). The network was trained with spike trains generated by a numerical simulation of a network of multiple-timescale adaptive threshold (MAT) model neurons^{21,22,23,24,25} interacting through fixed synapses. In a large-scale simulation, we applied fluctuating inputs to a subset of neurons to reproduce the large fluctuations seen in real spike trains in vivo (Fig. 1b). Figure 1c–e show sample spike trains, histograms of the firing rates of excitatory and inhibitory neurons^{26}, and the firing irregularity measured in terms of the local variation of the interspike intervals, Lv^{27,28}. The training data contain few low-firing-rate neurons, reflecting the fact that low-firing units are often discarded when analyzing real data. The details of the learning procedure are summarized in the “Methods” section.
We validated the estimation performance of CoNNECT using novel spike trains generated by another neuronal circuit with different connections. Figure 2a depicts an estimated connection matrix, referenced to the true connection matrix, of 50 neurons. Here, the estimation was done with spike trains recorded for 120 min. Of the 50 spike trains, 40 and 10 were sampled from 800 excitatory and 200 inhibitory neurons, respectively. Figure 2b compares the estimated PSPs against the true values. An estimated PSP is presented as 0 if no connection is detected. Points lying on the nonzero x-axis are existing connections that were not detected, i.e., FNs. Points lying on the nonzero y-axis are spurious connections assigned to unconnected pairs, i.e., FPs. Figure 2c depicts how the numbers of FNs and FPs for the excitatory and inhibitory categories changed with the recording duration, i.e., the length of the spike trains (10, 30, and 120 min). While the number of FPs (spurious connections) depends little on the recording duration, the number of FNs (missed connections) decreased with recording time, implying that more synaptic connections of weaker strength are revealed by increasing the recording time.
Comparison with other estimation methods
There are many algorithms that have been developed to estimate synaptic connections from spike trains^{1,2,3,4,5,6,7,8,9,10,11,12,13,19,29,30,31}. We compared CoNNECT with the conventional cross-correlation method (CC)^{19}, the Jittering method^{4}, the Extended GLM^{13}, and GLMCC^{14} for their ability to estimate connectivity using synthetic data. Figure 3 shows connection matrices determined by the four methods referenced to the true connection matrices. In the lower panels, we show the performances in terms of the false positive (discovery) rate (FPR) and the false negative (omission) rate (FNR) for the excitatory and inhibitory categories; smaller values are better. Here we estimated the mean and SD of the performance by applying each method to 8 test datasets of 50 neurons. Overall performance with respect to FPR and FNR was measured in terms of the Matthews correlation coefficient (MCC) (see “Methods” section). The MCCs for these estimation methods are shown in the rightmost panel; a larger MCC is better. For evaluating the performances, we adopted spiking data generated by a network of MAT models and a network of Hodgkin–Huxley (HH) type models (“Methods” section). In computing the numbers of FPs and FNs, we ignored small excitatory connections, which are inherently difficult to discern with this observation duration. We took the lower thresholds as 0.1 mV for the MAT simulation and 1 mV for the HH simulation so that the visible connectivity is about 10%, but the relative performances of the different methods are unchanged even if we change the thresholds.
The conventional cross-correlation analysis produced many FPs, revealing a vulnerability to fluctuations in cross-correlograms. The Jittering method succeeded in avoiding FPs but missed many existing connections, thus generating many FNs. The Extended GLM method with its given parameters was also rather conservative. In comparison with these methods, GLMCC and CoNNECT have better performance, producing small numbers of FPs and FNs and a larger MCC value. Here we have modified GLMCC so that it achieves higher performance than the original algorithm^{14} by using the likelihood ratio test to determine statistical significance (“Methods” section). When comparing these two algorithms, GLMCC was slightly conservative, producing more FNs, while CoNNECT tended to suggest more connections, producing more FPs.
The modified GLMCC was better than CoNNECT for the HH model data (Fig. 3b), but the converse was true for the MAT model data (Fig. 3a). This might be because CoNNECT was trained using MAT model data of a similar kind, whereas GLMCC was constructed with reference to an HH model simulation. Although the performance was examined with independent datasets, two HH model simulations are more similar to each other than an HH model simulation is to a MAT model simulation. As the GLMCC parameters were selected with an HH model simulation, it naturally works better for HH data than for MAT simulation data, and vice versa.
Comparison of different learning conditions
While the convolutional neural network has the advantage that tens of thousands of parameters can be suitably adjusted to reproduce given datasets, this does not guarantee generalization capability. We evaluated the generalization capability of our convolutional network by changing the number of output channels, representing the degree of system adaptability, from 1 to 10. Figure 4a depicts the numbers of FPs and FNs (top) and the overall performance measured in terms of the MCC (bottom), obtained for the MAT model simulation data (left panel) and the Hodgkin–Huxley model simulation data (right panel). We observe that the convolutional network with 1 channel slightly underfits because of its limited flexibility, whereas the network with 10 channels slightly overfits the data, exhibiting a slightly lower MCC. We therefore employed the network of 5 channels, consisting of about fifty thousand parameters.
Selection of training data sets
To make the convolutional network applicable to a wider variety of data, we trained the network using cross-correlograms augmented by rescaling the time axis. Figure 4b depicts the estimation performances of networks trained using the cross-correlations rescaled to 1/4 and 1/2 of the original timescale (indicated as “1/4” and “1/2”), the original cross-correlations (“1”), and all of the data (“1/4 + 1/2 + 1”). The networks trained only on the rescaled data exhibited lower performance. We adopted the network trained with all the data (“1/4 + 1/2 + 1”) because it gave the highest performance in estimating connectivity.
Crosscorrelograms
To observe the situations in which different estimation methods succeeded or failed in detecting the presence or absence of synaptic connectivity, we examined sample cross-correlograms of neuron pairs from a network of MAT model neurons. Figure 5 depicts neuron pairs that exhibited various patterns, including pathological cases. The majority of neuron pairs are successful cases, shown in the upper part of the figure. Some cross-correlograms from this simulation exhibited large fluctuations resembling those seen in real biological data. These were produced by the external fluctuations added to a subset of neurons, making the connectivity inference difficult. The inference results obtained by the four estimation methods are distinguished with colors; magenta, cyan, and gray indicate that the estimated connection was excitatory, inhibitory, or absent, respectively. We also superimposed a GLM function fitted to each cross-correlogram.
Figure 5a,b depict sample cross-correlograms of neuron pairs that are connected with excitatory and inhibitory synapses, respectively. For the first three cross-correlograms from the top, all four estimation methods succeeded in detecting the excitatory or inhibitory connections, thus making true positive (TP) estimations. For the fourth case, the Jittering method failed to detect the connection. This reflects the fact that the Jittering method is rather conservative about producing FPs and, as a result, produces many FNs. In this case, the cross-correlation method (CC) mistook the excitatory synapse for an inhibitory one due to the large wavy fluctuation in the cross-correlogram. For the last cases, all four estimation methods failed to detect the connections, resulting in FNs. This would have been because the original connections were not strong enough to produce significant impacts on the cross-correlograms.
Figure 5c depicts sample cross-correlograms of unconnected pairs. For the first two cross-correlograms, all four estimation methods judged the absence of connections correctly (that is, the null hypothesis of the absence of a connection was not rejected), resulting in true negatives (TNs). For the third pair, the CC suggested the presence of a connection, resulting in an FP. This demonstrates that the conventional cross-correlation method is fragile in the presence of large fluctuations. For the fourth and last cases, the CC, GLMCC, and CoNNECT suggested monosynaptic connections. The sharp peaks appearing in the cross-correlograms would have been caused by indirect interactions via other neurons. In such cases, however, it is difficult to discern the absence of a monosynaptic connection solely from the cross-correlogram.
Analyzing experimental data
We examined spike trains recorded from the prefrontal (PF), inferior temporal (IT), and primary visual (V1) cortices of monkeys using Utah arrays. The experimental conditions for the individual datasets are summarized in the “Methods” section. Because neurons with low firing rates do not provide enough evidence for connectivity, we excluded low-firing units and examined only those that fired at more than 1 Hz.
Preprocessing experimental data
Some of the experimentally available cross-correlograms exhibit a sharp drop near the origin for a few ms due to the shadowing effect, in which near-synchronous spikes cannot be detected^{32}. This effect disrupts the estimation of synaptic impacts that should appear near the origin of the cross-correlogram. The data, obtained with a sorting algorithm specifically used for the Utah array, exhibit rather broad shadowing effects larger than 1 ms (up to 1.75 ms). Here, we analyzed the experimental data by removing the interval of \(0 \pm 2\) ms in the cross-correlogram and applying the estimation method to a cross-correlogram obtained by concatenating the remaining left and right parts (Fig. 6a,b). We performed the same operation in the analysis of synthetic data.
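This excision-and-concatenation step can be sketched as follows. The function name and the bin convention (1 ms bins covering \([-50, 50)\) ms, with the 0 ms bin edge at the array center) are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def excise_shadowing(cc, bin_ms=1.0, half_width_ms=50.0, excise_ms=2.0):
    """Drop the +/-excise_ms interval around 0 ms of a cross-correlogram
    and concatenate the remaining left and right parts.

    `cc` holds spike counts in bins of width `bin_ms` covering
    [-half_width_ms, half_width_ms) ms.
    """
    n_bins = int(2 * half_width_ms / bin_ms)
    assert cc.shape[0] == n_bins, "unexpected histogram length"
    center = n_bins // 2               # index of the first bin at or after 0 ms
    k = int(excise_ms / bin_ms)        # number of bins removed on each side
    return np.concatenate([cc[:center - k], cc[center + k:]])

cc = np.arange(100)                    # a dummy 100-bin histogram
trimmed = excise_shadowing(cc)
print(trimmed.shape)                   # (96,)
```

The estimator then operates on the 96-bin concatenated histogram rather than the raw 100-bin one.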
Figure 6c shows the cross-correlograms of sample neuron pairs for which both CoNNECT and GLMCC estimated the connections to be excitatory, inhibitory, or absent (unconnected). The real cross-correlograms are accompanied by large fluctuations. Nevertheless, CoNNECT and GLMCC are able to detect the likely presence or absence of synaptic interaction despite these severe fluctuations.
Connection matrices
Figure 7 depicts the estimated connections for the entire three datasets of PF, IT, and V1. The units in the connection matrices are arranged in the order provided by the sorting algorithm, and accordingly, units with neighboring indexes in the matrices tend to be spatially close. All three connection matrices have more entries near the diagonal, implying that neurons at nearby locations are more likely to be connected. The firing rate and irregularity (the local variation of the interspike intervals, Lv^{27,28}) are shown in the rightmost panels. The summary statistics in Table 1 reflect differences in firing rate between excitatory and inhibitory cells in PF and IT but not in V1. The firing irregularity of excitatory neurons is slightly higher than that of inhibitory neurons, consistent with previous results.
Table 1 summarizes the statistics of the three datasets. Each neuron is assigned as putative excitatory, putative inhibitory, or undetermined, according to whether the excitatory–inhibitory (E–I) dominance index is positive, negative, or undetermined (or zero), respectively. Here, the E–I dominance index is defined as \(d_{\mathrm{ei}} = (n_{\mathrm{e}} - n_{\mathrm{i}})/(n_{\mathrm{e}} + n_{\mathrm{i}})\), in which \(n_{\mathrm{e}}\) and \(n_{\mathrm{i}}\) represent the numbers of identified excitatory and inhibitory connections projecting from each unit, respectively^{14}. The row “num. connections” indicates the average number of innervated connections per neuron. Because the number of innervated connections for each neuron is only a few, the majority of \(d_{\mathrm{ei}}\) values are either \(-1\) or 1. Though we have obtained many connections, the total number of all pairs is enormous, scaling with the square of the number of units, and accordingly, the connectivity is sparse (less than 1% of all (directed) pairs of neurons).
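The E–I dominance index can be computed per unit as below; returning None for a unit with no identified outgoing connections (the "undetermined" case) is our own convention for the sketch.

```python
def ei_dominance(n_e, n_i):
    """d_ei = (n_e - n_i) / (n_e + n_i) for one unit, where n_e and n_i
    count the identified excitatory and inhibitory outgoing connections."""
    if n_e + n_i == 0:
        return None            # no evidence: leave the unit undetermined
    return (n_e - n_i) / (n_e + n_i)

print(ei_dominance(3, 0))      # 1.0  -> putative excitatory
print(ei_dominance(0, 2))      # -1.0 -> putative inhibitory
print(ei_dominance(0, 0))      # None -> undetermined
```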
In contrast to synthetic data, the currently available experimental data do not contain information about the true connectivity. To examine the stability of the estimation, we split the recordings in half and compared the connections estimated from each half. If the real connectivity is stable, we may expect the estimated connections to overlap between the first and second halves. Figure 8a shows the connection matrices obtained from the first and second halves of the spike trains recorded from PF, IT, and V1. Figure 8b compares the estimated PSPs in the two periods. Many estimated connections appear in only one of the two halves. This might be due simply to statistical fluctuation or to real changes in synaptic connectivity. Nevertheless, it is noteworthy that the excitatory connections of large amplitude were detected relatively consistently between the first and second halves. Namely, they appear in the first and third quadrants near the diagonal, implying that they have the same signs and similar amplitudes.
Discussion
Here we have devised a new method for estimating synaptic connections based on a convolutional neural network. While this method does not require adjusting parameters for individual data, it robustly provides a reasonable estimate of synaptic connections by tolerating large fluctuations in the data. This high performance was obtained by training a convolutional neural network using a considerable amount of training data generated by simulating a network of model spiking neurons subject to fluctuating current.
We compared CoNNECT with the conventional crosscorrelation method, the Jittering method, Extended GLM, and GLMCC in their ability to estimate connectivity, using synthetic data obtained by simulating neuronal circuitries of fixed synaptic connections. Both CoNNECT and GLMCC exhibited high performance in predicting individual synaptic connections, superior to other methods.
Then we applied CoNNECT to simultaneously recorded spike trains recorded from monkeys using the Utah arrays. We have found that the connections among recorded units are sparse; they are less than 1% for all three datasets. To test the reliability of the estimation, we divided the entire recording interval in half and estimated connections for respective intervals. We have seen that strong excitatory connections overlap between the periods. This result implies that the estimation is reliable for the strong connectivity, and the connectivity lasts at least for hours.
The cross-correlograms of real biological data (Fig. 6) turned out to be even more complicated than those of synthetic data (Fig. 5), which were generated by adding large fluctuations to individual neurons (Fig. 1). The complicated features of real cross-correlograms were due not only to fluctuations in the real circuitry but also to the sorting algorithm. The most severe bottleneck in estimating connectivity may have been the shadowing effect of a few ms, in which near-synchronous spikes were not detected (Fig. 6a); this effect may hide the first part of a monosynaptic impact, which is expected to appear within a few ms in a cross-correlogram. If the sorting algorithm is improved such that the shadowing duration is shortened, the estimation may become more reliable.
In this study, we have employed a convolutional neural network to capture the specific signature of a monosynaptic impact in a cross-correlogram image. While the convolutional network is known to be robust against translation of images, the monosynaptic impact is expected to appear at a specific location in the cross-correlogram, namely with a delay of a few milliseconds. Thus it might be an interesting challenge to search for other learning algorithms that exploit such information and perform better than the convolutional network.
We used a data augmentation technique^{33} to increase the number of training examples artificially. Data augmentation is known to improve performance on various tasks in computer vision^{18,34} and acoustic signal processing^{35,36}. Here we augmented the cross-correlation data by rescaling the time axis to capture diverse synaptic interactions. This augmentation also improved the estimation performance for synaptic connectivity (Fig. 4b). Recently, several authors have proposed more systematic approaches to data augmentation, e.g., generating augmented data using generative adversarial networks (GANs)^{37,38} and learning the data augmentation policy^{39}. Though these approaches focus on image classification tasks and require vast computational resources, it may be interesting to apply these techniques to pursue advanced data augmentation methods for synaptic connectivity estimation.
So far, we have little knowledge of the neuronal circuitry in the brain. By collecting more data from high channel count recordings and applying these reliable analysis methods to them, we shall be able to obtain information about the neuronal circuitry in different brain regions and learn about the network characteristics and information flow in each area. Ultimately, we expect to characterize how networks in different brain regions process various kinds of information.
Methods
Configuration of a neural network for estimating synaptic connectivity
Here we describe the details of the four-layered convolutional neural network^{15,16,17,18} applied to cross-correlograms obtained for every pair of spike trains to estimate the presence or absence of a connection and its postsynaptic potential (PSP) (Fig. 1). The neural network learns to find a bump or dent in the cross-correlogram caused by a monosynaptic connection.
Specifically, the input consists of 100 integer values of the spike counts in a cross-correlation histogram over the interval \([-50, 50]\) ms with a 1 ms bin size. The network comprises a one-dimensional convolution layer, an average-pooling layer, and a fully connected internal layer of 100 nodes. The output layer consists of two units; one indicates the presence or absence of connectivity with a real value \(z \in [0,1]\), and the other gives the amplitude of the PSP in millivolts (mV).
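For concreteness, the forward pass of such a network can be sketched in NumPy with random (untrained) weights. Only the 100-bin input, the convolution, the average pooling, the 100-node internal layer, and the two output units follow the text; the kernel length and pooling width are our own illustrative assumptions (the 5-channel choice matches the configuration adopted above), and a trained network would of course use learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

K, C, P, H = 10, 5, 2, 100      # kernel length, channels, pool width, hidden nodes
L_conv = 100 - K + 1            # 'valid' convolution output length (91)
L_pool = L_conv // P            # pooled length per channel (45)

W_conv = rng.normal(0.0, 0.1, (C, K))
W_fc = rng.normal(0.0, 0.1, (H, C * L_pool))
W_out = rng.normal(0.0, 0.1, (2, H))

def forward(cc):
    """Map a 100-bin cross-correlogram to (z, psp)."""
    conv = np.array([np.convolve(cc, w[::-1], mode="valid") for w in W_conv])
    conv = np.maximum(conv, 0.0)                              # ReLU
    pooled = conv[:, :L_pool * P].reshape(C, L_pool, P).mean(axis=2)
    h = np.maximum(W_fc @ pooled.ravel(), 0.0)                # hidden layer
    z_raw, psp = W_out @ h
    z = 1.0 / (1.0 + np.exp(-z_raw))                          # connectivity in [0, 1]
    return z, psp

z, psp = forward(rng.poisson(5.0, size=100).astype(float))
print(0.0 <= z <= 1.0)                                        # True
```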
Training the convolutional neural network
We ran a numerical simulation of a network of 1000 neurons interacting through fixed synapses under various conditions and trained the neural network with spike trains from 400 units selected from the entire network. Thus, we constructed cross-correlograms of about 80,000 pairs, each assigned teaching signals consisting of the true information about the presence or absence of connectivity (represented as \(z = 1\) or 0, respectively) and its PSP value in either direction. The training was performed using Adam^{40}, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. The parameters adopted in the learning are summarized in Table 2. Details of the architecture are summarized in Table 3. Figure 9 shows a set of convolutional kernels learned from the training data. From the set of learned kernels, we can see specific features of the monosynaptic impact of a few milliseconds appearing in a cross-correlogram. It is also interesting to see a kernel exhibiting a roughly monotonic gradient (the second panel from the left). This might have served to detrend the large slow fluctuations in the cross-correlogram produced by our simulation, which aimed at reproducing realistic conditions.
Data augmentation
To make the estimation method applicable to a wider range of data, we performed data augmentation^{18,34,35,36,37} (see^{33} for a review). Namely, we augmented the data by rescaling the cross-correlations by factors of 2 and 4 and used all the data, including the original data, in the learning.
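A minimal sketch of such time rescaling, implemented here by summing groups of adjacent bins so that the total spike count is preserved while the time axis is coarsened; the exact augmentation procedure used for the paper may differ.

```python
import numpy as np

def rescale_time(cc, factor):
    """Merge `factor` adjacent bins of a cross-correlogram, preserving the
    total spike count while coarsening the time axis by `factor`."""
    n = (cc.shape[0] // factor) * factor       # drop any trailing remainder
    return cc[:n].reshape(-1, factor).sum(axis=1)

cc = np.arange(8)
print(rescale_time(cc, 2))   # [ 1  5  9 13]
```

Applied to a 100-bin correlogram, this yields coarser histograms that mimic pairs whose interaction timescales differ from the original.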
Webapplication program
A ready-to-use version of the web application, the source code, and example datasets are available at our website, https://sshinomoto.com/CONNECT/, and are also hosted publicly on GitHub, accessible via https://github.com/shigerushinomoto. The simulation code is also available in the same GitHub repository.
Improvement of GLMCC
Original framework of GLMCC
In the previous study^{14}, we developed a method of estimating the connectivity by fitting the generalized linear model to a cross-correlogram, GLMCC. We designed the GLM function as

$$c(t) = \exp \left( a(t) + J_{12} f(t) + J_{21} f(-t) \right),$$

where t is the time from the spikes of the reference neuron. \(a(t)\) represents large-scale fluctuations in the cross-correlogram in a window \([-W, W]\) (\(W = 50\) ms). By discretizing the time in units of \(\Delta (= 1\ {\mathrm{ms}})\), a(t) is represented as a vector \({\vec{a}} = (a_1, a_2, \ldots, a_M)\) (\(M = 2W/\Delta\)). \(J_{12}\) (\(J_{21}\)) represents a possible synaptic connection from the reference (target) neuron to the target (reference) neuron. The temporal profile of the synaptic interaction is modeled as \(f(t) = \exp \left( -\frac{t-d}{\tau} \right)\) for \(t > d\) and \(f(t) = 0\) otherwise, where \(\tau\) is the typical timescale of the synaptic impact and d is the transmission delay. Here we have chosen \(\tau = 4\) ms and let the synaptic delay d be selected from 1, 2, 3, and 4 ms for each pair.
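The synaptic interaction kernel \(f(t)\), with timescale \(\tau = 4\) ms and delay d, can be written directly (times in ms):

```python
import numpy as np

def synaptic_kernel(t, d, tau=4.0):
    """f(t) = exp(-(t - d) / tau) for t > d, and 0 otherwise."""
    t = np.asarray(t, dtype=float)
    return np.where(t > d, np.exp(-(t - d) / tau), 0.0)

t = np.array([0.0, 1.0, 2.0, 6.0])
print(synaptic_kernel(t, d=1.0))   # [0, 0, exp(-1/4), exp(-5/4)]
```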
The parameters \(\vec{\theta} = \{J_{12}, J_{21}, \vec{a}\}\) are determined with the maximum a posteriori (MAP) estimate, that is, by maximizing the posterior distribution or its logarithm:

$$\log p(\vec{\theta} \mid \{ t_i \}) = \log p(\{ t_i \} \mid \vec{\theta}) + \log p(\vec{\theta}) + \mathrm{const.},$$

where \(\{ t_i \}\) are the relative spike times. The log-likelihood is obtained as

$$\log p(\{ t_i \} \mid \vec{\theta}) = \sum_i \log c(t_i) - n_{\mathrm{pre}} \int_{-W}^{W} c(t)\, dt,$$
where \(n_{\mathrm{pre}}\) is the number of spikes of the presynaptic neuron (j). Here we have provided a prior distribution for \(\vec{a}\) that penalizes a large gradient of a(t), together with a uniform prior for \(\{ J_{12}, J_{21} \}\):

$$p(\vec{a}) \propto \exp \left( -\frac{1}{2\gamma} \int_{-W}^{W} \left( \frac{da(t)}{dt} \right)^{2} dt \right),$$

where the hyperparameter \(\gamma\), representing the degree of flatness of a(t), was chosen as \(\gamma = 2 \times 10^{-4}\) [ms\(^{-1}\)].
Likelihood ratio test
The likely presence of connectivity can be determined by disproving the null hypothesis that a connection is absent. In the original model, this was performed by thresholding the estimated parameters with \(\hat{J}_{ij} > 1.57 z_{\alpha} (\tau T \lambda_{\text{pre}} \lambda_{\text{post}})^{-1/2}\), where \(z_{\alpha}\), T, \(\lambda_{\text{pre}}\), and \(\lambda_{\text{post}}\) are a threshold for the normal distribution, the recording time, and the firing rates of the pre- and postsynaptic neurons, respectively. But we realized that this thresholding method might induce a large asymmetry in detectability between excitatory and inhibitory connections.
Instead of simple thresholding, here we introduce the likelihood ratio test, a general method for testing hypotheses (Chapter 11 of^{41}, see also^{42}): we compute the likelihood ratio between the presence of connectivity \(J_{ij} = \hat{J}_{ij}\) and the absence of connectivity \(J_{ij} = 0\), or its logarithm:

$$D = \log L^* ( J_{ij} = \hat{J}_{ij} ) - \log L^* ( J_{ij} = 0 ),$$

where \(L^* \left( J_{ij} = c \right)\) in each case is the likelihood obtained by optimizing all the other parameters under the constraint \(J_{ij} = c\). It was proven that 2D obeys the \(\chi^{2}\) distribution in the large sample limit (Wilks' theorem)^{43}. Accordingly, we may reject the null hypothesis if \(2D > z_{\alpha}\), where \(z_{\alpha}\) is the threshold of the \(\chi^2\) distribution at a significance level \(\alpha\). Here we have adopted \(\alpha = 10^{-4}\).
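The decision rule can be sketched with SciPy's \(\chi^2\) quantile function; using one degree of freedom corresponds to the single constrained parameter \(J_{ij}\), and the function and argument names are our own.

```python
from scipy.stats import chi2

def connection_is_significant(logL_hat, logL_null, alpha=1e-4):
    """Likelihood ratio test for a single connection.

    logL_hat  : maximized log-likelihood with J_ij free
    logL_null : maximized log-likelihood constrained to J_ij = 0
    Rejects the null (no connection) when 2D exceeds the chi-squared
    threshold with 1 degree of freedom (Wilks' theorem).
    """
    D = logL_hat - logL_null
    z_alpha = chi2.ppf(1.0 - alpha, df=1)   # ~15.14 for alpha = 1e-4
    return 2.0 * D > z_alpha

print(connection_is_significant(-1000.0, -1010.0))   # 2D = 20 -> True
print(connection_is_significant(-1000.0, -1002.0))   # 2D = 4  -> False
```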
Model validation
The performance of CoNNECT was evaluated using synthetic data generated by independent simulations. The presence or absence of connectivity in each direction is decided by whether or not the output value \(z \in [0,1]\) exceeds a threshold \(\theta\). It is possible to reduce the number of FPs by shifting the threshold \(\theta\) to a high level, but this operation may produce many FNs, causing many existing connections to be missed. To balance the false positives and false negatives, we considered maximizing the Matthews correlation coefficient (MCC)^{44}, as was done in our previous study^{14}. The MCC is defined as

$$MCC = \frac{N_{\mathrm{TP}} N_{\mathrm{TN}} - N_{\mathrm{FP}} N_{\mathrm{FN}}}{\sqrt{(N_{\mathrm{TP}} + N_{\mathrm{FP}})(N_{\mathrm{TP}} + N_{\mathrm{FN}})(N_{\mathrm{TN}} + N_{\mathrm{FP}})(N_{\mathrm{TN}} + N_{\mathrm{FN}})}},$$

where \(N_{\mathrm{TP}}\), \(N_{\mathrm{TN}}\), \(N_{\mathrm{FP}}\), and \(N_{\mathrm{FN}}\) represent the numbers of true positive, true negative, false positive, and false negative connections, respectively.
We obtained two coefficients, for the excitatory and inhibitory categories, and took the macro-average MCC that gives equal importance to these categories^{45}, \(MCC = (MCC_{\mathrm{E}} + MCC_{\mathrm{I}})/2\), as we did in the previous study^{14}. In computing the coefficient for the excitatory category, \(MCC_{\mathrm{E}}\), we classify connections as excitatory or other (unconnected and inhibitory); for the inhibitory category, \(MCC_{\mathrm{I}}\), we classify connections as inhibitory or other (unconnected and excitatory). Here we evaluate \(MCC_{\mathrm{E}}\) by considering only excitatory connections of reasonable strength (EPSP > 0.1 mV for the MAT simulation and > 1 mV for the HH simulation).
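The coefficient and its macro-average follow directly from the four counts; this is a generic implementation, not the authors' code.

```python
import math

def mcc(n_tp, n_tn, n_fp, n_fn):
    """Matthews correlation coefficient for one category."""
    denom = math.sqrt((n_tp + n_fp) * (n_tp + n_fn)
                      * (n_tn + n_fp) * (n_tn + n_fn))
    return (n_tp * n_tn - n_fp * n_fn) / denom if denom > 0 else 0.0

def macro_mcc(counts_e, counts_i):
    """Macro-average MCC; each argument is (TP, TN, FP, FN) for a category."""
    return 0.5 * (mcc(*counts_e) + mcc(*counts_i))

print(round(mcc(10, 80, 5, 5), 3))   # 0.608
```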
We confirmed that the Matthews correlation coefficient exhibits a wide peak around \(\theta \sim 0.5\) (Fig. 10), and accordingly, we adopted \(\theta = 0.5\) as the threshold.
A largescale simulation of a network of MAT neurons
To obtain a large number of spike trains that have resulted under the influence of synaptic connections between neurons, we ran a numerical simulation of a network of 1000 model neurons interacting through fixed synapses. Of these, 800 excitatory neurons each innervate 12.5% of the other neurons with EPSPs that are log-normally distributed^{14,46,47,48,49}, whereas 200 inhibitory neurons each randomly innervate 25% of the other neurons with IPSPs that are normally distributed.
Neuron model
As the spiking neuron model, we adopted the MAT model, which is superior to the Hodgkin–Huxley model in reproducing and predicting the spike times of real biological neurons in response to fluctuating inputs^{21,23}. In addition, its numerical simulation is stable and fast. The membrane potential of each neuron obeys a simple relaxation equation following the input signal:
where \(g_{e}\) and \(g_{i}\) represent the excitatory and inhibitory conductances, respectively, and \(RI_{\mathrm{bg}}\) represents the background noise. Each conductance evolves according to

$$\tau _{s,X} \frac{dg_{X}}{dt} = -g_{X} + \sum_{j} G_{j} \sum_{k} \delta (t - t_{jk} - d_{j}),$$

where \(\tau _{s, X}\) is the synaptic time constant, \(X\) stands for \(e\) (excitatory) or \(i\) (inhibitory), \(t_{jk}\) is the \(k\)th spike time of the \(j\)th neuron, \(d_{j}\) is the synaptic delay, \(G_{j}\) is the synaptic weight from the \(j\)th neuron, and \(\delta (t)\) is the Dirac delta function.
Next, the adaptive threshold of each neuron \(\theta \left( t \right)\) obeys the following equation:
$$\theta (t) = \omega + \sum_{j} \sum_{k} \alpha _{k} \exp\!\left(-\frac{t - t_{j}}{\tau _{k}}\right),$$

where \(t_j\) is the \(j\)th spike time of the neuron (the sum over \(j\) runs over all past spikes \(t_j < t\)), \(\omega\) is the resting value of the threshold, \(\tau _k\) is the \(k\)th time constant, and \(\alpha _k\) is the weight of the \(k\)th component. The parameter values are summarized in Table 4.
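The adaptive threshold above can be sketched in a few lines. The parameter values below are illustrative placeholders only; the values actually used in the simulation are those of Table 4.

```python
import math

# Illustrative placeholder parameters (the values used in the paper are in Table 4)
OMEGA = 19.0          # resting threshold (mV)
ALPHA = (37.0, 2.0)   # weights of the two threshold components
TAU = (10.0, 200.0)   # corresponding time constants (ms)

def adaptive_threshold(t, spike_times):
    """Threshold at time t: the resting value omega plus exponentially
    decaying contributions alpha_k * exp(-(t - t_j)/tau_k) from every
    past spike t_j < t."""
    theta = OMEGA
    for t_j in spike_times:
        if t_j < t:
            theta += sum(a * math.exp(-(t - t_j) / tau)
                         for a, tau in zip(ALPHA, TAU))
    return theta

# With no spikes the threshold sits at its resting value; immediately
# after a spike it jumps by roughly sum(ALPHA) and then decays back.
```

Because the threshold, rather than the membrane equation, carries the spike-history dependence, the voltage dynamics stay linear, which is what makes the model's numerical simulation stable and fast.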
Synaptic connections
We ran a simulation of a network consisting of 800 pyramidal neurons and 200 interneurons interconnected with fixed strengths. Each neuron receives 100 excitatory inputs randomly selected from the 800 pyramidal neurons and 50 inhibitory inputs randomly selected from the 200 interneurons. The excitatory and inhibitory synaptic connections were sampled from their respective distributions so that the resulting EPSPs and IPSPs are similar to the distributions adopted in our previous study^{14}. In particular, the excitatory conductances \(\{G_{ij}^{E}\}\) were sampled independently from a log-normal distribution^{46,47}:
$$p(G) = \frac{1}{\sqrt{2\pi}\,\sigma G} \exp\!\left(-\frac{(\ln G - \mu)^2}{2\sigma ^2}\right),$$

where \(\mu = -5.543\) and \(\sigma = 1.30\) are the mean and SD of the natural logarithm of the conductances.
The inhibitory conductances \(\{G_{ij}^{I}\}\) were sampled from the normal distribution:
$$p(G) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(G - \mu)^2}{2\sigma ^2}\right),$$

where \(\mu = 0.0217\) mS cm\(^{-2}\) and \(\sigma = 0.00171\) mS cm\(^{-2}\) are the mean and SD of the conductances. If a sampled value is less than zero, the conductance is resampled from the same distribution. The delays of the synaptic connections from excitatory neurons are drawn from a uniform distribution between 3 and 5 ms; the delays of the synaptic connections from inhibitory neurons are drawn from a uniform distribution between 2 and 4 ms.
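The sampling procedure above is easy to reproduce with NumPy (the paper's own network-construction code was written in C++). A sketch follows; note one labeled assumption: we take \(\mu\) as the mean of \(\ln G\) with a sign such that conductances land in the mS cm\(^{-2}\) range, and all values should be treated as illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_excitatory_conductances(n, mu=-5.543, sigma=1.30):
    """Excitatory conductances drawn from a log-normal distribution;
    mu and sigma are the mean and SD of ln(G) (sign of mu assumed)."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

def sample_inhibitory_conductances(n, mu=0.0217, sigma=0.00171):
    """Inhibitory conductances drawn from a normal distribution
    (mS/cm^2); negative draws are resampled, as described in the text."""
    g = rng.normal(mu, sigma, size=n)
    neg = g < 0
    while neg.any():
        g[neg] = rng.normal(mu, sigma, size=neg.sum())
        neg = g < 0
    return g

# Each neuron receives 100 excitatory and 50 inhibitory inputs,
# with uniformly distributed synaptic delays (ms).
g_exc = sample_excitatory_conductances(100)
g_inh = sample_inhibitory_conductances(50)
d_exc = rng.uniform(3.0, 5.0, size=100)
d_inh = rng.uniform(2.0, 4.0, size=50)
```

Resampling (rather than clipping at zero) preserves the shape of the normal distribution near its mean, at the cost of a slight upward bias that is negligible here because \(\mu \gg \sigma\).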
Background noise
Because our model network is smaller than real mammalian cortical networks, we added a background current to represent inputs from many neurons, as previously done by Destexhe et al.^{11,50}.
The summed conductance \(RI_{\mathrm{bg}}\) represents random bombardment from a number of excitatory and inhibitory neurons. The dynamics of the excitatory or inhibitory conductance can be approximated as a stationary fluctuating process, represented as an Ornstein–Uhlenbeck process^{51},

$$\frac{dg_{X}}{dt} = -\frac{g_{X} - g_{X,0}}{\tau _{X}} + \sqrt{D_{X}}\,\xi (t),$$

where \(g_{X}\) stands for \(g_{e}\) or \(g_{i}\), \(g_{X,0}\) is the mean conductance, \(\tau _{X}\) is the relaxation time constant, \(D_{X}\) is the noise intensity, and \(\xi (t)\) is white Gaussian noise satisfying \(\langle \xi (t) \rangle = 0\) and \(\langle \xi (t) \xi (s) \rangle = \delta (t-s)\).
Real biological data exhibit a wide variety of fluctuations, including nontrivial large variations with characteristic timescales. For instance, hippocampal neurons are subject to the theta oscillation in the frequency range of 3–10 Hz^{52}. To reproduce such oscillations, which are also observed in the cross-correlogram, we introduced slow oscillations into the background noise for excitatory neurons, as
where \(\xi _1(t)\) and \(\xi _2(t)\) are white Gaussian noises satisfying \(\langle \xi _i(t) \rangle = 0\) and \(\langle \xi _i(t) \xi _j(s) \rangle = \delta _{ij} \delta (t-s)\).
Among the \(N=1000\) neurons, we added such oscillating background signals to three subgroups of 100 neurons (80 excitatory and 20 inhibitory neurons each), at 7, 10, and 20 Hz, respectively. The phases of the oscillation \(\delta\) were chosen randomly from a uniform distribution. The amplitudes of the oscillations were chosen randomly from a uniform distribution on the interval \([\tilde{A}/2, 3\tilde{A}/2]\). The parameters for the background inputs are summarized in Table 5.
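A minimal Euler–Maruyama sketch of such an Ornstein–Uhlenbeck background conductance with a slow sinusoidal modulation of its mean is shown below. All numerical values (mean, time constant, noise amplitude, modulation depth) are illustrative placeholders; the values actually used are listed in Table 5, and the exact way the oscillation enters the background is an assumption here.

```python
import numpy as np

def ou_background(T=2000.0, dt=0.1, g0=0.01, tau=5.0, sigma=0.002,
                  amp=0.0, freq_hz=0.0, phase=0.0, seed=0):
    """Ornstein-Uhlenbeck conductance relaxing to g0 with time constant
    tau (ms), driven by white Gaussian noise of amplitude sigma, with an
    optional sinusoidal modulation of the mean (amp, freq_hz, phase).
    Integrated with the Euler-Maruyama scheme at step dt (ms)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    g = np.empty(n)
    g[0] = g0
    for i in range(1, n):
        t_ms = i * dt
        mean = g0 + amp * np.sin(2.0 * np.pi * freq_hz * t_ms / 1000.0 + phase)
        g[i] = (g[i - 1]
                - (g[i - 1] - mean) / tau * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return g

# e.g. a 10 Hz modulated background trace for one subgroup of neurons:
trace = ou_background(amp=0.002, freq_hz=10.0)
```

Note the \(\sqrt{dt}\) scaling of the noise increment, which is what makes the discretized process converge to the continuous-time OU process as \(dt \to 0\).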
Numerical simulation
Simulation codes were written in C++ and parallelized with the OpenMP framework. The time step was 0.1 ms. The neural activity was simulated for 7200 s.
Experimental data
Spike trains were recorded from the PF, IT, and V1 cortices of monkeys in three experimental laboratories using Utah arrays. All studies were carried out in compliance with the ARRIVE guidelines. The individual experimental settings are summarized as follows.
Prefrontal cortex (PF)
The experiments were carried out on an adult male rhesus macaque (Macaca mulatta, 6.7 kg, age 4.5 y). The monkey had access to food 24 h a day and earned liquid through task performance on testing days. Experimental monkeys were socially pair-housed. All experimental procedures were performed in accordance with the ILAR Guide for the Care and Use of Laboratory Animals and were approved by the Animal Care and Use Committee of the National Institute of Mental Health (U.S.A.). Procedures adhered to applicable United States federal and local laws, including the Animal Welfare Act (1990 revision) and applicable Regulations (PL89-544; USDA 1985) and Public Health Service Policy (PHS2002). Eight 96-electrode arrays (Utah arrays, \(10 \times 10\) arrangement, 400 μm pitch, 1.5 mm depth, Blackrock Microsystems, Salt Lake City, U.S.A.) were implanted on the prefrontal cortex following previously described surgical procedures^{53}. Briefly, a single bone flap was temporarily removed from the skull to expose the PFC, and the dura mater was then cut open to insert the electrode arrays into the cortical parenchyma. Next, the dura mater was closed and the bone flap was placed back and attached with absorbable suture, thus protecting the brain and the implanted arrays. In parallel, a custom-designed connector holder, 3D-printed using biocompatible material, was implanted onto the posterior portion of the skull. Recordings were made using the Grapevine System (Ripple, Salt Lake City, USA). Two Neural Interface Processors (NIPs) made up the recording system; each NIP (384 channels) was connected to the four multi-electrode arrays of one hemisphere. Synchronizing behavioral codes from MonkeyLogic and eye-tracking signals were split and sent to each NIP. The raw extracellular signal was high-pass filtered (1 kHz cutoff) and digitized (30 kHz) to acquire single-unit activity. Spikes were detected online and the waveforms (snippets) were stored using the Trellis package (Grapevine).
Single units were manually sorted offline using custom MATLAB scripts to define time-amplitude windows, in combination with clustering methods based on PCA feature extraction. Further details about the experiment can be found elsewhere^{54}. Briefly, the recordings were carried out while the animals were comfortably seated in front of a computer screen, performing left or right saccadic eye movements. Each trial started with the presentation of a fixation dot at the center of the screen, and the monkeys were required to fixate. After a variable time (400–800 ms) had elapsed, the fixation dot was toggled off and a cue (white square, \(2^\circ \times 2^\circ\)) was presented either to the left or right of the fixation dot. The monkeys had to make a saccade towards the cue and hold fixation for 500 ms. 70% of the correctly performed trials were rewarded stochastically with a drop of juice (daily total 175–225 mL). Typically, monkeys performed \(>1000\) correct trials in a given recording session, over a recording time of 120–150 min.
Inferior temporal cortex (IT)
The experiments were carried out on an adult male Japanese monkey (Macaca fuscata, 11 kg, age 13 y). The monkey had access to food 24 h a day and earned its liquid during, and additionally after, neural recording experiments on testing days. The monkey was housed in one of a set of adjoining individual primate cages that allowed social interaction. All experimental procedures were approved by the Animal Care and Use Committee of the National Institute of Advanced Industrial Science and Technology (Japan) and were implemented in accordance with the "Guide for the Care and Use of Laboratory Animals" (eighth ed., National Research Council of the National Academies). Four 96-microelectrode arrays (Utah arrays, \(10 \times 10\) layout, 400 μm pitch, 1.5 mm depth, Blackrock Microsystems, Salt Lake City, USA) were surgically implanted on the IT cortex of the left hemisphere. Three arrays were located in area TE and the remaining one in area TEO. Surgical procedures were roughly the same as those described previously^{53}, except that the bone flap temporarily removed from the skull was located over the IT cortex, and that a CILUX chamber was implanted onto the anterior part of the skull to protect the connectors of the arrays. Recordings of neural data and eye positions were made in a single session using the Cerebus\(^{\mathrm{TM}}\) system (Blackrock Microsystems). The extracellular signal was band-pass filtered (250 Hz–7.5 kHz) and digitized (30 kHz). Units were sorted online before the recording session for the extracellular signal of each electrode, using a threshold and time-amplitude windows. Both the spike times and the waveforms (10 and 38 samples before and after a threshold crossing, respectively) of the units were stored using Cerebus Central Suite (Blackrock Microsystems). Single units were refined offline by hand using the PCA projection of the spike waveforms in Offline Sorter\(^{\mathrm{TM}}\) (Plexon Inc., Dallas, USA).
The monkey was seated in a primate chair with its head restrained by a head-holding device so that its eyes were positioned 57 cm in front of a color display (GDM-F520, SONY, Japan). The display subtended a visual angle of \(40^\circ \times 30^\circ\) at a resolution of \(800 \times 600\) pixels. A television series on animals (NHK's Darwin's Amazing Animals, Asahi Shimbun Publications Inc., Japan) was shown on the display throughout the online spike sorting and the recording session. The monkey's eye position was monitored using an infrared pupil-position monitoring system^{55} but was not restricted.
The primary visual cortex (V1)
The data set was obtained from Collaborative Research in Computational Neuroscience (CRCNS), pvc-11^{56}, by courtesy of the authors of ref. 57. In this experiment, spontaneous activity was measured from the primary visual cortex while a monkey viewed a CRT monitor (\(1024 \times 768\) pixels, 100 Hz refresh) displaying a uniform gray screen (luminance roughly 40 cd/m\(^2\)). Briefly, the animal was premedicated with atropine sulfate (0.05 mg/kg) and diazepam (Valium, 1.5 mg/kg) 30 min before the induction of anesthesia with ketamine HCl (10.0 mg/kg). Anesthesia was maintained throughout the experiment by a continuous intravenous infusion of sufentanil citrate. To minimize eye movements, the animal was paralyzed with a continuous intravenous infusion of vecuronium bromide (0.1 mg/kg/h). Vital signs (EEG, ECG, blood pressure, end-tidal PCO\(_2\), temperature, and lung pressure) were monitored continuously. The pupils were dilated with topical atropine and the corneas protected with gas-permeable hard contact lenses. Supplementary lenses were used to bring the retinal image into focus by direct ophthalmoscopy, and the refraction was later adjusted further to optimize the responses of recorded units. Experiments typically lasted 4–5 days. All experimental procedures complied with guidelines approved by the Albert Einstein College of Medicine of Yeshiva University and New York University Animal Welfare Committees.
Spike sorting and analysis criteria: waveform segments were sorted offline with an automated sorting algorithm, which clustered similarly shaped waveforms using a competitive mixture decomposition method^{58}. The output of this algorithm was refined by hand with custom time-amplitude window discrimination software (written in MATLAB; MathWorks) for each electrode, taking into account the waveform shape and the interspike interval distribution. To quantify the quality of the recording, the signal-to-noise ratio (SNR) of each candidate unit was computed as the ratio of the average waveform amplitude to the SD of the waveform noise^{59,60,61}. Candidates that fell below an SNR of 2.75 were discarded as multi-unit recordings.
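One common reading of this SNR criterion is sketched below; refs. 59–61 differ in how "waveform amplitude" and "waveform noise" are measured, so treat this as an illustrative implementation, not the authors' exact pipeline. The synthetic snippets at the end are hypothetical.

```python
import numpy as np

def waveform_snr(waveforms):
    """SNR of a candidate unit: peak-to-peak amplitude of the mean
    waveform divided by the SD of the residuals around that mean
    (one way to read 'average waveform amplitude over waveform noise')."""
    waveforms = np.asarray(waveforms, dtype=float)
    mean_wf = waveforms.mean(axis=0)
    amplitude = mean_wf.max() - mean_wf.min()
    noise_sd = (waveforms - mean_wf).std()
    return amplitude / noise_sd

def is_single_unit(waveforms, threshold=2.75):
    """Apply the criterion from the text: candidates below an SNR of
    2.75 are discarded as multi-unit recordings."""
    return waveform_snr(waveforms) >= threshold

# Synthetic example: a clear spike template plus modest noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 48)
template = 5.0 * np.sin(2.0 * np.pi * t)              # peak-to-peak ~10
snips = template + 0.5 * rng.standard_normal((200, 48))
```

Averaging over snippets before taking the amplitude is what makes the measure robust: the template survives the average while the noise shrinks as \(1/\sqrt{n}\).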
References
Perkel, D. H., Gerstein, G. L. & Moore, G. P. Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophys. J. 7, 419 (1967).
Toyama, K., Kimura, M. & Tanaka, K. Organization of cat visual cortex as investigated by crosscorrelation technique. J. Neurophysiol. 46, 202 (1981).
Grün, S. Data-driven significance estimation for precise spike correlation. J. Neurophysiol. 101, 1126 (2009).
Amarasingham, A., Harrison, M. T., Hatsopoulos, N. G. & Geman, S. Conditional modeling and the jitter method of spike resampling. J. Neurophysiol. 107, 517 (2012).
Schwindel, C. D., Ali, K., McNaughton, B. L. & Tatsuno, M. Long-term recordings improve the detection of weak excitatory-excitatory connections in rat prefrontal cortex. J. Neurosci. 34, 5454 (2014).
Platkiewicz, J., Saccomano, Z., McKenzie, S., English, D. & Amarasingham, A. Monosynaptic inference via finely-timed spikes. J. Comput. Neurosci. https://doi.org/10.1007/s10827-020-00770-5 (2021).
Okatan, M., Wilson, M. A. & Brown, E. N. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Comput. 17, 1927 (2005).
Pillow, J. W. et al. Spatiotemporal correlations and visual signalling in a complete neuronal population. Nature 454, 995 (2008).
Stevenson, I. H., Rebesco, J. M., Miller, L. E. & Körding, K. P. Inferring functional connections between neurons. Curr. Opin. Neurobiol. 18, 582 (2008).
Chen, Z., Putrino, D. F., Ghosh, S., Barbieri, R. & Brown, E. N. Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data. IEEE Trans. Neural Syst. Rehabil. Eng. 19, 121 (2011).
Kobayashi, R. & Kitano, K. Impact of network topology on inference of synaptic connectivity from multi-neuronal spike data simulated by a large-scale cortical network model. J. Comput. Neurosci. 35, 109 (2013).
Zaytsev, Y. V., Morrison, A. & Deger, M. Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity. J. Comput. Neurosci. 39, 77 (2015).
Ren, N., Ito, S., Hafizi, H., Beggs, J. M. & Stevenson, I. H. Model-based detection of putative synaptic connections from spike recordings with latency and type constraints. J. Neurophysiol. 124, 1588 (2020).
Kobayashi, R. et al. Reconstructing neuronal circuitry from parallel spike trains. Nat. Commun. 10, 1 (2019).
Fukushima, K. Neocognitron: a hierarchical neural network capable of visual pattern recognition. Neural Netw. 1, 119 (1988).
LeCun, Y. et al. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 3361, 1995 (1995).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436 (2015).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. Imagenet classification with deep convolutional neural networks. Commun. ACM 60, 84 (2017).
Aertsen, A. M. & Gerstein, G. L. Evaluation of neuronal connectivity: sensitivity of crosscorrelation. Brain Res. 340, 341 (1985).
Reid, R. C. & Alonso, J.M. Specificity of monosynaptic connections from thalamus to visual cortex. Nature 378, 281 (1995).
Kobayashi, R., Tsubo, Y. & Shinomoto, S. Made-to-order spiking neuron model equipped with a multi-timescale adaptive threshold. Front. Comput. Neurosci. 3, 9 (2009).
Gerstner, W. & Naud, R. How good are neuron models? Science 326, 379 (2009).
Omura, Y., Carvalho, M. M., Inokuchi, K. & Fukai, T. A lognormal recurrent network model for burst generation during hippocampal sharp waves. J. Neurosci. 35, 14585 (2015).
Kobayashi, R. & Kitano, K. Impact of slow K+ currents on spike generation can be described by an adaptive threshold model. J. Comput. Neurosci. 40, 347 (2016).
Barta, T. & Kostal, L. The effect of inhibition on rate code efficiency indicators. PLoS Comput. Biol. 15, e1007545 (2019).
Harish, O. & Hansel, D. Asynchronous rate chaos in spiking neuronal circuits. PLoS Comput. Biol. 11, e1004266 (2015).
Shinomoto, S., Shima, K. & Tanji, J. Differences in spiking patterns among cortical neurons. Neural Comput. 15, 2823 (2003).
Mochizuki, Y. et al. Similarity in neuronal firing regimes across mammalian species. J. Neurosci. 36, 5736 (2016).
Stevenson, I. H. Omitted variable bias in GLMs of neural spiking activity. Neural Comput. 30, 3227 (2018).
Baker, C., Froudarakis, E., Yatsenko, D., Tolias, A. S. & Rosenbaum, R. Inference of synaptic connectivity and external variability in neural microcircuits. J. Comput. Neurosci. 48, 123 (2020).
Das, A. & Fiete, I. R. Systematic errors in connectivity inferred from activity in strongly recurrent networks. Nat. Neurosci. 23, 1286 (2020).
Pillow, J. W., Shlens, J., Chichilnisky, E. & Simoncelli, E. P. A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings. PLoS ONE 8, e62123 (2013).
Shorten, C. & Khoshgoftaar, T. M. A survey on image data augmentation for deep learning. J. Big Data 6, 60 (2019).
Wong, S. C., Gatt, A., Stamatescu, V. & McDonnell, M. D. Understanding data augmentation for classification: when to warp? In 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA) 1–6 (IEEE, 2016).
Park, D. S., Chan, W., Zhang, Y., Chiu, C.-C., Zoph, B., Cubuk, E. D. & Le, Q. V. SpecAugment: a simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779 (2019).
McDonnell, M. D. & Gao, W. Acoustic scene classification using deep residual networks with late fusion of separated high and low frequency paths. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 141–145 (IEEE, 2020).
Wang, J. & Perez, L. The effectiveness of data augmentation in image classification using deep learning. Convolut. Neural Netw. Vis. Recognit. 11 (2017).
Frid-Adar, M. et al. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 321, 321 (2018).
Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V. & Le, Q. V. AutoAugment: learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 113–123 (2019).
Kingma, D. P. & Ba, J. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980 (2014).
Kass, R. E., Eden, U. T. & Brown, E. N. Analysis of neural data, Vol. 491 (Springer, 2014).
Volgushev, M., Ilin, V. & Stevenson, I. H. Identifying and tracking simulated synaptic inputs from neuronal firing: insights from in vitro experiments. PLoS Comput. Biol. 11, e1004167 (2015).
Wilks, S. S. The largesample distribution of the likelihood ratio for testing composite hypotheses. Ann. Math. Stat. 9, 60 (1938).
Matthews, B. W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta 405, 442 (1975).
Sun, A. & Lim, E.-P. Hierarchical text classification and evaluation. In Proceedings of ICDM 2001 521–528 (IEEE, 2001).
Song, S., Sjöström, P. J., Reigl, M., Nelson, S. & Chklovskii, D. B. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 3, e68 (2005).
Teramae, J.-N., Tsubo, Y. & Fukai, T. Optimal spike-based communication in excitable networks with strong-sparse and weak-dense links. Sci. Rep. 2, 485 (2012).
Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nat. Rev. Neurosci. 15, 264 (2014).
Uzan, H., Sardi, S., Goldental, A., Vardi, R. & Kanter, I. Stationary lognormal distribution of weights stems from spontaneous ordering in adaptive node networks. Sci. Rep. 8, 1 (2018).
Destexhe, A., Rudolph, M., Fellous, J.-M. & Sejnowski, T. J. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience 107, 13 (2001).
Tuckwell, H. C. Introduction to Theoretical Neurobiology: Volume 2, Nonlinear and Stochastic Theories (Cambridge University Press, Cambridge, 1988).
Goutagny, R., Jackson, J. & Williams, S. Self-generated theta oscillations in the hippocampus. Nat. Neurosci. 12, 1491 (2009).
Mitz, A. R. et al. High channel count single-unit recordings from nonhuman primate frontal cortex. J. Neurosci. Methods 289, 39 (2017).
Bartolo, R., Saunders, R. C., Mitz, A. R. & Averbeck, B. B. Information-limiting correlations in large neural populations. J. Neurosci. 40, 1668 (2020).
Matsuda, K., Nagami, T., Sugase, Y., Takemura, A. & Kawano, K. A widely applicable real-time mono/binocular eye tracking system using a high frame-rate digital camera. In International Conference on Human-Computer Interaction 593–608 (Springer, 2017).
Kohn, A. & Smith, M. A. Utah array extracellular recordings of spontaneous and visually evoked activity from anesthetized macaque primary visual cortex (V1). CRCNS.org. http://dx.doi.org/10.6080/K0NC5Z4X (2016).
Smith, M. A. & Kohn, A. Spatial and temporal scales of neuronal correlation in primary visual cortex. J. Neurosci. 28, 12591 (2008).
Shoham, S., Fellows, M. R. & Normann, R. A. Robust, automatic spike sorting using mixtures of multivariate t-distributions. J. Neurosci. Methods 127, 111 (2003).
Nordhausen, C. T., Maynard, E. M. & Normann, R. A. Single unit recording capabilities of a 100 microelectrode array. Brain Res. 726, 129 (1996).
Suner, S., Fellows, M. R., Vargas-Irwin, C., Nakata, G. K. & Donoghue, J. P. Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. IEEE Trans. Neural Syst. Rehabil. Eng. 13, 524 (2005).
Kelly, R. C. et al. Comparison of recordings from microelectrode arrays and single electrodes in the visual cortex. J. Neurosci. 27, 261 (2007).
Acknowledgements
We thank Masahiro Naito for his technical assistance in developing a web application, Junnosuke Teramae for his advice on the MAT simulation, and Kai Shinomoto for drawing the illustration of a monkey in Figure 1. We also thank Adam Kohn for permitting us to analyze the V1 experimental data and providing detailed information on the experimental conditions; Richard Saunders and Mark Eldridge for performing surgery on the animal for the IT cortex data; Yuji Nagai and Takafumi Minamimoto for assisting with the surgery; and Rossella Falcone and Narihisa Matsumoto for helpful discussions in preparing the IT cortex data. R.K. is supported by JSPS KAKENHI Grant Numbers JP17H03279, JP18K11560, JP19H01133, and JPJSBP120202201, and JST PRESTO Grant Number JPMJPR1925, Japan. B.B.A. is supported by NIMH DIRP ZIA MH002928. Y.S.M. is supported by JSPS KAKENHI Grant Number JP18H05020 and the New Energy and Industrial Technology Development Organization (NEDO). K.H. is supported by the Japan Society for the Promotion of Science (JSPS) and JSPS KAKENHI Grant Number JP19J40302. K.K. is supported by JSPS KAKENHI Grant Number JP19K07804. B.J.R. is supported by NIMH DIRP ZIA MH002032. S.S. is supported by JST CREST Grant Number JPMJCR1304 and the New Energy and Industrial Technology Development Organization (NEDO).
Author information
Contributions
D.E. and R.K. contributed equally in performing the analysis. R.B., B.B.A., Y.S.M., K.H., and K.K. contributed data. B.J.R. and S.S. wrote the main manuscript text. S.S. designed the study. All authors discussed the results and reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Endo, D., Kobayashi, R., Bartolo, R. et al. A convolutional neural network for estimating synaptic connectivity from spike trains. Sci. Rep. 11, 12087 (2021). https://doi.org/10.1038/s41598-021-91244-w