Abstract
Experimental evidence recently indicated that neural networks can learn in a different manner than was previously assumed, using adaptive nodes instead of adaptive links. Consequently, links to a node undergo the same adaptation, resulting in cooperative nonlinear dynamics with oscillating effective link weights. Here we show that the biological reality of a stationary lognormal distribution of effective link weights in neural networks is a result of such adaptive nodes, although each effective link weight varies significantly in time. The underlying mechanism is a stochastic restoring force emerging from a spontaneous temporal ordering of spike pairs, generated by a strong effective link preceding a weak one. In addition, for feedforward adaptive node networks the number of dynamical attractors can scale exponentially with the number of links. These results are expected to advance deep learning capabilities and to open horizons to an interplay between adaptive node rules and the distribution of network link weights.
Introduction
The brain is one of the most complex adaptive networks, where learning occurs by modifying the link weights^{1}. This type of biological strategy stimulated the theory and application of machine learning algorithms^{2,3,4} as well as recent deep learning achievements^{5,6,7,8}. Accumulated experimental evidence indicates that neural network weights follow a wide distribution which is well approximated by a lognormal distribution^{9,10}; however, the underlying mechanism for its origin and stability is unclear^{11}. Specifically, it is valuable to understand whether such a wide distribution of network weights, characterized by a small fraction of strong links, is a spontaneous outcome of a random stochastic process, or is instead directed by meaningful learning activity^{12,13,14,15,16}.
The long-lasting assumption of learning by adaptive links was recently questioned by experimental evidence showing that nodal adaptation occurs following its anisotropic incoming signals^{17,18}, similarly to the slow learning mechanism attributed to the links^{12,19,20}. Specifically, each node collects its incoming signals via several adaptive terminals (dendrites), hence all links to a terminal undergo the same adaptation, resulting in cooperative nonlinear dynamics. This presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. In this paper we show that the biological reality^{10,21} of a stationary lognormal distribution^{9,11,22,23} of effective link weights in neural networks is a result of such adaptive anisotropic nodes. This global distribution of the weights is a conserved quantity of the dynamics, although each effective link weight varies significantly in time. The underlying mechanism is a stochastic restoring force emerging from a spontaneous temporal ordering of spike pairs, generated by a strong nodal terminal preceding a weak one. In addition, for feedforward adaptive node networks consisting of a few adaptive terminals, the number of dynamical attractors^{24} can scale exponentially with the number of links. These results are expected to advance deep learning capabilities^{3,5,6,25,26}, where training, adaptation, generalization and information queries occur simultaneously. They also open horizons to find a possible universal interplay between adaptive node rules and the distribution of network link weights^{26,27}.
Results
The model of adaptive nodes
In order to study the effect of nodal adaptation we modeled a node with K terminals (neuronal dendritic trees)^{13}. Each terminal collects its many incoming signals via N/K time-independent link weights, W_{m}, where N stands for the total number of input units (Fig. 1a). The nodal terminal is modeled as a threshold element based on a leaky integrate-and-fire neuron^{28}

$$T\,\frac{d{V}_{i}(t)}{dt}=-({V}_{i}(t)-{V}_{st})+{J}_{i}\sum _{m}{W}_{m}\sum _{n}\delta (t-{\tau }_{m}-{t}_{m}(n))\qquad (1)$$

where V_{i}(t) is the scaled voltage of the i^{th} terminal, T = 20 ms is the membrane time constant, V_{st} = 0 stands for the scaled stable (resting) membrane potential (Methods) and J_{i} stands for the i^{th} terminal weight. W_{m} and τ_{m} stand for the m^{th} link weight and delay, respectively, and the summation over n runs over all input timings arriving at the m^{th} link, t_{m}(n). A spike occurs when the voltage of one of the terminals crosses the threshold, V_{i} ≥ 1. After a spike is generated the terminal’s voltage is set to V_{st}, and a refractory period of 2 ms follows, during which no evoked spikes are possible via any of the terminals (Methods). Note that in order to achieve a threshold crossing, typically many inputs have to arrive at a neuron in temporal proximity via one of its terminals^{29}.
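The terminal dynamics described above can be sketched numerically. The following snippet is a minimal illustration (not the authors' code): an Euler step of the leaky integrate-and-fire voltage, with `drive` standing for the weighted, delayed input pulses; the function names and the simple pulse bookkeeping are our own assumptions.

```python
def lif_step(V, dt, T=20.0, V_st=0.0, drive=0.0):
    """One Euler step of the terminal voltage: V relaxes toward the resting
    value V_st with membrane time constant T (ms); 'drive' is the weighted
    input J_i * W_m summed over spikes arriving during this step."""
    return V + (V_st - V) / T * dt + drive

def first_crossing(arrival_times, W, J=1.0, t_max=100.0, dt=0.1, V_th=1.0):
    """Drive a single terminal with delta-pulse inputs (arrival times already
    include the delays tau_m) and return the time of the first threshold
    crossing, or None if the voltage never reaches V_th."""
    V, t = 0.0, 0.0
    while t < t_max:
        drive = J * W * sum(1 for a in arrival_times if t <= a < t + dt)
        V = lif_step(V, dt, drive=drive)
        if V >= V_th:  # spike condition of the scaled model
            return t
        t += dt
    return None
```

Five coincident inputs with W = 0.3 cross the scaled threshold, while a single input decays away, illustrating that a threshold crossing typically requires many inputs in temporal proximity.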
For every pair of a subthreshold stimulation via terminal i and an evoked spike from a different terminal, an adaptation step occurs for J_{i}:

$${J}_{i}\to {J}_{i}\cdot (1+{\delta }_{i})+{\eta }_{i}\qquad (2)$$
where δ_{i} and η_{i} stand for the relative change and an additive random white noise, respectively. The relative change, δ, is the same as the adaptation rule used for link weights and follows the modified Hebbian learning rule^{12,19,20} (blue line in Fig. 1b). This relative change is a function of the time-lag between a subthreshold stimulation and an evoked spike, t_{sub} − t_{spike}, originating from a different terminal. Specifically, the relative change decays exponentially to zero for large time-lags and follows its sign (Fig. 1b). The qualitative reported results were also found to be robust to a simplified two-level adaptation rule (dashed blue line in Figs 1b and S1).
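A sketch of one adaptation step, under the assumption (consistent with the Methods) that the update reads J_i → J_i(1 + δ_i) + η_i, with δ = A·exp(−|Δt|/15)·sign(Δt) cut off at 50 ms; the parameter values follow the Methods, the function names are ours.

```python
import math
import random

def delta(lag_ms, A=0.05, tau=15.0, cutoff=50.0):
    """Relative change for a (subthreshold stimulation, evoked spike) pair:
    decays exponentially to zero for large time-lags and follows the sign
    of t_sub - t_spike (the modified Hebbian rule of Fig. 1b)."""
    if lag_ms == 0.0 or abs(lag_ms) > cutoff:
        return 0.0
    return A * math.exp(-abs(lag_ms) / tau) * (1.0 if lag_ms > 0 else -1.0)

def adapt(J, lag_ms, rng, noise_amp=0.5e-3):
    """One adaptation step: multiplicative relative change plus additive
    white noise eta drawn uniformly from [-noise_amp, noise_amp]."""
    eta = rng.uniform(-noise_amp, noise_amp)
    return J * (1.0 + delta(lag_ms)) + eta
```

Repeated application of `adapt` is a multiplicative process in J, which is the starting point for the lognormal argument made later in the paper.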
Following recent experimental evidence, an additional ingredient is introduced, where a threshold crossing by one of the terminals now generates an evoked spike with probability^{30}

$${P}_{spike}=\,{\rm{\min }}(1,\,{\rm{\Delta }}t\cdot {f}_{c})\qquad (3)$$
where Δt is the time-lag from the last threshold crossing, and f_{c} reflects the maximal stationary firing frequency of the neuronal terminal, e.g. 15 Hz. Note that for high stimulation frequencies (>f_{c}) the nodal firing rate saturates at K · f_{c}, and for low stimulation frequencies (<f_{c}) response failures practically vanish (Fig. 1c–e, Methods).
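The response-failure probability can be illustrated directly; this sketch assumes the product Δt·f_c is capped at 1, since it acts as a probability (the Methods state the uncapped product).

```python
def spike_probability(dt_ms, f_c=15.0):
    """Probability that a threshold crossing evokes a spike: dt_ms is the
    time-lag (ms) from the last crossing by this terminal, f_c (Hz) the
    maximal stationary firing frequency; the product is capped at 1."""
    return min(1.0, (dt_ms / 1000.0) * f_c)
```

For stimulation slower than f_c (time-lags above 1/f_c ≈ 67 ms) the probability saturates at 1 and response failures practically vanish; for faster stimulation the terminal rate saturates at f_c, giving a nodal rate of K·f_c.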
Feedforward networks
When the input units are simultaneously stimulated with a common stimulation frequency, three possible types of dynamics for J_{i} are observed, according to the initial weights and delays, W_{m} and τ_{m}. This is exemplified for a network with an output node with three terminals and 15 input units, without noise (K = 3, N = 15, η_{i} = 0) and a 5 Hz stimulation frequency (Fig. 1c). In the first type of dynamics, all J_{i} converge to fixed values (Fig. 1c_{1}). The second type is characterized by fast oscillations with relatively small fluctuations of each J_{i} around an average value (Fig. 1c_{2}), with periods below a few seconds, typically sub-second. The third type is characterized by slow oscillations with periods which can exceed hundreds of seconds (Fig. 1c_{3}) and exists for K > 2 only^{18}. These are accompanied by large variations in the amplitudes of J_{i} and consist of long plateaus at extreme values. The fraction of initial time-independent weights and delays, W_{m} and τ_{m}, leading to oscillations was estimated using random sampling (Methods) for K = 3 and varying N (Fig. 1d). It increases from ~0.4 for N = 9 to ~0.8 for N = 27, indicating that the phenomenon of oscillations is a common scenario in adaptive node networks. Note that in the traditional adaptive link scenario all W_{m} converge either to zero or to above-threshold values (similar to Fig. 1c_{1}) and oscillations are excluded^{18}.
The robustness of the fast and slow oscillations to small stochastic noise, η in eq. (2), was examined using Fourier analysis of the adaptive weights (Fig. 2). For fast oscillations, the noise does not affect the periods of oscillation, and only slightly affects their Fourier amplitudes (Fig. 2a,c). In contrast, for slow oscillations the noise, η, affects the periodicity, which is typically shortened (Fig. 2d). This trend results from the noise preventing the terminal weights, J, from remaining at a plateau of small values for long periods (Fig. 2b).
The number of different stationary firing patterns, attractors, in the large N limit, can be bounded from below, for given K and delays τ_{m} (Fig. 1a). Assuming that for each terminal there are N_{0} < N/K nonzero inputs, the number of different attractors, A(N_{0}), is estimated using an exhaustive random sampling for W_{m} (Methods). A lower bound for the number of dynamical attractors for the entire network with N nonzero inputs scales as

$$A({N}_{0})\cdot {\binom{N/K}{{N}_{0}}}^{K}\qquad (4)$$
since for each one of the K terminals one can select a subset of N_{0} inputs among N/K, with repeated above-threshold stimulated inputs. Each one of these choices results in A(N_{0}) different attractors as a result of different delays. For K = 3 and N_{0} = 3, for instance, the number of different attractors was estimated as A(3) ~ 1500 (Fig. 1e and Methods), indicating that eq. (4) scales as N^{9}. For N_{0} = O(N), even with small K, e.g. K = 2, the number of different attractors is expected to scale exponentially with N. This type of input scenario is expected in biological realizations where a neuron has only a few terminals (dendritic trees)^{13} and many thousands of links (synapses)^{29}; however, at each firing event only a small fraction of the input links is effectively involved^{29}. The results indicate powerful computational capabilities under biological realizations, with a huge number of attractors even for such a simple feedforward network with only a finite number of adaptive terminals.
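The counting argument above can be sketched as follows, assuming plain combinations for the subset choice (the leading scaling N^{K·N_0} is unchanged if repetitions are allowed); the function name is ours.

```python
from math import comb

def attractor_lower_bound(N, K, N0, A_N0):
    """Lower bound on the number of attractors: each of the K terminals
    selects N0 of its N/K inputs, and each selection contributes A(N0)
    attractors through its distinct delays -- scaling as N**(K*N0)."""
    return A_N0 * comb(N // K, N0) ** K
```

With K = 3, N0 = 3 and A(3) ≈ 1500 the bound grows as N^9 and already exceeds 10^9 for N = 30; doubling N multiplies it by roughly 2^9 in the large-N limit.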
Recurrent networks and lognormal distributions
To this point we assumed simultaneous inputs, which is far from biological reality. In order to expand the study to non-simultaneous inputs, asynchronous stimulations are first discussed for the case of population dynamics between two pools^{31} consisting of 500 nodes each (Fig. 3a_{1} and a_{2}). In the adaptive links scenario, each node receives 60 inputs from randomly selected nodes of the other pool (Fig. 3a_{1}), or via three adaptive terminals in the adaptive nodes scenario (Fig. 3a_{2}), using random delays drawn from a normal distribution with a 100 ms mean and a 2 ms standard deviation. Networks are simulated using eq. (1) (Methods). In the adaptive links scenario, weights converge to biologically unrealistic limits and are frozen either above threshold or at practically vanishing values (Fig. 3b_{1} and c_{1}). In the adaptive nodes scenario, the distribution of the effective weights, W_{m} · J_{i}, converges to a stationary lognormal distribution (Fig. 3b_{2}); however, each weight is not frozen and varies significantly along the dynamics (Figs 3c_{2} and S2). A similar stationary lognormal distribution was obtained for a random network consisting of the same 1000 adaptive nodes, where each receives inputs from 60 randomly selected nodes (Fig. 3a_{3} and b_{3}). The lognormal distribution is stationary (Fig. S3); however, each weight is not frozen and varies significantly along the dynamics (Fig. 3b_{3} and c_{3}), such that its distribution can also be well approximated by a lognormal (Fig. S2). The lognormal distribution cannot be attributed to the emergence of some spontaneous clustering among the adaptive nodes, as the raster plot indicates random firing activity of each node and of the entire network, without any significant structure in the Fourier spectrum (Fig. S4). For pools consisting of adaptive nodes (Fig. 3a_{2}), where one of the pools was initially triggered, the raster plot initially indicates alternating firing between two synchronized pools (Fig. 3d_{1}).
The variation among delays results in the broadening of the firing stripes until merging occurs with random raster activity (Fig. 3d_{2}), similar to the random network (Fig. 3a_{3}). This broadening and merging might be a self-control mechanism to terminate an induced reverberating mode. Interestingly, the lognormal distribution of the effective weights emerges already in the transient consisting of stripe activity (Fig. 3d_{3}), and only its average (and variance) are later adjusted. The firing rate of each terminal in the network (Fig. 3a_{2} and a_{3}) is saturated, ~15 Hz; hence, the firing frequency of each node is ~45 Hz.
Restoring force via spontaneous spike ordering
Understanding the underlying mechanism for the emergence of a stationary lognormal distribution requires the examination of a much simpler system imitating the network activity. We examine the dynamics of an adaptive node consisting of two terminals (K = 2) and 60 inputs, where each input is stimulated at random times, at 30 Hz on average (Fig. 4a and Methods). The distribution of the effective weights is indeed lognormal (Fig. 4b) and is practically identical to the distribution obtained in the network dynamics (Fig. 3b_{3}).
The emergence of a lognormal distribution is natural, since multiple adaptation steps of a weight, eq. (2), result in a multiplicative process; however, its stationary shape requires an explanation. The relative change of links with a given weight J, averaged over such instances during the stationary dynamics, reveals a stochastic restoring force towards the most probable J (Figs 4c and S5). The origin of this restoring force is the emergence of a spontaneous temporal ordering of pairs of spikes for a given adaptive node during the dynamical process. For simplicity we assume K = 2 and concentrate on momentary events of the dynamics where the adaptive node simultaneously has one weak terminal, J_{W}, and one strong terminal, J_{S}, relative to the most probable values of the lognormal distribution (Fig. 4b). Next we estimate in simulations the probability of occurrence of the following two types of pairs of spikes in a bounded time window, e.g. 5 ms (Fig. 4d_{1}). The first type, P_{SW}, stands for a spike generated by J_{S} prior to a spike generated by J_{W}, and vice versa for the second type of pairs, P_{WS}. Simulation results indicate

$${P}_{SW} > {P}_{WS}\qquad (5)$$
where typically P_{SW} is several times greater than P_{WS}, and P_{SW} constitutes a few percent of all pairs of events (Fig. 5a). This preference, eq. (5), is exemplified using the following self-consistent argument, assuming that initially the weak and the strong spikes occur almost simultaneously (Fig. 4d_{2}). Since the input units of both terminals are stimulated at the same rate, the threshold crossing via J_{S} occurs before that via J_{W}, and both are accompanied by response failures, eq. (3). Consequently, the spike generated by J_{S} occurs prior to the spike generated by J_{W} (Fig. 4d_{2}). Note that the adaptation steps, eq. (2), prevent the two terminals from permanently remaining strong and weak; however, on average there is a stochastic tendency for the strong spike to be evoked prior to the weak one.
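The combination of multiplicative adaptation steps with the ordering bias P_SW > P_WS can be caricatured as a mean-reverting random walk in log(J). This toy model is our illustration, not the paper's simulation: the `restore` drift stands in for the spike-ordering bias, and the ±A kicks stand in for individual adaptation steps.

```python
import random

def stationary_logs(steps=200_000, A=0.05, restore=0.01, seed=1):
    """Random walk of log(J): each adaptation step multiplies J by roughly
    (1 +/- A); the ordering bias acts as a restoring drift pulling log(J)
    back toward the mode, so the walk has a stationary distribution."""
    rng = random.Random(seed)
    logJ, samples = 0.0, []
    for t in range(steps):
        logJ += -restore * logJ + A * rng.choice([-1.0, 1.0])
        if t > steps // 2:          # discard the transient half
            samples.append(logJ)
    return samples

samples = stationary_logs()
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# log(J) fluctuates around 0 with variance ~ A**2 / (1 - (1 - restore)**2),
# i.e. J itself is approximately lognormally distributed around its mode.
```

Without the restoring drift (`restore = 0`) the walk diffuses without bound, mirroring the instability of the distribution when the ordering bias is destroyed.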
Discussion
The mechanism of the restoring force is a direct consequence of the spontaneous temporal ordering (Fig. 4d). A terminal that evoked a spike resets its membrane potential, which then rapidly increases through many subthreshold stimulations. The threshold is crossed again within several ms, followed by many response failures. Hence, the strong terminal generates most of its subthreshold stimulations prior to the following weak spike, whereas all the subthreshold stimulations of the weak terminal appear after the strong spike (Fig. 4e). Following the adaptation rule, eq. (2), the strong terminal weight is decreased, ΔJ_{S} < 0, whereas the weak terminal weight is enhanced, ΔJ_{W} > 0 (Fig. 4e), and the restoring force is created.
A necessary ingredient in the formation of the mechanism achieving a stationary lognormal distribution is that the majority of the subthreshold stimulations of the strong terminal occur prior to the weak spike (Fig. 4e). For short refractory periods the time-lag between a pair of strong-weak spikes decreases, since the minimal time-lag between consecutive spikes decreases. Indeed, for short enough refractory periods, and certainly for a vanishing one, the lognormal distribution was found in simulations to be unstable, where all effective weights are asymptotically above threshold, since both ΔJ_{S} and ΔJ_{W} are now positive (Fig. 5). The lognormal distribution of link weights is an emerging spontaneous feature of adaptive node networks, where the essential role of the refractory period is evident. The results open the horizon to explore the possible interplay between adaptive node rules and stationary distribution classes of the network link weights^{26,27}.
Methods
Simulation dynamics
Each node is described by several independent terminals, and a node generates a spike when a terminal crosses a threshold (eqs. (1) and (3)). The voltage of each terminal is determined according to the leaky integrate-and-fire model as described in eq. (1), where T = 20 ms. For simplicity, we scale the equation such that V_{th} = 1 and V_{st} = 0; consequently, V ≥ 1 is above threshold and V < 1 is below threshold. Nevertheless, results remain the same for both the scaled and unscaled equations, e.g. V_{st} = −70 mV and V_{th} = −54 mV. The initial voltage for each terminal is V(t=0) = 0 and J_{i} = 1. The adaptation is done according to eq. (2), where \(\delta =A\cdot \exp (-\,\frac{|{\rm{\Delta }}t|}{15})\cdot {\rm{sign}}({\rm{\Delta }}t)\), and Δt stands for the time between a subthreshold stimulation and a spike, up to a cutoff at 50 ms. The parameter η is chosen randomly in the range [−0.5, 0.5] · 10^{−3}, and A is the adaptation step, taken as 0.05 unless otherwise stated.
Refractory period
After a spike is generated, the terminal that evoked the spike cannot respond to other stimulations arriving in the following 2 ms. During this refractory period, the other terminals also cannot evoke a spike or cross the threshold, but their membrane potentials can increase as a result of stimulations.
Response failure
When crossing the threshold, the terminal creates a spike with probability Δt · f_{c}, where Δt is the time-lag from the last threshold crossing by this terminal, and f_{c} reflects the maximal stationary firing frequency of the terminal. In case the terminal fails to respond, its voltage is set to its previous value.
The parameters for feedforward networks
Number of terminals = 3, number of inputs per terminal = 5, refractory period = 2 ms, link weights are randomly chosen from a uniform distribution in the range [0.1, 1.1], delays (τ) are randomly chosen from a uniform distribution in the range [1, 150] ms (Fig. 1c). Links are ordered with increasing delays, except the maximal delay which is linked to the first terminal (closing a loop). The dynamics is given by eq. (1) and is numerically solved with a time resolution of 1 ms. Initial terminal weights, J_{i}, are set to 1. We assume large f_{c}, hence response failures are excluded. In addition, in Fig. 1 η = 0. The robustness of the results to noise, η > 0, is demonstrated in Fig. 2. The upper bound for the terminal weights is J_{i} = 10 and the lower bound is J_{i} = 10^{−6}.
The fraction of oscillations
The fraction of each type of dynamics was estimated using 20,000 random initial conditions for the delays, τ_{m}, and the weights, W_{m}, (defined above) for each number of inputs per terminal (Fig. 1d).
The number of attractors
Number of terminals = 3, number of inputs per terminal = 3. The average and the standard deviation of each point were obtained from 10–18 samples; each sample uses a fixed set of N delays (τ), and the initial conditions for the N weights are randomly sampled. In order to determine whether two initial conditions lead to the same attractor, we compared the firing rate from each input link. We calculated the number of firing events for each link and compared it with the same link from a simulation with different initial weights. If for all of the input links the difference is less than 2%, we determine that these different initial weights lead to the same attractor. For links that have low firing rates, the comparison was made between non-firing events. We obtained very similar results when the comparison was done between the firing timings for each link, instead of the number of firing events (Fig. 1e).
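The comparison criterion described above can be sketched as follows; this is an illustrative reading of the 2% rule (relative to the larger count), not the authors' code, and it omits the non-firing-event variant used for low-rate links.

```python
def same_attractor(counts_a, counts_b, tol=0.02):
    """Compare per-link firing counts from two runs with different initial
    weights; classify them as the same attractor when every link differs
    by less than tol (2%) relative to the larger count."""
    for a, b in zip(counts_a, counts_b):
        ref = max(a, b)
        if ref == 0:
            continue            # both links silent: they agree
        if abs(a - b) / ref > tol:
            return False
    return True
```

Counting distinct equivalence classes of randomly sampled initial weights under this predicate yields the attractor estimate A(N_0).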
Recurrent network parameters
Number of terminals = 3, number of inputs per terminal = 60, f_{c} = 15 Hz, refractory period = 2 ms, adaptation step A = 0.05. The dynamics of each node is given by eq. (1) and is solved with a time resolution of 0.1 ms. Link weights are randomly chosen from a uniform distribution in the range [0.1, 0.2], delays are randomly chosen from a normal distribution with a mean of 100 ms and an STD of 2 ms, and initial terminal weights, J_{i}, are set to 1. In order to initiate the network simulation, a fraction 0.4 of the nodes in the network is stimulated above threshold. Spontaneous noise, in the form of external above-threshold stimulations, is randomly added with an average frequency of 0.01 Hz per node (Fig. 3).
The ratio max/min of each weight (Fig. 3c) was calculated for the last 2 seconds of the simulation, out of 50 seconds for adaptive nodes and 350 seconds for adaptive links (the same running time as for Fig. 3b). For networks of adaptive links (Fig. 3c_{1}) a fraction of the weights vanishes; hence the upper bound of the histogram is set to 60. The histograms (Fig. 3c_{2} and c_{3}) consist of 100 bins each. For visibility, points in the raster plots (Fig. 3d_{1} and d_{2}) were diluted by 50%.
The parameters for the feedforward network with random inputs
Number of terminals = 2, number of inputs per terminal = 60, f_{c} = 15 Hz, refractory period = 2 ms, adaptation step A = 0.1; link weights are randomly chosen from a uniform distribution in the range [0.1, 0.2], and initial terminal weights are set to 1 (Fig. 4). The dynamics is given by eq. (1) and is solved with a time resolution of 0.1 ms. Running time = 2500 seconds, where a transient of 200 seconds is excluded from the measurements. Strong and weak weights (Fig. 4b–e) were chosen such that 50% of the weights were between the maximum of the weak and the minimum of the strong effective weights; in addition, for each limit (maximum and minimum) 1% of the extreme weights were excluded. The force was calculated with a bin size of 0.05 and defined as \(\frac{({J}^{+}-J)}{\langle J\rangle }\), where J^{+} is the weight after an adaptation step and 〈J〉 stands for the average bin value. The error bar (Fig. 4c) stands for the standard deviation of the adaptation steps belonging to each bin.
References
Hebb, D. O. The Organization of Behavior: A Neuropsychological Theory (Wiley & Sons, New York, 1949).
Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 521, 452–459 (2015).
Watkin, T. L., Rau, A. & Biehl, M. The statistical mechanics of learning a rule. Reviews of Modern Physics 65, 499 (1993).
Engel, A. & Van den Broeck, C. Statistical mechanics of learning. (Cambridge University Press, 2001).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Buchanan, M. Depths of learning. Nat Phys 11, 798 (2015).
Zdeborová, L. Machine learning: New tool in the box. Nat Phys 13, 420–421 (2017).
Li, B. & Saad, D. Exploring the Function Space of Deep-Learning Machines. Physical Review Letters 120, 248301 (2018).
Song, S., Sjöström, P. J., Reigl, M., Nelson, S. & Chklovskii, D. B. Highly nonrandom features of synaptic connectivity in local cortical circuits. Plos biology 3, e68 (2005).
Loewenstein, Y., Kuras, A. & Rumpel, S. Multiplicative dynamics underlie the emergence of the lognormal distribution of spine sizes in the neocortex in vivo. Journal of Neuroscience 31, 9481–9488 (2011).
Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience 15, 264 (2014).
Park, Y., Choi, W. & Paik, S.-B. Symmetry of learning rate in synaptic plasticity modulates formation of flexible and stable memories. Scientific Reports 7, 5671 (2017).
Spruston, N. Pyramidal neurons: dendritic structure and synaptic integration. Nature Reviews Neuroscience 9, 206 (2008).
Del Ferraro, G. et al. Finding influential nodes for integration in brain networks using optimal percolation theory. Nature Communications 9, 2274 (2018).
Bashan, A., Bartsch, R. P., Kantelhardt, J. W., Havlin, S. & Ivanov, P. C. Network physiology reveals relations between network topology and physiological function. Nature communications 3, 702 (2012).
Liu, K. K., Bartsch, R. P., Lin, A., Mantegna, R. N. & Ivanov, P. C. Plasticity of brain wave network interactions and evolution across physiologic states. Frontiers in neural circuits 9, 62 (2015).
Sardi, S., Vardi, R., Sheinin, A., Goldental, A. & Kanter, I. New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units. Scientific reports 7, 18036 (2017).
Sardi, S. et al. Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links. Sci. Rep. 8, 5100 (2018).
Dan, Y. & Poo, M.-M. Spike timing-dependent plasticity: from synapse to perception. Physiological Reviews 86, 1033–1048 (2006).
Cassenaer, S. & Laurent, G. Conditional modulation of spike-timing-dependent plasticity for olfactory learning. Nature 482, 47 (2012).
Cossell, L. et al. Functional organization of excitatory synaptic strength in primary visual cortex. Nature 518, 399 (2015).
Ottino-Löffler, B., Scott, J. G. & Strogatz, S. H. Evolutionary dynamics of incubation periods. eLife 6 (2017).
Levi, F. Applied mathematics: The discovery of skewness. Nature Physics 14, 108 (2018).
Opper, M. Learning in neural networks: Solvable dynamics. EPL (Europhysics Letters) 8, 389 (1989).
Li, A., Cornelius, S. P., Liu, Y.Y., Wang, L. & Barabási, A.L. The fundamental advantages of temporal networks. Science 358, 1042–1046 (2017).
Yan, G. et al. Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature 550, 519 (2017).
Unicomb, S., Iñiguez, G. & Karsai, M. Threshold driven contagion on weighted networks. Scientific reports 8, 3094 (2018).
Brette, R. & Gerstner, W. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology 94, 3637–3642 (2005).
Abeles, M. Corticonics: Neural circuits of the cerebral cortex. (Cambridge University Press, 1991).
Vardi, R. et al. Neuronal response impedance mechanism implementing cooperative networks with low firing rates and μs precision. Frontiers in neural circuits 9 (2015).
Brama, H., Guberman, S., Abeles, M., Stern, E. & Kanter, I. Synchronization among neuronal pools without common inputs: in vivo study. Brain Structure and Function 220, 3721–3731 (2015).
Acknowledgements
We thank Moshe Abeles for stimulating discussions. The assistance by Yael Tugendhaft is acknowledged.
Contributions
H.U. and S.S. performed the simulations and analyzed the data with the help of A.G. and developed the theoretical concepts under the guidance of I.K. H.U., S.S., A.G. and R.V. discussed the idea, results and commented on the manuscript. I.K. supervised all aspects of the work.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Uzan, H., Sardi, S., Goldental, A. et al. Stationary lognormal distribution of weights stems from spontaneous ordering in adaptive node networks. Sci Rep 8, 13091 (2018). https://doi.org/10.1038/s41598-018-31523-1