Abstract
Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.
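The contrast the abstract draws, between continuous-valued units and neurons that communicate through discrete spikes, can be illustrated with a minimal sketch (not taken from the article): a standard continuous "rate" unit next to a leaky integrate-and-fire (LIF) neuron. All parameter values here are generic illustrative choices.

```python
import numpy as np

def rate_unit(inputs, weights):
    """Continuous unit: output is a smooth function of summed input."""
    return np.tanh(np.dot(weights, inputs))

def lif_neuron(input_current, dt=1e-3, tau=20e-3,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: communicates via discrete spikes.

    Returns the membrane-potential trace and the list of spike times.
    """
    v = v_rest
    v_trace, spike_times = [], []
    for step, i_t in enumerate(input_current):
        # Leaky integration of the input current toward v_rest + i_t
        v += dt / tau * (v_rest - v + i_t)
        if v >= v_thresh:                 # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                   # membrane potential resets
        v_trace.append(v)
    return np.array(v_trace), spike_times

# Constant suprathreshold drive: the rate unit returns one smooth value,
# while the LIF neuron emits a discontinuous train of spikes.
v_trace, spike_times = lif_neuron(np.full(1000, 1.5))
```

The discontinuity at threshold is precisely what makes gradient-based methods from the continuous domain hard to transfer directly to spiking models, which is the problem the methods reviewed here address.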
Acknowledgements
We thank C. Machens, M. Churchland and D. Thalmeier for helpful discussions. Our research in this area was supported by US National Institutes of Health grant MH093338, the Gatsby Charitable Foundation through the Gatsby Initiative in Brain Circuitry at Columbia University, the Simons Foundation, the Swartz Foundation, the Harold and Leila Y. Mathers Foundation, the Kavli Institute for Brain Science at Columbia University, the Max Kade Foundation and the German Federal Ministry of Education and Research BMBF through the Bernstein Network (Bernstein Award 2014).
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Cite this article
Abbott, L., DePasquale, B. & Memmesheimer, RM. Building functional networks of spiking model neurons. Nat Neurosci 19, 350–355 (2016). https://doi.org/10.1038/nn.4241
This article is cited by
- Multitask computation through dynamics in recurrent spiking neural networks. Scientific Reports (2023)
- Unconventional computing based on magnetic tunnel junction. Applied Physics A (2023)
- Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals. Scientific Reports (2022)
- Intrinsic bursts facilitate learning of Lévy flight movements in recurrent neural network models. Scientific Reports (2022)
- Nonlinear computational models of dynamical coding patterns in depression and normal rats: from electrophysiology to energy consumption. Nonlinear Dynamics (2022)