Perspective

Building functional networks of spiking model neurons

Nature Neuroscience volume 19, pages 350–355 (2016)

Abstract

Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.
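The contrast drawn here can be made concrete with a minimal sketch (illustrative only, not taken from the paper; all parameter values are arbitrary assumptions): a continuous-variable "rate" unit whose activity relaxes smoothly toward its input, versus a leaky integrate-and-fire unit that communicates the same drive only through discrete spike times.

```python
# Illustrative contrast between the two modeling styles described in the
# abstract. Parameters (tau, dt, threshold) are arbitrary example values.

def rate_unit(inputs, tau=10.0, dt=1.0):
    """Continuous-variable unit: activity relaxes toward its input."""
    r = 0.0
    trace = []
    for x in inputs:
        r += dt / tau * (-r + x)  # leaky integration of the drive
        trace.append(r)
    return trace

def lif_neuron(inputs, tau=10.0, dt=1.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire unit: same dynamics, but output is a
    sequence of discrete spike times rather than a continuous value."""
    v = 0.0
    spikes = []
    for t, x in enumerate(inputs):
        v += dt / tau * (-v + x)  # identical leaky integration
        if v >= v_th:             # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset           # membrane potential resets
    return spikes

drive = [1.5] * 200
print(round(rate_unit(drive)[-1], 3))  # → 1.5 (settles at the input value)
print(len(lif_neuron(drive)))          # → 18 (the same drive as spike counts)
```

The point of the juxtaposition: both units share the same underlying leaky dynamics, but the spiking unit's discontinuous output is what makes the network-construction methods reviewed here nontrivial to transfer from the continuous domain.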



Acknowledgements

We thank C. Machens, M. Churchland and D. Thalmeier for helpful discussions. Our research in this area was supported by US National Institutes of Health grant MH093338, the Gatsby Charitable Foundation through the Gatsby Initiative in Brain Circuitry at Columbia University, the Simons Foundation, the Swartz Foundation, the Harold and Leila Y. Mathers Foundation, the Kavli Institute for Brain Science at Columbia University, the Max Kade Foundation and the German Federal Ministry of Education and Research BMBF through the Bernstein Network (Bernstein Award 2014).

Author information

Affiliations

  1. Department of Neuroscience, Columbia University College of Physicians and Surgeons, New York, New York, USA.

    • L F Abbott
    • , Brian DePasquale
    •  & Raoul-Martin Memmesheimer
  2. Department of Physiology and Cellular Biophysics, Columbia University College of Physicians and Surgeons, New York, New York, USA.

    • L F Abbott
  3. Department for Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.

    • Raoul-Martin Memmesheimer

Authors

  1. L F Abbott

  2. Brian DePasquale

  3. Raoul-Martin Memmesheimer

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to L F Abbott.

About this article

DOI

https://doi.org/10.1038/nn.4241