Abstract
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance that is competitive with vanilla recurrent neural networks. However, these algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models and are incompatible with online learning. Here, we show how the recently developed Forward-Propagation Through Time (FPTT) learning combined with novel liquid time-constant spiking neurons resolves these limitations. Applying FPTT to networks of such complex spiking neurons, we demonstrate online learning of exceedingly long sequences while outperforming current online methods and approaching or outperforming offline methods on temporal classification tasks. The efficiency and robustness of FPTT enable us to directly train a deep and performant spiking neural network for joint object localization and recognition, demonstrating the ability to train large-scale dynamic and complex spiking neural network architectures.
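To make the two ingredients concrete, the sketch below illustrates, in PyTorch, (i) a spiking layer whose membrane decay is a learned, input- and state-dependent liquid time constant and (ii) the general shape of an FPTT-style per-time-step update, in which the instantaneous loss is augmented with a quadratic pull towards running-average parameters. This is a minimal sketch, not the authors' implementation: the layer sizes, surrogate-gradient width, regularizer constant alpha and the simplified running-average update (the full FPTT rule also folds in a gradient correction term) are illustrative assumptions.

```python
# Minimal sketch (not the released sFPTT code): a liquid time-constant
# spiking layer and a simplified FPTT-style online update in PyTorch.
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # surrogate window of width 1


class LTCSpikingLayer(nn.Module):
    """Spiking neurons whose membrane decay is a learned function of the
    current input and membrane state (a 'liquid' time constant)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.fc_in = nn.Linear(n_in, n_out)
        self.fc_tau = nn.Linear(n_in + n_out, n_out)

    def forward(self, x, v):
        # Input- and state-dependent decay factor in (0, 1).
        tau = torch.sigmoid(self.fc_tau(torch.cat([x, v], dim=-1)))
        v = tau * v + (1.0 - tau) * self.fc_in(x)   # leaky integration
        s = SpikeFn.apply(v - 1.0)                  # spike at threshold 1
        return s, v - s.detach()                    # soft reset


def fptt_step(model, opt, loss_t, w_bar, alpha=0.1):
    """One FPTT-style update at time step t: instantaneous loss plus a
    quadratic pull towards running-average parameters w_bar."""
    reg = sum(((p - w_bar[n]) ** 2).sum() for n, p in model.named_parameters())
    opt.zero_grad()
    (loss_t + 0.5 * alpha * reg).backward()
    opt.step()
    with torch.no_grad():
        # Simplified running-average update; the full FPTT rule also
        # subtracts a scaled gradient term here.
        for n, p in model.named_parameters():
            w_bar[n] = 0.5 * (w_bar[n] + p.detach())
    return w_bar
```

In such a sketch, `w_bar` would be initialized as a detached copy of the model parameters (for example `{n: p.detach().clone() for n, p in model.named_parameters()}`) and `fptt_step` would be called once per sequence element, so that, unlike backpropagation through time, no unrolled computational graph over the whole sequence has to be stored.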
Data availability
This paper used publicly available datasets. The MNIST dataset is available at http://yann.lecun.com/exdb/mnist/ and the Fashion MNIST dataset at https://github.com/zalandoresearch/fashion-mnist. The DVS-GESTURE dataset can be downloaded from https://research.ibm.com/interactive/dvsgesture/, and the DVS-Cifar10 dataset from https://figshare.com/articles/dataset/CIFAR10-DVS_New/4724671/2. The PASCAL VOC07 and VOC2012 datasets are publicly available at http://host.robots.ox.ac.uk/pascal/VOC/index.html. No software was used in the collection of data. Source data are provided with this paper.
Code availability
The code used in the study is publicly available from the GitHub repository https://github.com/byin-cwi/sFPTT/tree/v1.0.0 and Zenodo (ref. 45, https://zenodo.org/record/7498559).
References
Yin, B., Corradi, F. & Bohte, S. M. Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks. Nat. Mach. Intell. 3, 905–913 (2021).
Stuijt, J., Sifalakis, M., Yousefzadeh, A. & Corradi, F. μBrain: an event-driven and fully synthesizable architecture for spiking neural networks. Front. Neurosci. 15, 538 (2021).
Perez-Nieves, N., Leung, V. C. H., Dragotti, P. L. & Goodman, D. F. M. Neural heterogeneity promotes robust learning. Nat. Commun. 12, 5791 (2021).
Keijser, J. & Sprekeler, H. Interneuron diversity is required for compartment-specific feedback inhibition. Preprint at bioRxiv https://doi.org/10.1101/2020.11.17.386920 (2020).
Bohte, S. M. Error-backpropagation in networks of fractionally predictive spiking neurons. In International Conference on Artificial Neural Networks 60–68 (Springer, 2011).
Neftci, E. O., Mostafa, H. & Zenke, F. Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 36, 51–63 (2019).
Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 32, 8026–8037 (2019).
Kag, A. & Saligrama, V. Training recurrent neural networks via forward propagation through time. In International Conference on Machine Learning 5189–5200 (PMLR, 2021).
Mehonic, A. & Kenyon, A. J. Brain-inspired computing needs a master plan. Nature 604, 255–260 (2022).
Williams, R. J. & Zipser, D. A learning algorithm for continually running fully recurrent neural networks. Neural Comput. 1, 270–280 (1989).
Bellec, G. et al. A solution to the learning dilemma for recurrent networks of spiking neurons. Nat. Commun. 11, 3625 (2020).
Bohnstingl, T., Woźniak, S., Pantazi, A. & Eleftheriou, E. Online spatio-temporal learning in deep neural networks. IEEE Trans. Neural Netw. Learn. Syst. (2022).
He, Y. et al. A 28.2 μC neuromorphic sensing system featuring SNN-based near-sensor computation and event-driven body-channel communication for insertable cardiac monitoring. In 2021 IEEE Asian Solid-State Circuits Conference (IEEE, 2021).
Hasani, R., Lechner, M., Amini, A., Rus, D. & Grosu, R. Liquid time-constant networks. In Proceedings of the AAAI Conference on Artificial Intelligence Vol. 35, 7657–7666 (AAAI, 2021).
Amir, A. et al. A low power, fully event-based gesture recognition system. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 7243–7252 (IEEE, 2017).
Fang, W. et al. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision 2661–2671 (IEEE, 2021).
Everingham, M., Van Gool, L., Williams, C. K., Winn, J. & Zisserman, A. The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vision 88, 303–338 (2010).
Kim, S., Park, S., Na, B. & Yoon, S. Spiking-YOLO: spiking neural network for energy-efficient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence Vol. 34, 11270–11277 (AAAI, 2020).
Chakraborty, B., She, X. & Mukhopadhyay, S. A fully spiking hybrid neural network for energy-efficient object detection. IEEE Trans. Image Process. 30, 9014–9029 (2021).
Royo-Miquel, J., Tolu, S., Schöller, F. E. & Galeazzi, R. RetinaNet object detector based on analog-to-spiking neural network conversion. In 8th International Conference on Soft Computing and Machine Intelligence (IEEE, 2021).
Zhou, S., Chen, Y., Li, X. & Sanyal, A. Deep SCNN-based real-time object detection for self-driving vehicles using lidar temporal data. IEEE Access 8, 76903–76912 (2020).
Jiang, Z., Zhao, L., Li, S. & Jia, Y. Real-time object detection method based on improved YOLOv4-tiny. Preprint at https://arxiv.org/abs/2011.04244 (2020).
Werbos, P. J. Backpropagation through time: what it does and how to do it. Proc. IEEE 78, 1550–1560 (1990).
Elman, J. L. Finding structure in time. Cognit. Sci. 14, 179–211 (1990).
Mozer, M. C. Neural net architectures for temporal sequence processing. In Santa Fe Institute Studies on the Sciences of Complexity Proceedings Vol. 15, 243 (Addison-Wesley, 1993).
Murray, J. M. Local online learning in recurrent networks with random feedback. eLife 8, e43299 (2019).
Knight, J. C. & Nowotny, T. Efficient GPU training of LSNNs using eProp. In Neuro-Inspired Computational Elements Conference 8–10 (Association for Computing Machinery, 2022).
Bohte, S. M., Kok, J. N. & La Poutre, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48, 17–37 (2002).
Yin, B., Corradi, F. & Bohté, S. M. Effective and efficient computation with multiple-timescale spiking recurrent neural networks. In International Conference on Neuromorphic Systems (Association for Computing Machinery, 2020).
Scherr, F. & Maass, W. Analysis of the computational strategy of a detailed laminar cortical microcircuit model for solving the image-change-detection task. Preprint at bioRxiv https://doi.org/10.1101/2021.11.17.469025 (2021).
Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).
Li, H., Liu, H., Ji, X., Li, G. & Shi, L. CIFAR10-DVS: an event-stream dataset for object classification. Front. Neurosci. 11, 309 (2017).
Gerstner, W., Kreiter, A. K., Markram, H. & Herz, A. V. Neural codes: firing rates and beyond. Proc. Natl Acad. Sci. USA 94, 12740–12741 (1997).
Bochkovskiy, A., Wang, C.-Y. & Liao, H.-Y. M. YOLOv4: optimal speed and accuracy of object detection. Preprint at https://arxiv.org/abs/2004.10934 (2020).
Kalchbrenner, N. et al. Efficient neural audio synthesis. In International Conference on Machine Learning 2410–2419 (PMLR, 2018).
Sacramento, J., Ponte Costa, R., Bengio, Y. & Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. Adv. Neural Inf. Process. Syst. 31, 8721–8732 (2018).
Beniaguev, D., Segev, I. & London, M. Single cortical neurons as deep artificial neural networks. Neuron 109, 2727–2739 (2021).
Larkum, M. E., Senn, W. & Lüscher, H.-R. Top-down dendritic input increases the gain of layer 5 pyramidal neurons. Cereb. Cortex 14, 1059–1070 (2004).
Frey, U. & Morris, R. G. Synaptic tagging and long-term potentiation. Nature 385, 533–536 (1997).
Moncada, D., Ballarini, F., Martinez, M. C., Frey, J. U. & Viola, H. Identification of transmitter systems and learning tag molecules involved in behavioral tagging during memory formation. Proc. Natl Acad. Sci. USA 108, 12931–12936 (2011).
Rombouts, J. O., Bohte, S. M. & Roelfsema, P. R. How attention can create synaptic tags for the learning of working memories in sequential tasks. PLoS Comput. Biol. 11, e1004060 (2015).
Pozzi, I., Bohte, S. & Roelfsema, P. Attention-gated brain propagation: how the brain can implement reward-based error backpropagation. Adv. Neural Inf. Process. Syst. 33, 2516–2526 (2020).
Scellier, B. & Bengio, Y. Equilibrium propagation: bridging the gap between energy-based models and backpropagation. Front. Comput. Neurosci. 11, 24 (2017).
Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR) 1–15 (2015).
Yin, B. byin-cwi/sFPTT: Training SNN via FPTT. Zenodo https://doi.org/10.5281/ZENODO.7498559 (2023).
Woźniak, S., Pantazi, A., Bohnstingl, T. & Eleftheriou, E. Deep learning incorporating biologically inspired neural dynamics and in-memory computing. Nat. Mach. Intell. 2, 325–336 (2020).
Zou, Z. et al. Memory-inspired spiking hyperdimensional network for robust online learning. Sci. Rep. 12, 7641 (2022).
Shrestha, A., Fang, H., Wu, Q. & Qiu, Q. Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks. In Proceedings of the International Conference on Neuromorphic Systems (Association for Computing Machinery, 2019).
Kaiser, J., Mostafa, H. & Neftci, E. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Front. Neurosci. 14, 424 (2020).
Acknowledgements
B.Y. is supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek, Toegepaste en Technische Wetenschappen (NWO-TTW) Programme ‘Efficient Deep Learning’ (EDL) P16-25. S.M.B. is supported by the European Union (grant agreement 7202070 ‘Human Brain Project’). The authors are grateful to H. Corporaal for reading the manuscript and providing constructive remarks.
Author information
Authors and Affiliations
Contributions
B.Y., F.C. and S.M.B. conceived the experiments. B.Y. conducted the experiments. B.Y., F.C. and S.M.B. analysed the results. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Machine Intelligence thanks Catherine Schuman and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary information
Supplementary Figs. 1–6, Supplementary Tables 1–5 and Appendix A (FPTT theory).
Source data
Source Data Fig. 2
Statistical source data for Fig. 2b,c.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yin, B., Corradi, F. & Bohté, S.M. Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time. Nat Mach Intell 5, 518–527 (2023). https://doi.org/10.1038/s42256-023-00650-4
DOI: https://doi.org/10.1038/s42256-023-00650-4