Recurrent networks can be trained with a generalization of backpropagation known as backpropagation through time, but a gap remains between the mathematics of this learning algorithm and what is plausible in biological neural circuits. E-prop is a biologically inspired alternative that opens up possibilities for a new generation of online training algorithms for recurrent networks.
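For intuition about how such an online rule can be structured, the following is a minimal sketch of an eligibility-trace-based update for a toy leaky rate network on a dummy regression stream. It is illustrative only: e-prop itself is derived for recurrent spiking networks by Bellec and colleagues, and the network, task, variable names (W_in, W_rec, W_out, the leak alpha) and the simple squared-error learning signal used here are assumptions made for this sketch, not the published formulation. The point it conveys is that each synapse keeps a locally computable eligibility trace updated forward in time, and the gradient is approximated online as the product of that trace with a learning signal, so no backward unrolling through time is required.

```python
import numpy as np

# Toy leaky rate network; sizes, leak and learning rate are arbitrary choices.
rng = np.random.default_rng(0)
n_in, n_rec, n_out = 3, 20, 1
alpha = 0.9    # leak factor of the recurrent units
lr = 1e-3      # learning rate

W_in = rng.normal(0.0, 0.1, (n_rec, n_in))
W_rec = rng.normal(0.0, 0.1, (n_rec, n_rec))
W_out = rng.normal(0.0, 0.1, (n_out, n_rec))

h = np.zeros(n_rec)                # hidden state
e_in = np.zeros((n_rec, n_in))     # eligibility traces for input weights
e_rec = np.zeros((n_rec, n_rec))   # eligibility traces for recurrent weights

T = 200
xs = rng.normal(size=(T, n_in))    # dummy input stream
ys = rng.normal(size=(T, n_out))   # dummy targets

for t in range(T):
    x, h_prev = xs[t], h
    z = W_in @ x + W_rec @ h_prev
    h = alpha * h_prev + (1.0 - alpha) * np.tanh(z)
    y = W_out @ h

    # Eligibility traces: presynaptic activity scaled by the postsynaptic
    # derivative, low-pass filtered with the same leak as the hidden state.
    d_post = (1.0 - alpha) * (1.0 - np.tanh(z) ** 2)
    e_in = alpha * e_in + np.outer(d_post, x)
    e_rec = alpha * e_rec + np.outer(d_post, h_prev)

    # Learning signal: readout error broadcast back through the output
    # weights; only quantities available at time t are used.
    err = y - ys[t]
    L = W_out.T @ err

    # Online updates: learning signal times eligibility trace.
    W_in -= lr * L[:, None] * e_in
    W_rec -= lr * L[:, None] * e_rec
    W_out -= lr * np.outer(err, h)
```

For the same network, full backpropagation through time would instead require storing the entire sequence of past states and propagating errors backwards through the unrolled computation before any weight could be updated; the sketch above updates every weight at every time step from locally available quantities.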
Cite this article
Manneschi, L., Vasilaki, E. An alternative to backpropagation through time. Nat. Mach. Intell. 2, 155–156 (2020). https://doi.org/10.1038/s42256-020-0162-9