
  • Perspective

Towards spike-based machine intelligence with neuromorphic computing

Abstract

Guided by brain-like ‘spiking’ computational frameworks, neuromorphic computing—brain-inspired computing for machine intelligence—promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm–hardware codesign.
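
The ‘spiking’ frameworks discussed here exchange information through sparse, binary events rather than continuous activations, with the leaky integrate-and-fire (LIF) neuron (see Fig. 3 and ref. 36) as the workhorse computational model. As a minimal, hedged illustration of this event-driven behaviour, the Python sketch below simulates a single LIF neuron; the time constant, threshold and reset values are arbitrary assumptions chosen for demonstration, not parameters from this Perspective.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a hedged sketch, not the
# paper's implementation. tau, v_th and v_reset are illustrative assumptions.
def lif_neuron(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Integrate an input-current trace and emit binary spike events."""
    v = v_reset
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_t in enumerate(input_current):
        v += (dt / tau) * (i_t - v)  # leaky integration towards the input drive
        if v >= v_th:                # threshold crossing emits a spike event
            spikes[t] = 1
            v = v_reset              # membrane potential resets after a spike
    return spikes

# Event-driven output: a constant supra-threshold drive yields a handful of
# sparse spikes per 100 steps, not 100 continuous activations.
print(lif_neuron(np.full(100, 1.5)).sum(), "spikes in 100 time steps")
```

The output is a sparse event train in which most time steps carry no activity, which is the source of the energy savings the abstract alludes to.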


Fig. 1: Key attributes of biological and silicon-based computing frameworks.
Fig. 2: Timeline of major discoveries and advances in intelligent computing, from the 1940s to the present (refs 6, 10, 14, 73, 78, 84, 93, 105, 115, 136, 150, 151).
Fig. 3: SNN computational models.
Fig. 4: Global and local-learning principles in spiking networks.
Fig. 5: Some representative ‘Big Brain’ chips and AER methods.
Fig. 6: The use of non-volatile memory devices as synaptic storage (a minimal crossbar dot-product sketch follows this list).
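
Fig. 6 concerns non-volatile memory devices used as synaptic storage. The key operation such devices accelerate (see refs 102, 105 and 146 below) is an analog matrix-vector product: weights are stored as conductances, inputs are applied as voltages, and Ohm's and Kirchhoff's laws sum the resulting currents in place. The Python sketch below mimics that computation numerically as a hedged illustration; all conductance and voltage values are invented for demonstration, not device data from the paper.

```python
import numpy as np

# Hedged sketch of the analog dot product a memristive crossbar computes
# in place (compare Fig. 6 and refs 105, 146). All values are assumptions.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],   # synaptic weights stored as
              [3.0e-6, 1.5e-6, 2.5e-6]])  # conductances, in siemens
v_in = np.array([0.2, 0.1, 0.3])          # inputs encoded as read voltages (V)

# Ohm's law gives a current G[i, j] * v_in[j] through each device, and
# Kirchhoff's current law sums those currents along each output wire,
# so the matrix-vector product emerges in a single analog step.
i_out = G @ v_in                          # output currents, in amperes
print(i_out)                              # [5.5e-07 1.5e-06]
```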

References

  1. Silver, D. et al. Mastering the game of Go without human knowledge. Nature 550, 354–359 (2017).

  2. Cox, D. D. & Dean, T. Neural networks and neuroscience-inspired computer vision. Curr. Biol. 24, R921–R929 (2014).

  3. Milakov, M. Deep Learning With GPUs. https://www.nvidia.co.uk/docs/IO/147844/Deep-Learning-With-GPUs-MaximMilakov-NVIDIA.pdf (Nvidia, 2014).

  4. Bullmore, E. & Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 13, 336–349 (2012).

  5. Felleman, D. J. & Van Essen, D. C. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47 (1991).

  6. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems Vol. 25 (eds Pereira, F. et al.) 1097–1105 (Neural Information Processing Systems Foundation, 2012). This work—using deep convolutional networks—was the first to win the ImageNet challenge, fuelling the subsequent deep-learning revolution.

  7. Deco, G., Rolls, E. T. & Romo, R. Stochastic dynamics as a principle of brain function. Prog. Neurobiol. 88, 1–16 (2009).

  8. Venkataramani, S., Roy, K. & Raghunathan, A. Efficient embedded learning for IoT devices. In 21st Asia and South Pacific Design Automation Conf. 308–311 (IEEE, 2016).

  9. Maass, W. Networks of spiking neurons: the third generation of neural network models. Neural Netw. 10, 1659–1671 (1997). This paper was one of the first works to provide a rigorous mathematical analysis of the computational power of spiking neurons, categorizing them as the third generation of neural networks (after perceptrons and sigmoidal neurons).

  10. McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943).

  11. Nair, V. & Hinton, G. E. Rectified linear units improve restricted Boltzmann machines. In Proc. 27th Int. Conf. on Machine Learning (eds Fürnkranz, J. & Joachims, T.) 807–814 (IMLS, 2010).

  12. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, 533–536 (1986). This seminal work proposed gradient-descent-based backpropagation as a learning method for neural networks.

  13. Izhikevich, E. M. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572 (2003).

  14. Hebb, D. O. The Organization of Behavior: A Neuropsychological Theory (Wiley, 1949).

  15. Abbott, L. F. & Nelson, S. B. Synaptic plasticity: taming the beast. Nat. Neurosci. 3, 1178–1183 (2000).

  16. Liu, S.-C. & Delbruck, T. Neuromorphic sensory systems. Curr. Opin. Neurobiol. 20, 288–295 (2010).

  17. Lichtsteiner, P., Posch, C. & Delbruck, T. A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits 43, 566–576 (2008).

  18. Vanarse, A., Osseiran, A. & Rassau, A. A review of current neuromorphic approaches for vision, auditory, and olfactory sensors. Front. Neurosci. 10, 115 (2016).

  19. Benosman, R., Ieng, S.-H., Clercq, C., Bartolozzi, C. & Srinivasan, M. Asynchronous frameless event-based optical flow. Neural Netw. 27, 32–37 (2012).

  20. Wongsuphasawat, K. & Gotz, D. Exploring flow, factors, and outcomes of temporal event sequences with the outflow visualization. IEEE Trans. Vis. Comput. Graph. 18, 2659–2668 (2012).

  21. Rogister, P., Benosman, R., Ieng, S.-H., Lichtsteiner, P. & Delbruck, T. Asynchronous event-based binocular stereo matching. IEEE Trans. Neural Netw. Learn. Syst. 23, 347–353 (2012).

  22. Osswald, M., Ieng, S.-H., Benosman, R. & Indiveri, G. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems. Sci. Rep. 7, 40703 (2017).

  23. Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. Preprint at http://arxiv.org/abs/1207.0580 (2012).

  24. Deng, J. et al. ImageNet: a large-scale hierarchical image database. In IEEE Conf. on Computer Vision and Pattern Recognition 248–255 (IEEE, 2009).

  25. Rullen, R. V. & Thorpe, S. J. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Comput. 13, 1255–1283 (2001).

  26. Hu, Y., Liu, H., Pfeiffer, M. & Delbruck, T. DVS benchmark datasets for object tracking, action recognition, and object recognition. Front. Neurosci. 10, 405 (2016).

  27. Geiger, A., Lenz, P., Stiller, C. & Urtasun, R. Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32, 1231–1237 (2013).

  28. Barranco, F., Fermuller, C., Aloimonos, Y. & Delbruck, T. A dataset for visual navigation with neuromorphic methods. Front. Neurosci. 10, 49 (2016).

  29. Sengupta, A., Ye, Y., Wang, R., Liu, C. & Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 13, 95 (2019). This paper was the first to demonstrate the competitive performance of a conversion-based spiking neural network on ImageNet data for deep neural architectures.

  30. Cao, Y., Chen, Y. & Khosla, D. Spiking deep convolutional neural networks for energy-efficient object recognition. Int. J. Comput. Vis. 113, 54–66 (2015).

  31. Diehl, P. U. et al. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Int. Joint Conf. on Neural Networks 2933–2341 (IEEE, 2015).

  32. Pérez-Carrasco, J. A. et al. Mapping from frame-driven to frame-free event-driven vision systems by low-rate rate coding and coincidence processing—application to feedforward ConvNets. IEEE Trans. Pattern Anal. Mach. Intell. 35, 2706–2719 (2013).

  33. Rueckauer, B., Lungu, I.-A., Hu, Y., Pfeiffer, M. & Liu, S.-C. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci. 11, 682 (2017).

  34. Diehl, P. U., Zarrella, G., Cassidy, A. S., Pedroni, B. U. & Neftci, E. Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In Int. Conf. on Rebooting Computing 20 (IEEE, 2016).

  35. Abadi, M. et al. TensorFlow: a system for large-scale machine learning. In 12th USENIX Symp. Operating Systems Design and Implementation 265–283 (USENIX, 2016).

  36. Hunsberger, E. & Eliasmith, C. Spiking deep networks with LIF neurons. Preprint at http://arxiv.org/abs/1510.08829 (2015).

  37. Pfeiffer, M. & Pfeil, T. Deep learning with spiking neurons: opportunities and challenges. Front. Neurosci. 12, 774 (2018).

  38. Ponulak, F. & Kasiński, A. Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Comput. 22, 467–510 (2010).

  39. Gütig, R. & Sompolinsky, H. The tempotron: a neuron that learns spike-timing-based decisions. Nat. Neurosci. 9, 420–428 (2006).

  40. Bohte, S. M., Kok, J. N. & La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48, 17–37 (2002).

  41. Ghosh-Dastidar, S. & Adeli, H. A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection. Neural Netw. 22, 1419–1431 (2009).

  42. Anwani, N. & Rajendran, B. NormAD: normalized approximate descent-based supervised learning rule for spiking neurons. In Int. Joint Conf. on Neural Networks 2361–2368 (IEEE, 2015).

  43. Lee, J. H., Delbruck, T. & Pfeiffer, M. Training deep spiking neural networks using backpropagation. Front. Neurosci. 10, 508 (2016).

  44. Orchard, G. et al. HFirst: a temporal approach to object recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37, 2028–2040 (2015).

  45. Mostafa, H. Supervised learning based on temporal coding in spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 29, 3227–3235 (2018).

  46. Panda, P. & Roy, K. Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In Int. Joint Conf. on Neural Networks 299–306 (IEEE, 2016).

  47. LeCun, Y., Cortes, C. & Burges, C. J. C. The MNIST Database of Handwritten Digits http://yann.lecun.com/exdb/mnist/ (1998).

  48. Masquelier, T., Guyonneau, R. & Thorpe, S. J. Competitive STDP-based spike pattern learning. Neural Comput. 21, 1259–1276 (2009).

  49. Diehl, P. U. & Cook, M. Unsupervised learning of digit recognition using spike-timing-dependent plasticity. Front. Comput. Neurosci. 9, 99 (2015). This is a good introduction to implementing spiking neural networks with unsupervised STDP-based learning for real-world tasks such as digit recognition; a minimal STDP weight-update sketch is given after this reference list.

  50. Kheradpisheh, S. R., Ganjtabesh, M., Thorpe, S. J. & Masquelier, T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw. 99, 56–67 (2018).

  51. Neftci, E., Das, S., Pedroni, B., Kreutz-Delgado, K. & Cauwenberghs, G. Event-driven contrastive divergence for spiking neuromorphic systems. Front. Neurosci. 7, 272 (2014).

  52. Stromatias, E., Soto, M., Serrano-Gotarredona, T. & Linares-Barranco, B. An event-driven classifier for spiking neural networks fed with synthetic or dynamic vision sensor data. Front. Neurosci. 11, 350 (2017).

  53. Lee, C., Panda, P., Srinivasan, G. & Roy, K. Training deep spiking convolutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning. Front. Neurosci. 12, 435 (2018).

  54. Mostafa, H., Ramesh, V. & Cauwenberghs, G. Deep supervised learning using local errors. Front. Neurosci. 12, 608 (2018).

  55. Neftci, E. O., Augustine, C., Paul, S. & Detorakis, G. Event-driven random back-propagation: enabling neuromorphic deep learning machines. Front. Neurosci. 11, 324 (2017).

  56. Srinivasan, G., Sengupta, A. & Roy, K. Magnetic tunnel junction based long-term short-term stochastic synapse for a spiking neural network with on-chip STDP learning. Sci. Rep. 6, 29545 (2016).

  57. Tavanaei, A., Masquelier, T. & Maida, A. S. Acquisition of visual features through probabilistic spike-timing-dependent plasticity. In Int. Joint Conf. on Neural Networks 307–314 (IEEE, 2016).

  58. Bagheri, A., Simeone, O. & Rajendran, B. Training probabilistic spiking neural networks with first-to-spike decoding. In Int. Conf. on Acoustics, Speech and Signal Processing 2986–2990 (IEEE, 2018).

  59. Rastegari, M., Ordonez, V., Redmon, J. & Farhadi, A. XNOR-Net: ImageNet classification using binary convolutional neural networks. In Eur. Conf. on Computer Vision 525–542 (Springer, 2016).

  60. Courbariaux, M., Bengio, Y. & David, J.-P. BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems Vol. 28 (eds Cortes, C. et al.) 3123–3131 (Neural Information Processing Systems Foundation, 2015).

  61. Stromatias, E. et al. Robustness of spiking deep belief networks to noise and reduced bit precision of neuro-inspired hardware platforms. Front. Neurosci. 9, 222 (2015).

  62. Florian, R. V. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput. 19, 1468–1502 (2007).

  63. Vasilaki, E., Frémaux, N., Urbanczik, R., Senn, W. & Gerstner, W. Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail. PLOS Comput. Biol. 5, e1000586 (2009).

  64. Zuo, F. et al. Habituation-based synaptic plasticity and organismic learning in a quantum perovskite. Nat. Commun. 8, 240 (2017).

  65. Masquelier, T. & Thorpe, S. J. Unsupervised learning of visual features through spike-timing-dependent plasticity. PLOS Comput. Biol. 3, e31 (2007).

  66. Rao, R. P. & Sejnowski, T. J. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Comput. 13, 2221–2237 (2001).

  67. Roy, S. & Basu, A. An online unsupervised structural plasticity algorithm for spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28, 900–910 (2017).

  68. Maass, W. Liquid state machines: motivation, theory, and applications. In Computability in Context: Computation and Logic in the Real World (eds Cooper, S. B. & Sorbi, A.) 275–296 (Imperial College Press, 2011).

  69. Schrauwen, B., D’Haene, M., Verstraeten, D. & Van Campenhout, J. Compact hardware liquid state machines on FPGA for real-time speech recognition. Neural Netw. 21, 511–523 (2008).

  70. Verstraeten, D., Schrauwen, B., Stroobandt, D. & Van Campenhout, J. Isolated word recognition with the liquid state machine: a case study. Inf. Process. Lett. 95, 521–528 (2005).

  71. Panda, P. & Roy, K. Learning to generate sequences with combination of Hebbian and non-Hebbian plasticity in recurrent spiking neural networks. Front. Neurosci. 11, 693 (2017).

  72. Maher, M. A. C., Deweerth, S. P., Mahowald, M. A. & Mead, C. A. Implementing neural architectures using analog VLSI circuits. IEEE Trans. Circ. Syst. 36, 643–652 (1989).

  73. Mead, C. Neuromorphic electronic systems. Proc. IEEE 78, 1629–1636 (1990). This seminal work established neuromorphic electronic systems as a new paradigm in hardware computing and highlighted Mead’s vision of going beyond the precise, well-defined nature of digital computing towards brain-like aspects.

  74. Mead, C. A. Neural hardware for vision. Eng. Sci. 50, 2–7 (1987).

  75. NVIDIA Launches the World’s First Graphics Processing Unit: GeForce 256. https://www.nvidia.com/object/IO_20020111_5424.html (Nvidia, 1999).

  76. Nageswaran, J. M., Dutt, N., Krichmar, J. L., Nicolau, A. & Veidenbaum, A. V. A configurable simulation environment for the efficient simulation of large-scale spiking neural networks on graphics processors. Neural Netw. 22, 791–800 (2009).

  77. Fidjeland, A. K. & Shanahan, M. P. Accelerated simulation of spiking neural networks using GPUs. In Int. Joint. Conf. on Neural Networks 3041–3048 (IEEE, 2010).

  78. Davies, M. et al. Loihi: a neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82–99 (2018).

  79. Blouw, P., Choo, X., Hunsberger, E. & Eliasmith, C. Benchmarking keyword spotting efficiency on neuromorphic hardware. In Proc. 7th Annu. Neuro-inspired Computational Elements Workshop 1 (ACM, 2018).

  80. Hsu, J. How IBM got brainlike efficiency from the TrueNorth chip. IEEE Spectrum https://spectrum.ieee.org/computing/hardware/how-ibm-got-brainlike-efficiency-from-the-truenorth-chip (29 September 2014).

  81. Khan, M. M. et al. SpiNNaker: mapping neural networks onto a massively parallel chip multiprocessor. In Int. Joint Conf. on Neural Networks 2849–2856 (IEEE, 2008). This was one of the first works to implement a large-scale spiking neural network on hardware using event-driven computations and commercial processors.

  82. Benjamin, B. V. et al. Neurogrid: a mixed-analog–digital multichip system for large-scale neural simulations. Proc. IEEE 102, 699–716 (2014).

  83. Schemmel, J. et al. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Int. Symp. Circuits and Systems 1947–1950 (IEEE, 2010).

  84. Merolla, P. A. et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673 (2014). This work describes TrueNorth, the first digital custom-designed, large-scale neuromorphic processor, an outcome of the DARPA SyNAPSE programme; it was geared towards solving commercial applications through a digital neuromorphic implementation.

  85. Furber, S. Large-scale neuromorphic computing systems. J. Neural Eng. 13, 051001 (2016).

  86. Qiao, N. et al. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses. Front. Neurosci. 9, 141 (2015).

  87. Indiveri, G. et al. Neuromorphic silicon neuron circuits. Front. Neurosci. 5, 73 (2011).

  88. Seo, J.-s. et al. A 45 nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons. In Custom Integrated Circuits Conf. 311–334 (IEEE, 2011).

  89. Boahen, K. A. Point-to-point connectivity between neuromorphic chips using address events. IEEE Trans. Circuits Syst. II 47, 416–434 (2000). This paper describes the fundamentals of address event representation and its application to neuromorphic systems.

  90. Serrano-Gotarredona, R. et al. AER building blocks for multi-layer multi-chip neuromorphic vision systems. In Advances in Neural Information Processing Systems Vol. 18 (eds Weiss, Y., Schölkopf, B. & Platt, J. C.) 1217–1224 (Neural Information Processing Systems Foundation, 2006).

  91. Moore, G. E. Cramming more components onto integrated circuits. Proc. IEEE 86, 82–85 (1998).

  92. Waldrop, M. M. The chips are down for Moore’s law. Nature 530, 144 (2016).

  93. von Neumann, J. First draft of a report on the EDVAC. IEEE Ann. Hist. Comput. 15, 27–75 (1993).

  94. Mahapatra, N. R. & Venkatrao, B. The processor–memory bottleneck: problems and solutions. Crossroads 5, 2 (1999).

  95. Gokhale, M., Holmes, B. & Iobst, K. Processing in memory: the Terasys massively parallel PIM array. Computer 28, 23–31 (1995).

  96. Elliott, D., Stumm, M., Snelgrove, W. M., Cojocaru, C. & McKenzie, R. Computational RAM: implementing processors in memory. IEEE Des. Test Comput. 16, 32–41 (1999).

  97. Ankit, A., Sengupta, A., Panda, P. & Roy, K. RESPARC: a reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks. In Proc. 54th ACM/EDAC/IEEE Annual Design Automation Conf. 63.2 (IEEE, 2017).

  98. Bez, R. & Pirovano, A. Non-volatile memory technologies: emerging concepts and new materials. Mater. Sci. Semicond. Process. 7, 349–355 (2004).

  99. Xue, C. J. et al. Emerging non-volatile memories: opportunities and challenges. In Proc. 9th Int. Conf. on Hardware/Software Codesign and System Synthesis 325–334 (IEEE, 2011).

  100. Wong, H.-S. P. & Salahuddin, S. Memory leads the way to better computing. Nat. Nanotechnol. 10, 191 (2015); correction 10, 660 (2015).

  101. Chi, P. et al. Prime: a novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In Proc. 43rd Int. Symp. Computer Architecture 27–39 (IEEE, 2016).

  102. Shafiee, A. et al. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars. In Proc. 43rd Int. Symp. Computer Architecture 14–26 (IEEE, 2016).

  103. Burr, G. W. et al. Neuromorphic computing using non-volatile memory. Adv. Phys. X 2, 89–124 (2017).

  104. Snider, G. S. Spike-timing-dependent learning in memristive nanodevices. In Proc. Int. Symp. on Nanoscale Architectures 85–92 (IEEE, 2008).

  105. Chua, L. Memristor—the missing circuit element. IEEE Trans. Circuit Theory 18, 507–519 (1971). This was the first work to conceptualize memristors as fundamental passive circuit elements; they are currently being investigated as high-density storage devices through various emerging technologies for conventional general-purpose and neuromorphic computing architectures.

  106. Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008).

  107. Waser, R., Dittmann, R., Staikov, G. & Szot, K. Redox-based resistive switching memories—nanoionic mechanisms, prospects, and challenges. Adv. Mater. 21, 2632–2663 (2009).

  108. Burr, G. W. et al. Recent progress in phase-change memory technology. IEEE J. Em. Sel. Top. Circuits Syst. 6, 146–162 (2016).

  109. Hosomi, M. et al. A novel nonvolatile memory with spin torque transfer magnetization switching: spin-RAM. In Int. Electron Devices Meeting 459–462 (IEEE, 2005).

  110. Ambrogio, S. et al. Statistical fluctuations in HfOx resistive-switching memory. Part I—set/reset variability. IEEE Trans. Electron Dev. 61, 2912–2919 (2014).

  111. Fantini, A. et al. Intrinsic switching variability in HfO2 RRAM. In 5th Int. Memory Workshop 30–33 (IEEE, 2013).

  112. Merrikh-Bayat, F. et al. High-performance mixed-signal neurocomputing with nanoscale floating-gate memory cell arrays. IEEE Trans. Neural Netw. Learn. Syst. 29, 4782–4790 (2017).

  113. Ramakrishnan, S., Hasler, P. E. & Gordon, C. Floating-gate synapses with spike-time-dependent plasticity. IEEE Trans. Biomed. Circuits Syst. 5, 244–252 (2011).

  114. Hasler, J. & Marr, H. B. Finding a roadmap to achieve large neuromorphic hardware systems. Front. Neurosci. 7, 118 (2013).

  115. Hasler, P. E., Diorio, C., Minch, B. A. & Mead, C. Single transistor learning synapses. In Advances in Neural Information Processing Systems Vol. 7 (eds Tesauro, G., Touretzky, D. S. & Leen, T. K.) 817–824 (Neural Information Processing Systems Foundation, 1995). This was one of the first works to use a non-volatile memory device—specifically, a floating-gate transistor—as a synaptic element.

  116. Holler, M., Tam, S., Castro, H. & Benson, R. An electrically trainable artificial neural network (ETANN) with 10240 ‘floating gate’ synapses. In Int. Joint Conf. on Neural Networks Vol. 2, 191–196 (IEEE, 1989).

  117. Chen, P.-Y. et al. Technology–design co-optimization of resistive cross-point array for accelerating learning algorithms on chip. In Proc. Eur. Conf. on Design, Automation & Testing 854–859 (IEEE, 2015).

  118. Chakraborty, I., Roy, D. & Roy, K. Technology aware training in memristive neuromorphic systems for nonideal synaptic crossbars. IEEE Trans. Em. Top. Comput. Intell. 2, 335–344 (2018).

  119. Alibart, F., Gao, L., Hoskins, B. D. & Strukov, D. B. High precision tuning of state for memristive devices by adaptable variation-tolerant algorithm. Nanotechnology 23, 075201 (2012).

  120. Dong, Q. et al. A 4 + 2T SRAM for searching and in-memory computing with 0.3-V VDDmin. IEEE J. Solid-State Circuits 53, 1006–1015 (2018).

  121. Agrawal, A., Jaiswal, A., Lee, C. & Roy, K. X-SRAM: enabling in-memory Boolean computations in CMOS static random-access memories. IEEE Trans. Circuits Syst. I 65, 4219–4232 (2018).

  122. Eckert, C. et al. Neural cache: bit-serial in-cache acceleration of deep neural networks. In Proc. 45th Ann. Int. Symp. Computer Architecture 383–396 (IEEE, 2018).

  123. Gonugondla, S. K., Kang, M. & Shanbhag, N. R. A variation-tolerant in-memory machine-learning classifier via on-chip training. IEEE J. Solid-State Circuits 53, 3163–3173 (2018).

  124. Biswas, A. & Chandrakasan, A. P. Conv-RAM: an energy-efficient SRAM with embedded convolution computation for low-power CNN-based machine learning applications. In Int. Solid-State Circuits Conf. 488–490 (IEEE, 2018).

  125. Kang, M., Keel, M.-S., Shanbhag, N. R., Eilert, S. & Curewitz, K. An energy-efficient VLSI architecture for pattern recognition via deep embedding of computation in SRAM. In Int. Conf. on Acoustics, Speech and Signal Processing 8326–8330 (IEEE, 2014).

  126. Seshadri, V. et al. RowClone: fast and energy-efficient in-DRAM bulk data copy and initialization. In Proc. 46th Ann. IEEE/ACM Int. Symp. Microarchitecture 185–197 (ACM, 2013).

  127. Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).

  128. Sebastian, A. et al. Temporal correlation detection using computational phase-change memory. Nat. Commun. 8, 1115 (2017).

  129. Jain, S., Ranjan, A., Roy, K. & Raghunathan, A. Computing in memory with spin-transfer torque magnetic RAM. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 26, 470–483 (2018).

  130. Jabri, M. & Flower, B. Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks. IEEE Trans. Neural Netw. 3, 154–157 (1992).

  131. Diorio, C., Hasler, P., Minch, B. A. & Mead, C. A. A floating-gate MOS learning array with locally computed weight updates. IEEE Trans. Electron Dev. 44, 2281–2289 (1997).

  132. Bayat, F. M., Prezioso, M., Chakrabarti, B., Kataeva, I. & Strukov, D. Memristor-based perceptron classifier: increasing complexity and coping with imperfect hardware. In Proc. 36th Int. Conf. on Computer-Aided Design 549–554 (IEEE, 2017).

  133. Guo, X. et al. Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology. In Int. Electron Devices Meeting 6.5 (IEEE, 2017).

  134. Liu, C., Hu, M., Strachan, J. P. & Li, H. Rescuing memristor-based neuromorphic design with high defects. In Proc. 54th ACM/EDAC/IEEE Design Automation Conf. 76.6 (IEEE, 2017).

  135. Tuma, T., Pantazi, A., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. Nat. Nanotechnol. 11, 693–699 (2016).

  136. Fukushima, A. et al. Spin dice: a scalable truly random number generator based on spintronics. Appl. Phys. Express 7, 083001 (2014).

  137. Le Gallo, M. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246–253 (2018).

  138. Krstic, M., Grass, E., Gürkaynak, F. K. & Vivet, P. Globally asynchronous, locally synchronous circuits: overview and outlook. IEEE Des. Test Comput. 24, 430–441 (2007).

  139. Choi, H. et al. An electrically modifiable synapse array of resistive switching memory. Nanotechnology 20, 345201 (2009).

  140. Serrano-Gotarredona, T., Masquelier, T., Prodromakis, T., Indiveri, G. & Linares-Barranco, B. STDP and STDP variations with memristors for spiking neuromorphic learning systems. Front. Neurosci. 7, 2 (2013).

  141. Kuzum, D., Jeyasingh, R. G., Lee, B. & Wong, H.-S. P. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Lett. 12, 2179–2186 (2012).

  142. Krzysteczko, P., Münchenberger, J., Schäfers, M., Reiss, G. & Thomas, A. The memristive magnetic tunnel junction as a nanoscopic synapse–neuron system. Adv. Mater. 24, 762–766 (2012).

  143. Vincent, A. F. et al. Spin-transfer torque magnetic memory as a stochastic memristive synapse for neuromorphic systems. IEEE Trans. Biomed. Circuits Syst. 9, 166–174 (2015).

  144. Sengupta, A. & Roy, K. Encoding neural and synaptic functionalities in electron spin: a pathway to efficient neuromorphic computing. Appl. Phys. Rev. 4, 041105 (2017).

  145. Borghetti, J. et al. ‘Memristive’ switches enable ‘stateful’ logic operations via material implication. Nature 464, 873–876 (2010).

  146. Hu, M. et al. Dot-product engine for neuromorphic computing: programming 1T1M crossbar to accelerate matrix-vector multiplication. In Proc. 53rd ACM/EDAC/IEEE Annual Design Automation Conf. 21.1 (IEEE, 2016).

  147. Sheridan, P. M. et al. Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789 (2017).

  148. Wright, C. D., Liu, Y., Kohary, K. I., Aziz, M. M. & Hicken, R. J. Arithmetic and biologically-inspired computing using phase-change materials. Adv. Mater. 23, 3408–3413 (2011).

  149. Le Gallo, M., Sebastian, A., Cherubini, G., Giefers, H. & Eleftheriou, E. Compressed sensing recovery using computational memory. In Int. Electron Devices Meeting 28.3.1 (IEEE, 2017).

  150. Rosenblatt, F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408 (1958).

  151. Bi, G. Q. & Poo, M. M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 10464–10472 (1998).
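
Several of the references above (for example, refs 14, 48–50, 65 and 151) build on spike-timing-dependent plasticity (STDP), in which a synapse is strengthened when the presynaptic neuron fires shortly before the postsynaptic neuron and weakened when the order is reversed. As a hedged illustration of the pair-based exponential form of the rule, the Python sketch below updates a single synaptic weight from one pre/post spike pairing; the learning rates and time constants are illustrative assumptions, not values from the cited works.

```python
import numpy as np

# Pair-based exponential STDP: a minimal sketch of the rule underlying
# refs 14, 48-50 and 151. a_plus, a_minus, tau_plus and tau_minus are
# illustrative assumptions, not parameters from the cited works.
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Update one synaptic weight from a single pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pairing, potentiate
        w += a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:  # post fires before pre: anti-causal pairing, depress
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))  # keep the weight in bounds

print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # > 0.5, potentiated
print(stdp_update(0.5, t_pre=15.0, t_post=10.0))  # < 0.5, depressed
```

Because the update depends only on local spike times, rules of this form map naturally onto the on-chip and in-memory learning hardware described in refs 104, 140 and 143.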

Acknowledgements

We thank A. Sengupta (Pennsylvania State University), A. Raychowdhury (Georgia Institute of Technology) and S. Gupta (Purdue University) for their input. The work was supported in part by the Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC), a DARPA-sponsored JUMP center, the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, the DoD Vannevar Bush Fellowship, the ONR-MURI programme, and the US Army Research Laboratory and the UK Ministry of Defence under agreement number W911NF-16-3-0001.

Author information

Contributions

All authors contributed equally to devising the structure of the paper, designing the figures and writing the manuscript.

Corresponding author

Correspondence to Kaushik Roy.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Roy, K., Jaiswal, A. & Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019). https://doi.org/10.1038/s41586-019-1677-2
