Neuromorphic scaling advantages for energy-efficient random walk computations

Abstract

Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
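The core idea of the abstract, approximating diffusion with a population of independent discrete-time random walkers, can be sketched in plain Python. This is a toy serial illustration of the underlying mathematics, not the authors' neuromorphic implementation; all parameters are illustrative.

```python
import random

def diffuse(n_walkers=1_000, n_steps=100, dx=1.0):
    """Approximate 1D diffusion: each walker takes an unbiased +/-dx step per tick."""
    positions = [0.0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            positions[i] += dx if random.random() < 0.5 else -dx
    return positions

walkers = diffuse()
mean = sum(walkers) / len(walkers)                 # expected ≈ 0 for an unbiased walk
var = sum(x * x for x in walkers) / len(walkers)   # expected ≈ n_steps * dx**2
```

The inner loop over walkers is what a neuromorphic architecture parallelizes: each walker's update is independent, which is why the workload maps naturally onto many simple spiking units.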


Fig. 1: Neuromorphic hardware can efficiently implement random walks.
Fig. 2: Random walk processes are well suited for neuromorphic computing, and the inclusion of different terms in the stochastic process yields random walks with differing behaviours.
Fig. 3: Particle transport simulations on neuromorphic hardware.
Fig. 4: Neuromorphic-computing-based random walk algorithm can implement random walks over non-Euclidean geometries.

Data availability

Source data are provided with this paper. The computational scaling data generated and analysed in this study are included in the published article as Extended Data.

Code availability

The code that supports the findings of this study is available from the corresponding author upon reasonable request and concurrence with the US DOE and relevant hardware partners.


Acknowledgements

We thank S. Plimpton and A. Baczewski for reviewing an early version of the manuscript and A. Moody, S. Cardwell and C. Vineyard for managing access to the TrueNorth and Loihi platforms. We acknowledge financial support from Sandia National Laboratories’ Laboratory Directed Research and Development Program and the US Department of Energy (DOE) Advanced Simulation and Computing Program. Sandia National Laboratories is a multi-program laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International for the US DOE National Nuclear Security Administration under contract DE-NA0003525. This article describes objective technical results and analysis. Any subjective views or opinions that might be expressed do not necessarily represent the views of the US DOE or the US Government.

Author information

Contributions

J.D.S., B.C.F. and R.B.L. derived the mathematical results. J.D.S. and B.C.F. designed the particle experiments. J.D.S., W.S. and J.B.A. designed the geometry experiments. O.P., W.S. and J.B.A. developed the neuromorphic algorithm and performed theoretical neuromorphic complexity analysis. A.J.H. and J.B.A. performed the neuromorphic simulations. J.D.S., L.E.R. and W.S. performed the software simulations. All the authors wrote the paper.

Corresponding author

Correspondence to James B. Aimone.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Electronics thanks the anonymous reviewers for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Neural Circuits for Random Walk Algorithm.

(A) Neural circuits for buffering and counting on Loihi. Red input lines (from left) represent inputs from the supervisor neuron. Circle ends represent inhibitory connections (weight = −1); arrows represent excitatory connections (weight = 1). For the buffer circuit, outputs (to the right) go to the counter circuit's count neuron; for the counter circuit, outputs go to the probability neurons. (B) Illustration of a probabilistic circuit that computes probabilities with a decision tree, with example output probabilities in red. (C) The same decision tree compressed into a single layer, with the source input driving the probabilistic choice. The dotted line is an excitatory connection whose delay corresponds to skipping the probabilistic layer. Weights from the source neuron (green) to the probability neurons (blue) are set to tune the probabilities with which the neurons fire, per equation M.1. Outputs of probability neurons with arrows are excitatory (weight = 1) and with circles are inhibitory (weight = −1). (D) Binary tree representing the stochastic walk through a TrueNorth mesh node. The probability neurons are 𝑟0, 𝑟1 and 𝑟2. Black edges are excitatory; red edges are inhibitory. Blue edges indicate a delay of 1, and bold blue and red dashed edges indicate a delay of 2. The four leaf nodes, 𝑜0, 𝑜1, 𝑜2 and 𝑜3, are the directional nodes with the derived exit probabilities. (E) A near-complete specification of the TrueNorth mesh-node model for the random walk algorithm; this is a more detailed representation of the binary tree from panel D. Neurons are represented by triangles, neuron inputs are on the left edge of the square, and a synapse to a neuron is marked by a circle on the crossbar. Green circles are excitatory connections; yellow circles are inhibitory connections. The red number 2 above neurons 6, 7 and 8 indicates that they fire on 2 or more incoming spikes; all other neurons fire on 1 or more incoming spikes.
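The binary tree of panel D, where probability neurons 𝑟0, 𝑟1 and 𝑟2 route a walker to one of four directional exits, can be emulated in software as a depth-2 tree of Bernoulli choices. A minimal sketch, assuming uniform edge probabilities purely for illustration (the hardware derives its exit probabilities per equation M.1):

```python
import random

def sample_direction(p_r0=0.5, p_r1=0.5, p_r2=0.5):
    """Walk a depth-2 binary decision tree of Bernoulli 'neurons' (r0, r1, r2)
    to pick one of four directional exits (o0..o3), as in Extended Data Fig. 1D.
    Each exit probability is a product of the edge probabilities along its path."""
    if random.random() < p_r0:                              # r0 fires: left subtree
        return "o0" if random.random() < p_r1 else "o1"     # r1 decides
    return "o2" if random.random() < p_r2 else "o3"         # r2 decides

# With uniform 0.5 choices, each exit is reached with probability 0.25.
counts = {d: 0 for d in ("o0", "o1", "o2", "o3")}
for _ in range(100_000):
    counts[sample_direction()] += 1
```

On hardware this tree is flattened into a single probabilistic layer (panel C), trading tree depth for one parallel draw per transition.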

Extended Data Fig. 2 Markov chain.

Illustration of the construction of a Markov chain on the real line.
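Discretizing the real line into states with nearest-neighbour transitions yields the tridiagonal transition matrix of a discrete-time Markov chain. A minimal sketch, with reflecting boundaries chosen purely for illustration (the paper's boundary treatment follows its own discretization):

```python
def transition_matrix(n_states=5, p_right=0.5):
    """Tridiagonal transition matrix for a nearest-neighbour random walk on a
    discretized line; interior states step right with probability p_right and
    left otherwise, and the two end states reflect inward."""
    P = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        if i == 0:
            P[i][1] = 1.0              # reflect at the left boundary
        elif i == n_states - 1:
            P[i][i - 1] = 1.0          # reflect at the right boundary
        else:
            P[i][i + 1] = p_right
            P[i][i - 1] = 1.0 - p_right
    return P

P = transition_matrix()
# every row is a probability distribution over next states
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```

Each row of P is exactly what one mesh node's probabilistic circuit samples from at every time step.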

Extended Data Fig. 3 Diffusion Random Walks Scaling Studies.

(A) Walker updates per second for basic diffusion simulations with 1,000 (dark green) and 32,000 (light green) walkers across conventional and neuromorphic platforms. (B) Comparison of Loihi and single-chip TrueNorth with a single-core CPU simulation on the normalized time of a simple diffusion simulation as a function of an increasing number of random walkers. All times are normalized to the time taken to complete a simulation with 1,000 walkers. (C) Comparison of multi-chip TrueNorth with multi-core CPU and GPU simulations. GPU generates threads for all walker scenarios; GPU Single Block allocates only 1,024 threads for all walkers. (D) TrueNorth execution time reaches a limit as mesh counts increase. (E) TrueNorth execution time again scales linearly with walker count, but also demonstrates the sensitivity of the algorithm to bottlenecks caused by uneven transition probabilities. (F) TrueNorth execution time is dramatically reduced once the walkers no longer all start at the same position. (G) Time required for an NVIDIA Titan XP GPU to simulate diffusion on a torus for 100,000 time steps as a function of the number of walkers; a fixed 1,024 threads are allocated for each trial. (H) As in G, but for this weak-scaling experiment a block of 1,024 threads is added for every 1,000 walkers. (I) As in H, but simulating a single time step.
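The "walker updates per second" metric of panel A can be reproduced for a conventional single-core baseline with a short script. This is a hypothetical CPU sketch in the spirit of the torus experiments in panels G–I, with illustrative grid size and walker counts, not the benchmark code used in the study:

```python
import random
import time

def diffusion_on_torus(n_walkers, n_steps, width=64, height=64):
    """2D diffusion on a periodic (torus) grid: each walker moves one cell in a
    uniformly random cardinal direction per time step, wrapping at the edges."""
    walkers = [(0, 0)] * n_walkers
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_steps):
        walkers = [((x + dx) % width, (y + dy) % height)
                   for (x, y), (dx, dy) in ((w, random.choice(moves)) for w in walkers)]
    return walkers

n_walkers, n_steps = 1_000, 1_000
t0 = time.perf_counter()
final = diffusion_on_torus(n_walkers, n_steps)
elapsed = time.perf_counter() - t0
updates_per_sec = n_walkers * n_steps / elapsed    # the metric plotted in panel A
```

Sweeping n_walkers while timing each run gives the strong-scaling curve; adding capacity in proportion to n_walkers (as the GPU block experiments do) gives the weak-scaling curve.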

Extended Data Fig. 4 Details on Particle Transport and Non-Euclidean Meshes.

(A) To avoid issues with random walks not ending exactly on the mesh, Δ𝑥 can be expressed as a function of both Δ𝑡 and ΔΩ (see Eq. SN3.14). The marker on this plot shows the selection used in our simulations. (B) Average rounding distance in a single time step across all directions, determined by the given value of ΔΩ. (C) The approximate solution to Eq. SN3.12, calculated using Δ𝑡 = 0.01, Δ𝑥 = 1/15 and the given value of ΔΩ with 1 million walkers per starting location. The absolute difference between this average value and the average value calculated with ΔΩ = 1/15 is presented. In both panels, the blue circle indicates the value of ΔΩ used in the Loihi simulation. (D) Visualization of the mesh structure for the heat-transport examples on the sphere. The centre of each triangle represents a location in the mesh, that is, an element of the state space. (E) Visualization of the mesh structure on the barbell. The centre of each triangle or rectangle represents a location in the mesh.

Supplementary information

Supplementary Information

Supplementary Notes 1–3.

Source data

Source Data Fig. 1

Data from the scaling experiments.


About this article

Cite this article

Smith, J.D., Hill, A.J., Reeder, L.E. et al. Neuromorphic scaling advantages for energy-efficient random walk computations. Nat Electron 5, 102–112 (2022). https://doi.org/10.1038/s41928-021-00705-7

