
Perspective

Materials and devices as solutions to computational problems in machine learning

Abstract

The growth of machine learning, combined with the approaching limits of conventional digital computing, is driving a search for alternative and complementary forms of computation, but few novel devices have been adopted by mainstream computing systems. The development of such computer technology requires advances in both computational devices and computer architectures. However, a disconnect exists between the device community and the computer architecture community, which limits progress. Here we explore this disconnect with a focus on machine learning hardware accelerators. We argue that the direct mapping of computational problems to materials and device properties provides a powerful route forwards. We examine novel materials and devices that have been successfully applied as solutions to computational problems: non-volatile memories for matrix-vector multiplication, magnetic tunnel junctions for stochastic computing and resistive memory for reconfigurable logic. We also propose metrics to facilitate comparisons between different solutions to machine learning tasks and highlight applications where novel materials and devices could potentially be of use.
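The first mapping named in the abstract, non-volatile memories performing matrix-vector multiplication, rests on a simple physical identity: if the matrix is stored as an array of conductances and the input vector is applied as voltages, Ohm's law and Kirchhoff's current law deliver the product as column currents in a single analogue step. The sketch below is a minimal NumPy illustration of that idea, not code from the paper; the conductance range and the 5% programming-noise figure are assumptions chosen for illustration.

```python
# Conceptual sketch of crossbar matrix-vector multiplication (MVM).
# Conductances G encode the matrix; input voltages V encode the vector;
# each column current is the dot product sum_i G[i, j] * V[i].
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))  # device conductances (S), assumed range
V = rng.uniform(0.0, 0.2, size=rows)            # read voltages (V)

# Ideal analogue MVM: every column wire sums its devices' currents in parallel.
I_ideal = G.T @ V

# Non-ideal devices: multiplicative programming noise (assumed 5% std dev).
G_noisy = G * rng.normal(1.0, 0.05, size=G.shape)
I_noisy = G_noisy.T @ V

print("ideal column currents (A):", I_ideal)
print("noisy column currents (A):", I_noisy)
print("relative error:", np.abs(I_noisy - I_ideal) / I_ideal)
```

Reading out all columns simultaneously is what makes the crossbar attractive: the multiplication happens in one physical step rather than O(rows × cols) digital operations, at the cost of the analogue imprecision modelled above.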


Fig. 1: Development process for an integrated circuit from the perspectives of a computer architect and a device physicist.
Fig. 2: Hardware solutions to computational problems.
Fig. 3: Metrics for evaluating a given solution to an ML task.
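Fig. 2's theme, mapping a computational primitive directly onto device physics, is easiest to see for the abstract's second example: stochastic computing with magnetic tunnel junctions. A low-barrier MTJ fluctuates randomly between two resistance states, and its time-averaged output follows a bias-dependent probability commonly modelled as a sigmoid (the "p-bit" picture). The following sketch is a hedged software stand-in for such a device under that assumed model, not an implementation from the paper.

```python
# Conceptual sketch of a "p-bit": a binary stochastic unit whose
# probability of outputting 1 is a sigmoid of its input bias, mimicking
# the telegraphic switching of a low-barrier magnetic tunnel junction.
import math
import random

random.seed(1)

def p_bit(bias: float) -> int:
    """One sample from the unit: P(output = 1) = 1 / (1 + exp(-bias))."""
    return 1 if random.random() < 1.0 / (1.0 + math.exp(-bias)) else 0

# An unbiased junction behaves like a fair coin; bias tilts the average.
for bias in (-2.0, 0.0, 2.0):
    mean = sum(p_bit(bias) for _ in range(10_000)) / 10_000
    print(f"bias = {bias:+.1f} -> mean output = {mean:.3f}")
```

Networks of such units can implement sampling-based workloads (for example Boltzmann-machine-style inference) with the randomness supplied by device physics rather than by a pseudorandom number generator.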

Data availability

All relevant data are included in the paper and/or its Supplementary Information files.

Acknowledgements

P.S.-M. is supported by EPSRC grant EP/V047507/1 and by the UKRI Materials Made Smarter Research Centre (EPSRC grant EP/V061798/1). N.J.T. acknowledges funding from EPSRC grant EP/L016087/1. S.H. acknowledges funding from EPSRC (EP/P005152/1, EP/P007767/1). We thank J. Crowcroft, S. Tappertzhofen and H. Joyce for their comments and feedback on the paper. Lastly, we acknowledge the contributions of J. Meech and J. Rodowicz in compiling the data in Supplementary Fig. 5.

Author information

Contributions

P.S.-M. conceived the idea. N.J.T. wrote the paper, collected data and performed analysis under the guidance of S.H. and P.S.-M.

Corresponding author

Correspondence to Phillip Stanley-Marbell.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Electronics thanks Kerem Camsari, Sreetosh Goswami, Melika Payvand and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–8, Discussion and Tables 1–3.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Tye, N.J., Hofmann, S. & Stanley-Marbell, P. Materials and devices as solutions to computational problems in machine learning. Nat Electron 6, 479–490 (2023). https://doi.org/10.1038/s41928-023-00977-1
