Abstract
The growth of machine learning, combined with the approaching limits of conventional digital computing, is driving a search for alternative and complementary forms of computation, but few novel devices have been adopted by mainstream computing systems. The development of such computer technology requires advances in both computational devices and computer architectures. However, a disconnect exists between the device community and the computer architecture community, which limits progress. Here we explore this disconnect with a focus on machine learning hardware accelerators. We argue that the direct mapping of computational problems to materials and device properties provides a powerful route forwards. We examine novel materials and devices that have been successfully applied as solutions to computational problems: non-volatile memories for matrix-vector multiplication, magnetic tunnel junctions for stochastic computing and resistive memory for reconfigurable logic. We also propose metrics to facilitate comparisons between different solutions to machine learning tasks and highlight applications where novel materials and devices could potentially be of use.
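To make the first two mappings concrete, the sketch below (an illustration added here, not taken from the paper; the function names crossbar_mvm and mtj_sample are hypothetical) models matrix-vector multiplication as Ohm's-law currents summed along crossbar columns, and a stochastic magnetic tunnel junction as a random bit whose probability is a sigmoid of an applied bias, a common idealization of p-bit behaviour:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def crossbar_mvm(G, v, g_noise=0.0):
    """Idealized matrix-vector multiply on a non-volatile memory crossbar.

    Each cell conducts I = G_ij * V_i (Ohm's law); summing the currents
    down each column (Kirchhoff's current law) yields one output element,
    so the whole product is computed in a single analog step. g_noise
    models fractional device-to-device conductance variation.
    """
    G_actual = G * (1.0 + g_noise * rng.standard_normal(G.shape))
    return G_actual.T @ v  # column currents, shape (n_cols,)

def mtj_sample(bias, beta=1.0):
    """One stochastic bit from a superparamagnetic magnetic tunnel junction.

    The junction fluctuates between two resistance states; an applied
    bias skews the dwell probability, idealized here as a sigmoid.
    """
    p_high = 1.0 / (1.0 + np.exp(-beta * bias))
    return int(rng.random() < p_high)

# Example: a 4x3 crossbar with 1% conductance spread.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens
v = rng.uniform(0.0, 0.2, size=4)          # read voltages in volts
print(crossbar_mvm(G, v, g_noise=0.01))    # output currents in amperes
print([mtj_sample(bias=0.5) for _ in range(8)])
```

Both routines are deliberately idealized: real crossbars also suffer wire resistance, sneak-path currents and read noise, and real junctions have temperature-dependent dwell times.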
Data availability
All relevant data are included in the paper and/or its Supplementary Information files.
Acknowledgements
P.S.-M. is supported by EPSRC grant EP/V047507/1 and by the UKRI Materials Made Smarter Research Centre (EPSRC grant EP/V061798/1). N.J.T. acknowledges funding from EPSRC grant EP/L016087/1. S.H. acknowledges funding from EPSRC (EP/P005152/1, EP/P007767/1). We thank J. Crowcroft, S. Tappertzhofen and H. Joyce for their comments and feedback on the paper. Lastly, we acknowledge the contributions of J. Meech and J. Rodowicz in compiling the data in Supplementary Fig. 5.
Author information
Contributions
P.S.-M. conceived the idea. N.J.T. wrote the paper, collected data and performed analysis under the guidance of S.H. and P.S.-M.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Electronics thanks Kerem Camsari, Sreetosh Goswami, Melika Payvand and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Figs. 1–8, Discussion and Tables 1–3.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Tye, N.J., Hofmann, S. & Stanley-Marbell, P. Materials and devices as solutions to computational problems in machine learning. Nat Electron 6, 479–490 (2023). https://doi.org/10.1038/s41928-023-00977-1
Received:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1038/s41928-023-00977-1
This article is cited by
- Bioinspired sensing-memory-computing integrated vision systems: biomimetic mechanisms, design principles, and applications. Science China Information Sciences (2024)