Abstract
Traditional von Neumann computing systems involve separate processing and memory units. However, moving data between these units is costly in both time and energy, a problem aggravated by the recent explosive growth of highly data-centric applications related to artificial intelligence. This calls for a radical departure from traditional systems, and one such non-von Neumann approach is in-memory computing, in which certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these devices, as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
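Although the Review contains no code, the central idea above, performing a multiply-accumulate in place by exploiting device physics, can be illustrated with a minimal simulation. The sketch below models analogue matrix-vector multiplication in a resistive crossbar: weights are encoded as pairs of device conductances, read voltages are applied to the rows, and Ohm's law together with Kirchhoff's current summation yields the result as column currents. All names and values here (`g_max`, `noise_sigma`, the 4×3 weight matrix) are illustrative assumptions, not parameters taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.uniform(-1.0, 1.0, size=(4, 3))  # target weight matrix (illustrative)
g_max = 1e-4                              # assumed maximum device conductance, in siemens

# Differential encoding: each weight maps to a conductance pair so that
# W is proportional to G_plus - G_minus (a common crossbar convention).
G_plus = np.clip(W, 0.0, None) * g_max
G_minus = np.clip(-W, 0.0, None) * g_max

def crossbar_mvm(v, noise_sigma=0.02):
    """Simulate one in-place multiply: column currents for row voltages v.

    noise_sigma models random conductance variations of the devices;
    the 2% value is an assumption for illustration only.
    """
    noisy_plus = G_plus * (1.0 + noise_sigma * rng.standard_normal(G_plus.shape))
    noisy_minus = G_minus * (1.0 + noise_sigma * rng.standard_normal(G_minus.shape))
    # Ohm's law per device, Kirchhoff current summation along each column:
    return v @ (noisy_plus - noisy_minus)

v = rng.uniform(0.0, 0.2, size=4)         # read voltages on the rows (volts)
print("analogue result:", crossbar_mvm(v) / g_max)  # rescale currents to weight units
print("ideal result:   ", v @ W)
```

Running the sketch shows the analogue result tracking the ideal product up to the injected conductance noise, which is precisely the precision trade-off the Review discusses for resistance-based in-memory computing.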
Change history
16 July 2020
A Correction to this paper has been published: https://doi.org/10.1038/s41565-020-0756-8
Acknowledgements
We would like to thank T. Tuma for technical discussions and assistance with scientific illustrations, G. Sarwat and I. Boybat for critical review of the manuscript, and L. Rudin and N. Gustafsson for editorial help. A.S. acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 682675).
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information: Nature Nanotechnology thanks Cheol Seong Hwang and the other, anonymous, reviewers for their contribution to the peer review of this work.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. et al. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 15, 529–544 (2020). https://doi.org/10.1038/s41565-020-0655-z