Memory devices and applications for in-memory computing

A Publisher Correction to this article was published on 16 July 2020.

Abstract

Traditional von Neumann computing systems involve separate processing and memory units. However, moving data between these units is costly in both time and energy, a problem aggravated by the recent explosive growth in highly data-centric applications related to artificial intelligence. This calls for a radical departure from the traditional architecture, and in-memory computing is one such non-von Neumann approach, whereby certain computational tasks are performed in place in the memory itself by exploiting the physical attributes of the memory devices. Both charge-based and resistance-based memory devices are being explored for in-memory computing. In this Review, we provide a broad overview of the key computational primitives enabled by these memory devices, as well as their applications spanning scientific computing, signal processing, optimization, machine learning, deep learning and stochastic computing.
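To make the central idea concrete, the following is a minimal numerical sketch (not taken from the Review; all names and parameter values are illustrative) of the canonical in-memory computing primitive: analog matrix-vector multiplication on a crossbar of resistive memory devices, where the matrix is stored as device conductances and the multiply-accumulate is carried out by Ohm's and Kirchhoff's laws rather than by a separate processor.

```python
# Sketch of the core in-memory computing primitive: analog matrix-vector
# multiplication on a resistive crossbar. Weights are stored as device
# conductances G[i][j]; applying read voltages V[i] to the word lines
# yields bit-line currents I[j] = sum_i G[i][j] * V[i], so the
# multiply-accumulate happens in place in the memory array.
import random

random.seed(0)
ROWS, COLS = 4, 3

# Conductances in siemens (an illustrative range for memristive devices)
G = [[random.uniform(1e-6, 1e-4) for _ in range(COLS)] for _ in range(ROWS)]
# Read voltages applied to the rows
V = [random.uniform(0.0, 0.2) for _ in range(ROWS)]

def crossbar_mvm(G, V):
    """Column currents of an ideal crossbar: I = G^T V."""
    return [sum(G[i][j] * V[i] for i in range(len(V)))
            for j in range(len(G[0]))]

I = crossbar_mvm(G, V)

# Real devices exhibit variability and noise; conductance variations
# directly perturb the result, which is why computational precision is a
# recurring concern for in-memory computing.
G_noisy = [[g * (1 + random.gauss(0, 0.05)) for g in row] for row in G]
I_noisy = crossbar_mvm(G_noisy, V)
```

The sketch deliberately models the array as an ideal linear conductance matrix; peripheral circuitry (DACs, ADCs, sense amplifiers) and device non-idealities are where much of the engineering effort discussed in the Review lies.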


Fig. 1: In-memory computing.
Fig. 2: Charge-based memory devices and computational primitives.
Fig. 3: Resistance-based memory devices and computational primitives.
Fig. 4: The application landscape for in-memory computing.
Fig. 5: Increasing the precision of in-memory computing for scientific computing.
Fig. 6: Deep learning training and inference using in-memory computing.
Fig. 7: Stochasticity associated with memristive devices and applications in computing.


References

  1. 1.

    Mutlu, O., Ghose, S., Gómez-Luna, J. & Ausavarungnirun, R. Processing data where it makes sense: Enabling in-memory computation. Microprocess. Microsyst. 67, 28–41 (2019).

    Google Scholar 

  2. 2.

    Horowitz, M. Computing’s energy problem (and what we can do about it). In Proc. International Solid-state Circuits Conference (ISSCC) 10–14 (IEEE, 2014).

  3. 3.

    Keckler, S. W., Dally, W. J., Khailany, B., Garland, M. & Glasco, D. GPUs and the future of parallel computing. IEEE Micro 31, 7–17 (2011).

    Google Scholar 

  4. 4.

    Jouppi, N. P. et al. In-datacenter performance analysis of a tensor processing unit. In Proc. International Symposium on Computer Architecture (ISCA) 1–12 (IEEE, 2017).

  5. 5.

    Sze, V., Chen, Y.-H., Yang, T.-J. & Emer, J. S. Efficient processing of deep neural networks: A tutorial and survey. Proc. IEEE 105, 2295–2329 (2017).

    Google Scholar 

  6. 6.

    Patterson, D. et al. A case for intelligent RAM. IEEE Micro 17, 34–44 (1997).

    Google Scholar 

  7. 7.

    Farooq, M. et al. 3D copper TSV integration, testing and reliability. In Proc. International Electron Devices Meeting 7–1 (IEEE, 2011).

  8. 8.

    Pawlowski, J. T. Hybrid memory cube (HMC). In Proceedings of the Hot Chips Symposium (HCS) 1–24 (IEEE, 2011).

  9. 9.

    Kim, J. & Kim, Y. HBM: Memory solution for bandwidth-hungry processors. In Proc. Hot Chips Symposium (HCS) 1–24 (IEEE, 2014).

  10. 10.

    Shulaker, M. M. et al. Three-dimensional integration of nanotechnologies for computing and data storage on a single chip. Nature 547, 74 (2017).

    CAS  Google Scholar 

  11. 11.

    Di Ventra, M. & Pershin, Y. V. The parallel approach. Nat. Phys. 9, 200 (2013).

    Google Scholar 

  12. 12.

    Indiveri, G. & Liu, S.-C. Memory and information processing in neuromorphic systems. Proc. The IEEE 103, 1379–1397 (2015).

    CAS  Google Scholar 

  13. 13.

    Zhirnov, V. V. & Marinella, M. J. in Emerging Nanoelectronic Devices (eds Chen, A.) Ch. 3 (Wiley Online Library, 2015).

  14. 14.

    Wong, H.-S. P. & Salahuddin, S. Memory leads the way to better computing. Nat. Nanotechnol. 10, 191 (2015).

    CAS  Google Scholar 

  15. 15.

    Chua, L. Resistance switching memories are memristors. Appl. Phys. A Mater. Sci. Process. 102, 765–783 (2011).

    CAS  Google Scholar 

  16. 16.

    Li, S. et al. DRISA: A DRAM-based reconfigurable in-situ accelerator. In Proc. International Symposium on Microarchitecture (MICRO) 288–301 (IEEE, 2017).

  17. 17.

    Seshadri, V. et al. Ambit: In-memory accelerator for bulk bitwise operations using commodity DRAM technology. In Proc. International Symposium on Microarchitecture 273–287 (IEEE, 2017).

  18. 18.

    Jeloka, S., Akesh, N. B., Sylvester, D. & Blaauw, D. A 28 nm configurable memory (TCAM/BCAM/SRAM) using push-rule 6T bit cell enabling logic-in-memory. IEEE J. Solid-State Circuits 51, 1009–1021 (2016).

    Google Scholar 

  19. 19.

    Aga, S. et al. Compute caches. In Proc. International Symposium on High Performance Computer Architecture (HPCA) 481–492 (IEEE, 2017).

  20. 20.

    Wang, J. et al. A compute SRAM with bit-serial integer/floating-point operations for programmable in-memory vector acceleration. In Proc. International Solid- State Circuits Conference (ISSCC) 224–226 (IEEE, 2019).

  21. 21.

    Jiang, Z., Yin, S., Seok, M. & Seo, J. XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks. In Proc. Symposium on VLSI Technology 173–174 (IEEE, 2018).

  22. 22.

    Biswas, A. & Chandrakasan, A. P. CONV-SRAM: an energy-efficient SRAM with in-memory dot-product computation for low-power convolutional neural networks. IEEE J. Solid-State Circuits 54, 217–230 (2019).

    Google Scholar 

  23. 23.

    Valavi, H., Ramadge, P. J., Nestler, E. & Verma, N. A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute. IEEE J. Solid-State Circuits 54, 1789–1799 (2019).

    Google Scholar 

  24. 24.

    Verma, N. et al. In-memory computing: Advances and prospects. IEEE J. Solid-State Circuits 11, 43–55 (2019).

    Google Scholar 

  25. 25.

    Gonugondla, S. K., Kang, M. & Shanbhag, N. R. A variation-tolerant in-memory machine learning classifier via on-chip training. IEEE J. Solid-State Circuits 53, 3163–3173 (2018).

    Google Scholar 

  26. 26.

    Bankman, D., Yang, L., Moons, B., Verhelst, M. & Murmann, B. An always-on 3.8 μ J/86% CIFAR-10 mixed-signal binary CNN processor with all memory on chip in 28-nm CMOS. IEEE J. Solid-State Circuits 54, 158–172 (2019).

    Google Scholar 

  27. 27.

    Diorio, C., Hasler, P., Minch, A. & Mead, C. A. A single-transistor silicon synapse. IEEE Transactions on Electron Devices 43, 1972–1980 (1996).

    CAS  Google Scholar 

  28. 28.

    Merrikh-Bayat, F. et al. High-performance mixed-signal neurocomputing with nanoscale floating-gate memory cell arrays. IEEE Trans. Neural Netw. Learn. Syst. 29, 4782–4790 (2018).

    Google Scholar 

  29. 29.

    Wang, P. et al. Three-dimensional NAND flash for vector-matrix multiplication. EEE Trans. Very Large Scale Integr. VLSI Syst. 27, 988–991 (2019).

    Google Scholar 

  30. 30.

    Burr, G. W. et al. Access devices for 3D crosspoint memory. J. Vac. Sci. Technol. B Nanotechnol. Microelectron. 32, 040802 (2014).

    Google Scholar 

  31. 31.

    Hickmott, T. Low-frequency negative resistance in thin anodic oxide films. J. Appl. Phys. 33, 2669–2682 (1962).

    CAS  Google Scholar 

  32. 32.

    Beck, A., Bednorz, J., Gerber, C., Rossel, C. & Widmer, D. Reproducible switching effect in thin oxide films for memory applications. Applied Physics Letters 77, 139–141 (2000).

    CAS  Google Scholar 

  33. 33.

    Waser, R. & Aono, M. Nanoionics-based resistive switching memories. Nat. Mater. 6, 833–840 (2007).

    CAS  Google Scholar 

  34. 34.

    Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80 (2008).

    CAS  Google Scholar 

  35. 35.

    Ovshinsky, S. R. Reversible electrical switching phenomena in disordered structures. Phys. Rev. Lett. 21, 1450 (1968).

    Google Scholar 

  36. 36.

    Wong, H.-S. P. et al. Phase change memory. Proc. IEEE 98, 2201–2227 (2010).

    Google Scholar 

  37. 37.

    Burr, G. W. et al. Recent progress in phase-change memory technology. IEEE J. Emerg. Sel. Top. Circuits Syst. 6, 146–162 (2016).

    Google Scholar 

  38. 38.

    Khvalkovskiy, A. et al. Basic principles of STT-MRAM cell operation in memory arrays. J. Phys. D Appl. Phys. 46, 074001 (2013).

    Google Scholar 

  39. 39.

    Kent, A. D. & Worledge, D. C. A new spin on magnetic memories. Nat. Nanotechnol. 10, 187 (2015).

    CAS  Google Scholar 

  40. 40.

    Vourkas, I. & Sirakoulis, G. C. Emerging memristor-based logic circuit design approaches: A review. IEEE Circuits and Systems Magazine 16, 15–30 (2016).

    Google Scholar 

  41. 41.

    Borghetti, J. et al. Memristive switches enable stateful logic operations via material implication. Nature 464, 873 (2010).

    CAS  Google Scholar 

  42. 42.

    Linn, E., Rosezin, R., Tappertzhofen, S., Böttger, U. & Waser, R. Beyond von neumann-logic operations in passive crossbar arrays alongside memory operations. Nanotechnology 23, 305205 (2012).

    CAS  Google Scholar 

  43. 43.

    Jeong, D. S., Kim, K. M., Kim, S., Choi, B. J. & Hwang, C. S. Memristors for energy-efficient new computing paradigms. Adv. Electron. Mater. 2, 1600090 (2016).

    Google Scholar 

  44. 44.

    Kvatinsky, S. et al. MAGIC-memristor-aided logic IEEE Trans. Circuits Syst. II Express Briefs 61, 895–899 (2014).

    Google Scholar 

  45. 45.

    Mahmoudi, H., Windbacher, T., Sverdlov, V. & Selberherr, S. Implication logic gates using spin-transfer-torque-operated magnetic tunnel junctions for intrinsic logic-in-memory. Solid State Electron. 84, 191–197 (2013).

    CAS  Google Scholar 

  46. 46.

    Kim, K. M. et al. Single-cell stateful logic using a dual-bit memristor. Phys. Status Solidi Rapid Res. Lett. 13, 1800629 (2019).

    Google Scholar 

  47. 47.

    Xu, N., Fang, L., Kim, K. M. & Hwang, C. S. Time-efficient stateful dual-bit-memristor logic. Phys. Status Solidi Rapid Res. Lett. 13, 1900033 (2019).

    Google Scholar 

  48. 48.

    Li, S. et al. Pinatubo: A processing-in-memory architecture for bulk bitwise operations in emerging non-volatile memories. In Proc. The Design Automation Conference (DAC) 173 (ACM, 2016).

  49. 49.

    Xie, L. et al. Scouting logic: A novel memristor-based logic design for resistive computing. In Proc. The IEEE Symposium on VLSI (ISVLSI) 176–181 (IEEE, 2017).

  50. 50.

    Maan, A. K., Jayadevi, D. A. & James, A. P. A survey of memristive threshold logic circuits. IEEE Trans. Neural Netw. Learn. Syst. 28, 1734–1746 (2016).

    Google Scholar 

  51. 51.

    Burr, G. W. et al. Neuromorphic computing using non-volatile memory. Adv Phys X 2, 89–124 (2017).

    Google Scholar 

  52. 52.

    Ielmini, D. & Wong, H.-S. P. In-memory computing with resistive switching devices. Nat. Electron. 1, 333 (2018).

    Google Scholar 

  53. 53.

    Wang, Z. et al. Resistive switching materials for information processing. Nat. Rev. Mater. https://doi.org/10.1038/s41578-019-0159-3 (2020).

    Article  Google Scholar 

  54. 54.

    Wright, C. D., Hosseini, P. & Diosdado, J. A. V. Beyond von-neumann computing with nanoscale phase-change memory devices. Adv. Funct. Mater. 23, 2248–2254 (2013).

    CAS  Google Scholar 

  55. 55.

    Sebastian, A. et al. Brain-inspired computing using phase-change memory devices. J. Appl. Phys. 124, 111101 (2018).

    Google Scholar 

  56. 56.

    Godse, A. P. & Godse, D. A. Computer Organization and Architecture (Technical Publications, 2008).

  57. 57.

    Bojnordi, M. N. & Ipek, E. Memristive boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning. In Proc. The International Symposium on High Performance Computer Architecture (HPCA) 1–13 (IEEE, 2016).

  58. 58.

    Shafiee, A. et al. ISAAC: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars. Comput. Archit. News 44, 14–26 (2016).

    Google Scholar 

  59. 59.

    Chi, P. et al. PRIME: A novel processing-in-memory architecture for neural network computation in ReRAM-based main memory. In Proc. 43rd Annual International Symposium on Computer Architecture (ISCA) News 27–39 (IEEE, 2016).

  60. 60.

    Song, L., Qian, X., Li, H. & Chen, Y. PIPELAYER: A pipelined ReRAM-based accelerator for deep learning. In Proc. The International Symposium on High Performance Computer Architecture (HPCA), 541–552 (IEEE, 2017).

  61. 61.

    Zidan, M. A. et al. A general memristor-based partial differential equation solver. Nat. Electron. 1, 411 (2018).

    Google Scholar 

  62. 62.

    Higham, N. J. Accuracy and Stability of Numerical Algorithms, Vol. 80 (Society for Industrial and Applied Mathematics, 2002).

  63. 63.

    Bekas, C., Curioni, A. & Fedulova, I. Low cost high performance uncertainty quantification. In Proc. 2nd Workshop on High Performance Computational Finance 1–8 (ACM, 2009).

  64. 64.

    Le Gallo, M. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246–253 (2018).

    Google Scholar 

  65. 65.

    Liu, S., Wang, Y., Fardad, M. & Varshney, P. K. A memristor-based optimization framework for artificial intelligence applications. IEEE Circuits and Systems Magazine 18, 29–44 (2018).

    Google Scholar 

  66. 66.

    Sun, Z. et al. Solving matrix equations in one step with cross-point resistive arrays. Proc. Natl. Acad. Sci. USA 116, 4123–4128 (2019).

    CAS  Google Scholar 

  67. 67.

    Sturges, R. H. Analog matrix inversion (robot kinematics). IEEE Journal on Robotics and Automation 4, 157–162 (1988).

    Google Scholar 

  68. 68.

    Feinberg, B., Wang, S. & Ipek, E. Making memristive neural network accelerators reliable. In Proc. The International Symposium on High Performance Computer Architecture (HPCA) 52–65 (IEEE, 2018).

  69. 69.

    Feinberg, B., Vengalam, U. K. R., Whitehair, N., Wang, S. & Ipek, E. Enabling scientific computing on memristive accelerators. In Proc. International Symposium on Computer Architecture (ISCA) 367–382 (IEEE, 2018).

  70. 70.

    Li, C. et al. Analogue signal and image processing with large memristor crossbars. Nat. Electron. 1, 52–59 (2018).

    Google Scholar 

  71. 71.

    Le Gallo, M., Sebastian, A., Cherubini, G., Giefers, H. & Eleftheriou, E. Compressed sensing with approximate message passing using in-memory computing. IEEE Trans. Electron Devices 65, 4304–4312 (2018).

    Google Scholar 

  72. 72.

    Cai, F. et al. Harnessing intrinsic noise in memristor hopfield neural networks for combinatorial optimization. Preprint at https://arxiv.org/abs/1903.11194 (2019).

  73. 73.

    Mostafa, H., Müller, L. K. & Indiveri, G. An event-based architecture for solving constraint satisfaction problems. Nat. Commun. 6, 8941 (2015).

    CAS  Google Scholar 

  74. 74.

    Parihar, A., Shukla, N., Jerry, M., Datta, S. & Raychowdhury, A. Vertex coloring of graphs via phase dynamics of coupled oscillatory networks. Sci. Rep. 7, 911 (2017).

    Google Scholar 

  75. 75.

    Kumar, S., Strachan, J. P. & Williams, R. S. Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing. Nature 548, 318 (2017).

    CAS  Google Scholar 

  76. 76.

    Torrejon, J. et al. Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428–431 (2017).

    CAS  Google Scholar 

  77. 77.

    Seo, J. et al. On-chip sparse learning acceleration with CMOS and resistive synaptic devices. IEEE Trans. Nanotechnol. 14, 969–979 (2015).

    CAS  Google Scholar 

  78. 78.

    Sheridan, P. M. et al. Sparse coding with memristor networks. Nat. Nanotechnol. 12, 784–789 (2017).

    CAS  Google Scholar 

  79. 79.

    Sheridan, P. M., Du, C. & Lu, W. D. Feature extraction using memristor networks. IEEE Trans. Neural Netw. Learn. Syst. 27, 2327–2336 (2016).

    Google Scholar 

  80. 80.

    Choi, S., Sheridan, P. & Lu, W. D. Data clustering using memristor networks. Sci. Rep. 5, 10492 (2015).

    Google Scholar 

  81. 81.

    Karam, R., Puri, R., Ghosh, S. & Bhunia, S. Emerging trends in design and applications of memory-based computing and content-addressable memories. Proc. IEEE 103, 1311–1330 (2015).

    CAS  Google Scholar 

  82. 82.

    Rahimi, A. et al. High-dimensional computing as a nanoscalable paradigm. IEEE Trans. Circuits Syst. I Regul. Pap. 64, 2508–2521 (2017).

    Google Scholar 

  83. 83.

    Wu, T. F. et al. Hyperdimensional computing exploiting carbon nanotube FETs, resistive RAM, and their monolithic 3D integration. IEEE J. Solid-State Circuits 53, 3183–3196 (2018).

    Google Scholar 

  84. 84.

    Graves, A. et al. Hybrid computing using a neural network with dynamic external memory. Nature 538, 471 (2016).

    Google Scholar 

  85. 85.

    Ni, K. et al. Ferroelectric ternary content-addressable memory for one-shot learning. Nat. Electron. 2, 521–529 (2019).

    Google Scholar 

  86. 86.

    Eryilmaz, S. B. et al. Brain-like associative learning using a nanoscale non-volatile phase change synaptic device array. Front. Neurosci. 8, 205 (2014).

    Google Scholar 

  87. 87.

    Hu, S. et al. Associative memory realized by a reconfigurable memristive Hopfield neural network. Nat. Commun. 6, 7522 (2015).

    CAS  Google Scholar 

  88. 88.

    Kavehei, O. et al. An associative capacitive network based on nanoscale complementary resistive switches for memory-intensive computing. Nanoscale 5, 5119–5128 (2013).

    CAS  Google Scholar 

  89. 89.

    Du, C. et al. Reservoir computing using dynamic memristors for temporal information processing. Nat. Commun. 8, 2204 (2017).

    Google Scholar 

  90. 90.

    Sebastian, A. et al. Temporal correlation detection using computational phase-change memory. Nat. Commun. 8, 1115 (2017).

    Google Scholar 

  91. 91.

    LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436 (2015).

    CAS  Google Scholar 

  92. 92.

    He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (IEEE, 2016).

  93. 93.

    LeCun, Y. Deep learning hardware: Past, present, and future. In Proc. International Solid-State Circuits Conference (ISSCC) 12–19 (IEEE, 2019).

  94. 94.

    Chen, Y., Yang, T., Emer, J. & Sze, V. Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices. IEEE J. Em. Sel. Top. C 9, 292–308 (2019).

    Google Scholar 

  95. 95.

    Dazzi, M. et al. 5 parallel prism: A topology for pipelined implementations of convolutional neural networks using computational memory. In Proc. NeurIPS MLSys Workshop (NeurIPS, 2019); http://learningsys.org/neurips19/acceptedpapers.html

  96. 96.

    Jia, Z., Maggioni, M., Smith, J. & Scarpazza, D. P. Dissecting the NVidia Turing T4 GPU via microbenchmarking. Preprint at https://arxiv.org/abs/1903.07486 (2019).

  97. 97.

    Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R. & Bengio, Y. Quantized neural networks: Training neural networks with low precision weights and activations. J. Mach. Learn. Res. 18, 6869–6898 (2017).

    Google Scholar 

  98. 98.

    Xue, C. et al. 24.1 a 1mb multibit ReRAM computing-in-memory macro with 14.6ns parallel MAC computing time for CNN based AI edge processors. In Proc. The International Solid-State Circuits Conference (ISSCC) 388–390 (IEEE, 2019).

  99. 99.

    Hu, M. et al. Memristor-based analog computation and neural network classification with a dot product engine. Advanced Materials 30, 1705914 (2018).

    Google Scholar 

  100. 100.

    Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).

    CAS  Google Scholar 

  101. 101.

    Suri, M. et al. Phase change memory as synapse for ultra-dense neuromorphic systems: Application to complex visual pattern extraction. In Proc. The International Electron Devices Meeting (IEDM) 4.4.1–44.4 (IEEE, 2011).

  102. 102.

    Chen, W.-H. et al. CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors. Nat. Electron. 2, 420–428 (2019).

    CAS  Google Scholar 

  103. 103.

    Murray, A. F. & Edwards, P. J. Enhanced mlp performance and fault tolerance resulting from synaptic weight noise during training. IEEE T. Neural Networ. 5, 792–802 (1994).

    CAS  Google Scholar 

  104. 104.

    Liu, B. et al. Vortex: Variation-aware training for memristor X-bar. In Proc. The Design Automation Conference (DAC) 1–6 (DAC, 2015).

  105. 105.

    Sebastian, A. et al. Computational memory-based inference and training of deep neural networks. In Proc. The Symposium on VLSI Technology T168–T169 (IEEE, 2019).

  106. 106.

    Gokmen, T., Onen, M. & Haensch, W. Training deep convolutional neural networks with resistive cross-point devices. Front. Neurosci. 11, 538 (2017).

    Google Scholar 

  107. 107.

    Alibart, F., Zamanidoost, E. & Strukov, D. B. Pattern classification by memristive crossbar circuits using ex situ and in situ training. Nat. Commun. 4, 2072 (2013).

    Google Scholar 

  108. 108.

    Burr, G. W. et al. Experimental demonstration and tolerancing of a large-scale neural network (165 000 synapses) using phase-change memory as the synaptic weight element. IEEE T. Electron Dev. 62, 3498–3507 (2015).

    Google Scholar 

  109. 109.

    Gokmen, T. & Vlasov, Y. Acceleration of deep neural network training with resistive cross-point devices: design considerations. Front. Neurosci. 10, 333 (2016).

    Google Scholar 

  110. 110.

    Agarwal, S. et al. Achieving ideal accuracies in analog neuromorphic computing using periodic carry. In Proc. The Symposium on VLSI Technology T174–T175 (IEEE, 2017).

  111. 111.

    Ambrogio, S. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018).

    CAS  Google Scholar 

  112. 112.

    Yu, S. Neuro-inspired computing with emerging nonvolatile memory. Proc. IEEE 106, 260–285 (2018).

    CAS  Google Scholar 

  113. 113.

    Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).

    CAS  Google Scholar 

  114. 114.

    Yao, P. et al. Face classification using electronic synapses. Nat. Commun. 8, 15199 (2017).

    CAS  Google Scholar 

  115. 115.

    Li, C. et al. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks. Nat. Commun. 9, 2385 (2018).

    Google Scholar 

  116. 116.

    Nandakumar, S. et al. Mixed-precision architecture based on computational memory for training deep neural networks. In Proc. The International Symposium on Circuits and Systems (ISCAS) 1–5 (IEEE, 2018).

  117. 117.

    Pfeiffer, M. & Pfeil, T. Deep learning with spiking neurons: Opportunities and challenges. Front. Neurosci. 12, 774 (2018).

    Google Scholar 

  118. 118.

    Diehl, P. U. et al. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Proc. International Joint Conference on Neural Networks (IJCNN) 1–8 (IEEE, 2015).

  119. 119.

    Sengupta, A., Ye, Y., Wang, R., Liu, C. & Roy, K. Going deeper in spiking neural networks: VGG and residual architectures. Front. Neurosci. 13, 95 (2019).

    Google Scholar 

  120. 120.

    Esser, S. K., Appuswamy, R., Merolla,P., Arthur, J. V. & Modha, D. S. Backpropagation for energy-efficient neuromorphic computing. In Proc. Advances in Neural Information Processing Systems (Eds. Cortes, C. et al) 1117–1125 (NIPS, 2015).

  121. 121.

    Lee, J. H., Delbruck, T. & Pfeiffer, M. Training deep spiking neural networks using backpropagation. Front. Neurosci. 10, 508 (2016).

    Google Scholar 

  122. 122.

    Woźniak, S., Pantazi, A. & Eleftheriou, E. Deep networks incorporating spiking neural dynamics. Preprint at https://arxiv.org/abs/1812.07040 (2018).

  123. 123.

    Benjamin, B. V. et al. Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations. Proc. IEEE 102, 699–716 (2014).

    Google Scholar 

  124. 124.

    Qiao, N. et al. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses. Front. Neurosci. 9, 141 (2015).

    Google Scholar 

  125. 125.

    Kuzum, D., Jeyasingh, R. G., Lee, B. & Wong, H.-S. P. Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Letters 12, 2179–2186 (2011).

    Google Scholar 

  126. 126.

    Kim, S. et al. NVM neuromorphic core with 64k-cell (256-by-256) phase change memory synaptic array with on-chip neuron circuits for continuous in-situ learning. In Proc. The International Electron Devices Meeting (IEDM) 17–1 (IEEE, 2015).

  127. 127.

    Tuma, T., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Detecting correlations using phase-change neurons and synapses. IEEE Electr. Device L. 37, 1238–1241 (2016).

    Google Scholar 

  128. 128.

    Pantazi, A., Woźniak, S., Tuma, T. & Eleftheriou, E. All-memristive neuromorphic computing with level-tuned neurons. Nanotechnology 27, 355205 (2016).

    Google Scholar 

  129. 129.

    Covi, E. et al. Analog memristive synapse in spiking networks implementing unsupervised learning. Front. Neurosci. 10, 482 (2016).

    Google Scholar 

  130. 130.

    Serb, A. et al. Unsupervised learning in probabilistic neural networks with multi-state metal-oxide memristive synapses. Nat. Commun. 7, 12611 (2016).

    CAS  Google Scholar 

  131. 131.

    Kheradpisheh, S. R., Ganjtabesh, M., Thorpe, S. J. & Masquelier, T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Networks 99, 56–67 (2018).

    Google Scholar 

  132. 132.

    Moraitis, T., Sebastian, A. & Eleftheriou, E. The role of short-term plasticity in neuromorphic learning: Learning from the timing of rate-varying events with fatiguing spike-timing-dependent plasticity. IEEE Nanotechnology Magazine 12, 45–53 (2018).

    Google Scholar 

  133. 133.

    Wang, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 101 (2017).

    CAS  Google Scholar 

  134. 134.

    Carboni, R. & Ielmini, D. Stochastic memory devices for security and computing. Adv. Electron. Mater. 1900198 (2019).

  135. 135.

    Jo, S. H., Kim, K.-H. & Lu, W. Programmable resistance switching in nanoscale two-terminal devices. Nano letters 9, 496–500 (2008).

    Google Scholar 

  136. 136.

    Le Gallo, M., Athmanathan, A., Krebs, D. & Sebastian, A. Evidence for thermally assisted threshold switching behavior in nanoscale phase-change memory cells. J. Appl. Phys. 119, 025704 (2016).

    Google Scholar 

  137. 137.

    Le Gallo, M., Tuma, T., Zipoli, F., Sebastian, A. & Eleftheriou, E. Inherent stochasticity in phase-change memory devices. In Proc. 2016 46th European Solid-State Device Research Conference (ESSDERC) 373–376 (IEEE, 2016).

  138. 138.

    Alaghi, A. & Hayes, J. P. Survey of stochastic computing. ACM T Embed. Comput. S. 12, 92 (2013).

    Google Scholar 

  139. 139.

    Gupta, S., Agrawal, A., Gopalakrishnan, K. & Narayanan, P. Deep learning with limited numerical precision. In Proc. International Conference on Machine Learning 1737–1746 (2015).

  140. 140.

    Yang, K. et al. 16.3 a 23mb/s 23pj/b fully synthesized true-random-number generator in 28nm and 65nm CMOS. In Proc. Proceedings of the International Solid-State Circuits Conference (ISSCC) 280–281 (IEEE, 2014).

  141. 141.

    Jiang, H. et al. A novel true random number generator based on a stochastic diffusive memristor. Nat. Commun. 8, 882 (2017).

    Google Scholar 

  142. 142.

    Gaba, S., Sheridan, P., Zhou, J., Choi, S. & Lu, W. Stochastic memristive devices for computing and neuromorphic applications. Nanoscale 5, 5872–5878 (2013).

    CAS  Google Scholar 

  143. 143.

    Balatti, S. et al. Physical unbiased generation of random numbers with coupled resistive switching devices. IEEE T. Electron Dev. 63, 2029–2035 (2016).

    Google Scholar 

  144. 144.

    Choi, W. H. et al. A magnetic tunnel junction based true random number generator with conditional perturb and real-time output probability tracking. In Proc. The International Electron Devices Meeting 12–5 (IEEE, 2014).

  145. 145.

    Carboni, R. et al. Random number generation by differential read of stochastic switching in spin-transfer torque memory. IEEE Electr. Device L. 39, 951–954 (2018).

    CAS  Google Scholar 

  146. 146.

    Shim, Y., Chen, S., Sengupta, A. & Roy, K. Stochastic spin-orbit torque devices as elements for bayesian inference. Sci. Rep. 7, 14101 (2017).

    Google Scholar 

  147. 147.

    Tuma, T., Pantazi, A., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. Nat. Nanotechnol. 11, 693–699 (2016).

    CAS  Google Scholar 

  148. 148.

    Mizrahi, A. et al. Neural-like computing with populations of superparamagnetic basis functions. Nat. Commun. 9, 1533 (2018).

    Google Scholar 

  149. 149.

    Bichler, O. et al. Visual pattern extraction using energy-efficient 2-PCM synapse neuromorphic architecture. IEEE T. Electron Dev. 59, 2206–2214 (2012).

    Google Scholar 

  150. 150.

    Holcomb, D. E., Burleson, W. P. & Fu, K. Power-up SRAM state as an identifying fingerprint and source of true random numbers. IEEE T. Comput. 58, 1198–1210 (2009).

    Google Scholar 

  151. 151.

    Gao, L., Chen, P.-Y., Liu, R. & Yu, S. Physical unclonable function exploiting sneak paths in resistive cross-point array. IEEE Transactions on Electron Devices 63, 3109–3115 (2016).

    Google Scholar 

  152. 152.

    Nili, H. et al. Hardware-intrinsic security primitives enabled by analogue state and nonlinear conductance variations in integrated memristors. Nat. Electron. 1, 197 (2018).

    Google Scholar 

  153. 153.

    Jiang, H. et al. A provable key destruction scheme based on memristive crossbar arrays. Nat. Electron. 1, 548 (2018).

    Google Scholar 

  154. 154.

    Talati, N., Gupta, S., Mane, P. & Kvatinsky, S. Logic design within memristive memories using memristor-aided logic (MAGIC). IEEE T. Nanotechnol. 15, 635–650 (2016).

    CAS  Google Scholar 

  155. 155.

    Cheng, L. et al. Functional demonstration of a memristive arithmetic logic unit (MemALU) for in-memory computing. Adv. Funct. Mater. (2019).

  156. 156.

    Haj-Ali, A., Ben-Hur, R., Wald, N., Ronen, R. & Kvatinsky, S. IMAGING: In-memory algorithms for image processing. IEEE T. Circuits Systems-I 65, 4258–4271 (2018).

    Google Scholar 

  157. 157.

    Hamdioui, S. et al. Applications of computation-in-memory architectures based on memristive devices. In Proc. The Design, Automation & Test in Europe Conference & Exhibition (DATE) 486–491 (IEEE, 2019).

  158. 158.

    Xiong, F., Liao, A. D., Estrada, D. & Pop, E. Low-power switching of phase-change materials with carbon nanotube electrodes. Science 332, 568–570 (2011).

    CAS  Google Scholar 

  159. 159.

    Li, K.-S. et al. Utilizing sub-5 nm sidewall electrode technology for atomic-scale resistive memory fabrication. In Proc. Symposium on VLSI Technology 1–2 (IEEE, 2014).

  160. 160.

    Salinga, M. et al. Monatomic phase change memory. Nat. Mater. 17, 681–685 (2018).

    CAS  Google Scholar 

  161. 161.

    Pi, S. et al. Memristor crossbar arrays with 6-nm half-pitch and 2-nm critical dimension. Nat. Nanotechnol. 14, 35 (2019).

    CAS  Google Scholar 

  162. 162.

    Brivio, S., Frascaroli, J. & Spiga, S. Role of Al doping in the filament disruption in HfO2 resistance switches. Nanotechnology 28, 395202 (2017).

    Google Scholar 

  163. 163.

    Choi, S. et al. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations. Nat. Mater. 17, 335 (2018).

  164.

    Boybat, I. et al. Neuromorphic computing with multi-memristive synapses. Nat. Commun. 9, 2514 (2018).

  165.

    Koelmans, W. W. et al. Projected phase-change memory devices. Nat. Commun. 6, 8181 (2015).

  166.

    Giannopoulos, I. et al. 8-bit precision in-memory multiplication with projected phase-change memory. In Proc. The International Electron Devices Meeting (IEDM) 27.7 (IEEE, 2018).

  167.

    Chen, Y. et al. DaDianNao: A machine-learning supercomputer. In Proc. The 47th Annual IEEE/ACM International Symposium on Microarchitecture 609–622 (IEEE Computer Society, 2014).

  168.

    Ankit, A. et al. PUMA: A programmable ultra-efficient memristor-based accelerator for machine learning inference. In Proc. The International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 715–73 (ACM, 2019).

  169.

    Eleftheriou, E. et al. Deep learning acceleration based on in-memory computing. IBM J. Res. Dev. (2019).

  170.

    Yoon, K. J., Bae, W., Jeong, D.-K. & Hwang, C. S. Comprehensive writing margin analysis and its application to stacked one diode-one memory device for high-density crossbar resistance switching random access memory. Adv. Electron. Mater. 2, 1600326 (2016).

  171.

    Le Gallo, M., Sebastian, A., Cherubini, G., Giefers, H. & Eleftheriou, E. Compressed sensing recovery using computational memory. In Proc. The International Electron Devices Meeting (IEDM) 28.3 (IEEE, 2017).

  172.

    van de Burgt, Y. et al. A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing. Nat. Mater. 16, 414 (2017).

  173.

    Tang, J. et al. ECRAM as scalable synaptic cell for high-speed, low-power neuromorphic computing. In Proc. The International Electron Devices Meeting (IEDM) 13.1 (IEEE, 2018).

  174.

    Fuller, E. J. et al. Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing. Science 364, 570–574 (2019).

  175.

    Kimura, H. et al. Complementary ferroelectric-capacitor logic for low-power logic-in-memory VLSI. IEEE J. Solid-State Circuits 39, 919–926 (2004).

  176.

    Aziz, A. et al. Computing with ferroelectric FETs: Devices, models, systems, and applications. In Proc. The Design, Automation & Test in Europe Conference & Exhibition (DATE) 1289–1298 (IEEE, 2018).

  177.

    Chanthbouala, A. et al. A ferroelectric memristor. Nat. Mater. 11, 860 (2012).

  178.

    Ríos, C. et al. Integrated all-photonic non-volatile multi-level memory. Nat. Photon. 9, 725 (2015).

  179.

    Wuttig, M., Bhaskaran, H. & Taubner, T. Phase-change materials for non-volatile photonic applications. Nat. Photon. 11, 465 (2017).

  180.

    Ríos, C. et al. In-memory computing on a photonic platform. Sci. Adv. 5, eaau5759 (2019).

Acknowledgements

We would like to thank T. Tuma for technical discussions and assistance with scientific illustrations, G. Sarwat and I. Boybat for critical review of the manuscript, and L. Rudin and N. Gustafsson for editorial help. A.S. acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement number 682675).

Author information

Corresponding author

Correspondence to Abu Sebastian.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Nanotechnology thanks Cheol Seong Hwang and the other, anonymous, reviewers for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and Permissions

About this article

Cite this article

Sebastian, A., Le Gallo, M., Khaddam-Aljameh, R. et al. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 15, 529–544 (2020). https://doi.org/10.1038/s41565-020-0655-z
