
Scalable massively parallel computing using continuous-time data representation in nanoscale crossbar array

Abstract

The growth of connected intelligent devices in the Internet of Things has created a pressing need for the real-time processing and understanding of large volumes of analogue data. Because computing speed cannot be boosted indefinitely, digital computing cannot meet the demand for processing analogue information that is intrinsically continuous in magnitude and time. By utilizing a continuous data representation in a nanoscale crossbar array, parallel computing can be implemented for the direct, real-time processing of analogue information. Here, we propose a scalable massively parallel computing scheme that exploits a continuous-time data representation and frequency multiplexing in a nanoscale crossbar array. This scheme enables the parallel reading of stored data and one-shot matrix–matrix multiplication in the crossbar array. Furthermore, we achieve the one-shot recognition of 16 letter images using two physically interconnected crossbar arrays and demonstrate that the processing and modulation of analogue information can be performed simultaneously in a memristive crossbar array.
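The frequency-multiplexed computing principle described above can be illustrated numerically: if each input vector amplitude-modulates a sinusoidal carrier at its own frequency, the superposed row voltages drive the crossbar in one shot, the column currents (by Ohm's and Kirchhoff's laws) carry every vector–matrix product on a separate carrier, and a Fourier projection separates them afterwards. The following Python sketch is a hypothetical toy model of this idea; the array sizes, carrier frequencies and conductance values are illustrative assumptions, not the authors' experimental parameters.

```python
import numpy as np

# Toy model of frequency-multiplexed analogue computing: each row of the
# input matrix X rides on its own carrier frequency, so the crossbar
# evaluates all rows of X @ G simultaneously (a one-shot matrix-matrix
# multiplication). All values below are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_out, n_vec = 4, 3, 2               # wordlines, bitlines, input vectors
G = rng.uniform(0.1, 1.0, (n_in, n_out))   # memristor conductance matrix
X = rng.uniform(0.0, 1.0, (n_vec, n_in))   # analogue inputs to multiplex

fs, T = 10_000, 1.0                        # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)
freqs = 100 * (1 + np.arange(n_vec))       # one distinct carrier per vector

# Wordline voltage i: sum over vectors k of X[k, i] * cos(2*pi*f_k*t)
carriers = np.cos(2 * np.pi * np.outer(freqs, t))     # (n_vec, len(t))
V = np.einsum('ki,kt->it', X, carriers)               # (n_in, len(t))

# Ohm + Kirchhoff: bitline current j is sum_i G[i, j] * V[i, t]
I = G.T @ V                                           # (n_out, len(t))

# Demultiplex: project each bitline current onto each carrier frequency
Y = np.array([[2 / len(t) * I[j] @ carriers[k]
               for j in range(n_out)] for k in range(n_vec)])

print(np.allclose(Y, X @ G, atol=1e-6))   # recovers the matrix product
```

Because the carriers complete an integer number of cycles over the sampling window, they are orthogonal and the projection recovers each vector–matrix product exactly (up to floating-point error); in hardware, device noise and finite bandwidth would limit how many carriers can be packed in.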

Fig. 1: Continuous-time data representation for FMC in a memristive crossbar array.
Fig. 2: Experimental implementations of FMC-based massively parallel computing.
Fig. 3: FMC-based one-shot recognition of numerous images and wireless communication of the recognition results.
Fig. 4: Performance of FMC-based massively parallel computing.

Data availability

The data supporting the findings of this study are available within the article and its Supplementary Information, and from the corresponding author upon reasonable request.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (62034004, 61625402, 61974176 and 61921005), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB44000000), the National Key R&D Program of China (2019YFB2205400 and 2019YFB2205402) and Fundamental Research Funds for the Central Universities (020414380179 and 020414380171). F.M. acknowledges the support from the AIQ foundation and experimental assistance from Q. Liu, X. Tan and Z. Wu.

Author information

Affiliations

Authors

Contributions

F.M., S.-J.L. and C.W. conceived the idea and designed the experiments. F.M. and S.-J.L. supervised the whole project. C.W. performed all experiments. C.W. and S.-J.L. analysed the experimental data. C.-Y.W. and C.P. provided assistance during the experiment design. Z.-Z.Y. assisted in the device fabrication and circuit assembly. X.S. and W.W. contributed to circuit measurement. Y.G., Z.Z. and C.Z. contributed to the MIMO model. C.W. and Y.Z. carried out the simulation of the circuit models. C.W., S.-J.L. and F.M. co-wrote the manuscript.

Corresponding author

Correspondence to Feng Miao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Nanotechnology thanks Yang Chai, Suhas Kumar and Abu Sebastian for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–18 and Tables 1–3.

Supplementary Video 1

Sixteen letter images are recognized in parallel by physically interconnecting two nanoscale crossbar arrays: one crossbar stores the data, while the other serves as an artificial neural network for the inference task. Because signal modulation is accomplished simultaneously with the analogue computing, the recognition results are transmitted directly to a wireless terminal (for example, a cell phone).

Rights and permissions

Reprints and Permissions

About this article

Cite this article

Wang, C., Liang, SJ., Wang, CY. et al. Scalable massively parallel computing using continuous-time data representation in nanoscale crossbar array. Nat. Nanotechnol. (2021). https://doi.org/10.1038/s41565-021-00943-y
