To circumvent the von Neumann bottleneck, substantial progress has been made towards in-memory computing with synaptic devices. However, compact nanodevices implementing non-linear activation functions are required for efficient full-hardware implementation of deep neural networks. Here, we present an energy-efficient and compact Mott activation neuron based on vanadium dioxide and its successful integration with a conductive bridge random access memory (CBRAM) crossbar array in hardware. The Mott activation neuron implements the rectified linear unit function in the analogue domain. The neuron devices consume substantially less energy and occupy an area two orders of magnitude smaller than analogue complementary metal–oxide–semiconductor (CMOS) implementations. The LeNet-5 network with Mott activation neurons achieves 98.38% accuracy on the MNIST dataset, close to the ideal software accuracy. We perform large-scale image edge detection using the Mott activation neurons integrated with a CBRAM crossbar array. Our findings provide a solution towards large-scale, highly parallel and energy-efficient in-memory computing systems for neural networks.
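The computation described above pairs a crossbar array, which performs an analogue matrix–vector multiplication via Ohm's and Kirchhoff's laws, with Mott neurons that apply a rectified linear unit (ReLU) to the resulting currents. A minimal numerical sketch of this signal chain is shown below; the conductance matrix `G`, input voltage vector `v`, and the function names are illustrative assumptions, not the authors' actual hardware parameters:

```python
import numpy as np

def crossbar_mvm(G, v):
    """Analogue matrix-vector multiply: each column current is
    the sum of conductance * voltage along that column
    (Ohm's law per cell, Kirchhoff's current law per column)."""
    return G.T @ v

def mott_relu(i):
    """Idealized Mott activation neuron: passes positive input
    currents and suppresses negative ones, i.e. ReLU."""
    return np.maximum(i, 0.0)

# Hypothetical conductances (3 rows x 2 columns) and input voltages
G = np.array([[0.2, 0.5],
              [0.1, 0.3],
              [0.4, 0.0]])
v = np.array([1.0, -0.5, 0.25])

out = mott_relu(crossbar_mvm(G, v))  # -> array([0.25, 0.35])
```

In a full network such as LeNet-5, each layer's weighted sum would be mapped onto crossbar conductances in this way, with a Mott neuron per output column replacing the software ReLU.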
The data that support the plots and other results of this paper are available from the corresponding author upon request.
The software codes used for this study are available from the corresponding author upon request.
This work was supported by the Office of Naval Research (N000142012405 and N00014162531), Samsung Electronics, the National Science Foundation (ECCS-1752241, ECCS-2024776 and ECCS-1734940), the National Institutes of Health (R21 EY029466, R21 EB026180 and DP2 EB030992) and a Qualcomm Fellowship. The experimental aspects of this work were supported as part of the Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) Energy Frontier Research Center (EFRC), funded by the US Department of Energy, Office of Science, Basic Energy Sciences under award #DE-SC0019273. The fabrication of the devices was performed at the San Diego Nanotechnology Infrastructure (SDNI) of the University of California San Diego, supported by the National Science Foundation (ECCS-1542148).
The authors declare no competing interests.
Peer review information Nature Nanotechnology thanks Jinfeng Kang and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Oh, S., Shi, Y., del Valle, J. et al. Energy-efficient Mott activation neuron for full-hardware implementation of neural networks. Nat. Nanotechnol. (2021). https://doi.org/10.1038/s41565-021-00874-8