Inspired by the human brain, neuromorphic computing technologies have made important breakthroughs in recent years as alternatives that overcome the power and latency shortfalls of traditional digital computing. An interdisciplinary approach is being taken to address the challenge of creating more efficient and intelligent computing systems that can perform diverse tasks, to design hardware of increasing complexity from the single-device to the system-architecture level, and to develop new theories and brain-inspired algorithms for future computing.
Edge and High-Performance Computing, Bio-Signal Processing and Brain-Computer Interface
We welcome submissions of primary research that fall into any of the above-mentioned categories. All submissions will be subject to the same peer review process and editorial standards as regular Nature Communications articles.
Dmitri Strukov (an electrical engineer, University of California at Santa Barbara), Giacomo Indiveri (an electrical engineer, University of Zurich), Julie Grollier (a materials physicist, Unite Mixte de Physique CNRS) and Stefano Fusi (a neuroscientist, Columbia University) talked to Nature Communications about the opportunities and challenges in developing brain-inspired computing technologies, namely neuromorphic computing, and advocated for effective collaboration across research disciplines to support this emerging community.
Lockdowns due to the pandemic in the last two years forced a critical number of chip-making facilities across the world to shut down, giving rise to a global chip shortage. Prof. Meng-Fan (Marvin) Chang (National Tsing Hua University, TSMC—Taiwan), Prof. Huaqiang Wu (Tsinghua University—China), Dr. Elisa Vianello (CEA-Leti—France), Dr. Sang Joon Kim (Samsung Electronics—South Korea) and Dr. Mirko Prezioso (Mentium Techn.—US) talked to Nature Communications to better understand whether and to what extent this crisis has impacted the development of in-memory/neuromorphic chips, an emerging technology for future computing.
A grand challenge in robotics is realising intelligent agents capable of autonomous interaction with the environment. In this Perspective, the authors discuss the potential, challenges and future direction of research aimed at demonstrating embodied intelligent robotics via neuromorphic technology.
Among existing machine learning frameworks, reservoir computing demonstrates fast and low-cost training and is suitable for implementation in various physical systems. This Comment reports on how aspects of reservoir computing can be applied to classical forecasting methods to accelerate the learning process, and highlights a new approach that makes the hardware implementation of traditional machine learning algorithms practicable in electronic and photonic systems.
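For readers unfamiliar with the framework, the sketch below illustrates the basic reservoir computing recipe in its echo state network form: a fixed random recurrent network is driven by the input, and only a linear readout is trained. It is a minimal, generic illustration; the network sizes, spectral radius, ridge parameter and sine-wave task are arbitrary choices and are not taken from the Comment above.

```python
# Minimal echo state network (reservoir computing) sketch.
# Illustrative only: all sizes, scalings and the toy task are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep spectral radius below 1 (echo-state property)

def run_reservoir(u):
    """Drive the fixed random reservoir with input sequence u and return its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Only the linear readout is trained (ridge regression) -- this is the fast, low-cost step.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```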
Image reconstruction algorithms raise critical challenges in massive data processing for medical diagnosis. Here, the authors propose a solution to significantly accelerate medical image reconstruction on memristor arrays, showing 79× faster speed and 153× higher energy efficiency than a state-of-the-art graphics processing unit.
Designing efficient photonic neuromorphic systems remains a challenge. Here, the authors develop an in-sensor reservoir computing system for multi-tasked pattern classification based on a light-responsive semiconducting polymer (p-NDI) with efficient exciton dissociation, charge-trapping capability, and through-space charge-transport characteristics.
Developing an artificial olfactory system that can mimic the biological functions remains a challenge. Here, the authors develop an artificial chemosensory synapse based on a flexible organic electrochemical transistor gated by the potential generated by the interaction of gas molecules with ions in a chemoreceptive ionogel.
Designing machine learning hardware on flexible substrates is promising for several applications. Here, the authors propose an integrated smart system built with low-cost flexible electronic components for classifying human malodour, and demonstrate that the proposed system scores malodour as well as expert human assessors do.
Information-based search strategies are relevant for learning the dynamics of interacting agents and usually require predefined data. The authors propose a method to collect data for learning a predictive sensor model without requiring domain knowledge, human input, or previously existing data.
Wearable sensors with edge computing are desired for human motion monitoring. Here, the authors demonstrate a topographic design for wearable MXene sensor modules with wireless streaming or in-sensor computing models for avatar reconstruction.
Designing wearable invasive neural electrical stimulation systems remains a challenge. Here, researchers provide an effective technology platform for eliminating neural stimulus inertia using bionic electronic modulation, a significant step forward for the long-lasting treatment of nervous system diseases.
Designing efficient bio-inspired vision systems remains a challenge. Here, the authors report a bio-inspired striate visual cortex with binocular and orientation-selective receptive fields based on self-powered memristors to enable machine vision with brisk edge and corner detection in future applications.
Designing an efficient platform that enables verbal communication without vocalization remains a challenge. Here, the authors propose a silent speech interface that combines a deep learning algorithm with strain sensors attached near the subject's mouth, able to collect 100 words and classify them with high accuracy.
With advances in robotic technology, the complexity of robot control has been increasing owing to the fundamental von Neumann bottleneck. Here, we demonstrate coordinated movement by a fully parallel-processable synaptic array with reduced control complexity.
The adoption of biosimilar photonic synapses to realize analog signal transmission is significant for artificial illuminance-modulation responses. Here, the authors report a biomimetic ocular prosthesis system based on quantum-dot-embedded photonic synapses with improved depression properties through mid-gap traps.
Tactile sensors in human-machine interaction systems can provide precise input signals and the necessary feedback between humans and machines. Here, the authors developed a black phosphorus-based tactile sensor array system that converts touch into audio feedback.
Designing efficient brain-inspired electronics remains a challenge. Here, Liu et al. develop a flexible perovskite-based artificial synapse with low energy consumption and fast response frequency and realize an artificial neuromuscular system with muscular-fatigue warning.
Neuromorphic computing memristors are attractive for constructing low-power-consumption electronic textiles. Here, the authors report an ultralow-power textile memristor network of Ag/MoS2/HfAlOx/carbon nanotube with reconfigurable characteristics and a firing energy consumption of 1.9 fJ/spike.
Designing efficient sensing-memory-computing systems remains a challenge. Here, the authors propose a self-powered vertical tribo-transistor based on MXenes to implement multi-sensing-memory-computing functions and multisensory integration.
While great progress has been made in object recognition, implementations are typically based on conventional electronic hardware. Here the authors introduce a concept of neuro-metamaterials that enables dynamic, entirely optical object recognition and mirage.
Real-world object localization requires a low-latency, power-efficient computing system. Here, Moro et al. demonstrate a neuromorphic in-memory event-driven system, inspired by the barn owl's neuroanatomy, which is orders of magnitude more energy efficient than microcontrollers.
The scalability of neuromorphic devices depends on dispensing with capacitors and additional circuits. Here Liu et al. report an artificial neuron based on the polarization and depolarization of an anti-ferroelectric film, avoiding additional elements and reaching an energy consumption of 37 fJ per spike.
Traditional learning procedures for artificial intelligence rely on digital methods not suitable for physical hardware. Here, Nakajima et al. demonstrate gradient-free physical deep learning by augmenting a biologically inspired algorithm, accelerating computation on optoelectronic hardware.
The biological plausibility of backpropagation and its relationship with synaptic plasticity remain open questions. The authors propose a meta-learning approach to discover interpretable plasticity rules to train neural networks under biological constraints. The meta-learned rules boost the learning efficiency via bio-inspired synaptic plasticity.
Biologically inspired spiking neural networks are highly promising, but remain simplified, omitting relevant biological details. The authors introduce here theoretical and numerical frameworks for incorporating dendritic features into spiking neural networks to improve their flexibility and performance.
Muscle electrophysiology is a promising tool for human-machine approaches in medicine and beyond clinical applications. The authors propose here a model that simulates the electric signals produced during human movements and apply these data to train deep learning algorithms.
Hybrid neural networks combine advantages of spiking and artificial neural networks in the context of computing and biological motivation. The authors propose a design framework with hybrid units for improved flexibility and efficiency of hybrid neural networks, and modulation of hybrid information flows.
Reservoir computing has demonstrated high-level performance; however, efficient hardware implementations demand an architecture with minimal system complexity. The authors propose a rotating-neuron-based architecture for physically implementing an all-analog, resource-efficient reservoir computing system.
Artificial neural networks are known to perform well on recently learned tasks while forgetting previously learned ones. The authors propose an unsupervised sleep replay algorithm to recover the synaptic connectivity of old tasks that may have been damaged after training on new ones.
Tasks involving continual learning and adaptation to real-time scenarios remain challenging for artificial neural networks, in contrast to the real brain. The authors propose here a brain-inspired optimizer based on mechanisms of synaptic integration and strength regulation for improved performance of both artificial and spiking neural networks.
Based on fundamental thermodynamics, traditional electronic computers, which operate serially, require more energy per computation the faster they operate. Here, the authors show that the energy cost per operation of a parallel computer can be kept very small.
Brain-inspired neural generative models can be designed to learn complex probability distributions from data. Here the authors propose a neural generative computational framework, inspired by the theory of predictive processing in the brain, that facilitates parallel computing for complex tasks.
Self-organizing maps are unsupervised learning tools for data mining in big-data problems. The authors experimentally demonstrate a memristor-based self-organizing map that is more efficient in computing speed and energy consumption for data clustering, image processing and solving optimization problems.
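As a point of reference, the sketch below shows the classic Kohonen update that a software self-organizing map performs: find the best-matching node for each input, then pull that node and its grid neighbours toward the input. It is a plain, generic illustration with arbitrary sizes and learning parameters, not the memristor-based implementation reported in the article.

```python
# Minimal 1-D Kohonen self-organizing map (software sketch, illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)
data = rng.random((1000, 3))          # toy dataset: random RGB-like vectors
n_nodes = 16
weights = rng.random((n_nodes, 3))    # one weight vector per map node

lr, sigma = 0.5, 3.0
for epoch in range(20):
    for x in data:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
        dist = np.abs(np.arange(n_nodes) - bmu)                # grid distance to the BMU
        h = np.exp(-dist**2 / (2 * sigma**2))                  # neighbourhood function
        weights += lr * h[:, None] * (x - weights)             # pull neighbours toward x
    lr *= 0.9                                                  # decay learning rate
    sigma = max(0.5, sigma * 0.9)                              # shrink neighbourhood

print(weights.round(2))
```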
Deep learning techniques usually require a large quantity of training data and can be challenging to apply to scarce datasets. The authors propose a framework that combines contrastive and transfer learning and reduces the data requirements for training while maintaining prediction accuracy.
The dynamics of neural circuits underlying brain functions such as sensory processing and decision making can be characterized by probabilistic representations and inference. The authors elaborate on the role of spatiotemporal neural dynamics in the more efficient performance of probabilistic computations.
Better understanding of a trade-off between the speed and accuracy of decision-making is relevant for mapping biological intelligence to machines. The authors introduce a brain-inspired learning algorithm to uncover dependencies in individual fMRI networks with features of neural activity and predict inter-individual differences in decision-making.
Sensing and processing UV light is essential for advanced artificial visual perception systems. Here, the authors report a controllable UV-ultrasensitive neuromorphic vision sensor using organic phototransistors that integrates sensing, memory and processing functions, and perform static image and dynamic movie recognition.
Dynamic machine vision requires recognizing the past and predicting the future of moving objects. Here, the authors demonstrate retinomorphic photomemristor networks with inherent dynamic memory for accurate motion recognition and prediction.
Designing full-color spherical artificial eyes remains a challenge. Here, Long et al. report a bionic eye in which each pixel on the hemispherical retina can recognize different colors based on a unique bidirectional photoresponse, with optical adaptivity and neuromorphic preprocessing ability.
Designing an infrared machine vision system that can efficiently perceive, convert, and process a massive amount of data remains a challenge. Here, the authors present a retina-inspired 2D optoelectronic device based on a van der Waals heterostructure that can perform data perception and spike encoding simultaneously for night vision, sensing, spectroscopy, and free-space communications.
A big challenge for artificial intelligence is to gain the ability to learn from experience, as biological systems do. Here Bianchi et al. propose a hardware neural network based on resistive-switching synaptic arrays that dynamically adapts to the environment for autonomous exploration.
Designing scaled electronic devices for neuromorphic applications remains a challenge. Here, Zhang et al. develop an artificial molecular synapse based on a self-assembled peptide monolayer whose conductance can be dynamically modulated and used for waveform recognition.
Hardware-based neural networks can provide a significant breakthrough in artificial intelligence. Here, the authors demonstrate an integrated 3-dimensional ferroelectric array with a layer-by-layer computation for area-efficient neural networks.
Designing bio-inspired artificial neurons within a single device is challenging. Here, the authors demonstrate a spintronic neuron with leaky-integrate-and-fire and self-reset characteristics, pointing to a new trajectory toward the holistic implementation of all-spin neuromorphic computing hardware.
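The leaky-integrate-and-fire behaviour that such single-device neurons emulate can be summarized in a few lines of code. The sketch below is a generic software LIF model with an explicit reset; the time constant, threshold and noisy input drive are illustrative values, not parameters of the spintronic device.

```python
# Generic leaky-integrate-and-fire (LIF) neuron with self-reset (illustrative constants only).
import numpy as np

dt, tau, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0   # time step, leak time constant, threshold, reset
T = 0.5
steps = int(T / dt)
rng = np.random.default_rng(2)
current = 1.2 + 0.3 * rng.standard_normal(steps)  # noisy, roughly constant input drive

v, spikes = 0.0, []
for k in range(steps):
    v += dt / tau * (-v + current[k])   # leaky integration of the input
    if v >= v_th:                       # threshold crossing -> emit a spike
        spikes.append(k * dt)
        v = v_reset                     # self-reset of the membrane potential

print(f"{len(spikes)} spikes in {T} s, mean rate {len(spikes) / T:.1f} Hz")
```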
Inspired by the multisensory cue integration in macaque’s brain for spatial perception, the authors develop a neuromorphic motion-cognition nerve that achieves cross-modal perceptual enhancement for robotics and wearable applications.
Analog-digital hybrid computing based on SnS2 memtransistors is demonstrated for low-power sensor fusion in drones, where a drone with hybrid computing performs sensor fusion with higher energy efficiency than one with only a digital processor.
A highly efficient hardware element capable of sensing and encoding multiple physical signals is still lacking. Here, the authors report a spike-based neuromorphic perception system consisting of tunable and highly uniform artificial sensory neurons based on epitaxial VO2 capable of hand gesture classification.
Designing energy-efficient computing solutions for the implementation of AI algorithms in edge devices remains a challenge. Yang et al. propose a decentralized brain-inspired computing method enabling multiple edge devices to collaboratively train a global model without a fixed central coordinator.
Designing biocompatible and flexible electronic devices for neuromorphic applications remains a challenge. Here, Kireev et al. propose graphene-based artificial synaptic transistors with low-energy switching, long-term potentiation, and metaplasticity for future bio-interfaced neural networks.
Designing an efficient multi-agent hardware system to solve large-scale computational problems through high-parallelism processing with nonlinear interactions remains a challenge. Here, the authors demonstrate that a multi-agent hardware system deploying distributed Ag nanoclusters as physical agents enables parallel, complex computing.
Layered heterostructures are promising photosensitive materials for advanced optoelectronics. Here, the authors introduce an interfacial coassembly method to construct a large-scale perylene/graphene oxide (GO) heterobilayer for broadband photoreception and efficient neuromorphics.
Designing in-sensor computing systems remains a challenge. Here, the authors demonstrate artificial optical neurons based on the in-sensor computing architecture that fuses sensory and computing nodes into a single platform capable of reducing data transfer time and energy for encoding and classification.
Designing a computing scheme to solve complex tasks as the big data field proliferates remains a challenge. Here, the authors present probabilistic bit generation hardware built using the random nature of CuxTe1−x/HfO2/Pt memristors, capable of performing logic gates in invertible mode and expandable to complex logic circuits.
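Conceptually, a probabilistic bit is a binary output whose probability of being 1 is set by an analogue control signal; stochastic memristor switching supplies this randomness physically. The sketch below only emulates that behaviour in software with a sigmoid and a pseudo-random number generator; it is purely illustrative and not the circuit reported above.

```python
# Software emulation of a probabilistic bit (p-bit): P(output = 1) = sigmoid(drive).
import numpy as np

rng = np.random.default_rng(3)

def p_bit(drive, n_samples=10000):
    """Sample a stream of 0/1 outputs whose probability of being 1 follows a sigmoid of the drive."""
    p = 1.0 / (1.0 + np.exp(-drive))
    return (rng.random(n_samples) < p).astype(int)

# Sweeping the drive shifts the output statistics from mostly 0 to mostly 1.
for drive in (-2.0, 0.0, 2.0):
    samples = p_bit(drive)
    print(f"drive = {drive:+.1f}  estimated P(1) = {samples.mean():.3f}")
```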
A memory-augmented neural network for lifelong on-device learning is bottlenecked by the limited bandwidth of conventional hardware. Here, the authors demonstrate its efficient in-memristor realization with close-to-software accuracy, supported by hashing and similarity search in crossbars.
Designing efficient Bayesian neural networks remains a challenge. Here, the authors use the cycle-to-cycle variation in the programming of 2D memtransistors to realize Gaussian random number generator-based synapses, and combine them with a complementary 2D memtransistor-based tanh function to implement a Bayesian neural network.
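To illustrate where a hardware Gaussian random number generator fits in, the toy example below samples network weights from Gaussian distributions and averages the resulting tanh outputs to obtain a predictive mean and an uncertainty estimate, as a Bayesian neural network does at inference time. The layer shapes, weight statistics and input are illustrative assumptions, not the memtransistor implementation above.

```python
# Toy Bayesian-style readout with Gaussian-sampled weights (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(4)
x = np.array([0.2, -1.0, 0.5])                 # one toy input
w_mean = 0.5 * rng.standard_normal((3, 2))     # weight means (random here, learned in practice)
w_std = 0.1 * np.ones((3, 2))                  # weight uncertainties

def forward(w):
    return np.tanh(x @ w)                      # single tanh layer, echoing the blurb above

# Monte Carlo over weight samples yields a predictive mean and an uncertainty estimate.
preds = np.array([forward(w_mean + w_std * rng.standard_normal(w_mean.shape))
                  for _ in range(1000)])
print("predictive mean:", preds.mean(axis=0).round(3))
print("predictive std :", preds.std(axis=0).round(3))
```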
The separation of sensor, memory, and processor in a recognition system deteriorates the latency of decision-making and increases the overall computing power. Here, Zhang et al. develop a photoelectronic reservoir computing system, consisting of DUV photo-synapses and a memristor array, to detect and recognize the latent fingerprint with in-sensor and parallel in-memory computing.
Magnetic skyrmions, owing to their strong nonlinearity and multiscale dynamics, are promising for implementing reservoir computing. Here, the authors experimentally demonstrate skyrmion-based, spatially multiplexed reservoir computing able to perform Boolean logic operations, using the thermal and current-driven dynamics of spin structures.
Retrieving the pupil phase of an optical beam path is a central problem for imaging systems across scales. The authors use diffractive neural networks to directly extract pupil phase information with a single, compact optoelectronic device.
Existing memristors cannot be reconfigured to meet the diverse switching requirements of various computing frameworks, limiting their universality. Here, the authors present a nanocrystal memristor that can be reconfigured on demand to address these limitations.
The integration of artificial neuromorphic devices with biological systems plays a fundamental role for future brain-machine interfaces, prosthetics, and intelligent soft robotics. Harikesh et al. demonstrate all-printed organic electrochemical neurons on a Venus flytrap, which is controlled to open and close.
Synaptic plasticity and neuronal intrinsic plasticity are both involved in the learning process of hardware artificial neural networks. Here, Lee et al. integrate a threshold switch and a phase change memory in a single device, which emulates biological synaptic and intrinsic plasticity simultaneously.
Neuromorphic computing requires the realization of high-density and reliable random-access memories. Here, Thean et al. demonstrate wafer-scale integration of solution-processed 2D MoS2 memristor arrays which show long endurance, long memory retention, low device variations, and high on/off ratio.
Designing energy-efficient, uniform and reliable memristive devices for neuromorphic computing remains a challenge. By leveraging the self-rectifying behavior of titanium dioxide with a graded oxygen concentration, Choi et al. develop a transistor-free 1R crossbar array with good uniformity and high yield.
Device-level complexity represents a major shortcoming for the hardware realization of analogue memory-based deep neural networks. Mackin et al. report a generalized computational framework, translating software-trained weights into analogue hardware weights, to minimise inference accuracy degradation.
Conventional filamentary memristors are limited in their dynamics by the high electric-field dependence of the conductive filament. Here, Jeong et al. present a method that creates a cluster-type memristor, enabling a large conductance range and long data retention.
Silicon is an abundant element on earth and is perfectly compatible with the well-established CMOS processing industry. Here, Sun et al. demonstrate multifunctional neuromorphic devices based on silicon nanosheet stacks, bringing silicon back as a potential material for neuromorphic devices.
Intelligent materials change their properties under external stimuli, integrating functionalities at the matter level. Here, Guo et al. report an artificial vision system based on the memory effect produced by sliding ferroelectricity in multiwalled tungsten disulfide nanotubes.
The challenge of high-speed, high-accuracy coherent photonic neurons for deep learning applications lies in solving noise-related issues. Here, Mourgias-Alexandris et al. address this problem by introducing a noise-resilient hardware architecture and a deep learning training platform.
One gap between neuro-inspired computing and its applications lies in the intrinsic variability of the devices. Here, Payvand et al. suggest a technologically plausible co-design of the hardware architecture that takes into account and exploits the physics of memristors.
Ising machines are accelerators for computing difficult optimization problems. In this work, Böhm et al. demonstrate a method that extends their use to statistical sampling and machine learning, orders of magnitude faster than digital computers.
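The statistical-sampling workload that such hardware accelerates can be stated compactly: draw spin configurations of an Ising model according to its Boltzmann distribution. The sketch below does this in software with a plain Metropolis sampler on a small random coupling matrix; it is a reference point for the task, not the Ising machine demonstrated in the article, and the couplings and temperature are arbitrary.

```python
# Minimal Metropolis sampler for a small Ising model (software reference, arbitrary parameters).
import numpy as np

rng = np.random.default_rng(5)
n = 8
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                            # symmetric couplings
np.fill_diagonal(J, 0.0)                     # no self-coupling
beta = 1.0                                   # inverse temperature
s = rng.choice([-1, 1], size=n)              # random initial spin configuration

def energy(s):
    return -0.5 * s @ J @ s

samples = []
for step in range(20000):
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)               # energy change from flipping spin i
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] *= -1                           # accept the flip (Metropolis rule)
    if step % 10 == 0:
        samples.append(energy(s))

print("mean sampled energy:", round(float(np.mean(samples[len(samples) // 2:])), 3))
```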
Large-scale silicon-based integrated artificial neural networks lack silicon-integrated optical neurons. Here, Yu et al. report a self-monitored all-optical neural network enabled by nonlinear germanium-silicon photodiodes, making the photonic neural network more versatile and compact.
Developing molecular electronics is challenged by integrating fragile organic molecules into modern micro/nanoelectronics based on inorganic semiconductors. Li et al. apply rolled-up nanotechnology to assemble on-chip molecular devices, which can be switched between photodiodes and volatile memristors.
Bioinspired neuromorphic vision components are highly desired for the emerging in-sensor computing technology. Here, Ge et al. develop an array of optoelectronic synapses capable of memorizing and processing ultraviolet images facilitated by photo-induced non-volatile phase transition in VO2 films.
Some types of machine learning rely on the interaction between multiple signals, which requires new devices for efficient implementation. Here, Sarwat et al. demonstrate a memristor that is both optically and electronically active, enabling computational models such as three-factor learning.
Spin-torque nano-oscillators have sparked interest for their potential in neuromorphic computing; however, concrete demonstrations are limited. Here, Romera et al. show how spin-torque nano-oscillators can mutually synchronize and recognize temporal patterns, much like neurons, illustrating their potential for neuromorphic computing.
The conventional von Neumann computing architecture is ill suited to data-intensive tasks, as data must be repeatedly moved between the separate processing and memory units. Here, Seo et al. propose a CMOS-compatible, highly linear gate-injection field-effect transistor in which data can be both stored and processed.
Selective attention is an efficient processing strategy to allocate computational resources to pivotal optical information. Here, the authors propose bionic vision hardware to emulate this behavior, showing potential for image classification.
Computational properties of neuronal networks have been applied to computing systems using simplified models comprising repeated connected nodes. Here the authors create layered assemblies of genetically encoded devices that perform non-binary logic computation and signal processing using combinatorial promoters and feedback regulation.
Designing a fully memristive circuit for different algorithms remains a challenge. Here, the authors propose a recirculated logic operation scheme using memristive hardware and 2D transistors for cellular automata, supporting multiple algorithms with a 79-fold cost reduction compared to an FPGA.
Multimodal cognitive computing is an important research topic in the field of AI. Here, the authors propose an efficient sensory memory processing system that can process sensory information and generate synapse-like, multiwavelength light-emitting output for efficient multimodal information recognition.
Designing efficient photonic neuromorphic systems remains a challenge. Here, the authors develop a new class of memristor sensitive to its dual electro-optical history, obtained by exploiting electrochemical, photovoltaic and photo-assisted oxygen ion motion effects at a high-temperature superconductor/semiconductor interface.
Designing efficient neuromorphic systems remains a challenge. Here, the authors develop a system based on a multi-terminal floating-gate memristor that mimics the temporal and spatial summation of multi-neuron connections using leaky-integrate-and-fire functionality, capable of high learning accuracy on the unlabeled MNIST handwritten digit dataset.