Inspired by the human brain, neuromorphic computing technologies have made important breakthroughs in recent years as alternatives that overcome the power and latency shortfalls of traditional digital computing. An interdisciplinary approach is being taken to address the challenge of creating more efficient and intelligent computing systems that can perform diverse tasks, to design hardware of increasing complexity from the single-device to the system-architecture level, and to develop new theories and brain-inspired algorithms for future computing.
Edge and High-Performance Computing, Bio-Signal Processing and Brain-Computer Interface
We welcome submissions of primary research that fall into any of the above-mentioned categories. All submissions will be subject to the same peer review process and editorial standards as regular Nature Communications articles.
Designing efficient AI hardware capable of creating artificial general intelligence remains a challenge. Here, the authors present an approach for the on-demand generation of complex networks within a single memristor by harnessing device dynamics with intrinsic cycle-to-cycle variability and demonstrate the effectiveness of memristive complex network-based reservoirs.
Reconfigurable logic is desirable for high-density information processing. Here, the authors demonstrate a binary/ternary logic conversion-in-memory, which can operate in both binary and ternary logic systems to implement various types of logic gates.
While reservoir computing can process temporal information efficiently, its hardware implementation remains a challenge due to the lack of robust and energy-efficient hardware. Here, the authors develop an all-ferroelectric reservoir computing system, showing high accuracy and low power consumption in various tasks such as time-series prediction.
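The reservoir computing principle behind systems like this can be illustrated in software: a fixed, randomly connected dynamical system (standing in for the physical device, which is not modeled here) transforms an input sequence into a high-dimensional state, and only a linear readout is trained. A minimal echo-state sketch with entirely illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: a stand-in for the physical dynamical system.
N = 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo-state property)

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave: train only the linear readout.
u = np.sin(np.arange(400) * 0.1)
X = run_reservoir(u[:-1])
y = u[1:]                                                      # target: next sample
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)   # ridge regression
err = np.max(np.abs(X @ W_out - y)[100:])                      # error after washout
```

Because the reservoir itself is never trained, a physical implementation only needs a readable nonlinear response, which is what makes device dynamics such as ferroelectric switching attractive here.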
Dendritic computing is a promising approach to enhance the processing capability of artificial neural networks. Here, the authors report the development of a neurotransistor based on a vertical dual-gate electrolyte-gated transistor with short-term memory characteristics, a 30 nm channel length, a low read power of ~3.16 fW and read energy of ~30 fJ for dendritic computing.
Designing efficient in-memory-computing architectures remains a challenge. Here the authors develop a multi-level FeFET crossbar for multi-bit MAC operations encoded in activation time and accumulated current, with experimental validation at 28 nm achieving 96.6% accuracy and a high energy efficiency of 885 TOPS/W.
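The core idea of a crossbar MAC is that each column current is the dot product of the input voltages with that column's conductances, via Kirchhoff's current law. A toy numerical sketch (illustrative values, not the FeFET device parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

# Conductance matrix G (siemens): 4 word lines x 3 bit lines, illustrative only.
G = rng.uniform(1e-6, 1e-4, (4, 3))
# Input voltages applied to the word lines.
V = np.array([0.1, 0.2, 0.0, 0.3])

# Each bit-line current is sum_i V[i] * G[i, j]: a full multiply-accumulate
# computed in one physical step.
I = V @ G
I_check = sum(V[i] * G[i] for i in range(4))   # explicit Kirchhoff summation
```

Multi-bit schemes like the one summarized above extend this by encoding operand bits in activation time and accumulating charge rather than instantaneous current, but the underlying analog summation is the same.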
The elementary excitations of magnets are known as magnons. Like photons, they can carry information, but unlike photons, the interactions of magnons are intrinsically non-linear, making them particularly promising for physical reservoir computing, where the non-linear response of a dynamical system is used as a computational resource. Here, Körber et al. demonstrate physical reservoir computing using the magnon eigenmodes of a permalloy disc.
Hardware architectures based on self-organized memristive networks of nano-objects have attracted growing attention. Here, nanowire connectomes are experimentally shown to translate spatially correlated short-term plasticity effects into long-lasting topological changes, thus emulating both the information encoding and the memory consolidation of the human brain.
Designing a high-density memory array to effectively manage large data volumes remains a challenge. Here, the authors introduce a stacked ferroelectric memory array composed of laterally gated ferroelectric field-effect transistor devices with high vertical scalability and efficient memory properties, making it suitable for 3D in-memory computing structures.
Combinatorial optimization problems have various important applications but are notoriously difficult to solve. Here, the authors propose a quantum-inspired algorithm and apply it to classical analog memristor hardware, demonstrating an efficient solution for intricate problems.
Designing efficient neuromorphic systems based on nanowire networks remains a challenge. Here, Zhu et al. demonstrate brain-inspired learning and memory of spatiotemporal features using nanowire networks capable of MNIST handwritten digit classification and a novel sequence memory task performed in an online manner.
Memory devices with open-loop analog programmability are highly desired for training tasks. Here, the authors develop an electrochemical memory array that can be accurately programmed without any feedback, offering unique capabilities for training.
The progress of high-performance oxide-based transistors is essential for seamlessly integrating monolithic 3-D circuits into the CMOS backend. The authors propose using atomic layer deposition for ZnO due to its compatibility with low-temperature backend integration. They also successfully integrated ZnO TFTs with HfO2 RRAM in a 1 kbit 1T1R array, showcasing RRAM switching capabilities.
Designing a monolithic 3D structure with interleaved logic and high-density memory layers has been difficult to achieve due to challenges in managing the thermal budget. Here, the authors demonstrate a 3D integration of monolayer MoS2 transistors with 3D vertical RRAMs through a low-temperature fabrication process whose 1T–nR structure shows high promise for low-power and high-density memory applications.
Spin defects in semiconductors are promising for quantum technologies, but experimental understanding of defect formation processes remains incomplete. Here the authors present a computational protocol to study the formation of spin defects at the atomic scale and apply it to the divacancy defect in SiC.
Designing highly efficient optoelectronic memory remains a challenge. Here, the authors report a novel optoelectronic memory device based on a photosensitive dielectric that is an insulator in the dark and a semiconductor under irradiation, with multilevel storage ability, low energy consumption and good compatibility.
Dense random access memory is required for building future generations of superconducting computers. Here the authors study vortex-based memory cells, demonstrate their scalability to submicron sizes and robust word and bit-line operation at zero magnetic field.
Designing efficient multistate resistive switching devices is promising for neuromorphic computing. Here, the authors demonstrate a reversible hydrogenation in WO3 thin films at room temperature with an electrically-biased scanning probe. The associated insulator to metal transition offers the opportunity to precisely control multistate conductivity at nanoscale.
Designing efficient optoelectronic synaptic devices with advanced light responsive multimodal platforms remains a challenge. Here, the authors report on an organic optoelectronic neuromorphic platform that is based on conductive polymers and light-sensitive molecules that can be used to imitate the retina including visual pathways and typical memory processes of neurons.
Image reconstruction algorithms raise critical challenges in massive data processing for medical diagnosis. Here, the authors propose a solution to significantly accelerate medical image reconstruction on memristor arrays, showing 79× faster speed and 153× higher energy efficiency than a state-of-the-art graphics processing unit.
Analog in-memory computing promises efficient DNN inference acceleration but suffers from nonidealities. Here, hardware-aware training methods are improved so that various larger DNNs of diverse topologies nevertheless achieve iso-accuracy.
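Hardware-aware training of the kind summarized above generally means exposing the network to simulated device nonidealities during training so the learned weights tolerate them at inference time. A minimal sketch of this idea on a toy linear model (noise model and values are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression task standing in for a DNN layer.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = X @ w_true

w = np.zeros(8)
lr, noise = 0.01, 0.05
for _ in range(500):
    # Inject multiplicative Gaussian noise into the weights on every
    # forward pass, mimicking analog conductance variations.
    w_noisy = w * (1 + noise * rng.normal(size=8))
    err = X @ w_noisy - y
    w -= lr * X.T @ err / len(X)   # gradient step through the noisy forward pass

# The trained weights should stay close to the target even when a fresh
# noise sample is applied, emulating deployment on imperfect hardware.
w_deployed = w * (1 + noise * rng.normal(size=8))
```

The same principle scales to deep networks, where noise (and other nonidealities such as clipping or drift) is injected per layer during backprop-based training.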
Designing efficient selector devices remains a challenge. Here, the authors propose a CuAg alloy-based selector with excellent ON/OFF ratio and thermal stability. It can effectively suppress the sneak-path current in 1S1R arrays, making it suitable for storage class memory and neuromorphic computing applications.
Sensing and processing UV light is essential for advanced artificial visual perception systems. Here, the authors report a controllable UV-ultrasensitive neuromorphic vision sensor using organic phototransistors to integrate sensing, memory and processing functions, and perform static image and dynamic movie recognition.
Dynamic machine vision requires recognizing the past and predicting the future of moving objects. Here, the authors demonstrate retinomorphic photomemristor networks with inherent dynamic memory for accurate motion recognition and prediction.
Designing an infrared machine vision system that can efficiently perceive, convert, and process massive amounts of data remains a challenge. Here, the authors present a retina-inspired 2D optoelectronic device based on a van der Waals heterostructure that can perform data perception and spike encoding simultaneously for night vision, sensing, spectroscopy, and free-space communications.
A big challenge for artificial intelligence is to gain the ability to learn from experience like biological systems. Here Bianchi et al. propose a hardware neural network based on resistive-switching synaptic arrays which dynamically adapt to the environment for autonomous exploration.
Designing scaled electronic devices for neuromorphic applications remains a challenge. Here, Zhang et al. develop an artificial molecular synapse based on self-assembled peptide molecule monolayer whose conductance can be dynamically modulated and used for waveform recognition.
Hardware-based neural networks can provide a significant breakthrough in artificial intelligence. Here, the authors demonstrate an integrated 3-dimensional ferroelectric array with a layer-by-layer computation for area-efficient neural networks.
Designing bio-inspired artificial neurons within a single device is challenging. Here, the authors demonstrate a spintronic neuron with leaky-integrate-fire and self-reset characteristics, corroborating a new trajectory toward the holistic implementation of all-spin neuromorphic computing hardware.
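The leaky-integrate-and-fire behaviour with self-reset that this device realizes physically can be written down in a few lines of software (an illustrative model, not the spintronic implementation; leak and threshold values are arbitrary):

```python
def lif(inputs, leak=0.9, threshold=1.0):
    """Leaky-integrate-and-fire neuron: returns a spike train (0/1 per step)."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration of the input current
        if v >= threshold:        # membrane potential crosses threshold:
            spikes.append(1)      # ... emit a spike ...
            v = 0.0               # ... and self-reset
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input produces periodic firing:
print(lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The appeal of single-device neurons is that the integration, thresholding, and reset in this loop all happen in the device physics rather than in a circuit.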
Inspired by the multisensory cue integration in macaque’s brain for spatial perception, the authors develop a neuromorphic motion-cognition nerve that achieves cross-modal perceptual enhancement for robotics and wearable applications.
Analog–digital hybrid computing based on SnS2 memtransistors is demonstrated for low-power sensor fusion in drones, where a drone with hybrid computing performs sensor fusion with higher energy efficiency than that with only a digital processor.
A highly efficient hardware element capable of sensing and encoding multiple physical signals is still lacking. Here, the authors report a spike-based neuromorphic perception system consisting of tunable and highly uniform artificial sensory neurons based on epitaxial VO2 capable of hand gesture classification.
Designing energy-efficient computing solutions for the implementation of AI algorithms in edge devices remains a challenge. Yang et al. propose a decentralized brain-inspired computing method enabling multiple edge devices to collaboratively train a global model without a fixed central coordinator.
Designing biocompatible and flexible electronic devices for neuromorphic applications remains a challenge. Here, Kireev et al. propose graphene-based artificial synaptic transistors with low-energy switching, long-term potentiation, and metaplasticity for future bio-interfaced neural networks.
Designing an efficient multi-agent hardware system to solve large-scale computational problems through high-parallelism processing with nonlinear interactions remains a challenge. Here, the authors demonstrate that a multi-agent hardware system deploying distributed Ag nanoclusters as physical agents enables parallel, complex computing.
Layered heterostructures are promising photosensitive materials for advanced optoelectronics. Here, the authors introduce an interfacial coassembly method to construct large-scale perylene/graphene oxide (GO) heterobilayers for broadband photoreception and efficient neuromorphics.
Designing in-sensor computing systems remains a challenge. Here, the authors demonstrate artificial optical neurons based on the in-sensor computing architecture that fuses sensory and computing nodes into a single platform capable of reducing data transfer time and energy for encoding and classification.
Designing a computing scheme to solve complex tasks as the big data field proliferates remains a challenge. Here, the authors present a probabilistic bit generation hardware built using the random nature of CuxTe1−x/HfO2/Pt memristors capable of performing logic gates with invertible mode, showing the expandability to complex logic circuits.
Memory-augmented neural networks for lifelong on-device learning are bottlenecked by limited bandwidth in conventional hardware. Here, the authors demonstrate an efficient in-memristor realization with close-to-software accuracy, supported by hashing and similarity search in crossbars.
Designing efficient Bayesian neural networks remains a challenge. Here, the authors use the cycle variation in the programming of the 2D memtransistors to achieve Gaussian random number generator-based synapses, and combine it with the complementary 2D memtransistors-based tanh function to implement a Bayesian neural network.
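In a Bayesian neural network of this kind, each synapse stores a distribution rather than a point value, and every forward pass samples the weights, so repeated inference yields a predictive distribution whose spread quantifies uncertainty. A software sketch of that sampling loop (shapes and values are illustrative assumptions, not the memtransistor circuit):

```python
import numpy as np

rng = np.random.default_rng(3)

# Each synapse holds a Gaussian (mean, std) rather than a fixed weight.
mu = rng.normal(size=(4, 2))
sigma = 0.1 * np.ones((4, 2))
x = np.array([1.0, -0.5, 0.3, 0.8])

def forward(x):
    w = rng.normal(mu, sigma)      # Gaussian-sampled synaptic weights
    return np.tanh(x @ w)          # tanh activation, as in the summary above

# Repeated stochastic forward passes give a predictive distribution.
samples = np.array([forward(x) for _ in range(200)])
pred_mean = samples.mean(axis=0)   # the prediction
pred_std = samples.std(axis=0)     # its uncertainty
```

The hardware contribution summarized above is that the Gaussian sampling in `forward` comes for free from device cycle-to-cycle variation instead of a pseudorandom generator.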
The separation of sensor, memory, and processor in a recognition system deteriorates the latency of decision-making and increases the overall computing power. Here, Zhang et al. develop a photoelectronic reservoir computing system, consisting of DUV photo-synapses and a memristor array, to detect and recognize the latent fingerprint with in-sensor and parallel in-memory computing.
Magnetic skyrmions, due to their strong nonlinearity and multiscale dynamics, are promising for implementing reservoir computing. Here, the authors experimentally demonstrate skyrmion-based spatially multiplexed reservoir computing able to perform Boolean logic operations, using thermal and current-driven dynamics of spin structures.
Retrieving the pupil phase of an optical beam path is a central problem for imaging systems across scales. The authors use diffractive neural networks to directly extract pupil phase information with a single, compact optoelectronic device.
Existing memristors cannot be reconfigured to meet the diverse switching requirements of various computing frameworks, limiting their universality. Here, the authors present a nanocrystal memristor that can be reconfigured on demand to address these limitations.
The integration of artificial neuromorphic devices with biological systems plays a fundamental role for future brain-machine interfaces, prosthetics, and intelligent soft robotics. Harikesh et al. demonstrate all-printed organic electrochemical neurons on a Venus flytrap, which is controlled to open and close.
Synaptic plasticity and neuronal intrinsic plasticity are both involved in the learning process of hardware artificial neural networks. Here, Lee et al. integrate a threshold switch and a phase change memory in a single device, which emulates biological synaptic and intrinsic plasticity simultaneously.
Neuromorphic computing requires the realization of high-density and reliable random-access memories. Here, Thean et al. demonstrate wafer-scale integration of solution-processed 2D MoS2 memristor arrays which show long endurance, long memory retention, low device variations, and high on/off ratio.
Designing energy efficient, uniform and reliable memristive devices for neuromorphic computing remains a challenge. By leveraging the self-rectifying behavior of gradual oxygen concentration of titanium dioxide, Choi et al. develop a transistor-free 1R cross-bar array with good uniformity and high yield.
Device-level complexity represents a big shortcoming for the hardware realization of analogue memory-based deep neural networks. Mackin et al. report a generalized computational framework, translating software-trained weights into analogue hardware weights, to minimise inference accuracy degradation.
Conventional filamentary memristors are limited in dynamics by the high electric-field dependence of the conductive filament. Here, Jeong et al. present a method that creates a cluster-type memristor, enabling a large conductance range and long data retention.
Silicon is an abundant element on earth and is perfectly compatible with the well-established CMOS processing industry. Here, Sun et al. demonstrate multifunctional neuromorphic devices based on silicon nanosheet stacks, bringing silicon back as a potential material for neuromorphic devices.
Intelligent materials change their properties under external stimuli, integrating functionalities at the matter level. Here, Guo et al. report an artificial vision system based on the memory effect produced by sliding ferroelectricity in multiwalled tungsten disulfide nanotubes.
The challenge of high-speed and high-accuracy coherent photonic neurons for deep learning applications lies in solving noise-related issues. Here, Mourgias-Alexandris et al. address this problem by introducing a noise-resilient hardware architecture and a deep learning training platform.
One gap between neuro-inspired computing and its applications lies in the intrinsic variability of the devices. Here, Payvand et al. suggest a technologically plausible co-design of the hardware architecture that takes into account and exploits the physics behind memristors.
Ising machines are accelerators for computing difficult optimization problems. In this work, Böhm et al. demonstrate a method that extends their use to perform statistical sampling and machine learning orders of magnitude faster than digital computers.
Large-scale silicon-based integrated artificial neural networks lack silicon-integrated optical neurons. Here, Yu et al. report a self-monitored all-optical neural network enabled by nonlinear germanium-silicon photodiodes, making the photonic neural network more versatile and compact.
Developing molecular electronics is challenged by integrating fragile organic molecules into modern micro/nanoelectronics based on inorganic semiconductors. Li et al. apply rolled-up nanotechnology to assemble on-chip molecular devices, which can be switched between photodiodes and volatile memristors.
Bioinspired neuromorphic vision components are highly desired for the emerging in-sensor computing technology. Here, Ge et al. develop an array of optoelectronic synapses capable of memorizing and processing ultraviolet images facilitated by photo-induced non-volatile phase transition in VO2 films.
Some types of machine learning rely on the interaction between multiple signals, which requires new devices for efficient implementation. Here, Sarwat et al. demonstrate a memristor that is both optically and electronically active, enabling computational models such as three-factor learning.
Spin-torque nano-oscillators have sparked interest for their potential in neuromorphic computing, however concrete demonstrations are limited. Here, Romera et al. show how spin-torque nano-oscillators can mutually synchronize and recognize temporal patterns, much like neurons, illustrating their potential for neuromorphic computing.
The conventional von Neumann computing architecture is ill suited to data-intensive tasks as data must be repeatedly moved between the separated processing and memory units. Here, Seo et al. propose a CMOS-compatible, highly linear gate-injection field-effect transistor in which data can be both stored and processed.
Selective attention is an efficient processing strategy to allocate computational resources for pivotal optical information. Here, the authors propose a bionic vision hardware to emulate the behavior, showing a potential in image classification.
Computational properties of neuronal networks have been applied to computing systems using simplified models comprising repeated connected nodes. Here the authors create layered assemblies of genetically encoded devices that perform non-binary logic computation and signal processing using combinatorial promoters and feedback regulation.
Designing a full-memristive circuit for different algorithms remains a challenge. Here, the authors propose a recirculated logic operation scheme using memristive hardware and 2D transistors for cellular automata, supporting multiple algorithms with a 79-fold cost reduction compared to FPGA.
Multimodal cognitive computing is an important research topic in the field of AI. Here, the authors propose an efficient sensory memory processing system which can process sensory information and generate synapse-like, multiwavelength light-emitting output for efficient multimodal information recognition.
Designing efficient photonic neuromorphic systems remains a challenge. Here, the authors develop a new class of memristor sensitive to its dual electro-optical history, obtained by exploiting electrochemical, photovoltaic and photo-assisted oxygen ion motion effects at a high-temperature superconductor/semiconductor interface.
Designing efficient neuromorphic systems remains a challenge. Here, the authors develop a system based on a multi-terminal floating-gate memristor that mimics the temporal and spatial summation of multi-neuron connections through leaky-integrate-and-fire functionality, achieving high learning accuracy on the unlabeled MNIST handwritten-digit dataset.
Artificial spin ices consist of small magnets arranged in a lattice. Their simplicity belies their rich behaviour; they allowed for the investigation of effective magnetic monopoles, and more recently have been suggested as promising platforms for neuromorphic computing. For this latter function, efficient readout of the artificial spin ice state is critical. In this manuscript, Hu et al. succeed in distinguishing artificial spin ice states using simple transport measurements.
Future intelligent vision systems need efficient capacitor-free spiking photoreceptors for color perception. Here, Wang et al. report a metal-oxide-based vertically integrated spiking cone photoreceptor array that transduces light into spike trains with a power consumption of less than 400 picowatts.
Arranging nanomagnets into a two-dimensional lattice provides access to a rich landscape of magnetic behaviours. Control of the interactions between the nanomagnets after fabrication is a challenge. Here, Yun et al. demonstrate all-electrical control of magnetic couplings in a two-dimensional array of nanomagnets using ionic gating.
Molecular electronics holds promise for building memristors at the nanoscale for in-memory computing. Li et al. design tailored foldamers with furan-benzene and thiophene-benzene stacking to achieve voltage-triggered quantum interference switching for potential random number generator applications.
Data-centric applications benefit from dense, low-power memory. Here the authors use a combination of chalcogenide superlattices and nanocomposites to achieve low switching voltage (0.7 V) and fast speed (40 ns) in 40-nm-scale phase-change memory.
Optoelectronic neural networks are a promising avenue in AI computing for parallelization, power efficiency, and speed. Here, the authors present a dual-neuron optical-artificial learning approach for training large-scale diffractive neural networks, achieving VGG-level performance on ImageNet in simulation with a network that is 10 times larger than existing ones.
Designing efficient 3D artificial neural network chips remains a challenge. Here, the authors report a M3D-LIME chip with monolithic three-dimensional integration of a hybrid memory architecture based on resistive random-access memory, which achieves a high classification accuracy of 96% in a one-shot learning task while exhibiting 18.3× higher energy efficiency than a GPU.
Physical reservoirs that contain intrinsic nonlinear dynamic processes could serve as next-generation dynamic computing systems. Here, Liu et al. introduce an interface-type transistor based on oxygen ion dynamics to perform reservoir computing.
Designing an efficient activation function for optical neural networks remains a challenge. Here, the authors demonstrate modulator-detector-in-one graphene/silicon heterojunction ring resonators enabling on-chip reconfigurable activation function devices with phase activation capability for optical neural networks.
Probabilistic computing has recently emerged as a promising energy-based computing system for solving non-deterministic polynomial-time-hard (NP-hard) problems. Here the authors develop a novel p-bit unit, using a NbOx volatile memristor, in which a self-clocking oscillator harnesses a noise-induced metal-insulator transition, enabling high-performance probabilistic computing.
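A p-bit is a stochastic binary unit whose probability of being +1 follows a sigmoid of its input; networks of coupled p-bits sample low-energy states of an Ising problem via Gibbs-style updates. A minimal software sketch with two ferromagnetically coupled p-bits (illustrative model, not the memristor circuit):

```python
import math
import random

random.seed(0)

def pbit(input_):
    """Stochastic unit: returns +1 with sigmoidal probability of the input."""
    return 1 if random.random() < 1 / (1 + math.exp(-2 * input_)) else -1

J, beta = 1.0, 2.0      # coupling strength and inverse temperature (arbitrary)
m = [1, -1]
agree = 0
steps = 5000
for step in range(steps):
    i = step % 2
    m[i] = pbit(beta * J * m[1 - i])   # Gibbs update of one p-bit given the other
    agree += m[0] == m[1]
frac_aligned = agree / steps           # ferromagnetic coupling: mostly aligned
```

The hardware advance summarized above replaces the `random.random()` call with intrinsic device noise, so each p-bit update costs only the energy of a metal-insulator transition.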
Bayesian networks are gaining importance in safety-critical applications. The authors conducted experiments with a memristor-based Bayesian network trained with variational inference with technological loss, achieving accurate heartbeat classification and prediction certainty.
Designing high performance organic neuromorphic devices remains a challenge. Here, Liu et al. report the development of an organic synapse based on a semicrystalline polymer PBFCL10 with device dimension of 50 nm and integration size of 1 Kb and a mixed‐signal neuromorphic hardware system based on the organic neuromatrix and FPGA controller for decision‐making tasks.
Layered thio- and seleno-phosphate ferroelectrics show promise for next-generation memory but have thermal stability issues. Using the electric field-driven phase transition in antiferroelectric CuCrP2S6, the authors introduce a robust memristor, emphasizing the potential of van der Waals antiferroelectrics in advanced neuromorphic computing.
Neural networks are powerful tools for solving complex problems, but finding the right network topology for a given task remains an open question. Here, the authors propose a bio-inspired artificial neural network hardware able to self-adapt to solve new complex tasks, by autonomously connecting nodes using electropolymerization.
Developing efficient reservoir computing hardware that combines optically excited acoustic and spin waves with high spatial density remains a challenge. In this work, the authors propose a design capable of recognizing visual shapes drawn by a laser within remarkably confined spaces, down to 10 square microns.
In-sensor and near-sensor computing are emerging as the next-generation computing paradigm, for high-density and low-power sensory processing. Here, the authors report a fully hardware-implemented artificial visual system for versatile image processing based on multimodal-multifunctional optoelectronic resistive memory devices with optical and electrical resistive switching modes.
Designing memristor-integrated passive crossbar arrays to accelerate artificial neural networks with high reliability remains a challenge. Here, the authors propose a self-rectifying resistive switching device incorporated into a crossbar array with a density of 1 kb whose operational performance is assessed in terms of defected-cell proportion, reading margin, and selection functionality.
Designing efficient high-density crossbar arrays is nowadays highly demanded for many artificial intelligence applications. Here, the authors propose a two-terminal ferroelectric fin diode non-volatile memory in which a ferroelectric capacitor and a fin-like semiconductor channel are combined to share both top and bottom electrodes, offering high performance and an easy fabrication process.
Designing efficient artificial neural network circuit architectures for optimal information routing remains a challenge. Here, the authors propose "Mosaic", the first demonstration of on-chip in-memory spike routing using memristors, optimized for the small-world graphs prevalent in mammalian brains, offering orders-of-magnitude reduction in routing events compared to current approaches.
Frequency converters for wireless internet of things applications typically require separate circuits for different functions, causing energy and performance inefficiencies. Using an epitaxially grown VO2 memristor array, Liu et al. present a frequency converter with in-situ frequency synthesis and mix functionality.
Dealing with the explosive growth of diverse image data in the era of big data poses challenges for storage. Feng et al. propose a memristor-based near-storage in-memory processing system to boost the energy and storage efficiency.
Existing neuromorphic hardware, focusing mainly on shallow-reservoir computing, is challenged in providing adequate spatial and temporal scales characteristic for effective computing. Here, Gao et al. report an ultra-short channel organic neuromorphic vertical transistor with distributed reservoir states.
Probabilistic inference hardware prevents overconfidence. Lee et al. report a Gaussian-like memory transistor using a p-n junction coupled with a separate floating gate, offering precise control of the Gaussian outputs, simplified circuit design, and low power consumption for inference computing.
Wide reservoir computing systems are an advanced architecture, but their hardware implementation remains elusive due to the lack of a 3D architectural framework. Choi et al. demonstrate such hardware made of a multilayered 3D stacked memristive crossbar array for efficient learning and forecasting.