Machine learning, the implementation of computational tasks that involve learning from experience, is inevitably inspired by the working principles of biological brains. However, the von Neumann architecture characteristic of conventional computers is intrinsically different from the architecture of the brain. Beyond macroscopic differences, such as the largely digital nature of its signal processing as opposed to the brain's largely analog approach, the von Neumann architecture suffers from an inefficient physical separation between the memory and the central processing unit.

The importance of machine-learning algorithms is hard to overstate: researchers in high-energy physics and astronomy already exploit them routinely to extract extremely weak signals buried in noise, the recent detection of gravitational waves by the LIGO Scientific Collaboration being just the latest, groundbreaking example (R. Biswas et al., Phys. Rev. D 88, 062003; 2013). If machine learning is to become even more efficient and effective, however, a drastic paradigm shift is needed in the architecture of the underlying hardware. This motivates the recent efforts towards so-called neuromorphic computation: the implementation of learning algorithms in hardware architectures that mimic the structure of the brain, built from spiking artificial neurons and synapses.

Remarkable progress in neuromorphic computation has been made with complementary metal-oxide–semiconductor (CMOS) technology, both at the level of single devices and of hardware architectures. In terms of scalability and energy efficiency, however, CMOS technology falls far short of what neuromorphic computation demands. To emulate nature, nanotechnology and materials science must offer new concepts, a clear goal of the National Nanotechnology Initiative (M. A. Meador, Nature Nanotech. 11, 401–402; 2016). In this direction, the recent discovery of memristive behaviour (J. Joshua Yang et al., Nature Nanotech. 8, 13–24; 2013), a resistance that depends on the history of previously applied voltages and currents and is therefore naturally suited to implementing learning at the single-device level, has stimulated intensive basic research into candidate materials and nanoscale devices.
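To make the notion of memristive behaviour concrete, the following sketch simulates a toy memristor in the spirit of the linear ionic-drift model of Strukov and colleagues (Nature 453, 80–83; 2008); the parameter values and function names are illustrative assumptions, not data from any device discussed here.

```python
# Toy memristor in the spirit of the linear ionic-drift model
# (Strukov et al., Nature 453, 80-83; 2008). All parameter values
# below are illustrative assumptions, not data from any cited device.
import math

R_ON, R_OFF = 100.0, 16000.0  # resistance bounds (ohms)
D = 10e-9                     # device thickness (m)
MU_V = 1e-14                  # ion mobility (m^2 V^-1 s^-1)

def simulate(v_amp=1.0, freq=1.0, steps=10000, t_end=2.0):
    """Drive the device with a sinusoidal voltage; return (t, v, i, r) samples.

    The internal state w (0..1) is the normalized width of the doped
    region; the resistance interpolates between R_ON and R_OFF as w
    evolves, so the device 'remembers' the charge that has flowed
    through it -- the essence of memristive behaviour.
    """
    dt = t_end / steps
    w = 0.1  # initial state
    samples = []
    for k in range(steps):
        t = k * dt
        v = v_amp * math.sin(2 * math.pi * freq * t)
        r = R_ON * w + R_OFF * (1.0 - w)  # state-dependent resistance
        i = v / r
        # Linear drift: the state moves in proportion to the current,
        # that is, to the charge passed through the device.
        w = min(max(w + MU_V * R_ON / D**2 * i * dt, 0.0), 1.0)
        samples.append((t, v, i, r))
    return samples

if __name__ == "__main__":
    trace = simulate()
    # t = 0.1 s and t = 0.4 s see the same instantaneous voltage, yet
    # the resistance differs: the history-dependent, pinched-hysteresis
    # signature of a memristor.
    for k in (500, 2000):
        t, v, _, r = trace[k]
        print(f"t={t:.2f} s  v={v:+.3f} V  r={r:.0f} ohm")
```

Because the resistance at any instant encodes the integrated history of the current, arrays of such devices can in principle store and update synaptic weights in place, which is precisely what makes them attractive for learning hardware.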

In this issue, on page 693, Tomas Tuma and colleagues report a single nanoscale memristive device based on phase-change materials that mimics both the integrate-and-fire functionality of the neuronal membrane and its intrinsically stochastic dynamics on the nanosecond timescale, all at remarkably low energy consumption (a minimal software caricature of this behaviour is sketched below). In spite of its current limitations, the prospect of scaling such devices brings us a step closer to a deep paradigm shift in computation.
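For readers who prefer a concrete picture, the sketch below caricatures in software the two ingredients highlighted above: integrate-and-fire dynamics and an intrinsically stochastic firing threshold. It is a generic model with illustrative parameters, not a description of the authors' phase-change device.

```python
# Stochastic leaky integrate-and-fire neuron: the membrane potential
# integrates its input and leaks over time, and the neuron fires when
# the potential crosses a *noisy* threshold. A software caricature of
# the functionality described above, not the authors' device model;
# all parameters are illustrative.
import random

def spike_times(drive, threshold=1.0, jitter=0.1, leak=0.05, dt=1.0, seed=0):
    """Return the spike times produced by a list of input-current samples."""
    rng = random.Random(seed)
    v = 0.0
    spikes = []
    for step, i_in in enumerate(drive):
        v += (i_in - leak * v) * dt  # integrate the input; leak charge
        # Intrinsic stochasticity: the effective threshold fluctuates,
        # so identical inputs yield different spike trains.
        if v >= threshold + rng.gauss(0.0, jitter):
            spikes.append(step * dt)
            v = 0.0  # reset the 'membrane' after firing
    return spikes

if __name__ == "__main__":
    drive = [0.12] * 200  # constant input current
    for trial in range(3):
        print(f"trial {trial}: first spikes at", spike_times(drive, seed=trial)[:5])
```

Running the same constant drive with different seeds yields different spike trains; in the device of Tuma and colleagues an analogous trial-to-trial variability arises from the physics of the phase-change process itself rather than from a pseudorandom generator.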