For decades, advances in computing technology have been driven by the shrinking dimensions of silicon transistors, which, in turn, provided increases in performance and reductions in cost. Underpinning this has been a prediction made by Gordon Moore in 1965 that the number of components in an integrated circuit would approximately double every year, which he later revised to every two years. The prediction — now known as Moore’s law — became a self-fulfilling prophecy for the semiconductor industry. Moore, who together with Robert Noyce co-founded Intel, died on 24 March 2023; an obituary appears in this issue of Nature Electronics.

Photograph of the set-up used to test the sensitivity of the artificial cochlea developed by Lenk and colleagues. Credit: Reproduced under a Creative Commons licence CC BY 4.0

Today, the continued scaling of conventional silicon electronics, which is based on complementary metal–oxide–semiconductor (CMOS) technology, faces considerable challenges, and the search for alternatives remains intense. One potential option is quantum computing, which is the focus of the latest article in our series exploring key topics in the field of electronics through research that has been featured in the pages of the journal. A quantum processor could execute certain computational tasks exponentially faster than a conventional processor, and the technology is currently being pursued by researchers across academia and industry. (The obituary, for instance, is written by James Clarke, director of quantum hardware at Intel.)

A key challenge facing conventional electronics is the advance of machine learning and artificial intelligence — and the energy demands they create for computing hardware. Neuromorphic computing is another potential option that could help here. The approach, which draws inspiration from how biological nervous systems process information, is particularly advantageous when large amounts of data need to be processed in parallel with very low latency and energy cost (in autonomous vehicles, for example).

Research on neuromorphic computing typically focuses on the development of artificial neurons and synapses, such as those based on arrays of resistive memory devices, to recreate the combination of processing and memory that occurs in the brain. But in animals, important processing also occurs in the primary sensing organs, such as the retina and cochlea; most environmental stimuli are processed and screened here, at the edge of the network, with only the most salient features sent to higher processing levels.

As Carver Mead recounted in a Reverse Engineering article in the journal in 2020, it was the operation of retinas and cochleae that inspired him and his colleagues at the California Institute of Technology to develop the first neuromorphic sensors1. (Incidentally, Mead is also often credited with coining the term ‘Moore’s law’ back in the 1970s2.) In 1988, for example, Mead and Richard Lyon reported an artificial cochlea fabricated using silicon CMOS technology3. The device could mimic abilities of the ear, such as adaptive and active gain, that occur due to changes in outer-hair-cell motility.

Fast forward 35 years, and in an Article in this issue of Nature Electronics, Claudia Lenk and colleagues report an artificial cochlea based on a microelectromechanical system. The neuromorphic sensor is capable of a dynamic gain change of 44 dB, which is comparable to that of the mammalian cochlea. (See also the accompanying Research Briefing article on the work.)
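To put that figure in perspective, a 44 dB range corresponds to roughly a factor of 160 in amplitude; this is a back-of-envelope conversion, assuming the conventional 20 log10 (amplitude ratio) definition of gain in decibels:

\[
\Delta G = 20\log_{10}\!\left(\frac{A_{\max}}{A_{\min}}\right) = 44\ \mathrm{dB}
\;\;\Rightarrow\;\;
\frac{A_{\max}}{A_{\min}} = 10^{44/20} \approx 1.6\times 10^{2}.
\]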

The researchers — who are based at Technische Universität Ilmenau, University College Cork, Kiel University, and the Karlsruhe Institute of Technology — created the cochlea from a silicon beam integrated with a thermomechanical actuator and piezoresistive sensors. When the beam is deflected by sound pressure, feedback from the sensors to the actuator can autonomously shift the operating regime of the transducer. For example, at low sound pressure amplitudes the cochlea can enter a sensitive, nonlinear regime, whereas at higher pressures, it can decrease its sensitivity to improve the discrimination of signals in a noisy background. The cochleae can also dynamically alter their bandwidth when the feedback of two transducers is coupled together. This effect can be used to reduce the number of sensors needed to cover a given range of frequencies, making the system more energy efficient.
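The general principle at work, level-dependent feedback that keeps a resonant sensor highly sensitive to weak inputs while compressing its response to strong ones, can be illustrated with a toy model. The sketch below is not the authors' controller or device physics; it simulates a generic damped resonator with positive velocity feedback, where the feedback gain is adapted (within a stability limit) so that the output envelope tracks a target level. All parameter values, and the adaptation rule itself, are hypothetical.

```python
# Toy model of level-adaptive feedback in a resonant sensor (illustrative only;
# parameters and the adaptation rule are hypothetical, not taken from the Article).
import numpy as np

fs = 100_000.0               # sampling rate (Hz)
f0 = 1_000.0                 # resonant frequency of the "beam" (Hz)
w0 = 2.0 * np.pi * f0
gamma = 2.0 * np.pi * 50.0   # intrinsic damping rate (1/s)
dt = 1.0 / fs

def steady_envelope(drive, duration=0.4, target=1e-6, mu=20.0):
    """Drive a resonator x'' + (gamma - g) x' + w0^2 x = F sin(w0 t) and adapt
    the feedback gain g so the output envelope approaches `target`.
    Small inputs push g towards the stability limit (high sensitivity);
    large inputs pull g back down (compressed, less sensitive response)."""
    n = int(duration * fs)
    x, v, g, env = 0.0, 0.0, 0.0, 0.0
    for i in range(n):
        force = drive * np.sin(w0 * i * dt)
        # Semi-implicit Euler step; the feedback g reduces the effective damping.
        v += (force - (gamma - g) * v - w0**2 * x) * dt
        x += v * dt
        # Leaky envelope detector followed by gain adaptation.
        env += (abs(x) - env) * 200.0 * dt
        g += mu * (target - env) / target * gamma * dt
        g = min(max(g, 0.0), 0.95 * gamma)   # stay below self-oscillation
    return env

for drive in (1e-4, 1e-2, 1.0):
    print(f"drive {drive:8.1e}  ->  output envelope {steady_envelope(drive):.2e}")
```

Run over a range of drive amplitudes, the output of this sketch grows roughly linearly while the feedback gain is pinned at its maximum, then compressively once the adaptation pulls the gain back, a crude analogue of the adaptive gain control described above.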

In human hearing, frequency-selective and event-based sensing are used to discriminate otherwise quiet sounds from noisy backgrounds, such as a name spoken across a crowded room. Bioinspired adaptive sensors — whether for audio or vision4 — that artificially recreate these pre-processing functions at the edge of computing networks could be a valuable addition to the development of electronics that are more energy efficient and powerful.