NEWS AND VIEWS

Evolution of circuits for machine learning

The fundamental machine-learning task of classification can be difficult to achieve directly in ordinary computing hardware. Unconventional silicon-based electrical circuits can be evolved to accomplish this task.
Cyrus F. Hirjibehedin is in the Quantum Information and Integrated Nanosystems group, Lincoln Laboratory, Massachusetts Institute of Technology, Lexington, Massachusetts 02421, USA.

Artificial intelligence (AI) has allowed computers to solve problems that were previously thought to be beyond their capabilities, from defeating the best human opponents in complex games1 to automating the identification of diseases2. There is therefore great interest in developing specialized circuits that can complete AI calculations faster and with lower energy consumption than can current devices. Writing in Nature, Chen et al.3 demonstrate an unconventional electrical circuit in silicon that can be evolved in situ to carry out basic machine-learning operations.

Although computers excel at performing calculations that have well-defined answers, they have not been good at making guesses. For example, if you are thinking about selling your car, a computer is ideally suited for calculating the average price that similar cars have sold for, to help you determine your selling price. But by analysing the enormous digital data sets that are currently available, AI techniques such as machine learning can now teach computers to make sensible predictions. One of the most basic operations that machine-learning algorithms can carry out when provided with a large set of inputs (such as the age of a car and how many kilometres it has been driven) is classification into one of a set of categories, such as whether the car is in poor, fair or good condition and therefore whether you can expect to get the price you want for it.

Using the structure of the human brain as inspiration, scientists and engineers have made substantial progress in developing specialized hardware to greatly reduce the amount of time and energy needed to perform tasks such as classification4. There are also many unconventional device concepts for machine learning that are still in the early stages of development but that could offer radical new capabilities. For example, researchers are exploring whether superconductor-based electrical circuits that work at only a few degrees above absolute zero, and that operate at gigahertz frequencies with high energy efficiency, could enable machine-learning applications that are currently infeasible using conventional approaches5.

Chen and co-workers’ circuit is also inspired by the brain, and represents a major departure from typical electrical circuits. Normally, electrical current flows through circuits like water flowing in a river. If the river becomes so shallow that it is reduced to a set of small puddles, the water can no longer flow because the puddles are isolated from each other by barriers formed by the riverbed.

Although the barriers between water puddles are too wide for a water molecule to hop from one puddle to the next, this is not the case for electrical charge in ‘puddles’ of charge separated by nanoscale distances. Tools such as scanning tunnelling microscopes use the high sensitivity of a hopping process called quantum-mechanical tunnelling to routinely image features as small as individual atoms on surfaces6. This tunnelling is also at the heart of quantum-computing technologies that have made impressive advances in the past decade7.

Previous work by some of the current authors8 produced isolated charge puddles from a collection of gold nanoparticles that were randomly deposited on a silicon surface, with insulating molecules between them. These puddles were used to implement circuits that carried out conventional calculations, rather than machine learning, and are at the heart of Chen and colleagues’ circuit design (Fig. 1a). Information is input into such a circuit through electrical voltages that are applied using ordinary wires. The electric fields from these wires can alter whether or not hopping between neighbouring puddles can occur, and therefore can modify the hopping path for electrical charge through the circuit (Fig. 1b). The output of the circuit is determined by whether or not electrical current flows through another designated wire.

Figure 1 | An unconventional circuit for machine learning. a, Chen et al.3 demonstrate an electrical circuit in which charge hops between ‘puddles’ of charge in a background material. The operation of the circuit is tuned by applying voltages to control wires. Inputs to the circuit are provided by voltages on input wires, and the circuit’s output is determined by whether or not charge flows through an output wire. b, The control wires modify the regions of the circuit in which charge hopping can occur, thereby modifying which hopping paths can exist. An example of the effect produced by changing the voltage for a single control wire is shown here. The authors use their circuit to carry out basic machine-learning operations.

Because the distribution of charge puddles is random, it would seem impossible to predict how such a circuit would behave. However, the high sensitivity of tunnelling makes it possible to strongly modify the behaviour of the circuit using different control wires. This behaviour also cannot be easily predicted, but the previous work showed that configurations could be reliably found that carried out logic operations for two inputs, such as indicating whether at least one or both of the inputs were on (called an OR gate and an AND gate, respectively, in binary logic).

Although these earlier circuits did not perform machine-learning operations, the researchers did use a machine-learning algorithm to determine the control parameters needed to make the circuits carry out different operations. This algorithm is inspired by biological evolution, starting from a set of random control parameters and using only the most promising outcomes to ‘breed’ new parameter sets for successive generations. The previous study was groundbreaking because it showed that a single circuit could be reprogrammed in situ to execute any two-input logic operation by simply changing the voltages applied to five control wires.
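
The broad logic of such an evolutionary search can be sketched in a few lines of code. The example below is purely illustrative: the device model (measure_output), the fitness function, the population size and the mutation step are hypothetical stand-ins rather than the authors' actual procedure, but the structure of breeding successive generations of control-voltage sets from the most promising candidates is the same.

```python
import random

def measure_output(inputs, controls):
    # Hypothetical stand-in for measuring the physical device: mixes the two
    # binary inputs with five control voltages to give an output 'current'.
    # In the experiment this would be a real measurement, not a formula.
    a, b = inputs
    return (controls[0] * a + controls[1] * b + controls[2] * a * b
            + controls[3] * (1 - a) * (1 - b) + controls[4])

def fitness(controls, truth_table):
    # Count how many input combinations the thresholded output gets right.
    return sum((measure_output(inputs, controls) > 0.0) == bool(target)
               for inputs, target in truth_table.items())

AND_GATE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def evolve(truth_table, n_controls=5, pop_size=20, generations=200):
    # Start from random sets of control voltages (the initial 'genomes').
    population = [[random.uniform(-1.0, 1.0) for _ in range(n_controls)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: fitness(c, truth_table),
                        reverse=True)
        parents = ranked[:pop_size // 4]        # keep the most promising sets
        population = list(parents)
        while len(population) < pop_size:       # breed the next generation
            mother, father = random.sample(parents, 2)
            child = [random.choice(genes) for genes in zip(mother, father)]
            child[random.randrange(n_controls)] += random.gauss(0.0, 0.1)
            population.append(child)
    return max(population, key=lambda c: fitness(c, truth_table))

best = evolve(AND_GATE)
print(best, fitness(best, AND_GATE))   # best control set found and its score
```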

In the present work, Chen et al. greatly expanded this basic idea by overcoming some of its key limitations and using the circuits to perform AI operations. First, the authors found a way to produce charge puddles directly in silicon by randomly implanting atoms that donate small amounts of charge to the silicon itself. This method makes the devices more broadly manufacturable than they previously were and potentially compatible with the current generation of electronics, which are also mainly based on silicon. Second, in this new material system, the maximum temperature at which hopping dominates the electrical flow in these circuits, and therefore at which the desired operation is viable, is increased from barely above absolute zero to room temperature.

To demonstrate the increased potential of these devices, Chen et al. evolved circuits that could classify all 16 possible sets of 4 binary inputs (0000, 0001, …, 1111, where 0 represents no input on a given wire and 1 represents an input on a given wire). This classification was possible even when the number of control voltages was reduced from five to three.
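
To make the task concrete: with 4 binary inputs there are 2^4 = 16 distinct patterns, and the evolved circuit must produce a distinguishable output for each one. The short snippet below simply enumerates those patterns and their class labels; it describes the task, not the device.

```python
from itertools import product

# The 16 possible four-bit patterns, '0000' to '1111'. In the experiment each
# pattern is applied as voltages on the four input wires, and a suitably tuned
# circuit responds with an output that identifies the pattern's class.
patterns = ["".join(map(str, bits)) for bits in product((0, 1), repeat=4)]
labels = {pattern: index for index, pattern in enumerate(patterns)}
print(labels)   # {'0000': 0, '0001': 1, ..., '1111': 15}
```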

The authors then incorporated this in silico 4-input classifier into the more complex AI task of classifying a standard set of black-and-white images of handwritten digits, which were encoded as a 28 × 28 array of pixels, each with a value of either 0 (white) or 1 (black) (see Fig. 4 of the paper3). To do this, Chen et al. subdivided the original array into sets of 2 × 2 neighbouring pixels and fed the value of each of these 4 pixels into the 4-input classifier’s input wires. The authors then set the control wires to perform classification for each of the 16 possible sets of 4 inputs, and passed all 16 outputs for each set of 2 × 2 pixels to a machine-learning algorithm run on conventional hardware that identified the digit in the full image.
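
This pixel-level pipeline can be mocked up as follows. The sketch is schematic rather than the authors' code: the function device_response is a hypothetical stand-in for the evolved hardware (here reduced to a trivial one-hot encoding of the 4-pixel block), and the final digit classifier running on conventional hardware is omitted.

```python
import numpy as np

def device_response(block):
    # Hypothetical stand-in for the evolved hardware: given the four pixel
    # values of a 2 x 2 block, return the 16 outputs (one per control-wire
    # setting). A trivial one-hot encoding is used here for illustration.
    index = int("".join(str(int(p)) for p in block), 2)
    outputs = np.zeros(16)
    outputs[index] = 1.0
    return outputs

def extract_features(image):
    # Subdivide a 28 x 28 binary image into non-overlapping 2 x 2 blocks and
    # collect the 16 outputs for each block: 14 * 14 * 16 = 3,136 features,
    # which a conventional machine-learning model then maps to a digit.
    features = []
    for row in range(0, 28, 2):
        for col in range(0, 28, 2):
            block = image[row:row + 2, col:col + 2].flatten()
            features.extend(device_response(block))
    return np.array(features)

image = np.random.randint(0, 2, size=(28, 28))   # a random binary 'image'
print(extract_features(image).shape)             # (3136,)
```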

Chen and colleagues’ hardware platform for classification is inherently scalable, and individual classifier devices can be run in parallel without any conflicts. In the current incarnation of the platform, the set-up used to perform the measurements limits the speed at which the classifiers can be operated and, in turn, the energy efficiency. The authors suggest alternative ways in which the measurements could be implemented to greatly improve the speed and energy efficiency of the circuits; demonstrating such improvement will be crucial if these designs are to move from the research laboratory to real-world applications.

Because the random distribution of charge puddles in both the previous and current designs is difficult to model, the circuits’ high sensitivity to the control voltages is essential for evolving a set of control parameters after fabrication. Although the absence of the need to precisely position the puddles makes the circuits easier to fabricate, their performance might be further enhanced by having pre-defined, atomically precise arrangements of the individual impurity atoms that donate charge to the silicon9. Such enhancements could include reproducibility of control parameters for different devices, improved reliability of operation at higher temperatures and reduced energy consumption.

Nature 577, 320-321 (2020)

doi: 10.1038/d41586-020-00002-x

References

1. Silver, D. et al. Nature 529, 484–489 (2016).
2. Liu, X. et al. Lancet Digit. Health 1, e271–e297 (2019).
3. Chen, T. et al. Nature 577, 341–345 (2020).
4. Merolla, P. A. et al. Science 345, 668–673 (2014).
5. Day, A., Wynn, A. & Golden, E. APS March Meet. 2020 abstr. L48.00008 (2020); go.nature.com/2ti4zt1
6. Binnig, G., Rohrer, H., Gerber, Ch. & Weibel, E. Phys. Rev. Lett. 50, 120–123 (1983).
7. Arute, F. et al. Nature 574, 505–510 (2019).
8. Bose, S. K. et al. Nature Nanotechnol. 10, 1048–1052 (2015).
9. Schofield, S. R. et al. Phys. Rev. Lett. 91, 136104 (2003).
