

Smart connections

Nanoscale devices have now been made that mimic biological connections in the brain by responding to the relative timing of signals. This achievement might lead to the construction of artificial neural networks for computing applications.

The development of artificial neural networks that could rival their biological counterparts is arguably the last frontier in computing. A long-standing drawback is the lack of efficient hardware technology. Two papers — one by Kuzum et al.1 in Nano Letters and the other by Ohno and colleagues2 in Nature Materials — report the fabrication of inorganic artificial synapses that bring viable technology for artificial neural networks a step closer.

The mammalian brain outperforms computers in many computational tasks — for example, it can recognize a complex image faster and with better fidelity, and yet consumes a tiny fraction of the energy used by computers to do this. In an attempt to make computers that match the efficiency of biological information processing, artificial neural networks were conceived, the first implementations3 of which appeared even before the first digital microprocessor. But although the performance of microprocessors has improved exponentially during the past four decades, the progress of artificial neural networks has been much less remarkable.

At its most abstract, a neural network can be represented by a diagram (Fig. 1a) in which nodes corresponding to neurons are connected by lines corresponding to synapses (the junctions between neurons). Each node processes, in parallel with the others, input information from the preceding nodes and then passes the result on to the next layer of nodes. For example, a node might add together its input signals, each of which is scaled by a weight associated with the corresponding synapse; the output of the node would then depend on whether the sum of the weighted inputs exceeds the threshold value of the node. For any given structure of an artificial neural network, the overall functionality is defined by the synaptic weights, which are adjusted during learning processes.
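The weighted-sum-and-threshold behaviour described above can be sketched in a few lines of code (a minimal illustration; the function and all the numbers are invented, not taken from either paper):

```python
def neuron_output(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold.

    `inputs` are signals from the preceding nodes; each `weights[i]` is the
    synaptic weight of the connection carrying `inputs[i]`.
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Two active inputs, one on a strong synapse and one on a weak one:
# 1 * 0.8 + 1 * 0.3 = 1.1 > 1.0, so the node fires.
print(neuron_output([1, 1, 0], [0.8, 0.3, 0.9], threshold=1.0))  # 1
```

Learning, in this picture, means adjusting the entries of `weights`; the node itself never changes.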

Figure 1: Making the connection.

a, Artificial neural networks that mimic the behaviour of neurons in the brain have been proposed as possible computational systems. Such networks can be represented by diagrams depicting how the artificial neurons are connected. Arrows indicate that each neuron receives and transmits signals to all the other neurons to which it is connected. All the connections incorporate artificial synapses, but for simplicity only one is shown. b, In reality, devices that act as artificial synapses serve as junctions for electrodes projecting from artificial neurons, as shown here for part of a nanometre-scale 'hybrid crossbar' circuit. The components shown correspond to the coloured components depicted in a. Kuzum et al.1 and Ohno et al.2 report devices that act as nanometre-scale artificial synapses.

To match the complexity of the human brain, artificial neural networks would need to contain roughly 10¹¹ nodes, with every node connected on average to 10,000 others, which equates to about 10¹⁵ synapses. This requirement for high complexity, connectivity and massive parallel information processing is what makes it so challenging to build artificial neural networks that mimic the performance of biological systems. Indeed, software simulations of neural networks on conventional ('von Neumann') computers are very slow. Neural networks based on purpose-built integrated circuits are more efficient in terms of speed, area and power consumption, but even these are largely inadequate with current technology. For instance, even the crudest artificial synapse requires more than ten transistors, which is impractical for the implementation of large networks.
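The scale of the problem follows from simple arithmetic:

```python
nodes = 10**11            # roughly the number of neurons in the human brain
fan_out = 10**4           # average number of connections per neuron
synapses = nodes * fan_out
print(f"{synapses:.0e}")  # 1e+15 synapses
```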

Artificial synapses that address some of these challenges were reported last year4. These nanometre-scale devices consisted of thin films of amorphous silicon enriched with clusters of silver atoms, sandwiched between two metal electrodes. When a voltage bias was applied across the electrodes in one direction, silver filaments formed and/or grew in the silicon, decreasing the electrical resistivity of the device and so increasing the synaptic weight (the conductivity). When the polarity of the applied voltage bias was reversed, the silver filaments dispersed and/or shrank, increasing the resistivity and so reducing the synaptic weight.
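The behaviour of such a device can be caricatured as a single conductance that is nudged up or down according to the polarity of each voltage pulse (a toy model only; the update rate and bounds are invented for illustration and are not fitted to any real device):

```python
class MemristiveSynapse:
    """Toy model of a bipolar resistive-switching synapse.

    Positive voltage pulses grow conductive filaments, so the conductance
    (the synaptic weight) rises; negative pulses disperse them, so it falls.
    """

    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.05):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def apply_pulse(self, voltage):
        # Filaments grow under one polarity and shrink under the other;
        # the conductance is clamped to the device's physical range.
        self.g += self.rate * (1 if voltage > 0 else -1)
        self.g = max(self.g_min, min(self.g_max, self.g))
        return self.g

syn = MemristiveSynapse()
syn.apply_pulse(+1.0)   # weight increases
syn.apply_pulse(-1.0)   # weight decreases again
```

The essential point is that the weight is stored in the physical state of the junction itself, rather than in transistor circuitry.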

The idea of using a resistive switching mechanism to implement artificial synapses has been around for many years3,5,6, but the devices described above4 clearly demonstrated that functional artificial synapses could be made that have a nanometre-scale footprint. In a neural network, the synapses would act as 'crossbar' junctions between overlapping electrodes attached to artificial neurons (Fig. 1b), and so the footprint of the synapses is limited only by the overlapping area of two electrodes — the smaller the overlapping area, the smaller the synapses can be. Crude estimates7 have shown that synapses smaller than 10 nanometres in length could be used to make artificial neural networks of sufficient complexity and connectivity to challenge the computational performance of the human brain.
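In such a crossbar, each junction's conductance acts as a synaptic weight, and by Ohm's and Kirchhoff's laws the current delivered to each output electrode is automatically the weighted sum of the input voltages. A minimal sketch of this vector-by-matrix operation (illustrative values only):

```python
def crossbar_outputs(voltages, conductances):
    """Column currents of a crossbar: I_j = sum_i V_i * G_ij.

    `voltages` are applied to the rows; `conductances[i][j]` is the weight
    of the junction between row i and column j.
    """
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

# Two input rows driving three output columns.
G = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]
print(crossbar_outputs([1.0, 2.0], G))  # [0.9, 1.2, 1.5]
```

The weighted sums are computed in a single analogue step, in place, which is what makes the crossbar geometry so attractive for massively parallel networks.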

Kuzum et al.1 now show that nanoscale artificial synapses can be constructed using different materials and physical mechanisms from those used previously4. The authors made their devices using a chalcogenide glass that switches reversibly between an amorphous phase (of high resistivity) and a crystalline phase (of low resistivity) in response to electric pulses. By applying a voltage to a film of the glass, they could thus induce resistive switching in their devices. The volumes of the two phases in the film change gradually and depend on the magnitude and duration of the applied voltage pulse, so the authors were able to continuously tune the resistivity of the devices.

Because of the large number of synapses, any practical implementation of artificial neural networks must be able to update synaptic weights in parallel; otherwise, learning processes would be too slow. The main mechanism for efficient learning in biological neural networks was postulated8 by Donald Hebb in 1949, and was eventually demonstrated experimentally9 in 1998. A crucial aspect of this mechanism is that changes in synaptic weight depend on the relative timing of pulses from presynaptic and postsynaptic neurons — the weight strengthens if two neurons are activated simultaneously, but weakens when they are activated separately.

The most exciting aspect of Kuzum and colleagues' work1 is the experimental demonstration that their artificial synapses can undergo Hebbian learning. In line with theoretical proposals10,11, the authors set up a synapse between two artificial neurons and used a specific sequence of voltage pulses to enforce learning. When active, the presynaptic neuron generated a sequence of high-amplitude, short-duration electric pulses followed by lower-amplitude but longer-duration pulses, whereas the postsynaptic neuron generated a single negative pulse. The authors observed that, if the negative pulse overlapped in time with the higher-amplitude presynaptic pulses, the total bias across the synapse was large enough to enforce gradual growth of the amorphous (highly resistive) phase of the synapse. But if the postsynaptic neuron's negative pulse overlapped with low-amplitude presynaptic pulses, the total applied bias across the synapse was smaller and caused gradual crystallization of the synapse — lowering the resistivity and increasing the synaptic weight. The authors set up the system so that, if the pulses did not overlap, no changes in conductance occurred.
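The timing-dependent rule can be summarized as a function of the delay between the start of the presynaptic train and the postsynaptic pulse (a sketch only; the phase durations, step size and bounds are invented for illustration and do not correspond to the authors' pulse scheme in detail):

```python
def weight_update(g, pre_time, post_time,
                  high_phase=10.0, train_length=50.0, step=0.05):
    """Return the new conductance g after one pre/post pulse pairing.

    If the postsynaptic pulse overlaps the initial high-amplitude part of
    the presynaptic train, the large total bias amorphizes the material and
    the weight decreases. Overlap later in the train gives a smaller bias,
    causing crystallization and a weight increase. No overlap, no change.
    """
    dt = post_time - pre_time
    if 0 <= dt < high_phase:
        return max(0.0, g - step)    # large bias: amorphization, depression
    if high_phase <= dt < train_length:
        return min(1.0, g + step)    # smaller bias: crystallization, potentiation
    return g                         # pulses do not overlap: no change

weight_update(0.5, 0.0, 5.0)    # early overlap -> weight falls to 0.45
weight_update(0.5, 0.0, 20.0)   # late overlap -> weight rises to 0.55
weight_update(0.5, 0.0, 100.0)  # no overlap -> weight stays at 0.5
```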

Studies of biological synapses have revealed two distinct temporal modes for synaptic plasticity (the process in which synaptic weights are modulated). In short-term plasticity, changes to weights last for less than a few seconds, whereas long-term potentiation causes increases in weights that can persist for many hours12. In other words, synaptic weights decay quickly after single events in which presynaptic and postsynaptic pulses overlap; weights become permanent only after the repeated reinforcement of overlapping pulses.

Ohno et al.2 report nanoscale artificial synapses that exhibit both short-term and long-term plasticity. The underlying mechanism is, again, resistive switching: the application of a voltage across the device results in the formation of silver filaments that bridge a nanometre-scale gap between two electrodes, lowering the resistivity of the synapse and increasing the synaptic weight. The authors observed that short-term plasticity occurs in response to intermittent voltage pulses, and propose that, under these conditions, incomplete silver bridges form — bridges the thickness of one atom that rapidly dissolve in the absence of a voltage. But when they applied rapidly repeated voltage pulses, they observed long-term plasticity. They attribute this to the formation of complete (thicker) bridges, which are more stable than the incomplete ones and persist for a longer time.
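One way to caricature this short-term/long-term transition is a conductance whose gains decay away unless pulses arrive in quick succession, in which case the change is consolidated, as the complete silver bridges are in Ohno and colleagues' devices (all time constants and thresholds here are invented for illustration):

```python
import math

def run_pulses(times, tau=1.0, gain=0.3, consolidate_after=3, window=0.5):
    """Return the lasting conductance change after pulses at the given times.

    Each pulse raises the conductance by `gain`, but the unconsolidated part
    decays with time constant `tau` (an incomplete bridge dissolving). Once
    `consolidate_after` pulses arrive within `window` of one another, the
    change is consolidated (a complete bridge) and persists.
    """
    g, consecutive, consolidated = 0.0, 0, 0.0
    for prev, t in zip([None] + times[:-1], times):
        if prev is not None:
            g = consolidated + (g - consolidated) * math.exp(-(t - prev) / tau)
            consecutive = consecutive + 1 if t - prev <= window else 1
        else:
            consecutive = 1
        g += gain
        if consecutive >= consolidate_after:
            consolidated = g     # complete bridge: the change persists
    # Long after the last pulse, only the consolidated part remains.
    return consolidated

print(run_pulses([0, 5, 10]))     # sparse pulses: no lasting change (0.0)
print(run_pulses([0, 0.2, 0.4]))  # rapid pulses: a lasting increase
```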

The fabrication technology used to make artificial synapses is still largely immature, resulting in variations in the properties of thin-film resistive switching devices. The amount of variation would be unacceptable in components of conventional integrated circuits. This situation is likely to change, however, given the large efforts currently being made by industry to develop digital computer memories based on crossbar junctions. Advances in crossbar-memory cells would aid the development of artificial neural networks, because similar materials and device structures are used in both technologies. The fact that neural networks are typically extremely fault tolerant7 should also relax the requirement for low variation in the properties of artificial synapses. So, after decades of frustratingly slow progress, perhaps artificial neural networks are finally about to come of age.


References

1. Kuzum, D., Jeyasingh, R. G. D., Lee, B. & Wong, H.-S. P. Nano Lett. (2011).
2. Ohno, T. et al. Nature Mater. 10, 591–595 (2011).
3. Widrow, B. Tech. Rep. 1553-2 (1960).
4. Jo, S. H. et al. Nano Lett. 10, 1297–1301 (2010).
5. Thakoor, S., Moopenn, A., Daun, T. & Thakoor, A. P. J. Appl. Phys. 67, 3132–3135 (1990).
6. Yang, J. J. et al. Nature Nanotechnol. 3, 429–433 (2008).
7. Likharev, K. K. Science Adv. Mater. 3, 322–331 (2011).
8. Hebb, D. O. The Organization of Behavior: A Neuropsychological Theory (Wiley, 1949).
9. Bi, G. & Poo, M. J. Neurosci. 18, 10464–10472 (1998).
10. Snider, G. S. in Proc. IEEE/ACM Int. Symp. Nanoscale Architectures, Anaheim, California, 85–92 (2008).
11. Linares-Barranco, B. & Serrano-Gotarredona, T. Preprint at (2009).
12. Kandel, E. R., Schwartz, J. H. & Jessel, T. M. Principles of Neural Science 4th edn (McGraw-Hill, 2000).


Author information



Corresponding author

Correspondence to Dmitri B. Strukov.



Cite this article

Strukov, D. Smart connections. Nature 476, 403–405 (2011).


