Molecular programming

DNA and the brain


The idea that artificial neural networks could be based on molecular components is not new, but making such a system has been difficult. A network of four artificial neurons made from DNA has now been created. See Letter p.368

The design of intelligent systems is a long-standing goal for scientists, not least those in the Acme Labs of the animated TV series Pinky and the Brain. The Acme researchers used their technology to enhance the intelligence of the eponymous mice — Brain became a fiendish genius bent on world domination, although Pinky's transformation into a dimwit was arguably less impressive. Such experiments are clearly fantasy, but a related and compelling bioengineering challenge in the real world is to demonstrate how tiny biological molecules could support limited forms of intelligent behaviour, as must have happened before brains evolved. On page 368 of this issue, Qian et al.1 report a leap forward in this area: a network of interacting DNA strands that can act as artificial neurons, and that supports simple memory functions.

Brains are large networks of neurons. Within these networks, individual cells produce electrochemical signals whose strength depends in a complex way on the strengths of input signals received from other neurons in the network, or from sensory inputs. Artificial neurons are theoretical, highly simplified models of neurons2 that produce a signal if the weighted sum of their inputs reaches or exceeds a threshold value (Fig. 1a,b). Because of their simplicity, networks of artificial neurons are but a shadow of the means used for information processing in the brain. Nevertheless, artificial neural networks implemented computationally are adept at pattern-association tasks that our brains do well, such as identifying letters of the alphabet in poor handwriting.

Figure 1: Pattern recognition by artificial neural networks.

a, The diagram represents an artificial neuron that has four inputs and one output. If the inputs from top to bottom are 1, 1, 1 and 0, then the weighted sum of inputs is 3 + 0 − 2 + 0 = 1. This is less than the threshold of 2, and so the output is 0. b, If the inputs to the same neuron are all 1, then the weighted sum of inputs is 2, and the output is 1. c, Networks of artificial neurons can be used for pattern recognition. Here, the letters L and X are depicted as patterns of nine black and white squares in a grid. d, A network of nine artificial neurons, where each neuron corresponds to a square in the grid, can identify whether an incomplete pattern, such as that shown, is L or X. Each neuron receives signals from all the other neurons, but, for simplicity, only the signals to and from the neuron associated with the top-right square — the large red neuron in the diagram — are shown. Neurons associated with white squares provide input values of 1, whereas those associated with black squares provide a value of 0. On the basis of its predetermined weightings and threshold value (not shown), the red neuron determines that the signal from the top-right square is 0 — that is, the square is black. Qian et al.1 have made a DNA-based network of four artificial neurons that distinguishes between four four-bit patterns, and that reconstructs the patterns on the basis of incomplete descriptions.
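To make the caption's arithmetic concrete, here is a minimal Python sketch of the neuron in Figure 1a,b. The weights 3, 0 and −2 and the threshold of 2 are taken from the caption; the fourth weight is not stated, but panel b's weighted sum of 2 implies that it is 1, and that value is assumed below.

```python
def neuron_output(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

weights = [3, 0, -2, 1]  # fourth weight inferred from panel b, not given in the caption
threshold = 2

print(neuron_output([1, 1, 1, 0], weights, threshold))  # panel a: sum = 1 < 2, output 0
print(neuron_output([1, 1, 1, 1], weights, threshold))  # panel b: sum = 2 >= 2, output 1
```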

To understand how artificial neural networks perform such tasks, consider a pair of simple patterns: 3 × 3 grids of black or white squares that represent two letters of the alphabet (Fig. 1c). Given an incomplete description of a pattern, artificial neural networks use an automated method to find the letter that best matches it. The nine inputs to such a network describe the incomplete pattern, with black squares represented by '0' and white squares by '1'. Squares whose colour is unknown are represented by '?' (Fig. 1d). The nine outputs of the network should describe the pattern that best matches the incomplete input, using '0's and '1's as above, '?' for squares whose colour cannot be resolved and 'x' if the input matches none of the stored patterns.

The network's agents are nine artificial neurons, each of which corresponds to a square on the grid. Each neuron determines one of the nine outputs, using signals from all the other neurons as clues. Roughly, the weighted-threshold feature of an artificial neuron provides a sort of voting mechanism for its incoming 1-valued signals. For example, the middle-left and bottom-middle squares in Figure 1d are white (1-valued), which implies that the top-right square should be black (0-valued). Accordingly, in the neuron that corresponds to the top-right square, the inputs from the two white squares should be weighted to help bring the overall sum of inputs below the threshold value of the neuron, thus ensuring that the output is '0'. In other words, the inputs from the two white squares should be negative numbers (or negative votes).

By contrast, if a different incomplete pattern had a white square in the centre of the grid, the centre square's input signal to the 'top-right' neuron should be positively weighted, helping to ensure an output of '1' to indicate that the top-right square is white. The weights used by each neuron are determined in advance from a collection of patterns — that is, before any incomplete pattern is provided. In effect, the weights are a neuron's means of 'remembering' the collection of patterns, enabling the neuron to match incomplete patterns.
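The article says only that the weights are determined in advance from the stored patterns. A standard choice for this kind of pattern completion is the Hebbian rule of a Hopfield network, and the Python sketch below uses it under that assumption; the particular L and X grids and the update schedule are illustrative, not the authors'.

```python
import numpy as np

# Illustrative 3 x 3 grids (1 = white, 0 = black), flattened row by row.
L = np.array([1, 0, 0,
              1, 0, 0,
              1, 1, 1])
X = np.array([1, 0, 1,
              0, 1, 0,
              1, 0, 1])

def to_spin(p):
    """Map 0/1 squares to -1/+1 so that votes can be negative or positive."""
    return 2 * p - 1

# Hebbian weights: each stored pattern adds a positive vote between squares
# that agree and a negative vote between squares that disagree; a neuron
# receives no input from itself.
W = sum(np.outer(to_spin(p), to_spin(p)) for p in (L, X))
np.fill_diagonal(W, 0)

def recall(partial, steps=5):
    """Complete a pattern; unknown squares (None, i.e. '?') contribute no vote."""
    state = np.array([0 if v is None else 2 * v - 1 for v in partial])
    for _ in range(steps):  # repeated weighted-threshold updates
        state = np.where(W @ state >= 0, 1, -1)
    return (state + 1) // 2  # back to 0/1 squares

# An incomplete L: three squares are unknown.
partial_L = [1, 0, None, 1, None, 0, 1, None, 1]
print(recall(partial_L).reshape(3, 3))  # recovers the stored L grid
```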

To date, efforts to synthesize molecular systems that behave as artificial neurons have been on too small a scale to mimic the action of even a single neuron. But Qian et al.1 have now built a network of four artificial neurons that distinguishes between four four-bit patterns, and that can identify which of these patterns matches an incomplete description. Their network is built entirely from DNA.

The authors constructed their artificial neurons from modules that add, multiply and compute thresholds. These arithmetic modules were in turn built from more primitive subcomponents called see-saw gates — versatile units that two of the authors had previously used3 in a quite different demonstration of digital logic circuits. The gates use different concentrations of two designated DNA strands to represent the four possible values of a signal: a high concentration of the first strand signals '0'; a high concentration of the second strand signals '1'; low concentrations of both strands signal '?'; and high concentrations of both strands signal 'x', indicating that the input does not match any pattern. Combinations of input DNA strands that are present in sufficiently high concentrations are converted by the see-saw gates into high concentrations of different output DNA strands, which in turn can be fed as input into other gates.
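As a rough illustration of this dual-strand encoding, the Python sketch below decodes a signal value from the concentrations of its two designated strands. The 'high' and 'low' cut-offs are invented for illustration; in the real system they are set by the gate design and reaction conditions.

```python
HIGH = 0.9  # assumed cut-off (fraction of a reference concentration)
LOW = 0.1   # assumed cut-off

def read_signal(conc_strand0, conc_strand1):
    """Decode a see-saw signal from the concentrations of its two strands."""
    if conc_strand0 >= HIGH and conc_strand1 <= LOW:
        return '0'
    if conc_strand1 >= HIGH and conc_strand0 <= LOW:
        return '1'
    if conc_strand0 <= LOW and conc_strand1 <= LOW:
        return '?'   # neither strand present: value unresolved
    if conc_strand0 >= HIGH and conc_strand1 >= HIGH:
        return 'x'   # both present: input matches no stored pattern
    return '?'       # intermediate levels: treat as unresolved

print(read_signal(1.0, 0.05))   # '0'
print(read_signal(0.05, 0.95))  # '1'
print(read_signal(0.95, 0.95))  # 'x'
```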

At the molecular level, see-saw gates use DNA-strand displacement as the basis of their function. Strand displacement happens when single-stranded DNA used as input forms duplexes with complementary strands in stable, multi-stranded complexes. The formation of new duplexes displaces strands that were previously bound in the original complex, and these released strands act as output.
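The following toy model captures this mechanism at the level of abstract sequence 'domains' rather than chemistry: a gate complex holds an output strand, exposes a short single-stranded toehold, and releases its output only when an input strand matches the toehold and the adjoining bound domain. The domain names are invented for illustration.

```python
def displace(gate, input_strand):
    """Return the strand released by strand displacement, or None if no reaction."""
    # The input must start with the gate's exposed toehold and then match
    # the domain where the output strand is currently bound.
    if input_strand == (gate['toehold'], gate['bound_domain']):
        return gate['output']
    return None

gate = {
    'toehold': 't1',        # exposed single-stranded toehold
    'bound_domain': 'a',    # domain occupied by the output strand
    'output': ('t2', 'b'),  # released strand; its toehold 't2' can
}                           # serve as input to a downstream gate

print(displace(gate, ('t1', 'a')))  # ('t2', 'b'): output displaced and released
print(displace(gate, ('t3', 'a')))  # None: wrong toehold, no displacement
```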

Although Qian and colleagues' demonstration1 of an artificial neural network is technically impressive, its small scale and computing power are, alas, more reminiscent of Pinky than of the Brain. Another limitation is that the neuronal weights of the system — in effect, the memory of the network — were predetermined using computer simulations, and are fixed. By contrast, our brains improve their performance in memory-association tasks, such as handwriting recognition, by fine-tuning the strengths of neuronal connections.

Nevertheless, the authors' DNA-based network is exciting because it shows how a biochemical system can remember information, and can use its memory to adapt to a changing environment by adjusting chemical concentrations. Because the network is built from a nucleic acid, it also provides a possible model for precursors of brains that existed in the RNA world — a postulated era of Earth's early history in which all life was based on RNA molecules, rather than DNA. Moreover, the work opens the door to the development of biochemical neural networks that could fine-tune their neuronal weights over time, given appropriate feedback. In other words, it might pave the way for biochemical systems that can learn.

References

1. Qian, L., Winfree, E. & Bruck, J. Nature 475, 368–372 (2011).

2. McCulloch, W. S. & Pitts, W. Bull. Math. Biophys. 5, 115–133 (1943).

3. Qian, L. & Winfree, E. Science 332, 1196–1201 (2011).


Author information

Correspondence to Anne Condon.


Cite this article

Condon, A. DNA and the brain. Nature 475, 304–305 (2011). doi:10.1038/475304a
