Neural signals from the brains of monkeys have been used to drive the movement of robotic arms. The ultimate objective of such work is to design controllable prosthetic limbs.
The idea of driving robotic limbs with what effectively amounts to the mere 'power of thought' was once in the realm of science fiction. But this goal is edging closer to reality, thanks in part to two decades of studies that have revealed a close match between the activity of neurons in the brain's cerebral cortex and the movements of the hand [1]. Now, writing on page 361 of this issue [2], Wessberg and colleagues describe how they have used electrical signals from five regions in the cerebral cortex of monkeys to drive the movement of robotic arms.
Such 'brain–computer interfacing' [3] has clear-cut clinical aims. The ambitious goal is to provide amputees and patients suffering from a variety of severe motor disorders — such as paralysis and amyotrophic lateral sclerosis — with the means to act and communicate, by using brain activity to control artificial devices in place of muscles.
The idea of using signals from the cerebral cortex to drive artificial limbs was pioneered 30 years ago [4], when these signals were used to predict the movement of the wrist in real time. But only since the motor cortex region of the brain has been studied in detail [5] have researchers begun to explore the possibility of reproducing the full complexity of arm movements [6]. Doing so requires the ability to extract — in real time — the information concerning arm movements that is hidden within the activities of large groups of neurons [7].
Researchers are gradually developing the hardware and software needed to connect brains to robotic limbs. The hardware includes 'conic electrodes' [8], an innovative development based on the idea of inducing nerve growth within small, hollow electrodes containing nerve growth factor. This technique establishes stable, long-lasting electrical contacts between brain cells and electrodes.
Wessberg et al. [2], meanwhile, have been working on the software. They provide a simple solution to the problem of extracting 'movement information' from neural signals detected by microelectrodes implanted in different regions of the cerebral cortex of living owl monkeys (Fig. 1). At each instant, a computer program developed by Wessberg et al., running in real time, determines the next position to be assumed by the artificial hand on the basis of the neural signals collected during the past second. As this process is continuously repeated, the program generates a sequence of hand positions that traces out an entire movement trajectory.
In this computer program, the relationship between neural firing and hand position is of the simplest linear kind. The activity of each neuron — measured in impulses per second — is multiplied by a numerical coefficient, and the outcome is added to the outcomes from other neurons. The authors determined the coefficients at the beginning of the experiment by matching neural activities with the movements of each monkey's own hand. Wessberg et al. tried several computational schemes of varying complexity. Remarkably, the simplest linear approach turned out to perform as well as the more complex methods.
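The scheme just described — a weighted sum of recent firing rates, with weights fitted against the monkey's own hand movements — can be sketched as a least-squares linear decoder. The snippet below is a minimal illustration on synthetic data, not the authors' actual program: the bin width, window length, and all variable names are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (the real study used recorded cortical spike rates).
n_neurons, n_samples = 30, 2000           # neurons, 100-ms time bins
n_lags = 10                               # ~1 s of history at 100 ms per bin
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)

# A hidden linear mixing defines the "true" hand position for this toy example.
true_w = rng.normal(size=n_neurons * n_lags + 1)

def window(rates, t, n_lags):
    """Stack the firing rates from the past n_lags bins into one feature row."""
    feats = rates[t - n_lags:t].ravel()
    return np.concatenate(([1.0], feats))  # leading 1.0 is the intercept term

X = np.array([window(rates, t, n_lags) for t in range(n_lags, n_samples)])
y = X @ true_w + rng.normal(scale=0.5, size=X.shape[0])  # noisy hand position

# Fit: one coefficient per neuron and time lag, by ordinary least squares,
# matching neural activity against the observed hand positions.
w_hat, *_ = np.linalg.lstsq(X[:1500], y[:1500], rcond=None)

# "Real-time" use: each new one-second window of rates yields, via the same
# weighted sum, the next position for the artificial hand.
pred = X[1500:] @ w_hat
corr = np.corrcoef(pred, y[1500:])[0, 1]
print(f"correlation with held-out hand positions: {corr:.3f}")
```

Because the decoder is a single matrix product per time step, it is cheap enough to run in real time — one reason a linear model is attractive even when more complex schemes perform no better.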
In this way, the authors solve the problem of generating natural movements of a robotic arm in just one dimension (left or right) or in three dimensions. They also find that the coefficients derived from the analysis of natural hand movements towards targets on the right side of the monkey can be used to guide the robot arm in different movements towards targets on the left side. This 'generalization' is an important property of Wessberg et al.'s method. It will be important to explore further the ability of this approach to generate movements of the robot arm over a wide region of space, but this work represents a first, innovative step.
However, the creation of interfaces between neural tissue and machines has implications beyond the development of controllable prosthetics. It will also help us to understand how the brain works.
For example, the use of 'neurorobotic' systems may prove invaluable in working out which regions in the cerebral cortex control which features of limb movements. Wessberg et al. placed signal-recording microelectrodes in five different areas in the cerebral cortex, including the left primary motor cortex, the posterior parietal cortex, and the premotor cortex of both left and right hemispheres. The dorsal premotor cortex is believed to plan the general spatial and temporal features of movements. The left motor cortex presides over the generation of movement commands for the right arm. And the posterior parietal cortex is thought to integrate visual, sensory and motor information to determine the location of a movement target (such as a piece of food), and how to reach it. When the authors analysed the efficiency of their program in extracting arm-movement information from these different areas, they found — somewhat unexpectedly — that the best place to put the electrodes was in the left dorsal premotor cortex. It seems that we still have much to learn about the functions of the cerebral cortex.
In addition, earlier work [9,10] showed that it is possible to train cortical neurons in monkeys and in rats to control simple devices. Neural populations that would normally activate the muscles of a limb learn to activate an external motor. Remarkably, after learning was completed, these neurons stopped producing the natural movement of the limb — disproving the view that there is a fixed, unchangeable pattern of connections between cortical neurons and muscles.
As we study the brain using the metaphor of computers — and computers using the metaphor of the brain — we face the question of whether and how biological neurons can be 'programmed' to carry out specific computations such as those needed to control a robotic arm. Achieving this goal, which would give us direct access to the unsurpassed computational power of the brain, would represent a huge step forwards. The use of artificial devices to test the learning ability of networks of real neurons may also help us to understand the mechanisms inherent to any form of biological learning, such as long-lasting increases and decreases in signal transmission across neuronal synapses. Harnessing these mechanisms is perhaps the greatest challenge in building prosthetic devices controlled by brain activity.
1. Georgopoulos, A. P., Kalaska, J. F., Caminiti, R. & Massey, J. T. J. Neurosci. 2, 1527–1537 (1982).
2. Wessberg, J. et al. Nature 408, 361–365 (2000).
3. Wolpaw, J. et al. IEEE Trans. Rehabil. Eng. 8, 164–173 (2000).
4. Humphrey, D., Schmidt, E. & Thompson, W. Science 170, 758–762 (1970).
5. Georgopoulos, A. Annu. Rev. Neurosci. 14, 361–377 (1991).
6. Isaacs, R., Weber, D. & Schwartz, A. IEEE Trans. Rehabil. Eng. 8, 196–198 (2000).
7. Lin, S., Si, J. & Schwartz, A. Neural Comput. 9, 607–621 (1997).
8. Kennedy, P. J. Neurosci. Meth. 29, 181–193 (1989).
9. Fetz, E. & Baker, M. A. J. Neurophysiol. 36, 179–204 (1973).
10. Chapin, J., Moxon, K., Markowitz, R. & Nicolelis, M. Nature Neurosci. 2, 664–670 (1999).