Shut your eyes. Now, touch your nose. Chances are you can do this without even thinking about it. For this you can thank your sense of proprioception, which is so much a part of us that most of us are unaware that it exists. This 'sixth sense' lets our brain know the relative positions in space of different parts of our bodies. Without it, our brains are lost.

Ian Waterman knows how that loss feels. More than 30 years ago he lost this sense almost overnight, when a flu-like virus damaged the sensory nerves that carry it. His muscles worked perfectly, but he could not control them. “I lost ownership of my body,” he says. He could no longer stand, or even sit up by himself, and doctors said he would never be able to do so again. Waterman's condition arose from a disease called acute sensory neuropathy and is so rare that only a dozen or so similar cases are known to the medical literature.

Some neuroscientists are taking a cue from Waterman's experiences and starting to investigate whether robotic devices controlled by thought alone could be integrated with an artificial sense of proprioception. If so, they reason, these 'neuroprosthetics' could be made to work in a much more life-like way. What's more, they hope to gain a deeper understanding of how proprioception works, and how they might be able to manipulate it.

Some months after the virus struck, Waterman, then only 19 years old, was lying in bed applying all his mental energy to the fight for control of his body. He tensed his stomach muscles, lifted his head and stared down at the limbs that seemed no longer to belong to him. He willed himself to sit up.

Concentrated effort

Later, he realized that it was visual feedback that had allowed his body, unexpectedly, to obey the mental instruction. “But the euphoria of the moment made me lose concentration and I nearly fell out of bed,” he remembers.

From then on he learnt to compensate for his lost proprioception with other forms of sensory feedback that tell him where his limbs are, and so let him control them. It requires constant, intense concentration, but now, despite his profound impairments, he can manage fairly normal movements. Most of the input that he relies on is visual — standing up with his eyes closed is still nearly impossible — but he can also tune in to the tug of a jacket sleeve to work out the direction his arm is moving, or to the cool air on his armpit when he raises his arm in a loose shirt. Neuroprosthetic engineers are realizing that many sensory feedback signals could be similarly harnessed.

A neuroprosthetic is more accurately called a brain–machine interface. Hundreds of electrodes, fixed into tiny arrays, are placed in or on the surface of the cortex, the thin, folded outer layer of the brain that controls complex functions including the organization of movement. The electrodes record the electrical signals from the cortex's neurons, and a computer algorithm translates these into specific actions — the movement of a cursor on a computer screen, for example, or of an artificial limb.
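In outline, the decoding step is a mapping from recorded firing rates to a movement command. The Python sketch below is purely illustrative: the array sizes, the simulated 'firing rates' and the simple least-squares fit are assumptions for demonstration, not the algorithms used in the studies described here.

```python
# A minimal decoding sketch. Everything here is an assumption for illustration:
# 96 'electrodes', Poisson-distributed firing rates, and a plain least-squares
# fit standing in for the far more sophisticated decoders used in practice.
import numpy as np

rng = np.random.default_rng(0)

# Calibration data: firing rates from 96 electrodes over 500 time bins,
# recorded while the intended cursor velocity (x, y) is known.
firing_rates = rng.poisson(lam=5.0, size=(500, 96)).astype(float)
intended_velocity = rng.normal(size=(500, 2))

# Fit a linear map from firing rates to cursor velocity.
weights, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# At run time, each new bin of neural activity is decoded into a velocity
# command that nudges the cursor.
cursor = np.zeros(2)
new_rates = rng.poisson(lam=5.0, size=96).astype(float)
velocity = new_rates @ weights
cursor += velocity
print("decoded velocity for this time bin:", velocity)
```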

In this issue of Nature, two papers1,2 demonstrate dramatic progress in the area. A team consisting of John Donoghue's group, based at Brown University in Rhode Island, and Cyberkinetics Neurotechnology Systems in Foxborough, Massachusetts, implanted 96 electrodes into Matt Nagle's motor cortex, the brain region that processes information about movement. Nagle is a quadriplegic patient and the first human volunteer to reach this advanced stage of testing (see picture, above). Hooked up to computers and attended by a team of technicians, Nagle could move a cursor to issue different instructions — for example, to open e-mails or turn down the television.

Mind control: Matt Nagle's neuroprosthetic lets him move a cursor using thought alone. Credit: R. FRIEDMAN

Krishna Shenoy's group at Stanford University, California, has done similar work in a non-paralysed monkey's premotor cortex, the area of brain where the animal's movement-related 'intentions' are generated. Using a new algorithm, the team's brain–computer interface produced results four times faster and more accurate than previously seen.

Closing the loop

The two papers show how closely neuroprosthetics are approaching medical reality. But although moving a computer cursor by thought alone may be dazzling, scientists have long-term ambitions to make neuroprosthetics reproduce more complex functions. Could patients direct a robotic arm to pick up a coffee cup, for example? “For this, the devices need to deliver feedback to the brain — we need to close the loop,” says Daofen Chen, director of the neural prosthesis programme at the US National Institute of Neurological Disorders and Stroke in Bethesda, Maryland.

The brain's sensory cortex receives signals — proprioception, touch, pain and so on — from the body (see graphic), and in response constantly modifies its movement-related commands. Today's output-only neuroprosthetics are open-loop systems, with more limitations than even Ian Waterman faces: he can at least use visual, temperature and tactile feedback. “Brain–machine interfaces will have to become interactive,” says Chen. “But now that we would like to exploit it, we realize we know next to nothing about sensory input.”
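The distinction Chen draws can be put schematically: an output-only device decodes intent and stops there, whereas an interactive one also returns an encoded sensory signal. The outline below is a conceptual sketch with placeholder function names, not a description of any existing system.

```python
# Schematic contrast between an output-only device and an interactive one.
# The function arguments (decode, actuate, sense, encode, stimulate) are
# placeholders, not real APIs.
def open_loop_step(neural_activity, decode, actuate):
    # Output only: decode the intended movement and drive the device;
    # no artificial sensory signal returns to the brain.
    command = decode(neural_activity)
    actuate(command)

def closed_loop_step(neural_activity, decode, actuate, sense, encode, stimulate):
    # Interactive: after driving the device, read its sensors, encode the
    # readings as a stimulation pattern and deliver it to the sensory system.
    command = decode(neural_activity)
    device_state = actuate(command)
    feedback_pattern = encode(sense(device_state))
    stimulate(feedback_pattern)
```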

A handful of researchers is now trying to work out where and how to stimulate the sensory nervous system to reproduce the sorts of information that a limb might send to the sensory cortex. It is early days: none of their work has yet been published. And because so little is known about the system, there is no obvious place to start.

Theoretically, the 'where' could be the nerves running from the limb into the spinal cord, or the spinal cord itself (see graphic). Or it could be higher — in the brain's thalamus, where incoming sensory signals are integrated and redirected to the appropriate part of the cortex, or the sensory cortex itself.

Credit: ADAPTED FROM M. J. T. FITZGERALD & J. FOLAN-CURRAN, CLINICAL NEUROANATOMY AND RELATED NEUROSCIENCE

The 'how' refers to the design of the electrical signals to be fed into the cortex. These could mimic the sensory system's natural nerve impulses, based on parameters such as frequency and amplitude. Or they could involve creating artificial signals that the sensory cortex is able to distinguish, in the hope that the brain can be trained to associate particular signals with particular parameters.

Once scientists have worked out how best to encode the signals, the idea would be to place sensors on artificial limbs to generate signals representing proprioceptive information such as joint angle, vibration and grip force — as well as other sensory information that Waterman has found helpful, such as temperature.
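As a rough illustration of what such an encoding might look like, the sketch below maps two assumed sensor readings, joint angle and grip force, onto the frequency and amplitude of a stimulation pulse train. The ranges and the linear mapping are invented for the example; no published encoding scheme is implied.

```python
# Illustrative only: joint angle sets pulse frequency, grip force sets pulse
# amplitude. The sensor ranges and the linear mapping are invented for the
# example; no published encoding scheme is implied.
import numpy as np

def encode_proprioception(joint_angle_deg, grip_force_n,
                          freq_range=(20.0, 300.0),   # pulses per second
                          amp_range=(10.0, 80.0)):    # microamps
    """Map two sensor readings onto the frequency and amplitude of a pulse train."""
    angle_frac = np.clip(joint_angle_deg / 150.0, 0.0, 1.0)  # assume a 0-150 degree joint
    force_frac = np.clip(grip_force_n / 50.0, 0.0, 1.0)      # assume a 0-50 newton grip
    frequency = freq_range[0] + angle_frac * (freq_range[1] - freq_range[0])
    amplitude = amp_range[0] + force_frac * (amp_range[1] - amp_range[0])
    return frequency, amplitude

def pulse_times(frequency_hz, duration_s=0.1):
    """Times of the individual pulses in a short stimulation train."""
    return np.arange(0.0, duration_s, 1.0 / frequency_hz)

freq, amp = encode_proprioception(joint_angle_deg=90.0, grip_force_n=10.0)
print(f"{freq:.0f} Hz train at {amp:.0f} microamps; first pulses at {pulse_times(freq)[:3]} s")
```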

Trained brain

Most in the field have a hunch that the signal will not have to mimic neural activity perfectly. The brain can, after all, cope with the very unphysiological signals generated by the most successful brain–machine interface to date: the cochlear implant. Already, some 110,000 profoundly deaf people have received the device, according to the US National Institutes of Health. The implant sits in the inner ear and interfaces with the auditory nerve. Its signals are totally artificial, and, at first, recipients can make nothing of the noise. But the auditory cortex, it turns out, is highly adaptable. With appropriate training, it can quickly learn to associate particular codes with particular sounds, so that implant recipients can learn to follow conversations with ease.

“When the concept of stimulating the auditory nerve emerged in the 1970s, people said it would be impossible to generate the right electrical signal to the brain,” says Shenoy. “But it turned out that you don't have to get it perfect, just close enough for the brain to do its own fine-tuning.” On the other hand, one does not want to burden patients with having to learn too much, says John Chapin, a physiologist at the State University of New York Health Science Center in Brooklyn, and a pioneer in using neural activity to control robots. “Ideally we should aim to mimic the natural signal as closely as possible,” he says (see 'Voyagers in the cortex').

For now, whatever works will be good. “We don't know if it will turn out to be possible to incorporate sensory information but we are going to try,” says neuroscientist Andrew Schwartz of the University of Pittsburgh, an expert in brain–computer interface technology for the control of robotic arm movement.

Schwartz is working with Douglas Weber, a bioengineer at Pittsburgh who is developing a model for studying sensory input. This involves using electrodes to stimulate the sensory nerves from the limbs of an anaesthetized cat at the point just before they enter the spinal cord, while simultaneously recording from neurons in the sensory cortex. Weber will then repeat the recording, this time moving the cat's limbs by hand instead of stimulating the nerves electrically. He will then compare the patterns of cortical activity in the two situations, to see whether artificial stimulation can reproduce the patterns evoked by passive movement.
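In its simplest form, the comparison Weber plans comes down to a similarity measure between two patterns of cortical firing. The sketch below uses made-up data and a plain correlation coefficient as one crude stand-in for such a measure.

```python
# Made-up data standing in for two recordings from the same sensory-cortex
# neurons: one during passive limb movement, one during electrical stimulation
# of the nerve. A correlation coefficient is used as one crude similarity measure.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time_bins = 40, 200

passive_movement = rng.poisson(lam=4.0, size=(n_neurons, n_time_bins)).astype(float)
stimulation = passive_movement + rng.normal(scale=2.0, size=(n_neurons, n_time_bins))

similarity = np.corrcoef(passive_movement.ravel(), stimulation.ravel())[0, 1]
print(f"pattern similarity: {similarity:.2f}")  # 1.0 would mean identical patterns
```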

Daily operation: the light traces of a person getting dressed show that even simple tasks require complicated movements. Credit: CONDÉ NAST ARCHIVE/CORBIS

“Not everyone agrees, but my gut feeling is that we will be more successful if we stimulate outside of the central nervous system,” says Weber. “At more central points there will be greater convergence of different inputs and I guess it would be hard to get clean signals.”

Lee Miller, a neurophysiologist at Northwestern University in Chicago, agrees that Weber could be right, but is nevertheless approaching the problem from the top. Although useful for study, stimulating nerves in peripheral parts of the body such as the limbs will not work for a patient whose spinal cord is severed.

Working in monkeys, Miller's group is electrically stimulating the part of the cortex that processes proprioception, and recording neuronal activity in the motor cortex at the same time. Miller hopes this will eventually let him design stimulation patterns that imitate the brain's own processing of proprioceptive signals, much as Weber is designing signals to imitate its processing of movement. Monkeys will be trained to move a 'virtual' arm, created on a screen by algorithms fed both by recordings from the motor cortex and by the simulated proprioceptive feedback.
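A toy version of that loop might look like the sketch below: motor-cortex activity is decoded into movement of a simulated joint, and the joint angle is translated back into a stimulation frequency. Every number and name here is an assumption made for illustration; the real decoders, arm models and stimulation codes are far more elaborate.

```python
# A toy closed loop: decode motor-cortex activity into movement of one simulated
# joint, then translate the joint angle back into a stimulation frequency.
# The decoder weights, arm model and frequency code are all invented here.
import numpy as np

rng = np.random.default_rng(2)
decoder = rng.normal(scale=0.01, size=96)   # stand-in for a fitted decoder

joint_angle = 0.0
for step in range(5):
    motor_activity = rng.poisson(lam=5.0, size=96).astype(float)
    joint_angle += float(motor_activity @ decoder)        # decoded movement command
    joint_angle = float(np.clip(joint_angle, 0.0, 150.0)) # keep within the joint's range

    # Simulated proprioceptive feedback: joint angle mapped to a stimulation frequency.
    feedback_hz = 20.0 + (joint_angle / 150.0) * 280.0
    print(f"step {step}: angle {joint_angle:6.2f} deg -> feedback {feedback_hz:5.1f} Hz")
```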

It's a complex experiment, he admits, which will probably take up to five years to get working optimally. But he draws hope from his group's finding, presented at the 2005 Society for Neuroscience meeting in Washington DC, that the monkeys can recognize and distinguish between high- and low-frequency stimulation.

Chapin's set-up is equally ambitious. He also works on monkeys but his chosen target is the thalamus, the brain's junction box for sensory input. “The higher you go in the brain, the more complex and abstract things become,” he says. “It is hard to know if you are stimulating something precise.” In his experimental system, Chapin electrically stimulates the area of the thalamus that relays touch-related signals. Simultaneously, he records from the areas of the sensory cortex that process tactile information. The monkeys, meanwhile, have one arm strapped down and one free. They have been taught to point with their free hand to an area on their immobilized arm that they 'feel' is being touched. “We have found that we can produce a sort of 'natural response' in the cortex when we stimulate in the thalamus,” he says. The response matches that produced normally when a specific part of the monkey's arm is touched. Chapin plans to extend his investigations to study proprioception in the same way.

The papers on brain–machine interfaces by Donoghue and Shenoy1,2 seem like science fiction becoming reality. The next step — trying to introduce sensory input into brain–machine interfaces — may appear at first glance to be as fanciful as the Six Million Dollar Man. But few neuroscientists seriously doubt its theoretical potential. As experience with the first generation of neuroprosthetics shows, it is a question of understanding how the system works.