Computers might soon be able to deduce your thoughts just by scanning your brain. Credit: Punchstock

A computer model has been developed that can predict what word you are thinking of. The model may help to resolve questions about how the brain processes words and language, and might even lead to techniques for decoding people’s thoughts.

Researchers led by Tom Mitchell of Carnegie Mellon University in Pittsburgh, Pennsylvania, 'trained' a computer model to recognize the patterns of brain activity associated with 60 images, each of which represented a different noun, such as 'celery' or 'aeroplane'.

The team started with the assumption that the brain processes words in terms of how they relate to movement and sensory information. Words such as 'hammer', for example, are known to cause movement-related areas of the brain to light up; on the other hand, the word 'castle' triggers activity in regions that process spatial information.

Mitchell and his colleagues also knew that different nouns are associated more often with some verbs than with others – the verb 'eat', for example, is more likely to be found in conjunction with 'celery' than with 'aeroplane'.
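Such noun–verb associations can be estimated simply by counting how often a noun appears near a verb in running text. The sketch below is a toy illustration of that idea, not the paper's actual corpus pipeline; the function name and window size are invented for the example.

```python
from collections import Counter

def cooccurrence(tokens, nouns, verbs, window=3):
    """Count noun-verb co-occurrences within +/- `window` tokens.

    A simple stand-in for the corpus statistics described above:
    for each noun occurrence, tally every verb found nearby."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in nouns:
            lo, hi = max(0, i - window), i + window + 1
            for other in tokens[lo:hi]:
                if other in verbs:
                    counts[(tok, other)] += 1
    return counts

tokens = "you eat celery and you fly in an aeroplane".split()
counts = cooccurrence(tokens, {"celery", "aeroplane"}, {"eat", "fly"})
```

Run over a trillion-word corpus rather than one sentence, counts like these yield, for each noun, a profile of how strongly it associates with each verb.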

The researchers designed the model to use these semantic links to work out how the brain would react to particular nouns. They fed 25 such verbs into the model.

Active association

The team then used functional magnetic resonance imaging (fMRI) to scan the brains of nine volunteers as they looked at images representing the nouns. The researchers then fed 58 of the 60 nouns into the model to train it. For each noun, the model sorted through a trillion-word body of text to find how that noun related to the 25 verbs, and how those relations corresponded to the observed activation pattern.
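The training step can be sketched as a linear model: each noun is represented by its normalized co-occurrence with the 25 verbs, and each voxel's activation is modelled as a weighted sum of those features. This is a minimal sketch with random stand-in data and toy sizes, assuming a least-squares fit; it is not the paper's exact estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nouns, n_verbs, n_voxels = 58, 25, 500  # 58 training nouns, 25 verbs; voxel count is a toy size

# Semantic features: each noun's normalized co-occurrence with the 25 verbs,
# as would be estimated from a large text corpus (random stand-ins here).
features = rng.random((n_nouns, n_verbs))
features /= features.sum(axis=1, keepdims=True)

# Observed fMRI activation pattern for each training noun (toy data).
activations = rng.standard_normal((n_nouns, n_voxels))

# For every voxel, learn a linear weighting of the 25 verb features
# (ordinary least squares, one weight vector per voxel).
weights, *_ = np.linalg.lstsq(features, activations, rcond=None)

def predict(noun_features):
    """Predicted whole-brain pattern for a noun = its features times the weights."""
    return noun_features @ weights
```

Once the weights are fitted, the model can predict an activation pattern for any noun whose verb co-occurrence profile is known, including nouns it was never trained on.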

After training, the models were put to the test. Their task was to predict the pattern of activity for the two missing words from the group of 60, and then to deduce which word was which. On average, the models came up with the right answer more than three-quarters of the time.
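The two-word test reduces to a matching problem: given predicted patterns for the two held-out nouns and the two observed scans, pick the pairing that fits better. The sketch below scores pairings with cosine similarity; the paper uses a correlation-based match, so treat the similarity choice as an assumption.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two activation patterns."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match_two(pred1, pred2, obs_a, obs_b):
    """Decide which observed pattern belongs to which held-out noun.

    Returns True if the assignment (pred1 -> obs_a, pred2 -> obs_b)
    scores higher than the swapped assignment."""
    straight = cosine(pred1, obs_a) + cosine(pred2, obs_b)
    swapped = cosine(pred1, obs_b) + cosine(pred2, obs_a)
    return straight >= swapped
```

Repeating this for every held-out pair of the 60 nouns gives the accuracy figure quoted above: the correct assignment wins more than three-quarters of the time.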

The team then went one step further, this time training the models on 59 of the 60 test words, and then showing them a new brain activity pattern and offering them a choice of 1,001 words to match it. The models performed well above chance when they were made to rank the 1,001 words according to how well they matched the pattern. The results are reported in Science.
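The 1,001-word task is a ranking problem: predict a pattern for every candidate word and sort the candidates by how well each prediction matches the observed scan. A minimal sketch, again assuming cosine similarity as the match score:

```python
import numpy as np

def rank_candidates(observed, predicted_patterns):
    """Rank candidate words by how well their predicted activation
    pattern matches the observed one (cosine similarity).

    `observed` is one scan (n_voxels,); `predicted_patterns` holds one
    predicted pattern per candidate (n_candidates, n_voxels).
    Returns candidate indices, best match first."""
    obs = observed / np.linalg.norm(observed)
    preds = predicted_patterns / np.linalg.norm(
        predicted_patterns, axis=1, keepdims=True)
    scores = preds @ obs
    return np.argsort(-scores)
```

"Well above chance" then means the word the subject was actually thinking of lands near the top of this ranking far more often than a random ordering would place it.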

The idea is similar to another ‘brain-reading’ technique, reported in Nature earlier this year, that can predict what picture a person is seeing from a selection of more than 100. The new model is different in that it has to look at the meanings of the words, rather than just lower-level visual features of a picture.

Mind readers

It shouldn’t be too difficult to get the model to choose accurately between a larger number of words, says John-Dylan Haynes, who also works on models of brain decoding at the Bernstein Center for Computational Neuroscience Berlin in Germany. “This study shows a method that allows one to read out a large number of different thoughts from brain activity, even with only a few calibration measurements,” he says.

An average English speaker knows 50,000 words, Mitchell says, so the model could in theory be used to select any word a subject chooses to think of.

Even whole sentences might not be too distant a prospect for the model, says Mitchell. “Now that we can see individual words, it gives the scaffolding for starting to see what the brain does with multiple words as it assembles them,” he says. This gives researchers the chance to understand the “mental chemistry” that the brain does when it processes such phrases, Mitchell suggests.

Models such as this one could also be useful in diagnosing disorders of language or helping students pick up a foreign language. In semantic dementia, for example, people lose the ability to remember the meanings of things (shown a picture of a chihuahua, they may recall only 'dog'), but little is known about what exactly goes wrong in the brain. “We could look at what the neural encoding is for this,” says Mitchell.