
Brain-Hacking Software Can Decode Your Knowledge and Dreams

New technologies are extracting detailed data from our brains that reveal what we know, have seen or have dreamed. Some of the signals could even fly a plane

A pilot sits in the cockpit of a Diamond DA42 light aircraft, mentally working through the steps needed to safely land on the runway ahead. Moments later he touches down without having laid a hand on the controls or stepped on the aircraft's pedals. He is no ordinary pilot—in fact, he is not a pilot at all, and he has just landed his aircraft using brain waves.

In a series of experiments earlier this year, seven people with varying degrees of cockpit experience—including none—successfully flew and landed a simulated DA42. Instead of developing the normal hand and foot coordination through hours of training and cockpit experience, these pilots relied on an electrode-laden cap that collected their neural impulses and on flight-control algorithms that converted those impulses into commands for a virtual twin-engine aircraft.

Experiments such as this one, which was led by aerospace engineer Tim Fricke of the Technical University of Munich in Germany, are pushing the limits of electroencephalography and other scanning technologies' ability to detect, decode and harness the brain's neural impulses. Where once EEG simply measured the brain's electrical signals as detected on the surface of the skull, the latest EEG headsets and computer algorithms can translate neuronal signals into specific actions that control a variety of mechanical devices, including wheelchairs, prostheses and now flight simulators. Such devices promise to take human performance to new levels.


Some of the EEG-connected pilots, for example, landed their simulated aircraft in dense, albeit digital, fog. Test subjects stayed on course with enough accuracy to partly fulfill the requirements for a pilot's license.

Other brain-decoding projects rely on a different technology, magnetic resonance imaging, which researchers have used to reconstruct images of faces a person has seen and to probe the content of sleepers' dreams. Although EEG excels at detecting the timing of brain signals down to milliseconds, it falls short in discerning their origins. Functional MRI systems, on the other hand, pinpoint the location of neural activity by registering the relatively slow changes in brain blood flow. By combining these systems, as scientists are now beginning to do, we can better understand what is happening in our brains when we process images, store memories, or make a snap decision such as swinging a bat at a fastball.

Brainflight

In a typical EEG setup, scientists position dozens to hundreds of small electrodes at various locations on a person's skull. These electrodes pick up the electrical signals that pulse through our heads at different frequencies as our brain cells communicate. As with all brain-scanning technologies, EEG relies on software to interpret brain waves. Not all brains behave alike, so the software must process reams of neural data before it can identify patterns in brain activity, a process known as machine learning. Once the system is up to speed on a particular person's neural patterns, it can convert them into useful control commands—say, turning a virtual aircraft left or right or moving a computer mouse. As these algorithms' ability to interpret neural signals improves, scientists can wield brain waves in new ways.
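For readers who want to see what such a decoding pipeline looks like in code, here is a minimal sketch in Python using the scikit-learn library. The feature counts, window length and command labels are illustrative assumptions, not details of the Brainflight system.

```python
# Minimal sketch of a per-user EEG-to-command pipeline (illustrative only;
# the feature choice, classifier and command mapping are assumptions, not
# the Brainflight team's actual implementation).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Pretend training data: 300 one-second EEG windows, 32 channels x 8 band-power
# features each, recorded while the user imagined "left", "right" or "hold".
X_train = rng.normal(size=(300, 32 * 8))   # one feature vector per window
y_train = rng.integers(0, 3, size=300)     # 0 = left, 1 = right, 2 = hold

# "Machine learning" step: fit the decoder to this user's own neural patterns.
decoder = LinearDiscriminantAnalysis()
print("cross-validated accuracy:",
      cross_val_score(decoder, X_train, y_train, cv=5).mean())
decoder.fit(X_train, y_train)

# Online step: convert each new window of brain activity into a control command.
COMMANDS = {0: "bank left", 1: "bank right", 2: "hold heading"}
new_window = rng.normal(size=(1, 32 * 8))  # features from the latest second of EEG
print("command:", COMMANDS[int(decoder.predict(new_window)[0])])
```

The key point the sketch makes is the one in the text: the decoder must first be fit to an individual's own recordings before its predictions can be trusted as commands.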

The German researchers began their simulated flights of fancy in January 2014. The experiments, published this past fall, were part of a European project called Brainflight, the goal of which was to determine whether neuroscience and neuroengineering could deliver an aircraft flown solely by signals emitted by the human brain. This extreme goal served a few more practical ones: simplifying cockpit instrumentation by using a brain-computer interface (BCI) to replace dozens of buttons and levers; reducing the time and cost it takes to train pilots; and opening up piloting as a possible career path for people with physical disabilities.

During virtual flights, the simulator periodically instructed test pilots to mentally direct their aircraft toward a given heading, much the way an actual pilot must repeatedly make course corrections en route to a destination. The EEG setup determined within a fraction of a second what each pilot wanted to do and sent those commands to the flight-control system. “We hope that by using BCIs, flying can one day become like hands-free bicycling, where you still have to concentrate on where you want to go [and] on the surrounding traffic but not on controlling your vehicle,” Fricke adds.

Such flight simulations have advanced the development of BCIs, which enable the brain to communicate with an external device using thought alone. Most headsets communicate only intermittently with their host computers. This BCI, however, continuously sends information about each pilot's neural activity to the simulator, giving navigators greater control over their virtual flights. With improved software, brain-based command of any complex machinery—including automobiles—becomes a possibility.

Unlocked

Whereas neuro-airline pilots showcase the ability of EEG to extract simple motor commands, other EEG setups can ferret out surprisingly specific information in a person's mind. In an experiment published in 2012 computer scientist Ivan Martinovic, then at the University of California, Berkeley, and his colleagues asked 30 healthy individuals to don EEG headsets and watch a screen on which the researchers flashed images of ATMs, debit cards, maps, people, and the numbers 0 to 9 in random order. The researchers then studied the EEG data for peaks in neural activity. Such upticks suggested that the person might be familiar with a particular digit or image. From those peaks, software tried to extract personal information, such as a person's ATM PIN, month of birth, bank location and the type of debit card he or she used. The accuracy of these predictions varied—the correct answer was found on the first guess 20 to 30 percent of the time in the case of the PIN, debit-card type and bank location. The software guessed the right birth month for nearly 60 percent of the participants.
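A rough sketch of the underlying trick: flash each candidate digit, average the EEG response that follows it, and rank the candidates by the size of the evoked peak a few hundred milliseconds later. The numbers, channel and time window below are invented for illustration and are not taken from the Martinovic study.

```python
# Sketch of guessing a "familiar" digit from event-related EEG peaks
# (synthetic data; the window, channel and amplitudes are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                    # sampling rate in Hz
digits = list(range(10))

# Average evoked response (one channel) for each flashed digit, 0-800 ms post-stimulus.
epochs = {d: rng.normal(0.0, 1.0, size=int(0.8 * fs)) for d in digits}
epochs[7] += 3.0 * np.exp(-((np.arange(int(0.8 * fs)) / fs - 0.3) ** 2) / 0.002)
# ^ pretend the participant's PIN starts with 7, producing a larger peak near 300 ms.

def peak_amplitude(signal, fs, window=(0.25, 0.5)):
    """Largest value in the post-stimulus window where familiarity peaks tend to appear."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return signal[lo:hi].max()

ranked = sorted(digits, key=lambda d: peak_amplitude(epochs[d], fs), reverse=True)
print("digits ranked by evoked peak:", ranked)   # best guess first
```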

The point of these experiments was to study possible nefarious BCI use, yet the work also highlights the technology's potential for extracting information from people unable to communicate in other ways. Medical researchers already use BCIs to help people who are locked in—unable to move or speak because of complete body paralysis. In the past, this technology has been able to glean simple yes-or-no answers from these individuals. New EEG algorithms might soon allow them to relay simple requests, needs or other information, says study author and cognitive neuroscientist Tomas Ros of the University of Geneva.

Although the algorithm used in that study requires the consent of the person whose brain is being tapped, coarser types of information might be extracted without a user's awareness. In a study published in 2013 security analyst Mario Frank of the University of California, Berkeley, and his colleagues flashed images of faces in front of participants so briefly that the viewers could not consciously process them. The software could nonetheless determine whether an individual recognized the person in the image 66 percent of the time.

Although such findings might raise the specter of surreptitiously spying on people's inner lives, they also could be parlayed into a useful monitoring tool for individuals suffering from cognitive or emotional disorders. A mobile EEG headset might, for instance, detect confusion in a person with Alzheimer's disease and send a signal to a head-mounted display resembling Google Glass. The display would respond with images or information related to whatever the user is viewing. For a less impaired individual, Ros suggests, such a device might detect a tip-of-the-tongue moment and offer up the missing information.

Facial Reconstruction

The signals that EEG electrodes detect represent the brain's internal workings only roughly. For one thing, the skull blocks some of the electrical energy from reaching the scalp electrodes. Further, the signals are too diffuse to trace to their origin. To locate brain activity, scientists rely on fMRI. When neurons are active, they need oxygen, which they pull from the blood. The result is a local change in blood oxygen levels that an fMRI machine can detect.
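That sluggish blood response is also why fMRI is slow. A minimal sketch of the standard modeling idea, in which brief neural events are convolved with a canonical hemodynamic response function; the double-gamma shape and its parameters below are textbook approximations, not values from any particular study.

```python
# Sketch of why fMRI lags behind neural activity: the measurable BOLD signal is,
# roughly, neural activity convolved with a hemodynamic response that peaks
# several seconds after each event.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                   # seconds per sample
t = np.arange(0, 30, dt)

# Canonical double-gamma HRF (typical textbook parameters).
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

# Neural events: brief bursts of activity at 2 s and 12 s.
neural = np.zeros_like(t)
neural[int(2 / dt)] = 1.0
neural[int(12 / dt)] = 1.0

bold = np.convolve(neural, hrf)[: len(t)]  # predicted blood-oxygen signal
print("events at 2 s and 12 s; predicted BOLD signal first peaks near",
      round(t[np.argmax(bold)], 1), "s")
```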

Scientists have turned to fMRI as a tool for investigating how the brain processes language, learns and remembers, among other functions. Yet the technology is beginning to extract increasingly detailed forms of data from the brain. In one of the latest advances, scientists used brain activity recorded from people undergoing fMRI scans to, for the first time, accurately reconstruct pictures of faces a person had seen.

Earlier this year cognitive neuroscientist Marvin Chun of Yale University and his colleagues showed six individuals 300 distinct pictures of faces while they captured the viewers' brain activity. The researchers' machine-learning algorithm correlated both the faces as a whole and their individual features with patterns of blood oxygen levels. They then showed these participants 30 new faces. The software matched the brain activity elicited by the new visages with the catalogue of neural responses it had created during the original test. Using just the neural activity, the software re-created the faces that the individual had seen. The result was “strikingly accurate neural reconstructions” of the new faces, the researchers wrote in July in the journal NeuroImage.
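The general recipe can be sketched in a few lines: compress the training faces into a small set of components, learn a mapping from brain-activity patterns to those components, then run the mapping on a new activity pattern and invert the compression. Everything below is synthetic stand-in data that mirrors that idea, not the study's actual model or measurements.

```python
# Sketch of the reconstruction idea: learn a mapping from brain-activity patterns
# to low-dimensional face components, then rebuild a face from predicted components.
# (All data here are random stand-ins.)
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

faces = rng.normal(size=(300, 64 * 64))        # 300 "face images" (flattened pixels)
pca = PCA(n_components=50).fit(faces)          # eigenface-style components
face_components = pca.transform(faces)

# Simulated voxel responses: a noisy linear reflection of the face components.
mixing = rng.normal(size=(50, 2000))
voxels = face_components @ mixing + rng.normal(scale=5.0, size=(300, 2000))

# Train: voxels -> face components. Then test on a "new" face the model never saw.
decoder = Ridge(alpha=10.0).fit(voxels[:-1], face_components[:-1])
predicted_components = decoder.predict(voxels[-1:])
reconstruction = pca.inverse_transform(predicted_components)   # back to pixel space
print("reconstruction shape:", reconstruction.shape)           # (1, 4096) image vector
```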

Instead of relying on activity in the brain's occipital cortex, which plays a central role in processing images, these reconstructions were based on more distributed patterns of neural activity, much of it in high-level brain areas that identify and characterize objects by their general properties. The scientists were hunting for more abstract representations of the imagery rather than visual cues such as contour and shading. The researchers next want to investigate how memory, emotion and social judgment, the province of other brain regions, interact with vision to better understand how we perceive faces and objects.

Scientists have also used fMRI and pattern-recognition software to decode some of the content of people's dreams. In work published in 2013 neuroscientist Yukiyasu Kamitani of Japan's ATR Computational Neuroscience Laboratories and his colleagues asked three men to view objects in different categories—such as a person, word or book—while scanning their brains using fMRI. The men, who also wore EEG headsets, were then asked to sleep in the scanner for about three and a half hours at a stretch over seven to 10 days. The researchers awakened the volunteers up to 10 times an hour and asked them what they had seen in their dreams. The software detected a close match between brain-activity patterns collected during dreaming and those assigned to an appropriate object category at least 60 percent of the time.
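Conceptually, the dream decoder is a classifier trained while the subject is awake and then applied to brain activity recorded just before each awakening. A toy sketch with synthetic data follows; the real study used many more categories and voxels selected from each subject's visual areas.

```python
# Sketch of dream decoding as a classification problem: fit a decoder on brain
# patterns recorded while awake and viewing labeled categories, then apply it to
# a pattern recorded just before an awakening. (Synthetic, illustrative data.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
categories = ["person", "word", "book"]

# Waking data: 90 scans of 500 voxels, each labeled with the viewed category.
centers = rng.normal(size=(3, 500))
labels = np.repeat(np.arange(3), 30)
awake = centers[labels] + rng.normal(scale=1.5, size=(90, 500))

decoder = LogisticRegression(max_iter=1000).fit(awake, labels)

# Pattern captured just before waking the sleeper; here it happens to resemble "book".
pre_awakening = centers[2] + rng.normal(scale=1.5, size=500)
print("decoded dream content:", categories[int(decoder.predict([pre_awakening])[0])])
```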

Training the software to recognize a greater variety of categories—including actions and emotions—might reveal even more nuanced information about people's dreams. These findings could then provide clues to how sleeping and dreaming affect memory and mood.

The computer technology used to deconstruct images in detail, whether in dreams or while awake, has very broad applications. “Advancing our ability to decode brain activity and link it to behavior is the fundamental goal of cognitive neuroscience,” Chun says, “and it is the foundation for all kinds of clinical—diagnosis, prediction and treatment—and practical applications, including brain-machine interfaces.”

NeuroScout

As powerful as it is, fMRI has limitations, too. Multimillion-dollar MRI machines weigh several tons and require subjects to lie still in a narrow porthole while the noisy magnets do their work—hardly conducive to everyday situations. The technique also picks up neural activity relatively slowly, over seconds to minutes. As a result, fMRI cannot reveal the order in which different brain regions spring to life in response to stimuli. EEG, in contrast, can detect changes within milliseconds, closer to the speed of neural changes themselves.

Because the technologies are complementary, combining fMRI with EEG holds enormous promise for revealing clues to various cognitive and emotional processes. To decode how our brains reorient our attention, for example, these scanning technologies might compare which brain areas light up at a moment of alertness with the pattern of activation when our attention wanders. “We might be able to create systems where, if we track those states, we know when to give somebody important information,” says biomedical engineer Paul Sajda of Columbia University. Such a system might warn inattentive drivers when they approach an intersection, for example.

But one of the first applications for this combination strategy is baseball. Sajda and his colleagues have spent the past few years combining EEG and fMRI in real time to study how quickly a baseball player can identify a type of pitch and decide whether to swing. In 2012 Columbia postdoctoral fellow Jason Sherwin and graduate student Jordan Muraskin developed a computer program that simulates pitches from a catcher's perspective. They had participants lie in an fMRI scanner wearing a 43-electrode EEG headset while they viewed a simulated oncoming baseball—a dot that expands as it moves to create the impression of a ball approaching at upward of 80 miles per hour. Initially they tested nonplayers, asking them to tap a button if the pitch they saw matched the one they were told to look for: either a fastball or a slider, which curves.

Later, they tried the experiment with players from Columbia's baseball team, who showed greater activation than nonplayers in certain brain regions that are indicative of expertise, Muraskin says. The precise differences in reaction times and location of brain activity between players and nonplayers—something the researchers will reveal in a forthcoming paper—could help poor performers by mapping the brain regions that are underactive compared with those in experts. Stimulating those regions through specific exercises could help laggards improve their pitch recognition and reaction time.

The EEG data could also quantify changes in reaction time so that a player can track how well different training methods are working. The researchers have founded a start-up, NeuroScout, to market the results of their analyses to sports teams.

Universal Decoder

So far algorithms that can interpret brain signals must be trained to recognize each person's idiosyncrasies, a process that can take minutes to hours, depending on the task. After all, an individual's brain and patterns of thought—even in response to the same stimuli—are unique. “If you mapped one person's brain really well, you could build a really good brain decoding device for that person,” says psychologist Jack Gallant of U.C. Berkeley. “If you try it on someone else, that person's brain is different,” so it will not work well, if at all.

Researchers would like to create more versatile algorithms—ones that require very little, if any, tweaking between uses and users. Such universal brain decoders would drastically reduce the amount of training time in experiments, speeding efforts to understand neural activity across large populations under a variety of circumstances. More robust software would also very likely lead to more reliable EEG-controlled devices.
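A toy simulation illustrates why today's decoders do not transfer between people: if each brain encodes the same two mental states with its own idiosyncratic pattern, a classifier fit to one person hovers near chance on another. The data below are synthetic and purely illustrative.

```python
# Sketch of the "no universal decoder" problem: a classifier fit on one person's
# patterns degrades on another person whose brain encodes the same task differently.
import numpy as np
from sklearn.svm import LinearSVC

def subject_data(seed, n=200, dim=100):
    """Same two mental states, but a subject-specific mapping into 'voxel' space."""
    r = np.random.default_rng(seed)
    mapping = r.normal(size=(2, dim))        # this subject's idiosyncratic encoding
    y = r.integers(0, 2, size=n)
    X = mapping[y] + r.normal(scale=1.0, size=(n, dim))
    return X, y

X_a, y_a = subject_data(seed=10)
X_b, y_b = subject_data(seed=20)

decoder = LinearSVC().fit(X_a[:150], y_a[:150])
print("same-subject accuracy:", decoder.score(X_a[150:], y_a[150:]))
print("new-subject accuracy: ", decoder.score(X_b, y_b))   # near chance (~0.5)
```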

Most neuroscientists agree that an effective universal decoder is years away. Companies such as Emotiv and NeuroSky make consumer BCI gaming headsets and biosensors with a handful of electrodes that can be shared among users and do not take long to calibrate. They detect only the strongest signals and, as a result, have limited capabilities. Another example of a basic universal decoder is on display through January 4, 2015, at New York City's Discovery Times Square, where Marvel's Avengers S.T.A.T.I.O.N. exhibition gives fans the ability to experience an interactive head-up display similar to Iron Man's. A three-electrode wireless EEG sensor activates the display when a visitor presses his or her forehead against it. The sensor also helps users navigate the display to watch video clips, play games and see their brain waves in real time, although eye-tracking sensors and software also assist with the navigation. The system was built with redundancy to ensure that it worked reliably for the large crowds visiting the exhibit.

Iron Man's complicated hands-free flight-control system might be out of reach for now, but Fricke and his colleagues are making progress getting their system off the ground. The German researchers say they are cutting down the amount of training their simulator software requires—a big step toward unlocking the black box between our ears and extending its power beyond the body.

FURTHER READING

On the Feasibility of Side-Channel Attacks with Brain-Computer Interfaces. Ivan Martinovic et al. in Proceedings of the 21st USENIX Security Symposium, pages 143–158; August 2012.

Neural Decoding of Visual Imagery during Sleep. T. Horikawa et al. in Science, Vol. 340, pages 639–642; May 3, 2013.

Mind-Reading Technology Speeds Ahead. Kerri Smith and Nature. Published online October 23, 2013. www.scientificamerican.com/article/mind-reading-technology-speeds-ahead

Neural Portraits of Perception: Reconstructing Face Images from Evoked Brain Activity. Alan S. Cowen, Marvin M. Chun and Brice A. Kuhl in NeuroImage, Vol. 94, pages 12–22; July 1, 2014.

From Our Archives

Thinking Out Loud. Nicola Neumann and Niels Birbaumer; December 2004.

Out for Blood. Elizabeth M. C. Hillman; July/August 2014.

Larry Greenemeier is the associate editor of technology for Scientific American, covering a variety of tech-related topics, including biotech, computers, military tech, nanotech and robots.

This article was originally published with the title “Decoding the Brain” in SA Mind Vol. 25 No. 6, p. 40
doi:10.1038/scientificamericanmind1114-40