Your iBrain: How Technology Changes the Way We Think

How the technologies that have become part of our daily lives are changing the way we think

You’re on a plane packed with other businesspeople, reading your electronic version of the Wall Street Journal on your laptop while downloading files to your BlackBerry and organizing your PowerPoint presentation for your first meeting when you reach New York. You relish the perfect symmetry of your schedule, to-do lists and phone book as you notice a woman in the next row entering little written notes into her leather-bound daily planner. You remember having one of those ... What? Like a zillion years ago? Hey, lady! Wake up and smell the computer age. You’re outside the airport now, waiting impatiently for a cab along with dozens of other people. It’s finally your turn, and as you reach for the taxi door a large man pushes in front of you, practically knocking you over. Your briefcase goes flying, and your laptop and BlackBerry shatter into pieces on the pavement. As you frantically gather up the remnants of your once perfectly scheduled life, the woman with the daily planner gracefully steps into a cab and glides away.

The current explosion of digital technology not only is changing the way we live and communicate but also is rapidly and profoundly altering our brains. Daily exposure to high technology—computers, smart phones, video games, search engines such as Google and Yahoo—stimulates brain cell alteration and neurotransmitter release, gradually strengthening new neural pathways in our brains while weakening old ones. Because of the current technological revolution, our brains are evolving right now—at a speed like never before.

Besides influencing how we think, digital technology is altering how we feel and how we behave. Seven out of 10 American homes are wired for high-speed Internet. We rely on the Internet and digital technology for entertainment, political discussion, and communication with friends and co-workers. As the brain evolves and shifts its focus toward new technological skills, it drifts away from fundamental social skills, such as reading facial expressions during conversation or grasping the emotional context of a subtle gesture. A 2002 Stanford University study found that for every hour we spend on our computers, traditional face-to-face interaction time with other people drops by nearly 30 minutes.


Digital Natives
Today’s young people in their teens and 20s, who have been dubbed “digital natives,” have never known a world without computers, 24-hour TV news, the Internet and cell phones—with their video, music, cameras and text messaging. Many of these natives rarely enter a library, let alone look something up in a traditional encyclopedia; they use Google, Yahoo and other online search engines instead. The neural networks in the brains of these digital natives differ dramatically from those of “digital immigrants,” people—including most baby boomers—who came to the digital/computer age as adults but whose basic brain wiring was laid down during a time when direct social interaction was the norm.

Now we are exposing our brains to technology for extensive periods every day, even at very young ages. A 2007 University of Texas at Austin study of more than 1,000 children found that on a typical day, 75 percent of children watch TV and 32 percent watch videos or DVDs, with a total daily exposure averaging one hour and 20 minutes. Among those children, five- and six-year-olds spend an additional 50 minutes in front of the computer. A 2005 Kaiser Family Foundation study found that young people eight to 18 years of age expose their brains to eight and a half hours of digital and video sensory stimulation a day. The investigators reported that most of the technology exposure is passive, such as watching television and videos (four hours daily) or listening to music (one hour and 45 minutes), whereas other exposure is more active and requires mental participation, such as playing video games (50 minutes daily) or using the computer (one hour).

We know that the brain’s neural circuitry responds every moment to whatever sensory input it gets and that the many hours people spend in front of the computer—including trolling the Internet, exchanging e-mail, video conferencing, instant messaging and e-shopping—expose their brains to constant digital stimulation. Our research team at the University of California, Los Angeles, wanted to look at how much impact this extended computer time was having on the brain’s neural circuitry, how quickly it could build up new pathways, and whether we could observe and measure these changes as they occurred.

Google in Your Head
One of us (Small) enlisted the help of Susan Bookheimer and Teena Moody, U.C.L.A. experts in neuropsychology and neuroimaging. We planned to use functional magnetic resonance imaging (fMRI) to measure the brain’s activity during a common Internet task: searching Google for accurate information. We first needed to find people who were relatively inexperienced with computers.

After initial difficulty finding people who had not yet used PCs, we recruited three volunteers in their mid-50s and 60s who were new to the technology yet willing to give it a try. As a comparison group, we also recruited three computer-savvy volunteers of similar age, gender and socioeconomic background. For our experiment, we chose searching on Google for specific and accurate information on a variety of topics, ranging from the health benefits of eating chocolate to planning a trip to the Galápagos.

Next, we had to figure out a way to perform MRIs on the volunteers while they used the Internet. Because the study subjects had to lie inside the long, narrow tube of an MRI machine during the experiment, there would be no space for a computer, keyboard or mouse. To re-create the Google-search experience inside the scanner, we had the volunteers wear a pair of special goggles that presented images of Web site pages. The system allowed the volunteers to navigate the simulated computer screen and make choices to advance their search by pressing a finger on a conveniently placed small keypad.

To make sure that the fMRI scanner was measuring the neural circuitry that controls Internet searches, we needed to factor out other sources of brain stimulation. To do this, we added a control task in which the study subjects read pages of a book projected through the specialized goggles during the MRI. This task allowed us to subtract from the MRI measurements any nonspecific brain activations that resulted from simply reading text, focusing on a visual image or concentrating.

We wanted to observe and measure only the brain’s activity from those mental tasks required for Internet searching, such as scanning for targeted key words, rapidly choosing from among several alternatives, going back to a previous page if a particular search choice was not helpful, and so forth. We alternated this control task—simply reading a simulated page of text—with the Internet-searching task. We also controlled for nonspecific brain stimulations caused by the photographs and drawings that are typically displayed on an Internet page.
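
The logic of this design is a simple subtraction: activity common to both conditions cancels out, and whatever remains approximates the activation specific to searching. As a rough illustration (a toy numerical sketch with hypothetical array names and made-up values, not the study’s actual analysis pipeline), the voxelwise comparison works something like this:

```python
import numpy as np

# Toy sketch of a voxelwise subtraction contrast. All shapes and signal
# values below are hypothetical illustrations, not data from the study.
rng = np.random.default_rng(seed=0)

n_voxels = 10_000  # a flattened brain volume
n_blocks = 20      # repeated task blocks per condition

# Simulated BOLD signal for each condition (blocks x voxels).
search_blocks = rng.normal(loc=1.0, scale=0.5, size=(n_blocks, n_voxels))
reading_blocks = rng.normal(loc=0.8, scale=0.5, size=(n_blocks, n_voxels))

# Average across the repeated blocks of each condition...
mean_search = search_blocks.mean(axis=0)
mean_reading = reading_blocks.mean(axis=0)

# ...then subtract the reading control. The difference approximates
# activity specific to searching (scanning key words, choosing among
# links), with reading, visual focus and concentration canceled out.
search_specific = mean_search - mean_reading
```

In practice such contrasts are computed with standard fMRI analysis software and assessed statistically rather than by raw array arithmetic, but the subtraction principle is the same.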

Finally, to determine whether we could train the brains of Internet-naive volunteers, after the first scanning session we asked each volunteer to search the Internet for an hour every day for five days. We gave the computer-savvy volunteers the same assignment and repeated the fMRI scans on both groups after the five days of search-engine training.

Brain Changes
As we had predicted, the brains of computer-savvy and computer-naive subjects did not show any difference when they were reading the simulated book text; both groups had years of experience in this mental task, and their brains were quite familiar with reading books. In contrast, the two groups showed distinctly different patterns of neural activation when searching on Google. During the baseline scanning session, the computer-savvy subjects used a specific network in the left front part of the brain, known as the dorsolateral prefrontal cortex. The Internet-naive subjects showed minimal, if any, activation in this region.

One of our concerns in designing the study was that five days would not be enough time to observe any changes. But after just five days of practice, the exact same neural circuitry in the front part of the brain became active in the Internet-naive subjects. Five hours on the Internet, and these participants had already rewired their brains. The computer-savvy volunteers activated the same frontal brain region at baseline and had a similar level of activation during their second session, suggesting that for a typical computer-savvy individual, the neural circuit training occurs relatively early and then remains stable.

The dorsolateral prefrontal cortex is involved in our ability to make decisions and integrate complex information. It also is thought to control our mental process of integrating sensations and thoughts, as well as working memory, which is our ability to keep information in mind for a very short time—just long enough to manage an Internet-searching task or to dial a phone number after getting it from directory assistance.

In today’s digital age, we keep our smart phones at our hip and their earpieces attached to our ears. A laptop is always within reach, and there’s no need to fret if we can’t find a landline—there’s always Wi-Fi, a wireless connection to the Internet, to keep us connected.

Our high-tech revolution has plunged us into a state of “continuous partial attention,” which software executive Linda Stone, who coined the term in 1998, describes as continually staying busy—keeping tabs on everything while never truly focusing on anything. Continuous partial attention differs from multitasking, wherein we have a purpose for each task and are trying to improve efficiency and productivity. Instead, when our minds partially attend, and do so continuously, we scan for an opportunity for any type of contact at any given moment. We chat virtually as our text messages flow, and we keep tabs on active buddy lists (friends and other screen names in an instant messaging program); everything, everywhere, is connected through our peripheral attention.

Although having all our pals online from moment to moment seems intimate, we risk losing personal touch with our real-life relationships and may experience an artificial sense of intimacy as compared with when we shut down our devices and devote our attention to one individual at a time.

Techno-Brain Burnout
When paying continuous partial attention, people may place their brain in a heightened state of stress. They no longer have time to reflect, contemplate or make thoughtful decisions. Instead they exist in a state of constant crisis—on alert for a new contact or bit of exciting news or information at any moment. Once people get used to this state, they tend to thrive on the perpetual connectivity. It feeds their ego and sense of self-worth, and it becomes irresistible.

Neuroimaging studies suggest that this sense of self-worth may protect the size of the hippocampus—the horseshoe-shaped brain region in the medial (inward-facing) temporal lobe, which allows us to learn and remember new information. Psychiatry professor Sonia J. Lupien and her associates at McGill University studied hippocampal size in healthy younger and older adult volunteers. Measures of self-esteem correlated significantly with hippocampal size, regardless of age. They also found that the more people felt in control of their lives, the larger the hippocampus.

But at some point, the sense of control and self-worth we feel when we maintain continuous partial attention tends to break down—our brains were not built to sustain such monitoring for extended periods. Eventually the hours of unrelenting digital connectivity can create a unique type of brain strain. Many people who have been working on the Internet for several hours without a break report making frequent errors in their work. On signing off, they notice feeling spaced out, fatigued, irritable and distracted, as if they are in a “digital fog.” This new form of mental stress, which Small terms “techno-brain burnout,” is threatening to become an epidemic. Under this kind of stress, our brains instinctively signal the adrenal glands to secrete cortisol and adrenaline. In the short run, these stress hormones boost energy levels and augment memory, but over time they actually impair cognition, lead to depression, and alter the neural circuitry in the hippocampus, amygdala and prefrontal cortex—the brain regions that control mood and thought. Chronic and prolonged techno-brain burnout can even reshape the underlying brain structure.

Research psychologist Sara C. Mednick, then at Harvard University, and her colleagues experimentally induced a mild form of techno-brain burnout in volunteers; they were then able to reduce its impact through power naps and by varying mental assignments. Their study subjects performed a visual task: reporting the direction of three lines in the lower left corner of a computer screen. The volunteers’ scores worsened over time, but their performance improved if the scientists alternated the visual task between the lower left and lower right corners of the screen. This result suggests that brain burnout may be relieved by varying the location of the mental task.

The investigators also found that the performance of study subjects improved if they took a 20- to 30-minute nap. The neural networks involved in the task were apparently refreshed during rest; however, optimum refreshment and reinvigoration for the task occurred when naps lasted up to 60 minutes—the amount of time it takes for rapid-eye-movement (REM) sleep to kick in.

The New, Improved Brain?
Whether we’re digital natives or immigrants, altering our neural networks and synaptic connections through activities such as e-mail, video games, Googling or other technological experiences does sharpen some cognitive abilities. We can learn to react more quickly to visual stimuli and improve many forms of attention, particularly the ability to notice images in our peripheral vision. We develop a better ability to sift through large amounts of information rapidly and decide what’s important and what isn’t—our mental filters basically learn how to shift into overdrive. In this way, we are able to cope with the massive amounts of data appearing and disappearing on our mental screens from moment to moment. Initially the daily blitz that bombards us can create a form of attention deficit, but our brains are able to adapt in a way that promotes rapid processing.

According to cognitive psychologist Pam Briggs of Northumbria University in England, Web surfers looking for facts on health spend two seconds or less on any particular site before moving on to the next one. She found that when study subjects did stop and focus on a particular site, that site contained data relevant to the search, whereas those they skipped over contained almost nothing relevant to the search. This study indicates that our brains learn to swiftly focus attention, analyze information and almost instantaneously decide on a go or no-go action. Rather than simply catching “digital ADD,” many of us are developing neural circuitry that is customized for rapid and incisive spurts of directed concentration.

Digital evolution may well be increasing our intelligence in the way we currently measure and define IQ. Average IQ scores have been steadily rising with the advancing digital culture, and the ability to multitask without errors is improving. Neuroscientist Paul Kearney of Unitec in New Zealand reported that some computer games can actually improve cognitive ability and multitasking skills. He found that volunteers who played the games eight hours a week improved multitasking skills by two and a half times. Other research at the University of Rochester has shown that playing video games can improve peripheral vision as well. As the modern brain continues to evolve, some attention skills improve, mental response times sharpen and the performance of many brain tasks becomes more efficient.

While the brains of today’s digital natives are wiring up for rapid-fire cyber searches, however, the neural circuits that control more traditional methods of learning are neglected and gradually degrade. The pathways for human interaction and communication weaken as customary one-on-one people skills atrophy. Our U.C.L.A. research team and other scientists have shown that we can intentionally alter brain wiring and reinvigorate some of these dwindling neural pathways, even while the newly evolved technology circuits bring our brains to extraordinary levels of potential.

All of us, digital natives and immigrants, will master new technologies and take advantage of their efficiencies, but we also need to maintain our people skills and our humanity. Whether in relation to a focused Google search or an empathic listening exercise, our synaptic responses can be measured, shaped and optimized to our advantage, and we can survive the technological adaptation of the modern mind.

Note: This article was originally printed with the title, "Meet Your iBrain".