The Limits of Intelligence

The laws of physics may well prevent the human brain from evolving into an ever more powerful thinking machine

Santiago Ramón y Cajal, the Spanish Nobel-winning biologist who mapped the neural anatomy of insects in the decades before World War I, likened the minute circuitry of their vision-processing neurons to an exquisite pocket watch. He likened that of mammals, by comparison, to a hollow-chested grandfather clock. Indeed, it is humbling to think that a honeybee, with its milligram-size brain, can perform tasks such as navigating mazes and landscapes on a par with mammals. A honeybee may be limited by having comparatively few neurons, but it surely seems to squeeze everything it can out of them.

At the other extreme, an elephant, with its five-million-fold larger brain, suffers the inefficiencies of a sprawling Mesopotamian empire. Signals take more than 100 times longer to travel between opposite sides of its brain—and also from its brain to its foot, forcing the beast to rely less on reflexes, to move more slowly, and to squander precious brain resources on planning each step.

We humans may not occupy the dimensional extremes of elephants or honeybees, but what few people realize is that the laws of physics place tough constraints on our mental faculties as well. Anthropologists have speculated about anatomic roadblocks to brain expansion—for instance, whether a larger brain could fit through the birth canal of a bipedal human. If we assume, though, that evolution can solve the birth canal problem, then we are led to the cusp of some even more profound questions.


One might think, for example, that evolutionary processes could increase the number of neurons in our brain or boost the rate at which those neurons exchange information and that such changes would make us smarter. But several recent trends of investigation, if taken together and followed to their logical conclusion, seem to suggest that such tweaks would soon run into physical limits. Ultimately those limits trace back to the very nature of neurons and the statistically noisy chemical exchanges by which they communicate. “Information, noise and energy are inextricably linked,” says Simon Laughlin, a theoretical neuroscientist at the University of Cambridge. “That connection exists at the thermodynamic level.”

Do the laws of thermodynamics, then, impose a limit on neuron-based intelligence, one that applies universally, whether in birds, primates, porpoises or praying mantises? This question apparently has never been asked in such broad terms, but the scientists interviewed for this article generally agree that it is a question worth contemplating. “It’s a very interesting point,” says Vijay Balasubramanian, a physicist who studies neural coding of information at the University of Pennsylvania. “I’ve never even seen this point discussed in science fiction.”

Intelligence is of course a loaded word: it is hard to measure and even to define. Still, it seems fair to say that by most metrics, humans are the most intelligent animals on earth. But as our brain has evolved, has it approached a hard limit to its ability to process information? Could there be some physical limit to the evolution of neuron-based intelligence—and not just for humans but for all of life as we know it?

That Hungry Tapeworm in Your Head
The most intuitively obvious way in which brains could get more powerful is by growing larger. And indeed, the possible connection between brain size and intelligence has fascinated scientists for more than 100 years. Biologists spent much of the late 19th century and the early 20th century exploring universal themes of life—mathematical laws related to body mass, and to brain mass in particular, that run across the animal kingdom. One advantage of size is that a larger brain can contain more neurons, which should enable it to grow in complexity as well. But it was clear even then that brain size alone did not determine intelligence: a cow carries a brain well over 100 times larger than a mouse’s, but the cow isn’t any smarter. Instead brains seem to expand with body size to carry out more trivial functions: bigger bodies might, for example, impose a larger workload of neural housekeeping chores unrelated to intelligence, such as monitoring more tactile nerves, processing signals from larger retinas and controlling more muscle fibers.

Eugène Dubois, the Dutch anatomist who discovered the skull of Homo erectus in Java in 1892, wanted a way to estimate the intelligence of animals based on the size of their fossil skulls, so he worked to define a precise mathematical relation between the brain size and body size of animals—under the assumption that animals with disproportionately large brains would also be smarter. Dubois and others amassed an ever growing database of brain and body weights; one classic treatise reported the body, organ and gland weights of 3,690 animals, from wood roaches to yellow-billed egrets to two-toed and three-toed sloths.

Dubois’s successors found that mammals’ brains expand more slowly than their bodies—to about the ¾ power of body mass. So a muskrat, with a body 16 times larger than a mouse’s, has a brain about eight times as big. From that insight came the tool that Dubois had sought: the encephalization quotient, which compares a species’ brain mass with what is predicted based on body mass. In other words, it indicates by what factor a species deviates from the ¾ power law. Humans have a quotient of 7.5 (our brain is 7.5 times larger than the law predicts); bottlenose dolphins sit at 5.3; monkeys hover as high as 4.8; and oxen—no surprise there—slink around at 0.5. In short, intelligence may depend on the amount of neural reserve that is left over after the brain’s menial chores, such as minding skin sensations, are accounted for. Or to boil it down even more: intelligence may depend on brain size in at least a superficial way.
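
To see the arithmetic behind the quotient, here is a minimal Python sketch of the power law and the ratio built on it. The proportionality constant k is a placeholder chosen purely for illustration, not a value reported by Dubois or his successors; only the ¾ exponent comes from the text above.

```python
# Minimal sketch of the 3/4-power scaling and the encephalization quotient.
# The constant k is an illustrative placeholder, not a published value.

def predicted_brain_mass(body_mass_g, k=0.06, exponent=0.75):
    """Brain mass (grams) expected from body mass under a simple power law."""
    return k * body_mass_g ** exponent

def encephalization_quotient(actual_brain_g, body_mass_g):
    """How many times larger the real brain is than the power-law prediction."""
    return actual_brain_g / predicted_brain_mass(body_mass_g)

# A body 16 times larger predicts a brain about 16 ** 0.75 = 8 times larger,
# as in the mouse-to-muskrat example above.
print(16 ** 0.75)

# A species whose brain is 7.5 times the prediction has a quotient of 7.5,
# whatever its body mass (70 kg here, roughly human-sized, for illustration).
body = 70_000  # grams
print(encephalization_quotient(7.5 * predicted_brain_mass(body), body))
```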

As brains expanded in mammals and birds, they almost certainly benefited from economies of scale. For example, the greater number of neural pathways that any one signal between neurons can travel means that each signal implicitly carries more information, implying that the neurons in larger brains can get away with firing fewer times per second. Meanwhile, however, another, competing trend may have kicked in. “I think it is very likely that there is a law of diminishing returns” to increasing intelligence indefinitely by adding new brain cells, Balasubramanian says. Size carries burdens with it, the most obvious one being added energy consumption. In humans, the brain is already the hungriest part of our body: at 2 percent of our body weight, this greedy little tapeworm of an organ wolfs down 20 percent of the calories that we expend at rest. In newborns, it’s an astounding 65 percent.

Staying in Touch
Much of the energetic burden of brain size comes from the organ’s communication networks: in the human cortex, communications account for 80 percent of energy consumption. But it appears that as size increases, neuronal connectivity also becomes more challenging for subtler, structural reasons. In fact, even as biologists kept collecting data on brain mass in the early to mid-20th century, they delved into a more daunting enterprise: to define the “design principles” of brains and how these principles are maintained across brains of different sizes.

A typical neuron has an elongated tail called the axon. At its end, the axon branches out, with the tips of the branches forming synapses, or contact points, with other cells. Axons, like telegraph wires, may connect different parts of the brain or may bundle up into nerves that extend from the central nervous system to the various parts of the body.

In their pioneering efforts, biologists measured the diameter of axons under microscopes and counted the size and density of nerve cells and the number of synapses per cell. They surveyed hundreds, sometimes thousands, of cells per brain in dozens of species. Eager to refine their mathematical curves by extending them to ever larger beasts, they even found ways to extract intact brains from whale carcasses. The five-hour process, meticulously described in the 1880s by biologist Gustav Adolf Guldberg, involved the use of a two-man lumberjack saw, an ax, a chisel and plenty of strength to open the top of the skull like a can of beans.

These studies revealed that as brains expand in size from species to species, several subtle but probably unsustainable changes happen. First, the average size of nerve cells increases. This phenomenon allows the neurons to connect to more and more of their compatriots as the overall number of neurons in the brain increases. But larger cells pack into the cerebral cortex less densely, so the distance between cells increases, as does the length of axons required to connect them. And because longer axons mean longer times for signals to travel between cells, these projections need to become thicker to maintain speed (thicker axons carry signals faster).

Researchers have also found that as brains get bigger from species to species, they are divided into a larger and larger number of distinct areas. You can see those areas if you stain brain tissue and view it under a microscope: patches of the cortex turn different colors. These areas often correspond with specialized functions, say, speech comprehension or face recognition. And as brains get larger, the specialization unfolds in another dimension: equivalent areas in the left and right hemispheres take on separate functions—for example, spatial versus verbal reasoning.

For decades this dividing of the brain into more work cubicles was viewed as a hallmark of intelligence. But it may also reflect a more mundane truth, says Mark Changizi, a theoretical neurobiologist at 2AI Labs in Boise, Idaho: specialization compensates for the connectivity problem that arises as brains get bigger. As you go from a mouse brain to a cow brain with 100 times as many neurons, it is impossible for neurons to expand quickly enough to stay just as well connected. Brains solve this problem by segregating like-functioned neurons into highly interconnected modules, with far fewer long-distance connections between modules. The specialization between right and left hemispheres solves a similar problem; it reduces the amount of information that must flow between the hemispheres, which minimizes the number of long, interhemispheric axons that the brain needs to maintain. “All of these seemingly complex things about bigger brains are just the backbends that the brain has to do to satisfy the connectivity problem” as it gets larger, Changizi argues. “It doesn’t tell us that the brain is smarter.”

Jan Karbowski, a computational neuroscientist at the Polish Academy of Sciences in Warsaw, agrees. “Somehow brains have to optimize several parameters simultaneously, and there must be trade-offs,” he says. “If you want to improve one thing, you screw up something else.” What happens, for example, if you expand the corpus callosum (the bundle of axons connecting right and left hemispheres) quickly enough to maintain constant connectivity as brains expand? And what if you thicken those axons, so the transit delay for signals traveling between hemispheres does not increase as brains expand? The results would not be pretty. The corpus callosum would expand—and push the hemispheres apart—so quickly that any performance improvements would be neutralized.

These trade-offs have been thrown into stark relief by experiments showing the relation between axon width and conduction speed. At the end of the day, Karbowski says, neurons do get larger as brain size increases, but not quite quickly enough to stay equally well connected. And axons do get thicker as brains expand, but not quickly enough to make up for the longer conduction delays.

Keeping axons from thickening too quickly saves not only space but energy as well, Balasubramanian says. Doubling the width of an axon doubles energy expenditure, while increasing the velocity of pulses by just 40 percent or so. Even with all of this corner cutting, the volume of white matter (the axons) still grows more quickly than the volume of gray matter (the main body of neurons containing the cell nucleus) as brains increase in size. To put it another way, as brains get bigger, more of their volume is devoted to wiring rather than to the parts of individual cells that do the actual computing, which again suggests that scaling size up is ultimately unsustainable.
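
A quick back-of-the-envelope check shows where the 40 percent figure comes from, assuming, as is commonly done for unmyelinated axons, that conduction velocity grows roughly with the square root of axon diameter while energy cost grows roughly in proportion to diameter. The exponents are not spelled out in the text, so treat them as illustrative assumptions.

```python
# Rough sketch of the width/energy/speed trade-off described above.
# Assumed scaling (not stated explicitly in the article):
#   velocity ~ sqrt(diameter), energy ~ diameter.

def relative_velocity(diameter_ratio):
    return diameter_ratio ** 0.5

def relative_energy(diameter_ratio):
    return diameter_ratio

ratio = 2.0  # double the axon width
print(f"energy:   x{relative_energy(ratio):.2f}")    # x2.00
print(f"velocity: x{relative_velocity(ratio):.2f}")  # x1.41, roughly 40 percent faster
```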

The Primacy of Primates
It is easy, with this dire state of affairs, to see why a cow fails to squeeze any more smarts out of its grapefruit-size brain than a mouse does from its blueberry-size brain. But evolution has also achieved impressive workarounds at the level of the brain’s building blocks. When Jon H. Kaas, a neuroscientist at Vanderbilt University, and his colleagues compared the morphology of brain cells across a spectrum of primates in 2007, they stumbled onto a game changer—one that has probably given humans an edge.

Kaas found that unlike in most other mammals, cortical neurons in primates enlarge very little as the brain increases in size. A few neurons do increase in size, and these rare ones may shoulder the burden of keeping things well connected. But the majority do not get larger. Thus, as primate brains expand from species to species, their neurons still pack together almost as densely. So from the marmoset to the owl monkey—a doubling in brain mass—the number of neurons roughly doubles, whereas in rodents with a similar doubling of mass the number of neurons increases by just 60 percent. That difference has huge consequences. Humans pack 100 billion neurons into 1.4 kilograms of brain, but a rodent that had followed its usual neuron-size scaling law to reach that number of neurons would now have to drag around a brain weighing 45 kilograms. And metabolically speaking, all that brain matter would eat the varmint out of house and home. “That may be one of the factors in why the large rodents don’t seem to be [smarter] at all than the small rodents,” Kaas says.
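
The contrast compounds quickly. The sketch below uses only the two figures quoted in this paragraph, a doubling of neuron count per doubling of brain mass in primates versus a 60 percent increase in rodents, and asks how much extra brain mass each rule demands for a large jump in neuron count; the thousandfold target is an arbitrary illustration, not a number from Kaas's study.

```python
# How much brain mass does each scaling rule demand for a given jump in
# neuron count? Per doubling of brain mass, neurons roughly double in
# primates but grow only about 60 percent in rodents (figures quoted above).
import math

def mass_factor_for_neurons(neuron_factor, neurons_per_mass_doubling):
    """Brain-mass multiple needed to multiply neuron count by neuron_factor."""
    exponent = math.log2(neurons_per_mass_doubling)  # neurons ~ mass ** exponent
    return neuron_factor ** (1.0 / exponent)

target = 1000.0  # multiply the neuron count a thousandfold (illustrative)
print(mass_factor_for_neurons(target, 2.0))   # primate-style: ~1,000x the mass
print(mass_factor_for_neurons(target, 1.6))   # rodent-style: ~27,000x the mass
```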

Having smaller, more densely packed neurons does seem to have a real impact on intelligence. In 2005 neurobiologists Gerhard Roth and Ursula Dicke, both at the University of Bremen in Germany, reviewed several traits that predict intelligence across species (as measured, roughly, by behavioral complexity) even more effectively than the encephalization quotient does. “The only tight correlation with intelligence,” Roth says, “is in the number of neurons in the cortex, plus the speed of neuronal activity,” which decreases with the distance between neurons and increases with the degree of myelination of axons. Myelin is fatty insulation that lets axons transmit signals more quickly.

If Roth is right, then primates’ small neurons have a double effect: first, they allow a greater increase in cortical cell number as brains enlarge; and second, they allow faster communication, because the cells pack more closely. Elephants and whales are reasonably smart, but their larger neurons and bigger brains lead to inefficiencies. “The packing density of neurons is much lower,” Roth says, “which means that the distance between neurons is larger and the velocity of nerve impulses is much lower.”

In fact, neuroscientists have recently seen a similar pattern in variations within humans: people with the quickest lines of communication between their brain areas also seem to be the brightest. One study, led in 2009 by Martijn P. van den Heuvel of the University Medical Center Utrecht in the Netherlands, used functional magnetic resonance imaging to measure how directly different brain areas talk to one another—that is, whether they talk via a large or a small number of intermediary areas. Van den Heuvel found that shorter paths between brain areas correlated with higher IQ. Edward Bullmore, an imaging neuroscientist at the University of Cambridge, and his collaborators obtained similar results the same year using a different approach. They compared working memory (the ability to hold several numbers in one’s memory at once) among 29 healthy people. They then used magnetoencephalographic recordings from their subjects’ scalps to estimate how quickly communication flowed between brain areas. People with the most direct communication and the fastest neural chatter had the best working memory.
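
The "directness" these studies measure is essentially a graph property: treat brain areas as nodes and measured connections as edges, then ask how many intermediaries separate any two regions. The toy network below, built with the networkx library, is invented rather than taken from either study; it simply shows how a single long-range shortcut lowers the average path length.

```python
# Toy illustration of average path length between brain areas.
# The areas and connections are made up; the real studies derive the
# graph from fMRI or MEG recordings.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("visual", "parietal"), ("parietal", "prefrontal"),
    ("prefrontal", "temporal"), ("temporal", "visual"),
    ("prefrontal", "motor"), ("parietal", "motor"),
])

# Lower values mean fewer intermediary areas between any two regions,
# the property that correlated with higher IQ in van den Heuvel's data.
print(nx.average_shortest_path_length(g))

# One direct long-range shortcut makes the whole network more direct.
g.add_edge("visual", "prefrontal")
print(nx.average_shortest_path_length(g))
```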

It is a momentous insight. We know that as brains get larger, they save space and energy by limiting the number of direct connections between regions. The large human brain has relatively few of these long-distance connections. But Bullmore and van den Heuvel showed that these rare, nonstop connections have a disproportionate influence on smarts: brains that scrimp on resources by cutting just a few of them do noticeably worse. “You pay a price for intelligence,” Bullmore concludes, “and the price is that you can’t simply minimize wiring.”

Intelligence Design
If communication between neurons, and between brain areas, is really a major bottleneck that limits intelligence, then evolving neurons that are even smaller (and closer together, with faster communication) should yield smarter brains. Similarly, brains might become more efficient by evolving axons that can carry signals faster over longer distances without getting thicker. But something prevents animals from shrinking neurons and axons beyond a certain point. You might call it the mother of all limitations: the proteins that neurons use to generate electrical pulses, called ion channels, are inherently unreliable.

Ion channels are tiny valves that open and close through changes in their molecular folding. When they open, they allow ions of sodium, potassium or calcium to flow across cell membranes, producing the electrical signals by which neurons communicate. But being so minuscule, ion channels can get flipped open or closed by mere thermal vibrations. A simple biology experiment lays the defect bare. Isolate a single ion channel on the surface of a nerve cell using a microscopic glass tube, sort of like slipping a glass cup over a single ant on a sidewalk. When you adjust the voltage on the ion channel—a maneuver that causes it to open or close—the ion channel does not flip on and off reliably like your kitchen light does. Instead it flutters on and off randomly. Sometimes it does not open at all; other times it opens when it should not. By changing the voltage, all you do is change the likelihood that it opens.

It sounds like a horrible evolutionary design flaw—but in fact, it is a compromise. “If you make the spring on the channel too loose, then the noise keeps on switching it,” Laughlin says—as happens in the biology experiment described earlier. “If you make the spring on the channel stronger, then you get less noise,” he says, “but now it’s more work to switch it,” which forces neurons to spend more energy to control the ion channel. In other words, neurons save energy by using hair-trigger ion channels, but as a side effect the channels can flip open or close accidentally. The trade-off means that ion channels are reliable only if you use large numbers of them to “vote” on whether or not a neuron will generate an impulse. But voting becomes problematic as neurons get smaller. “When you reduce the size of neurons, you reduce the number of channels that are available to carry the signal,” Laughlin says. “And that increases the noise.”
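
A toy probability model makes the voting argument concrete. Suppose each channel opens spontaneously with some small probability during a brief time window, and a spurious spike results whenever enough channels happen to open at once. The opening probability and the threshold below are invented for illustration, not measured values.

```python
# Toy model of ion-channel "voting": with many channels, chance openings
# rarely add up to a spurious spike; with few channels, they often do.
# The probability and threshold are illustrative, not measured values.
from math import comb

def spurious_spike_probability(n_channels, p_open=0.02, threshold_frac=0.05):
    """P(at least threshold_frac * n_channels open by chance at once)."""
    threshold = max(1, int(threshold_frac * n_channels))
    return sum(comb(n_channels, k) * p_open ** k * (1 - p_open) ** (n_channels - k)
               for k in range(threshold, n_channels + 1))

print(spurious_spike_probability(1000))  # large axon: accidents are vanishingly rare
print(spurious_spike_probability(50))    # thin axon: accidents become routine
```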

In a pair of papers published in 2005 and 2007, Laughlin and his collaborators calculated whether the need to include enough ion channels limits how small axons can be made. The results were startling. “When axons got to be about 150 to 200 nanometers in diameter, they became impossibly noisy,” Laughlin says. At that point, an axon contains so few ion channels that the accidental opening of a single channel can spur the axon to deliver a signal even though the neuron did not intend to fire. The brain’s smallest axons probably already hiccup out about six of these accidental spikes per second. Shrink them just a little bit more, and they would blather out more than 100 per second. “Cortical gray matter neurons are working with axons that are pretty close to the physical limit,” Laughlin concludes.

This fundamental compromise between information, energy and noise is not unique to biology. It applies to everything from optical-fiber communications to ham radios and computer chips. Transistors act as gatekeepers of electrical signals, just like ion channels do. For five decades engineers have shrunk transistors steadily, cramming more and more onto chips to produce ever faster computers. Transistors in the latest chips measure just 22 nanometers across. At those sizes, it becomes very challenging to “dope” silicon uniformly (doping is the addition of small quantities of other elements to adjust a semiconductor’s properties). By the time they reach about 10 nanometers, transistors will be so small that the random presence or absence of a single atom of boron will cause them to behave unpredictably.

Engineers might circumvent the limitations of current transistors by going back to the drawing board and redesigning chips to use entirely new technologies. But evolution cannot start from scratch: it has to work within the scheme and with the parts that have existed for half a billion years, explains Heinrich Reichert, a developmental neurobiologist at the University of Basel in Switzerland—like building a battleship with modified airplane parts.

Moreover, there is another reason to doubt that a major evolutionary leap could lead to smarter brains. Biology may have had a wide range of options when neurons first evolved, but 600 million years later a peculiar thing has happened. The brains of the honeybee, the octopus, the crow and intelligent mammals, Roth points out, look nothing alike at first glance. But if you look at the circuits that underlie tasks such as vision, smell, navigation and episodic memory of event sequences, “very astonishingly they all have absolutely the same basic arrangement.” Such evolutionary convergence usually suggests that a certain anatomical or physiological solution has reached maturity so that there may be little room left for improvement.

Perhaps, then, life has arrived at an optimal neural blueprint. That blueprint is wired up through a step-by-step choreography in which cells in the growing embryo interact through signaling molecules and physical nudging, and it is evolutionarily entrenched.

Bees Do It
So have humans reached the physical limits of how complex our brain can be, given the building blocks that are available to us? Laughlin doubts that there is any hard limit on brain function the way there is one on the speed of light. “It’s more likely you just have a law of diminishing returns,” he says. “It becomes less and less worthwhile the more you invest in it.” Our brain can pack in only so many neurons; our neurons can establish only so many connections among themselves; and those connections can carry only so many electrical impulses per second. Moreover, if our body and brain got much bigger, there would be costs in terms of energy consumption, dissipation of heat and the sheer time it takes for neural impulses to travel from one part of the brain to another.

The human mind, however, may have better ways of expanding without the need for further biological evolution. After all, honeybees and other social insects do it: acting in concert with their hive sisters, they form a collective entity that is smarter than the sum of its parts. Through social interaction we, too, have learned to pool our intelligence with others.

And then there is technology. For millennia written language has enabled us to store information outside our body, beyond the capacity of our brain to memorize. One could argue that the Internet is the ultimate consequence of this trend toward outward expansion of intelligence beyond our body. In a sense, it could be true, as some say, that the Internet makes you stupid: collective human intelligence—culture and computers—may have reduced the impetus for evolving greater individual smarts.

Douglas Fox writes about biology, geology and climate science from California. He wrote our July 2021 article "The Carbon Rocks of Oman," about efforts to turn carbon dioxide into solid minerals.

This article was originally published with the title “The Limits of Intelligence” in Scientific American Magazine Vol. 305 No. 1.