The first major results of the Blue Brain Project, a detailed simulation of a bit of rat neocortex about the size of a grain of coarse sand, were published last year1. The model represents 31,000 brain cells and 37 million synapses. It runs on a supercomputer and is based on data collected over 20 years. Furthermore, it behaves just like a speck of brain tissue. But therein, say critics, lies the problem. “It's the best biophysical model we have of any brain, but that's not enough,” says Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle, Washington, which has embarked on its own large-scale brain-modelling effort. The trouble with the model is that it holds no surprises: no higher functions or unexpected features have emerged from it.


Some neuroscientists, including Koch, say that this is because the model was not built with a particular hypothesis about cognitive processes in mind. Its success will depend on whether specific questions can be asked of it. The irony, says neuroscientist Alexandre Pouget, is that deriving answers will require drastic simplification of the model, “unless we figure out how to adjust the billions of parameters of the simulations, which would seem to be a challenging problem to say the least”. By contrast, Pouget's group at the University of Geneva, Switzerland, is generating and testing hypotheses on how the brain deals with uncertainty in functions such as attention and decision-making.

There is a widespread preference for hypothesis-driven approaches in the brain-modelling community. Some models might be very small and detailed, for example, focusing on a single synapse. Others might explore the electrical spiking of whole neurons, the communication patterns between brain areas, or even attempt to recapitulate the whole brain. But ultimately a model needs to answer questions about brain function if we are to advance our understanding of cognition.

From top to bottom

Blue Brain is not the only sophisticated model to have hit the headlines in recent years. In late 2012, theoretical neuroscientist Chris Eliasmith at the University of Waterloo in Canada unveiled Spaun, a whole-brain model that contains 2.5 million neurons (a fraction of the human brain's estimated 86 billion). Spaun has a digital eye and a robotic arm, and can reason through eight complex tasks such as memorizing and reciting lists, all of which involve multiple areas of the brain2. Nevertheless, Henry Markram, a neurobiologist at the Swiss Federal Institute of Technology in Lausanne who is leading the Blue Brain Project, noted3 at the time: “It is not a brain model.”

Although Markram's dismissal of Spaun amused Eliasmith, it did not surprise him. Markram is well known for taking a different approach to modelling, as he did in the Blue Brain Project. His strategy is to build in every possible detail to derive a perfect imitation of the biological processes in the brain with the hope that higher functions will emerge — a 'bottom-up' approach. Researchers such as Eliasmith and Pouget take a 'top-down' strategy, creating simpler models based on our knowledge of behaviour. These skate over certain details, instead focusing on testing hypotheses about brain function.

Rather than dismiss the criticism, Eliasmith took Markram's comment on board and added bottom-up detail to Spaun. He selected a handful of frontal cortex neurons, which were relatively simple to begin with, and swapped them for much more complicated neurons — ones that account for multiple ion channels and changes in electrical activity over time. Although these complicated neurons were more biologically realistic, Eliasmith found that they brought no improvement to Spaun's performance on the original eight tasks. “A good model doesn't introduce complexity for complexity's sake,” he says.

Simplify, simplify, simplify

For many years, computational models of the brain were what theorists call unconstrained: there were not enough experimental data to map onto the models or to fully test them. For instance, scientists could record electrical activity, but from only one neuron at a time, which limited their ability to represent neural networks. Back then, brain models were simple out of necessity.

In the past decade, an array of technologies has provided more information. Imaging technology has revealed previously hidden parts of the brain. Researchers can control genes to isolate particular functions. And emerging statistical methods have helped to describe complex phenomena in simpler terms. These techniques are feeding newer generations of models.

Nevertheless, most theorists think that a good model includes only the details needed to help answer a specific question. Indeed, one of the most challenging aspects of model building is working out which details are important to include and which are acceptable to ignore. “The simpler the model is, the easier it is to analyse and understand, manipulate and test,” says cognitive and computational neuroscientist Anil Seth of the University of Sussex in Brighton, UK.

An oft-cited success in theoretical neuroscience is the Reichardt detector — a simple, top-down model for how the brain senses motion — proposed by German physicist Werner Reichardt in the 1950s. “The big advantage of the Reichardt model for motion detection was that it was an algorithm to begin with,” says neurobiologist Alexander Borst of the Max Planck Institute of Neurobiology in Martinsried, Germany. “It doesn't speak about neurons at all.”
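The sketch below is a minimal, illustrative implementation of a Hassenstein–Reichardt-style correlator, not Reichardt's original formulation: each of two neighbouring inputs is low-pass filtered (a stand-in for the delay stage) and multiplied with the undelayed signal from the other arm, and the two products are subtracted. The function name, the choice of an exponential filter as the delay and all parameter values are illustrative assumptions.

```python
import numpy as np

def reichardt_detector(left, right, dt=1e-3, tau=50e-3):
    """Minimal Hassenstein-Reichardt correlator (illustrative sketch).

    left, right : luminance signals from two neighbouring photoreceptors,
                  sampled at interval dt (seconds).
    tau         : time constant of the exponential low-pass filter used
                  here as the delay stage.
    Returns the opponent motion signal over time: positive values indicate
    motion from the 'left' input towards the 'right' input.
    """
    alpha = dt / (tau + dt)              # low-pass filter coefficient
    delayed_left = np.zeros_like(left)
    delayed_right = np.zeros_like(right)
    for t in range(1, len(left)):        # first-order low-pass acts as the delay
        delayed_left[t] = delayed_left[t - 1] + alpha * (left[t] - delayed_left[t - 1])
        delayed_right[t] = delayed_right[t - 1] + alpha * (right[t] - delayed_right[t - 1])
    # Correlate each delayed signal with the undelayed signal from the
    # opposite arm, then subtract the two mirror-symmetric half-detectors.
    return delayed_left * right - delayed_right * left

# Example: a sinusoidal pattern drifting from left to right (the right input
# lags the left one) yields a consistently positive opponent signal.
t = np.arange(0, 1, 1e-3)
left = np.sin(2 * np.pi * 2 * t)
right = np.sin(2 * np.pi * 2 * t - np.pi / 4)
print(reichardt_detector(left, right).mean())   # > 0 for rightward motion
```

The essential point, as Borst notes, is that the scheme is an algorithm: it says nothing about which neurons or synapses implement the delay, the multiplication or the subtraction.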

When Borst joined the Max Planck Society in the mid-1980s, he ran computational simulations of the Reichardt model, and got surprising results. He found, for instance, that neurons oscillated when first presented with a pattern that was moving at constant velocity — a result that he took to Werner Reichardt, who was also taken aback. “He didn't expect his model to show that,” says Borst. They confirmed the results in real neurons, and continued to refine and expand Reichardt's model to gain insight into how the visual system detects motion.

In the realm of bottom-up models, the greatest success has come from a set of equations developed in 1952 to explain how the flow of ions into and out of a nerve cell produces an action potential. These Hodgkin–Huxley equations are “beautiful and inspirational”, says neurobiologist Anthony Zador of Cold Spring Harbor Laboratory in New York, adding that they have allowed many scientists to make predictions about how neuronal excitability works. The equations, or their variants, form some of the basic building blocks of many of today's larger brain models of cognition.
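In their standard form (sign and unit conventions vary slightly between presentations), the Hodgkin–Huxley equations treat a patch of membrane as a capacitor in parallel with voltage-dependent sodium and potassium conductances and a leak:

```latex
C_m \frac{dV}{dt} =
  -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
  -\bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
  -\bar{g}_{L}\,(V - E_{L})
  + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x,
\quad x \in \{m, h, n\}
```

The gating variables m, h and n describe the opening and closing of the sodium and potassium channels; it is variants of these four coupled differential equations that serve as building blocks in many larger models.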

Gambling on detail

Although many theoretical neuroscientists do not see value in pure bottom-up approaches such as that taken by the Blue Brain Project, they do not dismiss bottom-up models entirely. These types of data-driven brain simulations have the benefit of reminding model-builders what they do not know, which can inspire new experiments. And top-down approaches can often benefit from the addition of more detail, says theoretical neuroscientist Peter Dayan of the Gatsby Computational Neuroscience Unit at University College London. “The best kind of modelling is going top-down and bottom-up simultaneously,” he says.

Borst, for example, is now approaching the Reichardt detector from the bottom up to explore questions such as how neurotransmitter receptors on motion-sensitive neurons interact. And Eliasmith's more complex Spaun has allowed him to do other types of experiment that he couldn't before — in particular, he can now mimic the effect of sodium-channel blockers on the brain.

Also taking a multiscale approach is neuroscientist Xiao-Jing Wang of New York University Shanghai in China, whose group described a large-scale model of the interaction of circuits across different regions of the macaque brain4. The model is built, in part, from his previous, smaller models of local neuronal circuits that show how groups of neurons fire over time. To scale up to the entire brain, Wang had to include the strength of the feedback between areas. Only now has he got the right data — thanks to the burgeoning field of connectomics (the study of connection maps within an organism's nervous system) — to build in this important detail, he says. Wang is using his model to study decision-making, the integration of sensory information and other cognitive processes.
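The flavour of such a multiscale model can be conveyed with a deliberately crude sketch (this is not Wang's actual model): each brain area is reduced to a single firing-rate unit, and a connectivity matrix W, of the kind that connectomics data would constrain, sets the strength of the feedforward and feedback coupling between areas. All names and numbers below are illustrative.

```python
import numpy as np

def simulate_areas(W, I_ext, tau=0.02, dt=1e-3, steps=2000):
    """Toy multi-area firing-rate network: dr/dt = (-r + f(W r + I)) / tau."""
    n = W.shape[0]
    r = np.zeros(n)                                 # firing rates of the n areas
    history = np.empty((steps, n))
    for t in range(steps):
        drive = W @ r + I_ext                       # recurrent + external input
        target = np.tanh(np.maximum(drive, 0.0))    # simple saturating nonlinearity
        r += dt / tau * (-r + target)               # leaky integration toward target
        history[t] = r
    return history

# Three toy "areas": feedforward chain 0 -> 1 -> 2 with weaker feedback links.
# W[i, j] is the coupling from area j onto area i (values made up).
W = np.array([[0.0, 0.1, 0.0],
              [0.6, 0.0, 0.2],
              [0.0, 0.5, 0.0]])
I_ext = np.array([0.8, 0.0, 0.0])                   # sensory drive into area 0 only
rates = simulate_areas(W, I_ext)
print(rates[-1])                                    # steady-state rates of the areas
```

In the published work each area is itself a local circuit, and the inter-area weights are drawn from quantitative anatomical data; the toy version is only meant to show where those connectomics-derived numbers enter the model.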

In physics, the marriage between experiment and theory led to the development of unifying principles. And although neuroscientists might hope for a similar revelation in their field, the brain (and biology in general) is inherently noisier than a physical system, says computational neuroscientist Gustavo Deco of the Pompeu Fabra University in Barcelona, Spain, who is an investigator on the Human Brain Project. Deco points out that the equations describing the behaviour of neurons and synapses are non-linear, and that neurons are connected in a variety of ways, interacting in both a feedforward and a feedback manner. That said, there are examples of theory allowing neuroscientists to extract general principles, such as how the brain balances excitation and inhibition, and how neurons fire in synchrony, Wang says.

Complex neuroscience often requires huge computational resources. But it is not a want of supercomputers that limits good, theory-driven models. “It is a lack of knowledge about experimental facts. We need more facts and maybe more ideas,” Borst says. Those who crave vast amounts of computer power misunderstand the real challenge facing scientists who are trying to unravel the mysteries of the brain, Borst contends. “I still don't see the need for simulating one million neurons simultaneously in order to understand what the brain is doing,” he says, referring to the large-scale simulation linked with the Human Brain Project. “I'm sure we can reduce that to a handful of neurons and get some ideas.”


Computational neuroscientist Andreas Herz, of the Ludwig-Maximilians University in Munich, Germany, agrees. “We make best progress if we focus on specific elements of neural computation,” he says. For example, a single cortical neuron receives input from thousands of other cells, but it is unclear how it processes this information. “Without this knowledge, attempts to simulate the whole brain in a seemingly biologically realistic manner are doomed to fail,” he adds.

At the same time, supercomputers do allow researchers to build details into their models and see how the more complex versions compare with the originals, as Eliasmith did with Spaun. He has used Spaun and its variations to see what happens when he kills neurons or tweaks other features to investigate ageing, motor control or stroke damage in the brain. For him, adding complexity to a model has to serve a purpose. “We need to build bigger and bigger models in every direction, more neurons and more detail,” he says. “So that we can break them.”