Main

In contrast, computer studies use models whose elements and principles of operation are explicit, usually encoded in terms of differential equations or other kinds of laws. These models are extremely flexible, and subject only to the limitations of available computational power: any stimulus that can be conceptualized can be delivered, any measurement made, and any hypothesis tested. In a model of a single neuron, for example, it is simple to deliver separate impulse trains to 1,000 different synapses, controlling the rate, temporal pattern of spikes within each train (periodic, random, bursty), degree of correlation between trains, spatial distribution of activated synaptic contacts (clustered, distributed, apical or basal, branch tips, trunks), spatiotemporal mix of excitation and inhibition, and so on. Furthermore, every voltage, current, conductance, chemical concentration, phosphorylation state or other relevant variable can be recorded at every location within the cell simultaneously. And if necessary, the experiment can be exactly reproduced ten years later.
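The kind of stimulus control described above can be made concrete with a short sketch. Below is a purely illustrative Python/NumPy snippet (not tied to any particular simulator) that generates periodic, random (Poisson) and bursty spike trains, drives 1,000 hypothetical synapses, and imposes a chosen pairwise correlation between trains by thinning a shared "mother" train; all rates, counts and the correlation scheme are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed: the same "experiment" reproduces exactly, even years later

def periodic_train(rate_hz, t_stop_s):
    """Regular spikes at a fixed rate."""
    return np.arange(1.0 / rate_hz, t_stop_s, 1.0 / rate_hz)

def poisson_train(rate_hz, t_stop_s):
    """Random spikes: exponentially distributed inter-spike intervals."""
    isis = rng.exponential(1.0 / rate_hz, size=int(rate_hz * t_stop_s * 3) + 10)
    times = np.cumsum(isis)
    return times[times < t_stop_s]

def bursty_train(burst_rate_hz, spikes_per_burst, intra_isi_s, t_stop_s):
    """Bursts of closely spaced spikes at Poisson-distributed burst onsets."""
    onsets = poisson_train(burst_rate_hz, t_stop_s)
    times = (onsets[:, None] + np.arange(spikes_per_burst) * intra_isi_s).ravel()
    return times[times < t_stop_s]

def correlated_trains(n, rate_hz, c, t_stop_s):
    """n trains, each of rate rate_hz, with pairwise correlation c (0 < c <= 1),
    built by independently thinning one shared 'mother' train of rate rate_hz / c."""
    mother = poisson_train(rate_hz / c, t_stop_s)
    return [mother[rng.random(mother.size) < c] for _ in range(n)]

# 1,000 synapses, each driven by an independent 5 Hz Poisson train for 2 s
trains = [poisson_train(rate_hz=5.0, t_stop_s=2.0) for _ in range(1000)]
```

Because the random generator is explicitly seeded, rerunning the script regenerates every spike time bit-for-bit, which is the reproducibility point made in the text.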

Nor are such experiments confined to reality: computers permit exploration of pure hypotheticals. Models can contrast a system's behavior in different states, some of which do not exist. For example, several spatial distributions of voltage-dependent channels could be compared within the same dendritic morphology to help an investigator dissect the dauntingly complex interactions between channel properties and dendritic structure, and to tease apart their separate and combined contributions to synaptic integration. This sort of hands-on manipulation gives the computer experimentalist insight into general principles governing the surrounding class of neural systems, in addition to the particular system under study.
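The channel-distribution comparison sketched above can be illustrated with a deliberately simplified toy model (not a real simulator such as NEURON, and with arbitrary parameters not fitted to any neuron): a chain of passive compartments carrying a simplified voltage-dependent inward current, where only the spatial density profile of that current is varied between runs while morphology and total conductance are held fixed:

```python
import numpy as np

def simulate(g_act, t_stop=50.0, dt=0.025):
    """Toy compartmental dendrite: leak + axial coupling + a simplified
    voltage-dependent inward current whose per-compartment density is g_act.
    All parameters are illustrative; units are nominal (mV, ms)."""
    n = g_act.size
    v = np.full(n, -65.0)              # start at rest (mV)
    e_leak, e_act = -65.0, 50.0        # reversal potentials (mV)
    g_leak, g_ax, cm = 0.05, 2.0, 1.0  # leak, axial coupling, capacitance
    i_syn = np.zeros(n)
    i_syn[-1] = 1.5                    # steady synaptic drive at the distal tip
    trace = []
    for _ in range(int(t_stop / dt)):
        act = 1.0 / (1.0 + np.exp(-(v + 40.0) / 5.0))   # instantaneous activation
        i_ion = g_leak * (e_leak - v) + g_act * act * (e_act - v) + i_syn
        coupling = g_ax * (np.roll(v, 1) - 2 * v + np.roll(v, -1))
        coupling[0] = g_ax * (v[1] - v[0])               # sealed ends
        coupling[-1] = g_ax * (v[-2] - v[-1])
        v = v + dt * (i_ion + coupling) / cm             # forward Euler step
        trace.append(v[0])                               # record the "somatic" end
    return np.array(trace)

# Same morphology, same total active conductance, two hypothetical distributions
n, total_g = 20, 2.0
clustered = np.zeros(n)
clustered[-4:] = total_g / 4          # channels clustered near the stimulated tip
uniform = np.full(n, total_g / n)     # same total conductance, spread evenly
peak_clustered = simulate(clustered).max()
peak_uniform = simulate(uniform).max()
```

The point is methodological rather than quantitative: the two runs differ in exactly one controlled respect, a manipulation that is trivial in a model and impossible in a real cell.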

The need for modeling in neuroscience is particularly intense because what most neuroscientists ultimately want to know about the brain is the model—that is, the laws governing the brain's information processing functions. The brain as an electrical system, or a chemical system, is simply not the point. In general, the model as a research tool is more important when the system under study is more complex. In the extreme case of the brain, the most complicated machine known, the importance of gathering more facts about the brain through empirical studies must give way to efforts to relate brain facts to each other, which requires models matched to the complexity of the brain itself. There is no escaping this: imagine a neuroscientist assigned to fully describe the workings of a modern computer (which has only 10¹⁰ transistors to the brain's 10¹⁵ synapses). The investigator is allowed only to inject currents and measure voltages, even a million voltages at once, and then is told to simply think about what the data mean. The task is clearly impossible. Many levels of organization, from electron to web server, or from ion channel to consciousness—each governed by its own set of rules—lie between the end of the experimentalist's probe and a deep understanding of the abstract computing system at hand. A true understanding of the brain implies the capacity to build a working replica in any medium that can incorporate the same principles of operation—silicon wafers, strands of DNA, computer programs or even plumbing fixtures. This highly elevated 'practitioner's' form of understanding must be our ultimate goal, since it will not only allow us to explain the brain's current form and function, but will also help us to fix broken brains, build better brains, or adapt the brain to altogether different uses.