What is the basic computational unit of the brain? The neuron? The cortical column? The gene? Although to a neuroscientist this question might seem poorly formulated, to a computer scientist it is well-defined. The essence of computation is nonlinearity. A cascade of linear functions, no matter how deep, is just another linear function—the product of any two matrices is just another matrix—so it is impossible to compute with a purely linear system. A cascade of the appropriate simple nonlinear functions, by contrast, permits the synthesis of any arbitrary nonlinear function, including even that very complex function we use to decide, based on the activation of photoreceptors in our retinae, whether we are looking at our grandmother. The basic unit of any computational system, then, is its simplest nonlinear element.

In a digital computer, the basic nonlinearity is of course the transistor. In the brain, however, the answer is not as clear. Among brain modelers, the conventional view, first enunciated by McCulloch and Pitts1, is that the single neuron represents the basic unit. In these models, a neuron is usually represented as a device that computes a linear sum of the inputs it receives from other neurons, weighted perhaps by the strengths of synaptic connections, and then passes this sum through a static nonlinearity (Fig. 1a). From this early formulation, through the first wave of neural models in the sixties2 and on through the neural network renaissance in the eighties3, the saturating or sigmoidal relationship between input and output firing rate has been enshrined as the essential nonlinearity in most formal models of brain computation4. Synapses are typically regarded as simple linear elements whose essential role is in learning and plasticity.
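As a concrete, highly simplified sketch of this standard formulation, the unit in Fig. 1a can be written in a few lines of Python. The logistic choice of nonlinearity and the particular rates and weights below are illustrative assumptions, not part of the original model.

    import numpy as np

    def sigmoid(z):
        # One common choice for the static saturating nonlinearity S in Fig. 1a.
        return 1.0 / (1.0 + np.exp(-z))

    def standard_unit(rates, weights):
        # Weighted linear sum of presynaptic firing rates, then a static nonlinearity.
        # The synapses enter only as fixed scalar weights w.
        return sigmoid(np.dot(weights, rates))

    # Illustrative numbers only: three presynaptic rates and arbitrary weights.
    rates = np.array([0.2, 0.8, 0.5])
    w = np.array([1.5, -0.7, 0.3])
    print(standard_unit(rates, w))

Note that nothing in this formulation lets the synapse's contribution depend on the timing or history of the spikes it transmits; each input is scaled by the same fixed weight on every trial.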

Figure 1: Location of the essential nonlinearity. (a) Standard model of processing. Inputs 1–n from other neurons are multiplied by the corresponding passive synaptic weights w, summed (∑) and then passed through a nonlinearity (S). (b) An alternative model of processing in which the synapses themselves provide the essential nonlinearity.

Brain modelers have made some attempts to elaborate the basic formulation of McCulloch and Pitts. Many neurons have complex spine-studded dendritic trees, which scientists have speculated might provide a substrate for further linear5 or nonlinear6,7 processing (see Koch and Segev, this issue). The focus of even these theories nevertheless remains on the postsynaptic element as the locus of computation.

Experimentalists have recognized for decades that a synapse is not merely a passive device whose output is a linear function of its input, but is instead a dynamic element with complex nonlinear behavior8. The output of a synapse depends on the recent history of its input, because of a host of presynaptic mechanisms, including paired-pulse facilitation, depression, augmentation and post-tetanic potentiation. In many physiological experiments designed to study the properties of synapses, stimulation parameters are chosen specifically to minimize these nonlinearities, but the same nonlinearities can dominate the synaptic responses to behaviorally relevant spike trains9. Quantitative models were developed to describe these phenomena at the neuromuscular junction more than two decades ago10 and at central synapses more recently11,12.
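The flavor of such quantitative descriptions can be conveyed by a small sketch in the spirit of resource-and-utilization models of short-term plasticity. The formulation and all parameter values below are illustrative assumptions chosen only to show depression and facilitation, not the published models of refs. 10–12.

    import numpy as np

    def dynamic_synapse(spike_times, U=0.4, tau_rec=0.5, tau_facil=0.05):
        # Event-driven sketch of a depressing/facilitating synapse.
        # x: fraction of transmitter resources available (recovers with tau_rec).
        # u: utilization of those resources (boosted by each spike, decays with tau_facil).
        # Returns the relative response amplitude evoked by each presynaptic spike.
        x, u = 1.0, 0.0
        last_t = None
        amplitudes = []
        for t in spike_times:
            if last_t is not None:
                dt = t - last_t
                x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover toward 1
                u = u * np.exp(-dt / tau_facil)                # facilitation decays toward 0
            u = u + U * (1.0 - u)        # each spike transiently boosts utilization
            amplitudes.append(u * x)     # response amplitude ~ resources released
            x = x * (1.0 - u)            # the spike consumes a fraction u of the resources
            last_t = t
        return amplitudes

    # A regular 20-Hz train (times in seconds): successive responses depress
    # toward a steady state rather than remaining constant.
    train = np.arange(0.0, 0.5, 0.05)
    print(np.round(dynamic_synapse(train), 3))

Because the response to each spike depends on the preceding interspike intervals, such a synapse transmits an irregular, behaviorally relevant train very differently from the widely spaced test pulses used in many slice experiments.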

There has been growing recognition that these synaptic nonlinearities may be important in computation. Nonlinear synapses have been postulated to underlie specific functions, such as gain control12 or temporal responsiveness of neurons in area V1 (ref. 13). They have also been considered in the context of more general models of network computation14,15,16, and it has been rigorously proven that such networks can implement a very rich class of computations17. Common to all these models is the notion that synapses do more than just provide a substrate for the long-lasting changes underlying learning and memory; they are critical in the computation itself.
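One simple way to see how a dynamic synapse could contribute to gain control is to note that, for a purely depressing synapse of the kind sketched above, the steady-state drive delivered to the postsynaptic cell saturates as the presynaptic rate grows. The sketch below makes that concrete; it uses illustrative parameters and is not intended to reproduce the analysis of ref. 12.

    import numpy as np

    def steady_state_drive(rate, U=0.5, tau_rec=0.8):
        # Purely depressing synapse driven by a regular spike train at the given rate (Hz).
        # At steady state the available resources settle at x_star, so each spike has
        # efficacy U * x_star and the total drive is rate * U * x_star.
        dt = 1.0 / rate
        decay = np.exp(-dt / tau_rec)
        x_star = (1.0 - decay) / (1.0 - (1.0 - U) * decay)
        per_spike = U * x_star
        return per_spike, rate * per_spike

    # Per-spike efficacy falls roughly as 1/rate at high rates, so the total drive
    # saturates: the synapse reports changes in rate more faithfully than absolute rate.
    for r in [1.0, 5.0, 10.0, 20.0, 40.0, 80.0]:
        per_spike, drive = steady_state_drive(r)
        print(f"rate {r:5.1f} Hz   per-spike efficacy {per_spike:.3f}   total drive {drive:.2f}")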

What is the basic unit of computation in the brain? For over five decades since McCulloch and Pitts, neural models have focused on the single neuron, but it is interesting to speculate whether this is a historical accident. If McCulloch and Pitts had happened to have offices down the hall from the synaptic physiology laboratory of Bernard Katz, might their basic formulation have emphasized the nonlinearities of the synapse instead? The challenge now is to figure out which, if any, of the experimental discoveries made since McCulloch and Pitts are actually important to how we formulate our models of the networks that underlie neural computation.