What is the basic computational unit of the brain? The neuron? The cortical column? The gene? Although to a neuroscientist this question might seem poorly formulated, to a computer scientist it is well-defined. The essence of computation is nonlinearity. A cascade of linear functions, no matter how deep, is just another linear function—the product of any two matrices is just another matrix—so it is impossible to compute with a purely linear system. A cascade of the appropriate simple nonlinear functions, by contrast, permits the synthesis of any arbitrary nonlinear function, including even that very complex function we use to decide, based on the activation of photoreceptors in our retinae, whether we are looking at our grandmother. The basic unit of any computational system, then, is its simplest nonlinear element.
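The collapse of linear cascades can be made concrete in a few lines. The sketch below (matrix sizes and values are arbitrary illustrations, not part of the original argument) stacks three linear stages and verifies that they are equivalent to a single matrix, and hence compute nothing that one linear stage could not:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "deep" cascade of purely linear stages: y = W3 @ (W2 @ (W1 @ x)).
W1, W2, W3 = (rng.standard_normal((4, 4)) for _ in range(3))

def cascade(x):
    return W3 @ (W2 @ (W1 @ x))

# The entire cascade collapses to one matrix W = W3 @ W2 @ W1.
W = W3 @ W2 @ W1
x = rng.standard_normal(4)
assert np.allclose(cascade(x), W @ x)
```

Inserting any simple nonlinearity between the stages breaks this equivalence, which is why the simplest nonlinear element, not the linear stages, sets the computational grain of the system.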
In a digital computer, the basic nonlinearity is of course the transistor. In the brain, however, the answer is not as clear. Among brain modelers, the conventional view, first enunciated by McCulloch and Pitts1, is that the single neuron represents the basic unit. In these models, a neuron is usually represented as a device that computes a linear sum of the inputs it receives from other neurons, weighted perhaps by the strengths of synaptic connections, and then passes this sum through a static nonlinearity (Fig. 1a). From this early formulation, through the first wave of neural models in the sixties2 and on through the neural network renaissance in the eighties3, the saturating or sigmoidal relationship between input and output firing rate has been enshrined as the essential nonlinearity in most formal models of brain computation4. Synapses are typically regarded as simple linear elements whose essential role is in learning and plasticity.
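A McCulloch–Pitts-style unit of the kind described above (Fig. 1a) can be sketched directly; the particular weights and the logistic form of the saturating nonlinearity here are illustrative choices, not a specification from the original papers:

```python
import math

def unit(inputs, weights, bias=0.0):
    """Model neuron in the McCulloch-Pitts tradition: a linear sum of
    inputs, weighted by synaptic strengths, passed through a static
    saturating (sigmoidal) nonlinearity."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # output firing rate in (0, 1)
```

Note that in this formulation the synaptic weights enter only as fixed multipliers: all the nonlinearity, and hence all the computation, is assigned to the postsynaptic cell.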
Brain modelers have made some attempts to elaborate the basic formulation of McCulloch and Pitts. Many neurons have complex spine-studded dendritic trees, which scientists have speculated might provide a substrate for further linear5 or nonlinear6,7 processing (see Koch and Segev, this issue). The focus of even these theories nevertheless remains on the postsynaptic element as the locus of computation.
Experimentalists have recognized for decades that a synapse is not merely a passive device whose output is a linear function of its input, but is instead a dynamic element with complex nonlinear behavior8. The output of a synapse depends on the recent history of its input, because of a host of presynaptic mechanisms, including paired-pulse facilitation, depression, augmentation and post-tetanic potentiation. In many physiological experiments designed to study the properties of synapses, stimulation parameters are chosen specifically to minimize these nonlinearities, but they can dominate the synaptic responses to behaviorally relevant spike trains9. Quantitative models were developed to describe these phenomena at the neuromuscular junction more than two decades ago10 and at central synapses more recently11,12.
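The flavor of such history dependence can be conveyed with a minimal phenomenological sketch of a depressing synapse, loosely in the spirit of the quantitative models cited above (the update rule, parameter values and time constant here are illustrative assumptions, not the published models themselves):

```python
import math

def depressing_synapse(spike_times, f=0.6, tau=300.0, a0=1.0):
    """Minimal short-term depression sketch: each presynaptic spike
    scales the synaptic efficacy by f < 1, and between spikes the
    efficacy recovers exponentially toward its resting value a0 with
    time constant tau (ms). Returns the efficacy seen by each spike."""
    a, t_prev, responses = a0, None, []
    for t in spike_times:
        if t_prev is not None:
            # exponential recovery since the previous spike
            a = a0 + (a - a0) * math.exp(-(t - t_prev) / tau)
        responses.append(a)  # response amplitude for this spike
        a *= f               # use-dependent depression
        t_prev = t
    return responses
```

Even this toy version makes the synapse's output depend on the temporal pattern of its input, not just its instantaneous value: a 100-Hz train depresses toward a steady state, while the same spikes delivered at 1 Hz evoke nearly full-sized responses.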
There has been growing recognition that these synaptic nonlinearities may be important in computation. Nonlinear synapses have been postulated to underlie specific functions, such as gain control12 or temporal responsiveness of neurons in area V1 (ref. 13). They have also been considered in the context of more general models of network computation14,15,16, and it has been rigorously proven that such networks can implement a very rich class of computations17. Common to all these models is the notion that synapses do more than just provide a substrate for the long-lasting changes underlying learning and memory; they are critical in the computation itself.
What is the basic unit of computation in the brain? For over five decades since McCulloch and Pitts, neural models have focused on the single neuron, but it is interesting to speculate whether this is a historical accident. If McCulloch and Pitts had happened to have offices down the hall from the synaptic physiology laboratory of Bernard Katz, might their basic formulation have emphasized the nonlinearities of the synapse instead? The challenge now is to figure out which, if any, of the experimental discoveries made since McCulloch and Pitts are actually important to how we formulate our models of the networks that underlie neural computation.
McCulloch, W. S. & Pitts, W. Bull. Math. Biophys. 5, 115–133 (1943).
Rosenblatt, F. Principles of Neurodynamics (Spartan, New York, 1962).
Hopfield, J. J. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982).
Hertz, J., Krogh, A. & Palmer, R. G. Introduction to the Theory of Neural Computation (Addison-Wesley, Redwood City, California, 1991).
Rall, W. in The Handbook of Physiology, The Nervous System Vol. 1, Cellular Biology of Neurons (eds. Kandel, E. R., Brookhart, J. M. & Mountcastle, V. B.) 39–97 (American Physiol. Soc., Bethesda, Maryland, 1977).
Shepherd, G. M. et al. Proc. Natl. Acad. Sci. USA 82, 2192–2195 (1985).
Mel, B. W. Neural Comput. 6, 1031–1085 (1994).
del Castillo, J. & Katz, B. J. Physiol. (Lond.) 124, 574–585 ( 1954).
Dobrunz, L. E. & Stevens, C. F. Neuron 22, 157–166 (1999).
Magleby, K. L. Prog. Brain Res. 49, 175–182 (1979).
Tsodyks, M. V. & Markram, H. Proc. Natl. Acad. Sci. USA 94, 719–723 (1997).
Abbott, L. F., Varela, J. A., Sen, K. & Nelson, S. B. Science 275, 220–224 (1997).
Chance, F. S., Nelson, S. B. & Abbott, L. F. J. Neurosci. 18, 4785–4799 (1998).
Maass, W. & Zador, A. M. Neural Comput. 11, 903–917 (1999).
Liaw, J. S. & Berger, T. W. Proc. IJCNN 3, 2175–2179 (1998).
Little, W. A. & Shaw, G. L. Behav. Biol. 14, 115–133 (1975).
Maass, W. & Sontag, E. D. Neural Comput. 12, 1743–1772 (2000).
Zador, A. The basic unit of computation. Nat Neurosci 3 (Suppl 11), 1167 (2000). https://doi.org/10.1038/81432