“I had arranged electrodes on the optic nerve of a toad in connexion with some experiments on the retina. The room was nearly dark and I was puzzled to hear repeated noises in the loudspeaker attached to the amplifier, noises indicating that a great deal of impulse activity was going on. It was not until I compared the noises with my own movements around the room that I realized I was in the field of vision of the toad's eye and that it was signalling what I was doing. . .”

Adrian's experience1 is familiar to systems neurophysiologists, and it still provides both novice and old hand with a thrill unmatched in experimental neuroscience. It also confronts them repeatedly with the central question: what are these neurons actually 'signaling'? Adrian's answer was simple, and his protocol was the key that opened most of the doors to our modern understanding of brain function2. It is all the more astonishing that his 1928 conclusion (Fig. 1), that the intensity of sensation is proportional to the frequency of sensory nerve impulses, remains the most compelling and most universally applied principle in neuroscience.

Figure 1: Relationship between stimulus, sensory message and sensation. From ref. 1.
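
As a rough illustration of Adrian's rate code, a minimal simulation (a sketch only; the gain, the one-second window and the Poisson spiking assumption below are illustrative choices, not taken from ref. 1) would let a neuron's mean firing rate grow in proportion to stimulus intensity and read the message out as a spike count:

    import numpy as np

    rng = np.random.default_rng(0)

    def rate_coded_spike_count(intensity, gain=50.0, duration=1.0):
        """Toy rate code: the mean firing rate (spikes/s) is proportional to
        stimulus intensity, and the spike count over `duration` seconds is
        Poisson-distributed. The gain and the Poisson variability are
        illustrative assumptions, not Adrian's measurements."""
        mean_rate = gain * intensity              # the 'frequency of sensory nerve impulses'
        return rng.poisson(mean_rate * duration)  # message read out as a spike count

    # Stronger stimuli yield higher spike counts, on average.
    for intensity in (0.2, 0.5, 1.0):
        print(f"intensity {intensity:.1f} -> {rate_coded_spike_count(intensity)} spikes")

On this picture, decoding is simply the inverse of the proportionality: divide the observed rate by the gain to estimate the stimulus intensity.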

Adrian's student, Horace Barlow, was the first to construct a coherent framework to link neural activity directly with perception. The central proposition of Barlow's 'neuron doctrine'3 is that single neurons code for perceptually significant events. In Barlow's theory, a hierarchical feedforward sequence of processing creates progressively more complex and invariant features, which are economically represented by single cells. His neuron doctrine strongly supported explorations of the feature selectivity of single neurons, which in his view would reveal directly the computations being carried out between sensory periphery and motor act2.
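
A caricature of that hierarchy (purely illustrative; the layer names, weights and thresholds below are invented rather than drawn from Barlow's papers) is a feedforward cascade of threshold units, each responding only to a particular conjunction of the units beneath it, so that a single 'cardinal cell' at the top signals one perceptually significant pattern:

    import numpy as np

    def detector(inputs, weights, threshold):
        """One feedforward stage: fires (1.0) when a weighted conjunction of
        its inputs exceeds a threshold; otherwise stays silent (0.0)."""
        return 1.0 if float(np.dot(weights, inputs)) > threshold else 0.0

    # Invented three-stage hierarchy: local features -> conjunction -> 'cardinal cell'.
    retina = np.array([1.0, 1.0, 0.0, 1.0])                          # toy input pattern
    local = np.array([detector(retina, w, 0.5) for w in np.eye(4)])  # trivially selective units
    conjunction = detector(local, np.array([1.0, 1.0, 0.0, 1.0]), 2.5)
    cardinal = detector(np.array([conjunction]), np.array([1.0]), 0.5)
    print("cardinal cell fires:", bool(cardinal))

Each successive stage in this sketch is more selective and more invariant than the one below it, which is the economy of representation that the neuron doctrine emphasizes.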

Barlow's notion that the stimuli used should have some specific behavioral consequence was not lost on Lettvin and Maturana, who deployed in their experiments “a delightful exhibit. . . a large color photograph of the natural habitat of the frog from a frog's eye view, flowers and grass” (see ref. 4). Their close encounter with two remarkable pioneers of computational neuroscience, Pitts and McCulloch (see ref. 5, this issue), inspired them to search for the neural basis of the perception of universals, such as 'prey' and 'enemy'. To describe the encoding properties of such cells, Lettvin coined the catchy term 'grandmother cell', as a suitably secular successor to Sherrington's 'pontifical cell'6 and Barlow's 'cardinal cell'3. Lettvin and colleagues' revelation7 of what the frog's eye told the frog's brain led to a fundamental division in the development of brain theories. One stream viewed the brain as a signal processor3 and gave dominant roles to information theory and linear systems analysis; the other viewed the brain as a symbolic processor and used concepts drawn from computer science. This latter view is best exemplified by Marr and colleagues8, who, in the 1970s, promoted an influential approach to computational theories of vision that was in direct opposition to the insights derived from experimental work.

Marr was scathing of attempts to bootstrap to high-level theory from experimental observations. In the throes of developing a theory of retinal computation of lightness, he wrote to Sydney Brenner, “One of our wholly new findings is that the so called centre-surround organization of the retinal ganglion cells is all a hoax! It is nothing but a by-product of showing silly little spot stimuli to a clever piece of machinery designed for looking at complete views.”9 Barlow's neuron doctrine3 also received short shrift: “the time has now come to abandon those older ways of thinking.”8 Yet despite Marr's criticisms and his philosophical position that computational theory was independent of its implementation, his theoretical work was stimulated and constrained by biological considerations. Indeed, he was dismissive of neural network theories precisely because they ignored biology. “Hoping that random neural net studies will elucidate the operations of the brain is therefore like waiting for a monkey to type Hamlet. . . a neural network theory, unless it is closely tied to the known anatomy and physiology of some part of the brain and makes some unexpected predictions, is of no value.”10 (Similar views are now the editorial policy of popular science journals.)

Time has not been kind to Marr's approach to neural computation. Today, neural nets are in the ascendant, 'kinder, gentler' versions of the grandmother cell are being developed by Marr's erstwhile colleagues11,12, and insights derived from experimental work have reasserted themselves2. Transported to the present, theorists of the 1950s would experience a strong sense of déjà vu, finding reincarnations of their ideas inspiring computer simulations that they could only have dreamed of. In the experimental arena, Adrian's simple rate code still rules despite precisely timed attacks, the use of natural images is back in fashion, and receptive fields are again the basis for the exploration of neural circuits and computations. Will the present influx into computational neuroscience of physicists, mathematicians and engineers, with their fast ideas and fast computers, transform our theories of the brain? We live in hope, but they will have to make more than a quantum leap to escape the long shadows of their 20th-century predecessors.