Main

A question about the future of computational neuroscience can be bluntly put. Is understanding how the brain works going to be an enterprise in which pure theorists, scientists who do not run experimental laboratories and are not merely subsidiary members of an experimentalist's laboratory, make essential contributions? Are independent theorists important to neuroscience? Important enough, say, to merit independent faculty positions in universities? Or will researchers doing experiments (or at least controlling experimental laboratories) make all the significant contributions, and be the only appropriate occupants of professorial positions in neuroscience?

The history of chemistry is the closest parallel. It is a subject in which both qualitative theory (the periodic table, the chemical bond) and quantitative theory (statistical mechanics, quantum mechanics) have been important. Modern quantitative theory and its impact on chemistry were brought forward by people who did not themselves do experiments, such as the chemistry Nobelists Onsager and Kohn, whose mathematical ability was key to making new predictions and to putting on a firm footing concepts that had emerged qualitatively from experiments (in the areas of chemical bonding and irreversible thermodynamics).

Physics, geology, chemistry and astronomy each developed independent theorists once the breadth of the subject exceeded the span of talents of a single individual. Within neuroscience I know of no one who is both outstandingly able to perform inventive rat brain surgery and able to describe cogently modern artificial intelligence theories of learning and learnability. These are such different dimensions of expertise! Having both the talent and the time to span such a range is now impossible. Computational neuroscience is therefore in the process of bifurcating into theorists and experimentalists.

Sensible theory in science is rooted in facts, be they general or specific, so theory and experiment must interact. In the physical sciences, the development of a theoretical branch was made easier at the time because the relatively small number of essential experimental facts were all available in scientific journals. Now, in the more complex parts of these subjects, large data sets are only summarized in publications, so sharing of the extensive data sets themselves needs to become commonplace. Two forces push towards such accessibility. One is the genuine wish to advance science rapidly. The other is pragmatic: doing experimental science is expensive. Science is chiefly paid for from the public purse, either directly by government or indirectly through the tax-free subsidization of charitable foundations. In appealing for public support of a science, it is important that resources are seen to be used effectively.

Good experimentalists excel in the art of knowing which parts of their own unpublished data should be ignored, so not all data ought to be shared. But certain sharing should become common practice. For example, neuroscientists understand that the (partial) publication of data only through summaries such as post-stimulus time histograms can conceal what is actually happening. In these days of web sites, it would be trivial to make available all the spike rasters from which published summaries are computed.
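To make the point concrete, here is a minimal sketch (in Python, using hypothetical simulated data; the function names and firing-rate profiles are invented for illustration) of how a pooled post-stimulus time histogram can average away trial-to-trial structure that the raw spike rasters would reveal.

```python
# Minimal illustration (hypothetical simulated data): a pooled PSTH can hide
# trial-to-trial structure that the raw spike rasters make obvious.
import numpy as np

rng = np.random.default_rng(0)
n_trials, t_max, bin_width = 40, 1.0, 0.05            # seconds
edges = np.arange(0.0, t_max + bin_width, bin_width)

def poisson_spikes(rate_fn, t_max, dt=1e-3):
    """Draw spike times from an inhomogeneous Poisson process (thinning)."""
    t = np.arange(0.0, t_max, dt)
    return t[rng.random(t.size) < rate_fn(t) * dt]

# Two response modes: an early burst on half the trials, a late burst on the rest.
early = lambda t: 5 + 60 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
late  = lambda t: 5 + 60 * np.exp(-((t - 0.7) ** 2) / (2 * 0.03 ** 2))

rasters = [poisson_spikes(early if i % 2 == 0 else late, t_max)
           for i in range(n_trials)]

# The PSTH pools all trials: it shows a weak double-humped response and gives
# no hint that individual trials fall into two distinct classes.
counts, _ = np.histogram(np.concatenate(rasters), bins=edges)
psth = counts / (n_trials * bin_width)                 # spikes per second
print("PSTH (Hz):", np.round(psth, 1))

# Inspecting the rasters trial by trial recovers what the summary conceals.
for i, spikes in enumerate(rasters[:4]):
    print(f"trial {i}: spikes at", np.round(spikes, 2))
```

The sketch is not meant as a realistic analysis, only as a reminder that a summary statistic commits the reader to the averaging choices of the experimentalist, whereas the rasters let a theorist ask different questions of the same data.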

Some of my friends lament that "we will fail to get credit for our work." But most scientists know that it was the careful measurements of Tycho Brahe that led Kepler to his three laws of planetary motion. The reputations of experimentalists are only enhanced when their data are cited by others as significant in motivating or testing ideas.