Main

The first problem is that quantitative experiments are generally difficult and time-consuming, and it is simply not possible to do all the experiments that one might think of. Nor is it possible to publish all the data that any given experiment generates. Given that so much must be excluded, experiments should be guided by theory if they are to yield more than an arbitrary collection of unfocused facts. Conversely, theory needs to be informed by experimental data: too many theoretical papers present hypotheses that are incompatible with known facts about biology, and this problem is exacerbated by the difficulty theorists face in keeping up with a large and ever-expanding experimental literature.

How might the situation be improved? One step would be to ensure that theoretical papers are reviewed by experimentalists. This would help theoreticians not only to keep current with the experimental literature, but also to develop a better appreciation of how data are presented. Theoreticians are often tempted, for example, to extract quantitative information from representative examples of 'raw' data, failing to realize that 'representative' usually means 'best' rather than 'typical', which compromises the practical utility of any numbers extracted that way.

Theoreticians also need to improve the presentation of their own models. It is taken for granted that experimental papers should contain sufficient information for others to replicate the results, but unfortunately, much theoretical work neglects this basic principle. Attempts to reproduce published computer models often fail, and it is difficult to know whether such failures reflect something profound, or whether they arise simply because the documentation of models with many parameters is naturally prone to error.

Experimental neuroscientists are unlikely to pay serious attention to theoretical models until this problem is resolved. One solution is to develop a standard format for expressing model structure and parameters; indeed, this goal is evident in various neuroscience database projects currently underway. Supplying model source code alone is usually not enough. The format should be efficient and concise, yet allow a level of generic expression that is readable by humans and readily translatable for different simulation and evaluation tools. These requirements suggest exploiting programming languages oriented toward symbolic as well as numeric relations. It would be encouraging if such a standard were adopted at the publication level, because this would facilitate a more thorough review process as well as provide an accessible database for the reader. Eventually, this approach could contribute to a seamless database covering the entire field of neuroscience.
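To make the idea of a declarative, simulator-independent model description concrete, here is a minimal sketch in Python. The schema and field names are purely illustrative assumptions, not an existing standard; the parameter values are the classic Hodgkin-Huxley squid-axon constants, used only as a familiar placeholder. The point is that the same structured description is readable by a human reviewer and losslessly translatable by any tool that can parse it.

```python
import json

# Hypothetical model description -- the schema and key names below are
# illustrative assumptions, not any published standard.
model = {
    "name": "single_compartment_hh",
    "units": {
        "capacitance": "uF/cm^2",
        "conductance": "mS/cm^2",
        "potential": "mV",
    },
    "parameters": {
        "C_m": 1.0,       # membrane capacitance
        "g_Na": 120.0,    # peak sodium conductance
        "g_K": 36.0,      # peak potassium conductance
        "g_L": 0.3,       # leak conductance
        "E_Na": 50.0,     # sodium reversal potential
        "E_K": -77.0,     # potassium reversal potential
        "E_L": -54.387,   # leak reversal potential
    },
}

# Serialize to a human-readable text form that a reviewer can inspect
# and that any simulator front end could load and translate.
text = json.dumps(model, indent=2)

# Round-trip: parsing the published text recovers the model exactly,
# so a reproduction attempt starts from identical parameters.
restored = json.loads(text)
assert restored == model
```

A richer format would also have to express equations and connectivity, not just parameters, which is where languages oriented toward symbolic relations become relevant; but even this flat parameter listing, published alongside a paper, would remove one common source of irreproducibility.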

Finally, it is vital for this young field that the scientific and funding environment allow many interdisciplinary flowers to bloom. Support is needed for the marriage of theory and experiment at all levels of neuroscience, ranging from the biophysical basis of neural computation, to the neural coding of the organism's external and internal worlds, all the way up to the mysterious but (we assume) concrete link between brain and mind. Progress at the first level in particular will be essential if any rational medical therapeutics are to emerge from all this work. Core neuroscience courses should include a theoretical component, demonstrating its fundamental relevance to experimental neuroscience. At the same time, an ongoing critical examination of this marriage is necessary for the evolution of computational neuroscience. Perhaps we could learn lessons from physics, in which there is a more mature liaison between theory and application. As neuroscientists we may not avoid the occasional wild goose chase, but we can at least hope that a theory or two may be falsified in the process, clearing the path a bit for the next go-around and making it all worthwhile.