“In theory, there is no difference between theory and practice. But, in practice, there is,” wrote Nobel laureate chemist Manfred Eigen. In neuroscience, unfortunately, there remains a considerable difference between the two—particularly in the number of people who appreciate these different ways of doing research. Thus, we have devoted part of this issue to a special focus on research presented at the Computational and Systems Neuroscience (Cosyne) meeting earlier this year, in an effort to illustrate how theoretical and experimental approaches can work together to provide insight into brain function.

Theory has developed a bit of a bad reputation among experimentalists. Many scientists are skeptical of claims based on simulated data, feeling that such efforts are too far removed from biology to be informative. Others question the utility of attempts to assign machine-like or—worse still—anthropomorphic operating principles to the brain. Many find the dense language of theoretical papers exhausting and are frustrated when straightforward biological principles are obfuscated by impenetrable math. Hard experimental evidence is the key to understanding the brain, such scientists say, so why indulge in these mental exercises?

In reality, theory is an integral part of all good neuroscience papers—including experimental papers. Any good paper includes an intuitive framework for its results and why they came out the way they did. For example, a study identifying a new protein involved in long-term potentiation is nothing more than a disconnected data set without a mechanistic framework for how that protein interacts with other elements in the pathway and an intuition for the functional consequences of these interactions. 'Theoretical' papers simply formalize and explore these intuitions and mechanisms—sometimes leading to the conclusion that our initial, hand-waving explanations do not provide a good fit to the data. Good theories can synthesize large quantities of empirical data, distilling them to a few simple notions, and can establish quantitative relationships between individual observations. They can generate predictions that serve to validate current and future experiments. Given the vast number of empirical studies being generated by the field and the sheer complexity of the brain, theoretical approaches clearly have great potential for making sense of the problem.

What makes for a computational paper that is not only a good study but one that will have wide impact among experimental neuroscientists? Fundamentally, a good theory paper contains the same elements as any good paper in cellular, molecular, systems or cognitive neuroscience. The paper should have a thought-provoking new hypothesis that is of potential interest to a wide audience. The model should be rigorously tested. Is it robust to biological variability? Can the model be falsified, and does it survive that test? Results, such as network simulations, should be quantified and not just demonstrated qualitatively (a minimal illustration of this point appears below). As with any other neuroscience paper, the hypothesis and assumptions should be reasonably constrained by available evidence. Theories that are motivated by biology are the ones most likely to be influential with biologists. Bold, abstract theories may turn out to be right in the end, but if there is no way to conceive of how the brain might implement them, or to test them experimentally in the near future, then the audience the work can influence shrinks.
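To make the points about quantification and robustness concrete, here is a minimal sketch, in Python, of the kind of check a theory paper might report: a toy firing-rate model is simulated across many parameter sets jittered to mimic biological variability, and the outcome is summarized as a mean and standard deviation rather than as a single hand-picked run. The model, the baseline parameter values and the 20% jitter are purely illustrative assumptions of ours, not anything drawn from the focus papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def steady_state_rate(tau, gain, inp, dt=0.001, t_max=1.0):
    """Integrate a toy rate model, tau * dr/dt = -r + gain * input,
    and return the firing rate averaged over the final 100 ms."""
    r = 0.0
    trace = np.empty(int(t_max / dt))
    for i in range(trace.size):
        r += (dt / tau) * (-r + gain * inp)
        trace[i] = r
    return trace[-100:].mean()

# Illustrative baseline parameters (hypothetical, not from any paper).
baseline = {"tau": 0.02, "gain": 1.5, "inp": 10.0}

# Jitter each parameter multiplicatively (~20%) to mimic cell-to-cell
# variability; lognormal noise keeps every parameter positive.
rates = np.array([
    steady_state_rate(**{k: v * rng.lognormal(0.0, 0.2)
                         for k, v in baseline.items()})
    for _ in range(200)
])

# Report a quantified result, not a single qualitative example run.
print(f"steady-state rate: {rates.mean():.1f} +/- {rates.std():.1f} Hz "
      f"(n = {rates.size} jittered parameter sets)")
```

Reporting the spread of the result across jittered parameters, rather than one tuned example, is the sort of quantitative, falsifiable claim the criteria above call for.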

As with any paper, but particularly so for computational papers, the essentials must be presented in an intuitive way that can be grasped by scientists outside the field. This means keeping jargon to a minimum and presenting arguments in sentences, not in equations merely written out as words. Esoteric quantities such as a 'model-dependent statistic' may be mathematically more elegant than a 'mean' and 'standard deviation' (and sound more impressive), but a paper using the latter terms is far more likely to reach its audience. As programs in computational neuroscience and annual workshops, such as the advanced computational neuroscience courses offered at Woods Hole and in the European Union, flourish, an increasing number of theorists and biologists are becoming fluent in the language of the complementary approach and are coming to appreciate the value of integrating the two disciplines. However, both fields have a long way to go before it is commonplace for them to proceed hand in hand.

We feel that the papers presented in this special issue, which was put together by Associate Editor I-han Chou, exemplify the application of theory to empirical studies. In a departure from our usual focus format, which normally includes only commissioned reviews, this focus also features primary research papers highlighting the best work presented at the Cosyne meeting (http://www.cosyne.org). The meeting was held in March in Salt Lake City, Utah, and brought together a broad range of theorists and experimentalists interested in systems neuroscience. Reflecting the diversity of the attendees, the papers span a variety of topics and vary in their degree of theoretical formalization.

Every research article in this special issue was subjected to our regular peer-review process. We applied our usual stringent editorial standards to each paper, and every one met the criteria for publication in a regular issue of Nature Neuroscience. To accompany these papers, we have also commissioned several perspectives on quantitative approaches to analyzing neural data. Gidon Felsen and Yang Dan discuss the merits of using natural scenes to expand our understanding of the visual system, whereas Nicole Rust and Tony Movshon counter with a piece extolling the use of synthetic stimuli. Jonathan Victor discusses data-analysis techniques used in different experimental disciplines and possible ways to translate them across fields. We hope that this focus will highlight the value of increased dialogue between theorists and experimentalists and spur future integrative efforts.