Profound insights into neural mechanisms have sprung from theory-based research. Some of the more powerful examples are the Hodgkin-Huxley model of action potential propagation, Hebbian plasticity rules, Barlow's efficient coding hypothesis and Marr's three levels of analysis. Despite these successes, there remains a disconnect between theory and experiment in the biological sciences that would be inconceivable in the physical sciences. Technological advances render this divide even more pressing in modern-era neuroscience. It is precisely the ability to generate so much data that requires incisive experimental design and focused questions to gain understanding. As conceptualized in the cover of this issue, without this grounding, experimental neuroscience amounts to not much more than one observation heaped on another, with nearly the same utility and pointlessness that turtles bring to cosmology.

Thrice previously, Nature Neuroscience has published a focus issue on computational neuroscience. The first of these was dominated by historical reviews that covered decades of an important, but underappreciated, field. The next two were hybrid issues of a sort, split between reviews and primary research articles. We are now at a point where the field is mature enough to support a focus composed entirely of Reviews and Perspectives, as well as a Commentary, that highlight recent advances in the computational and theoretical neurosciences.

Experimental innovation has touched nearly every subdiscipline in neuroscience, and ever more data are brought to bear on each question. On page 348, Churchland and Abbott recommend a big-tent approach, with computation and theory entering at all stages, from the analysis of raw data through the building of detailed mechanistic and biophysical models.

One such example of detailed biophysical modeling revolves around networks with approximately equal excitatory and inhibitory inputs. Empirical evidence supports the existence of such synaptic balance. On page 375, Denève and Machens review recent advances in balanced network modeling, with a particular emphasis on networks that can support efficient coding. Most network models are instantiated as continuous-time systems. However, we know that neurons primarily communicate through discrete spike events. Abbott, DePasquale and Memmesheimer discuss on page 350 current methods to bridge this analog-to-digital divide. Conversely, rather than imposing a network structure to match spiking outputs, we might wish to work backwards. Population recording and imaging are now commonplace. Is it possible to infer the physiological mechanisms and sources of variability? On page 383, Doiron and colleagues review how tracking neural correlations across changes in brain state can hint at underlying causal elements.

When considering the theory of memory, a somewhat different view of neural state becomes important: the question of how we can build and maintain persistent, stable representations. Chaudhuri and Fiete focus on the principles necessary for a memory system on page 394. Their Review includes discussion of relevant network architectures, underlying biophysical processes, robustness to noise, and information capacity and coding strategies, along with a nod to parallels with computer science. Computer science, particularly computer vision techniques for building object recognition systems, has come full circle to inform models of human visual perception. In a Perspective on page 356, Yamins and DiCarlo present some of the reasons behind the recent successes of goal-driven deep neural networks in explaining sensory processing and suggest that similar progress could be made beyond sensory systems.

Progressing up the neural hierarchy, we come to decisions, perceptual or overt. In a Perspective on page 366, Pouget, Drugowitsch and Kepecs propose a categorical distinction between decision certainty and decision confidence. When the brain operates under noisy conditions (and by 'when' let's be clear that we mean 'at all times'), it must compute probability distributions. The authors argue that certainty should refer to the encoding of all probabilistic decision variables, whereas confidence should be defined specifically as the probability that a decision is correct. Time will determine whether you should have confidence in their proposal. Having better informed models of perception, behavior and decision-making also potentially enables us to diagnose and tailor treatment when these systems break. Huys, Maia and Frank review the burgeoning area of computational psychiatry on page 404. They detail advances in machine-learning applications and in theory-based mechanistic models, as well as the prospects for classifying and treating disease at the confluence of these two approaches.
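The certainty-versus-confidence distinction above can be made concrete with a toy Bayesian observer. This is a sketch of our own, not a model from the Perspective: a binary choice between two hypothetical Gaussian-evidence sources, where the full posterior stands in for certainty and the posterior probability of the chosen option stands in for confidence. All parameter values are illustrative.

```python
import math

def posterior_two_choice(x, mu_a=1.0, mu_b=-1.0, sigma=1.0):
    """Posterior P(A | x) for equal priors and Gaussian likelihoods
    N(mu_a, sigma) vs. N(mu_b, sigma). Illustrative parameters only."""
    # With equal priors and equal variances, the log-likelihood ratio
    # reduces to (x*(mu_a - mu_b) + (mu_b**2 - mu_a**2)/2) / sigma**2;
    # here mu_a = -mu_b, so the constant term vanishes.
    log_ratio = (x * (mu_a - mu_b)) / sigma**2
    return 1.0 / (1.0 + math.exp(-log_ratio))

def decide(x):
    """Return (choice, confidence): the maximum-posterior choice and the
    probability that this choice is correct."""
    p_a = posterior_two_choice(x)
    choice = "A" if p_a >= 0.5 else "B"
    confidence = max(p_a, 1.0 - p_a)  # P(chosen option is correct)
    return choice, confidence

choice, conf = decide(0.8)  # -> ("A", ~0.83)
```

The point of the sketch is that the posterior (certainty) carries more information than the scalar confidence derived from it: two stimuli yielding the same confidence can correspond to different underlying posteriors once priors or variances differ.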

The topics covered in this collection span a wide range, with most highlighting rather recent advances published across a host of journals. Indeed, the number of computational neuroscience articles in our own pages has increased, albeit modestly, over the last few years. A general interest in computational neuroscience has some tentative support (confirmation bias?) from cherry-picking Google Trends. Searches with the term 'computational neuroscience' have been steady over the last five years. To give this some context, the frequency of searches for 'optogenetics' has been remarkably similar over the same period of time, although optogenetics is trending upward. It might not be too much to hope for a similar inflection for computational neuroscience, as these hits are driven by searches for subject-specific journals (for example, Journal of Computational Neuroscience) as well as for doctoral programs strong in the area. Interestingly, the number of searches for 'cognitive neuroscience' by far exceeds that for computational neuroscience, but with a provocative seasonal rhythm suggesting that many of those searches are fueled by the annual influx of new undergraduates onto college campuses every autumn. Making computation a core part of neurobiology curricula should be high on the agenda at every university. The small benefit is that in five years' time, we needn't start off the next computational focus by trying to justify its relevance for neuroscience. The great hope is that it leads to better biological understanding and treatment of disease.