Mathematical models that use instabilities to describe changes of weather patterns or spacecraft trajectories are well established. Could such principles apply to the sense of smell, and to other aspects of neural computation?
Dynamical stability is ubiquitous — and more often than not desirable. Travelling down a straight road, a cyclist with stable dynamics will continue in more or less a straight line despite a gust of wind or a bumpy surface. In recent years, however, unstable dynamics has been identified not merely as present in diverse processes, but as positively beneficial. A further exciting candidate for this phenomenon lies in the realm of neuroscience — mathematical models (refs 1–3) now hint that instabilities might also be advantageous for representing and processing information in the brain.
A state of a system is dynamically stable when it responds to perturbations in a proportionate way. As long as the gust of wind is not too strong, our cyclist might wobble, but the direction and speed of the cycle will soon return to their initial, stable-state values. This stable state can be depicted in ‘state space’ (the collection of all possible states of the system) as a sink — a state at which all possible nearby courses for dynamic evolution converge (Fig. 1a).
By contrast, at unstable states of a system, the effect of a small perturbation is out of all proportion to its size. A pendulum held upside-down, for example, can in theory stay in that position for ever, but in practice will fall away from upright at even the smallest of disturbances. On a state-space diagram, this is depicted by paths representing possible evolutions of the system running away from the state, rather than towards it. If the unstable state is a ‘saddle’ (Fig. 1b), typical evolutions may linger nearby for some time before moving away. Only perturbations in very specific directions behave as if the state were stable and return to it.
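The difference between a sink and a saddle can be made concrete with a toy numerical experiment (an illustrative sketch, not taken from the models discussed here): two linear systems, one whose eigenvalues are all negative (a sink) and one with eigenvalues of mixed sign (a saddle).

```python
import numpy as np

def evolve(A, x0, dt=0.01, steps=2000):
    """Integrate dx/dt = A @ x with simple Euler steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x

# Sink: both eigenvalues negative -- every nearby path converges to 0.
sink = np.array([[-1.0, 0.0],
                 [0.0, -2.0]])

# Saddle: one negative, one positive eigenvalue -- paths first approach
# along one axis, but are eventually expelled along the other.
saddle = np.array([[-1.0, 0.0],
                   [0.0, 1.0]])

x_sink = evolve(sink, [1.0, 1.0])        # shrinks towards the origin
x_saddle = evolve(saddle, [1.0, 1e-6])   # tiny second component grows
```

Starting a hair's breadth off the saddle's stable axis, the trajectory hugs the saddle for a while and is then flung away: the tiny perturbation ends up dominating the state.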
There is, however, nothing to stop the pendulum from coming back very close to upright if frictional losses are not too great. This is indicated on a state-space diagram by a path travelling close to what is known as a heteroclinic connection between two saddles. Heteroclinic connections between saddle states (Fig. 1c) occur in many different systems in nature. They have, for example, been implicated in rapid weather changes that occur after long periods of constant conditions (ref. 4). Engineers planning interplanetary space missions (ref. 5) routinely save enormous amounts of fuel by guiding spacecraft through the Solar System using orbits that connect saddle states where the gravitational pulls of celestial bodies balance out.
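This near-return can be seen in a quick simulation of a weakly damped pendulum (a sketch in units where gravity and length are 1; the friction coefficient b is a hypothetical choice): released just short of upright, it falls away, swings around the bottom and climbs back close to upright on the other side, shadowing the heteroclinic connection.

```python
import math

def closest_return_to_upright(b=0.001, theta0=math.pi - 0.01,
                              dt=1e-3, t_max=30.0):
    """Integrate theta'' = -b*theta' - sin(theta) (pendulum with weak
    friction b) and record how close the bob gets to the inverted
    position again after first falling well away from it."""
    theta, omega = theta0, 0.0
    left = False        # has the pendulum fallen away from upright yet?
    closest = math.pi   # distance from upright on the best return
    t = 0.0
    while t < t_max:
        # semi-implicit Euler keeps the energy error small
        omega += dt * (-b * omega - math.sin(theta))
        theta += dt * omega
        t += dt
        d = abs(math.pi - abs(theta))   # angular distance from upright
        if d > 1.0:
            left = True
        elif left:
            closest = min(closest, d)
    return closest

c = closest_return_to_upright()
```

With friction this weak, the pendulum returns to within a small fraction of a radian of upright; with b = 0 it would return exactly as close as it started, tracing the heteroclinic connection itself.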
Several studies (refs 1–3, 6, 7) have raised the idea that this kind of dynamics along a sequence of saddles (Fig. 1c) could also be useful for processing information in neural systems. Many traditional models of neural computation share the spirit of a model devised by John Hopfield (ref. 8), in which completion of a task is equivalent to the system becoming stationary at a stable state. Rabinovich et al. (ref. 1) and, more recently, Huerta et al. (ref. 2) have shown that, in mathematical models of the sense of smell, switching among unstable saddle states — and not stable-state dynamics — may be responsible for generating characteristic patterns of neural activity, and thus for representing information. In creating their models, they were inspired by experimental findings in the olfactory systems of zebrafish and locusts (ref. 9), which exhibit reproducible odour-dependent activity patterns.
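Hopfield's picture can be sketched in a few lines (an illustrative toy with one stored pattern and the standard Hebbian outer-product rule, not Hopfield's original notation): recalling a memory means relaxing to a stable state, a sink.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern of +/-1 activities with the Hebbian outer-product rule.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)   # no self-connections

def recall(state, steps=20):
    """Asynchronous threshold updates: each flip lowers the network's
    energy, so the dynamics settles into a stable state (a sink)."""
    s = state.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt two entries; the network completes the pattern by
# converging to the stored stable state.
noisy = pattern.copy()
noisy[0] *= -1
noisy[3] *= -1
recovered = recall(noisy)
```

The computation is finished precisely when the system stops moving, the opposite of the saddle-switching picture described next.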
Huerta et al. (ref. 2) model the dynamics in two neural structures known as the antennal lobe and the mushroom body. These form staging posts for processing the information provided by signals coming from sensory cells that are in turn activated by odour ingredients. Whereas activity in the mushroom body is modelled by standard means using stable dynamics, the dynamics of the antennal lobe is modelled in a non-standard way, using networks that exhibit switching induced by instabilities. In these models, the neural system explores a sequence of states, generating a specific pattern of activity that represents one specific odour. The vast number of distinct switching sequences possible in such a system with instabilities could provide an efficient way of encoding a huge range of subtly different odours.
Both Rabinovich et al. (ref. 1) and Huerta et al. (ref. 2) interpret neural switching in terms of game theory: the neurons, they suggest, are playing a game that has no winner. Individual states are characterized by certain groups of neurons being more active than others; but because each state is a saddle, and thus intrinsically unstable, no particular group of neurons can eventually capture all the activity and ‘win the game’. The theoretical study (ref. 1) was restricted to very specific networks of coupled neurons, but Huerta and Rabinovich have now shown (ref. 3) that switching along a sequence of saddles occurs naturally even when neurons are less closely coupled, as is the case in a biological system.
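A minimal caricature of such winnerless competition (an illustrative sketch; the published models use networks of spiking neurons, not this three-variable system) is a generalized Lotka–Volterra system of May–Leonard type, in which three neural populations inhibit one another asymmetrically. Each saddle corresponds to one group dominating, and the trajectory cycles endlessly from one saddle to the next.

```python
import numpy as np

# Asymmetric inhibition matrix: population i obeys
# da_i/dt = a_i * (1 - (rho @ a)_i). Each group inhibits its successor
# strongly (2.0) and its predecessor weakly (0.5), so no group can win.
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

def simulate(a0, dt=0.01, steps=60000):
    """Euler-integrate the system, recording which population leads."""
    a = np.array(a0, dtype=float)
    leaders = []
    for _ in range(steps):
        a = a + dt * a * (1.0 - rho @ a)
        leaders.append(int(np.argmax(a)))
    return a, leaders

a_final, leaders = simulate([0.4, 0.3, 0.3])
# The identity of the leading group keeps changing: every saddle is
# only visited in passing, and the game has no winner.
switches = sum(1 for x, y in zip(leaders, leaders[1:]) if x != y)
```

Each population takes its turn as the most active, lingers near 'its' saddle, and is then displaced by the next, the switching sequence being exactly the kind of transient pattern the olfactory models exploit.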
Similar principles of encoding by switching along a sequence of saddles have also been investigated in more abstract mathematical models (see refs 6, 7 for examples) that pinpoint possible mechanisms for directing the switching. One problem with these proposals (refs 1–3, 6, 7) is that there is as yet no clear-cut experimental evidence of their validity in any real olfactory system. Nevertheless, all of the models rely on the same key features — saddles that are never reached but only visited in passing, inducing non-stationary switching — that have been shown to be relevant in other natural systems (refs 4, 5). In biology, the detection of odours by populations of neurons may be only one example.
Much remains to be done in fleshing out this view of natural processes in terms of dynamics exploiting saddle instabilities. Then we will see just how much sense instability really makes.
1. Rabinovich, M. et al. Phys. Rev. Lett. 87, 068102 (2001).
2. Huerta, R. et al. Neural Comput. 16, 1601–1640 (2004).
3. Huerta, R. & Rabinovich, M. Phys. Rev. Lett. 93, 238104 (2004).
4. Stewart, I. Nature 422, 571–573 (2003).
5. Taubes, G. Science 283, 620–622 (1999).
6. Hansel, D., Mato, G. & Meunier, C. Phys. Rev. E 48, 3470–3477 (1993).
7. Kori, H. & Kuramoto, Y. Phys. Rev. E 62, 046214 (2001).
8. Hopfield, J. J. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982).
9. Laurent, G. Nature Rev. Neurosci. 3, 884–895 (2002).