Perhaps nothing in mathematics is more useful for physics, chemistry, biology and engineering than differential equations. Isaac Newton had to invent calculus so he could write down the first such equation — for planetary motion under the influence of gravity. Today, few phenomena fully escape the descriptive reach of a generalized form of Newton's recipe: set the rates of change of key variables (the left side of the equation) equal to some continuous functions of those same variables (on the right).

This recipe is an assertion of determinism: the current state of the system fixes what happens next. Yet continuity and determinism in the equations do not guarantee smooth or predictable outcomes. Smooth fluid flows routinely develop shock waves, for example, where the flow becomes discontinuous, with velocity or pressure changing sharply over a microscopic distance. Dynamical chaos destroys long-term predictability even while preserving strict determinism.

But our familiarity with these phenomena — and satisfaction that continuous mathematics is able to describe how they emerge — could well blind us to a wider world of possibilities. A half century ago, a mathematician named A. F. Filippov wrote an odd paper entitled Differential Equations with Discontinuous Right-hand Sides, exploring what might happen if the coefficients appearing in differential equations aren't always smooth, but instead jump from one value to another abruptly. Filippov's work seems to be known to only a narrow group of applied mathematicians, but it may hold some big implications for science.

Differential equations normally involve continuous functions because small changes in a system's current state generally cause correspondingly small changes in its dynamics. But think of a superconducting sample initially cooled well below the superconducting transition temperature TC, and then left to warm up gradually. The equations for its evolution would involve coefficients, for thermal conductivity and other properties, that take one set of values for T < TC and another for T > TC. Equations of this kind also arise naturally in models of biological processes or electrical control circuits.
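
To make this concrete, here is a minimal numerical sketch of an equation with a discontinuous right-hand side: a toy sample relaxing towards an ambient temperature, with a thermal coefficient that jumps at the transition. All names and values here are invented for illustration; they show the structure of such equations, not any real superconductor.

```python
# Minimal sketch (illustrative values only, not a real superconductor):
# a sample relaxing towards an ambient temperature T_env, with a thermal
# coefficient k that jumps discontinuously at the transition T_c.
T_c = 10.0                    # hypothetical transition temperature (arbitrary units)
T_env = 15.0                  # ambient temperature the sample warms towards
k_below, k_above = 0.2, 1.0   # assumed coefficients for T < T_c and T > T_c

def dT_dt(T):
    """Right-hand side with a jump at T = T_c."""
    k = k_below if T < T_c else k_above
    return -k * (T - T_env)

# Forward-Euler integration, starting from a temperature well below T_c
T, dt = 4.0, 0.01
for _ in range(2000):
    T += dt * dT_dt(T)
print(T)   # the sample warms towards T_env; the warming rate jumps as it crosses T_c
```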

In his early analysis, Filippov suggested that discontinuities would have their most interesting consequences in situations where a system's dynamics (away from the discontinuity) act automatically to bring the discontinuity into play. Take the superconductor example again. If the equations for T > TC drive the temperature down towards the discontinuity at T = TC, whereas the equations operating for T < TC drive the temperature upwards, then the discontinuity acts as a kind of trapping surface.
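
A rough numerical sketch of this trapping behaviour, again with made-up dynamics: below the discontinuity the temperature is pushed up, above it the temperature is pushed down, and a naive integration ends up chattering in a narrow band around the surface. Filippov's construction replaces that chatter with a 'sliding' solution confined to the surface itself.

```python
# Rough sketch of a trapping discontinuity (invented rates, arbitrary units):
# the dynamics push the temperature up for T < T_c and down for T > T_c,
# so trajectories from either side are driven onto the surface T = T_c.
T_c = 10.0

def rhs(T):
    return +1.0 if T < T_c else -0.5   # assumed warming / cooling rates

T, dt = 8.0, 0.01
trajectory = [T]
for _ in range(1000):
    T += dt * rhs(T)                   # naive Euler stepping straight across the jump
    trajectory.append(T)

# The naive integration 'chatters' in a thin band around T_c; Filippov's
# construction replaces this with a sliding solution that stays on the
# surface, moving with a convex combination of the two vector fields.
print(min(trajectory[-200:]), max(trajectory[-200:]))   # both values hover near T_c
```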

Mathematician Mike Jeffrey has now taken Filippov's analysis several steps further, showing that it predicts two interesting effects. One is an abrupt kind of bifurcation in which regular system behaviour (a stable cycle, for example) may suddenly disappear, with no prior warning. The second, more provocative effect is the explosive appearance of what Jeffrey calls “non-deterministic chaos”: a scenario in which a discontinuity can make an apparently deterministic system suddenly begin behaving quite randomly.

The first effect Jeffrey refers to as a 'grazing' bifurcation — it's a sharp change in system dynamics brought about when a trajectory just happens to graze up against the discontinuity. Again, consider the superconductor. This sample (as part of a larger system) might exhibit a cyclical behaviour, its temperature rising and falling repeatedly, yet always remaining above TC. If slow changes in system parameters gradually decrease the temperature on this cycle, its lowest value will eventually just barely reach TC, in which case the system trajectory grazes against the surface of the discontinuity.

Think of an aeroplane doing loop-the-loops over a muddy field, gradually losing altitude, until finally making disastrous grazing contact with the sticky surface. The plane sticks to the ground and its cycling is suddenly finished. Similarly, the grazing at T = TC means that the equations for T > TC no longer apply, and the system may do something surprising.
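
A small sketch of how such a graze comes about, with an entirely invented waveform: the temperature oscillates while a slow drift lowers the whole cycle, and we simply record the first moment its low point touches the discontinuity.

```python
import math

# Illustrative graze (invented numbers, arbitrary units): the temperature
# cycles sinusoidally while its mean drifts slowly downwards, until the
# cycle's low point first touches the discontinuity at T_c.
T_c = 10.0
mean0, drift, amplitude = 14.0, 0.01, 2.0

t, dt = 0.0, 0.001
while True:
    T = (mean0 - drift * t) + amplitude * math.cos(t)
    if T <= T_c:
        # from here the equations for T > T_c no longer apply
        print(f"cycle grazes the discontinuity at t = {t:.1f}")
        break
    t += dt
```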

Earlier experiments have actually shown that such bifurcations happen in a real system: a ring of the superconductor niobium nitride. The electrical current in this ring, and its temperature, change according to two simple differential equations containing coefficients that change abruptly if the temperature passes through the superconducting transition temperature. If its temperature descends to the discontinuity, this system, which seems to be fully deterministic, suddenly begins flipping randomly between periods of regular oscillation and intervals of stasis.
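
The code below is a schematic caricature of that kind of model, with one parameter set below the transition and another above. It mimics only the structure just described; the form of the equations and every value in it are assumptions made for illustration, not the published model of the niobium nitride ring.

```python
import numpy as np

# Schematic caricature of a coupled current-temperature system whose
# coefficients switch at the transition T_c. All equations and parameters
# are assumed for illustration; they are NOT the published equations for
# the niobium nitride ring experiment.
rng = np.random.default_rng(0)
T_c = 1.0

def rhs(I, T):
    if T < T_c:                          # 'superconducting' parameter set (assumed)
        a, b = 1.0, 0.1
    else:                                # 'normal' parameter set (assumed)
        a, b = 0.2, 1.0
    dI = a * (1.0 - I) - 0.5 * I * T     # driven current with temperature-dependent loss
    dT = b * (I ** 2 - T)                # heating by the current, cooling to the bath
    return dI, dT

I, T, dt = 0.5, 1.2, 0.001
for _ in range(100_000):
    dI, dT = rhs(I, T)
    I += dt * dI
    # a tiny noise term stands in for the microscopic fluctuations that
    # matter once a trajectory reaches the discontinuity
    T += dt * dT + (dt ** 0.5) * 1e-3 * rng.normal()
print(f"final state: I = {I:.3f}, T = {T:.3f}")
```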

But these bifurcations are quite mundane compared with a more spectacular possibility. As Jeffrey points out, grazing bifurcations require rather special circumstances — the system has to get on just the right trajectory to graze the discontinuity at a special point. This can happen, as the experiments attest, but may not be likely. However, Jeffrey has shown that in higher-dimensional systems — those having three or more independent variables — there should naturally arise situations in which the special conditions for the grazing are automatically satisfied.

The mathematics here gets fairly subtle, but the idea is relatively simple — that a discontinuity in higher dimensions may itself act to trap and focus system trajectories, forcing them to pass through the grazing point. This situation is what dynamical-systems theorists call 'generic' — it happens without any special tuning, and so should be seen routinely in practice. Any system of this kind, on being forced through the grazing point, will exhibit truly random dynamics. In effect, on reaching this point the ordinary equations for the system temporarily lose control, and microscopic noise then exerts a strong influence over the future, at least over a short interval.

The biggest question is how common the non-deterministic chaos arising from such situations may be. Perhaps, for reasons yet unknown, it is something that rarely happens in real systems. Or maybe it hasn't been noted before simply because no one has known to look for it. An experimenter encountering it in some device might dismiss it as a malfunction and simply turn the thing off for a time.

The real surprise of this analysis is that discontinuities in higher dimensions should be expected to make it easy and natural for systems to reach these grazing points, which Jeffrey also refers to as “points of indecision”. Filippov's peculiar discontinuous mathematics of 50 years ago could be far more important than most of us have suspected.