Many scientists believe that scientific theories become ever more general and encompassing with time. More advanced theories, such as quantum mechanics, contain older theories, such as classical mechanics, as special cases. Thermodynamics was superseded when it was shown to follow from the more general ideas of statistical mechanics; the latter reduces to the former in the limit of a very large number of particles. Similarly, Newtonian physics drops out of Einstein's relativity in the formal limit of an infinite speed of light.

Typically, we imagine this connection through limits in analogy to simple functions — one theory turns into another much as exp(εx) turns into 1 + εx as ε approaches 0. But is this right? Or is it only familiar? Several authors — notably physicist Michael Berry and philosopher Robert Batterman — have suggested that the nature of this limit is often quite different (M. V. Berry, Phys. Today 55, 10–11; May 2002; R. W. Batterman, The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction and Emergence, Oxford Univ. Press, 2002). Many theories are actually connected more subtly, through singular asymptotic limits — where the end point is qualitatively distinct from all points along the path.

How, for example, does classical physics emerge as the limit of quantum mechanics? Suppose we take the path-integral perspective. In this view, the probability amplitude (or wavefunction) for a system to go from state A to state B is formed by adding up the amplitudes for all of the many paths the system might take in going from A to B. Each contributing amplitude is simply exp(iS/ħ), where S is the classical action for the system moving along that path. As Richard Feynman showed in his beautiful initial papers on this approach, classical behaviour emerges in the limit of ħ going to zero.

But it is not a simple limit. Rather, as ħ gets small, the contributions from different paths fluctuate in an increasingly violent way. In any simple mathematical sense, there is no convergence. As ħ approaches zero, the wavefunction remains a wavefunction; it just becomes infinitely complicated in structure. The definite trajectory one expects from classical physics emerges only if one somewhat arbitrarily sets to zero all the fluctuating parts away from the path where S is stationary.
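The cancellation can be seen in miniature with a small numerical sketch (Python with NumPy; the quadratic 'action' S(x) = (x − 1)² and all the numbers are illustrative choices of mine, not anything from Feynman's papers). A single oscillatory integral over x stands in for the sum over paths: as ħ shrinks, the integrand oscillates ever more wildly, yet a narrow window around the stationary point of S reproduces almost the whole answer.

```python
import numpy as np

# One-dimensional stand-in for the path integral: integrate exp(i*S(x)/hbar)
# over x, with a toy "action" whose stationary point sits at x = 1.
# (An illustrative sketch, not the quantum-mechanical calculation itself.)
def S(x):
    return (x - 1.0) ** 2

x = np.linspace(-10.0, 12.0, 2_000_001)
dx = x[1] - x[0]

for hbar in (1.0, 0.1, 0.01, 0.001):
    amp = np.exp(1j * S(x) / hbar)        # wildly oscillatory as hbar -> 0
    full = np.sum(amp) * dx
    # Keep only a small window around the stationary point: the rest of the
    # integral self-cancels, so the window alone nearly reproduces the total.
    window = np.abs(x - 1.0) < 5.0 * np.sqrt(hbar)
    local = np.sum(amp[window]) * dx
    exact = np.sqrt(np.pi * hbar)         # |integral| over the infinite line
    print(f"hbar={hbar:6.3f}  |full|={abs(full):.4f}  "
          f"|window|={abs(local):.4f}  sqrt(pi*hbar)={exact:.4f}")
```

The printed magnitudes fall off like the square root of ħ, and the stationary window accounts for essentially all of the result; everything else self-cancels. Discarding the fluctuating remainder by hand is precisely the step that no naive limit performs on its own.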

Making the connection takes something more than a simple limit; there is a transformation of concepts involved too. Berry has examined many other examples that work similarly, including the short-wavelength limit in which wave optics turns into geometrical optics, and the emergence of fluid turbulence in the limit of high Reynolds number.

Leo Kadanoff has also recently discussed this idea in a historical perspective on the life and work of J. Willard Gibbs (http://arxiv.org/abs/1403.2460). Gibbs introduced the concepts of phase space, phase transitions and thermodynamic surfaces, and raised a host of tricky issues concerning the connections between theory at different levels. Again, these connections almost always involve singular or asymptotic limits. For example, the sharp distinction between thermodynamic phases — liquid water and solid ice — does not exist in the statistical mechanics of any finite number of particles, as Gibbs seems to have noticed: a truly sharp transition requires a non-analyticity in the free energy, which appears only in the limit of infinitely many particles.

It makes sense, then, that Gibbs was also the first to bring wide attention to a mathematical puzzle — now known as the Gibbs phenomenon — which illustrates singular limits in a simple way. Imagine a periodic saw-tooth-shaped function f(x) with a period of 2π. It rises linearly from zero, then at 2π drops sharply back to zero, and repeats this pattern. It is straightforward to approximate this as a Fourier series, summing N sinusoidal terms of period 2π/n for n running from 1 to N. For large values of N, this series converges to the function — or so Gibbs first reported in 1898 in the pages of Nature.

However, he soon had to report an error: the Fourier series converges in this simple way only for most values of x. Trouble emerges near the points where the saw-tooth curve abruptly jumps down. Near those points, the highest-frequency terms in the series work hard to capture the sharp vertical moves of the saw-tooth, but never quite succeed. The approximation always wiggles more than the real function and, as Gibbs discovered, the height of the wiggles does not shrink but approaches a fixed value as N gets very large. The error never goes away; it only becomes compressed into an ever narrower zone around the saw-tooth boundaries.
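The wiggles are easy to reproduce. Here is a minimal sketch (Python with NumPy; it assumes the standard Fourier series for this saw-tooth, f(x) = π − 2(sin x + sin 2x/2 + sin 3x/3 + …) for f(x) = x on (0, 2π), and the 8.9% figure in the comments is the classical Wilbraham–Gibbs constant, not a number from Gibbs's letters):

```python
import numpy as np

# Partial Fourier sums of the saw-tooth f(x) = x on (0, 2*pi). Just to the
# right of the jump at x = 0 the sum dips below zero; the dip's depth tends
# to a constant, roughly 8.9% of the jump, while its location closes in on
# the jump like pi/N.
def partial_sum(x, N):
    s = np.full_like(x, np.pi)
    for n in range(1, N + 1):
        s -= 2.0 * np.sin(n * x) / n
    return s

jump = 2.0 * np.pi                      # size of the drop at x = 0
for N in (10, 100, 1000, 10000):
    x = np.linspace(1e-9, 10.0 / N, 4001)   # zoom in just right of the jump
    s = partial_sum(x, N)
    i = s.argmin()
    print(f"N={N:6d}  dip at x={x[i]:.5f} (pi/N={np.pi/N:.5f})  "
          f"depth={-s[i]:.5f}  Gibbs value={0.08949 * jump:.5f}")
```

Past N of a few hundred, the depth of the dip barely budges while its position marches toward the jump like π/N: the error loses width, never height.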

As Kadanoff points out, the resolution of the puzzle that Gibbs found requires a change of variables suited to giving ever increasing scrutiny to these special zones. Returning to physics and the links between theories, we might think of the saw-tooth and its Fourier approximation as different theories. Making the connection demands different approaches in different settings. Perhaps, as Kadanoff suggests, there are no 'all encompassing' theories for physical systems. Any full description of the physical world would “require different and non-overlapping conceptualizations.”
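For the saw-tooth, that change of variables can be written down explicitly. The sketch below (Python again, using SciPy's sine integral; casting Kadanoff's remark this way is my gloss) rescales to the zoomed-in coordinate u = Nx, in which the wiggles near the jump collapse, for every N, onto one fixed profile, π − 2 Si(u).

```python
import numpy as np
from scipy.special import sici

# Rescale to the "inner" variable u = N*x, which magnifies the narrow zone
# near the jump. In this variable the partial sums S_N(u/N) approach a fixed,
# N-independent profile, pi - 2*Si(u), where Si is the sine integral.
def partial_sum(x, N):
    s = np.full_like(x, np.pi)
    for n in range(1, N + 1):
        s -= 2.0 * np.sin(n * x) / n
    return s

u = np.array([1.0, np.pi, 6.0])       # fixed points in the zoomed-in variable
limit = np.pi - 2.0 * sici(u)[0]      # sici(u)[0] is the sine integral Si(u)
for N in (10, 100, 1000):
    print(f"N={N:5d}  S_N(u/N) =", np.round(partial_sum(u / N, N), 5))
print("limit: pi - 2*Si(u) =", np.round(limit, 5))
```

Neither description is wrong: the plain Fourier sum works almost everywhere, while the rescaled profile governs the shrinking zone near the jump, and no single set of variables captures both at once.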

These ideas bring to mind the philosopher of science Thomas Kuhn, who famously described successive theories in the history of science as being “incommensurable”. Einstein's relativity does not just describe the same world in different terms; it describes a different world. Mass can be completely transformed into energy and vice versa, an idea that is entirely foreign to Newtonian physics.

This idea of singular limit connections between theories might help clarify what Kuhn was talking about. Batterman has argued that many theories that we take to be fundamental — theories that supersede earlier, less fundamental views — really aren't. They don't offer complete explanations on their own, but require implicit reference to relic elements of earlier theories. The reason, he suggests, has everything to do with singular limits.

It's another example of mathematics offering the right conceptual language for making sense of things outside mathematics. I've speculated before that fields such as history might benefit from updating their mathematical metaphors — moving from old familiar ideas of cycles, as suggested by celestial mechanics, to modern conceptions of chaos and naturally erratic dynamics, as commonly seen in systems driven out of equilibrium. Mathematics, as the study of abstract examples of connections between different logical structures, is an invaluable resource for philosophy itself.