Abstract
In physics, two systems that radically differ at short scales can exhibit strikingly similar macroscopic behaviour: they are part of the same long-distance universality class^{1}. Here we apply this viewpoint to geometry and initiate a program of classifying homogeneous metrics on group manifolds^{2} by their long-distance properties. We show that many metrics on low-dimensional Lie groups have markedly different short-distance properties but nearly identical distance functions at long distances, and provide evidence that this phenomenon is even more robust in high dimensions. An application of these ideas of particular interest to physics and computer science is complexity geometry^{3,4,5,6,7}—the study of quantum computational complexity using Riemannian geometry. We argue for the existence of a large universality class of definitions of quantum complexity, each linearly related to the other, a much finer-grained equivalence than typically considered. We conjecture that a new effective metric emerges at larger complexities that describes a broad class of complexity geometries, insensitive to various choices of microscopic penalty factors. We discuss the implications for recent conjectures in quantum gravity.
Main
Universality is an idea that permeates physics and computer science. In various guises, this principle says that there are broad equivalence classes of phenomena at long distances, at long times or at high complexity that may be insensitive to the details of how processes are defined at short distances, short times or low complexity. This means that large, slow and highly complex experimentalists can make predictions that are robust against having to know the exact details of how everything works at the fundamental level.
In computer science, universality is integrated into the foundations of the field via the Church–Turing thesis. According to this thesis, and its associated theorems, a broad class of ways to define a computer all have the same computational power—that is, any function that can be computed in finite time by one machine in the class can be computed in finite time by any other member of the class. They all define the same set of computable functions. Universality is also foundational to complexity theory. To define the classes of functions that can be computed in polynomial time on a classical computer (P) or on a quantum computer (BQP), we do not need to be too careful about exactly which fundamental operations are permitted for our computer because a broad class of definitions are all equivalent. If there exist compilers such that two programming languages can each emulate the other with only polynomial overhead, then the set of functions they can compute in polynomial time is the same.
Universality also has a key role in physics. A large number of physical theories that differ in their predictions at the highest energies and shortest distance scales all give rise to approximately the same predictions at low energies and long distances. For example, in statistical physics, the Landau–Ginzburg theory of second-order phase transitions^{8} says that the critical exponents generically depend only on the symmetries and are independent of any other details of the molecular dynamics. Similarly, in quantum field theory, almost all perturbations that can be made to the Hamiltonian at short distances (in the ‘ultraviolet’ (UV)) turn out to be irrelevant—that is, they have an ever smaller effect as we probe the system on longer and longer length scales (in the ‘infrared’ (IR)). The renormalized couplings associated with these irrelevant perturbations are said to decrease with length scale.
In this paper, we explore a kind of universality in long-distance geometry. We study the distance function on several curved spaces and argue that the distance between two well-separated points can be insensitive to most changes in the metric at short separations. We illustrate this with various examples and provide evidence that it is a generic feature of high-dimensional Riemannian geometries.
We then connect this phenomenon in differential geometry back to complexity theory. The bridge is complexity geometry^{3,4,5,6,7} (Methods), which defines the complexity of performing operations on a quantum computer as a distance function on a high-dimensional group manifold. Through this definition, the conjectured universality of geometries becomes a universality of definitions of quantum complexity. Unlike the universality of computability, which allows any computational overhead as long as it is finite, or the universality of BQP, which demands only polynomial equivalence between runtimes, our conjectured universality allows at most linear overhead. We argue that this much more fine-grained universality is nevertheless robust. Finally, we describe how our results support previous conjectures about the connection between computational complexity and the properties of holographic black holes in quantum gravity.
Motivating example
Let us motivate our program with an example. Consider a homogeneous three-dimensional space, of finite volume and a diameter of one metre. Define a ‘small’ deformation of this metric as one that preserves homogeneity and does not change the distance between any pair of points by more than one picometre.
Surprising fact: a small deformation can make the volume infinite. The surprise here is that in one sense, the homogeneous metric has been changed a lot (the volume becomes infinite), and in another sense, the metric has hardly been changed at all (the distance between any two points has a tiny additive variation). We will come to understand this phenomenon as an example of long-distance universality. Our small deformation will change the short-distance geometry, and we will see that while the volume is sensitive to the short-distance geometry, the distance function at large separations is not. The short-distance geometry and the long-distance geometry decouple.
Low-dimensional Riemannian cases
Berger sphere
Our first example of long-distance universality is the Berger sphere^{9}. The Berger sphere manifests the surprising fact observed in the previous section and is the simplest nontrivial complexity geometry^{3,4,5,6,7}—the complexity geometry of a single qubit^{10,11}. It describes the difficulty of synthesizing elements of the group SU(2) when \({\widehat{\sigma }}_{x}\) and \({\widehat{\sigma }}_{y}\) rotations are cheap but \({\widehat{\sigma }}_{z}\) rotations are expensive^{11}. The metric in Euler angles \(U={{\rm{e}}}^{{\rm{i}}{\sigma }_{z}z}{{\rm{e}}}^{{\rm{i}}{\sigma }_{y}y}{{\rm{e}}}^{{\rm{i}}{\sigma }_{x}x}\) is
For \({\mathcal{I}}=1\), this gives the standard round metric on a three-sphere—which is both homogeneous and isotropic—otherwise known as the bi-invariant inner-product (‘Killing’) metric on SU(2). For \({\mathcal{I}}\ne 1\), this gives a squashed three-sphere, which is still homogeneous but no longer isotropic (right-invariant but no longer left-invariant).
To exemplify the surprising fact, take the Berger sphere with \({\mathcal{I}}=1{0}^{30}\), and make the small deformation \({\mathcal{I}}\to \infty \). This sends the volume to infinity, \(\int \sqrt{\det [\,g]}=2{{\pi }}^{2}\sqrt{{\mathcal{I}}}\to \infty \), but makes only a tiny additive change to distances. It may seem surprising that taking \({\mathcal{I}}\to \infty \) only changes the distances a little, as it makes moving directly in the σ_{z} direction infinitely expensive: the cost of directly synthesizing \({{\rm{e}}}^{{\rm{i}}{\sigma }_{z}z}\) is \(\sqrt{{\mathcal{I}}}z\). But we can also synthesize \({{\rm{e}}}^{{\rm{i}}{\sigma }_{z}z}\) indirectly by circling in the σ_{x} and σ_{y} directions and using the group commutator:
The cost \({\mathcal{C}}\) of this indirect technique—the geometric length of this path—is independent of \({\mathcal{I}}\); because the amount of σ_{z} generated is proportional to the area, the cost scales as \(\sqrt{z}\) at small z, and in general is^{12,13}
Let us see how much distances change under our small deformation. As there are two ways to manufacture σ_{z}, the line \({{\rm{e}}}^{{\rm{i}}{\sigma }_{z}z}\) is cut into two. In the inner region, direct synthesis is the cheapest so \({\mathcal{C}}(z)=\sqrt{{\mathcal{I}}}z\); in this region, the geometry depends strongly on the value of \({\mathcal{I}}\), and increasing \({\mathcal{I}}\) can make a large multiplicative change to distances. But when \({\mathcal{I}}\) is very big, this region is very small, extending only as far out as the cut locus at \({{\mathcal{C}}}_{{\rm{cut}}}\approx {{\mathcal{I}}}^{-1/2}\). For larger z, the optimal path is a mixture of direct and indirect synthesis, and the farther we go the less direct synthesis is involved. This means that the farther we go into the outer region, the less the distance function depends on \({\mathcal{I}}\): if you already were not availing yourself much of the direct technique, making the direct technique even more expensive makes little difference. This insensitivity to \({\mathcal{I}}\) reaches its apotheosis when making the very hardest unitary, z = π: for \({\mathcal{I}}\ge 1\), the cost of making \(U={{\rm{e}}}^{{\rm{i}}{\sigma }_{z}{\pi }}\) does not depend on \({\mathcal{I}}\) at all, because we can reach the antipode by proceeding along any great circle \({{\rm{e}}}^{{\rm{i}}{\sigma }_{x}{\pi }}={{\rm{e}}}^{{\rm{i}}{\sigma }_{y}{\pi }}={{\rm{e}}}^{{\rm{i}}{\sigma }_{z}{\pi }}=\,{\mathbb{1}},\) so the greatest separation on the Berger sphere is exactly π and is completely insensitive to \({\mathcal{I}}\) (refs. ^{12,13}). We are thus left with the following picture. Close to the origin, the distance function depends strongly on \({\mathcal{I}}\), but this region is small and shrinks to zero in the sub-Riemannian limit, \({\mathcal{I}}\to \infty \). Outside this region, distances are largely independent of \({\mathcal{I}}\). Nowhere does \({\mathcal{I}}\) make a large additive difference.
A careful analysis^{12,13} confirms this picture and shows that the very largest additive discrepancy from the \({\mathcal{I}}=\infty \) distance is found near the cut locus, so for all U and \({\mathcal{I}}\ge 1\),
As \({(1{0}^{30})}^{-\frac{1}{2}}=1{0}^{-15}\), the \({\mathcal{I}}=\infty \) and \({\mathcal{I}}=1{0}^{30}\) Berger spheres agree on distances to within a picometre. (A two-dimensional example that shows the same phenomenon is given in Supplementary Information 3.)
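The commutator trick behind this agreement is easy to check numerically. The following sketch is our own illustration (not from the paper), using the convention \(U={{\rm{e}}}^{{\rm{i}}\sigma t}\) for single-qubit rotations; it verifies that the group commutator of small \({\sigma }_{x}\) and \({\sigma }_{y}\) rotations synthesizes a \({\sigma }_{z}\) rotation at a cost proportional to \(\sqrt{z}\):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(sigma, t):
    """exp(i*t*sigma) for a Pauli matrix sigma (uses sigma^2 = identity)."""
    return np.cos(t) * np.eye(2) + 1j * np.sin(t) * sigma

def commutator_step(a):
    """Group commutator e^{ia sx} e^{ia sy} e^{-ia sx} e^{-ia sy}.
    By Baker-Campbell-Hausdorff this is exp([ia sx, ia sy]) + O(a^3)
    = exp(-2i a^2 sz) + O(a^3): circling in the easy directions generates
    motion in the hard direction, proportional to the enclosed area a^2."""
    return rot(sx, a) @ rot(sy, a) @ rot(sx, -a) @ rot(sy, -a)

z = 1e-4            # small sz-angle to synthesize
a = np.sqrt(z / 2)  # a commutator of size a generates sz-angle 2*a^2 = z
indirect = commutator_step(a)
direct = rot(sz, -z)       # the target unitary the commutator approximates
err = np.linalg.norm(indirect - direct)
path_length = 4 * a        # cost of the indirect route: four easy rotations
```

The indirect path costs \(4\sqrt{z/2}\propto \sqrt{z}\), independent of \({\mathcal{I}}\), whereas the direct path costs \(\sqrt{{\mathcal{I}}}z\); the two cross at \(z\sim 1/{\mathcal{I}}\), which is the cut locus discussed above.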
Finally, let us examine the role of curvature. At large \({\mathcal{I}}\), the metric becomes strongly curved: the easy–easy section becomes very negatively curved, \(\kappa (x,y)=4-3{\mathcal{I}}\), and the easy–hard sections become very positively curved, \(\kappa (x,z)={\mathcal{I}}\). We call the σ_{x} and σ_{y} directions easy because they are cheap to move in, whereas the σ_{z} direction is hard to move in for \({\mathcal{I}} > 1\). The curvature length \(|\kappa {|}^{-1/2}\approx {{\mathcal{I}}}^{-1/2}\), which is also the distance to the cut locus in the hard direction, becomes very short. The high curvature explains how the metric can hide lots of volume at short distances that is invisible at long distances. Consider an operational definition of volume that counts how many marbles can be packed into the space: if we can cram in n(r) marbles each of radius r, then the volume is proportional to \(\mathop{\mathrm{lim}}\limits_{\,\,r\to 0}{r}^{3}n(r)\). Although the volume grows without bound as \({\mathcal{I}}\to \infty \), the effective volume n(r)r^{3} at any finite value of r does not. Instead, the effective volume grows like r^{−1} as we take r smaller and only levels off at \(\sqrt{{\mathcal{I}}}\) once r is less than the curvature length \({{\mathcal{I}}}^{-1/2}\). Thus even as \({\mathcal{I}}\to \infty \) the effective volume, as probed by experiments with finite resolution, stays finite.
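A minimal numerical sketch of this marble-counting picture (our own illustration; `effective_volume` is a hypothetical helper with all O(1) factors set to 1):

```python
import numpy as np

def effective_volume(r, I):
    """Effective volume r^3 * n(r) of the large-I Berger sphere as probed by
    marbles of radius r (schematic, O(1) factors dropped): it grows like 1/r
    as r shrinks, then levels off at sqrt(I) once r drops below the
    curvature length I**-0.5."""
    return min(1.0 / r, np.sqrt(I))
```

For \({\mathcal{I}}=1{0}^{30}\), the effective volume saturates at 10^{15} only once the marbles are smaller than the curvature length 10^{−15}; any probe of finite resolution sees a finite volume.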
Euclidean group
Consider parallel parking a unicycle.
The unicycle starts facing parallel to the curb and ends facing parallel to the curb but displaced sideways by z. The configuration space of the unicycle is its possible locations {y, z} and orientations {θ}, forming the Euclidean group SE(2). There are three primitive operations: roll forwards or backwards; turn; or drift sideways (perpendicular to the rolling direction). We model the difficulty of any parking manoeuvre with
The cost of parking \({\mathcal{C}}(z)\) is the length ∫ds of the shortest path that connects our starting configuration {θ, y, z} = {0, 0, 0} to our parking spot {0, 0, z}.
The important difference from the Berger sphere example is that, because there is no bound on how far away the curb can be, there is now no upper bound on \({\mathcal{C}}(z)\). However, we still get an inequality like equation (4) that is all the more powerful in this noncompact setting. Whether drifting is just as easy as rolling (\({\mathcal{I}}=1\)) or drifting is completely forbidden (\({\mathcal{I}}=\infty \)), the cost of parking never changes by more than O(1), so at large z all metrics with \({\mathcal{I}}\ge 1\) have the same linear growth.
Figure 1 plots the cost of parking. For \({\mathcal{I}}=1\), the metric is flat \({{\mathbb{R}}}^{2}\times {S}^{1}\) and the optimal parking manoeuvre is just to drift directly in the z direction at cost \({\mathcal{C}}(z)=z\). For large \({\mathcal{I}}\), the optimal parking manoeuvre can be more complicated, and there are three regimes. The first regime, at tiny z, is to drift directly in the z direction, which gives linear growth \({\mathcal{C}}(z)=\sqrt{{\mathcal{I}}}z\) that is strongly dependent on \({\mathcal{I}}\); this regime extends as far as the cut locus, a curvature length from the origin at \({{\mathcal{C}}}_{{\rm{cut}}}\approx {{\mathcal{I}}}^{-1/2}\). The second regime involves commuting the two easy directions, giving \({\mathcal{C}}(z)\approx \sqrt{z}\). (Turn through an angle \(\sqrt{z}\), roll forwards \(\sqrt{z}\), turn back through \(\sqrt{z}\) to be again parallel to the curb and then roll backwards \(\sqrt{z}\) into the parking spot.) So far this is the same as the Berger sphere example, but the difference is what happens next. For z ≳ 1, the square-root behaviour comes to an end, because once you have turned to face the z direction, there are no further efficiencies from turning more. But because we have already paid the fixed cost of turning to face the z direction, the marginal cost of going farther in the z direction is just the cost of rolling forwards. The third regime is thus another linear regime, but this time with gradient one, \({\mathcal{C}}(z)=z+{\rm{constant}}\).
There is a simple upper bound on the distance that works for all values of \({\mathcal{I}}\). Consider a strategy in which we first turn 90°, then roll forwards all the way to the curb and, on arrival, turn 90° to end parallel to the curb. This gives the upper bound \({\mathcal{C}}(z)\le \frac{\pi }{2}+z+\frac{\pi }{2}\), and by optimizing this strategy we find that at large z
in which the omitted terms vanish rapidly at large z. Indeed, a more careful analysis finds the behaviour shown in Fig. 1.
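The three regimes can be made concrete with a toy cost function (our own sketch; the O(1) coefficients, such as the factor 4 in the commuting strategy and the cap on its validity, are schematic guesses rather than derived values):

```python
import numpy as np

def parking_cost_estimate(z, I):
    """Upper bound on the parking cost C(z): the cheapest of the three
    strategies discussed in the text. Coefficients are schematic; this only
    illustrates the three regimes of Fig. 1."""
    direct = np.sqrt(I) * z                      # drift straight to the curb
    # commute turn/roll: turn sqrt(z), roll sqrt(z), turn back, roll back;
    # only sensible while the turn angle sqrt(z) stays below ~pi/2
    commute = 4 * np.sqrt(z) if z <= (np.pi / 2) ** 2 else np.inf
    turn_roll_turn = np.pi / 2 + z + np.pi / 2   # face the curb, roll, turn back
    return min(direct, commute, turn_roll_turn)
```

For \({\mathcal{I}}=1\), the direct drift always wins and \({\mathcal{C}}(z)=z\); for large \({\mathcal{I}}\), the estimate passes through the \(\sqrt{{\mathcal{I}}}z\), \(\sqrt{z}\) and z + constant regimes as z grows.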
We have thus found a large universality class of metrics (every \({\mathcal{I}}\ge 1\)), all of which agree at long distances, up to moderate additive corrections. In this equivalence class, the leading-order long-distance behaviour is independent of \({\mathcal{I}}\). As the dependence on \({\mathcal{I}}\) appears only in the subleading corrections, we may say that \({\mathcal{I}}\) is an irrelevant deformation. Although the members of the equivalence class agree at long distances (borrowing the language of quantum field theory, we could say they agree in the IR), they disagree markedly at short distances (in the UV): they have radically different curvatures, and radically different cut loci.
For most members of the universality class, the relationship between short- and long-distance geometry is convoluted. For one member, however, the relationship is straightforward. For \({\mathcal{I}}=1\)—the critical value below which changes in \({\mathcal{I}}\) do affect the long-distance behaviour—the UV behaviour and the IR behaviour match exactly, with the same linear-growth coefficient. We refer to this special value as giving the critical metric. If you wish to approximate distances in the IR, it is easiest just to calculate with this critical metric, no matter what the true UV value of \({\mathcal{I}}\ge 1\) is. As we approach the critical metric, the cut locus gets pushed out to z = ∞ and the curvature becomes small. We use these properties to help identify the critical metric in more complicated models.
Finally, let us comment on the error, or lack thereof. For the unicycle parking, we did not have to introduce a tolerance, because we can hit exactly, with minimal extra cost, any point that we can get close to, even in the \({\mathcal{I}}\to \infty \) limit. From the IR point of view, this is obvious—because small changes in direction have small costs, and we have plenty of ‘wiggle room’ to make minor adjustments to the end point. From the UV point of view, this is surprising—a Suzuki–Trotter-style^{14} perturbative expansion in z that models the path as a piecewise linear sequence of time-independent Hamiltonians markedly overestimates the cost of correcting errors.
The Berger sphere has diameter π and therefore no long-distance regime. The unicycle parking example does have a long-distance regime, but its critical geometry is flat. In the next example, we will see a nontrivial critical distance function.
High-dimensional gate example
In the gate model of quantum complexity, we build complex unitaries by arranging simple unitaries in a circuit^{15}. The simple unitaries are k-local gates, which means unitaries in U(2^{k}) that act nontrivially on k qubits. We can use any element of U(2^{k}), at a cost \(\sqrt{{{\mathcal{I}}}_{k}}\) (the cost depends only on k); the cost of a circuit that uses n_{k} k-local gates is \({\mathcal{C}}[{\rm{circuit}}]={\sum }_{k}\sqrt{{{\mathcal{I}}}_{k}}{n}_{k}\). To complete our definition of complexity, we need to specify the penalty schedule, which means specifying the penalty factors \({{\mathcal{I}}}_{k}\) for every value of k ≤ N. We then ask how the complexity of a given unitary depends on our choice of penalty schedule.
Our first observation is that if we take any one of the \({{\mathcal{I}}}_{k}\) large (while keeping the others fixed) the complexity soon becomes completely independent of the value of that penalty factor. This is because instead of directly deploying a k-local gate, we could also indirectly synthesize the same unitary with a subcircuit built out of cheaper gates. No gate is indispensable, and the set of m-local gates for any individual m ≥ 2 is sufficient to reconstitute all the others^{15}. To replace a k-local gate we never need to pay more than \({\min }_{m}\sqrt{{{\mathcal{I}}}_{m}}{n}_{m}(k)\), where n_{m}(k) is defined as the number of m-local gates needed to build any U ∈ U(2^{k}). Once \(\sqrt{{{\mathcal{I}}}_{k}}\) exceeds this critical value, the complexity becomes completely independent of \(\sqrt{{{\mathcal{I}}}_{k}}\).
Now let us consider the critical schedule for which every penalty factor takes its critical value. This means that the price of each gate is the same as the cost of indirectly synthesizing it out of cheaper gates. We can show that there is a schedule with approximately this property by estimating n_{m}(k), which simple dimension counting bounds by
This lower bound is approximately saturated^{16} (although not exactly saturated, because when two consecutive gates overlap on a qubit there is a U(2) isotropy subgroup, so adding the dimensions of the two gates overcounts by dim[U(2)] = 4). If we fix the normalization by setting \({{\mathcal{I}}}_{2}=1\), this means that the critical schedule is roughly
independent of N for fixed k. For this critical schedule, direct and indirect syntheses are approximately degenerate. If we start with the critical schedule and make one of the penalty factors more expensive, this has little effect on the complexity of any unitary, because we will just switch to the cheaper option. By contrast, if we start with the critical schedule and make any penalty factor less expensive, the complexity of almost all unitaries reduces.
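A small sketch of this construction (our own illustration; the function names are ours, and we assume the dimension-counting form n_m(k) ≈ (4^k − 4)/(4^m − 4) suggested by the counting argument above):

```python
import numpy as np

def n_gates(m, k):
    """Schematic dimension-counting estimate of n_m(k), the number of m-local
    gates needed to build a generic element of U(2^k). Assumed form: each
    extra gate contributes dim U(2^m) - dim U(2) new parameters, so
    n_m(k) ~ (4^k - 4)/(4^m - 4). O(1) subtleties are ignored."""
    return (4.0**k - 4.0) / (4.0**m - 4.0)

def critical_schedule(k_max):
    """Critical penalty factors, anchored at I_2 = 1: the price sqrt(I_k) of
    a k-local gate equals the cost of the cheapest indirect synthesis,
    min over m < k of sqrt(I_m) * n_m(k)."""
    sqrt_I = {2: 1.0}
    for k in range(3, k_max + 1):
        sqrt_I[k] = min(sqrt_I[m] * n_gates(m, k) for m in range(2, k))
    return {k: s**2 for k, s in sqrt_I.items()}
```

For this assumed form the schedule grows exponentially in k (with growth ratio approaching 16) and is manifestly independent of N.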
We have thus, once again, found a large universality class of definitions of complexity (every schedule with \({{\mathcal{I}}}_{2}={\bar{{\mathcal{I}}}}_{2}=1\) that is no less expensive than the critical schedule, \(\forall k,{{\mathcal{I}}}_{k}\ge {\bar{{\mathcal{I}}}}_{k}\)), all of which approximately agree on the complexities of all unitaries. If the penalty schedule is in this universality class, the large-distance complexity will be the same. For most members of this class, the cheapest way to make a typical element of U(2^{k}) involves a convoluted compilation strategy, but there is a unique member of the class for which the optimal compilation is straightforward: for the critical schedule, we make the element with a single gate.
The universality behaviour is somewhat broader than the class \({{\mathcal{I}}}_{2}={\bar{{\mathcal{I}}}}_{2},{{\mathcal{I}}}_{k}\ge {\bar{{\mathcal{I}}}}_{k}\). First, if we start with the critical schedule and make \({{\mathcal{I}}}_{2}\) bigger, this may have a large effect at short distances but only a small effect at long distances: in the language of quantum field theory, this would be an ‘irrelevant’ deformation. Similarly, if we take some of the \({{\mathcal{I}}}_{k}\) with moderate k and make them cheaper than the critical schedule, this would affect both the UV and the IR, but its effect on the IR would be just to multiply all complexities by \({\min }_{k}\sqrt{{\bar{{\mathcal{I}}}}_{k}/{{\mathcal{I}}}_{k}}\): the entire effect on the IR of a complicated UV deformation would be summarized by a single parameter.
The features mentioned above all have direct analogues in the examples given in the section ‘Low-dimensional Riemannian cases’. Now let us examine a feature that emerges only when the number of dimensions is large. The critical schedule was defined by demanding that \(\sqrt{{\bar{{\mathcal{I}}}}_{k}}={\min }_{m < k}\sqrt{{\bar{{\mathcal{I}}}}_{m}}{n}_{m}(k)\). But there is a scaling symmetry n_{m}(k) ≈ n_{m}(p)n_{p}(k), which means that if we want to make a k-local unitary out of m-local gates, we pay little extra cost for doing a hierarchical compilation that first makes the k-local unitary out of p-local gates and then makes the p-local gates out of m-local gates. This scaling symmetry means that for the critical schedule the quantity \(\sqrt{{\bar{{\mathcal{I}}}}_{m}}{n}_{m}(k)\) is largely independent of m. There is thus a huge degeneracy of ways to make a k-local gate: there is an equal-length path for every value of m, as well as a great many more paths of mixed m. It is this massive degeneracy and redundancy—called ‘load balancing’ in network engineering—that makes the numerical value of the complexity so robust against upward deformations of the penalty schedule.
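Assuming the schematic dimension-counting form n_m(k) ≈ (4^k − 4)/(4^m − 4) (our own illustrative choice, not a derived formula), the scaling symmetry and the resulting degeneracy can be checked in a few lines:

```python
import numpy as np

def n_gates(m, k):
    # Schematic dimension-counting estimate of n_m(k), the number of m-local
    # gates needed for a generic element of U(2^k); assumed form, O(1)
    # subtleties ignored.
    return (4.0**k - 4.0) / (4.0**m - 4.0)

# Scaling symmetry: compiling k-local -> p-local -> m-local costs no extra
# gates, since n_m(k) = n_m(p) * n_p(k) holds exactly for this form.
assert np.isclose(n_gates(2, 6), n_gates(2, 4) * n_gates(4, 6))

# Load balancing: with critical prices sqrt(I_m) = n_2(m), the cost
# sqrt(I_m) * n_m(k) of a k-local gate is the same for every intermediate m.
costs = [n_gates(2, m) * n_gates(m, 8) for m in range(3, 8)]
```

Every route through an intermediate locality m has the same length, which is the degeneracy that makes the complexity robust against upward deformations of the schedule.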
Application to complexity geometry
We now extrapolate the lessons learnt from the simple examples in the sections ‘Low-dimensional Riemannian cases’ and ‘High-dimensional gate example’ to make conjectures about the long-distance behaviour of high-dimensional complexity geometry. Complexity geometry is reviewed in the Methods.
Main conjectures
Let us ask how the value of the complexity of a unitary depends on the choice of penalty schedule. Our fundamental observation is that the complexity geometry shares the same properties that drove the universal behaviour we saw in the previous examples. The universal behaviour in these cases was caused by the overcompleteness of the set of primitive operations, which meant that there were many different ways to effect any given change. The complexity geometry is even more overcomplete than the gate definition of the previous section, because on the one hand its target is the same (elements of U(2^{N})) but on the other hand the tools at its disposal are much more powerful: whereas a k-local gate is constrained to act only on k qubits at a time, in the complexity geometry we may also move in any polynomial superposition over different k-local terms, or terms of mixed k-locality; and whereas the gate definition gets charged a full \(\sqrt{{{\mathcal{I}}}_{k}}\) for even a small step of inner-product size ϵ in a k-local direction, the complexity geometry charges only \(\sqrt{{{\mathcal{I}}}_{k}}{\epsilon }\) and is, therefore, able to change direction without penalty and economically deploy very wiggly paths. Because constructing a path through the complexity geometry affords so many more options than compiling gates into a circuit, the primitive operations are more overcomplete, and so the universality behaviour should be correspondingly more robust.
On the basis of these considerations, we expect an enhanced version of the same universality properties we saw earlier. We can formalize this into two (independent) conjectures:
Conjecture 1
There exists a critical schedule \({\bar{{\mathcal{I}}}}_{k}\), and a universality class consisting of all schedules that are anchored at \({{\mathcal{I}}}_{2}={\bar{{\mathcal{I}}}}_{2}\) and are no easier than the critical schedule, \(\forall k,{{\mathcal{I}}}_{k}\ge {\bar{{\mathcal{I}}}}_{k}\). In this universality class, the distance functions may differ greatly at short separation (in the UV) but will approximately agree at long separation (in the IR).
Conjecture 2
The critical schedule \({\bar{{\mathcal{I}}}}_{k}\) is the unique member of the universality class for which the UV and the IR behaviours match. This means that for the critical schedule the cut locus is pushed far out in almost all directions, so that geodesics leaving the origin typically remain minimal for a time exponential in N and the linear growth continues uninterrupted with the same coefficient at long and short distances.
To make these conjectures more quantitative, we need to specify how close the members of the universality class are to each other and to identify the critical schedule. Our purpose in this paper is not to settle these more quantitative questions but to lay out an effective geometry program to address them, and to identify some valid candidate answers. The easiest schedule in the universality class is the critical schedule, \({\bar{{\mathcal{I}}}}_{k}\), whereas the hardest schedule is the cliff metric:
in the limit \({{\mathcal{I}}}_{{\rm{cliff}}}\to \infty \). As the penalty factors vary by an infinite amount between these two extremes, we might think that the assigned complexities could as well; the content of conjecture 1 is that this does not happen. Let us now describe a plausible quantitative conjecture inspired by the results of the previous sections and describe some supporting evidence. In the Methods, we lay out some more conjectures about the critical metric and describe more directly the relationship to the existing mathematics literature.
Quantitative conjecture for complexity growth
For concreteness, let us consider the complexity of a unitary of the form \(U={{\rm{e}}}^{{\rm{i}}{H}_{k}z}\), where H_{k} is a typical (polynomial) k-local Hamiltonian. We normalize such that \({\rm{Tr}}{H}^{2}=1\), so that z is the inner-product (Killing) distance. This is a useful case to consider because a constant H_{k} generates a geodesic: all terms in H_{k} have the same weight, the penalty factor by assumption depends only on the weight, and a constant Hamiltonian whose terms all carry the same penalty factor gives a geodesic. A candidate quantitative conjecture is shown in Fig. 2.
The complexity growth for the easiest schedule in the universality class, the critical schedule \({\bar{{\mathcal{I}}}}_{k}\), is shown in Fig. 2a. For a k-local Hamiltonian, our statement that the critical metric typically does not hit a cut locus means the complexity \({\mathcal{C}}\) is simply given by the direct geodesic distance s, that is, \({\mathcal{C}}({{\rm{e}}}^{{\rm{i}}{H}_{k}z})=s=\sqrt{{\bar{{\mathcal{I}}}}_{k}}z\), until z is exponentially large in N.
The complexity growth for the hardest schedule in the universality class, the infinite-cliff metric \({{\mathcal{I}}}_{1}={{\mathcal{I}}}_{2}=1\), \({{\mathcal{I}}}_{k\ge 3}={{\mathcal{I}}}_{{\rm{cliff}}}\to \infty \), is shown in Fig. 2b. At vanishingly small distances, the two schedules disagree about the complexity of \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) by a multiplicative factor of \(\sqrt{{{\mathcal{I}}}_{{\rm{cliff}}}/{\bar{{\mathcal{I}}}}_{k}}\to \infty \). However, as \({{\mathcal{I}}}_{{\rm{cliff}}}\) diverges, the cut locus in the z direction approaches the origin, and beyond the cut locus \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) will be more economically synthesized by an indirect path that commutes the cheap directions. For the cliff metric, we expect the same three-regime behaviour as we saw for the parking unicycle in Fig. 1. The first regime is linear with a huge coefficient \({\mathcal{C}}=\sqrt{{{\mathcal{I}}}_{{\rm{cliff}}}}z\). This regime soon ends at a cut locus (and is squeezed out entirely for \({{\mathcal{I}}}_{{\rm{cliff}}}\to \infty \)). Beyond the cut locus, we synthesize \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) indirectly, by commuting 2-local Hamiltonians. As the number of 2-local operators we must commute to make a k-local operator is at least k − 1, the second regime has sublinear growth \({\mathcal{C}}\propto {z}^{\frac{1}{k-1}}\) with some multiplicative coefficient (the exponent of this power law is fixed by the ball–box theorem)^{17}. After z ~ 1, linear growth resumes with coefficient given by the critical metric. In a strong form of the conjecture, the additive deviation of the complexity is never more than the cost of a single k-local gate so that \({{\mathcal{C}}}_{{\rm{cliff}}}\,[{{\rm{e}}}^{{\rm{i}}{H}_{k}z}]\le {{\mathcal{C}}}_{{\rm{critical}}}\,[{{\rm{e}}}^{{\rm{i}}{H}_{k}z}]+O(\sqrt{{\bar{{\mathcal{I}}}}_{k}})\).
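A toy version of this three-regime curve (our own sketch of the behaviour conjectured in Fig. 2b, with every O(1) and critical-schedule coefficient set to 1; `cliff_cost_sketch` is a hypothetical name):

```python
import numpy as np

def cliff_cost_sketch(z, k, I_cliff):
    """Schematic three-regime complexity of e^{i H_k z} under the cliff
    metric; illustrative only, all coefficients set to 1."""
    direct = np.sqrt(I_cliff) * z        # regime 1: pay the cliff price
    # regime 2: sub-Riemannian power law from commuting 2-local directions,
    # valid only up to z ~ 1 (exponent fixed by the ball-box theorem)
    commute = z ** (1.0 / (k - 1)) if z <= 1.0 else np.inf
    resumed = 1.0 + z                    # regime 3: linear growth resumes
    return min(direct, commute, resumed)
```

As \({{\mathcal{I}}}_{{\rm{cliff}}}\to \infty \) the first regime is squeezed out, and at large z the sketch returns z plus an additive constant, matching the conjectured critical-coefficient growth.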
We have argued that there exists a critical metric. In the section ‘Quantitative conjecture for critical metric’, we make a quantitative conjecture for the form of this metric, and argue that the penalty factors should grow at least exponentially with the number of qubits they touch. In the section ‘Consistency checks’, we discuss further evidence for these conjectures.
Discussion
In this paper, we investigated long-distance universality and argued that short-distance geometry and long-distance geometry can decouple. For homogeneous metrics, the short-distance properties—those that can be measured in the vicinity of a point, such as the curvature and its derivatives—determine (up to global identifications) everything about the entire geometry. But the farther away from the origin you get, the more convoluted the relationship between short-distance and long-distance geometry becomes. We illustrated this by considering firing a geodesic in some direction and following it for a distance z. At first, the distance from the origin is just z, so the distance function is simple. But when a cut locus is encountered, the geodesic is no longer minimal, and the distance function becomes much more complicated. Nevertheless, we argued that at even greater separations the distance function becomes simple again. We argued that far out beyond the cut locus a new kind of order emerges. This new order—an emergent long-distance metric—is largely insensitive to the details of the short-distance metric. There is instead a form of universality, in which a broad class of short-distance metrics all give rise to the same effective long-distance geometry, with the entire effect of the short-distance geometry being summarized in a handful of relevant parameters. This is the short–long decoupling that enables spaces of wildly different volumes to agree on every distance to within a picometre.
In the section ‘Low-dimensional Riemannian cases’, we exhibited long-distance universality in low-dimensional Riemannian examples that we could solve exactly. In the section ‘High-dimensional gate example’, we exhibited large-complexity universality in the high-dimensional but non-Riemannian case of gate complexity. In the section ‘Application to complexity geometry’, we studied high-dimensional Riemannian geometries, and marshalled the evidence for our conjectures that these complexity geometries exhibit long-distance universality (which in this context is, by definition, the same thing as high-complexity universality).
Our investigations were originally motivated by ideas about holographic black holes. Although our results stand independently of those motivations, we discuss the implications for holography in the section ‘Relation to black holes and holography’; we observe that if our conjectures are true, then complexity geometry may provide just the precise-but-robust definition of complexity needed to undergird the holographic complexity correspondences^{18,19,20,21}. An inspiration for our investigations was the Wilsonian theory of renormalization in quantum field theory^{1}. In some sense, this paper is an attempt to apply those ideas to geometry. We develop the connection further in the section ‘Wilsonian connections’. Although we have emphasized the application of our ideas to complexity geometry, it seems likely that they are applicable to all right-invariant metrics on sufficiently ‘large’ Lie groups, where ‘large’ could mean either noncompact or a sequence of compact Lie groups with growing dimensions; an interesting mathematical program (Methods and Supplementary Information) would be to characterize the equivalence classes of right-invariant geometries on large Lie groups.
Long-distance universality should be a robust feature of sufficiently rich high-dimensional spaces. Here we have examined the implications for one example of a high-dimensional space: the space of unitary operations on quantum states, which has a dimension exponential in the number of qubits. But high-dimensional spaces are found in many areas in physics and computer science, from many-body systems to deep neural networks. It is tempting to speculate that the concept of emergent geometry may be of broader relevance.
Methods
Review of complexity geometry
Now we provide an overview of Nielsen’s complexity geometry^{3,4,5,6,7}; we recommend ref. ^{11} (sections 1 and 2) as a more pedagogical review (for other recent works, see refs. ^{22,23,24,25,26,27,28,29}). Similar to gate complexity, complexity geometry endows the unitary group U(2^{N}) with a right-invariant distance function \({\mathcal{C}}({U}_{1},{U}_{2})={\mathcal{C}}({U}_{1}{U}_{2}^{-1},{\mathbb{1}})\) between two unitaries, which we interpret as the definition of the relative complexity. However, in contrast with gate complexity, which is not a continuous function of U, complexity geometry endows the unitary group with a smooth Riemannian metric. A general right-invariant Riemannian metric is parameterized by a symmetric moment of inertia tensor \({{\mathcal{I}}}_{IJ}\), so that the infinitesimal distance ds between U and \(U+{\rm{d}}U=({\mathbb{1}}+{\rm{i}}{\sigma }_{I}{\rm{d}}{\Omega }^{I})U\) is given by
and \({\rm{d}}{\Omega }^{I}={\rm{i}}{\rm{Tr}}\,{\rm{d}}{U}^{\dagger }{\sigma }_{I}U\). Here σ_{I} are the generalized-Pauli operators, which provide a complete basis on the tangent space,
where lowercase letters run over the Pauli indices a ∈ {x, y, z} and capital letters indicate the qubit on which the Pauli operator acts, A ∈ {1, 2, …, N}, and we have normalized the trace so that \({\rm{Tr}}\,{\mathbb{1}}={\rm{Tr}}\,{\sigma }_{I}^{2}=1\). The distance between the two unitaries is defined as the minimal geodesic distance in this metric, \({\mathcal{C}}\,=\,\text{min}\int {\rm{d}}s\). If \({{\mathcal{I}}}_{IJ}\) were the identity matrix, this would recover the standard inner-product metric on the unitary group in which all directions are equally easy to move in, but in general a complexity geometry will stretch difficult directions to make complex unitaries farther away. To specify a complexity geometry, we must specify \({{\mathcal{I}}}_{IJ}\). Following Nielsen, let us consider \({{\mathcal{I}}}_{IJ}\)s that are diagonal in the generalized-Pauli basis and for which the penalty factor of a given generalized-Pauli operator is solely a function of the k-locality, that is, solely a function of the weight (or size) k of the operator, defined as the number of capital indices in equation (11). Notice that a k-local Hamiltonian can be an arbitrary superposition of weight-k generalized-Pauli operators—it is allowed to touch all the qubits, so long as no single term touches more than k at a time—whereas a k-local gate (defined in the section ‘High-dimensional gate example’) acts on only k qubits. To specify the metric, we then need only to specify the penalty schedule \({{\mathcal{I}}}_{k}\), that is, the choice of \({{\mathcal{I}}}_{k}\) for each k ≤ N.
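As a concrete illustration of this setup, the snippet below builds the generalized-Pauli basis for a small number of qubits, with the normalized trace and the weight (k-locality) defined as above; the function and variable names are ours.

```python
import itertools
from functools import reduce
from math import comb

import numpy as np

# Single-qubit Pauli matrices; 'i' denotes the identity.
PAULI = {
    'i': np.eye(2, dtype=complex),
    'x': np.array([[0, 1], [1, 0]], dtype=complex),
    'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(label):
    """Generalized-Pauli operator, e.g. 'xiz' -> sigma_x (x) 1 (x) sigma_z."""
    return reduce(np.kron, (PAULI[c] for c in label))

def weight(label):
    """k-locality: number of qubits on which the operator acts non-trivially."""
    return sum(c != 'i' for c in label)

def ntrace(A, N):
    """Trace normalized so that Tr 1 = Tr sigma_I^2 = 1 (i.e. tr / 2^N)."""
    return np.trace(A) / 2 ** N

N = 3
labels = [''.join(s) for s in itertools.product('ixyz', repeat=N)]
# The basis is orthonormal under the normalized trace, and the number of
# weight-k operators is 3^k * C(N, k).
counts = {k: sum(weight(l) == k for l in labels) for k in range(N + 1)}
```

A diagonal penalty schedule then simply assigns a factor \({\mathcal{I}}_{k}\) to every label of weight k.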
We can also consider the complexity geometry on 2N Majorana fermions; this would be natural for studying the complexity of the Sachdev–Ye–Kitaev (SYK) model^{30,31} or other fermionic theories^{32,33}.
The critical metric
Here we elaborate on the conjectures made in the section ‘Application to complexity geometry’.
Quantitative conjecture for critical metric
Let us identify a good candidate for the critical schedule. Unlike the cliff metric, for which all \({{\mathcal{I}}}_{k\ge 3}\) are the same, for the critical metric we expect the penalty factors to grow steadily with k. This reflects the fact that the difficulty of compiling a direction using 2-local Hamiltonians increases with the k-locality. On the other hand, we expect the largest penalty factor to be exponentially large in N. This reflects the fact that the maximum complexity is exponentially large in N, and the maximum complexity is bounded above by the maximum penalty factor, \({{\mathcal{C}}}_{\max }\le {\pi }\sqrt{{{\mathcal{I}}}_{\max }}\). In equation (8), we showed that the exponential metric,
is a good approximation to the critical schedule of the gate model, up to subexponential corrections. Our quantitative conjecture is that an exponential metric, possibly with some different base x not necessarily equal to 4, is also a good approximation to the critical schedule for the complexity geometry,
A previous study^{7} pointed out some of the attractive features of the exponential metric for complexity geometry. One of the features is that, similar to the critical metric we examined in the section ‘Low-dimensional Riemannian cases’, the exponential metric has low curvature. Let us review that now. Another study^{2} showed that when the commutator of two directions is much more expensive than either direction individually, the sectional curvatures are
Two generalized-Pauli operators have a nonzero commutator only when they overlap on at least one qubit, so the weight of the commutator is always less than the sum of the weights of the two operators,
This means that for the exponential metric the magnitude of all the sectional curvatures, of both signs, is always less than O(1). In the section ‘Low-dimensional Riemannian cases’, we saw that low curvature is a signature of the critical metric. By contrast, the cliff metric has huge sectional curvatures because two easy 2-local directions (\({{\mathcal{I}}}_{2}=1\)) commute to a very hard 3-local direction \(({{\mathcal{I}}}_{3}={{\mathcal{I}}}_{{\rm{cliff}}})\). This huge sectional curvature of the cliff metric indicates that the cut locus in the hard direction is close.
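The weight bound is easy to check exhaustively for small N. The sketch below verifies that whenever two Pauli strings have a nonzero commutator, the weight of the commutator is at most the sum of the weights minus one, so that for an assumed exponential schedule with penalty factors proportional to x^k (a stand-in form; the exact base and offset are not fixed here) the ratio of the commutator's penalty to the product of the two penalties stays bounded by 1/x.

```python
from itertools import product

def weight(label):
    """Number of non-identity sites in a Pauli string over 'ixyz'."""
    return sum(c != 'i' for c in label)

def commutator_label(a, b):
    """For Pauli strings a, b: return None if [A, B] = 0, else the label of
    the Pauli string proportional to [A, B]."""
    # Strings anticommute iff they differ at an odd number of overlapping sites.
    anti = sum(1 for x, y in zip(a, b) if 'i' not in (x, y) and x != y)
    if anti % 2 == 0:
        return None  # the strings commute
    out = []
    for x, y in zip(a, b):
        if x == 'i':
            out.append(y)
        elif y == 'i':
            out.append(x)
        elif x == y:
            out.append('i')      # equal Paulis multiply to the identity
        else:
            out.append(({'x', 'y', 'z'} - {x, y}).pop())  # to the third Pauli
    return ''.join(out)

N, x = 4, 4.0          # x is an assumed base for the exponential schedule

def penalty(k):
    return x ** k      # I_k ~ x^k, illustrative only
```

Because an overlap of at least one qubit always cancels or merges a site, the ratio penalty(weight([A,B])) / (penalty(weight(A)) * penalty(weight(B))) never exceeds 1/x, matching the claim that all sectional curvatures stay O(1) for the exponential metric.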
Consistency checks
Now let us describe some important consistency checks on these ideas.
An important consistency check is the diameter. If members of the universality class are to have approximately the same long-distance behaviour, then they certainly need to approximately agree on the diameter (that is, the greatest separation of any pair of points). We saw in the Berger sphere example that all members of that universality class agree on the diameter exactly. It is not obvious in advance that the cliff metric with \({{\mathcal{I}}}_{{\rm{cliff}}}\to \infty \) should even have a finite diameter, because some of the directions are becoming infinitely expensive and the volume is diverging. However, Chow’s theorem^{34,35} ensures that so long as we can reach every element of the algebra by nested commutators of finitely expensive elements of the algebra, the distance function converges in the limit \({{\mathcal{I}}}_{{\rm{cliff}}}\to \infty \) and the diameter is therefore finite. We can place a tighter upper bound by noticing that everything we can do in the gate definition of complexity from the section ‘High-dimensional gate example’ we can do no more expensively (up to a multiplicative factor of π) in the complexity geometry with the same penalty schedule, because every k-qubit gate in U(2^{k}) can be made by evolving with a \({k}^{{\prime} }\)-local Hamiltonian \(({k}^{{\prime} }\le k)\) that acts only on those \({k}^{{\prime} }\) qubits for an inner-product distance of at most π, giving a complexity-geometry cost of at most \({\pi }\sqrt{{{\mathcal{I}}}_{k}}\). Furthermore, we know from ref. ^{16} that even with the infinite-cliff schedule we can construct a circuit for every element of U(2^{N}) with a cost no greater than N^{2}4^{N}. This gives the upper bound. A previous study^{4} was also able to prove a lower bound on the diameter of the cliff metric of 4^{N/3}. If our conjecture is correct, the diameter of the critical schedule cannot be substantially less than the diameter of the infinite-cliff metric. It is therefore relevant that ref. ^{36} proves a lower bound on the diameter of the exponential metric, equation (13), for all x > 1 (and several other metrics) that is exponentially large in N. Our conjecture thus passes this consistency check.
This result is encouraging, but much weaker than what we want to show. We want to show that not only do all metrics in the universality class agree on the diameter, but also they approximately agree on the complexity of almost all sufficiently complex unitaries. Let us now report a step in that direction.
First, let us describe a heuristic compilation strategy for \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) that suggests an upper bound for the critical schedule. This compilation strategy aims to synthesize \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) using only 2-local Hamiltonians (which are always cheap for all members of the universality class). A typical k-local Hamiltonian H_{k} = ∑_{I}ω_{I}σ_{I} is a weighted sum of about \({3}^{k}\left(\begin{array}{c}N\\ k\end{array}\right)\) k-local generalized-Pauli operators (monomials). The dimensionality of the space of k-local Hamiltonians is therefore exponentially bigger than the dimensionality of the space of 2-local Hamiltonians, by a factor of
If we wish to write a typical k-local Hamiltonian as the nested commutator of 2-local Hamiltonians, simple dimension counting tells us that this requires no fewer than n_{2}(k) levels of nesting. However, there are atypical k-local Hamiltonians that can be generated much more compactly. In particular, there is a special set of Hamiltonians, of dimension approximately \((k-1)\,{3}^{2}\left(\begin{array}{c}N\\ 2\end{array}\right)\), that can be written as the nested commutator of only (k − 1) 2-local terms. This set includes the k-local generalized-Pauli operators. Our compilation strategy uses these special Hamiltonians as building blocks. In particular, we use the fact that any operator of the form \({{\rm{e}}}^{{\rm{i}}{\sigma }_{I}z}\), where σ_{I} is a k-local generalized-Pauli operator, can be constructed exactly out of 2-local operations with a cost no greater than O(k).
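The dimension counting above can be made concrete. In the sketch below (the names are ours, and n₂(k) is taken to be the ratio of dimensions, as the counting argument suggests), we compare the space of k-local Hamiltonians with the special set reachable in only (k − 1) nested commutators.

```python
from math import comb

def dim_weight_k(N, k):
    """Number of weight-k generalized-Pauli monomials: 3^k * C(N, k)."""
    return 3 ** k * comb(N, k)

def n2(N, k):
    """Dimension-counting lower bound on the number of nested 2-local
    Hamiltonians needed to reach a typical k-local direction."""
    return dim_weight_k(N, k) / dim_weight_k(N, 2)

def dim_special(N, k):
    """Approximate dimension of the atypical set reachable with only (k - 1)
    nested 2-local commutators (it includes the k-local monomials)."""
    return (k - 1) * dim_weight_k(N, 2)
```

For N = 10 and k = 5, say, the k-local space has dimension 61,236, while the special set has dimension only 1,620: the typical k-local direction is far out of reach of shallow nesting.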
An example of a compilation strategy is the following. Any generalized-Pauli σ_{K} of weight k can be written as the commutator of a weight-(k − 1) generalized-Pauli σ_{J} and a weight-2 σ_{I} that overlap at a single qubit. These three operators satisfy \({{\rm{e}}}^{{\rm{i}}{\sigma }_{K}z}={{\rm{e}}}^{{\rm{i}}{\sigma }_{I}\frac{{\pi }}{4}}{{\rm{e}}}^{{\rm{i}}{\sigma }_{J}z}{{\rm{e}}}^{-{\rm{i}}{\sigma }_{I}\frac{{\pi }}{4}}\), just as they would if they were elements of SU(2). In this way, we can recursively synthesize motion in any k-local monomial direction with a cost \({\mathcal{C}}[{{\rm{e}}}^{{\rm{i}}{\sigma }_{K}z}]\le O(k)\). As moving indirectly in monomial directions is so cheap, the cut locus in monomial directions is very close to the origin even for the critical schedule. The extreme closeness of cut loci in monomial directions does not violate conjecture 2 because monomial directions are extremely atypical.
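The SU(2)-style conjugation identity can be verified numerically for a concrete triple of Pauli strings. In the example below (our choice: σ_J = Z⊗Z⊗1 and σ_I = 1⊗X⊗X overlap on one qubit, and their commutator is proportional to σ_K = Z⊗Y⊗X of weight 3), we use the fact that e^{itσ} = cos(t)·1 + i sin(t)·σ for any Pauli string σ.

```python
from functools import reduce

import numpy as np

P = {
    'i': np.eye(2, dtype=complex),
    'x': np.array([[0, 1], [1, 0]], dtype=complex),
    'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pstr(label):
    """Tensor product of single-qubit Paulis, e.g. 'zzi' -> Z (x) Z (x) 1."""
    return reduce(np.kron, (P[c] for c in label))

def expi(sigma, t):
    """exp(i t sigma) = cos(t) 1 + i sin(t) sigma, valid since sigma^2 = 1."""
    return np.cos(t) * np.eye(sigma.shape[0]) + 1j * np.sin(t) * sigma

# sigma_J (weight 2) and sigma_I (weight 2) overlap on the middle qubit;
# [sigma_J, sigma_I] = 2i sigma_K with sigma_K of weight 3.
sJ, sI, sK = pstr('zzi'), pstr('ixx'), pstr('zyx')

z = 0.37  # an arbitrary evolution parameter
lhs = expi(sK, z)
rhs = expi(sI, np.pi / 4) @ expi(sJ, z) @ expi(sI, -np.pi / 4)
```

The same recursion lowers a weight-k direction to weight k − 1, so k − 1 conjugation steps reach a 2-local generator, at total cost O(k).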
This implies that we can approximate the operator \(U={\prod }_{I}{{\rm{e}}}^{{\rm{i}}{\omega }_{I}{\sigma }_{I}z}\) with a total cost of about \({\mathcal{C}}\approx k{n}_{2}(k)\). This operator agrees with our target operator \({{\rm{e}}}^{{\rm{i}}{\sum }_{I}{\omega }_{I}{\sigma }_{I}z}\) at leading order in z, and has an inner-product error of about z^{2}. This can be improved to z^{3} by using the next order in the Suzuki–Trotter expansion, but going to even higher orders becomes prohibitively expensive. It is at this point that we make our heuristic step. In the Euclidean group example, we saw that the complexity geometry has so many degrees of freedom that, by making minor deformations of the path, we can correct small errors at small extra cost, in a way that is not captured by any finite order of the Suzuki–Trotter expansion and is instead an emergent feature in the IR. Compared with the SU(2) example in the section ‘Berger sphere’, the task of compiling in U(2^{N}) is complicated by the fact that there are many more directions in which to err; on the other hand, there are correspondingly more directions in which we can wiggle the path to eliminate the error, and as a statistical matter, we expect the latter to dominate. If the small inner-product errors can be corrected by wiggling the path, then we can synthesize \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) for z < 1 at cost kn_{2}(k). To generate \({{\rm{e}}}^{{\rm{i}}{H}_{k}z}\) at larger values of z, the triangle inequality (\({\mathcal{C}}(az)\le a\,{\mathcal{C}}(z)\) for any \(a\in {\mathbb{N}}\)) guarantees that the complexity grows no faster than linearly, with coefficient kn_{2}(k). This argument heuristically shows that the binomial metric is in the same universality class as the infinite-cliff metric, and therefore upper-bounds the critical schedule:
The upper bound, equation (17), holds at all but the largest k, where the analysis becomes unreliable. Note also that although the binomial metric does not have a curvature as small as that of the exponential metric, it is still very moderate, ∣κ∣ ≤ O(N), compared with the cliff metric, for which \(|\kappa |\,\sim \,{{\mathcal{I}}}_{{\rm{cliff}}}\). The reasoning that leads to equation (17) is heuristic, because it appeals to a statistical argument to eliminate the error. In ref. ^{37}, a weaker result is proved: any unitary that can be reached by a path that in the binomial metric has a length \({{\mathcal{C}}}_{{\rm{bin.}}}(U)\) can be approximated to within inner-product error ϵ by a path that in the infinite-cliff metric has a length
Our conjectures imply that this can be improved from polynomial to linear-with-additive-constant and from approximate to exact.
Finally, let us note that a property we have conjectured for the complexity geometry—namely, linear growth of complexity that lasts for an exponential duration—has been proved already in two simple toy models: a discrete random-circuit model on Cayley graphs^{38} and a continuous random-circuit model on the unitary group that tolerates zero error^{39}.
Next steps
In attempting to prove, refute or provide further evidence for our conjectures about precise equivalences between high-dimensional complexity geometries, two broad strategies could be pursued: starting at low dimension and working up, or starting at high dimension and making the equivalences more precise.
Following the latter strategy, we could initiate a program of proving increasingly precise equivalence relations. We would show that all metrics in the equivalence classes have approximately the same large-separation distance functions, with escalating strength for the form of the discrepancy (for example, polynomial versus linear versus additive), for the N dependence of the discrepancy (for example, exponential versus polynomial versus linear), for the form of the error tolerated (for example, inner-product distance versus operator-norm distance versus exact), and for whether tight bounds on the discrepancy are to be found in only moderately easy directions or in all directions. This program would pursue a progressive strengthening of the results given in ref. ^{37}.
A complementary program would be to start with the low-dimensional examples in the section ‘Low-dimensional Riemannian cases’ and steadily increase the dimension. For example, a concrete next step to test our conjectures would be to numerically calculate the distance function for a modest number of qubits (or Majoranas), extending the numerical analysis of ref. ^{40} from two qubits to a handful or more.
Relation to black holes and holography
In the context of the gauge–gravity duality^{41}, it has been conjectured that some geometric properties of the black hole interior^{18,19,20,21} are related to the quantum complexity of the holographic dual of the black hole. For example, in ref. ^{19} it was conjectured that
where the volume is the volume of a wormhole behind a black hole horizon and the ≈ symbol accounts for an unknown multiplicative constant. In refs. ^{20,21}, an even more precise conjecture was made:
where again the action is evaluated for a certain geometric region of the holographic wormhole and this time there is no multiplicative ambiguity.
From the point of view of conventional complexity theory, equations (19) and (20) are alarming. On the right-hand side of the equations, we have geometric quantities whose values can be calculated exactly, whereas on the left-hand side we have a quantity that in the conventional view is robustly defined only up to polynomial equivalence, and even then not for a single solution but for a family of solutions of varying N in the limit that N becomes large. In this view, it is a category error to expect to be able to give robust meaning to the numerical value of the complexity of a particular unitary. Of course, even in this view, we can always extract a numerical value by being extremely precise about which choices we make in the definition of complexity (for example, exactly which primitive gates or which penalty factors), but there would be no expectation that the numerical value would be robust against perturbing these choices. Furthermore, there are no known principles that would dictate these seemingly arbitrary choices.
But if the conjectures in this study are correct, equations (19) and (20) are no longer so alarming. Instead, the universality of long-distance complexity tells us that (in the semiclassical limit, in which complexities are large and the dual spacetime is effectively classical) there is a robust definition of complexity to place on the left-hand sides of equations (19) and (20), for which many of the seemingly arbitrary choices of penalty factors do not matter. This could enable a rigorous formulation of holographic complexity.
Further links between holography and the results are discussed in Supplementary Information 2.
Wilsonian connections
We make explicit the analogy between our findings in geometry and the Wilsonian theory of renormalization^{1}.
A starting point for complexity geometry, both logically and historically, is Nielsen’s cliff metric, with a huge penalty factor \({{\mathcal{I}}}_{{\rm{cliff}}}\) for the non-easy directions (see section ‘Main conjectures’). In terms of renormalization, we might call this a bare theory of complexity. For this theory, the behaviours of the UV (that is, short distances) and the IR (that is, long distances) are very different. The UV has violently large curvatures and a very short distance to the cut locus. Using the bare theory, computing complexity growth in the UV (short-distance behaviour) is straightforward: we find linear growth with a very large slope. However, the calculation breaks down once the geodesic we are following passes the cut locus, at which point nonperturbative effects become important. These effects slow the growth of complexity, and if our conjectures are correct, eventually the complexity growth becomes linear again, but with a much-reduced slope. A new schedule of penalty factors—the critical schedule—defines an effective theory that is easy to use in the IR.
In statistical mechanics and quantum field theory, this is analogous to the statement that a field theory is a flow between a UV conformal field theory and an IR conformal field theory. This means (among other things) that certain correlation functions in field theories exhibit a power-law decay in the UV and a power-law decay in the IR, but with different (anomalous) logarithmic slopes (referred to as critical exponents) in the UV and IR. Here the slopes of the linear growth of distances play the part of the logarithmic slopes in statistical physics. Our conjecture that the IR slopes differ dramatically from the UV slopes is analogous to the statement that in a strongly coupled field theory, anomalous dimensions are typically large.
The values of the penalty factors \({{\mathcal{I}}}_{k}\) are the parameters of the theory, playing the role of the set of (inverse) coupling constants in a quantum field theory. If a given penalty factor is greater than the value it attains in the critical schedule, then that parameter is irrelevant—that is, perturbing it does not affect the IR behaviour. The penalty factor becomes relevant only when it has the same value it would have had on the critical schedule, and any further decrease in \({{\mathcal{I}}}_{k}\) beyond this point then changes the distance function in the IR.
In describing the geometry of the group manifolds, we have used the terms of Wilsonian quantum field theory: UV and IR, bare theory, anomalous dimension, nonperturbative, effective theory, flow, coupling constants, relevant and irrelevant. At the moment the similarities between complexity geometry and the renormalization of quantum field theories are far from a precise isomorphism, but they are suggestive of deeper connections.
Connection to coarse geometry
We explain the relationship of our work to the mathematical subject of coarse geometry and geometric group theory^{42,43}. Supplementary Information 1 further rephrases our investigation and conjectures in this language, but the equivalences discussed there are somewhat less coarse than those allowed under the standard definitions reviewed here.
The main idea of coarse geometry is that, given two metric spaces \(X,{X}^{{\prime} }\) equipped with distance functions \(d,{d}^{{\prime} }\), we can say that they are coarsely equivalent, or quasi-isometric, \(d \sim {d}^{{\prime} }\), iff there exists a map \(f:X\to {X}^{{\prime} }\) such that
for some c ≥ 1 and a ≥ 0. Furthermore, it is required that every point \({x}^{{\prime} }\in {X}^{{\prime} }\) is at most a fixed distance b ≥ 0 from some image point f(x), where x could depend on \({x}^{{\prime} }\). For our purposes, we apply this definition to the same underlying space \(X={X}^{{\prime} }\) equipped with two different distance functions and take f to be the identity. We then say that the two metrics are coarsely equivalent, \(d \sim {d}^{{\prime} }\), iff there exist a and c such that
For an unbounded metric space, the statement has content because a and c are required to be finite. For a bounded space, such as a metric on a finite group or a compact Lie group, the statement has no content unless we upper-bound a and c. In the context of complexity geometry, it is natural to consider a sequence of metric spaces X_{N}, for example, X_{N} = U(2^{N}). Then we would say that the sequences of geometries are coarsely equivalent if we can find constants a and c that are independent of N. If the diameter of X_{N} is unbounded as N → ∞, this is a nontrivial statement.
The notion of coarse equivalence, or quasi-isometry, defines an equivalence relation on the set of metrics d on a given space. In our context, we are interested in the case in which the space is a Lie group G. In general, we can fully specify a left-invariant geometry on a Lie group of dimension \(\dim (G)\) by specifying a \(\dim (G)\times \dim (G)\) matrix \({\mathcal{I}}\) of parameters (which we refer to as penalty factors), which can be viewed as the infinitesimal line element near the identity element of the Lie group^{2}. Hence, in this context, we are discussing equivalence relations on these penalty factors, \({\mathcal{I}} \sim {{\mathcal{I}}}^{{\prime} }\). As mentioned above, this equivalence relation is meaningful as stated for a noncompact Lie group, for which the diameter of the geometry is infinite for any reasonable choice of penalty factors. For sequences of compact Lie groups X_{N}, for example, U(2^{N}), we consider corresponding sequences of penalty factors \(\{{{\mathcal{I}}}_{N}\}\) and define an equivalence relation between such sequences, \(\{{{\mathcal{I}}}_{N}\} \sim \{{{\mathcal{I}}}_{N}^{{\prime} }\}\).
More generally, we can imagine adjusting this criterion in various directions. For example, we could require not that a and c are independent of N but only that they have a mild N dependence. In the Supplementary Information, we mention a bound^{44}, proved in the context of nilpotent Lie groups, of the form
where 0 < α < 1. This implies that the fractional error vanishes at large d at a rate no slower than d^{α−1}. This is a stronger statement than equation (22) with c left unspecified, but slightly weaker than equation (22) with c = 1.
In the context of discrete groups, an elementary result is that the coarse geometry defined by the Cayley graph is independent of the choice of generating set^{43}. That is, we can consider a group G that is generated by some set of easy elements g_{1}, …, g_{k}. The distance from the identity to some group element g is the minimum length of a word in g_{1}, …, g_{k} that expresses g. Although the metric depends on the choice of generating set (or, more generally, on the penalty factors associated with each group element), the claim is that different choices of generating set give distance functions that satisfy equation (22). Furthermore, we can consider properties of the geometry that depend only on the equivalence class. A particularly interesting property is δ-hyperbolicity, which is a notion of negative curvature that applies even in this discrete context. We can identify negative curvature by observing that all triangles in negatively curved spaces are slim—that is, any point on one side of the triangle is close to some point on another side of the triangle, with the maximum separation set by the curvature scale. This property defines what are known as Gromov hyperbolic groups^{45} and is the subject of ongoing mathematical work. A simple example is a free group, for which the Cayley graph is an infinite tree. This notion of negative curvature may explain our conjecture that the critical metric has negative sectional curvatures^{7,38}. In particular, the notion of δ-hyperbolicity shows that the concept of large-scale curvature is not a contradiction in terms. In the context of Lie groups, we expect that although many members of a given equivalence class exhibit extreme local curvatures, their large-scale curvatures (for example, as probed by large triangles) should approximately agree with those of the critical metric.
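The generating-set independence of the Cayley-graph metric can be illustrated on a small finite group. The toy below is our construction (for a finite group the statement only has content with explicit constants, as noted above): it computes two word metrics on S₄ by breadth-first search and extracts a multiplicative quasi-isometry constant c, with additive constant a = 0.

```python
from collections import deque

def compose(p, q):
    """(p o q)(i) = p[q[i]], permutations stored as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    """Inverse permutation: inv[p[i]] = i."""
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def word_metric(n, generators):
    """BFS word-length distance from the identity on the Cayley graph of S_n,
    after closing the generating set under inverses."""
    gens = set(generators) | {inverse(g) for g in generators}
    ident = tuple(range(n))
    dist = {ident: 0}
    queue = deque([ident])
    while queue:
        p = queue.popleft()
        for g in gens:
            q = compose(g, p)
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    return dist

# Two generating sets for S_4: adjacent transpositions versus a single
# transposition plus a 4-cycle.
adjacent = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]
sparse = [(1, 0, 2, 3), (1, 2, 3, 0)]
d1 = word_metric(4, adjacent)
d2 = word_metric(4, sparse)
ident = tuple(range(4))
# Smallest multiplicative c with d1/c <= d2 <= c*d1 over all non-identity elements.
c = max(max(d1[g], d2[g]) / min(d1[g], d2[g]) for g in d1 if g != ident)
```

Both generating sets reach all 24 elements, and the two word metrics differ only by a modest multiplicative constant, in the spirit of equation (22).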
This work calls for an extension of the geometric group theory program to cover noncompact Lie groups and sequences of compact Lie groups. Furthermore, in analogy with the Cayley-graph result for discrete groups, we believe that in many cases the number of equivalence classes of coarse geometries is small, despite the naively infinite number of distinct right-invariant metrics on Lie groups.
References
Wilson, K. G. The renormalization group: critical phenomena and the Kondo problem. Rev. Mod. Phys. 47, 773–840 (1975).
Milnor, J. Curvatures of left invariant metrics on Lie groups. Adv. Math. 21, 293–329 (1976).
Nielsen, M. A. A geometric approach to quantum circuit lower bounds. Preprint at https://arxiv.org/abs/quant-ph/0502070 (2005).
Nielsen, M. A., Dowling, M., Gu, M. & Doherty, A. C. Quantum computation as geometry. Science 311, 1133–1135 (2006).
Nielsen, M. A., Dowling, M. R., Gu, M. & Doherty, A. C. Optimal control, geometry, and quantum computing. Phys. Rev. A 73, 062323 (2006).
Dowling, M. R. & Nielsen, M. A. The geometry of quantum computation. Preprint at https://arxiv.org/abs/quant-ph/0701004 (2006).
Brown, A. R. & Susskind, L. Second law of quantum complexity. Phys. Rev. D 97, 086015 (2018).
Ginzburg, V. L. & Landau, L. D. On the theory of superconductivity. Zh. Eksp. Teor. Fiz. 20, 1064–1082 (1950).
Berger, M. Variétés riemanniennes à courbure positive. Bull. Soc. Math. France 87, 285–292 (1959).
Gu, M., Doherty, A. & Nielsen, M. A. Quantum control via geometry: an explicit example. Phys. Rev. A 78, 032327 (2008).
Brown, A. R. & Susskind, L. Complexity geometry of a single qubit. Phys. Rev. D 100, 046020 (2019).
Sakai, T. Cut loci of Berger’s spheres. Hokkaido Math. J. 10, 143–155 (1981).
Podobryaev, A. V. Diameter of the Berger sphere. Math. Notes 103, 846–851 (2018).
Suzuki, M. Decomposition formulas of exponential operators and Lie exponentials with some applications to quantum mechanics and statistical physics. J. Math. Phys. 26, 601–612 (1985).
Nielsen, M. A. & Chuang, I. L. in Quantum Computation and Quantum Information 10th anniversary edn, Ch. 4 (Cambridge Univ. Press, 2010).
Knill, E. Approximation by quantum circuits. Preprint at https://arxiv.org/abs/quant-ph/9508006 (1995).
Gromov, M. in SubRiemannian Geometry: Progress in Mathematics Vol. 144 (eds Bellaïche, A. & Risler, J. J.) 79–323 (Birkhäuser Basel, 1996).
Susskind, L. Computational complexity and black hole horizons. Fortsch. Phys. 64, 24–43 (2016).
Stanford, D. & Susskind, L. Complexity and shock wave geometries. Phys. Rev. D 90, 126007 (2014).
Brown, A. R., Roberts, D. A., Susskind, L., Swingle, B. & Zhao, Y. Holographic complexity equals bulk action? Phys. Rev. Lett. 116, 191301 (2016).
Brown, A. R., Roberts, D. A., Susskind, L., Swingle, B. & Zhao, Y. Complexity, action, and black holes. Phys. Rev. D 93, 086006 (2016).
Li, B., Yu, Z.H. & Fei, S.M. Geometry of quantum computation with qutrits. Sci. Rep. 3, 2594 (2013).
Chapman, S., Heller, M. P., Marrochio, H. & Pastawski, F. Toward a definition of complexity for quantum field theory states. Phys. Rev. Lett. 120, 121602 (2018).
Jefferson, R. A. & Myers, R. C. Circuit complexity in quantum field theory. J. High Energy Phys. 2017, 107 (2017).
Wu, Q.F. Sectional curvatures distribution of complexity geometry. J. High Energy Phys. 2022, 197 (2022).
Bulchandani, V. B. & Sondhi, S. L. How smooth is quantum complexity? J. High Energy Phys. 2021, 230 (2021).
Balasubramanian, V., DeCross, M., Kar, A. & Parrikar, O. Quantum complexity of time evolution with chaotic Hamiltonians. J. High Energy Phys. 2020, 134 (2020).
Yan, B. & Chemissany, W. Quantum chaos on complexity geometry. Preprint at https://arxiv.org/abs/2004.03501 (2020).
Auzzi, R. et al. Geometry of quantum complexity. Phys. Rev. D 103, 106021 (2021).
Kitaev, A. A simple model of quantum holography. KITP Program: Entanglement in StronglyCorrelated Quantum Matter (KITP, 2015).
Sachdev, S. & Ye, J. Gapless spin-fluid ground state in a random quantum Heisenberg magnet. Phys. Rev. Lett. 70, 3339–3342 (1993).
Hackl, L. & Myers, R. C. Circuit complexity for free fermions. J. High Energy Phys. 2018, 139 (2018).
Khan, R., Krishnan, C. & Sharma, S. Circuit complexity in fermionic field theory. Phys. Rev. D 98, 126001 (2018).
Chow, W. L. Über Systeme von linearen partiellen Differentialgleichungen erster Ordnung. Math. Ann. 117, 98–105 (1939).
Rashevsky, P. K. On connecting two points of a completely nonholonomic space by admissible curve. Uch. Zapiski Ped. Inst. Libknechta 2, 83–94 (1938).
Brown, A. R. A quantum complexity lower bound from differential geometry. Nat. Phys. 19, 401–406 (2023).
Brown, A. R. Polynomial equivalence of complexity geometries. Preprint at https://arxiv.org/abs/2205.04485 (2022).
Lin, H. W. Cayley graphs and complexity geometry. J. High Energy Phys. 2019, 63 (2019).
Haferkamp, J., Faist, P., Kothakonda, N. B. T., Eisert, J. & Halpern, N. Y. Linear growth of quantum circuit complexity. Nat. Phys. 18, 528–532 (2022).
Caginalp, R. J. & Leutheusser, S. Complexity in one- and two-qubit systems. Preprint at https://arxiv.org/abs/2010.15099 (2020).
Maldacena, J. M. The large N limit of superconformal field theories and supergravity. Adv. Theor. Math. Phys. 2, 231–252 (1998).
Roe, J. Lectures on Coarse Geometry (American Mathematical Society, 2003).
Gromov, M. Asymptotic Invariants of Infinite Groups Report No. IHES/M/92/8 (Institut des Hautes Etudes Scientifiques, 1992).
Breuillard, E. & Le Donne, E. On the rate of convergence to the asymptotic cone for nilpotent groups and sub-Finsler geometry. Proc. Natl Acad. Sci. USA 110, 19220–19226 (2013).
Gromov, M. in Essays in Group Theory (ed. Gersten, S. M.) 75–263 (Springer, 1987).
Acknowledgements
M.H.F. acknowledges the Aspen Center for Physics for their hospitality in summer 2019 and 2021. L.S. is supported in part by the NSF grant PHY-1720397.
Author information
Contributions
All authors contributed to the paper and extensively discussed the results.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Nature thanks Fernando Pastawski and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Brown, A. R., Freedman, M. H., Lin, H. W. et al. Universality in long-distance geometry and quantum complexity. Nature 622, 58–62 (2023). https://doi.org/10.1038/s41586-023-06460-3