Ultimate Speed Limits to the Growth of Operator Complexity

In an isolated system, the time evolution of a given observable in the Heisenberg picture can be efficiently represented in Krylov space. In this representation, an initial operator becomes increasingly complex as time goes by, a feature that can be quantified by the Krylov complexity. We introduce a fundamental and universal limit to the growth of the Krylov complexity by formulating a Robertson uncertainty relation, involving the Krylov complexity operator and the Liouvillian, as generator of time evolution. We further show the conditions for this bound to be saturated and illustrate its validity in paradigmatic models of quantum chaos.


Introduction
Quantum speed limits (QSL) impose fundamental constraints on the pace at which a physical process can unfold. Since their conception [1,2], they have been formulated as bounds on the minimal time at which a distance between quantum states can be traversed. The freedom in the choice of the distance can be used to sharpen the discrimination between quantum states, and with it, the notion of the speed of evolution [3,4]. Additional efforts have been devoted to exploring the role of the underlying dynamics, generalizing early results from isolated systems to open [5][6][7][8] and classical processes [9,10]. The resulting speed limits have become a useful tool in various branches of physics, ranging from information processing [11] to many-body physics [12], quantum control [13] and quantum metrology [14]. However, traditional QSL are too conservative in estimating the relevant time scales in many processes, such as thermalization [15]. This has motivated the development of speed limits suited for specific measures and observables [16], as in the pioneering work by Mandelstam and Tamm [1]. In this sense, certain speed limits follow from generalized uncertainty relations such as those derived by Heisenberg and Robertson [17].
In parallel with the study of QSL, quantifying the complexity of a physical process is a central task for the advancement of fundamental physics and quantum technologies. Lloyd pointed out that the computational complexity of physical processes is limited by QSL [18]. Analogously, the circuit complexity of a quantum state [19], defined as the number of elementary operations required to generate it from a reference state, can be characterized in terms of conventional QSL [20][21][22][23]. A complementary approach for many-body quantum systems focuses on the buildup of complexity in the time evolution of an initial local observable, known as operator growth [24][25][26][27][28]. The intuition is that simple operators unitarily evolve into increasingly complex ones. Quantum information initially encoded in a few degrees of freedom is thus scrambled over the system in the course of evolution, making it impossible to recover it through local measurements and giving rise to thermalization. The unambiguous description of this scrambling process remains an open problem. One possibility is to probe it via an out-of-time-ordered correlator [29,30] that may be used to identify an analog of the Lyapunov exponent, providing a connection with classical chaos, e.g., the butterfly effect. Such a quantum Lyapunov exponent obeys a universal upper bound [30], which helps refine the notion of maximal chaos, is saturated by black holes, and is further tied to the eigenstate thermalization hypothesis [31,32]. A related approach, which we shall pursue in this work, is to study the dynamical evolution of operators in Krylov space, exploited in numerical techniques such as the recursion method [33]. In this context, operator growth is quantified by the so-called Krylov complexity, a measure of the delocalization of the time-dependent operator in the Krylov basis [34][35][36][37][38].
The authors of [34] made a conjecture on the universal operator growth, namely, that Krylov complexity can grow at most exponentially, and it does so in generic non-integrable systems. Remarkably, its growth rate upper bounds the Lyapunov exponent, establishing a connection with the bound on out-of-time-ordered correlators [30,39]. Further studies have shown that exponential operator growth is possible in free and integrable systems [40], while the role of the interaction graph in a quantum network has been explored in [41].
Here, we characterize the growth of Krylov complexity by deriving a fundamental limit on its rate of change and by studying analytically the conditions under which this bound is saturated. Our results show that saturation, which is also found to correspond to a particular notion of minimum uncertainty, occurs whenever the dynamical evolution of the system has the underlying structure of a three-dimensional complexity algebra, which was introduced by [42]. In this setting, the unitary evolution of an operator can be represented as the displacement of generalized coherent states [42], which display classical-like behavior [43]. As demonstrated in several paradigmatic examples, the saturation of the growth rate may be possible in some chaotic systems, but quantum chaos is not required for it.

Quantum dynamics in Krylov space
Consider an isolated quantum system in which the time evolution of an observable O is generated by a time-independent Hamiltonian H according to the Heisenberg equation, O(t) = e^{iHt} O e^{−iHt}. Introducing the Liouvillian L = [H, ·] as the generator of the evolution, the expansion O(t) = e^{iLt} O = ∑_{n=0}^∞ (it)^n/n! L^n O shows that its dynamics is contained in the complex linear span of the operators {L^n O}_{n=0}^∞. This span is completely determined by the Hamiltonian and the initial observable and is known as the Krylov space.
From now on, we consider the restriction of each operator and superoperator to the Krylov space. To highlight the vector space structure, we make use of the bra-ket notation |A) when expressing an operator A in an equation. We choose to equip the Krylov space with an inner product satisfying the properties
1. (A|LB) = (LA|B), ∀A, B;
2. (A†|B†) = (B|A), ∀A, B.
An example of a family of inner products satisfying these two properties is given by (A|B) = ⟨e^{βH/2} A† e^{−βH/2} B⟩_β. The bracket ⟨·⟩_β denotes the thermal expectation value with respect to the equilibrium Gibbs state e^{−βH}/Z, and thus (A|B) reduces to the Hilbert-Schmidt inner product when β = 0, up to a normalization factor. It follows from the second property of the inner product that the operators O and LO are orthogonal. Let b_0 = ‖O‖ and b_1 = ‖LO‖, where ‖·‖ is the norm induced by the inner product. Starting from the normalized vectors O_0 = O/b_0 and O_1 = LO/b_1, we can construct an orthonormal basis {O_n}_{n=0}^{D−1} for the Krylov space by applying the Lanczos algorithm. This algorithm works as follows: given the first n+1 basis vectors, one constructs the orthogonal vector |A_{n+1}) = L|O_n) − b_n|O_{n−1}), with b_{n+1} = ‖A_{n+1}‖, and then normalizes it to obtain |O_{n+1}) = |A_{n+1})/b_{n+1}. We call the constructed basis the Krylov basis. It is possible that the Krylov dimension D is infinite, in which case the Lanczos algorithm never halts. We remark that the Lanczos algorithm is only guaranteed to construct an orthonormal basis if the Liouvillian is self-adjoint, i.e., if the first property of the inner product is satisfied. In general, the Lanczos algorithm involves a third term on the right-hand side of the equation for |A_{n+1}); this term vanishes identically whenever the second property of the inner product is satisfied. Thus, with our chosen inner product, the action of the Liouvillian on the Krylov basis takes the specific form L|O_n) = b_{n+1}|O_{n+1}) + b_n|O_{n−1}). As pointed out in [42], this motivates one to consider abstract raising and lowering operators, denoted by L_+ and L_−, respectively. Their action on the Krylov basis is given by L_+|O_n) = b_{n+1}|O_{n+1}) and L_−|O_n) = b_n|O_{n−1}). The Liouvillian can then be expressed as their sum, L = L_+ + L_−.
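As an illustration, the Lanczos construction described above can be sketched numerically for a Liouvillian represented as a matrix acting on vectorized operators at infinite temperature (β = 0, Hilbert-Schmidt inner product). This is a minimal sketch rather than the implementation used in the paper; we use full re-orthogonalization, which also removes the third term mentioned above:

```python
import numpy as np

def lanczos(liouvillian, o0, tol=1e-10, kmax=None):
    """Build Lanczos coefficients b_n and an orthonormal Krylov basis
    from a Liouvillian matrix and a vectorized initial operator o0."""
    basis = [o0 / np.linalg.norm(o0)]
    bs = []
    kmax = kmax or o0.size
    for _ in range(kmax):
        a = liouvillian @ basis[-1]        # |A_{n+1}) = L|O_n) - ...
        for q in basis:                    # full re-orthogonalization
            a = a - (q.conj() @ a) * q
        b = np.linalg.norm(a)
        if b < tol:                        # Krylov space exhausted
            break
        bs.append(b)
        basis.append(a / b)
    return np.array(bs), np.array(basis)
```

For the single-qubit example discussed in Supplementary Note 1 (H = σ_3, O_0 = σ_1 + σ_3), this reproduces b_1 = b_2 = √2 and a three-dimensional Krylov space.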
It is further convenient to introduce the real-valued functions ϕ_n(t), which appear in the expansion of O(t) as |O(t)) = (1/‖O‖) ∑_{n=0}^{D−1} i^n ϕ_n(t)|O_n). We will refer to these functions as the amplitudes of the observable. These amplitudes evolve according to the recursion relation ∂_t ϕ_n(t) = b_n ϕ_{n−1}(t) − b_{n+1} ϕ_{n+1}(t), with the initial conditions ϕ_0(0) = 1 and ϕ_n(0) = 0 for n > 0. Thinking of the Krylov basis vectors as forming the sites of a one-dimensional lattice, b_n can be interpreted as a hopping amplitude; see, e.g., [34,35]. In this sense, one can think of O as a one-dimensional discrete wave function that is initially localized and then spreads out over the lattice as time evolves. An increase in the population of the sites further away from the origin reflects a greater increase of the complexity of the observable. In order to quantify this, it is natural to consider the Krylov complexity of O(t), defined as K(t) = ∑_{n=0}^{D−1} n ϕ_n(t)². The main task of our work is to bound the growth of Krylov complexity. Due to the unitarity of the dynamics, the norm of the evolving operator is preserved, and the Krylov complexity is unchanged if one normalizes the operators studied. We will, therefore, without loss of generality, consider O to be normalized. By introducing the complexity operator K = ∑_{n=0}^{D−1} n|O_n)(O_n|, which plays the role of the position operator on the Krylov lattice, it is possible to express the Krylov complexity as the "expectation value" of K with respect to O(t). More precisely, K(t) = ⟨K⟩_t, where ⟨K⟩_t ≡ (O(t)|K|O(t)).
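The recursion relation for the amplitudes can be integrated directly once the Lanczos coefficients are known; the following is a minimal sketch (the fixed-step RK4 integrator and the finite truncation are ours), which also evaluates K(t):

```python
import numpy as np

def amplitudes(bs, t_grid):
    """Integrate d/dt phi_n = b_n phi_{n-1} - b_{n+1} phi_{n+1}
    with phi_0(0) = 1, on a finite Krylov lattice of size len(bs)+1."""
    D = len(bs) + 1
    b = np.concatenate(([0.0], np.asarray(bs, float), [0.0]))  # b_0 = b_D = 0
    def rhs(phi):
        shift_down = np.concatenate(([0.0], phi[:-1]))   # phi_{n-1}
        shift_up = np.concatenate((phi[1:], [0.0]))      # phi_{n+1}
        return b[:-1] * shift_down - b[1:] * shift_up
    phi = np.zeros(D)
    phi[0] = 1.0
    out = [phi.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = rhs(phi)
        k2 = rhs(phi + 0.5 * h * k1)
        k3 = rhs(phi + 0.5 * h * k2)
        k4 = rhs(phi + h * k3)
        phi = phi + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(phi.copy())
    return np.array(out)

def krylov_complexity(phis):
    n = np.arange(phis.shape[1])
    return phis**2 @ n
```

For the coefficients b_1 = b_2 = √2 of the single-qubit example in Supplementary Note 1, this reproduces K(t) = 2 sin²(t).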

Dispersion bound on Krylov complexity
If the Krylov space forms an inner product space on which A and B are self-adjoint superoperators, then there ought to exist a Robertson uncertainty relation, ΔA ΔB ≥ ½|⟨[A,B]⟩|, where ΔA = √(⟨A²⟩ − ⟨A⟩²) is the dispersion of A with respect to some state |A). When the Krylov dimension is infinite, it is necessary that |A) be contained in the intersection of the domains of AB and BA; otherwise the inequality might not hold [44]. Letting |A) = |O(t)), A = L and B = K, and noting that ΔL = b_1 and that |⟨[K,L]⟩_t| = |∂_t K(t)| (see Methods), we can rewrite the uncertainty relation as
|∂_t K(t)| ≤ 2 b_1 ΔK. (2)
In other words, the growth of Krylov complexity is upper bounded by a constant times the dispersion of the complexity operator. By defining a characteristic time scale τ_K = ΔK/|∂_t K(t)|, one obtains τ_K b_1 ≥ 1/2, which takes the form of a Mandelstam-Tamm bound and emphasizes the role of b_1 = ‖LO‖ as the norm of the generator of evolution in Krylov space. To avoid confusion with the uncertainty relation for observables, we will refer to this bound as the dispersion bound. We note that no bound tighter than (2) can be found by considering the more general Schrödinger uncertainty relation, as the extra term given by the anticommutator identically vanishes, as shown in Methods. It is not self-evident that saturation of the dispersion bound can be achieved under unitary dynamics of the observable. Very specific relations between L, O and K need to hold: the Liouvillian is required to be tridiagonal in the eigenbasis of the complexity operator, and the initial operator is required to be parallel to the eigenvector with the lowest eigenvalue. The conditions for the saturation of the dispersion bound are thus highly constrained and differ from those known for the saturation of a Robertson uncertainty relation in general. The required conditions admit a geometrical interpretation, elaborated in Methods. The bound is saturated if and only if the evolution curve moves along the gradient of the Krylov complexity.
This requires that the dynamics is directed along the direction that maximizes the local growth of complexity; see Methods. The only exception involves extremal points, at which any direction away from the extremal point leads to saturation. This is indeed the case for t = 0. Moreover, there exist Liouvillians of the form L = L_+ + L_− for which the tangent of the generated path is parallel to the gradient at all times.
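The dispersion bound and its saturation can be probed numerically. The sketch below evolves the amplitudes exactly, by diagonalizing the antisymmetric tridiagonal generator of the recursion relation, and compares |∂_t K| with 2b_1ΔK; the coefficients b_1 = b_2 = √2 correspond to the saturating single-qubit example of Supplementary Note 1 (the code and its function names are ours):

```python
import numpy as np

def krylov_evolution(bs, ts):
    """Exact amplitudes phi_n(t): d/dt phi = M phi with M antisymmetric
    and tridiagonal, so iM is Hermitian and can be diagonalized once."""
    D = len(bs) + 1
    M = np.zeros((D, D))
    for n, b in enumerate(bs, start=1):
        M[n, n - 1] = b        # + b_n phi_{n-1}
        M[n - 1, n] = -b       # - b_{n+1} phi_{n+1}
    w, U = np.linalg.eigh(1j * M)
    c0 = U.conj().T @ np.eye(D)[0]
    return np.array([(U @ (np.exp(-1j * w * t) * c0)).real for t in ts])

bs = np.array([np.sqrt(2.0), np.sqrt(2.0)])   # saturating (SU(2)-type) case
ts = np.linspace(0.0, 1.0, 201)
phis = krylov_evolution(bs, ts)
n = np.arange(3)
K = phis**2 @ n
dK = np.gradient(K, ts)
dispK = np.sqrt(phis**2 @ n**2 - K**2)        # dispersion of K
bound = 2 * bs[0] * dispK                     # dispersion bound 2 b_1 ΔK
```

For this evolution the ratio |∂_t K|/(2b_1ΔK) stays at unity up to finite-difference error, as expected for a saturating dynamics.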

Saturation of the dispersion bound
Time evolutions saturating the dispersion bound are characterized by a unique algebraic structure. Define the superoperator B = L_+ − L_−. Following [42], we consider their simplicity hypothesis: namely, the assumption that L, B and the commutator K̃ = [L, B] close an algebra with respect to the Lie bracket. It was shown in [42] that this forces K̃ to be related to the complexity operator via K̃ = αK + γ, where α, γ ∈ R. We show in Supplementary Note 2 that γ is a positive number and that α satisfies α ≥ 0 for infinite Krylov dimension and α = −2γ/(D−1) for finite Krylov dimension. Moreover, the only possible closure of the algebra is given by the commutation relations
[L, B] = K̃, [K̃, L] = αB, [K̃, B] = αL. (3)
Given this algebra, the evolving observable can be interpreted as a curve of generalized coherent states generated by the displacement operator D(ξ) = e^{ξL_+ − ξ̄L_−}, where ξ = it. Moreover, the initial state is the highest-weight state of the representation, which is annihilated by L_− by construction. Coherent states can be viewed as the states closest to classical ones in the sense that they typically minimize an uncertainty relation. It is, for example, known that coherent states of the harmonic oscillator saturate the Robertson uncertainty relation for the pair of observables of position and momentum. Building on this intuition, one could expect the dispersion bound to be saturated under the simplicity hypothesis. It turns out that this intuition is indeed correct. In fact, as we show in Supplementary Note 2, the dispersion bound is saturated if and only if the simplicity hypothesis holds. The saturation of the dispersion bound dictates the evolution of the Krylov complexity, where three different scenarios are possible, as shown in Figure 1a.

Figure 1: Growth of Krylov complexity at the speed limit. Saturation of the dispersion bound occurs in three different scenarios, each of which is associated with a different complexity algebra, specified by the sign of α.
The growth of complexity at the speed limit is described by the differential equation
(∂_t K(t))² = αK(t)² + 2γK(t), (4)
with the conditions K(0) = 0 and K(−t) = K(t).
For finite Krylov dimension, saturation of the dispersion bound sets the complexity growing according to K(t) = (D−1) sin²(√|α| t/2), with |α| = 2γ/(D−1). In this case, the corresponding complexity algebra (3) reduces to the SU(2) algebra. By contrast, for infinite Krylov dimension there are two distinct scenarios for the complexity growth: for α > 0 one finds K(t) = (2γ/α) sinh²(√α t/2), while for α = 0 the growth is quadratic, K(t) = γt²/2. The complexity algebra in these two cases reduces to SL(2,R) and the Heisenberg-Weyl algebra (HW), respectively. Reference examples maximizing the Krylov-complexity growth rate at all times are discussed in Supplementary Note 1. One such example, with α > 0, is the Sachdev-Ye-Kitaev (SYK) model [45], a paradigm of quantum chaos.
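As a consistency check, one can verify numerically that the α > 0 solution satisfies the speed-limit equation. Both the closed form of K(t) and the equation (∂_t K)² = αK² + 2γK used below are our reconstructed parametrization, stated here as assumptions:

```python
import numpy as np

alpha, gamma = 1.8, 0.9      # illustrative algebra parameters, alpha > 0
t = np.linspace(0.0, 2.0, 2001)
# candidate speed-limit solution for infinite Krylov dimension, alpha > 0
K = (2 * gamma / alpha) * np.sinh(np.sqrt(alpha) * t / 2) ** 2
dK = np.gradient(K, t)
# residual of (dK/dt)^2 = alpha K^2 + 2 gamma K; should vanish
residual = dK**2 - (alpha * K**2 + 2 * gamma * K)
```

At short times the solution reduces to the quadratic growth K(t) ≈ γt²/2 common to all three scenarios.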
However, the saturation of the bound does not require quantum chaos and can indeed be achieved by a single qubit, with α < 0 (Supplementary Note 1). Together with the time dependence of K(t) and the complexity algebra, the value of α also determines the growth of the Lanczos coefficients on the Krylov lattice. As proven in Supplementary Note 2, the dispersion bound is saturated if and only if the Lanczos coefficients grow according to
b_n = ½ √(αn(n−1) + 2γn), (5)
exhibiting three different scalings as a function of α; see Figure 1b. That the simplicity hypothesis implies (5) has already been pointed out in [42]. For α > 0 and large n, this dependence captures the asymptotically linear growth b_n ≃ (√α/2) n conjectured by Parker et al. to hold in generic non-integrable systems, maximizing the Krylov complexity growth [34].
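The single parametrization of the Lanczos coefficients in terms of α and γ (our reconstruction of Eq. (5), used here as an assumption) can be checked against the three model families treated in the Supplementary Notes:

```python
import numpy as np

def b_closed(n, alpha, gamma):
    """Assumed closed form of the Lanczos coefficients under the
    simplicity hypothesis (our parametrization)."""
    return 0.5 * np.sqrt(alpha * n * (n - 1) + 2 * gamma * n)

nu, j, eta = 0.7, 3.0, 2.5                             # illustrative values
n_fin = np.arange(1, int(2 * j) + 1)
n_inf = np.arange(1, 20)

b_su2 = nu * np.sqrt(n_fin * (2 * j - n_fin + 1))      # SU(2):   alpha < 0
b_hw  = nu * np.sqrt(n_inf)                            # HW:      alpha = 0
b_sl2 = nu * np.sqrt(n_inf * (n_inf - 1 + eta))        # SL(2,R): alpha > 0
```

The SU(2), HW and SL(2,R) coefficients correspond to (α, γ) = (−4ν², 4jν²), (0, 2ν²) and (4ν², 2ην²), respectively; in the finite-dimensional case the argument of the square root vanishes at n = D = 2j + 1, terminating the lattice.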

Krylov complexity in generic systems
We next discuss the Krylov complexity growth in generic systems not fulfilling the simplicity hypothesis. We can use Eq. (5) to estimate when, and at what time scale, a generic system deviates from the bound. By expanding the Krylov complexity up to fourth order in time we find K(t) = b_1²t² + (b_1²/6)(b_2² − 2b_1²) t⁴ + O(t⁶). Since we can always find values of α and γ such that b_1 and b_2 satisfy (5), we conclude that the bound (2) is saturated up to third order in time. By expanding the Krylov complexity up to sixth order, we find that the Lanczos coefficient b_3 appears in the last term; since we are not guaranteed to be able to find values of α and γ such that b_1, b_2 and b_3 simultaneously satisfy (5), we conclude that the system can only start deviating from the bound (2) as a result of fifth-order terms in the expansion. We can estimate this time scale by finding the value of t at which the third-order coefficient of ∂_t K(t) equals its fifth-order coefficient. We will call this time the deviation time, denoted by τ_d [Eq. (6)], which depends only on the first three Lanczos coefficients. To get an understanding of the complexity growth in a generic setting, we next illustrate the Krylov dynamics of a system described by a random-matrix Hamiltonian. Specifically, we consider the Krylov complexity of an ensemble E(H) of random-matrix Hamiltonians, a paradigm of quantum chaos [46]. We sample the Hamiltonian matrices H from the Gaussian Orthogonal Ensemble GOE(d), where d is the dimension of the Hilbert space. We then calculate the Lanczos coefficients {b_n} with partial re-orthogonalization [36,47]. Specifically, we consider samples of real matrices H = (X + Xᵀ)/2, where all elements x ∈ R of X are pseudo-randomly generated with probability measure given by the normal distribution exp(−x²/(2σ²))/(σ√(2π)). In order to study the general behaviour of the Lanczos coefficients, we choose an initial observable represented by the normalized vector |O) = (1/d, 1/d, ..., 1/d)ᵀ, expressed in a fixed eigenbasis of the Liouvillian.
However, the following results do not depend strongly on the choice of O, provided it is dense in the eigenbasis of the Hamiltonian. Figure 2a shows the squares of the Lanczos coefficients for a single realization, together with the ensemble average over 100 different Hamiltonians of dimension d = 32, sampled from GOE(d) with standard deviation σ = 1. Operator growth is displayed by the time-dependent amplitudes, which are found by solving the recursion relation and exhibit diffusion-like dynamics on the Krylov lattice, shown for a single realization in Figure 2b. The corresponding time evolution of the Krylov complexity and its growth rate are shown in panels c and d, respectively. Hamiltonians sampled from GOE(d) behave as a generic system, given that the Lanczos coefficients do not, in general, grow according to (5), as shown in Figure 2a. As a result, the growth rate starts deviating from the dispersion bound around the time scale τ_d in Eq. (6), indicated by the vertical line in Figure 2, panels c and d. In short, while GOE Hamiltonians provide a useful paradigm in the description of quantum chaotic systems, the dynamics they generate does not maximize the growth of Krylov complexity for t > τ_d.
Our results establish the ultimate speed limit to operator growth in isolated quantum systems. Specifically, the dispersion bound governs the growth rate of Krylov complexity, playing the role of a Mandelstam-Tamm uncertainty relation in operator space. This bound is saturated by quantum systems in which the Liouvillian generating the time evolution fulfills the simplicity hypothesis, closing a complexity algebra. The latter arises naturally in certain quantum chaotic systems, such as the SYK model. However, other paradigmatic instances of quantum chaos, such as random-matrix Hamiltonians, do not maximize the growth of Krylov complexity. Indeed, saturation of the bound does not require quantum chaos and can be achieved, e.g., by a single qubit.

Methods
Vanishing of the anticommutator contribution in the Robertson uncertainty relation for K and L. We establish a universal feature of Krylov complexity, valid for any physical system: namely, that the anticommutator of the complexity operator with the Liouvillian L has vanishing expectation value over the evolved operator |O(t)). The relevance of this result relies on the fact that this quantity enters the Schrödinger uncertainty relation for the two superoperators K and L,

4(ΔKΔL)² ≥ |(O(t)|[K,L]|O(t))|² + |(O(t)|{K,L}|O(t)) − 2⟨K⟩_t⟨L⟩_t|², (7)

from which one can bound the complexity rate ∂_t K(t). Since (O|LO) = 0 and ⟨L⟩_t is conserved by the evolution, we have ⟨L⟩_t = 0 at all times, so the anticommutator term in (7) reduces to |(O(t)|{K,L}|O(t))|² = 4[Re(O(t)|KL|O(t))]². Expanding |O(t)) = ∑_n i^n ϕ_n(t)|O_n) and using L|O_n) = b_{n+1}|O_{n+1}) + b_n|O_{n−1}) together with K|O_n) = n|O_n), a direct computation, performing the sums over the basis indices, yields

(O(t)|KL|O(t)) = i ∑_n ϕ_n(t) [(n−1) b_n ϕ_{n−1}(t) − (n+1) b_{n+1} ϕ_{n+1}(t)].

Since the amplitudes ϕ_n and the coefficients b_n are real quantities, we immediately conclude that (O(t)|KL|O(t)) is purely imaginary, so that (O(t)|{K,L}|O(t)) = 2 Re(O(t)|KL|O(t)) = 0. Let us note that the key condition to obtain this result is the fact that the Liouvillian connects only states that are nearest neighbors on the Krylov lattice, which leaves us with a purely imaginary quantity. It is this peculiar property that allows the Liouvillian to be interpreted as a sum of generalized ladder operators L_± [42]. However, let us point out that here we are not making any assumption regarding the commutation rules between these operators: we are considering the structure of Krylov space in full generality. Moreover, from ∂_t|O(t)) = iL|O(t)) we immediately obtain the relation between the commutator [K, L] and the complexity rate ∂_t K(t):

(O(t)|[K,L]|O(t)) = −i ∂_t K(t).

Therefore, the Schrödinger uncertainty relation (7) can be recast as the dispersion bound (2) on the growth of Krylov complexity:

|∂_t K(t)| ≤ 2 ΔK ΔL = 2 b_1 ΔK.

Geometrical interpretation of the saturation of the bound.
For the geometrical interpretation of the saturation of the bound, we assume the Krylov space to be of finite dimension. However, the results could potentially be extended to infinite-dimensional Krylov spaces as well.
The Krylov space is isomorphic to a 2D-dimensional real vector space, and we can therefore consider the Euclidean metric g, given by the real part of the inner product. The evolution curve of O will then be restricted to the unit sphere of the Krylov space. This unit sphere forms a Riemannian manifold, and we can consider the Krylov complexity as a function on this manifold defined by K(A) = (A|KA), for any element |A) in the Krylov space with unit norm. In this sense, when we write K(t) we simply mean K(O(t)), which is consistent with how we defined complexity for the evolution. The differential of the Krylov complexity will be denoted by dK, and its action on any tangent vector Ȧ at A is given by dK(Ȧ) = (Ȧ|KA) + (A|KȦ). This differential, together with the metric, can be used to define the gradient of the Krylov complexity. It follows from the theory of differential geometry that the gradient of the Krylov complexity at A, denoted by ∇K(A), is the unique vector satisfying g(∇K(A), Ȧ) = dK(Ȧ) for all tangent vectors Ȧ at A [48]. It can be checked that the gradient must then be given by ∇K(A) = 2(K − K(A))|A), which indeed is tangent to the unit sphere at A. The change of the Krylov complexity along the curve O(t), generated by the Liouvillian, is given by ∂_t K(t) = g(∇K(t), ∂_t O(t)), where ∇K(t) is the gradient at O(t). Applying the Cauchy-Schwarz inequality to the right-hand side gives us the inequality |∂_t K(t)| ≤ ‖∇K(t)‖ ‖∂_t O(t)‖. The right-hand side of this inequality is exactly 2b_1ΔK, and we note that it is saturated if and only if the tangent vector of O(t) is parallel to the gradient of the Krylov complexity. We also note that the gradient is the zero vector at time zero, and so the dispersion bound is always initially saturated.
The unitary orbit of O is the set of all points U † OU , where U is a unitary operator. We emphasize that this is a proper subset of the unit sphere in Krylov space which, in contrast, is the set of all points UO, where U is a unitary superoperator. The gradient we have considered is with respect to the unit sphere and it is therefore not obvious that this gradient will ever be tangential to the unitary orbit of O. However, the gradient is indeed tangential to the unitary orbit at time zero and at all times provided the simplicity algebra is fulfilled.
On the closure of the complexity algebra. Here we show that the only possible closure of the complexity algebra introduced by [42] is given by Eq. (3). The (anti-Hermitian) operator B = L_+ − L_− "conjugated" to the Liouvillian can be expanded in Krylov space as

B = ∑_n b_{n+1} (|O_{n+1})(O_n| − |O_n)(O_{n+1}|).

We note that one can establish a formal analogy with the harmonic oscillator: L plays the role of the position of the harmonic oscillator, while iB corresponds to its momentum. However, in general the commutator between L and B is not proportional to the identity; indeed,

K̃ ≡ [L, B] = ∑_n 2(b_{n+1}² − b_n²) |O_n)(O_n|, (18)

where it is understood that b_0 has to be replaced with 0. Let us now investigate the conditions under which L, B and K̃ form a closed algebra with respect to the operation [·,·]: the so-called complexity algebra [42]. This happens if and only if the commutators [L, K̃] and [B, K̃] can be written as linear combinations of the operators L, B and K̃ themselves. These commutators can be expanded over the Krylov basis as follows:

[L, K̃] = ∑_n 2 f(n) b_{n+1} (|O_{n+1})(O_n| − |O_n)(O_{n+1}|), (19)
[B, K̃] = ∑_n 2 f(n) b_{n+1} (|O_{n+1})(O_n| + |O_n)(O_{n+1}|), (20)

where we have defined

f(n) = 2b_{n+1}² − b_n² − b_{n+2}². (21)

Now, it is clear that the commutator (19) between L and K̃ cannot contain any element of the complexity algebra other than B, while the commutator (20) can only contain L. Moreover, the only possibility for the algebra to be closed is that the discrete function f(n) is a constant. By looking at Eq. (21), we conclude that f(n) is constant if and only if

2(b_{n+1}² − b_n²) = αn + γ (22)

for some constants α and γ (the factors 2 are included for convenience). Again, b_0 has to be replaced with 0, so that Eq. (22) holds for n ≥ 1, while at n = 0 it gives 2b_1² = γ. Then, the function f(n) takes the constant value f = −α/2, so that the only possible closure of the complexity algebra is given by:

[L, B] = K̃, [K̃, L] = αB, [K̃, B] = αL. (23)

Moreover, from Eq. (22) we immediately conclude that

K̃ = αK + γ.

Therefore, if α ≠ 0, the Krylov complexity is related to K̃ by a shift and a rescaling. Conversely, if α = 0 there is no such relation between the Krylov complexity and the operator K̃.
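These closure relations can be verified numerically in the finite-dimensional case, where b_D = 0. The sketch below builds L, B and K̃ = [L, B] as matrices from Lanczos coefficients obeying Eq. (22); the specific values of γ and D are illustrative:

```python
import numpy as np

D, gamma = 5, 4.0
alpha = -2 * gamma / (D - 1)              # finite-dimensional closure
n = np.arange(1, D)
b = 0.5 * np.sqrt(alpha * n * (n - 1) + 2 * gamma * n)   # solves Eq. (22)

Lp = np.diag(b, -1)                       # L_+ |O_n) = b_{n+1} |O_{n+1})
Lm = np.diag(b, +1)                       # L_- |O_n) = b_n |O_{n-1})
L, B = Lp + Lm, Lp - Lm
Kt = L @ B - B @ L                        # K~ = [L, B]
Kop = np.diag(np.arange(D, dtype=float))  # complexity operator
```

One can then check that K̃ is diagonal and equal to αK + γ, and that the commutators [K̃, L] and [K̃, B] reproduce αB and αL, respectively.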
In this case, K̃ is proportional to the identity, K̃ = γ𝟙, and the complexity algebra reduces to the Heisenberg-Weyl algebra [43].
Possible scenarios under the closure of the complexity algebra.
As already discussed, if L, B and their commutator K̃ close an algebra, then the only possible commutation relations are given by (23). This complexity algebra reduces to the Heisenberg-Weyl algebra whenever α = 0. We next show that for the cases α < 0 and α > 0, the complexity algebra reduces to the SU(2) algebra and the SL(2,R) algebra, respectively. Let us introduce the operators J_+ and J_−, defined by νJ_+ = L_+ and νJ_− = L_−, where ν is a strictly positive scaling parameter. We can then write L = ν(J_+ + J_−) and B = ν(J_+ − J_−). Let us also introduce the operator J_0, defined by J_0 = −K̃/(2ν²). By substituting these operators into (23), one can rewrite the commutation relations as

[J_+, J_−] = J_0, [J_0, J_±] = ∓(α/(2ν²)) J_±.

By choosing the scaling parameter such that 2ν² = |α|, we find that the algebra (23) is equivalent to

[J_+, J_−] = J_0, [J_0, J_±] = ∓ sgn(α) J_±.

What we have shown is that, whenever the simplicity hypothesis holds, the algebra generated by L, B and their commutator can always be reduced to either SU(2), SL(2,R) or the Heisenberg-Weyl algebra, and which of these it reduces to depends on the sign of α.

Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author upon reasonable request.

Competing interests
The authors declare no competing interests.

Code availability
The codes generated and used during the current study are available from the corresponding author on reasonable request.

Online content
Methods, additional references, and supplementary information are available.

SUPPLEMENTARY NOTE 1: EXPLICIT MODELS
In this appendix we introduce three dynamical models that, having the structure of a closed complexity algebra, display a maximal growth of complexity, in the sense that the complexity rate saturates the dispersion bound. Moreover, we show that quantum chaos, in the Hamiltonian sense, is not necessary to have maximal complexity growth. In particular, it is shown that the dynamics of simple solvable Hamiltonians can saturate our bound.
We first consider a finite-dimensional model, namely the SU(2) algebra, and then turn to the infinite-dimensional case, which allows us to comment on the famous conjecture by Parker et al. [34]. In particular, we show that our notion of maximal complexity growth is more general than the one proposed in their work, as the latter represents a special case of the former.

SU(2) algebra
Let us start with the SU(2) algebra, [J_i, J_j] = i ε_{ijk} J_k. That is, let us consider the dynamical evolution generated by the Liouvillian L = L_+ + L_− = ν(J_+ + J_−), where J_± = J_1 ± iJ_2 are the familiar SU(2) ladder operators and L_± = νJ_±. Now, the Krylov basis corresponds to the usual basis of the representation j: |O_n) = |j, n⟩ with −j ≤ n ≤ j. Following [42], let us relabel the vectors with n → n + j, so that n = 0, ..., 2j, the dimension of the Krylov space being equal to 2j + 1. By construction, the initial operator |O_0) is just the highest-weight state |j, −j⟩ and is annihilated by J_−. From the action of the ladder operators on the representation basis,

J_+|j, −j + n⟩ = √((n+1)(2j−n)) |j, −j + n + 1⟩,
J_−|j, −j + n⟩ = √(n(2j−n+1)) |j, −j + n − 1⟩,

and L_−|O_n) = b_n|O_{n−1}), we can read off the Lanczos coefficients:

b_n = ν √(n(2j − n + 1)). (31)

The Heisenberg evolution of an operator can be understood as the displacement of a generalized coherent state [42]. Indeed, the displacement operator is defined as D(ξ) = e^{ξJ_+ − ξ̄J_−}, and the generalized coherent state |ξ, j⟩ = D(ξ)|j, −j⟩ can be expanded over the spin basis |j, −j + n⟩ as follows [42]:

|ξ, j⟩ = (1 + |ζ|²)^{−j} ∑_{n=0}^{2j} ζ^n √( Γ(2j+1) / (n! Γ(2j−n+1)) ) |j, −j + n⟩,

where it is convenient to use the complex polar coordinates ζ = e^{iφ} tan θ, with ξ = θ e^{iφ}. By setting θ = νt and φ = π/2 and by using the correspondence between the spin and the Krylov basis, |O_n) = |j, −j + n⟩, one can read off the components of the operator wavefunction:

ϕ_n(t) = tan^n(νt) cos^{2j}(νt) √( Γ(2j+1) / (n! Γ(2j−n+1)) ),

from which we can compute both the mean (i.e., the Krylov complexity) and the variance of the complexity operator K:

K(t) = 2j sin²(νt), (ΔK)² = 2j sin²(νt) cos²(νt).

Since b_1 = ν√(2j), one can check that |∂_t K| = 2b_1ΔK at any time t: that is, as expected from the closure of the three-dimensional complexity algebra, the dispersion bound is identically saturated.
Given the expression of the Liouvillian in Krylov space, it is generally a difficult task to derive a corresponding Hamiltonian that generates the dynamics in the Hilbert space. In particular, the former contains less information than the latter, and therefore many different Hamiltonians can give rise to the same dynamics in Krylov space. Moreover, one has to specify not only the Hamiltonian but also the initial operator O_0. Nevertheless, we find that the evolution of the operator O_0 = σ_1 + σ_3 under the single-qubit (two-level) Hamiltonian H = νσ_3, where σ_i is the i-th Pauli matrix, is given in Krylov space by the representation j = 1 of the SU(2) algebra. More precisely, by explicitly performing the Lanczos algorithm, which in this case involves only two steps, we find the Lanczos coefficients b_1 = b_2 = ν√2, which coincide with Eq. (31) for j = 1. We note that here the dimension of the Krylov space is D = 3, which is the maximum allowed for Hilbert-space dimension d = 2, since D ≤ d² − d + 1 [36]. This is achieved thanks to the choice made for the initial operator O_0, which has non-zero components along all the Liouvillian eigenspaces. If instead one starts with the initial operator O_0 = σ_1, the Krylov dimension shrinks to D = 2, in which case the bound is trivially saturated at all times, the complexity algebra being given by the representation j = 1/2 of SU(2). From this example, we deduce that non-chaotic Hamiltonians can give rise to maximal complexity growth in Krylov space. Interestingly, the same observation was made in [40] with respect to the different notion of maximal complexity growth proposed by Parker et al. [34], proving that the exponential growth of complexity can be achieved also without chaos.
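The two Lanczos steps for this qubit example can be carried out explicitly; a minimal sketch with the infinite-temperature (Hilbert-Schmidt) inner product:

```python
import numpy as np

nu = 0.7
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
H = nu * s3

hs_norm = lambda A: np.sqrt(np.trace(A.conj().T @ A).real)
comm = lambda A, B: A @ B - B @ A

O0 = (s1 + s3) / hs_norm(s1 + s3)    # normalized initial operator
A1 = comm(H, O0)                     # L O_0
b1 = hs_norm(A1)
O1 = A1 / b1
A2 = comm(H, O1) - b1 * O0
b2 = hs_norm(A2)
O2 = A2 / b2
A3 = comm(H, O2) - b2 * O1           # vanishes: Krylov dimension D = 3
```

Both coefficients come out equal to ν√2 and the third Lanczos vector vanishes, confirming D = 3.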
As a final remark, let us note that the same analysis can be carried out for a more general two-level Hamiltonian.

Heisenberg-Weyl algebra
Let us now consider the case of infinite-dimensional Krylov space. An emblematic example in which the bound is saturated is the one in which the dynamical evolution is given in terms of the Heisenberg-Weyl (HW) algebra, [a, a†] = 1. In this case, the Liouvillian is given by L = ν(a† + a), and the generalized ladder operators L_± are just the raising and lowering operators a† and a, multiplied by the constant ν.
Here the initial operator |O_0) is represented as the vacuum state |0⟩, and the Krylov basis corresponds to the usual basis constructed by acting with a† on the vacuum: that is, the eigenbasis of the number operator a†a, which coincides with the complexity operator K. We note that in this case the Krylov space has infinite dimension. From the well-known relations a†|n⟩ = √(n+1)|n+1⟩ and a|n⟩ = √n|n−1⟩, one can see that b_n = ν√n. The time-evolved operator |O(t)) can be represented as the standard coherent state |ξ⟩ = e^{ξa† − ξ̄a}|0⟩ for ξ = iνt. Therefore, the components of the operator wavefunction are

ϕ_n(t) = e^{−(νt)²/2} (νt)^n / √(n!),

from which we can compute that

K(t) = (νt)², (ΔK)² = (νt)².

We thus conclude that, being b_1 = ν, the dispersion bound is always saturated: that is, |∂_t K| = 2b_1ΔK for all t. This model provides an example in which maximal complexity growth (in the sense of saturation of our bound) is achieved, while the conjecture by Parker et al. [34], i.e., linear growth of the Lanczos coefficients, does not hold. We therefore see that the two notions of maximal complexity growth are not equivalent.
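A quick numerical check of this example: the populations ϕ_n² form a Poisson distribution of mean (νt)², whose first two moments give K(t) and (ΔK)²; the truncation N of the infinite lattice is ours:

```python
import numpy as np
from math import factorial

nu, t = 0.9, 1.3
N = 60                                   # truncation of the Krylov lattice
n = np.arange(N)
fact = np.array([float(factorial(k)) for k in n])
p = np.exp(-(nu * t) ** 2) * (nu * t) ** (2 * n) / fact   # |phi_n(t)|^2
K = (n * p).sum()
varK = (n**2 * p).sum() - K**2
```

Since b_1 = ν, the rate ∂_t K = 2ν²t equals 2b_1ΔK = 2ν·νt, so the bound is saturated identically.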

SYK model
Finally, let us consider the celebrated prototype of quantum chaos: the SYK model of $N$ Majorana fermions $\gamma_i$ with $q$-body interactions, given by the Hamiltonian $H = i^{q/2}\sum_{1\le i_1 < \cdots < i_q \le N} J_{i_1\cdots i_q}\,\gamma_{i_1}\cdots\gamma_{i_q}$, with Gaussian random couplings $J_{i_1\cdots i_q}$. In the large-$N$ limit the model can be solved analytically and, for asymptotically large $q$, has been proven to obey the universal growth hypothesis of Parker et al. [34]: namely, the growth of the Lanczos coefficients is asymptotically linear in $n$, resulting in exponential time behaviour of the Krylov complexity. More precisely, it can be shown that, in this limit, the SYK model belongs to a family of exact solutions with Lanczos coefficients [34] $b_n = \nu\sqrt{n(n-1+\eta)}$ (45) and amplitudes $\varphi_n(t) = \sqrt{(\eta)_n/n!}\,\tanh^n(\nu t)\,\mathrm{sech}^\eta(\nu t)$, where $(\eta)_n = \eta(\eta+1)\cdots(\eta+n-1)$ is the Pochhammer symbol. From these amplitudes one can extract the complexity $K(t) = \eta\sinh^2(\nu t)$, which, as expected from the asymptotically linear behaviour of the Lanczos coefficients, shows asymptotic exponential growth. Remarkably, the linear growth of the Lanczos coefficients is a sufficient (but not necessary, as shown above) condition for the saturation of the dispersion bound on complexity, as shown in Supplementary Figure 3. This saturation is due to the presence of an underlying complexity algebra: indeed, one of the main results of our work is the proof that the closure of the complexity algebra is both a sufficient and a necessary condition for the dispersion bound to be saturated. For this particular family of solutions, the underlying algebra is that of SL(2, R) [42].
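As a numerical sanity check of these expressions, one can sum the amplitudes directly and compare with $K(t) = \eta\sinh^2(\nu t)$. The sketch below is our own check (with $\nu = 1$, $\eta = 2$, and the infinite Krylov space truncated at $N$ levels); the probabilities are built iteratively from the ratio of consecutive terms to avoid factorial overflow.

```python
import numpy as np

nu, eta, N = 1.0, 2.0, 400   # N truncates the (infinite) Krylov space

def complexity(t):
    """K(t) from |phi_n|^2 = (eta)_n/n! * tanh^{2n}(nu t) * sech^{2 eta}(nu t)."""
    x = np.tanh(nu * t) ** 2
    p = np.cosh(nu * t) ** (-2 * eta)      # |phi_0|^2
    K, total = 0.0, p
    for k in range(1, N):
        p *= (eta + k - 1) / k * x         # ratio of consecutive probabilities
        K += k * p
        total += p
    return K, total

for t in (0.3, 1.0, 2.0):
    K, total = complexity(t)
    assert abs(total - 1.0) < 1e-8         # normalisation of the amplitudes
    assert abs(K - eta * np.sinh(nu * t) ** 2) < 1e-6
print("K(t) = eta * sinh^2(nu t) verified")
```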

SUPPLEMENTARY NOTE 2: EQUIVALENCE BETWEEN THE SATURATION OF THE DISPERSION BOUND AND THE SIMPLICITY HYPOTHESIS
In this Supplementary Note, we show that the saturation of the dispersion bound is equivalent to the simplicity hypothesis being satisfied. When we say that the complexity algebra is closed, we simply mean that the simplicity hypothesis is satisfied.
The right-hand side of the dispersion bound equals twice the product of the norms of the vectors $(\mathcal{K} - K_t)|O(t))$ and $(\mathcal{L} - L_t)|O(t))$, while the left-hand side is bounded by it through the Cauchy-Schwarz inequality. From this, it is clear that the bound is saturated if and only if the two vectors are linearly dependent. Since $L_t = 0$ at all times, the bound is saturated if and only if the vectors $(\mathcal{K} - K)|O(t))$ and $\mathcal{L}|O(t))$ are linearly dependent, where we have chosen to suppress the time dependence of $K$. What follows is a series of steps proving that the complexity algebra being closed is both necessary and sufficient for the vectors $(\mathcal{K} - K)|O(t))$ and $\mathcal{L}|O(t))$ to be linearly dependent; said differently, that the complexity algebra being closed is equivalent to the dispersion bound being saturated. When carrying out the proofs, we will use the convention that $b_0 = 0$ and, for finite Krylov dimension $D$, we will also introduce $b_D = 0$. For any superoperator $\mathcal{M}$ we will write $\mathcal{M}_{n,m} \equiv (O_n|\mathcal{M}O_m)$, where the $\mathcal{M}_{n,m}$ can be thought of as the entries of a matrix representing $\mathcal{M}$.
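As a sketch consistent with the definitions above (using $\partial_t K = i\,(O(t)|[\mathcal{K},\mathcal{L}]|O(t))$ and the fact that $\Delta\mathcal{L} = \|(\mathcal{L}-L_t)|O(t))\| = b_1$ is conserved), the chain of relations reads

```latex
\begin{align}
|\partial_t K| &= \big|(O(t)|\,[\mathcal{K},\mathcal{L}]\,|O(t))\big|
               = \big|(O(t)|\,[\mathcal{K}-K_t,\,\mathcal{L}-L_t]\,|O(t))\big| \nonumber\\
              &\le 2\,\big\|(\mathcal{K}-K_t)|O(t))\big\|\,\big\|(\mathcal{L}-L_t)|O(t))\big\|
               = 2\,\Delta\mathcal{K}\,\Delta\mathcal{L} = 2 b_1\,\Delta\mathcal{K},
\end{align}
```

with equality precisely when the two vectors in the Cauchy-Schwarz step are proportional, which is the linear-dependence condition analysed below.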

Proving necessity
Linear dependence between $(\mathcal{K} - K)|O(t))$ and $\mathcal{L}|O(t))$ is equivalent to linear dependence between $e^{-it\mathcal{L}}(\mathcal{K} - K)e^{it\mathcal{L}}|O)$ and $|O_1)$. To simplify the notation, we will write $\mathcal{L}^n$ for the commutator $[\mathcal{L}, \cdot]$ applied $n$ times to $\mathcal{K}$. Taylor expanding the vector $e^{-it\mathcal{L}}(\mathcal{K} - K)e^{it\mathcal{L}}|O)$ at $t = 0$, we have that $e^{-it\mathcal{L}}(\mathcal{K} - K)e^{it\mathcal{L}}|O) = \sum_{n=0}^{\infty}\frac{(-it)^n}{n!}\mathcal{L}^n|O) - K|O)$. It is clear that $\mathcal{L}^1 = \mathcal{L}_- - \mathcal{L}_+$, while $\mathcal{L}^2 = 2[\mathcal{L}_+, \mathcal{L}_-]$ is diagonal in the Krylov basis with eigenvalues $(\mathcal{L}^2)_{n,n} = -2(b_{n+1}^2 - b_n^2)$. Applying $[\mathcal{L}, \cdot]$ once more, one finds that $\mathcal{L}^3$ consists only of a subdiagonal and a superdiagonal, with values given by $(\mathcal{L}^3)_{n+1,n} = -(\mathcal{L}^3)_{n,n+1} = 2b_{n+1}f(n)$, where $f(n) \equiv b_{n+2}^2 - 2b_{n+1}^2 + b_n^2$. By the $k$-diagonal of a matrix, we mean a diagonal of the matrix running in the top-left to bottom-right direction, where $k$ is an offset from the main diagonal; we use the convention that $k = 0$ is the main diagonal, while $k = 1$ and $k = -1$ are the superdiagonal and the subdiagonal respectively, and so on. From the form of $\mathcal{L}^3$, it should be clear that the $k$-diagonals of $\mathcal{L}^{n+3}$ for which $|k| > 1 + n$ must consist only of zero-valued entries. Consequently, we must have that $(\mathcal{L}^{n+4})_{n+m+2,m} = [\mathcal{L}, \mathcal{L}^{n+3}]_{n+m+2,m}$, which more explicitly can be written as the recursion relation $(\mathcal{L}^{n+4})_{n+m+2,m} = b_{n+m+2}(\mathcal{L}^{n+3})_{n+m+1,m} - b_{m+1}(\mathcal{L}^{n+3})_{n+m+2,m+1}$. To lighten the notation, we will write $L(n,m) \equiv (\mathcal{L}^{n+4})_{n+m+2,m}$, and the recursion relation can then be written as $L(n,m) = b_{n+m+2}L(n-1,m) - b_{m+1}L(n-1,m+1)$ for $n > 0$.
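These matrix relations are straightforward to verify numerically. The sketch below is our own check with randomly chosen Lanczos coefficients: it builds the tridiagonal Liouvillian and the complexity operator in the Krylov basis, applies $[\mathcal{L},\cdot]$ directly, and confirms both the diagonal of $\mathcal{L}^2$ and the recursion relation for $L(n,m)$.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8
# b_0 = 0 and b_D = 0 by convention; b_1 .. b_{D-1} are random
b = np.concatenate(([0.0], rng.uniform(0.5, 2.0, D - 1), [0.0]))

L = np.diag(b[1:D], 1) + np.diag(b[1:D], -1)   # tridiagonal Liouvillian
K = np.diag(np.arange(D, dtype=float))         # complexity operator

def ad_power(j):
    """The commutator [L, .] applied j times to K."""
    M = K.copy()
    for _ in range(j):
        M = L @ M - M @ L
    return M

# Diagonal of L^2:  (L^2)_{n,n} = -2 (b_{n+1}^2 - b_n^2)
assert np.allclose(np.diag(ad_power(2)), -2 * (b[1:D + 1] ** 2 - b[:D] ** 2))

# Recursion  L(n, m) = b_{n+m+2} L(n-1, m) - b_{m+1} L(n-1, m+1),
# with L(n, m) = (L^{n+4})_{n+m+2, m}
for n in range(1, 3):
    for m in range(D - n - 2):
        lhs = ad_power(n + 4)[n + m + 2, m]
        rhs = b[n + m + 2] * ad_power(n + 3)[n + m + 1, m] \
            - b[m + 1] * ad_power(n + 3)[n + m + 2, m + 1]
        assert abs(lhs - rhs) < 1e-9
print("recursion relation verified")
```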
We now observe that the following proposition must be true: Proposition 1. The condition $L(n,0) = 0$ $\forall\, 0 \le n \le D-3$ is a necessary condition for the vector $e^{-it\mathcal{L}}(\mathcal{K} - K)e^{it\mathcal{L}}|O)$ to be linearly dependent on $|O_1)$, and therefore a necessary condition for the dispersion bound to be saturated.
By applying $[\mathcal{L}, \cdot]$ to $\mathcal{L}^3$, one finds that $L(0,m) = 2b_{m+1}b_{m+2}\,g(m)$, where we have defined $g(m) = f(m) - f(m+1)$. We will show that the condition $L(n,0) = 0$ $\forall\, 0 \le n \le D-3$ is equivalent to the complexity algebra being closed. Together with Proposition 1, this then proves that the algebra being closed is a necessary condition for the saturation of the dispersion bound. In order to prove this, however, we will first prove another proposition.
Proof. We proceed by mathematical induction. The base case follows directly from the recursion relation, and for the inductive step we make use of the binomial identity $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$.

Proof. We can think of the functions $g(n)$ as spanning a subspace of $\mathbb{R}^{D-2}$. It should then be clear from Corollary 3 that the functions $L(n,0)$ have the same span. This means that we can express each $g(n)$ as a linear combination of the functions $L(n,0)$, and vice versa. Setting each $L(n,0)$ (respectively, each $g(n)$) equal to zero then results in $g(n) = 0$ (respectively, $L(n,0) = 0$) for all $0 \le n \le D-3$.
We are now ready to prove the following proposition: Proposition 5. The saturation of the dispersion bound implies that the complexity algebra is closed.
Proof. We have that $\mathcal{B} \equiv -\mathcal{L}^1$ and $\tilde{\mathcal{K}} \equiv -\mathcal{L}^2$, and the complexity algebra is closed by definition if and only if $\mathcal{L}^3 = -[\mathcal{L}, \tilde{\mathcal{K}}]$ can be written as a linear combination of $\mathcal{L}$, $\mathcal{B}$ and $\tilde{\mathcal{K}}$. It should be clear that this is possible if and only if $f(n) = C$ $\forall\, 0 \le n \le D-2$, where $C \in \mathbb{R}$. This is clearly equivalent to the condition $g(n) = 0$ $\forall\, 0 \le n \le D-3$, which, together with Propositions 1 and 4, is implied by the saturation of the dispersion bound.
The right-hand side of the equivalence sign in (51) is equivalent to $\tilde{\mathcal{K}} = \alpha\mathcal{K} + \gamma$. Consequently, the closed complexity algebra is entirely determined by the commutation relations $[\mathcal{K}, \mathcal{L}] = \mathcal{B}$, $[\mathcal{K}, \mathcal{B}] = \mathcal{L}$ and $[\mathcal{L}, \mathcal{B}] = \tilde{\mathcal{K}} = \alpha\mathcal{K} + \gamma$. Since $\mathcal{B}|O) = b_1|O_1)$ and $\mathcal{K}|O) = 0$, it follows from the definition of Krylov complexity that the first two terms in the Taylor expansion of $K(t)$ must vanish. When $\alpha = 0$, one has that $\mathcal{L}^1 = -\mathcal{B}$, $\mathcal{L}^2 = -\gamma$ and $\mathcal{L}^n = 0$ for $n > 2$. Substituting these into the Taylor expansion, one finds that $K(t) = \frac{\gamma}{2}t^2$. We thus have that the algebra being closed is a sufficient requirement for saturating the dispersion bound.
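For the case $\alpha = 0$, the resummation can be written out explicitly (a sketch using $\mathcal{L}^1 = -\mathcal{B}$ and $\mathcal{L}^2 = -\gamma$ as above):

```latex
\begin{equation}
e^{-it\mathcal{L}}\,\mathcal{K}\,e^{it\mathcal{L}}
= \mathcal{K} + (-it)\,\mathcal{L}^1 + \frac{(-it)^2}{2}\,\mathcal{L}^2
= \mathcal{K} + it\,\mathcal{B} + \frac{\gamma}{2}t^2,
\end{equation}
```

so that $K(t) = (O|\mathcal{K}|O) + it\,(O|\mathcal{B}|O) + \frac{\gamma}{2}t^2 = \frac{\gamma}{2}t^2$, since $\mathcal{K}|O) = 0$ and $(O|\mathcal{B}|O) = b_1(O|O_1) = 0$.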
The proofs of Propositions 5 and 7 lead to the conclusion that the saturation of the dispersion bound is equivalent to the complexity algebra being closed.
Remark 8. We would like to point out that equations (54) and (56) show that the general solution for the Krylov complexity, whenever the dispersion bound is saturated, is given by $K(t) = -\frac{2\gamma}{\alpha}\sin^2\!\left(\frac{\sqrt{-\alpha}\,t}{2}\right)$ when $\alpha < 0$, $K(t) = \frac{\gamma}{2}t^2$ when $\alpha = 0$, and $K(t) = \frac{2\gamma}{\alpha}\sinh^2\!\left(\frac{\sqrt{\alpha}\,t}{2}\right)$ when $\alpha > 0$. These three scenarios correspond to the three algebraic models discussed above: SU(2), HW and SL(2, R), respectively.

Remark 9. The requirement that $b_n \ge 0$ for all $n$ implies that $\gamma \ge 0$ and $\alpha \ge -\frac{2\gamma}{n-1}$ for all $n$. In the infinite-dimensional case we see that this implies $\alpha \ge 0$. In the finite-dimensional case, the condition $b_D = 0$ implies $\alpha = -\frac{2\gamma}{D-1}$.
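These closed forms can be checked by direct exponentiation of the Liouvillian in the Krylov basis. The sketch below is our own check for the finite-dimensional case $\alpha < 0$, using the Lanczos coefficients $b_n^2 = \frac{n}{4}\left(2\gamma + \alpha(n-1)\right)$ implied by the closed algebra $\tilde{\mathcal{K}} = \alpha\mathcal{K} + \gamma$; it compares the numerically evolved complexity with $K(t) = -\frac{2\gamma}{\alpha}\sin^2(\sqrt{-\alpha}\,t/2)$.

```python
import numpy as np

# Closed-algebra Lanczos coefficients: b_n^2 = (n/4) * (2*gamma + alpha*(n-1)).
# Finite Krylov dimension D requires b_D = 0, i.e. alpha = -2*gamma/(D-1) < 0.
D, gamma = 5, 2.0
alpha = -2 * gamma / (D - 1)
b = np.sqrt([n * (2 * gamma + alpha * (n - 1)) / 4 for n in range(1, D)])

L = np.diag(b, 1) + np.diag(b, -1)       # tridiagonal Liouvillian
w, V = np.linalg.eigh(L)                 # spectral decomposition for exp(i L t)
phi0 = np.zeros(D); phi0[0] = 1.0        # initial operator |O_0)

def K_numeric(t):
    """K(t) = sum_n n |phi_n(t)|^2 with |O(t)) = exp(i L t) |O_0)."""
    phi = V @ (np.exp(1j * w * t) * (V.T @ phi0))
    return float((np.arange(D) * np.abs(phi) ** 2).sum())

def K_closed(t):
    return -2 * gamma / alpha * np.sin(np.sqrt(-alpha) * t / 2) ** 2

for t in (0.4, 1.3, 2.5):
    assert abs(K_numeric(t) - K_closed(t)) < 1e-10
print("alpha < 0 closed form verified")
```

The $\alpha = 0$ (HW) and $\alpha > 0$ (SL(2, R)) cases can be checked in the same way, with the corresponding sign of $\alpha$ and a suitable truncation of the then infinite Krylov space.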