Introduction

One of the great unanswered technological questions is whether Noisy Intermediate Scale Quantum (NISQ) computers will yield a quantum advantage for tasks of practical interest1. At the heart of this discussion are Variational Quantum Algorithms (VQAs), which are believed to be the best hope for near-term quantum advantage2,3,4. Such algorithms leverage classical optimizers to train the parameters in a quantum circuit, while employing a quantum device to efficiently estimate an application-specific cost function or its gradient. By keeping the quantum circuit depth relatively short, VQAs mitigate hardware noise and may enable near-term applications including electronic structure5,6,7,8, dynamics9,10,11,12, optimization13,14,15,16, linear systems17,18, metrology19,20, factoring21, compiling22,23,24, and others25,26,27,28,29,30.

The main open question for VQAs is their scalability to large problem sizes. While numerical heuristics for small and intermediate problem sizes are the norm for VQAs, analytical scaling results are rare in this field. Noteworthy exceptions are some recent studies of the scaling of the gradient in VQAs with the number of qubits n31,32,33,34,35,36,37,38,39. For example, it was proven that the gradient vanishes exponentially in n for randomly initialized, deep Hardware Efficient ansatzes31,32 and dissipative quantum neural networks33, and also at shallow depth when global cost functions are employed34. This vanishing gradient phenomenon is now referred to as barren plateaus in the training landscape. Barren plateaus imply that, in order to resolve gradients to a fixed precision, on average an exponential number of shots needs to be invested. This places an exponential resource burden on the training process of VQAs. Further, such effects are not avoided by adopting optimizers that use information about higher-order derivatives38 or gradient-free methods39. Fortunately, investigations into barren plateaus have spawned several promising strategies to avoid them, including local cost functions34,40, parameter correlation37, pre-training41, and layer-by-layer training42,43. Such strategies give hope that VQAs may avoid the exponential scaling that would otherwise result from the exponential precision requirements of navigating a barren plateau.

However, these works do not consider quantum hardware noise, and very little is known about the scalability of VQAs in the presence of noise. One of the main selling points of VQAs is noise mitigation, and indeed VQAs have shown evidence of optimal parameter resilience to noise in the sense that the global minimum of the landscape may be unaffected by noise6,23. While some analysis has been done44,45,46, an important open question, which has not yet been addressed, is how noise affects the asymptotic scaling of VQAs. More specifically, one can ask how noise affects the training process. If the effect of noise on trainability is not severe, and the optimal parameters can be found, then VQAs may be useful even in the presence of high decoherence in one of two ways. First, the end goal of certain algorithms such as the Quantum Approximate Optimization Algorithm (QAOA)47 is to extract an optimized set of parameters, rather than the optimal cost value. Second, error mitigation can be used in conjunction with VQAs that display optimal parameter resilience. Intuitively, incoherent noise is expected to reduce the magnitude of the gradient and hence hinder trainability, and preliminary numerical evidence of this has been seen48,49, although the scaling of this effect has not been studied.

In this work, we analytically study the scaling of the gradient for VQAs as a function of n, the circuit depth L, and a noise parameter q < 1. We consider a general class of local noise models that includes depolarizing noise and certain kinds of Pauli noise. Furthermore, we investigate a general, abstract ansatz that allows us to encompass many of the important ansatzes in the literature, hence allowing us to make a general statement about VQAs. This includes the Quantum Alternating Operator Ansatz (QAOA), which is used for solving combinatorial optimization problems13,14,15,16, and the Unitary Coupled Cluster (UCC) ansatz, which is used in the Variational Quantum Eigensolver (VQE) to solve chemistry problems50,51,52. Our analysis also applies to the Hardware Efficient Ansatz and the Hamiltonian Variational Ansatz (HVA), which are employed for various applications53,54,55,56,57. Our results also generalize to settings that allow for multiple input states or training data, as in machine learning applications, often called quantum neural networks58,59,60,61,62.

Our main result (Theorem 1) is an upper bound on the magnitude of the gradient that decays exponentially in L, namely as \(2^{-\kappa}\) with \(\kappa=-L\log_2(q)\). Hence, we find that the gradient vanishes exponentially in the circuit depth. Moreover, it is typical to consider L scaling as poly(n) (e.g., in the UCC ansatz52), for which our main result implies an exponential decay of the gradient in n. We refer to this as a Noise-Induced Barren Plateau (NIBP). We remark that NIBPs can be viewed as concomitant with the concentration of the cost landscape around the value of the cost for the maximally mixed state, and we make this precise in Lemma 1. See Fig. 1 for a schematic diagram of the NIBP phenomenon.

Fig. 1: Schematic diagram of the Noise-Induced Barren Plateau (NIBP) phenomenon.

For various applications such as chemistry and optimization, increasing the problem size often requires one to increase the depth L of the variational ansatz. We show that, in the presence of local noise, the gradient vanishes exponentially in L and hence exponentially in the number of qubits n when L scales linearly in n. This can be seen in the plots on the right, which show the cost function landscapes for a simple variational problem with local noise.

To be clear, any variational algorithm with a NIBP will exhibit exponential scaling. In this sense, NIBPs destroy quantum speedup, as the standard goal of quantum algorithms is to avoid the typical exponential scaling of classical algorithms. NIBPs are conceptually distinct from the noise-free barren plateaus of refs. 31,32,33,34,35,36. Indeed, strategies to avoid noise-free barren plateaus34,37,40,41,42,43 do not appear to solve the NIBP issue.

The obvious strategy to address NIBPs is to reduce circuit complexity, or more precisely, to reduce the circuit depth. Hence, our work provides quantitative guidance for how small L needs to be to potentially avoid NIBPs.

In what follows, we present our general framework followed by our main result. We also present two extensions of our main result, one involving correlated ansatz parameters and one allowing for measurement noise. The latter indicates that global cost functions exacerbate the NIBP issue. In addition, we provide numerical heuristics that illustrate our main result for MaxCut optimization with the QAOA, and an implementation of the HVA on superconducting hardware, both showing that NIBPs significantly impact these applications.

Results

General framework

In this work we analyze a general class of parameterized ansatzes U(θ) that can be expressed as a product of L unitaries sequentially applied by layers

$$U(\boldsymbol{\theta})=U_L(\boldsymbol{\theta}_L)\cdots U_2(\boldsymbol{\theta}_2)\,U_1(\boldsymbol{\theta}_1)\ .$$
(1)

Here \(\boldsymbol{\theta}=\{\boldsymbol{\theta}_l\}_{l=1}^{L}\) is a set of vectors of continuous parameters that are optimized to minimize a cost function C that can be expressed as the expectation value of an operator O:

$$C=\mathrm{Tr}[O\,U(\boldsymbol{\theta})\rho\,U^{\dagger}(\boldsymbol{\theta})]\ .$$
(2)

As shown in Fig. 2, ρ is an n-qubit input state. Without loss of generality we assume that each Ul(θl) is given by

$$U_l(\boldsymbol{\theta}_l)=\prod_{m}e^{-i\theta_{lm}H_{lm}}W_{lm}\ ,$$
(3)

where Hlm are Hermitian operators, θl  = {θlm} are continuous parameters, and Wlm denote unparametrized gates. We expand Hlm and O in the Pauli basis as

$$H_{lm}=\boldsymbol{\eta}_{lm}\cdot\boldsymbol{\sigma}_n=\sum_{i}\eta_{lm}^{i}\sigma_n^{i}\ ,\quad O=\boldsymbol{\omega}\cdot\boldsymbol{\sigma}_n=\sum_{i}\omega^{i}\sigma_n^{i}\ ,$$
(4)

where \(\sigma_n^{i}\in\{\mathbb{1},X,Y,Z\}^{\otimes n}\) are Pauli strings, and \(\boldsymbol{\eta}_{lm}\) and \(\boldsymbol{\omega}\) are real-valued vectors that specify the terms present in the expansion. Defining \(N_{lm}=|\boldsymbol{\eta}_{lm}|\) and \(N_O=|\boldsymbol{\omega}|\) as the numbers of non-zero elements, i.e., the numbers of terms in the summations in Eq. (4), we say that Hlm and O admit an efficient Pauli decomposition if \(N_{lm},N_O\in\mathcal{O}(\mathrm{poly}(n))\).
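
As an illustration of Eq. (4), the following Python sketch (our own toy construction, not code used in this work; all function names are ours) expands a small operator in the Pauli-string basis and counts the non-zero terms \(N_O\):

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(label):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def pauli_decompose(op, n, tol=1e-12):
    """Return {label: omega^i} such that op = sum_i omega^i sigma_n^i, as in Eq. (4)."""
    coeffs = {}
    for chars in itertools.product("IXYZ", repeat=n):
        label = "".join(chars)
        w = np.trace(pauli_string(label) @ op).real / 2**n
        if abs(w) > tol:
            coeffs[label] = w
    return coeffs

# Toy observable O = Z (x) Z on two qubits; N_O is the number of non-zero terms.
omega = pauli_decompose(pauli_string("ZZ"), n=2)
print(omega, "N_O =", len(omega))   # {'ZZ': 1.0} N_O = 1
```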

Fig. 2: Setting for our analysis.

An n-qubit input state ρ is sent through a variational ansatz U(θ) composed of L unitary layers Ul(θl) sequentially acting according to Eq. (1). Here, \(\mathcal{U}_l\) denotes the quantum channel that implements the unitary Ul(θl). The parameters in the ansatz \(\boldsymbol{\theta}=\{\boldsymbol{\theta}_l\}_{l=1}^{L}\) are trained to minimize a cost function that is expressed as the expectation value of an operator O as in Eq. (2). We consider a noise model where local Pauli noise channels \(\mathcal{N}_j\) act on each qubit j before and after each unitary.

We now briefly discuss how the QAOA, UCC, and Hardware Efficient ansatzes fit into this general framework. We refer the reader to the Methods for additional details. In QAOA one sequentially alternates the action of two unitaries as

$$U(\boldsymbol{\gamma},\boldsymbol{\beta})=e^{-i\beta_p H_M}e^{-i\gamma_p H_P}\cdots e^{-i\beta_1 H_M}e^{-i\gamma_1 H_P}\ ,$$
(5)

where HP and HM are the so-called problem and mixer Hamiltonian, respectively. We define NP (NM) as the number of terms in the Pauli decomposition of HP (HM). On the other hand, Hardware Efficient ansatzes naturally fit into Eqs. (1)–(3) as they are usually composed of fixed gates (e.g, controlled NOTs), and parametrized gates (e.g., single qubit rotations). Finally, as detailed in the Methods, the UCC ansatz can be expressed as

$$U(\boldsymbol{\theta})=\prod_{lm}U_{lm}(\theta_{lm})=\prod_{lm}e^{i\theta_{lm}\sum_{i}\mu_{lm}^{i}\sigma_n^{i}},$$
(6)

where \(\mu_{lm}^{i}\in\{0,\pm 1\}\), and where θlm are the coupled cluster amplitudes. Moreover, we denote \(\widehat{N}_{lm}=|\boldsymbol{\mu}_{lm}|\) as the number of non-zero elements in \(\sum_{i}\mu_{lm}^{i}\sigma_n^{i}\).

As shown in Fig. 2, we consider a noise model where local Pauli noise channels \(\mathcal{N}_j\) act on each qubit j before and after each unitary Ul(θl). The action of \(\mathcal{N}_j\) on a local Pauli operator \(\sigma\in\{X,Y,Z\}\) can be expressed as

$$\mathcal{N}_j(\sigma)=q_{\sigma}\,\sigma\ ,$$
(7)

where −1 < qX, qY, qZ < 1. Here, we characterize the noise strength with a single parameter \(q=\max\{|q_X|,|q_Y|,|q_Z|\}\). Let \(\mathcal{U}_l\) denote the channel that implements the unitary Ul(θl) and let \(\mathcal{N}=\mathcal{N}_1\otimes\cdots\otimes\mathcal{N}_n\) denote the n-qubit noise channel. Then, the noisy cost function is given by

$$\widetilde{C}=\mathrm{Tr}\left[O\,\left(\mathcal{N}\circ\mathcal{U}_L\circ\cdots\circ\mathcal{N}\circ\mathcal{U}_1\circ\mathcal{N}\right)(\rho)\right]\ .$$
(8)
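
To make the setting of Eqs. (1)–(3) and (7)–(8) concrete, here is a minimal density-matrix sketch (our own toy example, not the simulation code used for the results below). It assumes a hardware-efficient-style layer of single-qubit X rotations followed by a CZ chain, and local depolarizing noise with \(q_X=q_Y=q_Z=q\); all names and parameter values are illustrative:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n, L, q = 3, 4, 0.95             # qubits, layers, noise parameter (q_X = q_Y = q_Z = q)
dim = 2**n
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron_all = lambda ops: reduce(np.kron, ops)

def local_op(P, j):
    ops = [I2] * n
    ops[j] = P
    return kron_all(ops)

def noise_layer(rho, q):
    """Local Pauli noise of Eq. (7) on every qubit: each X_j, Y_j, Z_j component shrinks by q."""
    w = (1 - q) / 4
    for j in range(n):
        Xj, Yj, Zj = local_op(X, j), local_op(Y, j), local_op(Z, j)
        rho = (1 - 3 * w) * rho + w * (Xj @ rho @ Xj + Yj @ rho @ Yj + Zj @ rho @ Zj)
    return rho

def cz_chain():
    """Unparametrized entangling gates W_l: CZs between neighbouring qubits."""
    diag = np.ones(dim, dtype=complex)
    for idx in range(dim):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        for j in range(n - 1):
            if bits[j] and bits[j + 1]:
                diag[idx] *= -1
    return np.diag(diag)

def layer_unitary(thetas):
    """A toy layer U_l(theta_l): single-qubit rotations e^{-i theta X_j}, then a CZ chain."""
    U = cz_chain()
    for j in range(n):
        U = U @ expm(-1j * thetas[j] * local_op(X, j))
    return U

def noisy_cost(theta, q, O):
    """The noisy cost of Eq. (8): noise acts before and after every layer."""
    rho = np.zeros((dim, dim), dtype=complex)
    rho[0, 0] = 1.0                                    # input state |0...0><0...0|
    rho = noise_layer(rho, q)
    for l in range(L):
        Ul = layer_unitary(theta[l])
        rho = noise_layer(Ul @ rho @ Ul.conj().T, q)
    return np.real(np.trace(O @ rho))

O = local_op(Z, 0) @ local_op(Z, 1)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=(L, n))
print("noisy C:", noisy_cost(theta, q, O), "  noise-free C:", noisy_cost(theta, 1.0, O))
```

Setting q = 1 switches the noise layers off and recovers the noise-free cost of Eq. (2).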

General analytical results

There are some VQAs, such as the VQE5 for chemistry and other physical systems, where it is important to accurately characterize the value of the cost function itself. We provide an important result below in Lemma 1 that quantitatively bounds the cost function itself, and we envision that this bound will be especially useful in the context of VQE. On the other hand, there are other VQAs, such as those for optimization13,14,15,16, compiling22,23,24, and linear systems17,18, where the key goal is to learn the optimal parameters and the precise value of the cost function is either not important or can be computed classically after learning the parameters. In this case, one is primarily concerned with trainability, and hence the gradient is a key quantity of interest. These applications motivate our main result in Theorem 1, which bounds the magnitude of the gradient. We remark that trainability is of course also important for VQE, and hence Theorem 1 is also of interest for this application.

With this motivation in mind, we now present our main results. We first present our bound on the cost function, since one can view this as a phenomenon that naturally accompanies our main theorem. Namely, in the following lemma, we show that the noisy cost function concentrates around the corresponding value for the maximally mixed state.

Lemma 1

(Concentration of the cost function). Consider an L-layered ansatz of the form in Eq. (1). Suppose that local Pauli noise of the form of Eq. (7) with noise strength q acts before and after each layer as in Fig. 2. Then, for a cost function \(\widetilde{C}\) of the form in Eq. (8), the following bound holds

$$\left|\widetilde{C}-\frac{1}{2^n}\mathrm{Tr}[O]\right|\ \leqslant\ G(n)\ \left\Vert\rho-\frac{\mathbb{1}}{2^n}\right\Vert_1\ ,$$
(9)

where

$$G(n)=N_O\,\Vert\boldsymbol{\omega}\Vert_{\infty}\ q^{2L+2}\ .$$
(10)

Here \(\Vert\cdot\Vert_{\infty}\) is the infinity norm, \(\Vert\cdot\Vert_1\) is the trace norm, \(\boldsymbol{\omega}\) is defined in Eq. (4), and \(N_O=|\boldsymbol{\omega}|\) is the number of non-zero elements in the Pauli decomposition of O.

This lemma implies that the cost landscape exponentially concentrates on the value \(\mathrm{Tr}[O]/2^n\) for large n whenever the number of layers L scales linearly with the number of qubits. While this lemma has important applications on its own, particularly for the VQE, it also provides intuition for the NIBP phenomenon, which we now state.

Let \(\partial_{lm}\widetilde{C}=\partial\widetilde{C}/\partial\theta_{lm}\) denote the partial derivative of the noisy cost function with respect to the m-th parameter that appears in the l-th layer of the ansatz, as in Eq. (3). For our main technical result, we upper bound \(|\partial_{lm}\widetilde{C}|\) as a function of L and n.

Theorem 1

(Upper bound on the partial derivative). Consider an L-layered ansatz as defined in Eq. (1). Let θlm denote the trainable parameter corresponding to the Hamiltonian Hlm in the unitary \(U_l(\boldsymbol{\theta}_l)\) appearing in the ansatz. Suppose that local Pauli noise of the form in Eq. (7) with noise parameter q acts before and after each layer as in Fig. 2. Then the following bound holds for the partial derivative of the noisy cost function

$$|\partial_{lm}\widetilde{C}|\,\leqslant\,F(n),$$
(11)

where

$$F(n)=\sqrt{8\ln 2}\ N_O\,\Vert H_{lm}\Vert_{\infty}\,\Vert\boldsymbol{\omega}\Vert_{\infty}\,n^{1/2}\,q^{L+1}\ ,$$
(12)

and \(\boldsymbol{\omega}\) is defined in Eq. (4), with number of non-zero elements NO.

Let us now consider the asymptotic scaling of the function F(n) in Eq. (12). Under standard assumptions, namely that O in Eq. (4) admits an efficient Pauli decomposition and that Hlm has bounded eigenvalues, we now show that F(n) decays exponentially in n whenever L grows at least linearly in n.

Corollary 1

(Noise-induced barren plateaus). Let \(N_{lm},N_O\in\mathcal{O}(\mathrm{poly}(n))\) and let \(\eta_{lm}^{i},\omega^{j}\in\mathcal{O}(\mathrm{poly}(n))\) for all i, j. Then the upper bound F(n) in Eq. (12) vanishes exponentially in n as

$$F(n)\in\mathcal{O}(2^{-\alpha n})\ ,$$
(13)

for some positive constant α if we have

$$L\in\Omega(n)\ .$$
(14)

The asymptotic scaling in Eq. (13) is independent of l and m, i.e., the scaling is blind to the layer, or the parameter within the layer, for which the derivative is taken. This corollary implies that when Eq. (14) holds, i.e., L grows at least linearly in n, the partial derivative \(|\partial_{lm}\widetilde{C}|\) vanishes exponentially in n across the entire cost landscape. In other words, one observes a Noise-Induced Barren Plateau (NIBP). We note that Corollary 1 holds for any q < 1. That is, provided Eq. (14) is satisfied, NIBPs occur regardless of the noise strength; the noise strength only sets the severity of the exponential scaling (i.e., the constant α).
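
For a concrete sense of this scaling, the following snippet evaluates the upper bounds F(n) of Eq. (12) and G(n) of Eq. (10) for a linear-depth ansatz L = n; the norm factors and the noise level q are illustrative placeholders rather than values from any particular device:

```python
import numpy as np

def F(n, L, q, N_O=1.0, H_norm=1.0, w_norm=1.0):
    """Upper bound of Eq. (12) on the partial derivative (norm factors set to toy values)."""
    return np.sqrt(8 * np.log(2)) * N_O * H_norm * w_norm * np.sqrt(n) * q ** (L + 1)

def G(n, L, q, N_O=1.0, w_norm=1.0):
    """Upper bound of Eq. (10) on the concentration of the cost."""
    return N_O * w_norm * q ** (2 * L + 2)

q = 0.95                       # even mild noise gives exponential decay once L ~ n
for n in [10, 20, 50, 100, 200]:
    L = n                      # linear-depth ansatz, i.e. L in Omega(n)
    print(f"n={n:4d}   F={F(n, L, q):.3e}   G={G(n, L, q):.3e}")
```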

In addition, Corollary 1 implies that NIBPs are conceptually different from noise-free barren plateaus. First, NIBPs are independent of the parameter initialization strategy and of the locality of the cost function. Second, NIBPs exhibit exponential decay of the gradient itself, not just of the variance of the gradient, which is the hallmark of noise-free barren plateaus. Noise-free barren plateaus allow the global minimum to sit inside a deep, narrow valley in the landscape34, whereas NIBPs flatten the entire landscape.

One of the strategies to avoid the noise-free barren plateaus is to correlate parameters, i.e., to make a subset of the parameters equal to each other37. We generalize Theorem 1 in the following remark to accommodate such a setting, consequently showing that such correlated or degenerate parameters do not help in avoiding NIBPs. In this setting, the result we obtain in Eq. (16) below is essentially identical to that in Eq. (12) except with an additional factor quantifying the amount of degeneracy.

Remark 1

(Degenerate parameters). Consider the ansatz defined in Eqs. (1) and (3). Suppose there is a subset Gst of the set {θlm} in this ansatz such that Gst consists of g parameters that are degenerate:

$$G_{st}=\left\{\theta_{lm}\ |\ \theta_{lm}=\theta_{st}\right\}\ .$$
(15)

Here, θst denotes the parameter in Gst for which \(N_{lm}\Vert\boldsymbol{\eta}_{lm}\Vert_{\infty}\) takes the largest value in the set. (θst can also be thought of as a reference parameter to which all other parameters are set equal in value.) Then the partial derivative of the noisy cost with respect to θst is bounded as

$$|\partial_{st}\widetilde{C}|\,\leqslant\,\sqrt{8\ln 2}\ g\,N_O\,\Vert H_{lm}\Vert_{\infty}\,\Vert\boldsymbol{\omega}\Vert_{\infty}\,n^{1/2}\,q^{L+1},$$
(16)

at all points in the cost landscape.

Remark 1 is especially important in the context of the QAOA and the UCC ansatz, as discussed below. We note that, in the general case, a unitary of the form of Eq. (3) cannot be implemented as a single gate on a physical device. In practice one needs to compile the unitary into a sequence of native gates. Moreover, Hamiltonians with non-commuting terms are usually approximated with techniques such as Trotterization. This compilation overhead potentially leads to a sequence of gates that grows with n. Remark 1 enables us to account for such scenarios, and we elaborate on its relevance to specific applications in the next subsection.

In reality, noise on quantum hardware can be non-local. For instance in certain systems one can have cross-talk errors or coherent errors. We address such extensions to our noise model in the following remark.

Remark 2

(Extensions to the noise model). Consider a modification to each layer of noise \(\mathcal{N}\) in Eq. (8) to include additional k-local Pauli noise and correlated coherent (unitary) noise across multiple qubits. Under such extensions to the noise model, we obtain the same scaling results as those obtained in Lemma 1 and Theorem 1. We discuss this in more detail in Supplementary Note 5.

Finally, we present an extension of our main result to the case of measurement noise. Consider a model of measurement noise where each local measurement independently has some bit-flip probability given by (1 − qM)/2, which we assume to be symmetric with respect to the 0 and 1 outcomes. This leads to an additional reduction of our bounds on the cost function and its gradient that depends on the locality of the observable O.

Proposition 1

(Measurement noise). Consider expanding the observable O as a sum of Pauli strings, as in Eq. (4). Let w denote the minimum weight of these strings, where the weight is defined as the number of non-identity elements for a given string. In addition to the noise process considered in Fig. 2, suppose there is also measurement noise consisting of a tensor product of local bit-flip channels with bit-flip probability (1 − qM)/2. Then we have

$$\left|\widetilde{C}-\frac{1}{2^n}\mathrm{Tr}[O]\right|\,\leqslant\,q_M^{w}\ G(n)\ \left\Vert\rho-\frac{\mathbb{1}}{2^n}\right\Vert_1$$
(17)

and

$$|\partial_{lm}\widetilde{C}|\,\leqslant\,q_M^{w}\,F(n)$$
(18)

where G(n) and F(n) are defined in Lemma 1 and Theorem 1, respectively.

Proposition 1 goes beyond the noise model considered in Theorem 1. It shows that in the presence of measurement noise there is an additional contribution from the locality of the measurement operator. It is interesting to draw a parallel between Proposition 1 and noise-free barren plateaus, which have been shown to be cost-function dependent and in particular to depend on the locality of the observable O34. The bounds in Proposition 1 similarly depend on the locality of O. For example, when w = n, i.e., for global observables, the factor \(q_M^{w}\) will hasten the exponential decay. On the other hand, when w = 1, i.e., for local observables, the scaling is unaltered by measurement noise. In this sense, a global observable exacerbates the NIBP issue by making the decay more rapid with n.

Application-specific analytical results

Here we investigate the implications of our results from the previous subsection for two applications: optimization and chemistry. In particular, we derive explicit conditions for NIBPs for these applications. These conditions are derived in the setting where Trotterization is used, but other compilation strategies incur similar asymptotic behavior. We begin with the QAOA for optimization and then discuss the UCC ansatz for chemistry. Finally, we make a remark about the Hamiltonian Variational Ansatz (HVA), as well as remark that our results also apply to a generalized cost function that can employ training data.

Corollary 2

(Example: QAOA). Consider the QAOA with 2p trainable parameters, as defined in Eq. (5). Suppose that the implementation of unitaries corresponding to the problem Hamiltonian HP and the mixer Hamiltonian HM require kP- and kM-depth circuits, respectively. If local Pauli noise of the form in Eq. (7) with noise parameter q acts before and after each layer of native gates, then we have

$$|\partial_{\gamma_l}\widetilde{C}|\ \leqslant\ \sqrt{8\ln 2}\ g_{l,P}\,N_P\,\Vert H_P\Vert_{\infty}\,\Vert\boldsymbol{\omega}\Vert_{\infty}\,n^{1/2}\,q^{(k_P+k_M)p+1},$$
(19)
$$|\partial_{\beta_l}\widetilde{C}|\ \leqslant\ \sqrt{8\ln 2}\ g_{l,M}\,N_P\,\Vert H_M\Vert_{\infty}\,\Vert\boldsymbol{\omega}\Vert_{\infty}\,n^{1/2}\,q^{(k_P+k_M)p+1},$$
(20)

for any choice of parameters γl, βl, and where O = HP in Eq. (2). Here gl,P and gl,M are the respective numbers of native gates parameterized by γl and βl according to the compilation.

Corollary 2 follows from Remark 1 and it has interesting implications for the trainability of the QAOA. From Eqs. (19) and (20), NIBPs are guaranteed if pkP scales linearly in n. This can manifest itself in a number of ways, which we explain below.

First, we look at the depth kP required to implement one application of the problem unitary. Graph problems containing vertices of extensive degree, such as the Sherrington-Kirkpatrick model, inherently require Ω(n)-depth circuits to implement55. On the other hand, generic problems mapped to hardware topologies also have the potential to incur Ω(n) depth or greater in compilation cost. For instance, implementation of MaxCut and k-SAT using SWAP networks on circuits with 1-D connectivity requires depth \(\Omega(n)\) and \(\Omega(n^{k-1})\), respectively15,63. Such mappings with the aforementioned compilation overhead for \(k\geqslant 2\) are guaranteed to encounter NIBPs even for a fixed number of rounds p.

Second, it appears that values of p that grow with n, even if only mildly, may be needed for quantum advantage in certain optimization problems (for example64,65,66,67). In addition, there are problems employing the QAOA that explicitly require p scaling as poly(n)21,68. Thus, without even considering the compilation overhead for the problem unitary, these QAOA problems may run into NIBPs, particularly when aiming for quantum advantage. Moreover, weak growth of p with n combined with compilation overhead could still result in an NIBP.

Finally, we note that above we have assumed that the contribution of kP dominates that of kM. However, it is possible that for choices of more exotic mixers16, kM also needs to be carefully considered to avoid NIBPs.

Corollary 3

(Example: UCC). Let H denote a molecular Hamiltonian of a system of Me electrons. Consider the UCC ansatz as defined in Eq. (6). If local Pauli noise of the form in Eq. (7) with noise parameter q acts before and after every Ulm(θlm) in Eq. (6), then we have

$$|\partial_{\theta_{lm}}\widetilde{C}|\,\leqslant\,\sqrt{8\ln 2}\ \widehat{N}_{lm}\,N_H\,\Vert\boldsymbol{\omega}\Vert_{\infty}\,n^{1/2}\,q^{L+1},$$
(21)

for any coupled cluster amplitude θlm, and where O = H in Eq. (2).

Corollary 3 allows us to make general statements about the trainability of the UCC ansatz. We present the details for the standard UCC ansatz with single and double excitations from occupied to virtual orbitals50,69 (see Methods for more details). Let Mo denote the total number of spin orbitals. Then at least n = Mo qubits are required to simulate such a system, and the number of variational parameters grows as \(\Omega(n^2 M_e^2)\)63,70. To implement the UCC ansatz on a quantum computer, the excitation operators are first mapped to Pauli operators using the Jordan-Wigner or Bravyi-Kitaev mappings71,72. Then, using first-order Trotterization and employing SWAP networks63, the UCC ansatz can be implemented in \(\Omega(n^2 M_e)\) depth, assuming 1-D connectivity of qubits63. Hence for the UCC ansatz, approximated by single- and double-excitation operators, the upper bound in Eq. (21) (asymptotically) vanishes exponentially in n.

To target strongly correlated states for molecular Hamiltonians, one can employ a UCC ansatz that includes additional, generalized excitations56,73. An \(\Omega(n^3)\)-depth circuit is required to implement the first-order Trotterized form of this ansatz63. Hence NIBPs become more prominent for generalized UCC ansatzes. Finally, we remark that a sparse version of the UCC ansatz can be implemented in \(\Omega(n)\) depth63; NIBPs would still occur for such ansatzes.

Additionally, we can make the following remark about the Hamiltonian Variational Ansatz (HVA). As argued in refs. 56,74,75, the HVA has the potential to be an effective ansatz for quantum many-body problems.

Remark 3

(Example: HVA). The HVA can be thought of as a generalization of the QAOA to more than two non-commuting Hamiltonians. It is remarked in ref. 57 that for problems of interest the number of rounds p scales linearly in n. Thus, considering this growth of p and also the potential growth of the compiled unitaries with n, the HVA has the potential to encounter NIBPs, by the same arguments made above for the QAOA (e.g., Corollary 2).

Remark 4

(Quantum Machine Learning). Our results can be extended to generalized cost functions of the form \(C_{\mathrm{train}}=\sum_i\mathrm{Tr}[O_i\,U(\boldsymbol{\theta})\rho_i U^{\dagger}(\boldsymbol{\theta})]\), where {Oi} is a set of operators each of the form (4) and {ρi} is a set of states. This can encapsulate certain quantum machine learning settings58,59,60,61,62 that employ training data {ρi}. As an example of an instance where NIBPs can occur, in one study62 an ansatz model has been proposed that requires at least linear circuit depth in n.

Numerical simulations of the QAOA

To illustrate the NIBP phenomenon beyond the conditions assumed in our analytical results, we numerically implement the QAOA to solve MaxCut combinatorial optimization problems. We employ a realistic noise model obtained from gate-set tomography on the IBM Ourense superconducting qubit device. In the Methods we provide additional details on the noise model and the optimization method employed.

Let us first recall that a MaxCut problem is specified by a graph G = (V, E) of nodes V and edges E. The goal is to partition the nodes of G into two sets which maximize the number of edges connecting nodes between sets. Here, the QAOA problem Hamiltonian is given by

$$H_P=-\frac{1}{2}\sum_{ij\in E}C_{ij}\left(\mathbb{1}-Z_i Z_j\right)\ ,$$
(22)

where Zi are local Pauli operators on qubit (node) i, Cij = 1 if the nodes are connected and Cij = 0 otherwise.
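
As an illustration (our own sketch, not the simulation code used for the results here), H_P in Eq. (22) is diagonal in the computational basis, so its diagonal, and hence the exact ground-state energy, can be built directly from the adjacency matrix:

```python
import numpy as np

def maxcut_diag(adj):
    """Diagonal of H_P = -1/2 sum_{ij in E} C_ij (1 - Z_i Z_j) in the computational basis."""
    n = adj.shape[0]
    diag = np.zeros(2**n)
    for idx in range(2**n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                # (1 - Z_i Z_j)/2 equals 1 exactly when edge ij is cut by the bit assignment
                if adj[i, j] and bits[i] != bits[j]:
                    diag[idx] -= 1.0
    return diag

# Triangle graph: the maximum cut is 2, so the ground-state energy of H_P is -2.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print("ground-state energy:", maxcut_diag(adj).min())
```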

We analyze performance in two settings. First, we fix the problem size at n = 5 nodes (qubits) and vary the number of rounds p (Fig. 3). Second, we fix the number of rounds of QAOA at p = 4 and vary the problem size by increasing the number of nodes (Fig. 4).

Fig. 3: QAOA heuristics in the presence of realistic hardware noise: increasing number of rounds for fixed problem size.

a The approximation ratio averaged over 100 random graphs of 5 nodes is plotted versus number of rounds p. The black, green, and red curves respectively correspond to noise-free training, noisy training with noise-free final cost evaluation, and noisy training with noisy final cost evaluation. The performance of noise-free training increases with p, similar to the results in ref. 15. The green curve shows that the training process itself is hindered by noise, with the performance decreasing steadily with p for p > 4. The dotted blue lines correspond to known lower and upper bounds on classical performance in polynomial time: respectively the performance guarantee of the Goemans-Williamson algorithm77 and the boundary of known NP-hardness78,79. b The deviation of the cost from \(\mathrm{Tr}[H_P]/2^n\) (averaged over graphs and parameter values) is plotted versus p. As p increases, this deviation decays approximately exponentially with p (linear on the log scale). c The absolute value of the largest partial derivative, averaged over graphs and parameter values, is plotted versus p. The partial derivatives decay approximately exponentially with p, showing evidence of Noise-Induced Barren Plateaus (NIBPs).

Fig. 4: QAOA heuristics in the presence of realistic hardware noise: increasing problem size for a fixed number of rounds.
figure 4

The approximation ratio averaged over 60 random graphs of increasing number of nodes n and fixed number of rounds p = 4 is plotted. The black, green, and red curves respectively correspond to noise-free training, noisy training with noise-free final cost evaluation, and noisy training with noisy final cost evaluation. a For a problem size of 8 nodes or larger, the noisily-trained approximation ratio falls below the performance guarantee of the classical Goemans-Williamson algorithm. b The depth of the circuit (red curve) scales linearly with the number of qubits, confirming we are in a regime where we would expect to observe Noise-Induced Barren Plateaus.

In order to quantify performance for given n and p, we randomly generate 100 graphs according to the Erdős-Rényi model76, such that each graph G is chosen uniformly at random from the set of all graphs of n nodes. For each graph we run 10 instances of the parameter optimization, and we select the run that achieves the smallest energy. At each optimization step the cost is estimated with 1000 shots. Performance is quantified by the average approximation ratio when training the QAOA in the presence and absence of noise. The approximation ratio is defined as the lowest energy obtained via optimization divided by the exact ground state energy of HP.
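
A minimal sketch of this benchmarking setup (our own illustration: a random assignment stands in for the trained QAOA output, and brute-force enumeration gives the exact maximum cut) might look as follows; since the energy of a computational-basis state under Eq. (22) is minus its cut value, the cut ratio below equals the energy-based approximation ratio defined above:

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

def random_graph(n, p=0.5):
    """Erdos-Renyi G(n, 1/2): every graph on n nodes is equally likely."""
    A = np.triu(rng.random((n, n)) < p, k=1)
    return A | A.T

def cut_value(adj, assignment):
    """Number of edges crossing the partition defined by the 0/1 assignment."""
    n = len(assignment)
    return sum(int(adj[i, j]) * (assignment[i] != assignment[j])
               for i in range(n) for j in range(i + 1, n))

def max_cut(adj):
    """Brute-force maximum cut (fine for the small n used here)."""
    n = adj.shape[0]
    return max(cut_value(adj, bits) for bits in itertools.product([0, 1], repeat=n))

adj = random_graph(5)
while not adj.any():                                  # avoid the (rare) empty graph
    adj = random_graph(5)
best = max_cut(adj)
found = cut_value(adj, rng.integers(0, 2, size=5))    # placeholder for the trained QAOA output
print("approximation ratio =", found / best)
```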

In our first setting we observe in Fig. 3a that when training in the absence of noise, the approximation ratio increases with p. However, when training in the presence of noise the performance decreases for p > 2. This result is in accordance with Lemma 1, as the cost function value concentrates around \(\mathrm{Tr}[H_P]/2^n\) as p increases. This concentration phenomenon can also be seen clearly in Fig. 3b, where in fact we see evidence of exponential decay of the cost value with p.

In addition, we can see the effect of NIBPs, as Fig. 3a also depicts the value of the approximation ratio computed without noise by utilizing the parameters obtained via noisy training. Note that evaluating the cost in a noise-free setting has practical meaning, since the classicality of the Hamiltonian allows one to compute the cost on a (noise-free) classical computer after training the parameters. For p > 4 this approximation ratio decreases, meaning that as p becomes larger it becomes increasingly hard to find a minimizing direction along which to navigate the cost function landscape. Moreover, the effect of NIBPs is evident in Fig. 3c, where we depict the average absolute value of the largest cost function partial derivative (i.e., \(\max_{lm}|\partial_{lm}\widetilde{C}|\)). This plot shows an exponential decay of the partial derivative with p, in accordance with Theorem 1.

Finally, in Fig. 3a we contextualize our results with previously known two-sided bounds on classical polynomial-time performance. The lower bound corresponds to the performance guarantee of the classical Goemans-Williamson algorithm77, whilst the upper bound is at the value 16/17, which is the approximation ratio beyond which MaxCut is known to be NP-hard78,79.

In our second setting we find complementary results. In Fig. 4a we observe that at a problem size of 8 qubits or larger, 4 rounds of the QAOA trained on the noisy circuit fall short of the performance guarantee of the classical Goemans-Williamson algorithm. As we increase the number of qubits, the depth of the circuit also increases linearly (Fig. 4b), confirming that we are in a regime where NIBPs are expected.

Our numerical results show that training the QAOA in the presence of a realistic noise model significantly affects the performance. The concentration of the cost and the NIBP phenomenon are both clearly visible in our data. Even though we observe performance for n = 5 and p = 4 that would be NP-hard to achieve classically, any possible advantage would be lost at larger problem sizes or circuit depths due to the poor scaling. Hence, noise appears to be a crucial factor to account for when attempting to understand the performance of the QAOA.

Implementation of the HVA on superconducting hardware

We further demonstrate the NIBP phenomenon in a realistic hardware setting by implementing the Hamiltonian Variational Ansatz (HVA) on the IBM Quantum ibmq_montreal 27-qubit superconducting device. At the time of writing, this device holds the record for the largest quantum volume measured on an IBM Quantum device, which was demonstrated on a line of 6 qubits80.

We implement the HVA for the Transverse Field Ising Model as considered in ref. 57, with a local measurement O = Z0Z1 on the first two qubits of the Ising chain. We set the number of layers L of the ansatz to grow linearly with the number of qubits n according to the relationship L = n − 1. In order to minimize the SWAP gates used in transpilation (and the accompanying noise that they incur), we modify each layer of the HVA to only include entangling gates between locally connected qubits.

Figure 5 plots the partial derivative of the cost function with respect to the parameter in the final layer of the ansatz, averaged over 100 random parameter sets. We also plot averaged cost differences from the corresponding maximally mixed values, as well as the variance of both quantities. In the noise-free case both the partial derivative and cost value differences decrease at a sub-exponential rate. Meanwhile, in the noisy case we observe that both the partial derivatives and cost value differences vanish exponentially until their variances reach the same order of magnitude as the shot noise floor. (As the shot budget on the IBM Quantum device is limited, this leads to a background of shot noise, and we plot the order of magnitude of this with a dotted line.) This explicitly demonstrates that the problem of barren plateaus is one of resolvability. In principle, if one has access to exact cost values and gradients one may be able to navigate the cost landscape, however, the number of shots required to reach the necessary resolution increases exponentially with n.
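
A rough back-of-the-envelope sketch of this resolvability argument (our own illustration, with placeholder numbers rather than device data): if the gradient decays as \(q^{L+1}\) with L = n − 1, the number of shots required to resolve it to fixed relative precision grows exponentially with n.

```python
import numpy as np

def shots_to_resolve(grad, rel_err=0.1):
    """Shots needed so the standard error (at most 1/sqrt(shots)) equals rel_err * grad."""
    return int(np.ceil(1.0 / (rel_err * grad) ** 2))

q = 0.9                                    # illustrative per-layer noise parameter
for n in range(4, 41, 8):
    L = n - 1                              # layer count used in the hardware experiment
    grad = 0.5 * q ** (L + 1)              # gradient decaying as in Theorem 1 (toy prefactor)
    print(f"n={n:2d}   gradient ~ {grad:.2e}   shots ~ {shots_to_resolve(grad):.1e}")
```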

Fig. 5: Implementation on the ibmq_montreal superconducting-qubit device.

We consider the HVA with the number of layers growing linearly in the number of qubits, n. a The average magnitude of the partial derivative of the noisy and noise-free cost, with respect to the parameter in the final layer, is plotted versus n. The average is taken over 100 randomly selected parameter sets. As n increases, the noisy average partial derivative decreases approximately exponentially, until around n = 9. This shows evidence of Noise-Induced Barren Plateaus on real quantum hardware. b The deviation from exponential scaling can be understood by observing that it coincides with the point at which the variance of the noisy partial derivatives reaches the same order of magnitude as the shot noise given by a finite sample budget of 8192 shots. Thus, from this point onward we expect fluctuations in the partial derivative to be dominated by shot noise, and gradients to be unresolvable. c The difference of the cost value from its corresponding maximally mixed value is plotted versus n. d The variance of this difference is plotted versus n. Both these quantities also show exponential decay until the variance of the cost difference approaches the shot noise floor, which shows evidence of exponential cost concentration on this device.

Discussion

The success of NISQ computing largely depends on the scalability of Variational Quantum Algorithms (VQAs), which are widely viewed as the best hope for near-term quantum advantage for various applications. Only a small handful of works have analytically studied VQA scalability, and there is even less known about the impact of noise on their scaling. Our work represents a breakthrough in understanding the effect of local noise on VQA scalability. We rigorously prove two important and closely related phenomena: the exponential concentration of the cost function in Lemma 1 and the exponential vanishing of the gradient in Theorem 1. We refer to the latter as a Noise-Induced Barren Plateau (NIBP). Like noise-free barren plateaus, NIBPs require the precision and hence the algorithmic complexity to scale exponentially with the problem size. Thus, avoiding NIBPs is necessary for a VQA to have any hope of exponential quantum speedup.

NIBPs have conceptual differences from noise-free barren plateaus31,32,33,34,35,36 as the gradient vanishes with increasing problem size at every point on the cost function landscape, rather than probabilistically. As a consequence, NIBPs cannot be addressed by layer-wise training, correlating parameters, or other strategies34,37,40,41,42,43, all of which can help avoid noise-free barren plateaus. We explicitly demonstrate this in Remark 1 for the parameter correlation strategy. Similar to noise-free barren plateaus, NIBPs present a problem for trainability even when utilizing gradient-free optimizers39 (e.g., simplex-based methods such as ref. 81 or methods designed specifically for quantum landscapes82) or optimization strategies that use higher-order derivatives38. At the moment, the only strategies we are aware of for avoiding NIBPs are: (1) reducing the hardware noise level, or (2) improving the design of variational ansatzes such that their circuit depth scales more weakly with n. Our work provides quantitative guidance for how to develop these strategies.

We emphasize that naïve mitigation strategies, such as artificially increasing gradients, cannot remove the exponential scaling of NIBPs, as this simply increases the variance of any finite-shot evaluation of derivatives and does not improve the resolvability of the landscape. This argument extends straightforwardly to any error mitigation strategy that applies an affine map to cost values83,84,85,86,87,88,89. Further, most error mitigation techniques consist only of post-processing noisy circuits. Thus, we deem it unlikely that many strategies can remove the exponential NIBP scaling, as information about the cost landscape has fundamentally been lost (or at least been made exponentially inaccessible). This is in contrast to error correction, where information is protected and recovered. However, in general it is an open question whether or not error mitigation strategies can mitigate NIBPs, and we leave this question for future work.

An elegant feature of our work is its generality, as our results apply to a wide range of VQAs and ansatzes. This includes the two most popular ansatzes, the QAOA for optimization and the UCC ansatz for chemistry, which Corollaries 2 and 3 treat respectively. In recent times the QAOA, UCC, and other physically motivated ansatzes have been touted as the potential solution to trainability issues due to (noise-free) barren plateaus, while Hardware Efficient ansatzes, which minimize circuit depth, have been regarded as problematic. Our work swings the pendulum in the other direction: any additional circuit depth that an ansatz incorporates (regardless of whether it is physically motivated) will hurt trainability and potentially lead to a NIBP. This suggests that Hardware Efficient ansatzes are in fact worth exploring further, provided one has an appropriate strategy to avoid noise-free barren plateaus. This claim is supported by recent state-of-the-art implementations for optimization55 and chemistry54 using such ansatzes. Our work also provides additional motivation towards the pursuit of adaptive ansatzes90,91,92,93,94,95,96,97,98 that reduce circuit depth.

We believe our work has particular relevance to optimization. For combinatorial optimization problems, such as MaxCut on 3-regular graphs, the compilation of a single instance of the problem unitary \({e}^{-i\gamma {H}_{P}}\) can require an Ω(n)-depth circuit55. Therefore, for a constant number of rounds p of the QAOA, the circuit depth grows at least linearly with n. From Theorem 1, it follows that NIBPs can occur for practical QAOA problems, even for constant number of rounds. Furthermore, even neglecting the aforementioned linear compilation overhead, NIBPs are guaranteed (asymptotically) if p grows in n. Such growth has been shown to be necessary in certain instances of MaxCut64 as well as for other optimization problems21,68, and hence NIBPs are especially relevant in these cases.

While it is well known that decoherence ultimately limits the depth of quantum circuits in the NISQ era, there was an interesting open question (prior to our work) as to whether one could still train the parameters of a variational ansatz in the high decoherence limit. This question was especially important for VQAs for optimization, compiling, and linear systems, which are applications that do not require accurate estimation of cost functions on the quantum computer. Our work essentially provides a negative answer to this question. Naturally, important future work will involve extending our results to more general (e.g., non-unital) noise models, and numerically testing the tightness of our bounds. Moreover, our work emphasizes the importance of short-depth variational ansatzes. Hence a crucial research direction for the success of VQAs will be the development of methods to reduce ansatz depth.

Methods

Special cases of our ansatz

Here we discuss how the QAOA, the Hardware Efficient ansatz, and the UCC ansatz fit into the framework described in the general framework subsection.

1. Quantum Alternating Operator Ansatz. The QAOA can be understood as a discretized adiabatic transformation where the goal is to prepare the ground state of a given Hamiltonian HP. The number p of Trotterization steps (rounds) determines the solution precision and the circuit depth. Given an initial state \(\left|s\right\rangle\), usually the uniform superposition of all elements of the computational basis \(\left|s\right\rangle=\left|+\right\rangle^{\otimes n}\), the ansatz corresponds to the sequential application of two unitaries \(U_P(\gamma_l)=e^{-i\gamma_l H_P}\) and \(U_M(\beta_l)=e^{-i\beta_l H_M}\). These alternating unitaries are usually known as the problem and mixer unitary, respectively. Here \(\boldsymbol{\gamma}=\{\gamma_l\}_{l=1}^{p}\) and \(\boldsymbol{\beta}=\{\beta_l\}_{l=1}^{p}\) are vectors of variational parameters, which determine how long each unitary is applied and which must be optimized to minimize the cost function C, defined as the expectation value

$$C=\langle\boldsymbol{\gamma},\boldsymbol{\beta}|\,H_P\,|\boldsymbol{\gamma},\boldsymbol{\beta}\rangle=\mathrm{Tr}\!\left[H_P\left|\boldsymbol{\gamma},\boldsymbol{\beta}\right\rangle\!\left\langle\boldsymbol{\gamma},\boldsymbol{\beta}\right|\right]\ ,$$
(23)

where \(\left|\boldsymbol{\gamma},\boldsymbol{\beta}\right\rangle=U(\boldsymbol{\gamma},\boldsymbol{\beta})\left|s\right\rangle\) is the QAOA variational state, and where \(U(\boldsymbol{\gamma},\boldsymbol{\beta})\) is given by (5). In Fig. 6a we depict the circuit description of a QAOA ansatz for a specific Hamiltonian where kP = 6.
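
As a small self-contained sketch (ours, assuming the ring-of-disagrees Hamiltonian of Fig. 6a and a transverse-field mixer), the QAOA state of Eq. (5) and cost of Eq. (23) can be built by alternating matrix exponentials:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n = 4
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
kron_all = lambda ops: reduce(np.kron, ops)

def local_op(P, j):
    ops = [I2] * n
    ops[j] = P
    return kron_all(ops)

# Ring-of-disagrees problem Hamiltonian of Fig. 6a and a transverse-field mixer
HP = 0.5 * sum(local_op(Z, j) @ local_op(Z, (j + 1) % n) for j in range(n))
HM = sum(local_op(X, j) for j in range(n))

def qaoa_state(gammas, betas):
    """|gamma, beta> = e^{-i b_p H_M} e^{-i g_p H_P} ... e^{-i b_1 H_M} e^{-i g_1 H_P} |+>^n, cf. Eq. (5)."""
    psi = np.ones(2**n, dtype=complex) / np.sqrt(2**n)
    for g, b in zip(gammas, betas):
        psi = expm(-1j * b * HM) @ (expm(-1j * g * HP) @ psi)
    return psi

rng = np.random.default_rng(0)
p = 2
psi = qaoa_state(rng.uniform(0, np.pi, p), rng.uniform(0, np.pi, p))
print("C =", np.real(psi.conj() @ HP @ psi))   # Eq. (23)
```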

Fig. 6: Special cases of our general ansatz.

a QAOA problem unitary \(e^{-i\gamma H_P}\) for the ring-of-disagrees MaxCut problem, with Hamiltonian \(H_P=\frac{1}{2}\sum_j Z_j Z_{j+1}\). b Hardware Efficient ansatz composed of CNOTs and single-qubit rotations around the y-axis Ry(θ). c Unitary for the exponential \(e^{-i\theta Y_1 Z_2 Z_3 X_4}\). This type of circuit is a representative component of the UCC ansatz.

2. Hardware Efficient Ansatz. The goal of the Hardware Efficient ansatz is to reduce the gate overhead (and hence the circuit depth) which arises when implementing a general unitary as in (3). Hence, when employing specific quantum hardware, the parametrized gates \(e^{-i\theta_{lm}H_{lm}}\) and the unparametrized gates Wlm are taken from a gate alphabet composed of gates native to that hardware. Figure 6b shows an example of a Hardware Efficient ansatz where the gate alphabet is composed of rotations around the y axis and of CNOTs.

3. Unitary Coupled Cluster Ansatz. This ansatz is employed to estimate the ground state energy of a molecular Hamiltonian. In second quantization, and within the Born-Oppenheimer approximation, the molecular Hamiltonian of a system of Me electrons can be expressed as \(H=\sum_{pq}h_{pq}a_p^{\dagger}a_q+\frac{1}{2}\sum_{pqrs}h_{pqrs}a_p^{\dagger}a_q^{\dagger}a_r a_s\), where \(\{a_p^{\dagger}\}\) (\(\{a_q\}\)) are Fermionic creation (annihilation) operators. Here, hpq and hpqrs respectively correspond to the so-called one- and two-electron integrals50,69. The ground state energy of H can be estimated with the VQE algorithm by preparing a reference state, normally taken to be the Hartree-Fock (HF) mean-field state \(\left|\psi_0\right\rangle\), and acting on it with a parametrized UCC ansatz.

The action of a UCC ansatz with single (T1) and double (T2) excitations is given by \(\left|\psi\right\rangle=\exp(T-T^{\dagger})\left|\psi_0\right\rangle\), where T = T1 + T2, and where

$$T_1=\sum_{\substack{i\in\mathrm{occ}\\ a\in\mathrm{vir}}}t_i^{a}\,a_a^{\dagger}a_i,\quad T_2=\sum_{\substack{i,j\in\mathrm{occ}\\ a,b\in\mathrm{vir}}}t_{i,j}^{a,b}\,a_a^{\dagger}a_b^{\dagger}a_j a_i.$$
(24)

Here the i and j indices range over “occupied” orbitals whereas the a and b indices range over “virtual” orbitals50,69. The coefficients \(t_i^{a}\) and \(t_{i,j}^{a,b}\) are called coupled cluster amplitudes. For simplicity, we denote these amplitudes \(\{t_i^{a},t_{i,j}^{a,b}\}\) as {θlm}. Similarly, by denoting the excitation operators \(\{a_a^{\dagger}a_i,\ a_a^{\dagger}a_b^{\dagger}a_j a_i\}\) as {τlm}, the UCC ansatz can be written in a compact form as \(U(\boldsymbol{\theta})=e^{\sum_{lm}\theta_{lm}(\tau_{lm}-\tau_{lm}^{\dagger})}\). In order to implement \(U(\boldsymbol{\theta})\) one maps the fermionic operators to spin operators by means of the Jordan-Wigner or the Bravyi-Kitaev transformations71,72, which allows us to write \((\tau_{lm}-\tau_{lm}^{\dagger})=i\sum_{i}\mu_{lm}^{i}\sigma_n^{i}\). Then, from a first-order Trotterization we obtain (6). Here, \(\mu_{lm}^{i}\in\{0,\pm 1\}\). In Fig. 6c we depict the circuit description of a representative component of the UCC ansatz.
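
A quick way to see how the number of variational parameters grows is to enumerate the singles and doubles amplitudes of Eq. (24) directly (a toy sketch with illustrative orbital numbers):

```python
from itertools import combinations

def ucc_sd_amplitudes(n_occ, n_vir):
    """Enumerate the t_i^a and t_{ij}^{ab} amplitudes appearing in Eq. (24)."""
    singles = [(i, a) for i in range(n_occ) for a in range(n_vir)]
    doubles = [(i, j, a, b)
               for i, j in combinations(range(n_occ), 2)
               for a, b in combinations(range(n_vir), 2)]
    return singles, doubles

# Illustrative sizes: 4 occupied and 6 virtual spin orbitals
singles, doubles = ucc_sd_amplitudes(4, 6)
print(len(singles), "single amplitudes,", len(doubles), "double amplitudes")
# The doubles count ~ n_occ^2 * n_vir^2 / 4 dominates, consistent with the
# Omega(n^2 Me^2) parameter-count scaling discussed in the Results.
```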

Proof of Theorem 1

Here we outline the proof for our main result on Noise-Induced Barren Plateaus. We refer the reader to Supplementary Note 2 for additional details. We note that Lemma 1 and Remark 1 follow from similar steps and their proofs are detailed in Supplementary Notes 3 and 4 respectively. Moreover, we remark that Corollaries 1, 2 and 3 follow in a straightforward manner from a direct application of Theorem 1 and Remark 1.

Throughout our calculations we find it useful to use the expansion of operators in the Pauli tensor product basis. Given an n-qubit Hermitian operator Λ, one can always consider the decomposition

$$\Lambda=\lambda_0\,\mathbb{1}^{\otimes n}+\boldsymbol{\lambda}\cdot\boldsymbol{\sigma}_n\ ,$$
(25)

where \(\lambda_0\in\mathbb{R}\) and \(\boldsymbol{\lambda}\in\mathbb{R}^{4^n-1}\). Note that here we redefine the vector of Pauli strings \(\boldsymbol{\sigma}_n\) as a vector of length \(4^n-1\) which excludes \(\mathbb{1}^{\otimes n}\).

Central to our proof is understanding how operators are mapped by concatenations of unitary transformations and noise channels. We do this through two lenses. First, given an operator Λ we investigate how various p-norms of \(\boldsymbol{\lambda}\) are related at different points in the evolution. Such quantities are well suited to our setting as we can use the transfer matrix formalism in the Pauli basis, that is, represent a channel \(\mathcal{N}\) with the matrix \({(T_{\mathcal{N}})}_{ij}=\frac{1}{2^n}\mathrm{Tr}\left[\sigma_n^{i}\,\mathcal{N}(\sigma_n^{j})\right]\). Indeed, the noise model in (7) has a diagonal Pauli transfer matrix, which motivates this approach. The second quantity we use is the sandwiched 2-Rényi relative entropy \(D_2\left(\rho\,\Vert\,\mathbb{1}^{\otimes n}/2^n\right)\) between a state ρ and the maximally mixed state. This is also useful due to the strong data processing inequality of ref. 99, which quantifies how noise maps ρ closer to the maximally mixed state.
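
As a sanity check of this formalism (our own sketch, not code from this work), one can compute the Pauli transfer matrix of a single-qubit depolarizing channel and verify that it is diagonal with entries \((1,q,q,q)\), where \(q=1-p\):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [I, X, Y, Z]

def pauli_channel(rho, probs):
    """N(rho) = sum_k p_k P_k rho P_k for Pauli error probabilities probs = (pI, pX, pY, pZ)."""
    return sum(p * P @ rho @ P for p, P in zip(probs, sigma))

def transfer_matrix(channel):
    """(T_N)_{ij} = Tr[sigma_i N(sigma_j)] / 2 for a single-qubit channel."""
    return np.array([[np.real(np.trace(si @ channel(sj))) / 2 for sj in sigma]
                     for si in sigma])

# Depolarizing noise with probability p: q_X = q_Y = q_Z = 1 - p
p = 0.1
probs = (1 - 3 * p / 4, p / 4, p / 4, p / 4)
print(np.round(transfer_matrix(lambda rho: pauli_channel(rho, probs)), 3))
# -> diag(1, 0.9, 0.9, 0.9), i.e. diagonal with entries (1, q, q, q)
```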

Let us now present two lemmas that reflect these two parts of the proof. The action of the noise in (7) on the operator Λ is to map the elements of \(\boldsymbol{\lambda}\) as \(\lambda_i\mathop{\to}\limits^{\mathcal{N}}\lambda_i^{\prime}=q_X^{x(i)}q_Y^{y(i)}q_Z^{z(i)}\lambda_i\), where x(i), y(i), and z(i) respectively denote the number of X, Y, and Z operators in the i-th Pauli string. Recall the definition \(q=\max\{|q_X|,|q_Y|,|q_Z|\}\). Since \(x(i)+y(i)+z(i)\geqslant 1\), the inequality \(|\lambda_i^{\prime}|\leqslant q|\lambda_i|\) always holds. We use this relationship, along with Weyl’s inequality and the unitary invariance of Schatten norms, to show that for an operator of the form (25) we have

$$\left\Vert\mathcal{W}^{k}(\Lambda)\right\Vert_{\infty}\,\leqslant\,\lambda_0+q^{k}\left\Vert\boldsymbol{\lambda}\right\Vert_{1}$$
(26)

where \(\mathcal{W}^{k}\) is a channel composed of k unitaries interleaved with noise channels of the form (7). The second lemma we present is a consequence of a strong data-processing inequality for the sandwiched 2-Rényi relative entropy from ref. 99, from which we can show

$$D_2\left(\mathcal{W}^{k}(\rho)\,\Big\Vert\,\mathbb{1}^{\otimes n}/2^n\right)\,\leqslant\,q^{2k}\,D_2\left(\rho\,\Big\Vert\,\mathbb{1}^{\otimes n}/2^n\right)$$
(27)

where we note that \(D_2\left(\rho\,\Vert\,\mathbb{1}^{\otimes n}/2^n\right)\) itself is always upper bounded by n for any n-qubit quantum state ρ.

Now that we have the main tools we present a sketch of the proof. In order to analyze the partial derivative of the cost function \(\partial_{lm}\widetilde{C}=\mathrm{Tr}\left[O\ \partial_{lm}\rho_L\right]\), we first note that the output state ρL can be expressed as

$$\rho_L=\left(\mathcal{W}_a\circ\mathcal{W}_b\right)(\rho_0)=\mathcal{W}_a(\bar{\rho}_l)\ ,$$
(28)

where ρ0 is the input state and

$$\mathcal{W}_a=\mathcal{N}\circ\mathcal{U}_L\circ\cdots\circ\mathcal{U}_{l+1}\circ\mathcal{N}\circ\mathcal{U}_{lm}^{+},$$
(29)
$$\mathcal{W}_b=\mathcal{U}_{lm}^{-}\circ\mathcal{N}\circ\mathcal{U}_{l-1}\circ\cdots\circ\mathcal{N}\circ\mathcal{U}_1\circ\mathcal{N}\ ,$$
(30)

where \(\mathcal{U}_{lm}^{\pm}\) are channels that implement the unitaries \(U_{lm}^{-}=\prod_{s\leqslant m}e^{-i\theta_{ls}H_{ls}}\) and \(U_{lm}^{+}=\prod_{s>m}e^{-i\theta_{ls}H_{ls}}\) such that \(U_l=U_{lm}^{+}\,U_{lm}^{-}\). For simplicity of notation here we have omitted the parameter dependence on the concatenation of channels. Additionally, we have introduced the notation \(\bar{\rho}_l=\mathcal{W}_b(\rho_0)\) and it is straightforward to show that

$$\partial_{lm}\bar{\rho}_l=-i[H_{lm},\bar{\rho}_l]\ .$$
(31)

Using the tracial matrix Hölder’s inequality100, we can write

$$\left|\partial_{lm}\widetilde{C}\right|=\left|\mathrm{Tr}\left[\mathcal{W}_a^{\dagger}(O)\ \partial_{lm}\bar{\rho}_l\right]\right|$$
(32)
$$\leqslant\,\left\Vert\mathcal{W}_a^{\dagger}(O)\right\Vert_{\infty}\ \left\Vert\partial_{lm}\bar{\rho}_l\right\Vert_1\ ,$$
(33)

where \(\mathcal{W}_a^{\dagger}\) is the adjoint map of \(\mathcal{W}_a\). The two terms in the product can then be bounded with the above two techniques. Using (26) we find \(\Vert\mathcal{W}_a^{\dagger}(O)\Vert_{\infty}\leqslant q^{L-l+1}N_O\Vert\boldsymbol{\omega}\Vert_{\infty}\) for the first term. We bound the second term by using (31), a bound on Schatten norms of commutators101, quantum Pinsker’s inequality102, and (27) to obtain \(\Vert\partial_{lm}\bar{\rho}_l\Vert_1\leqslant\sqrt{8\ln 2}\,\Vert H_{lm}\Vert_{\infty}\,n^{1/2}q^{l}\). Putting the two parts together we obtain

$$\left|\partial_{lm}\widetilde{C}\right|\,\leqslant\,\sqrt{8\ln 2}\ N_O\,\Vert H_{lm}\Vert_{\infty}\,\Vert\boldsymbol{\omega}\Vert_{\infty}\,n^{1/2}\,q^{L+1}\ ,$$
(34)

completing the proof.

Proof of Proposition 1

Here we sketch the proof of Proposition 1, with additional details being presented in Supplementary Note 8.

We model measurement noise as a tensor product of independent local classical bit-flip channels, which mathematically corresponds to modifying the local POVM elements \(P_0=\left|0\right\rangle\!\left\langle 0\right|\) and \(P_1=\left|1\right\rangle\!\left\langle 1\right|\) as follows:

$$P_0=\left|0\right\rangle\!\left\langle 0\right|\to\widetilde{P}_0=\frac{1+q_M}{2}\left|0\right\rangle\!\left\langle 0\right|+\frac{1-q_M}{2}\left|1\right\rangle\!\left\langle 1\right|$$
(35)
$$P_1=\left|1\right\rangle\!\left\langle 1\right|\to\widetilde{P}_1=\frac{1-q_M}{2}\left|0\right\rangle\!\left\langle 0\right|+\frac{1+q_M}{2}\left|1\right\rangle\!\left\langle 1\right|\ .$$
(36)

In turn, it follows that one can also model this measurement noise as a tensor product of local depolarizing channels with depolarizing probability \(1\geqslant(1-q_M)/2\geqslant 0\), which we indicate by \(\mathcal{N}_M\). The channel is applied directly to the measurement operator such that \(\mathcal{N}_M(O)=\sum_i\omega^{i}\,\mathcal{N}_M(\sigma_n^{i})=\widetilde{\boldsymbol{\omega}}\cdot\boldsymbol{\sigma}_n\). Here \(\widetilde{\boldsymbol{\omega}}\) is a vector of coefficients \(\widetilde{\omega}^{i}=q_M^{w(i)}\omega^{i}\), where w(i) = x(i) + y(i) + z(i) is the weight of the Pauli string. Here we recall that we have respectively defined x(i), y(i), z(i) as the number of Pauli operators X, Y, and Z in the i-th Pauli string.
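
A small sketch of this rescaling (our own illustration with made-up coefficients): given the Pauli coefficients of O, the measurement noise multiplies each coefficient by \(q_M^{w(i)}\), and the overall prefactor in Eqs. (17)–(18) is set by the minimum weight w:

```python
import numpy as np

q_M = 0.9                                  # bit-flip probability (1 - q_M)/2 = 0.05

def weight(label):
    """Number of non-identity Paulis in a string such as 'ZZI'."""
    return sum(ch != "I" for ch in label)

# Pauli coefficients omega^i of two illustrative 3-qubit observables
local_O  = {"ZII": 1.0}                    # w = 1: only a single factor of q_M
global_O = {"ZZZ": 1.0}                    # w = n: the factor q_M^n hastens the decay

for name, omega in [("local", local_O), ("global", global_O)]:
    omega_tilde = {s: q_M ** weight(s) * c for s, c in omega.items()}
    w = min(weight(s) for s in omega)
    print(name, " q_M^w =", q_M ** w, " omega_tilde =", omega_tilde)
```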

Let us first focus on the partial derivative of the cost. Writing the Pauli decomposition of the derivative of the noisy output state as \(\partial_{lm}\rho_L={\boldsymbol{g}}^{(L)}\cdot\boldsymbol{\sigma}_n\) (the identity component vanishes because the derivative of a unit-trace state is traceless), in the presence of measurement noise we then have

$$\partial_{lm}\widetilde{C}=\frac{1}{2^n}\,\mathrm{Tr}\left[(\widetilde{\boldsymbol{\omega}}\cdot\boldsymbol{\sigma}_n)({\boldsymbol{g}}^{(L)}\cdot\boldsymbol{\sigma}_n)\right]$$
(37)
$$=\widetilde{\boldsymbol{\omega}}\cdot{\boldsymbol{g}}^{(L)}.$$
(38)

This means that \(|\partial_{lm}\widetilde{C}|=|\widetilde{\boldsymbol{\omega}}\cdot{\boldsymbol{g}}^{(L)}|\). We then examine the inner product in an element-wise fashion:

$$|\widetilde{\boldsymbol{\omega}}\cdot{\boldsymbol{g}}^{(L)}|\,\leqslant\,\sum_{i}|\widetilde{\omega}_i||g_i^{(L)}|\,\leqslant\,\sum_{i}q_M^{w(i)}|\omega_i||g_i^{(L)}|\ .$$
(39)

Therefore, defining \(w=\min_i w(i)\) as the minimum weight of the Pauli strings in the decomposition of O, we have that \(q_M^{w(i)}\leqslant q_M^{w}\), and hence we can replace \(q_M^{w(i)}\) with \(q_M^{w}\) for each term in the sum. This gives an extra locality-dependent factor in the bound on the partial derivative:

$$|\partial_{lm}\widetilde{C}|\,\leqslant\,q_M^{w}\,F(n).$$
(40)

An analogous reasoning leads to the following result for the concentration of the cost function:

$$\left|\widetilde{C}-\frac{1}{2^n}\mathrm{Tr}[O]\right|\,\leqslant\,q_M^{w}\,G(n).$$
(41)

Details of numerical implementations

The noise model employed in our numerical simulations was obtained by performing one- and two-qubit gate-set tomography103,104 on the five-qubit IBM Q Ourense superconducting qubit device. The process matrices for each gate native to the device’s alphabet, and the state preparation and measurement noise, are described in ref. 96, Appendix B. In addition, the optimization for the MaxCut problems was performed using an optimizer based on the Nelder-Mead simplex method.
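
For reference, a minimal Nelder-Mead training loop of the kind described above might look as follows (the cost function here is a toy stand-in for the shot-based QAOA cost estimate, not the actual simulation code; all values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def noisy_cost(params, shots=1000):
    """Toy stand-in for a shot-limited cost evaluation (not the actual simulator)."""
    exact = np.sum(np.cos(params))            # placeholder landscape
    return exact + rng.normal(0.0, 1.0 / np.sqrt(shots))

x0 = rng.uniform(0, 2 * np.pi, size=8)        # 2p parameters, e.g. p = 4 rounds
res = minimize(noisy_cost, x0, method="Nelder-Mead",
               options={"maxiter": 500, "xatol": 1e-2, "fatol": 1e-2})
print(res.fun, res.x)
```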