Analyzing variational quantum landscapes with information content

The parameters of the quantum circuit in a variational quantum algorithm induce a landscape that contains the relevant information regarding its optimization hardness. In this work we investigate such landscapes through the lens of information content, a measure of the variability between points in parameter space. Our main contribution connects the information content to the average norm of the gradient, for which we provide robust analytical bounds on its estimators. This result holds for any (classical or quantum) variational landscape. We validate the analytical understanding by numerically studying the scaling of the gradient in an instance of the barren plateau problem, for which we are able to estimate the scaling pre-factors of the gradient. Our work provides a new way to analyze variational quantum algorithms in a data-driven fashion, well suited for near-term quantum computers.


I. INTRODUCTION
Variational quantum algorithms (VQAs) have been marked as a promising path towards quantum advantage on pre-fault-tolerant quantum hardware. In nearly a decade of research since their original proposal [1], the field of VQAs has seen significant progress both theoretically and experimentally [2,3]. It is yet to be seen whether noisy intermediate-scale quantum (NISQ) [4] devices are able to reach unambiguous quantum advantage through VQAs. Issues such as vanishing gradients or barren plateaus (BP) [5][6][7][8], the expressivity of the quantum circuits [9][10][11], or difficulties optimizing a noisy cost function [12] are only a few examples of the hurdles faced by VQAs which reduce the hope of quantum advantage in the near term.
From a computer science point of view, VQAs are a fascinating object of study. They can be considered classical cost functions with classical input/output, yet the cost function might not be classically accessible in general. So far, there is no clear evidence that optimizing a VQA is feasible with standard optimization methods [12]. Some researchers have attempted to close this gap by developing new optimizers tailored to quantum circuits [13,14] or by using machine learning techniques to assist during the optimization [15,16], with inconclusive results. More recently, the authors of [17] introduced a method to visualize variational quantum landscapes through dimensionality reduction. Yet landscape analysis tools from classical optimization have not been widely used to characterize quantum landscapes.
Landscape analysis aims at characterizing the landscape of cost functions by efficiently sampling the parameter space to understand the "hardness" of the optimization problem [18][19][20][21][22][23]. For a VQA this implies only classical post-processing of data from a quantum device.
In contrast, the optimization step of a VQA involves constant interaction between quantum and classical resources.In NISQ hardware such interactions might come with a large overhead.To the best of our knowledge, no prior work on data-driven landscape analysis exists in the context of VQA.
In this work, we aim to close the gap between VQAs and landscape analysis through the information content (IC) [24] of the quantum landscape. We demonstrate the connection between IC and the average norm of the gradient of the landscape. We derive robust lower and upper bounds on this quantity, which provide a crucial understanding of the landscape (e.g. the complexity of optimizing the cost function). We apply our results to numerically study the BP problem for the local and global cost functions of ref. [6], showing excellent agreement with the theoretical asymptotic scaling of the size of the gradient. We also demonstrate how to calculate pre-factors of the asymptotic scaling, which are in practice more relevant for implementing algorithms. As far as we know, this is the first work where scaling pre-factors are calculated in the context of VQAs and BPs.
The manuscript is organized as follows. In Section II we give background on VQAs and IC. We connect the average norm of the gradient with IC in Section III, followed by a numerical diagnosis of BPs using IC in Section IV and the corresponding numerical experiments in Section V. Section VI addresses the estimation of pre-factors in the scaling of BPs. In Section VII we discuss the implications of our results and point out future directions.

II. BACKGROUND

A. Parameterized Quantum Circuits
In a variational quantum algorithm one aims at exploring the space of quantum states by minimizing a cost function with respect to a set of tunable real-valued parameters ⃗θ ∈ [0, 2π)^m of a parametrized quantum circuit (PQC). A PQC evolves an initial quantum state |ψ_0⟩ to generate a parametrized state

|ψ(⃗θ)⟩ = U(⃗θ) |ψ_0⟩,

where U(⃗θ) is a unitary matrix of the form

U(⃗θ) = ∏_{i=1}^{m} W_i e^{−i θ_i V_i}.

Here the W_i are fixed unitary operations and the V_i are Hermitian matrices. In a VQA these parameters are driven by (classically) minimizing a cost function C(⃗θ), built as the expectation value of a quantum observable Ô,

C(⃗θ) = ⟨ψ(⃗θ)| Ô |ψ(⃗θ)⟩.

A successful optimization reaches an approximation to the lowest eigenvalue of Ô, and the optimal parameters represent an approximation to its ground state [2,3].
Our object of study is the manifold defined by a PQC and Ô which we call a variational quantum landscape.
In order to measure Equation (4) on a real device, one must prepare multiple copies of U(⃗θ) |ψ_0⟩ and measure the observable. Therefore, the real cost function becomes

C̃(⃗θ) = C(⃗θ) + κ(R),

where κ(R) introduces the uncertainty of sampling the observable with R repetitions. For a sufficiently large number of repetitions, κ(R) can be drawn from a Gaussian distribution with variance σ^2 ∼ 1/R [12]. Throughout the rest of the text, we assume access to the exact value of the cost function, C(⃗θ), unless otherwise stated.
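The shot-noise model above can be illustrated with a short numerical sketch. This is a toy simulation under our own assumptions (the exact cost value 0.5 and the sample sizes are arbitrary); it only shows that the spread of the estimator shrinks as 1/√R.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_cost(exact_value, R, rng):
    """Model the measured cost C~(theta) = C(theta) + kappa(R), with kappa
    drawn from a Gaussian whose variance scales as 1/R (valid for large R)."""
    return exact_value + rng.normal(scale=1.0 / np.sqrt(R))

# The spread of the estimator shrinks as 1/sqrt(R).
for R in (100, 10_000):
    samples = [noisy_cost(0.5, R, rng) for _ in range(5_000)]
    print(R, np.std(samples))   # std close to 1/sqrt(R)
```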

B. Exploratory Landscape Analysis
The goal of exploratory landscape analysis (ELA) [25][26][27] is to characterize a real-valued function by numerically estimating a set of features from its landscape. Examples of such features are multi-modality (the number of local minima), ruggedness, or the curvature of the level sets of a given landscape (see [27] for a comprehensive description of ELA features). ELA becomes particularly useful when studying functions with unknown analytical form, or exceedingly difficult for mathematical analysis, where one can gain insight by comparing their features to those of known analytical functions [20,21,28]. Similarly, the classical machine learning/optimization community has made use of ELA features to select the most suitable algorithms to optimize unknown cost functions [22,29,30]. A key aspect of landscape analysis is that the number of cost-function evaluations is significantly smaller than that required for optimization; otherwise it would be more efficient to optimize directly.
Among the ELA features, the information content (IC) [24,31] is of particular interest as it characterizes the ruggedness of the landscape, which is related to its trainability. For example, high information content indicates a trainable landscape, whereas low information content indicates a flat landscape [24,27]. We demonstrate that for any classical or quantum variational landscape, trainability can be quantified with the information content.

C. Information content
Definition 1 (Information Content (IC)) Given a finite symbolic sequence ϕ ∈ {−, ⊙, +}^S of length S, let p_ab, a ≠ b ∈ {−, ⊙, +}, denote the probability that the pair ab occurs among the consecutive pairs of ϕ. The information content is defined as

H(ϕ) = − Σ_{a≠b} p_ab log_6 p_ab.

In this definition pairs of the same symbols are excluded, leaving only six combinations. The base-6 logarithm is necessary to ensure H ≤ 1.
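As a concrete illustration, Definition 1 can be computed in a few lines of Python. This is a minimal sketch; normalizing the pair counts over all consecutive pairs (including equal-symbol pairs, which contribute nothing to the sum) is our assumption about the convention of [24].

```python
import math
from collections import Counter

def information_content(phi):
    """Empirical IC of a symbol sequence phi over {'-', 'o', '+'}.
    Pairs of equal symbols are excluded from the sum; the base-6 log
    normalizes H to [0, 1], since six mixed pairs exist."""
    pairs = list(zip(phi, phi[1:]))
    counts = Counter(p for p in pairs if p[0] != p[1])
    total = len(pairs)
    return -sum((c / total) * math.log(c / total, 6) for c in counts.values())

# A sequence cycling through all six mixed pairs has near-maximal IC,
# while a constant sequence has zero IC.
print(information_content("+-o+o-" * 100))   # close to 1
print(information_content("ooooooo"))        # 0
```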
To compute the IC, we use the algorithm given in ref. [24]:
(1) Sample M(m) ∈ O(m) points of the parameter space, Θ = {⃗θ_1, ..., ⃗θ_M} ⊂ [0, 2π)^m.
(2) Measure C(⃗θ_i) on a quantum computer (this is the only step where it is needed).
(3) Generate a random walk W of S + 1 < M(m) steps over Θ, and compute the finite-size approximation of the gradient at each step i,

∆C_i = [C(⃗θ_{i+1}) − C(⃗θ_i)] / ∥⃗θ_{i+1} − ⃗θ_i∥.

(4) Create a sequence ϕ(ϵ) by mapping each ∆C_i onto a symbol in {−, ⊙, +} with the rule: + if ∆C_i > ϵ, ⊙ if |∆C_i| ≤ ϵ, and − if ∆C_i < −ϵ.
(5) Compute the empirical IC (denoted as H(ϵ) henceforth) by applying Definition 1 to ϕ(ϵ).
(6) Repeat these steps for several values of ϵ.
From this algorithm, we are interested in the regimes of high and low H(ϵ) [24,31]. The maximum IC (MIC) is defined as H_M = max_ϵ H(ϵ), with ϵ_M its corresponding ϵ. The MIC occurs when the variability among the symbols of ϕ is maximal. The other case of interest occurs when H(ϵ) ≤ η with η ≪ 1. This defines the sensitivity IC (SIC) [24,31], with ϵ = ϵ_S. The SIC identifies the ϵ at which (almost) all symbols in ϕ are ⊙; all symbols become exactly ⊙ at η = 0. Both MIC and SIC are calculated on a classical computer after collecting the pairs (⃗θ, C(⃗θ)). The values ϵ_{M,S} can be found by sweeping over multiple values of ϵ with logarithmic scaling, and computing H(ϵ) with Equation (6) at each of those values. The cost is dominated by computing all quantities ∆C needed to construct ϕ(ϵ) using Equation (10). Since only O(m) samples are available, at most O(m^2) differences of the form ∆C can be computed. Thus, MIC and SIC can be obtained with at most O(m^2) classical operations.
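The six steps above can be sketched end-to-end on a toy landscape. Everything here is an illustrative assumption (the cosine stand-in cost, the walk length, the step size d, the ϵ grid, and η = 10^-2); a real VQA would obtain C(⃗θ) from a quantum device in step (2).

```python
import math
from collections import Counter

import numpy as np

rng = np.random.default_rng(1)
m = 8                        # number of parameters
M = 10 * m                   # O(m) sampled points (illustrative constant)

def cost(theta):
    # Stand-in classical landscape; a real VQA would query hardware here.
    return float(np.sum(np.cos(theta)))

# (1)-(3): isotropic random walk over parameter space and finite differences.
d = 0.1
deltas = rng.normal(size=(M, m))
deltas /= np.linalg.norm(deltas, axis=1, keepdims=True)
walk = np.cumsum(d * deltas, axis=0) % (2 * np.pi)
dC = np.diff([cost(t) for t in walk]) / d

def H(eps):
    # (4)-(5): map slopes to symbols {-, o, +} and compute the empirical IC.
    sym = np.where(dC > eps, "+", np.where(dC < -eps, "-", "o"))
    pairs = Counter((a, b) for a, b in zip(sym, sym[1:]) if a != b)
    n = len(sym) - 1
    return -sum((c / n) * math.log(c / n, 6) for c in pairs.values())

# (6): sweep eps on a logarithmic grid; locate the MIC and the SIC.
eps_grid = np.logspace(-4, 1, 60)
Hs = np.array([H(e) for e in eps_grid])
eps_M = eps_grid[np.argmax(Hs)]                        # maximum IC
eta = 1e-2
eps_S = eps_grid[(eps_grid > eps_M) & (Hs <= eta)][0]  # sensitivity IC
print(eps_M, eps_S)
```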

III. CONNECTION BETWEEN INFORMATION CONTENT AND NORM OF THE GRADIENT
This section shows the relation between IC and the average norm of the gradient, from now on denoted as ∥∇C∥. We take advantage of the fact that each step in W is isotropically random. This allows us to derive the underlying probability distribution of ∆C_i. Additionally, we use IC to bound the probability of pairs of symbols appearing along W, which allows us to estimate ∥∇C∥. Although we demonstrate our results for a variational quantum landscape, they extend to any optimization landscape.

A. Estimation of the norm of the gradient
The random walks W over Θ satisfy

⃗θ_{i+1} = ⃗θ_i + d ⃗δ_i,

where ⃗δ_i is drawn from the isotropic distribution and d is fixed before starting the walk but might be varied. By Taylor expanding Equation (9) and applying the mean-value theorem, the finite-size gradient can be written as

∆C_i = ∇C(⃗θ_i + t d ⃗δ_i) • ⃗δ_i, with t ∈ (0, 1).

Since the sampled points Θ are chosen randomly, we can assume that ∆C and ∇C(⃗θ) • ⃗δ are drawn from the same probability distribution, given a sufficiently large Θ.
The isotropic condition of W allows us to calculate the probability distribution of ∇C(⃗θ) • ⃗δ:

Lemma 1 Let ⃗δ be drawn from the isotropic distribution on the unit sphere in m dimensions. Then (∇C(⃗θ) • ⃗δ)^2 / ∥∇C(⃗θ)∥^2 is a random variable with a beta probability distribution (B) [32] such that

(∇C(⃗θ) • ⃗δ)^2 / ∥∇C(⃗θ)∥^2 ∼ B(1/2, (m − 1)/2).

The proof can be found in Appendix A 1.
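Lemma 1 can be checked numerically by Monte Carlo. This is a sketch under our own assumptions (an arbitrary fixed gradient vector and illustrative sample sizes); it verifies the mean of B(1/2, (m − 1)/2), which is 1/m.

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 20, 200_000

g = rng.normal(size=m)                       # an arbitrary fixed gradient vector
delta = rng.normal(size=(N, m))
delta /= np.linalg.norm(delta, axis=1, keepdims=True)   # isotropic unit steps

# Squared normalized projection; by Lemma 1 this follows B(1/2, (m-1)/2),
# whose mean is (1/2) / (m/2) = 1/m.
x = (delta @ g) ** 2 / np.dot(g, g)
print(x.mean())   # close to 1/m
```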
We can use Lemma 1 to bound the probability that |∇C(⃗θ) • ⃗δ| falls below a threshold ϵ in terms of I(x; α, β), the regularized incomplete beta function with parameters α and β.
The proof of this theorem can be found in Appendix A 2.
It is known that the beta distribution (with the parametrization in Lemma 1) rapidly converges to a normal distribution [33]; this approximation is accurate even for reasonably small values of m. We can naturally interpret the factor √m as a dimensionality normalization in Lemma 1. In the Gaussian limit, Lemma 1 implies

∇C(⃗θ) • ⃗δ ∼ N(0, ∥∇C∥²_W / m),

where E_W denotes the expectation taken over the points in the random walk W, and ∥∇C∥²_W = E_W[∥∇C(⃗θ)∥²]. As an immediate consequence, we give the CDF of ∇C(⃗θ) • ⃗δ averaged over a random walk W:

P(∇C(⃗θ) • ⃗δ ≤ x) = Φ_G(x √m / ∥∇C∥_W),

where Φ_G(•) is the CDF of the standard normal distribution.
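The quality of this Gaussian limit can also be checked numerically. The sketch below (fixed gradient and sample sizes are illustrative assumptions) verifies that the √m-normalized projection has unit standard deviation and standard-normal-like tail mass.

```python
import numpy as np

rng = np.random.default_rng(7)
m, N = 100, 100_000

g = rng.normal(size=m)                                  # fixed gradient direction
delta = rng.normal(size=(N, m))
delta /= np.linalg.norm(delta, axis=1, keepdims=True)   # isotropic unit steps

# sqrt(m)-normalized projection; approximately standard normal for moderate m.
z = np.sqrt(m) * (delta @ g) / np.linalg.norm(g)

print(z.std())                      # close to 1
print(np.mean(np.abs(z) <= 1.0))    # close to 0.68, as for a standard normal
```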

B. Probability concentration for information content
Our next goal is to bound the probability of pairs of symbols appearing in ϕ in the high and low IC regimes.

High information content
If one interprets IC as a partial entropy of the landscape, high H necessarily implies approximately equal probabilities p_ab. Therefore, a minimal concentration of probabilities must exist such that a high value of H can be reached. One can formalize this statement:

Lemma 2 Let H > 2h(1/2) be the IC of a given sequence ϕ, and consider the probabilities in Equation (8). Then the sum of any four of them (p_4) is bounded in terms of q, with q the solution of H = 4h(x) + 2h(1/2 − 2x).
The proof can be found in Appendix A 3. The bound on p_4 in the above lemma gets tighter as q increases, and so does H, which by definition satisfies H ≤ 1. The tightest possible bound is achieved for the MIC defined in Equation (11).

Low information content
For a low IC to occur, all p ab must be small, and their values can be upper-bounded.
Lemma 3 Let H be the IC of a given sequence ϕ with bound H ≤ η ≤ 1/6. Then the probability of consecutive ⊙ steps during a random walk is close to 1, and the expected norm of the gradient is bounded from above accordingly. The proof can be found in Appendix A 4. As in the previous case, a tight bound is attained with the SIC defined in Equation (12). High and low IC provide insight into the hardness of optimizing the cost function that generated the landscape. As shown in this section, low IC implies a flat landscape, which clearly makes optimization hard. In contrast, high IC is a necessary but not sufficient condition for optimization: it guarantees that the landscape is, at the very least, easy to optimize locally. However, IC does not provide any information on the quality of the accessible minimum (e.g. global or local) or the multi-modality (number of local minima) of the landscape. The reason is the random walk over Θ, which allows for high IC in both convex and multimodal landscapes.

C. Information content to estimate the norm of the gradient
We are now ready to show the main result of this work. We use the results of Section III A and Section III B to prove that H(ϵ) estimates the average norm of the gradient ∥∇C∥ for any classical or quantum landscape. To the best of our knowledge, this is the first time, including in the field of classical optimization, that such bounds have been calculated.
First, we relate H_M to the norm of the gradient. High values of IC guarantee a minimal probability for individual steps to increase and decrease, and thus bound the compatible values of ϵ/∥∇C∥_W.
Theorem 2 (H_M bounds ∥∇C∥_W) Let H_M be the empirical MIC of a given function C(⃗θ), ϵ_M its corresponding ϵ, and q the solution of the corresponding entropy equation of Lemma 2. Then ϵ_M provides lower and upper bounds on ∥∇C∥_W. The proof is given in Appendix A 5.
The second result connects SIC to the bounds on the norm of the gradient. Small values of IC imply a large probability of consecutive ⊙ steps in ϕ, or equivalently small probabilities p_ab. When this occurs, ϵ_S can be used to upper-bound ∥∇C∥_W.
Theorem 3 (H_S upper bounds ∥∇C∥_W) Let H_S be the empirical SIC of a given function C(⃗θ), and ϵ_S its corresponding ϵ. Then ϵ_S provides an upper bound on ∥∇C∥_W. The proof is given in Appendix A 6.
From Equation (22), it is clear that ∥∇C∥_W approximates ∥∇C∥ well for a large M. Hence, we can use Theorem 2 and Theorem 3 to bound ∥∇C∥ with a long sequence ϕ.
Theorem 2 and Theorem 3 provide confidence intervals for ∥∇C∥ without prior knowledge of the variational landscape. The former gives both an upper and a lower bound on ∥∇C∥ but can only be applied when H exceeds a minimum value, while the latter is always applicable with an arbitrarily small η for an upper bound.
Finally, we discuss the tightness of the bounds in Theorem 2 and Theorem 3. Firstly, the bound in Corollary 1 might become loose if the approximation from the beta to a Gaussian distribution is not accurate. However, the error between these distributions is negligible already at m ∼ 10, and as m increases the approximation improves, tightening the bound. Another source of looseness comes from the entropy arguments of IC (see Lemma 2 and Lemma 3). The bound saturates at H_M = 1 but becomes looser as H_M decreases; when H ≤ 2h(1/2), the bounds are no longer trustworthy. Nonetheless, for common values of H_M ∼ 0.8, the bounds are only loose by small factors. Similarly for H_S, the bound tightens as η decreases but becomes uncontrolled for η > 1/6.

D. Information content under sampling noise
In the presence of sampling noise, symbols in ϕ(ϵ) might be flipped due to the uncertainty in the cost function. The number of flips within the sequence depends on the number of repetitions R. Requiring that symbols in the sequence are not flipped with high probability induces a lower bound ϵ ≥ 1/(d√R). In this scenario, the bound in Corollary 1 and all subsequent results transform accordingly, becoming slightly looser due to the sampling-noise dependence. Sampling noise poses the same limitation to both IC and optimization: if ∥∇C∥ decays exponentially, then exponentially many repetitions are required to resolve the gradient. Yet, IC is capable of signaling non-resolvable (flat) landscapes with fewer evaluations than a full optimization.
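The flipping mechanism can be illustrated with a short simulation. This is a sketch under our own assumptions (a perfectly flat region, Gaussian shot noise of std 1/√R per evaluation, and illustrative multiples of the threshold 1/(d√R)); it shows that ϵ well above 1/(d√R) suppresses spurious ± symbols.

```python
import numpy as np

rng = np.random.default_rng(3)
R, d, N = 10_000, 0.1, 50_000

# On a flat region the true slope is 0, so every symbol should be 'o'.
# Each cost evaluation carries Gaussian sampling noise of std ~ 1/sqrt(R),
# so the finite difference (C2 - C1)/d has std sqrt(2)/(d*sqrt(R)).
noise = (rng.normal(scale=1 / np.sqrt(R), size=N)
         - rng.normal(scale=1 / np.sqrt(R), size=N)) / d

for k in (0.5, 4.0):                   # illustrative multiples of 1/(d*sqrt(R))
    eps = k / (d * np.sqrt(R))
    flipped = np.mean(np.abs(noise) > eps)   # fraction of spurious +/- symbols
    print(k, flipped)
```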

IV. INFORMATION CONTENT TO DIAGNOSE BARREN PLATEAUS
Our goal is now to apply the previous results to study the problem of barren plateaus (BP) [5,6]. We choose this problem because analytical results exist for the scaling of ∥∇C∥, which allows us to directly verify that H(ϵ) can be used as a proxy for ∥∇C∥.
BPs are characterized by exponentially vanishing variances of the partial derivatives [5], i.e. Var[∂_k C(⃗θ)] ∈ O(b^{−n}) for some b > 1, where n is the number of qubits. Equivalently, a BP can be understood as a flat optimization landscape. These two concepts are connected by Chebyshev's inequality [35].
The IC allows us to estimate ∥∇C∥²_W, which relates to the average of Var_W(∂_k C(⃗θ)) over the m parameters, where Var_W computes the variance over the points of the random walk W. Hence, IC is a proxy for the average variance of each partial derivative in parameter space.

V. NUMERICAL EXPERIMENTS
To showcase IC as a tool to analyze the landscape of a VQA, we perform a numerical study of the BP problem as described by Cerezo et al. in [6]. There, the authors analytically derive the scaling of Var(∂C(⃗θ)) in two different scenarios. Such scaling depends both on the qubit count and the circuit depth of the PQC. If the cost function is computed from global observables (i.e. with non-trivial support on all qubits), BPs exist irrespective of the depth of the PQC. In the case of local observables (i.e. with non-trivial support on a few qubits), one can train shallow PQCs, but BPs gradually appear as the circuit depth increases. These results hold for alternating layered ansätze composed of blocks of 2-local operations (Fig. 4 in [6]).
In our numerical experiments, we use circuits from 2 to 14 qubits, each of them with 4 to 16 layers. We calculate the cost function for the local and global observables without sampling noise. Further details of the numerical experiments are given in Appendix B.
The results of the BP problem using IC are shown in Figure 1. In all plots we compute the bounds on ∥∇C∥ from Theorem 2 (solid lines) and Theorem 3 (dashed lines), for the local (blue) and global (orange) cost functions. Additionally, we show the values of ϵ_M • √m (dots) and ϵ_S • √m (crosses). The first trend we observe is that ∥∇C∥ shows two different scalings with respect to qubits (left panels) and layers (right panels). The scaling with qubits shows an O(poly^{-1}(n)) decay for the local cost function and a remarkable O(exp(−n)) decay for the global cost function. We emphasize that these results are in perfect agreement with the predictions in [6]. On the other hand, the scaling with layers strongly depends on the number of qubits. For 4 qubits, Ô_Global has a constant value ∥∇C∥ ≈ 10^{-2}, while Ô_Local shows a small decay with a similar average. For 10 and 14 qubits, we recover the predicted O(poly^{-1}(n)) decay for Ô_Local. In contrast, Ô_Global has ∥∇C∥ ≈ 10^{-4}-10^{-6}, which is already close to float precision. Finally, we observe that ϵ_M • √m is close to the lower bound, thus making it a robust proxy for ∥∇C∥.
Surprisingly, both Figure 1 and Figure 2 show an increase in ϵ_M • √m (or, equivalently, ∥∇C∥) at a fixed number of qubits as the number of layers grows for the global cost function. We have not been able to find an explanation for this behavior, either analytically or in the literature. However, this is an example of how data-driven methods might provide useful insight for deeper understanding.

VI. ESTIMATION OF SCALING PRE-FACTOR
Thus far the numerical results have confirmed the asymptotic theoretical predictions for the considered BP problem. Our methodology can be used beyond the asymptotic scaling to compute actual pre-factors by fitting ϵ_M • √m to its predicted functional form, including bounds on them from Equation (25). Obtaining such pre-factors analytically is challenging, yet they are relevant when studying the complexity of an algorithm in practice. In this section, we obtain the scaling pre-factors of the global cost function (in the number of qubits) and of the local cost function (in the number of qubits and layers); see Appendix B for additional details of these fittings.
First, we study the scaling of the global cost function with qubits for each number of layers in our data. We fit a linear model, log_2(ϵ_M • √m) = α n + β. The results of the fit are shown in Table I. For each of the coefficients, we show the fitting values of the lower bound (LB column) and of ϵ_M • √m (right column). As the number of layers increases, α → −1.0, which is consistent with an exponential decay of the form 2^{−n} as predicted in [6]. The asymptotic scaling is not sensitive to the constant factor β, which is given by the right column in Table I. Based on the trend in this column, we speculate that the constant factor approaches β → −2.0.
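The fitting procedure amounts to ordinary least squares on log-transformed data. The sketch below uses synthetic data in place of the paper's measurements (the decay 2^(−n−2), the noise level, and the qubit range are our own illustrative assumptions, chosen to match the values the fits in the text approach) to show how the slope α and intercept β are recovered.

```python
import numpy as np

rng = np.random.default_rng(4)
qubits = np.arange(4, 15, 2)

# Synthetic stand-in for eps_M * sqrt(m): a decay 2^(alpha*n + beta) with
# alpha = -1 and beta = -2 (illustrative), plus mild multiplicative noise.
y = 2.0 ** (-1.0 * qubits - 2.0) * (1 + 0.02 * rng.normal(size=qubits.size))

# Linear fit in log2 space recovers the exponential pre-factors.
alpha, beta = np.polyfit(qubits, np.log2(y), deg=1)
print(alpha, beta)   # alpha close to -1, beta close to -2
```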
In the case of Ô_Local, there are fewer known asymptotic predictions for the gradient norm. In [6] it is shown that three regimes exist, trainable, BP, and a transition area, depending on the depth with respect to the system size. Due to the small number of qubits and layers in our numerical study, we assume to be in the trainable regime, where theory predicts a ∥∇C∥ scaling in O(poly^{-1}(n)). We use a second-order polynomial model in n. The results are given in Table II. The first observation is the small value of the quadratic coefficient for all layers. This might suggest that a linear function would be better suited.
To discard this possibility we perform a linear fit (see Figure 4 in Appendix B), leading to comparable values of the slope and intercept, with slightly better fitting statistics for the second-degree polynomial. The β coefficient shows a 10-fold increase as the number of layers grows. In contrast, γ becomes increasingly negative with the number of layers. Note that we can extract the degree of the polynomial, which is impossible from the theory alone.
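The model-selection step, comparing a linear against a quadratic hypothesis, can be sketched as follows. The data below are synthetic stand-ins under our own assumptions (a weak quadratic term with illustrative coefficients, mimicking the small curvature reported for the local cost function); the comparison criterion is the residual sum of squares of each fit.

```python
import numpy as np

rng = np.random.default_rng(5)
n = np.arange(2, 15)

# Synthetic stand-in with a weak quadratic term (coefficients illustrative).
y = -0.02 * n**2 - 0.4 * n - 1.0 + 0.05 * rng.normal(size=n.size)

for deg in (1, 2):
    coeffs = np.polyfit(n, y, deg)
    rss = np.sum((y - np.polyval(coeffs, n)) ** 2)
    print(deg, coeffs, rss)
# Nested least-squares fits guarantee rss(deg=2) <= rss(deg=1); how much the
# residual drops indicates whether the extra curvature term is warranted.
```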
Lastly, we estimate the scaling coefficients of the local cost function with respect to layers, where no theoretical scaling is known [6]. We choose a second-order polynomial as the hypothesized functional form to fit the data. A linear model clearly under-fits the data (see Figure 5), thus confirming the intuition of a higher-order polynomial scaling. The results are presented in Table III. The quadratic coefficient α increases as the number of qubits grows for n ≥ 8, and so does the constant term γ. In contrast, the linear term β remains roughly constant across all system sizes studied. An opposite trend in the coefficients seems to occur between n = 4 and n = 6: α and γ decrease, while β increases. We have not been able to match this change in tendency to a change in scaling, leaving a finite-size effect in the fitting as the most plausible explanation.
The results presented in this section demonstrate that data-driven approaches can provide useful insight to complement analytical methods and can be leveraged to gain a deeper understanding of a problem.

VII. CONCLUSIONS
Variational quantum algorithms have been intensively studied as a suitable application for near-term quantum hardware. From a computer science perspective, they are simply an optimization problem with classical input/output, yet their cost function is a quantum object. The parameters of any optimization problem induce a landscape that contains information about its "hardness". Landscape analysis is central to classical optimization but has been somewhat ignored by the VQA community.
In this work, we investigate the information content features of a variational quantum landscape, which can be calculated efficiently by sampling the parameter space.
We prove that for any (classical or quantum) cost function the average norm of its gradient can be rigorously bounded with the information content. We validate our theoretical understanding with a numerical experiment, confirming the predicted asymptotic scaling of the gradient in the barren plateau problem. Finally, we apply our results to predict scaling pre-factors of the gradient in a data-driven fashion. To our knowledge, this is the first time that such pre-factors have been calculated for a VQA.
The study of the optimization landscapes of VQAs opens a new avenue to explore their capabilities within the NISQ era. First, landscape analysis does not require constant interaction between quantum and classical hardware. Secondly, only linearly many (in the number of parameters) queries to a quantum computer suffice to extract the information content, instead of the polynomially many queries required for a standard optimization routine. Finally, information content might be used as a simple and comparable metric between ansätze.
We envision future research directions for information content such as studying the feasibility of VQA optimization, estimating the number of shots needed to resolve a gradient, or warm-starting the algorithm from regions of interest in parameter space. Importantly, landscape analysis does not rely on any constraint on the quantum circuit, so it can be used even when analytical approaches are unavailable. Therefore we anticipate that landscape analysis and information content might have a broad range of applications beyond VQAs in the NISQ era.

In Figure 2 we show a heatmap of the values of ϵ_M • √m when increasing the number of qubits (x-axis) and the layers (y-axis) for both the local (left) and global (right) cost functions. The results for Ô_Local (left panel) show a rich variety of features: for 2 to 6 layers ϵ_M • √m shows a very mild decay, but for more than 8 layers the decay sharpens. This is exactly as expected for local cost functions: BPs appear gradually as the circuit depth increases. We speculate that the color change at the top right corner of the left panel in Figure 2 corresponds to a transition regime. With regard to the global cost function (panel b), the expected exponential decay (in the number of qubits) is observed.

FIG. 1 :
FIG. 1: Scaling of the average gradient ∥∇C∥. Panels (a,b,c) show the scaling with respect to qubits, and panels (d,f,g) the scaling with respect to layers. The bounds from Equation (25) (solid lines) and Equation (26) (dashed lines) are shown; ∥∇C∥ can take values within the shaded areas between these lines. The markers refer to the values of ϵ_M • √m (dots) and ϵ_S • √m (crosses). They are calculated from the median of five independent runs, with their standard deviation as the error bars. The colors represent the results for the local (blue) and global (orange) cost functions.

FIG. 2 :
FIG. 2: Heatmap of ϵ_M • √m. Panel (a) shows the values for the local cost function, and panel (b) for the global cost function. The number of qubits is depicted on the x-axis, while the number of layers is shown on the y-axis.

FIG. 3 :
FIG. 3: Linear fit of the log 2 (ϵ M • √ m) with respect to qubits for the global cost function.

TABLE I :
Estimated qubit scaling pre-factors of the global cost function by linear fitting.α is the slope and β the intercept of the linear model.Each coefficient has two columns showing the results of fitting ϵ M • √ m and its lower bound (LB).

TABLE II :
Estimated qubit scaling pre-factors of the local cost function by fitting a second-degree polynomial.Each coefficient has two columns showing the results of fitting ϵ M • √ m and its lower bound (LB).

TABLE III :
Estimated layer scaling pre-factors of the local cost function by fitting a second-degree polynomial.Each coefficient has two columns showing the results of fitting ϵ M • √ m and its lower bound (LB).

TABLE V :
Extended results on estimating the coefficients for the local cost function scaling with qubits. The additional columns show the upper bound from ϵ_M (UB) and the upper bound from ϵ_S (UBs).

TABLE VI :
Extended results on estimating the coefficients for the local cost function scaling with layers.The additional columns show the upper bound from ϵ M (UB) and upper bound from ϵ S (UBs).