Calculating energy derivatives for quantum chemistry on a quantum computer

Modeling chemical reactions and complicated molecular systems has been proposed as the "killer application" of a future quantum computer. Accurate calculations of derivatives of molecular eigenenergies are essential toward this end, allowing for geometry optimization, transition state searches, predictions of the response to an applied electric or magnetic field, and molecular dynamics simulations. In this work, we survey methods to calculate energy derivatives, and present two new methods: one based on quantum phase estimation, the other on a low-order response approximation. We calculate asymptotic error bounds and approximate computational scalings for the methods presented. Implementing these methods, we perform geometry optimization on an experimental quantum processor, estimating the equilibrium bond length of the dihydrogen molecule to within 0.014 Å of the full configuration interaction value. Within the same experiment, we estimate the polarizability of the H$_2$ molecule, finding agreement at the equilibrium bond length to within 0.06 a.u. (2% relative error).


I. INTRODUCTION
Quantum computers are at the verge of providing solutions for certain classes of problems that are intractable on a classical computer [1]. As this threshold nears, an important next step is to investigate how these new possibilities can be translated into useful algorithms for specific scientific domains. Quantum chemistry has been identified as a key area where quantum computers can stop being science and start doing science [2][3][4][5]. This observation has led to an intense scientific effort towards developing and improving quantum algorithms for simulating time evolution [6,7] and calculating ground state energies [8][9][10][11] of molecular systems. Small prototypes of these algorithms have been implemented experimentally with much success [10,[12][13][14][15]. However, advances over the last century in classical computational chemistry methods, such as density functional theory (DFT) [16], coupled cluster (CC) theory [17], and quantum Monte-Carlo methods [18], set a high bar for quantum computers to make an impact in the field.
The ground and/or excited state energy is only one of the targets for quantum chemistry calculations. For many applications one also needs to be able to calculate the derivatives of the molecular electronic energy with respect to a change in the Hamiltonian [19,20]. For example, the energy gradient (or first-order derivative) with respect to nuclear displacements is used to search for minima, transition states, and reaction paths [21] that characterize a molecular potential energy surface (PES). Gradients also form the basis for molecular dynamics (MD) simulations to dynamically explore the phase space of the system in its electronic ground state [22] or, after a photochemical transition, in its electronically excited state [23]. While classical MD usually relies on force-fields which are parameterized on experimental data, there is a growing need to obtain these parameters on the basis of accurate quantum chemical calculations. One can easily foresee a powerful combination of highly accurate forces generated on a quantum computer with machine learning algorithms for the generation of reliable and broadly applicable force-fields [24]. This route might be particularly important in exploring excited state PESs and nonadiabatic coupling terms, which are relevant in describing light-induced chemical reactions [25][26][27]. Apart from these perturbations arising from changing the nuclear positions, it is also of interest to consider the effect that small external electric and/or magnetic fields have on the molecular energy. These determine well-known molecular properties, such as the (hyper)polarizability, magnetizability, A- and g-tensors, and nuclear magnetic shieldings, among others.
Although quantum algorithms have been suggested to calculate derivatives of a function represented on a quantum register [28][29][30][31][32], or derivatives of a variational quantum eigensolver (VQE) for optimization purposes [33,34], the extraction of molecular properties from quantum simulation has received relatively little focus. To the best of our knowledge, only three investigations (into geometry optimization and molecular energy derivatives [35], molecular vibrations [36], and the linear response function [37]) have been performed to date.
In this work, we survey methods for the calculation of molecular energy derivatives on a quantum computer.

arXiv:1905.03742v3 [quant-ph] 17 Jul 2019
We calculate estimation errors and asymptotic convergence rates of these methods, and detail the classical pre- and post-processing required to convert quantum computing output to the desired quantities. As part of this, we detail two new methods for such derivative calculations. The first involves simultaneous quantum phase and transition amplitude (or propagator) estimation, which we name 'propagator and phase estimation' (PPE). The second is based on truncating the Hilbert space to an approximate (relevant) set of eigenstates, which we name the 'eigenstate truncation approximation' (ETA). We use these methods to perform geometry optimization of the H$_2$ molecule on a superconducting quantum processor, as well as to estimate its response to a small electric field (polarizability), and find excellent agreement with the full configuration interaction (FCI) solution.

II. MAIN
Let $\hat{H}$ be a Hamiltonian on a $2^{N_{\rm sys}}$-dimensional Hilbert space (e.g. the Fock space of an $N_{\rm sys}$-spin-orbital system), which has eigenstates $|\Psi_j\rangle$ ordered by their energies $E_j$. In this definition, the Hamiltonian is parametrized by the specific basis set that is used, and has additional coefficients $\lambda_1, \lambda_2, \ldots$, which reflect fixed external influences on the electronic energy (e.g. a change in the structure of the molecule, or an applied magnetic or electric field). A $d$th-order derivative of the ground state energy with respect to the parameters $\lambda_i$ is then defined as

$$D^{d_1,d_2,\ldots}_{\lambda_1,\lambda_2,\ldots} = \frac{\partial^d E_0(\lambda_1,\lambda_2,\ldots)}{\partial^{d_1}\lambda_1\,\partial^{d_2}\lambda_2\cdots}, \qquad d = \sum_i d_i.$$

As quantum computers promise exponential advantages in calculating the ground state energy $E_0$ itself, it is natural to ask how to efficiently calculate such derivatives on a quantum computer.
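As a concrete illustration of this definition (not from the paper), one can check it numerically for a toy one-qubit Hamiltonian $\hat{H}(\lambda) = Z + \lambda X$, whose exact ground state energy is $E_0(\lambda) = -\sqrt{1+\lambda^2}$:

```python
import numpy as np

# Toy example (illustration only): H(lam) = Z + lam * X on a single qubit.
# The exact ground-state energy is E0(lam) = -sqrt(1 + lam^2), so the
# first derivative is dE0/dlam = -lam / sqrt(1 + lam^2).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ground_energy(lam):
    # eigvalsh returns eigenvalues in ascending order
    return np.linalg.eigvalsh(Z + lam * X)[0]

def derivative_fd(lam, h=1e-5):
    # Central finite difference of the ground-state energy.
    return (ground_energy(lam + h) - ground_energy(lam - h)) / (2 * h)

lam = 0.3
exact = -lam / np.sqrt(1 + lam**2)
assert abs(derivative_fd(lam) - exact) < 1e-8
```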

A. The quantum chemical Hamiltonian
A major subfield of computational chemistry concerns solving the electronic structure problem. Here, the system takes a second-quantized ab initio Hamiltonian, written in a basis of molecular spinors $\{\phi_p(\mathbf{r})\}$ as

$$\hat{H} = h_{\rm nuc} + \sum_{pq} h_{pq}\hat{E}_{pq} + \frac{1}{2}\sum_{pqrs} g_{pqrs}\left(\hat{E}_{pq}\hat{E}_{rs} - \delta_{qr}\hat{E}_{ps}\right), \qquad (3)$$

where $\hat{E}_{pq} = \hat{c}^\dagger_p\hat{c}_q$, and $\hat{c}^\dagger_p$ ($\hat{c}_p$) creates (annihilates) an electron in the molecular spinor $\phi_p$. With equation (3), relativistic and non-relativistic realizations of the method differ only in the definition of the matrix elements $h_{pq}$ and $g_{pqrs}$ [38]. A common technique is to assume pure spin-orbitals and integrate over the spin variable. As we want to develop a formalism that is also valid for relativistic calculations, we will keep working with spinors in this work. Adaptation to a spin-free formalism is straightforward, and does not affect the computational scaling or error estimates.
The electronic Hamiltonian defined above depends parametrically on the nuclear positions, both explicitly via the nuclear potential and implicitly via the molecular orbitals that change when the nuclei are displaced.

B. Asymptotic convergence of energy derivative estimation methods
In this section, we present and compare various methods for calculating energy derivatives on a quantum computer. In Tab. I, we estimate the computational complexity of all studied methods in terms of the system size $N_{\rm sys}$ and the estimation error $\epsilon$. We also indicate which methods require quantum phase estimation, as these require longer coherence times than variational methods. Many methods benefit from the amplitude estimation algorithm of [39], for which we have included costings. We approximate the scaling in $N_{\rm sys}$ between a best-case scenario (a lattice model with a low-weight energy derivative and aggressive truncation of any approximations) and a worst-case scenario (the electronic structure problem with a high-weight energy derivative and less aggressive truncation). The lower bounds obtained here are competitive with classical codes, suggesting that these methods will be tractable for use in large-scale quantum computing. However, the upper bounds will need to be reduced in future work to be practical, e.g. by implementing the strategies suggested in [11,33,40].
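Several of the gradient methods compared in this section rest on the Hellmann-Feynman theorem, $\partial E_0/\partial\lambda = \langle\Psi_0|\partial\hat{H}/\partial\lambda|\Psi_0\rangle$ for a variationally optimized state. A minimal classical check of this identity on a random Hermitian matrix (a sketch for illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian H0 and perturbation V (illustration only): H(lam) = H0 + lam*V.
# Hellmann-Feynman: dE0/dlam = <Psi0| V |Psi0> for the exact ground state Psi0.
dim = 6
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H0 = (A + A.conj().T) / 2
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
V = (B + B.conj().T) / 2

def e0(lam):
    return np.linalg.eigvalsh(H0 + lam * V)[0]

lam, h = 0.1, 1e-5
vals, vecs = np.linalg.eigh(H0 + lam * V)
psi0 = vecs[:, 0]                     # exact ground state at this lam
hf_gradient = np.real(psi0.conj() @ V @ psi0)
fd_gradient = (e0(lam + h) - e0(lam - h)) / (2 * h)
assert abs(hf_gradient - fd_gradient) < 1e-6
```

On a quantum device the same expectation value is assembled from repeated measurements of the terms of $\partial\hat{H}/\partial\lambda$ rather than from the exact eigenvector.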
For wavefunctions in which all parameters are variationally optimized, the Hellmann-Feynman theorem allows for ready calculation of energy gradients as the expectation value of the perturbing operator [35,41]:

$$\frac{\partial E_0}{\partial\lambda} = \left\langle\Psi_0\left|\frac{\partial\hat{H}}{\partial\lambda}\right|\Psi_0\right\rangle. \qquad (4)$$

This expectation value may be estimated by repeated measurement of a prepared ground state on a quantum computer (Sec. IV C 1), and classical calculation of the coefficients of the Hermitian operator $\partial\hat{H}/\partial\lambda$ (Sec. A). If state preparation is performed using a VQE, estimates of the expectation values in Eq. 4 will often have already been obtained during the variational optimization routine. If one prepares a state via QPE, one does not get these expectation values for free, and must repeatedly measure the quantum register on top of the phase estimation routine. Such measurement is possible even with single-ancilla QPE methods, which do not project the system register into the ground state (see Sec. IV E 1). Regardless of the state preparation method, the estimation error $\epsilon$ may be calculated by summing the sampling noise of all measured terms (assuming the basis set error and ground state approximation errors are minimal).

The Hellmann-Feynman theorem cannot be so simply extended to higher-order energy derivatives. We now study three possible methods for such calculations. The propagator and phase estimation (PPE) method uses repeated rounds of quantum phase estimation to measure the frequency-domain Green's function, building on previous work on Green's function techniques [37,42,43]. We may write an energy derivative via perturbation theory as a sum of products of path amplitudes $A$ and energy coefficients $f_A$. For example, a second-order energy derivative may be written as

$$\frac{\partial^2 E_0}{\partial\lambda_1\,\partial\lambda_2} = \left\langle\Psi_0\left|\frac{\partial^2\hat{H}}{\partial\lambda_1\,\partial\lambda_2}\right|\Psi_0\right\rangle + 2\,{\rm Re}\sum_{j>0}\frac{\langle\Psi_0|\frac{\partial\hat{H}}{\partial\lambda_1}|\Psi_j\rangle\langle\Psi_j|\frac{\partial\hat{H}}{\partial\lambda_2}|\Psi_0\rangle}{E_0 - E_j},$$

allowing us to identify two amplitudes and two corresponding energy coefficients. The generic form of a $d$th-order energy derivative may be written as

$$D = \sum_A f_A\, A, \qquad (9)$$

where $X_A$ counts the number of excitations in the path $A$.
As this is different from the number of responses of the wavefunction, $X_A$ does not follow the $2n+1$ rule; rather, $X_A \le d$. The amplitudes $A$ take the form [44]

$$A(j_1, \ldots, j_{X_A-1}) = \langle\Psi_0|\hat{P}_1|\Psi_{j_1}\rangle\langle\Psi_{j_1}|\hat{P}_2|\Psi_{j_2}\rangle\cdots\langle\Psi_{j_{X_A-1}}|\hat{P}_{X_A}|\Psi_0\rangle.$$

These may be estimated simultaneously with the corresponding energies $E_{j_x}$ by applying rounds of QPE in between excitations by the operators $\hat{P}_x$ (Sec. IV E 2). One may then classically calculate the energy coefficients $f_A$ and evaluate Eq. 9. Performing such a calculation over an exponentially large number of eigenstates $|\Psi_{j_x}\rangle$ would be prohibitive. However, the quantum computer naturally bins small amplitudes of nearby energy, with a resolution $\Delta$ controllable by the user. We expect the resolution error to be smaller than the error in estimating the amplitudes $A(j_1, \ldots, j_{X_A-1})$ (Sec. IV E 4); we use the latter for the results in Tab. I.

In lieu of the ability to perform the long circuits required for phase estimation, one may approximate the sum over (exponentially many) eigenstates $|\Psi_j\rangle$ in Eq. 9 by taking a truncated set of (polynomially many) approximate eigenstates $|\tilde\Psi_j\rangle$. We call such an approximation the eigenstate truncation approximation, or ETA for short. However, on a quantum computer, we expect both to better approximate the true ground state $|\Psi_0\rangle$ and to have access to a wider range of approximate excited states [14,40,[45][46][47]. In this work, we focus on the quantum subspace expansion (QSE) method of [40]. This method proceeds by generating a set of $N_E$ vectors $|\chi_j\rangle$ connected to the ground state $|\Psi_0\rangle$ by excitation operators $\hat{E}_j$,

$$|\chi_j\rangle = \hat{E}_j|\Psi_0\rangle.$$

This is similar to truncating the Hilbert space using a linear excitation operator in the (classical) equation-of-motion coupled cluster (EOMCC) approach [48]. The $|\chi_j\rangle$ states are not guaranteed to be orthonormal; the overlap matrix is not necessarily the identity.
To generate the set $|\tilde\Psi_j\rangle$ of orthonormal approximate eigenstates, one can calculate the projected Hamiltonian matrix

$$H^{\rm (QSE)}_{ij} = \langle\chi_i|\hat{H}|\chi_j\rangle,$$

and solve the generalized eigenvalue problem

$$H^{\rm (QSE)}\vec{v} = E\,S^{\rm (QSE)}\vec{v}, \qquad S^{\rm (QSE)}_{ij} = \langle\chi_i|\chi_j\rangle.$$

Regardless of the method used to generate the eigenstates $|\tilde\Psi_j\rangle$, the dominant computational cost of the ETA is the need to estimate $N_E^2$ matrix elements. Furthermore, combining all matrix elements with constant total error requires the variance of each estimate to scale as $N_E^{-2}$ (assuming the error in each term is independent). This implies that, in the absence of amplitude amplification, the computational complexity scales as $N_E^4$. Taking all single-particle excitations sets $N_E \propto N_{\rm sys}^2$. However, in a lattice model one might consider taking only local excitations, setting $N_E \propto N_{\rm sys}$. Further reductions to $N_E$ will increase the systematic error from Hilbert space truncation (Sec. IV F), although this may be circumvented somewhat by extrapolation.
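The classical post-processing step of the QSE reduces to a small generalized eigenvalue problem once the matrix elements have been measured. A minimal sketch (with random data standing in for the measured projected matrices, for illustration only):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Sketch of the classical QSE post-processing step (not the paper's code):
# given a full Hamiltonian H and a set of non-orthogonal subspace vectors
# |chi_j>, project and solve the generalized eigenvalue problem H c = E S c.
dim, n_sub = 8, 3
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2
chis = rng.normal(size=(dim, n_sub))      # columns: subspace vectors |chi_j>

H_qse = chis.T @ H @ chis                 # projected Hamiltonian matrix
S_qse = chis.T @ chis                     # overlap matrix (not the identity)
energies, coeffs = eigh(H_qse, S_qse)     # generalized eigenvalue problem

# The lowest subspace eigenvalue upper-bounds the true ground-state energy
# (variational principle), with equality only if the subspace contains it.
assert energies[0] >= np.linalg.eigvalsh(H)[0] - 1e-10
```

`scipy.linalg.eigh(a, b)` handles the non-identity overlap matrix directly, provided `b` is positive definite.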
For the sake of completeness, we also consider here the cost of numerically estimating an energy derivative from energies (or lower-order derivatives) evaluated at multiple points, e.g.

$$\frac{\partial^2 E}{\partial\lambda^2} \approx \frac{E(\lambda+\delta\lambda) - 2E(\lambda) + E(\lambda-\delta\lambda)}{\delta\lambda^2} \approx \frac{1}{2\,\delta\lambda}\left[\frac{\partial E}{\partial\lambda}\bigg|_{\lambda+\delta\lambda} - \frac{\partial E}{\partial\lambda}\bigg|_{\lambda-\delta\lambda}\right].$$

Here, the latter formula is preferable if one has direct access to the derivative in a VQE via the Hellmann-Feynman theorem, whilst the former is preferable when one may estimate the energy directly via QPE. In either case, the sampling noise (Sec. IV C 1 and Sec. IV D) is amplified by the division by $\delta\lambda$. This error then competes with the $O(\delta\lambda^2)$ finite difference error, the balancing of which leads to the scaling laws in Tab. I. This competition can be negated by coherently copying the energies at different $\lambda$ to a quantum register of $L$ ancilla qubits and performing the finite difference calculation there [28,49]. Efficient circuits (and lower bounds) for the complexity of such an algorithm have not been determined, and proposed methods involve coherent calculation of the Hamiltonian coefficients on a quantum register. This would present a significant overhead on a near-term device, but one that is additive and with better asymptotic scaling than the QPE step itself (which we use for the results in Tab. I).
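The trade-off described above can be seen in a few lines: the truncation error of the central difference shrinks as $O(\delta\lambda^2)$, while any fixed error $\epsilon$ on an energy evaluation is amplified to $O(\epsilon/\delta\lambda^2)$. A sketch (illustration only, with $\cos$ standing in for the energy surface):

```python
import numpy as np

# Central-difference second derivative:
#   E''(x) ~ [E(x+d) - 2E(x) + E(x-d)] / d^2
# has O(d^2) truncation error, while a noise of size eps on a single energy
# evaluation is amplified to O(eps / d^2), forcing a trade-off in d.
def second_diff(f, x, d):
    return (f(x + d) - 2 * f(x) + f(x - d)) / d**2

f = np.cos
x = 0.7
exact = -np.cos(x)

# Truncation error shrinks roughly 100x when d shrinks 10x (O(d^2) scaling).
err_large = abs(second_diff(f, x, 1e-1) - exact)
err_small = abs(second_diff(f, x, 1e-2) - exact)
assert err_small < err_large / 50

# A fixed perturbation eps on the central energy shifts the estimate by 2*eps/d^2.
eps, d = 1e-8, 1e-3
noisy = lambda y: f(y) + (eps if y == x else 0.0)
assert abs(second_diff(noisy, x, d) - second_diff(f, x, d)) >= 0.019
```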
C. Geometry optimization on a superconducting quantum device

To demonstrate the use of energy derivatives directly calculated from a quantum computing experiment, we first perform geometry optimization of the diatomic H$_2$ molecule, using two qubits of a superconducting transmon device. (Details of the experiment are given in Sec. IV G.) Geometry optimization aims to find the ground state molecular geometry by minimizing the ground state energy $E_0(\mathbf{R})$ as a function of the atomic coordinates $R_i$. In this small system, rotational and translational symmetries reduce this to a minimization as a function of the bond distance $R_{\rm H-H}$.

FIG. 2. Comparison of geometry optimization via different classical optimization routines, using a quantum computer to return energies and Jacobians as required, and estimating Hessians as required either via the ETA on the experimental device, or the Hartree-Fock (HF) approximation on a classical computer. Each algorithm was run till termination with a tolerance of $10^{-3}$, so as to be comparable to the final error in the system. (Inset) Bar plot of the number of function evaluations of the four compared methods. Light blue points correspond to the median $N_{\rm fev}$ from 100 density-matrix simulations (Sec. IV H) of geometry optimization, and error bars to the interquartile ranges.

In Fig. 1, we illustrate this process by sketching the path taken by Newton's minimization algorithm from a very distant initial bond distance ($R_{\rm H-H} = 1.5$ Å). At each step of the minimization we show the gradient estimated via the Hellmann-Feynman theorem. Newton's method additionally requires access to the Hessian, which we calculated via the ETA (details given in Sec. IV G). The optimization routine takes 5 steps to converge to a minimum bond length of 0.749 Å, within 0.014 Å of the target FCI equilibrium bond length (given the chosen STO-3G basis set). To demonstrate the optimization stability, we performed 100 simulations of the geometry optimization experiment on the quantumsim density-matrix simulator [50], with realistic sampling noise and coherence time fluctuations (details given in Sec. IV H). We plot all simulated optimization trajectories in Fig. 1, and highlight the median $(R_{\rm H-H}, E(R_{\rm H-H}))$ of the first 7 steps. Despite the rather dramatic variations between different gradient descent simulations, we observe that all converge to within similar error bars, showing that our methods are indeed stable.
To study the advantage in geometry optimization from direct estimation of derivatives on a quantum computer, we compare in Fig. 2 our performance with gradient-free (Nelder-Mead) and Hessian-free (conjugate gradient, or CG) optimization routines. We also compare the performance of Newton's method with an approximate Hessian from Hartree-Fock (HF) theory. All methods converge to near-identical minima, but both Newton methods converge about twice as fast as the raw CG method, which in turn converges about twice as fast as Nelder-Mead. The density-matrix simulations predict that the ETA Hessians provide less stable convergence than the HF Hessians; we attribute this to the fact that the HF Hessian at a fixed bond distance does not fluctuate between iterations. The density-matrix simulations also predict the CG method performance to be on average much closer to that of Newton's method. However, we expect the separation between gradient- and Hessian-free optimization routines to become starker at larger system sizes, as is typically observed in numerical optimization [51].
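The qualitative comparison above can be reproduced on a classical toy model. The sketch below (hypothetical Morse-potential parameters, with scipy optimizers standing in for the experimental loop) contrasts gradient-free, gradient-based, and Newton-type routines on a one-dimensional bond-stretch curve:

```python
import numpy as np
from scipy.optimize import minimize

# Model PES (hypothetical parameters, for illustration): a Morse potential
#   E(R) = D * (1 - exp(-a*(R - R_eq)))**2,
# mimicking the bond-stretch curve of a diatomic, with analytic gradient.
D, a, R_eq = 0.17, 1.4, 0.74

def energy(R):
    return D * (1 - np.exp(-a * (R[0] - R_eq)))**2

def gradient(R):
    e = np.exp(-a * (R[0] - R_eq))
    return np.array([2 * D * a * e * (1 - e)])

x0 = np.array([1.5])                         # distant initial bond length
res_nm = minimize(energy, x0, method="Nelder-Mead")
res_cg = minimize(energy, x0, jac=gradient, method="CG")
res_newton = minimize(energy, x0, jac=gradient, method="Newton-CG")

# All three routines should find the equilibrium bond length R_eq; the
# gradient-based ones typically need fewer function evaluations.
for res in (res_nm, res_cg, res_newton):
    assert abs(res.x[0] - R_eq) < 1e-3
```

Inspecting `res.nfev` for each result shows the evaluation-count ordering discussed in the text, though the exact counts depend on tolerances.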
To separate the performance of the energy derivative estimation from the optimization routine, we study the error in the energy $E$, the Jacobian $J$, and the Hessian $K$, given as $\Delta A = |A_{\rm FCI} - A_{\rm expt}|$ for $A = E, J, K$. In Fig. 3, we plot these errors for different bond distances. For comparison we additionally plot the error in the HF Hessian approximation. We observe that the ETA Hessian is significantly closer than the HF-approximated Hessian to the true value, despite the similar performance in geometry optimization. The accuracy of the ETA improves at large bond distance, where the HF approximation begins to fail, giving hope that the ETA Hessian will remain appropriate in strongly correlated systems where this occurs as well.

D. Polarizability estimation
A key property to model in quantum chemistry is the polarizability, which describes the tendency of an atom or molecule to acquire an induced dipole moment under a change in an external electric field $\vec{F}$. The polarizability tensor may be calculated as

$$\alpha_{i,j} = \left.\frac{\partial^2 E(\vec{F})}{\partial F_i\,\partial F_j}\right|_{\vec{F}=0}$$

[52]. In Fig. 4, we calculate the $z$-component of the polarizability tensor of H$_2$ in the ETA, and compare it to FCI and HF polarizability calculations on a classical computer. We observe good agreement with the target FCI result at low $R_{\rm H-H}$, finding a 0.060 a.u. (2.1%) error at the equilibrium bond distance (including the inaccuracy in estimating this distance). However, our predictions deviate significantly from the exact result at large bond distance ($R_{\rm H-H} \gtrsim 1.2$ Å). We attribute this deviation to the transformation used to reduce the description of H$_2$ to a two-qubit device (see Sec. IV G), which is no longer valid when the dipole moment operator is added to the Hamiltonian. To confirm this, we classically compute the FCI polarizability following the same transformation (which corresponds to projecting the larger operator onto a 2-qubit Hilbert space). We find excellent agreement between this and the result from the quantum device across the entire bond dissociation curve. This implies that simulations of H$_2$ on a 4-qubit device should match the FCI result within experimental error.

III. CONCLUSION
In this work, we have surveyed possible methods for estimating energy gradients on a quantum computer, including two new techniques of our own design. We have estimated the computational complexity of these methods, both in terms of the accuracy required for the result and the size of the studied system. We have demonstrated the use of these methods in a small-scale quantum computing experiment, obtaining the equilibrium bond length of the H$_2$ molecule to within 0.014 Å (2%) of the target full-CI value, and estimating the polarizability at this bond length to within 0.060 a.u. (2.1%) of the same target.
Our methods do not particularly target the ground state over any other eigenstate of the system, and so can be used out-of-the-box for gradient estimation in excited state chemistry. They hold further potential for calculating frequency-domain Green's functions in strongly correlated physics systems (as PPE estimates the gradient through a Green's function calculation). However, throughout this work we made the assumption that the gap $\delta$ between ground and excited state energies was sufficiently large to not be of concern (namely, that $\delta \propto N_{\rm sys}^{-1}$). Many systems of interest (such as high-temperature superconductors) are characterized by gap closings in the continuum limit. How this affects PPE is an interesting question for future work. Further investigation is also required to improve some of the results drawn upon for this work, in particular reducing the number of measurements required during a VQE and improving amplitude estimation during single-round QPE.

A. Classical computation
The one- and two-electron integrals defining the fermionic Hamiltonian in Eq. 3 are obtained from a preliminary HF calculation that is assumed to be easily feasible on a classical computer. In non-relativistic theory the one-electron integrals are given by

$$h_{pq} = \int d\mathbf{r}\;\phi_p^*(\mathbf{r})\left(-\frac{1}{2}\nabla^2 + V(\mathbf{r})\right)\phi_q(\mathbf{r}),$$

where $V(\mathbf{r})$ is the electron-nuclear attraction potential from fixed nuclei at positions $\mathbf{R}_i$. The two-electron integrals are given by

$$g_{pqrs} = \int d\mathbf{r}_1\,d\mathbf{r}_2\;\frac{\phi_p^*(\mathbf{r}_1)\phi_r^*(\mathbf{r}_2)\phi_q(\mathbf{r}_1)\phi_s(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}.$$

For simplicity, we used a finite difference technique to compute the matrix representations of the perturbations corresponding to a change in nuclear coordinates and to an external electric field,

$$\frac{\partial\hat{H}}{\partial\lambda} \approx \frac{\hat{H}(\lambda + \delta\lambda) - \hat{H}(\lambda - \delta\lambda)}{2\,\delta\lambda},$$

where $\delta\lambda = 0.001$ corresponds to a small change in $\lambda$.
The above (perturbed) quantum chemical Hamiltonians have been determined within the Dirac program [53] and transformed into qubit Hamiltonians using the OpenFermion [54] package. This uses the newly-developed, freely-available [55] OpenFermion-Dirac interface, allowing for the simulation of relativistic quantum chemistry calculations on a quantum computer. While a finite difference technique was sufficient for the present purpose, such schemes are sensitive to numerical noise and have a high computational cost when applied to larger molecular systems. A consideration of the analytical calculation of energy derivatives can be found in the Supplementary Materials.

B. Approximate bound calculation details
In this section we detail our method for calculating the approximate bounds in Tab. I. We first estimate the error $\epsilon$ (Tab. II, first row); details of the non-trivial calculations for the PPE and ETA methods are given in Sec. IV E and Sec. IV F, respectively. Separately, we may calculate the time cost by multiplying the number of circuits, the number of repetitions of said circuits ($n_m$, $N_m$, and $K$, depending on the method), and the time cost of each circuit (Tab. II, second row). (This assumes access to only a single quantum processor, and can in some situations be improved by simultaneous measurement of commuting terms, as discussed in Sec. IV C 1.) We then choose the scaling of the number of circuit repetitions as a function of the other metaparameters to fix $\epsilon$ constant (Tab. II, third row). We finally substitute the lower and upper bounds for these metaparameters in terms of the system size, as stated throughout the remaining sections. For reference, we summarize these bounds in Tab. III.
C. Quantum simulation of the electronic structure problem
After a suitable qubit representation has been found, we need to design quantum circuits to implement unitary transformations. Such circuits must be constructed from an appropriate set of primitive units, known as a (universal) gate-set. For example, one might choose the set of all single-qubit operators plus a two-qubit entangling gate such as the controlled-NOT, C-Phase, or iSWAP gate [67]. One can then build the unitary operators $e^{i\theta\hat{P}}$ for $\hat{P} \in \mathbb{P}_N$ exactly (in the absence of experimental noise) with a number of units and a time linear in the size of $\hat{P}$ [68]. (Here, size refers to the number of non-identity tensor factors of $\hat{P}$.) Optimizing the scheduling and size of these circuits is an open area of research, but many improvements are already known [69].
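The circuit primitive named above relies on the identity $e^{i\theta\hat{P}} = \cos\theta\,I + i\sin\theta\,\hat{P}$, which holds for any Pauli string $\hat{P}$ since $\hat{P}^2 = I$. A quick numerical confirmation (illustration only, for the weight-2 string $Z \otimes X$):

```python
import numpy as np
from scipy.linalg import expm

# For any Pauli string P (P^2 = I): e^{i theta P} = cos(theta) I + i sin(theta) P.
# This is the matrix a circuit for exp(i theta P) implements; check it for
# the two-qubit string P = Z (x) X.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P = np.kron(Z, X)

theta = 0.37
U = expm(1j * theta * P)                                  # exact exponential
closed_form = np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * P
assert np.allclose(U, closed_form)
```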
Transformations of a quantum state must be unitary, which is an issue if one wishes to prepare e.g. $\partial\hat{H}/\partial\lambda\,|\Psi_0\rangle$ on a quantum register ($\partial\hat{H}/\partial\lambda$ is almost always not unitary). To circumvent this, one must decompose $\partial\hat{H}/\partial\lambda$ as a sum of $N_U$ unitary operators, perform a separate circuit for each unitary operator, and then combine the resulting measurements as appropriate. Such a decomposition may always be performed using the Pauli group (although better choices may exist). Each such decomposition incurs a multiplicative cost of $N_U$ to the computation time, and further increases the error in any final result by at worst a factor of $N_U^{1/2}$. This makes the computational complexities reported in Tab. II highly dependent on $N_U$. The scaling of $N_U$ with the system size depends strongly on the operator to be decomposed and the choice of decomposition. When approximating this in Tab. III we use a range between $O(N_{\rm sys})$ (which would suffice for a local potential in a lattice model) and $O(N_{\rm sys}^4)$ (for a two-body interaction).
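A Pauli-group decomposition of a Hermitian operator can be computed classically via the Hilbert-Schmidt inner product; since Pauli strings are both Hermitian and unitary, the same expansion serves as a unitary decomposition. A sketch for two qubits (random test operator, not the paper's code):

```python
import numpy as np
from itertools import product

# Sketch (illustration only): decompose a Hermitian two-qubit operator O into
# Pauli strings, O = sum_j c_j P_j, with c_j = Tr(P_j O) / 4 (dimension 4).
paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.diag([1.0 + 0j, -1.0]),
}

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = (A + A.conj().T) / 2                  # Hermitian operator to decompose

coeffs = {}
for n1, n2 in product(paulis, repeat=2):
    P = np.kron(paulis[n1], paulis[n2])
    c = np.trace(P @ O) / 4               # Hilbert-Schmidt inner product
    if abs(c) > 1e-12:
        coeffs[n1 + n2] = c

# Reassembling the terms recovers O, with real coefficients since O is Hermitian.
rebuilt = sum(c * np.kron(paulis[k[0]], paulis[k[1]]) for k, c in coeffs.items())
assert np.allclose(rebuilt, O)
assert all(abs(c.imag) < 1e-12 for c in coeffs.values())
```

The number of non-zero terms in `coeffs` plays the role of $N_U$ above; for a generic dense operator it grows as $4^N$, which is why structured decompositions matter.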
To interface with the outside world, a quantum register needs to be appropriately measured. Similarly to unitary transformations, one builds these measurements from primitive operations, typically the measurement of a single qubit in the $Z$ basis. This may be performed by a prior unitary rotation, or by decomposing an operator $\hat{O}$ into $N_T$ Hermitian terms $\hat{O}_i$ (which may be measured separately). $N_T$ differs from the $N_U$ defined above, as the first is for a Hermitian decomposition of a derivative operator and the second is for a unitary decomposition. Without a priori knowledge that the system is near an eigenstate of an operator $\hat{O}$ to be measured, one must perform $n_m$ repeated measurements of each $\hat{O}_i$ to estimate $\langle\hat{O}\rangle$ to an accuracy $\propto n_m^{-1/2} N_T^{1/2}$. As such measurement is destructive, this process requires $n_m N_T$ preparations and pre-rotations on top of the measurement time. This makes the computational costs reported in Tab. II highly dependent on $N_T$. The scaling of $N_T$ with the system size $N_{\rm sys}$ depends strongly on the operator $\hat{O}$ to be measured and the choice of measurements to be made [11,33]. In Tab. III, we assume a range between $O(N_{\rm sys})$ and $O(N_{\rm sys}^4)$ to calculate the approximate computation cost. This is a slight upper bound, as terms can be measured simultaneously if they commute, and error bounds may be tightened by accounting for the covariance between non-commuting terms [11]. However, the details of how this would improve the asymptotic scaling are still lacking in the literature, and so we leave this as an obvious target for future work.
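The $n_m^{-1/2}$ statistical scaling quoted above is easy to demonstrate for a single observable; a sketch of projection noise when estimating $\langle Z\rangle$ from destructive single-shot measurements (illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Projection-noise scaling: estimating <Z> = p0 - p1 of a qubit from n_m
# destructive single-shot measurements carries statistical error ~ n_m^{-1/2}.
p0 = 0.8                                  # probability of measuring outcome 0
exact = 2 * p0 - 1                        # <Z> = p0 - p1

def estimate(n_m):
    shots = rng.random(n_m) < p0          # True -> outcome 0
    return 2 * shots.mean() - 1

errs_small = [abs(estimate(100) - exact) for _ in range(2000)]
errs_large = [abs(estimate(10000) - exact) for _ in range(2000)]

# 100x more shots should shrink the RMS error by about 10x.
rms = lambda e: np.sqrt(np.mean(np.square(e)))
assert 5 < rms(errs_small) / rms(errs_large) < 20
```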
Throughout this text we require the ability to measure a phase $e^{i\phi}$ between the $|0\rangle$ and $|1\rangle$ states of a single qubit. (This information is destroyed by a measurement in the $Z$ basis, which may only obtain the amplitude of either state.) Let us generalize this to a mixed state on a single qubit, which has the density matrix [67]

$$\rho = \begin{pmatrix} p_0 & p_+ e^{i\phi} \\ p_+ e^{-i\phi} & p_1 \end{pmatrix},$$

where $p_0 + p_1 = 1$ and $0 \le p_+ \le \sqrt{p_0 p_1} \le 0.5$. If one repeatedly performs the two circuits in Fig. 5 (which differ by a gate $R = I$ or $R = R_Z = e^{i\frac{\pi}{4}Z}$), and estimates the probability of a final measurement $m = 0, 1$, one may calculate the real and imaginary parts of $2p_+ e^{i\phi}$. We define the circuit element $M_T$ throughout this work as the combination of these two circuits to extract a phase in this way. As above, the error in estimating the real and imaginary parts of $2p_+ e^{i\phi}$ scales as $n_m^{-1/2}$.

Hamiltonian Simulation
Optimal decompositions for complicated unitary operators are not in general known. For the electronic structure problem, one often wants to perform time evolution under a Hamiltonian $\hat{H}$, requiring a circuit for the unitary operator $U = e^{i\hat{H}t}$. For a local (fermionic or qubit) Hamiltonian, the length $T_U$ of the required circuit is polynomial in the system size $N_{\rm sys}$. However, the coefficient of this polynomial is often quite large; it depends on the chosen Hamiltonian, its basis set representation, the filling factor $\eta$ (i.e. the number of particles), and whether additional ancilla qubits are used [4,5]. Moreover, such circuits usually approximate the target unitary $U$ with some $\tilde{U}$, with bounds on the error $\epsilon_H = \|U - \tilde{U}\|_S$. This bound $\epsilon_H$ is proportional to the evolution time $t$, providing a 'speed limit' for such simulation [70]. For the electronic structure problem, current methods achieve scalings between $O(N_{\rm sys}^2)$ [71] and $O(N_{\rm sys}^6)$ [69,72] for the circuit length $T_U$, assuming $\eta \propto N_{\rm sys}$ (and fixed $t$, $\epsilon_H$). (When $\eta$ is sublinear in $N_{\rm sys}$, better results exist [73].) The proven $O(N_{\rm sys}^6)$ scaling is an upper bound, most likely reduced by recent work [74,75]. For simpler models, such as the Hubbard model, scalings between $O(1)$ and $O(N_{\rm sys})$ are available [66,76]. As we require $t \propto N_{\rm sys}^{-1}$ for the purposes of phase estimation (described in Sec. IV D), this scaling is reduced by an additional factor throughout this work (though this cannot reduce the scaling below $O(1)$). For Tab. III, we use a range of $T_U = O(1)$ to $T_U = O(N_{\rm sys}^5)$ when approximating the scaling of our methods with the system size.
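The approximation error of product-formula circuits can be illustrated classically. The sketch below (random Hermitian matrices standing in for Hamiltonian terms, illustration only) shows the spectral-norm error of a first-order Trotter formula growing with the evolution time $t$ at a fixed step count:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

# First-order Trotter approximation: e^{i(A+B)t} ~ (e^{iAt/n} e^{iBt/n})^n.
# At fixed step count n, the spectral-norm error grows with t (roughly as
# t^2/n for small t), so longer evolutions cost more circuit depth at
# fixed accuracy.
dim, n = 4, 16
A0 = rng.normal(size=(dim, dim))
A = (A0 + A0.T) / 2
B0 = rng.normal(size=(dim, dim))
B = (B0 + B0.T) / 2

def trotter_error(t):
    exact = expm(1j * (A + B) * t)
    step = expm(1j * A * t / n) @ expm(1j * B * t / n)
    approx = np.linalg.matrix_power(step, n)
    return np.linalg.norm(exact - approx, 2)   # spectral norm

# Error shrinks monotonically as the evolution time is halved.
assert trotter_error(0.1) > trotter_error(0.05) > trotter_error(0.025)
```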

Ground state preparation and measurement
A key requirement for our derivative estimation methods is the ability to prepare the ground state $|\Psi_0\rangle$, or an approximation to it, on the system register. Various methods exist for such preparation, including QPE (see Sec. IV D), adiabatic state preparation [77], VQE [10,11], and more recent developments [78,79]. Some of these preparation methods (in particular adiabatic and variational methods) are unitary, whilst others (phase estimation) are projective. Given a unitary preparation method, one may determine whether the system remains in the ground state by inverting the unitary and measuring in the computational basis (Sec. IV C 1). By contrast, such a determination for QPE requires another phase estimation round, either via multiple ancilla qubits or by extending the methods in Sec. IV D. Unitary preparation methods have a slight advantage in estimating expectation values of unitary operators $\hat{U}$; the amplitude amplification algorithm [39] improves the convergence of estimating $\langle\hat{U}\rangle$ from $\propto T^{-1/2}$ to $\propto T^{-1}$ (in a total computation time $T$). However, this algorithm requires repeated application of the unitary preparation whilst maintaining coherence, which is probably not achievable in the near-term. We list the computation time in Tabs. I and II both when amplitude amplification is (marked with **) and is not available.
Regardless of the method used, state preparation has a time cost that scales with the total system size. For quantum phase estimation, this is the time cost KT_U of applying the required estimation circuits, where K is the total number of applications of e^{iĤt} [80]. The scaling of a VQE depends on the variational ansatz chosen [11,33]. The popular UCCSD ansatz for the electronic structure problem has an O(N_sys^5) computational cost if implemented naively. However, recent work suggests aggressive truncation of the number of variational terms can reduce this as far as O(N_sys^2) [33]. We take this as the range of scalings for our approximations in Tab. I.

Systematic error from ground state approximations (state error)
A variational quantum eigensolver is not guaranteed to prepare the true ground state |Ψ_0⟩, but instead some approximate ground state |Ψ̃_0⟩ = Σ_j a_j |Ψ_j⟩. In general we expect |a_0|^2 to be relatively large, although this may not be the case for systems with a small gap δ to nearby excited states. One may place very loose bounds on the error this induces in the energy:

(1 − |a_0|^2) δ ≤ Ẽ_0 − E_0 ≤ 2 (1 − |a_0|^2) ||Ĥ||_S,   (24)

where ||Ĥ||_S is the spectral norm of the Hamiltonian (its largest eigenvalue). (Note that while in general ||Ĥ||_S is difficult to calculate, reasonable approximations are usually obtainable.) As |Ψ̃_0⟩ is chosen to minimize the approximate energy Ẽ_0, one expects Ẽ_0 − E_0 to be much closer to the smaller bound than the larger. For an operator D̂ (such as a derivative operator ∂Ĥ/∂λ) other than the Hamiltonian, cross-terms will contribute an additional error in the expectation value D̃_0 = ⟨Ψ̃_0|D̂|Ψ̃_0⟩:

D̃_0 − D_0 = Σ_{j,l} a_j^* a_l ⟨Ψ_j|D̂|Ψ_l⟩ − ⟨Ψ_0|D̂|Ψ_0⟩.

One can bound this above in turn using the fact that |⟨Ψ_j|D̂|Ψ_l⟩| ≤ ||D̂||_S, which leads to

|D̃_0 − D_0| ≲ 2 ||D̂||_S √(1 − |a_0|^2).

Combining this with the error in the energy gives the bound

|D̃_0 − D_0| ≲ 2 ||D̂||_S √((Ẽ_0 − E_0)/δ).

This ties the error in our derivative to the error in our energy, but with a square-root factor that unfortunately slows down the convergence when the error is small.
(This factor comes about precisely because we do not expect to be in an eigenstate of the derivative operator.) Unlike the above energy error, we cannot expect this bound to be loose without a good reason to believe that the orthogonal component Σ_{j>0} a_j |Ψ_j⟩ has a similar energy gradient to the ground state. This will often not be the case; the low-energy excited state manifold is usually strongly coupled to the ground state by a physically-relevant excitation, causing the energies to move in opposite directions. Finding methods to circumvent this issue is an obvious target for future research. For example, one could optimize a VQE on a cost function other than the energy. One could also calculate the gradient in a reduced Hilbert space (see Sec. IV F) using eigenstates of Ĥ^(QSE) + εD̂^(QSE) with small ε, to ensure the coupling is respected.
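The loose energy bounds discussed above can be checked numerically on a toy spectrum. The sketch below builds a random six-level "Hamiltonian" (diagonal, without loss of generality, in its eigenbasis; all numbers illustrative) and verifies (1 − |a_0|^2)δ ≤ Ẽ_0 − E_0 ≤ 2(1 − |a_0|^2)||Ĥ||_S:

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.sort(rng.uniform(-1, 1, 6))       # random six-level spectrum
a = rng.normal(size=6) + 1j * rng.normal(size=6)
a[0] += 10.0                             # make the ground-state overlap dominant
a /= np.linalg.norm(a)                   # amplitudes of the approximate state

E0_tilde = np.sum(np.abs(a)**2 * E)      # energy of the approximate state
leak = 1 - np.abs(a[0])**2               # weight outside the ground state
delta = E[1] - E[0]                      # gap to the first excited state
H_S = np.max(np.abs(E))                  # spectral norm ||H||_S

lower = leak * delta
upper = 2 * leak * H_S
err = E0_tilde - E[0]                    # energy error of the VQE-like state
```

Since every excited level satisfies δ ≤ E_j − E_0 ≤ 2||Ĥ||_S, the error lands between the two bounds by construction.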

D. Quantum Phase Estimation
Non-trivial measurement of a quantum computer is of similar difficulty to non-trivial unitary evolution. Beyond learning the expectation value of a given Hamiltonian Ĥ, one often wishes to know specific eigenvalues E_i (in particular, for the electronic structure problem, the ground and low-lying excited state energies). This may be achieved by QPE [9]. QPE entails repeated Hamiltonian simulation (as described above), conditional on an ancilla qubit prepared in the state |+⟩ = (|0⟩ + |1⟩)/√2. (The resource cost of making the evolution conditional is constant in the system size.) Such evolution causes phase kickback on the ancilla qubit; if the system register was prepared in the state Σ_j a_j |Ψ_j⟩, the combined (system plus ancilla) state evolves to (1/√2) Σ_j a_j |Ψ_j⟩ ⊗ (|0⟩ + e^{ikE_j t} |1⟩).
Repeated tomography at various k allows for the eigenphases E_j to be inferred, up to the phase equivalence E_j t ≡ E_j t + 2π. This inference can be performed directly with the use of multiple ancilla qubits [9], or indirectly through classical post-processing of a single ancilla tomographed via the M_T gate of Fig. 5 [39,80–84]. The error in phase estimation comes from two sources: the error in Hamiltonian simulation and the error in the phase estimation itself. The error in Hamiltonian simulation may be bounded by ε_H (as calculated in the previous section), which in turn sets the time T_U for a single unitary. Assuming a sufficiently large gap to nearby eigenvalues, the optimal scaling of the error in estimating E_j is A_j^{-1} t^{-1} K^{-1} (where A_j = |a_j|^2 is the amplitude of the jth eigenstate in the prepared state). Note that the phase equivalence E_j t ≡ E_j t + 2π sets an upper bound on t; in general we require t ∝ ||Ĥ||_S^{-1}, which typically scales as N_sys^{-1}. (This scaling was incorporated into the estimates of T_U in the previous section.) The scaling of the ground state amplitude A_0 with the system size is relatively unknown, although numeric bounds suggest that it scales approximately as 1 − αN_sys [3] for reasonable N_sys, with α a small constant. Approximating this as N_sys^{-1} implies that K ∝ N_sys^2 is required to obtain a constant error in estimating the ground state energy.
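The classical post-processing step can be sketched concretely. Given the phase-kickback signal g(k) = Σ_j A_j e^{ikE_j t}, Prony's method (used again in Sec. IV E) recovers both the eigenphases and the amplitudes; the sketch below uses arbitrary toy energies and overlaps, with no sampling noise:

```python
import numpy as np

def prony(g, n_modes):
    """Recover {z_j, A_j} from g[k] = sum_j A_j z_j^k, k = 0..len(g)-1."""
    g = np.asarray(g, dtype=complex)
    K = len(g)
    # Linear prediction: g[k+n] = -(c_0 g[k] + ... + c_{n-1} g[k+n-1]).
    M = np.array([g[i:i + n_modes] for i in range(K - n_modes)])
    c, *_ = np.linalg.lstsq(M, -g[n_modes:], rcond=None)
    # Roots of x^n + c_{n-1} x^{n-1} + ... + c_0 are the signal poles z_j.
    z = np.roots(np.concatenate(([1.0 + 0j], c[::-1])))
    V = np.vander(z, K, increasing=True).T   # V[k, j] = z_j^k
    A, *_ = np.linalg.lstsq(V, g, rcond=None)
    return z, A

# Noiseless three-eigenstate example with illustrative energies/overlaps:
E = np.array([-0.9, 0.3, 0.7])
amps = np.array([0.7, 0.2, 0.1])
t = 0.5
g = [np.sum(amps * np.exp(1j * E * t * k)) for k in range(12)]
z, A = prony(g, 3)
order = np.argsort(np.angle(z))
phases = np.angle(z)[order] / t   # recovered eigenvalues E_j
weights = A[order].real           # recovered amplitudes A_j
```

With noiseless data the recovery is exact to numerical precision; sampling noise degrades this as discussed in the following paragraph.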
The error in estimating an amplitude A_j during single-ancilla QPE has not been thoroughly investigated. A naive least-squares fit to dense estimation leads to a scaling of n_m^{-1/2} k_max^{-1/3}, where n_m is the number of experiments performed at each point k = 1, …, k_max. One must perform controlled time evolution for up to k_max ≈ max(t^{-1}, A_j^{-1}) coherent steps in order to guarantee separation of φ_j from other phases. To obtain a constant error, the total number K of applications of e^{iĤt} must then grow with the system size. By contrast, multiple-ancilla QPE requires N_m repetitions of e^{iĤt} with k_max = max(A_0^{-1}, t^{-1}) to estimate A_0 with an error of (A_0(1 − A_0)N_m^{-1})^{1/2}. This implies that N_m ∝ A_0 measurements are sufficient, so that K ∝ N_m A_0^{-1} may be fixed constant as a function of the system size for constant error in the estimation of A_0. Though this has not yet been demonstrated for single-round QPE, we expect it to be achievable, and assume this scaling in this work.

E. The propagator and phase estimation method
In this section, we outline the circuits required and calculate the estimation error for our newly developed PPE method for derivative estimation.

Estimating expectation values with single-ancilla QPE
Though single-ancilla QPE only weakly measures the system register, and does not project it into an eigenstate |Ψ_j⟩ of the chosen Hamiltonian, it can still be used to learn properties of the eigenstates beyond their eigenvalues E_j. In particular, if one uses the same ancilla qubit to control a further unitary operation Û on the system register, the combined (system plus ancilla) state evolves from Eq. 29 to

(1/√2) Σ_{j,j'} a_j (|0⟩ ⊗ |Ψ_j⟩ + e^{ikE_j t} ⟨Ψ_{j'}|Û|Ψ_j⟩ |1⟩ ⊗ |Ψ_{j'}⟩).   (30)

The phase accumulated on the ancilla qubit may then be calculated to be

g(k) = Σ_{j,j'} a_{j'}^* a_j e^{ikE_j t} ⟨Ψ_{j'}|Û|Ψ_j⟩.   (31)

Note that the gauge degree of freedom is not present in Eq. 31; if one re-defines |Ψ_j⟩ → e^{iθ_j}|Ψ_j⟩, one must send a_j → e^{-iθ_j} a_j, and the phase cancels out. One may obtain g(k) at multiple points k via tomography of the ancilla qubit (Fig. 5). From here, either Prony's method or Bayesian techniques may be used to extract phases ω_j ≈ E_j t and corresponding amplitudes α_j = Σ_{j'} a_{j'}^* a_j ⟨Ψ_{j'}|Û|Ψ_j⟩ [84]. The amplitudes α_j are often not terribly informative, but this changes if one extends this process over a family of operators Û. For instance, if one chooses the total controlled operation to be e^{ik'Ĥt} V̂ e^{ikĤt} (with V̂ unitary), an application of Prony's method in k returns amplitudes of the form

α_j(k') = Σ_{j'} a_{j'}^* a_j e^{ik'E_{j'} t} ⟨Ψ_{j'}|V̂|Ψ_j⟩,   (32)

from which a second application of Prony's method in k' obtains phases ω_{j'} ≈ E_{j'} t with corresponding amplitudes

α_{j,j'} = a_{j'}^* a_j ⟨Ψ_{j'}|V̂|Ψ_j⟩.   (33)

FIG. 6. A circuit to measure ⟨Ψ_j|V̂|Ψ_j⟩ without preparing |Ψ_j⟩ on the system register. The tomography box M_T is defined in Fig. 5.
Each subsequent application of Prony's method requires taking data with k' fixed for k = 1, …, K to resolve K individual frequencies (and corresponding eigenvalues). However, if one is simply interested in expectation values ⟨Ψ_j|V̂|Ψ_j⟩ (i.e., when j = j'), one may fix k = k' and perform a single application of Prony's method, reducing the number of circuits that need to be applied from O(K^2) to O(K) (see Fig. 6). The error in the estimator α_{j,j'} (Eq. 33) may be bounded above by the error in the estimator α_j (Eq. 32). However, to estimate ⟨Ψ_j|V̂|Ψ_j⟩ from Eq. 33, one needs to divide by A_j = |a_j|^2. This propagates directly to the error, which then scales as A_j^{-1} times the error in α_{j,j}. Thus constant error in estimating ⟨Ψ_j|V̂|Ψ_j⟩ is achieved if K ∝ N_sys.

PPE circuits
As presented, the operator V̂ in Fig. 6 must be unitary. However, if one applies additional phase estimation within V̂ itself, one can extend this calculation to non-unitary operators, such as those given in Eq. 9. This is similar in nature to calculating the time-domain Green's function for small t on a quantum computer (which has been studied before in Refs. [41–43]), but performing the transformation to frequency space with Prony's method instead of a Fourier transform to obtain better spectral resolution. It can also be considered a generalization of Ref. [37] beyond the linear response regime. To calculate an Xth-order amplitude (Eq. 10), one may set

Û = e^{ik_0 Ĥ t} P̂_X e^{ik_{X−1} Ĥ t} ⋯ e^{ik_1 Ĥ t} P̂_1 e^{ik_0 Ĥ t},   (34)

which is unitary if the P̂_x are chosen to be a unitary decomposition of ∂Ĥ/∂λ_x. In Fig. 7, we show two circuits for the estimation of a second-order derivative with P̂ = ∂Ĥ/∂λ_1, Q̂ = ∂Ĥ/∂λ_2 (or some piece thereof). The circuits differ by whether QPE or a VQE is used for state preparation. If QPE is used for state preparation, the total phase accumulated by the ancilla qubit over the circuit is

g(k_0, k_1) = Σ_{j,m,n} a_m^* a_n ⟨Ψ_m|P̂|Ψ_j⟩ ⟨Ψ_j|Q̂|Ψ_n⟩ e^{ik_0 t (E_m + E_n)} e^{ik_1 t E_j}.

Applying Prony's method in k_0 at fixed k_1 will obtain a signal at phase 2tE_0 with amplitude

α_{0,0}(k_1) = Σ_j |a_0|^2 ⟨Ψ_0|P̂|Ψ_j⟩ ⟨Ψ_j|Q̂|Ψ_0⟩ e^{ik_1 t E_j}.   (35)

A second application of Prony's method in k_1 allows us to obtain the required amplitudes and the eigenvalues ω_j ≈ E_j t, allowing classical calculation of both the amplitudes and energy coefficients required to evaluate Eq. 9. If a VQE is used for state preparation instead, one must post-select on the system register being returned to the state |0…0⟩ (by inverting the preparation unitary, as in Sec. IV C 1). Following this, the ancilla qubit accumulates the phase g(k_1) = α_{0,0}(k_1) (where α_{0,0} is as defined above, with |Ψ_0^(VQE)⟩, the state prepared by the VQE unitary, in place of the true ground state of the system, which it may not equal).
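The phase-accumulation formula above can be verified on a random toy system, assuming the controlled-operation ordering e^{ik_0Ĥt} P̂ e^{ik_1Ĥt} Q̂ e^{ik_0Ĥt} (our reading of the circuit); the sketch compares a direct matrix-product evaluation against the eigenbasis sum:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d, t = 4, 0.3
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2                  # random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)                  # eigenvalues E_n, eigenvectors
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                # prepared state
a = V.conj().T @ psi                      # amplitudes a_n = <Psi_n|psi>
P = np.kron(np.eye(2), np.array([[0, 1], [1, 0]]))   # I⊗X as unitary "P"
Q = np.kron(np.diag([1.0, -1.0]), np.eye(2))         # Z⊗I as unitary "Q"

def g_circuit(k0, k1):
    U = expm(1j*k0*H*t) @ P @ expm(1j*k1*H*t) @ Q @ expm(1j*k0*H*t)
    return psi.conj() @ U @ psi

def g_formula(k0, k1):
    Pm = V.conj().T @ P @ V               # <Psi_m|P|Psi_j>
    Qm = V.conj().T @ Q @ V               # <Psi_j|Q|Psi_n>
    ph0 = np.exp(1j * k0 * t * E)
    ph1 = np.exp(1j * k1 * t * E)
    return np.einsum('m,n,mj,jn,m,n,j->',
                     a.conj(), a, Pm, Qm, ph0, ph0, ph1)
```

The agreement of the two evaluations confirms the e^{ik_0 t (E_m + E_n)} e^{ik_1 t E_j} phase structure of the signal.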
Both methods may be extended immediately to estimate higher-order amplitudes by inserting additional excitations and rounds of QPE, resulting in amplitudes of the form α 0,0 (k 1 , . . . , k X ). To explore this in more detail, in App. C we apply this method to a simple toy system.

Energy discretization (resolution error)
The maximum number of frequencies estimatable from a signal g(0), …, g(k) is (k + 1)/2. (This can be seen by parameter counting; it differs from the bound of k for QPE [84] as the amplitudes are not real.) As the time required to obtain g(k) scales at best linearly with k (Sec. IV C 2), we cannot expect fine enough resolution of all 2^{N_sys} eigenvalues present in an N_sys-qubit system. Instead, a small amplitude A(j_1, …, j_X) (|A(j_1, …, j_X)| ≤ ∆) will be binned with paths A(l_1, …, l_X) of similar energy (δ = max_x |E_{j_x} − E_{l_x}| ≲ ∆), and labeled with the same energy E_{B_x} ≈ E_{j_x} ≈ E_{l_x} [84]. Here, ∆ is controlled by the length of the signal g(k), i.e., ∆ ∝ max(k)^{-1}. This grouping does not affect the amplitudes; as evolution by e^{ikĤt} does not mix eigenstates (regardless of energy), terms of the form |Ψ_{j_x}⟩⟨Ψ_{l_x}| do not appear. (Such an additional amplitude error would occur if one attempted to calculate single amplitudes of the form ⟨Ψ_j|P̂|Ψ_l⟩ on a quantum device, e.g., using the method in Sec. IV D or that of Ref. [37], and multiply them classically to obtain a d > 1-th order derivative.) The PPE method then approximates Eq. 9 as

D^{(d)} ≈ Σ_{B_1, …, B_X} f(E_{B_1}, …, E_{B_X}) Re(A(B_1, …, B_X) A_0^{-1}).   (38)

Classical post-processing then need only sum over the (polynomially-many in ∆^{-1}) bins B_x instead of the (exponentially-many in N_sys) eigenstates |Ψ_{j_x}⟩, which is then tractable.
To bound the resolution error in the approximation, we consider the error if the E_j were drawn randomly from bins of width ||Ĥ||_S ∆ (where ||Ĥ||_S is the spectral norm).
The energy functions f take the form of products of X_A − 1 factors 1/(E_{j_x} − E_0). If each term is independent, these may be bounded as

|f| ≤ δ^{−(X_A − 1)},

where δ is the gap between the ground and excited states. Then, as the excitations P̂ are unitary, for each amplitude A one may bound |A| ≤ 1, and the error in each term by (X_A − 1) ||Ĥ||_S ∆ δ^{−X_A}. Propagating variances then obtains

Var[D^{(d)}] ≲ N_A (X_A − 1)^2 ||Ĥ||_S^2 ∆^2 δ^{−2X_A},

where N_A is the number of amplitudes in the estimation of D^{(d)}. As we must decompose each operator into unitaries to implement in a circuit, N_A ∝ N_U^d. This bound is quite loose; in general we expect excitations ∂Ĥ/∂λ to couple to low-lying excited states, which lie in a constant energy window (rather than one of width ||Ĥ||_S), and contributions from different terms should be correlated (implying that N_A should be treated as constant here). This implies that one may take ∆ roughly constant in the system size, which we assume in this work.

Sampling noise error
We now consider the error in calculating Eq. 38 from a finite number of experiments (which is separate from the resolution error above). Following Sec. IV D we have that, if QPE is used for state preparation, the error in each estimated amplitude scales as A_0^{-1} K^{-1}. If one were to use a VQE for state preparation, the factors of A_0 would be replaced by the state error of Sec. IV C 4; we have not included this calculation in Tab. II for the sake of simplicity. Then, assuming each term in Eq. 38 is independently estimated, the variances of the N_A terms add, giving a total error √N_A times that of a single term. Substituting the individual scaling laws obtains the result reported in Tab. II, where again N_A ∝ N_U^d.

F. Eigenstate truncation approximation details
In this section, we outline the classical post-processing required to evaluate Eq. 9 in the ETA, using QSE to generate approximate eigenstates. We then calculate the complexity cost of such estimation, and discuss the systematic error in an arbitrary response approximation from Hilbert space truncation.
The chosen set of approximate excited states |Ψ̃_j⟩ defines a subspace H^(QSE) of the larger FCI Hilbert space H^(FCI). To calculate expectation values within this subspace, we project the operators Ô of interest (such as derivatives like ∂Ĥ/∂λ) onto H^(QSE), giving a set of reduced operators Ô^(QSE) (O^(QSE)_{i,j} = ⟨Ψ̃_i|Ô|Ψ̃_j⟩). These are N_E × N_E-dimensional classical matrices, which may be stored and operated on in polynomial time. Methods to obtain the matrix elements O^(QSE)_{i,j} depend on the form of the |Ψ̃_j⟩ chosen. Within the QSE, one can obtain these by directly measuring [40] using the techniques outlined in Sec. IV C 1, and rotating the basis from {|χ_j⟩} to {|Ψ̃_j⟩} (using Eq. 14). The computational complexity of a derivative calculation within the QSE is roughly independent of the choice of |Ψ̃_j⟩. The error may be bounded above by the error in each term of the N_E × N_E projected matrices, which scales as n_m^{-1/2} with direct sampling, or as K^{-1} when the amplitude amplification algorithm is available. We assume that the N_E^2 terms are independently estimated, in which case the error scales with N_E. In general this will not be the case, and it could scale as badly as N_E^2, but we do not expect this to be typical. Indeed, one can potentially use the covariance between different matrix elements to improve the error bound [40]. As we do not know the precise improvement this will provide, we leave any potential reduction in the computational complexity stated in Tab. II to future work. The calculation requires n_m repetitions of N_T circuits for each pair of the N_E excitations, leading to a total number of n_m N_T N_E^2 preparations (each of which has a time cost T_P), as stated in Tab. II. (With the amplitude amplification algorithm, the dominant time cost comes from running O(N_T N_E^2) circuits of length KT_P.)
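The projection onto H^(QSE) can be illustrated classically on a toy system. The sketch below (with a hypothetical two-operator excitation set) builds the projected matrices from states |χ_i⟩ = Ê_i|Ψ_0⟩ and solves the resulting generalized eigenproblem:

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                 # random Hermitian toy Hamiltonian
E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                           # exact ground state

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
# Expansion basis |chi_i> = E_i |psi0>; the operator set is hypothetical.
ops = [np.eye(4), np.kron(X, Y)]
chis = np.array([O @ psi0 for O in ops])  # shape (N_E, 4)

S = chis.conj() @ chis.T                  # overlap matrix <chi_i|chi_j>
Hq = chis.conj() @ H @ chis.T             # projected H^(QSE)_{i,j}
evals, evecs = scipy.linalg.eigh(Hq, S)   # generalized eigenproblem
```

Because the identity is included in the excitation set, the subspace contains the exact ground state and the lowest QSE eigenvalue reproduces E_0 exactly; in general the QSE eigenvalues are variational upper bounds.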
Regardless of the method of generating eigenstates, the ETA incurs a systematic truncation error from approximating an exponentially large number of true eigenstates |Ψ_j⟩ by a polynomial number of approximate eigenstates |Ψ̃_j⟩ = Σ_l Ã_{j,l} |Ψ_l⟩. This truncation error can be split into three pieces. Firstly, an excitation P̂|Ψ_0⟩ may not lie within the response subspace H^(QSE), in which case the pieces lying outside the space will be truncated away. Secondly, the term P̂|Ψ̃_j⟩⟨Ψ̃_j|Q̂ may contain terms of the form P̂|Ψ_j⟩⟨Ψ_l|Q̂, which do not appear in the original resolution of the identity. Thirdly, the approximate energies Ẽ_j may not be close to the true energies E_j (especially when |Ψ̃_j⟩ is a sum of true eigenstates |Ψ_l⟩ with large energy separation E_j − E_l). If one chooses excitation operators Ê_j in the QSE so that P̂ = Σ_j p_j Ê_j, one completely avoids the first error source. By contrast, if one chooses a truncated set of true eigenstates (|Ψ̃_j⟩ = |Ψ_j⟩), one avoids the second and third error sources exactly. In App. D we expand on this point, and place some loose bounds on these error sources.

G. Experimental methods
The experimental implementation of the geometry optimization algorithm was performed using two of three transmon qubits in a circuit QED quantum processor. This is the same device used in Ref. [85] (the raw data is the same as in Fig. 1(e) of that paper, but with heavy subsequent processing). The two qubits have individual microwave lines for single-qubit gating, flux-bias lines for frequency control, and dedicated readout resonators with a common feedline. Individual qubits are addressed in readout via frequency multiplexing. The two qubits are connected via a common bus resonator that is used to achieve an exchange gate, via a flux pulse on the high-frequency qubit, with an uncontrolled additional single-qubit phase that was cancelled out in post-processing. The exchange angle θ may be fixed to a resolution of π/6000 by using the pulse duration (set with 1 ns resolution) as a coarse knob and fine-tuning with the pulse amplitude. Repeated preparation and measurement of the state generated by exciting to |01⟩ and exchanging through one of 41 different choices of θ resulted in the estimation of 41 two-qubit density matrices ρ_i via linear inversion tomography of 10^4 single-shot measurements per pre-rotation [86]. All circuits were executed in eQASM [87] code compiled with the QuTech OpenQL compiler, with measurements performed using the qCoDeS [88] and PycQED [89] packages.
To use the experimental data to perform geometry optimization for H_2, the ground state was estimated via a VQE [10,11]. The Hamiltonian at a given H-H bond distance R_{H−H} was calculated in the STO-3G basis using the Dirac package [53], converted to a qubit representation using the Bravyi-Kitaev transformation, and reduced to two qubits via exact block-diagonalization [12] using the Openfermion package [54] and the Openfermion-Dirac interface [55]. With the Hamiltonian Ĥ(R_{H−H}) fixed, the ground state was chosen variationally: ρ(R_{H−H}) = argmin_{ρ_i} Trace[Ĥ(R_{H−H}) ρ_i]. The gradient and Hessian were then calculated from ρ(R_{H−H}) using the Hellmann-Feynman theorem (Sec. II B) and the ETA (Sec. IV F), respectively. For the ETA, we generated eigenstates using the QSE, with the Pauli operator XY as a single excitation. This acts within the number-conserving subspace of the two-qubit Hilbert space, and, being imaginary, will not have the real-valued H_2 ground state as an eigenstate. (This in turn guarantees the generated excited state is linearly independent of the ground state.) In future work, one would want to optimize the choice of θ at each distance R_{H−H}; this was not performed here due to time constraints. We have also not implemented the error mitigation strategies studied in Ref. [85], for the sake of simplicity.
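The Hellmann-Feynman step can be sketched on a toy parameterized Hamiltonian H(λ) = H_0 + λH_1 (matrices below are illustrative, not the H_2 Hamiltonian): the gradient of the ground energy equals Tr[ρ ∂H/∂λ] evaluated in the exact ground state, which a finite difference confirms.

```python
import numpy as np

# Illustrative 2x2 model: H(lam) = H0 + lam * H1, with H1 = dH/dlam.
H0 = np.array([[-1.0, 0.2], [0.2, 0.5]])
H1 = np.array([[0.3, 0.1], [0.1, -0.3]])

def ground(lam):
    E, V = np.linalg.eigh(H0 + lam * H1)
    return E[0], V[:, 0]

lam = 0.4
E0, v = ground(lam)
rho = np.outer(v, v)                   # ground-state density matrix
hf_grad = np.trace(rho @ H1)           # Hellmann-Feynman gradient
fd_grad = (ground(lam + 1e-6)[0] - ground(lam - 1e-6)[0]) / 2e-6
```

The theorem holds exactly here because the state is an exact eigenstate; with an approximate VQE state, the state error of Sec. IV C 4 enters.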

H. Simulation methods
Classical simulations of the quantum device were performed with the full-density-matrix simulator quantumsim [50]. A realistic error model of the device was built using experimentally calibrated parameters to account for qubit decay (T_1), pure dephasing (T_2^*), residual excitations of both qubits, and additional dephasing of qubits fluxed away from the sweet spot (which reduces T_2^* to T_2^{*,red} for the duration of the flux pulse). This error model further accounted for differences in the observed noise model on the individual qubits, as well as fluctuations in coherence times and residual excitation numbers. Further details of the error model may be found in Ref. [85] (with device parameters in Tab. S1 of that reference).
With the error model given, 100 simulated experiments were performed at each of the 41 experimental angles given. Each experiment used unique coherence time and residual excitation values (drawn from a distribution of the observed experimental fluctuations), and had appropriate levels of sampling noise added. These density matrices were then resampled 100 times for each simulation.

V. AUTHOR CONTRIBUTIONS
New quantum algorithms were designed by TEO. Errors were estimated by TEO, BS, and AD. Experiment was performed by RS and LDC. Density matrix simulations were performed by XBM. Computational chemistry calculations performed by BS and LV. Gradient descent and polarizability calculations from experimental and simulated data were performed by TEO and BS. Relevance to quantum chemistry and suggested applications provided by FB and LV. Paper written by TEO, BS, FB, and LV (with input from all other authors).

VI. COMPETING INTERESTS
The molecular orbitals {φ_p(r)} are obtained from a self-consistent field calculation, φ_p(r) = Σ_μ c_{μp} χ_μ(r), where {c_{μp}} are the MO coefficients and {χ_μ(r)} the atomic orbitals (AOs). The atomic orbitals are usually chosen to be Gaussian functions centered at the nuclear positions R_i, and will move with the nuclei if the geometry is modified. An alternative is to express {φ_p(r)} in terms of plane-wave-type or Gausslet-type basis functions [90,91], which reduce the position dependence or reduce the number of non-zero g_pqrs terms, respectively. With truncated atom-centered basis sets, one needs to consider the so-called 'wavefunction forces' (see Ref. [92] and references therein) resulting from the dependence of the basis set on the nuclear geometry. However, with an appropriately basis-dependent second-quantized Hamiltonian, all changes in the basis can be incorporated into those of the Hamiltonian [93–95].
Similarly to classical computing, a quantum computer will be limited by the systematic error of the truncated basis set chosen for the problem. Reducing this error by directly increasing the number of basis functions is costly (as this number defines the system size N_sys). Indeed, the errors in correlation energies decay slowly, as ε = O(N_sys^{-1}), due to the presence of the Coulomb cusp in the wavefunction [96]. To bypass this slow convergence, the interelectron distance r_12 can be incorporated explicitly in the wavefunction (as in R12/F12 wavefunctions), leading to the so-called explicitly correlated methods [97,98]. In particular, for quantum computers the basis set error must be traded against the number of terms N_H in the Hamiltonian and the length of the circuits required to prepare ground states or perform phase estimation; optimizing this trade-off is a topic of much debate in the field [76,90,99,100].
Let us consider the general case in which both the MO coefficients and the Hamiltonian matrix elements depend on the perturbation, and consider first the analytical calculation of these derivatives with respect to a general perturbation λ. (When λ = R_i this yields the Hessian for geometry optimization, and when λ = F_i this describes an applied electric field for a polarizability calculation.) According to Eq. (A1), the derivatives of the one- and two-electron integrals in the MO basis contain two contributions (Eqs. A4 and A5): terms involving the response of the MO coefficients, and terms involving the derivatives of the integrals in the AO basis. The matrix U^(1)(λ) parametrizes the first-order changes in the MO coefficients and can be obtained by solving the coupled perturbed Hartree-Fock (CPHF) equations [101–103]. When the number of perturbations that need to be evaluated is large, the use of an explicitly λ-dependent U matrix in the evaluation of the energy derivative can be avoided with the Z-vector technique of Handy and Schaefer [104]. The last terms in Eqs. (A4) and (A5) depend on the derivatives of the one- and two-electron integrals in the AO basis. For the two-electron integrals these read

∂(μν|ρτ)/∂λ = (∂χ_μ/∂λ χ_ν|χ_ρ χ_τ) + (χ_μ ∂χ_ν/∂λ|χ_ρ χ_τ) + (χ_μ χ_ν|∂χ_ρ/∂λ χ_τ) + (χ_μ χ_ν|χ_ρ ∂χ_τ/∂λ),

and similarly for the one-electron integrals. The resulting derivative operator can be mapped onto a qubit Hamiltonian via whichever mapping was chosen for the Hamiltonian itself (see Methods section of main text), as these encodings are Hamiltonian- (and thus perturbation-) independent. In some cases (for instance, the application of an electric field) the derivatives of the basis functions {χ_μ(r)} are equal to zero, because the basis functions do not depend on the perturbation.

The phase measurement M_T then obtains a function

g(k_0, k_1) = cos((2k_0 − k_1) λ_X t) = ½ (e^{i(2k_0 − k_1) λ_X t} + e^{−i(2k_0 − k_1) λ_X t}).
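Returning to the AO basis-function derivatives above: these can be sanity-checked numerically against finite differences. A minimal 1D s-Gaussian overlap example (exponents arbitrary) differentiates with respect to the center A:

```python
import numpy as np

def overlap(A, B, alpha=0.8, beta=1.2):
    """1D normalized-exponent s-Gaussian overlap for primitives at A and B."""
    p = alpha + beta
    return np.sqrt(np.pi / p) * np.exp(-alpha * beta * (A - B)**2 / p)

def d_overlap_dA(A, B, alpha=0.8, beta=1.2):
    """Analytic derivative of the overlap w.r.t. the center A."""
    p = alpha + beta
    return -2 * alpha * beta * (A - B) / p * overlap(A, B, alpha, beta)

A, B = 0.0, 0.7
fd = (overlap(A + 1e-6, B) - overlap(A - 1e-6, B)) / 2e-6  # finite difference
```

The same cross-check applies to the full two-electron derivative integrals, where each of the four terms corresponds to differentiating one basis function.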
We further Fourier transform α_{−,−}(k_1) (in terms of k_1), as we are interested in the derivative of the ground state. This obtains a peak at tω_+ = tλ_X, with amplitude proportional to ⟨E_−|Z|E_+⟩⟨E_+|Z|E_−⟩. To finish the computation of the gradient, we note that |α_{−,−}(k_1 = 0)| = a_−^* a_− = ½, allowing us to compute ⟨E_−|Z|E_+⟩⟨E_+|Z|E_−⟩ = 1. We then substitute into Eq. 5 of the main text,
as required. We observe that the above procedure just as easily obtains the gradient of the excited state |E_+⟩, as our starting state was a superposition of both ground and excited states. Let us now repeat the calculation for the other second-order derivatives. Running the circuit of Fig. 8 with P̂ = X, Q̂ = Z, the system state evolves to

(1/√2) [|0⟩ ⊗ |0⟩ + |1⟩ ⊗ (−cos(k_1 λ_X t)|1⟩ + i sin(k_1 λ_X t)|0⟩)],

and the total phase accumulated on the ancilla qubit may be calculated to be

g(k_0, k_1) = i sin(k_1 t λ_X) = ½ (e^{ik_1 t λ_X} − e^{−ik_1 t λ_X}).
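This toy-model signal can be reproduced numerically, again assuming the operation ordering e^{ik_0Ĥt} P̂ e^{ik_1Ĥt} Q̂ e^{ik_0Ĥt} for the controlled operations; the sketch checks both the value g(k_0, k_1) = i sin(k_1 t λ_X) and its independence of k_0:

```python
import numpy as np
from scipy.linalg import expm

lam, t = 0.9, 0.4                       # illustrative lambda_X and timestep
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = lam * X                             # toy Hamiltonian with spectrum +/-lam
psi = np.array([1.0, 0.0], dtype=complex)  # start in |0> = (|E+> + |E->)/sqrt(2)

def g(k0, k1):
    # Accumulated ancilla phase with P = X, Q = Z.
    U = expm(1j*k0*H*t) @ X @ expm(1j*k1*H*t) @ Z @ expm(1j*k0*H*t)
    return psi.conj() @ U @ psi
```

Since X commutes with H and Z maps e^{ik_0Ht}|0⟩ to e^{−ik_0Ht}|0⟩, the k_0 dependence cancels analytically, which the numerics confirm.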
A Fourier transform of this function in k_0 obtains spurious peaks at ω_s = ½(ω_+ + ω_−) = 0, but none at ω_− or ω_+, implying that all amplitudes are zero, and the total derivative is 0 as well.
This time, a Fourier transform at fixed k_1 obtains a peak at ω_− = −tλ_X as required. The second Fourier transform in k_1 will then return a peak at ω_− again (and no peak at ω_+). One must then note that the sum over eigenstates in Eq. 5 of the main text is over excited states only, implying that this peak should be excluded, and the final derivative vanishes, as required.
The considerations in calculating the last two derivatives above demonstrate facets of the PPE resolution error. In practice, one must take sufficient data to resolve any spurious peaks from the true ground state (so as not to confuse these contributions). In a larger system, spurious peaks may appear halfway between any pair of eigenstates, corresponding to scattering processes that do not return the system to the initial state. Interestingly, these peaks will be absent if a VQE is used for state preparation, which suggests that they might not be fundamental to the estimation technique. One must also take sufficient data to determine whether peaks from the second Fourier transform correspond to excitations from the ground state to the ground state, so that they may be excluded. Both requirements imply a resolution on the order of the gap between ground and excited states.

From here, one may construct the vector f_k = e^{ikφ} = f_k^0 + i f_k^1 and estimate the amplitude A via least squares as

Ã = Σ_k Re(f_k^* g_k) / Σ_k |f_k|^2.

(We may evaluate the denominator immediately as, regardless of any error in φ, f_k will satisfy |f_k|^2 = 1.) We do not necessarily wish to include all k ∈ {1, …, k_max} in this estimation, as the error in f_k at large k is smaller than the error at small k. To make things worse, errors in f_k^a are correlated. It was derived in [84] that this correlation depends only on g_{k_max}; adjusting for the fact that |A| ≤ 1, we obtain

∂f_k^a / ∂g_{k'}^a ∝ δ_{k', k_max} (k / k_max) |A|.
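The least-squares amplitude estimator Ã above can be sketched numerically, assuming the phase φ is already known from a previous phase-estimation round (signal parameters and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
phi, A_true, kmax = 0.7, 0.6, 50
ks = np.arange(1, kmax + 1)
f = np.exp(1j * phi * ks)                 # known-phase signal vector f_k
noise = 0.01 * (rng.normal(size=kmax) + 1j * rng.normal(size=kmax))
g = A_true * f + noise                    # noisy measured signal g_k

# Least-squares estimate of the (real) amplitude; the denominator is
# simply kmax since |f_k|^2 = 1 for every k.
A_est = np.sum((f.conj() * g).real) / np.sum(np.abs(f)**2)
```

The estimator averages down the per-point noise, with error shrinking as k_max^{-1/2} when the errors at different k are independent; the correlations noted above modify this in practice.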