Efficient Verification of Anticoncentrated Quantum States

A promising use of quantum computers is to prepare quantum states that model complex domains, such as correlated electron wavefunctions or the underlying distribution of a complex dataset. Such states need to be verified in view of algorithmic approximations and device imperfections. As quantum computers grow in size, however, verifying the states they produce becomes increasingly problematic. Relatively efficient methods have been devised for verifying sparse quantum states, but dense quantum states have remained costly to verify. Here I present a novel method for estimating the fidelity $F(\mu,\tau)$ between a preparable quantum state $\mu$ and a classically specified target state $\tau$, using simple quantum circuits and on-the-fly classical calculation (or lookup) of selected amplitudes of $\tau$. Notably, in the targeted regime the method demonstrates an exponential quantum advantage in sample efficiency over any classical method. The simplest version of the method is efficient for anticoncentrated quantum states (including many states that are hard to simulate classically), with a sample cost of approximately $4\epsilon^{-2}(1-F)dp_{\text{coll}}$ where $\epsilon$ is the desired precision of the estimate, $d$ is the dimension of the Hilbert space in which $\mu$ and $\tau$ reside, and $p_{\text{coll}}$ is the collision probability of the target distribution. I also present a more sophisticated version of the method, which uses any efficiently preparable and well-characterized quantum state as an importance sampler to further reduce the number of copies of $\mu$ needed. Though some challenges remain, this work takes a significant step toward scalable verification of complex states produced by quantum processors.


I. INTRODUCTION
One of the more promising and imminent applications of quantum computers is the efficient preparation of quantum states that model complex, computationally challenging domains. For example, quantum computers can in principle efficiently simulate states of quantum many-body systems [1,2] and learn to output states that model complex data sets [3,4]. But real quantum processors have imperfections, and the algorithms themselves often involve approximations whose impact is not fully understood. Consequently, for the foreseeable future, complex states produced by quantum processors need to be experimentally verified. Unfortunately, verifying many-qubit states presents a major challenge since the number of parameters specifying a quantum state is generally exponential in the number of qubits. Recently developed quantum processors are already capable of producing states too large and complex to be directly verified using available methods [5].
Nearly all known methods for efficiently verifying large quantum states either require the state to have special structure, or require additional quantum resources. If one has the means to prepare reference copies of the target state τ , the quantum swap test [6] can be used to estimate the fidelity of any quantum state µ with respect to τ using O(1) copies of τ . But often, reference copies of the target state are not available. In such cases, known structure of the state in question may be used to reduce the number of measurements needed to characterize it. For example, pure or low-rank states are described by exponentially fewer parameters than mixed states (though still exponentially many). Compressive sampling can be used to efficiently learn quantum states that are sparse in a known basis [7][8][9][10]. States produced by local dynamics can also be efficiently characterized [11].
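For orientation, the swap-test statistic referenced above is easy to emulate classically: the ancilla accepts (reads 0) with probability $(1+F)/2$. The sketch below (numpy; the helper names are mine, purely illustrative) checks this relation on random state vectors.

```python
import numpy as np

rng = np.random.default_rng(7)

def rand_state(d):
    """Haar-random pure state as a complex unit vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def swap_test_accept_prob(mu, tau):
    """Probability of measuring the swap-test ancilla in |0>,
    which equals (1 + |<mu|tau>|^2) / 2."""
    fidelity = abs(np.vdot(mu, tau)) ** 2
    return (1.0 + fidelity) / 2.0

mu = rand_state(8)
p = swap_test_accept_prob(mu, mu)             # identical states: accept with certainty
q = swap_test_accept_prob(mu, rand_state(8))  # generic pair: between 1/2 and 1
```

Note that the acceptance probability never drops below 1/2, so estimating a small fidelity this way requires resolving a small deviation from 1/2.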
Notably, it is not necessary to fully characterize an unknown state in order to verify it. Refs. [12,13] propose expressing F(µ, τ) as a sum of terms that are sampled and estimated separately. That approach enables scalable verification of states that are concentrated in the Pauli operator basis.
In this Letter I present a novel hybrid (quantum-classical) algorithm, designated EVAQS, for efficient verification of anticoncentrated quantum states [26]. The algorithm estimates the fidelity F(µ, τ) between a preparable quantum state µ and a classically described pure quantum state τ. The underlying idea of the algorithm is to compare randomly selected sparse projections or "snippets" of the unknown state µ against corresponding snippets of the target state τ. Since the snippets of τ are small, they can be efficiently calculated and prepared. The fidelity between µ and τ is then estimated as a weighted average of the fidelities of corresponding snippets. The basic version of EVAQS consists of a simple feed-forward quantum circuit applied to multiple copies of the prepared state µ and two ancilla qubits. The feed-forward part of the circuit involves an on-the-fly classical calculation (or lookup) of a few randomly selected probability amplitudes of τ. (These amplitudes need be determined only up to a constant of proportionality.) The cost of the algorithm is quantified by the number of times N the testing circuit must be run to estimate F(µ, τ) to precision $\epsilon$. In the important regime in which µ is close to τ, $N \approx 4\epsilon^{-2}(1 - F(\mu,\tau))\,d\,p^{(\tau)}_{\rm coll}$, where d is the dimension of the Hilbert space in which µ and τ reside and $p^{(\tau)}_{\rm coll}$ is the collision probability of the target distribution. Note that $p^{(\tau)}_{\rm coll}$ is a measure of the concentratedness of τ ($1/p^{(\tau)}_{\rm coll}$ is the effective support of τ). It follows that EVAQS is efficient if τ is anticoncentrated. Optionally, an auxiliary quantum state α may be used to importance sample µ, further reducing the number of repetitions needed. Any state α that is efficiently preparable, well characterized, and has support over τ may be used. For this more general version of the algorithm I find that $N \approx 4\epsilon^{-2}(1 - F(\mu,\tau))\,(1 + \chi^2(\tau,\alpha))$, where $\chi^2(\tau,\alpha)$ is the chi-square divergence between the distributions induced by τ and α.
In this version, the efficiency of EVAQS is limited only by how well α samples τ .
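The quoted sample costs can be sketched numerically. The snippet below is a hedged illustration (`sample_cost` simply evaluates the heuristic formula quoted above, and the function names are mine); it contrasts a fully anticoncentrated target, for which $d\,p_{\rm coll} = 1$, with a fully concentrated one, for which $d\,p_{\rm coll} = d$.

```python
import numpy as np

def collision_prob(p):
    """Collision probability sum_x p_x^2; its inverse is the
    effective support size of the distribution."""
    return float(np.sum(np.asarray(p) ** 2))

def sample_cost(eps, F, d, p_coll):
    """Heuristic repetition count N ~ 4 eps^-2 (1 - F) d p_coll
    quoted for the basic version of the method."""
    return 4.0 * eps ** -2 * (1.0 - F) * d * p_coll

d = 2 ** 10
uniform = np.full(d, 1.0 / d)            # fully anticoncentrated target
spiked = np.zeros(d); spiked[0] = 1.0    # fully concentrated target

n_uniform = sample_cost(0.01, 0.99, d, collision_prob(uniform))  # ~400 shots
n_spiked = sample_cost(0.01, 0.99, d, collision_prob(spiked))    # ~400 * d shots
```

The cost for the anticoncentrated target is independent of d, while the concentrated target pays the full dimension factor; this is the regime in which the importance-sampling version becomes valuable.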
The state verification method proposed here constitutes a novel quantum capability, as there is no method of verifying an anticoncentrated classical distribution with fewer than $O(d^{1/2})$ samples, where d is the effective support size [14][27]. And until recently, there was no known efficient method of verifying arbitrary anticoncentrated quantum states. While this manuscript was in preparation, Huang et al. introduced a type of randomized tomography that enables low-rank projections of arbitrary quantum states (e.g., fidelity) to be estimated efficiently [15]. In their method the unknown state is measured in a set of random bases, each obtained by applying a random global Clifford operation to the unknown state. As they acknowledged, most global Clifford operations are nontrivial to implement. Also, estimating a fidelity with their approach evidently requires a substantial portion of the target state to be classically computed. In contrast, the method proposed here involves only very simple circuits and requires the calculation of only a small fraction of the target state. On the other hand, the results of such calculations must be available on demand for a feed-forward measurement. Regardless of which method is ultimately found to be more practical, the approach described here is novel and stands to be interesting in its own right.
While this work addresses a major problem (namely, sample complexity) for verification of some large quantum states, it must be acknowledged that some challenges remain. First of all, quantum states that have an "in between" amount of concentration fall in the gap between this work and prior work and remain difficult to verify. For example, spin-glass thermal states can simultaneously have exponentially large support (making them costly for standard methods) and be exponentially sparse (making them difficult for the method described here). Secondly, there is the very fundamental problem that verifying a quantum state requires a specification of its readily measurable properties, whereas the most useful quantum states for computational purposes are the ones for which such a classical specification is likely to be difficult to obtain. While it is perhaps too much to hope that all interesting quantum states would be efficiently verifiable, this work shows that some significant inroads may be made.

A. The EVAQS Algorithm
Let $|\mu\rangle$ be an efficiently preparable n-qubit state that is intended to approximate a (pure) target state $|\tau\rangle = \sum_{x=1}^{d} \tau_x |x\rangle$, where $d = 2^n$. Let us call a projection of $|\tau\rangle$ onto a sparse subset of the computational basis a snippet. The EVAQS algorithm is motivated by two observations: The first is that $|\mu\rangle$ and $|\tau\rangle$ are similar if and only if all corresponding snippets of $|\mu\rangle$ and $|\tau\rangle$ are similar. The second is that, given the ability to compute selected coefficients of $|\tau\rangle$, it is not difficult to prepare snippets of $|\tau\rangle$ for direct comparison with corresponding snippets of $|\mu\rangle$. This suggests the following strategy: (a) Project out a random snippet of $|\mu\rangle$. (b) Construct the corresponding snippet of $|\tau\rangle$ and test its fidelity with the snippet of $|\mu\rangle$ using the quantum swap test [16]. (c) Repeat (a)-(b) many times and estimate the fidelity $F(\mu,\tau) \equiv |\langle\mu|\tau\rangle|^2$ as a weighted average of the fidelities of the projected snippets. In a nutshell, the idea is to verify a quantum state by verifying its projections onto random two-dimensional subspaces.
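The snippet notion can be illustrated classically. The sketch below (numpy; the function names are mine, not part of the algorithm) projects a state onto a two-element subset of the basis and renormalizes; identical states agree on every such snippet.

```python
import numpy as np

def snippet(state, idx):
    """Project a state vector onto a sparse subset of basis indices
    and renormalize; returns None if the projection has no weight."""
    s = state[list(idx)]
    nrm = np.linalg.norm(s)
    return s / nrm if nrm > 0 else None

def snippet_fidelity(mu, tau, idx):
    """Fidelity of the two states restricted to the subspace
    spanned by the basis states in idx."""
    a, b = snippet(mu, idx), snippet(tau, idx)
    if a is None or b is None:
        return None
    return abs(np.vdot(a, b)) ** 2

rng = np.random.default_rng(0)
tau = rng.normal(size=16) + 1j * rng.normal(size=16)
tau /= np.linalg.norm(tau)

# Identical states agree on every two-dimensional snippet.
fids = [snippet_fidelity(tau, tau, (x, y))
        for x in range(16) for y in range(16) if x != y]
```

The converse direction (all snippets agreeing implies the full states agree, with suitable weighting) is what the weighted average in step (c) exploits.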
Below I present two versions of the algorithm: A basic version that samples µ uniformly and is efficient when τ is anticoncentrated, and a more general version that uses an auxiliary quantum state α to importance sample µ when τ is not anticoncentrated. In principle α can be any accurately characterized reference state, but to be useful it should place substantial probability on the support of τ while being significantly easier to prepare than τ itself. The general version of the algorithm involves a larger and more complex quantum circuit than the basic version, but requires far fewer circuit repetitions when α is a substantially better sampler of µ than the uniform distribution. Both versions of the algorithm require the ability to compute, within the decoherence time of the qubits, amplitudes $\tilde\tau_x \propto \tau_x$ for any given x; for the general version, the ability to compute $\tilde\alpha_x \propto \alpha_x$ is also required.
The basic version of the algorithm is implemented by repeating a short quantum-classical circuit involving a test register containing the unknown state $|\mu\rangle \in \mathcal{H}_{2^n}$ and two ancilla qubits (Fig. 1, top). The steps of the circuit are as follows: 1. Prepare the unknown state $|\mu\rangle$ and one of the ancillas in the state $(|0\rangle + |1\rangle)/\sqrt{2}$.
2. Draw a uniformly distributed random number v ∈ {0, 1} n and perform controlled-NOT operations between the ancilla qubit and each qubit i in the test register for which v i = 1.
3. Measure the test register in the computational basis, obtaining a value x.
4. Use x to classically compute (or look up) the target amplitudes $\tilde\tau_x \propto \tau_x$ and $\tilde\tau_{x\oplus v} \propto \tau_{x\oplus v}$.
5. Prepare the second ancilla qubit in the normalized state proportional to $\tilde\tau_x|0\rangle + \tilde\tau_{x\oplus v}|1\rangle$.
6. Measure the two ancilla qubits in the Bell basis and set $b = +1$ if the outcome is $|\Phi^+\rangle$, $b = -1$ if the outcome is $|\Phi^-\rangle$, and $b = 0$ otherwise. (Note that the Bell measurement can be performed by a controlled-NOT between the ancillas followed by a pair of single-qubit measurements, as shown in Fig. 1a.)
The general version of the algorithm uses an additional n-qubit register containing an auxiliary state $|\alpha\rangle$ (Fig. 1, bottom). In principle α can be any well-known quantum state, though later it will be shown that the efficiency of the algorithm directly depends on how well α samples both τ and the error vector µ − τ. In this case the steps of the circuit are: 1. Prepare the test register, auxiliary register, and first ancilla qubit in the state $|\psi\rangle = \frac{1}{\sqrt{2}}|\mu\rangle|\alpha\rangle(|0\rangle + |1\rangle)$.
2. Perform a controlled swap between the test and auxiliary registers, using the first ancilla qubit as the control.
3. Measure the test and auxiliary registers in the computational basis, obtaining values x, y respectively.
4. Use x and y to classically compute (or look up) $\tilde\tau_x, \tilde\tau_y \propto \tau_x, \tau_y$ and $\tilde\alpha_x, \tilde\alpha_y \propto \alpha_x, \alpha_y$.
5. Prepare a second ancilla qubit in the normalized state proportional to $(\tilde\tau_x/\tilde\alpha_x)|0\rangle + (\tilde\tau_y/\tilde\alpha_y)|1\rangle$.
6. Same as for the basic version.
For either version of the algorithm, steps 1-6 are repeated $N \gg 1$ times. Let $x_i, y_i, b_i$ denote the values obtained in the ith trial, and let $w_i$ denote the corresponding sample weight, a simple function of the computed amplitudes $\tilde\tau$ and $\tilde\alpha$ (defined in Section III A), where $\tilde\alpha_x = 2^{-n/2}$ for the basic version of the algorithm. Then F(µ, τ) is estimated by the ratio
$\tilde F = \sum_i w_i b_i \,\big/\, \sum_i w_i b_i^2. \quad (4)$
The simple estimator (4) has a bias of order $N^{-1}$. In practice this is usually negligible, but for completeness an estimator that is unbiased to order $N^{-1}$ is derived in Appendix A.
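The shape of estimator (4) can be seen in a toy simulation. In the special case of a uniform target every sample weight is constant and $P(b = \pm 1) = (1 \pm F)/2$, so the estimator reduces to the sample mean of b. The sketch below is an assumption-laden toy model of the per-shot statistics, not a simulation of the full circuit.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trials(F, N):
    """Toy model of the per-shot outcomes b for a uniform target:
    every weight w_i = 1 and P(b = +-1) = (1 +- F)/2, so b = 0
    never occurs in this special case."""
    return rng.choice([1.0, -1.0], size=N, p=[(1 + F) / 2, (1 - F) / 2])

def fidelity_estimate(w, b):
    """Ratio estimator F~ = sum_i w_i b_i / sum_i w_i b_i^2 (eq. (4))."""
    return np.sum(w * b) / np.sum(w * b * b)

F_true = 0.9
b = simulate_trials(F_true, 200_000)
F_hat = fidelity_estimate(np.ones_like(b), b)  # close to 0.9
```

Because the estimator is a ratio of two sample means, it inherits the standard $O(N^{-1})$ ratio-estimator bias mentioned above, which is negligible at this sample size.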
I note that the circuit for the basic algorithm is quite simple, consisting of (on average) just n/2+1 controlled-NOT gates and a few single-qubit gates and measurements. The general version replaces the (on average) n/2 controlled-NOT gates with n controlled-SWAP (Fredkin) gates between n pairs of qubits. Each controlled-SWAP gate requires a handful of standard 1-and 2-qubit gates [17,18]. Thus the general version uses twice as many qubits and roughly 10 times as many gates as the basic version. However, if α is substantially better at sampling τ than the uniform distribution, the general version will require substantially fewer circuit repetitions to obtain a precise estimate.
B. Performance

Cost to Achieve a Given Precision
The cost of EVAQS is driven by the number of samples needed to obtain an estimate with sufficiently small variance. In a typical application the expectation is that µ is not a bad approximation of τ; thus the regime of greatest interest is that of small infidelity $I \equiv 1 - F$. To lowest order in I and in statistical fluctuations, the variance of $\tilde F$ can be written as a sum of per-basis-state contributions (derived in Section III B), where $\varepsilon \equiv e^{-i\phi}\mu - \tau$ is the error of µ upon correcting its global phase φ, defined via $\langle\tau|\mu\rangle = \sqrt{F}\,e^{i\phi}$. As shown in the Appendix, $|\varepsilon_x|^2$ is (to lowest order) also proportional to I, so that the entire expression is proportional to I. Note that $\mathrm{Var}\,\tilde F$ does not depend on the phases of the components of α.
The worst case is that the error resides entirely on a single component, i.e., the basis state which is most badly undersampled. In non-adversarial scenarios the error may be expected to be distributed across the support of τ. Simulations indicate that for plausible error distributions, this leads to the scaling heuristic
$N \approx 4\epsilon^{-2}(1 - F)\,(1 + \chi^2(\tau, \alpha)), \quad (8)$
where $\epsilon^2$ is the desired variance and $\chi^2(p, q) \equiv \sum_x |p_x - q_x|^2/q_x$ is the chi-square divergence of p with respect to q. χ² is a standard statistical measure that quantifies the "distance" between distributions p and q, heavily weighting differences in which q undersamples p. [28] Note that N has no intrinsic dependence on the dimension of τ.
For the basic version of the algorithm (or when α is the uniform superposition), eq. (8) simplifies even further to
$N \approx 4\epsilon^{-2}(1 - F)\,d\,p^{(\tau)}_{\rm coll}, \quad (9)$
where $1/p^{(\tau)}_{\rm coll}$ may be interpreted as the effective support size $d_{\rm eff}$ of τ. It follows that the basic algorithm is efficient so long as $d_{\rm eff}/d$ is at least 1/poly(n), that is, so long as the effective support of τ is not too small. This makes the proposed method complementary to existing methods, which are efficient for concentrated distributions. For a slightly different perspective, the factor $d\,p^{(\tau)}_{\rm coll} = d/d_{\rm eff}$ may be viewed as the overhead incurred by sampling τ uniformly rather than in proportion to its own distribution. Using standard calculus one can show that the optimal sampling distribution satisfies a condition involving both τ and σ, where σ is the normalized projection of the error $\varepsilon$ onto the space orthogonal to τ. One might have guessed that it would be sufficient for α to sample either µ or τ well. But that is incorrect: if α does not frequently sample the components of µ with the largest errors (whether or not those components themselves are large), then the (in)fidelity will not be estimated with high precision.
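The passage from eq. (8) to eq. (9) rests on the identity $1 + \chi^2(\tau, \mathrm{uniform}) = d\,p^{(\tau)}_{\rm coll}$, which follows by expanding the square in the divergence and is straightforward to verify numerically:

```python
import numpy as np

def chi_square(p, q):
    """Chi-square divergence chi^2(p, q) = sum_x |p_x - q_x|^2 / q_x."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum((p - q) ** 2 / q))

def collision_prob(p):
    """Collision probability sum_x p_x^2."""
    return float(np.sum(np.asarray(p, float) ** 2))

rng = np.random.default_rng(1)
d = 256
tau = rng.random(d); tau /= tau.sum()   # a generic target distribution
uniform = np.full(d, 1.0 / d)

lhs = 1.0 + chi_square(tau, uniform)    # 1 + chi^2(tau, uniform)
rhs = d * collision_prob(tau)           # d * p_coll -- identical quantity
```
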

Robustness to Error in the Auxiliary State
A key assumption in the general version of EVAQS is that the auxiliary state α is well-characterized. Fortunately, EVAQS is robust with respect to this assumption, in the sense that a small error in the knowledge of α leads to a correspondingly small error in the estimate of F(µ, τ).
Suppose α is mistakenly characterized as $\bar\alpha$. Then, as shown in Section III C, instead of estimating F(µ, τ) the procedure estimates $F(\mu, \bar\tau)$, where $\bar\tau$ is a perturbed version of τ. Intuitively, if $\bar\alpha$ is close to α then $\bar\tau$ will be close to τ and $F(\mu, \bar\tau)$ will be close to F(µ, τ). In Section III C and Appendix C it is shown that the resulting error in the fidelity estimate is bounded by a quantity of order $\delta_{\rm rms}$, where $\delta_{\rm rms}$ is the average relative error in $\bar\alpha$. Thus a small error in the characterization of α yields a correspondingly small error in the estimate of F(µ, τ).

C. Simulations
In this section I present the results of several simulation studies demonstrating the validity and efficacy of the proposed method. The first study demonstrates scalable verification of so-called instantaneous quantum polynomial (IQP) circuits [19]. The next two studies demonstrate sample-efficient verification of random quantum circuits, including the kind of circuits used to demonstrate quantum supremacy [5]. In each of these studies the basic version of the method (without an auxiliary state $|\alpha\rangle$) was employed.

Verification of IQP Circuits
IQP circuits are currently of interest as a family of relatively simple quantum circuits whose output distributions are hard to simulate classically [20,21]. IQP circuits are well suited for demonstrating EVAQS, as their outputs are typically anticoncentrated in the computational basis. Even better, if the output of an IQP circuit is transformed into the Hadamard basis, the distribution is not only perfectly uniform (yielding the lowest possible sample complexity) but also easy to calculate on a classical computer. This makes EVAQS a fully scalable way to verify IQP circuits.
An n-qubit IQP circuit of depth m can be defined as a set of m multiqubit X rotations acting on the $|0\rangle^{\otimes n}$ state. A multiqubit NOT operation may be written as $X^a \equiv X_1^{a_1} \otimes \cdots \otimes X_n^{a_n}$ for $a \in \{0,1\}^n$. In terms of such operators, an IQP circuit applies a product of rotations generated by $X^{A_1}, \ldots, X^{A_m}$ for some set of vectors $A_1, \ldots, A_m \in \{0,1\}^n$. The amplitudes of the output state can be written concisely as a sum over the solutions v of $Av = x$, where $A = [A_1, \ldots, A_m]$. The number of terms in $\sum_{v:\,Av=x}$ is $2^{m-r}$, where $r = \mathrm{rank}(A)$. Since $r \leq n$, the number of terms contributing to $\tau_x$ is exponential in the circuit depth m once it exceeds n.
In the Hadamard Basis. IQP states are substantially easier to analyze in the Hadamard basis. Let $|\xi\rangle = H^{\otimes n}|\tau\rangle$; note that it is experimentally easy to obtain $|\xi\rangle$ from $|\tau\rangle$. Since $HX = ZH$, the X rotations defining $|\tau\rangle$ become Z rotations acting on $|+\rangle^{\otimes n}$, where $|+\rangle \equiv (|0\rangle + |1\rangle)/\sqrt{2} = H|0\rangle$. Because Z rotations are diagonal in the computational basis, each amplitude $\xi_x$ is trivial to compute classically. Furthermore, the induced probability distribution is uniform, which is the best case for EVAQS. For each circuit, the verification procedure was performed with $10^4$ simulated measurements.
The left plot in Fig. 2 shows the estimated infidelities obtained from these simulated experiments.
(In all the figures, a solid line shows the median value among all realizations of a given experiment, and a surrounding shaded band shows the 10th-90th percentiles.) As expected, the proposed method is able to accurately estimate the infidelity of the prepared state, independent of the number of qubits and over a wide range of infidelities. With a fixed number of measurements, states with smaller infidelity are estimated with larger relative error; however, the absolute error is actually smaller, in accordance with eq. (9). The right panel of Fig. 2 shows the sample cost of the procedure, given by eq. (9) and normalized by the desired precision $\epsilon^2$, as a function of the number of qubits. Notably, the cost is independent of the number of qubits, depending only on the fidelity of the test state.
Again, this is expected from eq. (9) given that the output distribution is uniform.
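The Hadamard-basis amplitudes described above can be computed directly, since $Z^a|x\rangle = (-1)^{a\cdot x}|x\rangle$ makes each amplitude a pure phase times $2^{-n/2}$. The sketch below assumes generic rotation angles $\theta_j$ (a convention not fixed in the text) and checks that the induced distribution is exactly uniform.

```python
import numpy as np

rng = np.random.default_rng(3)

def iqp_hadamard_amplitudes(A, thetas, n):
    """Amplitudes of the Hadamard-transformed IQP state
    |xi> = prod_j exp(i theta_j Z^{A_j}) |+>^n.
    Each Z rotation is diagonal, so every amplitude is a pure
    phase times 2^{-n/2} and is trivial to compute."""
    xs = np.arange(2 ** n)
    bits = (xs[:, None] >> np.arange(n)) & 1      # bit i of each basis label x
    amps = np.full(2 ** n, 2.0 ** (-n / 2), dtype=complex)
    for a, th in zip(A, thetas):
        parity = bits @ a % 2                     # a . x mod 2 for every x at once
        amps = amps * np.exp(1j * th * (-1.0) ** parity)
    return amps

n, m = 5, 8
A = rng.integers(0, 2, size=(m, n))
thetas = rng.uniform(0, 2 * np.pi, size=m)
xi = iqp_hadamard_amplitudes(A, thetas, n)
probs = np.abs(xi) ** 2                           # exactly uniform
```

Each amplitude costs only O(m) arithmetic to evaluate for a single x, which is what makes the on-the-fly feed-forward computation scalable in this basis.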
In the Computational Basis. EVAQS is also effective when measurements are performed in the computational basis. In this basis, the output state of a typical IQP circuit is anticoncentrated in the sense that a significant fraction of the basis states have probabilities of order $2^{-n}$ or larger [22]. Fig. 3, left, shows the estimated infidelities for the same set of circuits as described in the previous subsection, but this time using simulated measurements in the computational basis. As before, EVAQS was able to accurately estimate the circuit fidelities. This time, however, the cost tends to increase slowly with the number of qubits. I note that while the median cost does appear to grow exponentially, in going from 4 to 20 qubits it increases only by a factor of about 2.5, whereas the size of the state being verified increases by a factor of $2^{20}/2^4 = 65{,}536$. Furthermore, there is noticeable variation in cost among circuits of the same size (Fig. 3, middle), because different random circuits of the same size were anticoncentrated to different degrees. The strong link between cost and the degree of anticoncentration in the target distribution is shown in Fig. 3, right. Indeed, the degree of concentration (as measured by the inverse collision probability) is a better predictor of cost than the number of qubits. Also noteworthy is the fact that the estimated costs, given by eq. (9) and shown as dashed lines, are within a small factor of the true costs.

Verification of Random Circuits
The second study involves verification of random quantum circuits, that is, sequences of random 2-qubit unitaries applied to randomly selected pairs of qubits. Each random unitary was obtained by generating a random complex matrix with normally distributed elements and then performing Gram-Schmidt orthogonalization on the columns of the matrix. In this study, error was modelled as a perturbation of the output state rather than as perturbations of the individual gates. The output state was written as a normalized combination of the ideal state τ and an error vector consisting of both multiplicative and additive noise, with components of the form $\xi_x \tau_x + \lambda \xi'_x$, where $\xi_x, \xi'_x$ are independent complex Gaussian random variables. The constant λ was chosen so that the standard deviations of the multiplicative error and the additive error are equal when $|\tau_x|$ equals its mean value. The constant η governing the overall error strength was then chosen to yield a particular infidelity I. As before, $10^4$ measurements were simulated for each circuit. The results are shown in Fig. 4. Again, EVAQS is able to estimate the output state fidelity accurately with a number of measurements that grows very slowly with the number of qubits. And as with IQP circuits, the cost is well predicted by the concentration of the target state and the infidelity of the unknown state.
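The circuit and noise constructions just described can be sketched as follows (numpy; the QR decomposition performs the same column orthogonalization as Gram-Schmidt, and the exact noise conventions here are my assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(11)

def random_unitary(d):
    """Random unitary from a Gaussian matrix via QR, which performs
    the same orthogonalization as Gram-Schmidt on the columns."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(g)
    diag = np.diagonal(r)
    return q * (diag / np.abs(diag))   # fix column phases for uniformity

def noisy_output(tau, eta, lam):
    """Perturb an output state with multiplicative plus additive complex
    Gaussian noise, eps_x = xi_x tau_x + lam xi'_x, then renormalize.
    eta sets the overall error strength (illustrative convention)."""
    d = tau.size
    xi = rng.normal(size=d) + 1j * rng.normal(size=d)
    xi2 = rng.normal(size=d) + 1j * rng.normal(size=d)
    eps = xi * tau + lam * xi2
    mu = tau + eta * eps
    return mu / np.linalg.norm(mu)

d = 64
tau = random_unitary(d)[:, 0]          # a column of a random unitary is a random pure state
mu = noisy_output(tau, eta=0.05, lam=np.mean(np.abs(tau)))
infidelity = 1.0 - abs(np.vdot(mu, tau)) ** 2
```

In practice one would tune `eta` (the analogue of η) until `infidelity` matches the target value I.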

Verification of Supremacy Circuits
Another important class of circuits is that recently used to demonstrate the "quantum supremacy" of a quantum processor [5]. Such circuits consist of alternating rounds of single-qubit rotations drawn from a small discrete set and entangling operations on adjacent pairs of qubits in a particular pattern. Like IQP circuits, such circuits are hard to classically simulate [23,24] in spite of their locality constraints. But unlike IQP circuits, they are universal for quantum computing.
The circuits simulated in this study consisted of n ∈ {4, 9, 12, 16, 20} qubits arranged in a planar square lattice with (nearly) equal sides, and 16 cycles of alternating single-qubit operations and entangling operations, as described in [5]. Single-qubit error was modelled as a post-operation unitary of the form $\exp(i(\epsilon_x X + \epsilon_y Y + \epsilon_z Z))$, where $\epsilon_x, \epsilon_y, \epsilon_z$ are independent normally distributed variables of zero mean and small variance. Error on the two-qubit operations was modelled as a small random perturbation of the angles θ, φ parameterizing the entangling gate [5]. The amount of error was chosen to yield a process infidelity [25] on the order of 0.02% per single-qubit operation and 0.2% per two-qubit operation. For each circuit size, 100 random noisy circuits were generated; for each circuit, $10^4$ measurements were simulated. The estimated infidelities are plotted against circuit size; for comparison, the dashed line shows the true infidelity. The smallest circuits (n = 4) had output infidelities on the order of $10^{-3}$, while the largest (n = 20) had infidelities around 0.3. Over this range, the infidelity was estimated to within 20% or better. What is not evident from the plot is that the variance of the estimator varied by an order of magnitude among random circuits of the same size, owing to the different amounts of entropy in their output distributions. Consequently, the expected error for some of the circuits is actually considerably smaller than 20%. For the circuits that were verified less accurately, the expected error could be reduced further by increasing the number of measurements. Interestingly, the relative error does not vary much over the wide range of circuit sizes and circuit infidelities in these simulations. According to eq. (9), one expects the relative error to scale as $\sqrt{\mathrm{Var}\,\tilde F}/I \propto I^{-1/2}$, that is, to increase as infidelity decreases. Additional simulations confirmed that this scaling does occur with smaller gate errors.
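The single-qubit error model and its process infidelity can be checked in closed form, using $e^{it\,\hat n\cdot\sigma} = \cos t\, I + i\sin t\,\hat n\cdot\sigma$ and the standard expression $1 - |\mathrm{Tr}\,U|^2/d^2$ for the process infidelity of a unitary error (a sketch under these standard conventions, not the paper's exact simulation code):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def error_unitary(ex, ey, ez):
    """exp(i(ex X + ey Y + ez Z)) in closed form via
    exp(i t n.sigma) = cos(t) I + i sin(t) n.sigma."""
    t = np.sqrt(ex ** 2 + ey ** 2 + ez ** 2)
    if t == 0:
        return np.eye(2, dtype=complex)
    n = (ex * X + ey * Y + ez * Z) / t
    return np.cos(t) * np.eye(2) + 1j * np.sin(t) * n

def process_infidelity(U):
    """1 - |Tr(U)|^2 / d^2 for a d-dimensional unitary error channel."""
    d = U.shape[0]
    return 1.0 - abs(np.trace(U)) ** 2 / d ** 2

rng = np.random.default_rng(5)
errs = rng.normal(scale=0.01, size=3)  # small random rotation-axis components
U = error_unitary(*errs)
infid = process_infidelity(U)          # ~ ex^2 + ey^2 + ez^2 for small errors
```

For small rotation angles the process infidelity reduces to $\epsilon_x^2 + \epsilon_y^2 + \epsilon_z^2$, which is how an error variance maps onto the 0.02% per-gate figure quoted above.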

A. Proof of Correctness
To see that the procedures described in Section II A indeed yield an estimate of F (µ, τ ), let us consider the general case first; the correctness of the simpler version will then be established as a special case.

Proof of General Version
In each iteration of the general algorithm, one first prepares the test register, auxiliary register, and the first ancilla qubit in the state $|\psi\rangle = \frac{1}{\sqrt{2}}|\mu\rangle|\alpha\rangle(|0\rangle + |1\rangle)$. In the second step, the ancilla controls a swap between the test and auxiliary registers. This yields the state $\frac{1}{\sqrt{2}}(|\mu\rangle|\alpha\rangle|0\rangle + |\alpha\rangle|\mu\rangle|1\rangle) = \frac{1}{\sqrt{2}}\sum_{x,y}|x\rangle|y\rangle(\mu_x\alpha_y|0\rangle + \alpha_x\mu_y|1\rangle)$. In the third step the test and auxiliary registers are measured, yielding values x, y. This projects ancilla 1 onto the unnormalized state $\mu_x\alpha_y|0\rangle + \alpha_x\mu_y|1\rangle$. In the fourth step, the observed values x, y are used to compute $\tilde\tau_x, \tilde\tau_y, \tilde\alpha_x, \tilde\alpha_y$ and to prepare ancilla qubit 2 in the state $|r_{yx}\rangle$. Let $\lambda = |\tilde\tau_x/\tau_x|^2/|\tilde\alpha_x/\alpha_x|^2$, which is independent of x. Then $|r_{yx}\rangle$ can be written in terms of the normalized weight $\tilde w_{xy} = w_{xy}/\lambda$. Finally, the ancillas are measured in the Bell basis. Evaluating the probabilities $p_{xy\pm}$ of the outcomes $b = \pm 1$ and using $|\sum_x \mu_x^*\tau_x|^2 = F(\mu,\tau)$, we obtain $\sum_{x,y}\tilde w_{xy}\,p_{xy\pm} = \frac{1}{2}(1 \pm F(\mu,\tau))$. Now, $p_{xy+} + p_{xy-} = \sum_{b\in\{-1,0,1\}} p_{xyb}\,b^2$ and $p_{xy+} - p_{xy-} = \sum_{b\in\{-1,0,1\}} p_{xyb}\,b$, so the weighted first and second moments of b encode F. To obtain an experimental estimate of F(µ, τ), the steps above are repeated $N \gg 1$ times. The sample averages $\tilde A = \frac{1}{N}\sum_i w_{x_iy_i} b_i$ and $\tilde B = \frac{1}{N}\sum_i w_{x_iy_i} b_i^2$ are unbiased estimators of $A = \lambda F$ and $B = \lambda$, respectively. As a first approximation, F may be estimated by the ratio $\tilde A/\tilde B$. It is a basic result of statistical analysis that such an estimator has a bias of order $N^{-1}$. The estimator which corrects for this bias is derived in Appendix A.
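The Bell-basis fact used above is standard and worth stating explicitly: for a product state of two qubits with amplitudes $a_0, a_1$ and $r_0, r_1$,

```latex
\langle \Phi^\pm |\, (a_0|0\rangle + a_1|1\rangle) \otimes (r_0|0\rangle + r_1|1\rangle)
  = \frac{a_0 r_0 \pm a_1 r_1}{\sqrt{2}},
\qquad
|\Phi^\pm\rangle \equiv \frac{|00\rangle \pm |11\rangle}{\sqrt{2}},
```

so the difference of the two outcome probabilities isolates the cross term $2\,\mathrm{Re}(a_0 r_0\, a_1^* r_1^*)$; summing such cross terms over x and y is how the overlap $\sum_x \mu_x^* \tau_x$, and hence F, is extracted.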

Proof of the Basic Version
To establish the correctness of the basic version of the algorithm, I show that it is equivalent to performing the general algorithm with α as the uniform superposition state.
In the basic version, one starts with just the unknown state $|\mu\rangle$ and an ancilla qubit in the state $(|0\rangle + |1\rangle)/\sqrt{2}$. One picks a uniform random vector $v \in \{0,1\}^n$ and performs the controlled-NOT operations of step 2, yielding the state $\frac{1}{\sqrt{2}}\sum_x |x\rangle(\mu_x|0\rangle + \mu_{x\oplus v}|1\rangle)$. Measuring x yields the ancilla state proportional to $\mu_x|0\rangle + \mu_{x\oplus v}|1\rangle$ with net probability $2^{-n}\cdot\frac{1}{2}(|\mu_x|^2 + |\mu_{x\oplus v}|^2)$, including the probability $2^{-n}$ of drawing v. Now, consider the general algorithm with $|\alpha\rangle = 2^{-n/2}\sum_y |y\rangle$. From eq. (23), the state immediately following the controlled swap between the test and auxiliary registers is $\frac{2^{-n/2}}{\sqrt{2}}\sum_{x,y}|x\rangle|y\rangle(\mu_x|0\rangle + \mu_y|1\rangle)$. Measurement of x, y yields the ancilla state proportional to $\mu_x|0\rangle + \mu_y|1\rangle$ with probability $2^{-n}\cdot\frac{1}{2}(|\mu_x|^2 + |\mu_y|^2)$. These expressions coincide with those of the basic version under the identification $y = x \oplus v$ (for v uniformly distributed, y is uniformly distributed). Thus, the basic version of the algorithm is equivalent to the general version with a uniform superposition for α.

B. Derivation of the Variance
The cost of EVAQS is driven by the number of samples needed to obtain an estimate with sufficiently small variance. In Appendix B it is shown that, to lowest order in statistical fluctuations, $\mathrm{Var}\,\tilde F$ is a sum of per-basis-state contributions $Q_x$ (eq. (49)). In a typical application the expectation is that µ is not a bad approximation of τ; thus the regime of interest is that of small infidelity $I \equiv 1 - F$. We proceed to simplify $Q_x$ for the case that I is small, keeping only the lowest-order terms. Let $\langle\tau|\mu\rangle = e^{i\phi}\cos\theta$. Then $F = \cos^2\theta$, $I = \sin^2\theta$, and $1 + F^2 \approx 2F$. Now, µ can be written as $\mu = e^{i\phi}(\tau\cos\theta + \sigma\sin\theta)$, where $\|\sigma\| = 1$ and $\langle\sigma|\tau\rangle = 0$. Then $\tau_x^*\mu_x e^{-i\phi} = |\tau_x|^2\cos\theta + \tau_x^*\sigma_x\sin\theta$. Substituting these expressions, expanded to lowest order in $\sin\theta = \sqrt{I}$, into (50) and combining terms yields eq. (54). To obtain eq. (7), we combine the definitions $\mu = e^{i\phi}(\tau\cos\theta + \sigma\sin\theta)$ and $\varepsilon \equiv e^{-i\phi}\mu - \tau$ to obtain $\varepsilon = \tau(\cos\theta - 1) + \sigma\sin\theta$.
Since $(I - \tau\tau^\dagger)\varepsilon = \sigma\sin\theta$, σ can be understood as the normalized projection of $\varepsilon$ onto the subspace orthogonal to τ. Alternatively, we may use the fact that $\sin\theta = \sqrt{I}$. Eq. (7) follows from substituting this expression into (54).
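The lowest-order variance of the ratio estimator follows from the standard delta-method expansion, written here in terms of the quantities $\tilde A$, $\tilde B$, and λ defined in the proof of correctness (a sketch of the standard argument, consistent with the expression $\langle(\tilde A - \tilde B F)^2\rangle$ appearing in Appendix B):

```latex
\tilde F = \frac{\tilde A}{\tilde B}
  \;\approx\; F + \frac{(\tilde A - \lambda F) - F\,(\tilde B - \lambda)}{\lambda},
\qquad
\mathrm{Var}\,\tilde F \;\approx\; \frac{\big\langle (\tilde A - \tilde B F)^2 \big\rangle}{\lambda^2}.
```

Since $\tilde A$ and $\tilde B$ are sample means over N independent trials, this variance scales as $N^{-1}$, which is what converts a target variance $\epsilon^2$ into the sample costs quoted in Section II B.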
Suppose α is mischaracterized as $\bar\alpha$. Then upon measuring x, y one is led to prepare the ancilla state with $\bar\alpha$ in place of α, and eq. (27) is modified accordingly. Comparing the projection of the ancillas onto the Bell states $|\Phi^\pm\rangle$ with (28) shows that τ is effectively replaced by a perturbed state $\bar\tau$. That is, error in one's knowledge of the auxiliary state α causes the algorithm to estimate the fidelity of µ with respect to a perturbed version of the target state. Intuitively, if $\bar\alpha$ is close to α then $\bar\tau$ will be close to τ and $F(\mu,\bar\tau)$ will be close to F(µ, τ). To make this more precise we use the triangle inequality, from which it follows that $|F(\mu,\bar\tau) - F(\mu,\tau)|$ is controlled by the distance between $\bar\tau$ and τ. In Appendix C it is shown that this distance is bounded in terms of $\delta_{\rm rms}$, the average relative error of $\bar\alpha$. Combining these bounds yields the robustness bound (12).

IV. CONCLUSION
The verification of complex states produced by quantum computers presents daunting experimental and computational challenges. In this paper I presented EVAQS, a novel state verification method that takes a significant step in addressing these challenges. In this method, a preparable quantum state is verified against a classical specification using a combination of relatively simple quantum circuits and on-the-fly calculations on a conventional computer. In contrast to existing verification methods, EVAQS is inherently sample-efficient when the target state is anticoncentrated (i.e., has high entropy) in the chosen measurement basis. In the case that the target state is not anticoncentrated, an auxiliary state may be used to importance sample the unknown state, greatly reducing the number of measurements needed.
The main limitation of EVAQS is the need to calculate selected probability amplitudes of the target state for comparison to the unknown state. If the state is not too large (say, less than 30 qubits), the probability amplitudes may feasibly be calculated ahead of time and stored in a look-up table.

Appendix A

The ratio $\tilde A/\tilde B$ may be corrected to obtain an estimator of F in which the lowest-order bias ($N^{-1}$) has been eliminated. The quantities appearing in the correction terms are unknown, but they can be estimated to the same order of accuracy as the estimator itself. This yields the improved estimator of F, $\tilde F'$.

Appendix B

The variance of $\tilde F$ is a function of several different expectation values, which are here identified and evaluated. To lowest order in statistical fluctuations, the variance of $\tilde F$ reduces, with the relations $\langle\tilde A\rangle = \lambda F$ and $\langle\tilde B\rangle = \lambda$, to an expression in $\langle(\tilde A - \tilde B F)^2\rangle$. Now, $\langle\tilde A - \tilde B F\rangle = 0$. To evaluate $\langle(\tilde A - \tilde B F)^2\rangle$ we use (37) and (38) to write $\tilde A - \tilde B F = \frac{1}{N}\sum_i G_i$, where each $G_i = w_{x_i y_i}(b_i - b_i^2 F)$ is an independent random variable with mean 0. Then, using $b^3 = b$ and $b^4 = b^2$, the sum reduces to single-trial expectations, giving $\mathrm{Var}\,\tilde F$ in the form in which $Q_x$ is given by eq. (49).
Appendix C

Here expectations are taken with respect to the distribution p, and $\delta_{\rm rms} \equiv \langle|\delta_x|^2\rangle^{1/2}$ is the average relative error of $\bar\alpha$. An upper bound on the numerator follows in terms of $\delta_{\rm rms}$. For the denominator, non-negativity of variance gives a lower bound. Now, $|1 + \delta_x| \geq 1 - |\delta_x|$, and non-negativity of variance gives $|\langle\delta_x\rangle| \leq \delta_{\rm rms}$, hence a corresponding lower bound on the denominator in terms of $\delta_{\rm rms}$. The upper bound on the numerator combined with the lower bound on the denominator yields the robustness bound (12).