Quantum Risk Analysis

We present a quantum algorithm that analyzes risk more efficiently than the Monte Carlo simulations traditionally used on classical computers. We employ quantum amplitude estimation to evaluate risk measures such as Value at Risk and Conditional Value at Risk on a gate-based quantum computer. Additionally, we show how to implement this algorithm and how to trade off its convergence rate against circuit depth. The shortest possible circuit depth, growing polynomially in the number of qubits representing the uncertainty, leads to a convergence rate of $O(M^{-2/3})$, already faster than classical Monte Carlo simulations, which converge at a rate of $O(M^{-1/2})$. If we allow the circuit depth to grow faster, but still polynomially, the convergence rate quickly approaches the optimum of $O(M^{-1})$. Thus, for slowly increasing circuit depths our algorithm provides a near-quadratic speed-up compared to Monte Carlo methods. We demonstrate our algorithm using two toy models. In the first model we use real hardware, accessed through the IBM Q Experience, to measure the financial risk in a Treasury-bill (T-bill) faced by a possible interest rate increase. In the second model, we simulate our algorithm to illustrate how a quantum computer can determine the financial risk of a two-asset portfolio made up of government debt with different maturity dates. Both models confirm the improved convergence rate over Monte Carlo methods. Using simulations, we also evaluate the impact of cross-talk and energy-relaxation errors.


I. INTRODUCTION
Risk management plays a central role in the financial system. Value at risk (VaR) [1], a quantile of the loss distribution, is a widely used risk metric. Examples of use cases include the Basel III regulations, under which banks are required to perform stress tests using VaR [2], and the calculation of haircuts applied to collateral used in security settlement systems [3]. A second important risk metric is conditional value at risk (CVaR, sometimes also called expected shortfall), defined as the expected loss for losses greater than VaR. In contrast to VaR, CVaR is more sensitive to extreme events in the tail of the loss distribution.
Monte Carlo simulations are the method of choice to determine VaR and CVaR of a portfolio [1]. They are done by building a model of the portfolio assets and computing the aggregated value for $M$ different realizations of the model input parameters. VaR calculations are computationally intensive since the width of the confidence interval scales as $O(M^{-1/2})$: many runs are needed to achieve a representative distribution of the portfolio value.
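For reference, the classical baseline can be sketched in a few lines. The distribution, confidence level, and sample count below are illustrative assumptions, not data from this paper; the point is that the estimates converge only as $O(M^{-1/2})$.

```python
import math
import random

def monte_carlo_var_cvar(sample_loss, M, alpha=0.95, seed=42):
    """Classical Monte Carlo baseline: estimate VaR (the alpha-quantile of
    the loss distribution) and CVaR (the expected loss in the tail beyond
    VaR) from M sampled losses. Accuracy improves only as O(M^-1/2)."""
    rng = random.Random(seed)
    losses = sorted(sample_loss(rng) for _ in range(M))
    var = losses[int(math.ceil(alpha * M)) - 1]   # empirical alpha-quantile
    tail = [l for l in losses if l >= var]        # losses at or beyond VaR
    cvar = sum(tail) / len(tail)
    return var, cvar

# Hypothetical model: standard-normal daily losses; the true 95% VaR is
# about 1.645 and the true 95% CVaR about 2.06.
var, cvar = monte_carlo_var_cvar(lambda rng: rng.gauss(0.0, 1.0), M=100_000)
```

Here `sample_loss` stands in for the "model of the portfolio assets": each call produces one realization of the aggregated loss.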
Quantum computers process information using the laws of quantum mechanics [4]. This has opened up novel ways of addressing some problems, e.g. in quantum chemistry [5], optimization [6], or machine learning [7]. Amplitude estimation is a quantum algorithm used to estimate an unknown parameter; it converges as $O(M^{-1})$, a quadratic speed-up over classical algorithms like Monte Carlo [8]. It has already been shown how amplitude estimation can be used to price options within the Black-Scholes model [9,10].

* Electronic address: wor@zurich.ibm.com
In Section II of this paper we show how to use amplitude estimation to calculate the expectation, variance, VaR and CVaR of random distributions. Section III discusses how to construct the corresponding quantum circuits. In Sections IV and V we show how to apply our algorithm to portfolios made up of debt issued by the United States Treasury (US Treasury), which as of December 2016 had 14.5 trillion USD in outstanding marketable debt held by the public [11]. This debt is an actively traded asset class with typical daily volumes close to 500 billion USD [12] and is regarded as high-quality collateral [13]. Additionally, government debt typically lacks some of the more complex features that other types of fixed-income securities have. These properties make US Treasuries a highly relevant asset class to study whilst allowing us to use simple models to illustrate our algorithm. In Section IV we introduce a very simple portfolio made up of one T-bill analyzed on a single period of a binomial tree. We demonstrate amplitude estimation and approximate the expected value of the T-bill on a real quantum computer. In Section V we present a more comprehensive two-asset portfolio and simulate the presented algorithms assuming both a perfect and a noisy quantum computer. We discuss our results as well as next steps in Sec. VI.

II. QUANTUM RISK ANALYSIS
In this section, we introduce amplitude estimation and explain how it can be used to estimate properties of random distributions such as risk measures.
Suppose a unitary operator $\mathcal{A}$ acting on a register of $(n+1)$ qubits such that $\mathcal{A}|0\rangle_{n+1} = \sqrt{1-a}\,|\psi_0\rangle_n|0\rangle + \sqrt{a}\,|\psi_1\rangle_n|1\rangle$ for some normalized states $|\psi_0\rangle_n$ and $|\psi_1\rangle_n$, where $a \in [0,1]$ is unknown. Amplitude estimation allows the efficient estimation of $a$, i.e., the probability of measuring $|1\rangle$ in the last qubit [8]. This is done using an operator $\mathcal{Q}$ (formally introduced in Appendix A), based on $\mathcal{A}$, and Quantum Phase Estimation [14] to approximate certain eigenvalues of $\mathcal{Q}$. This requires $m$ additional qubits and $M = 2^m$ applications of $\mathcal{Q}$. The $m$ qubits are first put into an equal superposition by applying Hadamard gates. Then, they are used to control different powers of $\mathcal{Q}$. Last, after an inverse Quantum Fourier Transform has been applied, their state is measured, see the circuit in Fig. 1. This results in an integer $y \in \{0, ..., M-1\}$, which is classically mapped to the estimator $\tilde{a} = \sin^2(y\pi/M) \in [0,1]$. The estimator $\tilde{a}$ satisfies $|a - \tilde{a}| \le \pi/M + \pi^2/M^2 = O(M^{-1})$ with probability of at least $8/\pi^2$. This represents a quadratic speedup compared to the $O(M^{-1/2})$ convergence rate of classical Monte Carlo methods [1].
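The classical post-processing step, i.e. the mapping $y \mapsto \tilde{a} = \sin^2(y\pi/M)$, can be sketched as follows. The grid of attainable estimates makes the resolution of the algorithm explicit: the grid spacing near $a = 1/2$ is about $\pi/M$, consistent with the $O(M^{-1})$ error bound.

```python
import math

def ae_estimate(y, m):
    """Classical post-processing of amplitude estimation: map the measured
    integer y in {0, ..., M-1}, with M = 2^m, to the amplitude estimate
    a~ = sin^2(y*pi/M)."""
    M = 2 ** m
    return math.sin(y * math.pi / M) ** 2

# The set of estimates the algorithm can return for m = 3 (M = 8 samples).
# Values for y and M - y coincide, so M measurement outcomes map to only
# M/2 + 1 distinct amplitude estimates.
m = 3
grid = sorted({round(ae_estimate(y, m), 6) for y in range(2 ** m)})
```

Increasing $m$ by one doubles $M$ and roughly halves the grid spacing, which is how the $O(M^{-1})$ convergence manifests in practice.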
FIG. 1: Quantum circuit for amplitude estimation as introduced in [8]. $H$ is the Hadamard gate and $\mathcal{F}_m^\dagger$ denotes the inverse Quantum Fourier Transform on $m$ qubits.
We now explain how to use amplitude estimation to approximate the expected value of a random variable [15,16]. Suppose a quantum state $|\psi\rangle_n = \sum_{i=0}^{N-1} \sqrt{p_i}\,|i\rangle_n$, where the probability of measuring the state $|i\rangle_n$ is $p_i \in [0,1]$, with $\sum_{i=0}^{N-1} p_i = 1$, and $N = 2^n$. The state $|i\rangle_n$ is one of the $N$ possible realizations of a bounded discrete random variable $X$, which, for instance, can represent a discretized interest rate or the value of a portfolio.
We consider a function $f: \{0, ..., N-1\} \to [0,1]$ and a corresponding operator $F: |i\rangle_n|0\rangle \to |i\rangle_n\left(\sqrt{1-f(i)}\,|0\rangle + \sqrt{f(i)}\,|1\rangle\right)$ for all $i \in \{0, ..., N-1\}$, acting on an ancilla qubit. Applying $F$ to $|\psi\rangle_n|0\rangle$ leads to the state $\sum_{i=0}^{N-1} \sqrt{1-f(i)}\sqrt{p_i}\,|i\rangle_n|0\rangle + \sum_{i=0}^{N-1} \sqrt{f(i)}\sqrt{p_i}\,|i\rangle_n|1\rangle$. Now we can use amplitude estimation to approximate the probability of measuring $|1\rangle$ in the last qubit, which equals $\sum_{i=0}^{N-1} p_i f(i) = \mathbb{E}[f(X)]$. In the remainder of this section we extend this technique and show how to evaluate risk measures such as VaR and CVaR.
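The identity that makes this work, namely that the probability of measuring $|1\rangle$ on the ancilla equals $\sum_i p_i f(i) = \mathbb{E}[f(X)]$, can be verified with a small statevector sketch (the distribution below is an arbitrary illustration):

```python
import math

def apply_F(p, f):
    """Build the amplitudes of F(|psi>_n |0>): each basis state |i>|0> with
    amplitude sqrt(p_i) maps to |i>(sqrt(1-f(i))|0> + sqrt(f(i))|1>).
    Return the probability of measuring the ancilla in |1>."""
    amp = {}
    for i, p_i in enumerate(p):
        amp[(i, 0)] = math.sqrt(p_i) * math.sqrt(1.0 - f(i))
        amp[(i, 1)] = math.sqrt(p_i) * math.sqrt(f(i))
    return sum(a * a for (i, q), a in amp.items() if q == 1)

# Hypothetical 2-qubit distribution (N = 4) and f(i) = i/(N-1).
p = [0.1, 0.2, 0.3, 0.4]
f = lambda i: i / 3
prob_one = apply_F(p, f)   # equals E[f(X)] = sum_i p_i * f(i)
```

Amplitude estimation then returns this probability with error $O(M^{-1})$ instead of sampling it at the classical $O(M^{-1/2})$ rate.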
For a given confidence level $\alpha \in [0,1]$, $\text{VaR}_\alpha(X)$ can be defined as the smallest value $x \in \{0, ..., N-1\}$ such that $\mathbb{P}[X \le x] \ge (1-\alpha)$. To find $\text{VaR}_\alpha(X)$ on a quantum computer, we define the function $f_l(i) = 1$ if $i \le l$ and $f_l(i) = 0$ otherwise, where $l \in \{0, ..., N-1\}$. Applying $F_l$, i.e. the operator corresponding to $f_l$, to $|\psi\rangle_n|0\rangle$ leads to the state $\sum_{i=0}^{l} \sqrt{p_i}\,|i\rangle_n|1\rangle + \sum_{i=l+1}^{N-1} \sqrt{p_i}\,|i\rangle_n|0\rangle$. The probability of measuring $|1\rangle$ for the last qubit is $\sum_{i=0}^{l} p_i = \mathbb{P}[X \le l]$. Therefore, with a bisection search over $l$ we can find the smallest level $l_\alpha$ such that $\mathbb{P}[X \le l_\alpha] \ge 1-\alpha$ in at most $n$ steps. The smallest level $l_\alpha$ is equal to $\text{VaR}_\alpha(X)$. This allows us to estimate $\text{VaR}_\alpha(X)$ as before with accuracy $O(M^{-1})$, which again is a quadratic speedup compared to classical Monte Carlo methods.
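The bisection search itself is entirely classical; each probe of the CDF below stands in for one amplitude-estimation run of $F_l$, so $\log_2 N$ probes suffice. The distribution is an illustrative assumption.

```python
import itertools

def var_level(p, alpha):
    """Bisection for the smallest l with P[X <= l] >= 1 - alpha, following
    the definition of VaR_alpha(X) in the text. On a quantum computer each
    evaluation of cdf[mid] would be one amplitude estimation of F_l."""
    cdf = list(itertools.accumulate(p))   # P[X <= l] for l = 0, ..., N-1
    lo, hi = 0, len(p) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cdf[mid] >= 1 - alpha:
            hi = mid                      # level mid already suffices
        else:
            lo = mid + 1                  # need a higher level
    return lo

# Hypothetical 3-qubit distribution (N = 8): 3 probes instead of 8.
p = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
l_alpha = var_level(p, alpha=0.05)   # smallest l with P[X <= l] >= 0.95
```

The monotonicity of the CDF is what guarantees the bisection terminates at the correct $l_\alpha$ in at most $n$ steps.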
$\text{CVaR}_\alpha(X)$ is the conditional expectation of $X$ restricted to $\{0, ..., l_\alpha\}$, where we compute $l_\alpha = \text{VaR}_\alpha(X)$ as before. To estimate CVaR we apply the operator $F$ that corresponds to the function $f(i) = \frac{i}{l_\alpha} \cdot f_{l_\alpha}(i)$ to $|\psi\rangle_n|0\rangle$, which leads to the state $\sum_{i=0}^{N-1} \sqrt{1 - \tfrac{i}{l_\alpha} f_{l_\alpha}(i)}\sqrt{p_i}\,|i\rangle_n|0\rangle + \sum_{i=0}^{l_\alpha} \sqrt{\tfrac{i}{l_\alpha} p_i}\,|i\rangle_n|1\rangle$. The probability of measuring $|1\rangle$ for the last qubit equals $\sum_{i=0}^{l_\alpha} \frac{i}{l_\alpha} p_i$, which we approximate using amplitude estimation. However, $\sum_{i=0}^{l_\alpha} p_i$ does not sum to one but to $\mathbb{P}[X \le l_\alpha]$, as evaluated during the VaR estimation. Therefore we must normalize the probability of measuring $|1\rangle$ to get $\text{CVaR}_\alpha(X) = \frac{l_\alpha}{\mathbb{P}[X \le l_\alpha]} \sum_{i=0}^{l_\alpha} \frac{i}{l_\alpha} p_i$. We also multiplied by $l_\alpha$, since otherwise we would estimate $\text{CVaR}_\alpha(X)/l_\alpha$. Even though we replace $\mathbb{P}[X \le l_\alpha]$ by an estimate, the error bound on CVaR, computed in Appendix C, shows that we still achieve a quadratic speed-up compared to classical Monte Carlo methods.
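The normalization and rescaling can be checked classically: dividing the tail sum by $\mathbb{P}[X \le l_\alpha]$ and multiplying by $l_\alpha$ must recover the conditional expectation $\mathbb{E}[X \mid X \le l_\alpha]$. A sketch with an illustrative distribution:

```python
def cvar_from_probs(p, l_alpha):
    """CVaR via the normalization described in the text: amplitude
    estimation yields raw = sum_{i<=l} (i/l) p_i; dividing by
    P[X <= l_alpha] and multiplying by l_alpha gives E[X | X <= l_alpha]."""
    tail_prob = sum(p[: l_alpha + 1])   # P[X <= l_alpha], known from VaR step
    raw = sum((i / l_alpha) * p[i] for i in range(l_alpha + 1))
    return l_alpha * raw / tail_prob

# Hypothetical 3-qubit distribution, truncated at l_alpha = 6.
p = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
cvar = cvar_from_probs(p, l_alpha=6)
```

Without the factor $l_\alpha$ the routine would return $\text{CVaR}_\alpha(X)/l_\alpha$, which is exactly why the rescaling appears in the text.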
We have shown how to calculate the expected value, variance, VaR and CVaR of $X$. If we are instead interested in properties of $g(X)$, for a given function $g: \{0, ..., N-1\} \to \{0, ..., N-1\}$, $N = 2^n$, $n \in \mathbb{N}$, we can apply an operator $G: |i\rangle_n|0\rangle_n \to |i\rangle_n|g(i)\rangle_n$ and use the previously introduced algorithms on the second register. Alternatively, as long as we can efficiently perform the bisection search on $g(X) \le l$ for $l \in \{0, ..., N-1\}$, we can dispense with the second register by combining $f$ and $g$ and applying all algorithms directly.

III. QUANTUM CIRCUITS
In this section, we show how the algorithms discussed in Sec. II can be mapped to quantum circuits.
We start with the construction of |ψ n as introduced in Eq. (3), representing the probability distribution of a random variable X mapped to {0, ..., N − 1}. In general, the best known upper bound for the number of gates required to create |ψ n is O(2 n ) [17]. However, approximations with polynomial complexity in n are possible for many distributions, e.g., log-concave distributions [18]. In the remainder of this section, we assume a given operator R such that R |0 n = |ψ n .
If we are interested in properties of g(X), as discussed in the previous section, then, depending on g, we can use basic arithmetic operations to construct the operator G. Numerous quantum algorithms exist for arithmetic operations [19][20][21][22][23] as well as tools to translate classical logic into quantum circuits [24,25]. However, since the latter are not necessarily efficient, the development of new and improved algorithms is ongoing research.
Approximating $\mathbb{E}[X]$ using amplitude estimation requires the operator $F$ for $f(x) = x/(N-1)$, defined in Eq. (4). In general, representing $F$ for the expected value or for the CVaR either requires an exponential $O(2^n)$ number of gates or additional ancillas to pre-compute the (discretized) function $f$ into qubits, using quantum arithmetic, before applying the rotation [26]. The exact number of ancillas depends on the desired accuracy of the approximation of $F$. Another approach consists of piecewise polynomial approximations of $f$ [27]. However, this also implies a significant overhead in terms of the number of ancillas and gates. In the following, we show how to overcome these hurdles by approximating $F$ without ancillas using polynomially many gates, at the cost of a lower, but still better than classical, rate of convergence. Note that the operator required for estimating VaR is easier to construct, and for it we can always achieve the optimal rate of convergence, as discussed later in this section.
Our contribution rests on the fact that an operator $P: |x\rangle_n|0\rangle \to |x\rangle_n\left(\cos(p(x))|0\rangle + \sin(p(x))|1\rangle\right)$, for a given polynomial $p(x) = \sum_{j=0}^{k} p_j x^j$ of order $k$, can be efficiently constructed using multi-controlled Y-rotations, as illustrated in Fig. 2. Single-qubit operations with $n-1$ control qubits can be exactly constructed, e.g., using $O(n)$ gates and $O(n)$ ancillas, or $O(n^2)$ gates without any ancillas. They can also be approximated with accuracy $\epsilon > 0$ using $O(n \log(1/\epsilon))$ gates [28]. For simplicity, we use $O(n)$ gates and $O(n)$ ancillas. Since the binary-variable representation of $p$, illustrated in Fig. 2, leads to at most $O(n^k)$ terms, the operator $P$ can be constructed using $O(n^{k+1})$ gates and $O(n)$ ancillas.
For every analytic function f , there exists a sequence of polynomials such that the approximation error converges exponentially fast to zero with increasing order of the polynomials [29]. Thus, for simplicity, we assume that f is a polynomial of order s.
If we can find a polynomial $p(y)$ such that $\sin^2(p(y)) = y$, then we can set $y = f(x)$, and the previous discussion provides a way to construct the operator $F$. Since the expected value is linear, we may choose to estimate $\mathbb{E}\left[c\left(f(X) - \frac{1}{2}\right) + \frac{1}{2}\right]$ for a parameter $c \in (0,1]$, and then map the result back to an estimator for $\mathbb{E}[f(X)]$. The rationale behind this choice is that $\sin^2\left(y + \frac{\pi}{4}\right) \approx y + \frac{1}{2}$ for small $|y|$. Thus, we want to find $p(y)$ such that $c\left(y - \frac{1}{2}\right) + \frac{1}{2}$ is sufficiently well approximated by $\sin^2\left(c\,p(y) + \frac{\pi}{4}\right)$. Setting the two terms equal and solving for $p(y)$ leads to
$$p(y) = \frac{1}{c}\left(\arcsin\left(\sqrt{c\left(y - \tfrac{1}{2}\right) + \tfrac{1}{2}}\right) - \frac{\pi}{4}\right), \tag{8}$$
and we choose $p(y)$ as a Taylor approximation of Eq. (8) around $y = 1/2$. Note that Eq. (8) defines an odd function around $y = 1/2$, and thus the even terms in the Taylor series vanish. The Taylor approximation of order $2u+1$ leads to a maximal approximation error for Eq. (8), bounded in Eq. (9), for all $y \in [0,1]$, as shown in Appendix B. The resulting polynomial $p(f(x))$ has order $s(2u+1)$, and the number of gates required to construct the corresponding circuit scales as $O\left(n^{s(2u+1)+1}\right)$.
The smallest scenario of interest is $s = 1$ and $u = 0$, i.e., both $f$ and $p$ are linear functions, which leads to a circuit for $F$ whose number of gates grows only quadratically with the number $n$ of qubits representing $|\psi\rangle_n$.
Thus, using amplitude estimation to estimate $\mathbb{E}\left[c\left(f(X) - \frac{1}{2}\right) + \frac{1}{2}\right]$ leads to a maximal error given in Eq. (10), where we ignore the higher-order terms in the following. Since our estimation uses $c f(X)$, we also need to analyze the scaled error $c\epsilon$, where $\epsilon > 0$ denotes the resulting estimation error for $\mathbb{E}[f(X)]$. Setting Eq. (10) equal to $c\epsilon$ and reformulating it leads to Eq. (11). Maximizing the left-hand side with respect to $c$, i.e. minimizing the number of required samples $M$ to achieve a target error $\epsilon$, yields the optimal parameter $c^*$, and plugging $c^*$ into Eq. (11) gives Eq. (12). Translating this into a rate of convergence for the estimation error with respect to the number of samples $M$ results in $\epsilon = O\left(M^{-\frac{2u+2}{2u+3}}\right)$, which is already better than the classical convergence rate of $O(M^{-1/2})$: for $u = 0$ we obtain $O(M^{-2/3})$. For increasing $u$, the convergence rate quickly approaches the optimal rate of $O(M^{-1})$.
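The trade-off between circuit depth and convergence is easy to tabulate. Assuming the rate family $O\!\left(M^{-(2u+2)/(2u+3)}\right)$, where $2u+1$ is the Taylor order of the rotation polynomial, a few lines show how quickly the exponent approaches the optimum:

```python
def ae_rate_exponent(u):
    """Exponent k in the O(M^-k) convergence rate when the payoff rotation
    uses a Taylor approximation of order 2u+1 (assumes the rate
    O(M^-(2u+2)/(2u+3)) for the ancilla-free construction)."""
    return (2 * u + 2) / (2 * u + 3)

# u = 0 gives the 2/3 rate of the shortest circuit; the classical Monte
# Carlo exponent is 1/2 and the optimal amplitude-estimation exponent is 1.
exponents = [ae_rate_exponent(u) for u in range(4)]   # 2/3, 4/5, 6/7, 8/9
```

Already $u = 1$ yields $O(M^{-4/5})$, so modest increases in circuit depth buy most of the quadratic speed-up.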
For the estimation of the expectation we exploited $\sin^2\left(y + \frac{\pi}{4}\right) \approx y + \frac{1}{2}$. For the variance we apply the same idea but use $\sin^2(y) \approx y^2$. We employ this approximation to estimate $\mathbb{E}\left[f(X)^2\right]$ and then, together with the estimate of $\mathbb{E}[f(X)]$, we evaluate $\text{Var}(f(X)) = \mathbb{E}\left[f(X)^2\right] - \mathbb{E}[f(X)]^2$, with the same trade-off between circuit depth and convergence rate as before. The previous discussion shows how to build quantum circuits to estimate $\mathbb{E}[f(X)]$ and $\text{Var}(f(X))$ more efficiently than is possible classically. In the following, we extend this to VaR and CVaR.
To estimate VaR, we need an operator $F_l$ that maps $|x\rangle_n|0\rangle$ to $|x\rangle_n|1\rangle$ if $x \le l$ and to $|x\rangle_n|0\rangle$ otherwise, for all $x \in \{0, ..., N-1\}$. Then, for a fixed $l$, amplitude estimation can be used to approximate $\mathbb{P}[X \le l]$, as shown in Eq. (6). With $(n+1)$ ancillas, adder circuits can be used to construct $F_l$ using $O(n)$ gates [21], and the resulting convergence rate is $O(M^{-1})$. For a given level $\alpha$, a bisection search can find the smallest $l_\alpha$ such that $\mathbb{P}[X \le l_\alpha] \ge 1 - \alpha$ in at most $n$ steps, and we get $l_\alpha = \text{VaR}_\alpha(X)$.
To estimate the CVaR, we apply the circuit $F_l$ with $l = l_\alpha$ to an ancilla qubit and use this ancilla as a control for the operator $F$ used to estimate the expected value, but with a different normalization, as shown in Eq. (6). Based on the previous discussion, it follows that amplitude estimation can then be used to approximate $\text{CVaR}_\alpha(X)$ with the same trade-off between circuit depth and convergence rate as for the expected value.

IV. T-BILL ON A SINGLE PERIOD BINOMIAL TREE
Our first model consists of a zero-coupon bond discounted at an interest rate $r$. We seek the value of the bond today given that in the next time step the rate might rise by $\delta r$. The value of a bond with face value $V_F$ is
$$V = p\,\frac{V_F}{1+r} + (1-p)\,\frac{V_F}{1+r+\delta r}, \tag{13}$$
where $p$ and $(1-p)$ denote the probabilities of a constant interest rate and a rate rise, respectively. This model is the first step of a binomial tree. Binomial trees can be used to price securities with a path dependency, such as bonds with embedded options [30]. The simple scenario in Eq. (13) could correspond to a market participant who bought a 1-year T-bill the day before a Federal Open Market Committee announcement and expects a $\delta r = 0.25$ percentage-point increase of the Federal Funds Rate with a $(1-p) = 70\%$ probability and no change with a $p = 30\%$ probability [50].
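The one-period model is a probability-weighted average of two discounted face values. A sketch of the calculation follows; the initial yield `r=0.015` is an illustrative assumption, since the text fixes only $\delta r$ and $p$:

```python
def tbill_value(V_F, r, delta_r, p):
    """One-period binomial value of a zero-coupon T-bill: with probability p
    the rate stays at r, with probability 1 - p it rises by delta_r, and the
    face value is discounted in each branch."""
    return p * V_F / (1 + r) + (1 - p) * V_F / (1 + r + delta_r)

# Scenario from the text: a 25 bp hike with 70% probability, no change with
# 30% probability; the starting yield of 1.5% is a hypothetical choice.
V = tbill_value(V_F=100.0, r=0.015, delta_r=0.0025, p=0.3)
```

On the quantum computer, $V$ is rescaled to $[0,1]$ so that this expectation becomes the amplitude $a$ that amplitude estimation recovers.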
We show how to calculate the value of the investor's T-bill on the IBM Q Experience by applying amplitude estimation, mapping $V$ to $[0,1]$ such that $V_{\text{low}}$ and $V_{\text{high}}$ correspond to \$0 and \$1, respectively.
Here, we only need a single qubit to represent both the uncertainty and the objective, and we have $\mathcal{A} = R_y(\theta_p)$ with $\theta_p = 2\sin^{-1}(\sqrt{p})$, so that $\mathcal{A}|0\rangle = \sqrt{1-p}\,|0\rangle + \sqrt{p}\,|1\rangle$. For this one-dimensional case, it can easily be seen that the amplitude estimation operator is $\mathcal{Q} = \mathcal{A} Z \mathcal{A}^\dagger Z = R_y(2\theta_p)$, where $Z$ denotes the corresponding Pauli operator [4]. We discuss this in more detail in Appendix A. In particular, this implies $\mathcal{Q}^{2^j} = R_y(2^{j+1}\theta_p)$, which allows us to construct the amplitude estimation circuit efficiently to approximate the parameter $p = \mathbb{E}[X] = 30\%$.
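The collapse of $\mathcal{Q}$ to a single rotation can be checked directly with $2 \times 2$ matrices. The sketch below builds $\mathcal{Q} = \mathcal{A} Z \mathcal{A}^\dagger Z$ explicitly and relies only on the fact that $R_y$ is a real rotation, so its adjoint is its transpose:

```python
import math

def Ry(theta):
    """Single-qubit Y-rotation as a real 2x2 matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p = 0.3
theta_p = 2 * math.asin(math.sqrt(p))
A = Ry(theta_p)                    # A|0> = sqrt(1-p)|0> + sqrt(p)|1>
Z = [[1.0, 0.0], [0.0, -1.0]]
Adag = [[A[j][i] for j in range(2)] for i in range(2)]   # real => transpose
Q = matmul(matmul(A, Z), matmul(Adag, Z))                # Q = A Z A^dag Z
# Since Q equals the single rotation Ry(2*theta_p), its powers stay cheap:
# Q^(2^j) = Ry(2^(j+1) * theta_p), one gate per controlled power.
```

This is why the hardware demonstration fits on five qubits: each controlled $\mathcal{Q}^{2^j}$ is a single controlled rotation rather than a deep circuit.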
Although a single-period binomial tree is a very simple model, it is straightforward to extend it to multi-period multinomial trees with path-dependent assets. Thus, it represents the smallest building block for interesting scenarios of arbitrary complexity.

Results from real quantum hardware
We run several experiments in which we apply amplitude estimation with different numbers $m$ of evaluation qubits. This requires at most five qubits and can be implemented and run on the IBM Q 5 Yorktown (ibmqx2) quantum processor, with five qubits accessible via the IBM Q Experience [31]. As discussed in Sec. II, the success probability of amplitude estimation is larger than $8/\pi^2$, but not necessarily 100%, and the real hardware introduces additional errors. Thus, we repeat every circuit 8192 times (i.e., the maximal number of shots in the IBM Q Experience) to get a reliable estimate. This implies a constant overhead, which we ignore in the comparison of the algorithms. The quantum circuit for $m = 3$ compiled to the IBM Q 5 quantum processor is illustrated in Fig. 3. The connectivity of the IBM Q 5 quantum processor, shown in Appendix G, requires swapping two qubits in the middle of the circuit between the application of the controlled $\mathcal{Q}$ operators and the inverse Quantum Fourier Transform. The results of the algorithm are illustrated in Fig. 4, where it can be seen that the most frequent estimator approaches the real value $p$ and that the resolution of the algorithm increases with $m$. The quantum algorithm presented in this paper outperforms the Monte Carlo method already for $M = 16$ samples (i.e. $m = 4$ evaluation qubits), which is the largest scenario we ran on the real hardware, see Fig. 5. The details of this convergence analysis are discussed in Appendix E.

V. TWO ASSET PORTFOLIO
We now illustrate how to use our algorithm to calculate the daily risk in a portfolio made up of one-year US Treasury bills and two-year US Treasury notes with face values $V_{F1}$ and $V_{F2}$, respectively. We chose a simple portfolio in order to put the focus on the amplitude estimation algorithm applied to VaR. The portfolio value $V$ is the sum of the discounted cash-flows of the two securities, given in Eq. (14), where $c$ is the coupon rate paid every six months by the two-year Treasury note and $r_1$ and $r_2$ are the yields to maturity of the one-year bill and two-year note, respectively. US Treasuries are usually assumed to be default free [32]. The cash-flows are thus known ex ante, and changes in the interest rates are the primary risk factors. Therefore, a proper understanding of the yield curve suffices to model the risk in this portfolio. In this work we use the Constant Maturity Treasury (CMT) rates to model the uncertainty in $r_1$ and $r_2$, see Appendix F for a description of the data. To calculate the daily risk of our portfolio we study the difference in the CMT rates from one day to the next. These differences are highly correlated (as are the CMT rates themselves), see Fig. 6(a), making it unnecessary to model them all individually when simulating more complex portfolios. A principal component analysis reveals that the first three principal components, named shift, twist and butterfly, account for 96% of the variance [33,34], see Fig. 6(b)-(d). Therefore, when modeling a portfolio of US Treasury securities it suffices to study the distribution of these three factors. This dimensionality reduction also lowers the amount of resources needed by our quantum algorithm.
To study the daily risk in the portfolio we write $r_i = r_{i,0} + \delta r_i$, where $r_{i,0}$ is the yield to maturity observed today and the random variable $\delta r_i$ follows the historical distribution of the one-day changes in the CMT rate with maturity $i$. For our demonstration we set $V_{F1} = V_{F2} = \$100$, $r_{1,0} = 1.8\%$, $r_{2,0} = 2.25\%$, and $c = 2.5\%$ in Eq. (14). We perform a principal component analysis of $\delta r_1$ and $\delta r_2$ and retain only the shift $S$ and twist $T$ components. Figure 7 illustrates the historical data as well as $S$ and $T$, related to $\delta r_i$ by the linear transformation in Eq. (15). The correlation coefficient between shift and twist is $-1\%$. We thus assume them to be independent and fit discrete distributions to each separately, see Fig. 8. We retained only the first two principal components to illustrate the use of principal component analysis, even though in this example there is no dimensionality reduction. Furthermore, this allows us to simulate our algorithm in a reasonable time on classical hardware by keeping the number of required qubits low. We expect that all three components would be retained when running this algorithm on real quantum hardware for larger portfolios.
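For two rates, the shift/twist decomposition amounts to diagonalizing the $2 \times 2$ sample covariance of $(\delta r_1, \delta r_2)$, which has a closed form. The sketch below uses illustrative, strongly correlated daily changes, not the actual CMT series:

```python
import math

def pca_2d(xs, ys):
    """Principal components of paired rate changes: eigenvalues of the 2x2
    sample covariance matrix in closed form. Returns (largest eigenvalue,
    smallest eigenvalue, fraction of variance explained by the first
    component, i.e. the 'shift')."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    cyy = sum((y - my) ** 2 for y in ys) / (n - 1)
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    tr, det = cxx + cyy, cxx * cyy - cxy ** 2
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + root, tr / 2 - root
    return lam1, lam2, lam1 / (lam1 + lam2)

# Hypothetical one-day changes in two highly correlated rates (in %-points).
dr1 = [-0.03, -0.01, 0.00, 0.02, 0.04, -0.02, 0.01, -0.01]
dr2 = [-0.04, -0.01, 0.01, 0.02, 0.05, -0.03, 0.01, -0.02]
lam1, lam2, explained = pca_2d(dr1, dr2)
```

When the rates move almost in lockstep, the first component dominates, which is why loading only the shift (and, here, the twist) distributions into the quantum registers loses little information.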

A. Uncertainty representation in the quantum computer
We use three qubits, denoted by $q_0, q_1, q_2$, to represent the distribution of $S$, and two, denoted by $q_3, q_4$, for $T$. As discussed in Sec. III, the probability distributions are encoded by the states $|\psi_S\rangle = \sum_{i=0}^{7} \sqrt{p_{i,S}}\,|i\rangle_3$ and $|\psi_T\rangle = \sum_{i=0}^{3} \sqrt{p_{i,T}}\,|i\rangle_2$ for $S$ and $T$, which can thus take eight and four different values, respectively. We use more qubits for $S$ than for $T$ since the shift explains a larger part of the variance. Additional qubits may be used to represent the probability distributions at a higher resolution. The qubits naturally represent integers via binary encoding, and we apply the affine mappings given in Eqs. (16) and (17), where $x \in \{0, ..., 7\}$ and $y \in \{0, ..., 3\}$ denote the integer representations of $S$ and $T$, respectively. Given the almost perfect symmetry of the historical data, we fit symmetric distributions to it. The operator $R$ that we define prepares a quantum state $R|0\rangle_5$, illustrated by the dots in Fig. 8, that represents the distributions of $S$ and $T$, up to the aforementioned affine mapping.

B. Portfolio value

Next, we show how to construct the operator $F$ that translates the random variables $x$ and $y$ into a portfolio value. Equations (14) through (17) allow us to define the portfolio value $V$ in terms of $x$ and $y$, instead of $r_1$ and $r_2$. For simplicity, we use a first-order approximation
$$\tilde{f}(x, y) = 203.5170 - 13.1896\,x - 1.8175\,y \tag{18}$$
of $V$ around the mid points $x = 3.5$ and $y = 1.5$. From a financial perspective, the first-order approximation $\tilde{f}$ of $V$ corresponds to studying the portfolio from the point of view of its duration [35]. Higher-order expansions, e.g. including convexity, could be considered at the cost of increased circuit depth.

C. Results from simulations of an ideal quantum computer
We simulate the two-asset portfolio for different numbers $m$ of sampling qubits to show the behavior of the accuracy and convergence rate. We repeat this task twice, once for a processor with all-to-all connectivity and once for a processor with a connectivity corresponding to the IBM Q 20 chip, see Appendix G. This highlights the overhead imposed by a realistic chip connectivity. For $M = 2^m$ samples, we need a total of $m + 12$ qubits for the expected value and VaR, and $m + 13$ qubits for CVaR. Five of these qubits are used to represent the distribution of the interest rate changes, see Sec. V A, one qubit is needed to create the state in Eq. (4) used by amplitude estimation, and six ancillas are needed to implement the controlled $\mathcal{Q}$ operator. For CVaR we need one more ancilla for the comparison to the level $l$, as discussed in Sec. III. Once the shift and twist distributions are loaded into the quantum computer, using the circuits shown in Fig. 8(c) and (d), we apply the operator $F$ to create the state defined in Eq. (4).
We compare the quantum estimation of risk to the exact 95% VaR level of \$0.288. When taking into account the mapping of Sec. V B, this classical VaR corresponds to 0.093, shown by the vertical line in Fig. 9. The quantum estimation of risk rapidly approaches this value as $m$ is increased, see Fig. 9. We find that the connectivity of the IBM Q 20 chip increases the number of CNOT gates by a factor of 2.5 when compared to a chip with all-to-all connectivity [51].

D. Results from simulations of a noisy quantum computer
Computing risk for the two-asset portfolio requires a long circuit. However, it suffices for amplitude estimation to return the correct state with the highest probability, i.e. measurements do not need to yield this state with 100% probability. We now run simulations with errors to investigate how much imperfection can be tolerated before the correct state can no longer be identified.
We study the effect of two types of errors: energy relaxation and cross-talk, where the latter is only considered for two-qubit gates (CNOT gates). We believe this approximation suffices to capture the leading error sources: errors and gate times for single-qubit gates are in general an order of magnitude lower than for two-qubit gates [36][37][38], and our algorithm requires the same order of magnitude in the number of single- and two-qubit gates, see Tab. I. Energy relaxation is simulated using a relaxation rate $\gamma$ such that after a time $t$ each qubit has a probability $1 - \exp(-\gamma t)$ of relaxing to $|0\rangle$ [39]. We set the duration of the CNOT gates to 100 ns and assume that the single-qubit gates are done instantly and are thus exempt from errors. We also include qubit-qubit cross-talk in our simulation by adding a ZZ error term to the generator of the CNOT gate. Typical cross-resonance [40] CNOT gate rates are of the order of 5 MHz, whilst cross-talk on IBM Q chips is of the order of $-100$ kHz [38]. We thus estimate a reasonable value of $\alpha$, i.e. the strength of the cross-talk, to be $-2\%$ and simulate its effect over the range $[-3\%, 0\%]$.
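The relaxation part of this error model reduces to a one-line formula. The sketch below computes the per-qubit relaxation probability during one CNOT under the stated assumptions (100 ns gate time, $\gamma = 1/T_1$ with an assumed $T_1 = 100\ \mu s$):

```python
import math

def relaxation_error_prob(gamma, t):
    """Probability that a qubit relaxes to |0> during time t at relaxation
    rate gamma, as used in the noisy simulations: 1 - exp(-gamma * t)."""
    return 1.0 - math.exp(-gamma * t)

# Per-qubit relaxation probability during one 100 ns CNOT, assuming
# gamma = 1/T1 with T1 = 100 us; for gamma*t << 1 this is roughly t/T1.
T1 = 100e-6
p_cnot = relaxation_error_prob(gamma=1.0 / T1, t=100e-9)
```

At roughly $10^{-3}$ per qubit per CNOT, a circuit with thousands of CNOT gates accumulates order-one relaxation probability, which is why the algorithm must tolerate a substantial chance of measuring the wrong state.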
We illustrate the effect of these errors by computing the expected value of the portfolio. Since the distributions are symmetric around zero and mapped to the interval $[0,1]$, we expect a value of 0.5, i.e. from one day to the next we do not expect a change in the portfolio value. This simulation is run with $m = 2$ sample qubits since this suffices to exactly estimate 0.5. The algorithm is successful if it identifies 0.5 with a probability greater than 50%. With our error model this is achieved for relaxation rates $\gamma < 10^4~\mathrm{s}^{-1}$ and cross-talk strengths $|\alpha| < 1\%$, see Fig. 10(a)-(c), despite the 4383 gates needed. A generous estimate of current hardware capabilities, with $\gamma = 10^4~\mathrm{s}^{-1}$ (loosely based on $T_1 = 100~\mu s$) and $\alpha = -2\%$, shown as red lines in Fig. 10, indicates that this simulation may be possible in the near future as long as other error sources (such as measurement error and unitary errors resulting from improper gate calibrations) are kept under control.

VI. CONCLUSION
We developed a quantum algorithm to estimate risk, e.g. for portfolios of financial assets, that achieves a quadratic speedup compared to classical Monte Carlo methods. The algorithm has been demonstrated on real hardware for a small model, and its scalability and the impact of noise have been studied using a more complex model and simulation. Our approach is very flexible and straightforward to extend to other risk measures such as semi-variance.
More qubits are needed to model realistic scenarios, and the errors of actual hardware need to be reduced. Although the quadratic speedup can already be observed for a small number of samples, more than that is needed to achieve a practical quantum advantage. In practice, Monte Carlo simulations can be massively parallelized, which pushes the bar for a quantum advantage even higher.
Our simulations of the two-asset portfolio show that circuit depth is a limiting factor for current hardware. In order to perform the calculation of VaR for the two-asset portfolio on real quantum hardware, qubit coherence times will likely have to be increased by several orders of magnitude and cross-talk further suppressed. However, approximating, parallelizing, and decomposing quantum phase estimation are active areas of research, and we expect significant improvements not only in hardware but also in algorithms [41][42][43]. These can help shorten the required circuit depths and thus reduce the hardware requirements for achieving a quantum advantage. Circuit depth can also be shortened by using a more versatile set of gates. For instance, the ability to implement SWAP gates directly in hardware would circumvent the need to synthesize them from CNOT gates [44,45]. In addition, techniques such as error mitigation [46] could be applied to cope with the noisy hardware of the near future.
Another question that has only briefly been addressed in this paper is the loading of considered random distributions or stochastic processes. For auto-correlated processes this can be rather costly and needs to be further investigated. Techniques known from classical Monte Carlo, such as importance sampling [47], might be employed here as well to improve the results or reduce the circuit depth.
where we used $\prod_{i=1}^{u}(2i-1) \le (2u)!$. For a Taylor approximation of order $2u+1$, the error bound is given by the bound on the next non-vanishing Taylor coefficient in the series, i.e. the coefficient of order $2u+3$.