Cross-Platform Comparison of Arbitrary Quantum Processes

In this work, we present a protocol for comparing the performance of arbitrary quantum processes executed on spatially or temporally disparate quantum platforms using Local Operations and Classical Communication (LOCC). The protocol involves sampling local unitary operators, which are then communicated to each platform via classical communication to construct quantum state preparation and measurement circuits. Subsequently, the local unitary operators are implemented on each platform, resulting in the generation of probability distributions of measurement outcomes. The max process fidelity is estimated from the probability distributions, which ultimately quantifies the relative performance of the quantum processes. Furthermore, we demonstrate that this protocol can be adapted for quantum process tomography. We apply the protocol to compare the performance of five quantum devices from IBM and the "Qianshi" quantum computer from Baidu via the cloud. Remarkably, the experimental results reveal that the protocol can accurately compare the performance of the quantum processes implemented on different quantum computers, requiring significantly fewer measurements than those needed for full quantum process tomography. We view our work as a catalyst for collaborative efforts in cross-platform comparison of quantum computers.


I. INTRODUCTION
As the field of quantum computing and quantum information gains traction, an increasing number of manufacturers are entering the market, producing their own quantum computers. However, the current generation of noisy intermediate-scale quantum (NISQ) computers, despite their potential, is still hindered by quantum noise [1]. A great challenge is how to compare the performance of quantum computers fabricated by different manufacturers and located in different laboratories, a task termed cross-platform comparison. This task is especially relevant as we move towards regimes where comparing to classical simulations becomes computationally challenging, making a direct comparison of quantum computers necessary.
A standard method to achieve cross-platform comparison is quantum tomography [2], in which we first reconstruct the full information of the quantum computers under investigation and then estimate their relative fidelity from the obtained matrices. However, quantum tomography is known to be time-consuming and computationally difficult; even learning a few-qubit quantum state is already experimentally challenging [3,4]. A more efficient approach is to estimate the fidelity of the quantum computer without resorting to the full information. Indeed, a variety of estimation and verification tools [5,6], such as fidelity estimation [7-12] and quantum verification [13-16], have been developed along these lines. However, these methods assume that one can access a known theoretical target, usually simulated by classical computers. They quickly become inaccessible for quantum computers containing several hundreds or even thousands of highly entangled qubits, due to the intrinsic time complexity of classical simulation.
Recently, Elben et al. [17] proposed the first cross-platform protocol for estimating the fidelity of quantum states, possibly generated by spatially and temporally separated quantum computers. This protocol requires only local measurements in randomized product bases and classical communication between quantum computers. Numerical simulation shows that it consumes significantly fewer measurements than full quantum state tomography. It is expected to be applicable to state-of-the-art quantum computers consisting of a few tens of qubits. Later on, Knörzer et al. [18] extended Elben's protocol to cross-platform comparison of quantum networks, assuming the existence of quantum links that can teleport quantum states. Nevertheless, a quantum link transferring quantum states of many qubits with high accuracy between two distant quantum computers is far from reach in the near future.
In this work, by elaborating on the core idea of [17], we present a novel protocol for the cross-platform comparison of spatially and temporally separated quantum processes. The protocol uses only single-qubit unitary gates and classical communication between quantum computers, without requiring quantum links or ancilla qubits. This approach allows for accurate estimation of the performance of quantum devices manufactured in separate laboratories and companies using different technologies. Furthermore, the protocol can be used to monitor the stable functioning of target quantum computers over time. We apply the protocol to compare the performance of five quantum devices from IBM and the "Qianshi" quantum computer from Baidu via the cloud. Our experimental results reveal that our protocol can accurately compare the performance of arbitrary quantum processes. Although the sample complexity of our protocol still scales exponentially with the number of qubits, it has a significantly smaller exponent factor compared with that of quantum process tomography. Overall, our protocol serves as a novel application of the powerful randomized measurement toolbox [19].
The rest of the paper is organized as follows. Section II reviews the cross-platform quantum state comparison protocol of [17] and introduces the quantum process performance metric. Section III elaborates the main result, a new protocol for the cross-platform comparison of arbitrary quantum processes. In particular, we summarize the similarities and differences between our protocol and that of [18]. Section IV reports a thorough experimental cross-platform comparison on spatially and temporally separated quantum computers, along with a comprehensive investigation of the experimental data. The Appendices summarize technical details of the main text.

II. PRELIMINARIES

A. Cross-platform comparison of quantum states
In quantum information, fidelity is an important metric that is widely used to characterize the closeness between quantum states. There are many different proposals for the definition of state fidelity [20]. In this work, we concentrate on the max fidelity, formally defined as [17,20]

F_max(ρ_1, ρ_2) := Tr[ρ_1 ρ_2] / max{Tr[ρ_1^2], Tr[ρ_2^2]},   (1)

where ρ_i is an n-qubit quantum state produced by quantum computer i = 1, 2.
Elben et al. [17] proposed a randomized measurement protocol to estimate F_max, which functions as follows. First, we construct an n-qubit unitary U = ⊗_{k=1}^n U_k, where each U_k is identically and independently sampled from a single-qubit set X_2 satisfying the unitary 2-design condition [21,22]. This information is classically communicated to the quantum computers, possibly spatially or temporally separated, that produce the quantum states ρ_1 and ρ_2, respectively. Then, each quantum computer executes the unitary U, performs a computational basis measurement, and records the measurement outcome s. Repeating the above procedure for fixed U a number of times, we obtain two probability distributions over the outcomes of the form Pr_U^(i)(s), where the superscript i indicates that the distribution is obtained from quantum state ρ_i. Next, we repeat the whole procedure for many different random unitaries U, yielding a set of probability distributions {Pr_U^(i)}_U. From the experimental data, we estimate the overlap between ρ_i and ρ_j as [17]

Tr[ρ_i ρ_j] = 2^n Σ_{s,s'} (−2)^{−D[s,s']} ⟨Pr_U^(i)(s) Pr_U^(j)(s')⟩,   (2)

where ⟨· · ·⟩ denotes the ensemble average over the sampled unitaries U and D[s,s'] denotes the Hamming distance between the two bitstrings s and s'. Specifically, Tr[ρ_1 ρ_2] can be estimated from (2) by setting i = 1 and j = 2, whereas the purities Tr[ρ_1^2] and Tr[ρ_2^2] can be obtained by setting i = j = 1 and i = j = 2, respectively. Using the above estimated quantities, we compute the max fidelity F_max(ρ_1, ρ_2).
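As a minimal illustration, the estimator (2) can be checked numerically for n = 1 by averaging exactly over the three Pauli measurement bases (which, as discussed in Section IV A, are equivalent to a single-qubit Clifford 2-design for this purpose) instead of sampling random unitaries. The function names below are our own, not part of the protocol:

```python
import numpy as np
from itertools import product

# Basis-change unitaries: measure in the Z, X, and Y bases, respectively.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.diag([1, -1j])
BASES = [np.eye(2), H, H @ Sdg]

def born_probs(rho, U):
    """Outcome probabilities of a computational-basis measurement after applying U."""
    return np.real(np.diag(U @ rho @ U.conj().T))

def overlap(rho1, rho2):
    """Estimate Tr[rho1 rho2] via Eq. (2) with n = 1, averaging exactly over BASES."""
    total = 0.0
    for U in BASES:
        p, q = born_probs(rho1, U), born_probs(rho2, U)
        for s, t in product(range(2), repeat=2):
            total += (-2.0) ** (-(s != t)) * p[s] * q[t]
    return 2 * total / len(BASES)   # prefactor 2^n (n = 1), ensemble average

def f_max(rho1, rho2):
    """Max fidelity of Eq. (1), with all traces estimated through Eq. (2)."""
    return overlap(rho1, rho2) / max(overlap(rho1, rho1), overlap(rho2, rho2))
```

For example, `f_max` between |0⟩⟨0| and the maximally mixed state evaluates to (1/2)/max{1, 1/2} = 1/2, in agreement with a direct trace computation.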
Using experimental data from [23], Elben et al. showcased the experiment-theory and experiment-experiment fidelities of highly entangled quantum states prepared via quench dynamics in a trapped-ion quantum simulator as a proof of principle [17]. Recently, Zhu et al. reported a thorough cross-platform comparison of quantum states across four ion-trap and five superconducting quantum platforms, with a detailed analysis of the results and an intriguing machine learning approach to explore the data [24].

B. Quantum process performance metric
A quantum process, also known as a quantum operation or a quantum channel, is a mathematical description of the evolution of a quantum system. It is mathematically formulated as a completely positive and trace-preserving (CPTP) linear map on the quantum states [25]. The Choi-Jamiołkowski isomorphism provides a unique way to represent quantum processes as quantum states in a larger Hilbert space. Formally, the Choi state of an n-qubit quantum process E is defined as [26]

η_E := (I ⊗ E)(|ψ+⟩⟨ψ+|),   (3)

where I is the identity channel and |ψ+⟩ := (1/√(2^n)) Σ_i |i⟩|i⟩ is a maximally entangled state of a bipartite quantum system composed of two n-qubit subsystems.
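The Choi state is straightforward to construct numerically from this definition. The helper below is our own sketch for small n; it also checks the basic sanity property that the identity channel gives back |ψ+⟩⟨ψ+|:

```python
import numpy as np

def choi(channel, n=1):
    """Choi state eta_E = (I ⊗ E)(|psi+><psi+|) of an n-qubit channel E.

    |psi+><psi+| = (1/2^n) sum_{i,j} |i><j| ⊗ |i><j|, so we apply E to the
    second tensor factor of each |i><j| block.
    """
    d = 2 ** n
    eta = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            ket_ij = np.zeros((d, d)); ket_ij[i, j] = 1.0   # the operator |i><j|
            eta += np.kron(ket_ij, channel(ket_ij)) / d
    return eta

identity_channel = lambda rho: rho
psi_plus = np.zeros(4); psi_plus[0] = psi_plus[3] = 1 / np.sqrt(2)  # n = 1
```

For any trace-preserving channel the resulting η_E has unit trace, and `choi(identity_channel)` reproduces |ψ+⟩⟨ψ+| exactly.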
One lesson we can learn from the cross-platform state comparison protocol is that we must choose a process metric before comparing two quantum processes. Gilchrist et al. [27] have introduced a systematic way to generalize a metric originally defined on quantum states to a corresponding metric on quantum processes, utilizing the Choi-Jamiołkowski isomorphism. Specifically, the max fidelity between two n-qubit quantum processes E_1 and E_2, implemented on different quantum platforms, is defined as

F_max(E_1, E_2) := F_max(η_E1, η_E2) = Tr[η_E1 η_E2] / max{Tr[η_E1^2], Tr[η_E2^2]},   (4)

where η_E is the Choi state of the quantum process E. This metric fulfills the axioms for quantum process fidelities following the argument in [27]. It is reasonable to believe that, at least to some extent, a high value of this metric indicates that the two quantum processes implement the same quantum evolution.
In the following, we propose an experimentally efficient protocol to estimate this metric. The protocol makes use of only single-qubit unitaries and classical communication, and thus can be executed on spatially and temporally separated quantum devices. This enables cross-platform comparison of arbitrary quantum processes.

III. CROSS-PLATFORM COMPARISON
In this section, we first provide a simple example to illustrate the necessity of cross-platform comparison. Then, we introduce a protocol for estimating the max process fidelity that is conceptually straightforward yet experimentally challenging. Next, we propose a modification to the protocol that employs randomized input states and provide a detailed explanation of the approach. Furthermore, we demonstrate that our protocol can be extended to accomplish full process tomography. Our protocol is motivated by the observation that even identical quantum computers cannot produce identical outcomes on each run, due to the intrinsic randomness of quantum mechanics, but they do generate identical probability distributions from a statistical perspective.
Cross-platform comparison of quantum computers is essential for at least two reasons. Firstly, comparing the actual implementation with an idealized theoretical simulation can be challenging, as classical simulations become computationally demanding with an increasing number of qubits. Secondly, due to the presence of varying forms of quantum noise across different quantum platforms, the actual implementations of quantum processes can vary significantly, even if they maintain the same process fidelity with respect to the ideal target. To illustrate this point, consider the following example. Suppose Alice has a superconducting quantum computer and Bob has a trapped-ion quantum computer. They implement the single-qubit Hadamard gate H(ρ) = HρH† on their respective quantum computers. However, Alice's implementation E_1 suffers from depolarizing noise, yielding E_1(ρ) = (1 − p_1)HρH† + p_1 1/2, where p_1 = 7/30. On the other hand, Bob's implementation E_2 suffers from dephasing noise, such that E_2(ρ) = (1 − p_2)HρH† + p_2 Δ(ρ), where p_2 = 1/5 and Δ(·) is the dephasing operation. A direct calculation then shows that, despite E_1 and E_2 attaining comparable fidelity levels when compared to the ideal target H, a discernible difference exists between them. Therefore, solely comparing the fidelity of a quantum process to an ideal reference is insufficient, and a direct comparison between quantum processes is warranted.
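This example can be reproduced numerically from the definition of the Choi state and the max process fidelity of Eq. (4). The sketch below is our own code; under these definitions the fidelities to the ideal H come out close to each other (0.825 and 0.85), while the direct comparison F_max(E_1, E_2) = 0.71/0.74 ≈ 0.96 falls visibly short of 1:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def choi(channel):
    """Choi state (I ⊗ E)(|psi+><psi+|) of a single-qubit channel E."""
    eta = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            ket_ij = np.zeros((2, 2)); ket_ij[i, j] = 1.0
            eta += np.kron(ket_ij, channel(ket_ij)) / 2
    return eta

def f_max(eta1, eta2):
    """Max process fidelity of Eq. (4) evaluated from Choi matrices."""
    ov = np.real(np.trace(eta1 @ eta2))
    p1 = np.real(np.trace(eta1 @ eta1))
    p2 = np.real(np.trace(eta2 @ eta2))
    return ov / max(p1, p2)

ideal = lambda rho: H @ rho @ H.conj().T                                  # noiseless H
e1 = lambda rho, p=7/30: (1 - p) * ideal(rho) + p * np.trace(rho) * np.eye(2) / 2
e2 = lambda rho, p=1/5: (1 - p) * ideal(rho) + p * np.diag(np.diag(rho))  # Delta keeps diagonals
```

Running `f_max(choi(e1), choi(e2))` makes the "discernible difference" quantitative even though both channels look similarly good against the ideal gate.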

A. Ancilla-assisted cross-platform comparison
In this section, we review a conceptually simple approach for estimating the max process fidelity defined in Eq. (4), which was recently proposed in [18]. The key observation is that this fidelity can be seen as the max state fidelity between the Choi states of the corresponding quantum processes. To construct the Choi state of an n-qubit quantum process E, we need to introduce an additional clean n-qubit auxiliary system. Using the auxiliary system, we prepare the 2n-qubit maximally entangled state |ψ+⟩ and apply E to half of the whole system, which prepares the Choi state of E. We can then estimate the max state fidelity using the procedure introduced in Section II A. The complete protocol is illustrated in Figure 1(a)-(c).
We refer to this protocol as the ancilla-assisted cross-platform comparison because it requires additional clean ancilla qubits to prepare the Choi state of the quantum process. To perform this protocol, a maximally entangled state is required as input, resulting in a two-fold overhead: one compares 2n-qubit states instead of n-qubit states. Consequently, this protocol may not be practical in scenarios with limited quantum computing resources. Furthermore, preparing high-fidelity maximally entangled states can be experimentally challenging, which may negatively impact the accuracy of the protocol.

FIG. 1: Two protocols to estimate the max process fidelity F_max between quantum processes implemented on different quantum platforms. (a) Ancilla-assisted protocol: prepare the maximally entangled state |ψ+⟩, execute the target quantum process, and perform the randomized measurements given by ⊗_{k=1}^{2n} U_2^(k). (b) Ancilla-free protocol: randomly sample a computational basis state |s⟩, execute the unitaries ⊗_{k=1}^{n} U_1^(k), execute the target quantum process, and perform the randomized measurements given by ⊗_{k=1}^{n} U_2^(k). (c) Run the quantum circuits constructed in (a) or (b) on platform S_i to obtain the probability distribution Pr_U^(i). The max process fidelity F_max(E_i, E_j) is inferred from the probability distributions (see text).

B. Ancilla-free cross-platform comparison
To overcome the limitations of the ancilla-assisted protocol, we propose an efficient, ancilla-free approach for estimating the max process fidelity. Our protocol does not require any additional qubits or the preparation of maximally entangled states. The key observation is that the auxiliary system in the ancilla-assisted protocol only needs to perform randomized measurements. After the measurement, the auxiliary system collapses to one eigenstate of the sampled measurement operator. Based on the identity (|u⟩⟨u| ⊗ 1)|ψ+⟩ = 2^{-n/2} |u⟩ ⊗ |u*⟩, where 1 is the identity matrix, and the deferred measurement principle [28], we can eliminate the auxiliary system by preparing computational basis states and applying the transposed unitary operator on the main system. Please refer to Appendix A for a detailed analysis.
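The transpose identity underlying this reduction is easy to verify numerically. For a single qubit (n = 1) it reads (|u⟩⟨u| ⊗ 1)|ψ+⟩ = 2^{-1/2} |u⟩ ⊗ |u*⟩, and the snippet below (our own sanity check) tests exactly this relation:

```python
import numpy as np

def check_transpose_identity(u):
    """Verify (|u><u| ⊗ 1)|psi+> = (1/sqrt(2)) |u> ⊗ |u*> for one qubit."""
    psi_plus = np.zeros(4, dtype=complex)
    psi_plus[0] = psi_plus[3] = 1 / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
    lhs = np.kron(np.outer(u, u.conj()), np.eye(2)) @ psi_plus
    rhs = np.kron(u, u.conj()) / np.sqrt(2)               # |u> ⊗ |u*>, normalized
    return np.allclose(lhs, rhs)
```

The identity holds for any normalized |u⟩, including complex-amplitude states such as the |+i⟩ eigenstate of Y, which is what allows the ancilla measurement to be traded for a transposed state preparation on the main register.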
We refer to the new protocol as the ancilla-free cross-platform comparison; it works as follows. We consider two n-qubit quantum processes E_1 and E_2 realized on different quantum platforms S_1 and S_2, whose Choi states are η_1 and η_2, respectively. The protocol, illustrated in Figure 1(b)-(c), consists of three main steps: sampling unitaries, running circuits, and post-processing.
Step 1. Sampling unitaries: Construct two n-qubit unitaries U_i = ⊗_{k=1}^n U_i^(k) (i = 1, 2), where each U_i^(k) is identically and independently sampled from a single-qubit set X_2 satisfying the unitary 2-design condition. The information of U_1 and U_2 is then communicated to both platforms via classical communication.
Step 2. Running circuits: After receiving the information of the sampled unitaries, each platform S_i (i = 1, 2) initializes its quantum system to the computational basis state |s⟩ and applies the first unitary U_1 to |s⟩. Subsequently, S_i implements the quantum process E_i and applies the second unitary U_2. Finally, S_i performs the projective measurement in the computational basis and obtains an outcome k. Repeating the above procedure many times, we obtain two probability distributions Pr^(i)_{K|s,U_1,U_2} over the measurement outcomes k for the fixed computational state |s⟩ and unitaries U_1 and U_2. By exhausting the computational states and repeatedly sampling the unitaries, we obtain two probability distributions Pr^(i)_{K,S|U_1,U_2} with respect to the sampled unitaries and computational state inputs. For simplicity, we abbreviate Pr^(i)_{K,S|U_1,U_2} as Pr^(i)_U, where U = U_1 ⊗ U_2.

Step 3. Post-processing: From the experimental data, we estimate the overlap between the Choi states η_i and η_j for i, j = 1, 2 as

Tr[η_i η_j] = Σ_{s,s',k,k'} (−2)^{−D[s,s']−D[k,k']} ⟨Pr_U^(i)(k|s) Pr_U^(j)(k'|s')⟩,   (5)

where ⟨· · ·⟩ denotes the ensemble average over the sampled unitaries U_1 and U_2. This is proven in Appendix A. By setting i = 1 and j = 2, we can estimate the overlap Tr[η_1 η_2] from the above equation, which is the second-order cross-correlation of the probabilities Pr_U^(i). We can obtain the purities Tr[η_1^2] and Tr[η_2^2] by setting i = j = 1 and i = j = 2, respectively. These are the second-order autocorrelations of the probabilities. Using the estimated quantities, we compute the max process fidelity F_max(E_1, E_2) in Eq. (4).
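Steps 1-3 can be put together in a compact single-qubit simulation. The sketch below is our own code, not the authors': it averages exactly over the three Pauli bases on both the input and the measurement side (rather than sampling), prepares U_1^T|s⟩ as in the ancilla-free reduction, and uses exact Born probabilities in place of finite shots, so the second-order correlator of Step 3 reproduces the Choi-state overlaps exactly:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.diag([1, -1j])
BASES = [np.eye(2), H, H @ Sdg]          # Z, X, Y measurement bases

def cond_probs(channel, U1, U2):
    """Pr(k|s): prepare U1^T|s>, apply the channel, rotate by U2, measure."""
    P = np.zeros((2, 2))
    for s in range(2):
        psi = U1.T[:, s]                              # U1^T |s>
        rho = channel(np.outer(psi, psi.conj()))
        P[s] = np.real(np.diag(U2 @ rho @ U2.conj().T))
    return P

def choi_overlap(chan_i, chan_j):
    """Tr[eta_i eta_j] via the second-order correlator of Step 3, n = 1."""
    total = 0.0
    for U1, U2 in product(BASES, repeat=2):
        Pi = cond_probs(chan_i, U1, U2)
        Pj = cond_probs(chan_j, U1, U2)
        for s, k, t, l in product(range(2), repeat=4):
            D = (s != t) + (k != l)                   # Hamming distance on (s, k)
            total += (-2.0) ** (-D) * Pi[s, k] * Pj[t, l]
    return total / len(BASES) ** 2                    # exact ensemble average

ideal_h = lambda rho: H @ rho @ H.conj().T
noisy_h = lambda rho, p=0.3: (1 - p) * ideal_h(rho) + p * np.trace(rho) * np.eye(2) / 2
```

With these hypothetical test channels, `choi_overlap(ideal_h, noisy_h)` returns 1 − 3p/4 = 0.775, and dividing by the larger purity gives the max process fidelity 0.775, matching a direct Choi-matrix computation.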
There are several important points to note about our protocol. First, when classical simulation is available, the protocol can be used to compare the experimentally implemented process to the theoretical simulation, providing a useful tool for experiment-theory comparison. Second, our protocol can also estimate the process purity Tr[η_E^2] of a quantum process E, which measures the extent to which E preserves the purity of the quantum state. This is an important measure for characterizing quantum processes, and our protocol provides an efficient way to estimate it. Finally, it is worth noting that the definition of max process fidelity is not unique, and different approaches exist [27,29]. Our protocol, based on statistical correlations of randomized inputs and measurements, can be readily extended to any metric that depends solely on the process overlap Tr[η_E1 η_E2] and the process purities Tr[η_E1^2] and Tr[η_E2^2]. This makes our protocol highly versatile and applicable to a wide range of quantum computing scenarios.

C. Randomized quantum process tomography
Here we argue that our protocol is applicable to full quantum process tomography. It is worth noting that in Ref. [30], a method was proposed for performing full quantum state tomography using randomized measurements. For an n-qubit quantum process E, we could first construct the Choi state of E and then use that protocol to obtain the full information of the Choi state η_E. However, as previously mentioned, this method is inefficient and impractical due to the imperfect preparation of maximally entangled states and the requirement for an additional n-qubit auxiliary system.
Likewise, we may use the randomized input-state trick introduced in Section III B to overcome the above issues. Specifically, based on the experimental data Pr_U collected in Section III B, the full information of an unknown n-qubit quantum process E can be obtained via

η_E = ⟨ Σ_{s,k} 2^{−n} Pr_U(k|s) ⊗_{m=1}^{2n} ( 3 U^(m)† |b_m⟩⟨b_m| U^(m) − 1 ) ⟩,   (6)

where U = U_1 ⊗ U_2, b = (s, k) is the concatenated bitstring of input s and outcome k, and ⟨· · ·⟩ denotes the ensemble average over the sampled unitaries U_1 and U_2 as before. This is proven in Appendix B.
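For n = 1 this tomographic reconstruction can be checked end to end. In the sketch below (our own code; the per-site inversion 3U†|b⟩⟨b|U − 1 is the standard local 2-design reconstruction of [30], applied here to the Choi state), the Choi matrix is rebuilt exactly from the conditional probabilities of the ancilla-free circuits, again averaging exactly over the Pauli bases rather than sampling:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.diag([1, -1j])
BASES = [np.eye(2), H, H @ Sdg]          # Z, X, Y bases

def cond_probs(channel, U1, U2):
    """Pr(k|s) of the ancilla-free circuit: prepare U1^T|s>, apply E, rotate by U2."""
    P = np.zeros((2, 2))
    for s in range(2):
        psi = U1.T[:, s]
        rho = channel(np.outer(psi, psi.conj()))
        P[s] = np.real(np.diag(U2 @ rho @ U2.conj().T))
    return P

def reconstruct_choi(channel):
    """Tomographic formula above for n = 1, with exact averaging over basis pairs."""
    eta = np.zeros((4, 4), dtype=complex)
    for U1, U2 in product(BASES, repeat=2):
        P = cond_probs(channel, U1, U2)
        for s, k in product(range(2), repeat=2):
            v1 = U1.conj().T[:, s]                    # U1† |s> (ancilla side)
            v2 = U2.conj().T[:, k]                    # U2† |k> (system side)
            A = 3 * np.outer(v1, v1.conj()) - np.eye(2)
            B = 3 * np.outer(v2, v2.conj()) - np.eye(2)
            eta += 0.5 * P[s, k] * np.kron(A, B)      # 2^{-n} Pr(k|s) weight
    return eta / len(BASES) ** 2

def choi_direct(channel):
    """Direct construction (I ⊗ E)(|psi+><psi+|) for comparison."""
    eta = np.zeros((4, 4), dtype=complex)
    for i, j in product(range(2), repeat=2):
        ket_ij = np.zeros((2, 2)); ket_ij[i, j] = 1.0
        eta += np.kron(ket_ij, channel(ket_ij)) / 2
    return eta

noisy_h = lambda rho, p=0.3: (1 - p) * H @ rho @ H.conj().T + p * np.trace(rho) * np.eye(2) / 2
```

For the hypothetical depolarized Hadamard channel above, the reconstructed Choi matrix agrees with the direct construction to numerical precision.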

D. Comparison with previous works
Knörzer et al. [18] have recently introduced a new set of protocols that enable pair-wise comparisons between distant nodes in a quantum network. The authors propose four cross-platform state comparison schemes as alternatives to Elben's protocol [17], each of which relies on the presence of quantum links. In addition, they present three protocols, referred to as M1, M2, and M3, which facilitate cross-platform comparisons of quantum processes, assuming that a cross-platform state comparison protocol is available.
We now explain how our protocol differs from M1, M2, and M3. While M1 involves an ancilla-assisted comparison protocol that we have rephrased in Section III A, our protocol does not rely on ancilla qubits. Similarly, M3 involves a series of entanglement tests that are fundamentally different from our protocol. Although our protocol and M2 share some similarities, such as the absence of ancilla qubits and the need to sample random unitaries and computational basis states, there are notable differences. Specifically, our protocol only needs to sample from a single-qubit unitary 2-design, can accurately estimate the max fidelity, and can compare the performance of arbitrary quantum processes. On the other hand, M2(i) estimates the average gate fidelity and requires sampling from a multi-qubit unitary 2-design, which can be resource-intensive as the number of qubits increases. M2(iii) is conceptually straightforward but can only estimate the ability of quantum processes to preserve quantum information in the computational basis. Additionally, M2(i) and M2(iii) are limited to comparing the performance of unitary quantum processes.

IV. EXPERIMENTS
In this section, we report experimental results on cross-platform comparison of quantum processes across various spatially and temporally separated quantum devices. First, we demonstrate the efficacy of our protocol in comparing the H and CNOT gates implemented on different platforms with their ideal counterparts obtained from classical simulation. Next, we monitor the stability of the "Qianshi" quantum computer from Baidu over a week with our protocol. Finally, we conduct an extensive numerical analysis to determine the expected number of experimental runs required to obtain reliable results. All the experiments are conducted using the Quantum Error Processing toolkit developed on the Baidu Quantum Platform [31].

A. Comparing spatially separated quantum processes
We utilize our ancilla-free cross-platform comparison protocol to assess the performance of H and CNOT gates implemented on seven distinct platforms that are freely accessible to the public over the internet.
First of all, it is noteworthy that random Pauli basis measurements {X, Y, Z} are equivalent to randomized measurements with the single-qubit Clifford group [24,32]. The Clifford group is a unitary 2-design and can be employed to conduct complete process tomography of n-qubit quantum processes. This equivalence enables us to sample directly from the 3^n Pauli preparation and 3^n Pauli measurement unitaries in our experiments.
To begin with, we utilize our protocol to compare the performance of the single-qubit H gate across seven quantum platforms. To achieve this, we create 2^1 × N_U = 20 random circuits and execute M_shots = 500 projective measurements for each circuit on each platform.

FIG. 3: The performance matrices of the single-qubit H and two-qubit CNOT gates generated from the daily data of Baidu's "Qianshi" quantum computer for one week. The entry in the i-th row and j-th column of the matrix represents the max process fidelity between platform-i and platform-j. The entries in the upper right corner are visualized in pie-chart format. (a) The performance matrix of the H gate. Each entry is inferred from 2^1 · N_U = 20 random circuits, and each circuit is repeated M_shots = 500 times. (b) The performance matrix of the CNOT gate. Each entry is inferred from 2^2 · N_U = 400 random circuits, and each circuit is repeated M_shots = 500 times.

Furthermore, we employ
the same protocol to compare the performance of the CNOT gate implemented on these platforms. To accomplish this, we generate 2^2 × N_U = 400 random circuits and perform M_shots = 500 repetitions for each quantum circuit on each platform. The performance matrices for the H and CNOT gates are presented in Figure 2.
The experimental results make it clear that, while some quantum devices may achieve fidelities that are comparable to those of the ideal simulator, there remains a significant discrepancy between them.This emphasizes the importance of directly comparing the performance of quantum devices with each other, rather than relying solely on comparisons to an ideal simulator, as such comparisons may not be adequate.

B. Comparing temporally separated quantum processes
Our protocol is also useful for monitoring the stable performance of quantum devices. To this end, we employ the ancilla-free cross-platform comparison protocol to assess the stability of the H and CNOT gates implemented on Baidu's "Qianshi" quantum computer (BD_1) over the course of one week. The experimental settings for the H and CNOT gates are identical to those used in the previous section. Specifically, for the single-qubit H gate, we create 2^1 × N_U = 20 random circuits daily and execute M_shots = 500 projective measurements for each circuit on "Qianshi". For the two-qubit CNOT gate, we create 2^2 × N_U = 400 random circuits daily and execute M_shots = 500 projective measurements for each circuit on "Qianshi". The performance matrices of the single-qubit H and two-qubit CNOT gates generated from the daily data of "Qianshi" are shown in Figure 3.
After analyzing the cross-platform fidelities presented in Figure 3, we discover several noteworthy features. First, we observe that the stability of the H gate is considerably higher than that of the CNOT gate on "Qianshi", which aligns with the expectation that two-qubit gates are harder to implement and maintain in a superconducting quantum computer than single-qubit gates. Additionally, on the last day of the week (DAY_7), there is a significant drop in the performance of the CNOT gate. After consulting with researchers from Baidu's Quantum Computing Hardware Laboratory, it was determined that the instability was caused by a sudden halt of the dilution cooling system. After the system was restarted, all native quantum gates had to be re-calibrated to achieve optimal performance. Furthermore, the temperature variation was observed to have a negligible impact on the H gate. Such observations might help experimenters identify potential hardware issues.

C. Scaling of the required number of experimental runs
In practice, the accuracy of the estimated fidelity is unavoidably subject to statistical error, as a result of the finite number of random circuits (2^n · N_U) and the finite number of projective measurements (M_shots) performed per random circuit. Therefore, it is experimentally crucial to consider the scaling of the total number of experimental runs, which equals 2^n · N_U · M_shots and constitutes the measurement budget, in order to effectively suppress the statistical error to a prespecified threshold when evaluating the performance of an n-qubit quantum process. In the following, we present numerical simulations to investigate this behavior.

FIG. 5: The target quantum process is taken to be the n-qubit GHZ state preparation circuit for the entangled case and the rotation circuit composed of n single-qubit rotation gates for the non-entangled case. The data is obtained via numerical simulation.
In Figure 4, numerical results for the average statistical error as a function of the measurement budget 2^n · N_U · M_shots are presented, and the scaling of the measurement budget with respect to the system size n is derived. To keep consistent with the previous experiments, we choose the H gate for n = 1 and the CNOT gate for n = 2 in the simulation. Note that in this case the ideal fidelity F_max = 1 is known. We repeat our protocol on the ideal simulator 5 times for each point in the figure and record the mean of the statistical errors |F̂_max − 1.0|, where F̂_max is the estimated max process fidelity via simulation. We find that the statistical error scales as |F̂_max − 1.0| ∼ 1/(2^n N_U M_shots).

Next, we investigate the scaling of the required number of experimental runs per unitary, 2^n M_shots, needed to estimate the max fidelity F_max within an average statistical error of ε = 0.05 while fixing N_U to 100. We apply our protocol to two very different types of quantum processes with different numbers of qubits n: (i) a highly entangled quantum process corresponding to an n-qubit GHZ state preparation circuit (Entangled) and (ii) a completely local quantum process composed of n single-qubit rotation gates (Non-Entangled). The numerical results are presented in Figure 5. From the fitted data, we find that 2^n M_shots ∼ 2^{bn}, where b = 2.02 ± 4×10^{-4} for the entangled case and b = 1.94 ± 2×10^{-4} for the non-entangled case. The analysis shows that our ancilla-free cross-platform protocol requires a total number of experimental runs that scales as 2^n N_U M_shots ∼ 2^{bn} with b ≈ 2. This scaling, though still exponential, is significantly milder than that of full quantum process tomography (QPT), which has an exponent b ≥ 4 [33].
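A finite-shot version of the n = 1 experiment is easy to simulate: replace the exact Born probabilities by empirical frequencies from M_shots multinomial samples per setting and observe the estimated F_max fluctuate around the true value. The sketch below is our own code with a fixed random seed and hypothetical test channels; it is meant to illustrate the statistical-error behavior, not to reproduce the paper's fitted exponents:

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.diag([1, -1j])
BASES = [np.eye(2), H, H @ Sdg]
rng = np.random.default_rng(7)

ideal_h = lambda rho: H @ rho @ H.conj().T
noisy_h = lambda rho, p=0.3: (1 - p) * ideal_h(rho) + p * np.trace(rho) * np.eye(2) / 2

def sampled_probs(channel, U1, U2, shots):
    """Empirical Pr(k|s) from `shots` simulated measurements per input state."""
    P = np.zeros((2, 2))
    for s in range(2):
        psi = U1.T[:, s]
        rho = channel(np.outer(psi, psi.conj()))
        p = np.real(np.diag(U2 @ rho @ U2.conj().T))
        p = np.clip(p, 0.0, None)                     # guard against float round-off
        P[s] = rng.multinomial(shots, p / p.sum()) / shots
    return P

def overlap(chan_i, chan_j, shots):
    """Finite-shot version of the second-order correlator, n = 1."""
    total = 0.0
    for U1, U2 in product(BASES, repeat=2):
        Pi = sampled_probs(chan_i, U1, U2, shots)
        Pj = sampled_probs(chan_j, U1, U2, shots)
        for s, k, t, l in product(range(2), repeat=4):
            total += (-2.0) ** (-((s != t) + (k != l))) * Pi[s, k] * Pj[t, l]
    return total / len(BASES) ** 2

def f_max_estimate(shots=100_000):
    ov = overlap(ideal_h, noisy_h, shots)
    pi = overlap(ideal_h, ideal_h, shots)
    pj = overlap(noisy_h, noisy_h, shots)
    return ov / max(pi, pj)
```

For the depolarized H with p = 0.3 the exact value is F_max = 0.775; with 10^5 shots per setting the estimate lands close to it, and lowering the shot count makes the statistical fluctuations visibly larger, in line with the budget analysis above.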

V. CONCLUSIONS
We have proposed an ancilla-free cross-platform protocol that enables the performance comparison of arbitrary quantum processes using only single-qubit unitaries and classical communication. The protocol is thus suitable for comparing quantum processes that are independently implemented at different times and locations, built by different teams using different technologies. We have experimentally demonstrated the cross-platform protocol on six remote quantum computers fabricated by IBM and Baidu, and monitored the stable functioning of Baidu's "Qianshi" quantum computer over one week. The experimental results reveal that our protocol accurately compares the performance of different quantum computers with significantly fewer measurements than quantum process tomography. Additionally, we have shown that our protocol is applicable to quantum process tomography.
However, some problems must be further explored to make cross-platform protocols more practical. Firstly, the sample complexity of these protocols lacks theoretical guarantees, thereby necessitating the empirical selection of experimental parameters. To address this challenge, it may be possible to adapt techniques from [34]. Secondly, it is vital to make the protocols robust against state preparation and measurement errors. One possible solution is to apply quantum error mitigation methods [35-39] to alleviate quantum errors and increase the estimation accuracy. We suggest that ideas and insights from randomized benchmarking [40] and quantum gate-set tomography [41] might be helpful for designing error-robust cross-platform protocols.

FIG. 2 :
FIG. 2: The performance matrices for the single-qubit H and two-qubit CNOT gates generated from seven different quantum platforms. The entry in the i-th row and j-th column of the matrix represents the max process fidelity between platform-i and platform-j. The entries in the upper right corner are visualized in pie-chart format. (a) The performance matrix of the H gate. Each entry is inferred from 2^1 · N_U = 20 random circuits, and each circuit is repeated M_shots = 500 times. (b) The performance matrix of the CNOT gate. Each entry is inferred from 2^2 · N_U = 400 random circuits, and each circuit is repeated M_shots = 500 times.

FIG. 4 :
FIG. 4: Average statistical error |F̂_max − 1| as a function of the total number of experimental runs 2^n N_U M_shots. The target quantum process is taken to be the H gate for n = 1 and the CNOT gate for n = 2. The green lines obey ∼ 1/(2^n N_U M_shots) and are guides for the eye. The data is obtained via numerical simulation.