Density-matrix simulation of small surface codes under current and projected experimental noise

We present a full density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. At state-of-the-art qubit relaxation times and readout speeds, Surface-49 could surpass the break-even point of computation.


I. Introduction
Recent experimental demonstrations of small quantum simulations [1-3] and quantum error correction (QEC) [4-7] position superconducting circuits for targeting quantum supremacy [8] and quantum fault tolerance [9], two outstanding challenges for all quantum information processing platforms. On the theoretical side, extensive modeling of QEC codes has been performed to determine fault-tolerance thresholds in various error models [10-12] with different error decoders [13-15]. However, the need for computational efficiency has constrained many previous studies to oversimplified noise models, such as depolarizing and bit-flip noise channels. This discrepancy between theoretical descriptions and experimental reality compromises the ability to predict the performance of near-term QEC implementations, and offers limited guidance to the experimentalist through the maze of parameter choices and trade-offs. In the planar circuit quantum electrodynamics (cQED) [16] architecture, the major contributions to error are transmon qubit relaxation, dephasing from flux noise and from resonator photons left over from measurement, and leakage from the computational space, none of which is well approximated by depolarizing or bit-flip channels. Simulations with more realistic error models are now essential to accurately pinpoint the leading contributions to the logical error rate in the small-distance surface codes [10,13,17] currently pursued by several groups worldwide.
In this paper, we perform a density-matrix simulation of the distance-3 surface code named Surface-17, using the concrete quantum circuit recently proposed in [18] and the measured performance of current experimental multi-transmon cQED platforms [19-22]. For this purpose, we have developed an open-source density-matrix simulation package named quantumsim [23]. We use quantumsim to extract the logical error rate per QEC cycle, ε_L. This metric allows us to optimize and trade off between QEC cycle parameters, assess the merits of feedback control, predict gains from future improvements in physical qubit performance, and quantify decoder performance. We compare an algorithmic decoder using minimum-weight perfect matching (MWPM) with homemade weight calculation to a simple look-up table (LT) decoder, and weigh both against an upper bound (UB) for decoder performance obtainable from the density-matrix simulation. Finally, we make a low-order approximation to extend our predictions to the distance-5 Surface-49. The combination of results for Surface-17 and -49 allows us to make statements about code scaling, and to predict the code size and physical qubit performance required to achieve break-even points for memory and computational performance.

A. Error rates for Surface-17 under current experimental conditions
To quantify the performance of the logical qubit, we first define a test experiment to simulate. Inspired by the recent experimental demonstration of distance-3 and -5 repetition codes [4], we first focus on the performance of the logical qubit as a quantum memory. Specifically, we quantify the ability to hold a logical |0⟩ state, by initializing this state, holding it for k ∈ {1, . . . , 20} cycles, performing error correction, and determining a final logical state (see Fig. 6 for details). The logical fidelity F_L[k] is then given by the probability to match the initial state. We observe identical results when using |1⟩ or |±⟩ = (|0⟩ ± |1⟩)/√2 as the initial state.

Figure 1: Logical fidelity F_L[k] of Surface-17 under current experimental conditions (Table I and [24]), simulated with quantumsim as described in Fig. 6. The results from a MWPM decoder (green) and an implementation of the LT decoder of [13] (blue) are compared to the decoder upper bound (red). The labeled error rate is obtained from the best fit to Eq. (2) (also plotted). A further comparison is given to majority voting (purple, dashed), which ignores the outcome of individual stabilizer measurements, and to the fidelity F_phys of a single transmon (black) [Eq. (1)]. Error bars (2 s.d.) are obtained by bootstrapping.
We base our error model for the physical qubits on current typical experimental performance of transmons in planar cQED, using parameters from the literature and in-house results (e.g., gate-set tomography measurements). These are summarized in Table I, and further detailed in [24]. We focus on the QEC cycle proposed in [18], which pipelines the execution of X- and Z-type stabilizer measurements. Each stabilizer measurement consists of three parts: a coherent step (duration τ_c = 2τ_g,1Q + 4τ_g,2Q), measurement (τ_m), and photon depletion from the readout resonators (τ_d), making the QEC cycle time τ_cycle = τ_c + τ_m + τ_d.
Simulating this concrete quantum circuit with the listed parameters using quantumsim, we predict F_L[k] of Surface-17 (Fig. 1). We show F_L[k] for both a homemade MWPM decoder (green, described in [24]), and an implementation of the LT decoder of [13] (blue, described in [24]). To isolate decoder performance, we can compare the achieved fidelity to an upper bound extractable from the density-matrix simulation (red, described in Sec. IV A 3). To assess the benefit of QEC, we also compare to a single decohering transmon, whose fidelity is calculated by averaging over the six cardinal points of the Bloch sphere:

F_phys(t) = (1/6)[1 + e^(−t/T_1)] + (1/3)[1 + e^(−t(1/(2T_1) + 1/T_φ))].    (1)
The observation of F_L[k] > F_phys(kτ_cycle) for large k would constitute a demonstration of QEC beyond the quantum memory break-even point [7]. Equivalently, one can extract a logical error rate ε_L from a best fit to F_L[k] (derived in Sec. IV A 2 from the parity of the number of logical errors occurring),

F_L[k] = (1/2)[1 + (1 − 2ε_L)^(k−k_0)].    (2)

Here, k_0 and ε_L are the parameters to be fit. We compare ε_L to the physical error rate ε_phys = 1 − F_phys(τ_cycle). We observe ε_L = 1.44 %c for the LT decoder, ε_L = 1.07 %c for the MWPM decoder, and ε_L = 0.68 %c at the decoder upper bound (%c = % per cycle). The latter two fall below ε_phys = 1.33 %c. Defining the decoder efficiency η_d = ε_L^(UB)/ε_L, we obtain η_d ≈ 0.64 for the MWPM decoder and η_d ≈ 0.47 for the LT decoder. We can also compare the multi-cycle error correction to majority voting, in which the state declaration is based solely on the output of the final data qubit measurements (ancilla measurements are ignored). Majority voting corrects any single data qubit error (over the entire experiment), and thus exhibits a quadratic decay for small k [25]. A decoder should also be able to correct (at least) a single error, and should thus produce the same behavior at low k, delaying the onset of exponential decay in F_L[k]. In fact, a good test of MWPM decoder performance is to ensure it outperforms the majority vote at short timescales, as a suboptimal configuration will prevent this (as seen for the look-up table decoder).
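For illustration, ε_L and k_0 can be extracted from a fidelity decay curve by linearizing Eq. (2); the sketch below uses synthetic data generated with illustrative parameter values (not fitted experimental data):

```python
import numpy as np

def f_logical(k, eps_L, k0):
    # Eq. (2): F_L[k] = 1/2 [1 + (1 - 2 eps_L)^(k - k0)]
    return 0.5 * (1.0 + (1.0 - 2.0 * eps_L) ** (k - k0))

def fit_logical_error_rate(k, F):
    """Extract eps_L and k0 by linearizing Eq. (2):
    ln(2F - 1) = (k - k0) * ln(1 - 2 eps_L)."""
    slope, intercept = np.polyfit(k, np.log(2.0 * F - 1.0), 1)
    eps_L = 0.5 * (1.0 - np.exp(slope))
    k0 = -intercept / slope
    return eps_L, k0

# synthetic decay with illustrative values eps_L = 1.07 %c, k0 = 2,
# fit over k >= 3 as in the main text
k = np.arange(3, 21)
eps_fit, k0_fit = fit_logical_error_rate(k, f_logical(k, 0.0107, 2.0))
```

In practice a nonlinear least-squares fit with error bars from bootstrapping would be used; the log-linearization above is the simplest self-contained variant.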
With the baseline for current performance established, we next investigate L improvements that may be achieved by two means. First, we consider modifications to the QEC cycle at fixed physical performance. Afterwards, we consider the effect of improving physical qubit T 1 and T φ .

B. Optimization of logical error rates with current experimental conditions
Error sources in current cQED setups derive primarily from transmon decoherence, as opposed to gate and measurement errors produced by control electronics. Thus, one path to reducing ε_L is to decrease τ_cycle. Currently, the cycle is dominated by τ_m + τ_d. At fixed readout power, reducing τ_m and τ_d will reduce τ_cycle at the cost of increased readout infidelity ε_RO (described in Sec. IV B 6). We explore this trade-off in Fig. 2, using a linear-dispersive readout model [26], keeping τ_m = τ_d and assuming no leftover photons. Because of the latter, ε_L (MWPM) is reduced below the 1.07 %c of Fig. 1 over a broad range of τ_m, reflecting the differing sensitivity of the code to different types of errors. Indeed, ε_L is smaller for τ_m = 200 ns than for τ_m = 300 ns, even though ε_RO increases to 5 %. It is interesting to note that the optimal τ_m for quantum memory, which minimizes logical error per unit time rather than per cycle, is τ_m = 280 ns (Fig. 2 inset). This shows that different cycle parameters may be optimal for computation and for memory applications.

Next, we consider the possibility of reducing ε_L using feedback control. Since T_1 only affects qubits in the excited state, the error rate of ancillas in Surface-17 is roughly two times higher when they are in the excited state. The unmodified syndrome extraction circuit flips the ancilla whenever the corresponding stabilizer value is −1, and since ancillas are not reset between cycles, they spend significant amounts of time in the excited state. We therefore consider using feedback to hold each ancilla in the ground state as much as possible. We do not consider feedback on data qubits, as the highly entangled logical states are equally susceptible to T_1.
The feedback scheme (inset of Fig. 3) consists of replacing the R_y(π/2) gate at the end of the coherent step with a R_y(−π/2) gate for some of the ancillas, depending on a classical control bit p for each ancilla. This bit p represents an estimate of the stabilizer value, and the ancilla is held in the ground state whenever this estimate is correct (i.e., in the absence of errors). Figure 3 shows the effect of this feedback on the logical fidelity, both for the MWPM decoder and for the decoder upper bound. We observe ε_L improve by only 0.05 %c in both cases. Future experiments might opt not to pursue these small gains in view of the technical challenges added by feedback control.

C. Projected improvement with advances in quantum hardware
We now estimate the performance increase that may result from improving the transmon relaxation and dephasing times via materials and filtering improvements. To model this, we return to τ_cycle = 800 ns, and adjust T_1 with both T_φ = 2T_1 (common in experiment) and T_φ = ∞ (all white-noise dephasing eliminated). We retain the same rates for coherent errors, readout infidelity, and photon-induced dephasing as in Fig. 1. Figure 4 shows the extracted ε_L and ε_phys over the T_1 range covered. For the MWPM decoder (decoder upper bound) and T_φ = 2T_1, the memory figure of merit γ_m = ε_phys/ε_L increases from 1.3 (2) at T_1 = 30 µs to 2 (5) at 100 µs. Completely eliminating white-noise dephasing would increase γ_m by 10% with MWPM and by 30% at the upper bound.
A key question for any QEC code is how ε_L scales with code distance d. Computing power limitations preclude similar density-matrix simulations of the d = 5 surface code Surface-49. However, we can approximate the error rate by summing up all lowest-order error chains, and deciding individually whether or not each would be corrected by a MWPM decoder (see [24] for details). Figure 5 shows the lowest-order approximation to the logical error rates of Surface-17 and -49 over a range of T_1 = T_φ/2. Comparing the Surface-17 lowest-order approximation to the quantumsim result shows good agreement and validates the approximation. We observe a lower ε_L for Surface-49 than for -17, indicating quantum fault tolerance over the T_1 range covered. The fault-tolerance figure of merit defined in [9], Λ = ε_L^(Surface-17)/ε_L^(Surface-49), increases from 2 to 4 as T_1 grows from 30 to 100 µs.

Figure 5: Lowest-order approximations to the logical error rates of Surface-17 and Surface-49 (orange) using a MWPM decoder. Details of the calculation of points and error bars are given in [24]. All plots assume T_φ = 2T_1, and τ_cycle = 800 ns (crosses) or 400 ns (dots). Numerical results for Surface-17 with τ_cycle = 800 ns are also plotted for comparison (green, dashed). The physical-qubit computation metric is given as the error incurred by a single qubit over the resting time of a single-qubit gate (black, dashed).
As a rough metric of computational performance, we propose to compare ε_L (per cycle of duration τ_cycle) to the error accrued by a physical qubit idling over a single-qubit gate time τ_g,1Q. We define a metric for computational performance, γ_c = ε_phys(τ_g,1Q)/ε_L, with γ_c = 1 as the computational break-even point. Clearly, using the QEC cycle parameters of Table I, and even with T_1 improvements, neither Surface-17 nor -49 can break even computationally. However, including the readout acceleration recently demonstrated in [22], which allows τ_m = τ_d = 100 ns and τ_cycle = 400 ns, Surface-49 can cross γ_c = 1 by T_1 = 40 µs. In view of first reports of T_1 up to 80 µs emerging for planar transmons [27,28], this important milestone may be within grasp.
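A minimal sketch of this comparison, using Eq. (1) for the idling error over one gate time and an assumed logical error rate (illustrative values, not simulation output):

```python
import numpy as np

def f_phys(t, T1, Tphi):
    # Eq. (1): average fidelity of an idling transmon over the six cardinal states
    return (1 + np.exp(-t / T1)) / 6 + (1 + np.exp(-t * (0.5 / T1 + 1.0 / Tphi))) / 3

def gamma_c(eps_L, T1, Tphi, tau_g1q=20e-9):
    """Computational figure of merit: error of an idling physical qubit
    over one single-qubit gate time, versus the logical error per cycle."""
    eps_phys = 1.0 - f_phys(tau_g1q, T1, Tphi)
    return eps_phys / eps_L

# illustrative numbers: eps_L = 1 %c, T1 = 30 us, Tphi = 2 T1, tau_g,1Q = 20 ns
gc = gamma_c(eps_L=0.01, T1=30e-6, Tphi=60e-6)
```

With these assumed values γ_c is far below 1, consistent with the observation that distance-3 codes at current error rates are not computationally competitive.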

A. Computational figure of merit
We note that our metric of computational power is not rigorous, owing to the different gate sets available to physical and logical qubits. Logical qubits can execute multiple logical X and Z gates within one QEC cycle, but require a few cycles for two-qubit and Hadamard gates (using the proposals of [12,17]), and state distillation over many cycles to perform non-Clifford gates. As such, this metric is merely a rough benchmark for the computational competitiveness of the QEC code. However, given the margin by which all distance-3 logical error rates lie above this metric, we find it unlikely that these codes will outperform a physical qubit by any fair comparison in the near future.

B. Decoder performance
A practical question facing quantum error correction is how best to balance the trade-off between decoder complexity and performance. Past proposals for surface-code computation via lattice surgery [17] require the decoder to provide an up-to-date estimate of the Pauli error on physical qubits during each logical T gate. Because tracking Pauli errors through a non-Clifford gate is inefficient, however implemented, equivalent requirements will hold for any QEC code [29]. A decoder is thus required to process the ancilla measurements from one cycle within the next (on average). This presents a considerable challenge for transmon-cQED implementations, as τ_cycle < 1 µs. Such a short time makes the use of computationally intensive decoding schemes difficult, even if they provide a lower ε_L.
The leading strategy for decoding the surface code is MWPM using the blossom algorithm of Edmonds [10,14,30]. Although this algorithm is challenging to implement, it scales linearly in the code distance [30]. The algorithm requires as input a set of weights, each representing the probability that two given error signals are connected by a chain of errors. An important practical question (see [24]) is whether these weights can be calculated on the fly, or must be precalculated and stored. On-the-fly weight calculation is more flexible. For example, it can take into account the difference in error rates between an ancilla measured in the ground and in the excited state. The main weakness of MWPM is its inability to explicitly detect Y errors. In fact, [24] shows that MWPM is nearly perfect in the absence of Y errors. The decoder efficiency η_d may be significantly increased by extending MWPM to account for correlations between detected X and Z errors originating from Y errors [31,32].
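For illustration, the matching problem itself can be sketched with a brute-force minimum-weight perfect matching over a toy syndrome graph. Real decoders use Edmonds' blossom algorithm, and the Manhattan-distance weights below are a stand-in for the calibrated probabilistic weights discussed above:

```python
def min_weight_perfect_matching(nodes, weight):
    """Exhaustive minimum-weight perfect matching, for illustration only:
    feasible for the handful of defects produced by a small code,
    but exponentially slower than the blossom algorithm."""
    def pairings(rest):
        if not rest:
            yield []
            return
        a = rest[0]
        for i in range(1, len(rest)):
            b = rest[i]
            for tail in pairings(rest[1:i] + rest[i + 1:]):
                yield [(a, b)] + tail

    assert len(nodes) % 2 == 0, "perfect matching needs an even number of defects"
    best, best_w = None, float("inf")
    for m in pairings(list(nodes)):
        w = sum(weight(a, b) for a, b in m)
        if w < best_w:
            best, best_w = m, w
    return best, best_w

# toy syndrome graph: defects at (cycle, position)
defects = [(0, 0), (0, 1), (3, 0), (3, 2)]
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
match, total_weight = min_weight_perfect_matching(defects, manhattan)
```

The minimum-weight pairing connects nearby defects (likely short error chains) rather than distant ones.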
If computational limitations preclude a MWPM decoder from keeping up with τ_cycle, the look-up table decoder may provide a straightforward solution for Surface-17. However, at current physical performance, the reduction in η_d will make Surface-17 barely miss memory break-even (Fig. 1). Furthermore, memory requirements already make look-up table decoding impractical for Surface-49. Evidently, real-time algorithmic decoding by MWPM or improved variants is an important research direction already at low code distance.

C. Other observations
The simulation results allow some further observations. Although we have focused on superconducting qubits, we surmise that the following statements are fairly general.
We observe that small quasi-static qubit errors are suppressed by the repeated measurement. In our simulations, the 1/f flux noise producing 0.01 radians of phase error per flux pulse on a qubit has a diamond norm approximately equal to that of the T_1 noise, but a trace distance 100 times smaller. As the flux noise increases ε_L by only 0.01 %c, it appears that ε_L depends on the trace distance rather than the diamond norm of the underlying noise components. Quasi-static qubit errors can thus be easily suppressed, but will also easily poison an experiment if left unchecked.
We further observe that, above a certain value, ancilla and measurement errors have a diminished effect on ε_L. In our error model, the leading sources of error for a distance-d code are chains of (d − 1)/2 data qubit errors plus either a single ancilla error or a readout error, which together present the same syndrome as a chain of (d + 1)/2 data qubit errors. An optimal decoder decides which of these chains is more likely, at which point the less-likely chain will be wrongly corrected, completing a logical error. This implies that if the readout infidelity ε_RO or the ancilla error rate ε_anc is below the data qubit error rate ε_phys, then ε_L ∝ (ε_anc + ε_RO) to lowest order. This approximation breaks down when ε_anc and ε_RO become large enough, as can be seen in Fig. 2, where the error rate is almost constant while ε_RO increases exponentially. This presents a counterintuitive point for experimental design: ε_L becomes less sensitive to measurement and ancilla errors as these errors get worse.
A final, interesting point for future surface-code computation is shown in Fig. 2: the optimal cycle parameters for logical error rates per cycle and per unit time are not the same. This implies that logical qubits functioning as a quantum memory should be treated differently from those being used for computation. This idea can be extended further: at any point in time, a large quantum computer performing a computation will have a set S_m of memory qubits storing part of a large entangled state, whilst a set S_c of computation qubits containing the rest of the state undergo operations. To minimize the probability of a logical error occurring on qubits in either S_c or S_m, the cycle time of the qubits in S_c can be reduced to minimize the resting time of the qubits in S_m. As a simple example, consider a single computational qubit q_c and a single memory qubit q_m sharing entanglement. Operating all qubits at τ_cycle = 720 ns to minimize ε_L would lead to a 1.09% error rate for the two qubits combined. However, shortening the τ_cycle of q_c reduces the time over which q_m decays. If q_c operates at τ_cycle = 600 ns, the average error per computational cycle drops to 1.06%, as q_m completes only 5 cycles for every 6 on q_c. Although this is only a meager improvement, one can imagine that when many more qubits are resting than performing computation, the relative gain will be quite significant.
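The per-cycle versus per-unit-time distinction can be illustrated with a toy trade-off model. The coefficients below are invented for illustration (they are not the readout model of [26]): decoherence error grows with cycle time while readout infidelity falls off with measurement time, and the two objectives are minimized at different τ_m:

```python
import numpy as np

tau_c = 200.0                                  # coherent step (ns), fixed
tau_m = np.linspace(100.0, 500.0, 401)         # measurement time tau_m = tau_d (ns)
tau_cycle = tau_c + 2.0 * tau_m

# toy error budget (illustrative coefficients): decoherence terms grow
# with cycle time, readout infidelity decays exponentially with tau_m
eps_L = 1e-5 * tau_cycle + 3e-8 * tau_cycle**2 + 0.05 * np.exp(-tau_m / 60.0)

per_cycle_opt = tau_m[np.argmin(eps_L)]              # minimizes error per cycle
per_time_opt = tau_m[np.argmin(eps_L / tau_cycle)]   # minimizes error per unit time
```

In this toy model the per-unit-time optimum sits at a longer τ_m than the per-cycle optimum, mirroring the behavior seen in the Fig. 2 inset.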

D. Effects not taken into account
Although we have attempted to be thorough in detailing the circuit, we have neglected certain effects. We have used a simple model for C-Z gate errors, as we lack data from experimental tomography (e.g., two-qubit gate-set tomography [33]). Most importantly, we have neglected leakage, where a transmon is excited out of the two lowest energy states, i.e., out of the computational subspace. Previous experiments have reduced the leakage probability per C-Z gate to ∼0.3% [34], and per single-qubit gate to ∼0.001% [35]. Schemes have also been developed to reduce the accumulation of leakage [36]. Extending quantumsim to include and investigate leakage is a next target. However, representing the additional quantum state can increase the simulation effort significantly [by a factor of (9/4)^10 ≈ 3000]. To still achieve this goal, some further approximations or modifications to the simulation will be necessary. Future simulations will also investigate the effect of spread in qubit parameters, both in space (i.e., variation of physical error rates between qubits) and time (e.g., T_1 fluctuations), and cross-talk effects such as residual couplings between nearest and next-nearest neighbor transmons, qubit cross-driving, and qubit dephasing by measurement pulses targeting other qubits.

IV. Methods
A. Simulated experimental procedure

Surface-17 basics
A QEC code can be defined by listing the data qubits and the stabilizer measurements that are repeatedly performed upon them [37]. In this way, Surface-17 is defined by a 3×3 grid of data qubits {D_0, . . . , D_8}. In order to stabilize a single logical qubit, 9 − 1 = 8 commuting measurements are performed. The stabilizers are the weight-two and weight-four X- and Z-type parity operators, such as X_2X_1 and Z_1Z_2Z_4Z_5, where X_j (Z_j) denotes the corresponding Pauli operator acting on data qubit D_j. Their measurement is realized indirectly using nearest-neighbor interactions between data and ancilla qubits arranged in a square lattice, followed by ancilla measurements [Fig. 6(a)]. This leads to a total of 17 physical qubits when a separate ancilla is used for each individual measurement. We follow the circuit realization of this code described in [18], for which we give a schematic description in Fig. 6(b) (see [24] for a full circuit diagram).
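The stabilizer structure just described can be written down and checked directly. The sketch below uses one consistent boundary assignment chosen for illustration (it includes X_2X_1 and Z_1Z_2Z_4Z_5 as in the text; the precise layout follows [18]):

```python
from itertools import product

# Data qubits D0..D8 on a 3x3 grid, row-major. One consistent (illustrative)
# assignment of the four X-type and four Z-type stabilizer supports:
x_stabs = [{0, 1, 3, 4}, {4, 5, 7, 8}, {1, 2}, {6, 7}]
z_stabs = [{1, 2, 4, 5}, {3, 4, 6, 7}, {0, 3}, {5, 8}]

def all_commute(xs, zs):
    """An X-type and a Z-type parity operator commute iff their supports
    overlap on an even number of qubits."""
    return all(len(x & z) % 2 == 0 for x, z in product(xs, zs))
```

Since all pairs commute and there are 9 − 1 = 8 independent stabilizers, a single logical qubit remains.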
In an experimental realization of this circuit, qubits will regularly accumulate errors. Multiple errors occurring within a short period of time (e.g., one cycle) form error 'chains' that spread across the surface. Errors on single qubits, or correlated errors within a small subregion of Surface-17, fail to commute with the stabilizer measurements, creating error signals that allow diagnosis and correction of the error via a decoder. However, errors that spread across more than half the surface in a short enough period of time are misdiagnosed, causing an error on the logical qubit when wrongly corrected [10]. The rate at which these logical errors arise is the main focus of this paper.

Protocol for measurement of logical error rates
As the performance measure of Surface-17, we study the fidelity of the logical qubit as a quantum memory. We describe our protocol with an example 'run' in Fig. 6. We initialize all qubits in |0⟩ and perform k = 1, 2, . . . , 20 QEC cycles [Fig. 6(b)]. Although this initial state is not a stabilizer eigenstate, the first QEC cycle projects the system into one of 16 eigenstates within the +1 eigenspace of the Z stabilizers, each of which is a valid logical |0⟩ state [10]. This implies that, in the absence of errors, the first measurement of the Z stabilizers will be +1, whilst that of the X stabilizers will be random. In the following cycles, the ancilla measurements of each run [Fig. 6(c)] are processed using a classical decoding algorithm. The decoder computes a Pauli update after each QEC cycle [Fig. 6(d)]. This is a best estimate of the Pauli operators that must be applied to the data qubits to transform the logical qubit back to the logical |0⟩ state. The run ends with a final measurement of all data qubits in the computational basis. From this 9-bit outcome, a logical measurement result is declared [Fig. 6(e)]. First, the four Z-type parities are calculated from the 9 data-qubit measurement outcomes and presented to the decoder as a final set of parity measurements. This ensures that the final computed Pauli update will transform the measurement results into a set that measures +1 for all Z stabilizers. This results in one of 32 final measurements, from which the value of a logical Z operator can be calculated to give the measurement result (any choice of logical operator gives the same result). The logical fidelity F_L[k] after k QEC cycles is defined as the probability of this declared result matching the initial +1 state.
At long times and with low error rates, surface codes have a constant logical error rate ε_L. The fidelity F_L[k] is obtained by counting the probability of an even number of logical errors having occurred in total (as two such errors cancel) [20,38]:

F_L[k] = Σ_{l even} (k choose l) ε_L^l (1 − ε_L)^(k−l).    (4)

Here, the combinatorial factor counts the number of combinations of l errors in k rounds, given an ε_L chance of error per round. This can be simplified to

F_L[k] = (1/2)[1 + (1 − 2ε_L)^k].    (5)

However, at small k, the decay is dominated by the majority vote, for which 1 − F_L[k] ∝ (k ε_phys)^((d+1)/2). For example, for all the Surface-17 decay curves, we observe a quadratic error rate at small k, as opposed to the linear slope predicted by Eq. (5). In order to correct for this, we shift the above equation in k by a free parameter k_0, resulting in Eq. (2). This function fits well to data with k ≥ 3 in all plots, and thus allows accurate determination of ε_L.
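The equivalence between the binomial sum of Eq. (4) and the closed form of Eq. (5) is easy to verify numerically:

```python
from math import comb

def f_L_binomial(k, eps_L):
    # Eq. (4): probability of an even number of logical errors in k cycles
    return sum(comb(k, l) * eps_L**l * (1 - eps_L)**(k - l)
               for l in range(0, k + 1, 2))

def f_L_closed(k, eps_L):
    # Eq. (5): closed form of the same sum
    return 0.5 * (1.0 + (1.0 - 2.0 * eps_L) ** k)
```

Both agree to machine precision for any k and 0 ≤ ε_L ≤ 1/2.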

The quantumsim simulation package
Quantumsim performs calculations on density matrices utilizing a graphics processing unit in a standard desktop computer. Ancillas are measured at the end of each cycle, and are thus not entangled with the rest of the system. As such, it is possible to obtain the effect of the QEC cycle on the system without explicitly representing the density matrix of all 17 qubits simultaneously. The simulation is set up as follows: the density matrix of the nine data qubits is allocated in memory with all qubits initialized to |0⟩. One- and two-qubit gates are applied to the density matrix as completely positive, trace-preserving maps represented by Pauli transfer matrices. When a gate involving an ancilla qubit must be performed, the density matrix of the system is dynamically enlarged to include that one ancilla.

Figure 6: (a) The red data qubits are initialized in the ground state |0⟩, and projected into an eigenstate of the measured X- (blue) and Z- (green) type stabilizer operators. (b) A section of the quantum circuit depicting the four-bit parity measurement implemented by the A3 ancilla qubit (+/− refer to R_y(±π/2) single-qubit rotations). The ancilla qubit (green line, middle) is entangled with the four data qubits (red lines) to measure Z_1Z_2Z_4Z_5. Ancillas are not reset between cycles; instead, the implementation relies on the quantum non-demolition nature of the measurements, the stabilizer value being the product of the ancilla measurement results of successive cycles. This circuit is performed for all ancillas and repeated k times before a final measurement of all (data and ancilla) qubits. (c) All syndrome measurements of the k cycles are processed by the decoder. (d) After each cycle, the decoder updates its internal state to represent the most likely set of errors that occurred. (e) After the final measurement, the decoder uses the readout from the data qubits, along with previous syndrome measurements, to declare a final logical state. To this end, the decoder processes the Z stabilizers obtained directly from the data qubits, finalizing its prediction of the most likely errors. The logical parity is then determined as the product of all data qubit parities (∏_{j=0}^{8} D_j) once the declared errors are corrected. The logical fidelity F_L is the probability that this declaration is the same as the initial state (|0⟩).
Qubit measurements are simulated as projective and following the Born rule, with projection probabilities given by the squared overlap of the input state with the measurement basis states. In order to capture empirical measurement errors, we implement a black-box measurement model (Sec. IV B 6) by sandwiching the measurement between idling processes. The measurement projects the system into a product state of the ancilla and the projected sub-block of the density matrix. We can therefore remove the ancilla from the density matrix, store only its state right after projection, and continue the calculation with the partial density matrix of the other qubits. Making use of the specific arrangement of the interactions between ancillas and data qubits in Surface-17, it is possible to apply all operations to the density matrix in such an order (shown in [24]) that the total size of the density matrix never exceeds 2^10 × 2^10 (nine data qubits plus one ancilla), which allows relatively fast simulation. We emphasize that, with the choice of error model in this work, this approach gives the same result as a full simulation of a 17-qubit density matrix. Only the introduction of residual entangling interactions between data and ancilla qubits (which we do not consider in this work) would make the latter necessary. On our hardware (see [24]), simulating one QEC cycle of Surface-17 with quantumsim takes 25 ms.
We highlight an important advantage of doing density-matrix calculations with quantumsim. We do not perform projective measurements of the data qubits. Instead, after each cycle, we extract the diagonal of the data-qubit density matrix, which represents the probability distribution over outcomes if a final measurement were performed. We leave the density matrix undisturbed and continue the simulation up to k = 20. This is a very useful property of the density-matrix approach, because having the probability distribution of all final readout events greatly reduces sampling noise.
Our measurement model includes a declaration error probability (see Sec. IV B 6), whereby the projected state of the ancilla after measurement is not the state reported to the decoder. Before decoding, we thus apply errors to the outcomes of the ancilla projections, and smear the probability distribution of the data qubit measurement. To then determine the fidelity averaged over this probability distribution, we present all 16 possible final Z-type parities to the decoder. This results in 16 different final Pauli updates, allowing us to determine the correctness of the decoder for all 512 possible measurement outcomes. These are then averaged over the simulated probability distribution. This produces good results after ∼10^4 simulated runs.
A second highlight of quantumsim is the possibility to quantify the sub-optimality of the decoder. The fidelity of the logical qubit obtained in these numerical simulations combines the error rates of the physical qubits and the approximations made by the decoder. Full density-matrix simulations make it possible to disentangle these two contributions. Namely, the fidelity is obtained by assigning correctness to each of the 512 possible readouts according to the 16 outputs of the decoder, and summing the corresponding probabilities. If the probabilities are known, it is easy to determine the 16 results that a decoder should output in order to maximize fidelity (i.e., the output of the best-possible decoder). This allows placing a decoder upper bound F_L^max on the logical fidelity as limited by the physical qubits, independent of the decoder. Conversely, it also allows quantifying the sub-optimality of the decoder used. In fact, we can make the following reverse statement: if our measurement model did not include a declaration error, we could use the simulation to find the final density matrix of the system conditioned on a syndrome measurement. From this, the simulation could output exactly the 16 results that give F_L^max, so that quantumsim could be used as a maximum-likelihood decoder. In that situation, F_L^max would not only be an upper bound, but indeed the performance of the best-possible decoder. However, as we add the declaration errors after simulation, we can only refer to F_L^max as the decoder upper bound.
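The upper-bound construction reduces to a simple rule: for each syndrome, declare the logical outcome with the larger conditional probability. A minimal sketch over a toy joint distribution (hypothetical probabilities, not simulation output):

```python
def decoder_upper_bound(joint):
    """joint maps (syndrome, logical_outcome) -> probability.
    The best possible decoder declares, for each syndrome, the outcome
    with the larger conditional probability; its fidelity is the sum of
    the corresponding joint probabilities."""
    by_syndrome = {}
    for (s, m), p in joint.items():
        by_syndrome.setdefault(s, {})
        by_syndrome[s][m] = by_syndrome[s].get(m, 0.0) + p
    return sum(max(outcomes.values()) for outcomes in by_syndrome.values())

# toy distribution over two syndromes and two logical outcomes
joint = {("s0", 0): 0.60, ("s0", 1): 0.10,
         ("s1", 0): 0.05, ("s1", 1): 0.25}
f_max = decoder_upper_bound(joint)  # best decoder declares 0 on s0, 1 on s1
```

Any real decoder, which must commit to one declaration per syndrome, can at best match this bound.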

B. Error models
We now describe the error model used in the simulations. Our motivation in developing this error model is to provide a limited number of free parameters to study, whilst remaining as close to known experimental data as possible. As such, we have taken well-established theoretical models as a base, and used experimental tomography to provide fixed parameters for observed noise beyond these models. The parameters of the error model are provided in [24].

Idling qubits
While idling for a time τ, a transmon in |1⟩ can relax to |0⟩. Furthermore, a transmon in superposition can acquire random quantum phase shifts between |0⟩ and |1⟩ due to 1/f noise sources (e.g., flux noise) and broadband ones (e.g., photon shot noise 39 and quasiparticle tunneling 40). These combined effects can be parametrized by probabilities p_1 = 1 − exp(−τ/T_1) for relaxation and p_φ = 1 − exp(−τ/T_φ) for pure dephasing. The combined effects of relaxation and pure dephasing lead to decay of the off-diagonal elements of the qubit density matrix.
We model dephasing from broadband sources in this way, taking for T_φ the value extracted from the decay time T_2 of standard echo experiments via 1/T_φ = 1/T_2 − 1/(2T_1). We model 1/f sources differently, as discussed below.
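The idling channel above can be sketched in a few lines. This is an illustrative sketch, not the quantumsim implementation; the T_1 and T_2 values are placeholders, and the extraction of T_φ from the echo time is the assumption stated in the text:

```python
import numpy as np

# Illustrative sketch (not the quantumsim implementation) of the idling error
# channel. The T1 and T2 values used below are placeholders. T_phi is taken
# from the echo decay time via 1/T_phi = 1/T2 - 1/(2*T1).
def idle_probabilities(tau, t1, t2_echo):
    """Return (p1, p_phi) for an idling period of duration tau."""
    t_phi = 1.0 / (1.0 / t2_echo - 1.0 / (2.0 * t1))
    return 1.0 - np.exp(-tau / t1), 1.0 - np.exp(-tau / t_phi)

def apply_idle(rho, p1, p_phi):
    """Amplitude-phase damping on a single-qubit density matrix."""
    off = np.sqrt(1.0 - p1) * (1.0 - p_phi) * rho[0, 1]
    return np.array([[rho[0, 0] + p1 * rho[1, 1], off],
                     [np.conj(off), (1.0 - p1) * rho[1, 1]]])

p1, p_phi = idle_probabilities(tau=200e-9, t1=30e-6, t2_echo=40e-6)
rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+> state
rho_idled = apply_idle(rho_plus, p1, p_phi)  # coherence shrinks, trace preserved
```

Note that the off-diagonal element decays by √(1−p_1)(1−p_φ) = e^(−τ/2T_1) e^(−τ/T_φ) = e^(−τ/T_2), as expected.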

Dephasing from photon noise
The dominant broadband dephasing source is the shot noise due to photons in the readout resonator. This dephasing is present whenever the coupled qubit is brought into superposition before the readout resonator has returned to the vacuum state following the last measurement. This leads to an additional, time-dependent pure dephasing (rates given in Ref. 24).

One-qubit Y rotations
We model y-axis rotations as instantaneous rotations sandwiched by idling periods of duration τ g,1Q /2. The errors in the instantaneous gates are modeled from process matrices measured by gate-set tomography 33,41 in a recent experiment 20 . In this experiment, the GST analysis of single-qubit gates also showed that the errors can mostly be attributed to Markovian noise. For simplicity, we thus model these errors as Markovian.

Dephasing of flux-pulsed qubits
During the coherent step, transmons are repeatedly moved in frequency away from their sweet spot using flux pulses, either to implement a C-Z gate or to avoid one. Away from the sweet spot, transmons become first-order sensitive to flux noise, which causes an additional random phase shift. As this noise typically has a 1/f power spectrum, the largest contribution comes from low-frequency components that are essentially static within a single run, but fluctuate between runs. In our simulation, we approximate the effect of this noise through ensemble averaging, with a quasi-static phase error added to a transmon whenever it is flux pulsed. Gaussian phase errors with the variance calculated in Ref. 24 are drawn independently for each qubit and for each run.

C-Z gate error
The C-Z gate is achieved by flux pulsing a transmon into the |11⟩ ↔ |02⟩ avoided crossing with another, where the 2 denotes the second-excited state of the fluxed transmon. Holding the transmons here for τ_g,2Q causes the probability amplitudes of |01⟩ and |11⟩ to acquire phases 42. Careful tuning allows the phase φ_01 acquired by |01⟩ (the single-qubit phase φ_1Q) to be an even multiple of 2π, and the phase φ_11 acquired by |11⟩ to be π extra. This extra phase acquired by |11⟩ is the two-qubit phase φ_2Q. Single- and two-qubit phases are affected by flux noise because the qubit is first-order sensitive during the gate. Previously, we discussed the single-qubit phase error. In Ref. 24, we calculate the corresponding two-qubit phase error δφ_2Q. Our full (but simplistic) model of the C-Z gate consists of an instantaneous C-Z gate with single-qubit phase error δφ_1Q and two-qubit phase error δφ_2Q = δφ_1Q/2, sandwiched by idling intervals of duration τ_g,2Q/2.

Measurement
We model qubit measurement with a black-box description using parameters obtained from experiment. This description consists of the eight probabilities for transitions from an input state |i⟩ ∈ {|0⟩, |1⟩} into pairs (m, |o⟩) of measurement outcome m ∈ {+1, −1} and final state |o⟩ ∈ {|0⟩, |1⟩}. By final state we mean the qubit state following the photon-depletion period. Input superposition states in the computational basis are first projected onto |0⟩ and |1⟩ following the Born rule. The probability tree (the butterfly) is then used to obtain an output pair (m, |o⟩). These experimental parameters can be described by a six-parameter model (described in detail in Ref. 24), consisting of periods of enhanced noise before and after a point at which the qubit is perfectly projected, and two probabilities ε_RO^i for wrongly declaring the result of this projective measurement. In Ref. 24, a scheme for measuring these butterfly parameters and mapping them to the six-parameter model is described. In experiment, we find that the readout errors ε_RO^i are almost independent of the qubit state |i⟩, and so we describe them with a single readout error parameter ε_RO in this work.
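The black-box model can be sketched as follows. This is illustrative only: the butterfly probabilities below are placeholders with the right structure, not the values measured in experiment:

```python
import random

# Illustrative sketch of the black-box measurement model. The butterfly
# probabilities P[(m, o) | i] below are PLACEHOLDERS, not experimental values.
BUTTERFLY = {
    0: {(+1, 0): 0.997, (+1, 1): 0.001, (-1, 0): 0.001, (-1, 1): 0.001},
    1: {(+1, 0): 0.010, (+1, 1): 0.005, (-1, 0): 0.005, (-1, 1): 0.980},
}
EPS_RO = 0.0015  # state-independent declaration error

def measure(p_excited, rng=random):
    """Project via the Born rule, then sample (outcome m, final state o)."""
    i = 1 if rng.random() < p_excited else 0        # projection
    r = rng.random()
    for (m, o), p in BUTTERFLY[i].items():          # walk the probability tree
        r -= p
        if r <= 0.0:
            break
    if rng.random() < EPS_RO:                       # mis-declared outcome
        m = -m
    return m, o
```

Sampling `measure(0.5)` repeatedly reproduces the full tree: Born-rule projection, the butterfly transition, and the occasional declaration error.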

A. Full circuit diagram for Surface-17 implementation
The quantum circuit 18 (Fig. 7) consists of R_y(π/2) ("+") and R_y(−π/2) ("−") rotations, C-Z gates, and ancilla measurements. The coherent steps of the X and Z ancillas are pipelined (shifted in time with respect to each other) to prevent transmon-transmon avoided crossings. As long as τ_m + τ_d ≥ τ_c, no time is lost due to this separation.
In a simulation of the given circuit, gates on different qubits commute and may be applied to the density matrix in any order, regardless of the times at which they are performed in an experiment. As described in Sec. IV A 3, by simulating gates in a specific order (Fig. 8), one can ensure that only one ancilla is entangled with the data qubits at any point in the simulation. This allows a reduction in the maximum size of the density matrix from 2^17 × 2^17 to 2^10 × 2^10.

B. Parameters of error models
This appendix provides mathematical details of the sources of error described in the main text. Standard values for the parameters used throughout the text are given in Table II. In the quantumsim module, all gates are applied in the Pauli transfer matrix representation,

$$(R_\Lambda)_{ij} = \frac{1}{2}\,\mathrm{Tr}\!\left[\sigma_i\,\Lambda(\sigma_j)\right],$$

where the matrices σ_i are the Pauli operators: σ_0 = I, σ_1 = X, σ_2 = Y and σ_3 = Z.

Qubit idling
Idling qubits are described by the amplitude-phase damping model, corresponding to the transfer matrix

$$R_{\mathrm{idle}} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \sqrt{1-p_1}\,(1-p_\phi) & 0 & 0 \\ 0 & 0 & \sqrt{1-p_1}\,(1-p_\phi) & 0 \\ p_1 & 0 & 0 & 1-p_1 \end{pmatrix}. \tag{B3}$$

Idling for a duration t is thus described by this channel with p_1 = 1 − e^(−t/T_1) and p_φ = 1 − e^(−t/T_φ).
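A minimal sketch of this transfer matrix, written in the (I, X, Y, Z) basis and acting on the Bloch-like vector (1, x, y, z); this is an illustration, not quantumsim's code:

```python
import numpy as np

# Minimal sketch of the idling Pauli transfer matrix in the (I, X, Y, Z)
# basis, using the p1 and p_phi parametrization from the text.
def idle_ptm(p1, p_phi):
    c = np.sqrt(1.0 - p1) * (1.0 - p_phi)   # coherence decay factor
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c, 0.0, 0.0],
                     [0.0, 0.0,   c, 0.0],
                     [ p1, 0.0, 0.0, 1.0 - p1]])

# Acting on the Bloch-like vector (1, x, y, z): the excited state (z = -1)
# relaxes toward the ground state (z = +1), while coherences shrink by c.
v_out = idle_ptm(0.1, 0.05) @ np.array([1.0, 0.0, 0.0, -1.0])
```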

Photon decay
In the presence of photons in a readout resonator, the coupled qubit is affected according to the effective stochastic master equation 26

$$\dot{\rho} = -\frac{i}{2}\,B\,[\sigma_z, \rho] + \frac{\Gamma_d}{2}\left(\sigma_z \rho\, \sigma_z - \rho\right).$$

Here, ρ is the qubit density matrix, B = 2χ Re(α_g α_e^*) is the measurement-induced detuning (Stark shift), and Γ_d = 2χ Im(α_g α_e^*) is the measurement-induced dephasing, with α_i the qubit-state-dependent photon field in the resonator and 2χ the qubit frequency shift per photon. At time t − t_g after the qubit superposition is created, the dephasing rate decays with the leftover photon population, Γ_d(t) ∝ e^(−κ(t−t_m)), with t − t_m the time since the end of the measurement excitation pulse. Integrating Γ_d over the interval [t_1, t_2] gives a dephasing term, which is then implemented via the same Pauli transfer matrix as (B3).

FIG. 8. Simulation order for the circuit of Fig. 7. Throughout a simulation, quantumsim stores the density matrix of all data qubits. Each error-correction cycle is split into 8 steps as labeled. In each step, a single ancilla qubit is added to the density matrix, the correspondingly colored pieces of the circuit are executed, and the ancilla is read out and removed from the density matrix. This scheme is only possible because on each data qubit all gates are executed in order. Note that steps after the final C-Z gate on a data qubit are executed during the next cycle.

Single-qubit Ry(π/2) rotations
Single-qubit rotations are modeled by sandwiching an instantaneous Pauli transfer matrix, representing the rotation, with periods of duration τ_g,1Q/2 of amplitude and phase damping. This allows the gate to be modeled for different T_1 and T_φ. However, comparison of this model with Pauli transfer matrices obtained from gate-set tomography experiments shows that actual gates are more accurately described by adding a phenomenological depolarizing noise to the instantaneous part. In the Bloch sphere, this decay corresponds to shrinking toward the origin, with factor 1 − p_axis along the y axis and 1 − p_plane along the x and z axes. We thus model the instantaneous part as

$$R = \mathrm{diag}(1,\; 1-p_{\mathrm{plane}},\; 1-p_{\mathrm{axis}},\; 1-p_{\mathrm{plane}})\; R_{R_y(\pi/2)}, \tag{B5}$$

where R_{R_y(π/2)} is the Pauli transfer matrix describing a perfect π/2 rotation around the y axis.
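A sketch of the noisy instantaneous gate in the PTM picture, with the perfect R_y(π/2) rotation written explicitly in the (I, X, Y, Z) basis; p_axis and p_plane are left as free parameters, since their experimental values are not reproduced here:

```python
import numpy as np

# Sketch of the single-qubit gate model in the Pauli transfer matrix (PTM)
# picture, basis ordering (I, X, Y, Z). p_axis and p_plane are the
# phenomenological depolarizing parameters described in the text.
def ry_half_pi_ptm(p_axis=0.0, p_plane=0.0):
    perfect = np.array([[1.0,  0.0, 0.0, 0.0],   # ideal R_y(pi/2): z -> x -> -z
                        [0.0,  0.0, 0.0, 1.0],
                        [0.0,  0.0, 1.0, 0.0],
                        [0.0, -1.0, 0.0, 0.0]])
    shrink = np.diag([1.0, 1.0 - p_plane, 1.0 - p_axis, 1.0 - p_plane])
    return shrink @ perfect

# In the full model, this instantaneous PTM is sandwiched between two idling
# periods of duration tau_g1Q/2 described by amplitude-phase damping.
ground = np.array([1.0, 0.0, 0.0, 1.0])   # Bloch-like vector (1, x, y, z) of |0>
plus = ry_half_pi_ptm() @ ground          # maps |0> onto the +x axis
```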

Flux noise
Shifting the transmon from its sweet spot f_q,max to a lower frequency makes it first-order sensitive to flux noise, with sensitivity ∂f_q/∂Φ. Here, Φ is the flux bias and Φ_0 = h/2e is the flux quantum. For a deviation of δΦ held for a time τ, the pulsed transmon incurs a phase error

$$\delta\phi = 2\pi\,\frac{\partial f_q}{\partial \Phi}\,\delta\Phi\,\tau.$$

Flux noise has a characteristic (single-sided) 1/f spectral density, $S_\Phi(f) = A^2/f$. For our quantum circuit based on 18, we estimate the corresponding rms phase error induced in a pulsed transmon to be δφ_rms ≈ 0.01 rad.
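The quasi-static ensemble-averaging model described in the main text amounts to drawing one Gaussian phase error per flux-pulsed qubit per run. A short sketch, using the rms estimate quoted above (the qubit count and run count are arbitrary here):

```python
import numpy as np

# Sketch of the quasi-static flux-noise model: every time a qubit is flux
# pulsed, an independent Gaussian phase error is drawn, static within a run
# but fluctuating between runs. DELTA_PHI_RMS is the estimate from the text.
rng = np.random.default_rng(0)
DELTA_PHI_RMS = 0.01  # rad

def sample_phase_errors(n_qubits, n_runs):
    """Draw one quasi-static phase error per flux-pulsed qubit per run."""
    return rng.normal(0.0, DELTA_PHI_RMS, size=(n_runs, n_qubits))

errors = sample_phase_errors(n_qubits=9, n_runs=1000)
```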

C-Z gates
We now focus on the two-qubit phase error. For an adiabatic gate,

$$\phi_{2Q} = 2\pi \int_{t_1}^{t_2} \zeta(t)\, dt,$$

with t_1 and t_2 = t_1 + τ_g,2Q the start and end of the gate, and ζ the time-dependent frequency deviation of the lower branch of the |11⟩ ↔ |02⟩ avoided crossing from the sum of frequencies for |01⟩ and |10⟩. Near the flux center Φ_c of the |11⟩ − |02⟩ avoided crossing,

$$\zeta(\Phi) = \sqrt{J^2 + \varepsilon(\Phi)^2/4} - \varepsilon(\Phi)/2,$$

where 2J/2π ∼ 50 MHz is the minimum splitting between |11⟩ and |02⟩, and ε(Φ) is the flux-dependent detuning between them. Differentiating with respect to Φ at Φ_c gives

$$\left.\frac{\partial \zeta}{\partial \Phi}\right|_{\Phi_c} = -\frac{1}{2}\left.\frac{\partial \varepsilon}{\partial \Phi}\right|_{\Phi_c}.$$

To estimate the δφ_2Q error, we make the following simplification: we replace the exact trajectory created by the flux pulse by a shift to Φ = Φ_c + δΦ with duration τ_g,2Q.
For a deviation of δΦ, the resulting error is $\delta\phi_{2Q} = 2\pi \left(\partial\zeta/\partial\Phi\right)\big|_{\Phi_c}\, \delta\Phi\,\tau_{g,2Q}$. Note that this two-qubit phase error is correlated with the single-qubit phase error on the fluxed transmon, and is smaller by a factor of ≈ 2.

Measurement
The butterfly probabilities p(m, o | i) are calibrated using the statistics of outcomes in back-to-back measurements (a measurement a followed by a measurement b) with the qubit initialized in |i⟩.
We obtain the six free parameters of the black-box description from these 12 equations, using experimental values on the left-hand side. Table III shows the values used, achieved in a recent experiment 19. For the simulation, we reproduce this behaviour of the measurement process by a model with several steps. The qubit undergoes dephasing, followed by periods of decay or excitation between which the measurement result is sampled. This measurement result is further subject to a state-dependent declaration error ε_RO before being reported to the decoder (see Fig. 9). The six parameters of this model are in one-to-one correspondence with the butterfly parameters described above, and can be mapped by solving the corresponding system of equations. The experimental results in Table III are very well explained by assuming unmodified amplitude-phase damping (with zero excitation probabilities) during the measurement period, and an outcome-independent declaration error of ε_RO^0 = ε_RO^1 = ε_RO = 0.15%. We use this result to extrapolate measurement performance to different values of T_1.
Reduction of measurement time is expected to reduce assignment fidelity. For the results presented in Fig. 2, we do not rely on experimental results, but assume a simplified model for measurement, following Ref. 26. A constant drive pulse, tuned to the bare resonator frequency (∆_r = 0), excites the readout resonator for a time τ_m. The dynamics of the resonator depends on the transmon state (we approximate the resonator as linear), and the transmitted signal is amplified and detected in a homodyne measurement as a noisy transient. This transient is processed by a linear classifier, which declares the measurement outcome. For resonator depletion, we use a two-step clearing pulse, with the two amplitudes each active for τ_d/2 and chosen (by numerical minimization) so that, at the end of the depletion pulse, the transients for both transmon states return to zero. While the resonator dynamics is easily found if the transmon is in the ground state, amplitude damping of the transmon in the excited state leads to non-deterministic behavior. We thus numerically obtain an ensemble of noisy transients for each input qubit state, and optimize the decision boundary of the linear classifier for this ensemble. Generating a second verification ensemble, the "butterfly" of the measurement setup is estimated.
The dynamics of the resonator is determined by the resonator linewidth κ as well as the dispersive shift χ. We choose the parameters of the setup used in 19, 1/κ = 250 ns and χ/π = −2.6 MHz. The signal-to-noise ratio of the detected transient is reduced by the quantum efficiency η = 12.5%. The driving strength is chosen to approximate the "butterfly" used in most of the main text, and corresponds to a steady-state average photon population of about n̄ = 15. We then keep the drive amplitude constant while changing the measurement time, keeping τ_m = τ_d, to obtain the butterflies used in the density-matrix simulation. We ignore effects leading to measurement-induced mixing and non-linearity of the readout resonator. Finally, since these simulations do not allow a realistic prediction of the residual photon numbers achievable in experiments, we ignore this effect when using these results.
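The transient-classification step can be sketched with a toy model: two fixed transients stand in for the state-dependent resonator response, additive white noise stands in for the finite quantum efficiency, and a matched-filter threshold plays the role of the linear classifier. All waveforms and noise levels here are illustrative, not the experimental parameters:

```python
import numpy as np

# Toy sketch of transient classification. TRANSIENT, SIGMA, and the waveform
# shapes are ILLUSTRATIVE placeholders, not the experimental parameters.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)                        # time in units of tau_m
TRANSIENT = {0: 1.0 - np.exp(-3.0 * t),               # transmon in |0>
             1: (1.0 - np.exp(-3.0 * t)) * np.cos(2.0 * t)}  # transmon in |1>
SIGMA = 0.8                                           # noise per time sample

WEIGHT = TRANSIENT[0] - TRANSIENT[1]                  # matched filter
THRESHOLD = 0.5 * (TRANSIENT[0] @ WEIGHT + TRANSIENT[1] @ WEIGHT)

def declare(i):
    """Simulate one noisy transient for input state i and classify it."""
    signal = TRANSIENT[i] + SIGMA * rng.normal(size=t.size)
    return 0 if signal @ WEIGHT > THRESHOLD else 1

# Estimate the assignment fidelity from two ensembles of simulated shots.
n = 2000
assignment_fidelity = 1.0 - 0.5 * (
    sum(declare(0) != 0 for _ in range(n)) / n
    + sum(declare(1) != 1 for _ in range(n)) / n)
```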

C. Effect of over-rotations and two-qubit phase noise on logical error rate
In this section we provide additional numerical data showing the effect of some common noise sources on the logical error rate. In Fig. 10 we show the effect of a coherent over-rotation, whereby the R_Y(π/2) operator in Eq. B5 is replaced by R_Y(π/2 + δφ). This can be caused by inaccurate calibration of the pulse used to perform the gate. In Fig. 11 we show the effect of an increase in the two-qubit flux noise δφ_rms as described in Sec. B 4.

D. Calculation of decoder upper bound
We provide a detailed description of how the decoder upper bound is obtained from the simulation results. As described in the main text, after each cycle of simulation, the diagonal of the reduced density matrix of the data qubits in the Z basis is stored. It contains the probability distribution over the 2^9 = 512 possible measurement outcomes of the data qubits. In the quantum memory experiment described in the main text, each of these outcomes is passed to the decoder, which then declares a logical measurement outcome.
It is evident that any decoder must declare opposite logical outcomes if two of the 512 possible measurements m and m' are related by the application of a logical X operator. Thus, any decoder can give the correct result only for half of the measurement outcomes. Subject to this constraint, we can find the set of 256 declarations which maximize the probability that the declaration is correct. It immediately follows that no decoder can achieve a declaration fidelity larger than this maximal probability. We thus refer to it as the decoder upper bound.
In practice, the upper bound is found as follows. Since declarations are opposite if two outcomes differ by a logical X operator, they must be equal if the outcomes differ by the application of one or more X stabilizers (applying two different logical X operators amounts to applying a product of X stabilizers). We thus group the outcomes into 32 cosets related by the application of X stabilizers. (There are 4 X stabilizers in Surface-17, so there are 512/2^4 = 32 cosets.) For outcomes from the same coset, the declaration from a decoder must be the same. We obtain the probability of a final measurement falling within each coset by summing the probabilities from the density-matrix diagonal. We further group the 32 cosets into 16 pairs, which differ by the application of a logical operator. The upper bound is then obtained by selecting the more probable coset from each pair and summing the corresponding probabilities. This upper bound can also be interpreted as the internal decoherence of the logical qubit: it represents the maximal overlap of the final state with the initial state, under any possible correction of errors.
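The coset construction above can be sketched directly on bit strings. The stabilizer and logical-X bit masks below are placeholders standing in for the actual Surface-17 layout; only the grouping logic is the point:

```python
import numpy as np

# Sketch of the upper-bound computation. Outcomes are 9-bit integers (one bit
# per data qubit, Z basis). The bit masks below are PLACEHOLDERS, not the
# actual Surface-17 X stabilizers and logical X.
X_STABILIZERS = [0b000000111, 0b000111000, 0b111000000, 0b000000011]
LOGICAL_X = 0b100100100

def decoder_upper_bound(probs):
    """probs: length-512 distribution over data-qubit measurement outcomes."""
    span = {0}                                    # group generated by stabilizers
    for s in X_STABILIZERS:
        span |= {g ^ s for g in span}             # 2^4 = 16 elements
    seen, bound = set(), 0.0
    for m in range(512):
        if m in seen:
            continue
        coset = {m ^ g for g in span}             # decoded identically
        partner = {c ^ LOGICAL_X for c in coset}  # opposite declaration
        seen |= coset | partner
        bound += max(sum(probs[c] for c in coset),
                     sum(probs[c] for c in partner))
    return bound
```

For a uniform distribution this yields 0.5 (a fully decohered logical qubit), while a distribution concentrated on a single outcome yields 1.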
We finally emphasize that this upper bound can be found only because we have access to the complete probability distribution of outcomes (for a given result of the syndrome measurements), a major advantage of the density-matrix simulation. However, we do not expect that any decoder can actually achieve this upper bound: this is because we add the measurement declaration errors independently after the simulation, which decreases the achievable logical fidelity further.

E. Hardware requirements of simulation
The simulations are performed using the quantumsim package 43, which was developed by the authors for this work. The package is accelerated by performing the density-matrix manipulations on a GPU. The simulations for this work were performed on an NVidia Tesla K40 GPU, on which we observed runtimes of about 0.5 seconds for the simulation of a run of k = 20 cycles (25 ms per QEC cycle). We also had the opportunity to test the software on a more modern GPU (NVidia Tesla P100), observing about 15 ms per cycle, and on a consumer-grade GPU (NVidia Quadro M2000), observing about 40 ms per cycle. By comparison, the CPU is mostly idle during the simulation, except for the handling of input and output. The memory requirements are modest for both CPU and GPU RAM; they are dominated by the storage of the density matrices and amount to a few tens of megabytes.

F. Homemade MWPM decoder with asymmetric weight calculation
Every QEC code requires a decoder to track the most likely errors consistent with a given set of stabilizer measurements. The MWPM decoder has gained popularity since it was shown to have threshold values above 1% 14. The motivation behind MWPM is that single X or Z errors on data qubits in the bulk of a surface-code fabric cause changes of two stabilizers in the code. These signals can then be considered vertices on a graph, with the error forming the edge connecting them. Errors in measurement, or errors on a single ancilla qubit, behave as changes in the stabilizer that are separated in time. Multiple errors that would join the same vertices create longer paths in the graph, of which an experiment only records the endpoints. Thus, the problem becomes that of finding the most likely set of generating errors given the error signals that mark their ends. This is made slightly simpler, as in the surface code any chain of errors that forms a closed loop does not change the logical state. This implies that all paths that connect two points are equivalent, and can be considered together. The problem then is to join error signals, either in pairs, or to a 'boundary' vertex. The latter corresponds to errors on data qubits at the boundary, which belong to only one X or Z stabilizer. This pairing P should be chosen as the most likely combination of single-qubit errors that could generate the measured error signals. This has then been reduced to the problem of minimum-weight perfect matching on a graph, which can be solved in polynomial time by the blossom algorithm 10.
The MWPM decoder we use differs from previous methods in its weight calculation. As part of the decoding process, it is required to calculate, to some degree of accuracy, the probability p_{e1,e2} of two measured error signals e_1 and e_2 being connected by a chain of individual errors. This is converted to a weight w_{e1,e2} = −log(p_{e1,e2}), and these weights form the input to the blossom algorithm of Edmonds, which finds the most likely matching of error signals 10. An exact calculation of p_{e1,e2} requires a sum over all such chains between e_1 and e_2 that do not cross the boundary (these are equivalent modulo stabilizer operators that do not change the logical state). In this appendix we detail a method of computing this sum, and approximations that make it viable within the runtime of the experiment.
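The weight conversion and matching step can be sketched as follows. For readability, a brute-force search over pairings stands in for the blossom algorithm (which solves the same problem in polynomial time and is what is used in practice); the vertex names and probabilities are placeholders:

```python
import math

# Sketch of the weight calculation and matching step. A brute-force search
# over pairings stands in for the blossom algorithm; names and probabilities
# are placeholders.
def pairing_weight(pairing, pair_probs):
    """Total weight of a pairing, with w = -log(p) per matched pair."""
    return sum(-math.log(pair_probs[pair]) for pair in pairing)

def all_pairings(vertices):
    """Yield every perfect matching of an even-length vertex list."""
    if not vertices:
        yield ()
        return
    first, rest = vertices[0], vertices[1:]
    for i, other in enumerate(rest):
        for sub in all_pairings(rest[:i] + rest[i + 1:]):
            yield ((first, other),) + sub

def best_matching(vertices, pair_probs):
    """Minimum-weight perfect matching = most likely pairing of signals."""
    return min(all_pairings(vertices),
               key=lambda m: pairing_weight(m, pair_probs))
```

For four error signals where s1–s2 and s3–s4 are far more likely to be connected than any other pair, `best_matching` returns exactly those two pairs.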
Let us define the ancilla graph G_A = (V_A, E_A) containing a vertex v ∈ V_A for every ancilla measurement, and an edge e ∈ E_A connecting v, u ∈ V_A if a single component (gate, single-qubit rest period, or faulty measurement) in the simulation can cause the u and v measurements to return an error. We include a special 'boundary' vertex v_B, to which we connect another vertex v if single components can cause errors on v alone. Then, to each edge e we associate a probability p_e, being the sum of the probabilities of each component causing this error signal. These error rates can be obtained directly from quantumsim, by cutting the circuit at each C-Z gate and measuring the decay of single qubits in between. Then, for a given experiment with given syndrome measurements, let us define the syndrome graph G_S = (V_S, E_S) containing a vertex v ∈ V_S for each syndrome measurement that records an error, and an edge λ_{u,v} ∈ E_S connecting u, v ∈ V_S if u and v are either both X ancilla qubits or both Z ancilla qubits. To each edge λ_{u,v} we associate a probability p_{u,v} given by the sum of the probabilities of a chain of errors causing error signals solely on u and v.
If we assume that single-qubit errors are uncorrelated, we have to lowest order

$$p_{u,v} \approx \sum_{\text{paths } (e_1, e_2, \ldots, e_n) \text{ between } u \text{ and } v}\; \prod_{j=1}^{n} p_{e_j}. \tag{F1}$$

Let A_A be the adjacency matrix on G_A weighted by the probabilities p_e (i.e., (A_A)_{u,v} = p_e with e connecting u and v), and A_S the same for G_S. Then, the above becomes

$$A_S = \left[(\mathbb{1} - A_A)^{-1} - \mathbb{1}\right]_{V_S},$$

noting that A_S contains a subset of the indices that are used to construct A_A. The boundary must be treated specially in the above calculation. For the purposes of the surface code, the boundary can be described as a single vertex which has no limit on the number of other vertices it may pair to 10. For the purposes of weight calculation, any path that passes through the boundary is already counted by pairing both end vertices to the boundary. This can be treated by making G_A directed, breaking the symmetry of A_A at the boundary so that paths may enter the boundary vertex but not leave it. The above calculation requires inversion of an N_mat × N_mat matrix, with N_mat the total number of ancilla measurements per experiment. Furthermore, as ancilla error rates depend upon the previous ancilla state, elements of A_A are not completely known until the previous cycle has been measured. This implies that in an actual computation with run-time decoding, this inversion would need to be completed within a few microseconds (in a transmon-cQED architecture), which is practically unfeasible. We suggest two approximations that can be made to shorten the decoding time. The first is to average all errors over the ancilla population, ignoring any asymmetry in the system. The adjacency matrix is then the same for every experiment, and can be precalculated and stored as a look-up table for the run-time decoder. We call this the decoder with symmetrized weights. The size of such a look-up table scales poorly with the number of qubits and the number of cycles. However, (A_S)_{u,v} is approximately invariant under simultaneous translation of u and v (excluding boundary effects). This implies that a precalculated A_S can be vastly compressed, making this method feasible.
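The path-sum identity underlying the inversion can be sketched numerically: summing products of edge probabilities over all paths is a geometric series in the weighted adjacency matrix. The three-vertex chain below is an illustrative toy graph, not a surface-code ancilla graph:

```python
import numpy as np

# Sketch of the path-sum identity: with uncorrelated edge probabilities, the
# sum over all paths of products of edge probabilities is a geometric series
# in the weighted adjacency matrix, sum_{n>=1} A^n = (1 - A)^{-1} - 1.
def path_probabilities(adjacency):
    """adjacency: weighted adjacency matrix of G_A, entries p_e."""
    n = adjacency.shape[0]
    return np.linalg.inv(np.eye(n) - adjacency) - np.eye(n)

# Toy example: a three-vertex chain u - w - v, edge probability 0.1 per edge.
a = np.zeros((3, 3))
a[0, 1] = a[1, 0] = a[1, 2] = a[2, 1] = 0.1
p = path_probabilities(a)
# p[0, 2] is dominated by the two-edge path, 0.1 * 0.1, plus small
# contributions from longer back-and-forth paths.
```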
The second approximation to the full A_S calculation is to perform it iteratively. We divide our graph G_A (G_S) by time steps; let G_A^t (G_S^t) be the subgraph of G_A (G_S) containing only ancillas measured before time step t, and let ∂G_A^t (∂G_S^t) be the subgraph of G_A (G_S) containing only ancillas measured during time step t. Then, given an approximation to the matrix A_S^t (the adjacency matrix of G_S^t), we can approximate A_S^{t+1} to lowest order in physical errors. Here, ∂A_A^{t+1} is the weighted adjacency matrix on ∂G_A^{t+1}, and the coupling matrix C_S^{t+1} is approximated to lowest order as well. We have used this second method for our MWPM decoder, as we expect the error from neglecting higher-order combinations of errors to be small. To check this assumption, in Fig. 12 we repeat our simulation protocol with a modified physical error model that excludes all Y and measurement errors. We see that in the absence of these errors, the MWPM decoder performs within the error margin of the decoder upper bound. Note that a small deviation is expected from the discrepancy between a MWPM decoder and a maximum-likelihood decoder 15. With the parameters used in this work, we do not observe any loss of fidelity when we stop accounting for the difference in error rates between ancilla states. We attribute this to the large error contribution from photon noise and gate infidelity on the ancilla qubits, which do not have this asymmetry. We further note that we operate in a regime of large ancilla error; as described in the text, this counter-intuitively makes the system less sensitive to ancilla noise. In systems where this is not the case, accounting for ancilla asymmetry could provide a useful computational method to reduce ε_L.

G. Implementation of a look-up table decoder
In 13 , the authors describe a decoding scheme specific to Surface-17, which is optimized to be implementable with limited computational resources in a short cycle time. This decoding scheme works by using a short decision tree to connect errors to each other in a style similar to blossom. Indeed, this scheme is equivalent to a blossom decoder with all horizontal, vertical and diagonal weights equal 13 . As such, we have implemented the new weights in the blossom decoder rather than utilizing the exact method given.

H. Details of lowest-order approximation
We detail the approximation made to study Surface-49 in Sec. II C. Note that this calculation is only for X errors, which are measured by the Z ancillas. This implies that our approximation should attempt to reproduce the result of the blossom decoder, rather than the decoder upper bound.
We begin with the graph G_A defined in App. F. In the absence of correlated errors that cause more than two error signals, any experiment can be approximately described by choosing a set S ⊂ E_A of edges on the graph and assuming the errors that correspond to these edges have occurred. Each ancilla measurement corresponds to a vertex in G_A, which records an error if an odd number of edges in S point to the vertex. Each combination M_a of ancilla measurements can be generated by multiple error sets S.
Formally, let us write M for the set of all combinations of ancilla measurements and S for the set of all combinations of errors (so S = 2^{E_A}). We then define a function φ : S → M that takes a combination of errors to the resultant measurement outcomes. Let us fix a logical Z operator Z_L on the surface-code fabric. Then to each S ∈ S we can assign a parity p(S) = ±1 depending on whether the product of all errors in S commutes with Z_L or not. A decoding then consists of a choice of parity p_d(M) for each M ∈ M. Such a decoding correctly decodes S ∈ S if p_d(φ(S)) = p(S), and creates a logical error otherwise. The source of logical errors in a perfect decoder is then precisely the fact that we can have two error combinations S_1, S_2 ∈ S such that φ(S_1) = φ(S_2) but p(S_1) ≠ p(S_2).
The above suggests a method by which a perfect decoder can be constructed. As defined, φ^{−1}(M) ⊂ S is the set of error combinations S that return a measurement M ∈ M. For each error combination S, we can calculate the probability of its occurrence,

$$P(S) = \prod_{e \in S} p_e \prod_{e \notin S} (1 - p_e).$$

At this point, the only approximation that has been made is to neglect the T_1 asymmetry in the system, which we have shown previously in this work to be negligible. Unfortunately, the above function cannot be evaluated exactly; the number of error combinations S is approximately 2^200 for 4 cycles of Surface-49. Our goal instead is to approximate this to lowest order in the physical qubit error rate.
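The probability assigned to a single error combination under the uncorrelated-error assumption can be sketched directly; the edge names and probabilities below are placeholders:

```python
import math

# Sketch of the probability of an error combination S under the
# uncorrelated-error assumption: every edge e of G_A either fired (e in S)
# with probability p_e, or did not with probability 1 - p_e.
# Edge names and probabilities are placeholders.
def error_set_probability(s, edge_probs):
    """s: set of edges assumed to have occurred; edge_probs: {edge: p_e}."""
    log_p = 0.0
    for edge, p in edge_probs.items():
        log_p += math.log(p if edge in s else 1.0 - p)
    return math.exp(log_p)  # log-space sum avoids underflow for many edges
```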
Let us make the approximation that our error combinations S can be split into small, well-separated pieces of errors containing separate correctable and non-correctable parts, S = ∪_i S_i. To each S_i we can assign a time step t(S_i), being the earliest time of the first error measurement observed (in φ(S_i)). The error rate per round, ε_L, can be determined by summing Eq. H3 over all pieces S_i of all combinations S such that t(S_i) = T (with arbitrary T), as the effect of repeated errors from S_i, S_j ⊂ S is taken into account during the derivation of the logical fidelity equation (Eq. 2 in the main text). If we took this approximation literally and considered the sum over every possible combination S containing S_i, the final sum in Eq. H6 would reduce to r̄^(u). However, this includes error combinations S that cannot be easily separated into S_i and 'something else', i.e., they contain other errors e that cannot be separated from S_i. Eq. H7 is then equivalent to assuming that if S_i is an uncorrectable logical error, no nearby combination of physical errors can be added such that the combined set is correctable, unless that combination is itself an uncorrectable logical error. Such combinations would serve to reduce the calculated ε_L, and so r̄^(u) gives an upper bound for ε_L in Eq. H5. For a lower bound, we approximate that for any uncorrectable error combination S_i, approximately one round's worth of single errors would undo the logical error, leading to the lower-bound approximation r̄^(l). We now make one further approximation, and sum Eq. H5 only over the shortest S_i that can be expected to contribute to the final error rate. That is, we sum over those S_i with |S_i| ≤ (d + 1)/2 that spread directly across the chain. The error incurred from this approximation is roughly proportional to the largest single error, which is no more than 5% throughout our study. We use r̄^(u) and r̄^(l) to give the error bars shown in Fig. 5.
Points in the plot are taken as the logarithmic average of the upper and lower bounds, and thus have no particular significance themselves. We see that the numerical calculation falls within the corresponding error bars for almost the entire dataset, save a slight deviation at one point where it falls below, giving verification for our method. Moreover, as the simulated Surface-17 error rate lies above the upper bound found for the Surface-49 error rate (with the standard set of parameters from the main text), our claim that Surface-17 will operate below the fault-tolerance threshold is quite strong.