Abstract
Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code—the XZZX code—offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
Introduction
A large-scale quantum computer must be able to reliably process data encoded in a nearly noiseless quantum system. To build such a quantum computer using physical qubits that experience errors from noise and faulty control, we require an architecture that operates fault-tolerantly^{1,2,3,4}, using quantum error correction to repair errors that occur throughout the computation.
For a fault-tolerant architecture to be practical, it will need to correct for physically relevant errors with only a modest overhead. That is, quantum error correction can be used to create near-perfect logical qubits if the rate of relevant errors on the physical qubits is below some threshold, and a good architecture should have a sufficiently high threshold for this to be achievable in practice. These fault-tolerant designs should also be efficient, using a reasonable number of physical qubits to achieve the desired logical error rate. The most common architecture for fault-tolerant quantum computing is based on the surface code^{5}. It offers thresholds against depolarising noise that are already high, and encouraging recent results have shown that its performance against more structured noise can be considerably improved by tailoring the code to the noise model^{6,7,8,9,10}. While the surface code has already demonstrated promising thresholds, its overheads are daunting^{5,11}. Practical fault-tolerant quantum computing will need architectures that provide high thresholds against relevant noise models while minimising overheads through efficiencies in physical qubits and logic gates.
In this paper, we present a highly efficient fault-tolerant architecture design that exploits the common structures in the noise experienced by physical qubits. Our central tool is a variant of the surface code^{12,13,14} where the stabilizer checks are given by the product XZZX of Pauli operators around each face on a square lattice^{15}. This seemingly innocuous local change of basis offers a number of significant advantages over its more conventional counterpart for structured noise models that deviate from depolarising noise.
We first consider preserving a logical qubit in a quantum memory using the XZZX code. While some two-dimensional codes have been shown to have high error thresholds for certain types of biased noise^{7,16}, we find that the XZZX code gives exceptional thresholds for all single-qubit Pauli noise channels, matching what is known to be achievable with random coding according to the hashing bound^{17,18}. It is particularly striking that the XZZX code can match the threshold performance of a random code, for any single-qubit Pauli error model, while retaining the practical benefits of local stabilizers and an efficient decoder. Intriguingly, for noise that is strongly biased towards X or Z, we have numerical evidence to suggest that the XZZX threshold exceeds this hashing bound, meaning this code could potentially provide a practical demonstration of the superadditivity of coherent information^{19,20,21,22,23}.
We show that these high thresholds persist with efficient, practical decoders by using a generalisation of a matching decoder in the regime where dephasing noise is dominant. In the fault-tolerant setting when stabilizer measurements are unreliable, we obtain thresholds in the biased-noise regime that surpass all previously known thresholds.
With qubits and operations that perform below the threshold error rate, the practicality of scalable quantum computation is determined by the overhead, i.e. the number of physical qubits and gates we need to obtain a target logical failure rate. Along with offering high thresholds against structured noise, we show that architectures based on the XZZX code require very low overhead to achieve a given target logical failure rate. Generically, we expect the logical failure rate to decay like O(p^{d/2}) at low error rates p, where \(d=O(\sqrt{n})\) is the distance of a surface code and n is the number of physical qubits used in the system. By considering a biased-noise model where dephasing errors occur a factor η more frequently than other types of errors, we demonstrate an improved logical failure rate scaling like \(O({(p/\sqrt{\eta })}^{d/2})\). We can therefore achieve a target logical failure rate using considerably fewer qubits at large bias because its scaling is improved by a factor ~η^{−d/4}. We also show that near-term devices, i.e. small-sized systems with error rates close to threshold, can have a logical failure rate with quadratically improved scaling as a function of distance; \(O({p}^{{d}^{2}/2})\). Thus, we should expect to achieve low logical failure rates using a modest number of physical qubits for experimentally plausible values of the noise bias, for example, 10 ≲ η ≲ 1000^{24,25}.
Finally, we consider fault-tolerant quantum computation with biased noise^{26,27,28}, and we show that the advantages of the XZZX code persist in this context. We show how to implement low-overhead fault-tolerant Clifford gates by taking advantage of the noise structure as the XZZX code undergoes measurement-based deformations^{29,30,31}. With an appropriate lattice orientation, noise with bias η is shown to yield a reduction in the required number of physical qubits by a factor of \(\sim {\mathrm{log}}\,\eta\) in a large-scale quantum computation. These advantages already manifest at code sizes attainable using present-day quantum devices.
Results
The XZZX surface code
The XZZX surface code is locally equivalent to the conventional surface code^{12,13,14}, differing by a Hadamard rotation on alternate qubits^{32,33}. The code parameters of the surface code are invariant under this rotation. The XZZX code therefore encodes k = O(1) logical qubits using n = O(d^{2}) physical qubits where the code distance is d. Constant factors in these values are determined by details such as the orientation of the square-lattice geometry and boundary conditions. See Fig. 1 for a description. This variant of the surface code was proposed in ref. ^{15}, and has been considered as a topological memory^{34}. To contrast the XZZX surface code with its conventional counterpart, we refer to the latter as the CSS surface code because it is of Calderbank-Shor-Steane type^{35,36}.
Together with a choice of code, we require a decoding algorithm to determine which errors have occurred and correct for them. We will consider Pauli errors \(E\in {\mathcal{P}}\), and we say that E creates a defect at face f if \(S_{f}E = -ES_{f}\), with S_{f} the stabilizer associated with f. A decoder takes as input the error syndrome (the locations of the defects) and returns a correction that will recover the encoded information with high probability. The failure probability of the decoder decays rapidly with increasing code distance, d, assuming the noise experienced by the physical qubits is below some threshold rate.
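The defect condition above is simply the statement that the error anticommutes with the face stabilizer. As an illustrative sketch of this bookkeeping (our own illustration, not code from this work), the check can be phrased for Pauli strings written over the alphabet {I, X, Y, Z}:

```python
def anticommutes(pauli_a, pauli_b):
    """Return True if two n-qubit Pauli strings anticommute.

    Two Pauli strings anticommute exactly when the number of positions
    where both act non-trivially with *different* Paulis is odd."""
    clashes = sum(1 for a, b in zip(pauli_a, pauli_b)
                  if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 1

# A face stabilizer of the XZZX code acts as XZZX on its four qubits.
# A Z error on a qubit where the stabilizer acts as X creates a defect:
assert anticommutes('XZZX', 'ZIII')
# ...while a Z error where the stabilizer also acts as Z does not:
assert not anticommutes('XZZX', 'IZII')
```

A decoder sees only which faces report a defect in this sense, never the error itself.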
Because of the local change of basis, the XZZX surface code responds differently to Pauli errors compared with the CSS surface code. We can take advantage of this difference to design better decoding algorithms. Let us consider the effect of different types of Pauli errors, starting with Pauli-Z errors. A single Pauli-Z error gives rise to two nearby defects. In fact, we can regard a Pauli-Z error as a segment of a string where defects lie at the endpoints of the string segment, and where multiple Pauli-Z errors compound into longer strings, see Fig. 1d.
A key feature of the XZZX code that we will exploit is that Pauli-Z error strings align along the same direction, as shown in Fig. 1d. We can understand this phenomenon in more formal terms from the perspective of symmetries^{10,37}. Indeed, the product of face operators along a diagonal such as that shown in Fig. 1e commutes with Pauli-Z errors. This symmetry guarantees that defects created by Pauli-Z errors will respect a parity conservation law on the faces of a diagonal oriented along this direction. Using this property, we can decode Pauli-Z errors on the XZZX code as a series of disjoint repetition codes. It follows that, for a noise model described by independent Pauli-Z errors, this code has a threshold error rate of 50%.
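To illustrate the repetition-code picture, the following sketch (our own illustration, not code from this work) computes the exact majority-vote failure rate of a distance-d repetition code under i.i.d. flip noise. Below p = 1/2 the failure rate is suppressed with growing d, and at p = 1/2 it equals exactly one half for odd d, reflecting the 50% threshold quoted above:

```python
from math import comb

def repetition_failure(d, p):
    """Exact logical failure rate of a distance-d repetition code under
    i.i.d. flip noise with majority-vote decoding: the probability that
    more than half of the d bits are flipped (d odd, so no ties)."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

# Below threshold, larger distance suppresses failures...
assert repetition_failure(11, 0.1) < repetition_failure(3, 0.1)
# ...and at p = 0.5 the failure rate is exactly 1/2, independent of d.
assert abs(repetition_failure(5, 0.5) - 0.5) < 1e-12
```

Under pure dephasing, each diagonal of the XZZX code behaves as one such independent code.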
Likewise, Pauli-X errors act similarly to Pauli-Z errors, but with Pauli-X error strings aligned along the direction orthogonal to the Pauli-Z error strings. In general, we would like to be able to decode all local Pauli errors, where error configurations of Pauli-X and Pauli-Z errors violate the one-dimensional symmetries we have introduced, e.g. Fig. 1f. As we will see, we can generalise conventional decoding methods to account for finite but high bias of one Pauli operator relative to others and maintain a very high threshold.
We finally remark that the XZZX surface code responds to Pauli-Y errors in the same way as the CSS surface code. Each Pauli-Y error creates a defect on each of its four adjacent faces; see Fig. 1g. The high-performance decoders presented in refs. ^{7,8,10} are therefore readily adapted to the XZZX code for an error model where Pauli-Y errors dominate.
Optimal thresholds
The XZZX code has exceptional thresholds for all single-qubit Pauli noise channels. We demonstrate this fact using an efficient maximum-likelihood decoder^{38}, which gives the optimal threshold attainable with the code for a given noise model. Remarkably, we find that the XZZX surface code achieves code-capacity threshold error rates that closely match the zero-rate hashing bound for all single-qubit Pauli noise channels, and appears to exceed this bound in some regimes.
We define the general single-qubit Pauli noise channel

$${\mathcal{E}}(\rho )=(1-p)\rho +p\left({r}_{X}X\rho X+{r}_{Y}Y\rho Y+{r}_{Z}Z\rho Z\right),$$

where p is the probability of any error on a single qubit and the channel is parameterised by the stochastic vector r = (r_{X}, r_{Y}, r_{Z}), where r_{X}, r_{Y}, r_{Z} ≥ 0 and r_{X} + r_{Y} + r_{Z} = 1. The set of all possible values of r parametrises an equilateral triangle, where the centre point (1/3, 1/3, 1/3) corresponds to standard depolarising noise, and the vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1) correspond to pure X, Y and Z noise, respectively. We also define biased-noise channels, which are restrictions of this general noise channel, parameterised by the scalar η; for example, in the case of Z-biased noise, we define η = r_{Z}/(r_{X} + r_{Y}) with r_{X} = r_{Y}, such that η = 1/2 corresponds to standard depolarising noise and the limit η → ∞ corresponds to pure Z noise. The hashing bound is defined as R = 1 − H(p) with R an achievable rate, k/n, using random codes and H(p) the Shannon entropy of the probability vector p = ((1 − p), pr_{X}, pr_{Y}, pr_{Z}). For our noise model, for any r there is a noise strength p for which the achievable rate via random coding goes to zero; we refer to this as the zero-rate hashing bound, and it serves as a useful benchmark for code-capacity thresholds.
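For concreteness, the zero-rate hashing bound can be computed numerically from this definition by solving 1 − H(p) = 0 with bisection. The sketch below is our own illustration of the definition, not the threshold-estimation code used in this work:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits, ignoring zero entries."""
    return -sum(q * log2(q) for q in probs if q > 0)

def zero_rate_hashing_bound(r, tol=1e-10):
    """Error rate p at which the hashing rate R = 1 - H(p) vanishes,
    for the probability vector p = ((1 - p), p*r_X, p*r_Y, p*r_Z)."""
    r_x, r_y, r_z = r
    def rate(p):
        return 1 - entropy([1 - p, p * r_x, p * r_y, p * r_z])
    lo, hi = 1e-9, 0.5  # rate is positive at lo and non-positive at hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rate(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Depolarising noise: the bound sits near 18.9%, the dip seen in Fig. 2.
assert 0.188 < zero_rate_hashing_bound((1/3, 1/3, 1/3)) < 0.191
# Pure Z noise: the bound is 50%, as for a classical repetition code.
assert abs(zero_rate_hashing_bound((0, 0, 1)) - 0.5) < 1e-6
```

The peaks and troughs of this function over the triangle of r values are the benchmark against which the code thresholds are compared.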
We estimate the threshold error rate as a function of r for both the XZZX surface code and the CSS surface code using a tensor-network decoder that gives a controlled approximation to the maximum-likelihood decoder^{7,8,38}; see Methods for details. Our results are summarised in Fig. 2. We find that the thresholds of the XZZX surface code closely match, or slightly exceed (as discussed below), the zero-rate hashing bound for all investigated values of r, with a global minimum p_{c} = 18.7(1)% at standard depolarising noise and peaks p_{c} ~ 50% at pure X, Y and Z noise. We find that the thresholds of the CSS surface code closely match this hashing bound for Y-biased noise, where Y errors dominate, consistent with prior work^{7,8}, as well as for channels where r_{Y} < r_{X} = r_{Z} such that X and Z errors dominate but are balanced. In contrast to the XZZX surface code, the thresholds of the CSS surface code fall well below this hashing bound as either X or Z errors dominate, with a global minimum p_{c} = 10.8(1)% at pure X and pure Z noise.
In some cases, our estimates of XZZX surface code thresholds appear to exceed the zero-rate hashing bound. The discovery of such a code would imply that we can create a superadditive coherent channel via code concatenation. To see why, consider an inner code with a high threshold that exceeds the hashing bound, p_{c} > p_{h.b.}, together with a finite-rate outer code with rate R_{out} = K_{out}/N_{out} > 0 that has some arbitrary nonzero threshold against independent noise^{39,40,41,42}. Now consider physical qubits with an error rate p below the threshold of the inner code but above the hashing bound, i.e. p_{h.b.} < p < p_{c}. We choose a constant-sized inner code using N_{in} qubits such that its logical failure rate is below the threshold of the outer code. Concatenating this inner code into the finite-rate outer code will give us a family of codes with rate \(R^{\prime} = {R}_{{\rm{out}}}/{N}_{{\rm{in}}} > 0\) and a vanishing failure probability as N_{out} → ∞. If both codes have low-density parity checks (LDPCs)^{41,42}, the resulting code provides an example of a superadditive LDPC code.
Given the implications of a code that exceeds the zero-rate hashing bound, we now investigate our numerics in this regime further. For the values of r investigated for Fig. 2, the mean difference between our estimates and the hashing bound is \(\overline{{p}_{c}-{p}_{\text{h.b.}}}=0.1(3)\)% and our estimates never fall more than 1.1% below the hashing bound. However, for high bias, η ≥ 100, we observe an asymmetry between Y-biased noise and Z-biased (or, equivalently, X-biased) noise. In particular, we observe that, while threshold estimates with Y-biased noise match the hashing bound to within error bars, threshold estimates with highly biased Z noise significantly exceed the hashing bound. Our results with Z-biased noise are summarised in Fig. 3, where, since thresholds are defined in the limit of infinite code distance, we provide estimates with sets of increasing code distance for η ≥ 30. Although the gap typically reduces, it appears to stabilise for η = 30, 100, 1000, where we find p_{c} − p_{h.b.} = 1.2(2)%, 1.6(3)%, 3.7(3)%, respectively, with the largest code distances; for η = 300, the gap exceeds 2.9% but has clearly not yet stabilised. This evidence for exceeding the zero-rate hashing bound appears to be robust, but warrants further study.
Finally, we evaluate threshold error rates for the XZZX code with rectangular boundaries using a minimum-weight perfect-matching decoder; see Fig. 4. Matching decoders are very fast, and so allow us to explore very large system sizes; they are also readily generalised to the fault-tolerant setting, as discussed below. Our decoder is described in Methods. Remarkably, the thresholds we obtain closely follow the zero-rate hashing bound at high bias. This is despite using a suboptimal decoder that does not use all of the syndrome information. Again, our data appear to marginally exceed this bound at high bias.
Fault-tolerant thresholds
Having demonstrated the remarkable code-capacity thresholds of the XZZX surface code, we now demonstrate how to translate these high thresholds into practice using a matching decoder^{14,43,44}. We find exceptionally high fault-tolerant thresholds, i.e. allowing for noisy measurements, with respect to a biased phenomenological noise model. Moreover, for unbiased noise models we recover the standard matching decoder^{14,45}.
To detect measurement errors we repeat measurements over a long time^{14}. We can interpret measurement errors as strings that align along the temporal axis with a defect at each endpoint. This allows us to adapt minimum-weight perfect-matching for fault-tolerant decoding. We explain our simulation in Fig. 5a–d and describe our decoder in Methods.
We evaluate fault-tolerant thresholds by finding logical failure rates using Monte Carlo sampling for different system parameters. We simulate the XZZX code on a d × d lattice with periodic boundary conditions, and we perform d rounds of stabilizer measurements. We regard a given sample as a failure if the decoder introduces a logical error to the code qubits, or if the combination of the error string and its correction returned by the decoder includes a nontrivial cycle along the temporal axis. It is important to check for temporal errors, as they can cause logical errors when we perform fault-tolerant logic gates by code deformation^{46}.
The phenomenological noise model is defined such that qubits experience errors with probability p per unit time. These errors may be either high-rate Pauli-Z errors that occur with probability p_{h.r.} per unit time, or low-rate Pauli-X or Pauli-Y errors, each occurring with probability p_{l.r.} per unit time. The noise bias with this phenomenological noise model is defined as η = p_{h.r.}/(2p_{l.r.}). One time unit is the time it takes to make a stabilizer measurement, and we assume we can measure all the stabilizers in parallel^{5}. Each stabilizer measurement returns the incorrect outcome with probability q = p_{h.r.} + p_{l.r.}. To leading order, this measurement error rate is consistent with a measurement circuit where an ancilla is prepared in the state \(\left|+\right\rangle\) and subsequently entangled to the qubits of S_{f} with bias-preserving controlled-not and controlled-phase gates before its measurement in the Pauli-X basis. With such a circuit, Pauli-Y and Pauli-Z errors on the ancilla will alter the measurement outcome. At η = 1/2 this noise model interpolates to a conventional noise model where q = 2p/3^{47}. We also remark that hook errors^{47,48}, i.e. correlated errors that are introduced by this readout circuit, are low-rate events. This is because high-rate Pauli-Z errors acting on the control qubit commute with the entangling gate, and so no high-rate errors are spread to the code.
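The rates in this noise model satisfy p = p_{h.r.} + 2p_{l.r.} with η = p_{h.r.}/(2p_{l.r.}), so both rates can be recovered from (p, η). A small sketch of this bookkeeping (our own illustration; the function and variable names are ours):

```python
def split_rates(p, eta):
    """Split the total single-qubit error probability p into the
    high-rate (Pauli-Z) part and the low-rate (Pauli-X or Pauli-Y) part,
    using p = p_hr + 2 * p_lr and eta = p_hr / (2 * p_lr)."""
    p_lr = p / (2 * (1 + eta))
    p_hr = p * eta / (1 + eta)
    return p_hr, p_lr

# At eta = 1/2 the model reduces to the conventional case with
# measurement-error probability q = p_hr + p_lr = 2p/3.
p_hr, p_lr = split_rates(0.09, 0.5)
assert abs(p_hr - 0.03) < 1e-12 and abs(p_lr - 0.03) < 1e-12
assert abs((p_hr + p_lr) - 2 * 0.09 / 3) < 1e-12
```

In a simulation, each qubit would then receive Z with probability p_hr and X or Y each with probability p_lr per round, while each stabilizer outcome flips with probability q = p_hr + p_lr.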
Intuitively, the decoder will preferentially pair defects along the diagonals associated with the dominant error. In the limit of infinite bias at q = 0, the decoder corrects the Pauli-Z errors by treating the XZZX code as independent repetition codes. It follows that by extending the syndrome along the temporal direction to account for the phenomenological noise model with infinite bias, we effectively decode d decoupled copies of the two-dimensional surface code, see Fig. 5. With the minimum-weight perfect-matching decoder, we therefore expect a fault-tolerant threshold of ~10.3%^{14}. Moreover, when η = 1/2 the minimum-weight perfect-matching decoder is equivalent to the conventional matching decoder^{14,45}. We use these observations to check that our decoder behaves correctly in these limits.
In Fig. 5e, we present the thresholds we obtain for the phenomenological noise model as a function of the noise bias η. In the fault-tolerant case, we find our decoder tends towards a threshold of ~10% as the bias becomes large. We note that the threshold error rate appears lower than the expected ~10.3%; we suggest that this is a small-size effect. Indeed, the success of the decoder depends on effectively decoding ~d independent copies of the surface code correctly. In practice, this leads us to underestimate the threshold when we perform simulations using finite-sized systems.
Notably, our decoder significantly surpasses the thresholds found for the CSS surface code against biased Pauli-Y errors^{10}. We also compare our results to a conventional minimum-weight perfect-matching decoder for the CSS surface code, where we correct bit-flip errors and dephasing errors separately. As we see, our decoder for the XZZX code is equivalent to the conventional decoding strategy at η = 1/2 and outperforms it for all other values of noise bias.
Overheads
We now show that the exceptional error thresholds of the XZZX surface code are accompanied by significant advantages in terms of the scaling of the logical failure rate as a function of the number of physical qubits n when error rates are below threshold. Improvements in scaling will reduce the resource overhead, because fewer physical qubits will be needed to achieve a desired logical failure rate.
The XZZX code with periodic boundary conditions on a lattice with dimensions d × (d + 1) has the remarkable property that it possesses only a single logical operator that consists of only physical Pauli-Z terms. Moreover, this operator has weight n = d(d + 1). Based on the results of ref. ^{8}, we can expect that the XZZX code on such a lattice will have a logical failure rate that decays like \(O({p}_{\,\text{h.r.}\,}^{{d}^{2}/2})\) at infinite bias. Note we can regard this single logical-Z operator as a string that coils around the torus many times such that it is supported on all n qubits. As such, this model can be regarded as an n-qubit repetition code whose logical failure rate decays like O(p^{n/2}).
Here we use the XZZX code on a periodic d × (d + 1) lattice to test the performance of codes with high-weight Pauli-Z operators at finite bias. We find, at high bias and error rates near to threshold, that a small XZZX code can demonstrate this rapid decay in logical failure rate. In general, at more modest biases and at lower error rates, we find that the logical failure rate scales like \(O({(p/\sqrt{\eta })}^{d/2})\) as the system size diverges. This scaling indicates a significant advantage in the overhead cost of architectures that take advantage of biased noise. We demonstrate both of these regimes with numerical data.
In practice, it will be advantageous to find low-overhead scaling using codes with open boundary conditions. We finally argue that the XZZX code with rectangular open boundary conditions will achieve comparable overhead scaling in the large-system-size limit.
Let us examine the different failure mechanisms for the XZZX code on the periodic d × (d + 1) lattice more carefully. Restricting to Pauli-Z errors, the weight of the only nontrivial logical operator is d(d + 1). This means the code can tolerate up to d(d + 1)/2 dephasing errors, and we can therefore expect failures due to high-rate errors to occur with probability

$${\overline{P}}_{{\rm{quad.}}} \sim {N}_{{\rm{h.r.}}}\,{p}_{\,\text{h.r.}\,}^{{d}^{2}/2}$$

below threshold, where \({N}_{{\rm{h.r.}}} \sim {2}^{{d}^{2}}\) is the number of configurations that d^{2}/2 Pauli-Z errors can take on the support of the weight-d^{2} logical operator to cause a failure. We compare this failure rate to the probability of a logical error caused by a string of d/4 high-rate errors and d/4 low-rate errors. We thus consider the ansatz

$${\overline{P}}_{{\rm{lin.}}} \sim {N}_{{\rm{l.r.}}}\,{p}_{\,\text{h.r.}\,}^{d/4}\,{p}_{\,\text{l.r.}\,}^{d/4},$$

where N_{l.r.} ~ 2^{γd} is an entropy term with 3/2 ≲ γ ≲ 2^{49}. We justify this ansatz and estimate γ in Methods.
This structured noise model thus leads to two distinct regimes, depending on which failure process is dominant. In the first regime, where \({\overline{P}}_{{\rm{quad.}}}\gg {\overline{P}}_{{\rm{lin.}}}\), we expect that the logical failure rate will decay like \(\sim {p}_{\,\text{h.r.}\,}^{{d}^{2}/2}\). We find this behaviour with systems of a finite size and at high bias where error rates are near to threshold. We evaluate logical failure rates using numerical simulations to demonstrate the behaviour that characterises this regime; see Fig. 6a. Our data show good agreement with the scaling ansatz \(\overline{P}=A{e}^{B{d}^{2}}\). In contrast, our data are not well described by a scaling \(\overline{P}=A{e}^{Bd}\).
We observe the regime where \({\overline{P}}_{{\rm{lin.}}}\gg {\overline{P}}_{{\rm{quad.}}}\) using numerics at small p and modest η. In this regime, logical errors are caused by a mixture of low-rate and high-rate errors that align along a path of weight O(d) on some nontrivial cycle. In Fig. 6b, we show that the data agree well with the ansatz of Eq. (3), with γ ~ 1.8. This remarkable correspondence with our data shows that our decoder is capable of decoding up to ~d/4 low-rate errors, even with a relatively large number of high-rate errors occurring simultaneously on the lattice.
In summary, for either scaling regime, we find that there are significant implications for overheads. We emphasise that the generic case for fault-tolerant quantum computing is expected to be the regime dominated by \({\overline{P}}_{{\rm{lin.}}}\). In this regime, the logical failure rate of a code is expected to decay as \(\overline{P} \sim {p}^{d/2}\) below threshold^{5,50,51}. Under biased noise, our numerics show that failure rates \(\overline{P} \sim {(p/\sqrt{\eta })}^{d/2}\) can be obtained. This additional decay factor ~η^{−d/4} in our expression for the logical failure rate means we can achieve a target logical failure rate with far fewer qubits at high bias.
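To get a feel for the overhead implications, the sketch below (our own back-of-the-envelope illustration; prefactors and entropy terms are neglected, and the odd-distance convention is ours) inverts the scaling \(\overline{P} \sim (p/\sqrt{\eta})^{d/2}\) to find the distance needed for a target logical failure rate:

```python
from math import ceil, log, sqrt

def required_distance(p_target, p, eta):
    """Smallest odd code distance d with (p / sqrt(eta))**(d/2) <= p_target,
    under the sub-threshold scaling ansatz from the text (prefactors and
    entropy terms neglected; illustrative only)."""
    base = p / sqrt(eta)
    assert 0 < base < 1, "must be below threshold for the ansatz to apply"
    d = ceil(2 * log(p_target) / log(base))
    return d + 1 if d % 2 == 0 else d

# At p = 1%, a bias of eta = 100 noticeably reduces the required
# distance, and hence the qubit count n ~ d^2, for the same target.
d_unbiased = required_distance(1e-12, 0.01, 1)
d_biased = required_distance(1e-12, 0.01, 100)
assert d_biased < d_unbiased
```

Larger bias shrinks the effective error rate entering the exponent, which is the origin of the ~η^{−d/4} factor quoted above.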
The regime dominated by \({\overline{P}}_{{\rm{quad.}}}\) scaling is particularly relevant for near-term devices that have a small number of qubits operating near the threshold error rate. In this situation, we have demonstrated a very rapid decay in logical failure rate, like \(\sim {p}^{{d}^{2}/2}\) at high bias, for codes that can tolerate ~d^{2}/2 dephasing errors.
We finally show that we can obtain a low-overhead implementation of the XZZX surface code with open boundary conditions using an appropriate choice of lattice geometry. As we explain below, this is important for performing fault-tolerant quantum computation with a two-dimensional architecture. Specifically, with the geometry shown in Fig. 1h, we can reduce the length of one side of the lattice by a factor of \(O(1/{\mathrm{log}}\,\eta )\), leaving a smaller rectangular array of qubits. This is because high-rate error strings of the biased-noise model align along the horizontal direction only. We let d_{X} (d_{Z}) denote the weight of the least-weight logical operator comprised of only Pauli-X (Pauli-Z) operators. We can therefore choose d_{X} ≪ d_{Z} without compromising the logical failure rate of the code due to Pauli-Z errors at high bias. This choice may have a dramatic effect on the resource cost of large-scale quantum computation. We estimate that the optimal choice is

$$\frac{{d}_{Z}}{{d}_{X}}\approx 1+\frac{{\mathrm{log}}\,\eta }{{\mathrm{log}}\,(1/p)},$$

using approximations that apply at low error rates. To see this, let us suppose that a logical failure due to high- (low-) rate errors occurs with probability \({\overline{P}}_{{\rm{h.r.}}}\approx {p}^{{d}_{Z}/2}\) (\({\overline{P}}_{{\rm{l.r.}}}\approx {(p/\eta )}^{{d}_{X}/2}\)), where we have neglected entropy terms and assumed p_{h.r.} ~ p and p_{l.r.} ~ p/η. Equating \({\overline{P}}_{\text{l.r.}}\) and \({\overline{P}}_{\text{h.r.}}\) gives us Eq. (4). Similar results have been obtained with other codes, e.g. in refs. ^{16,26,52,53,54}. Assuming an error rate that is far below threshold, e.g. p ~ 1%, and a reasonable bias we might expect, η ~ 100, we find an aspect ratio d_{X} ~ d_{Z}/2.
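The optimal aspect ratio follows from equating the two failure probabilities; as a quick check of the arithmetic (our own sketch, with entropy terms neglected as in the text):

```python
from math import log

def aspect_ratio_dx_over_dz(p, eta):
    """d_X / d_Z obtained by equating p**(d_Z/2) with (p/eta)**(d_X/2):
    taking logarithms gives d_X / d_Z = log(p) / log(p/eta)."""
    return log(p) / log(p / eta)

# At p ~ 1% and eta ~ 100 the X-distance can be half the Z-distance,
# as quoted in the text.
assert abs(aspect_ratio_dx_over_dz(0.01, 100) - 0.5) < 1e-9
# With unbiased noise (eta = 1) the lattice stays square.
assert aspect_ratio_dx_over_dz(0.01, 1) == 1.0
```

Stronger bias shrinks d_X further, which is where the O(1/log η) reduction of one lattice dimension comes from.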
Low-overhead fault-tolerant quantum computation
As with the CSS surface code, we can perform fault-tolerant quantum computation with the XZZX code using code deformations^{29,30,31,55,56,57}. Here we show how to maintain the advantages that the XZZX code demonstrates as a memory experiencing structured noise, namely, its high threshold error rates and its reduced resource costs, while performing fault-tolerant logic gates.
A code deformation is a type of fault-tolerant logic gate where we manipulate encoded information by changing the stabilizer group we measure^{55,57}. These altered stabilizer measurements project the system onto another stabilizer code where the encoded information has been transformed or 'deformed'. These deformations allow for Clifford operations with the surface code; Clifford gates are universal for quantum computation when supplemented with the noisy initialisation of magic states^{58}. Although initialisation circuits have been proposed to exploit a bias in the noise^{59}, here we focus on fault-tolerant Clifford operations and the fault-tolerant preparation of logical qubits in the computational basis.
Many approaches for code deformations have been proposed that, in principle, could be implemented in a way that takes advantage of structured noise using a tailored surface code. These approaches include braiding punctures^{55,56,57,60}, lattice surgery^{29,30,61,62} and computation with twist defects^{30,63,64}. We focus on a single example based on lattice surgery as in refs. ^{31,62}; see Fig. 7a. We will provide a high-level overview and leave open all detailed questions of implementation and threshold estimates for fault-tolerant quantum computation to future work.
Our layout for fault-tolerant quantum computation requires the fault-tolerant initialisation of a hexon surface code, i.e. a surface code with six twist defects at its boundaries^{30}; see Fig. 7b. We can fault-tolerantly initialise this code in eigenstates of the computational basis through a process detailed in Fig. 7. We remark that the reverse operation, where we measure qubits of the XZZX surface code in this same product basis, will read the code out while respecting the properties required to be robust to the noise bias. Using the arguments presented above for the XZZX code with rectangular boundaries, we find a low-overhead implementation with dimensions related as d_{Z} = A_{η}d_{X}, where we might choose an aspect ratio \({A}_{\eta }=O({\mathrm{log}}\,\eta )\) at low error rates and high noise bias.
We briefly confirm that this method of initialisation is robust to our biased-noise model. Principally, this method must correct high-rate Pauli-Z errors on the red qubits, as Pauli-Z errors act trivially on the blue qubits in eigenstates of the Pauli-Z operator during preparation. Given that the initial state is already in an eigenstate of some of the stabilizers of the XZZX surface code, we can detect these Pauli-Z errors on red qubits, see, e.g. Fig. 7(v). The shaded faces will identify defects due to the Pauli-Z errors. Moreover, as we discussed before, strings created by Pauli-Z errors align along horizontal lines using the XZZX surface code. This, again, is due to the stabilizers of the initial state respecting the one-dimensional symmetries of the code under pure dephasing noise. In addition to robustness against high-rate errors, low-rate errors as in Fig. 7(vi) can also be detected on blue qubits. The bit-flip errors violate the stabilizers we initialise when we prepare the initial product state. As such, we can adapt the high-threshold error-correction schemes we have proposed for initialisation to detect these errors for the case of finite bias. We therefore benefit from the advantages of the XZZX surface code under a biased error model during initialisation.
Code deformations amount to initialising and reading out different patches of a large surface code lattice. As such, performing arbitrary code deformations while preserving the biased-noise protection offered by the XZZX surface code is no more complicated than what has already been demonstrated, with one exception. We might consider generalisations of lattice surgery or other code deformations where we can perform fault-tolerant Pauli-Y measurements. In this case, we introduce a twist to the lattice^{63} and, as such, we need to re-examine the symmetries of the system to propose a high-performance decoder. We show the twist in the centre of Fig. 7c together with its weight-five stabilizer operator. A twist introduces a branch in the one-dimensional symmetries of the XZZX surface code. A minimum-weight perfect-matching decoder can easily be adapted to account for this branch. Moreover, should we consider performing fault-tolerant Pauli-Y measurements, we do not expect that a branch at a single location on the lattice will have a significant impact on the performance of the code experiencing structured noise. Indeed, even with a twist on the lattice, the majority of the lattice is decoded as a series of one-dimensional repetition codes in the infinite-bias limit.
Discussion
We have shown how fault-tolerant quantum architectures based on the XZZX surface code yield remarkably high memory thresholds and low overhead compared with the conventional surface code approach. Our generalised fault-tolerant decoder can realise these advantages over a broad range of biased error models representing what is observed in experiments for a variety of physical qubits.
The performance of the XZZX code is underpinned by its exceptional code-capacity thresholds, which match the performance of random coding (hashing) theory, suggesting that this code may be approaching the limits of what is possible. In contrast to this expectation, the XZZX surface code threshold is numerically observed to exceed this hashing bound for certain error models, opening the enticing possibility that random coding is not the limit for practical thresholds. We note that for both code capacities and fault-tolerant quantum computing, the highest achievable error thresholds are not yet known.
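For concreteness, the zero-rate hashing bound invoked here can be computed directly: it is the physical error rate at which the Shannon entropy of the Pauli channel reaches one bit, so that the achievable hashing rate 1 − H vanishes. The following sketch is an illustration only (the function names are ours); it uses the biased-channel parametrisation p_Z = pη/(η + 1), p_X = p_Y = p/[2(η + 1)] defined in the Methods.

```python
import numpy as np
from scipy.optimize import brentq

def channel_entropy(p, eta):
    """Shannon entropy (bits) of a Z-biased Pauli channel with total error rate p.

    Bias eta = p_Z / (p_X + p_Y); Pauli-X and Pauli-Y occur at equal rates.
    """
    p_z = p * eta / (eta + 1)
    p_x = p_y = p / (2 * (eta + 1))
    probs = np.array([1 - p, p_x, p_y, p_z])
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

def hashing_bound(eta):
    """Error rate at which the hashing rate 1 - H(channel) hits zero.

    Valid for finite bias; at strictly infinite bias the root sits at p = 1/2.
    """
    return brentq(lambda p: channel_entropy(p, eta) - 1.0, 1e-6, 0.5 - 1e-9)

# Depolarising noise (eta = 1/2) recovers the familiar ~18.9% hashing bound,
# and the bound grows towards 50% as the bias increases.
print(round(hashing_bound(0.5), 4))
```
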
We emphasise that the full potential of our results lies not just in the demonstrated advantages of using this particular architecture, but rather the indication that further innovations in codes and architectures may still yield significant gains in thresholds and overheads. We have shown that substantial gains on thresholds can be found when the code and decoder are tailored to the relevant noise model. While the standard approach to decoding the surface code considers Pauli-X and Pauli-Z errors separately, we have shown that a tailored non-CSS code and decoder can outperform this strategy for essentially all structured error models. There is a clear avenue to generalise our methods and results to the practical setting involving correlated errors arising from more realistic noise models as we perform fault-tolerant logic. We suggest that the theory of symmetries^{10,37} may offer a formalism to make progress in this direction.
Because our decoder is based on minimum-weight matching, there are no fundamental obstacles to adapting it to the more complex setting of circuit noise^{47,56,65}. We expect that the high numerical thresholds we observe for phenomenological noise will, when adapted to circuit-level noise, continue to outperform the conventional surface code, especially when using gates that preserve the structure of the noise^{27,28}. We expect that the largest performance gains will be obtained by using information from a fully characterised Pauli noise model^{66,67,68} that goes beyond the single-qubit error models considered here.
Along with high thresholds, the XZZX surface code architecture can yield significant reductions in the overheads for fault-tolerant quantum computing through improvements to the subthreshold scaling of logical error rates. It is in this direction that further research into tailored codes and decoders may provide the most significant advances, bringing down the astronomical numbers of physical qubits needed for fault-tolerant quantum computing. A key future direction of research would be to carry these improvements over to codes and architectures that promise improved (even constant) overheads^{39,40,42}. Recent research on fault-tolerant quantum computing using low-density parity check (LDPC) codes that generalise concepts from the surface code^{41,69,70,71,72,73,74} provides a natural starting point.
Methods
Optimal thresholds
In the main text, we obtained optimal thresholds using a maximum-likelihood decoder to highlight features of the codes independent of any particular heuristic decoding algorithm. Maximum-likelihood decoding, which selects a correction from the most probable logical coset of error configurations consistent with a given syndrome, is, by definition, optimal. Exact evaluation of the coset probabilities is, in general, inefficient. An algorithm due to Bravyi, Suchara and Vargo^{38} efficiently approximates maximum-likelihood decoding by mapping coset probabilities to tensor-network contractions. Contractions are approximated by reducing the size of the tensors during contraction through Schmidt decomposition and retention of only the χ largest Schmidt values. This approach, appropriately adapted, has been found to converge well with modest values of χ for a range of Pauli noise channels and surface code layouts^{8,38}. A full description of the tensor network used in our simulations with the rotated CSS surface code is provided in ref. ^{8}; adaptation to the XZZX surface code is a straightforward redefinition of tensor element values for the uniform stabilizers.
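The truncation step at the heart of this approximation can be illustrated in a few lines of Python: split a two-site tensor by singular value decomposition and retain only the χ largest Schmidt values. This is a generic sketch of the technique, not our decoder implementation.

```python
import numpy as np

def truncate_bond(theta, chi):
    """Split a two-site tensor, keeping only the chi largest Schmidt values.

    theta: matrix whose rows/columns group the left/right indices of the bond.
    Returns factors A, B with inner (bond) dimension at most chi, so that
    A @ B is the best rank-chi approximation of theta in Frobenius norm.
    """
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi, np.count_nonzero(s))
    # Absorb the retained Schmidt values symmetrically into both factors.
    sqrt_s = np.sqrt(s[:k])
    return u[:, :k] * sqrt_s, sqrt_s[:, None] * vh[:k]

rng = np.random.default_rng(7)
theta = rng.normal(size=(8, 8))
a, b = truncate_bond(theta, chi=4)
print(a.shape, b.shape)  # bond dimension reduced to 4
```

The discarded Schmidt values bound the truncation error, which is why a modest χ can suffice when the Schmidt spectrum decays quickly.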
Figure 2, which shows threshold values over all single-qubit Pauli noise channels for CSS and XZZX surface codes, is constructed as follows. Each threshold surface is formed using Delaunay triangulation of 211 threshold values. Since both CSS and XZZX square surface codes are symmetric under the exchange of Pauli-X and Pauli-Z, 111 threshold values are estimated for each surface. Sample noise channels are distributed radially such that the spacing reduces quadratically towards the sides of the triangle representing all single-qubit Pauli noise channels; see Fig. 8a. Each threshold is estimated over four d × d codes with distances d ∈ {13, 17, 21, 25}, at least six physical error probabilities, and 30,000 simulations per code distance and physical error probability. In all simulations, a tensor-network decoder approximation parameter of χ = 16 is used to achieve reasonable convergence over all sampled single-qubit Pauli noise channels for the given code sizes.
Figure 3, which investigates threshold estimates exceeding the zero-rate hashing bound for the XZZX surface code with Z-biased noise, is constructed as follows. For bias 30 ≤ η ≤ 1000, where XZZX threshold estimates exceed the hashing bound, we run compute-intensive simulations; each threshold is estimated over sets of four d × d codes with distances up to d ∈ {65, 69, 73, 77}, at least fifteen physical error probabilities, and 60,000 simulations per code distance and physical error probability. Interestingly, for the XZZX surface code with Z-biased noise, we find the tensor-network decoder converges extremely well, as summarised in Fig. 8b for code distance d = 77, allowing us to use χ = 8. For η = 30, the shift in logical failure rate between χ = 8 and the largest χ shown is less than one fifth of a standard deviation over 30,000 simulations, and for η > 30 the convergence is complete. All other threshold estimates in Fig. 3 are included for context and use the same simulation parameters as described above for Fig. 2.
All threshold error rates in this work are evaluated using the critical exponent method of ref. ^{45}.
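As an illustration of the critical exponent method, logical failure rates near threshold are fitted to a quadratic expansion of the universal scaling function in the rescaled variable x = (p − p_th)d^{1/ν}. The sketch below recovers known parameters from synthetic, noiseless data; it is illustrative only and does not reproduce our fitting pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_ansatz(pd, p_th, nu, a, b, c):
    """Quadratic expansion of the universal scaling function near threshold."""
    p, d = pd
    x = (p - p_th) * d ** (1.0 / nu)
    return a + b * x + c * x ** 2

# Synthetic failure-rate data with a known threshold of 0.10 and nu = 1.5.
p = np.tile(np.linspace(0.08, 0.12, 9), 4)
d = np.repeat([13, 17, 21, 25], 9)
truth = scaling_ansatz((p, d), 0.10, 1.5, 0.3, 0.5, 0.2)
popt, _ = curve_fit(
    scaling_ansatz, (p, d), truth, p0=(0.09, 1.0, 0.2, 0.4, 0.1),
    bounds=([0.05, 0.5, -np.inf, -np.inf, -np.inf],
            [0.15, 5.0, np.inf, np.inf, np.inf]))
print(round(popt[0], 4), round(popt[1], 2))  # recovers p_th and nu
```

The fitted p_th is the threshold estimate; the crossing of the curves for different d at p = p_th is the signature used in Figs. 2 and 3.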
The minimumweight perfectmatching decoder
Decoders based on the minimum-weight perfect-matching algorithm^{43,44} are ubiquitous in the quantum error-correction literature^{5,14,37,45,75}. The minimum-weight perfect-matching algorithm takes a graph with weighted edges and returns a perfect matching using the edges of the input graph such that the sum of the weights of the matched edges is minimal. We can use this algorithm for decoding by preparing a complete graph as input such that the edges returned in the output matching correspond to pairs of defects that should be locally paired by the correction. To achieve this, we assign each defect a corresponding vertex in the input graph, and we assign the edges weights such that the proposed correction corresponds to an error that occurred with high probability.
The runtime of the minimum-weight perfect-matching algorithm can scale like O(V^{3}), where V is the number of vertices of the input graph^{44}; the typical number of vertices is V = O(pd^{2}) in the case where measurements always give the correct outcomes and V = O(pd^{3}) in the case where measurements are unreliable.
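For intuition, the matching step can be sketched in a few lines. Production decoders use the blossom algorithm^{43,44}; the brute-force search below is only an illustration, viable for a handful of defects, and it assumes an even number of them.

```python
def min_weight_pairing(defects, weight_fn):
    """Brute-force minimum-weight perfect matching over all pairings.

    Illustrative only: exhaustive search is fine for a handful of defects,
    while real decoders use the polynomial-time blossom algorithm.
    Assumes an even number of defects.
    """
    idx = list(range(len(defects)))

    def pairings(items):
        # Recursively pair the first item with each remaining item.
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for k, second in enumerate(rest):
            for tail in pairings(rest[:k] + rest[k + 1:]):
                yield [(first, second)] + tail

    best, best_cost = None, float("inf")
    for pairing in pairings(idx):
        cost = sum(weight_fn(defects[i], defects[j]) for i, j in pairing)
        if cost < best_cost:
            best, best_cost = pairing, cost
    return [(defects[i], defects[j]) for i, j in best], best_cost

# Toy example: four defects on a line, paired by distance.
pairs, cost = min_weight_pairing([0, 1, 10, 12], lambda u, v: abs(u - v))
print(pairs, cost)  # [(0, 1), (10, 12)] 3
```
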
The success of the decoder depends on how we choose to weight the edges of the input graph; here we discuss how we assign these weights. It is convenient to define an alternative coordinate system that follows the symmetries of the code. Denote by \({{\mathcal{D}}}_{j}\) sets of faces aligned along a diagonal line such that \(S={\prod }_{f\in {{\mathcal{D}}}_{j}}{S}_{f}\) is a symmetry of the code with respect to Pauli-Z errors, i.e. S commutes with Pauli-Z errors. One such diagonal is shown in Fig. 1(e). Let also \({{\mathcal{D}}}_{j}^{\prime}\) be the diagonal sets of faces that respect symmetries introduced by Pauli-X errors.
Let us first consider the decoder at infinite bias, where we can decode the lattice as a series of one-dimensional matching problems along the diagonals \({{\mathcal{D}}}_{j}\). Any error drawn from the set of Pauli-Z errors \({{\mathcal{E}}}^{Z}\) must create an even number of defects along each diagonal \({{\mathcal{D}}}_{j}\). Indeed, \(S={\prod }_{f\in {{\mathcal{D}}}_{j}}{S}_{f}\) is a symmetry with respect to \({{\mathcal{E}}}^{Z}\), since the operators S commute with the errors of \({{\mathcal{E}}}^{Z}\). In fact, this special case of matching along a one-dimensional line is equivalent to decoding the repetition code using a majority-vote rule. As an aside, it is worth mentioning that the parallelised decoding procedure we have described vastly improves the speed of decoding in this infinite-bias limit.
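The majority-vote rule for the repetition code, to which each diagonal reduces in this limit, can be sketched as:

```python
def repetition_decode(bits):
    """Majority-vote recovery for a classical repetition code.

    bits: noisy copies of a single logical bit (odd length avoids ties).
    Returns the recovered bit value.
    """
    return int(sum(bits) > len(bits) / 2)

# A distance-5 repetition code tolerates up to two flipped bits.
print(repetition_decode([1, 1, 0, 1, 0]))  # 1
print(repetition_decode([0, 0, 1, 0, 0]))  # 0
```

Running this rule independently on every diagonal is what makes the infinite-bias decoding problem trivially parallelisable.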
We next consider a finite-bias error model where qubits experience errors with probability p. Pauli-Z errors occur at a high rate, p_{h.r.} = pη/(η + 1), and Pauli-X and Pauli-Y errors both occur at the same low rate, p_{l.r.} = p/[2(η + 1)]. At finite bias, string-like errors can now extend in all directions along the two-dimensional lattice. Again, we use minimum-weight perfect matching to find a correction by pairing nearby defects with the string operators that correspond to errors that are likely to have created the defect pair.
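These rates are straightforward to compute. The helper below is illustrative (the function names are ours); it also shows how one might sample an i.i.d. biased Pauli error string under this model.

```python
import numpy as np

def biased_pauli_rates(p, eta):
    """Per-Pauli error rates for total error probability p and bias eta.

    eta = p_Z / (p_X + p_Y), with Pauli-X and Pauli-Y equally likely;
    eta = 1/2 recovers depolarising noise, eta -> infinity pure dephasing.
    """
    p_hr = p * eta / (eta + 1)        # high-rate Pauli-Z errors
    p_lr = p / (2 * (eta + 1))        # low-rate Pauli-X / Pauli-Y errors
    return p_hr, p_lr

def sample_errors(n, p, eta, rng):
    """Sample an i.i.d. biased Pauli error string over n qubits."""
    p_hr, p_lr = biased_pauli_rates(p, eta)
    probs = [1 - p, p_lr, p_lr, p_hr]  # I, X, Y, Z
    return rng.choice(list("IXYZ"), size=n, p=probs)

p_hr, p_lr = biased_pauli_rates(0.1, 100)
print(round(p_hr, 5), round(p_lr, 6))
```

Note that p_{h.r.} + 2p_{l.r.} = p, so the total error probability is preserved for any bias.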
We decode by giving a complete graph to the minimum-weight perfect-matching algorithm where each pair of defects u and v is connected by an edge of weight \(\sim -{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\), where prob(E_{u,v}) is the probability that the most probable string E_{u,v} created defects u and v. It remains to evaluate \(-{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\).
For the uncorrelated noise models we consider, \({\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\) depends, anisotropically, on the separation of u and v. We define orthogonal axes \(x^{\prime}\) (\(y^{\prime}\)) that align along (run orthogonal to) the diagonal line that follows the faces of \({{\mathcal{D}}}_{j}\). We can then define the separation between u and v along the axes \(x^{\prime}\) and \(y^{\prime}\) using the Manhattan distance with integers \({l}_{x^{\prime} }\) and \({l}_{y^{\prime} }\), respectively. On large lattices, then, we choose \(-{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\propto {w}_{{\rm{h.r.}}}{l}_{x^{\prime} }+{w}_{{\rm{l.r.}}}{l}_{y^{\prime} }\), where \({w}_{{\rm{h.r.}}}=-{\mathrm{log}}\,\left(\frac{{p}_{{\rm{h.r.}}}}{1-{p}_{{\rm{h.r.}}}}\right)\) and \({w}_{{\rm{l.r.}}}=-{\mathrm{log}}\,\left(\frac{{p}_{{\rm{l.r.}}}}{1-{p}_{{\rm{l.r.}}}}\right)\) are the negative log-odds of a high-rate and a low-rate error, respectively.
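A sketch of this anisotropic weighting, assuming the negative log-odds form of the weights: each unit step costs the log-odds of the error that causes it, so likely (high-rate) directions are cheap and unlikely (low-rate) directions expensive.

```python
import math

def edge_weight(l_x, l_y, p_hr, p_lr):
    """Anisotropic matching weight for defects separated by Manhattan
    distances l_x (along the high-rate diagonal) and l_y (across it).

    Illustrative sketch assuming per-step negative log-odds weights.
    """
    w_hr = -math.log(p_hr / (1 - p_hr))
    w_lr = -math.log(p_lr / (1 - p_lr))
    return w_hr * l_x + w_lr * l_y

# With strong Z bias, three steps along the diagonal are still cheaper
# than a single step across it.
p_hr, p_lr = 0.099, 0.0005
print(edge_weight(3, 0, p_hr, p_lr) < edge_weight(0, 1, p_hr, p_lr))  # True
```

This asymmetry is what steers the matching towards the horizontal string corrections favoured by the biased noise.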
The edges returned by the minimum-weight perfect-matching algorithm^{43,44} indicate which pairs of defects should be paired. We note that, for small rectangular lattices with periodic boundary conditions, it may be that the most probable string E_{u,v} is caused by a large number of high-rate errors that create a string wrapping around the torus. It is important that our decoder checks for such strings to achieve a logical failure rate scaling like \(O({p}_{\,\text{h.r.}\,}^{{d}^{2}/2})\). We circumvent the computation of the weight between two defects in every simulation by creating a lookup table from which the required weights can be efficiently retrieved. Moreover, we minimise memory usage by taking advantage of the translational invariance of the lattice.
We finally remark that our minimum-weight perfect-matching decoder naturally extends to the fault-tolerant regime. We obtain this generalisation by assigning weights to edges connecting pairs of defects in the (2 + 1)-dimensional syndrome history such that \(-{\mathrm{log}}\,{\rm{prob}}({E}_{u,v})\propto {w}_{{\rm{h.r.}}}{l}_{x^{\prime} }+{w}_{{\rm{l.r.}}}{l}_{y^{\prime} }+{w}_{t}{l}_{t}\),
where now we have l_{t}, the separation of u and v along the time axis, \({w}_{t}=-{\mathrm{log}}\,\left(\frac{q}{1-q}\right)\) and q = p_{h.r.} + p_{l.r.}. In the limit that η = 1/2, our decoder is equivalent to the conventional minimum-weight perfect-matching decoder for phenomenological noise^{45}.
Ansatz at low error rates
In the main text we proposed a regime at low error rates where the most common cause of logical failure is a sequence of ~d/4 low-rate and ~d/4 high-rate errors along the support of a weight-d logical operator; see Fig. 9. Here we compare our ansatz, Eq. (3), with numerical data to check its validity and to estimate the free parameter γ.
We take the logarithm of Eq. (3) to obtain
Neglecting the small term \(n{\mathrm{log}}\,(1-p)\), we can express this equation as \({\mathrm{log}}\,\overline{P}\approx G(p,\eta )d\), where we have the gradient \(G(p,\eta )=\frac{1}{2}{\mathrm{log}}\,\left(\frac{p}{1-p}\right)+\frac{1}{4}{\mathrm{log}}\,\left(\frac{\eta +1/2}{{(\eta +1)}^{2}}\right)+{\mathrm{log}}\,\gamma \).
In Fig. 10a we plot the data shown in the main text in Fig. 6b as a function of d to read the gradient G(p, η) from the graph. We then plot G(p, η) as a function of \(\beta ={\mathrm{log}}\,[p/(1-p)]\) in the inset of Fig. 10a. The plot reveals a gradient of ~0.5, consistent with our ansatz, where we expect a gradient of 1/2. Furthermore, at p = 0 we define the restricted function \(I(\eta )=\frac{1}{4}{\mathrm{log}}\,\left(\frac{\eta +1/2}{{(\eta +1)}^{2}}\right)+{\mathrm{log}}\,\gamma \).
We estimate I(η) from the extrapolated p = 0 intercepts of our plots, such as that shown in the inset of Fig. 10a, and present these intercepts as a function of \({\mathrm{log}}\,[(\eta +1/2)/{(\eta +1)}^{2}]\); see Fig. 10b. We find a line of best fit with gradient 0.22 ± 0.03, which agrees with the expected value of 1/4. Moreover, from the intercept of this fit, we estimate γ = 1.8 ± 0.06, which is consistent with the range 3/2 ≤ γ ≤ 2 that we expect^{49}. Thus, our data are consistent with our ansatz that typical error configurations lead to logical failure with ~d/4 low-rate errors.
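The two-stage fitting procedure described here — read the gradient G(p, η) from a linear fit in d, then fit those gradients against β — can be sketched on synthetic data (hypothetical numbers, illustrative only):

```python
import numpy as np

# Synthetic check of the fitting procedure: generate log failure rates that
# follow log P = G(p) * d with G(p) = 0.5 * beta + const (made-up constant),
# then recover the 1/2 gradient by two successive linear fits.
ds = np.array([13, 17, 21, 25])
ps = np.array([0.001, 0.002, 0.004, 0.008])
betas = np.log(ps / (1 - ps))
slopes = []
for beta in betas:
    log_P = (0.5 * beta + 0.8) * ds      # log P grows linearly in d
    g, _ = np.polyfit(ds, log_P, 1)      # read off the gradient G(p, eta)
    slopes.append(g)
grad, intercept = np.polyfit(betas, slopes, 1)
print(round(grad, 3))  # 0.5, matching the d/2 exponent of the ansatz
```
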
Data availability
The data that support the findings of this study are available at https://bitbucket.org/qecsim/qsdxzzx/.
Code availability
Software for all simulations performed for this study is available at https://bitbucket.org/qecsim/qsdxzzx/ and released under the OSI-approved BSD 3-Clause licence. This software extends and uses services provided by qecsim^{78,79}, a quantum error-correction simulation package, which leverages several scientific software packages^{44,80,81,82}.
References
Shor, P. W. Fault-tolerant quantum computation. in Proc. 37th Annual Symposium on Foundations of Computer Science, FOCS ’96 (IEEE Computer Society, 1996).
Aharonov, D. & Ben-Or, M. Fault-tolerant quantum computation with constant error. in Proc. Twenty-Ninth Annual ACM Symposium on Theory of Computing (1997).
Knill, E., Laflamme, R. & Zurek, W. Threshold accuracy for quantum computation. arXiv http://arxiv.org/abs/quant-ph/9610011 (1996).
Kitaev, A. Y. Quantum computations: algorithms and error correction. Russian Math. Surveys 52, 1191–1249 (1997).
Fowler, A. G., Mariantoni, M., Martinis, J. M. & Cleland, A. N. Surface codes: towards practical large-scale quantum computation. Phys. Rev. A 86, 032324 (2012).
Stephens, A. M., Munro, W. J. & Nemoto, K. High-threshold topological quantum error correction against biased noise. Phys. Rev. A 88, 060301 (2013).
Tuckett, D. K., Bartlett, S. D. & Flammia, S. T. Ultrahigh error threshold for surface codes with biased noise. Phys. Rev. Lett. 120, 050505 (2018).
Tuckett, D. K. et al. Tailoring surface codes for highly biased noise. Phys. Rev. X 9, 041031 (2019).
Xu, X., Zhao, Q., Yuan, X. & Benjamin, S. C. High-threshold code for modular hardware with asymmetric noise. Phys. Rev. Appl. 12, 064006 (2019).
Tuckett, D. K., Bartlett, S. D., Flammia, S. T. & Brown, B. J. Fault-tolerant thresholds for the surface code in excess of 5% under biased noise. Phys. Rev. Lett. 124, 130501 (2020).
Gidney, C. & Ekerå, M. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits. arXiv http://arxiv.org/abs/1905.09749 (2019).
Kitaev, A. Y. Fault-tolerant quantum computation by anyons. Ann. Phys. 303, 2–30 (2003).
Bravyi, S. B. & Kitaev, A. Y. Quantum codes on a lattice with boundary. arXiv http://arxiv.org/abs/quant-ph/9811052 (1998).
Dennis, E., Kitaev, A., Landahl, A. & Preskill, J. Topological quantum memory. J. Math. Phys. 43, 4452–4505 (2002).
Wen, X.-G. Quantum orders in an exact soluble model. Phys. Rev. Lett. 90, 016803 (2003).
Li, M., Miller, D., Newman, M., Wu, Y. & Brown, K. R. 2D compass codes. Phys. Rev. X 9, 021041 (2019).
Bennett, C. H., DiVincenzo, D. P., Smolin, J. A. & Wootters, W. K. Mixed-state entanglement and quantum error correction. Phys. Rev. A 54, 3824–3851 (1996).
Wilde, M. M. Theorem 24.6.2 Quantum Information Theory, 2nd edn. (Cambridge University Press, 2017).
Shor, P. W. & Smolin, J. A. Quantum error-correcting codes need not completely reveal the error syndrome. arXiv http://arxiv.org/abs/quant-ph/9604006 (1996).
DiVincenzo, D. P., Shor, P. W. & Smolin, J. A. Quantum-channel capacity of very noisy channels. Phys. Rev. A 57, 830–839 (1998).
Smith, G. & Smolin, J. A. Degenerate quantum codes for Pauli channels. Phys. Rev. Lett. 98, 030501 (2007).
Fern, J. & Whaley, K. B. Lower bounds on the nonzero capacity of Pauli channels. Phys. Rev. A 78, 062335 (2008).
Bausch, J. & Leditzky, F. Error thresholds for arbitrary Pauli noise. arXiv http://arxiv.org/abs/1910.00471 (2019).
Lescanne, R. et al. Exponential suppression of bit-flips in a qubit encoded in an oscillator. Nat. Phys. 16, 509–513 (2020).
Grimm, A. et al. Stabilization and operation of a Kerr-cat qubit. Nature 584, 205–209 (2020).
Aliferis, P. & Preskill, J. Fault-tolerant quantum computation against biased noise. Phys. Rev. A 78, 052331 (2008).
Puri, S. et al. Bias-preserving gates with stabilized cat qubits. Sci. Adv. 6, eaay5901 (2020).
Guillaud, J. & Mirrahimi, M. Repetition cat qubits for fault-tolerant quantum computation. Phys. Rev. X 9, 041053 (2019).
Horsman, C., Fowler, A. G., Devitt, S. & Meter, R. V. Surface code quantum computing by lattice surgery. N. J. Phys. 14, 123011 (2012).
Brown, B. J., Laubscher, K., Kesselring, M. S. & Wootton, J. R. Poking holes and cutting corners to achieve Clifford gates with the surface code. Phys. Rev. X 7, 021029 (2017).
Litinski, D. A game of surface codes: large-scale quantum computing with lattice surgery. Quantum 3, 128 (2019).
Nussinov, Z. & Ortiz, G. A symmetry principle for topological quantum order. Ann. Phys. 324, 977 (2009).
Brown, B. J., Son, W., Kraus, C. V., Fazio, R. & Vedral, V. Generating topological order from a two-dimensional cluster state. N. J. Phys. 13, 065010 (2011).
Kay, A. Capabilities of a perturbed toric code as a quantum memory. Phys. Rev. Lett. 107, 270502 (2011).
Calderbank, A. R. & Shor, P. W. Good quantum error-correcting codes exist. Phys. Rev. A 54, 1098–1105 (1996).
Steane, A. Multiple-particle interference and quantum error correction. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 452, 2551–2577 (1996).
Brown, B. J. & Williamson, D. J. Parallelized quantum error correction with fracton topological codes. Phys. Rev. Res. 2, 013303 (2020).
Bravyi, S., Suchara, M. & Vargo, A. Efficient algorithms for maximum likelihood decoding in the surface code. Phys. Rev. A 90, 032326 (2014).
Poulin, D., Tillich, J.-P. & Ollivier, H. Quantum serial turbo codes. IEEE Trans. Inf. Theory 55, 2776–2798 (2009).
Gottesman, D. Fault-tolerant quantum computation with constant overhead. Quantum Info. Comput. 14, 1338–1372 (2014).
Hastings, M. B. Decoding in hyperbolic space: LDPC codes with linear rate and efficient error correction. Quant. Inf. Comput. 14, 1187 (2014).
Fawzi, O., Grospellier, A. & Leverrier, A. Constant overhead quantum fault-tolerance with quantum expander codes. Commun. ACM 64, 106–114 (2020).
Edmonds, J. Paths, trees and flowers. Can. J. Math. 17, 449 (1965).
Kolmogorov, V. Blossom V: A new implementation of a minimum cost perfect matching algorithm. Math. Prog. Comput. 1, 43–67 (2009).
Wang, C., Harrington, J. & Preskill, J. Confinement-Higgs transition in a disordered gauge theory and the accuracy threshold for quantum memory. Ann. Phys. 303, 31–58 (2003).
Vuillot, C. et al. Code deformation and lattice surgery are gauge fixing. N. J. Phys. 21, 033028 (2019).
Stephens, A. M. Fault-tolerant thresholds for quantum error correction with the surface code. Phys. Rev. A 89, 022321 (2014).
Fowler, A. G., Wang, D. S. & Hollenberg, L. C. L. Surface code quantum error correction incorporating accurate error propagation. Quant. Inf. Comput. 11, 0008 (2011).
Beverland, M. E., Brown, B. J., Kastoryano, M. J. & Marolleau, Q. The role of entropy in topological quantum error correction. J. Stat. Mech. Theory Exp. 2019, 073404 (2019).
Watson, F. H. E., Anwar, H. & Browne, D. E. A fast fault-tolerant decoder for qubit and qudit surface codes. Phys. Rev. A 92, 032309 (2015).
Bravyi, S. & Haah, J. Quantum self-correction in the 3D cubic code model. Phys. Rev. Lett. 111, 200501 (2013).
Aliferis, P. et al. Fault-tolerant computing with biased-noise superconducting qubits: a case study. N. J. Phys. 11, 013061 (2009).
Brooks, P. & Preskill, J. Fault-tolerant quantum computation with asymmetric Bacon-Shor codes. Phys. Rev. A 87, 032310 (2013).
Robertson, A., Granade, C., Bartlett, S. D. & Flammia, S. T. Tailored codes for small quantum memories. Phys. Rev. Appl. 8, 064004 (2017).
Raussendorf, R., Harrington, J. & Goyal, K. A fault-tolerant one-way quantum computer. Ann. Phys. 321, 2242–2270 (2006).
Raussendorf, R. & Harrington, J. Fault-tolerant quantum computation with high threshold in two dimensions. Phys. Rev. Lett. 98, 190504 (2007).
Bombin, H. & Martin-Delgado, M. A. Quantum measurements and gates by code deformation. J. Phys. A Math. Theor. 42, 095302 (2009).
Bravyi, S. & Kitaev, A. Universal quantum computation with ideal Clifford gates and noisy ancillas. Phys. Rev. A 71, 022316 (2005).
Webster, P., Bartlett, S. D. & Poulin, D. Reducing the overhead for quantum computation when noise is biased. Phys. Rev. A 92, 062309 (2015).
Fowler, A. G., Stephens, A. M. & Groszkowski, P. High-threshold universal quantum computation on the surface code. Phys. Rev. A 80, 052312 (2009).
Yoder, T. J. & Kim, I. H. The surface code with a twist. Quantum 1, 2 (2017).
Litinski, D. & von Oppen, F. Lattice surgery with a twist: Simplifying Clifford gates of surface codes. Quantum 2, 62 (2018).
Bombin, H. Topological order with a twist: Ising anyons from an Abelian model. Phys. Rev. Lett. 105, 030403 (2010).
Hastings, M. B. & Geller, A. Reduced space-time and time costs using dislocation codes and arbitrary ancillas. Quant. Inf. Comput. 15, 0962 (2015).
Chamberland, C., Zhu, G., Yoder, T. J., Hertzberg, J. B. & Cross, A. W. Topological and subsystem codes on low-degree graphs with flag qubits. Phys. Rev. X 10, 011022 (2020).
Nickerson, N. H. & Brown, B. J. Analysing correlated noise in the surface code using adaptive decoding algorithms. Quantum 3, 131 (2019).
Flammia, S. T. & Wallman, J. J. Efficient estimation of Pauli channels. ACM Trans. Quantum Comput. 1, 1–32 (2020).
Harper, R., Flammia, S. T. & Wallman, J. J. Efficient learning of quantum noise. Nat. Phys. 16, 1184–1188 (2020).
Tillich, J.-P. & Zémor, G. Quantum LDPC codes with positive rate and minimum distance proportional to the square root of the blocklength. IEEE Trans. Inf. Theory 60, 1193–1202 (2014).
Guth, L. & Lubotzky, A. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds. J. Math. Phys. 55, 082202 (2014).
Krishna, A. & Poulin, D. Fault-tolerant gates on hypergraph product codes. Phys. Rev. X 11, 011023 (2021).
Krishna, A. & Poulin, D. Topological wormholes: Nonlocal defects on the toric code. Phys. Rev. Res. 2, 023116 (2020).
Breuckmann, N. P. & Londe, V. Single-shot decoding of linear rate LDPC quantum codes with high performance. arXiv http://arxiv.org/abs/2001.03568 (2020).
Hastings, M. B., Haah, J. & O’Donnell, R. Fiber bundle codes: breaking the \({n}^{1/2}\ {\rm{polylog}}(n)\) barrier for quantum LDPC codes. arXiv http://arxiv.org/abs/2009.03921 (2020).
Raussendorf, R., Harrington, J. & Goyal, K. Topological fault-tolerance in cluster state quantum computation. N. J. Phys. 9, 199 (2007).
Bennett, C. H. Efficient estimation of free energy differences from Monte Carlo data. J. Comput. Phys. 22, 245–268 (1976).
Bravyi, S. & Vargo, A. Simulation of rare events in quantum error correction. Phys. Rev. A 88, 062308 (2013).
Tuckett, D. K. Tailoring surface codes: Improvements in quantum error correction with biased noise. Ph.D. thesis, University of Sydney (2020).
Tuckett, D. K. qecsim: Quantum error correction simulator. https://qecsim.github.io/ (2021).
Jones, E. et al. SciPy: Open source scientific tools for Python. https://www.scipy.org/ (2001).
Harris, C.R. et al. Array programming with NumPy. Nature 585, 357–362 (2020).
Johansson, F. et al. mpmath: a Python library for arbitrary-precision floating-point arithmetic (version 1.0). http://mpmath.org/ (2017).
Acknowledgements
We are grateful to A. Darmawan, A. Grimsmo and S. Puri for discussions, to E. Campbell and C. Jones for insightful questions and comments on an earlier draft, and especially to J. Wootton for recommending consideration of the XZZX code for biased noise. We also thank A. Kubica and B. Terhal for discussions on the hashing bound. This work is supported by the Australian Research Council via the Centre of Excellence in Engineered Quantum Systems (EQUS) project number CE170100009, and by the ARO under Grant Number W911NF-21-1-0007. B.J.B. also received support from the University of Sydney Fellowship Programme. Access to high-performance computing resources was provided by the National Computational Infrastructure (NCI Australia), an NCRIS enabled capability supported by the Australian Government, and the Sydney Informatics Hub, a Core Research Facility of the University of Sydney.
Author information
Contributions
J.P.B.A. produced the code and collected the data for simulations with the minimumweight perfectmatching decoder, and D.K.T. wrote the code and collected data for simulations using the maximumlikelihood decoder. All authors, J.P.B.A., D.K.T., S.D.B., S.T.F. and B.J.B., contributed to the design of the methodology and the data analysis. All authors contributed to the writing of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Nature Communications thanks Armanda Ottaviano Quintavalle, Joschka Roffe and the other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bonilla Ataides, J.P., Tuckett, D.K., Bartlett, S.D. et al. The XZZX surface code. Nat. Commun. 12, 2172 (2021). https://doi.org/10.1038/s41467-021-22274-1