Abstract
Measurement-based quantum computing (MBQC) in linear optical systems is promising for near-future quantum computing architectures. However, the nondeterministic nature of entangling operations and photon losses hinder the large-scale generation of graph states and introduce logical errors. In this work, we propose a linear optical topological MBQC protocol employing multiphoton qubits based on the parity encoding, which turns out to be highly photon-loss tolerant and resource-efficient even under the effects of nonideal entangling operations that unavoidably corrupt nearby qubits. For a realistic error analysis, we introduce a Bayesian methodology, in conjunction with the stabilizer formalism, to track errors caused by such detrimental effects. We additionally suggest a graph-theoretical optimization scheme for the process of constructing an arbitrary graph state, which greatly reduces its resource overhead. Notably, we show that our protocol is advantageous over several other existing approaches in terms of fault-tolerance and resource overhead.
Introduction
Photonic qubits are a promising candidate for quantum computing, with advantages such as long decoherence times even at room temperature. Among the various encoding schemes, dual-rail encodings allow one to detect photon losses by counting the total photon number and to manipulate and measure single qubits via linear optical elements and photodetectors^{1}. A representative way to achieve universal quantum computing in linear optical systems is measurement-based quantum computing (MBQC)^{2,3}, processed by single-qubit measurements on a multiqubit graph state. In particular, a family of graph states called Raussendorf-Harrington-Goyal (RHG) lattices^{4,5,6} permits universal fault-tolerant quantum computing^{7,8,9}.
The generation of RHG lattices, which is a significant challenge for realizing fault-tolerant optical MBQC, can be done by entangling multiple small resource states with fusions of types I and/or II^{10}. Neither type of fusion is ideal in linear optics, owing to theoretical limitations and environmental factors such as photon losses. For single-photon qubits, fusion success rates cannot exceed 50% without additional resources^{11}, which is far from sufficient to implement MBQC^{12}. There exist several types of approaches to overcome this shortcoming. Some examples include (i) different types of encoding strategies with coherent states^{13,14}, hybrid qubits^{15,16}, and multiphoton qubits^{17,18} that significantly improve error thresholds and resource overheads^{18}, (ii) adding ancillary photons to boost the success rate of a type-II fusion to 75%^{19,20}, which enables MBQC with the renormalization method^{21}, (iii) redundant structures added to resource states to replace a single fusion by multiple fusion attempts^{22,23,24}, and (iv) the use of squeezing for teleportation channels^{25} or in-line processes^{26,27}.
Previous studies frequently treated fusion failures as bond disconnections^{28,29,30} or qubit removals^{12,15,18,21}. However, to accurately evaluate the performance of computing protocols, the detrimental effects of nonideal fusions on nearby qubits should be analyzed more rigorously. In this work, we study how nonideal fusions corrupt stabilizers and how errors arising from such corruption can be tracked during the generation of graph states. Using a Bayesian approach together with the stabilizer formalism, we assign posterior error rates, inferred from the measurement data of the fusions, to individual qubits in the final lattice, thereby enabling much more realistic error simulations and adaptive decoding of syndromes.
We then propose a linear-optical fault-tolerant MBQC protocol termed “parity-encoding-based topological quantum computing (PTQC),” which employs the parity encoding^{31} and concatenated Bell-state measurement (CBSM)^{32}. The protocol requires on-off or single-photon resolving detectors, optical switches, delay lines, and three-photon Greenberger-Horne-Zeilinger (GHZ-3) states, which can be generated deterministically using current technology^{33}. (A single-photon resolving detector discriminates between zero, one, and more than one photon entering the detector.) We analyze the loss tolerance of the protocol while exhaustively tracking the detrimental effects of nonideal fusions. The resource overhead, in terms of the number of required GHZ-3 states, is also investigated. To minimize it, we introduce a graph-theoretical method for optimizing the process of constructing resource states, which is generalizable to other MBQC schemes. By comparing PTQC with three other known approaches, using simple repetition codes, redundant tree graphs, and single-photon qubits with boosted fusions assisted by ancillary photons, we show that our protocol is advantageous over these protocols in terms of the photon loss threshold per component and the resource overhead.
We denote the four Bell states by \(\left\vert {\phi }^{\pm }\right\rangle := \left\vert 0\right\rangle \left\vert 0\right\rangle \pm \left\vert 1\right\rangle \left\vert 1\right\rangle\) and \(\left\vert {\psi }^{\pm }\right\rangle := \left\vert 0\right\rangle \left\vert 1\right\rangle \pm \left\vert 1\right\rangle \left\vert 0\right\rangle\) (normalization coefficients are omitted) and call “±” the sign and “ϕ” or “ψ” the letter of a Bell state. An ideal Bell-state measurement (BSM) entails the measurements of X⊗X and Z⊗Z on two qubits, whose outcomes are addressed as its sign and letter outcomes, respectively. We use the polarization of photons as the degree of freedom to encode quantum information and denote the horizontally (vertically) polarized single-photon state by \(\left\vert {{{\rm{H}}}}\right\rangle\) (\(\left\vert {{{\rm{V}}}}\right\rangle\)).
For a given graph G of qubits, a graph state \(\left\vert G\right\rangle\) is defined as the state stabilized by \({S}_{v}:= {X}_{v}{\prod }_{{v}^{{\prime} }\in N(v)}{Z}_{{v}^{{\prime} }}\) (that is, \({S}_{v}\left\vert G\right\rangle =\left\vert G\right\rangle\)) for each vertex v, where X_{v} and Z_{v} are respectively the Pauli-X and Z operators on the qubit v and N(v) is the set of the vertices connected with v. \(\left\vert G\right\rangle\) can be generated by placing a qubit initialized as \(\left\vert +\right\rangle := \left\vert {{{\rm{H}}}}\right\rangle +\left\vert {{{\rm{V}}}}\right\rangle\) on each vertex of G and applying a controlled-Z gate on every pair of qubits connected by an edge in G. However, since the direct implementation of a controlled-Z gate for photonic qubits demands multiphoton interactions, linear optical MBQC typically takes the approach of constructing a graph state by merging multiple small resource graph states via fusion operations^{10,15,18,21,22,23,24,28,29,30,34}.
Among the two types of fusions^{10}, we only consider type II for two reasons: (i) a type-I fusion may convert photon losses into unknown Pauli errors^{24}, which is not desirable since Pauli errors are generally harder to overcome than photon losses; (ii) there are known fault-tolerant type-II fusion schemes for encoded logical qubits, such as the one in Ref. ^{32}. A type-II fusion is done by measuring X⊗Z and Z⊗X on two qubits. In practice, it is realized by applying the Hadamard gate on one of the qubits and then performing a BSM on them. For two qubits (v_{1}, v_{2}), if {v_{1}}∪N(v_{1}) and {v_{2}}∪N(v_{2}) are disjoint, the effect of a fusion on the qubits is to connect (disconnect) every possible pair of disconnected (connected) qubits, one from N(v_{1}) and the other from N(v_{2}), up to several Pauli-Z operators determined by the BSM outcome. These Pauli-Z operators are compensated by updating the Pauli frame^{35} classically. This effect can be checked by tracking stabilizers, as shown in the example of Fig. 1a. Here, the stabilizer \({X}_{1}{Z}_{0}{X}_{{0}^{{\prime} }}{Z}_{{1}^{{\prime} }}{Z}_{{2}^{{\prime} }}\) (colored in green) before the fusion is transformed into \({m}_{{{{\rm{sign}}}}}{X}_{1}{Z}_{{1}^{{\prime} }}{Z}_{{2}^{{\prime} }}\) after the fusion, where m_{sign}∈{±1} is the sign outcome of the BSM if the Hadamard gate is applied on qubit 0. The other two stabilizers \({Z}_{1}{X}_{0}{Z}_{{0}^{{\prime} }}{X}_{{1}^{{\prime} }}\) (colored in purple) and \({Z}_{1}{X}_{0}{Z}_{{0}^{{\prime} }}{X}_{{2}^{{\prime} }}\) that commute with the fusion are transformed in similar ways. Consequently, the marginal state on the unmeasured qubits is equal to the merged graph state up to several Pauli-Z operators, as presented in Fig. 1b.
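As a concrete illustration, the stabilizer transformation of Fig. 1a can be tracked mechanically. The following sketch is ours (the qubit labels follow the figure; overall phases other than the measurement outcome are ignored, which suffices here because no Y operators arise):

```python
def multiply(p1, p2):
    """Multiply two Pauli products given as {qubit: 'X' or 'Z'} dicts.
    Identical factors on the same qubit cancel (X*X = Z*Z = I); the
    X*Z case (which would give a Y) does not occur in this example."""
    out = dict(p1)
    for q, op in p2.items():
        if q not in out:
            out[q] = op
        elif out[q] == op:
            del out[q]
        else:
            raise ValueError("X*Z would produce a Y operator")
    return out

# Stabilizer of Fig. 1a (green), X_1 Z_0 X_0' Z_1' Z_2', before the fusion:
green = {1: 'X', 0: 'Z', "0'": 'X', "1'": 'Z', "2'": 'Z'}
# The fusion (Hadamard on qubit 0, then a BSM) measures Z_0 X_0' with
# outcome m_sign.  Multiplying the stabilizer by the measured operator
# removes its support on the fused qubits; m_sign becomes the sign.
m_sign_op = {0: 'Z', "0'": 'X'}

# The stabilizer becomes m_sign * X_1 Z_1' Z_2', as stated above.
assert multiply(green, m_sign_op) == {1: 'X', "1'": 'Z', "2'": 'Z'}
```

The same multiplication applied with the other measured operator, X_0 Z_0', reproduces the transformations of the purple stabilizers.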
We consider errors on the qubits in the “vacuum” region, which are measured in the X-basis and occupy most of the RHG lattice^{4}; thus, X errors do not affect the results. Henceforth, every error mentioned is a Z error.
Results
Bayesian error tracking for nonideal fusions
We now introduce the methodology to track the errors caused by nonideal fusions. Let us revisit the example in Fig. 1, supposing that the qubits are single-photon polarization qubits and there are no photon losses. Then a BSM can discriminate between only two Bell states (say, \(\left\vert {\psi }^{\pm }\right\rangle\)) among the four without additional resources^{36}; see Fig. 2 for the scheme. The intact final state \(\left\vert {C}_{{{{\rm{f}}}}}\right\rangle\) is obtained only when the BSM succeeds. When the BSM fails (which is heralded), m_{lett} is determined while m_{sign} is left completely ambiguous. In other words, the respective posterior probabilities that the input state is \(\left\vert {\phi }^{+}\right\rangle\) or \(\left\vert {\phi }^{-}\right\rangle\) given the obtained photodetector outcomes are equal, assuming that the four Bell states have the same prior probability. This assumption can be justified by the fact that the marginal state on qubits 0 and \({0}^{{\prime} }\) before the fusion is maximally mixed; see Supplementary Note 1 for the proof. Therefore, we fix the value of m_{lett} and randomly assign that of m_{sign}. Then, the operator \({m}_{{{{\rm{sign}}}}}{X}_{1}{Z}_{{1}^{{\prime} }}{Z}_{{2}^{{\prime} }}\), which is originally a stabilizer of \(\left\vert {C}_{{{{\rm{f}}}}}\right\rangle\), gives ±1 randomly when it is measured after the failed BSM. In contrast, the other two stabilizers \({m}_{{{{\rm{lett}}}}}{Z}_{1}{X}_{{1}^{{\prime} }}\) and \({m}_{{{{\rm{lett}}}}}{Z}_{1}{X}_{{2}^{{\prime} }}\) are left unperturbed. The key point is that this situation is equivalent, in terms of stabilizer statistics, to a 50% chance of an erroneous qubit 1 in \(\left\vert {C}_{{{{\rm{f}}}}}\right\rangle\). In other words, both situations give the same statistics if the stabilizers of \(\left\vert {C}_{{{{\rm{f}}}}}\right\rangle\) are measured; thus, every process in MBQC described with the stabilizer formalism works in the same way.
Generally, a nonideal BSM gives one of its possible outcomes, and the posterior probability of each Bell state given the outcome can be calculated with Bayes' theorem, assuming equal prior probabilities for all four Bell states. Accordingly, the Bell state with the highest posterior probability is selected as the result of the BSM, and the probability q_{sign} (q_{lett}) that the selected sign (letter) is wrong can be obtained as well. These error probabilities are “propagated” to nearby qubits in a way that preserves the stabilizer statistics. For example, if the fusion in Fig. 1 is nonideal in this manner, the situation is equivalent to qubit 1 having an error with probability q_{sign} and qubits \({1}^{{\prime} }\) and \({2}^{{\prime} }\) having correlated errors with probability q_{lett}. We term a qubit with a nonzero error rate “deficient.”
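The posterior computation can be sketched as follows; the likelihood table is a hypothetical one reproducing the failed single-photon BSM of Fig. 2, where the letter is fixed and the sign is completely ambiguous (ties are broken by dictionary order here, whereas the protocol assigns the ambiguous value randomly):

```python
def bsm_posteriors(likelihoods):
    """likelihoods: {(letter, sign): P(outcome | Bell state)} for the
    four Bell states.  With a uniform prior, Bayes' theorem reduces to
    normalizing the likelihoods.  Returns the most likely Bell state
    together with q_sign / q_lett, the probabilities that its sign /
    letter is wrong."""
    total = sum(likelihoods.values())
    post = {state: p / total for state, p in likelihoods.items()}
    best = max(post, key=post.get)
    letter, sign = best
    q_sign = sum(p for (l, s), p in post.items() if s != sign)
    q_lett = sum(p for (l, s), p in post.items() if l != letter)
    return best, q_sign, q_lett

# Failed single-photon BSM: the outcome is equally likely under phi+
# and phi-, and impossible under psi+/psi-.
lik = {('phi', '+'): 0.5, ('phi', '-'): 0.5,
       ('psi', '+'): 0.0, ('psi', '-'): 0.0}
best, q_sign, q_lett = bsm_posteriors(lik)
print(best, q_sign, q_lett)   # ('phi', '+') 0.5 0.0
```

The output reproduces the discussion above: the letter is certain (q_lett = 0) while the chosen sign is wrong with probability 1/2, which is then propagated to the nearby qubits.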
Additionally, if a qubit participating in a fusion is erroneous, this error is propagated to the qubits on the opposite side. For example, an erroneous qubit 0 in Fig. 1 induces an error in the \({X}_{0}{Z}_{{0}^{{\prime} }}\) measurement, which is equivalent to erroneous qubits \({1}^{{\prime} }\) and \({2}^{{\prime} }\).
The above error-tracking methodology can be utilized for accurate and effective error simulations. The method precisely locates the qubits affected by unsuccessful fusions, which is closer to reality than simple bond disconnection or qubit removal. Since unsuccessful fusions are now regarded as Pauli error sources, we no longer need lattice deformation or the construction of supercheck operators^{12,37}. Instead, the error probabilities of individual qubits are employed for decoding syndromes in an adaptive manner (with decoders such as the weighted minimum-weight perfect matching one), which may be particularly effective when the probabilities lie strictly between 0 and 1/2, since regarding such errors as mere qubit removals discards information.
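As a toy illustration of such adaptive decoding (our own, far smaller than an RHG lattice), consider three qubits with two parity checks, decoded by brute force using the standard weights w_i = ln((1 − p_i)/p_i); weighted minimum-weight perfect matching plays this role in the actual simulations:

```python
import math
from itertools import product

# Two parity checks on three qubits: checks (0,1) and (1,2).
H = [[1, 1, 0],
     [0, 1, 1]]

def decode(syndrome, probs):
    """Brute-force minimum-weight decoding with per-qubit weights
    w_i = ln((1 - p_i) / p_i): a deficient qubit (p_i close to 1/2)
    gets a weight close to 0 and is preferentially flipped."""
    weights = [math.log((1 - p) / p) for p in probs]
    best, best_cost = None, float('inf')
    for err in product([0, 1], repeat=3):
        syn = [sum(h * e for h, e in zip(row, err)) % 2 for row in H]
        cost = sum(w * e for w, e in zip(weights, err))
        if syn == syndrome and cost < best_cost:
            best, best_cost = err, cost
    return best

# Uniform error rates: the single-qubit explanation of the syndrome wins.
print(decode([0, 1], [0.01, 0.01, 0.01]))   # (0, 0, 1)
# Qubits 0 and 1 deficient (p = 0.45): flipping both is now cheaper.
print(decode([0, 1], [0.45, 0.45, 0.01]))   # (1, 1, 0)
```

The second call shows the point of adaptivity: the same syndrome is decoded differently once the fusion record marks some qubits as deficient.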
Building an RHG lattice
An RHG lattice can be built from two types of linear three-qubit graph states called central and side “microclusters”^{21,28}. The process is composed of two steps (see Fig. 3): In step 1, a central microcluster and two side microclusters are merged by two fusions to form a five-qubit graph state named a “star cluster,” composed of one central qubit and four side qubits. In step 2, the side qubits of the star clusters are fused to form an RHG lattice. Eventually, the lattice includes only the central qubits, which are measured in appropriate bases for MBQC. For step 2, we consider two options: (i) star clusters with successful step-1 fusions may be postselected, or (ii) all generated star clusters are used regardless of the fusion results. The locations of the Hadamard gates during fusions (called the “H-configuration”) may be chosen arbitrarily. Here, we define two specific H-configurations: “Hadamard-in-center (HIC)” and “Hadamard-in-side (HIS).” In the HIC (HIS) configuration, the Hadamard gates in step 1 are applied on qubits in the central (side) microclusters, as shown in Fig. 3. The Hadamard gates in step 2, in contrast, are arranged in the same pattern for both configurations.
Nonideal fusions during lattice building render some central qubits in the final lattice deficient, as shown in Fig. 3 when the HIC configuration is used. When the HIS configuration is used, the positions of q_{sign} and q_{lett} in the figure are swapped. Note that errors in the side qubits are propagated to the nearest central qubits after step 2. Correlation between the sign and letter errors of a fusion, if any, can be neglected if the primal and dual lattices are considered separately, since these errors respectively affect primal and dual^{4} qubits (or vice versa).
Noise model
For analyzing the following linear optical quantum computing protocols, we consider a noise model where each photon suffers an independent loss with probability η, arising from imperfections throughout the protocol: GHZ-3 states (the initial resource states), delay lines, beam splitters, optical switches, and photodetectors. We assume that noise that cannot be modeled as photon loss, such as dark counts, is negligible. Note that not only nonideal fusions but also photon losses in central qubits, which are detectable by on-off detectors, may incur deficiency. If the measurement outcome of a central qubit cannot be determined due to photon losses, we select the outcome randomly and assign an error rate of 50% to the qubit.
Parity-encoding-based topological quantum computing
We introduce the linear-optical parity-encoding-based topological quantum computing (PTQC) protocol, where fusion success rates are boosted by using multiphoton qubits for all qubits that participate in fusions, while single-photon polarization encoding is used for central qubits. The parity encoding^{31} is employed for the multiphoton qubits, which are fused by CBSM^{32}. On-off or single-photon resolving detectors are used as photodetectors, and GHZ-3 states, which can be generated linear-optically^{38}, are regarded as basic resource states. The (n, m) parity encoding defines a basis as

\(\left\vert {0}_{{{{\rm{L}}}}}\right\rangle := {\left(\left\vert {+}^{(m)}\right\rangle +\left\vert {-}^{(m)}\right\rangle \right)}^{\otimes n},\qquad \left\vert {1}_{{{{\rm{L}}}}}\right\rangle := {\left(\left\vert {+}^{(m)}\right\rangle -\left\vert {-}^{(m)}\right\rangle \right)}^{\otimes n},\qquad (1)\)

where

\(\left\vert {\pm }^{(m)}\right\rangle := {\left(\left\vert {{{\rm{H}}}}\right\rangle \pm \left\vert {{{\rm{V}}}}\right\rangle \right)}^{\otimes m}\qquad (2)\)

(normalization coefficients are omitted).
The Hilbert space has a hierarchical structure composed of three levels: the lattice, block, and physical levels, with respective bases \(\{\left\vert {0}_{{{{\rm{L}}}}}\right\rangle ,\left\vert {1}_{{{{\rm{L}}}}}\right\rangle \}\), \(\{\left\vert {\pm }^{(m)}\right\rangle \}\), and \(\{\left\vert {{{\rm{H}}}}\right\rangle ,\left\vert {{{\rm{V}}}}\right\rangle \}\). In the original CBSM scheme^{32}, a BSM of a certain level is decomposed into multiple BSMs of one level below. Our CBSM scheme slightly differs from the original one in two respects: (i) We consider two types of photodetectors: single-photon resolving and on-off detectors. A physical-level BSM can discriminate between a photon loss and a failure only if single-photon resolving detectors are used. (ii) The letter outcome of a lattice-level BSM is obtained by a weighted majority vote of the block-level letter outcomes. See the Methods section for details of the CBSM scheme and its error rates.
For practical reasons, we consider generating “post-H” microclusters (that is, the states obtained by applying several lattice-level Hadamard gates on microclusters) directly from GHZ-3 states, instead of generating microclusters first and then applying the lattice-level Hadamard gates for the fusions. Figure 4a depicts the central and side post-H microclusters for the HIC and HIS configurations. A post-H microcluster can be generated, up to several physical-level Hadamard gates, by performing physical-level BSMs or fusions (referred to as “merging operations”) between multiple GHZ-3 states according to a predetermined “merging graph,” as shown in the example of Fig. 4b. Note that the merging graph may not be unique for a given post-H microcluster. However, each merging operation has a success rate of at most 50%, which may lead to extensive consumption of GHZ-3 states before a post-H microcluster is generated successfully. Thus, the generation process, which is determined by the merging graph and the order of the merging operations, should be adjusted carefully to minimize the resource overhead. To optimize the merging order, our protocol utilizes a graph edge-coloring algorithm, based on the idea that merging operations for nonadjacent edges can be performed simultaneously. See the Methods section for details of the structures of post-H microclusters, their generation, and the resource optimization problem.
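A minimal greedy sketch of this edge-coloring idea is shown below; the merging graph is hypothetical, and the actual protocol may employ a more elaborate coloring algorithm:

```python
def greedy_edge_coloring(edges):
    """Assign each edge the smallest color not used by an adjacent edge
    (one sharing a vertex).  Edges with the same color share no vertex,
    so the corresponding merging operations can be attempted in the
    same round."""
    coloring = {}
    for u, v in edges:
        used = {c for e, c in coloring.items() if u in e or v in e}
        color = 0
        while color in used:
            color += 1
        coloring[(u, v)] = color
    return coloring

# Hypothetical merging graph: a path of four GHZ-3 states plus a branch.
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
colors = greedy_edge_coloring(edges)
rounds = max(colors.values()) + 1
print(colors, rounds)   # edges (0,1) and (2,3) share color 0; 3 rounds
```

Each color class is one round of simultaneous merging attempts, so fewer colors mean fewer sequential rounds of probabilistic merges.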
For error simulations, we consider the logical identity gate of length T = 4d + 1 unit cells along the simulated time axis, where d is the code distance. All the fusion outcomes are sampled from appropriate probability distributions, and the corresponding error rates are assigned to individual central qubits according to the process described earlier. These error rates are exploited when decoding syndromes by the weighted minimum-weight perfect matching in the PyMatching package^{39}. The loss thresholds are calculated by finding the intersections of the logical error rates for d = 9 and d = 11. See Supplementary Note 2 for the detailed method of the error simulations.
The resource overhead of PTQC is quantified by two quantities: (i) the average number \({N}_{{{{\rm{GHZ}}}}}^{* }\) of GHZ-3 states required per central qubit and (ii) the average total number \({{{{\mathcal{N}}}}}_{{p}_{{{{\rm{L}}}}}}\) of GHZ-3 states needed to achieve a target logical error rate of p_{L} for the logical identity gate of T = d − 1. Both quantities depend on the photon loss rate η. \({N}_{{{{\rm{GHZ}}}}}^{* }\) can be used to evaluate the resource overhead independently of the error-correcting capability, while \({{{{\mathcal{N}}}}}_{{p}_{{{{\rm{L}}}}}}\) reflects the comprehensive resource overhead of an actual computation. See Supplementary Note 3 for the detailed method of the resource calculations.
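To see how the generation process enters \({N}_{{{{\rm{GHZ}}}}}^{* }\), a simple expected-cost recursion over a merging tree can be used; this is our illustrative simplification (independent retries that discard both inputs on failure), not the exact accounting of Supplementary Note 3:

```python
def expected_ghz(node):
    """node is ('ghz',) for a fresh GHZ-3 state, or
    ('merge', p_success, left, right) for a merging operation.
    Returns the expected number of GHZ-3 states consumed, assuming a
    failed merge discards both inputs and the whole branch is retried."""
    if node[0] == 'ghz':
        return 1.0
    _, p, left, right = node
    return (expected_ghz(left) + expected_ghz(right)) / p

leaf = ('ghz',)
pair = ('merge', 0.5, leaf, leaf)        # (1 + 1) / 0.5 = 4
chain3 = ('merge', 0.5, pair, leaf)      # (4 + 1) / 0.5 = 10
balanced4 = ('merge', 0.5, pair, pair)   # (4 + 4) / 0.5 = 16
chain4 = ('merge', 0.5, chain3, leaf)    # (10 + 1) / 0.5 = 22
print(expected_ghz(chain4), expected_ghz(balanced4))   # 22.0 16.0
```

Even in this crude model, a balanced merging order is cheaper than a linear chain for the same final state, which is the kind of saving the merging-order optimization targets.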
The simulation results for the loss thresholds and the resource overheads (quantified by \({{{{\mathcal{N}}}}}_{1{0}^{-6}}\)) are presented in Figs. 5a and 5b, respectively, for the two types of photodetectors, the two options for the postselection of star clusters, and the two H-configurations. Figure 5a shows that, if single-photon resolving detectors are used, η_{th} reaches up to 8.5% (n = 5, m = 4, j = 2) when star clusters are postselected and up to 6.3% (n = m = 5, j = 3, HIC) when they are not. If on-off detectors are used, η_{th} reaches up to 4.4% (n = 5, m = 4, j = 1) when star clusters are postselected and up to 3.3% (n = 5, m = 4, j = 1, HIS) when they are not. The postselection of star clusters thus increases the photon loss thresholds by about 1–2 percentage points. From Fig. 5b, it is observed that the protocol using single-photon resolving detectors is most resource-efficient, with \({{{{\mathcal{N}}}}}_{1{0}^{-6}}\approx 5\times 1{0}^{5}\) (n = 4, m = 3, j = 1, HIC) when star clusters are postselected and with \({{{{\mathcal{N}}}}}_{1{0}^{-6}}\approx 1\times 1{0}^{6}\) (n = m = 4, j = 2, HIS) when they are not. If on-off detectors are used, the protocol is most resource-efficient with \({{{{\mathcal{N}}}}}_{1{0}^{-6}}\approx 2\times 1{0}^{7}\) (n = m = 4, j = 2, HIC) when star clusters are postselected and with \({{{{\mathcal{N}}}}}_{1{0}^{-6}}\approx 3\times 1{0}^{7}\) (n = m = 5, j = 2, HIC) when they are not. It is worth noting that, compared to the protocol without postselection, the protocol with it requires fewer GHZ-3 states to achieve a target logical error rate. In other words, the additional fault-tolerance obtained by using only successfully generated star clusters outweighs the increase in the number of GHZ-3 states required per central qubit in the final lattice.
Additionally, Fig. 6 presents the photon loss threshold as a function of \({N}_{{{{\rm{GHZ}}}}}^{* }\) for various parameter settings. Here, \({N}_{{{{\rm{GHZ}}}}}^{* }\) is calculated while either fixing η to 0.01 or varying it as η = η_{th}/2. The figure shows that at least about 400 GHZ-3 states are required per central qubit for PTQC to work, and more than 1000 GHZ-3 states are required to achieve η_{th} ≳ 0.04. The explicit data for the points along the upper envelope lines in the left plot of Fig. 6a are listed in Supplementary Table 1.
Comparison with other approaches
We now compare the PTQC protocol with three other known approaches for linear optical quantum computing, which respectively use (i) simple repetition codes, (ii) tree encoding, and (iii) single-photon qubits with fusions assisted by ancillary photons. Their detailed descriptions are as follows:

1. The approach of (i), which uses simple repetition codes and multiphoton BSMs and is called the multiphoton-qubit-based topological quantum computing (MTQC) protocol, was proposed in our previous work^{18}. There, the photon loss thresholds and resource overheads are analyzed in detail, but a rigorous analysis of the effects of nonideal fusions, like the one performed here for PTQC, is lacking.

2. The approach of (ii) utilizes redundant tree structures on graph states to replace a single fusion with multiple fusion attempts. Among the related works^{22,23,24}, Ref. ^{24} presents the currently most advanced version of the protocol, where an RHG lattice is constructed by entangling multiple GHZ-3 states, as in PTQC.

3. The approach of (iii) uses single-photon qubits with boosted fusions assisted by ancillary entangled^{19} or unentangled photons^{20}. This approach has been widely studied in the context of ballistic quantum computing^{21,28,29,30}. In these works, non-RHG lattices are considered, except for Ref. ^{21}; however, RHG lattices should be used to enable solid error correction, as also mentioned in Refs. ^{28,30}. Moreover, in these works, the detrimental effects of failed fusions corrupting nearby qubits are not treated comprehensively; instead, they (except Ref. ^{21}) regard a fusion failure as the removal of the corresponding edge and mainly focus on finding percolation thresholds. For the comparison, we consider the BSM scheme of Ref. ^{19}, where a 2^{N−1}-photon GHZ state (for an integer N ≥ 2) is used to suppress the failure probability to 1/2^{N} in the lossless case.
For the comparison, we suppose that each initial GHZ-3 state is generated from single-photon sources by using the linear optical scheme in Ref. ^{38}. Each trial of the scheme requires six photons and has a heralded 1/32 chance of success (in the lossless case). We also assume the same photon loss rate η_{comp} for every component. Here, a component means a single-photon source, beam splitter, switch, delay line (for the time period required to process classical data for a switch), or photodetector. These two assumptions are made to impose the same conditions as the analysis of the tree-encoding protocol in Ref. ^{24}.
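As a quick sanity check of these numbers (lossless case, so the number of trials until a heralded success is geometric with mean 1/p):

```python
p_success = 1 / 32        # heralded success probability per trial (lossless)
photons_per_trial = 6     # single photons consumed by one trial
expected_trials = 1 / p_success
expected_photons = photons_per_trial * expected_trials
print(expected_trials, expected_photons)   # 32.0 192.0
```

So, on average, 32 trials and 192 single photons are consumed per heralded GHZ-3 state before losses are taken into account.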
In Fig. 7, PTQC is compared with the above three approaches in terms of the photon loss threshold per component as a function of \({N}_{{{{\rm{GHZ}}}}}^{* }\), where \({N}_{{{{\rm{GHZ}}}}}^{* }\) is calculated at η = 0 to be consistent with Ref. ^{24}. The loss threshold per component is chosen as the measure instead of the total loss threshold for a fair comparison between schemes that have different numbers of components. See the Methods section for the calculation methods. We note that the values for the tree-encoding approach are those reported in Ref. ^{24}, with only the resource measure converted.
Figure 7 shows strong evidence that our PTQC protocol is highly loss-tolerant and resource-efficient compared to the other approaches. The MTQC protocol and the single-photon-qubit method cannot achieve sufficiently high loss thresholds. The tree-encoding method consumes 10–100 times more resources than PTQC, although it has the advantage that higher thresholds can be achieved if enough resources are provided.
We note several limitations of the above comparison:

(i) The assumption that the photon loss rate is the same for different types of components is unrealistic. The results may vary if realistic weights are considered.

(ii) Photons may have different overall loss rates depending on their paths, but only the largest one (corresponding to the longest path) is considered for simplicity of calculation. Hence, the loss thresholds in Fig. 7 are actually lower bounds.

(iii) The lengths of the delay lines are obtained under the assumption of complete parallelization; namely, it is assumed that actions that are not causally related can be done in parallel. Whether this is indeed possible depends on the actual experimental system and architecture.

(iv) We only consider noise that can be modeled as photon loss. It is uncertain how much the results would change if other types of noise (especially unheralded Pauli errors) were introduced.

(v) The single-photon-qubit approach might be improved by using the lattice renormalization method of Ref. ^{21}, which is not considered here. However, this method has the shortcoming that the renormalized lattice may be significantly smaller than the original lattice; namely, about 20^{3} photons are consumed to generate one node^{21}.
Discussion
In this work, we address the problem of overcoming the negative effects of nonideal fusions and photon losses during linear-optical measurement-based quantum computing (MBQC). We first introduced a Bayesian methodology for tracking errors caused by nonideal fusions during the construction of graph states, which enables accurate and effective error simulations. We then proposed the parity-encoding-based topological quantum computing (PTQC) protocol, which uses the parity encoding and concatenated Bell-state measurements and turns out to have a high loss threshold of up to ~8.5%. Moreover, logical error rates near 10^{−6} can be achieved using about 10^{6} or fewer three-photon Greenberger-Horne-Zeilinger (GHZ-3) states in total when the photon loss rate is 1%, which outperforms other known linear optical computing protocols^{18}. It is worth noting that GHZ-3 states can be deterministically generated using current technology^{33}. We presented comprehensive and systematic methods to construct a graph state from GHZ-3 states, including the graph-theoretical algorithm that can efficiently minimize the resource overhead.
Additionally, we investigated three other known approaches that respectively use simple repetition codes, tree encoding, and single-photon qubits with fusions assisted by ancillary photons. We presented evidence that PTQC is highly competitive, as it exhibits high loss thresholds per component relative to the amount of resources it consumes. For instance, a loss threshold of ~0.35% per component can be achieved by using ~10^{4} GHZ-3 states per data qubit, which is at least ~100 times more resource-efficient than previous methods.
One may apply the Bayesian error tracking method to other encoding schemes or decoding algorithms (such as the union-find decoder^{40}) to improve fault-tolerance or resource overheads. More careful consideration of component-wise errors, including both heralded photon losses and unheralded errors (such as dark counts on photodetectors), would give rise to more realistic analyses. If type-I fusions are used for generating microclusters, the resource overhead may be reduced at the cost of additional Pauli errors, which is worth investigating. Our protocol might be enhanced by using other codes, such as the Steane 7-qubit code^{41}, instead of the parity code. The resource analysis would be more comprehensive if other factors, such as the number of optical switches or delay lines, were considered. Our graph-theoretical optimization scheme for generating graph states can be applied to arbitrary graph states as well as to the microclusters for PTQC. It will be interesting future work to investigate the resource reduction effect of this scheme for various MBQC protocols or for other applications of graph states such as quantum repeaters. Lastly, our methods may be generalized to fusion-based quantum computing^{42}, which has recently been attracting attention, or to other MBQC protocols such as the color-code-based one^{43}.
Methods
In this section, we describe the details of the PTQC protocol, including the CBSM scheme, the closed-form expressions of the error probabilities, the method to generate post-H microclusters, and the resource optimization problem.
Bell states for the parity encoding
For the lattice, block, and physical levels of the (n, m) parity encoding, the Bell states are respectively defined as

\(\left\vert {\phi }_{{{{\rm{lat}}}}}^{\pm }\right\rangle := \left\vert {0}_{{{{\rm{L}}}}}\right\rangle \left\vert {0}_{{{{\rm{L}}}}}\right\rangle \pm \left\vert {1}_{{{{\rm{L}}}}}\right\rangle \left\vert {1}_{{{{\rm{L}}}}}\right\rangle ,\qquad \left\vert {\psi }_{{{{\rm{lat}}}}}^{\pm }\right\rangle := \left\vert {0}_{{{{\rm{L}}}}}\right\rangle \left\vert {1}_{{{{\rm{L}}}}}\right\rangle \pm \left\vert {1}_{{{{\rm{L}}}}}\right\rangle \left\vert {0}_{{{{\rm{L}}}}}\right\rangle ,\)

and analogously for the block and physical levels with the bases \(\{\left\vert {+}^{(m)}\right\rangle +\left\vert {-}^{(m)}\right\rangle ,\left\vert {+}^{(m)}\right\rangle -\left\vert {-}^{(m)}\right\rangle \}\) and \(\{\left\vert {{{\rm{H}}}}\right\rangle ,\left\vert {{{\rm{V}}}}\right\rangle \}\), respectively, where \(\left\vert {0}_{{{{\rm{L}}}}}\right\rangle\), \(\left\vert {1}_{{{{\rm{L}}}}}\right\rangle\), and \(\left\vert {\pm }^{(m)}\right\rangle\) are defined in Eqs. (1) and (2). The Bell states of each level can be decomposed into those of one level below as follows:

\(\left\vert {\phi }_{{{{\rm{lat}}}}}^{\pm }\right\rangle \propto \sum\limits_{k\,{{{\rm{even}}}}\,({{{\rm{odd}}}})}{{{\mathcal{P}}}}\left[{\left\vert {\phi }_{{{{\rm{blc}}}}}^{+}\right\rangle }^{\otimes (n-k)}\otimes {\left\vert {\phi }_{{{{\rm{blc}}}}}^{-}\right\rangle }^{\otimes k}\right],\qquad \left\vert {\psi }_{{{{\rm{lat}}}}}^{\pm }\right\rangle \propto \sum\limits_{k\,{{{\rm{even}}}}\,({{{\rm{odd}}}})}{{{\mathcal{P}}}}\left[{\left\vert {\psi }_{{{{\rm{blc}}}}}^{+}\right\rangle }^{\otimes (n-k)}\otimes {\left\vert {\psi }_{{{{\rm{blc}}}}}^{-}\right\rangle }^{\otimes k}\right],\)

\(\left\vert {\phi }_{{{{\rm{blc}}}}}^{s}\right\rangle \propto \sum\limits_{k\,{{{\rm{even}}}}}{{{\mathcal{P}}}}\left[{\left\vert {\phi }^{s}\right\rangle }^{\otimes (m-k)}\otimes {\left\vert {\psi }^{s}\right\rangle }^{\otimes k}\right],\qquad \left\vert {\psi }_{{{{\rm{blc}}}}}^{s}\right\rangle \propto \sum\limits_{k\,{{{\rm{odd}}}}}{{{\mathcal{P}}}}\left[{\left\vert {\phi }^{s}\right\rangle }^{\otimes (m-k)}\otimes {\left\vert {\psi }^{s}\right\rangle }^{\otimes k}\right]\)

for each sign s ∈ {+, −}, where \({{{\mathcal{P}}}}[\cdot ]\) means the summation of all the permutations of the tensor products inside the bracket. Therefore, a BSM can be performed in a concatenated manner: a lattice-level BSM (BSM_{lat}) is done by n block-level BSMs (BSM_{blc}’s), each of which is again done by m physical-level BSMs (BSM_{phy}’s). We refer to the sign (letter) result obtained from a lattice-, block-, or physical-level BSM as the lattice-, block-, or physical-level sign (letter), respectively.
Original CBSM scheme
We review the original CBSM scheme of the parity encoding in Ref. ^{32}. A BSM_{phy} can discriminate between only two of the four Bell states. Three types of BSM_{phy}’s (B_{ψ}, B_{+}, and B_{−}) are considered, which discriminate between \(\{\left\vert {\psi }^{+}\right\rangle ,\left\vert {\psi }^{-}\right\rangle \}\), \(\{\left\vert {\phi }^{+}\right\rangle ,\left\vert {\psi }^{+}\right\rangle \}\), and \(\{\left\vert {\phi }^{-}\right\rangle ,\left\vert {\psi }^{-}\right\rangle \}\), respectively. B_{ψ} can be implemented by the process in Fig. 2, which can be modified to implement B_{+} instead by adding a 45^{∘} wave plate on each input line just before the first PBS. If the 90^{∘} wave plate on the second input line is removed from the setting for B_{+}, B_{−} is implemented instead. A BSM_{phy} has four possible outcomes: two successful cases (e.g., \(\left\vert {\psi }^{+}\right\rangle\) and \(\left\vert {\psi }^{-}\right\rangle\) for B_{ψ}), “failure,” and “detecting a photon loss.” Failure and loss can be distinguished by the total number of photons detected by the photodetectors. Since two photons may enter a single detector, it is assumed that single-photon resolving detectors are used. Note that, even in the failure cases, either the sign or the letter can still be determined. (For example, even if a B_{ψ} fails, we can still learn that the letter is ϕ.) On the other hand, if a photon loss is detected, neither the sign nor the letter can be obtained.
A BSM_{blc} is done by m BSM_{phy}’s. Each block is composed of m photons; thus we consider m pairs of photons selected respectively from the two blocks. The types of the BSM_{phy}’s are selected as follows: First, B_{ψ} is performed on each pair of photons in order until it either succeeds, detects a loss, or fails j consecutive times, where j ≤ m − 1 is a predetermined number. A sign s = ± is then set to the sign of the last B_{ψ} outcome if it succeeds, or chosen randomly if it fails or detects a loss. After that, B_{s}’s are performed on all the remaining pairs of photons.
The block-level sign (letter) is determined from the physical-level signs (letters) of the m BSM_{phy}’s. In detail, the block-level sign is chosen (i) to be the same as s if the last B_{ψ} succeeds or any B_{s} succeeds, and (ii) to be the opposite of s if the last B_{ψ} does not succeed and any B_{s} fails. (iii) Otherwise (namely, if the last B_{ψ} does not succeed and all the B_{s}’s detect losses), the block-level sign is not determined. The block-level letter is determined only when all the physical-level letters are determined, namely, when no losses are detected and all B_{s}’s succeed. In that case, the block-level letter is ϕ (ψ) if the number of ψ’s among the BSM_{phy} results is even (odd).
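As an illustration, the block-level decision rules above can be sketched as follows; the function name, outcome labels, and argument conventions are our own, so treat this as a minimal sketch rather than the authors' implementation.

```python
def block_sign_letter(last_bpsi_success, s, bs_outcomes, n_psi, any_loss):
    """Block-level sign and letter from one BSM_blc (sketch).

    s                : sign '+' or '-' chosen after the B_psi stage
    last_bpsi_success: whether the last B_psi succeeded
    bs_outcomes      : 'success' / 'fail' / 'loss' for each B_s performed
    n_psi            : number of psi letters among all BSM_phy results
    any_loss         : whether any BSM_phy detected a loss
    """
    flip = {'+': '-', '-': '+'}
    if last_bpsi_success or 'success' in bs_outcomes:
        sign = s                 # rule (i): same as s
    elif 'fail' in bs_outcomes:
        sign = flip[s]           # rule (ii): opposite of s
    else:
        sign = None              # rule (iii): all B_s's detected losses
    # the letter needs every physical-level letter: no losses, all B_s succeed
    if not any_loss and all(o == 'success' for o in bs_outcomes):
        letter = 'phi' if n_psi % 2 == 0 else 'psi'
    else:
        letter = None
    return sign, letter
```

For example, a block whose last B_{ψ} succeeded and whose B_{s}'s all succeeded yields the sign s and a letter fixed by the parity of ψ outcomes.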
Next, a BSM_{lat} is done by n BSM_{blc}’s. The lattice-level sign is determined only when all the block-level signs are determined; it is (+) if the number of (−)’s among the BSM_{blc} results is even and (−) if it is odd. The lattice-level letter is equal to any determined block-level letter. Thus, if no BSM_{blc} determines a letter, the lattice-level letter is not determined either.
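The lattice-level rules of the original scheme can be sketched as follows (function names and the 'phi'/'psi'/None labels are our own conventions):

```python
def lattice_sign(block_signs):
    # the lattice-level sign is determined only when every block-level
    # sign is; it is '+' iff the number of '-' results is even
    if any(s is None for s in block_signs):
        return None
    return '+' if block_signs.count('-') % 2 == 0 else '-'

def lattice_letter(block_letters):
    # the lattice-level letter equals any determined block-level letter;
    # None if no block determines a letter
    return next((l for l in block_letters if l is not None), None)
```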
Modified CBSM scheme for PTQC
In our PTQC protocol, we consider using either single-photon-resolving or on-off detectors. For on-off detectors, the CBSM scheme must be slightly modified.
Since failure and loss cannot be distinguished, a BSM_{phy} now has three possible outcomes: two successful cases and failure. Consequently, in a BSM_{blc}, B_{ψ}’s are performed until one either succeeds or fails j consecutive times. The block-level sign and letter are determined in the same way as in the original scheme, except that case (iii) for the sign no longer occurs. The biggest difference from the original scheme is that the determined sign and letter may be wrong; these error probabilities are presented in the next subsection.
In a BSM_{lat}, the lattice-level sign is determined from the block-level signs by the same method as in the original scheme, although it too may be wrong with nonzero probability. On the other hand, the lattice-level letter is no longer determined by a single block-level letter; instead, we use a weighted majority vote of the block-level letters. The weight of each block-level letter is given as \(w:= \log [(1-{q}_{{{{\rm{lett}}}}}^{{{{\rm{blc}}}}})/{q}_{{{{\rm{lett}}}}}^{{{{\rm{blc}}}}}]\), where \({q}_{{{{\rm{lett}}}}}^{{{{\rm{blc}}}}}\) is the probability that the block-level letter is wrong. This weight factor is justified as follows: Let I_{ϕ} (I_{ψ}) denote the set of indices of block pairs whose block-level letters are ϕ (ψ). Assuming that the two lattice-level letters (Φ and Ψ) have the same prior probability, we get
where \({q}_{{{{\rm{lett}}}}}^{(i)}\) and w^{(i)} are respectively the letter error probability and the weight of the ith block pair. Note that the third equality comes from the fact that a lattice-level Bell state is decomposed into block-level Bell states of the same letter, as shown in Eqs. (6) and (7).
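The weighted majority vote can be sketched as follows; this is a minimal illustration with our own function name and letter labels, and ties are resolved toward ψ here rather than randomly.

```python
import math

def lattice_letter_vote(block_letters, q_lett_blc):
    """Weighted majority vote over determined block-level letters (sketch).

    block_letters : 'phi' or 'psi' for each block pair
    q_lett_blc    : letter error probability of each block pair
    """
    score = 0.0
    for letter, q in zip(block_letters, q_lett_blc):
        w = math.log((1.0 - q) / q)          # w := log[(1 - q)/q]
        score += w if letter == 'phi' else -w
    # for equal priors, score equals the log posterior ratio
    # log[P(Phi | outcomes) / P(Psi | outcomes)]
    return 'phi' if score > 0 else 'psi'
```

A single very reliable block (small q) thus outweighs several unreliable ones voting the other way.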
Error probabilities of a CBSM under a lossy environment
We here present the possible outcomes of a CBSM using either single-photon-resolving or on-off detectors and the corresponding error probabilities (q_{sign}, q_{lett}). We denote x ≔ (1−η)^{2}, the probability that a BSM_{phy} does not detect a photon loss. The four Bell states are assumed to have the same prior probabilities; namely, the initial marginal state on qubits 1 and 2 before suffering losses is the equal mixture of the four lattice-level Bell states, which is justified in Supplementary Note 1. For a BSM_{blc} or BSM_{lat}, to avoid confusion, we use the term “outcome” for the tuple of the outcomes of the constituent BSM_{phy}’s, and the term “result” for the one of the four Bell states with the largest posterior probability given that outcome. Note that the result of a BSM may not be deterministically determined by its outcome; if multiple Bell states have the same posterior probability, one of them is selected randomly as the result.
The case of single-photon-resolving detectors is analyzed in Ref. ^{32}; we review it here to be self-contained. The outcome of a BSM_{blc} falls into one of the following three cases: (Success) Both the sign and the letter are identified if no losses are detected and all the B_{±}’s succeed. (Failure) Neither the sign nor the letter is identified if no B_{ψ} succeeds and all B_{±}’s detect losses. (Sign discrimination) Only the sign is identified otherwise. The block-level sign (letter) is selected randomly if it is not identified. The probabilities of these cases are respectively
For a BSM_{lat}, let N_{s} (N_{f}) denote the number of successful (failed) BSM_{blc}’s. The lattice-level letter is identified if N_{s} ≥ 1 (namely, if at least one block-level letter is identified), and the sign is identified if N_{f} = 0 (namely, if all block-level signs are identified). Hence, the outcome of a BSM_{lat} falls into one of the following four events:
The sign and letter error probabilities (q_{sign}, q_{lett}) of the BSM_{lat} for each event are (0, 0) for S, (1/2, 0) for D_{L}, (0, 1/2) for D_{S}, and (1/2, 1/2) for F. The probabilities of the events are respectively given as
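The classification of BSM_{lat} outcomes into these four events, together with their error probabilities, can be sketched as follows (the event labels follow the text; the function name is ours):

```python
# (q_sign, q_lett) for each BSM_lat event: S (full success), D_L (letter
# only), D_S (sign only), and F (complete failure)
ERROR_PROBS = {'S': (0.0, 0.0), 'D_L': (0.5, 0.0),
               'D_S': (0.0, 0.5), 'F': (0.5, 0.5)}

def lattice_event(n_s, n_f):
    """Classify a BSM_lat outcome from the number of successful (n_s)
    and failed (n_f) BSM_blc's."""
    letter_known = n_s >= 1   # at least one block-level letter identified
    sign_known = n_f == 0     # all block-level signs identified
    if letter_known and sign_known:
        return 'S'
    if letter_known:
        return 'D_L'
    return 'D_S' if sign_known else 'F'
```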
We now consider using on-off detectors for fusions. Each outcome of a BSM_{blc} is uniquely identified by a triple O = (r, s, U), where \(r\in {{\mathbb{Z}}}_{j+1}:= \{0,\cdots \,,j\}\) is the number of failed B_{ψ}’s, s = ± is the sign chosen by the successful (r + 1)th B_{ψ} (if r < j) or randomly (if r = j), and U is an (m − r)-element tuple composed of “ϕ,” “ψ,” and “f” (failure) indicating the outcomes of the BSM_{phy}’s from the (r + 1)th to the last. (If r < j, the first component of U is always ψ, and the other components are determined by the B_{s}’s. If r = j, all the components are determined by the B_{s}’s.) Let N_{e}(U) for e ∈ {ϕ, ψ, f} denote the number of e’s in U. Then a BSM_{blc} outcome O falls into one of the following j + 3 events:
where \({{{\mathcal{O}}}}\) is the set of all possible outcomes. Note that the events \({{{{\mathcal{S}}}}}_{r}\), \({{{\mathcal{F}}}}\), and \({{{\mathcal{D}}}}\) correspond to success, failure, and sign discrimination when η = 0. For each event \({{{\mathcal{E}}}}\) in Eq. (14), its sign and letter error probabilities \({q}_{{{{\rm{sign/lett}}}}}^{{{{\rm{blc}}}}}({{{\mathcal{E}}}})\) and the probability \({p}_{{{{\mathcal{E}}}}}\) that the event occurs are given as follows (see Supplementary Note 4 for their derivation):
A possible outcome of a BSM_{lat} corresponds to an n-tuple of events composed of \({{{{\mathcal{S}}}}}_{r}\) (0 ≤ r ≤ j), \({{{\mathcal{F}}}}\), and \({{{\mathcal{D}}}}\), and each such tuple can be regarded as an event for the outcome of the BSM_{lat}. The probability that an event \({{{\bf{E}}}}=({{{{\mathcal{E}}}}}_{1},\cdots \,,{{{{\mathcal{E}}}}}_{n})\) occurs is
and the sign and letter error probabilities of \({{{\bf{E}}}}=({{{{\mathcal{E}}}}}_{1},\cdots \,,{{{{\mathcal{E}}}}}_{n})\) are respectively
where \({N}_{{{{\mathcal{F}}}}}\) is the number of \({{{\mathcal{F}}}}\)’s in E, \({q}_{i}:= {q}_{{{{\rm{lett}}}}}^{{{{\rm{blc}}}}}({{{{\mathcal{E}}}}}_{i})\), and \({{{\rm{sgn}}}}(a)\) is a/∣a∣ if a ≠ 0 and 0 if a = 0. See Supplementary Note 4 for their derivation.
Generation of post-H microclusters
In this subsection, we first present the physical-level graphs of post-H microclusters for PTQC and then describe the method to generate them. A post-H microcluster, which is composed of three lattice-level qubits or two of them and one photon (physical-level qubit), can be regarded as a graph state of photons up to several physical-level Hadamard gates. The graph of this graph state, called the “physical-level graph” of the post-H microcluster, is visualized in Fig. 8 for each post-H microcluster; see Supplementary Note 5 for their derivation. Here, squares (circles) indicate lattice-level (physical-level) qubits. A square (circle) filled with black means that a lattice-level (physical-level) Hadamard gate is applied on the qubit after the involved edges are connected. Recurrent subgraphs are abbreviated as blue dashed squares or circles with numbers; see Fig. 8b–d for the detailed interpretation of these notations.
We now describe how to generate a specific post-H microcluster from GHZ-3 states. We first describe a straightforward method and then adjust and generalize it. The final method can be summarized as follows:

1.
Determine a merging graph G for the post-H microcluster that we want to create by the algorithm presented below. Each edge of G is labeled as either “internal” or “external.”

2.
For each vertex v in G, prepare a GHZ-3 state \({\left\vert {{{{\rm{GHZ}}}}}_{3}\right\rangle }_{v}\).

3.
For each edge e in G connecting v_{1} and v_{2}, perform a BSM (fusion) on two photons selected respectively from \({\left\vert {{{{\rm{GHZ}}}}}_{3}\right\rangle }_{{v}_{1}}\) and \({\left\vert {{{{\rm{GHZ}}}}}_{3}\right\rangle }_{{v}_{2}}\) if e is an internal (external) edge. The order of these operations does not matter.
We define the “GHZ-l state” for an integer l ≥ 3 as the state \(\left\vert {{{{\rm{GHZ}}}}}_{l}\right\rangle := {\left\vert {{{\rm{H}}}}\right\rangle }^{\otimes l}+{\left\vert {{{\rm{V}}}}\right\rangle }^{\otimes l}\). Note that it is the state obtained from a graph state with a star graph of l vertices by applying Hadamard gates on all the leaves of the graph; namely,
We refer to the first photon of the above expression as the “root photon” of the state (which can be chosen arbitrarily) and the other photons as its “leaf photons.”
If a BSM is performed on the root photon of a GHZ-l_{1} state and a leaf photon of a GHZ-l_{2} state, the resulting state on the remaining photons is a GHZ-(l_{1} + l_{2} − 2) state. Thus, an arbitrary GHZ state can be constructed by performing BSMs on multiple GHZ-3 states appropriately. On the other hand, if a fusion is performed on two leaf photons selected respectively from GHZ-l_{1} and GHZ-l_{2} states, the resulting state is no longer a GHZ state; it is instead a graph state (up to some Hadamard gates) whose graph contains a vertex of degree l_{1} − 1, a vertex of degree l_{2} − 1, and multiple vertices of degree one. (The degree d_{v} of a vertex v is the number of edges connected to v.)
Combining the above facts, a post-H microcluster (or an arbitrary graph state) with physical-level graph G can be generated from GHZ-3 states, up to physical-level Hadamard gates, as follows: For each vertex v of G with degree larger than one, prepare a state \({\left\vert {{{{\rm{GHZ}}}}}_{{d}_{v}+1}\right\rangle }_{v}\) through BSMs on GHZ-3 states. Then, for each edge (v_{1}, v_{2}) of G, perform a fusion on two photons selected respectively from \({\vert {{{{\rm{GHZ}}}}}_{{d}_{{v}_{1}}+1}\rangle }_{{v}_{1}}\) and \({\vert {{{{\rm{GHZ}}}}}_{{d}_{{v}_{2}}+1}\rangle }_{{v}_{2}}\). We refer to each BSM or fusion in this process as a merging operation.
However, the above method still has room for improvement. The physical-level graphs in Fig. 8 can be decomposed into multiple components that are combined by fusions through the process shown in Fig. 9, in which each recurrent subgraph connected with multiple vertices is separated and connected with only one vertex. The decompositions of the different post-H microclusters are explicitly presented in Supplementary Figure 7. To generate a post-H microcluster, we first prepare the individual components by the aforementioned method and then merge them through fusions. This process may greatly reduce the number of required merging operations since the number of edges decreases, as shown in Fig. 9.
Furthermore, we can generalize the method using the fact that all merging operations commute with each other. That is, even if the fusions and BSMs in the above process are performed in an arbitrary order, the final state does not vary (up to a change of the Pauli frame). To address this feature systematically, we define a merging graph of a post-H microcluster (or one of its components) as a graph whose vertices correspond to initial GHZ-3 states and whose edges indicate the merging operations between them required to generate the state. Each edge of a merging graph is either internal or external, corresponding to a BSM or a fusion, respectively.
A merging graph of a component can be constructed from its physical-level graph as follows (see Fig. 10 for two examples): First, for each vertex v with d_{v} ≥ 2 in the physical-level graph, replace it with d_{v} − 1 new vertices connected in series by internal edges. This corresponds to decomposing a GHZ-(d_{v} + 1) state into d_{v} − 1 GHZ-3 states. The edges originally connected to v are distributed among the new vertices such that every new vertex is connected to at most three edges. Exactly one new vertex is then connected to only two edges; it is called the “seed vertex” of v. The seed vertex indicates that one photon in the corresponding GHZ-3 state does not participate in any merging operation and remains in the final state. Lastly, the merging graph is obtained by removing all the vertices of degree one. See Supplementary Note 6 for a stricter step-by-step description of the method.
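The bookkeeping in this expansion step can be sketched as follows (a sketch of the counting only; the function name is ours):

```python
def expand_vertex(d):
    """Expand a degree-d vertex of the physical-level graph into a chain
    of GHZ-3 states for the merging graph (counting only)."""
    assert d >= 2
    n_ghz3 = d - 1        # a GHZ-(d+1) state decomposes into d-1 GHZ-3 states
    n_internal = d - 2    # internal edges (BSMs) joining them in series
    # each BSM consumes two photons, so the chain retains d + 1 photons:
    # d photons take the distributed original edges, one remains as the seed
    photons_left = 3 * n_ghz3 - 2 * n_internal
    assert photons_left == d + 1
    return n_ghz3, n_internal
```

This also makes the seed vertex plausible: with d + 1 surviving photons and d distributed edges, exactly one photon (and hence one lightly-loaded vertex) is left over.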
The merging graph of a post-H microcluster is constructed by combining the merging graphs of its components: for each fusion between different components, the corresponding seed vertices in the merging graphs are connected by an external edge. This yields the method summarized at the beginning of this subsection.
Optimization of resource overheads
The process of generating a post-H microcluster described above is determined by two factors: the merging graph and the order of the merging operations. Here, we discuss their optimization for minimizing the resource overhead. The merging graph is selected randomly by the algorithm in Supplementary Note 6. Based on it, we determine the order of the merging operations through a heuristically found algorithm and calculate the expected number \({N}_{{{{\rm{GHZ}}}}}^{{{{\rm{MC}}}}}\) of GHZ-3 states required to generate the state. We repeat this process a sufficiently large number of times to obtain as low a resource overhead as possible. \({N}_{{{{\rm{GHZ}}}}}^{* }\) and \({{{{\mathcal{N}}}}}_{{p}_{{{{\rm{L}}}}}}\) can then be calculated from the obtained optimal resource overheads; see Supplementary Note 3 for details.
During the generation process, performing a merging operation can be regarded as contracting the corresponding edge: removing the edge, merging the two vertices (v_{1}, v_{2}) that it previously joined into a new vertex w, and reconnecting all the edges that were connected to v_{1} or v_{2} to w. Here, each vertex represents a connected subgraph (a group of entangled photons) of the intermediate graph state. We assign a “weight” N_{v} (initialized to 1) to each vertex v, which is the average number of GHZ-3 states required to generate the corresponding connected subgraph. If the edge between two vertices v_{1} and v_{2} is contracted, the new vertex w has the weight
where the factor 2/(1−η)^{2} is the inverse of the success probability of the merging operation. Repeating this process until only one vertex is left yields the post-H microcluster; the weight of the final vertex equals \({N}_{{{{\rm{GHZ}}}}}^{{{{\rm{MC}}}}}\).
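The weight-update rule can be written directly (function name ours):

```python
def merge_weight(n1, n2, eta):
    """Weight of the vertex created by contracting an edge (sketch):
    (N_v1 + N_v2) times 2/(1 - eta)^2, the inverse of the success
    probability of the merging operation."""
    return 2.0 * (n1 + n2) / (1.0 - eta) ** 2
```

At η = 0 each merge simply doubles the combined weight: contracting a two-edge chain of unit-weight vertices costs 2(2(1 + 1) + 1) = 10 GHZ-3 states on average.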
To find an optimal order of merging operations, we use the following strategy:

1.
Find the set E_{min.wgt} of edges with the smallest weight, where the weight of an edge (v_{1}, v_{2}) is defined as \({N}_{{v}_{1}}{+}_{m}{N}_{{v}_{2}}\).

2.
Using an edge coloring algorithm, allocate “colors” to all edges so that different edges sharing a vertex have different colors and as few colors as possible are used.

3.
Partition E_{min.wgt} into disjoint subsets by the colors of the edges. Find the largest subset E_{mrg} among them. If such a subset is not unique, choose one randomly.

4.
Contract each edge in E_{mrg} in an arbitrary order.

5.
Repeat all the above steps until only one vertex is left.
The strategy is based on two intuitions: First, it is better to merge vertices with small weights first, since (N_{1} + _{m}N_{2}) + _{m}N_{3} < N_{1} + _{m}(N_{2} + _{m}N_{3}) if N_{1} < N_{2} < N_{3}. Second, it is better to perform merging operations in parallel as much as possible; such a set of edges can be found by the edge-coloring algorithm. For our results, we have used the function coloring.greedy_color in the NetworkX package^{44} with the strategy largest_first. (Since the function performs vertex coloring, we input the line graph of G_{mrg} into it.)
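Putting the five steps together, the strategy can be sketched in pure Python; the greedy pass below stands in for NetworkX's coloring.greedy_color applied to the line graph, and the tuple-based vertex relabeling is our own choice, so treat this as a sketch under those assumptions.

```python
def contract_all(edges, eta):
    """Run the five-step merging-order heuristic on a merging graph (sketch).

    `edges` lists merging operations as pairs of hashable vertex labels;
    every vertex starts as one GHZ-3 state with weight 1. Returns the
    final weight, i.e., the expected GHZ-3 cost N_GHZ^MC."""
    m = lambda a, b: 2.0 * (a + b) / (1.0 - eta) ** 2   # the +_m operation
    weight = {v: 1.0 for e in edges for v in e}
    while edges:
        # Step 1: edges of minimum weight N_v1 +_m N_v2
        wmin = min(m(weight[u], weight[v]) for u, v in edges)
        e_min = [e for e in edges if m(weight[e[0]], weight[e[1]]) == wmin]
        # Step 2: greedy edge coloring (stand-in for vertex coloring of
        # the line graph with networkx coloring.greedy_color)
        color, used = {}, {}
        for u, v in edges:
            c, taken = 0, used.get(u, set()) | used.get(v, set())
            while c in taken:
                c += 1
            color[(u, v)] = c
            used.setdefault(u, set()).add(c)
            used.setdefault(v, set()).add(c)
        # Step 3: largest single-color subset of the minimum-weight edges
        classes = {}
        for e in e_min:
            classes.setdefault(color[e], []).append(e)
        e_mrg = max(classes.values(), key=len)
        # Step 4: contract the chosen (vertex-disjoint) edges in parallel
        remap = {}
        for u, v in e_mrg:
            new = (u, v)
            remap[u] = remap[v] = new
            weight[new] = m(weight.pop(u), weight.pop(v))
        edges = [(remap.get(u, u), remap.get(v, v))
                 for (u, v) in edges if (u, v) not in e_mrg]
        edges = [e for e in edges if e[0] != e[1]]      # drop self-loops
        # Step 5: loop until a single vertex remains
    assert len(weight) == 1, "the merging graph must be connected"
    return next(iter(weight.values()))
```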
In Supplementary Note 7, we show evidence that this optimization strategy is highly effective in terms of both the optimality of the calculated overhead and the search time by comparing its performance with those of variants constructed by omitting or altering specific steps. We conjecture that this strategy is also powerful for generating general graph states beyond those for PTQC, which will be worth investigating.
Methods for comparing PTQC and previous approaches
Here, we describe the methods used to obtain the results in Fig. 7, where PTQC and three known approaches are compared. Let η_{e}, η_{b}, η_{s}, η_{d}, and η_{m} denote the photon loss rates in a single-photon source, a beam splitter, a switch, a delay line (for the time period required to process classical data for a switch), and a photodetector, respectively. We consider “fancy” switches^{24} that have multiple input and output modes; thus, N states can be postselected from M input states (N < M) with a single switch. For the analysis, we assume η_{e} = η_{b} = η_{s} = η_{d} = η_{m} = η_{comp}. The total photon loss rate η is then upper bounded by
where N_{b}, N_{s}, and N_{d} are respectively the maximal numbers of beam splitters, switches, and delay lines that a single photon encounters between its creation and measurement. When counting delay lines, we assume that actions that are not causally related can be done in parallel; namely, delay lines are compressed as much as possible so that those required for switches are dominant. The loss threshold per component is lower bounded by η_{th}/N_{comp}, where η_{th} is the total loss threshold.
Let us first consider PTQC without postselection of star clusters. During the generation of GHZ-3 states and their postselection, a photon passes through one beam splitter, one switch, and one delay line^{38}. While constructing microclusters, a photon encounters at most N_{step} pairs of a switch and a delay line, which are used for postselecting photons based on the outcomes of merging operations; here, N_{step} is the number of steps (groups of merging operations that can be conducted in parallel). In a CBSM, the physical-level BSMs (BSM_{phy}’s) in each block must be done one by one since their types (B_{ψ}, B_{+}, and B_{−}) depend on the previous BSM_{phy} outcomes. Therefore, a photon passes through at most one switch (zero if m = 1) and m − 1 delay lines before its BSM_{phy} is performed. Lastly, a photon enters two beam splitters during the BSM_{phy}. To sum up, we get
where δ is the Kronecker delta. If star clusters are postselected, we should add one switch and delay line (for the postselection) and m − 1 delay lines (to wait for step1 fusions to finish) to the formula, which leads to
MTQC is the same as PTQC except that m is fixed to 1 and the central qubits are encoded. Since only side qubits are involved in CBSMs, Eqs. (24) and (25) with m = 1 can be directly applied to this case. Figure 7 shows only the case where star clusters are postselected, which gives higher thresholds than the case without postselection. In Supplementary Note 8, we discuss the effects of the central-qubit encoding in detail, showing that it does not help improve the performance of the protocol, contrary to the claim in Ref. ^{18}.
For the tree-encoding method, the thresholds per component are already reported in Fig. 4b of Ref. ^{24}. The number \({N}_{\det }^{* }\) of detectors per data qubit, which is used as a resource measure in Ref. ^{24}, can be approximately converted to \({N}_{{{{\rm{GHZ}}}}}^{* }\) by multiplying by 1/198, because a single successfully generated GHZ-3 state requires 6 × 32 = 192 detectors for its generation and two detectors for measuring each of its photons.
Lastly, we investigate the approach with single-photon qubits and boosted fusions. Employing the BSM scheme in Ref. ^{19}, a GHZ-2^{m} state for each m ∈ {1, 2, ⋯ , N − 1} (for an integer N ≥ 2) is used as an ancillary state for a single BSM, where a GHZ-2 state is a Bell state \(\left\vert 00\right\rangle +\left\vert 11\right\rangle\). (Bell states are ignored in the resource analysis.) The BSM scheme succeeds with probability \((1-1/{2}^{N}){(1-\eta )}^{{2}^{N}}\) and discriminates only the letter of the Bell state (q_{sign} = 1/2, q_{lett} = 0) with probability \({(1-\eta )}^{{2}^{N}}/{2}^{N}\). In other cases, photon losses are detected and the data is discarded (q_{sign} = q_{lett} = 1/2). Based on this information, we can conduct error simulations for various values of N and obtain the loss thresholds as done for PTQC. To compute the threshold per component, we consider the path of a photon in an ancillary GHZ-2^{N−1} state (which is the longest):
where \({N}_{{{{\rm{step}}}}}^{{{{\rm{anc}}}}}\) is the number of steps during the generation of the GHZ-2^{N−1} state. The factor 2^{N} in front of η_{b} comes from the number of beam splitters in a boosted fusion^{19}. Considering the ancillary GHZ states for the two step-1 and two step-2 fusions per central qubit, \({N}_{{{{\rm{GHZ}}}}}^{* }\) is given as
where “PS” means postselection of star clusters and \({N}_{{{{\rm{GHZ}}}}}^{{{{\rm{anc}}}}}\) is the average number of GHZ-3 states required to generate all the ancillary GHZ states for one BSM. The calculation results are as follows: For N ≤ 3, we get η_{th} < 10^{−3}. For N = 4 with PS, we get η_{th} = 2.7 × 10^{−3}, η_{th}/N_{comp} = 1.0 × 10^{−4}, and \({N}_{{{{\rm{GHZ}}}}}^{* }=185\). For N = 5 with PS, we get η_{th} = 1.8 × 10^{−3}, η_{th}/N_{comp} = 4.0 × 10^{−5}, and \({N}_{{{{\rm{GHZ}}}}}^{* }=1027\). Only the case of N = 4 with PS is marked in Fig. 7.
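The outcome probabilities quoted above for the boosted BSM can be collected as follows (function name ours):

```python
def boosted_bsm_probs(N, eta):
    """Outcome probabilities of the boosted BSM with ancillary GHZ states
    (sketch based on the probabilities quoted in the text)."""
    t = (1.0 - eta) ** (2 ** N)               # all 2^N photons survive
    p_success = (1.0 - 1.0 / 2 ** N) * t      # sign and letter identified
    p_letter = t / 2 ** N                     # letter only (q_sign = 1/2)
    p_discard = 1.0 - p_success - p_letter    # loss detected, data discarded
    return p_success, p_letter, p_discard
```

At η = 0 the two discrimination outcomes sum to one, recovering the lossless success rate 1 − 1/2^{N}.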
Data availability
The simulation data for obtaining thresholds and resource overheads of our protocol is available at https://github.com/seokhyunglee/PTQC.
Code availability
The Python codes used for numerical simulations are available from the corresponding author upon reasonable request.
References
Ralph, T. C. & Pryde, G. J. Optical quantum computation. In Progress in Optics, Vol. 54 (ed. Wolf, E.) Ch. 4, 209–269 (Elsevier, 2010).
Raussendorf, R. & Briegel, H. J. A oneway quantum computer. Phys. Rev. Lett. 86, 5188–5191 (2001).
Raussendorf, R., Browne, D. E. & Briegel, H. J. Measurement-based quantum computation on cluster states. Phys. Rev. A 68, 022312 (2003).
Raussendorf, R., Harrington, J. & Goyal, K. A faulttolerant oneway quantum computer. Ann. Phys. 321, 2242–2270 (2006).
Raussendorf, R., Harrington, J. & Goyal, K. Topological faulttolerance in cluster state quantum computation. New J. Phys. 9, 199 (2007).
Fowler, A. G. & Goyal, K. Topological cluster state quantum computing. Quantum Inf. Comput. 9, 721–738 (2009).
Herr, D., Paler, A., Devitt, S. J. & Nori, F. Lattice surgery on the Raussendorf lattice. Quantum Sci. Technol. 3, 035011 (2018).
Brown, B. J. & Roberts, S. Universal faulttolerant measurementbased quantum computation. Phys. Rev. Res. 2, 033305 (2020).
Bombin, H. et al. Logical blocks for faulttolerant topological quantum computation. PRX Quantum 4, 020303 (2023).
Browne, D. E. & Rudolph, T. Resourceefficient linear optical quantum computation. Phys. Rev. Lett. 95, 010501 (2005).
Braunstein, S. L. & Mann, A. Measurement of the Bell operator and quantum teleportation. Phys. Rev. A 51, R1727(R) (1995).
Auger, J. M., Anwar, H., Gimeno-Segovia, M., Stace, T. M. & Browne, D. E. Fault-tolerant quantum computation with nondeterministic entangling gates. Phys. Rev. A 97, 030301(R) (2018).
Jeong, H., Kim, M. S. & Lee, J. Quantum-information processing for a coherent superposition state via a mixed-entangled coherent channel. Phys. Rev. A 64, 052308 (2001).
Jeong, H. & Kim, M. S. Efficient quantum computation using coherent states. Phys. Rev. A 65, 042305 (2002).
Omkar, S., Teo, Y. S. & Jeong, H. Resourceefficient topological faulttolerant quantum computation with hybrid entanglement of light. Phys. Rev. Lett. 125, 060501 (2020).
Omkar, S., Teo, Y. S., Lee, S.-W. & Jeong, H. Highly photon-loss-tolerant quantum computing using hybrid qubits. Phys. Rev. A 103, 032602 (2021).
Lee, S.-W., Park, K., Ralph, T. C. & Jeong, H. Nearly deterministic Bell measurement with multiphoton entanglement for efficient quantum-information processing. Phys. Rev. A 92, 052324 (2015).
Omkar, S., Lee, S.-H., Teo, Y. S., Lee, S.-W. & Jeong, H. All-photonic architecture for scalable quantum computing with Greenberger-Horne-Zeilinger states. PRX Quantum 3, 030309 (2022).
Grice, W. P. Arbitrarily complete Bell-state measurement using only linear optical elements. Phys. Rev. A 84, 042331 (2011).
Ewert, F. & van Loock, P. 3/4-Efficient Bell measurement with passive linear optics and unentangled ancillae. Phys. Rev. Lett. 113, 140403 (2014).
Herr, D., Paler, A., Devitt, S. J. & Nori, F. A local and scalable lattice renormalization method for ballistic quantum computation. npj Quantum Inf. 4, 27 (2018).
Fujii, K. & Tokunaga, Y. Faulttolerant topological oneway quantum computation with probabilistic twoqubit gates. Phys. Rev. Lett. 105, 250503 (2010).
Li, Y., Barrett, S. D., Stace, T. M. & Benjamin, S. C. Fault tolerant quantum computation with nondeterministic gates. Phys. Rev. Lett. 105, 250502 (2010).
Li, Y., Humphreys, P. C., Mendoza, G. J. & Benjamin, S. C. Resource costs for fault-tolerant linear optical quantum computing. Phys. Rev. X 5, 041007 (2015).
Takeda, S., Mizuta, T., Fuwa, M., Van Loock, P. & Furusawa, A. Deterministic quantum teleportation of photonic quantum bits by a hybrid technique. Nature 500, 315–318 (2013).
Zaidi, H. A. & van Loock, P. Beating the onehalf limit of ancillafree linear optics Bell measurements. Phys. Rev. Lett. 110, 260501 (2013).
Kilmer, T. & Guha, S. Boosting linear-optical Bell measurement success probability with predetection squeezing and imperfect photon-number-resolving detectors. Phys. Rev. A 99, 032302 (2019).
Gimeno-Segovia, M., Shadbolt, P., Browne, D. E. & Rudolph, T. From three-photon Greenberger-Horne-Zeilinger states to ballistic universal quantum computation. Phys. Rev. Lett. 115, 020502 (2015).
Zaidi, H. A., Dawson, C., van Loock, P. & Rudolph, T. Near-deterministic creation of universal cluster states with probabilistic Bell measurements and three-qubit resource states. Phys. Rev. A 91, 042301 (2015).
Pant, M., Towsley, D., Englund, D. & Guha, S. Percolation thresholds for photonic quantum computing. Nat. Commun. 10, 1070 (2019).
Ralph, T. C., Hayes, A. J. F. & Gilchrist, A. Loss-tolerant optical qubits. Phys. Rev. Lett. 95, 100501 (2005).
Lee, S.-W., Ralph, T. C. & Jeong, H. Fundamental building block for all-optical scalable quantum networks. Phys. Rev. A 100, 052303 (2019).
Besse, J.C. et al. Realizing a deterministic source of multipartiteentangled photonic qubits. Nat. Commun. 11, 4877 (2020).
Kieling, K., Rudolph, T. & Eisert, J. Percolation, renormalization, and quantum computing with nondeterministic gates. Phys. Rev. Lett. 99, 130501 (2007).
Knill, E. Quantum computing with realistically noisy devices. Nature 434, 39–44 (2005).
Lütkenhaus, N., Calsamiglia, J. & Suominen, K.-A. Bell measurements for teleportation. Phys. Rev. A 59, 3295–3300 (1999).
Barrett, S. D. & Stace, T. M. Fault tolerant quantum computation with very high threshold for loss errors. Phys. Rev. Lett. 105, 200502 (2010).
Varnava, M., Browne, D. E. & Rudolph, T. How good must single photon sources and detectors be for efficient linear optical quantum computation? Phys. Rev. Lett. 100, 060502 (2008).
Higgott, O. PyMatching: A Python package for decoding quantum codes with minimum-weight perfect matching. ACM Trans. Quantum Comput. 3, 1–16 (2021).
Delfosse, N. & Nickerson, N. H. Almostlinear time decoding algorithm for topological codes. Quantum 5, 595 (2021).
Steane, A. Multiple-particle interference and quantum error correction. Proc. R. Soc. Lond. A 452, 2551–2577 (1996).
Bartolucci, S. et al. Fusionbased quantum computation. Nat. Commun. 14, 912 (2023).
Lee, S.H. & Jeong, H. Universal hardwareefficient topological measurementbased quantum computation via colorcodebased cluster states. Phys. Rev. Res. 4, 013010 (2022).
Hagberg, A. A., Schult, D. A. & Swart, P. J. Exploring network structure, dynamics, and function using NetworkX. In Proceedings of the 7th Python in Science Conference (eds Varoquaux, G., Vaught, T. & Millman, J.) 11–15 (Pasadena, CA, USA, 2008).
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korean government (Grant Nos. 2019M3E4A1080074, 2023R1A2C1006115, NRF2022M3E4A107609, and NRF2022M3K4A1097117) via the Institute of Applied Physics at Seoul National University, by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP2021001059 and IITP20232020001606), and by the Brain Korea 21 FOUR Project grant funded by the Korean Ministry of Education. We thank Kamil Bradler, Brendan Pankovich, Angus Kan, and Alex Neville for insightful discussions.
Author information
Contributions
All authors (S.H.L., S.O., Y.S.T., and H.J.) contributed to developing the main idea. S.H.L. concretized the idea with mathematical analysis, wrote the codes, and ran numerical simulations. S.O. suggested important ideas on resource analysis. Y.S.T. checked the results and H.J. supervised the project. All authors helped write the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lee, S.-H., Omkar, S., Teo, Y. S. et al. Parity-encoding-based quantum computing with Bayesian error tracking. npj Quantum Inf. 9, 39 (2023). https://doi.org/10.1038/s41534-023-00705-9