Percolation-based architecture for cluster state creation using photon-mediated entanglement between atomic memories

A central challenge for many quantum technologies is the generation of large entangled states of individually addressable quantum memories. Here, we show that percolation theory allows the rapid generation of arbitrarily large graph states by heralding entanglement in a lattice of atomic memories with single-photon detection. This approach greatly reduces the time required to produce large cluster states for quantum information processing, including universal one-way quantum computing. The reduction puts our architecture in an operational regime where demonstrated coupling, collection, and detection efficiencies and coherence times are sufficient. The approach also dispenses with the need for time-consuming feed-forward, high-cooperativity interfaces, and ancilla single photons, and it can tolerate a high rate of site imperfections. We derive the minimum coherence time needed to scalably create large cluster states, as a function of photon-collection efficiency. We also propose a variant of the architecture with long-range connections, which is even more resilient to low site yields. We analyze our architecture for nitrogen-vacancy (NV) centers in diamond, but the approach applies to any atomic or atom-like system.


INTRODUCTION
The past years have seen rapid advances in controlling small groups of qubits encoded in atomic or atom-like quantum memories. An important question now concerns the development of architectures to efficiently combine these memories into large-scale systems capable of general-purpose quantum computing, 1-3 quantum simulation, 4 and metrology near the quantum limit. 5 A promising approach is to entangle the atomic qubits with optical links to generate cluster states, which enable general-purpose quantum computing with adaptive single-qubit measurements. 6 A key challenge is to produce this cluster state fast enough to allow computation and error correction within the coherence time of the memories.
Here, we show that percolation of heralded entanglement allows us to create arbitrarily large cluster states. Unlike conventional approaches, 7 our cluster-state-based architecture does not require physical qubits to be reconfigured according to the outcomes of entanglement attempts. Instead, the optical network reconfigures the connectivity. Meanwhile, the concept of percolation greatly relaxes the high success probability required in previous proposals. 1-3 The process is fast enough for implementation with already-demonstrated device parameters; one needs neither high-cooperativity cavities, ancilla single photons, nor 'feed-forward' operations conditioned on previous measurement results. In the absence of errors, the missing bonds can be compensated with constant overhead. 8-11 In the presence of errors, the scheme can be adapted to be fault-tolerant. 12,13 Our approach also tolerates site imperfections, and we show trade-offs between reaching the percolation threshold and site-loss resilience. Combined with transparent nodes implemented on a nanophotonic platform, long-range connections become possible, reducing the percolation threshold. We derive a theoretical lower bound on the minimum time required to percolate in any lattice, and find that the proposed lattices are only a factor of 1.6-3 above this limit.
We focus on nitrogen-vacancy (NV) centers in diamond 14 and propose a control sequence that maps the physical properties to cluster state quantum computation. NV centers have favorable properties as quantum memories. The NV⁻ charge state has a robust optical transition for heralded entanglement between remote NV centers 15,16 and a long electronic spin (S = 1) coherence time approaching 1 second. 17 Recently, single-qubit gates with fidelities up to 99% were achieved with optimal control techniques. 18 Electronic spins of NV centers can be coupled with nearby nuclear spins, which have coherence times exceeding 1 minute. 19 In addition, they can be coupled with integrated nanophotonic devices. 20 Our approach likely applies to a number of other physical systems as well, including atomic gases, 21 ion traps, 22 semiconductor quantum dots, 23 and rare-earth ions. 24

Figure 1 illustrates the percolation approach to generating cluster states with NV centers. We work in the framework of cluster states, where nodes represent qubits in the superposition state (|0⟩ + |1⟩)/√2 and bonds represent controlled-Z gates. Consider a square lattice where edges exist with probability p (Fig. 1a-c). The computational power of the cluster state corresponding to the graph is related to the size of the largest connected component (LCC) (shown in red). When p < 0.5, the graph consists of small disconnected islands: for a lattice with N nodes, the size of the LCC is O(log N). 25 Single-qubit measurements on such a cluster state can be efficiently simulated on classical computers. When the bond probability p exceeds the critical value p_c = 0.5, called the percolation threshold of the lattice, there is a sudden transition in the size of the LCC. On crossing this boundary, the size of the LCC jumps to Θ(N) and the lattice is 'percolated'. A large percolated lattice can be 'renormalized' with single-qubit measurements to obtain a perfect lattice. 8 Renormalization consumes a constant fraction of qubits 8 and requires classical computation (pathfinding). 26,27 The resulting perfect lattice is a resource for universal quantum computation. Thus, the percolation transition is accompanied by a sudden transition in computational power: adaptive single-qubit measurements on the cluster state enable universal quantum computing. 11

Figure 1d shows the physical implementation of bond creation (entanglement) with NVs. The nuclear spins (red spheres) are 'client qubits' that store entanglement. They are coupled to NV electronic spins ('broker qubits'), which can be entangled remotely by photon-mediated Bell measurements. In each time step, we attempt to create one bond (entanglement) at each node. We consider the Barrett-Kok protocol 28 on the broker qubits of neighboring nodes. If the probabilistic Bell measurement succeeds, the electron spins in the two nodes are entangled. This entanglement is transferred to the nuclear spins with the entanglement-swapping procedure illustrated in Fig. 1d. The whole cycle from initialization to entanglement swapping is assumed to take approximately t_0 = 5 µs based on experimental demonstrations 16 (see Supplemental Material, which includes a discussion of emitters coupled with nanophotonic circuits, 29 ultrasmall-mode-volume cavities, 30,31 waveguide-integrated single-photon detectors, 32 low-jitter detectors, 33 effects of frequency mismatch between two emitters, 34,35 spectral diffusion, 36,37 nitrogen implantation, 38,39 and charge-state initialization 40,41 ).
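The entanglement-swapping step can be verified with a small state-vector calculation. The sketch below is our own illustration (not the authors' code): it starts from a heralded Bell pair on the two electron (broker) spins, applies the hyperfine controlled-Z gates to the nuclear (client) spins, and projects both electrons onto the |+⟩ X-basis outcome, leaving the nuclear spins in a Bell state.

```python
import numpy as np

# Qubit ordering: e1, e2 (electron/broker spins), n1, n2 (nuclear/client spins).

def cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2 of an n-qubit state vector."""
    s = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    s[tuple(idx)] *= -1.0  # flip sign where both qubits are |1>
    return s.reshape(-1)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # heralded Bell pair on e1, e2
plus = np.array([1.0, 1.0]) / np.sqrt(2)            # nuclear spins start in |+>
state = np.kron(np.kron(bell, plus), plus)

state = cz(state, 0, 2, 4)  # hyperfine CZ between e1 and n1
state = cz(state, 1, 3, 4)  # hyperfine CZ between e2 and n2

# X-basis measurement of both electrons; keep the (+,+) outcome.
proj = np.kron(np.kron(np.outer(plus, plus), np.outer(plus, plus)), np.eye(4))
state = proj @ state
state /= np.linalg.norm(state)

# Electrons are now exactly in |+>|+>; extract the nuclear two-qubit state.
nuclear = state.reshape(4, 4)[0] * 2.0  # row (e1, e2) = (0, 0) carries psi/2

fidelity = abs(bell @ nuclear) ** 2  # overlap with (|00> + |11>)/sqrt(2)
print(f"nuclear-spin Bell fidelity: {fidelity:.6f}")
```

The other three measurement outcomes yield the same entangled state up to local Pauli corrections, which is why the swap is deterministic once the electron-electron Bell pair has been heralded.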

Protocol
If the Bell measurement fails, the nuclear spins must be restored to their state before the trial. In addition, the electronic spins should be initialized to the spin ground state without disrupting the nuclear spins. For the first problem, we simply wait for the nuclear spin and electronic spin to decouple, which happens after one period of the hyperfine interaction. This period serves as a global clock and can be synchronized across the whole graph using only the ¹⁵N nuclear spin of the host nitrogen atom of the NV center. For the electron-spin initialization, the optical pumping technique used in many experiments 15,16 cannot be applied, because hyperfine coupling persists for a random amount of time while the electronic spin is in the m_s = ±1 states. Instead, we read out the spin state via the m_s = 0 optical transition, where the electronic spin decouples from the nuclear spin. If the spin is found in the other state, a fast microwave pulse can put it in the spin ground state (see Supplemental Material).
We assume the Barrett-Kok protocol because it does not require ancilla single photons or high-cooperativity cavities, unlike reflection-based protocols. 1 Furthermore, unlike single-photon detection protocols, 42 photon loss does not degrade the entanglement fidelity ⟨Ψ|ρ|Ψ⟩, where Ψ is the Bell state and ρ is the density matrix of the system after successful heralding. High entanglement fidelity is important for reducing the error-correction overhead in a fault-tolerant architecture. High fidelity comes at the price of a low bond success probability, 28 which can be overcome in the percolation-based architecture.

Physical unit
The physical unit cell could be very small, on the order of tens of microns, so that the entire lattice may be integrated on a chip, as illustrated in Fig. 2. Each node in the architecture requires an atomic memory and a 1 × d switch, where d is the number of nearest neighbors in the underlying lattice, i.e., the degree of the lattice. Each bond in the lattice requires waveguides, a beamsplitter, and two detectors for the Bell measurement.
At each time step, a state-selective optical π-pulse entangles a photonic mode with the electronic spin. Photonic modes coupled to neighboring electronic spins undergo a probabilistic linear-optic Bell measurement. A successful Bell measurement corresponds to the creation of a bond in the lattice.

(Fig. 1 caption: Spheres and lines represent nodes and bonds, respectively; red spheres are in the LCC. When the bond probability p exceeds the percolation threshold p_c, the size of the LCC suddenly increases, and the corresponding graph states change from being classically simulable to a resource for universal quantum computation. c Expanded view of a. d Physical implementation of nodes and bonds with NV centers in diamond. ① A probabilistic Bell measurement is attempted on two nearest-neighbor electronic spins (blue spheres). ② Conditioned on two single-photon detection events, the two electron spins are entangled (Bell state). ③ Hyperfine interaction entangles electronic spins and nuclear spins (¹⁵N) through controlled-Z gates. ④ Measurement of the electronic spins in the transverse direction projects the remote nuclear spins onto an entangled state (entanglement swapping).)
Let us define n_att as the total number of entanglement trials per node; each bond is then attempted n_att/d times. If a bond is created before its n_att/d allocated trials are used, the node idles on the remaining attempts for that bond. Each switch only needs to be flipped d − 1 times during cluster generation, so the switching time is negligible compared with the whole process. For example, electro-optic modulators can switch on sub-nanosecond time scales, while the time spent on each bond is on the order of milliseconds, as we will see.
The probability of successfully heralding entanglement between two NV centers is 28 p_0 = η²/2, where η is the efficiency of emitting, transmitting, and detecting the photon entangled with the electronic spin (zero-phonon line) from the NV excited state. Table 1 summarizes p_0 for three representative types of emitter-photon interfaces: low-efficiency interfaces with p_0 = 5 × 10⁻⁵, representative of today's state-of-the-art circular gratings or solid immersion lenses (SILs), 15,16,43 medium-efficiency interfaces with p_0 = 2 × 10⁻⁴ for NV centers coupled to diamond waveguides, 20 and high-efficiency interfaces with p_0 = 5 × 10⁻² for nanocavity-coupled NV centers. 44 For all three coupling mechanisms, we assumed coupling efficiencies that are realistic today (see Supplemental Material). After n_att/d entanglement attempts with a nearest neighbor, the probability of having generated a bond is p = 1 − (1 − p_0)^(n_att/d).
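These bond statistics are easy to evaluate numerically. The minimal sketch below (function names ours) assumes the Barrett-Kok per-attempt success probability p_0 = η²/2 and the per-bond probability p = 1 − (1 − p_0)^(n_att/d):

```python
def p0_barrett_kok(eta: float) -> float:
    """Success probability of one Barrett-Kok attempt.

    eta: end-to-end efficiency of emitting, transmitting, and detecting a
    zero-phonon-line photon. Both photons must be detected, and the
    linear-optic Bell measurement succeeds half the time, hence eta**2 / 2.
    """
    return eta**2 / 2.0

def bond_probability(p0: float, n_att: int, d: int) -> float:
    """Probability that a given bond exists after n_att total trials per
    node, i.e. n_att/d trials per bond on a degree-d lattice."""
    return 1.0 - (1.0 - p0) ** (n_att / d)

# Example: waveguide-coupled NVs (eta = 2%, so p0 = 2e-4) on a square lattice.
p = bond_probability(p0_barrett_kok(0.02), n_att=40_000, d=4)
print(f"bond probability after 10,000 trials per bond: {p:.3f}")
```

Note how slowly p grows for small p_0: about 1/p_0 trials per bond are needed before the bond probability approaches the percolation threshold, which is what drives the time estimates in the next section.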

Percolation thresholds
We evaluate the growth of clusters using the Newman-Ziff algorithm with nine million nodes. Figure 3a plots the fraction of nodes that are in the largest connected component, f_LCC = (size of LCC)/N, where N is the total number of nodes in the lattice, as a function of time from the start of the protocol for the three values of p_0, assuming t_0 = 5 µs, with an underlying square lattice. 25 For a degree-d lattice, the bond probability after time t is p = 1 − (1 − p_0)^(t/(t_0 d)). As the bond success probability passes the bond percolation threshold (p_c), f_LCC rapidly rises, approaching Θ(1). The time required to obtain a resource for universal quantum computation is t_c = t_0 d · ln(1 − p_c)/ln(1 − p_0), shown with vertical dashed lines in the figure. The transition becomes sharper as the number of nodes in the lattice (N) increases.
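The threshold-crossing time t_c = t_0 d · ln(1 − p_c)/ln(1 − p_0) can be tabulated directly for the three interfaces of Table 1. A minimal sketch (function name ours), assuming a square lattice (d = 4, p_c = 0.5) and t_0 = 5 µs:

```python
import math

def t_c(p0: float, d: int = 4, p_c: float = 0.5, t0: float = 5e-6) -> float:
    """Time (seconds) until the bond probability p = 1 - (1 - p0)^(t/(t0*d))
    reaches the bond percolation threshold p_c."""
    return t0 * d * math.log(1 - p_c) / math.log(1 - p0)

for label, p0 in [("SIL / grating", 5e-5), ("waveguide", 2e-4), ("nanocavity", 5e-2)]:
    print(f"{label:15s} p0 = {p0:.0e}  ->  t_c = {t_c(p0):.3g} s")
```

Even the low-efficiency interface crosses the threshold in roughly 0.28 s, consistent with the claim that percolation occurs within the ~1 s nuclear-spin coherence time for all three collection schemes.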
For all collection schemes, the bond success probability exceeds the percolation threshold within 1 second, a conservative estimate of the nuclear spin coherence time in nanostructured, integrated systems based on the experimentally demonstrated coherence time of 1 minute. 19 These simulations show that an arbitrarily large cluster can be generated even with free-space couplings.
Degree of the lattice and imperfection

It is known that higher-degree lattices have a lower percolation threshold. However, t_c is nearly the same for the honeycomb (d = 3), square (d = 4), and triangular (d = 6) lattices (Fig. 3b). This is because increasing d lowers the bond percolation threshold, but it also decreases the number of entanglement attempts per bond, n_att/d; with a single broker qubit per NV, entanglement attempts must proceed serially. Increasing d would in fact substantially lower t_c if each site contained multiple broker qubits that could be entangled simultaneously. We do not explore the possibility of multi-NV nodes owing to a lack of studies. 45 Ion-trap systems are presently more mature for this purpose. 2

Let us consider the most general case, in which we can attempt Bell measurements on any pair of NVs at any time step. What is the minimum time, t_c^(LB), required to obtain a resource for universal quantum computation over all lattice geometries? The bond probability after time t is p = 1 − (1 − p_0)^(t/(t_0 d)). For percolation, p ≥ p_c, i.e., t ≥ t_0 d · ln(1 − p_c)/ln(1 − p_0). For a degree-d lattice, p_c ≥ 1/(d − 1), 46 with equality for the degree-d Bethe lattice (an infinite tree with fixed degree at each node). This leads to the lower bound t_c ≥ t_c^(LB) = −t_0/ln(1 − p_0) ≈ t_0/p_0 in the limit of large d, which is plotted as a black dashed line in Fig. 3b. The lattice corresponding to the lower bound is the infinite-degree Bethe lattice, and such lattices are not a resource for universal quantum computing. 47 Meanwhile, we find that simple 2D lattices with nearest-neighbor connectivity are only a factor of about 3 above this limit and are resources for universal quantum computing.
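The near-coincidence of t_c across lattice degrees can be checked numerically against the Bethe-lattice lower bound t_LB = −t_0/ln(1 − p_0). This is an illustrative sketch (ours), using the exact known bond percolation thresholds of the three 2D lattices:

```python
import math

T0 = 5e-6   # bond trial time (s)
P0 = 2e-4   # per-attempt success probability, waveguide interface

def t_c(d: int, p_c: float) -> float:
    """Time to cross the bond percolation threshold on a degree-d lattice."""
    return T0 * d * math.log(1 - p_c) / math.log(1 - P0)

t_lb = -T0 / math.log(1 - P0)  # Bethe-lattice (d -> infinity) lower bound

lattices = {
    "honeycomb": (3, 1 - 2 * math.sin(math.pi / 18)),  # d = 3, p_c ~ 0.6527
    "square": (4, 0.5),                                # d = 4, p_c = 0.5
    "triangular": (6, 2 * math.sin(math.pi / 18)),     # d = 6, p_c ~ 0.3473
}
for name, (d, p_c) in lattices.items():
    print(f"{name:10s} t_c / t_LB = {t_c(d, p_c) / t_lb:.2f}")
```

All three ratios land between roughly 2.5 and 3.2: the lower threshold of the higher-degree lattices is almost exactly cancelled by the fewer attempts per bond, which is why Fig. 3b shows nearly identical t_c for the three geometries.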
Practically, a scalable architecture should be able to tolerate non-functional sites. For example, trapping in a metastable state, a far-detuned transition, or failed charge initialization greatly reduced the system performance in a recent state-of-the-art experiment. 48 Even if all faulty nodes and the bonds connected to them are removed, the lattice can retain enough bonds to percolate (Fig. 4a, inset). The problem maps to site-bond percolation. We define the site yield q as the fraction of functional nodes. Figure 4a plots the minimum time required to obtain a percolated lattice as a function of q, assuming NVs coupled to diamond waveguides (p_0 = 0.02%). In general, a reduced site yield can be compensated with a larger bond probability, which requires a longer time (more attempts). The site percolation threshold, q_c, corresponds to the minimum possible site yield for percolation when all bonds have succeeded (p = 1). The triangular lattice (q_c = 0.5) performs better than the square lattice (q_c ≈ 0.593), which in turn outperforms the honeycomb lattice (q_c ≈ 0.697).

The architecture discussed thus far only allows nearest-neighbor interactions. Adding the long-range connections shown in the inset of Fig. 4b can decrease the threshold time and increase tolerance to imperfect site yield. Such an architecture can be implemented by replacing the 1 × 4 switch in Fig. 2 with a 5 × 5 optical switch, depicted in Supplementary Fig. S1. Seven Mach-Zehnder interferometers (MZIs), each with two phase shifters, can implement switching between the set of input and output modes (Fig. S1c). The MZI arrays allow a node to be 'transparent': photons pass through a transparent node to the next adjacent node for interference (Fig. 4b inset). By turning multiple adjacent nodes transparent, this bypass enables long-range entanglement in a planar architecture.
Interestingly, this transparent-node architecture can be used to reduce the percolation threshold by randomly turning a fraction 1 − ε of the nodes transparent. Figure 4b plots f_LCC vs. time. As ε decreases, the maximum possible value of f_LCC is reduced from one to ε, because only a fraction ε of the nodes are active. However, reducing ε also decreases t_c, because the transparent nodes increase the effective dimensionality of the lattice. Therefore, there is an optimum value of ε that maximizes the LCC for a given time. We numerically found a minimum p_c of 0.33 with transparent nodes, achieved when 1/N ≪ ε ≪ 1, i.e., ε → 0 while the number of non-transparent nodes in the lattice is still Θ(N). Faulty sites can be incorporated into the fraction of transparent nodes as long as the yield exceeds 1/N.

Small-size systems

We used nine million qubits (nodes) in the simulations above in order to show the limiting behavior of cluster creation. However, the same qualitative behavior is observed in smaller systems. In Fig. 5a, we plot the mean (line) and standard deviation (shading) of f_LCC as a function of time for 5 × 5 (green) and 10 × 10 (red) square lattices (p_0 = 0.02%). In the simulation, we generated 300 lattice instances at each time with periodic boundary conditions and calculated the statistical mean and standard deviation. The transition from small disconnected islands to a large cluster on the order of the lattice size becomes sharper as the size of the system increases, but even for a 5 × 5 qubit system there is a clearly visible transition near the percolation threshold (dashed line). Because of the small system size, there is statistical variation in f_LCC; the shaded regions represent one standard deviation. As expected, the relative variation becomes smaller in larger systems.
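The small-system behavior is easy to reproduce qualitatively with a few lines of Monte Carlo. The sketch below (ours, not the authors' Newman-Ziff code) samples bond percolation on a periodic L × L square lattice using union-find and estimates f_LCC at a fixed bond probability below and above p_c = 0.5, rather than as a function of time:

```python
import random

def largest_cluster_fraction(L: int, p: float, rng: random.Random) -> float:
    """f_LCC for one bond-percolation sample on a periodic L x L square lattice."""
    n = L * L
    parent = list(range(n))
    size = [1] * n

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: int, b: int) -> None:
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]

    for x in range(L):
        for y in range(L):
            i = x * L + y
            # right and down neighbors (periodic), so each bond is tried once
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if rng.random() < p:
                    union(i, j)
    return max(size[find(i)] for i in range(n)) / n

rng = random.Random(0)
below = sum(largest_cluster_fraction(10, 0.3, rng) for _ in range(100)) / 100
above = sum(largest_cluster_fraction(10, 0.7, rng) for _ in range(100)) / 100
print(f"mean f_LCC: p = 0.3 -> {below:.2f},  p = 0.7 -> {above:.2f}")
```

Even at L = 10, the contrast between the subcritical and supercritical averages is stark, mirroring the clearly visible transition reported for the 5 × 5 and 10 × 10 systems in Fig. 5a.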
Fault tolerance

It is possible to obtain a regular lattice from the percolated lattices in Fig. 3 with constant overhead by finding crossing paths and using single-qubit measurements (renormalization). 8-11 The single-qubit measurements used to obtain the regular lattice may introduce additional errors, but we can adapt our architecture to the Raussendorf lattice. 12,13 The Raussendorf lattice is a 3D lattice of degree 4 (Fig. S3a) and a means of translating surface-code error correction into the cluster-state model of quantum computation. It can be constructed in a 2+1D architecture 49 in which qubits are arranged in 2D and the additional dimension is constructed in time (Fig. S3b). Then one needs only two layers of 2D lattices to implement the 2+1D lattice, because memories are reused after measurement (Fig. S3c). This architecture requires a small number of waveguide crossings, which can be implemented with low loss. 50 Alternatively, they can be absorbed into the optical switches.
Recent results have evaluated fault tolerance in the Raussendorf lattice with non-deterministic entangling gates. 26,51 Following ref. 51, fault tolerance requires the bond probability (p) and measurement error to lie in the two shaded regions in Fig. 5b, which correspond to adaptive and non-adaptive measurement schemes. In the non-adaptive scheme, qubits are measured in the X-basis regardless of bond failures; in the adaptive scheme, one of the qubits connected to a missing bond is measured in the Z-basis, transforming bond loss into site loss. The error threshold of each scheme vanishes at p ≈ 0.935 and p ≈ 0.855, respectively. These values correspond to the site percolation threshold of the Raussendorf lattice (q_c ≈ 0.75) when lost bonds are transformed into missing sites (see ref. 51 for details).
We assume that the single-qubit measurement error probability increases with time as 1 − e^(−t/t_coh), where we use t_coh = 1 s. At t = 0, the measurement error probability and bond probability (p) are both zero. Both probabilities increase as we spend more time attempting entanglement generation, resulting in the curves shown in Fig. 5b, which depend on the entanglement success probability per attempt (p_0). Assuming a bond trial time t_0 = 5 µs (Table 1), we find that p_0 ≈ 4 × 10⁻³ is required to meet the fault-tolerance threshold, corresponding to an NV-to-detector coupling efficiency of ≈ 9%.
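The ≈9% efficiency figure follows directly from inverting the Barrett-Kok relation p_0 = η²/2; a small sketch under that assumption (helper names ours):

```python
import math

def eta_required(p0: float) -> float:
    """End-to-end NV-to-detector efficiency implied by a target per-attempt
    success probability, inverting p0 = eta**2 / 2."""
    return math.sqrt(2 * p0)

def measurement_error(t: float, t_coh: float = 1.0) -> float:
    """Time-dependent single-qubit measurement error model, 1 - exp(-t/t_coh)."""
    return 1.0 - math.exp(-t / t_coh)

print(f"eta for p0 = 4e-3: {eta_required(4e-3):.3f}")  # ~0.089, i.e. ~9%
print(f"error after 1000 trials of t0 = 5 us: {measurement_error(1000 * 5e-6):.4f}")
```

Because the measurement error grows with total protocol time while the bond probability grows with the number of attempts, a larger p_0 traces out a more favorable curve in Fig. 5b, which is why the fault-tolerance requirement translates into a minimum collection efficiency.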
As an example, we estimated the resource overhead for fault-tolerant factorization of 2000-bit numbers using Shor's algorithm (Fig. 6). Various methods from refs. 52-64 are used for the calculation. Reference 54, in particular, has the lowest resource overhead even with the additional overhead from space-time layout and routing. With the assumptions on the error model described in the Supplementary Discussion, 2000-bit Shor's algorithm requires 2.9 billion (280 million) qubits and 2.2 hours (43 minutes) of computation given a physical error rate p = 10⁻³ (10⁻⁵). Alternatively, 57 million (3.8 million) physical qubits can run the algorithm in a day (half a day) as a result of a space-time trade-off. Essentially, the large number of high-purity T-gates needed for modular exponentiation in Shor's algorithm requires a formidable number of physical qubits. A lower bound is set at 12.2 million (1.46 million) physical qubits for 6000 logical qubits, i.e., 2028 (243) physical qubits per logical qubit for a distance-25 (distance-8) surface code.

DISCUSSION
We assumed here that bit-flip and phase-flip probabilities are equal. However, our results may be significantly improved using recent work on tailoring the surface code 65 if the noise is biased. We consider other important properties of NVs in the Supplementary Notes. For example, NVs can be ionized under strong optical pulses, but their charge state can be recovered by optical repumping. We designed the microwave and optical pulse sequences so that the nuclear spin is not disturbed by failed trials and subsequent spin-state initialization. However, precise control of the microwave and optical transitions presents technical challenges and marks an area of active research. 18,66 Future work should also refine the error model to explicitly include contributions from the higher-order internal dynamics of NVs.
Though there has been great progress in reducing the resource overhead of fault-tolerant quantum computing, it still requires a large number of physical qubits and long computation times owing to slow clock cycles. In this respect, noisy intermediate-scale quantum (NISQ) technology 67 may be a more near- to medium-term path for our proposal. Two observations favor the NISQ direction: the proposed system behaves well with only 25 qubits, as shown in Fig. 5a, and the LCC size scales linearly with the total number of qubits in the supercritical regime. Thus, quantum simulation on this architecture quickly reaches thermodynamic behavior. 68

We emphasize that the architecture is compatible with any quantum memory with sufficient coherence time. In particular, silicon-vacancy centers and other group-IV emitters such as GeV, SnV, or PbV have shown promising emission properties. 69,70 For example, silicon-vacancy centers have a high Debye-Waller factor of ≈ 0.7 with a narrow inhomogeneous distribution of ≈ 51 GHz. 71 The emission wavelengths of different emitters can be matched by strain tuning, with a tuning range of up to ≈ 440 GHz. 72 A spin coherence time of 10 ms has been demonstrated at dilution-refrigerator temperatures. 73

Fig. 6 Number of physical qubits vs. time for factoring 2000-bit numbers with Shor's algorithm, with a physical error rate p = 10⁻³ for a and p = 10⁻⁵ for b. Darker lines assume a bond trial time t_0 = 1 µs, and lighter lines t_0 = 100 ns. See the Supplementary Discussion for detailed information and calculation. Results marked with red lines use (double-defect) braiding qubits 52 with two-step 15-to-1 distillation for high-purity T-gate creation. Green lines show results with lattice-surgery qubits 62 and catalyzed-2T factories. 53 Windowed arithmetic and AutoCCZ factories dramatically reduce the resource overhead (blue). 54 The last result incorporates the space-time layout, 64 implying that the improvement is even larger. Results are terminated on the left-hand side by either the measurement time (red and green) or the surface-code cycle time (blue), and on the right-hand side by logical error rates.