Introduction

Quantum communication unlocks network applications that are provably impossible to realize using only classical communication. One striking example is secure communication using quantum key distribution1,2, but many other applications are known, such as secret sharing3 and clock synchronization4. Several stages of quantum network development have been identified5, where each higher stage offers the potential to execute ever more advanced quantum network applications, at the expense of making higher demands both on the end nodes running the applications and on the network that connects them.

Efficiently distributing quantum states over long distances is an outstanding technological challenge. Direct transmission of photons, which carry quantum information over optical fiber, suffers a loss that grows exponentially with the length of the fiber. Quantum repeaters6,7 promise to enable quantum communication over global distances, mitigating fiber loss through the introduction of intermediary nodes. A variety of repeater platforms have been proposed (see, e.g., refs. 8,9), including repeaters featuring quantum memories such as atomic ensembles10,11, and processing nodes12,13,14 that are capable not only of storing quantum information but also of performing quantum gates. Examples of processing nodes include trapped ions15,16, neutral atoms16,17, and color centers such as nitrogen-vacancy (NV), silicon-vacancy (SiV) or tin-vacancy (SnV) centers in diamond18. Despite proof-of-principle demonstrations of repeater nodes17,19, as well as of entanglement swapping via an intermediary processing node20, no quantum repeater bridging long distances has been realized to date.
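
To make the scale of this loss concrete: a photon traversing a fiber of length L with attenuation coefficient α survives with probability

$${P}_{{{\mathrm{transmit}}}}={10}^{-\alpha L/10},$$

so that, at the nominal attenuation of 0.2 dB km−1, direct transmission over the 226.5 km of fiber separating Delft and Eindhoven considered below succeeds with probability only 10−4.53 ≈ 3 × 10−5 per photon.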

Part of the challenge in building quantum repeaters is that their hardware requirements remain largely unknown. Extensive studies have been conducted to estimate such requirements, both analytically (see, e.g., refs. 12,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43) and using numerical simulations (see, e.g., refs. 44,45,46,47,48,49,50,51,52). While these studies greatly inform our understanding of the minimal hardware requirements needed to bridge long distances, they have mostly been conducted in idealized settings in which all repeaters are equally spaced and each fiber segment is assumed to have a uniform loss, typically of 0.2 dB km−1 (exceptions are refs. 12,40,41). Furthermore, with few exceptions12,28,41,43,50, such studies only provide rough approximations of time-dependent noise, and do not take into account platform-specific physical effects such as noise on the memory qubits during entanglement generation in NV centers53 or collective Gaussian dephasing in ion traps (see Fig. 1).

Fig. 1: Investigated setup.

a Satellite photo of the Netherlands overlaid with a depiction of the hypothetical one-repeater connection between the Dutch cities of Delft and Eindhoven that we investigate. The white circles represent processing nodes, connected to each other and to heralding stations through fiber drawn in white. The black dots within the processing nodes represent qubits (the distinction between communication and memory qubits is not represented here). The placement of nodes and heralding stations is constrained by the fiber network, and their position in the figure roughly approximates their actual geographic location. All distances are given in kilometers, with a total fiber distance between Delft and Eindhoven of 226.5 km. b Heralding station. Photons emitted by a processing node travel through the optical fiber and are interfered at a beam splitter. Photon detection heralds entanglement between processing nodes. This process is affected by the overall probability that emitted photons are detected; the coincidence probability, i.e., the probability that the photons arrive within the same time window; the imperfect indistinguishability of the photons, as measured by the visibility; and dark counts in the detector. c Color center in diamond, one of the processing nodes we investigate. We consider an optically-active electronic spin used as a communication qubit, and a carbon spin used as a memory qubit. Decoherence in both qubits is modeled through amplitude damping and phase damping channels with characteristic times T1 and T2, respectively; these differ between the two qubits. The existence of an always-on interaction between the qubits allows for the execution of two-qubit gates, but also means that entangling attempts with the communication qubit induce noise on the memory qubit. d Ion trap, the other processing node we investigate. We consider two optically active ions trapped in an electromagnetic field generated by electrodes, whose energy levels are used as qubits. The ions interact through their collective motional modes, which enables the implementation of two-qubit gates. They are subject to collective Gaussian dephasing noise characterized by a coherence time.

Results

Summary of results

Here, we present a study that takes into account time-dependent noise, platform-specific noise sources and classical control communication, as well as constraints imposed by a real-world fiber network, and that optimizes over the parameters of the repeater protocols used to generate entanglement. Our investigation is conducted using fiber data from SURF, an organization that provides connectivity to educational institutions in the Netherlands. Specifically, we consider a network path connecting the Dutch cities of Delft and Eindhoven, separated by 226.5 km of optical fiber (see Fig. 1a). In placing equipment, we restrict ourselves to SURF locations, which leads to the repeater being located closer to Delft than to Eindhoven. Nor can the intermediary stations used for heralded entanglement generation (see Fig. 1b) be placed equidistantly from both nodes, as is generally assumed in idealized studies. We emphasize that we restrict ourselves to existing infrastructure and, therefore, do not investigate the possibility of altering the fiber links. We direct the interested reader to related work that focuses on determining hardware requirements while taking into account how many repeaters to use and where to place them54.

We consider the case where the network path is used to support an advanced quantum application, namely Verifiable Blind Quantum Computation (VBQC)55, with a client located in Eindhoven and a powerful quantum-computing server located in Delft. We chose VBQC because blind-quantum-computing protocols have attracted much interest since their introduction and are widely cited as one of the principal future applications of quantum networks (see, e.g., refs. 55,56,57,58,59,60,61,62). While VBQC is somewhat unusual in that it is highly asymmetric in the resources it requires from client and server, it is representative of many other quantum-networking applications in that it requires multiple live qubits. Additionally, the noise resilience of the specific VBQC protocol we consider55 makes it particularly suitable for studying the performance of such applications in the presence of hardware imperfections. Specifically, we consider the smallest instance of VBQC, in which two entangled pairs are generated between the client and the server. This entanglement is used to send qubits from the client to the server; we show in Supplementary Note 2 that this can be done through remote state preparation63. To set the requirements of our quantum-network path, we impose that its hardware must be good enough to execute VBQC with the largest acceptable error rate55. This demand can be translated into requirements on the fidelity and rate at which entanglement is produced. Both depend on the lifetime of the server’s memory, as the server needs to be able to wait until both qubit states have been generated before it can begin processing. The requirements on the fidelity and rate can also be understood as the fidelity and rate at which we can deterministically teleport unknown data qubits between the client and the server. Therefore, while our investigation focuses on VBQC, our results can also be interpreted from the perspective of quantum teleportation.

In our study, we obtain the following results, described in more detail below. First, we investigate the minimal hardware requirements needed to realize target fidelities and rates that allow executing VBQC using our network path. These correspond to the minimal improvements over state-of-the-art hardware parameters that enable meeting the targets. Specifically, we consider parameters measured for networked color centers (specifically, for NV centers in diamond)20,64,65,66,67,68,69,70 and ion traps71,72,73,74,75,76,77,78. We find that considerable improvements are needed even to bridge relatively modest distances, with our study also shedding light on which parameters require significantly more improvement than others. To obtain this result, we have built an extensive simulation framework on top of the discrete-event simulator NetSquid79, which includes models of color centers (specifically adapted from NV centers in diamond), ion traps and a general abstract model applicable to all processing nodes, as well as different schemes of entanglement generation. Our framework can be readily re-configured to study other network paths of this form, including the ability to configure other types of processing-node hardware or entanglement-generation schemes. Being able to simulate the Delft-Eindhoven path, we then perform parameter optimization based on genetic algorithms to search for parameter improvements that minimize a cost function (see Section “Methods” for details) on SURF’s high-performance-computing cluster Snellius.

Second, we examine the absolute minimal requirements for all parameters in our models (for color centers and ion traps), if all other parameters are set to their perfect value (except for photon loss in fiber). We observe that the minimal hardware requirements impose higher demands on each individual parameter than the absolute minimal requirements. This highlights potential dangers in trying to maximize individual parameters without taking into account global requirement trade-offs. However, somewhat surprisingly, we find that the absolute minimal requirements are typically of the same order of magnitude as the minimal requirements, and can, therefore, still be valuable as a first-order approximation. Our results are obtained using the same NetSquid simulation, by incrementally increasing the value of a parameter until the target requirements are met.

Finally, we investigate whether the idealized network paths usually employed in the repeater literature would lead to significantly different minimal hardware improvements. Specifically, in such idealized setups, all repeaters and heralding stations are equally spaced, all fibers are taken to have 0.2 dB km−1 attenuation, and the models employed for the processing-node hardware are largely platform-agnostic. We find that considering real-world network topologies such as the SURF grid imposes significantly more stringent demands.

Let us now be more precise about the setup of our network path, as well as the requirements imposed by VBQC:

Quantum-network path

The network path we consider consists of three processing nodes that are assumed to all have the same hardware. That is, the stated hardware requirements are sufficient for all nodes and we do not differentiate between the three nodes. On an abstract level, all processing nodes have at least one so-called communication qubit, which can be used to generate entanglement with a photon. The repeater node in the middle (Nieuwegein, Fig. 1a) has two qubits available (at least one of which is a communication qubit) that it can use to simultaneously hold entanglement with the node in Delft, as well as the one in Eindhoven. Once entanglement has been generated with both Delft and Eindhoven, the repeater node may perform an entanglement swap6 in order to create end-to-end entanglement between Delft and Eindhoven (see Fig. 2). On processing nodes, such a swap can be realized deterministically, i.e., with success probability 1, since it can be implemented using quantum gates and measurements on the processor. We note that even when the gates and measurements are noisy, the swap remains deterministic, although it will induce noise on the resulting entangled state.
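
For ideal Bell pairs, the entanglement swap can be written out explicitly (a textbook identity for the noiseless case, not the noisy circuit simulated in this work). Labeling the Delft and Eindhoven qubits A and B and the two repeater qubits R1 and R2, one has

$$\left\vert {\Phi }^{+}\right\rangle _{A{R}_{1}}\otimes \left\vert {\Phi }^{+}\right\rangle _{{R}_{2}B}=\frac{1}{2}\mathop{\sum }\limits_{k=0}^{3}\left\vert {B}_{k}\right\rangle _{{R}_{1}{R}_{2}}\otimes \left({\sigma }_{k}\otimes {\mathbb{1}}\right)\left\vert {\Phi }^{+}\right\rangle _{AB},$$

where the \(\left\vert {B}_{k}\right\rangle =({\sigma }_{k}\otimes {\mathbb{1}})\left\vert {\Phi }^{+}\right\rangle\) are the four Bell states and σk ∈ {𝟙, X, Y, Z} (up to global phases). Measuring R1 and R2 in the Bell basis thus yields each outcome k with probability 1/4 and always leaves A and B in a known Bell state, which can be converted to \(\left\vert {\Phi }^{+}\right\rangle\) by the local Pauli correction σk; this is why the swap succeeds on every outcome.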

Fig. 2: Protocol executed in the setup we investigate.
figure 2

a No entanglement is shared a priori. E.N. stands for End Node, R.N. stands for Repeater Node and H.S. stands for Heralding Station. b Entanglement generation attempts begin along the longer link, which connects the repeater node to the Eindhoven node. c After entanglement has been established along the longer link, attempts for entanglement generation along the shorter link start. In case this takes longer than a given cut-off time, the previously generated entanglement is discarded and we go back to (b). d After entanglement is generated on both links, the repeater node performs an entanglement swap, creating an end-to-end entangled state. e The Delft node maps its half of the state to a powerful quantum-computing server, while the Eindhoven node measures its half.

For all types of processing nodes, we here assume the repeater to act sequentially41 due to hardware restrictions. That is, it can only generate entanglement with one of the other two nodes at a time. To minimize the memory requirements at the repeater node (Nieuwegein), we will always first produce entanglement with the farthest node (Eindhoven). Once this entanglement has been produced, the repeater generates entanglement with the closest node (Delft). To combat the effect of memory decoherence, entangled qubits are discarded after a cut-off time41. This means that if entanglement between Delft and Nieuwegein is not produced within a specific time window following the successful generation of entanglement between Nieuwegein and Eindhoven, all entanglement is discarded and we restart the protocol by regenerating entanglement between Nieuwegein and Eindhoven. Classical communication is used to initiate entanglement generation between nodes and notify all nodes when swaps or discards are performed.
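
The sequential logic with a cut-off can be illustrated with a toy Monte Carlo model (all success probabilities and attempt durations below are hypothetical placeholders; classical-communication delays and the noise modeling of the full simulation are ignored):

```python
import random

def sequential_repeater(p_long, p_short, t_long, t_short, t_cutoff):
    """Toy model of the sequential protocol of Fig. 2 (illustrative only).

    The repeater first entangles the longer link (to Eindhoven), then races the
    shorter link (to Delft) against the cut-off; on expiry, the stored pair is
    discarded and the round restarts. Returns the elapsed time until both links
    hold entanglement simultaneously, i.e., until the swap can be performed.
    """
    elapsed = 0.0
    while True:
        # Long link: attempt until heralded success.
        elapsed += t_long
        while random.random() >= p_long:
            elapsed += t_long
        # Short link: attempt until success, giving up at the cut-off time.
        waited = 0.0
        while waited < t_cutoff:
            elapsed += t_short
            waited += t_short
            if random.random() < p_short:
                return elapsed  # both links live: the repeater can now swap
        # Cut-off exceeded: discard the stored pair and restart.

# Example: estimate the mean delivery time over 1000 rounds.
times = [sequential_repeater(1e-3, 1e-2, 5e-4, 2e-4, 0.05) for _ in range(1000)]
print(sum(times) / len(times))
```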

We consider three types of processing nodes (see Fig. 1c, d): (1) color centers, specifically modeled on NV centers in diamond, (2) ion traps and (3) a general abstract model applicable to all processing nodes. Let us now provide more specific details on each of these models required for the parameter analysis below.

  (1)

    NV centers are a prominent example of color centers for which significant data is available from quantum-networking experiments20,64,65,66,67,68. Here, the color center’s optically-active electronic spin is employed as a communication qubit. The second qubit is given by the long-lived nuclear spin of a carbon-13 atom, which is coupled to the communication qubit and used as a memory qubit. Our color-center model accounts for the following:

    • Restricted topology, with one optically-active communication qubit and one memory qubit (note however that larger registers have been realized, for example in ref. 70);

    • Restricted gate set, with arbitrary rotations on the communication qubit, Z-rotations on the memory qubit and a controlled rotation gate between the two qubits;

    • Depolarizing noise in all gates, bit-flip noise in measurement;

    • Qubit decoherence in memory modeled through amplitude damping and dephasing channels with decay times T1 and T2 (we consider the experimentally-realized times of T1 = 1 h (10 h) and T2 = 0.5 s (1 s) for the communication (memory) qubit68,69,70);

    • Induced dephasing noise on the memory qubit whenever entanglement generation using the communication qubit is attempted20,53.

    The efficiency of the photonic interface in NV centers is limited to 3% due to the zero-phonon line (ZPL). It is likely that executing VBQC using the path we investigate will require overall photon detection probabilities higher than 3%. Little data is presently available for other color centers (SiV, SnV). We hence focus on the NV model, but do allow a higher emission probability, which could be achieved either by using a color center with a more favorable ZPL (65–90% for SiV18, 57% for SnV18), or by placing the NV in a cavity80. More details about our color-center model, and a validation of the model against experimental data for NV centers, can be found in Supplementary Note 1.

  (2)

    Trapped ions are charged atoms suspended in an electromagnetic trap, the energy levels of which can be used as qubits. Our trapped-ion model accounts for the following:

    • Two identical, optically active ions in a trap;

    • Restricted gate set as described in ref. 81, with arbitrary single-qubit Z rotations, arbitrary collective rotations around axes in the XY plane, and an entangling Mølmer-Sørensen gate82;

    • Depolarizing noise in all gates, bit-flip noise in measurement;

    • Qubit decoherence modeled as collective Gaussian dephasing, with a characteristic coherence time50;

    • Off-resonant scattering that adds a random delay to the emission time of photons, which is counteracted using a tunable coincidence time window (as captured by a toy model introduced in Supplementary Note 4).

    More details about our trapped-ion model, and a validation of the model against experimental data, can be found in Supplementary Note 1.

  (3)

    We further investigate an abstract, platform-agnostic processing-node model. This model accounts for depolarizing noise in all gates and in photon emission, as well as amplitude-damping and phase-damping noise in the memory. It does not account for any platform-specific restrictions on topology, gate set or noise sources. Later on, we show that using the abstract model instead of hardware-specific models leads to an inaccurate picture of minimal hardware requirements. Even so, the abstract model can be valuable to study systems for which hardware-specific models are as of yet unavailable. Additionally, we note that the smaller number of hardware parameters in the abstract model as compared to the hardware-specific models means that the parameter space can be explored more efficiently, making it easier to, e.g., find minimal hardware parameters.

To entangle two processing nodes, one can use different schemes for entanglement generation, and we here consider the so-called single-click83 and double-click schemes84. Both of these start with two distant nodes generating matter-photon entanglement and sending the photon to a heralding station. In the single (double)-click protocol, matter-matter entanglement is heralded by the detection of one (two) photons after interference. The trapped-ion nodes we investigate perform only double-click entanglement generation as single-click entanglement generation has not been realized for the type of trapped-ion devices we consider, i.e., trapped ions in a cavity. The color-center nodes and abstract nodes perform both single and double click. Our entanglement-generation models account for the following physical effects:

  • Emission of the photon in the correct mode, modeled through a loss channel;

  • Imperfect photon emission modeled through a depolarizing channel;

  • Capture of the photon into the fiber, modeled through a loss channel;

  • Photon frequency conversion, modeled through a loss channel (as a first-order approximation, we assume this is a noiseless process);

  • Photon attenuation in fiber, modeled through a loss channel;

  • Photon delay in fiber;

  • Photon detection at the detector, modeled through a loss channel;

  • Detector dark counts;

  • Photon arrival at the detector at different times;

  • Imperfect photon indistinguishability.

While photon attenuation losses depend on the characteristics (such as the length) of the fiber that is used to deploy a quantum network, the other losses depend only on the quantum hardware that is used. For convenience, we collect all the hardware-related losses into a single parameter, called the photon detection probability excluding attenuation losses.
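
As an illustration of how this parameter composes (the per-stage values below are hypothetical, chosen only to show the bookkeeping):

```python
# Hypothetical hardware-related (length-independent) transmission probabilities
# for a single emitted photon; the real values are platform-dependent.
stages = {
    "emission_in_correct_mode": 0.5,
    "capture_into_fiber": 0.5,
    "frequency_conversion": 0.3,
    "detector_efficiency": 0.8,
}

# The photon detection probability excluding attenuation losses is the product
# of all hardware-related loss factors; fiber attenuation is handled separately.
p_det = 1.0
for p in stages.values():
    p_det *= p
print(f"photon detection probability excluding attenuation losses: {p_det:.3f}")
# -> 0.060
```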

The hardware parameters used in our models are based on quantum-networking experiments with NV centers (single-click20,66,67,68 and double-click64,65), and trapped ions (double-click72).

Blind quantum computation

Having discussed our modeling of the path between Delft and Eindhoven, we turn to the end nodes.

Both end nodes are processing nodes. The end node in Eindhoven takes the role of the client in the VBQC protocol. In Delft, there is not only an end node but also a powerful quantum-computing server. Once entanglement has been established, the end node in Delft transfers its half of the entangled state to this server. The client in Eindhoven simply measures its half of the entangled state. The Delft scenario is similar to the setting investigated in ref. 85, where the authors consider an architecture in which a node contains two NV centers, one used for networking and the other for computing. Here, we make some simplifying assumptions that allow us to focus on the network path: we take the state-transfer process to be instantaneous and noiseless, and assume that the computing node is always available to receive the state. Further, we assume that the quantum gates performed by the server are noiseless and instantaneous, and that its qubits are subject to depolarizing noise with memory coherence time T = 100 s. Because of these assumptions, the requirements we find are limited primarily by imperfections in the network path itself rather than in the computing node.

We investigate hardware requirements on three processing nodes (two end nodes and one repeater node) so that a client in Eindhoven can perform 2-qubit VBQC, a particular case of the protocol described in ref. 55, using the Delft server. In this protocol, the client prepares qubits at the server, which are then used to perform either computation or test rounds. In test rounds, the results of the computation returned by the server are compared to expected results. The protocol is only robust to noise if the noise does not cause too large an error rate. The protocol is shown in ref. 55 to remain correct if the maximal probability of error in a test round can be upper-bounded by 25%. We prove in Supplementary Note 2 that the protocol is still correct if the average probability of error in a test round can be upper-bounded by 25%. We further prove in the same supplementary note that if the entangled pairs distributed by the network path can be used to perform quantum teleportation at a given rate and quality, the protocol can be executed successfully. Namely, this is true if the average fidelity at which unknown pure quantum states can be teleported using the entangled pairs distributed by the network path (Ftel) and the entangling rate R satisfy a specific bound. We note that this bound takes into account potential jitter in the delivery of entanglement (i.e., the fact that the time required to generate entanglement, and hence the time entangled states need to be stored in memory, can fluctuate around its expected value). We consider two distinct pairs of Ftel and R that satisfy this bound as our target metrics, namely:

  • Target 1: Ftel = 0.8717, R = 0.1 Hz,

  • Target 2: Ftel = 0.8571, R = 0.5 Hz.

The choice of these specific values was motivated by the fact that there is no fidelity Ftel ≤ 1 for R ≈ 0.014 Hz such that the VBQC condition is satisfied; therefore, all target rates should satisfy R > 0.014 Hz, preferably with some margin to avoid trivial solutions. Additionally, Target 1 is achievable using either the single-click or the double-click protocol and using either one or zero repeaters on the fiber path under consideration, given sufficient hardware improvements. In contrast, Target 2 is achievable only using the single-click protocol and one repeater (see also Supplementary Note 9 C and D). This suggests that the difference between the two targets is large enough to lead to significantly different results.

The derivation of this bound assumes that the client prepares qubits at the server by first generating them locally and then transmitting them to the server using quantum teleportation. We note that, alternatively, the remote-state-preparation protocol63 can be used, which will likely be more feasible in a real experiment as it requires fewer quantum operations by the client. In Supplementary Note 2, we describe how the VBQC protocol55 can be performed using remote state preparation. Note, however, that we have not investigated the security of the protocol in this case. We show that, under the assumption that local operations are noiseless, quantum teleportation and remote state preparation lead to exactly the same requirements on the network path. Thus, in case the target is met, VBQC can be successfully executed using either quantum teleportation or remote state preparation. Lastly, we note that there is a linear relation between the average teleportation fidelity Ftel and the fidelity of the entangled pair86.
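
Concretely, for qubit teleportation this linear relation takes the standard form (with F the overlap of the distributed pair with a maximally entangled state, i.e., its fully entangled fraction; we assume here that this is the relation intended in ref. 86):

$${F}_{{{\mathrm{tel}}}}=\frac{2F+1}{3},$$

so that Target 1 (Ftel = 0.8717) and Target 2 (Ftel = 0.8571) correspond to pair fidelities of approximately F ≈ 0.808 and F ≈ 0.786, respectively.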

Minimal hardware requirements

Here, we aim to find the smallest improvements over current hardware that enable generating entanglement suitable for VBQC. These are shown at the bottom of Fig. 3 for color centers (left) and trapped ions (right). In the table at the top of Fig. 3, we show a selection of the actual values for the minimal hardware requirements (the set of parameters representing the smallest improvement over state-of-the-art parameters; see Section “Methods” for details on how we determine this), as well as the absolute minimal requirements (the minimal value for each parameter assuming that every other parameter except for photon loss in fiber is perfect). All the parameters are explained in Section “Methods”, and the state-of-the-art values we consider are given in Table 1.

Fig. 3: Improvements required to connect the Dutch cities of Delft and Eindhoven using color-center (CC) and trapped-ion (TI) repeaters for an entanglement-generation rate of 0.1 Hz and an average teleportation fidelity of 0.8717 (Target 1) and a rate of 0.5 Hz and average teleportation fidelity of 0.8571 (Target 2).

a The values that are required for the photon detection probability excluding attenuation losses and the coherence time. The baseline parameter values have been demonstrated in state-of-the-art experiments. The absolute minimal requirements are the required parameter values assuming that there are no other sources of noise or loss with the exception of fiber attenuation. The coherence-time values in the table are the communication-qubit dephasing time for CC and the collective dephasing time for TI (see Section “Methods” for an explanation of these parameters). The TI requirements are for running a double-click entanglement-generation protocol. The CC requirements are for running a double-click protocol for Target 1, and a single-click protocol for Target 2. We note that all the minimal requirements found have a photon detection probability excluding attenuation losses above 30%, the current state-of-the-art value for frequency conversion75. b, c Directions along which hardware must be improved to connect the Dutch cities of Delft and Eindhoven using, respectively, a CC or a TI repeater. The further away the line is from the center towards a given parameter, the larger the improvement that parameter requires. Improvement is measured in terms of the “improvement factor”, which tends to infinity as a parameter tends to its perfect value (see Section “Methods” for the definition). In both plots, a logarithmic scale is used. The origin of the plots corresponds to an improvement factor of 1, i.e., no improvement with respect to the state of the art. b (CC): the blue (orange) line corresponds to the minimal requirements for Target 1 (Target 2). Improvement is depicted for the following parameters, clockwise from the top: photon detection probability excluding attenuation losses in fiber, dephasing time of the communication qubit, dephasing time of the memory qubit, noise in the two-qubit gate, visibility of photon interference and dephasing noise induced on memory qubits when entanglement generation is attempted. c (TI): the line corresponds to the minimal requirements for Target 1. Improvement is depicted for the following parameters, clockwise from the top: photon detection probability excluding attenuation losses in fiber, qubit collective dephasing coherence time, spin-photon emission fidelity, visibility of photon interference and probability that two emitted photons coincide at the detection station. All parameters are explained in Section “Methods”, and the state-of-the-art values that are being improved upon are given in Table 1.

Table 1 State-of-the-art color center and trapped-ion hardware parameters.

The minimal color-center hardware requirements for Target 1 (blue line in Fig. 3, bottom left) correspond to the usage of a double-click protocol, as we found that this allows for laxer requirements than using a single-click protocol. On the other hand, the minimal requirements for Target 2 (orange line in Fig. 3, bottom left) correspond to the usage of a single-click entanglement-generation protocol. This is because achieving Target 2 in the setup we studied is not possible at all with a double-click protocol even if every parameter except for photon loss in fiber is perfect. Therefore, and since we do not model single-click entanglement generation with trapped ions, the bottom-right plot of Fig. 3 depicts only the requirements for trapped ions to achieve Target 1.

We thus find that, in the setup we investigated, performance targets with relatively higher fidelity and lower rate are better met using a double-click protocol, whereas higher rates can only be achieved with single-click protocols. This was to be expected, as (a) states generated with single-click protocols are inherently imperfect, even with perfect hardware, and (b) the entanglement-generation rate of double-click protocols scales poorly with both distance and detection probability, owing to the fact that two photons must be detected to herald success.
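
Schematically, for a midpoint heralding station, a small bright-state parameter α and negligible dark counts, the per-attempt success probabilities behave to first order as

$${P}_{{{\mathrm{single}}}}\approx 2\alpha \,{p}_{{{\mathrm{det}}}}\sqrt{\eta },\qquad {P}_{{{\mathrm{double}}}}\approx \frac{1}{2}\,{p}_{{{\mathrm{det}}}}^{2}\,\eta ,$$

where η is the end-to-end fiber transmission of the link and pdet the photon detection probability excluding attenuation losses (these are rough first-order expressions for intuition only; the simulations use the full models). The quadratic dependence on pdet and the full factor of η, rather than \(\sqrt{\eta }\), are what make double-click rates fall off faster.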

Absolute minimal requirements

We now aim to find the minimal parameter values that enable meeting the targets if the only other imperfection were photon loss in fiber. These are the absolute minimal requirements, presented in the table at the top of Fig. 3. We observe that while there is a gap between them and the minimal hardware requirements, it is perhaps surprisingly small. For example, the minimal photon detection probability excluding attenuation losses required to achieve Target 1 with color centers is roughly 1.5 times larger than the corresponding absolute minimal requirement. However, both requirements represent a three-order-of-magnitude increase with respect to the state of the art, which makes a factor of 1.5 seem small in comparison.

We remark on the feasibility of achieving the minimal hardware requirements for color centers. NV centers, on which we have based the state-of-the-art parameters used in this work, are the color centers most extensively used in quantum-networking experiments to date (see ref. 18 for a review). As discussed in Section “Results”, the efficiency of the photonic interface in this system is limited to 3% by the zero-phonon line. Both targets we investigated place an absolute minimal requirement on the photon detection probability excluding attenuation losses above this value. Improving the photonic interface of NV centers beyond the limit imposed by the zero-phonon line is only possible through integration of the NV center into a resonant cavity80. Alternatively, other color centers with a more efficient photonic interface could be considered for long-distance quantum communication18.

Hardware requirements in simplified settings

Since we made use of real-life fiber data and elaborate, platform-specific hardware models, the results above would be difficult to obtain analytically. For instance, collective Gaussian dephasing in ion traps could be challenging to analyze. Analytical results are, however, attractive, as they provide a more intuitive picture of the problem at hand. To obtain them, an approach commonly taken in the literature is to simplify the setup under study so that it becomes analytically tractable. A common simplification is to assume what we call the standard scenario, in which nodes and heralding stations are equally spaced and the fiber attenuation is 0.2 dB km−1 throughout. Another common simplification is to consider simplified physical models for the nodes and the entangled states they generate (see, among others, refs. 10,39,87,88). In order to investigate how hardware requirements change when such simplifications are used, we now apply our methodology to these two simplified situations and compare the resulting hardware requirements with those for our setup. We hope to understand whether considering these simplified setups leads to similar results, indicating that the simplifying approach is a good one, or whether doing so paints an unrealistic picture of the hardware requirements, which would favor our approach.

Effect of existing fiber networks on hardware requirements

We investigate how the hardware requirements in the standard scenario differ from the fiber-network-based setup. We thus present in Fig. 4 a comparison of the hardware requirements for color centers in the two situations. In both cases, we consider double-click entanglement generation, targeting an entanglement-generation rate of 0.1 Hz and an average teleportation fidelity of 0.8717.

Fig. 4: Hardware requirements for connecting the Dutch cities of Delft and Eindhoven using a color center repeater performing double-click entanglement generation on an actual fiber network (blue) and assuming the standard scenario (orange, dashed).

Requirements are for achieving an entanglement-generation rate of 0.1 Hz and an average teleportation fidelity of 0.8717. Parameters shown are, from top to bottom: visibility of photon interference, dephasing noise induced on memory qubits when entanglement generation is attempted, dephasing time of communication qubit, dephasing time of memory qubit, photon detection probability excluding attenuation losses in fiber and two-qubit gate fidelity.

Significant improvements over the state of the art are required in both scenarios, but the magnitude of these improvements would be understated if one were to consider the standard scenario and ignore existing fiber infrastructure. For example, doing so would lead to underestimating the required coherence time of the memory qubits by a factor of four. More broadly, we see that the improvement required is larger in the fiber-network scenario for (i) the photon detection probability excluding attenuation losses and (ii) the memory parameters (coherence times and tolerance to entanglement-generation attempts). Both of these results can be explained by the fact that, when a real-world fiber network is considered, there is more attenuation and the nodes are not evenly spaced. As a consequence, better photonic interfaces are required to achieve similar rates, and states likely spend a longer time in memory, necessitating longer coherence times. This emphasizes the need to consider the limitations imposed by existing fiber infrastructure when estimating requirements on repeater hardware.

Effect of platform-specific modeling on hardware requirements

Finally, we look into how the hardware requirements are affected if the processing nodes are modeled in a simplified, platform-agnostic way. We thus compare the hardware requirements for color-center and trapped-ion repeaters with those for a platform-agnostic abstract model for a quantum repeater. This is a simple processing-node model that accounts for generic noise sources such as memory decoherence and imperfect photon indistinguishability, but does not take platform-specific considerations such as restricted topologies into account. For more details on the platform-agnostic abstract model, see Supplementary Note 1 G. We consider double-click entanglement generation in the fiber-network-based setup, targeting an entanglement-generation rate of 0.1 Hz and an average teleportation fidelity of 0.8717.

To perform the comparison, we proceed as follows: (i) map the state-of-the-art hardware parameters to abstract-model parameters, (ii) run the optimization process for the platform-specific model and the abstract model in order to find the minimal hardware requirements for both, (iii) map the obtained platform-specific hardware requirements to the abstract model and (iv) compare them to the hardware requirements obtained by running the optimization process for the abstract model. The results of this comparison can be seen in Fig. 5.

Fig. 5: Comparison of hardware requirements for connecting the Dutch cities of Delft and Eindhoven using a repeater performing double-click entanglement generation considering a simple abstract model and more detailed color center and ion trap models.

(a) color center and (b) ion trap models. Requirements are for achieving an entanglement-generation rate of 0.1 Hz and an average teleportation fidelity of 0.8717. Parameters shown are, from top to bottom: spin-photon emission fidelity (trapped ion only), visibility of photon interference, photon detection probability excluding attenuation losses in fiber, fidelity of entanglement swap and qubit coherence time.

The hardware requirements are significantly different for the abstract model than for the trapped-ion and color-center models. This can be explained by the greater simplicity of the abstract model. Take coherence time as an example. The communication and memory qubits of color centers decohere at different rates, a complexity that is not present in the abstract model. Therefore, improving the coherence time in the abstract model has a bigger impact than improving a given coherence time in the color-center model. This means that in the abstract model it is comparatively cheaper to achieve the same performance by improving the coherence time rather than other parameters. The fact that memory noise in trapped ions is modeled differently than in the abstract model (the trapped-ion memory noise is Gaussian, arising from a collective dephasing process; see Eqs. (7) and (9)) could also explain the difference in the requirements for the coherence times seen in that case.

Entanglement without a repeater

We note that one of the targets we investigated, namely an entanglement-generation rate of 0.1 Hz and an average teleportation fidelity of 0.8717 (Target 1), could also be achieved in the setup we investigated without using a repeater node if a single-click entanglement-generation protocol were employed. Furthermore, the hardware improvements required would be more modest in this case than if a repeater were used. For more details, see Supplementary Note 9 C.

Outlook

In order to design and realize real-world quantum networks, it is important to determine minimal hardware requirements in more complex scenarios such as heterogeneous networks with multiple repeaters and end nodes. The method presented in this work is well suited for this. Furthermore, it would be valuable to investigate what limitations the assumptions we have made in our modeling place on our results. For example, we did not consider the effects of fiber dispersion. These effects could hamper entanglement generation and hence affect the minimal hardware requirements. Even though preliminary investigations suggest that these effects might be small, quantifying them would represent a step forward in determining realistic minimal repeater-hardware requirements. Another interesting open question is what effect the use of entanglement-distillation protocols (see ref. 89 for a review) would have on the minimal hardware requirements.

Methods

In this section, we elaborate on our approach for determining the minimal and absolute minimal hardware requirements for processing-node repeaters to generate entangled states enabling VBQC.

Conditions on network path to enable VBQC

In our setup, a client wishes to perform 2-qubit VBQC, a particular case of the protocol described in ref. 55, on a powerful remote server whose qubits are assumed to suffer from depolarizing noise with coherence time T = 100 s. We further assume that the computation itself is perfect, with the only imperfections arising from the network path used to remotely prepare the qubits. This protocol is shown in ref. 55 to be robust to noise, remaining correct if the maximal probability of error in a test round can be upper-bounded by 25%. We argue in Supplementary Note 2 that the protocol is still correct if the average probability of error in a test round can be upper-bounded by 25%, as long as we assume that the error probabilities are independent and identically distributed across different rounds of the protocol. This is the case for the setup studied here, as the state of the network is fully reset after entanglement swapping takes place at the repeater node. This condition, together with the assumption on the server’s coherence time, can be used to derive bounds on the required average teleportation fidelity and entanglement-generation rate, as shown in Supplementary Note 2.

Average teleportation fidelity

As a target metric, we use the average teleportation fidelity Ftel that can be obtained with the teleportation channel Λσ arising from the end-to-end entangled state σ generated by the network we investigate:

$${F}_{{{\mbox{tel}}}}(\sigma )\equiv {\int}_{\psi }\left\langle \psi \right\vert {\Lambda }_{\sigma }(\left\vert \psi \right\rangle \left\langle \psi \right\vert )\left\vert \psi \right\rangle d\psi ,$$
(1)

where the integral is taken over the Haar measure. See Supplementary Note 2 A for more details.

Hardware improvement for VBQC as an optimization problem

We want to find the minimal hardware requirements that achieve a given average teleportation fidelity Ftarget and entanglement-generation rate Rtarget. We restate this as a constrained optimization problem: we wish to minimize the hardware improvement, while ensuring that the performance constraints are met. These constraints are relaxed through scalarization, resulting in a single-objective problem in which we aim to minimize the sum of the hardware improvement and two penalty terms, one for the rate target and one for the teleportation fidelity target. The resulting cost function is given by

$$\begin{array}{l}C\,=\,{w}_{1}\left(1+{\left({F}_{{{\mathrm{target}}}}-{F}_{{{\mathrm{tel}}}}\right)}^{2}\right)\Theta \left({F}_{{{\mathrm{target}}}}-{F}_{{{\mathrm{tel}}}}\right)\\ \qquad +\,{w}_{2}\left(1+{\left({R}_{{{\mathrm{target}}}}-R\right)}^{2}\right)\Theta \left({R}_{{{\mathrm{target}}}}-R\right)\\ \qquad +\,{w}_{3}{H}_{{{\mathrm{C}}}}\left({x}_{1},...,{x}_{{{\mathrm{N}}}}\right),\end{array}$$
(2)

where HC is the hardware cost associated with parameter set {x1, . . . , xN}, wi are the weights of the objectives, Θ is the Heaviside function and Ftel and R are the average teleportation fidelity and entanglement-generation rate achieved by the parameter set, respectively. The hardware cost function HC maps sets of hardware parameters to a cost that represents how large of an improvement over state-of-the-art the set requires. To compute this consistently across different parameters, we use no-imperfection probabilities, as done in ref. 79 (where they are called no-error probabilities). A parameter is improved by a factor k, called the improvement factor, if its corresponding no-imperfection probability pni becomes \(\root k \of {{p}_{{{\mbox{ni}}}}}\). For example, if the error probability of a gate is 40%, its probability of no-imperfection is 0.6. After improving it by a factor of 4, the no-imperfection probability becomes \(\root 4 \of {0.6}\approx 0.88\), corresponding to an error probability of approximately 12%. The hardware cost associated with a set of hardware parameters is the sum of the respective improvement factors, i.e.,

$${H}_{{{\mathrm{C}}}}\left({x}_{1},...,{x}_{{{\mathrm{N}}}}\right)=\mathop{\sum }\limits_{i=1}^{N}\frac{\ln \{{p}_{{{\mathrm{ni}}}}({b}_{{{\mathrm{i}}}})\}}{\ln \{{p}_{{{\mathrm{ni}}}}({x}_{{{\mathrm{i}}}})\}},$$
(3)

where pni(xi) is the no-imperfection probability corresponding to the value xi of parameter i and pni(bi) is the no-imperfection probability corresponding to the baseline value bi of parameter i. We have here for concreteness used natural logarithms, but the hardware cost is invariant to changes in the logarithms’ bases. We note that these improvement factors are the quantities shown in Fig. 3. The weights wi are chosen such that the first two terms are larger than the last one for near-term parameters, guaranteeing that the set of parameters minimizing C meets performance targets. We are then effectively restricted to the region of parameter space in which the performance constraints are satisfied, as all points corresponding to near-term parameters in this region have a lower cost than points outside it. The problem then becomes one of minimizing the hardware cost in this region. We have verified that the expected values of the average teleportation fidelity and entanglement-generation rate of the parameter sets found meet the constraints, thus enabling VBQC conditional on our assumptions. Our method guarantees that the set of parameters found is “minimal” in the sense that making any of the parameters worse would result in the target not being met. However, we note that there exist many such solutions, and if specific knowledge is available about how hard it is to improve particular parameters, the cost function could be adapted to pick out minimal parameter sets that may be easier to attain. An example of this is the efficiency of the NV center’s photonic interface, which is limited to 3% due to the ZPL. Going beyond this limit requires integration into a cavity, which carries with it a host of challenges18,80. One could then modify the cost function to make improving the efficiency of the photonic interface beyond 3% more expensive than improving other parameters. However, as it is challenging to accurately estimate the hardness associated with specific improvements and, furthermore, the hardness may depend on the specific expertise available within a given research group, we have refrained from making such estimates.

Optimization parameters

Using the methodology described later on in this section, we perform an optimization over both protocol and hardware parameters. First, we enumerate the protocol parameters:

  • Cut-off time, the time after which a stored qubit is discarded;

  • Bright-state parameter (single-click entanglement generation only), the fraction of a matter qubit’s superposition state that is optically active;

  • Coincidence time window (double-click entanglement generation with ion traps only), the maximum amount of time between the detection of two photons for which a success is heralded. We model the effect of the coincidence time window using a toy model, see Supplementary Note 4.

Second, we enumerate the hardware parameters:

  • The Hong-Ou-Mandel visibility90 is a measure of the indistinguishability of interfering photons; following ref. 91, it is defined as

    $$1-\frac{{C}_{{{\mathrm{min}}}}}{{C}_{{{\mathrm{max}}}}}.$$
    (4)

    Here Cmin is the probability (coincidence count rate) that two photons that are interfered on a 50:50 beamsplitter are detected at two different detectors when the indistinguishability is optimized (as is the case when using interference to generate entanglement), while Cmax is the same probability when the photons are made distinguishable.

  • The probability of double excitation is the probability that two photons are emitted instead of one in entanglement generation with color centers;

  • The induced memory-qubit noise is the dephasing suffered by the memory qubit when the communication qubit is used to attempt entanglement generation. The number given for this parameter in Table 1 corresponds to the number of electron-spin pumping cycles after which the Bloch-vector length of a memory qubit prepared in the state \((\left\vert 0\right\rangle +\left\vert 1\right\rangle )/\sqrt{2}\) has shrunk to 1/e of its original value in the X − Y plane of the Bloch sphere, for a communication-qubit bright-state parameter of 0.5 (ref. 53);

  • The interferometric phase uncertainty is the uncertainty in the phase acquired by the two interfering photons when they travel through the fiber in single-click entanglement generation with color centers;

  • The photon detection probability excluding attenuation losses is the probability that a photon is detected given that emission was attempted, and assuming that the fiber length is negligible, i.e., considering every form of photon loss (including coupling to fiber) except the length-dependent attenuation loss in fiber;

  • Every gate is parameterized by a depolarizing-channel fidelity;

  • For color centers, T1 and T2 are the characteristic times of the time-dependent amplitude damping and phase damping channels affecting the qubits, and are different for the communication and memory qubits. The effect of the amplitude (phase) damping channel after time t is given by Eq. (5) (Eq. (6)):

    $$\begin{array}{l}\rho \to \left(\left\vert 0\right\rangle \left\langle 0\right\vert +\sqrt{{e}^{-t/{T}_{1}}}\left\vert 1\right\rangle \left\langle 1\right\vert \right)\rho \\ \qquad {\left(\left\vert 0\right\rangle \left\langle 0\right\vert +\sqrt{{e}^{-t/{T}_{1}}}\left\vert 1\right\rangle \left\langle 1\right\vert \right)}^{{\dagger} }\\ \qquad +\sqrt{1-{e}^{-t/{T}_{1}}}\left\vert 0\right\rangle \left\langle 1\right\vert \rho {\left(\sqrt{1-{e}^{-t/{T}_{1}}}\left\vert 0\right\rangle \left\langle 1\right\vert \right)}^{{\dagger} }\end{array}$$
    (5)
    $$\begin{array}{l}\rho \,\to \,\Big(1-\frac{1}{2}\Big(1-{e}^{-t/{T}_{2}}{e}^{-t/(2{T}_{1})}\Big)\Big)\rho \\ \qquad \,+\,\frac{1}{2}\Big(1-{e}^{-t/{T}_{2}}{e}^{-t/(2{T}_{1})}\Big)Z\rho Z;\end{array}$$
    (6)
  • For ion traps, the coherence time characterizes the time-dependent collective Gaussian dephasing process that the qubits undergo, which is given by ref. 50 (a numerical sketch of this channel follows this list):

    $$\rho \to \int\nolimits_{-\infty }^{\infty }{K}_{{{\mbox{r}}}}\rho {K}_{{{\mbox{r}}}\,}^{{\dagger} }p(r)dr,$$
    (7)

    where

    $${K}_{{{\mbox{r}}}}=\exp \left(-{{\mbox{i}}}r\frac{t}{\tau }\mathop{\sum }\limits_{j=1}^{n}{Z}_{{{\mbox{j}}}}\right),$$
    (8)

    Zj denotes a Pauli Z acting on qubit j, n is the total number of ions in the trap, τ the coherence time and t the storage time, and

    $$p(r)=\frac{1}{\sqrt{2\pi }}{{{\mbox{e}}}}^{-{{{\mbox{r}}}}^{2}/2};$$
    (9)
  • The noise on matter-photon emission is parameterized by a depolarizing-channel fidelity (i.e., the matter-photon state directly after emission is a mixture between a maximally entangled state and a maximally mixed state);

  • The dark-count probability is the probability that a detection event is registered at a detector without a photon arriving.
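
As referenced in the ion-trap item above, the collective Gaussian dephasing channel of Eqs. (7)-(9) can be sketched numerically by Monte Carlo sampling of the Gaussian-distributed angle r (an illustrative NumPy-only sketch, not the NetSquid implementation):

```python
import numpy as np

def collective_dephasing(rho, t, tau, n_qubits, n_samples=20000, seed=0):
    """Apply Eqs. (7)-(9): average K_r rho K_r^dag over r ~ N(0, 1).

    K_r = exp(-i r (t / tau) sum_j Z_j) is diagonal, so only the diagonal of
    the collective-Z operator is needed.
    """
    rng = np.random.default_rng(seed)
    z = np.array([1.0, -1.0])
    # Diagonal of sum_j Z_j (collective: every ion sees the same angle r).
    zsum = np.zeros(2**n_qubits)
    for j in range(n_qubits):
        zsum += np.kron(np.kron(np.ones(2**j), z), np.ones(2**(n_qubits - j - 1)))
    out = np.zeros_like(rho, dtype=complex)
    for r in rng.standard_normal(n_samples):
        k = np.exp(-1j * r * (t / tau) * zsum)  # diagonal of the Kraus operator
        out += (k[:, None] * rho) * k.conj()[None, :]
    return out / n_samples

# Example: a stored two-ion Bell state (|00> + |11>)/sqrt(2). Its coherence
# decays as exp(-8 (t/tau)^2), i.e., Gaussian in the storage time t.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_out = collective_dephasing(np.outer(psi, psi), t=0.2, tau=1.0, n_qubits=2)
print(abs(rho_out[0, 3]))  # ~ 0.5 * exp(-0.32) ~ 0.36
```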

The state-of-the-art values we use for the hardware parameters are shown in Table 1. For more details on how the effects of the different hardware parameters are included in our models, see Supplementary Note 1. We note that some of the hardware parameters we consider, in fact, conceal trade-offs. For example, the probability of getting a double excitation when using color centers to emit photons can to an extent be tuned. In this case, a lower probability of double excitation would come at the cost of getting fewer events. However, optimizing over all such trade-offs is beyond the scope of this work.

Evaluating hardware quality

In order to minimize the cost function C, we require an efficient way of evaluating the performance attained by each parameter set. We do this through simulation of end-to-end entanglement generation using NetSquid. The full density matrix of each generated state, as well as the simulation time its generation took, is recorded and used to compute the average teleportation fidelity and the rate of entanglement generation. Since entanglement generation is a stochastic process, multiple simulation runs are performed in order to collect representative statistics.
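
As a hedged sketch of this post-processing step (assuming the standard relation between average teleportation fidelity and pair fidelity86, and maximizing only over the four standard Bell states, which lower-bounds the fully entangled fraction; the framework's actual computation may differ):

```python
import numpy as np

def avg_teleportation_fidelity(rho):
    """F_tel = (2F + 1)/3, with F the best overlap of rho with a Bell state."""
    s = 1 / np.sqrt(2)
    bells = [
        s * np.array([1, 0, 0, 1]),   # |Phi+>
        s * np.array([1, 0, 0, -1]),  # |Phi->
        s * np.array([0, 1, 1, 0]),   # |Psi+>
        s * np.array([0, 1, -1, 0]),  # |Psi->
    ]
    F = max(np.real(v.conj() @ rho @ v) for v in bells)
    return (2 * F + 1) / 3

def entangling_rate(n_states, total_sim_time):
    """Entanglement-generation rate: delivered states per unit simulated time."""
    return n_states / total_sim_time

# Example: a Werner state with 90% Bell-state weight.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = 0.9 * np.outer(phi, phi) + 0.1 * np.eye(4) / 4
print(avg_teleportation_fidelity(rho))  # F = 0.925 -> F_tel = 0.95
```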

Framework for simulating quantum repeaters

In our NetSquid simulation framework, we have implemented hardware models for color centers, trapped ions and a platform-agnostic abstract model. This includes the implementation of different circuits for entanglement swapping and for moving states on each platform, conditioned on their respective topologies and gate sets. Additionally, we have implemented both single- and double-click entanglement-generation protocols. In order to combine the different building blocks required to simulate end-to-end entanglement distribution, we define services that each have a well-defined input and output but can have different implementations. For example, the entanglement-generation service can use either the single-click or the double-click protocol, and entanglement swapping can be executed on either color-center or trapped-ion hardware. End-to-end entanglement generation is then orchestrated using a link-layer protocol (inspired by the one proposed in ref. 92) that makes calls to the different services, independently of how the services are implemented. This allows us to use the same protocol for each different configuration of the simulation. Switching between configurations in our simulation framework then only requires editing a human-readable configuration file. The modularity of the simulation framework makes it simple to investigate further hardware platforms and protocols.

The link-layer protocol is itself an implementation of the link-layer service defined in ref. 92. From a user perspective, this simplifies using the simulation, as all that needs to be done to generate entanglement is to call the well-defined link-layer service, without any knowledge of the protocol that implements it. In this work, the link-layer protocol is the one for a single sequential repeater illustrated in Fig. 2. However, the protocols included in our simulation code are able to simulate entanglement generation on chains of an arbitrary number of (sequential) repeaters that use classical communication to negotiate when to generate entanglement and that implement local cut-off times.

Finding minimal hardware improvements

In order to find the sets of parameters minimizing the cost function C, we employ the optimization methodology introduced in ref. 93, which integrates genetic algorithms and NetSquid simulations. A genetic algorithm is an iterative optimization method that starts by randomly generating a population consisting of many sets of parameters, also known as individuals. These are evaluated using the NetSquid simulation and the cost function, and a new population is bred through mutation and crossover of individuals in the previous population. The process then iterates, with better-performing individuals being more likely to propagate to later iterations. For further details on the optimization methodology employed, see Supplementary Note 6 and ref. 93.
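
For intuition, a bare-bones genetic algorithm of this kind can be sketched as follows (illustrative only; the operators, selection scheme and tuning actually used are those of ref. 93):

```python
import random

def genetic_minimize(cost, n_params, pop_size=150, generations=200,
                     mutation_rate=0.1, seed=0):
    """Minimize `cost` over parameter vectors in [0, 1]^n_params (n_params >= 2).

    In the real methodology, evaluating `cost` means running the NetSquid
    simulation and applying the cost function C of Eq. (2).
    """
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # evaluate and rank individuals
        parents = pop[: pop_size // 2]        # keep the better-performing half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):         # random mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.random()
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

# Example: recover the minimum of a simple quadratic cost at (0.3, ..., 0.3).
best = genetic_minimize(lambda x: sum((v - 0.3) ** 2 for v in x), n_params=4)
```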

This methodology is computationally intensive, so we execute it on the Snellius supercomputer. We use one node of the Snellius supercomputer, which contains 128 2.6 GHz cores and a total of 256 GiB of memory. Based on previously observed data reported in ref. 93, we employ a population size of 150 evolving for 200 generations. The simulation is run 100 times for each set of parameters, as we have empirically determined that this constitutes a good balance between accuracy and computation time. The time required for the procedure to conclude is hardware, protocol and parameter dependent, but we have observed that 10 wall-clock hours are typically enough. We stress that this approach is general, modular and freely available93.

Finding absolute minimal hardware requirements

In order to find these requirements, i.e., the minimal parameter values that meet the performance targets when the only other imperfection is photon loss in fiber, we perform a sweep of each parameter, starting at the state-of-the-art value and terminating when the targets are met. For each value of each parameter, we also sweep over the protocol parameters, i.e., the cut-off time, the coincidence time window (for double-click entanglement generation with ion traps) and the bright-state parameter (for single-click entanglement generation).
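
In pseudocode, the sweep has the following shape (all names are illustrative, not the framework's actual API):

```python
def absolute_minimal_requirement(param_values, protocol_settings, meets_targets):
    """Find the first value of one hardware parameter that meets the targets.

    param_values: candidate values, ordered from state-of-the-art to perfect,
    with every other hardware parameter already set to its perfect value;
    protocol_settings: candidate protocol parameters (cut-off time, bright-state
    parameter, coincidence time window) swept at each step;
    meets_targets(x, p): simulation-backed check of the rate/fidelity targets.
    """
    for x in param_values:
        if any(meets_targets(x, p) for p in protocol_settings):
            return x
    return None  # targets unreachable even with this parameter made perfect
```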