Abstract
A quantum computer attains computational advantage when outperforming the best classical computers running the best-known algorithms on well-defined tasks. No photonic machine offering programmability over all its quantum gates has demonstrated quantum computational advantage: previous machines^{1,2} were largely restricted to static gate sequences. Earlier photonic demonstrations were also vulnerable to spoofing^{3}, in which classical heuristics produce samples, without direct simulation, lying closer to the ideal distribution than do samples from the quantum hardware. Here we report quantum computational advantage using Borealis, a photonic processor offering dynamic programmability on all gates implemented. We carry out Gaussian boson sampling^{4} (GBS) on 216 squeezed modes entangled with three-dimensional connectivity^{5}, using a time-multiplexed and photon-number-resolving architecture. On average, it would take more than 9,000 years for the best available algorithms and supercomputers to produce, using exact methods, a single sample from the programmed distribution, whereas Borealis requires only 36 μs. This runtime advantage is over 50 million times as extreme as that reported from earlier photonic machines. Ours constitutes a very large GBS experiment, registering events with up to 219 photons and a mean photon number of 125. This work is a critical milestone on the path to a practical quantum computer, validating key technological features of photonics as a platform for this goal.
Main
Only a handful of experiments have used quantum devices to carry out computational tasks that are outside the reach of present-day classical computers^{1,2,6,7}. In all of these, the computational task involved sampling from probability distributions that are widely believed to be exponentially hard to simulate using classical computation. One such demonstration relied on a 53-qubit programmable superconducting processor^{6}, whereas another used a non-programmable photonic platform implementing Gaussian boson sampling (GBS) with 50 squeezed states fed into a static random 100-mode interferometer^{1}. Both were shortly followed by larger versions, respectively enjoying more qubits^{7,8} and increased control over brightness and a limited set of circuit parameters^{2}. In these examples, comparison of the duration of the quantum sampling experiment to the estimated runtime and scaling of the best-known classical algorithms placed their respective platforms within the regime of quantum computational advantage.
The superconducting quantum supremacy demonstrations serve as crucial milestones on the path to full-scale quantum computation. On the other hand, the choice of technologies used in the photonic machines^{1,2}, and their consequent lack of programmability and scalability, places them outside any current proposed roadmap for fault-tolerant photonic quantum computing^{9,10,11} or any GBS application^{12,13,14,15,16,17,18}. A demonstration of photonic quantum computational advantage incorporating hardware capabilities required for the platform to progress along the road to fault tolerance is still lacking.
In photonics, time-domain multiplexing offers a comparatively hardware-efficient^{19} path for building fault-tolerant quantum computers, but also near-term sub-universal machines showing quantum computational advantage. By encoding quantum information in sequential pulses of light, effectively multiplexing a small number of optical channels to process information on a large number of modes^{20}, large and highly entangled states can be processed with a relatively small number of optical components. This decouples the required component count and physical extent of the machine from the size of the quantum circuit being executed; provided device imperfections can be kept sufficiently small, this decoupling represents a substantial advantage for scaling. Moreover, the relatively modest number of optical pathways and control components avoids many of the challenges of traditional, planar two-dimensional implementations of optical interferometers, which suffer from high complexity and burdensome parallel control requirements, especially when long-range connectivity is desired. Although attractive for scaling, hardware efficiency must not come at the cost of unacceptably large errors. Implementations of time-domain multiplexing must therefore be tested in demanding contexts to validate their promise for building practically useful quantum computers.
Using time-domain multiplexing, large one- and two-dimensional cluster states have been deterministically generated^{21,22,23} with programmable linear operations implemented by projective measurements^{24,25}, whereas similar operations have been implemented in ref. ^{26} using a single loop with reconfigurable phase. These demonstrations leverage low-loss optical fibre for delay lines, which allows photonic quantum information to be effectively buffered. Although groundbreaking, these demonstrations have remained well outside the domain of quantum computational advantage, as they lacked non-Gaussian elements and were unable to synthesize states of sufficient complexity to evade efficient classical simulation^{27}. The demonstration of a set of hardware capabilities needed for universal fault-tolerant quantum computing, in the demanding context of quantum computational advantage, would serve as a validating signal that the corresponding technologies are advancing as needed. Yet no such demonstration is available for time-domain multiplexing.
In this work, we solve technological hurdles associated with time-domain multiplexing, fast electro-optical switching, high-speed photon-number-resolving detection technology and non-classical light generation, to build a scalable and programmable Gaussian boson sampler, which we name Borealis. These features allow us to synthesize a 216-mode state with a three-dimensional entanglement topology. This is particularly notable because three-dimensional cluster states are sufficient for measurement-based fault-tolerant quantum computing^{28,29}; although the states we synthesize are themselves not cluster states, the device can be readily programmed to generate cluster states by selecting appropriate phase and beamsplitting ratios at the loops. Borealis uses 216 independent quantum systems to achieve quantum computational advantage, placing it well beyond the capabilities of current state-of-the-art classical simulation algorithms^{30}. Our use of photon-number-resolving detectors unlocks access to sampling events with much larger total photon number, a regime inaccessible to earlier experiments that used traditional threshold detectors. In the same vein, our use of time-domain multiplexing gives us access to more squeezed modes without increasing the physical extent or complexity of the system. In addition, its output cannot be efficiently spoofed in cross-entropy benchmarks using a generalization of the most recent polynomial-time algorithms^{3}. We leave as an open question to the community whether better polynomial-time algorithms for spoofing can be developed.
Experiment
The optical circuit we implement, depicted in Fig. 1, is fully programmable, provides long-range coupling between different modes and allows all such couplings to be dynamically programmed. It implements linear-optical transformations on a train of input squeezed-light pulses, using a sequence of three variable beamsplitters (VBSs) and phase-stabilized fibre loops that act as effective buffer memory for light, allowing interference between modes that are either temporally adjacent or separated by six or 36 time bins. This system synthesizes a programmable multimode entangled Gaussian state in a 6 MHz pulse train, which is then partially demultiplexed into 16 output channels and sampled using photon-number-resolving detectors.
Unlike some quantum algorithms whose correct functioning on a quantum computer can be readily verified using a classical computer, it remains an open question how to verify that a GBS device is operating correctly. In what follows, we present evidence that our machine is operating correctly, that is, it samples from the GBS distribution specified by the device transfer matrix T and vector of squeezing parameters r, which together define the ground truth of the experiment. In previous experiments^{1,2} the results were benchmarked against a ground truth obtained from tomographic measurements of a static interferometer, whereas for Borealis the ground truth is obtained from the quantum program specified by the user, that is, the squeezing parameters and the phases sent to the VBS components in the device.
The transfer matrix is obtained by combining the three layers of VBSs acting over the different modes, together with common (to all modes) losses due to propagation and the finite escape efficiency of the source, as well as imperfect transmittance through the demultiplexing and detection systems; it corresponds classically (quantum mechanically) to the linear transformation connecting input and output electric fields (annihilation operators).
As noted in refs. ^{5,31}, if one were to target a universal and programmable interferometer, with depth equal to the number of modes, that densely covers the set of unitary matrices, the exponential accumulation of loss would prohibit showing a quantum advantage. There are then two ways around this no-go result: one can either give up programmability and build an ultra-low-loss static interferometer, as implemented in refs. ^{1,2}, or give up universality while maintaining a high degree of multimode entanglement using long-range gates.
We first consider the regime of few modes and low photon number, in which it is possible to collect enough samples to estimate outcome probabilities, and also calculate these from the experimentally characterized lossy transmission matrix T and the experimentally obtained squeezing parameters r programmed into the device. In Fig. 2 we show the probabilities inferred from the random samples collected in the experiment and compare them against the probabilities for different samples S obtained from simulations, under the ground truth assumption. We cover all \(\left(\begin{array}{c}N+M-1\\ N\end{array}\right)\) possible output patterns, in which N is the number of photons, from 3 to 6, and M = 16 is the number of modes. To quantify the performance of Borealis we calculate the fidelity (F) and total variation distance (TVD) of the 3, 4, 5 and 6 total photon-number probabilities relative to the ground truth. For a particular total photon number, fidelity and TVD are, respectively, defined as \(F={\sum }_{i}\sqrt{{p}_{i}{q}_{i}}\) (also known as the Bhattacharyya coefficient) and \({\rm{TVD}}=\frac{1}{2}{\sum }_{i}|{p}_{i}-{q}_{i}|\). Parameters p_{i} and q_{i} represent the theoretical and experimental probability of the ith output pattern, respectively, and are normalized by the probability of the respective total photon number. For the total photon-number sectors considered we find fidelities in excess of 99% and TVDs below or equal to 6.5%, thus showing that our machine is reasonably close to the ground truth in the low-N regime addressed by these data. Note that, because we are calculating all the possible probabilities with N photons, estimating outcome probabilities from the experimentally characterized transmission matrix would require us to obtain orders of magnitude more samples, beyond our current processing abilities.
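The fidelity (Bhattacharyya coefficient) and TVD defined above can be computed directly from two probability vectors over the same set of output patterns. A minimal numpy sketch; the four-pattern toy distributions are made up for illustration:

```python
import numpy as np

def fidelity_and_tvd(p, q):
    """Bhattacharyya coefficient and total variation distance between two
    probability vectors defined over the same set of output patterns."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Normalize within the photon-number sector, as described in the text.
    p = p / p.sum()
    q = q / q.sum()
    F = np.sum(np.sqrt(p * q))          # F = sum_i sqrt(p_i q_i)
    tvd = 0.5 * np.sum(np.abs(p - q))   # TVD = (1/2) sum_i |p_i - q_i|
    return F, tvd

# Toy example with made-up theoretical and experimental distributions.
F, tvd = fidelity_and_tvd([0.4, 0.3, 0.2, 0.1], [0.38, 0.32, 0.19, 0.11])
```

Identical distributions give F = 1 and TVD = 0; the closer F is to unity (and TVD to zero), the closer the experiment is to the ground truth.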
This limitation causes the estimated TVD to grow as N increases and, besides the impractical computational cost, is the reason that data beyond N = 6 were left to the subsequent benchmarks.
In an intermediate mode- and photon-number regime, we calculate the cross entropy of the samples generated by the experiment for each total photon-number sector for a high-dimensional GBS instance with M = 216 computational modes and total mean photon number \(\bar{N}=21.120\pm 0.006\). For a set of K samples \({\{{S}_{i}\}}_{i=1}^{K}\), each having a total of N photons, the cross-entropy benchmark under the ground truth given by (r, T) is

$${\rm{XE}}=\frac{1}{K}\sum _{i=1}^{K}\,\mathrm{ln}\left[\frac{{{\rm{\Pr }}}^{(0)}({S}_{i})}{{\mathscr{N}}}\right],$$
where \({\mathscr{N}}={{\rm{\Pr }}}^{(0)}(N)/\left(\begin{array}{c}N+M-1\\ N\end{array}\right)\) is a normalization constant determined by the total number of ways in which N photons can be placed in M modes and Pr^{(0)}(N) is the probability of obtaining a total of N photons under the ground truth assumption.
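As a sketch of how the normalization constant and the benchmark combine, assuming the per-sample ground-truth log-probabilities ln Pr^{(0)}(S_i) have already been computed elsewhere (the function name and inputs are our own illustration, not the authors' code):

```python
import numpy as np
from math import comb

def xe_score(log_probs_ground_truth, N, M, pr0_N):
    """Cross-entropy benchmark score for K samples, each with N photons in
    M modes.  `log_probs_ground_truth` holds ln Pr^(0)(S_i) for each sample
    under the ground truth (r, T); `pr0_N` is Pr^(0)(N), the ground-truth
    probability of observing N photons in total."""
    arr = np.asarray(log_probs_ground_truth, dtype=float)
    # N_const = Pr^(0)(N) / C(N + M - 1, N): uniform weight per pattern.
    n_const = pr0_N / comb(N + M - 1, N)
    # XE = (1/K) sum_i ln[ Pr^(0)(S_i) / N_const ]
    return float(np.mean(arr - np.log(n_const)))
```

A higher score indicates samples concentrated on patterns that the ground truth deems likely, relative to a uniform spread over the photon-number sector.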
We then compare the average score (Fig. 3a) of the 10^{6} samples, divided into 10,000 samples per total photon number N, generated by our machine in the cross entropy against classical adversarial spoofers that try to mimic the ground truth distribution (r, T). These adversaries are constructed with the extra constraint that they must have the same first-order (mean) photon-number cumulants as the ground truth distribution. The five adversaries considered send (1) squashed, (2) thermal, (3) coherent or (4) distinguishable squeezed light into the interferometer specified by T, or (5) use a greedy algorithm to mimic the one- and two-mode marginal distributions of the ground truth, as was used in ref. ^{3} to spoof earlier large GBS experiments^{1,2}. Squashed states (1) are the classical Gaussian states with the highest fidelity to lossy squeezed states^{31}, that is, they are optimal within the family of Gaussian states that are classical, and thus provide a more powerful adversary than thermal, coherent or distinguishable squeezed states, which were the only adversaries considered in previous photonic quantum computational advantage claims^{1,2}. In all cases, the samples from Borealis perform significantly better than any adversary at achieving a high cross entropy with respect to the ground truth; equivalently, none of the adversaries is a successful spoofer in this benchmark. In particular, the best-performing adversary, the greedy sampler, remains significantly below the experiment in cross entropy, and shows no trend towards outperforming the experiment at larger N. Given the supercomputing resources and time needed to estimate all scores for N = 26 (22 h), we can extrapolate this time and estimate that it would take roughly 20 days to benchmark our data for N = 30.
For this reason, and given the lack of evidence that the scores would change in favour of any alternative to the ground truth, we are confident that the studied range of N ∈ [10, 26] is sufficient to rule out all classical spoofers considered, even in the regime in which it is unfeasible to perform these benchmarks.
Next, we consider another test: a Bayesian method similar to that used in other GBS demonstrations^{1,2}. For each subset of samples generated in the experiment with a given total photon number N, we calculate the ratio of the probability that a sample S could have come from the lossy ground truth specified by T and r to the probability that S came from any of the alternative spoofing hypotheses (1)–(4). For a particular sample S_{i} and a particular adversary I this ratio is given by

$${R}_{0I}({S}_{i})=\frac{{{\rm{\Pr }}}^{(0)}({S}_{i})}{{{\rm{\Pr }}}^{(I)}({S}_{i})},$$

which allows us to form the Bayesian log average

$$\Delta {H}_{0I}=\frac{1}{K}\sum _{i=1}^{K}\,\mathrm{ln}\,{R}_{0I}({S}_{i}).$$
If \(\Delta {H}_{0I} > 0\) we conclude that the samples generated by Borealis are more likely to have come from the ground truth than from the adversarial distribution corresponding to the first four spoofers (1)–(4); the greedy adversary (5) can generate samples mimicking the ground truth but there is no known expression or algorithm to obtain the ‘greedy probability distribution’, thus we cannot use it to generate a Bayesian score. One can see in Fig. 3b that the Bayesian log average is strictly above zero for all remaining adversaries.
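The Bayesian log average reduces to a mean of log-probability differences, so it can be evaluated stably without forming the ratios explicitly. A minimal sketch, assuming the per-sample log-probabilities under each hypothesis are supplied (illustrative names, not the authors' code):

```python
import numpy as np

def bayesian_log_average(log_p_ground_truth, log_p_adversary):
    """Delta H_{0I} = (1/K) sum_i ln[ Pr^(0)(S_i) / Pr^(I)(S_i) ],
    computed as a mean of log-probability differences for numerical
    stability.  Positive values favour the ground truth over adversary I."""
    lp0 = np.asarray(log_p_ground_truth, dtype=float)
    lpI = np.asarray(log_p_adversary, dtype=float)
    return float(np.mean(lp0 - lpI))
```

For example, two samples whose ground-truth log-probabilities exceed the adversary's by 1 and 2 give an average of 1.5, favouring the ground truth.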
Finally, we consider the regime of many modes and large photon number, in which calculating the probability of even a single event using a classical computer is unfeasible. In this regime we compare the first- and second-order cumulants of the photon-number distributions of 216 modes and 10^{6} samples against the lossy ground truth and the different spoofer distributions. Note that these samples are generated from the same family of unitaries as the samples generated in the intermediate regime; we only change the brightness of the squeezed input light. In Fig. 4a we plot the total photon-number probability distributions measured in the experiment, and calculated from the ground truth and different spoofers. By construction, the samples generated from each classical adversary have the same first-order cumulants (mode photon-number means) as the ground truth and thus they also have the same total mean photon number, centred at \(\bar{N}=125\). Deliberately matching the first moments exactly to the ground truth ensures that we give our adversaries fair conditions to spoof our experiment. However, their second-order cumulants, defined between mode i and mode j as \({C}_{ij}=\langle {n}_{i}{n}_{j}\rangle -\langle {n}_{i}\rangle \langle {n}_{j}\rangle \) with n_{i} the photon number in mode i, are different. We calculate the distribution of all C_{ij} obtained experimentally and compare the result with those obtained from theoretical predictions and different adversaries, as shown in Fig. 4b. These cumulants can be calculated efficiently. Overall, it is clear that the statistics of the experimental samples diverge from the adversarial hypotheses considered and agree with the ground truth of our device (as seen in the top left panel of Fig. 4b), where they cluster around the identity line at 45°.
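The first- and second-order cumulants above can be estimated directly from the raw samples. A minimal numpy sketch, with the (K, M) array layout being our own illustrative convention:

```python
import numpy as np

def photon_number_cumulants(samples):
    """First- and second-order photon-number cumulants from GBS samples.

    `samples` is a (K, M) integer array: K samples of photon counts over M
    modes.  Returns the per-mode means <n_i> and the covariance matrix
    C_ij = <n_i n_j> - <n_i><n_j>."""
    samples = np.asarray(samples, dtype=float)
    means = samples.mean(axis=0)                      # <n_i>
    second = samples.T @ samples / samples.shape[0]   # <n_i n_j>
    C = second - np.outer(means, means)
    return means, C
```

The cost is O(K M^2), which is why this benchmark remains efficient even for 216 modes and 10^6 samples.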
Unlike earlier experiments^{1,2}, in which more than half of the input ports of the interferometer are empty, in this work every input port of the time-domain interferometer is populated with a squeezed state. One consequence is that the third- and fourth-order photon-number cumulants with no modes repeated are extremely small (≈10^{−6}) in our ground truth. The greedy spoofer we implemented using first- and second-order cumulant information automatically produces third-order cumulants on the order of 10^{−5}, and thus no extra gain can be attained by using a greedy algorithm with third-order correlations, as they are well explained using only single-mode and pairwise correlations. Note that the difference between the ground truth cumulants and the ones from the greedy samples is more than accounted for by finite-size statistics.
For Gaussian states undergoing only common loss (including the special case of lossless GBS), it is straightforward to show that the third-order photon-number cumulants involving any three distinct modes are all strictly zero. Thus, the fact that significant third- and fourth-order cumulants are observed in refs. ^{1,2} is simply a reflection of the fact that most of their inputs are vacuum and that their experiments lack photon-number resolution. The latter observation could in principle be exploited by a classical adversary to speed up the simulation of GBS with mostly vacuum inputs, because strategies exist to speed up the simulation of GBS when the number of input squeezed states is fixed and is a small fraction of the total number of photons observed. These strategies use the fact that hafnians of low-rank matrices^{32,33} can be calculated faster than hafnians of full-rank matrices of equal size. For our system, the matrices needed for simulation are all full rank, as every input is illuminated with squeezed light.
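The third-order cumulant for three distinct modes can be estimated from samples with the standard joint-cumulant formula; states under common loss should give values consistent with zero. A sketch (the estimator layout is our own illustration):

```python
import numpy as np

def third_order_cumulant(samples, i, j, k):
    """Third-order joint photon-number cumulant kappa(n_i, n_j, n_k) for
    three distinct modes, estimated from a (K, M) integer sample array.
    For Gaussian states under common loss this should vanish."""
    ni, nj, nk = (samples[:, m].astype(float) for m in (i, j, k))
    e = lambda x: float(np.mean(x))
    # kappa_3 = E[xyz] - E[x]E[yz] - E[y]E[xz] - E[z]E[xy] + 2 E[x]E[y]E[z]
    return (e(ni * nj * nk)
            - e(ni) * e(nj * nk) - e(nj) * e(ni * nk) - e(nk) * e(ni * nj)
            + 2 * e(ni) * e(nj) * e(nk))
```

Constant samples (no fluctuations) give exactly zero, whereas strongly correlated non-Gaussian counts give a non-zero value.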
Finally, note that in Fig. 4b we do not compare against the cumulants of the greedy sampler. These are, by construction, very close to the ground truth (see Supplementary Information for details). However, at the brightnesses for which one can calculate the cross entropy, the greedy samples do not perform as well as the samples from our machine.
In the experimental distribution of the total photon number in Fig. 4a, the outcome with the highest probability is N = 124.35 ± 0.02 and the distribution has significant support past 160 photons, as shown in the inset. The best-known algorithm to simulate GBS^{30,34} scales with the total number of modes and the time it takes to calculate a probability amplitude of a pure-state GBS instance. Thus we can estimate the time it would take to simulate a particular sample S = (n_{1}, …, n_{M}) on Fugaku, currently the most powerful supercomputer in the world^{35}, to be

$$t(S)={c}_{{\rm{Fugaku}}}\,M\,{N}_{{\rm{c}}}^{3}\,{G}^{{N}_{{\rm{c}}}/2},$$
where the collision parameter is \(G={({\prod }_{i=1}^{M}({n}_{i}+1))}^{1/{N}_{{\rm{c}}}}\), n_{i} is the number of photons in the ith mode and N_{c} is the number of non-zero detector outcomes. We estimate c_{Fugaku} = c_{Niagara}/122.8 from the ratio of the LINPACK benchmarks (a measure of a computer's floating-point rate of execution) measured on Fugaku and Niagara; ref. ^{5} found c_{Niagara} = 5.42 × 10^{−15} s, from which we get c_{Fugaku} = 4.41 × 10^{−17} s. Finally, we take M = 216 for both our system and the experiment in ref. ^{2}. This assumption slightly overestimates the time it takes a supercomputer to simulate the experiment of ref. ^{2}, which has two-thirds the number of modes of the largest Borealis instance we consider, but simplifies the analysis.
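The runtime scaling above, t ∝ M N_c^3 G^(N_c/2), can be sketched numerically; the prefactor is the c_Fugaku estimate from the text, and the function itself is our own illustrative rendering of that scaling, not the authors' benchmarking code:

```python
import numpy as np

C_FUGAKU = 4.41e-17  # seconds per elementary step, estimate from the text

def simulation_time(sample, M=216, c=C_FUGAKU):
    """Estimated time (s) to compute the probability amplitude of one GBS
    sample, following the scaling t = c * M * N_c**3 * G**(N_c / 2)."""
    n = np.asarray(sample)
    nonzero = n[n > 0]
    N_c = nonzero.size                          # number of detector 'clicks'
    # Collision parameter: geometric mean of (n_i + 1) over clicked modes
    # (modes with n_i = 0 contribute a factor of 1 and can be skipped).
    G = np.prod(nonzero + 1.0) ** (1.0 / N_c)
    return c * M * N_c**3 * G ** (N_c / 2)
```

Collision-free events (all n_i ∈ {0, 1}) give G = 2 and recover the familiar N_c^3 2^(N_c/2) hafnian scaling; photon-number resolution pushes G, and hence the runtime, higher.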
Equation (4) captures the collision-free complexity of the hafnian of an N × N matrix, \(O({N}_{{\rm{c}}}^{3}{2}^{{N}_{{\rm{c}}}/2})\), because in that case G = 2. For the purposes of sampling, a threshold detection event, which in an experiment can be caused by one or many photons, can always be assumed to have been caused by a single photon; thus threshold samples have the same complexity as in the formula above with G = 2 (ref. ^{30}), which is quadratically faster than the estimates in refs. ^{1,2,36}. One could hope that tensor-network techniques^{37} could speed up the simulation of a circuit such as the one we consider here, but this possibility is ruled out in ref. ^{5}, where it is shown that, even when tensor-network algorithms are given effectively infinite memory, they require significantly more time than hafnian-based methods to calculate probability amplitudes.
On the basis of these assumptions we estimate that, on average, it would take Fugaku 9,000 years to generate one sample, or 9 billion years for the million samples we collected from Borealis. Using the same assumptions, we estimate that Fugaku would require 1.5 h, on average, to generate one sample from the experiment in ref. ^{2}, or 8,500 years for the 50 million samples generated in their experiment. In Fig. 4c we plot the distribution of classical runtimes on Fugaku for each sample drawn in the experiment, and show the sample with the largest runtime as a star. We also compare against the highest-brightness experiment from Jiuzhang 2.0 (ref. ^{2}). The regime we explore in our experiment is seven orders of magnitude harder to simulate than previous experiments and, moreover, we believe it cannot be spoofed by current state-of-the-art greedy algorithms or classical Gaussian states in cross entropy.
Discussion and outlook
We have successfully demonstrated quantum computational advantage in GBS using a photonic time-multiplexed machine. Unlike previous photonic devices used for such demonstrations, Borealis offers dynamic programmability over all gates used, features true photon-number-resolving detection and requires a much more modest number of optical components and paths. Among all demonstrations of quantum computational advantage, photonic or otherwise, our machine uses the largest number of independent quantum systems: 216 squeezed modes injected into a 216-mode interferometer having three-dimensional connectivity, with up to 219 detected photons. Our demonstration is also more resistant to classical spoofing attacks than all previous photonic demonstrations, enabled by the high photon numbers and photon-number resolution implemented in the experiment.
The programmability and stability of our machine enable its deployment for remote access by users wishing to encode their own gate sequences in the device. Indeed, the machine can be accessed by such users without any knowledge of the underlying hardware, a key property for exploring its use in addressing problems on structured, rather than randomized, data. Furthermore, besides demonstrating variable beamsplitting and switching (both in the loops and in the demultiplexing system), the successful use in our machine of several phase-stabilized fibre loops as effective buffer memory for quantum modes is a strong statement on the viability of this technique, which is a requirement in many proposed architectures for fault-tolerant photonic quantum computers^{9,10,11,38}. Our demonstration thus marks a significant advance in photonic technology for quantum computing.
Methods
Optical circuit
The input of the interferometer is provided by a single optical parametric oscillator (OPO), emitting pulsed single-mode squeezed states at a 6 MHz rate that are then sent to three concatenated, programmable, loop-based interferometers. Each loop contains a VBS, including a programmable phase shifter, and an optical fibre delay line acting as a buffer memory for light, and allows for the interference of modes that are temporally adjacent (τ = (6 MHz)^{−1}), or separated by six or 36 time bins (6τ or 36τ), in the first, second and third loop, respectively. Optical delays provide a compact and elegant method to mediate short- and long-range couplings between modes. The high-dimensional Gaussian state generated for this experiment can be visualized, as depicted above the three loops in Fig. 1, using a three-dimensional lattice representation. Given a lattice of size a = 6, where a is the number of modes separating two interacting pulses in the second loop, one can form a cubic lattice by injecting M = a^{3} = 216 squeezed-light pulses into the interferometer.
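The correspondence between the three loop delays (1, a and a^2 time bins) and the three axes of the a × a × a lattice can be sketched with a small indexing helper. This is one illustrative convention for mapping temporal mode indices to lattice coordinates, not necessarily the exact convention used in the device:

```python
def lattice_coords(m, a=6):
    """Map temporal mode index m (0 .. a**3 - 1) to (x, y, z) coordinates
    of an a x a x a lattice: the loop with delay 1 couples neighbours
    along x, delay a along y and delay a**2 along z."""
    return m % a, (m // a) % a, m // a**2

def coupled_pairs(M=216, delays=(1, 6, 36)):
    """All mode pairs (i, j) that can be interfered by loops with the
    given time-bin delays."""
    return [(i, i + d) for d in delays for i in range(M - d)]
```

Under this convention, modes separated by 36 time bins share their x and y coordinates and sit on adjacent z planes, which is how a one-dimensional pulse train acquires three-dimensional connectivity.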
Owing to the use of a single time-multiplexed squeezed-light source, all temporal modes are, to a very good approximation, indistinguishable in all degrees of freedom except time signature, and passively phase-locked with respect to each other; the squeezer is driven by pump pulses engineered to generate nearly single-temporal-mode squeezed-light pulses on a 6 MHz clock. Spatial overlap is ensured by using single-mode fibre coupling at the entrance and exit of each loop delay, and samples are collected using an array of photon-number-resolving (PNR) detectors based on superconducting transition-edge sensors (TES) with 95% detection efficiency^{39,40}. These samples consist of 216 (integer) photon-number measurement outcomes for as many modes. To bridge the gap between the 6 MHz clock, chosen to maintain manageable fibre loop lengths, and the slower relaxation time of the TES detectors, a 1-to-16 binary-tree switching network was used to partially demultiplex the pulse train after the loops and before the detectors.
Experimental challenges
Despite the simple conceptual design of Borealis (Fig. 1), building a machine capable of delivering quantum computational advantage in a programmable fashion using photonics, in a large photon-number regime, required solving considerable technological hurdles that were previously outstanding. These include: (1) the lack of PNR-compatible single-mode squeezed-light sources and of non-invasive phase stabilization techniques that do not require bright laser beams, (2) slow PNR reset times that would necessitate unfeasibly long fibre loops and (3) the lack of sufficiently fast and low-loss electro-optic modulators (EOMs), preventing programmability. Our solutions to these challenges are, respectively, (1) the design of a bright and tunable source of single-mode squeezed states and phase stabilization techniques (OPO and interferometer) using locking schemes compatible with PNR detectors, (2) active demultiplexing to increase the effective rate of PNR acquisition by a factor of 60 compared with previous systems^{40}, by constructing a low-loss 1-to-16 binary switch tree and developing new photon-number extraction techniques and (3) the use of new, efficient and fast customized EOMs (QUBIG GmbH) that enable arbitrary dynamic programming of photonic gates with low loss and high speed. The success of this experiment also relies on a robust calibration routine, accurately extracting all experimental parameters contained in the transfer matrix T and the squeezing parameters r that define each GBS instance. We describe each of these advances in the following sections. Other details pertinent to the apparatus can be found in the Supplementary Information.
With further fabrication and device optimization, the raw operational speed of PNR detectors can be increased, eliminating the need for the demultiplexer (demux) and associated losses (roughly 15%). Improvements to the filter stack (20% loss) would also considerably increase performance. Several paths thus exist to even further increase the robustness of our machine against hypothetical improved classical adversaries. In addition, in trial runs we have extended the number of accessible modes to 288 (see Supplementary Information) without any changes to the physical architecture, and expect further scalability in this number to be readily achievable by improving the longtime stabilization of the device. Such scaling will place the device even further ahead of the regime of classical simulability and potential vulnerability to spoofing.
For applications requiring a universal interferometer, a recirculation loop long enough to accommodate all 216 modes could be implemented^{41}, replacing any two of the three existing loops. The remaining existing loop would be nested in the larger 216mode loop, allowing repeated application of the remaining VBS to all 216 modes, albeit at the cost of higher losses.
Pulsed squeezedlight source
The main laser is an ultra-low-phase-noise fibre laser with a sub-100 Hz linewidth centred at 1,550 nm, branched out into different paths. To prepare the pump, pulses are carved in one path using a 4 GHz lithium niobate electro-optic intensity modulator; they are then amplified and upconverted to 775 nm using a fibre-coupled MgO:LN ridge waveguide. The resulting pump is a 6 MHz stream of 3-ns-duration rectangular pulses with an average power of 3.7 mW. Squeezed-light pulses are generated in a doubly resonant, phase-stabilized hemilithic cavity^{42} comprising a 10-mm-long plano-convex potassium titanyl phosphate crystal with its temperature stabilized at 32.90 °C using a Peltier element, for optimal Type-0 phase matching (Supplementary Information). All spectral sidebands of the OPO cavity, around the degenerate frequency band, are suppressed by more than 25 dB using a pair of fibre Bragg gratings (0.04 nm bandwidth at 0.5 dB), one in reflection and the other in transmission (more details in Supplementary Information).
Programmable photonic processor
A train of single-mode squeezed vacuum pulses is emitted by the OPO, coupled into a single-mode fibre and directed towards the programmable photonic processor consisting of three loop-based interferometers in series, as shown in Fig. 1. Each loop \({\ell }=0,1,2\) is characterized by a VBS with transfer matrix

$${{\rm{VBS}}}_{{\ell }}(k)=\left(\begin{array}{cc}\cos {\alpha }_{k} & -\sin {\alpha }_{k}\\ \sin {\alpha }_{k} & \cos {\alpha }_{k}\end{array}\right)\left(\begin{array}{cc}{e}^{i{\phi }_{k}} & 0\\ 0 & \sqrt{{\eta }_{{\ell }}}\,{e}^{i{\mu }_{{\ell }}}\end{array}\right),$$
where each phase ϕ_{k} ∈ [−π/2, π/2] and each angle α_{k} ∈ [0, π/2] can be programmed independently, \({\mu }_{{\ell }}\) is a phase offset associated with each loop and \({\eta }_{{\ell }}\) is the energy transmittance coefficient associated with one complete circulation in loop \({\ell }\). The time delay experienced in the first loop, τ = 1/(6 MHz), equals the delay between two consecutive squeezed-light pulses, whereas the second and third loops have 6τ and 36τ time delays, respectively. The transmittance t_{k} of a VBS with parameter α_{k} is given by t_{k} = cos^{2}α_{k}. For t_{k} = 1 all the incoming light is directed into the fibre delay, whereas the light entering the VBS from the fibre delay is fully coupled out. The output of the last loop is coupled into a single-mode fibre and directed towards the final sampling stage of the experiment.
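A VBS of this kind can be rendered as a small numpy function for modelling purposes. The parameterization below, a rotation by α_k combined with a programmable phase ϕ_k on one arm and loss √η_ℓ plus phase offset μ_ℓ on the loop arm, is an illustrative assumption consistent with t_k = cos^2 α_k, not necessarily the device's exact sign and phase convention:

```python
import numpy as np

def vbs_matrix(phi, alpha, eta=1.0, mu=0.0):
    """Illustrative 2x2 transfer matrix of a variable beamsplitter acting
    on (incoming mode, loop mode): a programmable phase `phi` on the
    input arm, loss sqrt(eta) and phase offset `mu` on the loop arm,
    followed by a rotation by `alpha` (so that t = cos(alpha)**2)."""
    rot = np.array([[np.cos(alpha), -np.sin(alpha)],
                    [np.sin(alpha),  np.cos(alpha)]])
    arm = np.diag([np.exp(1j * phi), np.sqrt(eta) * np.exp(1j * mu)])
    return rot @ arm
```

In the lossless case (η = 1) the matrix is unitary; η < 1 models one round trip through a lossy loop.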
All three loops are independently phase stabilized using a counter-propagating laser beam, piezo transducers and lock-in techniques. To avoid stray light from reflections of this beam reaching the detectors, we alternate between measurement (65 μs) and phase stabilization of the loops (35 μs), leading to a sampling rate of 10 kHz. The estimated phase noise (standard deviation from the mean) inside the interferometer is 0.02, 0.03 and 0.15 rad for the first, second and third loops, respectively, as measured with classical pulses. We carefully reduced mode mismatch throughout the entire interferometer: spatial overlap is ensured using single-mode fibres, with coupling efficiencies >97%, and the length of each loop delay is carefully adjusted to achieve >80% classical visibility between 250-ps-long classical pulses, which gives >99% temporal overlap for the squeezed states.
Connectivity
The programmable time-domain multiplexed architecture implemented here, introduced in ref. ^{5}, generates sufficiently connected transmission matrices (two-thirds of the entries of the matrix are nonzero) to furnish a high level of entanglement between the modes (we estimate the logarithmic negativity between modes 0…i−1 and i…216 for the ground truth to be 5.96 on average for \(i\in \{36,72,108,144,180\}\)), while keeping losses sufficiently low (with transmission above 33%). This is not the case for other architectures, in which one either has to give up programmability^{1,2} or suffer steep losses that, in the asymptotic limit of many modes, render the sampling task approximately simulable because the loss scales exponentially with the system size^{31}. In a universal programmable interferometer, each mode passes through a number of lossy components (each with transmission η_{unit}) proportional to the number of modes. For the interferometers considered here, each mode sees a fixed number (six) of beamsplitters, and the loss is dominated by the transmission of the largest loop. If the shortest loop, which accommodates only one mode, has transmission η_{unit}, then the largest loss is given by \({\eta }_{{\rm{unit}}}^{36}\), which should be contrasted with \({\eta }_{{\rm{unit}}}^{216}\) for a universal interferometer. Although we sacrifice some connectivity, the many-mode entanglement predicted in our ground truth (logarithmic negativity^{43} of 6.08 when splitting the modes of the ground truth between the first and last 108) is comparable to that found in a Gaussian state prepared using a random Haar interferometer with a comparable net transmission and brightness (for which the logarithmic negativity across the same bipartition is 15.22). For the largest experiment considered below, the net transmittance is around 33%.
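The per-mode loss comparison above amounts to one line of arithmetic; η_unit = 0.99 below is a hypothetical unit-cell transmission chosen only for illustration.

```python
# Illustrative comparison of per-mode loss scaling (eta_unit is assumed).
eta_unit = 0.99   # hypothetical transmission of one unit cell
modes = 216

# Loop-based architecture: worst-case path is one circulation of the
# largest loop, ~ eta_unit ** 36 for a 36-bin loop.
loop_based = eta_unit ** 36
# Universal interferometer: each mode traverses O(M) lossy components.
universal = eta_unit ** modes

print(f"loop-based worst path: {loop_based:.3f}")  # 0.696
print(f"universal worst path:  {universal:.3f}")   # 0.114
```

Even with a 1% unit loss, the universal architecture's worst-case path transmission is roughly six times lower, which is the gap the loop-based design exploits.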
As discussed in the Methods, this connectivity, combined with the high brightness of our source (averaging r ≈ 1.1), places our experiment well beyond any currently known polynomial-time approximate classical simulation^{31}.
Sampling of highdimensional GBS instances
All temporal modes of our synthesized high-dimensional Gaussian states are sampled using superconducting transition-edge sensors (TESs), which allow photon-number resolution up to 13 photons per detector in our data. The relaxation time of our TESs, back to baseline following illumination, is of the order of 10 to 20 μs, corresponding to 50–100 kHz (ref. ^{40}), and depends on the expected photon number. At this speed, the length of the shortest loop delay would be 2 km, leading to excessive losses and more challenging phase stabilization in our system. Our pulse-train and processing speed of 6 MHz, chosen to maintain manageable loop lengths, is thus too fast for reliable photon-number extraction. To bridge the gap between the typical PNR speed and our processing speed, we use a demultiplexing device that provides an effective 16× speed-up, and we develop a post-processing 'tail-subtraction' scheme, described below, enabling operation of each PNR at 375 kHz.
The role of the demux, depicted as a binary tree in Fig. 1, is to reroute squeezed-light pulse modes from the incoming train into 16 separate and independent spatial modes, each containing a fibre-coupled PNR TES detector. There are 15 low-loss resonant EOMs grouped in four layers. The EOMs in each layer have a preset frequency: one at 3 MHz, two at 1.5 MHz, four at 750 kHz and eight at 375 kHz. Each EOM is sandwiched between two polarizing beamsplitters, with a quarter-wave plate at 45° in front. The modulators are driven by a standalone unit generating several phase-locked sine-wave signals temporally synchronized with the input train. The switching extinction ratio is measured to be above 200:1 for all modulators.
Several methods have been demonstrated to extract photon numbers from a PNR detector's output voltage waveform, each with its own advantages^{44,45,46,47}. Here we use a modified version of the method presented in ref. ^{47}. First, each detector is calibrated using well-separated pulses of squeezed light with a high mean photon number (⟨n⟩ ≈ 1) and 500 × 10^{3} repetitions. This gives enough high photon-number events to ensure that at least the 0- to 11-photon clusters can be identified using the area method. From each cluster, the mean shape of the waveforms is determined. To extract the photon-number arrays from experimental data, the mean-square distance between each waveform and each cluster's mean shape is computed, and the photon number is assigned to the closest cluster. Because we operate the individual PNRs at 375 kHz, faster than their relaxation (back to baseline following illumination) permits, the tail of each pulse still persists when the next pulse arrives at the same PNR. To prevent these tails from reducing the photon-number extraction fidelity of a pulse, the mean shape for the identified previous photon number is subtracted. See Supplementary Information for details.
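The tail-subtraction scheme can be sketched as follows: assign each waveform to the nearest mean shape, then subtract that shape's persisting tail from the next pulse arriving at the same detector. This is a simplified stand-in for the calibrated pipeline; the template lengths, distance metric details and calibration procedure are assumptions.

```python
import numpy as np

def assign_photon_numbers(waveforms, templates):
    """Assign photon numbers to a sequence of TES waveforms by
    nearest mean-shape (template) matching with tail subtraction.

    waveforms: (num_pulses, num_samples) array for one detector.
    templates: (n_max + 1, 2 * num_samples) mean shapes, long enough
               that the tail overlapping the next pulse is included.
    Illustrative sketch only, not the calibrated pipeline.
    """
    num_samples = waveforms.shape[1]
    residual_tail = np.zeros(num_samples)
    photons = []
    for w in waveforms:
        cleaned = w - residual_tail          # subtract previous pulse's tail
        # mean-square distance to each template's leading section
        d = np.mean((templates[:, :num_samples] - cleaned) ** 2, axis=1)
        n = int(np.argmin(d))                # closest cluster -> photon number
        photons.append(n)
        residual_tail = templates[n, num_samples:]  # tail entering next slot
    return photons
```

On synthetic waveforms built by summing a pulse shape with the previous pulse's tail, this recovers the programmed photon-number sequence exactly.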
Estimation of the ground truth parameters
Given that all the squeezed states come from the same squeezer, and given the programmability of our system, we can parametrize and characterize the loss budget of our system using a very small set of parameters. The first set of parameters corresponds to the relative efficiencies of the 16 different demux-detector channels, η_{demux,i} for \(i\in \{0,1,\ldots ,15\}\). The second parameter is simply the common transmittance η_{C}. Finally, we have the transmittance associated with a round trip through each loop, η_{k} for \(k\in \{0,1,2\}\).
To characterize the first two parameter sets, namely the demux and common loss, we set all the loops to a 'bar' state (α_{k} = π/2), preventing any light from entering the delays. As the input energy is the same, we can simply estimate the ratio of the transmittances of the different demux-detector channels as \({\eta }_{{\rm{demux}},i}/{\eta }_{{\rm{demux}},j}={\bar{n}}_{i}/{\bar{n}}_{j}\), where \({\bar{n}}_{j}\) is the mean photon number measured in detector j. Without loss of generality, we can take the largest of the η_{demux,i} to be equal to one and assign any absolute loss from this and any other channel into the common loss η_{C}. To determine the common loss, we use the noise-reduction factor (NRF), defined as^{48,49}

$${{\rm{NRF}}}_{ij}=\frac{{\Delta }^{2}({n}_{i}-{n}_{j})}{\langle {n}_{i}\rangle +\langle {n}_{j}\rangle },$$
where n_{i} and n_{j} are the photon-number random variables measured in modes i and j, and we write variances as \({\Delta }^{2}X=\langle {X}^{2}\rangle -{\langle X\rangle }^{2}\).
If losses can be considered uniform, which is an excellent approximation if we use only the loop with the shortest delay, it is straightforward to show that the NRF of a two-mode squeezed vacuum directly gives the loss seen by the two modes as NRF_{TMSV} = 1 − η. To prepare the two-mode squeezed vacuum, we set our VBS matrix to be proportional to \(\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}\) when the two single-mode squeezed pulses meet at the beamsplitter. To this end, we use the sequence [t_{0} = 0, t_{1} = 1/2, t_{2} = 0], where, recall, we write t_{i} = cos^{2}α_{i} for the transmittance of a particular loop time bin i. We can now scan the controllable phase of the VBS, ϕ_{k}, and determine where the minimum occurs \(({\varphi }_{k}^{{\rm{\min }}}={\mu }_{0}\,{\rm{mod}}\,\pi )\), which at the same time provides the relative phase offset of the first loop and the net transmittance of the setup. This observation can be used to obtain the phase offset of any other loop round trip. Although in the current version of our system these offsets are set by the locking system, they can in principle also be made programmable. The transmittance η = 1 − NRF_{TMSV} = η_{C} × η_{0} × η_{demux} is the product of the common transmittance η_{C}, the round-trip transmittance of the first loop η_{0} and the average transmittance of the two demux-detector channels used to detect the two halves of the two-mode squeezed vacuum, \({\eta }_{{\rm{demux}}}=\frac{1}{2}({\eta }_{{\rm{demux}},i}+{\eta }_{{\rm{demux}},j})\). From this relation, we can find
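A minimal sketch of the NRF-based transmittance estimate, applying the definition above to simulated photon-number samples. The Poisson pair source below is a stand-in for the two-mode squeezed vacuum: the relation NRF = 1 − η only requires perfectly correlated photon numbers before loss, so the exact pair-number distribution is immaterial for this check.

```python
import numpy as np

def nrf(n_i, n_j):
    """Noise-reduction factor for photon-number samples of two modes:
    NRF = Var(n_i - n_j) / (<n_i> + <n_j>)."""
    n_i, n_j = np.asarray(n_i), np.asarray(n_j)
    return np.var(n_i - n_j) / (n_i.mean() + n_j.mean())

# Toy check: perfectly correlated pairs followed by independent
# binomial loss with transmittance eta on each arm.
rng = np.random.default_rng(0)
eta = 0.7
pairs = rng.poisson(2.0, size=200_000)   # stand-in for correlated pairs
n_i = rng.binomial(pairs, eta)
n_j = rng.binomial(pairs, eta)
print(1 - nrf(n_i, n_j))  # estimates eta, close to 0.7
```

With lossless, perfectly correlated arms the NRF vanishes; loss raises it linearly, which is what makes it a convenient transmittance probe.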
This calibration depends on knowing the value of the round-trip transmittance factor associated with the first loop. To estimate the round-trip transmittance of a particular loop \({\ell }\), we bypass the other loop delays and compare the amount of light detected when light undergoes a round trip through that loop with the amount detected when all the round-trip channels are closed, that is, when all loops are in a 'bar' state. We obtain \({\eta }_{{\ell }}\), which we can then plug into equation (7) to complete the calibration sequence.
Finally, having characterized the loss budget of the experiment, we can obtain the brightness and squeezing parameters at the source by measuring photon numbers when all the loops are closed and dividing by the net transmittance. For each of the three regimes considered in the main text, the standard deviation of the estimated squeezing parameters and mean photon numbers is below 1% of the respective means.
From the same data acquired above for a pair of modes, we calculate the unheralded second-order correlation

$${g}^{(2)}=\frac{\langle n(n-1)\rangle }{{\langle n\rangle }^{2}}$$

for each pair of temporal modes.
for each pair of temporal modes. When we attain the minimum NRF at ϕ_{k} = μ_{0}, that is, when we prepare twomode squeezed vacuum, it is easy to see that^{50}
where K is the socalled Schmidt number of the source. This quantifies the amount of spectral mixedness in the generated squeezed light. An ideal squeezed vacuum light source would yield g^{(2)} = 2. We report K = 1.12 for g^{(2)} = 1.89 for the dataset used in the large mode and photonnumber regime.
Theory sections
Transfer matrix, T
The loop-based interferometer, like any other interferometer, can be described by a transfer matrix T that uniquely specifies the transformation effected on the input light. For our GBS implementation, this interferometer is obtained by combining three layers of phase gates and beamsplitters (two-mode gates), interfering modes that are contiguous or separated by six or 36 time bins, which we write as

$$T={T}_{{\rm{demux}}}\sqrt{{\eta }_{C}}\,\prod _{d=0}^{D-1}\,\prod _{i}{B}_{i,i+{a}^{d}}({\rm{VBS}}),$$
where in our case D = 3 gives the number of loops, \({a}^{d}\) for \(d\in \{0,1,2\}\) takes the values {1, 6, 36} with a = 6, giving the number of modes that each loop can hold, and \({B}_{i,i+{a}^{d}}({\rm{VBS}})\) is an M × M transfer matrix that acts like the VBS in the subspace of modes i and j = i + a^{d} and like the identity elsewhere.
In the last equation, η_{C} is the common transmittance throughout the interferometer, associated with the escape efficiency of the squeezer cavity and the propagation loss in common elements. T_{demux} is a diagonal matrix that contains the square roots of the energy transmittances of the demux channels into which the modes are rerouted for measurement. Because the demux has 16 channels, it holds that \({({T}_{{\rm{demux}}})}_{i,i}={({T}_{{\rm{demux}}})}_{i+16,i+16}=\sqrt{{\eta }_{{\rm{demux}},i}}\). Finally, we set the phases of the VBS to be uniformly distributed in the range [−π/2, π/2] and the transmittances to be uniformly distributed in the range [0.45, 0.55]. This range highlights the programmability of the device while also generating the high degrees of entanglement that are typically achieved when the transmittance is one-half.
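A small-scale sketch of assembling T as a product of embedded two-mode gates, in the lossless limit (loop and demux losses omitted, and the gate ordering simplified relative to the actual time-bin schedule):

```python
import numpy as np

rng = np.random.default_rng(42)

def embed_bs(M, i, j, alpha, phi):
    """Embed a 2x2 beamsplitter acting on modes (i, j) into an M x M
    identity: a simplified, lossless stand-in for B_{i,j}(VBS)."""
    B = np.eye(M, dtype=complex)
    B[i, i] = B[j, j] = np.cos(alpha)
    B[i, j] = 1j * np.exp(1j * phi) * np.sin(alpha)
    B[j, i] = 1j * np.exp(-1j * phi) * np.sin(alpha)
    return B

M, D, a = 24, 3, 6           # small example; Borealis uses M = 216
T = np.eye(M, dtype=complex)
for d in range(D):           # loops couple modes separated by 1, a, a**2 bins
    delay = a ** d
    for i in range(M - delay):
        alpha = np.arccos(np.sqrt(rng.uniform(0.45, 0.55)))  # t in [0.45, 0.55]
        phi = rng.uniform(-np.pi / 2, np.pi / 2)
        T = embed_bs(M, i, i + delay, alpha, phi) @ T

# Lossless, so T is unitary; with losses it would be subunitary.
print(np.allclose(T.conj().T @ T, np.eye(M)))  # True
```

Reintroducing the loop transmittances and left-multiplying by the diagonal demux factor would make T subunitary, as described in the text.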
In the idealized limit of a lossless interferometer, the matrix representing it is unitary; otherwise, the matrix T is subunitary (meaning its singular values are bounded by 1). The matrix T, together with the input squeezing parameters r, defines a GBS instance. Squeezed states interfered in an interferometer (lossy or lossless) always lead to a Gaussian state, that is, one that has a Gaussian Wigner function. Moreover, loss can never map a nonclassical state (one having noise in a quadrature below the vacuum level) to a classical state. Thus there exists a finite separation in Hilbert space between lossy squeezed states and classical states. To gauge this separation, and how it influences sampling, we use the results from ref. ^{31} to show in the section 'Regimes of classical simulability' that the probability distribution associated with the ground truth programmed into the device cannot be well approximated by any classical Gaussian state.
As in previous GBS experiments, the ground truth to which our quantum computer is compared contains imperfections due to loss, so we benchmark our machine against the operation of a lossy unitary. In this more realistic scenario in which losses are included, the state generated at the output cannot be described by a state vector, and thus one cannot assign probability amplitudes to an event. In this case, probabilities are calculated from the density matrix of the Gaussian state using the standard Born rule, and the probability of an N-photon event is proportional to the hafnian of a 2N × 2N matrix.
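For intuition, the hafnian sums weighted perfect matchings of the matrix's index set, which is what makes these probabilities hard to compute at scale. A brute-force implementation, usable only for tiny matrices (unlike the optimized algorithms of refs. ^{32,33,34}), is:

```python
import numpy as np

def hafnian(A):
    """Hafnian of an even-dimensional symmetric matrix by brute-force
    summation over perfect matchings; exponential time, for
    illustration only (the experiment's 2N x 2N matrices are far larger)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0

    def matchings(rest):
        # yield all perfect matchings of the index list `rest`
        if not rest:
            yield []
            return
        i = rest[0]
        for k in range(1, len(rest)):
            j = rest[k]
            for m in matchings(rest[1:k] + rest[k + 1:]):
                yield [(i, j)] + m

    total = 0.0
    for m in matchings(list(range(n))):
        p = 1.0
        for i, j in m:
            p *= A[i, j]
        total += p
    return total

# The all-ones 4x4 matrix has one unit weight per matching of K4:
A = np.ones((4, 4))
print(hafnian(A))  # 3 perfect matchings of 4 vertices -> 3.0
```

The number of matchings grows as (2N − 1)!!, so for the 219-photon events reported here direct evaluation is far beyond reach, which underpins the classical-runtime estimates in the main text.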
Regimes of classical simulability
As a necessary but not sufficient test of the beyond-classical capabilities of our machine, we consider the GBS test introduced in ref. ^{31}. This test states that a noisy GBS device can be efficiently simulated classically, up to error ϵ, if the following condition is satisfied:
Here q_{D} is the dark-count probability of the detectors, η is the overall transmittance of the interferometer, r is the squeezing parameter of the M input squeezed states (assumed identical) and ϵ is a bound on the total variation distance (TVD) between the photon-number probability distributions of the GBS instance and the classical adversary. For our experiment, we estimate an average transmittance η = Tr(TT^{†})/M = 0.32, a dark-count probability q_{D} = 10^{−3} and an average squeezing parameter r = 1.10, where M is the total number of modes. With these parameters, the inequality above has no solution for \({\epsilon }\in [0,1]\), meaning that our machine passes this nonclassicality test.
Greedy adversarial spoofer
The greedy adversarial spoofer tries to mimic the low-order correlations of the distribution: it takes as input the k-order (k ∈ {1, 2}) marginal distributions and optimizes a set of samples (represented as an array of size M × K) so as to minimize the distance between the marginals of this array and those of the ground truth. In a recent preprint, Villalonga et al.^{3} argue that, using such a greedy algorithm, they can obtain a better score on the cross-entropy benchmark against the ground truth of the experiments in refs. ^{1,2} than the samples generated in those experiments. We generalized the greedy algorithm introduced by Villalonga et al.^{3} to work with photon-number-resolved samples and find that it is unable to spoof the samples generated by our machine at the cross-entropy benchmark that we use for scoring the different adversaries. Details of the algorithm are provided in the Supplementary Information.
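The following toy version conveys the greedy principle: propose local changes to a sample array and keep those that bring the first-order marginals closer to the target. The real adversary of ref. ^{3} (and our photon-number-resolved generalization) also matches second-order marginals and is scored via cross-entropy; everything below, including the function name and cost, is an illustrative simplification.

```python
import numpy as np

def greedy_spoof(target_mean, num_samples, n_max=3, iters=2000, seed=0):
    """Toy greedy spoofer: start from random photon-number samples and
    accept single-entry changes that reduce the squared distance between
    the samples' mean photon number per mode and the target's.
    The actual adversary also matches two-point correlations."""
    rng = np.random.default_rng(seed)
    M = len(target_mean)
    samples = rng.integers(0, n_max + 1, size=(num_samples, M))

    def cost(s):
        return np.sum((s.mean(axis=0) - target_mean) ** 2)

    c = cost(samples)
    for _ in range(iters):
        k, m = rng.integers(num_samples), rng.integers(M)
        old = samples[k, m]
        samples[k, m] = rng.integers(0, n_max + 1)
        c_new = cost(samples)
        if c_new < c:
            c = c_new            # keep the greedy improvement
        else:
            samples[k, m] = old  # revert
    return samples, c

target = np.array([0.2, 0.5, 1.0, 0.7])
samples, final_cost = greedy_spoof(target, num_samples=50)
```

Because only improving moves are accepted, the cost is non-increasing; such samples can match low-order marginals well without ever sampling the true distribution, which is exactly why marginal-matching alone is an insufficient benchmark.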
Data availability
The datasets generated and analysed for this study are available at https://github.com/XanaduAI/xanadu-qca-data.
References
Zhong, H.-S. et al. Quantum computational advantage using photons. Science 370, 1460–1463 (2020).
Zhong, H.-S. et al. Phase-programmable Gaussian boson sampling using stimulated squeezed light. Phys. Rev. Lett. 127, 180502 (2021).
Villalonga, B. et al. Efficient approximation of experimental Gaussian boson sampling. Preprint at https://arxiv.org/abs/2109.11525 (2021).
Hamilton, C. S. et al. Gaussian boson sampling. Phys. Rev. Lett. 119, 170501 (2017).
Deshpande, A. et al. Quantum computational advantage via high-dimensional Gaussian boson sampling. Sci. Adv. 8, eabi7894 (2022).
Arute, F. et al. Quantum supremacy using a programmable superconducting processor. Nature 574, 505–510 (2019).
Wu, Y. et al. Strong quantum computational advantage using a superconducting quantum processor. Phys. Rev. Lett. 127, 180501 (2021).
Zhu, Q. et al. Quantum computational advantage via 60-qubit 24-cycle random circuit sampling. Sci. Bull. 67, 240–245 (2022).
Bourassa, J. E. et al. Blueprint for a scalable photonic fault-tolerant quantum computer. Quantum 5, 392 (2021).
Bartolucci, S. et al. Fusion-based quantum computation. Preprint at https://arxiv.org/abs/2101.09310 (2021).
Larsen, M. V., Chamberland, C., Noh, K., Neergaard-Nielsen, J. S. & Andersen, U. L. Fault-tolerant continuous-variable measurement-based quantum computation architecture. PRX Quantum 2, 030325 (2021).
Bromley, T. R. et al. Applications of near-term photonic quantum computers: software and algorithms. Quantum Sci. Technol. 5, 034010 (2020).
Huh, J., Guerreschi, G. G., Peropadre, B., McClean, J. R. & Aspuru-Guzik, A. Boson sampling for molecular vibronic spectra. Nat. Photonics 9, 615–620 (2015).
Arrazola, J. M. & Bromley, T. R. Using Gaussian boson sampling to find dense subgraphs. Phys. Rev. Lett. 121, 030503 (2018).
Banchi, L., Fingerhuth, M., Babej, T., Ing, C. & Arrazola, J. M. Molecular docking with Gaussian boson sampling. Sci. Adv. 6, eaax1950 (2020).
Jahangiri, S., Arrazola, J. M., Quesada, N. & Killoran, N. Point processes with Gaussian boson sampling. Phys. Rev. E 101, 022134 (2020).
Jahangiri, S., Arrazola, J. M., Quesada, N. & Delgado, A. Quantum algorithm for simulating molecular vibrational excitations. Phys. Chem. Chem. Phys. 22, 25528–25537 (2020).
Banchi, L., Quesada, N. & Arrazola, J. M. Training Gaussian boson sampling distributions. Phys. Rev. A 102, 012417 (2020).
Takeda, S. & Furusawa, A. Toward large-scale fault-tolerant universal photonic quantum computing. APL Photonics 4, 060902 (2019).
Motes, K. R., Gilchrist, A., Dowling, J. P. & Rohde, P. P. Scalable boson sampling with time-bin encoding using a loop-based architecture. Phys. Rev. Lett. 113, 120501 (2014).
Yoshikawa, J.-i. et al. Invited article: generation of one-million-mode continuous-variable cluster state by unlimited time-domain multiplexing. APL Photonics 1, 060801 (2016).
Larsen, M. V., Guo, X., Breum, C. R., Neergaard-Nielsen, J. S. & Andersen, U. L. Deterministic generation of a two-dimensional cluster state. Science 366, 369–372 (2019).
Asavanant, W. et al. Generation of time-domain-multiplexed two-dimensional cluster state. Science 366, 373–376 (2019).
Asavanant, W. et al. Time-domain-multiplexed measurement-based quantum operations with 25-MHz clock frequency. Phys. Rev. Appl. 16, 034005 (2021).
Larsen, M. V., Guo, X., Breum, C. R., Neergaard-Nielsen, J. S. & Andersen, U. L. Deterministic multimode gates on a scalable photonic quantum computing platform. Nat. Phys. 17, 1018–1023 (2021).
Enomoto, Y., Yonezu, K., Mitsuhashi, Y., Takase, K. & Takeda, S. Programmable and sequential Gaussian gates in a loop-based single-mode photonic quantum processor. Sci. Adv. 7, eabj6624 (2021).
Bartlett, S. D., Sanders, B. C., Braunstein, S. L. & Nemoto, K. Efficient classical simulation of continuous variable quantum information processes. Phys. Rev. Lett. 88, 097904 (2002).
Raussendorf, R., Harrington, J. & Goyal, K. A fault-tolerant one-way quantum computer. Ann. Phys. 321, 2242–2270 (2006).
Raussendorf, R., Harrington, J. & Goyal, K. Topological fault-tolerance in cluster state quantum computation. New J. Phys. 9, 199 (2007).
Bulmer, J. F. et al. The boundary for quantum advantage in Gaussian boson sampling. Sci. Adv. 8, eabl9236 (2022).
Qi, H., Brod, D. J., Quesada, N. & García-Patrón, R. Regimes of classical simulability for noisy Gaussian boson sampling. Phys. Rev. Lett. 124, 100502 (2020).
Björklund, A., Gupt, B. & Quesada, N. A faster hafnian formula for complex matrices and its benchmarking on a supercomputer. J. Exp. Algor. 24, 11 (2019).
Gupt, B., Izaac, J. & Quesada, N. The Walrus: a library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. J. Open Source Softw. 4, 1705 (2019).
Quesada, N. et al. Quadratic speedup for simulating Gaussian boson sampling. PRX Quantum 3, 010306 (2022).
The 56th edition of the TOP500 list. TOP500 https://www.top500.org/lists/top500/2020/11/ (2020).
Li, Y. et al. Benchmarking 50-photon Gaussian boson sampling on the Sunway TaihuLight. IEEE Trans. Parallel Distrib. Syst. 33, 1357–1372 (2021).
Gray, J. & Kourtis, S. Hyper-optimized tensor network contraction. Quantum 5, 410 (2021).
Rohde, P. P. Simple scheme for universal linear-optics quantum computing with constant experimental complexity using fiber loops. Phys. Rev. A 91, 012306 (2015).
Lita, A. E., Miller, A. J. & Nam, S. W. Counting near-infrared single-photons with 95% efficiency. Opt. Express 16, 3032–3040 (2008).
Arrazola, J. M. et al. Quantum circuits with many photons on a programmable nanophotonic chip. Nature 591, 54–60 (2021).
Qi, H., Helt, L. G., Su, D., Vernon, Z. & Brádler, K. Linear multiport photonic interferometers: loss analysis of temporally-encoded architectures. Preprint at https://arxiv.org/abs/1812.07015 (2018).
Mehmet, M. et al. Squeezed light at 1550 nm with a quantum noise reduction of 12.3 dB. Opt. Express 19, 25763–25772 (2011).
Weedbrook, C. et al. Gaussian quantum information. Rev. Mod. Phys. 84, 621 (2012).
Figueroa-Feliciano, E. et al. Optimal filter analysis of energy-dependent pulse shapes and its application to TES detectors. Nucl. Instrum. Methods Phys. Res. Sect. A 444, 453–456 (2000).
Humphreys, P. C. et al. Tomography of photon-number resolving continuous-output detectors. New J. Phys. 17, 103044 (2015).
Morais, L. A. et al. Precisely determining photon-number in real-time. Preprint at https://arxiv.org/abs/2012.10158 (2020).
Levine, Z. H. et al. Algorithm for finding clusters with a known distribution and its application to photon-number resolution using a superconducting transition-edge sensor. J. Opt. Soc. Am. B 29, 2066–2073 (2012).
Harder, G. et al. Single-mode parametric-down-conversion states with 50 photons as a source for mesoscopic quantum optics. Phys. Rev. Lett. 116, 143601 (2016).
Aytür, O. & Kumar, P. Pulsed twin beams of light. Phys. Rev. Lett. 65, 1551 (1990).
Christ, A., Laiho, K., Eckstein, A., Cassemiro, K. N. & Silberhorn, C. Probing multimode squeezing with correlation functions. New J. Phys. 13, 033027 (2011).
Acknowledgements
We thank J. M. Arrazola and M. V. Larsen for providing feedback on the manuscript, S. Fayer and D. Phillips for assistance with the PNR detectors, M. Seymour and J. Hundal for assistance with data acquisition code, D.H. Mahler for helpful discussions, K. Brádler for guidance and A. Fumagalli for assistance with software. N.Q. thanks H. Qi, A. Deshpande, A. Mehta, B. Fefferman, S. S. Nezhadi and B. A. Bell for discussions. We thank SOSCIP for their computational resources and financial support. We acknowledge the computational resources and support from SciNet. SciNet is supported by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund: Research Excellence and the University of Toronto. SOSCIP is supported by the Federal Economic Development Agency of Southern Ontario, IBM Canada Ltd and Ontario academic member institutions. Certain commercial equipment, instruments or materials are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.
Author information
Contributions
L.S.M., M.F.A. and J.L. designed and built the experiment. F.L. developed the software stack for programmable hardware and data analysis with L.G.H. and L.N. F.R., M.J.C., T.G., A.E.L. and S.W.N. developed and built the PNR detector system. T.V. carried out highperformance computations and generated plots for the manuscript. J.F.F.B., I.D. and N.Q. provided guidance on theory, approach and benchmarking. F.M.M. implemented the greedy sampler algorithm. V.D.V. and M.M. designed and simulated the squeezedlight source. N.Q. and J.L. led the project, and cowrote the manuscript with Z.V., with input from all authors.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature thanks Sergio Boixo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Madsen, L. S., Laudenbach, F., Askarani, M. F. et al. Quantum computational advantage with a programmable photonic processor. Nature 606, 75–81 (2022). https://doi.org/10.1038/s41586-022-04725-x