Quantifying entanglement in a 68-billion-dimensional quantum state space

Entanglement is the powerful and enigmatic resource central to quantum information processing, which promises capabilities in computing, simulation, secure communication, and metrology beyond what is possible for classical devices. Exactly quantifying the entanglement of an unknown system requires completely determining its quantum state, a task which demands an intractable number of measurements even for modestly sized systems. Here we demonstrate a method for rigorously quantifying high-dimensional entanglement from extremely limited data. We improve an entropic, quantitative entanglement witness to operate directly on compressed experimental data acquired via an adaptive, multilevel sampling procedure. Only 6,456 measurements are needed to certify an entanglement-of-formation of 7.11 ± .04 ebits shared by two spatially-entangled photons. With a Hilbert space exceeding 68 billion dimensions, we need 20-million-times fewer measurements than the uncompressed approach and 10^18-times fewer measurements than tomography. Our technique offers a universal method for quantifying entanglement in any large quantum system shared by two parties.

Next, we define the mixed probability distribution $\bar{P}_1 \equiv \lambda P_1 + (1-\lambda)P'_1$, and define $\bar{P}_2$ similarly. Since $P'_1$ and $P'_2$ are respectively related to $P_1$ and $P_2$ by the same permutation $\chi$, we have that $D(P'_1||P'_2) = D(P_1||P_2)$. Therefore, we obtain the inequality:

$$D(P_1||P_2) \geq D(\bar{P}_1||\bar{P}_2). \qquad (7)$$

This result, that mixing (i.e., majorization) cannot increase relative entropy, has far-reaching applications. In particular, coarse-graining is a form of majorization between adjacent elements in a probability distribution. Because all (Shannon) entropic functions can be expressed in terms of relative entropies, it immediately follows that

$$H_P(X) \leq H_{\bar{P}}(X),$$

where the subscripts $P$ and $\bar{P}$ represent the probability distribution before and after coarse-graining, respectively. In addition, the mutual information and the conditional mutual information obey the inequalities

$$H_P(X_A : X_B) \geq H_{\bar{P}}(X_A : X_B),$$
$$H_P(X_A : X_B | X_C) \geq H_{\bar{P}}(X_A : X_B | X_C),$$

where again the subscripts $P$ and $\bar{P}$ denote the true and coarse-grained probability distributions, respectively. Furthermore, both the continuous mutual information $h(x_A : x_B)$ and the continuous conditional mutual information $h(x_A : x_B | x_C)$ are expressible as high-resolution limits of corresponding discrete mutual informations. Because successive coarse-grainings cannot increase these quantities, the following inequalities hold between the discrete and continuous mutual informations:

$$h(x_A : x_B) \geq H(X_A : X_B), \qquad (8)$$
$$h(x_A : x_B | x_C) \geq H(X_A : X_B | X_C). \qquad (9)$$

While the former inequality (8) can be found with alternative methods, the latter inequality (9) is new to the literature.
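These monotonicity properties are straightforward to check numerically. The following sketch (our illustrative addition, not part of the original derivation) uses randomly generated distributions to verify that mixing a distribution with an identically permuted copy cannot increase relative entropy, and that coarse-graining adjacent elements cannot increase mutual information:

```python
import numpy as np

rng = np.random.default_rng(0)

def rel_entropy(p, q):
    # Kullback-Leibler divergence D(p||q) in bits
    return np.sum(p * np.log2(p / q))

def mutual_info(P):
    # Mutual information of a joint distribution P(x, y) in bits
    px = P.sum(axis=1, keepdims=True)
    py = P.sum(axis=0, keepdims=True)
    mask = P > 0
    return np.sum(P[mask] * np.log2((P / (px * py))[mask]))

# Two random distributions, mixed with copies permuted by the same chi
d = 8
p1 = rng.random(d); p1 /= p1.sum()
p2 = rng.random(d); p2 /= p2.sum()
chi = rng.permutation(d)
lam = 0.3
p1bar = lam * p1 + (1 - lam) * p1[chi]
p2bar = lam * p2 + (1 - lam) * p2[chi]

# Mixing (majorization) cannot increase relative entropy
assert rel_entropy(p1bar, p2bar) <= rel_entropy(p1, p2) + 1e-12

# Coarse-graining X by merging adjacent pairs of rows
P = rng.random((8, 8)); P /= P.sum()
Pc = P.reshape(4, 2, 8).sum(axis=1)

# Coarse-graining cannot increase mutual information
assert mutual_info(Pc) <= mutual_info(P) + 1e-12
```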

Proof of inequality (2)
Inequality (2) derives from two fundamental properties of Shannon entropy. The first is subadditivity: the joint Shannon entropy is less than or equal to the sum of the marginal entropies,

$$H(X, Y) \leq H(X) + H(Y).$$

The second is that conditioning on additional variables cannot increase entropy, or conversely, that removing conditioning variables cannot reduce entropy:

$$H(X | Y, Z) \leq H(X | Y).$$

Together, these two properties prove inequality (2).
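As a quick sanity check (our addition, not part of the original proof), both properties can be verified numerically on a random three-variable joint distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    # Shannon entropy in bits, ignoring zero-probability entries
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Random joint distribution P(x, y, z)
P = rng.random((4, 4, 4))
P /= P.sum()

H_x = entropy(P.sum(axis=(1, 2)))
H_y = entropy(P.sum(axis=(0, 2)))
H_xy = entropy(P.sum(axis=2).ravel())
H_yz = entropy(P.sum(axis=0).ravel())
H_xyz = entropy(P.ravel())

# Subadditivity: H(X, Y) <= H(X) + H(Y)
assert H_xy <= H_x + H_y + 1e-12

# Conditioning cannot increase entropy: H(X|Y,Z) <= H(X|Y)
assert H_xyz - H_yz <= H_xy - H_y + 1e-12
```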

Monte Carlo error analysis
For the results shown in the manuscript, we used standard, first-order propagation of uncertainty for error analysis. Each coincidence-count measurement is assumed to have Poissonian uncertainty, and this uncertainty is analytically propagated through the analysis.
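To illustrate the approach, here is a minimal sketch of first-order Poissonian error propagation (our addition; the hypothetical helper `entropy_with_error` propagates through a simple Shannon entropy rather than the full entanglement bound used in the actual analysis):

```python
import numpy as np

def entropy_with_error(counts):
    """Shannon entropy (bits) of normalized counts, with first-order
    propagation of Poissonian uncertainty (var(n_i) = n_i)."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    p = counts / N
    nz = p > 0
    H = -np.sum(p[nz] * np.log2(p[nz]))
    # Chain rule through p_j = n_j / N gives dH/dn_i = -(log2(p_i) + H) / N
    dH = -(np.log2(np.where(nz, p, 1.0)) + H) / N
    # First-order propagation: sigma_H^2 = sum_i (dH/dn_i)^2 * var(n_i)
    sigma = np.sqrt(np.sum(dH**2 * counts))
    return H, sigma

H, sigma = entropy_with_error([400, 200, 100, 100])
```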
To confirm the validity of our propagation-style error analysis, we also estimated our uncertainty with Monte Carlo simulations. This approach avoids the potential issues that could arise if our equations were not sufficiently well-behaved for first-order propagation of error. However, it replaces a simple analytical result with the need for computational simulation.
To perform the Monte Carlo simulations, each coincidence-count measurement is used as the mean of a Poisson distribution from which a simulated count is sampled. We then follow our previously described process for generating joint-probability distributions (with or without accidental subtraction) and calculating the amount of entanglement. This process is repeated many times to see how the Poissonian counting statistics propagate to our final result.
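The resampling loop itself is simple. A minimal sketch (our addition, with a placeholder Shannon-entropy statistic standing in for the full joint-probability construction and entanglement-of-formation calculation described above) might look like:

```python
import numpy as np

def monte_carlo_uncertainty(counts, statistic, trials=100, seed=0):
    # Resample each measured count from a Poisson distribution whose mean
    # is the measured value, recompute the statistic for each trial, and
    # report the mean and spread of the results.
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts)
    samples = [statistic(rng.poisson(counts)) for _ in range(trials)]
    return float(np.mean(samples)), float(np.std(samples))

# Placeholder statistic: Shannon entropy (bits) of the normalized counts.
def shannon_bits(n):
    p = n / n.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

mean, sigma = monte_carlo_uncertainty(np.array([1200, 300, 80, 20]), shannon_bits)
```

The standard deviation of the trials serves as the uncertainty estimate, directly analogous to the 100-trial procedure used for Supplemental Figure 1.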
In Supplemental Figure 1, we recreate Figure 3 from the main text using this approach with 100 trials. The error bars shown enclose two standard deviations. The uncertainties from this approach behave similarly to those from the analytic propagation of error used in the main manuscript, but are consistently smaller. The values obtained for the entanglement of formation are 7.154 ± .015 (7.112 ± .0412) ebits with background subtraction and 3.459 ± .012 (3.425 ± .038) ebits without, where the analytic results are given in parentheses. The two outcomes are in good agreement, with the Monte Carlo uncertainties between two and four times smaller.

Maximum possible entanglement that can be certified with this technique
For photon statistics contained within a finite window, the maximum possible entanglement our relation can certify occurs when each pixel in the signal arm is correlated with only a single pixel in the idler arm, i.e., when all conditional entropies are zero. For such perfect diagonal correlations, the number of measurements our technique needs scales favorably with resolution, improving further with tighter correlations. For example, for $N \times N$ resolution in both position and momentum (assuming $N$ is a power of two for simplicity), one needs only about $12(N - \log_2(N) - 2)$ measurements, which, for $N = 512$, would be about 6096 measurements. This does not include the number of measurements needed to acquire the partitioning itself, which scales similarly. When the correlations are less tight, more pixels are required at maximum resolution, increasing this total.