Classical Communication Enhanced Quantum State Verification

Quantum state verification provides an efficient approach to characterizing the reliability of quantum devices for generating certain target states. The figure of merit of a specific strategy is the estimated infidelity $\epsilon$ of the tested state with respect to the target state, given a certain number $n$ of performed measurements. Entangled measurements constitute the globally optimal strategy and achieve the scaling in which $\epsilon$ is inversely proportional to $n$. Recent advances show that it is possible to achieve the same scaling simply with non-adaptive local measurements; however, the performance is still worse than the globally optimal bound by a constant factor. In this work, by introducing classical communication, we experimentally implement an adaptive quantum state verification. The constant factor is minimized from ~2.5 to 1.5 in this experiment, which means that only 60% of the measurements are required to achieve a certain value of $\epsilon$ compared to the optimal non-adaptive local strategy. Our results indicate that classical communication significantly enhances the performance of quantum state verification, leading to an efficiency that further approaches the globally optimal bound.


INTRODUCTION
Quantum information science aims to enhance traditional information techniques by introducing the advantage of "quantumness". To date, the major subfields of quantum information include quantum computation [1], quantum cryptography [2], and quantum metrology [3,4], which are respectively in pursuit of more efficient computation, more secure communication, and more precise measurement. To achieve these innovations, one needs to manufacture quantum devices and verify that these devices indeed operate as expected. Various techniques have been developed to inspect the quantum states generated by these devices. Quantum state tomography (QST) [5] provides full information about an unknown state by reconstructing the density matrix and constitutes a popular point estimation method. However, the conventional tomographic reconstruction of a state is an exponentially time-consuming and computationally difficult process [6].
In order to reduce the measurement complexity of certifying quantum states, substantial efforts have been made to formalize more efficient methods. These improved methods normally require prior information or access to partial knowledge about the states. On the one hand, it has been found that with prior information about the category of the tested states, compressed sensing [7,8] and matrix product state tomography [9] can be used to simplify the measurement of quantum states. On the other hand, entanglement witnesses can certify the presence of entanglement with far fewer measurements [10,11]; in an extreme case, it has been shown that local measurements on a few copies are sufficient to certify the presence of entanglement in multipartite entangled systems [12,13].
Furthermore, when the applied measurements are correlated through classical communication, quantum tomography can be implemented in a significantly more efficient way [14][15][16].
In quantum information processing, a quantum device is generally designed to generate a specific target state. In this case, the user only needs to confirm that the actual state is sufficiently close to the target state, in the sense that full knowledge about the exact form of the state is excessive for this requirement. Quantum state verification (QSV) provides an efficient solution applicable to this scenario. As mentioned above, tomography aims to address the question: What is the state? QSV addresses a different one: Is the state identical or close to the target state? From a practical point of view, answering this question is sufficient for many quantum information applications. By performing a series of measurements on the output copies of the state, QSV reaches a conclusion like "the device outputs copies of a state that has at least $1-\epsilon$ fidelity with the target, with $1-\delta$ confidence".
In order to verify a specific quantum state, different kinds of strategies can be constructed; thus, it is profitable for the user to seek an optimal strategy. Rigorously, this optimization can be achieved by minimizing the number of measurements $n$ for given values of $\epsilon$ and $\delta$. Similar to the realm of quantum metrology [17,18], an optimal QSV strategy also strives for a $1/n$ scaling of $\epsilon$, with a minimum constant factor in front. For QSV of a pure target state, the best strategy is the projection onto the target state and its complementary space, with which the $1/n$ scaling is reached; we call this the globally optimal QSV strategy. Unfortunately, if the target is an entangled state, entangled measurements are demanded, while they are rare resources and difficult to obtain [19]. Recently, several works have shown that the $1/n$ scaling can be achieved with a non-adaptive local (LO) strategy [20][21][22]; LO here means that the applied measurement operators are separable, as opposed to the entangled ones used in the globally optimal strategy. However, this non-adaptive LO strategy is still worse than the globally optimal strategy by a constant factor, which quantifies the additional measurements required to compete with the globally optimal strategy.
In this work, we demonstrate adaptive QSV using a photonic apparatus with active bi-directional feed-forward of classical communication between entangled photon pairs, based on recent theoretical works [23][24][25]. The achieved efficiency not only attains the $1/n$ scaling but also further minimizes the constant factor. Both bi- and uni-directional classical communication are utilized in our experiment, and the results show that these adaptive strategies significantly outperform the non-adaptive LO strategy. Furthermore, the bi-directional strategy achieves higher efficiency than the uni-directional strategy, and the number of required measurements is reduced by ∼40% compared to the non-adaptive LO strategy. Our results indicate that classical communication is a beneficial resource in QSV, which enhances the performance to a level comparable with the globally optimal strategy.

Theoretical Framework
In a QSV task, the verifier is assigned to certify that his on-hand quantum device does produce a series of quantum states $(\sigma_1, \sigma_2, \sigma_3, ..., \sigma_n)$ satisfying the following inequality:
$$\langle\Psi|\sigma_i|\Psi\rangle > 1-\epsilon, \quad i = 1, 2, ..., n, \qquad (1)$$
where $|\Psi\rangle$ is the target state that the device is supposed to produce. Eq. (1) assumes a different scenario from that of QST, for which all $\sigma_i$ are required to be independent and identically distributed.
Typically, with probabilities $p_l$ ($l = 1, 2, ..., m$), the verifier randomly performs a two-outcome local measurement $M_l$, which is accepted with certainty when performed on the target state. When all the measurement outcomes are accepted, the verifier can reach the statistical inference that the state from the tested device has a minimum fidelity $1-\epsilon$ to the target state, with a statistical confidence level of $1-\delta$.
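The accept/reject logic of this protocol can be sketched with a minimal Monte-Carlo simulation. This is an illustration only: the per-round acceptance probabilities, round counts, and seeds below are arbitrary choices, not experimental parameters.

```python
import random

def verify(accept_prob, n, rng=None):
    """Simulate n rounds of QSV; return True iff every round accepts.
    accept_prob is the per-round acceptance probability of the tested
    state (1 for the ideal target, at most 1-(1-lam2)*eps otherwise)."""
    rng = rng or random.Random(7)
    return all(rng.random() < accept_prob for _ in range(n))

# The ideal target state is always accepted.
assert verify(1.0, 1000)

# A worst-case eps-far state under a strategy with lam2 = 0.6, eps = 0.05:
# per-round acceptance is 1 - 0.4*0.05 = 0.98, so a long run of
# all-accept outcomes becomes exponentially unlikely.
trials = 2000
passed = sum(verify(0.98, 150, random.Random(i)) for i in range(trials))
print(passed / trials)  # close to 0.98**150, i.e. about 0.05
```

This is the sense in which rejection events single out imperfect devices: the probability that an $\epsilon$-far state survives all $n$ rounds decays exponentially in $n$.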
For a specific strategy $\Omega = \sum_l p_l M_l$, the minimum number of measurements $n$ required to achieve certain values of $\epsilon$ and $\delta$ is given by [23]
$$n_{\Omega} = \frac{\ln\delta}{\ln\left[1-\epsilon\left(1-\lambda_2^{\downarrow}(\Omega)\right)\right]} \approx \frac{1}{\epsilon\left(1-\lambda_2^{\downarrow}(\Omega)\right)}\ln\frac{1}{\delta},$$
where $\lambda_2^{\downarrow}(\Omega)$ denotes the second largest eigenvalue of $\Omega$. This result indicates that it is possible to achieve the $1/n$ scaling of $\epsilon$ in the QSV of pure entangled states. Furthermore, the verifier can optimize the strategy by minimizing the second largest eigenvalue $\lambda_2^{\downarrow}(\Omega)$, and thereby the constant factor $1/(1-\lambda_2^{\downarrow}(\Omega))$. For LO strategies with non-adaptive local measurements, the optimal strategy to verify $|\Psi(\theta)\rangle = \cos\theta|HV\rangle - \sin\theta|VH\rangle$ attains $\lambda_2^{\downarrow}(\Omega) = (1+\sin\theta\cos\theta)/(2+\sin\theta\cos\theta)$, corresponding to the constant factor $2+\sin\theta\cos\theta$ [20]. The globally optimal strategy can be realized by projecting $\sigma_i$ onto the target state $|\Psi\rangle$ and its orthogonal complement $|\Psi^\perp\rangle$, under which $\lambda_2^{\downarrow}(\Omega) = 0$; thus, the globally optimal bound is
$$n_{\rm opt} = \frac{\ln\delta}{\ln(1-\epsilon)} \approx \frac{1}{\epsilon}\ln\frac{1}{\delta}.$$
For QSV of entangled states, entangled measurements are required to implement the globally optimal strategy, which are sophisticated to perform [26][27][28][29]. Therefore, local measurements are preferred from a practical point of view. This realistic contradiction naturally raises the question of how to further close the gap between the locally and globally optimal strategies with currently accessible techniques.
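The scaling formula above is straightforward to evaluate numerically. A minimal sketch follows; the values of $\epsilon$ and $\delta$ are illustrative, not taken from the experiment.

```python
import math

def n_required(epsilon, delta, lam2):
    """Number of measurements n such that all-accept outcomes certify
    fidelity > 1 - epsilon with confidence 1 - delta:
    n = ln(delta) / ln(1 - epsilon * (1 - lam2))."""
    return math.ceil(math.log(delta) / math.log(1.0 - epsilon * (1.0 - lam2)))

epsilon, delta = 0.01, 0.05

# Globally optimal strategy: lam2 = 0.
n_opt = n_required(epsilon, delta, 0.0)

# Optimal non-adaptive LO strategy at theta = 45 deg:
# constant factor 2 + sin*cos = 2.5, i.e. lam2 = 1 - 1/2.5 = 0.6.
n_lo = n_required(epsilon, delta, 0.6)

print(n_opt, n_lo, n_lo / n_opt)  # the ratio is ~2.5, the constant factor
```

The ratio of the two measurement counts reproduces the constant factor separating the non-adaptive local strategy from the globally optimal bound.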
Recently, a theoretical work generalized the non-adaptive LO strategy to adaptive versions by introducing classical communication between the two parties sharing entanglement [23]. The elementary adaptive strategy utilizes local measurements and uni-directional classical communication (Uni-LOCC), as diagrammed in Fig. 1. The optimal Uni-LOCC QSV for $|\Psi(\theta)\rangle$ can be implemented by randomly choosing $M_1$, $M_2$, or $M_3$ (see Methods for details) with suitable prior probabilities, and the corresponding strategy can be written as $\Omega_{\rightarrow} = \sum_{l=1}^{3} p_l M_l$ [23]. A bi-directional LOCC (Bi-LOCC) strategy can be implemented by randomly switching the roles of Alice and Bob, which can be denoted as $\Omega_{\rightarrow\leftarrow}$. Although both of these strategies utilize one-step adaptive measurement, the Bi-LOCC strategy outperforms the Uni-LOCC one when $\theta \neq 45^{\circ}$.
When verifying entangled states with local measurements, the adaptive strategies $\Omega_{\rightarrow}$ and $\Omega_{\rightarrow\leftarrow}$ achieve higher efficiency than the non-adaptive LO strategy [23,25]. The efficiencies of the LO, Uni-LOCC, and Bi-LOCC strategies depend on their respective constant factors, which are $2+\sin\theta\cos\theta$, $1+\sin^2\theta$, and $3/2$, respectively. Although the performance of the two adaptive strategies coincides for two-qubit maximally entangled states ($\theta = 45^{\circ}$), the adaptive strategies are still preferred in most practical scenarios, where realistic states always deviate from the maximally entangled ones and are actually closer to target states with $\theta \neq 45^{\circ}$.
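The $\theta$-dependence of these three constant factors can be tabulated directly. The formulas are the ones quoted above (from [23]); the sample angles are arbitrary.

```python
import math

def constant_factors(theta):
    """Constant factors in front of (1/eps)*ln(1/delta) for the three
    strategies discussed in the text (formulas quoted from [23])."""
    lo  = 2.0 + math.sin(theta) * math.cos(theta)  # non-adaptive local
    uni = 1.0 + math.sin(theta) ** 2               # uni-directional LOCC
    bi  = 1.5                                      # bi-directional LOCC
    return lo, uni, bi

for deg in (45, 60):
    lo, uni, bi = constant_factors(math.radians(deg))
    print(deg, round(lo, 3), round(uni, 3), bi)
# At 45 deg the two adaptive factors coincide (1.5, vs 2.5 for LO);
# the Bi-LOCC factor stays at 3/2 for all theta.
```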

Experimental implementation and results
In the above QSV proposals, a valid statement about the tested states is based on the fact that all the outcomes are accepted, while a single occurrence of rejection will terminate the verification without a quantified conclusion. In practice, the states generated by quantum devices are unavoidably non-ideal, with a limited fidelity to the target state; thus, there is always a certain probability of rejection in each measurement. Even if the probability of a single rejection is small, it is natural to observe rejection events in an experiment involving a long sequence of measurements. As a result, the original proposals are likely to mistakenly characterize qualified quantum devices as unqualified, which makes them inadequate for experimental implementation.
By considering the proportion of accepted outcomes, a modified strategy is thus developed here, which is robust to a certain proportion of rejection events. Quantitatively, we have the corollary that if $\langle\Psi|\sigma_i|\Psi\rangle \le 1-\epsilon$ for all the measured states, the probability for each outcome to be accepted is smaller than $1-(1-\lambda_2^{\downarrow}(\Omega))\epsilon$. As a result, in the case that the verifier observes an accepted proportion $p \ge 1-(1-\lambda_2^{\downarrow}(\Omega))\epsilon$, it should be concluded that the actual state satisfies Eq. (1) with a confidence level of $1-\delta$, where $\epsilon$ and $\delta$ are calculated from the inequality [12]
$$\delta \le e^{-D\left(\frac{m}{n}\,\middle\|\,1-(1-\lambda_2^{\downarrow}(\Omega))\epsilon\right)n},$$
where $D(x\|y)$ denotes the Kullback-Leibler divergence and $m$ results are accepted when $n$ measurements are performed. As a result of this modification, in the case that the final accepted proportion $p \ge 1-(1-\lambda_2^{\downarrow}(\Omega))\epsilon$, the verification can eventually reach a conclusion quantifying the distance between the actual and target states.
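The confidence bound above is easy to evaluate. In this sketch the accepted/total counts are hypothetical, and $D(\cdot\|\cdot)$ is the standard binary Kullback-Leibler divergence.

```python
import math

def kl(p, q):
    """Binary Kullback-Leibler divergence D(p || q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def confidence_delta(m, n, epsilon, lam2):
    """Upper bound on delta when m of n outcomes are accepted:
    delta <= exp(-D(m/n || 1 - (1 - lam2)*epsilon) * n).
    Only meaningful when the observed pass rate m/n exceeds the threshold."""
    threshold = 1.0 - (1.0 - lam2) * epsilon
    p_hat = m / n
    assert p_hat > threshold, "no conclusion below the acceptance threshold"
    return math.exp(-kl(p_hat, threshold) * n)

# Hypothetical run: 4980 of 5000 rounds accepted, LO strategy with lam2 = 0.6.
delta = confidence_delta(4980, 5000, epsilon=0.02, lam2=0.6)
print(delta)  # a small delta, i.e. high confidence that Eq. (1) holds
```

The bound shows how a pass rate only slightly above the threshold still yields an exponentially small $\delta$ as $n$ grows.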
Benefiting from this modification, QSV can be applied to realistic non-ideal states, which allows us to experimentally verify two-qubit entangled states using the above adaptive proposals.
With the setup shown in Fig. 2, we perform the adaptive QSV, and the results are presented in Fig. 3. In Fig. 3(a), the results of 50 trials are averaged; they approximately coincide with the theoretical lines for the first few measurements and deviate from the predicted linearity afterward. This deviation mainly results from the difference between the verified states and the ideal target state, which leads to rejection outcomes in QSV. In other words, only if the verified states are perfectly identical to the target state can a persistent $1/n$ scaling be observed in a practical QSV.
Since the occurrence rates of rejections are in principle equal for different strategies, a distinct gap in the estimated fidelity can be seen between the adaptive and non-adaptive strategies, as predicted in the theory part. These results indicate the power of classical communication in boosting the performance of QSV. However, the practical scaling is determined not only by the optimality of the strategy but also by the quality of the actual state. In this sense, we can only access the intrinsic performance of a strategy by testing an ideal state. Although it is impossible to generate an ideal state in experiment, we can circumvent this difficulty by studying the first few measurements, for which the occurrence of rejections is fairly rare. In Fig. 3(b), the first 25 measurements of single trials with all outcomes accepted are plotted, accompanied by the averaged results in Fig. 3.

DISCUSSION
One main motivation to explore quantum resources, such as entangled states and measurements, is their potential power to surpass classical approaches. On the other hand, the fact that quantum resources are generally complicated to produce and control inspires another interesting question: how can classical resources be exploited exhaustively to approach the bound set by quantum resources? In the task of verifying an entangled state, the utilization of entangled measurements constitutes a globally optimal strategy that achieves the best possible efficiency. Surprisingly, one can also construct strategies merely with local measurements and achieve the same scaling. In this experiment, we show that by introducing classical communication into QSV, the performance with local measurements can be further enhanced to approach the globally optimal bound. As a result, to verify the states to a certain level of fidelity, the number of required measurements is only 60% of that for the non-adaptive local strategy. Meanwhile, the gap between the locally and globally optimal bounds is distinctly reduced, with the constant factor in front of the $1/n$ scaling minimized to 1.5. Furthermore, QSV has recently been generalized to the adversarial scenario, where arbitrarily correlated or entangled state preparation is allowed [30,31].

Generation of entangled photon pairs.
In the first part of the setup, tunable two-qubit entangled states are prepared by pumping a nonlinear crystal placed in a phase-stable Sagnac interferometer (SI). Concretely, a 405.4 nm single-mode laser is used to pump a 5-mm-long bulk type-II periodically poled potassium titanyl phosphate (PPKTP) crystal placed in the phase-stable SI to produce polarization-entangled photon pairs at 810.8 nm. A PBS followed by an HWP and a PCP is used to control the polarization mode of the pump beam. The lenses before and after the SI are used to focus the pump light and collimate the entangled photons, respectively. The interferometer is composed of two highly reflective, polarization-maintaining mirrors, a Di-HWP, and a Di-PBS.

Measurement setting for adaptive QSV
For the QSV of two-qubit pure entangled states, Alice's measurements $\Pi_i$ ($i = 1, 2, 3$) are selected to be the Pauli X, Y, and Z measurements. Conditioned on the outcome of X, Y, or Z, Bob performs the corresponding projective measurement onto $|\upsilon_\pm\rangle$, $|\omega_\pm\rangle$, or $|V\rangle$ ($|H\rangle$), respectively, where the vectors are defined as $|\upsilon_\pm\rangle = \sin\theta|H\rangle \mp \cos\theta|V\rangle$ and $|\omega_\pm\rangle = \sin\theta|H\rangle \pm i\cos\theta|V\rangle$. These adaptive measurement settings constitute the optimal Uni-LOCC strategy [23], in which $|+\rangle \equiv \frac{1}{\sqrt{2}}(|H\rangle + |V\rangle)$ and $|-\rangle \equiv \frac{1}{\sqrt{2}}(|H\rangle - |V\rangle)$ denote the eigenstates of the Pauli X operator, and $|R\rangle \equiv \frac{1}{\sqrt{2}}(|H\rangle + i|V\rangle)$ and $|L\rangle \equiv \frac{1}{\sqrt{2}}(|H\rangle - i|V\rangle)$ denote the eigenstates of the Pauli Y operator. In each of these three combined local measurement settings, the choice of Bob's measurement setting is determined by the outcome of Alice's measurement, which is achieved by controlling the local operation of Bob's EOM according to Alice's outcome.
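A quick numerical check illustrates the defining property of these settings: each conditional measurement accepts the target state with certainty. The sign pairing between Alice's X outcome and Bob's $|\upsilon_\pm\rangle$ projector below is our assumption for the sketch, not taken from the text.

```python
import numpy as np

theta = np.pi / 6  # any theta works
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# Target state |Psi(theta)> = cos(theta)|HV> - sin(theta)|VH>
psi = np.cos(theta) * np.kron(H, V) - np.sin(theta) * np.kron(V, H)

def proj(v):
    """Projector |v><v| onto vector v."""
    return np.outer(v, v.conj())

# Z (x) Z setting: if Alice finds H (V), Bob projects onto V (H).
M_z = proj(np.kron(H, V)) + proj(np.kron(V, H))

# X setting with an assumed pairing: Alice's |+> (|->) outcome sends Bob
# to |u+> (|u->), where |u+-> = sin(theta)|H> -+ cos(theta)|V>.
plus, minus = (H + V) / np.sqrt(2), (H - V) / np.sqrt(2)
u_p = np.sin(theta) * H - np.cos(theta) * V
u_m = np.sin(theta) * H + np.cos(theta) * V
M_x = proj(np.kron(plus, u_p)) + proj(np.kron(minus, u_m))

for M in (M_z, M_x):
    print(float(psi @ M @ psi))  # ~1: the target is accepted with certainty
```

An arbitrary state, by contrast, passes each setting with probability strictly below one, which is what makes the statistical inference of the main text possible.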

DATA AVAILABILITY
The authors declare that all data supporting the findings of this study are available within the article or from the corresponding author upon reasonable request.

Fig. 1: The adaptive strategy, i.e., if the outcome of $\Pi_i$ is 0 (1), Bob performs measurement $\Pi_{i0}$ ($\Pi_{i1}$) accordingly. Bob's outcomes 1 and 0 are coarse-grained as accepted ($\checkmark$) and rejected ($\times$) events, respectively. Similarly, Alice can perform measurements according to Bob's outcome. A bi-directional strategy can be applied by performing these two uni-directional strategies randomly. Through a statistical analysis of the sequence of accepted and rejected events, the verifier can ascertain the largest possible distance between the actual and target states up to some finite statistical confidence.