Optimal verification of general bipartite pure states

The efficient and reliable verification of quantum states plays a crucial role in various quantum information processing tasks. We consider the task of verifying entangled states using one-way and two-way classical communication and completely characterize the optimal strategies via convex optimization. We solve these optimization problems using both analytical and numerical methods, and the optimal strategies can be constructed for any bipartite pure state. Compared with the nonadaptive approach, our adaptive strategies significantly improve the efficiency of quantum state verification. Moreover, these strategies are experimentally feasible, as only a few local projective measurements are required.

Introduction.-A basic yet important step in most quantum information processing tasks is to efficiently and reliably characterize a quantum state. The standard approach is to perform quantum state tomography by fully reconstructing the density matrix [1]. However, tomography is known to be both time consuming and computationally hard due to the exponentially increasing number of parameters to be reconstructed [2,3]; moreover, the underlying approximations may be conceptually problematic [4]. In fact, full tomographic information is often not required, and much effort has been devoted to characterizing quantum states with non-tomographic methods [5-8]. Recently, an alternative statistical approach, namely quantum state verification, has triggered much research interest due to its powerful efficacy [9-13].
Quantum state verification is a procedure for gaining confidence that the output of some quantum device is a particular state by employing local measurements [9]. Consider a device that is supposed to produce the target state |ψ⟩, but may in practice produce σ_1, σ_2, . . . , σ_N in N runs. In the ideal scenario, the verifier has the promise that either σ_k = |ψ⟩⟨ψ| for all k, or that all σ_k have a finite distance to |ψ⟩, i.e., ⟨ψ|σ_k|ψ⟩ ≤ 1 − ε for all k. Given access to some set of allowed measurements, the verifier must certify that the source prepares |ψ⟩. One cannot exclude that the verifier certifies the source to be correct although it is not, but this failure probability δ should be as small as possible.
In general, for each state σ_k the verifier may apply a different measurement with some predefined probability. A state verification strategy can thus be expressed as Ω = Σ_{i=1}^n p_i Ω_i, where (p_1, p_2, . . . , p_n) is a probability distribution and {Ω_i, 1 − Ω_i} are allowed measurements with outcomes labeled "pass" and "fail", respectively. For each output state σ_k, the verifier randomly chooses a measurement {Ω_i, 1 − Ω_i} with probability p_i and performs the test. In a pass instance, the verifier continues with state σ_{k+1}; otherwise the verification ends and the verifier concludes that the state was not |ψ⟩. To guarantee that the perfect state |ψ⟩ is never rejected, we assume that each Ω_i satisfies ⟨ψ|Ω_i|ψ⟩ = 1; it has been observed in Ref. [9] that such strategies outperform the others. The worst-case failure probability of each run is given by max_{⟨ψ|σ|ψ⟩ ≤ 1−ε} Tr(Ωσ) = 1 − εv(Ω), where v(Ω) represents the spectral gap between the largest and the second largest eigenvalues of Ω [9].
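As an independent numerical check (not part of the derivation), the spectral gap v(Ω) can be computed directly; the strategy operator below is a hypothetical example, chosen only so that |ψ⟩ passes with certainty.

```python
import numpy as np

# Toy illustration of v(Omega): for a strategy operator with
# <psi|Omega|psi> = 1, the largest eigenvalue is 1 and v(Omega) is the
# gap to the second-largest eigenvalue. The operator below is a
# hypothetical example, not one of the optimal strategies of the text.

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # two-qubit |phi+>

P = np.outer(psi, psi)            # projector onto |psi>
P_perp = np.eye(4) - P            # projector onto the orthogonal subspace

# Hypothetical strategy: pass with certainty on |psi>, with residual
# pass probability 0.4 on the orthogonal subspace.
Omega = P + 0.4 * P_perp

eigvals = np.sort(np.linalg.eigvalsh(Omega))[::-1]
v = eigvals[0] - eigvals[1]       # spectral gap v(Omega)

# Equivalent expression (valid here since Omega|psi> = |psi>):
# v(Omega) = 1 - ||P_perp Omega P_perp||, with ||.|| the largest eigenvalue
v_alt = 1 - np.max(np.linalg.eigvalsh(P_perp @ Omega @ P_perp))

print(v, v_alt)   # both 0.6
```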
In the case that all N states pass the test, we achieve the confidence 1 − δ with

δ = [1 − εv(Ω)]^N, (1)

which implies that N ≈ ε^{-1} v(Ω)^{-1} log δ^{-1} copies suffice. In reality, however, quantum devices are never perfect, so the verifier cannot be promised that either σ_k = |ψ⟩⟨ψ| or ⟨ψ|σ_k|ψ⟩ ≤ 1 − ε for all k. Instead, a more practical task is to certify with high confidence that the fidelity of the output state is larger than a threshold value 1 − ε. In this case, the verifier measures the frequency f of the pass instances. If f > 1 − εv(Ω), the confidence 1 − δ can be derived from the Chernoff bound [14,15],

δ ≤ e^{−N D(f ‖ 1−εv(Ω))}, (2)

where D(x‖y) denotes the Kullback-Leibler divergence. The advantage of the state verification approach is that the failure probability δ decreases exponentially with N; hence the target state |ψ⟩ can potentially be verified using only a few copies of the state. As seen from Eqs. (1) and (2), the performance of a verification strategy depends solely on v(Ω). Therefore, to achieve an optimal strategy, we need to maximize v(Ω) over all accessible measurements. Although much effort has been devoted to this research line, few optimal strategies have been found. To the best of our knowledge, the only optimal strategy reported so far is the verification of two-qubit pure states with local projective measurements [9].
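The exponential decay in Eq. (1) translates directly into a sample-cost estimate; a minimal sketch (all numbers are purely illustrative):

```python
import numpy as np

# Sample-cost sketch: if every copy passes, the worst-case failure
# probability after N runs is delta = (1 - eps*v)**N, so roughly
# N ~ ln(1/delta) / (eps * v) copies suffice. Larger spectral gap v
# means fewer copies.

def copies_needed(eps, delta, v):
    """Smallest N with (1 - eps*v)**N <= delta."""
    return int(np.ceil(np.log(delta) / np.log(1.0 - eps * v)))

eps, delta = 0.01, 1e-3
for v in (0.4, 2 / 3, 1.0):
    print(v, copies_needed(eps, delta, v))
```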
In this work, we introduce adaptive measurements, i.e., measurements assisted by local operations and classical communication (LOCC) [16,17], to the task of quantum state verification. We show that the efficiency of the verification can be significantly improved by considering adaptive measurements. For any d_1 × d_2 bipartite pure state, we explicitly construct the optimal one-way as well as near-optimal two-way adaptive verification strategies. Best of all, these strategies are experimentally friendly, as only a few local projective measurements are needed for their implementation in the laboratory.

arXiv:1901.09856v3 [quant-ph] 6 Dec 2019
Optimal state verification as convex optimization.-In the following, we derive two convex optimization problems that completely characterize the optimal adaptive state verification strategies assisted by one-way and one-round two-way classical communication, respectively. In general, to get an optimal verification strategy, we need to consider the optimization problem

maximize v(Ω)
subject to Ω = Σ_i p_i Ω_i, Ω_i ∈ M, ⟨ψ|Ω_i|ψ⟩ = 1, (3)

where |ψ⟩ is the target state we want to verify, and M denotes the set of all allowed measurements. Recall that v(Ω) represents the spectral gap between the largest and the second largest eigenvalues of Ω.
As Ω_i ≤ 1, the last constraint leads to Ω_i|ψ⟩ = |ψ⟩ and

v(Ω) = 1 − ‖P⊥ Ω P⊥‖, (4)

where P⊥ = 1 − |ψ⟩⟨ψ| and ‖·‖ denotes the largest eigenvalue. Generally speaking, the optimization in Eq. (3) is difficult to solve, if not impossible, because the set of all possible measurements cannot be easily characterized. Here, we give a complete characterization of Ω for both one-way and one-round two-way adaptive measurements, and then reduce the corresponding problems to convex optimization. These optimization problems can be further simplified and solved. For succinctness, hereafter we restrict the two-way adaptive measurements to one-round communication only. In addition, the accessible measurements allowed in our verification strategies are not restricted to projective measurements (PMs), i.e., positive operator-valued measures (POVMs) are possible, although in the end we show that the optimal strategies can be achieved with PMs in most cases.
Without loss of generality, a bipartite pure state can be written in the Schmidt form |ψ⟩ = Σ_i λ_i |ii⟩ with λ_i > 0 [18]. We start with the analysis of one-way communication. In this case, Alice first performs a measurement and sends the measurement outcome to Bob. Bob then chooses his measurement in accordance with Alice's measurement outcome. Hence, the one-way adaptive strategy Ω→ takes the form

Ω→ = Σ_i p_i Ω→_i = Σ_i p_i Σ_a M_{a|i} ⊗ N_{a|i}, (5)

where {M_{a|i}}_a are measurements on Alice's system, and each {N_{a|i}, 1 − N_{a|i}} is a "pass"/"fail" measurement on Bob's system depending on Alice's measurement outcome. Here, we can assume that the M_{a|i} are rank-one; otherwise some further decomposition can make this assumption satisfied. If the joint system is in state |ψ⟩, Bob's subsystem collapses to some pure state P_{a|i} = Tr_A(M_{a|i} ⊗ 1 |ψ⟩⟨ψ|)/Tr(M_{a|i} ⊗ 1 |ψ⟩⟨ψ|) after Alice's measurement {M_{a|i}}_a. Then the best strategy for Bob is to perform the measurement {P_{a|i}, 1 − P_{a|i}} to verify whether his subsystem is in state P_{a|i}. Mathematically, to ensure that ⟨ψ|Ω→_i|ψ⟩ = 1, N_{a|i} must satisfy N_{a|i} ≥ P_{a|i}. If all N_{a|i} satisfy N_{a|i} = P_{a|i}, we call the one-way adaptive strategy Ω→ semi-optimal. Hence, to maximize v(Ω→), i.e., to minimize ‖P⊥ Ω→ P⊥‖, we can restrict Ω→ to semi-optimal strategies.
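The collapse rule for Bob's conditional state can be checked numerically; in this sketch the Schmidt angle and Alice's outcome operator are illustrative choices, not part of any optimal strategy.

```python
import numpy as np

# Conditional-state check: after Alice obtains outcome M_a on
# |psi> = cos(t)|00> + sin(t)|11>, Bob's system collapses to
# P_a = Tr_A[(M_a x 1)|psi><psi|] / Tr[(M_a x 1)|psi><psi|].
# Here M_a is an example rank-one outcome, the projector onto |+>.

t = np.pi / 6
psi = (np.cos(t) * np.kron([1, 0], [1, 0])
       + np.sin(t) * np.kron([0, 1], [0, 1]))
rho = np.outer(psi, psi)

plus = np.array([1, 1]) / np.sqrt(2)
M_a = np.outer(plus, plus)                    # Alice's (example) outcome

K = np.kron(M_a, np.eye(2)) @ rho             # (M_a x 1)|psi><psi|
# partial trace over Alice's qubit: reshape to (a, b, a', b'), trace a = a'
sigma_B = K.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
sigma_B = sigma_B / sigma_B.trace()

# Bob's post-measurement state is the pure state cos(t)|0> + sin(t)|1>
target = np.array([np.cos(t), np.sin(t)])
print(np.allclose(sigma_B, np.outer(target, target)))   # True
```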
From the definition, we get the following necessary conditions for Ω→ being semi-optimal:

Ω→ ∈ S, Tr_B(Ω→) = 1, ⟨ψ|Ω→|ψ⟩ = 1, (6)

where S is the set of separable operators, i.e., unnormalized separable states [17]. Next, we show that these constraints are also sufficient. That Ω→ is separable implies that there exists a decomposition Ω→ = Σ_a M_a ⊗ N_a such that the M_a are positive semidefinite and the N_a are rank-one projectors. Then Tr_B(Ω→) = 1 implies Σ_a M_a = 1, i.e., {M_a}_a is a valid measurement on Alice's system. This concludes our proof by taking into account the last constraint. Thus, the optimization in Eq. (3) can be written as

maximize 1 − ‖P⊥ Ω→ P⊥‖
subject to Ω→ ∈ S, Tr_B(Ω→) = 1, ⟨ψ|Ω→|ψ⟩ = 1, (7)

for one-way adaptive verification strategies. We move on to discuss the one-round two-way communication scenario. In this case, Alice and Bob use shared randomness to decide who performs the measurement first. After the measurement, he or she sends the measurement outcome to the other party, who then chooses a measurement according to the received outcome. Thanks to the permutation symmetry of |ψ⟩, the optimization in this setting can be easily simplified. Let S be the SWAP operator, i.e., S|i⟩|j⟩ = |j⟩|i⟩ for all i, j = 1, 2, . . . , d; then S|ψ⟩ = |ψ⟩. This indicates that, for two-way adaptive measurements, if Ω satisfies the constraints in Eq. (3), so does (Ω + SΩS†)/2. Hence, we can focus on the two-way adaptive strategies Ω↔ that are invariant under the SWAP operation, i.e.,

Ω↔ = (Ω→ + Ω←)/2, (8)

where Ω→ is a one-way adaptive strategy and Ω← = SΩ→S†. Similarly, to optimize v(Ω↔), we can also restrict Ω→ to be semi-optimal. Thus, the optimization in Eq. (3) can be written as

maximize 1 − ‖P⊥ Ω↔ P⊥‖
subject to Ω↔ = (Ω→ + Ω←)/2, Ω→ ∈ S, Tr_B(Ω→) = 1, ⟨ψ|Ω→|ψ⟩ = 1, (9)

for two-way adaptive verification strategies.
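The symmetrization argument can be verified numerically: the SWAP operator fixes |ψ⟩, and averaging Ω with SΩS† never increases ‖P⊥ Ω P⊥‖, since the largest eigenvalue is convex and unitarily invariant. The strategy operator below is a hypothetical example satisfying ⟨ψ|Ω|ψ⟩ = 1.

```python
import numpy as np

# SWAP-symmetrization check for |psi> = cos(t)|00> + sin(t)|11>:
# replacing Omega by (Omega + S Omega S)/2 can only shrink the
# residual norm ||P_perp Omega P_perp||. The example Omega is
# hypothetical, with asymmetric weights on |01> and |10>.

t = np.pi / 6
psi = (np.cos(t) * np.kron([1, 0], [1, 0])
       + np.sin(t) * np.kron([0, 1], [0, 1]))

S = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        S[2 * j + i, 2 * i + j] = 1           # S|i>|j> = |j>|i>
assert np.allclose(S @ psi, psi)              # SWAP fixes |psi>

P = np.outer(psi, psi)
P_perp = np.eye(4) - P

e01 = np.kron([1, 0], [0, 1])                 # |01>
e10 = np.kron([0, 1], [1, 0])                 # |10>
Omega = P + 0.8 * np.outer(e01, e01) + 0.2 * np.outer(e10, e10)
Omega_sym = (Omega + S @ Omega @ S) / 2       # S is real symmetric

def residual(O):
    return np.max(np.linalg.eigvalsh(P_perp @ O @ P_perp))

print(residual(Omega), residual(Omega_sym))   # 0.8 -> 0.5
```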
Optimal verification of two-qubit states.-Without loss of generality, we write the two-qubit entangled pure state as |ψ⟩ = cos θ|00⟩ + sin θ|11⟩ with 0 < θ ≤ π/4. The subspace P⊥ is then spanned by |01⟩, |10⟩, and |ψ⊥⟩ = sin θ|00⟩ − cos θ|11⟩. First, we need a group G to simplify the optimizations. The group G is defined to be generated by the unitary operator g = Φ ⊗ Φ†, where Φ is the phase gate, i.e., Φ|0⟩ = |0⟩ and Φ|1⟩ = i|1⟩. Then we can show that, for any Ω satisfying the constraints, the twirled operator takes the diagonal form

Ω̃ = (1/|G|) Σ_{g∈G} g Ω g† = |ψ⟩⟨ψ| + w_1|01⟩⟨01| + w_2|10⟩⟨10| + w_3|ψ⊥⟩⟨ψ⊥|; (10)

see Appendix A for the proof. As g|ψ⟩ = |ψ⟩, Ω̃ also satisfies the constraints in Eqs. (7) and (9) if Ω does. Furthermore, Eq. (4) implies

v(Ω̃) ≥ v(Ω). (11)

Thus, we can restrict to the diagonal Ω→ as in Eq. (10) for the optimizations in Eqs. (7) and (9). Then, we consider the case of one-way adaptive verification. For two-qubit quantum states, the positive partial transpose (PPT) criterion is necessary and sufficient to characterize their separability [19,20]. Thus, by combining Eq. (10) with the PPT criterion, the optimization in Eq. (7) can be written as

maximize 1 − max{w_2, w_3}
subject to w_1 = sin²θ (1 − w_3), w_2 = cos²θ (1 − w_3), 0 ≤ w_3 ≤ 1, (12)

where the constraints arise only from Ω→ ≥ 0 and Tr_B(Ω→) = 1, since the PPT criterion gives the redundant condition w_1 w_2 ≥ sin²θ cos²θ (1 − w_3)². As 0 < θ ≤ π/4, we have w_2 ≥ w_1. Thus, the solution of Eq. (12) is attained when w_2 = w_3, which gives

v(Ω→) = 1/(1 + cos²θ). (13)

In general, the measurements associated with the optimal solution are POVMs. However, one can directly calculate that the bound in Eq. (13) can already be achieved with the PMs of Eqs. (14) and (15).

FIG. 1. Optimal values of v(Ω) with different verification strategies (nonadaptive [9], one-way adaptive, and two-way adaptive) for the two-qubit entangled pure state |ψ⟩ = cos θ|00⟩ + sin θ|11⟩ with 0 < θ < π/4. Note that when θ = π/4, i.e., when |ψ⟩ is the maximally entangled state, all three strategies give the same optimal value v(Ω) = 2/3.

Here the PM construction of Eqs. (14) and (15) uses the states |ϕ_0⟩ = (1/√2)(|0⟩ + |1⟩) ⊗ (cos θ|0⟩ + sin θ|1⟩) and |ϕ_k⟩ = g^k|ϕ_0⟩. Next, we discuss the case of two-way adaptive verification. By combining Eq. (10) and the PPT criterion, we can get a simplification of the optimization in Eq. (9) by simply replacing the objective function in Eq. (12) with

1 − max{(w_1 + w_2)/2, w_3}, (16)

whose solution is given by

v(Ω↔) = 2/3. (17)

Again, we explicitly write down the PMs in Eq. (18), where P⁺_ZZ, X→_ψ, and Y→_ψ are defined as in Eq. (15), and X←_ψ = S X→_ψ S† and Y←_ψ = S Y→_ψ S†. Finally, we compare the adaptive strategies with the nonadaptive approach in Ref. [9]. For two-qubit entangled states, we plot the optimal values of v(Ω) for the different strategies in Fig. 1. As can be seen, the two-way strategy works much better than the one-way strategy, and both adaptive strategies significantly outperform the nonadaptive one. Concerning the resources used in each strategy, we have the following remarks. Although no classical communication is involved in the measurement process of the nonadaptive strategy, it is still a necessary resource for the data processing after the measurement. In contrast, the one-way adaptive strategy relies on classical communication for the measurements, but no classical communication is needed for the data processing, as one party alone can determine whether the result is a pass or a fail instance. The case of the two-way adaptive strategy is similar, but to obtain the final frequency of the pass instances the two parties need to cooperate.
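A short numerical comparison of the two-qubit spectral gaps, using the values v(Ω→) = 1/(1 + cos²θ) and the θ-independent v(Ω↔) = 2/3 that follow from the optimizations above:

```python
import numpy as np

# Two-qubit comparison: the two-way gap (constant 2/3) is never worse
# than the one-way gap 1/(1 + cos^2(theta)), and the two coincide at
# theta = pi/4 (the maximally entangled state).

def v_one_way(theta):
    return 1.0 / (1.0 + np.cos(theta) ** 2)

def v_two_way(theta):
    return 2.0 / 3.0

for theta in np.linspace(0.05, np.pi / 4, 6):
    assert v_one_way(theta) <= v_two_way(theta) + 1e-12

print(v_one_way(np.pi / 4))   # 2/3 at maximal entanglement
```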
Optimal verification of general bipartite states.-We move on to discuss the optimal adaptive verification of general bipartite states. Firstly, we need a larger group G for the general bipartite (two-qudit) pure state |ψ⟩ = Σ_{i=1}^d λ_i|ii⟩. The group G is now generated by the unitary operators g_k = Φ_k ⊗ Φ_k†, k = 1, 2, . . . , d, and the twirled operator takes the form

Ω̃ = (1/|G|) Σ_{g∈G} g Ω g† = Σ_{i≠j} w_{ij}|ij⟩⟨ij| + Σ_{i,j} ρ_{ij}|ii⟩⟨jj|, (19)

where |G| is the order of G; see Appendix A for the proof. Similar to the two-qubit case, if Ω satisfies the constraints in Eqs. (7) and (9), so does Ω̃, since g|ψ⟩ = |ψ⟩ for all g ∈ G.
Secondly, we consider the case of one-way adaptive verification. The main difference between two-qudit and two-qubit states is that the PPT criterion is only necessary but not sufficient to characterize separability for d ≥ 3 [20]. Hence, by replacing Ω→ ∈ S with (Ω→)^{T_B} ≥ 0, Eqs. (7) and (19) only give us a relaxation of the original optimization,

maximize_{w_{ij}, ρ_{ij}} 1 − ‖P⊥ Ω→ P⊥‖
subject to 0 ≤ Ω→ ≤ 1, (Ω→)^{T_B} ≥ 0, Tr_B(Ω→) = 1, ⟨ψ|Ω→|ψ⟩ = 1, (21)

where the constraints arise from positivity, the PPT criterion, the trace condition, and the target-state condition, respectively. Therefore, the solution of this relaxed problem sets an upper bound on the optimal v(Ω→). To show that the solution is a valid strategy, we still need to prove that the optimal Ω→ obtained from Eq. (21) is separable. Here, instead of resorting to numerical methods, we can analytically solve the optimization in Eq. (21), which gives

v(Ω→) = 1/(1 + λ_1²) (22)

for all d ≥ 2, with λ_1 the largest Schmidt coefficient. Moreover, the bound in Eq. (22) can be achieved with the PMs of Eqs. (23) and (24), where γ_d = e^{2πi/d} and w = λ_1²/(1 + λ_1²); see Appendix B for more details. In passing, we note two special cases of Eq. (24). When |ψ⟩ is separable, i.e., d = 1, Eq. (24) gives the optimal nonadaptive strategy with v(Ω) = 1. When |ψ⟩ is maximally entangled, {|φ_k⟩}_{k=1}^d forms an orthogonal basis. Hence, Eq. (24) gives the optimal nonadaptive strategy [9,21].
In practice, the above strategy can be easily implemented. Alice first randomly chooses one of the two measurements {|k⟩}_{k=1}^d and {|f_k⟩}_{k=1}^d with probabilities w and 1 − w, respectively. The former measurement can be performed directly, while the latter requires some random phase shifts from G in advance. Then Alice sends all the information to Bob via classical communication, upon receiving which Bob can proceed to perform the corresponding test.
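The randomized choice of Alice's setting can be sketched as follows; the Schmidt coefficients are illustrative and the measurement bases themselves are left abstract, since only the setting-selection logic is shown.

```python
import numpy as np

# Setting-selection sketch: in each run Alice picks the computational
# basis {|k>} with probability w = lambda_1^2/(1 + lambda_1^2) and the
# phase-randomized basis {|f_k>} otherwise, then communicates the
# choice (and outcome) to Bob. Schmidt coefficients are illustrative.

rng = np.random.default_rng(0)

lam = np.sqrt(np.array([0.5, 0.3, 0.2]))      # example Schmidt spectrum
w = lam[0] ** 2 / (1.0 + lam[0] ** 2)

def choose_setting():
    return "computational" if rng.random() < w else "fourier-type"

settings = [choose_setting() for _ in range(10_000)]
freq = settings.count("computational") / len(settings)
print(round(w, 3), round(freq, 3))            # empirical frequency tracks w
```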
Lastly, we consider the case of two-way adaptive verification. By the same token, the efficiency can be improved by averaging Ω→ and its swap Ω←. Specifically, we get

v(Ω↔) = 1/(1 + λ̄²), λ̄² = (λ_1² + λ_2²)/2, (25)

when Ω→ is of the form in Eq. (23) with w = λ̄²/(1 + λ̄²). However, unlike the two-qubit case, this strategy is only near-optimal for general bipartite states. To get the optimal strategy, we can numerically solve the corresponding optimization in Eq. (9) and then explicitly decompose the obtained strategy with the method in Ref. [22]. Our testing results show that the optimal strategy is at most 4% more efficient than the near-optimal strategy for all d ≤ 10, whereas its measurement settings can be more complicated; see Appendix C for more details.
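The two closed forms, v(Ω→) = 1/(1 + λ_1²) from Eq. (22) and the near-optimal v(Ω↔) = 1/(1 + (λ_1² + λ_2²)/2) from Eq. (25), can be compared on example Schmidt spectra:

```python
import numpy as np

# Both gaps stay at or above 1/2 for any Schmidt spectrum, and the
# two-way gap is never smaller than the one-way gap.

def v_one_way(lam):
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]
    return 1.0 / (1.0 + lam[0] ** 2)

def v_two_way(lam):
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]
    return 1.0 / (1.0 + (lam[0] ** 2 + lam[1] ** 2) / 2.0)

spectra = [np.sqrt([0.5, 0.5]),          # maximally entangled qubits
           np.sqrt([0.7, 0.2, 0.1]),     # example qutrit spectrum
           np.sqrt([0.4, 0.3, 0.2, 0.1])]
for lam in spectra:
    assert 0.5 <= v_one_way(lam) <= v_two_way(lam)
    print(round(v_one_way(lam), 4), round(v_two_way(lam), 4))
```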
Before concluding, two remarks are in order. First, Eqs. (22) and (25) imply that v(Ω) ≥ 1/2 for all of our adaptive strategies. This implies that N ≈ 2ε^{-1} log δ^{-1} copies are enough for verifying any bipartite state, independent of the dimension d. This is of the same scale as the best global strategies with entangled measurements, which need N ≈ ε^{-1} log δ^{-1} copies [9]. In contrast, the best nonadaptive strategies known so far need N ≈ dε^{-1} log δ^{-1} copies to verify a generic two-qudit state for d ≥ 3 [23], which is worse than our adaptive strategies by a factor of order O(d). Second, it is possible to further improve the efficiency of the adaptive strategies by involving many rounds of communication [24]. However, these strategies require coherence-preserving measurements and can only improve the efficiency by a constant factor c with c ≤ 2 for all dimensions.
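The scaling comparison in the first remark can be made concrete (illustrative parameters; the nonadaptive case uses the generic v ~ 1/d behavior quoted above):

```python
import numpy as np

# Copies needed for verification, N ~ ln(1/delta)/(eps*v):
#   global entangled measurements: v = 1
#   adaptive (this work, worst case): v = 1/2, independent of d
#   best known nonadaptive (generic qudit states): v ~ 1/d

def n_copies(eps, delta, v):
    return np.ceil(np.log(1 / delta) / (eps * v))

eps, delta, d = 0.01, 1e-3, 10
n_global = n_copies(eps, delta, 1.0)
n_adaptive = n_copies(eps, delta, 0.5)
n_nonadaptive = n_copies(eps, delta, 1.0 / d)

print(int(n_global), int(n_adaptive), int(n_nonadaptive))
```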
Conclusions.-Quantum state verification is an efficient and reliable method for gaining confidence about the quality of quantum devices, which is a crucial step in almost all quantum information processing tasks and foundational studies. In this work, we integrated adaptive measurements into the problem of state verification and formulated two convex optimization problems that completely characterize the optimal adaptive strategies for one-way and one-round two-way classical communication. We solved these optimization problems using both analytical and numerical methods, and constructed the optimal or near-optimal strategies explicitly for any bipartite pure state. As a demonstration, we compared the optimal adaptive strategies with the nonadaptive one and found that the verification efficiency can be significantly improved if classical communication is allowed. Finally, our adaptive verification strategies are readily applicable in experiments, as only a few local projective measurements are involved. For future research, it would be very interesting to consider the multipartite case, which is more relevant for applications. Moreover, it is meaningful to discuss how the present approach needs to be modified if the measurement devices are not perfectly characterized. Statistical tools developed for quantum state discrimination [25,26] may be helpful for this purpose.
We would like to thank Mariami Gachechiladze and Chau Nguyen for discussions. This work was supported by the DFG and the ERC (Consolidator Grant 683107/TempoQ). X.D.Y. acknowledges funding from a CSC-DAAD scholarship. J.S. acknowledges support by the Beijing Institute of Technology Research Fund Program for Young Scholars and the National Natural Science Foundation of China through Grant No. 11805010.
Note added.-During the preparation of the manuscript we became aware of related works by Wang and Hayashi [24], and Li et al. [27].
Appendix A: Proofs of Equations (10) and (19)

It is easy to see that Eq. (10) is a special case of Eq. (19) with d = 2. Hence, we just need to prove Eq. (19), which we restate below:

Ω̃ = (1/|G|) Σ_{g∈G} g Ω g† = Σ_{i≠j} w_{ij}|ij⟩⟨ij| + Σ_{i,j} ρ_{ij}|ii⟩⟨jj|, (26)

where |G| is the number of elements in the group G. Recall that G is defined to be generated by

g_k = Φ_k ⊗ Φ_k†, k = 1, 2, . . . , d, (27)

where Φ_k satisfies

Φ_k|j⟩ = i^{δ_{jk}}|j⟩. (28)

We also note that

w_{ij} = ⟨ij|Ω̃|ij⟩ = ⟨ij|Ω|ij⟩, ρ_{ij} = ⟨ii|Ω̃|jj⟩ = ⟨ii|Ω|jj⟩, (29)

as |ij⟩⟨ij| and |ii⟩⟨jj| are invariant under the group action of G. To prove Eq. (26), we just need to show that

⟨kl|Ω̃|ij⟩ = 0 (30)

unless k = l, i = j, or k = i, l = j.
Note that for all g ∈ G, we have

⟨kl|Ω̃|ij⟩ = ⟨kl| g Ω̃ g† |ij⟩, (31)

because Ω̃ is invariant under the group action, i.e., g Ω̃ g† = Ω̃.
To prove Eq. (30), we classify the quadruples (k, l, i, j) into two different cases. Case 1: Some index in (k, l, i, j) appears only once. Without loss of generality, we assume it is k. In this case, we choose g = g_k², so that

g|kl⟩ = g_k²|kl⟩ = −|kl⟩, g|ij⟩ = g_k²|ij⟩ = |ij⟩. (32)
Combining Eqs. (31) and (32), we obtain ⟨kl|Ω̃|ij⟩ = −⟨kl|Ω̃|ij⟩ = 0. Case 2: All indexes in (k, l, i, j) appear more than once. Then the only possibility not already covered by Eq. (30) is

(i, j) = (l, k) with k ≠ l. (33)

In this case, we choose g = g_k, so that

g|kl⟩ = g_k|kl⟩ = i|kl⟩, g|lk⟩ = g_k|lk⟩ = −i|lk⟩. (34)

Combining Eqs. (31) and (34), we obtain ⟨kl|Ω̃|lk⟩ = −⟨kl|Ω̃|lk⟩ = 0, which completes the proof.
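The twirling identity can also be checked by brute force, averaging over all phase assignments i^{n_k} (this over-counts group elements only by irrelevant global phases):

```python
import numpy as np

# Numerical check of Eq. (26): averaging g Omega g^dag over all
# g = Phi (x) Phi^dag with Phi = diag(i^{n_1}, ..., i^{n_d}) kills
# every matrix element <kl|Omega|ij> except w_ij = <ij|Omega|ij>
# and rho_ij = <ii|Omega|jj>.

d = 3
rng = np.random.default_rng(1)
A = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
Omega = A + A.conj().T                       # arbitrary Hermitian operator

twirl = np.zeros_like(Omega)
for n in np.ndindex(*(4,) * d):              # all phase assignments i^{n_k}
    phi = np.array([1j ** nk for nk in n])
    g = np.diag(np.kron(phi, phi.conj()))    # g = Phi (x) Phi^dag
    twirl += g @ Omega @ g.conj().T
twirl /= 4 ** d

# only <ij|.|ij> (any i, j) and <ii|.|jj> (any i, j) survive
for k in range(d):
    for l in range(d):
        for i in range(d):
            for j in range(d):
                keep = (k == i and l == j) or (k == l and i == j)
                if not keep:
                    assert abs(twirl[d * k + l, d * i + j]) < 1e-12
print("only w_ij and rho_ij survive")
```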
Appendix B: Optimal one-way strategy

In this appendix, we solve the optimization in Eq. (21). To illustrate the main idea behind our method, we first consider a special case with a restricted form of ρ before treating the general two-way optimization, which reads

maximize_{w_{ij}, ρ_{ij}} min{ min_{i≠j} [1 − (w_{ij} + w_{ji})/2], 1 − ‖ρ − λλᵀ‖ }
subject to 0 ≤ ρ ≤ 1,
w_{ij} ≥ 0 for all i ≠ j,
w_{ij} w_{ji} ≥ |ρ_{ij}|² for all i ≠ j,
Σ_{j≠i} w_{ij} + ρ_{ii} = 1 for all i. (50)

This optimization, however, cannot be solved analytically. Instead, we resort to a numerical approach and then confirm the separability of the resulting strategies with the method in Ref. [22]. See Fig. 2 for a comparison of the verification efficiency between the optimal strategy from Eq. (50) and the near-optimal strategy in Eq. (25) for two-qutrit states. As can be seen, the optimal efficiency is only slightly better than the near-optimal efficiency, whereas the measurement settings of the optimal strategy can be more complicated. Similar conclusions hold in higher-dimensional cases: for instance, we have tested one million randomly drawn states for d ≤ 10, and the results show that the optimal strategy is at most 4% more efficient than the near-optimal strategy.