Optimal Universal Uncertainty Relations

We study universal uncertainty relations and present a method, the joint probability distribution diagram, to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A 46, 272002 (2013)]. The results give rise to state-independent uncertainty relations satisfied by any nonnegative Schur-concave function. On the other hand, a remarkable recent result on entropic uncertainty relations is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they complement that of [Phys. Rev. A 89, 052115 (2014)].

As a consequence of the uncertainty relations, it is impossible to determine the exact values of two incompatible observables simultaneously. However, the lower bound in the above uncertainty inequality may become trivial if the measured state |ψ⟩ belongs to the null space of the commutator [A, B].
In fact, the uncertainty relations provide a limitation on how much information one can obtain by measuring a physical system, and can be characterized in terms of the probability distributions of the measurement outcomes. To overcome this drawback of the product-form, variance-based uncertainty relations, Deutsch 8 introduced the entropic uncertainty relations, which were later improved by Maassen and Uffink. Friedland, Gheorghiu and Gour 14 proposed a new concept called "universal uncertainty relations", which are not limited to the well-known entropic functions such as the Shannon, Rényi and Tsallis entropies, but cover any nonnegative Schur-concave function. On the other hand, Puchała, Rudnicki and Życzkowski 15 independently derived majorization relations of the same universal type.

Let {|a_m⟩} and {|b_n⟩} be the eigenbases of two observables A and B on a d-dimensional Hilbert space. Denote by p_m(ρ) = ⟨a_m|ρ|a_m⟩ and q_n(ρ) = ⟨b_n|ρ|b_n⟩ the probability distributions obtained by measuring the state ρ with respect to these bases, which constitute two probability vectors p(ρ) = (p_1, p_2, …, p_d) and q(ρ) = (q_1, q_2, …, q_d), respectively. It has been shown that the tensor product of the two probability vectors p(ρ) and q(ρ) is majorized by a vector ω independent of the state ρ,

p(ρ) ⊗ q(ρ) ≺ ω, (2)

Scientific Reports | 6:35735 | DOI: 10.1038/srep35735

where "≺" stands for majorization: x ≺ y if and only if the partial sums of x^↓ are dominated by those of y^↓. The down-arrow vector x^↓ denotes the vector whose components are those of x rearranged in descending order, x_1^↓ ≥ x_2^↓ ≥ ··· . The d²-dimensional vector ω is given by

ω = (Ω_1, Ω_2 − Ω_1, …, Ω_{d²} − Ω_{d²−1}), (3)

with

Ω_k = max_{I_k} max_ρ Σ_{(m,n)∈I_k} p_m(ρ) q_n(ρ), (4)

I_k being a subset of k distinct pairs of indices (m, n) and [d] the set of the natural numbers from 1 to d. The outer maximum is over all subsets I_k with cardinality k and the inner maximum runs over all density matrices. Equation (2) is called a universal uncertainty relation since, for any uncertainty measure Φ, i.e. any nonnegative Schur-concave function, one has

Φ(p(ρ) ⊗ q(ρ)) ≥ Φ(ω). (5)

The universal uncertainty relation (UUR) (2) thus generates infinitely many uncertainty relations, one for each Φ, in which the right-hand side provides a single lower bound.
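The mechanism behind Eq. (5) can be checked numerically: if x majorizes y, any Schur-concave function (such as the Shannon entropy) is at least as large on y as on x. The following sketch uses hand-picked illustrative distributions, not data from the paper:

```python
import numpy as np

def majorizes(x, y):
    """Return True if x majorizes y (x ≻ y): every partial sum of the
    descending rearrangement of x dominates that of y, with equal totals."""
    xs = np.sort(x)[::-1].cumsum()
    ys = np.sort(y)[::-1].cumsum()
    return bool(np.all(xs >= ys - 1e-12) and np.isclose(xs[-1], ys[-1]))

def shannon(p):
    """Shannon entropy in bits, a nonnegative Schur-concave function."""
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

# Toy example: x majorizes y, hence H(x) <= H(y) by Schur-concavity.
x = np.array([0.7, 0.2, 0.1])
y = np.array([0.4, 0.35, 0.25])
assert majorizes(x, y)
assert shannon(x) <= shannon(y)
```

The same logic, applied to p(ρ) ⊗ q(ρ) ≺ ω, turns the single vector ω into a lower bound for every Schur-concave uncertainty measure at once.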
In relation (2), the state-independent vector ω determined by Ω_k in Eq. (4) is in general too hard to evaluate explicitly, as it involves a highly nontrivial optimization problem. For this reason, only an approximation of Ω_k has been presented 14,15 to construct a weaker majorization vector ω. Naturally, finding a stronger approximation than in previous works becomes an interesting open question.

Results
We first introduce a scheme called the "joint probability distribution diagram" (JPDD) to handle the optimization problem involved in calculating Ω_k. Next, we present a stronger approximation by proposing an analytical formula for Ω_k; to facilitate the presentation, we still denote our stronger approximation by Ω_k when no ambiguity arises. All uncertainty relations considered in this paper are in the absence of quantum side information.
To construct the joint probability distribution diagram, we associate each summand p_i(ρ)q_j(ρ) in Ω_k with a box located at position (i, j). The summation in Ω_k then corresponds to a certain region of boxes (or rather lattice points) in the first quadrant. We configure the region in a combinatorial way. Suppose that

p_1 ≥ p_2 ≥ ··· ≥ p_d and q_1 ≥ q_2 ≥ ··· ≥ q_d,

so that in the matrix (p_i q_j) the entries descend along the rows and columns. Now, we use a box □ to represent an entry of the matrix; a shaded (grey) box in the JPDD marks the corresponding entry of the matrix. For example, the shaded box at the top left corner specifies the entry p_1 q_1, see Fig. 1. Thus the region corresponding to the summation in Ω_k is a special region of this rectangular matrix, and the JPDD provides a combinatorial method to compute that region. First, it is easy to see that the top left box in the JPDD is the maximal element, i.e. Ω_1 = max_ρ p_1(ρ)q_1(ρ), since p_1 q_1 ≥ p_i q_j for all i, j. The main idea is that each exact solution of Ω_k corresponds to a particular region in this matrix.
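The JPDD matrix and the claim about its top-left box can be sketched for one fixed state (the true Ω_k additionally maximizes over all states; the vectors below are hypothetical, chosen only to be sorted):

```python
import numpy as np

# Hypothetical sorted probability vectors for one fixed state.
p = np.array([0.5, 0.3, 0.15, 0.05])
q = np.array([0.4, 0.3, 0.2, 0.1])

# JPDD matrix: box (i, j) holds p_i * q_j.  Because p and q are sorted in
# descending order, entries decrease along every row and every column.
M = np.outer(p, q)

# The top-left box is the largest entry, i.e. Omega_1 for this state.
assert M[0, 0] == M.max()
```

For sorted p and q, p_1 q_1 dominates every other product, which is exactly why the region for any Ω_k must contain the top-left box.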
Suppose that the k-th region has been found, i.e. Ω_k is obtained; the (k + 1)-th region is then obtained from the k-th region by adding a special box, which must be "connected" with a certain boundary of the k-th region. This iterative procedure enables us to compute all Ω_k. Before proving the statement rigorously, we first introduce some terminology.
[Definition 1] (Different boxes). Two boxes (matrix elements) p_i q_j and p_k q_l are said to be different if they occupy different positions in the JPDD, namely, i ≠ k or j ≠ l. Fig. 2 shows three examples of different boxes. Note that even if the numerical values of p_i q_j and p_k q_l coincide, graphically they are still treated as different boxes: "different" and "same" refer to positions, not to a quantitative relation. For example, p_1 q_1 may equal p_1 q_3 in general.
[Definition 2] (Connectedness). Two boxes p_i q_j and p_k q_l in the JPDD are connected if there does not exist any box p_m q_n, different from both p_i q_j and p_k q_l, such that min{p_i q_j, p_k q_l} < p_m q_n < max{p_i q_j, p_k q_l}. For example, in a generic 4-dimensional JPDD, p_2 q_2 and p_3 q_3 are not connected, while p_2 q_3 and p_2 q_2 are connected, see Fig. 3.

[Definition 3] (Connected region). A set of different boxes is called a region, denoted by A, and A_max is the maximal value of all the elements in A. Note that the region of boxes corresponding to an Ω_k must contain the top-left element p_1 q_1 of the JPDD as its largest element.
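Definition 2 is a purely value-based condition and can be checked mechanically. The sketch below (with the same hypothetical sorted vectors as before; indices are zero-based, so box p_2 q_2 is `(1, 1)`) reproduces the example from Fig. 3:

```python
import numpy as np

def connected(M, b1, b2):
    """Definition 2: boxes b1, b2 (index pairs) are connected iff no box at a
    different position has a value strictly between their two values."""
    lo, hi = sorted((M[b1], M[b2]))
    for idx, v in np.ndenumerate(M):
        if idx != b1 and idx != b2 and lo < v < hi:
            return False
    return True

p = np.array([0.5, 0.3, 0.15, 0.05])
q = np.array([0.4, 0.3, 0.2, 0.1])
M = np.outer(p, q)

# p_2 q_2 and p_3 q_3 are not connected (e.g. p_1 q_4 lies between them),
# while p_2 q_3 and p_2 q_2 are connected.
assert not connected(M, (1, 1), (2, 2))
assert connected(M, (1, 2), (1, 1))
```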
For any probability vector p(ρ) on a d-dimensional Hilbert space, write p′_1 ≥ p′_2 ≥ ··· ≥ p′_d for its components rearranged in descending order; the q′_i are defined similarly for the probability vector q(ρ) on the same Hilbert space. For any sequence k_1 ≥ ··· ≥ k_n, 1 ≤ n ≤ d, we define

Ω^{k_1,…,k_n} = max_ρ [ p′_1(q′_1 + ··· + q′_{k_1}) + p′_2(q′_1 + ··· + q′_{k_2}) + ··· + p′_n(q′_1 + ··· + q′_{k_n}) ],

the value of the staircase-shaped region whose i-th row contains the first k_i boxes. In particular, if p_1 ≥ ··· ≥ p_d and q_1 ≥ ··· ≥ q_d, then, for instance,

Ω^{2,1} = max_ρ [ p_1(q_1 + q_2) + p_2 q_1 ],

which can be configured as in Fig. 4. In a JPDD, when the first k boxes have been chosen, the next (maximal) (k + 1)-th box must appear at a top left corner of the unoccupied region. We state this as the following lemma.

[Lemma] The maximal k boxes for Ω_k in the JPDD can be selected to form a connected region. ■

The Lemma gives a way to get Ω_{k+1} from Ω_k in a JPDD. As an example, Ω_3 is obtained from Ω_2 by attaching one box to the boundary of the region for Ω_2, which gives an iterative formula for Ω_k in terms of the quantities Ω^{k_1,…,k_n}. We list in Figs 5 and 6 all the possible Ω_k for k = 1, 2, …, 4; the above example of getting Ω_3 from Ω_2 corresponds to moving from the second row to the third. Now we are ready to state the main result.
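For a fixed state, the staircase quantity reduces to a sum of row contributions, and the candidate partitions for a given k can be compared directly. A minimal sketch, assuming the reconstructed formula above and the same hypothetical sorted vectors (the true Ω_k additionally maximizes over states):

```python
import numpy as np

def omega_region(p, q, ks):
    """Value of the staircase region with row lengths ks = (k_1 >= ... >= k_n):
    sum_i p_i * (q_1 + ... + q_{k_i}).  Assumes p, q sorted descending."""
    Q = np.cumsum(q)
    return sum(p[i] * Q[k - 1] for i, k in enumerate(ks))

p = np.array([0.5, 0.3, 0.15, 0.05])
q = np.array([0.4, 0.3, 0.2, 0.1])

# Candidates for k = 3: the partitions (3,), (2,1) and (1,1,1).
vals = {ks: omega_region(p, q, ks) for ks in [(3,), (2, 1), (1, 1, 1)]}
best = max(vals.values())
```

For these numbers the partition (2, 1) wins (0.47 versus 0.45 and 0.38), illustrating why the shape of the region, not just its size k, matters.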
[Theorem] The quantities Ω_k are given by

Ω_k = max { Ω^{k_1,…,k_n} : k_1 ≥ ··· ≥ k_n, k_1 + ··· + k_n = k }. (8)

We have shown how to calculate Ω_1, Ω_2 and Ω_3; for k ≥ 4, interested readers can calculate Ω_k by a similar method, and we sketch the details in the Methods. The above theory enables us to formulate the series of Ω_k, from which we obtain a tighter majorization vector ω. Note that our method is valid when all the maxima are taken over the same quantum state; otherwise our bounds fail to hold. Even so, our results can outperform B_Maj2 11 to some extent. Our results enable us to strengthen the bounds on the sum of two Shannon entropies by B_JPDD = H(ω), where ω is given by the improved Ω_k in Eq. (8) and H is the Shannon entropy. To see this, let us first consider a 4-dimensional system with a pair of incompatible observables.
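The whole pipeline from Ω_k to an entropic bound H(ω) can be mimicked numerically. The sketch below is a sampling-based under-approximation, not the paper's exact construction: it estimates each Ω_k by maximizing over random pure states in two illustrative bases (computational and Fourier; this choice of bases is an assumption). Within the sampled set of states, the resulting ω is guaranteed to majorize every sampled p ⊗ q, so H(ω) lower-bounds H(p) + H(q) for those states:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def rand_state(d, rng):
    """Haar-like random pure state from a complex Gaussian vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Illustrative bases: computational and discrete Fourier (an assumption).
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)

# Estimate Omega_k by sampling: for each state, the best sum of k products
# p_i q_j is the sum of the k largest entries of the outer product.
Omega = np.zeros(d * d)
for _ in range(2000):
    psi = rand_state(d, rng)
    pvec = np.abs(psi) ** 2                  # probabilities in basis {|a_m>}
    qvec = np.abs(F.conj().T @ psi) ** 2     # probabilities in basis {|b_n>}
    prod = np.sort(np.outer(pvec, qvec).ravel())[::-1]
    Omega = np.maximum(Omega, np.cumsum(prod))

# omega has cumulative sums Omega_1, ..., Omega_{d^2}, cf. Eq. (3).
omega = np.diff(np.concatenate(([0.0], Omega)))

def shannon(x):
    x = x[x > 1e-15]
    return -np.sum(x * np.log2(x))

# Entropic bound for the last sampled state: H(p) + H(q) >= H(omega).
assert shannon(pvec) + shannon(qvec) >= shannon(omega) - 1e-9
```

Replacing the sampling by the exact partition maximization of Eq. (8) is what turns this sketch into the state-independent bound B_JPDD = H(ω).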

Conclusion
In conclusion, we have presented a method, the joint probability distribution diagram, to strengthen the bounds for the universal uncertainty relations; as an example, we considered the bounds on the sum of two Shannon entropies. As the universal uncertainty relations capture the essence of uncertainty in quantum theory, it is unnecessary to quantify them by particular measures of uncertainty such as the Shannon or Rényi entropies. Our results give a way to resolve some important cases in this direction and are shown to offer a better bound for any uncertainty relation given by a nonnegative Schur-concave function. Furthermore, how to extend this method to the case of multiple measurements is an interesting question which requires further study.

Methods
Proof of the Lemma. The case k = 1 is obvious since the maximal element is p_1 q_1. Assume that the statement holds for k − 1, i.e. the region for Ω_{k−1}, with row lengths k_1 > ··· > k_n > 0, is connected. Suppose on the contrary that the next maximum Ω_k = Ω_{k−1} + p_i q_j is not connected to this region. Then there are two possibilities: (i) i > n or j > k_1; in this case we can replace p_i q_j by p_{n+1} q_1 or p_1 q_{k_1+1}, obtaining a possibly bigger value for Ω_k; (ii) i ≤ n and j > k_i; in this case we can also replace the box p_i q_j by a box p_{i′} q_{j′} connected to the region of Ω_{k−1}, by sliding it leftward or upward. Hence the statement is true by induction. ■

Proof of the Theorem. To calculate Ω^{k_1,k_2,…,k_n} with k_1 ≥ ··· ≥ k_n and k_1 + ··· + k_n = k, we note that the maximization over states admits the operator-norm expression of Eq. (8), where R and S are subsets of distinct indices from [d], |R| is the cardinality of R, and ‖·‖_∞ is the infinity operator norm, which coincides with the maximum eigenvalue of a positive operator. For a given k, there exist sets k_1 ≥ ··· ≥ k_n such that Σ_{i=1}^n k_i = k for some n. For any such k_1 ≥ ··· ≥ k_n, the bracketed quantity in Eq. (8) can be calculated, and the outer max picks out the largest quantity over all such possible k_1, …, k_n, which yields the iterative formula for Ω_k stated above. ■

[Figure caption residue: horizontal coordinate indexes random runs; our bound outperforms the bound of ref. 16 100% of the time, while a bound given by Friedland et al. 14 …]