Certification of qubits in the prepare-and-measure scenario with large input alphabet and connections with the Grothendieck constant

We address the problem of testing the quantumness of two-dimensional systems in the prepare-and-measure (PM) scenario, using a large number of preparations and a large number of measurement settings with binary-outcome measurements. In this scenario, we introduce constants, which we relate to the Grothendieck constant of order 3. We associate them with the white noise resistance of the prepared qubits and with the critical detection efficiency of the measurements performed. Large-scale numerical tools are used to bound the constants. This allows us to obtain new bounds on the minimum detection efficiency that a setup with 70 preparations and 70 measurement settings can tolerate.


I. INTRODUCTION
Quantum theory reveals interesting and counter-intuitive phenomena in even the simplest physical systems. Paradigmatic examples are Bell nonlocality [1,2] and Einstein-Podolsky-Rosen (EPR) steering [3-6]. These nonlocal phenomena appear as strong correlations between the outcomes of spatially separated measurements performed by independent observers. These correlations enable us to distinguish the classical and quantum origins of the experiments. Recently, a similar split between classical and quantum features was found in a setup closely related to quantum communication tasks, the so-called prepare-and-measure (PM) scenario [7]. This scenario can be viewed as a communication game [8] between two parties, Alice (the sender) and Bob (the receiver), where the dimension of the classical (versus quantum) system communicated from Alice to Bob is bounded from above.
The PM game is described as follows (see panel (a) of Fig. 1). Upon receiving an input x ∈ {1, . . ., n}, a preparation device (controlled by Alice) emits a physical system in a quantum state ρ x . We assume ρ x ∈ L(C d ) for a given d ≥ 2. In the following, however, we will focus explicitly on d = 2, that is, we assume that two-dimensional quantum systems (qubits) or classical systems (bits) are transmitted from Alice to Bob. The state ρ x is passed to a measurement device which, upon receiving an input y ∈ {1, . . ., m}, performs a measurement and obtains an outcome b ∈ {1, . . ., o}. In this paper we will focus on the smallest nontrivial case of o = 2, i.e., measurements with two outcomes, in which case we denote the outcomes by b = ±1.
Our goal in this scenario is to compare and quantify the performance of qubits with that of classical bits. This scenario has been discussed to some extent for a small number of preparations n and measurements m (see e.g. Refs. [7, 9-14]). Note also that the emblematic protocol, the so-called quantum random access code (QRAC) [15], is a special instance of the PM game. See Ref. [8] for more references on communication protocols related to QRAC. These games have also found applications in randomness generation (see [16,17]). More recent notable generalizations of QRAC protocols have been considered in Refs. [18-21].
However, in this paper we turn our attention to the case of large n and m (i.e., in the range of 70). We will see that the main bottleneck of the study is the computation of the relevant quantities associated with the classical bit case, for which we develop large-scale numerical tools in this paper. We first concentrate on the qubit case, and then we elaborate on the classical bit case. In the qubit case we define q(M ), whereas in the classical bit case we define the quantities S(M ) and L 2 (M ). These quantities in turn define the ratios q(M )/L 2 (M ) and (q(M ) − S(M ))/(L 2 (M ) − S(M )), whose suprema over all witness matrices M define our new constants K PM and K D , respectively. These constants have the physical meaning of defining the respective critical white noise tolerance and critical detection efficiency of the binary-outcome measurements in the qubit prepare-and-measure scenario.
In this paper, we relate these two introduced constants to the purely mathematical Grothendieck constant K G [22]. More generally, Grothendieck's problem has implications for many areas of mathematics. It first had a major impact on the theory of Banach spaces and then on C * -algebras. More recently, it has influenced graph theory and computer science (see e.g. [23]). Furthermore, a connection of the Grothendieck problem to Bell nonlocality was noticed by Tsirelson [24]. Subsequently, Acin et al. [25], based on the work of Tsirelson, exploited this connection to show that the critical visibility of the Bell nonlocal two-qubit Werner state is given by 1/K G (3), where K G (3) is a refined version of Grothendieck's constant [26]. Relating the local bound of correlation Bell scenarios to the classical bit bound of PM communication scenarios, we find in this paper that the new constant K PM is equal to K G (3). We also introduce the constant K D , which we relate to the critical detection efficiency η crit of binary-outcome measurements in the qubit PM scenario. In particular, we find in our model for finite detection efficiency that η crit = 1/K D . Armed with our efficient numerical tools, we bound the constant K D from below, which implies an upper bound of 0.6377 on η crit .
Qubit case.- In the qubit binary-outcome (o = 2) case, the measurement is described by two positive operators {Π b|y }, b = ±1, acting on C 2 , which sum to the identity, Π +1|y + Π −1|y = 1 1 for each y, where 1 1 denotes the 2 × 2 identity matrix. The statistics of the experiment are then given by the formula

p(b|x, y) = Tr(ρ x Π b|y ). (1)

It is important to note that both the preparations and the measurements are unknown to the observer, up to the fact that the dimension of the transmitted system is two. Since we have binary outcomes b = ±1, it becomes convenient to use the expectation values

E x,y = p(+1|x, y) − p(−1|x, y) = Tr(ρ x (Π +1|y − Π −1|y )). (2)

Note that E x,y can take values in [−1, +1] for all x, y. However, if the Hilbert space dimension of the communicated particle is bounded, then in general not all expectation values E x,y in [−1, +1] become possible. The simplest scenario that shows this effect appears already for n = 3, m = 2 and o = 2 (see [7] for an example).
With respect to the measurement operators Π b|y , one case, namely the set of projective rank-1 measurements, is of particular interest to us. In this case, we have

Π b|y = (1 1 + b ⃗ b y • ⃗ σ)/2, (3)

where ⃗ b y ∈ S 2 , b = ±1, and ⃗ σ = (σ x , σ y , σ z ) is the vector of Hermitian 2 × 2 Pauli matrices. On the other hand, let us set

ρ x = (1 1 + ⃗ a x • ⃗ σ)/2, (4)

where ⃗ a x ∈ S 2 . This density matrix corresponds to a pure state with Bloch vector ⃗ a x . Note that in this case, the above equations give us

E x,y = ⃗ a x • ⃗ b y , (5)

where ⃗ a x , ⃗ b y ∈ S 2 . Limits on the set of possible distributions in dimension two can be captured by the expression

W = Σ x,y M x,y E x,y , (6)

where M x,y are coefficients of a real witness matrix M of dimension n × m. Let us then define the quantity

Q(M ) = max Σ x,y M x,y E x,y , (7)

where E x,y is of the form (2), and where we maximize the expression over Bob's measurements {Π b|y } and the qubit states ρ x in Eq. (1). Thus, Q(M ) is the value that is achievable with the most general two-dimensional quantum resources in our PM setup. We further define the quantity

q(M ) = max Σ x,y M x,y ⃗ a x • ⃗ b y , (8)

where we maximize over the unit vectors ⃗ a x and ⃗ b y in the three-dimensional Euclidean space. It turns out that Q(M ) can be attained with pure qubit states and projective measurements [11]. However, the optimal projective measurements are in general not of rank 1; they can be of rank 0 or rank 2 as well. Indeed, there are example matrices M (even in the simple n = m = 3, o = 2 case) for which Q(M ) > q(M ). Note that q(M ) corresponds to projective qubit measurements of rank 1, in which case E x,y = ⃗ a x • ⃗ b y (see equation (5)). Yet, as we will see, the set {E x,y } x,y obtained by rank-1 projective measurements is a significant subset of the set {E x,y } x,y corresponding to the most general qubit measurements. The tools for computing the value Q(M ) can be found in Refs. [28,29]. Importantly, the value of Q(M ) can serve as a dimension witness in the prepare-and-measure scenario [7]. Indeed, if W > Q(M ) for some M (where the witness W is defined by equation (6)), this implies that the set of states {ρ x } n x=1 transmitted to Bob must have contained at least one state ρ x=x ′ of dimension at least three (that is, a qutrit).
Classical bit versus qubit case.- It turns out that the witness W can also serve as a quantumness witness. To this end, let us discuss the classical bit case. That is, we want to bound the expression (6) if Alice can only prepare classical two-dimensional systems (i.e., bits). Let us denote by L 2 (M ) the bound on (6) which corresponds to this situation. If W > L 2 (M ), this certifies that some of the measurements performed by Bob are true (incompatible) quantum measurements acting on true qubit states [7,30]. Mathematically, the classical bit case is equivalent to the qubit case discussed above, with the restriction that all qubits are sent in the same basis, and all measurements of Bob are carried out in the very same basis. That is, if we want to maximize (6) for correlations E x,y arising from classical two-dimensional systems, the maximum can be attained with pure states

ρ x = (1 1 + a x σ z )/2, (9)

where a x = ±1, and observables B y = Π +1|y − Π −1|y which have the form

B y = b + y (1 1 + σ z )/2 + b − y (1 1 − σ z )/2, (10)

where b ± y = ±1 and σ z is the standard Pauli matrix σ z = diag(1, −1). Inserting these values into (2), we obtain

E x,y = ((1 + a x )/2) b + y + ((1 − a x )/2) b − y . (11)

Since we have binary variables a x = ±1, this translates to E x,y = b + y if a x = +1 and E x,y = b − y if a x = −1. Then the classical one-bit bound L 2 (M ) is given by

L 2 (M ) = max Σ x,y M x,y E x,y , (12)

where E x,y is defined by (11) and we maximize over all binary variables a x , b + y , b − y ∈ {−1, +1}. In words, the expression (11) corresponds to the following deterministic protocol. Alice, depending on x, prepares a bit a x = ±1, which she sends to Bob, who outputs b = ±1 depending on the value of a x and the measurement setting y. That is, Bob's output is a deterministic function b = f (a x , y), where the output assumes b = ±1. We can write

L 2 (M ) = max Σ y [ b + y Σ x:a x =+1 M x,y + b − y Σ x:a x =−1 M x,y ], (13)

where the maximum is taken over all binary a x , b + y and b − y variables ±1. We can eliminate the variables b + y and b − y from the above expression and get the following formula for L 2 (M ):

L 2 (M ) = max a x =±1 ( ∥Σ x:a x =+1 M x ∥ 1 + ∥Σ x:a x =−1 M x ∥ 1 ), (14)

which only consists of a maximization over the binary variables a x = ±1. In the above formula, M x denotes the xth row of the real n × m matrix M , and ∥v∥ 1 denotes the Manhattan norm of the real vector v, i.e., ∥v∥ 1 = Σ i |v i |. We prove several interesting properties of L 2 (M ) in the Methods section III A. In particular, L 2 is proven to be a matrix norm. Let us recall that L 2 (M ) is a key quantity in our study, as it enables witnessing both quantumness of preparations and quantumness of measurements. Indeed, W > L 2 (M ), where W is defined in equation (6), certifies incompatible quantum measurements acting on true qubit states. That is, not all the performed measurements and not all prepared states originate from the same basis [7]. In section III A we do not restrict our study to the properties of the L 2 norm but generalize L 2 (M ) to L k (M ) for any k > 2 and prove that L k is a norm as well; moreover, L k (M ) is a monotonically increasing function of k. Furthermore, in section III B we give tips for an efficient implementation of the branch-and-bound algorithm [31] for computing the L k (M ) bound for k = 2 as well as for k > 2.
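As a quick sanity check of formula (14), a brute-force evaluation over all sign assignments a x = ±1 can be sketched as follows (illustrative code, not the paper's optimized implementation):

```python
from itertools import product
import numpy as np

def L2(M):
    """Brute-force L_2(M) via (14): split the rows of M into two groups by
    a_x = ±1; each group contributes the Manhattan norm of its summed row."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    best = -np.inf
    for a in product([1, -1], repeat=n):
        a = np.array(a)
        val = (np.abs(M[a == 1].sum(axis=0)).sum()
               + np.abs(M[a == -1].sum(axis=0)).sum())
        best = max(best, val)
    return best

# The CHSH-form matrix discussed later in the paper has L_2 = 4.
print(L2([[1, 1], [1, -1]]))  # 4.0
```

The cost is exponential in n, which is precisely why the branch-and-bound machinery of Section III B is needed for matrices of size 70 × 70.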
Introducing the constants K PM and K D .- We define two quantities K PM and K D which are related to L 2 (M ) and q(M ), as follows. Let us first introduce K PM , in which case we ask for the maximal ratio between q(M ) and L 2 (M ). That is, we are interested in the value

K PM = sup M q(M )/L 2 (M ), (15)

where the supremum is taken over all possible real n × m matrices M , where q(M ) is defined by (8) and L 2 (M ) is defined by (12).
Let us now recall the Grothendieck constant of order 3 [22,25,26,32,33], which is given by

K G (3) = sup M q(M )/L(M ), (16)

where the supremum is taken over real matrices M of arbitrary dimensions n × m, q(M ) is defined by (8) and L(M ) is defined as follows:

L(M ) = max Σ x,y M x,y a x b y , (17)

where the maximum is taken over all a x , b y ∈ {−1, +1}.
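The local bound (17) can be evaluated by brute force: for a fixed sign vector a, the optimal b y is the sign of the yth entry of M ⊤ a, so L(M ) = max a ∥M ⊤ a∥ 1 . A small illustrative sketch:

```python
from itertools import product
import numpy as np

def L(M):
    """Local bound L(M) = max over a_x, b_y = ±1 of sum_{x,y} M[x,y] a_x b_y.
    For fixed a the optimal b_y is sign((M^T a)_y), hence max_a ||M^T a||_1."""
    M = np.asarray(M, dtype=float)
    return max(np.abs(M.T @ np.array(a)).sum()
               for a in product([1, -1], repeat=M.shape[0]))

print(L([[1, 1], [1, -1]]))  # 2.0  (the CHSH-form matrix)
```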
The value of K G (3) in (16), according to the recent work of Designolle et al. [34], is bounded by

1.4367 ≤ K G (3) ≤ 1.4546, (18)

where the lower bound is an improved version of that given in Ref. [35] and the upper bound is an improved version of that given in Refs. [36,37]. See Ref. [38] for some historical data on the best lower and upper bounds for K G (d). We prove that K PM = K G (3) in the Results section II A. We are interested in K D as well, a quantity similar to K PM . We define this quantity as follows:

K D = sup M (q(M ) − S(M ))/(L 2 (M ) − S(M )), (19)

where

S(M ) = Σ x,y M x,y . (20)

Note the relation

(q(M ) − S(M ))/(L 2 (M ) − S(M )) ≥ q(M )/L 2 (M ), (21)

which holds whenever L 2 (M ) > S(M ) ≥ 0 (also note that q(M ) ≥ L 2 (M )); therefore we have K D ≥ K PM . From this we immediately obtain the lower bound K D ≥ 1.4367.
In this paper, we give efficient large-scale numerical methods to obtain even better lower bounds on the above quantity. Namely, we prove the lower bound K D ≥ 1.5682. We also prove an upper bound of 2 on this quantity, so putting it all together we have the following interval for the constant K D :

1.5682 ≤ K D ≤ 2. (22)
It is an open problem to close or at least reduce the gap between the lower and upper limits.
We next present the Results section, which contains our main findings in three subsections.

II. RESULTS

A. Proof of the relation K PM = K G (3)
To prove our claim, we relate L(M ′ ) to L 2 (M ′ ), where M ′ is given by the following matrix (see also (67)):

M ′ = (M ; −M ), (23)

where M is a real n × m matrix. Denote by M x the xth row of the matrix M . Note that according to the above definition M ′ has size 2n × m and has rows M ′ x = M x and M ′ x+n = −M x for all x = 1, . . ., n. Then the following lemma holds.
Lemma II.1. L 2 (M ′ ) = L(M ′ ) = 2L(M ) for any matrix M ′ of the form (23), where L 2 is the L 2 norm given by the definition (12) and L is the local bound given by (17) and (40).
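Lemma II.1 is easy to check numerically on small random matrices with brute-force routines (a self-contained, illustrative sketch; not the paper's code):

```python
from itertools import product
import numpy as np

def L(M):
    """Local bound: max over a_x = ±1 of ||M^T a||_1."""
    M = np.asarray(M, dtype=float)
    return max(np.abs(M.T @ np.array(a)).sum()
               for a in product([1, -1], repeat=M.shape[0]))

def L2(M):
    """One-bit bound (14): best split of the rows into two groups."""
    M = np.asarray(M, dtype=float)
    best = -np.inf
    for a in product([1, -1], repeat=M.shape[0]):
        a = np.array(a)
        best = max(best, np.abs(M[a == 1].sum(axis=0)).sum()
                       + np.abs(M[a == -1].sum(axis=0)).sum())
    return best

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
Mp = np.vstack([M, -M])            # the doubled matrix M' = (M; -M) of (23)
assert np.isclose(L2(Mp), 2 * L(M))
assert np.isclose(L(Mp), 2 * L(M))
```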
The proof of this lemma is given in the Methods section III C. Then we need the following lemma.

Lemma II.2. K PM ≤ K G (3).

Proof. For an arbitrary matrix M , we have L 2 (M ) ≥ L(M ); this is proved in the Methods section III A. Hence q(M )/L 2 (M ) ≤ q(M )/L(M ) for every M , and the lemma follows from the definitions (16) and (15). ■

Our next lemma reads:

Lemma II.3. K PM ≥ K G (3).

Proof. To prove this, it suffices to show that for an arbitrary real matrix M , the matrix M ′ defined by (23) satisfies q(M ′ ) = 2q(M ) and L 2 (M ′ ) = 2L(M ). The first relation follows from the special structure of M ′ . The second relation has been shown in Lemma II.1. Therefore, K PM cannot be less than K G (3), which proves our claim. ■

Corollary II.4. As a corollary of the above Lemmas II.2 and II.3, we obtain K PM = K G (3). Hence we have the same bounds 1.4367 ≤ K PM ≤ 1.4546 as for K G (3) (see (18)).

From the corollary above, we have a matrix M ′ of size 48 × 24 with q(M ′ )/L 2 (M ′ ) > √ 2. Indeed, the construction is based on a matrix M of size 24 × 24 which provides q(M )/L(M ) > √ 2 [39]. To the best of our knowledge, this is the smallest matrix M that has the property q(M )/L(M ) > √ 2. The 48-by-24 matrix M ′ then follows from (23). On the other hand, q(M )/L(M ) = √ 2 is already attained with a 2 × 2 matrix M in CHSH form [40]:

M = (1, 1; 1, −1),

written as the vector of its rows. It remains an open question to show that K PM > √ 2 with a matrix of size smaller than 48 × 24, which might require a different construction than the one above.
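The CHSH ratio q(M )/L(M ) = √ 2 can be checked numerically: with the standard CHSH Bloch vectors (known to be optimal for this matrix), the inner-product value Σ x,y M x,y ⃗ a x • ⃗ b y equals 2√2, while a brute-force search over sign assignments gives L(M ) = 2. A small illustrative sketch (not the paper's code):

```python
from itertools import product
import numpy as np

def L(M):
    """Local bound: max over a_x = ±1 of ||M^T a||_1."""
    M = np.asarray(M, dtype=float)
    return max(np.abs(M.T @ np.array(a)).sum()
               for a in product([1, -1], repeat=M.shape[0]))

M = np.array([[1.0, 1.0], [1.0, -1.0]])                    # CHSH-form matrix
A = np.array([[1, 0, 0], [0, 1, 0]], float)                # Alice's Bloch vectors
B = np.array([[1, 1, 0], [1, -1, 0]], float) / np.sqrt(2)  # Bob's Bloch vectors
q_val = (M * (A @ B.T)).sum()                              # = 2*sqrt(2)
print(q_val / L(M))                                        # ≈ 1.4142
```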
Upper bound.- We first prove the upper bound. Translating the Gisin-Gisin model [27] from Bell nonlocality [1,2] to the PM scenario [7], we find that the following statistics can be obtained in the PM scenario with 1 bit of classical communication:

E(⃗ a, ⃗ b) = (1 + ⃗ a • ⃗ b)/2, (24)

where ⃗ a ∈ S 2 denotes the preparation vector and ⃗ b ∈ S 2 denotes the measurement Bloch vector. We give the proof of this formula in the Methods section III D, and panel (b) of figure 1 describes the classical one-bit model. On the one hand, due to the above Gisin-Gisin one-bit model, we have for an arbitrary n × m matrix M :

L 2 (M ) ≥ max Σ x,y M x,y E(⃗ a x , ⃗ b y ), (25)

where E has the form (24) and we maximize over the unit vectors ⃗ a x and ⃗ b y in the three-dimensional Euclidean space. On the other hand, substituting E x,y := E(⃗ a x , ⃗ b y ) of the form (24) into (25), we find

max Σ x,y M x,y (1 + ⃗ a x • ⃗ b y )/2 = (q(M ) + S(M ))/2, (26)

where the maximization is over the unit vectors ⃗ a x and ⃗ b y in the three-dimensional Euclidean space, and we used the definition of q(M ) in (8) and the definition of S(M ) in (20). Comparing the right-hand side of (25) with (26), we have L 2 (M ) − S(M ) ≥ (q(M ) − S(M ))/2, i.e.,

(q(M ) − S(M ))/(L 2 (M ) − S(M )) ≤ 2 (27)

for every M ; taking the supremum on the left-hand side, which by (19) is just K D , proves the upper bound K D ≤ 2. ■

Lower bound.- In the following, we prove the lower bound using large-scale numerical tools. Note, however, that the resulting bound is rigorous; in particular, the final result is due to exact computations. The steps are as follows.
Given a fixed setup with Alice's Bloch vectors ⃗ a x , x = 1, . . ., n, and Bob's Bloch vectors ⃗ b y , y = 1, . . ., m, the method is the following. We define the (n × m)-dimensional one-parameter family of matrices E x,y (η) with entries

E x,y (η) = ηE x,y + (1 − η), (28)

where E x,y = ⃗ a x • ⃗ b y . We wish to show that for some η ∈ [0, 1], the distribution (28) in the PM scenario cannot be simulated with one bit of classical communication. In fact, due to the expectation value (24) of the Gisin-Gisin model, it is enough to consider the interval η ∈ [1/2, 1]. To show quantumness, we therefore need to find a matrix M of a certain size n × m and a given η such that

Σ x,y M x,y E x,y (η) > L 2 (M ), (29)

where E x,y (η) is defined by (28) and L 2 (M ) is defined by (12). The above problem, i.e., finding a suitable M with the smallest possible η in (29), can be solved by a modified version [39] of the original Gilbert algorithm [41], a popular collision detection method used, for example, in the video game industry. The algorithm is iterative, and the procedure adapted to our problem is given in Sec. III E. Indeed, using the algorithm of Gilbert, we find the value

η * = 0.6377 (30)

and a corresponding 70 × 70 matrix M and E x,y (η * ) of the form (28) which satisfy inequality (29). We will give more technical details of the input parameters and the implementation of the algorithm in Section III F. Then, rearranging (29) and making use of equation (28), we find the bound

(q(M ) − S(M ))/(L 2 (M ) − S(M )) > 1/η * ≈ 1.5682, (31)

from which, due to the definitions (8) and (19), the lower bound on K D follows.
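To illustrate the idea, a minimal, unoptimized sketch of Gilbert's loop for the one-bit polytope is given below. The oracle maximizing a linear functional over the polytope is exactly an L 2 -type maximization: for each bit assignment a x , the optimal outputs b ± y are the signs of the corresponding group row sums. All names and the simple stopping rule are illustrative choices, not the paper's implementation [39,41]:

```python
from itertools import product
import numpy as np

def oracle(D):
    """Vertex E of the one-bit polytope maximizing <D, E>.
    For each bit assignment a_x the optimal b_y^± are the signs of the
    row sums of D over the two groups; the best value is L_2(D)."""
    n, m = D.shape
    best_val, best_E = -np.inf, None
    for a in product([0, 1], repeat=n):
        a = np.array(a)
        s1 = D[a == 1].sum(axis=0)
        s0 = D[a == 0].sum(axis=0)
        val = np.abs(s1).sum() + np.abs(s0).sum()
        if val > best_val:
            bp = np.where(s1 >= 0, 1.0, -1.0)
            bm = np.where(s0 >= 0, 1.0, -1.0)
            best_val = val
            best_E = np.array([bp if ax == 1 else bm for ax in a])
    return best_E

def gilbert_distance(T, iters=500):
    """Approximate Euclidean distance from the target matrix T to the
    one-bit polytope; a positive distance certifies inequality (29)."""
    s = oracle(T)
    for _ in range(iters):
        d = T - s
        if np.linalg.norm(d) < 1e-9:
            break
        v = oracle(d)                            # extreme point in direction d
        w = (v - s).ravel()
        t = np.dot(d.ravel(), w) / max(np.dot(w, w), 1e-12)
        s = s + np.clip(t, 0.0, 1.0) * (v - s)   # project T onto segment [s, v]
    return np.linalg.norm(T - s)

# CHSH-type qubit correlations with eta = 1 can be simulated with one bit,
# so their distance to the polytope vanishes.
A = np.array([[1, 0, 0], [0, 1, 0]], float)
B = np.array([[1, 1, 0], [1, -1, 0]], float) / np.sqrt(2)
print(gilbert_distance(A @ B.T))  # ≈ 0 (the target is inside the polytope)
```

In the paper's setting the search is over 70 × 70 targets E(η), where the exponential brute-force oracle above must be replaced by the optimized L 2 code of Section III B.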
C. Physical meaning of the constants K PM and K D

The role of K PM in the PM scenario.- The value of K PM is interesting from a physical point of view as well, since it is related to the critical noise resistance of the experimental setup if the transmitted ρ x goes through a noisy, fully depolarizing channel. That is, 1 − p crit = 1 − (1/K PM ) gives the amount (1 − p crit ) of critical white noise 1 1/2 that the PM experiment with rank-1 projective qubit measurements can maximally tolerate while still being able to detect quantumness. Namely, for a fully depolarizing channel with visibility parameter p, the qubits ρ x emitted by Alice turn into

ρ x → pρ x + (1 − p)1 1/2, (32)

and the expectation value (5) becomes

E x,y = p ⃗ a x • ⃗ b y , (33)

where {⃗ a x } x are the Bloch vectors of Alice's qubits, whereas { ⃗ b y } y are the Bloch vectors of Bob's measurements. To witness quantumness, there must exist expectation values E x,y in (33) and a matrix M of arbitrary size such that

Σ x,y M x,y E x,y > L 2 (M ). (34)

Inserting (33) into (34) and making use of (8), we obtain

p crit = 1/K PM (35)

for the critical noise tolerance. In fact, the value of K G (3) appears in the studies [25,36,42] of the Bell nonlocality of two-qubit Werner states [43]. Note that a recent approach in Ref. [44], based on the simulability of Werner states with local models, yields the same relation (35) between p crit and 1/K G (3).
From the upper and lower bounds on K PM , the following bounds on the amount (1 − p crit ) of critical white noise follow:

1 − 1/1.4367 ≤ 1 − p crit ≤ 1 − 1/1.4546, (36)

i.e., numerically, 0.304 ≲ 1 − p crit ≲ 0.313.

The role of K D in the PM scenario.- In section II B we proved the lower bound K D ≥ 1.5682. Below we show that this bound is related to the finite detection efficiency threshold of Bob's measurements. To this end, we assume that Bob's detectors are not perfect and only fire with probability η. Assume that when the measurement y fails to detect, Bob outputs b y = 1 (due to possible relabelings there is no loss of generality). Assume further that the probability of detection η is the same for all y. This is the problem of symmetric detection efficiency. A review of this problem in the Bell scenario can be found in Ref. [45]. On the other hand, the same problem in the PM scenario has been elaborated in Refs. [46,47], where the upper bound of 1/ √ 2 on the critical value of the symmetric detection efficiency was found.
Since η does not depend on y, the expectation value E x,y becomes E x,y (η) = ηE x,y + (1 − η) for all x and y. Hence, the witness matrix M detects quantumness with finite detection efficiency η (assuming optimal preparation states and measurements) whenever we have

ηq(M ) + (1 − η)S(M ) > L 2 (M ). (37)

Recalling S(M ) = Σ x,y M x,y , solving the above relation for η, and optimizing over all witness matrices M , we obtain the critical detection efficiency η crit :

η crit = min M (L 2 (M ) − S(M ))/(q(M ) − S(M )) = 1/K D , (38)

where K D is defined by (19). In particular, using the lower bound value K D ≥ 1.5682, we obtain the improved upper bound of 0.6377 on η crit .
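As a small worked example of (37) and (38) (illustrative only): for the CHSH-form witness matrix with its optimal Bloch vectors, q = 2√2, S = 2 and L 2 = 4, so the required efficiency would be η > (L 2 − S)/(q − S) ≈ 2.41 > 1. The CHSH witness therefore detects nothing at any efficiency in this model, which is why the much larger (70 × 70) matrices found numerically are needed to reach η * = 0.6377.

```python
import numpy as np

# Detection-efficiency threshold (38) for the CHSH-form witness matrix.
# q, S and L2 are the analytic values discussed in the text: q = 2*sqrt(2)
# with the optimal Bloch vectors, S = sum of entries, L_2 from formula (14).
q = 2 * np.sqrt(2)
S = 2.0
L2 = 4.0
eta = (L2 - S) / (q - S)
print(eta)  # 2.414...: above 1, so no detection at any efficiency
```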
It should be noted, however, that the above is not the most general detection efficiency model. Rather than outputting b y = 1, Bob can output a third result, and it is an open problem whether this third outcome can lower the detection efficiency threshold. In the above, we also assumed that Bob's qubit measurements are rank-1 projectors that can achieve q(M ). However, it is known that the true qubit maximum Q(M ) (in (7)) can be larger than q(M ) (in (8)) for a given M . Hence, we can say that the most general symmetric detection efficiency threshold is upper bounded by 1/K D , and it is an open problem whether this upper bound is tight or not.
Let us mention that in the two-outcome scenario a different type of modelling of the loss event due to the finite detection efficiency can also be imagined. Namely, let us assume that Bob associates the outcomes +1 and −1 with the no-detection event with equal probability. In this case, the expectation value E x,y (η), which reads ηE x,y + (1 − η) when outcome +1 is assigned to the no-detection event, becomes ηE x,y . This leads to the modified inequality ηq(M ) > L 2 (M ) in Eq. (37) and the modified critical detection efficiency η crit = min M (L 2 (M )/q(M )) = 1/K PM . Therefore, using Bob's non-deterministic assignment of the ±1 outcomes for the no-detection event, the critical detection efficiency can be linked to K G (3) = K PM , i.e., the Grothendieck constant of order 3. Note, however, that due to our finding that K G (3) < K D , the critical detection efficiency in this non-deterministic modelling of the no-detection event will be suboptimal compared to the deterministic assignment model, where we associate the no-detection event with a given outcome.

III. METHODS

Notations.- We first introduce notation used throughout this subsection. Let A n , n = 0, 1, 2, . . ., be the set of n-dimensional vectors over the set A. Let v i denote the ith element of v ∈ A n (i = 1, 2, . . ., n). Let _; _ : A n × A m → A n+m denote the concatenation of vectors. Let () be the singleton element of A 0 . Further, let (a) ∈ A 1 if a ∈ A. The parentheses may be omitted, so (1); (2); (3) = 1; 2; 3 ∈ R 3 , for example. Let a n = a; a; . . .; a ∈ A n where a ∈ A. We write a instead of a n if n can be inferred from the context. We define M n,m as the set of real n × m matrices. Matrices are represented as vectors of their row vectors, i.e., M n,m = (R m ) n . Let M ⊤ ∈ M m,n be the transpose of M ∈ M n,m and let I m ∈ M m,m denote the m × m identity matrix. Further, it is convenient to define by W n,k = {I k j | j = 1, 2, . . ., k} n ⊂ M n,k the set of n × k matrices each row of which is a row I k j of the identity matrix I k , i.e., all entries of a row are 0 except exactly one, which is 1.
Note that W n,k is defined above in Notations and W ⊤ denotes the transpose of W . The L k bound is defined as

L k (M ) = max W ∈W n,k ∥W ⊤ M ∥ 1 , (39)

where, for a matrix A, ∥A∥ 1 denotes the sum of the Manhattan norms of the rows of A. We prove below that Eq. (39) corresponds to Eq. (14) in the case of k = 2. Indeed, for k = 2 a matrix W ∈ W n,2 encodes a partition of the rows of M into two groups (the rows x with W x = I 2 1 and those with W x = I 2 2 ), and the two rows of W ⊤ M are exactly the sums of the rows of M within the two groups; hence ∥W ⊤ M ∥ 1 reproduces the two Manhattan norms appearing in (14).

Properties of L k .- We prove several interesting properties of L k . Note that our focus in the main text is on k = 2. However, the general case k ≥ 2 is of interest in its own right. Moreover, it is also motivated physically, corresponding to classical communication beyond bits [7,48]. First we prove that L k is a norm for any k ≥ 2. To this end, we prove its homogeneity, positive definiteness and subadditivity properties.
Lemma III.1. L k is a norm.

Proof. Homogeneity:

L k (tM ) = |t| L k (M ),

where |t| denotes the absolute value of the scalar t and L k is defined by (39); it holds because ∥W ⊤ (tM )∥ 1 = |t| ∥W ⊤ M ∥ 1 for every W ∈ W n,k . Positive definiteness: L k (M ) ≥ 0, with L k (M ) = 0 if and only if M = 0. Triangle inequality: L k (M + N ) ≤ L k (M ) + L k (N ), which follows from the triangle inequality of ∥ • ∥ 1 applied to W ⊤ (M + N ) = W ⊤ M + W ⊤ N . ■

It is convenient to record the local bound in the Methods notation as well:

L(M ) = max a x ,b y =±1 Σ x,y M x,y a x b y = max a x =±1 ∥M ⊤ a∥ 1 . (40)

The above definition is consistent with the one given in (17). L(M ) is the local or classical bound of correlation Bell inequalities [24] defined by the correlation matrix M in (40).
The quantity L(M ) also appears in the computer science literature under the name of K m,n -quadratic programming [49].
Let us note that an efficient computation of L(M ) has recently been proposed in Ref. [35], along with the code [50].
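For small matrices, definition (39) can be evaluated directly by enumerating all assignments of the n rows to k groups; this also lets one check the monotonicity L k (M ) ≤ L k+1 (M ) proved below. An illustrative brute-force sketch (exponential in n, unlike the branch-and-bound code of Sec. III B):

```python
from itertools import product
import numpy as np

def Lk(M, k):
    """L_k(M) per (39): assign each row of M to one of k groups (each row of
    W selects a group); ||W^T M||_1 sums the Manhattan norms of the k
    summed rows."""
    M = np.asarray(M, dtype=float)
    n, m = M.shape
    best = -np.inf
    for groups in product(range(k), repeat=n):
        sums = np.zeros((k, m))
        for x, g in enumerate(groups):
            sums[g] += M[x]
        best = max(best, np.abs(sums).sum())
    return best

M = np.array([[1.0, 1.0], [1.0, -1.0], [0.5, -2.0]])
print(Lk(M, 2) <= Lk(M, 3))  # True: L_k grows monotonically with k
```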
First we prove the basic property that L 2 (M ) ≥ L(M ) for any M . Next we prove that L k (M ) ≤ L k+1 (M ) for k ≥ 2. Then we bound L k (M ) from above by the value of L(M ) multiplied by k; however, we do not know whether this bound can be saturated for k > 2 or not. The lemma stating our first claim is as follows.

Lemma III.2. L 2 (M ) ≥ L(M ) for every M ∈ M n,m .

Proof. In the maximization (12) defining L 2 (M ), restrict Bob's outputs to b − y = −b + y for all y. Then E x,y = a x b + y , and the restricted maximization reduces exactly to the local bound (17). Hence L(M ), the restricted maximum, cannot exceed L 2 (M ). ■

Our next lemma states the monotonicity in k.

Lemma III.3. L k (M ) ≤ L k+1 (M ) for all k ≥ 2.

Proof. Every W ∈ W n,k can be extended to a matrix W ′ ∈ W n,k+1 by appending an all-zero (k + 1)th column, which leaves ∥W ⊤ M ∥ 1 = ∥W ′⊤ M ∥ 1 unchanged; therefore the maximization defining L k+1 runs over a larger set. ■

Finally, we prove an upper bound on L k (M ). Our lemma reads as follows.

Lemma III.4. L k (M ) ≤ kL(M ).

Proof. Let W ∈ W n,k be optimal for (39) and denote by S j the set of rows assigned to group j. Then the jth row of W ⊤ M is Σ x∈S j M x = M ⊤ χ j , with χ j ∈ {0, 1} n the indicator vector of S j . Writing χ j = (1 + a j )/2 with a j = 2χ j − 1 ∈ {−1, +1} n , the triangle inequality gives ∥M ⊤ χ j ∥ 1 ≤ (∥M ⊤ 1∥ 1 + ∥M ⊤ a j ∥ 1 )/2 ≤ L(M ), where in the last step we invoked the definition (40). Summing over the k groups yields L k (M ) ≤ kL(M ). ■

It is an open question whether Lemma III.4 is tight for k > 2 or not. However, we can find a family of matrices M (k) , k ≥ 2, such that the ratio L k (M (k) )/L(M (k) ) tends to infinity with increasing k. More formally, we have:

Lemma III.5. For all ε > 0 there exist a matrix M and k > 1 such that L k (M )/L(M ) > 1/ε.

The proof is based on an explicit construction of matrices M (k) , k = 2, 3, . . ., defined in Ref. [51]. See also Refs. [52,53].
By explicit calculations on this family one finds the claimed divergence of the ratio. ■ Note that in the particular case of k = 2 the matrix M (k) is the CHSH expression [40], in which case L(M (2) ) = 2 and L 2 (M (2) ) = 4. Hence, for k = 2 the upper bound in Lemma III.4 is tight. We conjecture that the bound is not tight for greater values of k.
Finally, we show how L 2 , and in general L k , behaves under the concatenation (A; B) of two matrices A and B with the same number of columns.

Lemma III.6. Let A ∈ M i,m , B ∈ M j,m . Then we have L k (A; B) ≤ L k (A) + L k (B).

Proof. Any W ∈ W i+j,k splits as W = (W A ; W B ) with W A ∈ W i,k and W B ∈ W j,k , and W ⊤ (A; B) = W A ⊤ A + W B ⊤ B. The triangle inequality for ∥ • ∥ 1 then gives ∥W ⊤ (A; B)∥ 1 ≤ ∥W A ⊤ A∥ 1 + ∥W B ⊤ B∥ 1 ≤ L k (A) + L k (B).
■ Note that L k (A) ≤ L k (A; B) does not hold in general; explicit counterexample matrices A and B can be verified by direct calculation. Finally, it is shown that L k relates to the cut norm C, a matrix norm introduced by Frieze and Kannan in Ref. [54] (see also [55] for several applications in graph theory). This norm is defined as follows:

C(M ) = max |Σ x,y M x,y a x b y |, (53)

where the maximum is taken over all a x , b y ∈ {0, 1}. Note the similarity of this definition with the L(M ) norm (17), which is equivalent to (40). It has been shown that C(M ) is related to L(M ) as follows [54,56]:

C(M ) ≤ L(M ) ≤ 4C(M ). (54)

Using the above relation (54) along with Lemma III.4, we find that

L k (M ) ≤ kL(M ) ≤ 4kC(M ), (55)

and for the special case of L 2 we have the following lower and upper bounds:

C(M ) ≤ L 2 (M ) ≤ 8C(M ). (56)

Generalization of the L k norm.- Below we generalize the norm L k (M ) to F M , an extension which will prove to be a key ingredient in the branch-and-bound [31] implementation of the L k algorithm. To do so, we first define the following function.

Definition III.7. For P ∈ W j,k , 0 ≤ j ≤ n, let

F M (P ) = max { ∥W ⊤ M ∥ 1 : W ∈ W n,k , W = (P ; V ) for some V ∈ W n−j,k }.
In other words, F M (P ) is the maximum of ∥W ⊤ M ∥ 1 where W ∈ W n,k and the prefix of W is P . F M can be considered a generalization of L k (M ); in particular, F M (()) = L k (M ). The following lemma introduces a key property which is made use of in the branch-and-bound method.

Lemma III.8. For P ∈ W j,k with j < n we have

F M (P ) = max i=1,...,k F M ((P ; I k i )),

while F M (P ) = ∥P ⊤ M ∥ 1 for complete P ∈ W n,k .

■ Let us now give the following definition further generalizing F M (P ):
Definition III.9. For P ∈ W j,k and c ∈ R, let

f M (P )(c) = max(c, F M (P )).

The computation of f M can be optimized such that for big enough values of c, f M (P )(c) returns c without computing F M (P ). This is expressed by the following lemma.
Lemma III.10. Write M = (A; B) with A ∈ M j,m and B ∈ M n−j,m , and let P ∈ W j,k . If ∥P ⊤ A∥ 1 + L k (B) ≤ c, then f M (P )(c) = c.

Proof. Every completion W = (P ; V ) with V ∈ W n−j,k satisfies W ⊤ M = P ⊤ A + V ⊤ B, so that ∥W ⊤ M ∥ 1 ≤ ∥P ⊤ A∥ 1 + ∥V ⊤ B∥ 1 ≤ ∥P ⊤ A∥ 1 + L k (B) ≤ c. Hence F M (P ) ≤ c, and f M (P )(c) = max(c, F M (P )) = c. ■

Programming tips for the efficient implementation of the L 2 and L k codes

In this subsection, we give programming tips for the branch-and-bound [31] implementation of the exact computation of L k (M ) for any k ≥ 2. For k = 2 and k = 3 our algorithms are even faster than the L k solver for general k, due to specialization which we detail below. First, we remind the reader of the notation defined in Sec. III A. The Haskell code can be downloaded from GitHub [57]. Instructions for installing and using the code (including parallel execution and using guessed results) can also be found there.
Branch-and-bound calculation of L k .- The norm L k (M ) for k ≥ 2 can be calculated using the following definition and lemma.
Definition III.12. For all M ∈ M n,m and 0 ≤ i ≤ n, write M = (A; B) with A ∈ M i,m and B ∈ M n−i,m , and let, for P ∈ W i,k ,

f M (P, c) = max(c, ∥P ⊤ M ∥ 1 ) if i = n,
f M (P, c) = c if i < n and ∥P ⊤ A∥ 1 + L k (B) ≤ c,
f M (P, c) = f M ((P ; I k k ), . . . f M ((P ; I k 2 ), f M ((P ; I k 1 ), c)) . . .) otherwise.

Then

L k (M ) = f M ((), 0). (63)

The function f M recursively calls itself with larger and larger prefixes P until the prefix size reaches n. The middle case is a conditional exit from the recursion, which speeds up the computation crucially.
Reducing cost by sharing sub-calculations.- In Definition III.12, the most expensive calculations are L k (B), ∥P ⊤ M ∥ 1 and ∥P ⊤ A∥ 1 . We show how to reduce their cost. The cost of L k (B) can be reduced by memoizing the previously computed L k values of the bottom submatrices B in a table; taking into account all dependencies, the correct order is to compute L k (B) for the suffix matrices B of increasing size before the main recursion starts. There is also an option to skip the ∥P ⊤ A∥ 1 + L k (B) ≤ c test for large B matrices. This means that L k (B) need not be calculated, and the trade-off is that we miss opportunities for exiting the recursion. In our experience, skipping the test for B ∈ M k,m , k ≥ 3n/4, results in about a 2× speedup.
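The pruning idea can be sketched in a few lines (a simplified stand-in for the paper's Haskell implementation: the bound on a completion below uses the crude estimate that each remaining row x adds at most ∥M x ∥ 1 , instead of the sharper memoized L k (B) bound):

```python
import numpy as np

def L2_bnb(M):
    """Branch-and-bound for L_2(M) in the spirit of Definition III.12:
    rows are assigned one by one to the two groups, and a branch is cut
    when even an optimistic completion cannot beat the current record."""
    M = np.asarray(M, dtype=float)
    n, m = M.shape
    tail = np.zeros(n + 1)              # tail[i] = sum of ||M_x||_1 for x >= i
    for i in range(n - 1, -1, -1):
        tail[i] = tail[i + 1] + np.abs(M[i]).sum()
    best = 0.0

    def go(i, plus, minus):
        nonlocal best
        val = np.abs(plus).sum() + np.abs(minus).sum()
        if i == n:
            best = max(best, val)
            return
        if val + tail[i] <= best:       # conditional exit from the recursion
            return
        go(i + 1, plus + M[i], minus)   # row i joins the a_x = +1 group
        go(i + 1, plus, minus + M[i])   # row i joins the a_x = -1 group

    go(0, np.zeros(m), np.zeros(m))
    return best

print(L2_bnb([[1, 1], [1, -1]]))  # 4.0
```

The pruning bound is valid because, by the triangle inequality, completing the partition can increase the objective by at most the sum of the remaining rows' Manhattan norms.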
The quantity P ⊤ A is already available by the time (P ; I k i ) ⊤ (A; v) is needed, since

(P ; I k i ) ⊤ (A; v) = P ⊤ A + (I k i ) ⊤ v, (64)

i.e., appending a row v assigned to group i simply adds v to the ith row of P ⊤ A. Hence the cost of updating ∥P ⊤ A∥ 1 can be reduced to O(m) by (64). The cost of maintaining P ⊤ M can be reduced in the same way. This implies a considerable speedup; for example, for M ∈ M 70,70 the calculation of L 2 (M ) can be made nearly 70 times faster by this optimization.
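The incremental update (64) can be sketched as follows: appending a row v assigned to group i touches only row i of the running matrix P ⊤ A, so the new Manhattan-norm total is obtained in O(m) (illustrative helper, with hypothetical names):

```python
import numpy as np

def append_row(PtA, row_norms, v, i):
    """Update P^T A and the cached row norms when row v joins group i.
    Only row i changes, so the update costs O(m) rather than O(n m)."""
    PtA = PtA.copy()
    row_norms = row_norms.copy()
    PtA[i] += v
    row_norms[i] = np.abs(PtA[i]).sum()
    return PtA, row_norms

k, m = 2, 3
PtA = np.zeros((k, m))
norms = np.zeros(k)
PtA, norms = append_row(PtA, norms, np.array([1.0, -2.0, 0.5]), 0)
print(norms.sum())  # 3.5
```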
The cost of ∥P ⊤ A∥ 1 can be further reduced by caching the previously calculated Manhattan norms of the rows of the matrix P ⊤ A.
Reducing cost by symmetries.- For all permutation matrices S ∈ P k we have

F M (P ) = F M (P S), (65)

since right-multiplying W by S merely permutes the rows of W ⊤ M , leaving ∥W ⊤ M ∥ 1 invariant. The cost of L 2 can be halved by (65) as follows. Any complete W ∈ W n,2 whose first row is I 2 2 is mapped by the swap permutation to an equally good matrix of the form (I 2 1 ; W ′ ). This means that we can skip the calculation of f M ((I 2 2 ), c) for all c; thus L 2 (M ) = f M ((I 2 1 ), 0), i.e., we start the calculation with a non-empty prefix, which saves work.
Harnessing (65) in the general L k case is a bit more complex. First we define the set of canonical prefixes. A prefix P = I k i 1 ; I k i 2 ; . . .; I k i j is canonical if the first occurrences of the numbers in the index sequence i 1 , i 2 , . . ., i j form the sequence 1, 2, 3, . . . For example, the prefix I k 1 ; I k 1 ; I k 2 ; I k 3 is canonical, whereas I k 2 ; I k 2 ; I k 3 ; I k 1 is not. For each prefix P , there exists a permutation S such that P S is canonical, so that f M (P, c) = f M (P S, c), which means that it is enough to examine only the canonical prefixes to compute L k .
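The canonicalization map can be sketched as follows (index sequences stand for the prefixes; illustrative code):

```python
def canonicalize(prefix):
    """Relabel the group indices of a prefix so that the first occurrences
    of its values read 1, 2, 3, ...; prefixes with the same canonical form
    are related by a permutation S as in (65)."""
    relabel, nxt = {}, 1
    out = []
    for i in prefix:
        if i not in relabel:
            relabel[i] = nxt
            nxt += 1
        out.append(relabel[i])
    return out

print(canonicalize([2, 2, 3, 1]))  # [1, 1, 2, 3]
```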
Parallel and concurrent execution.- For parallel execution one can use the branching equation

F M (P ) = max i=1,...,k F M ((P ; I k i )), (66)

whose k branches can be evaluated independently. We used Eq. (66) for P ∈ W i,k , i < d, where d is a "parallel depth" used for fine-tuning the execution on different architectures. A higher depth is better for more cores.
Parallel execution may miss opportunities for exiting the recursion because there is no communication between threads about the best known L k values at a given point in time. Therefore, we also implemented concurrent execution, where threads share the best known L k values.
Reducing cost by guessed L k values.- Optionally, the computation can be sped up by a guessed L k (M ) value provided by the user. This value is used instead of 0 in Eq. (63). The guessed value may be lower than L k (M ); higher guessed values are better, unless the guessed value exceeds L k (M ), in which case f M simply returns the guessed value. We compare the result of f M with the witness W of the maximal ∥W ⊤ M ∥ 1 value, to be able to detect whether the guessed value was too high.
C. L-norm and L 2 -norm are the same for a special family of matrices M ′

We relate L(M ′ ) to L 2 (M ′ ), where M ′ is given by the following matrix:

M ′ = (M ; −M ), (67)

where M is a matrix of size n × m with arbitrary real entries. Note that M ′ has size 2n × m and has rows M ′ x = M x and M ′ x+n = −M x for all x = 1, . . ., n. Then the following lemma holds.
Lemma III.14. L 2 (M ′ ) = L(M ′ ) = 2L(M ) for any matrix M ′ of the form (67), where L 2 is the L 2 norm given by the definition (14) and L is the local bound given by (40). Note that L k is defined by (39), where the case k = 2 corresponds to the definition of L 2 in (14).
Proof. We fix a matrix M of dimension n × m, which specifies M′ by virtue of (67). Let a_x ∈ {−1, 1}, x = 1, ..., n and b_y ∈ {−1, 1}, y = 1, ..., m be optimal vectors giving L(M) in (40). Note that these vectors are not unique in general; different optimal configurations may exist, but we fix one such optimal pair a_x and b_y. We then choose a_{x+n} = −a_x for x = 1, ..., n, and b+_y = b−_y = b_y for y = 1, ..., m. With these values we obtain the lower bound L_2(M′) ≥ 2L(M) in (12).

Now we show the upper bound L_2(M′) ≤ 2L(M). To arrive at a contradiction, assume that L_2(M′) > 2L(M). Then not all vectors a_x attaining the value L_2(M′) have the property a_{x+n} = −a_x for each x; that is, there exists at least one x, call it x′, for which a_{x′} = a_{x′+n}. Suppose there is exactly one such x′ (the proof for multiple indices x′ with a_{x′} = a_{x′+n} is very similar). Then in the formula (14) for L_2(M′) the two rows x′ and x′ + n appear within the same norm (either in the first or in the second norm, depending on whether a_{x′} = a_{x′+n} takes the value +1 or −1). In both cases, however, these rows cancel each other in the norm in question, since row x′ + n of M′ is the negative of row x′. As a result, two rows of M′ in (67) are eliminated, one from the block M and one from the block −M. However, a matrix ±M from which one row has been removed cannot have a local bound greater than L(M), and the same applies to a matrix ±M from which several rows have been removed. Therefore L_2(M′) > 2L(M) cannot hold either, and we have arrived at a contradiction. ■

D. Adapting the Gisin-Gisin model to the PM scenario

We now adapt the local-hidden-variable (LHV) model of Ref. [27], which exploits the finite efficiency of the detectors to reproduce the quantum correlations of the singlet state exactly. We show that the LHV model in Ref.
[27] can be adapted to the PM communication scenario to produce the expectation value

E(⃗a, ⃗b) = (⃗a · ⃗b + 1)/2.  (68)

The classical model, using one bit of classical communication from Alice to Bob, is as follows. Protocol: Alice and Bob share a classical variable in the form of a unit vector ⃗λ, chosen uniformly at random from the unit sphere S^2.
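The full protocol (Alice sends c = sgn(⃗a · ⃗λ); Bob outputs sgn(c ⃗b · ⃗λ) with probability |⃗b · ⃗λ| and 0 otherwise; the 0 outcome is then grouped with +1, see Fig. 1(b)) can be checked numerically. The following Monte Carlo sketch is our own illustration, not code from the paper:

```python
import math
import random

def simulate_gisin_gisin(a, b, trials=200_000, seed=1):
    """Estimate Bob's expectation value E(a, b) in the one-bit
    Gisin-Gisin protocol adapted to the PM scenario."""
    rng = random.Random(seed)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    total = 0
    for _ in range(trials):
        # Shared randomness: lambda uniform on the unit sphere S^2.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        lam = (r * math.cos(phi), r * math.sin(phi), z)
        c = 1 if dot(a, lam) >= 0 else -1     # Alice's one-bit message
        bl = dot(b, lam)
        if rng.random() < abs(bl):
            out = 1 if c * bl >= 0 else -1    # Bob's +/-1 outcome
        else:
            out = 1                           # coarse graining: b = 0 -> b = +1
        total += out
    return total / trials

a = (0.0, 0.0, 1.0)
b = (math.sin(0.7), 0.0, math.cos(0.7))
est = simulate_gisin_gisin(a, b)
```

Up to statistical fluctuations, the estimate reproduces E(⃗a, ⃗b) = (⃗a · ⃗b + 1)/2 of Eq. (68).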

E. The modified Gilbert algorithm adapted to the PM scenario
For a given η ∈ [1/2, 1] and correlation matrix E(η) defined by (28), the algorithm yields a matrix M satisfying condition (75).

Algorithm:

Input: The number of preparations n and the number of measurement settings m that define the setup. The unit vectors {⃗a_x}_{x=1}^n (i.e., the Bloch vectors of Alice's prepared states) and {⃗b_y}_{y=1}^m (i.e., the Bloch vectors of Bob's rank-1 projective measurements). The (n × m)-dimensional matrix E(η) given by the entries E_xy(η) in (28). The values of ϵ and i_max that define the stopping criteria.
Output: The matrix M of size n × m.
1. Set i = 0 and initialize E^(i) to the n × m zero matrix.
2. Given the matrix E^(i) and the matrix E(η), run a heuristic oracle that maximizes the overlap with E(η) − E^(i) over all deterministic one-bit correlation matrices E^det.

3. Update E^(i+1) as the convex combination of E^(i) and the oracle output E^det,i that lies closest to E(η), increment i, and repeat from Step 2 until the stopping criteria defined by ϵ and i_max are met.

Since the maximization in Step 2 is an NP-hard problem, we perform it with a heuristic method, which we describe in Sec. III G. On the other hand, the description of an exact branch-and-bound type algorithm can be found in Sec. III B. We use the exact method, which is generally more time-consuming than the see-saw method, to check that the output matrix M satisfies the condition (75) with the chosen parameter η. If this is true, it implies the lower bound K_D ≥ 1/η, as proved in Sec. II B. It should also be noted that the branch-and-bound type algorithm is much faster than the brute-force algorithm (the implemented algorithm exploiting parallelism can be found in [57]). On a multi-core desktop computer it can solve problems in the range n = m = 70 in a day, while the brute-force algorithm is limited to about n = m = 40 settings.
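A minimal sketch of this Gilbert-type iteration at toy sizes may look as follows (our own illustration with a brute-force oracle, not the paper's production code [57]; we assume deterministic one-bit correlations of the form E^det_xy = b^{c(x)}_y, i.e., Alice's bit c(x) selects one of Bob's two ±1 assignments, and the names `one_bit_dets` and `gilbert` are ours):

```python
from itertools import product

def one_bit_dets(n, m):
    """All deterministic one-bit correlation matrices E_xy = b[c[x]][y]:
    Alice's bit c(x) selects one of Bob's two deterministic vectors b+, b-."""
    dets = []
    for bp in product((1, -1), repeat=m):
        for bm in product((1, -1), repeat=m):
            for c in product((0, 1), repeat=n):
                dets.append([[(bp, bm)[c[x]][y] for y in range(m)]
                             for x in range(n)])
    return dets

def gilbert(target, dets, iters=50):
    """Gilbert iteration: repeatedly step from E toward the vertex D
    that maximizes the overlap <target - E, D> (Step 2), via the
    convex combination closest to the target (Step 3)."""
    n, m = len(target), len(target[0])
    E = [[0.0] * m for _ in range(n)]
    inner = lambda A, B: sum(A[x][y] * B[x][y] for x in range(n) for y in range(m))
    for _ in range(iters):
        G = [[target[x][y] - E[x][y] for y in range(m)] for x in range(n)]
        D = max(dets, key=lambda d: inner(G, d))          # oracle (Step 2)
        DE = [[D[x][y] - E[x][y] for y in range(m)] for x in range(n)]
        denom = inner(DE, DE)
        if denom == 0:
            break
        t = max(0.0, min(1.0, inner(G, DE) / denom))      # line search (Step 3)
        E = [[E[x][y] + t * DE[x][y] for y in range(m)] for x in range(n)]
    return E
```

If the target lies inside the one-bit polytope, E converges to it; otherwise the residual E(η) − E^(i) yields a candidate witness M.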

F. Parameters and implementation of Gilbert algorithm
Here we specify the explicit parameters that are used to obtain the lower bound K_D ≥ 1.5682. On the three-dimensional unit sphere we choose the vectors {⃗a_x}_{x=1}^n and {⃗b_y}_{y=1}^m to be equal to each other, ⃗v_i = ⃗a_i = ⃗b_i for i = 1, ..., n, where n = m = 70. The 70 unit vectors are chosen as the optimal packing configuration in Grassmannian space, which can be downloaded from Neil Sloane's database [58]. The advantage of this type of packing is that the points and their antipodal points are located as far apart as possible on the three-dimensional unit sphere.
We implemented the modified Gilbert algorithm (of Sec. III E) in Matlab, both with and without a memory buffer (see more details on the memory buffer in Ref. [39]). When the memory buffer is used, Step 3 of the algorithm is modified so that instead of calculating the convex combination of the points E^(i) and E^det,i_xy (see Sec. III E), we compute the convex combination of E^(i) and the points E^det,i−j_xy, j = 0, ..., s − 1, where s is the size of the memory buffer (not to be confused with the number of settings m). In our explicit computations we use a buffer size s = 40 and a stopping condition of k = 2 × 10^5 with η = 0.665. Details on the performance of this modification can be found in Ref. [39]. In Step 2 of the Gilbert algorithm, the oracle uses the see-saw heuristic described in Sec. III G to obtain a good (typically tight) lower bound on L_2(M). On the other hand, we used the branch-and-bound type algorithm described in Sec. III B to calculate L_2(M) exactly for integer M. That algorithm was implemented in Haskell; see the GitHub site [57] for the downloadable version.
The Matlab file eta_70.m, which can also be downloaded from GitHub [57] (located in the subdirectory L2_eta_70), gives detailed results on the input parameters. In particular, it gives the unit vectors ⃗v_i and the value L_2(M). The input matrix M is placed in the subdirectory L2_eta_70 under the name W70i.txt. The running time of the Gilbert algorithm (Sec. III E) implemented in Matlab was about one week. Note, however, that most of the computation time was spent in the oracle (the see-saw part) described in Sec. III G. On the other hand, the Haskell code computing the exact L_2(M) value of the 70 × 70 witness matrix M took about 8 hours on an HP Z8 workstation using 56 physical cores. The memory usage of the computation was negligible.
The Matlab routine eta_70.m defines the 70 × 70 matrix M and gives the unit vectors ⃗v_i := ⃗a_i = ⃗b_i from Sloane's database [58] for all i = 1, ..., 70. Note that M is integer (obtained by multiplying the output matrix M of the Gilbert algorithm by 1000 and truncating the non-integer part). This calculation yields S(M) = Σ_{x,y} M_{x,y} = 194369 and Q(M) = Σ_{x,y} M_{x,y} ⃗a_x · ⃗b_y ≃ 5.3672235 × 10^5. On the other hand, the branch-and-bound type Haskell code [57] gives the exact value L_2(M) = 412667, which is matched by the see-saw search (Sec. III G). From these numbers we then obtain the critical efficiency η = (L_2(M) − S(M))/(Q(M) − S(M)) ≃ 0.6377, and hence the bound K_D ≥ 1/η ≃ 1.5682.

G. Lower bound to L2(M) using the see-saw iterative algorithm

Below we give an iterative algorithm based on a see-saw heuristic to compute L_2(M). This algorithm forms the oracle part of Step 2 of the Gilbert algorithm described in Sec. III E.

Algorithm:
Input: Integer matrix M of size n × m.
8. Form the ±1-valued column vector a as follows: let a_x = +1 if s+_x ≥ s−_x, and a_x = −1 otherwise, for all x = 1, ..., n. Note that at each iteration step the objective value l_2 is guaranteed not to decrease. Therefore the output of the algorithm is a heuristic lower bound on the exact value of L_2(M).

IV. DISCUSSION
We have tested the quantumness of two-dimensional systems in the prepare-and-measure (PM) scenario, with n preparations and m binary-outcome measurement settings, where n and m can be as large as 70. In the one-qubit PM scenario a two-level system is transmitted from the sender to the receiver. In this setup, a real n × m matrix M defines the coefficients of a linear witness. We denote by L_2(M) the exact value of the one-bit bound associated with the matrix M, and we found efficient numerical algorithms for computing it. If this bound is exceeded, we can detect both the quantumness of the prepared qubits and the quantumness (i.e., incompatibility) of the measurements.
We introduced new constants K_M and K_D, which are related to the Grothendieck constant of order 3. Our large-scale tools are crucial for the efficient bounding of L_2(M), and hence of the constants K_M and K_D. We further related these new constants to the white noise resistance of the prepared qubits and to the critical detection efficiency of the measurements performed.
For large matrices M we have given two algorithms for computing L_2(M): a simple iterative see-saw type algorithm and a branch-and-bound type algorithm. The former is a heuristic that usually gives a tight lower bound on L_2(M); however, it sometimes fails to find the exact value, and this happens more often as the size of M grows. In contrast, the branch-and-bound type algorithm gives the exact value of L_2(M) and can be used for matrix sizes as large as 70 × 70. As an application of the algorithms, we established the bounds 1.5682 ≤ K_D ≤ 2 on the new constant and an upper bound of η_crit ≤ 0.6377 on the critical detection efficiency of qubit measurements in the PM scenario.
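The final numbers quoted above can be cross-checked with a few lines of arithmetic from the values S(M) = 194369, Q(M) ≃ 5.3672235 × 10^5, and L_2(M) = 412667 reported in Sec. III F (our own sketch; we assume the threshold form η = (L_2 − S)/(Q − S), which reproduces both reported digits):

```python
# Reported values for the 70 x 70 witness matrix M (Sec. III F).
S = 194369           # S(M)  = sum_xy M_xy
Q = 5.3672235e5      # Q(M)  = sum_xy M_xy a_x . b_y  (quantum value)
L2 = 412667          # exact one-bit bound from the branch-and-bound code

# Assumed threshold form: the quantum value at efficiency eta
# must exceed the one-bit bound L2.
eta = (L2 - S) / (Q - S)   # critical detection efficiency
K_D_lower = 1.0 / eta      # resulting lower bound on K_D
```

Rounding the upper bound up and the lower bound down gives η_crit ≤ 0.6377 and K_D ≥ 1.5682, matching the stated results.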

FIG. 1. The prepare-and-measure setup for (a) qubit communication and (b) a classical model using one bit of communication. In (a), upon receiving the input settings ⃗a and ⃗b, Alice sends to Bob a qubit in the quantum state ρ_⃗a. Then Bob performs a projective measurement M_{b|⃗b} = (1 + b ⃗b · ⃗σ)/2, where the two outcomes are labelled by b = ±1. As a result, the expectation value of Bob's ±1 outcome becomes E(⃗a, ⃗b) = ⃗a · ⃗b (see Eq. (5)). In (b), the classical one-bit Gisin-Gisin protocol [27] is as follows. The shared randomness ⃗λ is distributed between the two parties, where the unit vector ⃗λ ∈ S^2 is chosen uniformly at random from the sphere. After obtaining the settings ⃗a and ⃗b, Alice communicates to Bob the classical binary message c = sgn(⃗a · ⃗λ). Then Bob outputs b = sgn(c ⃗b · ⃗λ) with probability |⃗b · ⃗λ|, and b = 0 with probability 1 − |⃗b · ⃗λ|. Finally, Bob performs a coarse graining on his outputs by grouping b = 0 with b = +1 and identifying both of them with outcome b = +1. As a result, as shown in Sec. III D, the expectation value of Bob's b = ±1 outcome becomes E(⃗a, ⃗b) = (⃗a · ⃗b + 1)/2.
Here ⃗a ∈ S^2 denotes the preparation Bloch vector and ⃗b ∈ S^2 denotes the measurement Bloch vector. First we show that the outcomes b = ±1 giving the expectation value

E(⃗a, ⃗b) = P(b = +1|⃗a, ⃗b) − P(b = −1|⃗a, ⃗b) = ⃗a · ⃗b  (69)

can be obtained with probability 1/2, and the b = 0 outcome with probability 1/2. Then, coarse-graining the above distribution by grouping the b = 0 outcome with b = +1, we obtain the expectation value (68).

2. Choose a random assignment a_x = ±1 for x = 1, ..., n; that is, a is a vector of size n whose elements take the values +1 or −1 only.

3. Set b+ = sgn(a+ M), where a+ is the row vector with entries a+_x = 1 if a_x = +1 and 0 otherwise, and sgn denotes the (modified) sign function acting elementwise: sgn(x) = +1 if x ≥ 0 and −1 otherwise. Transpose b+ into a column vector.

4. Set b− = sgn(a− M), where a− has entries a−_x = 1 if a_x = −1 and 0 otherwise. Transpose b− into a column vector.

5. Form the column vector s+ = M b+ of size n.

6. Form the column vector s− = M b−, also of size n.

9. Let l_2 = Σ_{x=1}^n s_x, where s_x = max(s+_x, s−_x).

10. With the new vector a, return to Step 3. Repeat until the value of l_2 is the same in two consecutive iterations; then output l_2.
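Putting these steps together, a compact sketch of the see-saw oracle reads as follows (an illustrative Python version of our own with random restarts added for robustness; the paper's oracle is implemented in Matlab):

```python
from itertools import product
import random

def seesaw_L2(M, restarts=20, seed=0):
    """Heuristic lower bound on the one-bit bound L2(M): alternately
    optimize Bob's two vectors b+, b- for a fixed row partition a, then
    re-partition the rows given b+, b-.  The objective never decreases,
    so the final value is a lower bound on L2(M)."""
    rng = random.Random(seed)
    n, m = len(M), len(M[0])
    sgn = lambda v: 1 if v >= 0 else -1        # modified sign function
    best = float("-inf")
    for _ in range(restarts):
        a = [rng.choice((1, -1)) for _ in range(n)]               # step 2
        prev = None
        while True:
            # Steps 3-4: optimal b+/b- for the rows assigned +1 / -1.
            bp = [sgn(sum(M[x][y] for x in range(n) if a[x] == 1))
                  for y in range(m)]
            bm = [sgn(sum(M[x][y] for x in range(n) if a[x] == -1))
                  for y in range(m)]
            # Steps 5-6: row scores under each of Bob's vectors.
            sp = [sum(M[x][y] * bp[y] for y in range(m)) for x in range(n)]
            sm = [sum(M[x][y] * bm[y] for y in range(m)) for x in range(n)]
            a = [1 if sp[x] >= sm[x] else -1 for x in range(n)]   # step 8
            l2 = sum(max(sp[x], sm[x]) for x in range(n))         # step 9
            if l2 == prev:                                        # step 10
                break
            prev = l2
        best = max(best, prev)
    return best
```

On small matrices the see-saw value can be compared with the brute-force maximum over all partitions, illustrating that the heuristic typically, but not provably, attains L_2(M).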