Analysis and optimization of quantum adaptive measurement protocols with the framework of preparation games

A preparation game is a task whereby a player sequentially sends a number of quantum states to a referee, who probes each of them and announces the measurement result. Many experimental tasks in quantum information, such as entanglement quantification or magic state detection, can be cast as preparation games. In this paper, we introduce general methods to design n-round preparation games, with tight bounds on the performance achievable by players with arbitrarily constrained preparation devices. We illustrate our results by devising new adaptive measurement protocols for entanglement detection and quantification. Surprisingly, we find that the standard procedure in entanglement detection, namely, estimating n times the average value of a given entanglement witness, is in general suboptimal for detecting the entanglement of a specific quantum state. On the contrary, there exist n-round experimental scenarios where detecting the entanglement of a known state optimally requires adaptive measurement schemes.


I. INTRODUCTION
Certain tasks in quantum communication can only be conducted when all the parties involved share a quantum state with a specific property. For instance, two parties with access to a public communication channel must share an entangled quantum state in order to generate a secret key [1]. If the same two parties wished to carry out a qudit teleportation experiment, then they would need to share a quantum state with an entanglement fraction beyond 1/d [2]. More generally, when only restricted quantum operations are permitted, specific types of quantum states become instrumental for completing certain information processing tasks. This is usually formalized in terms of resource theories [3]. Some resources, like entanglement, constitute the basis of quantum communication. Others, such as magic states, are required to carry out quantum computations [4]. Certifying and quantifying the presence of resource states with a minimum number of experiments is the holy grail of entanglement [5] and magic state detection [4].
Beyond the problem of characterizing resourceful states mathematically, the experimental detection and quantification of resource states is further complicated by the lack of a general theory to devise efficient measurement protocols. Such protocols would allow one to decide, at minimum experimental cost, whether a source is capable of producing resourceful states. Developing such methods is particularly important for high-dimensional systems, where full tomography is infeasible, or in cases where the resource states to be detected are restricted to a small (convex) subset of the state space, which renders tomography excessive.
General results on the optimal discrimination between different sets of states in the asymptotic regime [6] suggest that the optimal measurement protocol usually involves collective measurements over many copies of the states of interest, and would thus require a quantum memory for its implementation. This contrasts with the measurement scenario encountered in many experimental setups: the lack of a quantum memory often forces an experimentalist to measure each of the prepared states as soon as they arrive at the lab. In this case it is natural to consider a setting where subsequent measurements can depend on previous measurement outcomes, in which case the experimentalist is said to follow an adaptive strategy. Perhaps due to its perceived complexity, the problem of identifying optimal adaptive measurement strategies has been largely overlooked in quantum information theory.
In this paper, we propose the framework of quantum preparation games to reason about the detection and quantification of resource states in this adaptive setting. These are games wherein a player attempts to prepare some resource, which the referee measures and subsequently assigns a score to. We prove a number of general results on preparation games, including the efficient computation of the maximum average score achievable by various types of state preparation strategies. Our results furthermore allow us to optimise over the most general measurement strategies one can follow with only a finite set of measurements, which we term Maxwell demon games. Due to limited computational resources, full optimisations over Maxwell demon games are restricted to scenarios with only n ≈ 3, 4 rounds. For higher round numbers, say, n ≈ 20, we propose a heuristic, based on coordinate descent, to carry these optimisations out approximately. More specifically, the outcome of the heuristic is (in general) a sub-optimal preparation game that nonetheless satisfies all the optimization constraints. In addition, we show how to devise arbitrarily complex preparation games through game composition, and yet another heuristic inspired by gradient descent. We illustrate all our techniques with examples from entanglement certification and quantification and highlight the benefit of adaptive measurement strategies in various ways. In this regard, in contradiction to standard practice in entanglement detection, we find that the optimal n-round measurement protocol to detect the entanglement of a single, known quantum state does not consist in estimating n times the value of a given (optimised) entanglement witness. On the contrary, there exist adaptive measurement schemes that supersede any non-adaptive protocol for this task.

II. QUANTUM PREPARATION GAMES FOR RESOURCE CERTIFICATION AND QUANTIFICATION
Consider the following task: a source is distributing multipartite quantum states, ρ_{1,...,m}, among a number of separate parties who wish to quantify how entangled those states are. To this effect, the parties sequentially probe a number of m-partite states prepared by the source. Depending on the results of each experiment, they decide how to probe the next state. After a fixed number of rounds, the parties estimate the entanglement of the probed states. They naturally seek an estimate that lower bounds the actual entanglement content of the states produced during the experiment with high probability. Most importantly, if the source is unable to produce entangled states, the protocol should certify this with high probability.
Experimental scenarios whereby a source (or player) sequentially prepares quantum states that are subject to adaptive measurements (by some party or set of parties that we collectively call the referee) are quite common in quantum information. Besides entanglement detection, they are also found in magic state certification [4], and, more generally, in the certification and quantification of any quantum state resource [3]. The common features of these apparently disparate quantum information processing tasks motivate the definition of quantum preparation games.

Number of Rounds: n.
Game Configuration: There is a unique initial game configuration, S_1 = {∅}. At every round k, there is a set of allowed configurations S_k = {s^k_1, s^k_2, ...}. After n rounds, the game ends in one of the final configurations s ∈ S_{n+1}.
Measurement Operators: For every game configuration s ∈ S_k, there is a POVM {M^{(k)}_{s'|s}}_{s'∈S_{k+1}}, whose outcomes label the possible game configurations of round k+1.
The game variables, i.e., round number, possible configurations, POVMs and scoring rule, are publicly announced before the game starts.

Measurement Round Rules
At the beginning of round k, the current game configuration s ∈ S_k is known to the player. The player prepares a state ρ_k according to their preparation strategy P, and sends it to the referee. The referee measures the quantum state ρ_k with the POVM {M^{(k)}_{s'|s}}_{s'∈S_{k+1}}. The referee publicly announces the outcome s' of this measurement, which becomes the game configuration for the next round.
Scoring: After the n-th round, the player receives a score g(s), where s ∈ S_{n+1} is the final configuration.
See also Fig. 1 for a pictorial representation of the procedure followed in round k.
A preparation game G is thus fully defined by the triple (S, M, g), where S denotes the sequence of game configuration sets (S_k)_{k=1}^{n+1}; and M, the set of POVMs M ≡ {M^{(k)}_{s'|s} : s ∈ S_k, s' ∈ S_{k+1}, k = 1, ..., n}. In principle, the Hilbert space where the state prepared in round k lives could depend on k and on the current game configuration s_k ∈ S_k. For simplicity, though, we will assume that all prepared states act on the same Hilbert space, H. In many practical situations, the actual, physical measurements conducted by the referee in round k will have outcomes in O, with |O| < |S_k|. The new game configuration s' ∈ S_{k+1} is thus decided by the referee through some non-deterministic function of the current game configuration s and the 'physical' measurement outcome o ∈ O. The definition of the game POVM {M^{(k)}_{s'|s}}_{s'} encompasses this classical processing of the physical measurement outcomes. The expected score of a player with preparation strategy P is

G(P) = Σ_{s∈S_{n+1}} p(s|P, G) g(s).    (1)

In the equation, p(s|P, G) denotes the probability that, conditioned on the player using a preparation strategy P in the game G, the final game configuration is s. For the sake of clarity, we will sometimes refer to the set of possible final configurations as S̄ instead of S_{n+1}.

FIG. 1. Quantum preparation game from the referee's perspective. In each round k of a preparation game, the referee (measurement box) receives a quantum state ρ_k from the player. The referee's measurement M^{(k)} will depend on the current game configuration s_k, which is determined by the measurement outcome of the previous round. In the same way, the outcome s_{k+1} of round k will determine the POVMs to be used in round k+1. Recall that the player can tailor the states ρ_k to the measurements to be performed in round k, since they have access to the (public) game configuration s_k, shown with the upward line leaving the measurement apparatus.

FIG. 2. Finitely correlated strategies. Suppose that a player owns a device which allows them to prepare and distribute a quantum state to the referee. Unfortunately, at each experimental preparation the player's device interacts with an environment A. Explicitly, if the player activates their device, then the referee receives the state tr_A[Σ_i K_i ρ K_i^†], where ρ is the current state of the environment and K_i : H_A → H_A ⊗ H are the Kraus operators which evolve the environment and prepare the state that the referee receives. Since the same environment is interacting with each prepared state, the states that the referee receives in different rounds are likely correlated.
In this paper we consider players who aim to maximise their expected score over all preparation strategies P that are accessible to them, in order to convince the referee of their ability to prepare a desired resource. Intuitively, a preparation strategy is the policy that a player follows to decide, in each round, which quantum state to prepare. Since the player has access to the referee's current game configuration, the player's state preparation can depend on this. The simplest preparation strategy, however, consists in preparing independent and identically distributed (i.i.d.) copies of the same state ρ. We call such preparation schemes i.i.d. strategies and denote them as ρ^⊗n. A natural extension of i.i.d. strategies, which we call finitely correlated strategies [7], follows when we consider interactions with an uncontrolled environment, see Figure 2. I.i.d. and finitely correlated strategies can be extended to scenarios where the preparation depends on the round number k. The mathematical study of these strategies is so similar to that of their round-independent counterparts that we will not consider such extensions in this article.
Instead, we will analyze more general scenarios, where the player is limited to preparing multipartite states belonging to a specific class C, e.g. separable states. In this case, given ρ, σ ∈ C ∩ B(H)^⊗k, a player can also generate the state pρ + (1 − p)σ for any p ∈ [0, 1], just by preparing ρ with probability p and σ otherwise. Thus, we can always assume C ∩ B(H)^⊗k to be convex for all k. The preparation strategies of such a player will be assumed fully general, e.g., the state preparation in round k can depend on k, or on the current game configuration s_k. We call such strategies C-constrained.
A. Computing the average score of a preparation game

Even for i.i.d. strategies, a brute-force computation of the average game score would require adding up a number of terms that is exponential in the number of rounds. In the following we introduce a method to efficiently compute the average game scores for various types of player strategies.
Let G = (S, M, g) be a preparation game with M ≡ {M^{(k)}_{s'|s} : s ∈ S_k, s' ∈ S_{k+1}, k = 1, ..., n}, and let C be a set of quantum states. In principle, a C-constrained player could exploit correlations between the states they prepare in different rounds to increase their average score when playing G. They could, for instance, prepare a bipartite state ρ_{12} ∈ C; send part 1 to the referee in round 1 and, depending on the referee's measurement outcome s_2, send part 2, perhaps after acting on it with a completely positive map depending on s_2. However, the player would be in exactly the same situation if, instead, they sent state ρ^1 = tr_2(ρ_{12}) in round 1 and state ρ^2_{s_2} ∝ tr_1[(M^{(1)}_{s_2|∅} ⊗ I_2) ρ_{12}] in round 2. There is a problem, though: the above is only a C-constrained preparation strategy provided that ρ^2_{s_2} ∈ C. This motivates us to adopt the following assumption.

Assumption 1. The set of (in principle, multipartite) states C is closed under arbitrary postselections with the class of measurements conducted by the referee.
This assumption holds for general measurements when C is the set of fully separable quantum states or the set of states with entanglement dimension [8] at most D (for any D > 1). It also holds when C is the set of non-magic states and the referee is limited to conducting convex combinations of sequential Pauli measurements [9]. More generally, the assumption is satisfied when, for some convex resource theory [3], C is the set of resource-free states and the measurements of the referee are resource-free. The assumption is furthermore met when the player does not have a quantum memory.
Under Assumption 1, the player's optimal C-constrained strategy consists in preparing a state ρ^k_{s_k} ∈ C in each round k, depending on k and the current game configuration s_k. Now, define µ^{(k)}_s as the maximum average score achieved by a player, conditioned on s being the configuration in round k. Then

µ^{(n+1)}_s = g(s),    µ^{(k)}_s = max_{ρ∈C∩B(H)} Σ_{s'∈S_{k+1}} tr(M^{(k)}_{s'|s} ρ) µ^{(k+1)}_{s'}.    (2)

These two relations allow us to inductively compute the maximum average score achievable via C-constrained strategies, µ_∅. Note that, if the optimizations above were carried out over a larger set of states C̃ ⊃ C, the end result would be an upper bound on the achievable maximum score. This feature will be handy when C is the set of separable states, since the latter is difficult to characterize exactly [10, 11]. In either case, the computational resources needed for the computation above scale with the total number of game configurations, Σ_{k=1}^{n} |S_k|: one optimization over C must be solved for each round k and each configuration s ∈ S_k. Equation (2) can also be used to compute the average score of an i.i.d. preparation strategy ρ^⊗n. In that case, C = {ρ}, and the maximization over C is trivial. Similarly, an adaptation of (2) allows us to efficiently compute the average score of finitely correlated strategies; for the details we refer to Appendix A.
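To make the backward recursion of eq. (2) concrete, the following minimal Python sketch evaluates it in the trivial case C = {ρ} mentioned above, i.e., for an i.i.d. strategy. The two-round game, its configurations, POVMs and scoring rule are hypothetical placeholders chosen purely for illustration.

import numpy as np

def average_score_iid(rho, povms, score, n):
    """
    Backward recursion of eq. (2) for an i.i.d. strategy (n copies of rho);
    the maximisation over C is trivial when C = {rho}.

    povms[k][s] is a dict {s_next: M} holding the POVM {M^{(k)}_{s'|s}}_{s'}
    used in round k (1-indexed) when the game configuration is s.
    score[s] is the score g(s) assigned to the final configuration s.
    """
    mu = dict(score)                      # round n+1: mu^{(n+1)}_s = g(s)
    for k in range(n, 0, -1):             # sweep the rounds backwards
        new_mu = {}
        for s, povm in povms[k].items():
            new_mu[s] = sum(np.real(np.trace(M @ rho)) * mu[s_next]
                            for s_next, M in povm.items())
        mu = new_mu
    return mu['init']                     # average score mu_emptyset

# Toy example (hypothetical 2-round game): the referee measures the projector
# onto |0><0| in every round and scores 1 only if both outcomes were "0".
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
povms = {1: {'init': {'0': P0, '1': P1}},
         2: {'0': {'00': P0, '01': P1}, '1': {'10': P0, '11': P1}}}
score = {'00': 1.0, '01': 0.0, '10': 0.0, '11': 0.0}
rho = np.array([[0.8, 0.0], [0.0, 0.2]])
print(average_score_iid(rho, povms, score, n=2))   # 0.8**2 = 0.64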

B. Optimizing preparation games
Various tasks in quantum information, including entanglement detection, have the following structure: given two sets of preparation strategies S, S̄ and a score function g, we want to find a game G = (S, M, g) that separates these two sets, i.e., a game such that G(P) ≤ δ for all P ∈ S, and G(P) > δ for all P ∈ S̄. In some cases, we are interested in searching for games where the POVMs conducted by the referee belong to a given (convex) class M̄. This class represents the experimental limitations affecting the referee, such as space-like separation or the unavailability of a given resource.
Finding a preparation game satisfying the above constraints can be regarded as an optimization problem over the set of quantum preparation games. Consider a set M̄ of adaptive measurement protocols of the form M = {M^{(k)}_{s'|s} : s ∈ S_k, s' ∈ S_{k+1}, k = 1, ..., n}, and sets of preparation strategies {S_i}_{i=1}^{r}. A general optimization over the set of quantum preparation games is a problem of the form

min_{M∈M̄, v∈R^r} f(v)
s.t. Av ≤ b,
     G_i(P) ≤ v_i, for all P ∈ S_i, i = 1, ..., r,    (3)

where G_i denotes the game (S, M, g_i) with score function g_i; A, b are a t × r matrix and a vector of length t, respectively; and f(v) is assumed to be convex on the vector v ∈ R^r.
In this paper, we consider i.i.d., finitely correlated (with known or unknown environment state) and C-constrained preparation strategies. The latter class also covers scenarios where a player wishes to play an i.i.d. strategy with an imperfect preparation device. Calling ρ the ideally prepared state, one can model this contingency by assuming that, at every use, the preparation device (adversarially) produces a quantum state ρ' such that ‖ρ' − ρ‖_1 ≤ ε. If, independently of the exact states prepared by the noisy or malfunctioning device, we wish the average score g_i to lie below some value v_i, then the corresponding constraint is

G_i(P) ≤ v_i, for all P ∈ E_ε,    (4)

where E_ε is the set of ε-constrained preparation strategies, producing states in {ρ' : ρ' ≥ 0, tr(ρ') = 1, ‖ρ' − ρ‖_1 ≤ ε}.
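As a sanity check of a one-round instance of a constraint like (4), the sketch below computes the worst-case average score over the ε-ball of states around ρ for a fixed scoring operator. The scoring operator W, the target state and ε are illustrative placeholders, and the script assumes the cvxpy modelling package with an installed semidefinite solver.

import numpy as np
import cvxpy as cp

# Worst-case one-round score over the epsilon-ball of states around rho,
# i.e. max tr(W rho') over {rho' >= 0, tr(rho') = 1, ||rho' - rho||_1 <= eps}.
# W plays the role of a (hypothetical) scoring operator, e.g. a game POVM element.
def worst_case_score(W, rho, eps):
    d = rho.shape[0]
    rho_p = cp.Variable((d, d), hermitian=True)
    constraints = [rho_p >> 0,
                   cp.trace(rho_p) == 1,
                   cp.normNuc(rho_p - rho) <= eps]   # trace norm = nuclear norm
    objective = cp.Maximize(cp.real(cp.trace(W @ rho_p)))
    cp.Problem(objective, constraints).solve()
    return objective.value

# Example: target state |0><0| and a diagonal scoring operator, eps = 0.1.
rho = np.array([[1., 0.], [0., 0.]], dtype=complex)
W = np.array([[0.3, 0.], [0., 0.9]], dtype=complex)
print(worst_case_score(W, rho, 0.1))   # 0.33 = 0.3 + 0.6*(eps/2) > tr(W rho) = 0.3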
The main technical difficulty in solving problem (3) lies in expressing conditions of the form

G(P) ≤ v, for all P ∈ S,    (5)

in a convex (and tractable) way. This will, in turn, depend on which type of measurement protocols we wish to optimize over. We consider families of measurement strategies M̄ such that the matrix

M_{s_2,...,s_{n+1}} ≡ M^{(1)}_{s_2|s_1} ⊗ M^{(2)}_{s_3|s_2} ⊗ ... ⊗ M^{(n)}_{s_{n+1}|s_n}    (6)

depends affinely on the optimization variables of the problem. For S = {P}, condition (5) then amounts to enforcing an affine constraint on the optimization variables defining the referee's measurement strategy. For finitely correlated strategies, we describe in Appendix A how to phrase (5) as a convex constraint.
For C-constrained strategies, the way to express (5) as a convex constraint depends more intricately on the class of measurements we aim to optimize over. Let us first consider preparation games with n = 1 round, where we allow the referee to conduct any |S̄|-outcome measurement from the convex set M̄. Let S represent the set of all C-constrained preparation strategies, for some convex set of states C. Then, condition (5) is equivalent to

v·I − Σ_{s∈S̄} g(s) M^{(1)}_{s|∅} ∈ C*,    (7)

where C* ≡ {W : tr(Wρ) ≥ 0 for all ρ ∈ C} denotes the dual cone of C. Note that, if we replace C* in (7) by a subset thereof, relation (5) is still implied. In that case, however, there may be values of v for which relation (5) holds, but not eq. (7). As we will see later, this observation allows us to devise sound entanglement detection protocols, in spite of the fact that the dual of the set of separable states is difficult to pin down [10, 11].
Next, we consider a particularly important family of multi-round measurement schemes, which we call Maxwell demon games. In a Maxwell demon game, the referee's physical measurements in each round k are taken from a discrete set M(k). Namely, for each k, there exist sets of natural numbers A_k, X_k and fixed POVMs {(N^x_a)_{a∈A_k} : x ∈ X_k}. The configuration space at stage k corresponds to the complete history of physical inputs x_1, ..., x_{k−1} and outputs a_1, ..., a_{k−1}, i.e., s_k = (a_1, x_1, ..., a_{k−1}, x_{k−1}), where s_1 = ∅. Note that the cardinality of S_k grows exponentially with k. In order to decide which physical setting x_k must be measured in round k, the referee receives advice from a Maxwell demon. The demon, who holds an arbitrarily high computational power and recalls the whole history of inputs and outputs, samples x_k from a distribution P_k(x_k|s_k). The final score of the game γ ∈ G is also chosen by the demon, through the distribution P(γ|s_{n+1}). A Maxwell demon game is the most general preparation game that a referee can run, under the reasonable assumption that the set of experimentally available measurement settings is finite.
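The sketch below illustrates, for a hypothetical single-qubit example, how the demon's policy P_k(x_k|s_k) together with the fixed physical POVMs {N^x_a} induces the round-k game POVM, with the new configuration obtained by appending (a_k, x_k) to the history. The two settings and the placeholder policy are assumptions made purely for illustration.

import numpy as np

# Fixed physical POVMs {N^x_a}: the two hypothetical settings are the
# Pauli-Z and Pauli-X eigenprojectors on a single qubit.
proj = lambda v: np.outer(v, v.conj())
N = {0: [proj(np.array([1., 0.])), proj(np.array([0., 1.]))],        # x = 0: Z
     1: [proj(np.array([1., 1.]) / np.sqrt(2)),
         proj(np.array([1., -1.]) / np.sqrt(2))]}                    # x = 1: X

def demon_policy(s):
    """Placeholder demon: measure X after outcome a = 0, otherwise Z."""
    if not s:                         # s_1 = (): uniform choice of setting
        return {0: 0.5, 1: 0.5}
    last_a, _ = s[-1]
    return {1: 1.0} if last_a == 0 else {0: 1.0}

def round_povm(s):
    """Game POVM {M^{(k)}_{s'|s}}: s' appends (a_k, x_k) to the history s."""
    elements = {}
    for x, p in demon_policy(s).items():
        for a, Nxa in enumerate(N[x]):
            elements[s + ((a, x),)] = p * Nxa
    return elements

# Configurations after one round are all histories (a_1, x_1); their number,
# and hence |S_k|, grows exponentially with the round index k.
print(list(round_povm(()).keys()))
print(np.allclose(sum(round_povm(()).values()), np.eye(2)))   # completeness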
Finally, we consider the set of adaptive measurement schemes with fixed POVM elements {M^{(j)}_{s'|s} : j ≠ k} and variable {M^{(k)}_{s'|s}} ⊂ M̄, for some tractable convex set of measurements M̄. As in the two previous cases, the matrix (6) is linear in the optimization variables {M^{(k)}_{s'|s}}, so (5) can be expressed in a tractable, convex form for sets of finitely many strategies and for finitely correlated strategies with unknown environment. Similarly to the case of Maxwell demon games, enforcing (5) for C-constrained strategies requires promoting {µ^{(j)}_s : j ≤ k} to optimization variables (see Appendix).
Via coordinate descent, this observation allows us to conduct optimizations (3) over the set of all adaptive schemes with a fixed game configuration structure (S_j)_{j=1}^{n+1}. Consider, indeed, the following method.

Box 2: A heuristic for general optimizations over preparation games
1. Starting point: a natural number L, an optimization problem of the form (3), a sequence of sets of game configurations S = (S_j)_{j=1}^{n+1}, and a measurement scheme M = {M^{(j)}_{s_{j+1}|s_j} : s_j ∈ S_j, s_{j+1} ∈ S_{j+1}, j = 1, ..., n}.
2. Set the iteration counter l ← 0.
3. Choose an index k ∈ {1, ..., n} and, using the techniques explained in Appendix G (eq. G2), minimize the objective value of (3) over measurement schemes M' with M'^{(j)}_{s_{j+1}|s_j} = M^{(j)}_{s_{j+1}|s_j}, for all s_j ∈ S_j, s_{j+1} ∈ S_{j+1}, j ≠ k, subject to the optimization constraints. Call f' the objective value of the optimal measurement scheme M'.
4. M ← M', l ← l + 1. If l ≥ L, return M and f' and stop. Otherwise, go to step 3.

With this algorithm, at each iteration, the objective value f(v) in problem (3) can either decrease or stay the same. The hope is that it returns a small enough value f' after a moderate number L of iterations. In Appendix G the reader can find a successful application of this heuristic to devise 20-round quantum preparation games.
The main drawback of this algorithm is that it is very sensitive to the initial choice of POVMs, so it generally requires several random initializations to achieve a reasonably good value of the objective function. It is therefore suitable for optimizations of n ≈ 50 round measurement schemes. Optimizations over, say, n = 1000 round games risk getting stuck in a bad local minimum.
To address this issue, we provide two additional methods for the design of large-n quantum preparation games below.

C. Large-round preparation games from composition
The simplest way to construct preparation games with arbitrary round number consists in playing several preparation games, one after another. Consider thus a game where, in each round and depending on the current game configuration, the referee chooses a preparation game. Depending on the outcome, the referee changes the game configuration and plays a different preparation game with the player in the next round. We call such a game a meta-preparation game. Similarly, one can define meta-meta-preparation games, where, in each round, the referee and the player engage in a meta-preparation game. This recursive construction can be repeated indefinitely.
In Appendix C we show that the maximum average score of a (meta)^j-game, which refers to a game at level j of the above recursive construction, can be computed inductively, through a formula akin to eq. (2). Moreover, in the particular case that the preparation games that make up the (meta)^j-game have {0, 1} scores, one only needs to know their minimum and maximum scores to compute the (meta)^j-game's maximum average score.
For simple meta-games such as "play m times the {0, 1}-scored preparation game G, count the number of wins and output 1 (0) if it is greater than or equal to (smaller than) a threshold v", which we denote G^{(m)}_v, we find that the optimal meta-strategy for the player is to always play G optimally, thus recovering

p(G, v, m) = Σ_{j=v}^{m} (m choose j) G(P*)^j (1 − G(P*))^{m−j}, where P* = argmax_{P∈S} G(P),    (10)

from [13]. p(G, v, m) can be interpreted as a p-value for C-constrained strategies, as it measures the probability of obtaining a result at least as extreme as the observed data v under the hypothesis that the player's strategies are constrained to belong to S.
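Following eq. (10), a minimal numerical sketch: given the optimal single-game winning probability of a constrained player, the acceptance probability of the composed game G^{(m)}_v is a binomial tail. The numbers used below (p* = 0.3, m = 1000, v = 400) are arbitrary placeholders.

from scipy.stats import binom

def meta_game_pvalue(p_star, v, m):
    """
    Maximum winning probability of the composed game G^{(m)}_v for a player
    whose per-repetition winning probability is at most p_star
    (eq. (10): an optimal C-constrained player plays G optimally each time).
    """
    return binom.sf(v - 1, m, p_star)      # P[#wins >= v]

# Example: per-round bound p* = 0.3, m = 1000 repetitions, threshold v = 400.
print(meta_game_pvalue(0.3, 400, 1000))    # tiny tail probability, i.e. a p-value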

D. Devising large-round preparation games based on gradient descent
A more sophisticated alternative to devise many-round quantum preparation games exploits the principles behind Variational Quantum Algorithms [14]. These are used to optimize the parameters of a quantum circuit by following the gradient of an operator average. Similarly, we propose a gradient-based method to identify the optimal linear witness for detecting certain quantum states. Since the resulting measurement scheme is adaptive, the techniques developed so far are crucial for studying its vulnerability with respect to an adaptive preparation attack.
Consider a set of i.i.d. preparation strategies E = {ρ^⊗n : ρ ∈ E}, and let {W(θ) : ‖W(θ)‖ ≤ 1, θ ∈ R^m} ⊂ B(H) be a parametric family of operators such that ‖∂W(θ)/∂θ_x‖ ≤ K, for x = 1, ..., m. Given a function f : R^{m+1} → R, we wish to devise a preparation game that, ideally, assigns to each strategy ρ^⊗n ∈ E an average score of

f(θ_ρ, tr[W(θ_ρ)ρ]),    (11)

with

θ_ρ = argmax_{θ∈R^m} tr[W(θ)ρ].    (12)

Intuitively, W(θ_ρ) represents the optimal witness to detect some property of ρ, and both the average value of W(θ_ρ) and the value of θ_ρ hold information regarding the use of ρ as a resource.
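To illustrate the role of eq. (12), the following sketch finds θ_ρ by plain gradient ascent on tr[W(θ)ρ] for a toy single-qubit, single-parameter family. The family W(θ) and the state are invented solely for this example and are not the witnesses used later in the paper.

import numpy as np

# Toy single-qubit family W(theta) = cos(theta) Z + sin(theta) X, ||W|| <= 1,
# used only to illustrate eq. (12): theta_rho = argmax_theta tr[W(theta) rho].
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
W = lambda theta: np.cos(theta) * Z + np.sin(theta) * X

def theta_rho(rho, lr=0.1, steps=500):
    """Plain gradient ascent on f(theta) = tr[W(theta) rho]."""
    theta, h = 0.0, 1e-5
    for _ in range(steps):
        grad = (np.trace(W(theta + h) @ rho)
                - np.trace(W(theta - h) @ rho)).real / (2 * h)
        theta += lr * grad
    return theta

# Example: rho = |+><+| is best detected by W(pi/2) = X.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(theta_rho(plus))                     # close to pi/2 ~ 1.571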
Next, we detail a simple heuristic to devise preparation games G whose average score approximately satisfies eq. (11). If, in addition, f(θ_ρ, tr[W(θ_ρ)ρ]) ≤ δ for all ρ ∈ C, then one would expect that G(P) ≲ δ for all C-constrained strategies P ∈ S.
1. The possible game configurations are vectors from the set S_k = {−(k − 1), ..., k − 1}^{m+1}, for k = 1, ..., n. Given s_k ∈ S_k, we will denote by s̃_k the vector that results when we erase the first entry of s_k.

The final score of the game is
More sophisticated variants of this game can, for instance, let ε depend on k, or take POVMs with more than two outcomes into account. It is worth remarking that, for fixed m, the number of possible game configurations scales with the total number of rounds n as O(n^{m+1}).
If the player uses an i.i.d. strategy, then the sequence of values (θ_k)_k reflects the effect of applying stochastic gradient descent [15] to solve the optimization problem (12). Hence, for the i.i.d. strategy ρ^⊗n and n ≫ 1, one would expect the sequence of values (θ_k)_k to converge to θ_ρ, barring local maxima. In that case, the average score of the game will be close to (11) with high probability. For moderate values of n, however, it is difficult to anticipate the average game scores for strategies in E and S, so that a detailed analysis with the procedure from eq. (2) becomes necessary (see the applications below for an example).

III. ENTANGLEMENT CERTIFICATION AS A PREPARATION GAME
A paradigmatic example of a preparation game is entanglement detection. In this game, the player is an untrusted source of quantum states, while the role of the referee is played by one or more separate parties who receive the states prepared by the source. The goal of the referee is to make sure that the source has indeed the capacity to distribute entangled states. The final score of the entanglement detection preparation game is either 1 (certified entanglement) or 0 (no entanglement certified), that is, g : S̄ → {0, 1}. In this case, one can identify the final game configuration with the game score, i.e., one can take S̄ = {0, 1}. The average game score is then equivalent to the probability that the referee certifies that the source can distribute entangled states.
Consider then a player who is limited to preparing separable states, i.e., a player for whom C corresponds to the set of fully separable states. Call the set of preparation strategies available to such a player S. Ideally, we wish to implement a preparation game such that the average game score of a player using strategies from S (i.e., the probability that the referee incorrectly labels the source as entangled) is below some fixed value e_I. In hypothesis testing, this quantity is known as the type-I error. At the same time, if the player follows a class E of preparation strategies (involving the preparation of entangled states), the probability that the referee incorrectly labels the source as separable should be upper bounded by e_II. This latter quantity is called the type-II error. In summary, we wish to identify a game G such that p(1|P) ≤ e_I for all P ∈ S, and p(0|P) ≤ e_II for all P ∈ E.
In the following, we consider three types of referees, with access to the following sets of measurements:
1. Global measurements: M_1 denotes the set of all bipartite POVMs.
2. 1-way Local Pauli measurements and Classical Communication (LPCC): M_2 is the set of POVMs conducted by two parties, Alice and Bob, on individual subsystems, where Alice may perform a Pauli measurement first and then, depending on her inputs and outputs, Bob chooses a Pauli measurement as well. The final outcome is a function of both inputs and outcomes.
3. Local Pauli measurements: M_3 contains all POVMs where Alice and Bob perform Pauli measurements x, y on their subsystems, obtaining results a, b, respectively. The overall output is γ = f(a, b, x, y), where f is a (non-deterministic) function.

A. Few-round protocols for entanglement detection
We first consider entanglement detection protocols with just a single round (n = 1). Let E = {ρ_1, ..., ρ_{r−1}} be a set of r − 1 bipartite entangled states. Our objective is to minimise the type-II error, given a bound e_I on the acceptable type-I error. To express this optimization problem as in (3), we define S_i ≡ {ρ_i}, for i = 1, ..., r − 1, and S_r ≡ S, the set of separable strategies. In addition, we take f(v) = v_1 and choose A, b so that v_r = e_I, v_1 = ... = v_{r−1}. Finally, we consider complementary score functions g, ḡ : S̄ → {0, 1} and assign the scores g_i = g for i = 1, ..., r − 1, and g_r = ḡ. All in all, the problem to solve is

min_{(M_{s|∅})_s ∈ M̄} e_II
s.t. tr(M_{0|∅} ρ_i) ≤ e_II, for i = 1, ..., r − 1,
     e_I·I − M_{1|∅} ∈ C*.    (15)

To optimize over the dual C* of the set of separable states, as required in (15), we invoke the Doherty-Parrilo-Spedalieri (DPS) hierarchy [16, 17]. As shown in Appendix D, the dual of this hierarchy approximates the set of all entanglement witnesses from the inside and converges as n → ∞. In the case of two qubits the DPS hierarchy already converges at the first level. Hence, the particularly simple ansatz

e_I·I − M_{1|∅} = V_0 + V_1^{T_B},    (16)

where V_0, V_1 ≥ 0 and T_B is the partial transpose over the second subsystem, already leads us to derive tight bounds on the possible e_II, given e_I and the class of measurements available to the referee. For larger-dimensional systems, enforcing condition (16) instead of the second constraint in (15) results in a sound but perhaps suboptimal protocol (namely, a protocol not necessarily minimizing e_II). Nevertheless, increasing the level of the dual DPS hierarchy generates a sequence of increasingly better (and sound) protocols whose type-II error converges to the minimum possible value asymptotically. Eq. (15) requires us to enforce the constraint (M_{s|∅})_s ∈ M̄. For M̄ = M_1, this amounts to demanding that the matrices (M^{(1)}_{s|∅})_s are positive semidefinite and add up to the identity. In that case, problem (15) can be cast as a semidefinite program (SDP) [18].
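As a rough illustration of how (15) becomes an SDP for global measurements (M̄ = M_1), a single target state (r = 2) and the ansatz (16), consider the following sketch. It uses the cvxpy modelling package (assuming a version that provides partial_transpose) and whatever SDP solver is installed; the example state and the value e_I = 0.3 are placeholders.

import numpy as np
import cvxpy as cp

# Sketch of problem (15) for one two-qubit target state and global measurements,
# using the decomposable-witness ansatz (16), which is exact for two qubits.
def min_type_II(rho, e_I):
    d = 4
    M1 = cp.Variable((d, d), hermitian=True)     # outcome "entanglement certified"
    M0 = cp.Variable((d, d), hermitian=True)     # complementary outcome
    V0 = cp.Variable((d, d), hermitian=True)
    V1 = cp.Variable((d, d), hermitian=True)
    constraints = [
        M0 >> 0, M1 >> 0, M0 + M1 == np.eye(d),  # (M_{s|0})_s is a global POVM
        V0 >> 0, V1 >> 0,
        # Ansatz (16): e_I*I - M1 is a decomposable witness, hence
        # tr[(e_I*I - M1) sigma] >= 0 for every separable sigma.
        e_I * np.eye(d) - M1 == V0 + cp.partial_transpose(V1, [2, 2], axis=1),
    ]
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(M0 @ rho))), constraints)
    prob.solve()
    return prob.value

# Example: maximally entangled two-qubit state and a type-I bound e_I = 0.3.
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())
print(min_type_II(rho, 0.3))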
For the cases M̄ = M_2, M_3, denote Alice's and Bob's choices of Pauli measurements by x and y, with outcomes a, b, respectively, and call γ ∈ {0, 1} the outcome of the 1-way LPCC measurement. Then we can express Alice and Bob's effective POVM as

M_{γ|∅} = Σ_{x,y,a,b} P(x, y, γ|a, b) N^x_a ⊗ N^y_b,    (17)

where {N^x_a}_a ({N^y_b}_b) denotes the POVM of Alice's (Bob's) Pauli measurement x (y), and the distribution P(x, y, γ|a, b) is meant to model Alice and Bob's classical processing of the outcomes they receive. For M̄ = M_2, P(x, y, γ|a, b) must satisfy the conditions Σ_{y,γ} P(x, y, γ|a, b) = P(x) and

Σ_{γ} P(x, y, γ|a, b) = P(x, y|a),    (18)

i.e., Alice's setting cannot depend on the outcomes a, b, and Bob's setting can depend on x and a, but not on b; whereas, for M̄ = M_3, P(x, y, γ|a, b) satisfies

Σ_{γ} P(x, y, γ|a, b) = P(x, y).    (19)

For M̄ = M_2, M_3, enforcing the constraint (M_{s|∅})_s ∈ M̄ thus requires imposing a few linear constraints on the optimization variables P(x, y, γ|a, b). For these cases, problem (15) can therefore be cast as an SDP as well.
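The construction in eq. (17) can be sanity-checked numerically. The sketch below builds a two-outcome effective POVM from local Pauli eigenprojectors and a placeholder processing distribution P(x, y, γ|a, b) (here: both parties measure Z and output γ = 1 iff their outcomes agree, one fixed strategy from the class M_3), and verifies that the resulting operators form a POVM.

import numpy as np
from itertools import product

# Pauli eigenprojectors N^x_a on one qubit (x = 0, 1, 2 for Z, X, Y; a = 0, 1).
paulis = [np.diag([1., -1.]),
          np.array([[0., 1.], [1., 0.]]),
          np.array([[0., -1j], [1j, 0.]])]
def N(x, a):
    vals, vecs = np.linalg.eigh(paulis[x])
    v = vecs[:, 1 - a]                      # a = 0 -> +1 eigenvector
    return np.outer(v, v.conj())

def effective_povm(P):
    """
    Build the two-outcome effective POVM M_{gamma|empty} of eq. (17) from local
    Pauli measurements and a classical processing distribution P(x, y, gamma | a, b),
    given as a dict {(x, y, gamma, a, b): probability}.
    """
    M = {0: np.zeros((4, 4), dtype=complex), 1: np.zeros((4, 4), dtype=complex)}
    for (x, y, g, a, b), p in P.items():
        M[g] += p * np.kron(N(x, a), N(y, b))
    return M

# Placeholder processing: Alice and Bob both measure Z (x = y = 0) and output
# gamma = 1 iff their outcomes agree.
P = {(0, 0, int(a == b), a, b): 1.0 for a, b in product(range(2), repeat=2)}
M = effective_povm(P)
print(np.allclose(M[0] + M[1], np.eye(4)))   # completeness of the effective POVM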
In Figure 3, we compare the optimal error trade-offs for M̄ = M_1, M_2, M_3 and further generalise this to scenarios where, e.g. due to experimental errors, the device preparing the target state ρ is actually distributing states ε-close to ρ in trace norm. The corresponding numerical optimisations, as well as any other convex optimization problem solved in this paper, were carried out using the semidefinite programming solver MOSEK [19], in combination with the optimization packages YALMIP [20] or CVX [21]. We provide an example of a MATLAB implementation of these optimisations at [22].
We next consider the problem of finding the best strategy for M̄ = M_2, M_3 for n-round entanglement detection protocols. In this scenario, our general results for Maxwell demon games are not directly applicable. The reason is that, although both Alice and Bob are just allowed to conduct a finite set of physical measurements (namely, the three Pauli matrices), the set of effective local or LPCC measurements which they can enforce in each game round is not discrete. Nonetheless, a simple modification of the techniques developed for Maxwell demon games suffices to make the optimizations tractable. For this, we model Alice's and Bob's setting choices (x_i)_i, (y_i)_i and the final score γ of the game, depending on their respective outcomes (a_i)_i, (b_i)_i, through conditional distributions. Depending on whether the measurements in each round are taken from M_2 or M_3, this distribution will obey different sets of linear constraints. For the explicit reformulation of problem (3) as an SDP in this setting, we refer to Appendix E. Solving this optimization problem, we find the optimal multi-round error trade-offs for two-qubit entanglement detection in scenarios where the POVMs considered within each round are either in the set M_2 (LPCC) or M_3 (local Pauli measurements), see Figure 4.

Now let us consider in more detail the scenario from above where within each round a measurement from class M_3 is applied. Does the adaptability of the choice of POVM between the rounds in a Maxwell demon game actually improve the error trade-offs? Specifically, we aim to compare the case where the referee has to choose a POVM from M_3 for each round of the game beforehand to the case where they can choose each POVM from M_3 on the fly, based on their previous inputs and outputs. The answer to this question is intuitively clear when we consider a set E of more than one state, since then we can conceive a strategy where in the first round we perform a measurement that allows us to get an idea which of the states in E we are likely dealing with, while in the second round we can then use the optimal witness for that state. However, more surprisingly, we find that this can also make a difference for a single state E = {|ψ⟩⟨ψ|}. For instance, for the state |ψ⟩ we find that, in two-round games, the minimum value of e_I + e_II equals 0.7979 with adaptation between rounds and 0.8006 without adaptation (see [23] for a statistical interpretation of the quantity e_I + e_II).

FIG. 5. Composed games obtained through 30 independent repetitions of optimal one-shot games G restricted to measurements in M_2 (yellow, purple). These are compared to the optimal 3-round adaptive protocols with measurements in M_2 performed in each of the three rounds, independently repeated 10 times (blue). The 1- and 3-shot games are also displayed in Figure 4. We observe that the repetition of the adaptive protocol outperforms the others in the regime of low e_I + e_II.
This result may strike the reader as surprising: on first impulse, one would imagine that the best protocol to detect the entanglement of two preparations of a known quantum state ρ entails testing the same entanglement witness twice. A possible explanation for this counter-intuitive phenomenon is that preparations in E and S are somehow correlated: either both preparations correspond to ρ or both preparations correspond to a separable state. From this point of view, it is not far-fetched that an adaptive measurement strategy can exploit such correlations.
Our framework also naturally allows for the optimisation over protocols with e_II = 0, where the corresponding e_I error is minimised, thus generalising previous work on detecting entanglement in few experimental rounds [24, 25]. Using the dual of the DPS hierarchy for full separability [26], we can furthermore derive upper bounds on the errors for states shared between more than two parties. Similarly, a hierarchy for detecting high-dimensional entangled states [27] allows us to derive protocols for the detection of high-dimensional entangled states using quantum preparation games in [28].
Due to the exponential growth of the configuration space, optimisations over Maxwell demon adaptive measurement schemes are hard to conduct even for relatively low values of n. Devising entanglement detection protocols for n ≫ 1 requires completely different techniques.

B. Many-round protocols for entanglement detection
In order to devise many-round preparation games, an alternative to carrying out full optimizations is to rely on game composition. In this regard, in Figure 5 we compare 10 independent repetitions of a 3-round adaptive strategy to 30 independent repetitions of a 1-shot protocol, based on (10). This way of composing preparation games can easily be performed with more repetitions. Indeed, for m = 1000 repetitions we find preparation games with errors of the order of 10^{−14}. In the asymptotic regime, the binomial distribution of the number of 1-outcomes for a player restricted to separable strategies (see eq. (10)) can be approximated by a normal distribution. Finally, we apply gradient descent as a guiding principle to devise many-round protocols for entanglement quantification. For experimental convenience, the preparation game we develop is implementable with 1-way LOCC measurements.
We wish our protocol to be sound for i.i.d. strategies in E = {ρ^⊗n : ρ ∈ E}, with E being the set of all states |ψ_θ⟩⟨ψ_θ| for θ ∈ (0, π/2). For such states, the protocol should output a reasonably good estimate of |ψ_θ⟩'s entanglement entropy, S(|ψ_θ⟩) = h(cos²(θ)), with h(x) = −x log(x) − (1 − x) log(1 − x) the binary entropy. Importantly, if the player is limited to preparing separable states, the average score of the game should be low. Following eq. (11), we introduce a parametric family of operators W(θ). This operator satisfies ‖W(θ)‖ ≤ 1, and |ψ_θ⟩ is the only eigenvector of W(θ) with eigenvalue 1. W(θ) can be estimated via 1-way LOCC with the POVM M^0. Furthermore, we consider a dichotomic observable that can be estimated via eq. (13) with a 1-way LOCC POVM. The score of the game is set according to a threshold λ·1 + (1 − λ)·δ(θ_n) on the final estimate v of tr[W(θ_n)ρ], with 0 ≤ λ ≤ 1 and δ(θ) = max_{ρ∈C} tr[W(θ)ρ]. This captures the following intuition: if the estimate v of tr[W(θ_n)ρ] is below a convex combination of the maximum value achievable (namely, ⟨ψ_θ|W(θ_n = θ)|ψ_θ⟩ = 1) and the maximum value δ(θ_n) achievable by separable states, then the state shall be regarded as separable and thus the game score is set to zero. In Figure 6, we illustrate how this game performs.

IV. CONCLUSION
We have introduced quantum preparation games as a convenient framework to analyze the certification and quantification of resources. We derived general methods to compute the (maximum) average score of arbitrary preparation games under different restrictions on the preparation devices: this allowed us to prove the soundness or security of general certification protocols. Regarding the generation of such protocols, we explained how to conduct exact (approximate) optimizations over preparation games with a low (moderate) number of rounds. In addition, we introduced two methods to devise large-round preparation games, via game composition and through gradient descent methods. These general results were applied to devise novel protocols for entanglement detection and quantification. To our knowledge, these are the first non-trivial adaptive protocols ever proposed for this task. In addition, we discovered that, against the common practice in entanglement detection, entanglement certification protocols for a known quantum state can often be improved using adaptive measurement strategies.
Even though we illustrated our general findings on quantum preparation games with examples from entanglement theory, where the need for efficient protocols is imminent, we have no doubt that our results will find application in other resource theories. With the current push towards building a quantum computer, a second use of our results that should be particularly emphasized is the certification of magic states. More generally, developing applications of our work to various resource theories, including for instance the quantification of non-locality, is an interesting direction for future work.
Another compelling line of research consists in studying the average performance of preparation games where Assumption 1 does not hold. In those games, a player can exploit the action of the referee's measurement device to generate states outside the class allowed by their preparation device. Such games naturally arise when the player is limited to preparing resource-free states for some resource theory, but the referee is allowed to conduct resourceful measurements. An obvious motivating example of these games is the detection of magic states via general POVMs.
Finally, it would be interesting to explore an extension of preparation games where the referee is allowed to make the received states interact with a quantum system of fixed dimension in each round. This scenario perfectly models the computational power of a Noisy Intermediate-Scale Quantum (NISQ) device. In view of recent achievements in experimental quantum computing, this class of games is expected to become more and more popular in quantum information theory.


FIG. 3. 1-shot entanglement certification for |φ⟩ = (|00⟩ + |1+⟩)/√2. The referee has access to measurement strategies from the sets M_1 (blue), M_2 (red), M_3 (yellow). We display the minimal e_II for fixed e_I. As each game corresponds to a hypothesis test, the most reasonable figure of merit is to quantify the type-I and type-II errors (e_I, e_II) a referee could achieve. These error pairs lie above the respective curves in the plots; any error pair below is not achievable with the resources at hand. Our optimisation also provides us with an explicit POVM, i.e., a measurement protocol, that achieves the optimal error pairs. (a) Entanglement detection for exact state preparation. The minimal total errors for |φ⟩ are e_I + e_II = 0.6464 with M_1, e_I + e_II = 0.8152 with M_2, and e_I + e_II = 0.8153 with M_3. For most randomly sampled states, these errors are much larger. We remark that there are also states, such as the singlet, where M_2 and M_3 lead to identical optimal errors. (b) Entanglement detection for noisy state preparation. To enforce that all states ε-close to ρ = |ψ⟩⟨ψ| remain undetected with probability at most e_II, we need to invoke eq. (7), with C = {ρ' : ρ' ≥ 0, tr(ρ') = 1, ‖ρ' − ρ‖_1 ≤ ε}. In Appendix F we show how to derive the dual to this set. The plot displays the ε = 0.1 case.

FIG. 4. Maxwell demon games played for various numbers of rounds. The referee has access to measurement strategies from the sets M_2 (a) and M_3 (b) within each round. The choice of the overall POVM implemented in each round will, in either case, depend on all inputs and outputs of previous rounds. The curves display the optimal error pairs for n = 1 (yellow), n = 2 (green) and n = 3 (blue) for E = {|φ⟩⟨φ|^⊗n}.

FIG. 6. Gradient descent based preparation game with parameters ε = 0.1, λ = 0.1 and θ_0 = 0. The probability of measuring M^0_1, M^0_{−1} in round k is chosen according to p_k(0) = 1/(1 + e^{−(2k−n)}). This captures the intuition that in the first few rounds it is more important to adjust the angle, while in later rounds the witness should be measured more often. (a) The score assigned to i.i.d. preparation strategies as a function of the parameter θ of |ψ_θ⟩ for n = 41 rounds for E (blue), compared to the optimal separable value (red). As expected, the average game scores of the i.i.d. strategies {|ψ_θ⟩⟨ψ_θ|^⊗n : θ} mimic the shape of the curve h(cos²(θ)), and the scores obtainable with the set of separable strategies S perform significantly worse compared to the states from E with angles close to θ = π/4. (b) The optimal scores achievable by players capable of preparing bipartite quantum states of bounded negativity [29], obtained through application of eq. (2). We observe that the average score of the game constitutes a good estimator for entanglement negativity.