Robust and efficient verification of graph states in blind measurement-based quantum computation

Blind quantum computation (BQC) is a secure quantum computation method that protects the privacy of clients. Measurement-based quantum computation (MBQC) is a promising approach for realizing BQC. To obtain reliable results in blind MBQC, it is crucial to verify whether the resource graph states are accurately prepared in the adversarial scenario. However, previous verification protocols for this task are too resource-consuming or noise-susceptible to be applied in practice. Here, we propose a robust and efficient protocol for verifying arbitrary graph states with any prime local dimension in the adversarial scenario, which leads to a robust and efficient protocol for verifying the resource state in blind MBQC. Our protocol requires only local Pauli measurements and is thus easy to realize with current technologies. Nevertheless, it can achieve the optimal scaling behaviors with respect to the system size and the target precision as quantified by the infidelity and significance level, which has never been achieved before. Notably, our protocol can exponentially enhance the scaling behavior with the significance level.


INTRODUCTION
Quantum computation offers the promise of exponential speedups over classical computation on a number of important problems [1][2][3]. However, it is very challenging to realize practical quantum computation in the near future, especially for clients with limited quantum computational power. Blind quantum computation (BQC) [4] is an effective method that enables such a client to delegate their computation to a server, who is capable of performing quantum computation, without leaking any information about the computation task. So far, various protocols of BQC have been proposed in theory [5][6][7][8] and demonstrated in experiments [9][10][11][12]. Many of these protocols build on the model of measurement-based quantum computation (MBQC) [13][14][15], in which graph states are used as resources and local projective measurements on qudits are used to drive the computation.
To realize BQC successfully, it is crucial to protect the privacy of the client and verify the correctness of the computation results. The latter task, known as verification of BQC, has been studied in various models as explained in the Methods section, among which MBQC in the receive-and-measure setting is particularly convenient [16][17][18][19][20][21]. However, it is extremely challenging to construct robust and efficient verification protocols, especially for noisy intermediate-scale quantum (NISQ) devices [3,22,23]. Actually, this problem lies at the heart of the active research field of quantum characterization, verification, and validation (QCVV) [24][25][26][27][28][29].
In this work, we focus on the problem of verifying the resource graph states in the following adversarial scenario [16,30,31], which is crucial to the verification of blind MBQC in the receive-and-measure setting [6,[16][17][18][19][20][21]: Alice is a client (verifier) who can only perform single-qudit projective measurements with a trusted measurement device, and Bob is a server (prover) who can prepare arbitrary quantum states. In order to perform MBQC, Alice delegates the preparation of the n-qudit graph state |G⟩ ∈ H to Bob, who then prepares a quantum state ρ on the whole space H^{⊗(N+1)} and sends it to Alice qudit by qudit. If Bob is honest, then he is supposed to prepare N + 1 copies of |G⟩; if he is malicious, then he can mess up the computation of Alice by generating an arbitrary correlated or even entangled state ρ. To obtain reliable computation results, Alice needs to verify the resource state prepared by Bob with suitable tests on N systems, where each test is a binary measurement on a single-copy system. If the test results satisfy certain conditions, then the conditional reduced state on the remaining system is close to the target state |G⟩ and can be used for MBQC; otherwise, the state is rejected. Since there is no communication from Alice to Bob after the preparation of the state ρ, information-theoretic blindness is guaranteed by the no-signaling principle [6].
The assumption that the client can perform reliable local projective measurements can be justified as follows. First, the measurement devices are controlled by Alice in her laboratory and are not affected by the adversary. So it is reasonable to assume that the measurement devices are trustworthy. Second, in practice, Alice can calibrate and verify her measurement devices before performing blind MBQC, and the resource costs of these operations scale moderately with the size of the delegated computation and the qudit number of the resource graph state. If high-quality measurements can be certified after calibration and verification, then Alice can safely use them to verify the graph state and perform blind MBQC.

TABLE I. Comparison of various protocols for verifying the resource states of blind MBQC in the adversarial scenario. Here n is the qubit (qudit) number of the resource graph state; ϵ and δ denote the target infidelity and significance level, respectively. The optimal scaling behaviors of the test number N in n, ϵ, and δ are O(1), O(ϵ^{−1}), and O(ln δ^{−1}), respectively. By 'robust to noise' we mean the verifier Alice can accept with a high probability if the state prepared has a sufficiently high fidelity. The robustness achieved in Ref. [17] is different from the current definition. The scaling behaviors with respect to ϵ and δ are not clear for protocols in Refs. [18,20,21,32]. See Supplementary Note 3 for details.
As pointed out above, the verification of the resource graph state in the adversarial scenario [16,30,31] is a crucial and challenging part of the verification of blind MBQC. A valid verification protocol in the adversarial scenario has to meet the basic requirements of completeness and soundness [16,20,31]. Completeness means that Alice does not reject the ideal graph state |G⟩. Intuitively, the verification protocol is sound if Alice does not mistakenly accept any bad state that is far from the ideal state |G⟩. Concretely, soundness means the following: upon acceptance, Alice needs to ensure with a high confidence level 1 − δ that the reduced state for MBQC has a sufficiently high fidelity (at least 1 − ϵ) with |G⟩. Here 0 < δ ≤ 1 is called the significance level and the threshold 0 < ϵ < 1 is called the target infidelity. The two parameters specify the target verification precision. The efficiency of a protocol is characterized by the number N of tests needed to achieve a given precision. Under the requirements of completeness and soundness, the optimal scaling behaviors of N with respect to ϵ, δ, and the qudit number n of |G⟩ are O(ϵ^{−1}), O(ln δ^{−1}), and O(1), respectively, as explained in the Results section. However, it is highly nontrivial to construct efficient verification protocols in the adversarial scenario. Although various protocols have been proposed [16,18,20,21,[31][32][33], most protocols known so far are too resource consuming. Even without considering noise robustness, only the protocol of Refs. [30,31] achieves the optimal scaling behaviors with n, ϵ, and δ (see Table I).
Moreover, most protocols are not robust to experimental noise: the state prepared by Bob may be rejected with a high probability even if it deviates only slightly from the ideal resource state. However, in practice, it is extremely difficult to prepare quantum states with genuine multipartite entanglement perfectly. So it is unrealistic to ask honest Bob to generate the perfect resource state. On the other hand, if the deviation from the ideal state is small enough, then the state is still useful for MBQC [20,32]. Therefore, a practical and robust protocol should accept nearly ideal states with a sufficiently high probability; otherwise, Alice needs to repeat the verification protocol many times to perform MBQC, which substantially increases the sample complexity. Unfortunately, no protocol known in the literature can achieve this goal.
Recently, a fault-tolerant protocol was proposed for verifying MBQC based on two-colorable graph states [17]. With this protocol, Alice can detect whether or not the given state belongs to a set of error-correctable states; then she can perform fault-tolerant MBQC on the accepted state. Although this protocol is noise-resilient to some extent, it is not very efficient (see Table I), and is difficult to realize in the current era of NISQ devices [3,22,23] because too many physical qubits are required to encode the logical qubits. In addition, this protocol is robust only to certain correctable errors since it is based on a given error-correcting code. If the actual error is not correctable, then the probability of acceptance will decrease exponentially with the number of tests, which substantially increases the actual sample complexity.
In this work, we propose a robust and efficient protocol for verifying general qudit graph states with a prime local dimension in the adversarial scenario, which plays a crucial role in robust and efficient verification of blind MBQC. Our protocol is appealing for practical applications because it only requires stabilizer tests based on local Pauli measurements, which are easy to implement with current technologies. It is robust against arbitrary types of noise in state preparation, as long as the fidelity is sufficiently high. Moreover, our protocol can achieve optimal scaling behaviors with respect to the system size and the target precision quantified by ϵ and δ, and the sample cost is comparable to the counterpart in the nonadversarial scenario, as clarified in the Methods section. As far as we know, such a high efficiency has never been achieved before when robustness is taken into account. In addition to qudit graph states, our protocol can also be applied to verifying many other important quantum states in the adversarial scenario, as explained in the Discussion section. Furthermore, many technical results developed in the course of our work are also useful for studying random sampling without replacement, as discussed in the companion paper [34] (cf. the Methods section).

Qudit graph states
To establish our results, we first review the definition of qudit graph states, where the local dimension d is a prime. Mathematically, a graph G = (V, E, m_E) is characterized by a set of n vertices V = {1, 2, . . ., n} and a set of edges E together with multiplicities specified by m_E = (m_e)_{e∈E}, where m_e ∈ Z_d and Z_d is the ring of integers modulo d, which is also a field given that d is a prime. Two distinct vertices i, j of G are adjacent if they are connected by an edge. The generalized Pauli operators X and Z for a qudit read

X|j⟩ = |j + 1 mod d⟩,   Z|j⟩ = ω^j |j⟩,   ω := e^{2πi/d},   (1)

where j ∈ Z_d. Given a graph G = (V, E, m_E) with n vertices, we can construct an n-qudit graph state |G⟩ ∈ H as follows [33,35]:

|G⟩ = ( ∏_{e=(i,j)∈E} CZ_{ij}^{m_e} ) |+⟩^{⊗n},   (2)

where |+⟩ := d^{−1/2} ∑_{j∈Z_d} |j⟩ and CZ_{ij} is the generalized controlled-Z gate acting on qudits i and j. This graph state is also uniquely determined by its stabilizer group S generated by the n commuting operators S_i := X_i ∏_{j∈V_i} Z_j^{m_{(i,j)}} for i = 1, 2, . . ., n, where V_i is the set of vertices adjacent to vertex i. Each stabilizer operator in S can be written as

g_k = ∏_{i=1}^{n} S_i^{k_i} = ⊗_{i=1}^{n} (g_k)_i,   (3)

where k := (k_1, . . ., k_n) ∈ Z_d^n, and (g_k)_i denotes the local generalized Pauli operator for the ith qudit.
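As a quick sanity check of these definitions, the generalized Pauli operators can be constructed numerically. The following minimal Python sketch (an illustration, not code from the paper) takes d = 3 as an example and verifies the Weyl commutation relation ZX = ωXZ together with the order-d property X^d = Z^d = 1.

```python
import numpy as np

d = 3                               # prime local dimension (illustrative)
omega = np.exp(2j * np.pi / d)

# Generalized Pauli operators: X|j> = |j+1 mod d>,  Z|j> = omega^j |j>
X = np.roll(np.eye(d), 1, axis=0)   # cyclic shift matrix
Z = np.diag(omega ** np.arange(d))

# Weyl commutation relation: Z X = omega * X Z
assert np.allclose(Z @ X, omega * X @ Z)

# Both operators have order d: X^d = Z^d = identity
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))
```

The same construction works for any prime d by changing the first line.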

Strategy for testing qudit graph states
Recently, a homogeneous strategy [30,31] for testing qubit stabilizer states based on stabilizer tests was proposed in Ref. [36] and generalized to the qudit case with a prime local dimension in Sec. X E of Ref. [31]. Here we use a variant strategy for testing qudit graph states, which serves as an important subroutine of our verification protocol. Let S be the stabilizer group of |G⟩ ∈ H and D(H) be the set of all density operators on H. For any operator g_k ∈ S, the corresponding stabilizer test is constructed as follows: party i measures the local generalized Pauli operator (g_k)_i for i = 1, 2, . . ., n, and records the outcome by an integer o_i ∈ Z_d, which corresponds to the eigenvalue ω^{o_i} of (g_k)_i; then the test is passed if and only if the outcomes satisfy ∑_i o_i = 0 mod d. By construction, the test can be represented by a two-outcome measurement {P_k, 1 − P_k}. Here 1 is the identity operator on H;

P_k = (1/d) ∑_{j∈Z_d} g_k^j   (4)

is the projector onto the eigenspace of g_k with eigenvalue 1 and corresponds to passing the test, while 1 − P_k corresponds to failing the test. If each test P_k for k ∈ Z_d^n is performed with probability 1/d^n (the test with k = 0 is trivial, as it is always passed), the resulting strategy reads

Ω₀ = (1/d^n) ∑_{k∈Z_d^n} P_k = |G⟩⟨G| + (1/d)(1 − |G⟩⟨G|).   (5)

For 1/d ≤ λ < 1, if one performs Ω₀ and the trivial test with probabilities p = d(1−λ)/(d−1) and 1 − p, respectively, then another strategy can be constructed as [30,31]

Ω = p Ω₀ + (1 − p) 1 = |G⟩⟨G| + λ(1 − |G⟩⟨G|).   (6)

We denote by ν := 1 − λ the spectral gap of Ω from the largest eigenvalue. This strategy plays a key role in our verification protocol introduced in the next subsection. As shown in Supplementary Note 6 A, the second equality in Eq. (5) holds whenever d is a prime, but may fail if d is not a prime. In the latter case, our strategy is no longer homogeneous in general, and many results in this work may not hold since they are based on homogeneous strategies. This is why we restrict our attention to the case of prime local dimensions.
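The homogeneous form of the strategy can be checked numerically on a small example. The sketch below (an illustration, not the paper's code) builds the two-qubit graph state on a single edge for d = 2, averages the passing projectors of all 2^n stabilizer tests, and confirms that the resulting operator has eigenvalue 1 on |G⟩ and eigenvalue 1/d = 1/2 on the orthogonal complement.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Two-qubit graph state on a single edge: |G> = CZ |+>|+>
plus = np.ones(2) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
G = CZ @ np.kron(plus, plus)

# Stabilizer generators S1 = X (x) Z and S2 = Z (x) X
S1 = np.kron(X, Z)
S2 = np.kron(Z, X)

# Average the passing projectors (1 + g_k)/2 over all 2^n stabilizers;
# the k = 0 term contributes the identity (the trivial, always-passed test)
Omega = np.zeros((4, 4))
for k1 in range(2):
    for k2 in range(2):
        g = np.linalg.matrix_power(S1, k1) @ np.linalg.matrix_power(S2, k2)
        Omega += (np.eye(4) + g) / 2
Omega /= 4

# Homogeneous form: eigenvalue 1 on |G>, eigenvalue 1/d = 1/2 elsewhere
assert np.allclose(Omega @ G, G)
assert np.allclose(np.sort(np.linalg.eigvalsh(Omega)), [0.5, 0.5, 0.5, 1.0])
```

The gap 1 − 1/2 between the two eigenvalues is exactly the spectral gap exploited by the mixing step that produces a general λ.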

Verification of graph states in blind MBQC
Suppose Alice intends to perform quantum computation with single-qudit projective measurements on the n-qudit graph state |G⟩ generated by Bob.As shown in Fig. 1, our protocol for verifying |G⟩ in the adversarial scenario runs as follows.
1. Bob produces a state ρ on the whole space H ⊗(N +1) with N ≥ 1 and sends it to Alice.
2. After receiving the state, Alice randomly permutes the N + 1 systems of ρ (due to this procedure, we can assume that ρ is permutation invariant without loss of generality) and applies the strategy Ω defined in Eq. (6) to the first N systems.

3. If at most k failures are observed among the N tests, Alice accepts the reduced state σ_{N+1} on the remaining system and uses it for MBQC; otherwise, she rejects it.
With this verification protocol, Alice aims to achieve three goals: completeness, soundness, and robustness. Recall that |G⟩ can always pass each test, so completeness is automatically guaranteed. The soundness is characterized by the target infidelity ϵ and significance level δ, as explained in the introduction. For verification protocols working in the nonadversarial scenario, where the source only produces independent states with no correlation or entanglement among different runs, the optimal scaling behaviors of the test number N with respect to ϵ, δ, and n are O(ϵ^{−1}), O(ln δ^{−1}), and O(1), respectively [31,36]. The adversarial scenario studied in this work makes a weaker assumption about the source [31,36], so the scaling behaviors in ϵ, δ, and n cannot be better. Although the condition of soundness looks quite simple, it is highly nontrivial to determine the degree of soundness. Even in the special case k = 0, this problem was resolved only very recently after quite a lengthy analysis [30,31]. Unfortunately, the robustness of this protocol is poor in this special case, as we shall see later. So we need to tackle this challenge in the general case.
Most previous works did not consider the problem of robustness at all, because it is already very difficult to detect the bad case without considering robustness. To characterize the robustness of a protocol, we need to consider the case in which honest Bob prepares an independent and identically distributed (i.i.d.) quantum state, that is, ρ is a tensor power of the form ρ = τ^{⊗(N+1)} with τ ∈ D(H). Due to inevitable noise, τ may not equal the ideal state |G⟩⟨G|. Nevertheless, if the infidelity ϵ_τ := 1 − ⟨G|τ|G⟩ is smaller than the target infidelity, that is, ϵ_τ < ϵ, then τ is still useful for quantum computing. For a robust verification protocol, such a state should be accepted with a high probability.

FIG. 2. The black curve corresponds to our N_min [Eq. (18)] with λ = 1/2, and the red dashed curve corresponds to the RHS of Eq. (20), which is an upper bound for N_min(ϵ, δ, λ, r). The blue dashed curve corresponds to the HM protocol [16], and the green solid curve corresponds to the ZH protocol [31] with λ = 1/2. The performances of the TMMMF protocol [20] and TM protocol [32] are not shown because the numbers of tests required are too large (see Supplementary Note 3).
In the i.i.d. case, the probability that Alice accepts τ reads

p^{iid}_{N,k}(τ) = B_{N,k}(ν ϵ_τ),   (7)

where N is the number of tests, k is the number of allowed failures, ν ϵ_τ is the probability that τ fails a single test of the strategy Ω, and B_{N,k}(p) := ∑_{j=0}^{k} C(N, j) p^j (1 − p)^{N−j} is the binomial cumulative distribution function, with C(N, j) denoting the binomial coefficient. To construct a robust verification protocol, it is preferable to choose a large value of k, so that p^{iid}_{N,k}(τ) is sufficiently high. Unfortunately, most previous verification protocols can reach a meaningful conclusion only when k = 0 [16,18,30,31,33], in which case the probability decreases exponentially with the test number N, which is not satisfactory. These protocols need a large number of tests to guarantee the soundness, so it is difficult to get accepted even if Bob is honest. Hence, previous protocols with the choice k = 0 are not robust to noise in state preparation. Since the acceptance probability is small, Alice needs to repeat the verification protocol many times to ensure that she accepts the state τ at least once, which substantially increases the actual sample cost. When ϵ_τ = ϵ/2, for example, the number of repetitions required is at least Θ(exp[1/(4δ)]) for the HM protocol in Ref. [16] and Θ(δ^{−1/2}) for the ZH protocol in Refs. [30,31] (see Supplementary Note 3 for details). As a consequence, the total number of required tests is at least Θ(δ^{−1} exp[1/(4δ)]) for the HM protocol and Θ(δ^{−1/2} ln δ^{−1}) for the ZH protocol, as illustrated in Fig. 2. Therefore, although some protocols known in the literature are reasonably efficient in detecting the bad case, they are not useful for verifying the resource state of blind MBQC in a realistic scenario.
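The impact of the choice of k on the acceptance probability in the i.i.d. case can be illustrated with a few lines of Python. The parameter values below (N, ν, and ϵ_τ) are arbitrary illustrative numbers, not taken from the paper; the failure probability of a single test is taken to be νϵ_τ, as for a homogeneous strategy.

```python
from math import comb

def binom_cdf(N, k, p):
    """B_{N,k}(p): probability of at most k successes-of-failure among N trials."""
    return sum(comb(N, j) * p**j * (1 - p) ** (N - j) for j in range(k + 1))

def accept_prob(N, k, nu, eps_tau):
    """Acceptance probability for i.i.d. states: each test fails with prob nu*eps_tau."""
    return binom_cdf(N, k, nu * eps_tau)

N, nu, eps_tau = 1000, 0.5, 0.005   # expected number of failures: 2.5
p_strict = accept_prob(N, 0, nu, eps_tau)   # k = 0: no failures allowed
k_robust = 5                                # = floor(s*nu*N) with error rate s = 0.01
p_robust = accept_prob(N, k_robust, nu, eps_tau)

assert p_strict < 0.1   # honest Bob is almost always rejected
assert p_robust > 0.9   # allowing a proportional number of failures restores acceptance
```

This is exactly the phenomenon described above: with k = 0 the acceptance probability decays exponentially in N, whereas a failure budget proportional to N keeps it close to one.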

Guaranteed infidelity
Suppose ρ is permutation invariant. Then the probability that Alice accepts ρ reads

p_k(ρ) = ∑_{j=0}^{k} C(N, j) tr[ρ (Ω̄^{⊗j} ⊗ Ω^{⊗(N−j)} ⊗ 1)],   (9)

where Ω̄ := 1 − Ω and C(N, j) denotes the binomial coefficient. Denote by σ_{N+1} the reduced state on the remaining system when at most k failures are observed. The fidelity between σ_{N+1} and the ideal state reads

⟨G|σ_{N+1}|G⟩ = f_k(ρ) := (1/p_k(ρ)) ∑_{j=0}^{k} C(N, j) tr[ρ (Ω̄^{⊗j} ⊗ Ω^{⊗(N−j)} ⊗ |G⟩⟨G|)].   (10)

The actual verification precision can be characterized by the following figure of merit with 0 < δ ≤ 1,

ϵ_λ(k, N, δ) := 1 − min{ f_k(ρ) : p_k(ρ) ≥ δ },   (11)

where λ is determined by Eq. (6), and the minimization is taken over permutation-invariant states ρ on H^{⊗(N+1)} satisfying p_k(ρ) ≥ δ. If Alice accepts the state prepared by Bob, then she can guarantee (with significance level δ) that the reduced state σ_{N+1} has infidelity at most ϵ_λ(k, N, δ) with the ideal state |G⟩. Consequently, according to the relation between the fidelity and trace norm, Alice can ensure the condition [16]

|tr(E σ_{N+1}) − ⟨G|E|G⟩| ≤ √(ϵ_λ(k, N, δ))   (12)

for any POVM element 0 ≤ E ≤ 1; that is, the deviation of any measurement outcome probability from the ideal value is not larger than √(ϵ_λ(k, N, δ)).
In view of the above discussions, the computation of ϵ_λ(k, N, δ) given in Eq. (11) is of central importance to analyzing the soundness of our protocol. Thanks to the analysis presented in the Methods section, this quantum optimization problem can actually be reduced to a classical sampling problem studied in the companion paper [34]. Using the results derived in Ref. [34], we can deduce many useful properties of ϵ_λ(k, N, δ) as well as its analytical formula, which are presented in Supplementary Note 1. Here it suffices to clarify the monotonicity properties of ϵ_λ(k, N, δ) as stated in Proposition 1 below, which follows from Proposition 6.5 in Ref. [34]; in particular, ϵ_λ(k, N, δ) is nondecreasing in the number k of allowed failures. Let Z_{≥j} be the set of integers larger than or equal to j.
Verification with a fixed error rate

If the number k of allowed failures is sublinear in N, that is, k = o(N), then the acceptance probability p^{iid}_{N,k}(τ) in Eq. (7) for the i.i.d. case approaches 0 as the number of tests N increases, which is not satisfactory. To achieve robust verification, here we set the number k to be proportional to the number of tests, that is, k = ⌊sνN⌋, where 0 ≤ s < 1 is the error rate and ν = 1 − λ is the spectral gap of the strategy Ω. In this case, when Bob prepares i.i.d. states τ ∈ D(H) with ϵ_τ < s, the acceptance probability p^{iid}_{N,k}(τ) approaches one as N increases. In addition, we can deduce the following theorem, which is proved in Supplementary Note 6 B.
Notably, if the ratio s/ϵ is a constant, then the sample cost is only O(ϵ −1 ln δ −1 ).The scaling behaviors in ϵ and δ are the same as the counterparts in the nonadversarial scenario, and are thus optimal.
The number of allowed failures

Next, we consider the case in which the number N of tests is given. To construct a concrete verification protocol, we need to specify the number k of allowed failures such that the conditions of soundness and robustness are satisfied simultaneously. According to Proposition 1, a small k is preferred to guarantee soundness, while a larger k is preferred to guarantee robustness. To construct a robust and efficient verification protocol, we need to find a good balance between the two conflicting requirements. The following proposition provides a suitable interval for the number k of allowed failures that can guarantee soundness; see Supplementary Note 6 E for a proof.

Proposition 2. Suppose 0 < λ, ϵ < 1 and 0 < δ ≤ 1/4. Then soundness with target infidelity ϵ and significance level δ is guaranteed whenever the number k of allowed failures does not exceed a threshold denoted by l(λ, N, ϵ, δ), whose explicit expression is given in Supplementary Note 6 E.

Next, we turn to the condition of robustness. When honest Bob prepares i.i.d. quantum states τ ∈ D(H) with infidelity 0 < ϵ_τ < ϵ, the probability that Alice accepts τ is p^{iid}_{N,k}(τ) given in Eq. (7), which is strictly increasing in k according to Lemma S4 in Supplementary Note 2. Suppose we set k = l(λ, N, ϵ, δ). As the number of tests N increases, the acceptance probability has the following asymptotic behavior if 0 < ϵ_τ < ϵ (see Supplementary Note 6 F for a proof):

1 − p^{iid}_{N,l}(τ) ≤ exp[−N D(l/N ∥ ν ϵ_τ)],

where D(p∥q) := p ln(p/q) + (1 − p) ln((1 − p)/(1 − q)) is the relative entropy between the binary probability vectors (p, 1 − p) and (q, 1 − q), and l is a shorthand for l(λ, N, ϵ, δ). Therefore, the probability of acceptance is arbitrarily close to one as long as N is sufficiently large, as illustrated in Fig. 4. Hence, our verification protocol is able to reach any degree of robustness.
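The role of the relative entropy D(p∥q) can be illustrated with the standard Chernoff-Hoeffding bound on the binomial tail, which controls how fast the rejection probability of an honest i.i.d. source vanishes. The following sketch uses illustrative parameters (not taken from the paper) and checks that the exponential bound indeed dominates the exact tail probability when l/N exceeds the single-test failure rate q.

```python
from math import comb, log, exp

def binom_tail_above(N, l, q):
    """P(X > l) for X ~ Binomial(N, q): probability of more than l failures."""
    return sum(comb(N, j) * q**j * (1 - q) ** (N - j) for j in range(l + 1, N + 1))

def rel_entropy(p, q):
    """D(p||q) for the binary distributions (p, 1-p) and (q, 1-q)."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

# Rejection probability of an honest i.i.d. source with failure rate q = nu*eps_tau,
# compared with the Chernoff-Hoeffding bound exp(-N * D(l/N || q))
N, l, q = 200, 10, 0.02          # l/N = 0.05 > q, so the bound applies
exact = binom_tail_above(N, l, q)
bound = exp(-N * rel_entropy(l / N, q))

assert exact <= bound     # the exponential bound dominates the exact tail
assert bound < 0.2        # rejection is already unlikely at N = 200
```

Because D(l/N ∥ q) > 0 whenever l/N > q, the rejection probability decays exponentially in N, which is the content of the asymptotic statement above.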

Sample complexity of robust verification
Now we consider the resource cost required by our protocol to reach given verification precision and robustness. Let ρ be the state on H^{⊗(N+1)} prepared by Bob and σ_{N+1} be the reduced state after Alice performs suitable tests and accepts the state ρ. To verify the target state within infidelity ϵ, significance level δ, and robustness r (with 0 ≤ r < 1) entails the following two conditions.

1. (Soundness) If the infidelity of σ_{N+1} with the target state is larger than ϵ, then the probability that Alice accepts ρ is less than δ.

2. (Robustness) If Bob prepares the tensor power ρ = τ^{⊗(N+1)} with τ ∈ D(H) and infidelity ϵ_τ ≤ rϵ, then the probability that Alice accepts ρ is at least 1 − δ.
The tensor power ρ in Condition 2 can be replaced by the tensor product of N + 1 independent quantum states τ_1, τ_2, . . ., τ_{N+1} ∈ D(H) that have infidelities at most rϵ. None of our conclusions change under this modification.
To achieve the conditions of soundness and robustness, we need to choose the test number N and the number k of allowed failures properly. To determine the resource cost, we define N_min(ϵ, δ, λ, r) as the minimum number of tests required for robust verification, that is, the minimum positive integer N such that there exists an integer 0 ≤ k ≤ N − 1 which together with N achieves the above two conditions. Note that the conditions of soundness and robustness can be expressed as

ϵ_λ(k, N, δ) ≤ ϵ,   B_{N,k}(νrϵ) ≥ 1 − δ.   (17)

So N_min(ϵ, δ, λ, r) can be expressed as

N_min(ϵ, δ, λ, r) = min{ N ∈ Z_{≥1} : ∃ k with 0 ≤ k ≤ N − 1 such that Eq. (17) holds }.   (18)

Next, we propose a simple algorithm, Algorithm 1, for computing N_min(ϵ, δ, λ, r), which is very useful in practical applications. In addition to N_min(ϵ, δ, λ, r), this algorithm determines the corresponding number of allowed failures, which is denoted by k_min(ϵ, δ, λ, r). In Supplementary Note 7 C we explain why Algorithm 1 works. Algorithm 1 is particularly useful for studying the variations of N_min(ϵ, δ, λ, r) with the four parameters ϵ, δ, λ, r, as illustrated in Fig. 5. When δ and r are fixed, N_min(ϵ, δ, λ, r) is inversely proportional to ϵ; when ϵ and r are fixed and δ approaches 0, N_min(ϵ, δ, λ, r) is proportional to ln δ^{−1}. In addition, Fig. 5(d) indicates that a strategy Ω with small or large λ is not very efficient for robust verification, while any choice satisfying 0.3 ≤ λ ≤ 0.5 is nearly optimal.
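The search structure of an Algorithm-1-style computation can be sketched as follows. The guaranteed infidelity ϵ_λ(k, N, δ) requires the analytical formula from Supplementary Note 1, which is not reproduced here; the sketch therefore substitutes a toy placeholder model with the right qualitative monotonicity (decreasing in N, increasing in k and in ln δ^{−1}), so the returned numbers are purely illustrative.

```python
from math import comb, log, floor

def binom_cdf(N, k, p):
    """B_{N,k}(p): probability of at most k failures among N tests."""
    return sum(comb(N, j) * p**j * (1 - p) ** (N - j) for j in range(k + 1))

# Toy placeholder for the guaranteed infidelity eps_lambda(k, N, delta):
# it is NOT the paper's exact formula, only a stand-in with the right
# monotonicity in k, N, and delta.
def toy_eps_lambda(k, N, delta, nu=0.5):
    return (k + 1 + log(1 / delta)) / (nu * N)

def n_min(eps, delta, nu, r):
    """Smallest N (with its k) meeting soundness and robustness.

    Since the guaranteed infidelity grows with k while the acceptance
    probability also grows with k, it suffices to test, for each N, the
    largest k still allowed by the soundness condition."""
    N = 1
    while True:
        # largest k with toy_eps_lambda(k, N, delta) <= eps
        k = floor(nu * N * eps - 1 - log(1 / delta))
        if 0 <= k < N and binom_cdf(N, k, nu * r * eps) >= 1 - delta:
            return N, k
        N += 1

N, k = n_min(eps=0.05, delta=0.05, nu=0.5, r=0.5)
assert toy_eps_lambda(k, N, 0.05) <= 0.05           # soundness (toy model)
assert binom_cdf(N, k, 0.5 * 0.5 * 0.05) >= 0.95    # robustness
```

With the exact formula for ϵ_λ(k, N, δ) substituted for the placeholder, the same loop computes the actual N_min(ϵ, δ, λ, r) and k_min(ϵ, δ, λ, r).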

The following theorem provides an informative upper bound for N_min(ϵ, δ, λ, r) and clarifies the sample complexity of robust verification; see Supplementary Note 6 D for a proof.

Theorem 3. Suppose 0 < λ, ϵ < 1, 0 < δ ≤ 1/2, and 0 ≤ r < 1. Then the conditions of soundness and robustness in Eq. (17) hold as long as N exceeds an explicit threshold proportional to ϵ^{−1} ln δ^{−1}, with a coefficient determined by λ and r.

For given λ and r, the minimum number of tests is only O(ϵ^{−1} ln δ^{−1}), which is independent of the qudit number n of |G⟩ and achieves the optimal scaling behaviors with respect to the infidelity ϵ and significance level δ. The coefficient is large when λ is close to 0 or 1, while it is around the minimum for any value of λ in the interval [0.3, 0.5]. Numerical calculation based on Algorithm 1 shows that the upper bound for N_min(ϵ, δ, λ, r) provided in Theorem 3 is a bit conservative, especially when r is small. In other words, the actual sample cost is smaller than what can be proved rigorously. Nevertheless, the bound is quite informative about the general trends. If we choose r = λ = 1/2 for example, then Theorem 3 implies an explicit upper bound proportional to ϵ^{−1} ln δ^{−1} [Eq. (20)], while numerical calculation shows that N_min(ϵ, δ, λ, r) ≤ 67 ϵ^{−1} ln δ^{−1}. Compared with previous works [16,30,31], our protocol improves the scaling behavior with respect to the significance level δ exponentially and even doubly exponentially, as illustrated in Fig. 2.

DISCUSSION
Verification of resource graph states in the adversarial scenario is a crucial step in the verification of blind MBQC. We have proposed a highly robust and efficient protocol for achieving this task, which applies to any qudit graph state with a prime local dimension. To implement this protocol, it suffices to perform simple stabilizer tests based on local Pauli measurements, which is quite appealing for NISQ devices. For any given degree of robustness, to verify the target graph state within infidelity ϵ and significance level δ, only O(ϵ^{−1} ln δ^{−1}) tests are required, which achieves the optimal sample complexity with respect to the system size, infidelity, and significance level. Compared with previous protocols, our protocol can reduce the sample cost dramatically in a realistic scenario; notably, the scaling behavior in the significance level can be improved exponentially.

So far we have focused on the verification of resource graph states with trustworthy and ideal local projective measurements. According to Eq. (12), if the blind MBQC is performed with ideal measurements after Alice accepts the state prepared by Bob, then the precision of the computation results is guaranteed by the precision of the graph state. However, in practice, it is unrealistic to assume that the measurement devices are perfect. So we need additional operations to guarantee the precision of the computation results when verifying blind MBQC in the receive-and-measure setting. As mentioned in the introduction, the client can calibrate her measurement devices before performing blind MBQC with a small overhead. In addition, we can convert the noise in measurements to noise in state preparation. To apply this method, we need the assumption that any measurement used in MBQC and graph state verification can be expressed as a composition of a measurement-independent noise process and the noiseless measurement. The details of this conversion method are presented in Supplementary Note 4.
When the noise process depends on the specific measurement, the situation is more complicated, and further study is required to deal with such noise.
After obtaining a reliable resource graph state accepted by the verification protocol, Alice can use it to perform MBQC. In this procedure, she needs to adaptively select local projective measurements to drive the computation. Nevertheless, these operations can be completed by using a classical computer, and the classical computation complexity scales linearly with the size of the original quantum computation [13]. Therefore, the most challenging part in the verification of blind MBQC is the verification of the resource graph state, which is the focus of this work.
In the above discussion, we assumed that the measurement devices are controlled by the client and are trustworthy. It is also desirable to construct robust and efficient protocols for verifying blind MBQC when the measurement devices are not trustworthy. To this end, a device-independent (DI) verification protocol was proposed in Ref. [37]. However, this protocol has a quantum communication complexity of the order O(ñ^c), where ñ is the size of the delegated quantum computation and c > 2048, which is too prohibitive for any practical implementation. By combining the CHSH inequality and stabilizer tests applied to a qubit graph state, Ref. [19] proposed a protocol for self-testing MBQC in the receive-and-measure setting. This protocol requires O(n^4 log n) samples, with n being the qubit number of the resource graph state, which is much more efficient than previous protocols, but is still far from the optimal scaling achieved in this work. In addition, it does not consider the problem of robustness. To further reduce the overhead and improve the robustness, it might be helpful to combine our approach with DI quantum state certification (DI QSC) developed recently [38]. See Supplementary Note 5 for details.
In addition to graph states, our protocol can also be used to verify many other pure quantum states in the adversarial scenario, where the state preparation is controlled by a potentially malicious adversary Bob, who can produce an arbitrary correlated or entangled state ρ on the whole system H^{⊗(N+1)}. Let |Ψ⟩ ∈ H be the target pure state to be verified. Then a verification strategy Ω for |Ψ⟩ is called homogeneous [30,31] if it has the form

Ω = |Ψ⟩⟨Ψ| + λ(1 − |Ψ⟩⟨Ψ|),   0 ≤ λ < 1.   (22)

Efficient homogeneous strategies based on local projective measurements have been constructed for many important quantum states [31,36,[39][40][41][42][43][44][45][46]. If a homogeneous strategy Ω as given in Eq. (22) can be constructed, then the target state |Ψ⟩ can be verified in the adversarial scenario by virtue of our protocol: Alice first randomly permutes all systems of ρ and applies the strategy Ω to the first N systems; then she accepts the remaining unmeasured system if at most k failures are observed among these tests. Most results (including Theorems 1, 2, 3, Algorithm 1, and Propositions 1, 2) in this paper are still applicable if the target graph state |G⟩ is replaced by |Ψ⟩. Therefore, our verification protocol is of interest not only to blind MBQC, but also to many other tasks in quantum information processing that entail high security. More results on quantum state verification (QSV) in the adversarial scenario are presented in Supplementary Note 7.
Up to now we have focused on robust QSV in the adversarial scenario, in which the prepared state ρ can be arbitrarily correlated or entangled, which is pertinent to blind MBQC. On the other hand, robust QSV in the i.i.d. scenario is also important to many applications. Although this scenario is much simpler than the adversarial scenario, the sample complexity of robust QSV has not been clarified before. In the Methods section and Supplementary Note 8 we discuss this issue in detail and clarify the sample complexity of robust QSV in the i.i.d. scenario in comparison with the adversarial scenario. Not surprisingly, most of our results on the adversarial scenario have analogs for the i.i.d. scenario.

Protocols for realizing verifiable BQC
To put our work into context, here we briefly review existing protocols for realizing verifiable BQC, which can be broadly divided into four classes [24]. Many protocols in the four classes build on the model of MBQC due to its convenience and flexibility.
The first class of protocols work in the multi-prover setting [8,37,47,48].These protocols can achieve a classical client (verifier), but a trade-off is the requirement of multiple non-communicating servers (provers) that share entanglement with each other, which is very difficult to realize in practice.
The second and third classes of protocols need only a single server, but assume that the client has limited quantum computational power. The second class of protocols works in the prepare-and-send setting [10, 49-51], in which the client has a trusted preparation device and the ability to send single-qudit quantum states to the server. This class includes the protocol based on quantum authentication [49], the protocol based on repeating indistinguishable runs of tests and computations [50], and the protocol based on trap qubits [51], which has been demonstrated experimentally [10]. The third class of protocols works in the receive-and-measure setting [16-18, 20, 21, 37], in which the client receives quantum states from the server and has the ability to perform reliable local projective measurements. This class includes the protocol based on CHSH games [37], the protocols based on QSV in the adversarial scenario [16-18, 20, 21], and our protocol. Notably, the above three classes of protocols are all information-theoretically secure [24].
Recently, the fourth class of protocols, based on computational assumptions, has been developed [52-55]; these protocols elegantly enable a classical client to hide and verify the quantum computation of a single server. However, such schemes are no longer information-theoretically secure, and their overheads are too prohibitive for any sort of practical implementation in the near future.
Simplifying the calculation of ϵ_λ(k, N, δ)

Here we show how to simplify the calculation of the guaranteed infidelity ϵ_λ(k, N, δ) given in Eq. (11) by virtue of results derived in the companion paper [34].
Recall that Ω is a homogeneous strategy for the target state |G⟩ ∈ H as shown in Eq. (6). It has the spectral decomposition Ω = Σ_{j=1}^{D} λ_j Π_j, where D is the dimension of H, λ_1 = 1, λ_j = λ for j ≥ 2, and the Π_j are mutually orthogonal rank-1 projectors with Π_1 = |G⟩⟨G|. In addition, ρ is a permutation-invariant state on H^{⊗(N+1)}. Note that p_k(ρ) defined in Eq. (9) and f_k(ρ) defined in Eq. (10) only depend on the diagonal elements of ρ in the product basis constructed from the eigenbasis of Ω (as determined by the Π_j). Hence, we may assume that ρ is diagonal in this basis without loss of generality. In other words, ρ can be expressed as a mixture of tensor products of the Π_j. For i = 1, 2, ..., N + 1, we can associate the ith system of ρ with a {0, 1}-valued variable Y_i: we define Y_i = 0 (1) if the state on the ith system is Π_1 (Π_{j≠1}). Since the state ρ is permutation invariant, the variables Y_1, ..., Y_{N+1} are subject to a permutation-invariant joint distribution P_{Y_1,...,Y_{N+1}}, with index set [N + 1] := {1, 2, ..., N + 1}. Conversely, for any permutation-invariant joint distribution of Y_1, ..., Y_{N+1}, we can always find a diagonal state ρ whose corresponding variables are subject to this distribution.
Next, we define a {0, 1}-valued random variable U_i to express the test outcome on the ith system, where 0 corresponds to passing the test and 1 corresponds to a failure. If Y_i = 0, which means the state on the ith system is Π_1, then the ith system must pass the test; if Y_i = 1, which means the state on the ith system is Π_{j≠1}, then the ith system passes the test with probability λ and fails with probability 1 − λ. So we have the conditional distribution Pr(U_i = 0 | Y_i = 0) = 1, Pr(U_i = 0 | Y_i = 1) = λ, and Pr(U_i = 1 | Y_i = 1) = 1 − λ. Note that U_i is determined by the random variable Y_i and the parameter λ in Eq. (6). Let K be the random variable that counts the number of 1s, that is, the number of failures, among U_1, U_2, ..., U_N. Then the probability that Alice accepts is Pr(K ≤ k), given that Alice accepts if at most k failures are observed among the N tests. This probability only depends on the joint distribution P_{Y_1,...,Y_{N+1}}. If at most k failures are observed, then the fidelity of the state on the (N + 1)th system can be expressed as the conditional probability Pr(Y_{N+1} = 0 | K ≤ k), which also only depends on P_{Y_1,...,Y_{N+1}}. Hence, the guaranteed infidelity defined in Eq. (11) can be expressed as the maximum of Pr(Y_{N+1} = 1 | K ≤ k) [Eq. (27)], where the optimization is taken over all permutation-invariant joint distributions P_{Y_1,...,Y_{N+1}} compatible with acceptance probability at least δ. Equation (27) reduces the computation of ϵ_λ(k, N, δ) to the computation of a maximum conditional probability. The latter problem was studied in detail in our companion paper [34], in which ϵ_λ(k, N, δ) is called the upper confidence limit. Hence, all properties of ϵ_λ(k, N, δ) derived in Ref. [34] also hold in the current context. Notably, several results in this paper are simple corollaries of their counterparts in Ref. [34]. To be specific, Proposition 1 follows from Proposition 6.5 in Ref. [34]; Theorem S1 in Supplementary Note 1 follows from Theorem 6.4 in Ref. [34]; Lemma S6 in Supplementary Note 2 follows from Lemma 6.7 in Ref. [34]; Lemma S7 in Supplementary Note 2 follows from Lemma 2.2 in Ref. [34]; Proposition S7 in Supplementary Note 7 follows from Lemma 5.4 and Eq. (89) in Ref. [34].
Although this paper and the companion paper [34] study essentially the same quantity ϵ_λ(k, N, δ), they have different focuses. In Ref. [34], we mainly focus on asymptotic behaviors of ϵ_λ(k, N, δ) and related quantities, which are of interest to the theory of statistical sampling and hypothesis testing. The main goal of Ref. [34] is to show that the randomized test with parameter λ > 0 can substantially improve the significance level over the deterministic test with λ = 0. In this paper, by contrast, we focus on finite bounds for ϵ_λ(k, N, δ) and related quantities, which are important to practical applications. In addition, the key result on robust verification, Theorem 3, has no analog in the companion paper. The main goal of this paper is to provide a robust and efficient protocol for verifying the resource graph state in blind MBQC and to clarify the sample complexity. So the two papers are complementary to each other.
It is worth pointing out that the 'randomized test' considered in Ref. [34] has a different meaning from the 'quantum test' in this paper because of different conventions in the two communities. The 'randomized test' in Ref. [34] refers to the whole procedure in which one observes the N variables U_1, U_2, ..., U_N and makes a decision based on the number of failures observed, while a 'quantum test' in this paper means that Alice performs a two-outcome measurement on one system of the state ρ, in which one outcome corresponds to passing the test and the other corresponds to a failure.
Robust and efficient verification of quantum states in the i.i.d. scenario

Up to now we have focused on QSV in the adversarial scenario, in which the server Bob can prepare an arbitrary state ρ on the whole space H^{⊗(N+1)}. In this section, we turn to the i.i.d. scenario, in which the prepared state is a tensor power of the form ρ = σ^{⊗(N+1)} with σ ∈ D(H). This verification problem was originally studied in Refs. [39, 40] and later more systematically in Ref. [36]. So far, efficient verification strategies based on local operations and classical communication (LOCC) have been constructed for various classes of pure states, including bipartite pure states [42, 43, 56], stabilizer states (including graph states) [16, 31, 33, 36, 57], hypergraph states [33], weighted graph states [58], Dicke states [45, 59], ground states of local Hamiltonians [60, 61], and certain continuous-variable states [62]; see Refs. [28, 29] for overviews. Verification protocols based on local collective measurements have also been constructed for Bell states [40, 63]. However, most previous works did not consider the problem of robustness. Consequently, most protocols known so far are not robust, and the sample cost may increase substantially if robustness is taken into account; see Supplementary Note 8 A for an explanation. Only recently have several works considered the problem of robustness [29, 64-67]; however, the degree of robustness of verification protocols has not been analyzed, and the sample complexity of robust verification has not been clarified, although this problem is apparently much simpler than its counterpart in the adversarial scenario.
In this section, we propose a general approach for constructing robust and efficient verification protocols in the i.i.d. scenario and clarify the sample complexity of robust verification. The results presented here can serve as a benchmark for understanding QSV in the adversarial scenario. To streamline the presentation, the proofs of these results [including Propositions 3-6 and Eq. (38)] are relegated to Supplementary Note 8.
Consider a quantum device that is expected to produce the target state |Ψ⟩ ∈ H, but actually produces the states σ_1, σ_2, ..., σ_N in N runs. In the i.i.d. scenario, all these states are identical to a common state σ, and the goal of Alice is to verify whether σ is sufficiently close to the target state |Ψ⟩. If a strategy Ω of the form in Eq. (22) can be constructed for |Ψ⟩, then our verification protocol runs as follows: Alice applies the strategy Ω to each of the N states and counts the number of failures. If at most k failures are observed among the N tests, then Alice accepts the states prepared; otherwise, she rejects. Here 0 ≤ k ≤ N − 1 is called the number of allowed failures. The completeness of this protocol is guaranteed because the target state |Ψ⟩ can never be mistakenly rejected.
Most previous works did not consider the problem of robustness and can reach a meaningful conclusion only when k = 0 [31, 33, 36, 39-45], i.e., when Alice accepts iff all N tests are passed. However, the requirement of passing all tests is too demanding in a realistic scenario and leads to poor robustness, as clarified in Supplementary Note 8. To remedy this problem, several recent works considered modifications that allow some failures [29, 64-67]. However, the robustness of such verification protocols has not been analyzed, and the sample complexity of robust verification has not been clarified.
Here we consider robust verification in which at most k failures are allowed. Then the probability of acceptance is given by B_{N,k}(νϵ_σ) := Σ_{j=0}^{k} (N choose j) (νϵ_σ)^j (1 − νϵ_σ)^{N−j} [Eq. (28)], where ϵ_σ := 1 − ⟨Ψ|σ|Ψ⟩ is the infidelity between σ and the target state and ν = 1 − λ is the spectral gap of Ω. Similar to Eq. (11), for 0 < δ ≤ 1 we define the guaranteed infidelity in the i.i.d. scenario as ϵ^iid_λ(k, N, δ) := max {ϵ_σ | B_{N,k}(νϵ_σ) ≥ δ}, where the first maximization is taken over all states σ on H whose acceptance probability is at least δ, and the second equality follows from Eq. (28). By definition, if Alice accepts the state σ, then she can ensure (with significance level δ) that σ has infidelity at most ϵ^iid_λ(k, N, δ) with the target state (soundness). Hence, ϵ^iid_λ(k, N, δ) characterizes the verification precision in the i.i.d. scenario. Since the i.i.d. scenario imposes a stronger constraint on the source than the full adversarial scenario, the guaranteed infidelity for the former scenario cannot be larger than that for the latter, that is, ϵ^iid_λ(k, N, δ) ≤ ϵ_λ(k, N, δ), as illustrated in Fig. 6.
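As a concrete illustration of these definitions, the sketch below computes the binomial tail B_{N,k}(νϵ_σ) and inverts it by bisection to obtain the guaranteed infidelity in the i.i.d. scenario. It assumes the binomial form of the acceptance probability described above; the function names and parameter values are ours, chosen for illustration.

```python
from math import comb

def binom_tail(N, k, p):
    """B_{N,k}(p): probability of at most k failures among N i.i.d. tests,
    each failing with probability p."""
    return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(k + 1))

def guaranteed_infidelity_iid(k, N, delta, nu):
    """Solve B_{N,k}(nu * eps) = delta for eps by bisection; the tail is
    decreasing in the failure probability, so the root is unique."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if binom_tail(N, k, nu * mid) >= delta:
            lo = mid      # acceptance probability still >= delta: eps can grow
        else:
            hi = mid
    return lo

# e.g. k = 2 allowed failures, N = 500 tests, significance level 1%
eps = guaranteed_infidelity_iid(k=2, N=500, delta=0.01, nu=0.5)
```

Consistent with Proposition 3 below, increasing N (with k, δ, ν fixed) strictly decreases the value returned.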
The following proposition clarifies the monotonicity of ϵ^iid_λ(k, N, δ); it is the counterpart of Proposition 1.
Proposition 3. Suppose 0 < λ < 1, 0 < δ ≤ 1, k ∈ Z_{≥0}, and N ∈ Z_{≥k+1}. Then ϵ^iid_λ(k, N, δ) is strictly decreasing in δ and N, but strictly increasing in k.
Next, we consider verification with a fixed error rate in the i.i.d. scenario. Concretely, we set the number of allowed failures k to be proportional to the number of tests, i.e., k = ⌊sνN⌋, where 0 ≤ s < 1 is the error rate and ν = 1 − λ is the spectral gap of the strategy Ω.
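With this choice of k, one can check numerically that the guaranteed infidelity approaches the error rate s from above as N grows. The sketch below again assumes the binomial form of the acceptance probability; it evaluates the tail in log space so that large N causes no overflow (parameter values are ours, chosen for illustration).

```python
from math import lgamma, log, exp, floor

def binom_tail(N, k, p):
    # B_{N,k}(p), computed in log space to stay stable for large N
    def log_pmf(j):
        return (lgamma(N + 1) - lgamma(j + 1) - lgamma(N - j + 1)
                + j * log(p) + (N - j) * log(1 - p))
    return sum(exp(log_pmf(j)) for j in range(k + 1))

def eps_iid(k, N, delta, nu):
    # invert B_{N,k}(nu * eps) = delta by bisection (decreasing in eps)
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if binom_tail(N, k, nu * mid) >= delta else (lo, mid)
    return lo

s, nu, delta = 0.02, 0.5, 0.01
# guaranteed infidelity with k = floor(s*nu*N) allowed failures, growing N
eps_list = [eps_iid(floor(s * nu * N), N, delta, nu) for N in (2000, 8000, 32000)]
```

The resulting values decrease toward s = 0.02 as N quadruples, matching the convergence behavior discussed below.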
Proposition 4. Suppose 0 < s, λ < 1 and 0 < δ ≤ 1/2.

Similar to the behavior of ϵ_λ(⌊νsN⌋, N, δ), the guaranteed infidelity ϵ^iid_λ(⌊νsN⌋, N, δ) for the i.i.d. scenario converges to the error rate s as the number N of tests gets large, as illustrated in Fig. 6. To achieve a given infidelity ϵ and significance level δ, which means ϵ^iid_λ(⌊νsN⌋, N, δ) ≤ ϵ, it suffices to set s < ϵ and choose a sufficiently large N. By virtue of Proposition 4 we can derive the following proposition, which is the counterpart of Theorem 2.

5: Find the largest integer M such that B_{M,k}(νrϵ) ≥ 1 − δ.

Here the condition of robustness is the same as its counterpart in the adversarial scenario, while the condition of soundness is different. In the adversarial scenario, once Alice accepts, only the reduced state on the remaining unmeasured system can be used for application, so the condition of soundness only concerns the fidelity of this state. In the i.i.d. scenario, by contrast, the prepared states are identical and independent, so the condition of soundness concerns the fidelity of each state.
Given the total number N of tests and the number k of allowed failures, the conditions of soundness and robustness can be expressed as in Eq. (33). Let N^iid_min(ϵ, δ, λ, r) be the minimum number of tests required for robust verification in the i.i.d. scenario. Then N^iid_min(ϵ, δ, λ, r) is the minimum positive integer N such that Eq. (33) holds for some 0 ≤ k ≤ N − 1. It is determined by νϵ, δ, and r, and is the counterpart of N_min(ϵ, δ, λ, r) in the adversarial scenario.
Next, we propose a simple algorithm, Algorithm 2, for computing N^iid_min(ϵ, δ, λ, r), which is very useful in practical applications. This algorithm is the counterpart of Algorithm 1 for computing N_min(ϵ, δ, λ, r). In addition to the number of tests, Algorithm 2 also determines the corresponding number of allowed failures, which is denoted by k^iid_min(ϵ, δ, λ, r). In Supplementary Note 8 F we explain why Algorithm 2 works.
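A minimal brute-force sketch of such a computation (not the paper's Algorithm 2 itself): assuming the soundness and robustness conditions of Eq. (33) take the binomial forms B_{N,k}(νϵ) ≤ δ and B_{N,k}(νrϵ) ≥ 1 − δ used in our discussion, the smallest admissible N and a matching k can be found by scanning upward.

```python
from math import comb

def binom_tail(N, k, p):
    # B_{N,k}(p): probability of at most k failures among N independent tests
    return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(k + 1))

def n_iid_min(eps, delta, lam, r):
    """Smallest N for which some 0 <= k <= N-1 satisfies both
    soundness  B_{N,k}(nu*eps)   <= delta      and
    robustness B_{N,k}(nu*r*eps) >= 1 - delta  (assumed forms of Eq. (33))."""
    nu = 1 - lam
    N = 1
    while True:
        for k in range(N):
            if (binom_tail(N, k, nu * eps) <= delta
                    and binom_tail(N, k, nu * r * eps) >= 1 - delta):
                return N, k
        N += 1

# illustrative parameters: epsilon = 0.1, delta = 0.05, lambda = 1/2, r = 0.1
N_min_iid, k_min_iid = n_iid_min(eps=0.1, delta=0.05, lam=0.5, r=0.1)
```

Minimality holds by construction, since N is scanned in increasing order; Algorithm 2 in the paper achieves the same goal far more efficiently.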
Algorithm 2 is quite useful for studying the variation of N^iid_min(ϵ, δ, λ, r) with λ, δ, ϵ, and r, as illustrated in Fig. 7. When ϵ and r are fixed and δ approaches 0, N^iid_min(ϵ, δ, λ, r) is proportional to ln δ^{−1}. When δ and r are fixed, N^iid_min(ϵ, δ, λ, r) is inversely proportional to νϵ. This fact shows that strategies with larger spectral gaps are more efficient, in sharp contrast with the adversarial scenario.
At this point it is instructive to compare the minimum number of tests for robust verification in the adversarial scenario with its counterpart in the i.i.d. scenario. Numerical calculation shows that the ratio of N_min(ϵ, δ, λ, r) to N^iid_min(ϵ, δ, λ, r) is decreasing in λ, as reflected in Fig. 8. For a typical value of λ, say λ = 1/2, this ratio is smaller than 2, so the sample complexity in the adversarial scenario is comparable to its counterpart in the i.i.d. scenario. When λ is small, one can construct another strategy with a larger λ by adding the trivial test [see Eq. (6)], which can achieve a higher efficiency in the adversarial scenario. For this reason, the ratio of N_min(ϵ, δ, λ, r) to N^iid_min(ϵ, δ, λ, r) is not so important when λ ≤ 0.3.
The following proposition provides a guideline for choosing appropriate parameters N and k for achieving a given verification precision and robustness.
Proposition 6. The conditions of soundness and robustness in Eq. (33) hold as long as s ∈ (rϵ, ϵ), k = ⌊νsN⌋, and N is sufficiently large.

For 0 < p, r < 1 we define auxiliary functions of p and r; by virtue of Proposition 6 we can then derive the informative bounds in Eq. (38) (for 0 < δ, ϵ, r < 1). These bounds become tighter when the significance level δ approaches 0, as shown in Supplementary Figure 4.
Finally, it is instructive to clarify the relation between QSV in the i.i.d. scenario, the nonadversarial scenario, and the adversarial scenario. In the i.i.d. scenario, the assumptions on the source are the strongest, so QSV is the easiest and the sample cost is the smallest. In the adversarial scenario, by contrast, the assumptions on the source are the weakest, so QSV is the most difficult and the sample cost is the largest. For graph states with a prime local dimension, the sample cost in the adversarial scenario is comparable to its counterpart in the i.i.d. scenario thanks to the analysis above, which means the sample costs in all three scenarios are comparable.

For 0 ≤ p ≤ 1 and z, k ∈ Z_{≥0}, define B_{z,k}(p) := Σ_{j=0}^{k} (z choose j) p^j (1 − p)^{z−j}; here it is understood that x^0 = 1 even if x = 0. For integer 0 ≤ z ≤ N + 1, define the quantities h_z(k, N, λ) and g_z(k, N, λ).

Lemma S1 (Lemma 6.2, [34]). Suppose 0 < λ < 1, k, z, N ∈ Z_{≥0}, and N ≥ k + 1. Then h_z(k, N, λ) strictly decreases with z for k ≤ z ≤ N + 1, and g_z(k, N, λ) strictly decreases with z for 0 ≤ z ≤ N + 1.
By this lemma, we can bound ϵ_λ(k, N, δ). The exact value of ϵ_λ(k, N, δ) is determined by the following theorem, which follows from Theorem 6.4 in the companion paper [34] according to the discussion in the Methods section.

Supplementary Note 2. AUXILIARY LEMMAS
To establish our main results, we first prepare several auxiliary lemmas.
For 0 < p, q < 1, the relative entropy between the two binary probability vectors (p, 1 − p) and (q, 1 − q) reads D(p∥q) := p ln(p/q) + (1 − p) ln[(1 − p)/(1 − q)].

Lemma S3. When 0 < p < q < 1, D(p∥q) is strictly increasing in q and strictly decreasing in p; when 0 < q < p < 1, D(p∥q) is strictly decreasing in q and strictly increasing in p.
Proof of Lemma S3. We have ∂D(p∥q)/∂q = (q − p)/[q(1 − q)], which is positive when 0 < p < q < 1 and negative when 0 < q < p < 1. In addition, we have ∂D(p∥q)/∂p = ln[p(1 − q)] − ln[q(1 − p)], which is negative when 0 < p < q < 1 and positive when 0 < q < p < 1. These observations complete the proof.
Lemma S4 (Lemma 3.2, [68]). Suppose k, z ∈ Z_{≥0}, 0 ≤ k ≤ z, and 0 < p < 1. Then B_{z,k}(p) is strictly increasing in k, strictly decreasing in z, and nonincreasing in p. In addition, B_{z,k}(p) is strictly decreasing in p when k < z.
For 0 < p < 1 and k, z ∈ Z_{≥0} with k ≤ zp, the Chernoff bound states that B_{z,k}(p) ≤ e^{−z D(k/z ∥ p)}. The following lemma provides a reverse Chernoff bound for B_{z,k}(p).
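With B_{z,k}(p) and D(p∥q) as above, the Chernoff bound can be checked numerically; the snippet below is an illustrative spot check at one parameter choice of ours, not part of any proof.

```python
from math import comb, log, exp

def binom_tail(z, k, p):
    # B_{z,k}(p): probability of at most k failures among z trials
    return sum(comb(z, j) * p**j * (1 - p)**(z - j) for j in range(k + 1))

def rel_entropy(p, q):
    # binary relative entropy D(p||q) between (p, 1-p) and (q, 1-q)
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

z, p = 400, 0.2
for k in range(1, int(z * p)):  # the Chernoff bound applies when k <= z*p
    assert binom_tail(z, k, p) <= exp(-z * rel_entropy(k / z, p))
```

The bound is tight on the exponential scale, which is what the reverse Chernoff bound of the following lemma makes precise.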

Supplementary Note 3. PERFORMANCE OF PREVIOUS PROTOCOLS FOR VERIFYING THE RESOURCE STATES IN BLIND MBQC
To illustrate the advantage of our verification protocol, here we provide more details on the performance of the previous protocols summarized in Table 1 in the main text, including the protocols in Refs. [16, 17, 20, 31, 32]. It turns out that none of these protocols can verify the resource states of blind MBQC in a robust and efficient way. Actually, most previous works did not consider the problem of robustness at all, because it is already very difficult to detect the bad case even without considering robustness.
Suppose Bob is honest and prepares an i.i.d. state of the form ρ = τ^{⊗(N+1)}, where τ ∈ D(H) has a high fidelity with the target state and is useful for MBQC. An ideal verification protocol should accept such a state with high probability. However, the acceptance probability is very small for most protocols known in the literature. This is not surprising given that these protocols require a large number of tests to detect the bad case, which makes it difficult to get accepted even if Bob is honest. So many repetitions are necessary to ensure that Alice accepts the state preparation at least once, which may substantially increase the actual sample cost.
To be concrete, suppose Alice repeats the verification protocol M times, and the acceptance probability of each run is p_acc (depending on τ). Then the probability that she accepts the state τ at least once reads 1 − (1 − p_acc)^M. To ensure that this probability is at least 1 − δ̃ with 0 < δ̃ < 1, the minimum of M reads M_min = ⌈ln δ̃ / ln(1 − p_acc)⌉ ≈ (ln δ̃^{−1})/p_acc [Eq. (23)], where the approximation, based on Lemma S2, is applicable when p_acc ≪ 1. To make a fair comparison with our protocol presented in the main text, we can choose δ̃ = δ. When robustness is taken into account, therefore, the actual sample complexity is increased by a factor of M ≈ (ln δ^{−1})/p_acc, which is a very large overhead for most previous protocols.
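For example, the exact repetition count and its approximation can be compared directly (the numerical values below are ours, chosen for illustration):

```python
from math import ceil, log

def min_repetitions(p_acc, delta):
    # smallest M with 1 - (1 - p_acc)**M >= 1 - delta
    return ceil(log(delta) / log(1 - p_acc))

p_acc, delta = 0.04, 0.01
M = min_repetitions(p_acc, delta)
approx = log(1 / delta) / p_acc  # the approximation, valid when p_acc << 1
```

Here M = 113 while the approximation gives about 115, within a few percent, as expected for small p_acc.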
A. HM protocol [16]

In Ref. [16], Hayashi and Morimae (HM) introduced a protocol for verifying two-colorable graph states in the adversarial scenario; the state ρ prepared by Bob is accepted only if all tests are passed. To verify an n-qubit two-colorable graph state |G⟩ ∈ H within target infidelity ϵ and significance level δ, this protocol requires N_HM = Θ(δ^{−1}ϵ^{−1}) tests. This scaling is optimal with respect to ϵ and n, but not optimal with respect to δ.
Next, we analyze the robustness of the HM protocol, which was not considered in the original paper [16]. When Bob generates i.i.d. quantum states τ ∈ D(H) with infidelity ϵ_τ, the average probability that τ passes each test is at most 1 − ϵ_τ/2. So the probability p^HM_acc that Alice accepts τ satisfies p^HM_acc ≤ (1 − ϵ_τ/2)^{N_HM} [Eq. (25)]. For high-precision verification, we have ϵ, δ ≪ 1 and N_HM ≫ 1, so the RHS of Eq. (25) is Θ(1) only if ϵ_τ = O(N_HM^{−1}) = O(δϵ). Note that δϵ is much smaller than the target infidelity ϵ, so it is in general very difficult to achieve such a low infidelity even if the target infidelity ϵ is accessible. Therefore, the robustness of the HM protocol is very poor.
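This bound is easy to evaluate numerically. Assuming N_HM = 1/(δϵ) (consistent with the Θ(δ^{−1}ϵ^{−1}) scaling; the exact prefactor is our assumption), the acceptance probability for ϵ = δ = 0.01 and ϵ_τ = ϵ/2 is on the order of 10^{−11}:

```python
eps, delta = 0.01, 0.01
eps_tau = eps / 2                 # infidelity of Bob's honest i.i.d. states
N_hm = round(1 / (delta * eps))   # assumed test number, N_HM ~ 1/(delta*eps)

# every test must pass, each with probability at most 1 - eps_tau/2
p_acc_bound = (1 - eps_tau / 2) ** N_hm
```

With these numbers, p_acc_bound is below 1.35 × 10^{−11}, so the honest preparation is almost never accepted in a single run.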
If the ratio ϵ_τ/ϵ is a constant, then the probability p^HM_acc decreases rapidly as δ decreases. When ϵ_τ = ϵ/2 and δ ≤ 1/8, for example, this probability is upper bounded by an expression that decreases exponentially with δ^{−1}, as illustrated in Supplementary Figure 1. Here the first inequality follows from Eqs. (24) and (25), and the second inequality is proved in Supplementary Note 3 F. This probability is already extremely small when ϵ = δ = 0.01, in which case p^HM_acc < 1.35 × 10^{−11}. According to Eq. (23), to ensure that Alice accepts the state prepared by Bob at least once with confidence level 1 − δ, the number of repetitions is M_HM ≈ (ln δ^{−1})/p^HM_acc. Therefore, the total sample cost reads M_HM N_HM, which increases exponentially with 1/δ. By contrast, the sample cost of our protocol is only O(ϵ^{−1} ln δ^{−1}), which improves the scaling behavior with δ doubly exponentially.

Supplementary Figure 1. The maximum probability that Alice accepts i.i.d. quantum states τ ∈ D(H) with previous verification protocols. Here δ is the significance level, ϵ = 0.01 is the target infidelity, and τ has infidelity ϵ_τ = ϵ/2 with the target state |G⟩. The blue curve corresponds to the HM protocol [16] and is based on Eq. (25); the other four curves correspond to the ZH protocol [31] and are based on Eq. (29).

B. ZH protocol [31]

In Ref. [31], Zhu and Hayashi (ZH) introduced a protocol for verifying qudit stabilizer states (including graph states) in the adversarial scenario. In this protocol, Alice uses the strategy in Eq. (6) in the main text to test N systems of the state ρ prepared by Bob, and she accepts the state on the remaining system iff all N tests are passed. Hence, the ZH protocol can be viewed as a special case of our protocol with k = 0. To verify an n-qudit graph state |G⟩ ∈ H within target infidelity ϵ and significance level δ, the ZH protocol requires N_ZH = Θ(ϵ^{−1} ln δ^{−1}) tests, which is optimal with respect to all of n, ϵ, and δ if robustness is not taken into account. The analytical formula for N_ZH is provided in Theorem 2 of Ref. [31].
The robustness of the ZH protocol is higher than that of the HM protocol, but is still not satisfactory. When Bob generates i.i.d. quantum states τ ∈ D(H) with infidelity ϵ_τ, the average probability that τ passes each test in the ZH protocol is 1 − νϵ_τ, where ν = 1 − λ is the spectral gap of the strategy in Eq. (6) in the main text. The probability that Alice accepts τ reads p^ZH_acc = (1 − νϵ_τ)^{N_ZH} [Eq. (29); cf. Eq. (8) in the main text]. For high-precision verification, we have ϵ, δ ≪ 1 and N_ZH ≫ 1, so p^ZH_acc is Θ(1) only if ϵ_τ = O(N_ZH^{−1}) = O(ϵ/ln δ^{−1}), which is much more demanding than achieving the target infidelity ϵ. So the robustness of the ZH protocol is not satisfactory.
In most cases of practical interest, p^ZH_acc obeys the upper bound illustrated in Supplementary Figure 1; the inequality is proved in Supplementary Note 3 G. According to Eq. (23), to ensure that Alice accepts the state prepared by Bob at least once with confidence level 1 − δ, the number of repetitions reads M_ZH ≈ (ln δ^{−1})/p^ZH_acc, so the total sample cost reads M_ZH N_ZH. By contrast, the sample cost of our protocol is only O(ϵ^{−1} ln δ^{−1}), which improves the scaling behavior with δ exponentially. When λ = 1/2 and ϵ = δ = 0.01, for example, calculation shows that N_ZH = 1307 and p^ZH_acc ≈ 3.79%.
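The quoted numbers for λ = 1/2 and ϵ = δ = 0.01 can be reproduced from the acceptance probability above, taking N_ZH = 1307 from Theorem 2 of Ref. [31] as given:

```python
from math import ceil, log

lam, eps, delta = 0.5, 0.01, 0.01
nu = 1 - lam
N_zh = 1307            # quoted value from Theorem 2 of Ref. [31]
eps_tau = eps / 2      # infidelity of Bob's honest i.i.d. states

# all N_ZH tests must pass, each with probability 1 - nu * eps_tau
p_acc = (1 - nu * eps_tau) ** N_zh          # ~ 3.79%

# repetitions to accept at least once with confidence 1 - delta [Eq. (23)]
M = ceil(log(delta) / log(1 - p_acc))
total_cost = M * N_zh
```

This gives roughly 120 repetitions and a total cost of more than 1.5 × 10^5 tests, two orders of magnitude beyond the nominal N_ZH.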
C. TMMMF protocol [20]

In Ref. [20], Takeuchi, Mantri, Morimae, Mizutani, and Fitzsimons (TMMMF) introduced a protocol for verifying qudit graph states in the adversarial scenario. Let |G⟩ ∈ H be the target graph state of n qudits. In the TMMMF protocol, the total number of copies required is N_TMMMF = 2n⌈(5n^4 ln n)/32⌉, and the number of tests required is N_test. If sufficiently few failures are observed among the N_test tests, then the verifier Alice can guarantee that the reduced state σ on one of the remaining systems has infidelity at most (2√c + 1)/n with significance level n^{1−5c/64}, where c is a constant that satisfies 64/5 < c < (n − 1)²/4. The number of required copies N_TMMMF grows rapidly with n, which makes the TMMMF protocol hardly practical. By contrast, to verify an n-qudit graph state |G⟩ with infidelity ϵ = (2√c + 1)/n and significance level δ = n^{1−5c/64}, the number of copies required by our protocol is only N = O(ϵ^{−1} ln δ^{−1}). If the parameter c is a constant independent of n, then only N = O(n ln n) copies are needed. So our protocol is much more efficient than the TMMMF protocol. In addition, in our protocol it is easy to adjust the verification precision as quantified by the infidelity and significance level. By contrast, the TMMMF protocol does not have this flexibility, because the target infidelity and significance level are intertwined with the qudit number.
To analyze the robustness of the TMMMF protocol, suppose Bob generates i.i.d. states τ ∈ D(H) with infidelity ϵ_τ = ϵ/2. In this case, the following proposition, proved in Supplementary Note 3 H, shows that the probability of acceptance decreases exponentially with the qudit number n if Alice applies the TMMMF protocol. Therefore, the robustness of the TMMMF protocol is very poor. According to Eq. (23), to ensure that Alice accepts the state prepared by Bob at least once with confidence level 1 − δ, the number of repetitions grows exponentially with n. Consequently, the total number of copies consumed by Alice is astronomical.
D. TM protocol [32]

In Ref. [32], Takeuchi and Morimae (TM) introduced a protocol for verifying qubit hypergraph states (including graph states) in the adversarial scenario. Let k ≥ (4n)^7 and m ≥ (2 ln 2)n^3 k^{18/7} be positive integers. To verify an n-qubit graph state within infidelity ϵ = k^{−1/7} and significance level δ = k^{−1/7}, the number of required tests is N_test = nk, and the total number of samples N_TM is astronomical and too prohibitive for any practical application. By contrast, the sample number required by our protocol to achieve the same precision is only N = O(ϵ^{−1} ln δ^{−1}) = O(k^{1/7} ln k), which is much smaller than N_TM. Furthermore, in the TM protocol the choices of the target infidelity ϵ and significance level δ are restricted, while our protocol is applicable for all valid choices of ϵ and δ. Since the TM protocol is impossible to realize in practice, we do not analyze its robustness further.
E. FH protocol [17]

In Ref. [17], Fujii and Hayashi (FH) introduced a protocol for verifying fault-tolerant MBQC based on two-colorable graph states. Let |G⟩ ∈ H be the ideal two-colorable resource graph state of n qubits, H_S be the subspace of states that are error-correctable, and Π_S be the projector onto H_S. With the FH protocol, by performing N_FH = ⌈1/(δϵ) − 1⌉ tests, Alice can ensure that the reduced state σ on the remaining system satisfies tr(σΠ_S) ≥ 1 − ϵ with significance level δ. That is, Alice can verify whether or not the given state belongs to the class of error-correctable states. Once she accepts, she can safely use the reduced state σ to perform fault-tolerant MBQC. In this sense, the FH protocol is fault tolerant, which offers a kind of robustness different from that of our protocol. The scaling of N_FH is optimal with respect to both ϵ and n, but not optimal with respect to δ.
Since the FH protocol relies on a given quantum error-correcting code, it is difficult to realize on NISQ devices because too many physical qubits are required to encode the logical qubits. In addition, the FH protocol is robust only to certain correctable errors. If the actual error is not correctable (even if the error probability is small), then the probability of acceptance will decrease exponentially with δ^{−1} (cf. the discussion in Supplementary Note 3 A). By contrast, our protocol is robust against arbitrary errors as long as the error probability is small.
The core idea of the FH protocol lies in quantum subspace verification. Later, the idea of subspace verification was studied more systematically in Refs. [60, 61]. In principle, this idea can be combined with our protocol to construct verification protocols that are robust to both correctable and noncorrectable errors. This line of research deserves further exploration in the future.
In preparation for further discussion, for 0 < x < 1 we define a function f(x) that is strictly increasing in x. The following proof is composed of two steps.
Step 1: The aim of this step is to prove Eq. (30) in the case 0 < δ ≤ λ².
In this case we have a chain of bounds in which (a) follows from Eq. (45), and (b) follows from the monotonicity of f(x) and the assumption 0 < ϵ ≤ 1/2. Hence, to prove Eq. (30), it suffices to show that the last expression in Eq. (48) is not larger than the required bound. This inequality can be proved by bounding the LHS of Eq. (49), which completes the proof of Step 1. Here (a) follows because 0 < δ ≤ λ² and [(1 + λ)/λ] · [ln λ / ln((3 + λ)/4)] − 1/2 ≥ 0 by Lemma S12 below, and (b) follows from Lemma S13 below.
Step 2: The aim of this step is to prove Eq. (30) in the case λ² < δ ≤ λ.

H. Proof of Proposition S1
Let S_1, ..., S_n be the n stabilizer generators of |G⟩ as given below Eq. (2) in the main text. Let P_i = (1/d) Σ_{j∈Z_d} S_i^j be the projector onto the eigenvalue-1 eigenspace of S_i; then |G⟩⟨G| = ∏_{i=1}^{n} P_i. Accordingly, we have Σ_{i=1}^{n} P_i ≤ |G⟩⟨G| + (n − 1)·1 [Eq. (60)]. Since the n projectors P_1, P_2, ..., P_n commute with each other, this inequality can be proved by considering the common eigenspaces of these projectors. If all P_i have eigenvalue 1, then the inequality holds because n ≤ 1 + (n − 1); if only 0 ≤ t < n projectors have eigenvalue 1, then the inequality holds because t ≤ 0 + (n − 1).
In the TMMMF protocol, the N_test systems to be tested are divided uniformly into n groups, and Alice performs the stabilizer test {P_i, 1 − P_i} on each system in the ith group, where P_i corresponds to passing the test. If at most ⌊N_test/(2n²)⌋ failures are observed among the N_test tests, Alice accepts Bob's state. When Bob generates i.i.d. quantum states τ ∈ D(H) with infidelity ϵ_τ, the average probability that τ passes a test is given by (1/n) Σ_{i=1}^{n} tr(P_i τ) ≤ 1 − ϵ_τ/n [Eq. (61)], where the inequality follows from Eq. (60) above. For j = 1, 2, ..., N_test, the outcome of the jth test can be associated with a {0, 1}-valued variable W_j, where 0 corresponds to passing the test and 1 corresponds to a failure. Then Eq. (61) implies that E[Σ_j W_j]/N_test ≥ ϵ_τ/n. The probability that Alice accepts reads Pr(Σ_j W_j ≤ ⌊N_test/(2n²)⌋). According to Hoeffding's theorem on the Poisson binomial distribution (see Theorem 4 in Ref. [69]), this probability can be bounded by a binomial tail, where the second inequality follows from Lemma S4. Therefore, the probability of acceptance satisfies a bound that confirms Proposition S1. Here (a) follows from the Chernoff bound (12), and (b) follows from the relation in Eq. (65). In Eq. (65), (a) follows from Lemma S3; (b) follows from Lemma S3 together with an elementary relation; and (c) follows from the inequality D(x∥y) ≥ (x − y)²/(2y) for x ≤ y.

Supplementary Note 4. CONVERSION OF MEASUREMENT NOISE
In the main text, we proposed a robust and efficient protocol for verifying the resource graph state |G⟩ ∈ H with ideal local projective measurements. To apply our protocol to verifying blind MBQC in a realistic setting, we need to deal with the unknown noise processes that afflict Alice's measurements.
Here we consider a noise model in which any actual measurement performed by Alice can be expressed as a composition of a noise process and the noiseless measurement. Suppose M_j = {M_{jl}}_l is the measurement that Alice intends to perform on the space H. Then the actual noisy measurement that she performs has the form M'_j = {E_j†(M_{jl})}_l, where E_j is a CPTP map that encodes the noise process associated with M_j, and E_j† is the adjoint map of E_j. To eliminate the impact of measurement noise on the verification and the MBQC, one method is to convert the measurement noise into preparation noise of the state ρ on H^{⊗(N+1)} prepared by Bob. To apply this method, we need to further assume that the noise process E_j is independent of the specific measurement, that is, E_j = E for all j.
Under the above noise assumption, suppose Alice runs the verification protocol developed in the main text with her noisy measurement devices. In the protocol, for i = 1, 2, ..., N, Alice performs a measurement M'_{j_i} on the ith system of ρ for state verification; once she accepts, she performs a measurement M'_{j_{N+1}} on the last system for MBQC. Note that this procedure is equivalent to the following procedure: the initial state is E^{⊗(N+1)}(ρ) instead of ρ; for i = 1, 2, ..., N, Alice performs the noiseless measurement M_{j_i} on the ith system of E^{⊗(N+1)}(ρ); once she accepts, she performs the noiseless measurement M_{j_{N+1}} on the remaining system for MBQC. Therefore, the verification of ρ with noisy measurements is equivalent to the verification of E^{⊗(N+1)}(ρ) with noiseless measurements. In this sense, the noise in the measurements has been successfully converted into noise in the state ρ. With noisy measurements, if Alice accepts the state prepared by Bob, then she can guarantee (with significance level δ) that the reduced state σ_{N+1} on the remaining system satisfies the same fidelity bound as in the noiseless case. Consequently, according to the relation between the fidelity and the trace norm, Alice can ensure the corresponding condition for any POVM element 0 ≤ E ≤ 1; that is, when performing a noisy measurement on σ_{N+1}, the deviation of any outcome probability from the ideal value is not larger than ε_λ(k, N, δ).
When the noise processes E_j depend on the measurements and the concrete forms of E_j are not known, the above conversion method does not work, especially when some local measurements used in MBQC differ from those used in state verification. To see this, note that the verification of ρ with noisy measurements can be viewed as the verification of E_{j_1} ⊗ ··· ⊗ E_{j_{N+1}}(ρ) with noiseless measurements. In this case, even if Alice accepts, the fidelity of the remaining system cannot be guaranteed, since it is subject to a noise process E_{j_{N+1}} which might differ from the noise processes in the state-verification procedure. To solve this problem, we need to use the same set of local measurements in MBQC and state verification. However, universal quantum computation cannot be achieved using graph states together with Pauli measurements. By contrast, Ref. [70] proposed a special qubit hypergraph state that can realize MBQC with Pauli X and Z measurements; in addition, this state can be verified with Pauli X and Z measurements. Although the same set of measurements is used in MBQC and state verification, two problems remain. First, it is not easy to construct a homogeneous strategy for this hypergraph state with X and Z measurements, so many results in this work cannot be applied directly, since they are based on homogeneous strategies. Second, the noise-conversion method summarized above cannot be applied if the X and Z measurements suffer from different noises.
The first problem shows the importance of extending our work to the non-homogeneous case, which deserves further study. The second problem can be resolved when the noise in measurements has a classical form, which means each actual X (Z) measurement can be regarded as the application of some classical noise (i.e., a 2 × 2 probability transition matrix) to the outcomes of the noiseless X (Z) measurement. In this case, the noise effect on a single system of ρ is determined by the ratio of X to Z measurements used in the test or in MBQC. To eliminate the impact of measurement noise, we can use two strategies Ω_A and Ω_B for testing the hypergraph state, which are applied to different systems of ρ. Here Ω_A has a larger ratio of X measurements and Ω_B has a smaller ratio of X measurements. Alice accepts if the error rates of both strategies are smaller than the threshold for verifying the hypergraph state. When the MBQC performed on the last system uses a ratio of X measurements intermediate between those of Ω_A and Ω_B, the noise effect on that system is also intermediate between the noise effects of the two strategies. Therefore, we can conclude that the sum of the classical noise in MBQC and the error in state preparation is smaller than a certain threshold, which guarantees the precision of the computation result on the last system. In this way, the conversion method is applicable under the assumption of classical noise. When this assumption does not hold, further study is required to devise a noise-conversion method.
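The interpolation argument above can be sketched numerically; the flip rates and measurement ratios below are hypothetical placeholders, not values from Ref. [70] or this paper.

```python
# Classical readout noise: a 2x2 stochastic transition matrix applied to
# the outcomes of an otherwise noiseless X or Z measurement.  For symmetric
# bit-flip noise the matrix is fully described by one flip probability.
eta_X, eta_Z = 0.02, 0.05    # hypothetical flip rates for X and Z readout
ratio_A, ratio_B = 0.8, 0.2  # fraction of X measurements in strategies A, B
ratio_mbqc = 0.5             # intermediate fraction used in the computation

def avg_flip_rate(ratio_x):
    """Average outcome-flip probability when a fraction ratio_x of the
    local measurements are X measurements and the rest are Z measurements."""
    return ratio_x * eta_X + (1 - ratio_x) * eta_Z

# Omega_A uses more X measurements, Omega_B fewer; the MBQC noise effect
# is sandwiched between the noise effects of the two test strategies.
assert avg_flip_rate(ratio_B) >= avg_flip_rate(ratio_mbqc) >= avg_flip_rate(ratio_A)
```

Because the average noise effect is linear in the X-measurement ratio, bounding it for the two extreme strategies bounds it for any intermediate ratio used in MBQC.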
not only to construct a robust self-testing protocol for |Ψ⟩, but also to determine the values of several key parameters such as p_QM and c, which are usually quite challenging. For the graph states considered in this work, a robust self-testing protocol was developed recently [72]. However, this protocol only applies to qubit graph states; in addition, it is not easy to calculate or to give a nontrivial bound for c, since the derivation relies on proofs of complicated operator inequalities [72]. As a consequence, no concrete DI QSC protocol for general qudit graph states is known so far, and the scaling behavior of the sample cost N with respect to the qudit number is not clear.
There are several similarities between our verification approach in the adversarial scenario and the DI QSC approach. First, after receiving states produced by the untrusted source, both approaches test some systems and provide a certificate on the remaining system(s) for application. This capability is powerful and crucial, especially when the state preparation is controlled by a potentially malicious adversary. Second, both approaches are able to deal with the case in which not all tests are successfully passed. As discussed in the main text, this ability is crucial for achieving robustness against noise and is appealing for practical applications.
The DI QSC approach makes different assumptions from our protocol, which works in the adversarial scenario. On the one hand, its assumption on the devices is weaker than ours: in our protocol the source is untrusted and the measurement device is trusted, while in DI QSC both the source and the measurement device are untrusted. On the other hand, DI QSC makes a stronger assumption on the source than our protocol: it assumes that the states produced by the source in different runs are mutually independent, while our protocol applies to a more general source that may produce arbitrarily correlated or entangled states. By combining DI QSC with our protocol, it might be possible to construct a robust and efficient verification protocol that applies to a general untrusted source and an untrusted measurement device. We leave this line of research to future work.

Supplementary Note 6. PROOFS OF RESULTS IN THE MAIN TEXT
In this section, we prove several results presented in the main text, including Eqs. (5) and (16), Theorems 1, 2, and 3, and Proposition 2.
A. Proof of the second equality of Eq. (5) in the main text

When the local dimension d is a prime, the sum of all P_k can be expressed as

  Σ_{k∈Z_d^n} P_k  (a)=  (1/d) Σ_{k∈Z_d^n} Σ_{j∈Z_d} g^{jk}  (b)=  d^{n−1} [1 + (d−1)|G⟩⟨G|],    (72)

which implies Eq. (5) in the main text. Here (a) follows from Eq. (4) in the main text, and (b) follows from the following relation:

  Σ_{k∈Z_d^n} Σ_{j∈Z_d\{0}} g^{jk}  (c)=  (d−1) Σ_{k∈Z_d^n} g^k  (d)=  (d−1) d^n |G⟩⟨G|,

where (c) holds because, for each nonzero j ∈ Z_d, the map k ↦ jk is a bijection on Z_d^n when d is prime, so that Σ_k g^{jk} = Σ_k g^k; and (d) holds because Σ_{k∈Z_d^n} g^k / d^n is the stabilizer projector associated with the stabilizer group S.
It is worth pointing out that the equality (d) of Eq. (72) holds for any local dimension d, but the equality (c) of Eq. (72) and the second equality of Eq. (5) in the main text may fail if d is not a prime. For example, for the single-vertex graph state |G⟩ = |+⟩ with n = 1 and d = 4, the stabilizer group is generated by g = X, and we have

  Σ_{k∈Z_4} P_k = 1 + 2|+⟩⟨+| + (1 + X²)/2.

It turns out that (1 + X²)/2 ≠ |+⟩⟨+|, so Σ_{k∈Z_4} P_k ≠ 1 + 3|G⟩⟨G|. So the second equality of Eq. (5) in the main text does not hold in this case.
Our verification protocol proposed in the main text works for graph states with prime local dimensions, which is in line with the convention of most literature on graph states. Much less is known about graph states when the local dimension d is not a prime. In particular, no homogeneous strategy for such graph states is known so far, and many results in this work may not hold, since they are based on homogeneous strategies. To remedy this problem, one may either try to construct homogeneous strategies for general graph states or try to extend our work to non-homogeneous cases. Both lines of research deserve further exploration in the future.

B. Proof of Theorem 1
First, Lemma S7 implies the first inequality of Eq. (17) in the main text, which confirms the lower bound in Theorem 1.
To complete the proof, it remains to prove the second inequality of Eq. (17) in the main text, that is, B_{N,k}(νrϵ) ≥ 1 − δ. When r = 0, this inequality is obvious. When r > 0, it follows from a chain of inequalities in which (a) follows from the Chernoff bound (12) and the inequality Nνrϵ ≤ k + 1, and (b) follows from Lemma S3.

B. Verification with a fixed number of allowed failures

In the main text, we have considered the verification of quantum states with a fixed error rate, in which the number of allowed failures k is proportional to the number of tests N. Here, we consider the case in which k is a fixed integer. Then ε_λ(k, N, δ) = O(N^{−1}) according to Lemma S6, so the guaranteed infidelity approaches 0 as the number of tests N increases. To study the minimum number of tests required to reach a given precision, define N_k(ϵ, δ, λ) as the smallest N such that ε_λ(k, N, δ) ≤ ϵ. If the number of tests N satisfies N ≥ N_k(ϵ, δ, λ), then ε_λ(k, N, δ) ≤ ϵ, given that ε_λ(k, N, δ) is nonincreasing in N. Next, we provide several informative bounds for N_k(ϵ, δ, λ); see Supplementary Note 7 E for proofs. Recall that z^*(k, δ, λ) is the smallest integer z that satisfies B_{z,k}(ν) ≤ δ, as defined in Eq. (14), and z_*(k, δ, λ) = z^*(k, δ, λ) − 1.
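The threshold z^*(k, δ, λ) can be computed directly from the binomial tail B. A minimal sketch, assuming the spectral gap is ν = 1 − λ for a homogeneous strategy (the parameter values are illustrative):

```python
from math import comb

def B(N, k, p):
    """Binomial tail: probability of at most k failures in N Bernoulli
    trials with failure probability p."""
    return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(k + 1))

def z_star(k, delta, lam):
    """Smallest integer z with B_{z,k}(nu) <= delta, taking nu = 1 - lam
    as the spectral gap of a homogeneous strategy."""
    nu = 1 - lam
    z = k + 1
    while B(z, k, nu) > delta:
        z += 1
    return z

# For k = 0 and lam = 1/2 the condition reduces to 2^(-z) <= delta.
print(z_star(0, 0.01, 0.5))  # -> 7
```

The incremental search terminates because B_{z,k}(ν) is strictly decreasing in z for z ≥ k and tends to 0.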
This proposition follows from Lemma 5.4 in our companion paper [34] according to the discussion in the Methods section. By this proposition, in the high-precision limit ϵ, δ → 0, the efficiency of our protocol is determined by the factor (λ ln λ^{−1})^{−1}, which attains its minimum e when λ = 1/e. Hence, the strategy with λ = 1/e can achieve the smallest sample cost for high-precision verification.
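The optimum λ = 1/e can be confirmed by a quick grid search over the factor (λ ln λ^{−1})^{−1}:

```python
import numpy as np

# Grid search for the lam minimizing (lam * ln(1/lam))^(-1) on (0, 1).
lam = np.linspace(0.01, 0.99, 9801)
vals = 1 / (lam * np.log(1 / lam))
i = int(np.argmin(vals))
print(lam[i], vals[i])  # close to 1/e and e, respectively
```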

C. Minimum number of tests for robust verification
For 0 < λ, δ, ϵ < 1 and 0 ≤ r < 1, the minimum number of tests required for robust verification in the adversarial scenario is denoted by N_min(ϵ, δ, λ, r), as defined in Eq. (18) in the main text. Algorithm 1 proposed in the main text can be used to calculate N_min(ϵ, δ, λ, r) and the corresponding number of allowed failures.
In Algorithm 1, the aim of steps 1-11 is to find k_min. Notably, steps 1-2 aim to find k_min in the case r = 0; steps 3-10 aim to find k_min in the case r > 0 by virtue of the following properties: (1) B_{M,k}(νrϵ) is strictly decreasing in M for M ≥ k by Lemma S4; and (2) ε_λ(k, M, δ) is nonincreasing in M for M ≥ k + 1 by Proposition 1. Steps 12-13 aim to find N_min by virtue of Eqs. (98) and (99).
Algorithm 1 is also complementary to Theorem 3 and Proposition S4, which imply upper bounds for N_min(ϵ, δ, λ, r). These bounds prove that the sample cost of our protocol is only O(ϵ^{−1} ln δ^{−1}), which achieves the optimal scaling behaviors with respect to the infidelity ϵ and the significance level δ. Numerical calculation based on Algorithm 1 shows that these bounds are still a bit conservative, especially when r is small. Nevertheless, they are quite informative about the general trends. In the case λ = 1/2, the ratio of the bound in Eq. (100) over N_min(ϵ, δ, λ, r) is illustrated in Supplementary Figure 2, which shows that this ratio is decreasing in r and is not larger than 14 in the high-precision limit ϵ, δ → 0.

Suppose a quantum device is expected to produce the pure state |Ψ⟩ ∈ H, but actually produces the states σ_1, σ_2, ..., σ_N in N runs. In the i.i.d. scenario, all these states are identical to σ. The goal of Alice is to verify whether σ is sufficiently close to the target state |Ψ⟩. To this end, in each run Alice can perform a random test from a set of accessible tests, where each test corresponds to a two-outcome measurement [31, 36]. The overall effect of these tests can also be described by a two-outcome measurement {Ω, 1 − Ω}, where Ω satisfies 0 ≤ Ω ≤ 1 and Ω|Ψ⟩ = |Ψ⟩, so that the target state can always pass each test. If all N tests are passed, then Alice accepts the states prepared; otherwise, she rejects.
In the basic framework outlined above, Alice can draw a meaningful conclusion about the state prepared if all N tests are passed, but little information can be extracted if one or more failures are observed. If the state prepared is not perfect, then it may be rejected with high probability even if it has high fidelity. For example, suppose 0 ≤ λ < 1, 0 < δ < 1, and ϵ_σ = ϵ/2 ≤ 1/4, where ϵ and δ are the target infidelity and significance level; then the probability that Alice accepts is upper bounded by a quantity (the second inequality is proved below) that decreases rapidly as δ decreases, which means previous verification protocols are not robust to noise in state preparation. As a consequence, many repetitions are necessary to guarantee that Alice accepts the state σ at least once. To achieve confidence level 1 − δ, for example, the number of repetitions required is given by Eq. (23). Accordingly, the total sample cost is substantially larger than the sample cost in Eq. (115), which does not take robustness into account.
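A short numeric sketch of this robustness problem, assuming a homogeneous strategy so that a state with infidelity ε_σ passes each individual test with probability 1 − νε_σ (all parameter values below are illustrative):

```python
from math import ceil, log

# A zero-failure protocol with N tests accepts a state of infidelity
# eps_sigma with probability (1 - nu * eps_sigma)^N, assuming each test
# is passed independently with probability 1 - nu * eps_sigma.
nu, eps_sigma, N = 0.5, 0.005, 1000
p_accept = (1 - nu * eps_sigma) ** N

# Repetitions of the whole N-test protocol needed to accept at least once
# with confidence 1 - delta, and the resulting total sample cost.
delta = 0.01
reps = ceil(log(delta) / log(1 - p_accept))
total_samples = reps * N
print(p_accept, reps, total_samples)
```

Even for a state whose infidelity is far below the target, the single-round acceptance probability is small, so the total sample cost is inflated by a large factor; tolerating a few failures avoids this blow-up.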
B. Relation between ε^iid_λ(k, N, δ) and the companion paper [34]

Recall that in Eq. (27) of the Methods section, the two optimizations are taken over all permutation-invariant joint distributions on the N + 1 variables Y_1, Y_2, ..., Y_{N+1}. Here we turn to an alternative setting in which the variables Y_1, Y_2, ..., Y_{N+1} are independent and identically distributed (i.i.d.), and consider the maximization in Eq. (125), which is taken over all i.i.d. distributions on Y_1, ..., Y_{N+1}. This quantity is called the upper confidence limit for the i.i.d. setting in our companion paper [34], which studied the asymptotic behaviors of Eq. (125) and related quantities. The quantity in Eq. (125) is actually equal to ε^iid_λ(k, N, δ) defined in Eq. (29) of the Methods section. Based on this relation, many properties of ε^iid_{λ=0}(k, N, δ) derived in Ref. [34] also apply to the guaranteed infidelity ε^iid_λ(k, N, δ) in this section after proper modifications. Notably, Eqs. (138) and (142) below in this paper are corollaries of Eqs. (90) and (87) in Ref. [34], respectively. In this paper, from a more practical perspective than Ref. [34], we derive several finite bounds for ε^iid_λ(k, N, δ) and related quantities. The main goal of this section is to provide a robust and efficient protocol for verifying quantum states in the i.i.d. scenario.
The monotonicity in N and in k follows from similar reasoning.
Proof of Proposition 4. We first confirm the lower bound in Eq. (31) of the Methods section. Here (a) follows from Eq. (29) in the Methods section, and (b) follows from the assumption δ ≤ 1/2. To prove (c), we shall consider two cases depending on the value of ⌊νsN⌋.
In conclusion, the inequality (c) in Eq. (129) holds, which completes the proof of Proposition 4.
Proof of Proposition 6. Under the stated choice of parameters, we have ε^iid_λ(k, N, δ) ≤ ϵ by Proposition 5. It follows that B_{N,k}(νϵ) ≤ δ according to Lemma S18, which confirms the first inequality of Eq. (33) in the Methods section.
To complete the proof, it remains to prove the second inequality of Eq. (33) in the Methods section, that is, B_{N,k}(νrϵ) ≥ 1 − δ. Equations (133) and (134) together confirm this inequality and complete the proof.

D. Verification with a fixed number of allowed failures
In the Methods section, we have considered the verification of quantum states with a fixed error rate, in which the number of allowed failures k is proportional to the number of tests N. In this subsection we consider the case in which the number of allowed failures k is a fixed integer. Similar to N_k(ϵ, δ, λ) for the adversarial scenario, for k ∈ Z_{≥0}, 0 < ϵ, δ < 1, and 0 ≤ λ < 1, we define the minimum number of tests N^iid_k(ϵ, δ, λ), where the second equality in the definition follows from Lemma S18. According to the monotonicity of ε^iid_λ(k, N, δ) in Proposition 3, if the number of tests N satisfies N ≥ N^iid_k(ϵ, δ, λ), then ε^iid_λ(k, N, δ) ≤ ϵ. The following proposition provides informative upper and lower bounds for N^iid_k(ϵ, δ, λ); it is the counterpart of Proposition S6 in Supplementary Note 7.
Proposition S8. Suppose 0 < ϵ, δ < 1, 0 ≤ λ < 1, and k ∈ Z_{≥0}. Then N^iid_k(ϵ, δ, λ) obeys the stated upper and lower bounds, with a sharper bound when δ ≤ 1/2.

Next, we consider the high-precision limit ϵ, δ → 0. In this case, as clarified in the following proposition, the resource cost of our protocol is inversely proportional to the spectral gap ν. This result is the counterpart of Proposition S7 in Supplementary Note 7.

E. The number of allowed failures for a given number of tests
As in the adversarial scenario, to construct a concrete verification protocol we need to specify the number k of allowed failures in addition to the number N of tests, so that the conditions of soundness and robustness are satisfied simultaneously. A small k is preferred to guarantee soundness, while a large k is preferred to guarantee robustness. To construct a robust and efficient verification protocol, we need to find a good balance between these two conflicting requirements. The following proposition provides a suitable interval for k that can guarantee soundness. Next, we turn to the condition of robustness. When Bob prepares i.i.d. quantum states τ ∈ D(H) with 0 < ϵ_τ < ϵ, the probability that Alice accepts τ reads p^iid_{N,k}(τ) = B_{N,k}(νϵ_τ), which is strictly increasing in k according to Lemma S4.
By the definition in Eq. (150), Eq. (33) in the Methods section holds when N = N^iid_min(ϵ, δ, λ, r) and k = k^iid_min(ϵ, δ, λ, r). In Algorithm 2, the aim of steps 1-11 is to find k^iid_min. In particular, steps 1-2 aim to find k^iid_min in the case r = 0; steps 3-10 aim to find k^iid_min in the case r > 0 by virtue of the fact that both B_{M,k}(νrϵ) and B_{M,k}(νϵ) are strictly decreasing in M for M ≥ k according to Lemma S4. Steps 12-13 aim to find N^iid_min by virtue of Eqs. (150) and (151).
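The two conditions of Eq. (33), B_{N,k}(νϵ) ≤ δ (soundness) and B_{N,k}(νrϵ) ≥ 1 − δ (robustness), can also be solved by a direct search. The sketch below is not the paper's Algorithm 2 (which organizes the search more carefully); it assumes ν = 1 − λ for a homogeneous strategy, and the parameter values are illustrative.

```python
from math import comb

def B(N, k, p):
    """Probability of at most k failures in N trials with failure probability p."""
    return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(k + 1))

def iid_min_tests(eps, delta, lam, r, k_max=100, n_max=5000):
    """Smallest number of allowed failures k, with its smallest N, such that
         B_{N,k}(nu*eps)   <= delta      (soundness) and
         B_{N,k}(nu*r*eps) >= 1 - delta  (robustness),
    taking nu = 1 - lam.  Both tails are decreasing in N, so for each k it
    suffices to check robustness at the soundness-minimal N."""
    nu = 1 - lam
    for k in range(k_max + 1):
        N = k + 1
        while N <= n_max and B(N, k, nu * eps) > delta:
            N += 1  # smallest N fulfilling soundness for this k
        if N <= n_max and B(N, k, nu * r * eps) >= 1 - delta:
            return k, N
    return None

k_min, N_min = iid_min_tests(0.2, 0.1, 0.5, 0.25)
print(k_min, N_min)
```

Because B_{N,k}(νrϵ) is also decreasing in N, a k is feasible if and only if the robustness condition already holds at the smallest N satisfying soundness.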
G. Upper bounds for N^iid_min(ϵ, δ, λ, r) in Eq. (38) of the Methods section

a. Main proof and illustration

Equation (38) in the Methods section can be proved by virtue of Proposition 6 in the Methods section, which provides a guideline for choosing appropriate parameters N and k to achieve a given verification precision and robustness. In practice, we need to choose a suitable error rate s so that the number of tests in Proposition 6 is as small as possible. To this end, we determine the maximum of the denominator in Eq. (35) in the Methods section. Since D(νs∥νrϵ) is nondecreasing in s while D(νs∥νϵ) is nonincreasing in s for s ∈ (rϵ, ϵ), this maximum is attained at the value of s for which D(νs∥νrϵ) = D(νs∥νϵ). This observation implies the upper bounds for N^iid_min(ϵ, δ, λ, r) in Eq. (38) of the Methods section; in particular, the second inequality of Eq. (38) follows from Lemma S19 below.
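The crossing point where D(νs∥νrϵ) = D(νs∥νϵ) has no simple closed form, but it is easy to locate by bisection, since the difference of the two divergences is nondecreasing in s on (rϵ, ϵ) by the monotonicity just stated. A sketch with illustrative parameters:

```python
from math import log

def D(a, b):
    """Binary KL divergence D(a || b) in nats."""
    return a * log(a / b) + (1 - a) * log((1 - a) / (1 - b))

def crossing_s(nu, eps, r, tol=1e-12):
    """Bisection for the s in (r*eps, eps) at which
    D(nu*s || nu*r*eps) = D(nu*s || nu*eps)."""
    f = lambda s: D(nu * s, nu * r * eps) - D(nu * s, nu * eps)
    lo, hi = r * eps * (1 + 1e-9), eps * (1 - 1e-9)  # f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s_star = crossing_s(nu=0.5, eps=0.01, r=0.5)
print(s_star)  # lies strictly between r*eps = 0.005 and eps = 0.01
```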
To illustrate the tightness of the bounds in Eq. (38) of the Methods section, the ratio of the second upper bound over N^iid_min(ϵ, δ, λ, r) is plotted in Supplementary Figure 4. In addition, the coefficient ξ(r) of the second upper bound is plotted in Supplementary Figure 5.
Proof of Lemma S21. The proof is composed of three steps.
First, we show that 2(1 + r)(1 − r) + 4r ln r > 0 when 0 < r < 1. Second, the relation in Eq. (164) holds when 0 < r < 1, which implies Eq. (166). Third, the required relations hold when 0 < r < 1 and 0 < p < 1.

A qudit graph state is prepared as follows: first, prepare the state |+⟩ := Σ_{j∈Z_d} |j⟩/√d for each vertex; then, for each edge e ∈ E, apply m_e times the generalized controlled-Z operation CZ_e on the vertices of e, where CZ_e = Σ_{k∈Z_d} |k⟩⟨k|_i ⊗ Z^k_j if e = (i, j). The resulting graph state has the form |G⟩ = ∏_{e∈E} CZ^{m_e}_e |+⟩^{⊗n}.
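The preparation circuit just described is easy to check numerically on a toy example. The sketch below builds a two-vertex qutrit graph state with a single edge of multiplicity m_e = 1 and checks the standard stabilizer condition g_1|G⟩ = |G⟩ with g_1 = X ⊗ Z (a known property of graph states; the instance is illustrative).

```python
import numpy as np

d = 3  # prime local dimension
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))  # generalized phase: Z|j> = w^j |j>
X = np.roll(np.eye(d), 1, axis=0)   # generalized shift: X|j> = |j+1 mod d>
plus = np.ones(d) / np.sqrt(d)      # |+> = sum_j |j> / sqrt(d)

# CZ_e = sum_k |k><k| (x) Z^k for the edge e = (1, 2).
CZ = sum(np.kron(np.outer(np.eye(d)[k], np.eye(d)[k]),
                 np.linalg.matrix_power(Z, k)) for k in range(d))

# |G> = CZ^{m_e} |+>|+> with edge multiplicity m_e = 1.
G = CZ @ np.kron(plus, plus)

# Stabilizer generator g_1 = X_1 Z_2 leaves |G> invariant.
g1 = np.kron(X, Z)
assert np.allclose(g1 @ G, G)
```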

FIG. 1. Schematic view of our verification protocol. Here the state ρ generated by Bob might be arbitrarily correlated or entangled on the whole space H^⊗(N+1). To verify the target state, Alice first randomly permutes all N + 1 systems, and then uses a strategy Ω to test each of the first N systems. Finally, she accepts the reduced state σ_{N+1} on the remaining system iff at least N − k tests are passed.

FIG. 2. Number of tests required to verify a general qudit graph state in the adversarial scenario within infidelity ϵ = 0.01, significance level δ, and robustness r = 1/2. The red dots correspond to N_min(ϵ, δ, λ, r) in Eq. (18) with λ = 1/2, and the red dashed curve corresponds to the RHS of Eq. (20), which is an upper bound for N_min(ϵ, δ, λ, r). The blue dashed curve corresponds to the HM protocol [16], and the green solid curve corresponds to the ZH protocol [31] with λ = 1/2. The performances of the TMMMF protocol [20] and the TM protocol [32] are not shown because the numbers of tests required are too large (see Supplementary Note 3).

Excerpt from Algorithm 2:
6: if M ≥ k + 1 and B_{M,k}(νϵ) ≤ δ then
12: Find the smallest integer N that satisfies N ≥ k^iid_min + 1 and B_{N,k^iid_min}(νϵ) ≤ δ.
13: N^iid_min ← N
14: return k^iid_min and N^iid_min

These relations confirm Proposition 5. Here the relation A ⇐ B means that B is a sufficient condition for A; (a) follows from Lemma S18 below; (b) follows from the Chernoff bound (12) and the inequality ⌊νsN⌋/N ≤ νϵ; (c) follows from Lemma S3 and the inequality ⌊νsN⌋/N ≤ νs.
Proof of Eq. (162). Here the notation A ⇐ B means that B is a sufficient condition for A; (a) holds because the LHS of Eq. (162) equals 0 when r = 1; and (b) holds because 1 − 1/r − ln r = 0 when r = 1.
1 − P_k corresponds to the failure. It is easy to check that P_k|G⟩ = |G⟩, which means |G⟩ can always pass the test. The stabilizer test corresponding to the operator 1 ∈ S is called the 'trivial test', since all states can pass it with certainty. To construct a verification strategy for |G⟩, we perform all distinct tests P_k for k ∈ Z_d^n randomly, each with probability d^{−n}. The resulting strategy is characterized by a two-outcome measurement {Ω, 1 − Ω}, which is determined by the verification operator Ω = Σ_{k∈Z_d^n} P_k / d^n.

FIG. 3. Variations of ε_λ(⌊νsN⌋, N, δ) with the number N of tests and the error rate s [by Eq. (6) in Supplementary Note 1]. Here λ = 1/2 and the significance level is δ = 0.05. Each horizontal line represents an error rate. As the test number N increases, ε_λ(⌊νsN⌋, N, δ) approaches s.
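For a small example, the verification operator assembled from all stabilizer tests can be checked to be homogeneous, Ω = (1/d)·1 + (1 − 1/d)|G⟩⟨G|, so that λ = 1/d and the spectral gap is ν = 1 − 1/d (consistent with the sum formula of Supplementary Note 6 A). The two-qubit instance below is illustrative.

```python
import numpy as np

d, n = 2, 2  # two-qubit graph state |G> = CZ|+>|+>
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)

plus = np.ones(d) / np.sqrt(d)
CZ = np.diag(np.array([omega ** (j * k) for j in range(d) for k in range(d)]))
G = CZ @ np.kron(plus, plus)
g = [np.kron(X, Z), np.kron(Z, X)]  # stabilizer generators g1, g2

def mpow(M, e):
    return np.linalg.matrix_power(M, e)

def P(k1, k2):
    """Projector onto the eigenvalue-1 eigenspace of g1^k1 g2^k2."""
    gk = mpow(g[0], k1) @ mpow(g[1], k2)
    return sum(mpow(gk, m) for m in range(d)) / d

# Verification operator Omega = sum_k P_k / d^n over all stabilizer tests.
Omega = sum(P(k1, k2) for k1 in range(d) for k2 in range(d)) / d**n

# Homogeneous form: Omega = (1/d) Id + (1 - 1/d)|G><G|.
target = np.eye(d**n) / d + (1 - 1 / d) * np.outer(G, G.conj())
assert np.allclose(Omega, target)
```

The eigenvalues of Ω are 1 (on |G⟩) and 1/d (elsewhere), which is exactly the homogeneous structure exploited throughout the analysis.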