Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis

The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator.

Scientific Reports | 6:30289 | DOI: 10.1038/srep30289

The classical random bits and the quantum random bit are combined by subsequent XOR operations. The resulting bit is used as the input of the same microwave switch as in the first run 17 . The XOR operation takes an additional 70 ns, shifting the start of the readout pulse to a later time by the same amount. We leave the end of the readout window unchanged, resulting in the same locality conditions as in the first test.
We note that the Twitter-based classical random bits by themselves cannot close the locality loophole: the raw data is available on the Internet well before the trials, and the protocol to derive the bits is deterministic and programmed locally. The only operations that are performed in a space-like separated manner are the XOR operations between 8 stored bits. Therefore, strictly speaking, only the quantum RNG provides fresh random bits. Since a loophole-free Bell test is described solely by the random input bit generation and the outcome recording at A and B (and in our case the event-ready signal recording at C), the second run can test the same null hypothesis as the first run, as these events are unchanged. That being said, the use of the Twitter-based classical randomness puts an additional constraint on local-hidden-variable models attempting to explain our data.
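The robustness of this XOR construction against bias in the classical bits can be made explicit: XOR-ing two independent bits never yields a bias larger in magnitude than the smaller of the two input biases. A minimal sketch (illustrative only; here "bias" means the deviation of P(bit = 1) from 1/2):

```python
def xor_bias(b1: float, b2: float) -> float:
    """Bias of the XOR of two independent bits with biases b1 and b2.

    Bias is defined as P(bit = 1) - 1/2. The result equals -2*b1*b2,
    so its magnitude never exceeds the smaller input bias.
    """
    p1, p2 = 0.5 + b1, 0.5 + b2
    # XOR is 1 when exactly one of the two input bits is 1.
    return p1 * (1 - p2) + (1 - p1) * p2 - 0.5
```

In particular, if one input bit is perfectly unbiased, the combined bit is perfectly unbiased regardless of the other input, which is why a single fresh quantum random bit suffices.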
Second, we set larger (i.e. less conservative) heralding windows at the event-ready detector in order to increase the data rate compared to the first experiment. We start the heralding window about 700 picoseconds earlier, motivated by the data from the first test. We predefine a window start of 5426.0 ns after the sync pulse for channel 0, and 5425.1 ns for channel 1. We set a window length of 50 ns.
Finally, we also use the ψ + Bell state, which is heralded by two photo-detection events in the same beamsplitter output arm at the event-ready station. In general the fidelity of this Bell state is lower than that of ψ − due to detector after-pulsing 25 (for ψ − the after-pulsing is not relevant, because ψ − is heralded by photo-detection events in different beamsplitter output arms). However, we found the after-pulsing effect to be small enough for the detectors used in this run. We set an adapted length of the second window of 4 ns and 2.5 ns for channels 0 and 1 respectively, where the exponentially decaying NV emission is still large relative to the after-pulsing probability. As described below, we can combine the ψ − -related and ψ + -related Bell trials into a single hypothesis test 26 .
Apart from these modifications, all settings, analysis software, calibrations and stabilisation routines were identical to those in the first run 17 .

Random numbers from Twitter. After each potential heralding event (corresponding to the E-events described in the Supplementary Information of Hensen et al. 17 ), at both location A and location B we take 8 new bits from a predefined random dataset (one for A and one for B) based on Twitter messages, and send them to the FPGA-based random-number combiner (see Fig. 1).
The random dataset for A was obtained by collecting 139952 messages from the Twitter trending topic with hashtag #2DaysUntilMITAM, starting from 14:47:58 on November 11th, 2015. The messages were collected using the Python Tweepy package (www.tweepy.org). Only the actual message text was used (no headers), consisting of at most 140 Unicode characters. From each message a single bit was obtained by first converting each character into an integer representing its Unicode code point, converting the integer to the smallest binary bit-string representing that number, and finally taking the parity of all the resulting bit-strings together (odd or even number of ones). The dataset for B was similarly obtained from 134501 messages with the hashtag #3DaysTillPURPOSE, streamed prior to dataset A, starting from 16:52:44 on November 10th, 2015.
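The per-message bit extraction described above can be sketched in a few lines of Python (an illustrative reimplementation, not the code used in the experiment):

```python
def tweet_to_bit(text: str) -> int:
    """Derive one bit from a tweet's message text.

    Each character is converted to its Unicode code point, each code
    point to its minimal binary representation, and the output bit is
    the parity (odd or even) of the total number of ones.
    """
    # bin() gives the minimal binary string with an '0b' prefix, which
    # contains no '1' characters and so does not affect the count.
    ones = sum(bin(ord(ch)).count("1") for ch in text)
    return ones % 2
```

For example, "a" has code point 97 = 0b1100001 (three ones), so it maps to the bit 1.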
We note that although one may expect the Justin Bieber and One Direction fan-bases to be sufficiently disjoint to produce uncorrelated binary datasets, the hashtag of dataset B featured in 2 out of 139952 tweets of dataset A, and vice versa in 4 out of 134501 tweets. Still, a Fisher exact independence test of dataset A (first 134501 bits) against dataset B results in a P-value of 0.63. The biases of the 8-bit parity sets were 0.44% and 0.95%, with statistical uncertainty 1/(2√N) of 0.38% and 0.39%, for A and B respectively. As these bits are XOR'ed with bits from the quantum random number generator, which has a much smaller bias, this has no expected effect on the bias of the used input settings. Finally, we characterized the performance of the FPGA combiners, which showed no errors in 10^8 XOR operations.

APD replacement. After 5 days of measurement, the APD at location C corresponding to channel 0 broke down during the daily calibration routine and was subsequently replaced. To take into account the changed detection-to-output delay for the event-ready filter settings, the laser pulse arrival time was recorded for the new APD before proceeding. We adapted the start of the event-ready window for channel 0 accordingly, and used this for all data taken afterwards.

Joint P-value for ψ − and ψ + heralded events. Here we expand the statistical analysis used for the first run 17 to incorporate the ψ − and ψ + events into one hypothesis test. For each of these states we perform a different variant of the CHSH game, and then use the methods of Elkouss and Wehner 26 to combine the two. The output signal of the "event-ready" box, t = (t_1, …, t_m), now has three possible outcomes, where the tag t_i = 0 still corresponds to a failure (no, not ready) event. We now distinguish two different successful preparations of the boxes A and B: t_i = − 1 corresponds to a successful preparation of the ψ − Bell state, and t_i = + 1 to a ψ + Bell state.
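As a check on the quoted uncertainties: since the bits are consumed in groups of 8, the relevant sample sizes are N = 139952/8 and N = 134501/8 parity groups, and the standard error on an estimated bias is 1/(2√N). A quick sketch:

```python
from math import sqrt

def bias_uncertainty(n_bits: int, group_size: int = 8) -> float:
    """Standard error 1/(2*sqrt(N)) on the estimated bias of
    N = n_bits / group_size parity groups (the binomial standard
    deviation of a fair bit, divided by sqrt(N))."""
    n_groups = n_bits / group_size
    return 0.5 / sqrt(n_groups)

u_a = bias_uncertainty(139952)  # dataset A: ~0.38%
u_b = bias_uncertainty(134501)  # dataset B: ~0.39%
```

This reproduces the 0.38% and 0.39% quoted in the text.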
In terms of non-local games, Alice and Bob are now playing two different games: the winning condition for trials with t i = − 1 differs from that for trials with t i = + 1, since the two heralded Bell states differ by a local unitary on one side, which relabels the winning outcome combinations. Note that both games have the same maximum winning probability. This means that we can take k := k − + k + , with k − (k + ) the number of wins in trials heralded with t i = − 1 (t i = + 1).

Results
In this test we set the total number of Bell trials to n 2 = 300. After 210 hours of measurement over 22 days within one month, we find S 2 = 2.35 ± 0.18, with S 2 the weighted average of the CHSH values obtained for ψ − heralded events (different detectors clicked) and for ψ + heralded events (same detector clicked). See Fig. 2. This yields a P-value of 0.029 in the conventional analysis 17 (a non-loophole-free analysis that assumes independent trials, perfect random number generators and Gaussian statistics), and, with k 2 = 237, a P-value of 0.061 in the complete analysis 17 (which allows for arbitrary memory between the trials and partially predictable random inputs, and makes no assumptions about the probability distributions).
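To see where a number of this magnitude comes from, a simplified version of the analysis (an illustrative sketch only: it assumes independent trials with a per-trial winning probability of exactly 3/4, ignoring the memory and RNG-imperfection corrections that the complete analysis handles) computes the binomial tail probability of observing at least k wins in n trials:

```python
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that n independent
    CHSH trials with per-trial winning probability p yield >= k wins."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

p_value = binomial_tail(300, 237, 0.75)  # k2 = 237 wins in n2 = 300 trials
```

The result is close to, but not identical to, the quoted 0.061, which is derived without the independence assumption.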
Combined P-value for the two tests. We now turn to analysing the statistical significance of the two runs combined. Let us first note that there are many methods for combining hypothesis tests and P-values, each with its own assumptions. Extending the conventional analysis, we take the weighted sum of the CHSH parameters obtained for both tests to find S combined = 2.38 ± 0.136, yielding a P-value of 2.6 · 10 −3 . For the complete analysis, we give two example cases here. In the first case the tests are considered to be fully independent; the P-values can then be combined using Fisher's method, resulting in a joint P-value of 1.7 · 10 −2 for the complete analysis. In the second example the two runs are considered to form a single test; the data can then be combined, k 1 + k 2 = 433 for n 1 + n 2 = 545, resulting in a joint P-value of 8.0 · 10 −3 for the complete analysis. We emphasize that these are extreme interpretations of a subtle situation and these P-values should be considered accordingly.

[Figure 2 caption: Shown are data for both ψ − heralded events (red, two clicks in different APDs at location C) and ψ + heralded events (blue, two clicks in the same APD). Numbers in bars represent the number of correlated and anti-correlated outcomes, respectively.]

Although the predefined event-ready filter settings were used for the hypothesis tests presented, the datasets recorded during the Bell experiments contain all photon detection times at location C. This allows us to investigate the effect of choosing different heralding windows in post-processing. Such an analysis does not yield reliable global P-values (look-elsewhere effect), but can give insight into the physics and optimal parameters of the experiment. In Fig. 3 we present the dependence of the recorded Bell violation S and the number of Bell trials n on an offset of the start of the windows.
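Fisher's method, used above for the fully-independent case, has a closed form for two tests: under the null hypothesis the statistic −2(ln p1 + ln p2) follows a chi-squared distribution with 4 degrees of freedom. Taking the complete-analysis P-value of the first run to be 0.039 (the value reported by Hensen et al. 17 ) together with the 0.061 found here:

```python
from math import exp, log

def fisher_combine(p1: float, p2: float) -> float:
    """Combine two independent P-values with Fisher's method.

    For two tests, the chi-squared survival function with 4 degrees of
    freedom has the closed form exp(-x/2) * (1 + x/2).
    """
    x = -2.0 * (log(p1) + log(p2))
    return exp(-x / 2.0) * (1.0 + x / 2.0)

p_joint = fisher_combine(0.039, 0.061)  # ~1.7e-2, matching the text
```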
For negative offsets, photo-detection events caused by reflected laser light start to play an important role, and as expected the Bell violation decreases, since in that regime the event-ready signal is no longer a reliable indicator of the generation of an entangled state. The observed difference between the runs in the offset times at which the laser reflections start to play a role is caused by the less aggressive filter settings in the second run. However, we see that in both runs the S-value remains constant up to a negative offset of about 0.8 ns, indicating that the filter settings were still chosen on the conservative side.

Statistical analysis of settings choices.
Both for the Bell run in Hensen et al. 17 and for the Bell run presented above, we are testing a single well-defined null hypothesis formulated before the experiment, namely that a local-realist model for space-like separated sites could produce data with a violation at least as large as we observe. The settings independence is guaranteed by the space-like separation of relevant events (at stations A, B and C). Since no-signalling is part of this local-realist model, there is no extra assumption that needs to be checked in the data. We have carefully calibrated and checked all timings to ensure that the locality loophole is indeed closed.
Nonetheless, one can still check (post-experiment) for many other types of potential correlations in the recorded dataset if one wishes to. However, since now many hypotheses are tested in parallel, P-values should take into account the fact that one is doing multiple comparisons (the look-elsewhere effect, LEE). Failure to do so can lead to too many false positives, an effect well known in particle physics. In contrast, there is no LEE for a single pre-defined null hypothesis as in our Bell test.
Formulation and testing of multiple hypotheses can result in obtaining almost arbitrarily low local P-values, which may have almost no global significance [27][28][29] . As an example, recalculating the P-value for the local-realist hypothesis, given the first dataset and a window start offset of − 900 picoseconds compared to the predefined window starts, results in a local P-value of 0.0081 using the complete analysis (see Fig. 3). Taking this to the extreme by searching over the window start offsets for both channels independently and the joint window length offset results in a local P-value of 0.0018. These examples clearly illustrate that, without taking into account that multiple hypotheses are being tested, such local P-values cannot be used to assign significance.
With these considerations in mind we analyse the settings choices in the two sub-sections below.
Settings choices in the first and second dataset. The distribution of the 245 input settings in the first dataset (see Fig. 4a in Hensen et al. 17 ) is (n (0,0) , n (0,1) , n (1,0) , n (1,1) ) = (53, 79, 62, 51), with n (a,b) the number of times the inputs (a, b) were used. This realisation looks somewhat unbalanced for a uniform distribution, and one could be motivated to test the null hypothesis that the RNGs are uniform. Performing a Monte-Carlo simulation of 10 5 realisations of a uniform multinomial distribution with size n = 245, we find a local P-value of 0.053 for obtaining such a distribution or a more extreme one. We can get further insight by looking at all the settings choices recorded during the test. Around every potential heralding event about 5000 settings are recorded, for which we find a local P-value of 0.57 (Table 1), consistent with a uniform settings distribution. Many additional tests can be performed on equally many slices or subsets of the data, where one or more of the filters (see Supplementary Information of Hensen et al. 17 ) is relaxed. In Table 1 we list the individual (local) P-values for a set of 4 hypotheses regarding the settings choices, for both the first and second dataset. For tests 1 and 2 we evaluate a two-tailed binomial test with equal success probability. For test 3 we perform a Monte-Carlo simulation of 10 5 realisations of a uniform multinomial distribution with size fixed to the number of observations in that particular row, i.e. n = 245 for the second row in Table 1.
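The Monte-Carlo test on the settings distribution can be sketched as follows. This is an illustrative version: it uses the chi-squared statistic as the measure of "such a distribution or more extreme", whereas the published analysis may have used a different statistic, so the resulting P-value need not match 0.053 exactly.

```python
import random

def chi2_stat(counts):
    # Chi-squared distance of the observed counts from a uniform
    # distribution over the 4 joint settings (a, b).
    n = sum(counts)
    expected = n / 4.0
    return sum((c - expected) ** 2 / expected for c in counts)

def settings_pvalue(observed, trials=10_000, seed=1):
    """Fraction of uniform-multinomial realisations that are at least
    as extreme (in chi-squared) as the observed settings counts."""
    rng = random.Random(seed)
    n = sum(observed)
    t_obs = chi2_stat(observed)
    hits = 0
    for _ in range(trials):
        counts = [0, 0, 0, 0]
        for _ in range(n):
            counts[rng.randrange(4)] += 1
        if chi2_stat(counts) >= t_obs:
            hits += 1
    return hits / trials

p_local = settings_pvalue([53, 79, 62, 51])  # first-dataset Bell trials
```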
We observe that only one local P-value is below 0.05: Fisher's exact test on the distribution of the settings in the first dataset yields a local P-value of 0.029. However, as described in the next subsection, when the look-elsewhere effect is properly taken into account this does not amount to a significant rejection of the uniform-settings hypothesis at the 0.05 level. Finally, the valid Bell trials of the first and second dataset combined, shown in the last row of Table 1, are also consistent with uniformly chosen input settings.
Significance and look-elsewhere effect. We now analyse the significance of the local P-values in Table 1 by taking into account the look-elsewhere effect. Say we are looking for correlations between parameters that are in fact completely independent. Looking at one correlation is like taking one random sample from a distribution; the probability that it is at 2 sigma or more extreme is thus about 0.05. If we look for 4 different correlations (assuming all parameters are independent), it is similar to taking 4 random samples, and thus the probability that at least one is at 2 sigma or more extreme is 1 − (1 − 0.05)^4 ≈ 0.18. Conversely, assuming fully independent hypotheses, the local P-value p′ should roughly obey 1 − (1 − p′)^4 < 0.05, i.e. p′ < 0.013, to be statistically significant at the 0.05 level.
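The correction above is the Šidák formula for independent tests; a two-line sketch:

```python
def sidak_threshold(alpha: float, m: int) -> float:
    """Local P-value threshold such that the chance of any of m
    independent tests falling below it is at most alpha."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

def family_wise_rate(p_local: float, m: int) -> float:
    """Probability that at least one of m independent tests yields a
    local P-value below p_local when all null hypotheses are true."""
    return 1.0 - (1.0 - p_local) ** m
```

For example, sidak_threshold(0.05, 4) gives the p′ < 0.013 quoted in the text, and family_wise_rate(0.05, 4) the 0.18.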
In our case it is actually more complicated, because there can be dependencies between hypotheses. We can obtain some of these numbers numerically. For instance, we have simulated the random number generation using Monte-Carlo under the assumption of independent uniform outputs and calculated local P-values for the four hypotheses listed above. The probability that at least one of these yields a local P-value p′ < 0.05 turns out to be about 0.13 for the 245 events in the Bell test. This is different from 1 − (1 − 0.05)^4 ≈ 0.18 because of correlations between the tests, but it is clearly much higher than 0.05. Conversely, to arrive at an overall probability of 0.05 of finding at least one test yielding a local P-value p′ < p threshold for the data in the first Bell dataset, we find p threshold = 0.021. In other words, if we were only looking at the settings corresponding to the valid Bell trials, a local P-value below 0.021 would signal a statistically significant violation of our hypothesis at the 0.05 level. We do not find such evidence in the valid Bell trial data (see first row of Table 1).
The last column gives the probability that at least one of the hypothesis tests on the data in that row yields a local P-value p′ < 0.05, given uniform settings. In the one-but-last column we give p threshold , again only for the dataset in that row, for significance at the 0.05 level. These values assume that we would only be testing our hypotheses on that particular row. Since we are in fact looking at different rows, the p threshold for each row is a strict upper bound to the p threshold for the full table, as we are looking at different cross-sections of the raw dataset at the same time; the p threshold for the full table will thus be lower, but it is not trivial to compute, given the large dependence between the subsets of data used for each row. However, since we do not find any local P-value below the p threshold for the corresponding row, we can conclude that the data does not allow rejection of the settings-independence hypothesis, even without calculating the global p threshold for the full table.

Refined analysis of imperfect random number generators. Ideally, the RNGs yield a fully unpredictable random bit in every trial of the Bell test. A deviation from the ideal behaviour can be denoted by an excess predictability or bias b, which can take on values between 0 and 1/2. In principle the value of b can be different in every trial of a Bell test, which can be modelled by some probability distribution over the value of b. By characterising the physical RNGs, we can hope to learn something about the mean τ of this probability distribution. As a particular example of an underlying probability distribution for the bias, consider the case where the random bit is perfectly predictable (b = 1/2) with probability f and perfectly unpredictable (b = 0) with probability 1 − f. This example could model a scenario where the random numbers are generated with some spread in time, such that some of them are produced so early that they could be known by the other party before the end of the trial.
A recent analysis of the effect of partial predictability of RNGs on the bound of the CH-Eberhard inequality revealed a strong dependence on the interpretation of the mean excess predictability 32 , estimated from characterisation of the RNGs. In particular, for a model in which the mean excess predictability ε is distributed (evenly) over all trials, the CH-Eberhard inequality can be violated even if the relevant Bell parameter J (which can be viewed as an average violation per trial in terms of probabilities) is much lower than ε. On the other hand, Kofler et al. 32 found that in case of an all-or-nothing scenario, such that in a fraction ε of the trials the RNG is fully predictable and in the rest of the trials fully unpredictable, the threshold value for a violation is roughly given by J > ε.
Motivated by these findings, we generalize here the analysis of the effect of imperfect random number generators on the winning probability per trial in the CHSH game. We extend the analysis in the Supplementary Information of Hensen et al. 17 (see also Elkouss and Wehner 26 ) to the case where any bias b is produced by an arbitrary underlying probability distribution per trial. That is, there is no maximum bias, but rather the bias can probabilistically take on any value. We find that in our case, as long as the event-ready signal is independent of the random bits, the only relevant parameter is the mean τ of the bias; the concrete form of the random variable has no impact on the bound on the probability of winning CHSH. In the example of early production of random bits, there exists a time-window in which independence of the event-ready signal can be guaranteed by its space-like separation from the early random generation event.
In the analysis below we explicitly take into account the possibility of early production of random bits, which can be viewed as a particular instance of the probability distribution over b described above. Indeed, we find that when the random bits are perfectly predictable with probability f and perfectly unpredictable with probability 1 − f, a distribution over the bias b with mean τ = f/2 links the two viewpoints of the analysis.
In order to make the discussion precise, in the following we first describe the random variables that characterize the experiment, and then give a rigorous derivation of the bound on the winning probability.

Properties of the tested LHVM.
We introduce the following sequences of random variables; the notation and arguments borrow from earlier work 26,[33][34][35] . Let T = (T 1 , …, T m ) be the sequence of event-ready signals. In an event-ready experiment, we make no assumptions regarding the statistics of the event-ready station, which may be under full control of the LHVM and can depend arbitrarily on the history of the experiment.
We introduce three sequences of random variables to model each RNG.
Let X = (X 1 , …, X m ) and Y = (Y 1 , …, Y m ) denote the inputs to the boxes.
Let F A = (F A 1 , …, F A m ) and F B = (F B 1 , …, F B m ) denote two sequences of binary variables that take the value 1 if the corresponding random number was generated so early that signalling is possible, and 0 otherwise. We call the former an early number and the latter an on-time number. Finally, let two further sequences, taking values in the range [− 1/2, 1/2], denote the biases of the random number generators at each attempt. We assume here that, while these distributions can differ for each i, they do not depend on the history H i . Using more involved notation, the same bound can be derived if only the mean conditioned on the history is known.

Table 1. From left to right, the columns give: the dataset on which the statistics are computed; the local P-value for the null hypothesis that RNG A is uniform; the local P-value for the null hypothesis that RNG B is uniform; the local P-value for the null hypothesis that RNGs A&B are jointly uniform; Fisher's test; Pearson's test; and p threshold and the joint P-value p joint . The joint P-value for a set of hypotheses is the probability that for at least one of the hypotheses we observe a P-value less than α, where here α = 0.05. This captures the fact that the more hypotheses we test, the more likely it becomes that one of them will fall below the significance threshold. The value p threshold is the largest threshold for individual tests for which the joint P-value for that row is less than 0.05; the local P-values in the row should thus be compared to this number. This captures the fact that when testing multiple hypotheses, the local P-values of the individual tests need to be much smaller for the overall test to be significant. The local P-values in the columns RNG A, RNG B, Fisher and Pearson are exact calculations. The columns RNG A&B, p threshold and p joint are approximations obtained via 10 5 , 10 4 and 10 4 trials of a Monte-Carlo simulation, respectively.
The random variable H i models the state of the experiment prior to the measurement. As such, H i includes any hidden variables, sometimes denoted by the letter λ 33 . It also includes the history of all possible configurations of inputs and outputs of the prior attempts, (X j , Y j , A j , B j , T j ) for j = 1, …, i − 1. The null hypothesis (to be refuted) is that our experimental setup can be modelled by an LHVM. LHVMs satisfy the following conditions:
1. Independent random number generators. Conditioned on the history of the experiment, the random numbers are independent of each other and of the output of the event-ready signal. We allow X i and Y i to be partially predictable given the history of the experiment; the predictability is governed by the random variables F A i , F B i . Furthermore, from the characterization of the devices we have, for all i, bounds on the probability of an early number and on the mean bias.
2. Locality. The outputs a i and b i only depend on the local input settings and history: they are independent of each other and of the input setting at the other side, conditioned on the previous history, the current event-ready signal and the inputs being generated on-time.
3. Sequentiality of the experiments. Every one of the m attempts takes place sequentially, such that any possible signalling between different attempts beyond the previous conditions is prevented 36 .
Except for these conditions the variables might be correlated in any possible way.
Winning probability for imperfect random number generators. Here we derive a tight upper bound on the winning probability of CHSH with imperfect random number generators in an event-ready setup. For CHSH, the inputs X i , Y i , the outputs A i , B i and the output of the heralding station T i take values 0 and 1. If T i = 0 the scoring variable C i always takes the value zero; if T i = 1, then C i = 1 when x · y = a ⊕ b and C i = 0 in the remaining cases. We take the RNGs to have a maximum probability f of producing early random numbers.
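The scoring rule just described can be sketched explicitly (a hypothetical helper for illustration):

```python
def chsh_score(t: int, x: int, y: int, a: int, b: int) -> int:
    """Scoring variable C_i for one attempt of the event-ready CHSH game.

    Non-heralded attempts (t = 0) score zero; a heralded trial (t = 1)
    scores one exactly when x AND y equals a XOR b.
    """
    if t == 0:
        return 0
    return 1 if (x & y) == (a ^ b) else 0
```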
Theorem. Let the sequences of random variables described in the previous section correspond to m attempts of a CHSH heralding experiment. Suppose that the null hypothesis holds, i.e., nature is governed by an LHVM, and that for all i ≤ m the probability of an early random number is at most f on each side, while the mean bias of each on-time random number is at most τ. Then for i ≤ m, any possible history H i = h i of the experiment, and T i = 1, the probability of C i = 1 is upper bounded by a function of f and τ alone; in particular, the bound depends on the underlying bias distribution only through its mean τ.

Proof. Let us first bound the effect of the early random numbers on the winning probability. We have

P(C i = 1 | T i = 1, h i ) ≤ P(F A i = 1 | h i ) + P(F B i = 1 | h i ) + P(C i = 1, F A i = 0, F B i = 0 | T i = 1, h i ) ≤ 2f + P(C i = 1 | F A i = 0, F B i = 0, T i = 1, h i ).

The first inequality follows by assuming that the CHSH game is won with probability one whenever a random number is early. The second inequality follows from the assumption that P(F A i = 1 | h i ), P(F B i = 1 | h i ) ≤ f. Let us now bound the bias of the on-time numbers. We focus on the random numbers X i ; the same argument can be made for Y i . For simplicity, we omit the explicit conditioning on H i = h i . First of all, note that we can expand

P(X i = x) = P(X i = x | F A i = 0) P(F A i = 0) + P(X i = x | F A i = 1) P(F A i = 1).

Combining this with the bias assumption P(X i = x) ≤ 1/2 + τ and with P(F A i = 1) ≤ f, we obtain

α x := P(X i = x | F A i = 0) ≤ min{(1/2 + τ)/(1 − f), 1},

and similarly for β y := P(Y i = y | F B i = 0). Let us now expand the probability that C i = 1 conditioned on the event that both numbers were on-time. For simplicity, we drop the explicit conditioning:

P(C i = 1) = Σ x,y∈{0,1} α x β y P(A i ⊕ B i = x · y | X i = x, Y i = y).

By the locality condition and the definition of conditional probability, the conditional winning probabilities are produced by a convex combination of deterministic local strategies, and any deterministic strategy satisfies a ⊕ b = x · y for at most three of the four input pairs (x, y). Now expand the weights: we know that 1/2 − τ ≤ α x , β y ≤ 1/2 + τ; in principle the bias need not take values at the extremes of the range. Without loss of generality let α 0 = 1/2 + τ A and β 0 = 1/2 + τ B , with τ A , τ B ∈ [− 1/2, 1/2]. The resulting expression is a sum of two convex combinations, so it takes its maximum value at one of the extreme points, with χ 0 ∈ {0, 1} and χ 1 ∈ {0, 1}. Considering all four combinations of values for χ 0 and χ 1 , the relevant sum is in all cases upper bounded by 3, which yields an upper bound on the winning probability per trial that depends only on f and τ. □

In comparison, for testing such theories an experiment using the CH-Eberhard inequality would require J > 10 −3 to obtain a violation 32 , which is two orders of magnitude beyond the state of the art of photonic experiments 18,19 .
This difference may be traced back to the use of event-ready detectors in our experiments, which dramatically increases the fidelity of the entangled state and thus the winning probability per Bell trial.

Conclusion
The loophole-free violation of Bell's inequality in the second data run reported here further strengthens the rejection of a broad class of local-realist theories. We find that the data is consistent with independent settings choices, both in the first and second dataset, as well as in the combined dataset. Refined analysis of the effect of a bias in the random number generators shows that only the mean bias plays a role in the winning probability. As a consequence, the P-value bound for our experiments is independent of the underlying distribution for the RNG bias, for random bits produced up to 690 ns too early by the random number generator. The large spatial separation and the strong violation in winning probability per trial of about 0.05 make our implementation promising for future applications of non-locality such as device-independent quantum key distribution 37 and randomness generation 38,39 .