Introduction

Ever since the inception of quantum theory, its counterintuitive predictions have stimulated debate about the fundamental nature of reality. In 1964, John Bell showed that the correlations between outcomes of distant measurements allowed under local realism1 are strictly bounded, while certain quantum mechanical states are predicted to violate this bound2. Numerous violations of a Bell inequality in agreement with quantum theory have been reported3,4,5,6,7,8,9,10,11,12,13,14,15,16. However, due to experimental limitations, additional assumptions were required in all experiments up to 2015 in order to reject the local-realist hypothesis, resulting in loopholes. Last year we reported the first experimental loophole-free violation of the CHSH-Bell inequality, using entangled electron spins associated with nitrogen-vacancy (NV) centers in diamond separated by 1.3 km17. Less than three months after our experiment, two groups observed violations of the CH-Eberhard inequality with spatially separated photons18,19 and before the end of the year first signatures of a CHSH-Bell violation with single rubidium atoms were found20.

Below, we report on data from a second loophole-free Bell test performed with the same setup as in Hensen et al.17. Additionally, we analyse in detail the recorded distribution of settings choices in both the first and second datasets. Finally, we investigate the effect of arbitrary models underlying the bias in the random number generation.

Second run

After finishing the first loophole-free Bell experiment in July 2015, both the A(lice) and B(ob) setups were modified and used in various local experiments. In December 2015, we rebuilt the Bell setup for performing a second run of the Bell test, with three small modifications compared to the first run.

First, we add a source of classical random numbers for the input choices19. A random basis choice is now made by applying an XOR operation between a quantum random bit generated as previously21,22,23 and classical random bits based on Twitter messages, as proposed by Pironio24. In particular, we generate two sets of classical random numbers, one for the basis choice at A and one for the basis choice at B (see details in the following sections). At each location, 8 of these bits are fed into an FPGA. Just before the random basis rotation, the 8 Twitter bits and 1 quantum random bit are combined by subsequent XOR operations. The resulting bit is used as the input of the same microwave switch as used in the first run17. The XOR operation takes 70 ns of additional time, shifting the start of the readout pulse to a later time by the same amount. We leave the end of the readout window unchanged, resulting in the same locality conditions as in the first test.
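
The combination step can be illustrated with a minimal sketch (this models the logic only; the actual implementation runs on the FPGA and the variable names here are ours):

```python
# Minimal model of the random-input combination described above (not the
# actual FPGA firmware): eight stored Twitter-derived bits and one fresh
# quantum random bit are folded together by successive XOR operations.

def combine_input_bit(twitter_bits, quantum_bit):
    """Return the basis-choice bit: XOR of 8 classical bits and 1 quantum bit."""
    assert len(twitter_bits) == 8 and all(b in (0, 1) for b in twitter_bits)
    assert quantum_bit in (0, 1)
    result = quantum_bit
    for b in twitter_bits:
        result ^= b  # subsequent XOR operations, as in the FPGA combiner
    return result

# As long as the quantum bit is unbiased and independent, the output is
# unbiased, regardless of any bias in the stored classical bits.
print(combine_input_bit([1, 0, 1, 1, 0, 0, 1, 0], quantum_bit=1))
```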

We note that the Twitter-based classical random bits by themselves cannot close the locality loophole: the raw data is available on the Internet well before the trials and the protocol to derive the bits is deterministic and programmed locally. The only operations that are performed in a space-like separated manner are the XOR operations between 8 stored bits. Therefore, strictly speaking only the quantum-RNG is providing fresh random bits. Since a loophole-free Bell test is described solely by the random input bit generation and the outcome recording at A and B (and in our case the event-ready signal recording at C), the second run can test the same null hypothesis as the first run as these events are unchanged. That being said, the use of the Twitter-based classical randomness puts an additional constraint on local-hidden-variable models attempting to explain our data.

Second, we set larger (i.e. less conservative) heralding windows at the event-ready detector in order to increase the data rate compared to the first experiment. We start the heralding window about 700 picoseconds earlier, motivated by the data from the first test. We predefine a window start of 5426.0 ns after the sync pulse for channel 0 and 5425.1 ns for channel 1. We set a window length of 50 ns.

Finally, we also use the ψ+ Bell state, which is heralded by two photo-detection events in the same beamsplitter output arm at the event-ready station. In general the fidelity of this Bell state is lower than that of ψ− due to detector after-pulsing25 (note that for ψ− the after-pulsing is not relevant because ψ− is heralded by photo-detection events in different beamsplitter output arms). However, we found the after-pulsing effect to be small enough for the detectors used in this run. We set an adapted window length of the second window of 4 ns and 2.5 ns for channels 0 and 1 respectively, where the exponentially decaying NV emission is still large relative to the after-pulsing probability. As described below, we can combine the ψ−-related and ψ+-related Bell trials into a single hypothesis test26.
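
A highly simplified sketch of the heralding filter, using the window parameters quoted above, is given below; the exact filter logic is part of the analysis code of Hensen et al.17, and in particular the assumption that the shortened second window applies only to same-detector (ψ+) heralds is ours:

```python
# Illustrative sketch of the event-ready heralding filter (our simplification,
# not the analysis code of ref. 17). Times are in ns after the sync pulse of
# the corresponding round.

WINDOW_START = {0: 5426.0, 1: 5425.1}   # predefined window starts per channel
FIRST_WINDOW_LEN = 50.0                  # ns
SECOND_WINDOW_LEN = {0: 4.0, 1: 2.5}     # ns, shortened to suppress after-pulsing

def classify_herald(first_click, second_click):
    """Classify two detections (channel, time) as a psi- or psi+ herald, or None."""
    ch1, t1 = first_click
    ch2, t2 = second_click
    ok1 = WINDOW_START[ch1] <= t1 <= WINDOW_START[ch1] + FIRST_WINDOW_LEN
    # Assumption: the short second window is applied to same-channel events,
    # where detector after-pulsing would otherwise produce false heralds.
    length2 = SECOND_WINDOW_LEN[ch2] if ch1 == ch2 else FIRST_WINDOW_LEN
    ok2 = WINDOW_START[ch2] <= t2 <= WINDOW_START[ch2] + length2
    if not (ok1 and ok2):
        return None
    return "psi+" if ch1 == ch2 else "psi-"

print(classify_herald((0, 5430.0), (1, 5426.0)))  # different arms -> 'psi-'
print(classify_herald((1, 5440.0), (1, 5426.5)))  # same arm -> 'psi+'
```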

Apart from these modifications, all settings, analysis software, calibrations and stabilisation routines were identical to those in the first run17.

Random numbers from Twitter

After each potential heralding event (corresponding to the E-events described in the Supplementary Information of Hensen et al.17), at both locations A and B we take 8 new bits from a predefined random dataset (one for A and one for B) based on Twitter messages and send them to the FPGA-based random-number combiner (see Fig. 1).

Figure 1: Schematic of random input bit generation by combining bits from a quantum random number generator (QRNG) and classical random bits from a dataset based on Twitter messages.

The random dataset for A was obtained by collecting 139952 messages from the Twitter trending topic with hashtag #2DaysUntilMITAM, starting from 14:47:58 on November 11th, 2015. The messages were collected using the Python Tweepy package (www.tweepy.org). Only the actual message text was used (no headers), consisting of at most 140 Unicode characters. From each message a single bit was obtained by first converting each character into an integer representing its Unicode code point, converting that integer to the smallest binary bit-string representing the number, and finally taking the parity of all the resulting bit-strings together (even or odd number of ones). The dataset for B was similarly obtained from 134501 messages with the hashtag #3DaysTillPURPOSE, streamed prior to dataset A, starting from 16:52:44 on November 10th, 2015.
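
A short sketch of this bit-extraction step (a re-implementation from the description above, not the code used in the experiment) reads:

```python
# Map one tweet text (at most 140 Unicode characters) to a single bit:
# take the parity of the ones in the binary representations of all its
# Unicode code points.

def tweet_to_bit(text: str) -> int:
    ones = 0
    for ch in text:
        ones += bin(ord(ch)).count("1")  # code point -> minimal binary string
    return ones % 2                      # even number of ones -> 0, odd -> 1

print(tweet_to_bit("#2DaysUntilMITAM"))
```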

We note that although one may expect the Justin Bieber and One Direction fan bases to be sufficiently disjoint to produce uncorrelated binary datasets, the hashtag of dataset B featured in 2 out of the 139952 tweets of dataset A and, vice versa, the hashtag of dataset A featured in 4 out of the 134501 tweets of dataset B. Still, a Fisher-exact independence test of A (first 134501 bits) and B’s dataset results in a P-value of 0.63. The biases of the 8-bit parity sets were 0.44% and 0.95%, with statistical uncertainties of 0.38% and 0.39% for A and B respectively. As these bits are XOR’ed with bits from the quantum random number generator, which has a much smaller bias, this is not expected to affect the bias of the input settings used. Finally, we characterized the performance of the FPGA combiners, which showed no errors in 10^8 XOR operations.

APD replacement

After 5 days of measurement, the APD at location C corresponding to channel 0 broke down during the daily calibration routine and was subsequently replaced. To take into account the changed detection-to-output delay for the event-ready filter settings, the laser pulse arrival time was recorded for the new APD before proceeding. We adapted the start of the event-ready window for channel 0 accordingly and used this for all the data taken afterwards.

Joint P-value for ψ− and ψ+ heralded events

Here we expand the statistical analysis used for the first run17 to incorporate the ψ− and ψ+ events into one hypothesis test. For each of these states we perform a different variant of the CHSH game and then use the methods of Elkouss and Wehner26 to combine the two: the output signal of the “event-ready” box now has three possible outcomes, where the tag ti = 0 still corresponds to a failure (no, not ready) event. We now distinguish two different successful preparations of the boxes A and B: ti = −1 corresponds to a successful preparation of the ψ− Bell state and ti = +1 to a successful preparation of the ψ+ Bell state. In terms of non-local games, Alice and Bob are playing two different games: in case ti = −1 they must satisfy the CHSH winning condition appropriate for the ψ− state in order to win, and in case ti = +1 the correspondingly adapted condition for the ψ+ state. Note that both games have the same maximum winning probability. This means that we can take k := k− + k+, with k− the number of times the ψ− game is won and k+ the number of times the ψ+ game is won; the remainder of the analysis remains the same and in particular the obtained bound on the P-value is unchanged (see Elkouss and Wehner26, page 20). The adapted CHSH function (see Supplementary Information of Hensen et al.17) is then the average of the CHSH values obtained from the ψ−-heralded and ψ+-heralded trials, weighted by their respective numbers of trials, and the adapted total number of events becomes the sum of the numbers of ψ−-heralded and ψ+-heralded trials.

Results

In this test we set the total number of Bell trials to n2 = 300. After 210 hours of measurement over 22 days within one month, we find S2 = 2.35 ± 0.18, with S2 the weighted average of the CHSH value obtained for ψ− heralded events (different detectors clicked) and that obtained for ψ+ heralded events (same detector clicked). See Fig. 2.

Figure 2: Second loophole-free Bell test results.

(a) Summary of the data and the CHSH correlations. We record a total of n2 = 300 trials of the Bell test. Dotted lines indicate the expected correlation based on the spin readout fidelities and the characterization measurements presented in Hensen et al.17. Shown are data for both ψ− heralded events (red, two clicks in different APDs at location C) and ψ+ heralded events (blue, two clicks in the same APD). Numbers in bars represent the number of correlated and anti-correlated outcomes respectively, for ψ− (red) and ψ+ (orange). Error bars shown indicate the statistical uncertainty, with n(a,b) the number of events with inputs (a, b).

This yields a P-value of 0.029 in the conventional analysis17 (a non-loophole-free analysis that assumes independent trials, perfect random number generators and Gaussian statistics) and, with k2 = 237, a P-value of 0.061 in the complete analysis17 (which allows for arbitrary memory between the trials, allows partially predictable random inputs and makes no assumptions about the probability distributions).
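
To give a feeling for where these numbers come from, the complete analysis bounds the P-value by a binomial tail at the local-hidden-variable winning probability17,26. The sketch below ignores the small correction for RNG imperfections contained in the full bound, so it only approximately reproduces the quoted value:

```python
# Rough illustration of the complete-analysis bound: with n Bell trials and
# k CHSH-game wins, the P-value is bounded by a binomial tail at the LHV
# winning probability. The small RNG-imperfection correction of refs 17, 26
# is omitted here, so the number is only approximately the quoted 0.061.
from scipy.stats import binom

n2, k2 = 300, 237                   # second run: trials and wins
p_lhv = 0.75                        # ideal local bound on the winning probability
print(binom.sf(k2 - 1, n2, p_lhv))  # Pr[Bin(n2, p_lhv) >= k2]
```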

Combined P-value for the two tests

We now turn to analysing the statistical significance of the two runs combined. Let us first note that there are many methods for combining hypothesis tests and P-values, each with its own assumptions. Extending the conventional analysis, we take the weighted sum of the CHSH parameters obtained for both tests to find Scombined = 2.38 ± 0.136, yielding a P-value of 2.6 · 10−3. For the complete analysis, we give here two example cases. The first case is where the tests are considered to be fully independent; the P-values can then be combined using Fisher’s method, resulting in a joint P-value of 1.7 · 10−2 for the complete analysis. As a second example the two runs are considered to form a single test; the data can then be combined, k1 + k2 = 433 for n1 + n2 = 545, resulting in a joint P-value of 8.0 · 10−3 for the complete analysis. We emphasize that these are extreme interpretations of a subtle situation and these P-values should be considered accordingly.
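
For the independent-runs case, Fisher's method can be reproduced in a few lines; the first-run complete-analysis P-value of 0.039 is taken from Hensen et al.17:

```python
# Combine the two complete-analysis P-values under the assumption of fully
# independent runs, using Fisher's method (-2*sum(ln p) ~ chi^2 with 4 dof).
from scipy.stats import combine_pvalues

p_run1, p_run2 = 0.039, 0.061           # ref. 17 and the second run reported here
stat, p_joint = combine_pvalues([p_run1, p_run2], method="fisher")
print(f"joint P-value: {p_joint:.4f}")  # about 1.7e-2, as quoted in the text
```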

Although the predefined event-ready filter settings were used for the hypothesis tests presented, the datasets recorded during the Bell experiments contain all the photon detection times at location C. This allows us to investigate the effect of choosing different heralding windows in post-processing. Such an analysis does not yield reliable global P-values (look-elsewhere effect), but it can give insight into the physics and optimal parameters of the experiment. In Fig. 3 we present the dependence of the recorded Bell violation S and the number of Bell trials n on an offset of the start of the windows. For negative offsets, photo-detection events caused by reflected laser light start to play an important role and, as expected, the Bell violation decreases since the event-ready signal is in that regime no longer a reliable indicator of the generation of an entangled state. The observed difference between the runs in the offset times at which the laser reflections start to play a role is caused by the less aggressive filter settings in the second run. However, we see that in both runs the S-value remains constant up to a negative offset of about 0.8 ns, indicating that the filter settings were still chosen on the conservative side.

Figure 3: CHSH parameter S, number of Bell trials n and post-selected complete-analysis local P-value versus window start offset for the event-ready photon detections at location C, for the first (grey) and second (orange) dataset.

The time-offset shown is with respect to the predefined windows (corresponding to the dotted line). Confidence region shown is one sigma, calculated according to the conventional analysis. Shifting the window back in time, the relative fraction of heralding events caused by photo-detection from laser reflections increases, thereby reducing the observed Bell violation.

Statistical analysis of settings choices

Both for the Bell run in Hensen et al.17 and for the Bell run presented above, we are testing a single well-defined null hypothesis formulated before the experiment, namely that a local-realist model for space-like separated sites could produce data with a violation at least as large as we observe. The settings independence is guaranteed by the space-like separation of relevant events (at stations A, B and C). Since no-signalling is part of this local-realist model, there is no extra assumption that needs to be checked in the data. We have carefully calibrated and checked all timings to ensure that the locality loophole is indeed closed.

Nonetheless, one can still check (post-experiment) for many other types of potential correlations in the recorded dataset if one wishes to. However, since now many hypotheses are tested in parallel, P-values should take into account the fact that one is doing multiple comparisons (the look-elsewhere effect, LEE). Failure to do so can lead to too many false positives, an effect well known in particle physics. In contrast, there is no LEE for a single pre-defined null hypothesis as in our Bell test.

Formulation and testing of multiple hypotheses can result in obtaining almost arbitrarily low local P-values, which may have almost no global significance27,28,29. As an example, recalculating the P-value for the local-realist hypothesis, given the first dataset for a window start offset of −900 picoseconds compared to the predefined window starts, results in a local P-value of 0.0081 using the complete analysis (see Fig. 3). Taking this to the extreme by doing a search over the window start offsets for both channels independently and over the joint window length offset results in a local P-value of 0.0018. These examples clearly illustrate that, without taking into account that multiple hypotheses are being tested, such local P-values cannot be used to assign significance.

With these considerations in mind we analyse the settings choices in the two sub-sections below.

Settings choices in the first and second dataset

The distribution of the 245 input settings in the first dataset (see Fig. 4a in Hensen et al.17) is (n(0,0), n(0,1), n(1,0), n(1,1)) = (53, 79, 62, 51), with n(a,b) the number of times the inputs (a, b) were used. This realisation looks somewhat unbalanced for a uniform distribution and one could be motivated to test the null hypothesis that the RNGs are uniform. Performing a Monte-Carlo simulation of 10^5 realisations of a uniform multinomial distribution with size n = 245, we find a local P-value of 0.053 for obtaining such a distribution or a more extreme one. We can get further insight by looking at all the setting choices recorded during the test. Around every potential heralding event about 5000 settings are recorded, for which we find a local P-value of 0.57 (Table 1), consistent with a uniform setting distribution.
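
The Monte-Carlo estimate can be sketched as follows; note that the choice of the χ2 statistic as the measure of "more extreme" is our assumption, and the paper's exact ordering of outcomes may differ slightly:

```python
# Monte-Carlo local P-value for the observed settings counts under the null
# hypothesis of a uniform multinomial distribution. "More extreme" is judged
# here by the chi-square statistic (our choice of extremeness measure).
import numpy as np

rng = np.random.default_rng(1)
observed = np.array([53, 79, 62, 51])    # (n(0,0), n(0,1), n(1,0), n(1,1))
n = observed.sum()                       # 245 valid Bell trials
expected = n / 4.0

chi2_obs = np.sum((observed - expected) ** 2 / expected)
samples = rng.multinomial(n, [0.25] * 4, size=100_000)
chi2_sim = np.sum((samples - expected) ** 2 / expected, axis=1)
print(np.mean(chi2_sim >= chi2_obs))     # compare with the quoted 0.053
```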

Table 1 From left to right, each column corresponds to: the dataset on which the statistics are computed, the local P-value for the null hypothesis that RNG A is uniform, the local P-value for the null hypothesis that RNG B is uniform, the local P-value for the null hypothesis that RNG A and B are jointly uniform, the result of Fisher’s or Pearson’s independence test, the threshold pthreshold and the joint P-value pjoint.
Figure 4: The P-value of the two runs as a function of τ, the mean bias of the RNG.

Many additional tests can be performed on equally many slices or subsets of the data, where one or more of the filters (see Supplementary Information of Hensen et al.17) is relaxed. In Table 1 we list the individual (local) P-values for a set of 4 hypotheses regarding the settings choices, for both the first and second dataset.

  1. RNG A is uniform

  2. RNG B is uniform

  3. RNG A and RNG B are jointly uniform

  4. RNG A and RNG B are independent (Fisher’s exact test30 for n < 5000, Pearson’s χ2 test31 for n > 5000)

For tests 1 and 2 we evaluate a two-tailed binomial test with equal success probability. For test 3 we perform a Monte-Carlo simulation of 10^5 realisations of a uniform multinomial distribution with size fixed to the number of observations in that particular row, i.e. n = 245 for the second row in Table 1.

We observe that only one local P-value is below 0.05: Fisher’s exact test on the distribution of the settings in the first data set yields a local P-value of 0.029. However, as described in the next subsection below, when properly taking the look-elsewhere effect into account this does not result in a significant rejection of the uniform settings hypothesis at the 0.05 level. Finally, the valid Bell trials of the first and second dataset combined, shown in the last row of Table 1, are also consistent with uniformly chosen input settings.

Significance and look-elsewhere effect

We now analyse the significance of the local P-values in Table 1 by taking into account the look-elsewhere effect. Say we are looking for correlations between parameters that are in fact completely independent. Looking at one correlation, it is as if we take one random sample from a distribution; the probability that it is at 2 sigma or more extreme is thus about 0.05. If we look for 4 different correlations (assuming all parameters are independent), it is similar to taking 4 random samples and thus the probability that at least one is at 2 sigma or more extreme is 1 − (1 − 0.05)^4 = 0.18. In reverse, assuming fully independent hypotheses, the local P-value p′ should roughly obey 1 − (1 − p′)^4 < 0.05, i.e. p′ < 0.013, to be statistically significant at the 0.05 level.

In our case it is actually more complicated because there can be dependencies between the hypotheses. We can obtain some of these numbers numerically. For instance, we have simulated the random number generation (RNG) using Monte-Carlo under the assumption of independent uniform outputs and calculated local P-values for the four hypotheses listed above. The probability that at least one of these yields a local P-value p′ < 0.05 turns out to be about 0.13 for the 245 events in the Bell test. This is different from 1 − (1 − 0.05)^4 = 0.18 because of correlations between the tests, but it is clearly much higher than 0.05. In reverse, to arrive at an overall probability of 0.05 of finding at least one test yielding a local P-value p′ < pthreshold for the data in the first Bell dataset, we find pthreshold = 0.021. In other words, if we were only looking at the settings corresponding to the valid Bell trials, then a local P-value below 0.021 would signal a statistically significant violation of our hypothesis at the 0.05 level. We do not find such evidence for the valid Bell trial data (see first row in Table 1).
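
A sketch of such a simulation, using standard library tests in place of the exact implementations used for Table 1, is:

```python
# Estimate the family-wise false-positive rate: under uniform, independent
# settings, how often does at least one of the four tests give p < 0.05?
# The test implementations here are standard library versions and may differ
# in detail from those used for Table 1.
import numpy as np
from scipy.stats import binomtest, chisquare, fisher_exact

rng = np.random.default_rng(2)
n_trials, n_sims, hits = 245, 2000, 0

for _ in range(n_sims):
    a = rng.integers(0, 2, n_trials)              # settings at A
    b = rng.integers(0, 2, n_trials)              # settings at B
    table = np.array([[np.sum((a == i) & (b == j)) for j in (0, 1)] for i in (0, 1)])
    p1 = binomtest(int(a.sum()), n_trials, 0.5).pvalue    # RNG A uniform
    p2 = binomtest(int(b.sum()), n_trials, 0.5).pvalue    # RNG B uniform
    p3 = chisquare(table.flatten()).pvalue                # A and B jointly uniform
    p4 = fisher_exact(table)[1]                           # A and B independent
    if min(p1, p2, p3, p4) < 0.05:
        hits += 1

print(hits / n_sims)   # the text quotes about 0.13 for this probability
```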

The last column gives the probability that at least one of the hypothesis tests on the data in that row yields a local P-value p′ < 0.05, given uniform settings. In the second-to-last column we give pthreshold, again only for the dataset in that row, for significance at the 0.05 level. These values assume that we would only be testing our hypotheses on that particular row. Since we are in fact looking at different rows, pthreshold for each row is a strict upper bound on the pthreshold for the full table; as we are looking at different cross-sections of the raw dataset at the same time, the pthreshold for the full table will be lower, but it is not trivial to compute given the large dependence between the subsets of data used for each row. However, since we do not find any local P-value below pthreshold for the corresponding row, we can conclude that the data does not allow rejection of the settings-independence hypothesis, even without calculating the global pthreshold for the full table.

Refined analysis of imperfect random number generators

Ideally, the RNGs yield a fully unpredictable random bit in every trial of the Bell test. A deviation from the ideal behaviour can be denoted by an excess predictability or bias b, which can take on values between 0 and 1/2. In principle the value of b can be different in every trial of a Bell test, which can be modelled by some probability distribution over the value of b. By characterising the physical RNGs, we can hope to learn something about the mean τ of this probability distribution. As a particular example of an underlying probability distribution for the bias, consider the case where the random bit is perfectly predictable (b = 1/2) with probability f and perfectly unpredictable (b = 0) with probability 1 − f. This example could model a scenario where the random numbers are generated with some spread in time, such that some of them are produced so early that they could be known by the other party before the end of the trial.

A recent analysis of the effect of partial predictability of RNGs on the bound of the CH-Eberhard inequality revealed a strong dependence on the interpretation of the mean excess predictability32, estimated from characterisation of the RNGs. In particular, for a model in which the mean excess predictability ε is distributed (evenly) over all trials, the CH-Eberhard inequality can be violated even if the relevant Bell parameter J (which can be viewed as an average violation per trial in terms of probabilities) is much lower than ε. On the other hand, Kofler et al.32 found that in case of an all-or-nothing scenario, such that in a fraction ε of the trials the RNG is fully predictable and in the rest of the trials fully unpredictable, the threshold value for a violation is roughly given by J > ε.

Motivated by these findings, we generalize here the analysis of the effect of imperfect random number generators on the winning probability per trial in the CHSH game. We extend the analysis in the Supplementary Information of Hensen et al.17 (see also Elkouss and Wehner26) to the case where any bias b is produced by an arbitrary underlying probability distribution per trial. That is, there is no maximum bias, but rather the bias can probabilistically take on any value. We find that in our case, as long as the event-ready signal is independent of the random bits, the only relevant parameter is the mean τ of the bias; the concrete form of the random variable has no impact on the bound on the probability of winning CHSH. In the example of early production of random bits, there exists a time-window in which independence of the event-ready signal can be guaranteed by its space-like separation from the early random generation event.

In the analysis below we explicitly take into account the possibility of early production of random bits, which turns out to correspond to a particular underlying probability distribution over b as above. Indeed, we find that when the random bits are perfectly predictable with probability f and perfectly unpredictable with probability 1 − f, a distribution over the bias b with a mean of τ = f/2 links the two viewpoints of the analysis.
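
To make the link explicit for this two-point example: the bias is b = 1/2 (fully predictable) with probability f and b = 0 with probability 1 − f, so its mean is f · (1/2) + (1 − f) · 0 = f/2, which is exactly the τ entering the bound derived below.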

In order to make the discussion precise, in the following we describe the random variables that characterize the experiment and then give a rigorous derivation of the bound on the winning probability.

Properties of the tested LHVM

We introduce the following sequences of random variables; the notation and arguments borrow from earlier work26,33,34,35. Let Ai and Bi denote the outputs of the boxes, where i is used to label the i-th attempt, let Hi denote the history of the attempts previous to the i-th attempt, let Ci denote the score at each attempt, and let Ti be the event-ready signal in the case of an event-ready experiment. In an event-ready experiment, we make no assumptions regarding the statistics of the event-ready station, which may be under full control of the LHVM and can depend arbitrarily on the history of the experiment.

We introduce three sequences of random variables to model each RNG. Let Xi and Yi denote the inputs to the boxes. Let QA,i and QB,i denote two sequences of binary variables that take value 1 if the random number was generated so early that signalling is possible and 0 otherwise; we call the former an early number and the latter an on-time number. Finally, let the bias of each random number generator at attempt i be a random variable taking values in the range [−1/2, 1/2]. We assume that, while these bias distributions can differ for every i, they do not depend on the history Hi. Using more involved notation, the same bound can be obtained if their mean conditioned on the history is known.

The random variable Hi models the state of the experiment prior to the measurement. As such, Hi includes any hidden variables, sometimes denoted using the letter λ33. It also includes the history of all possible configurations of inputs and outputs of the prior attempts.

The null hypothesis (to be refuted) is that our experimental setup can be modelled using a LHVM. LHVMs verify the following conditions:

1. Independent random number generators. Conditioned on the history of the experiment, the random numbers are independent of each other,

Pr(Xi = x, Yi = y | Hi = hi) = Pr(Xi = x | Hi = hi) · Pr(Yi = y | Hi = hi),

and of the output of the event-ready signal,

Pr(Xi = x, Yi = y | Ti = t, Hi = hi) = Pr(Xi = x, Yi = y | Hi = hi).

We allow Xi and Yi to be partially predictable given the history of the experiment. The predictability is governed by the random variables QA,i and QB,i together with the bias variables: for on-time numbers, the probability of each input value deviates from 1/2 by at most the corresponding bias. Furthermore, from the characterization of the devices we have, for all i, that the probability of an early number is bounded, Pr(QA,i = 1 | Hi = hi) ≤ fA and Pr(QB,i = 1 | Hi = hi) ≤ fB, and that the means of the biases are bounded by τA and τB respectively. We define f = max{fA, fB} and τ = max{τA, τB}.

2. Locality. The outputs ai and bi only depend on the local input settings and history: they are independent of each other and of the input setting at the other side, conditioned on the previous history, the current event-ready signal and the inputs being generated on-time,

Pr(Ai = a, Bi = b | Xi = x, Yi = y, Ti = t, Hi = hi, QA,i = 0, QB,i = 0) = Pr(Ai = a | Xi = x, Ti = t, Hi = hi, QA,i = 0, QB,i = 0) · Pr(Bi = b | Yi = y, Ti = t, Hi = hi, QA,i = 0, QB,i = 0).

3. Sequentiality of the experiments. Every one of the m attempts takes place sequentially such that any possible signalling between different attempts beyond the previous conditions is prevented36.

Except for these conditions the variables might be correlated in any possible way.

Winning probability for imperfect random number generators

Here, we derive a tight upper bound on the winning probability of CHSH with imperfect random number generators in an event-ready setup. For CHSH, the inputs Xi, Yi, the outputs Ai, Bi and the output of the heralding station Ti take values 0 and 1. If Ti = 0 the scoring variable Ci always takes the value zero; if Ti = 1 then Ci = 1 when x · y = a ⊕ b and Ci = 0 in the remaining cases. We take the RNGs to have a maximum probability f of producing early random numbers.

Lemma 1. Let a sequence of random variables as described in the previous section correspond to m attempts of a CHSH heralding experiment. Suppose that the null hypothesis holds, i.e., nature is governed by an LHVM, and that for all i ≤ m the RNGs satisfy the early-number and bias bounds f and τ introduced above. Then for all i ≤ m, any possible history Hi = hi of the experiment and Ti = 1, the probability of Ci = 1 is upper bounded by the expression given in equation (35) below.

Proof. Let us first bound the effect of the early random numbers on the winning probability. We have

The first inequality follows by assuming that the CHSH game is won with probability one when a random number is early. The second inequality follows from the assumption that the probability of producing an early random number is at most f on each side.

Let us now bound the bias for the on-time numbers. We focus on the random numbers Xi; the same argument can be made for Yi. For simplicity, we omit the explicit conditioning on Hi = hi. First of all, note that since

we have together with (5) that

which implies 1/2 − τ ≤ Pr(Xi = 1) ≤ 1/2 + τ. Furthermore, note that we can expand the probability as

Combining (14), (13) and the assumption on the mean of the bias, we obtain

where τ′ denotes the resulting bound on the bias conditioned on the numbers being on-time. Let us now expand the probability that Ci = 1 conditioned on the event that both numbers were on-time. For simplicity, we drop the explicit conditioning on Hi = hi, Ti = 1, QA = 0, QB = 0.

We can break these probabilities into simpler terms

The first equality follows from the locality condition, the second one simply from the definition of conditional probability. With this decomposition, we can express (16) as

where we have used the shorthands

Now we will expand (20). We know that 1/2 − τ ≤ αx, βy ≤ 1/2 + τ. In principle, the biases need not take the extreme values of this range. Without loss of generality let α0 = 1/2 + τA and β0 = 1/2 + τB, with τA, τB ∈ [−1/2, 1/2].

It thus remains to bound the sum of fx,y. Note that we can write

Since (30) is a sum of two convex combinations, it must take its maximum value at one of the extreme points, that is, with χ0 ∈ {0, 1} and χ1 ∈ {0, 1}. We can thus consider all four combinations of values for χ0 and χ1 given by

Since 0 ≤ γ0, γ1 ≤ 1, we have in all cases that the sum is upper bounded by 3.

Now, using (28) we have

where in the first inequality we bound f0,0, f0,1, f1,0 by 1. The second inequality follows since τ′ ≤ 1/2 and, for τA, τB below 1/2, (33) is strictly increasing in both τA and τB; this implies that the maximum is found at the extreme: τ = τA = τB.

Finally, we can plug the bound on the winning probability given that both numbers were on time into the overall winning probability and we obtain:

Equation (35) shows the equal footing of f/2 and τ. This result highlights the fact that early production of random numbers is just a particular distribution underlying the bias of the random number generators, where the probability f of producing an early number corresponds to a mixture of completely predictable numbers (with probability f) and unpredictable numbers (with probability 1 − f).

The finding that the only relevant RNG parameter for the winning probability of the CHSH game is the mean bias makes a Bell test based on this winning probability particularly robust against RNG imperfections. In our two Bell test runs, we find a violation in terms of the CHSH winning probability of about 0.05 and 0.04 respectively, both orders of magnitude larger than the mean bias (<10−4), and, given our theory result, independent of the underlying distribution of the bias over the trials. As depicted in Fig. 4, this means for instance that our P-values are hardly affected if the generator produces random numbers too early with a probability of up to 10−3. The above only holds if the event-ready signal is still independent of the early-produced random bits: in case the random bits are produced so early that they are no longer space-like separated from the event-ready signal generation, the event-ready detector could select the Bell trials based on random bits being produced too early. In our experimental setup, we can thus test theories in which random bits are produced up to 690 ns too early (this time can be increased by moving the event-ready signal backwards in time) with a probability of up to about 10−3. For comparison, to test such theories an experiment using the CH-Eberhard inequality would require J > 10−3 to obtain a violation32, which is two orders of magnitude beyond the state of the art of photonic experiments18,19. This difference may be traced back to the use of event-ready detectors in our experiments, which dramatically increases the fidelity of the entangled state and thus the winning probability per Bell trial.
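
The per-trial violations quoted here follow directly from the trial counts given earlier (k1 = 433 − 237 = 196 wins in n1 = 245 trials, and k2 = 237 wins in n2 = 300 trials):

```python
# Winning fraction per run minus the ideal local bound of 3/4, reconstructed
# from the trial counts quoted in the text.
n1, n2, k_total, k2 = 245, 300, 433, 237
k1 = k_total - k2
for label, k, n in [("run 1", k1, n1), ("run 2", k2, n2)]:
    print(f"{label}: win fraction {k/n:.3f}, violation above 3/4: {k/n - 0.75:.3f}")
# -> about 0.05 and 0.04, orders of magnitude above the mean RNG bias
```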

Conclusion

The loophole-free violation of Bell’s inequality in the second data run reported here further strengthens the rejection of a broad class of local-realist theories. We find that the data are consistent with independent setting choices, both in the first and second dataset, as well as in the combined dataset. Refined analysis of the effect of a bias in the random number generators shows that only the mean bias plays a role in the bound on the winning probability. As a consequence, the P-value bound for our experiments is independent of the underlying distribution of the RNG bias, for random bits produced up to 690 ns too early by the random number generator. The large spatial separation and the strong violation in winning probability per trial of about 0.05 make our implementation promising for future applications of non-locality such as device-independent quantum key distribution37 and randomness generation38,39.

Additional Information

How to cite this article: Hensen, B. et al. Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis. Sci. Rep. 6, 30289; doi: 10.1038/srep30289 (2016).