
Neural network-based prediction of the secret-key rate of quantum key distribution

Abstract

Numerical methods are widely used in practice to calculate the secure key rates of many quantum key distribution protocols, but they consume substantial computing resources and time. In this work, we take homodyne detection discrete-modulated continuous-variable quantum key distribution (CV-QKD) as an example and construct a neural network that can quickly predict the secure key rate from the experimental parameters and results. Compared with traditional numerical methods, the neural network is faster by several orders of magnitude. Importantly, the predicted key rates are not only highly accurate but also highly likely to be secure. This allows the secure key rate of discrete-modulated CV-QKD to be extracted in real time on a low-power platform. Furthermore, our method is versatile and can be extended to quickly calculate the complex secure key rates of various other unstructured quantum key distribution protocols.

Introduction

With the concurrent rise of artificial intelligence and quantum information science, these two fields are merging in a synergistic manner. In this growing trend, some works try to design new theoretical models based on quantum algorithms to improve classical machine learning for desired quantum speed-up1,2,3,4,5,6,7,8,9,10. At the same time, with the ever-increasing complexity of quantum systems, advanced quantum information technologies also require powerful tools for data processing and data analysis. We therefore urgently need to leverage existing classical machine learning techniques to solve practical, but difficult, problems in quantum information science, such as tomography11,12,13, classifying quantum states14,15,16, quantum metrology17,18,19, quantum control20,21 and quantum cryptography22.

Quantum key distribution (QKD)23,24 is by far the most practical technology in quantum information. It allows two distant parties (Alice and Bob) to establish secure keys against any eavesdropper. Various QKD protocols have been proposed in recent decades25,26,27,28,29,30. The secure key rates of these protocols are typically calculated by analytical methods31, which usually rely on certain symmetry assumptions. In practice, these assumptions are often broken by experimental imperfections. Therefore, to analyze the security of QKD protocols that are better suited to practical implementations, numerical methods based on convex optimization32,33,34,35 have been developed.

For instance, continuous-variable (CV) QKD has distinct advantages at metropolitan distances36,37 due to its use of common components of coherent optical communication technology. In addition, the homodyne38 or heterodyne39 measurements used by CV-QKD have inherently strong spectral filtering capabilities, which allows crosstalk in wavelength division multiplexing (WDM) channels to be effectively suppressed. Therefore, hundreds of QKD channels may be integrated into a single optical fiber and cotransmitted with classical data channels. This allows QKD channels to be integrated more effectively into existing communication networks. In CV-QKD, discrete modulation technology has attracted much attention31,40,41,42,43,44,45,46,47,48,49,50 because of its ability to reduce the requirements on modulation devices. However, due to the lack of symmetry, the security proof of discrete-modulated CV-QKD also relies mainly on numerical methods43,44,45,46,47,48,51.

Unfortunately, calculating a secure key rate by numerical methods requires minimizing a convex function over all eavesdropping attacks consistent with the experimental data52,53. The efficiency of this optimization depends on the number of parameters of the QKD protocol. For example, in discrete-modulated CV-QKD, the number of parameters is generally \(1000-3000\), depending on the choice of cutoff photon number44. As a result, the corresponding optimization may take minutes or even hours51. Therefore, it is especially important to develop tools for calculating the key rate that are more efficient than numerical methods.

In this work, we take homodyne detection discrete-modulated CV-QKD44 as an example and construct a neural network capable of predicting the secure key rate, saving both time and resources. We apply our neural network to a test set obtained at different excess noises and distances. Excellent accuracy and time savings are observed after tuning the hyperparameters. Importantly, the predicted key rates are highly likely to be secure. Note that our method is versatile and can be extended to quickly calculate the complex secure key rates of various other unstructured quantum key distribution protocols. Through open-source deep learning frameworks for on-device inference, such as TensorFlow Lite54, our model can also be easily deployed on devices at the edge of the network, such as mobile devices, embedded Linux devices or microcontrollers.

Results

Discrete-modulated CV-QKD

To clearly present the problem we aim to solve, we briefly introduce the main ideas of discrete-modulated CV-QKD and state the convex optimization problem of finding its key rates in this section. See Ref.44 and the description of “Discrete-modulated CV-QKD” in Methods.

The protocol involves two parties, Alice and Bob. Alice randomly prepares one of the four coherent states and sends it to Bob by an untrusted quantum channel. Bob measures the received coherent state using homodyne detection. After repeating N rounds, Alice and Bob perform sifting, parameter estimation, error correction and privacy amplification over the classical authentication channel to obtain the final secure key rates. The key rate formula in the asymptotic limit can be expressed according to Refs.32,33 as

$$\begin{aligned} R^{\infty }=\min _{\rho _{A B} \in \mathbf {S}} D\left( \mathscr {G}\left( \rho _{A B}\right) \Vert \mathscr {Z}\left[ \mathscr {G}\left( \rho _{A B}\right) \right] \right) -p_{\mathrm {pass}} \delta _{\mathrm {EC}}, \end{aligned}$$
(1)

where \(D(\rho \Vert \sigma )={\text {Tr}}\left( \rho \log _{2} \rho \right) -{\text {Tr}}\left( \rho \log _{2} \sigma \right)\) is the quantum relative entropy; \(\rho _{AB}\) is the bipartite state of Alice and Bob; \(\mathscr {G}\) is the mapping to describe the postprocessing of the bipartite state \(\rho _{A B}\); \(\mathscr {Z}\) is a pinching quantum channel for reading out the results of the key rate mapping; \(\mathbf {S}\) is the set of all density operators that match the experimental observations; \(p_{\mathrm {pass}}\) is a sifting factor that determines how many rounds of data are used for generating keys; \(\delta _{\mathrm {EC}}\) represents the amount of information leakage per bit in the error-correction process.
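As an illustration, the quantum relative entropy defined above can be evaluated numerically by eigendecomposition. This is a minimal sketch (not the solver used in the paper), assuming the support of \(\rho\) is contained in that of \(\sigma\):

```python
import numpy as np

def quantum_relative_entropy(rho, sigma):
    """D(rho || sigma) = Tr(rho log2 rho) - Tr(rho log2 sigma).

    Both arguments are density matrices (Hermitian, PSD, trace 1).
    Assumes supp(rho) is contained in supp(sigma); otherwise D is infinite.
    """
    def log2m(m):
        # Matrix log base 2 via eigendecomposition; tiny eigenvalue floor so
        # that 0*log(0) contributes 0, as in the entropy convention.
        w, v = np.linalg.eigh(m)
        w = np.clip(w, 1e-12, None)
        return v @ np.diag(np.log2(w)) @ v.conj().T

    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))
```

For commuting (diagonal) states this reduces to the classical Kullback–Leibler divergence, and \(D(\rho\Vert\rho)=0\).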

The key to finding the secure key rates is to solve the minimum value of \(D\left( \mathscr {G}\left( \rho _{A B}\right) \Vert \mathscr {Z}\left[ \mathscr {G}\left( \rho _{A B}\right) \right] \right)\), since \(p_{\mathrm {pass}} \delta _{\mathrm {EC}}\) is a fixed quantity. The associated optimization problem is44

$$\begin{aligned} \begin{aligned} {\text {minimize}} \quad &D\left( \mathscr {G}\left( \rho _{A B}\right) \Vert \mathscr {Z}\left[ \mathscr {G}\left( \rho _{A B}\right) \right] \right) \\ \text {subject to} \quad &{\text {Tr}}\left[ \rho _{A B}\left( |x\rangle \langle x|_{A} \otimes {\hat{q}}\right) \right] =p_{x}\langle {\hat{q}}\rangle _{x}, \\ &{\text {Tr}}\left[ \rho _{A B}\left( |x\rangle \langle x|_{A} \otimes {\hat{p}}\right) \right] =p_{x}\langle {\hat{p}}\rangle _{x}, \\ &{\text {Tr}}\left[ \rho _{A B}\left( |x\rangle \langle x|_{A} \otimes {\hat{n}}\right) \right] =p_{x}\langle {\hat{n}}\rangle _{x}, \\ &{\text {Tr}}\left[ \rho _{A B}\left( |x\rangle \langle x|_{A} \otimes {\hat{d}}\right) \right] =p_{x}\langle {\hat{d}}\rangle _{x}, \\ &{\text {Tr}}\left[ \rho _{A B}\right] =1, \\ &\rho _{A B} \ge 0, \\ &{\text {Tr}}_{B}\left[ \rho _{A B}\right] =\sum _{i, j=0}^{3} \sqrt{p_{i} p_{j}}\left\langle \varphi _{j} \mid \varphi _{i}\right\rangle |i\rangle \langle j|_{A}, \end{aligned} \end{aligned}$$
(2)

where \(|x\rangle \langle x|_{A}\) is a local projective measurement operator on Alice’s side, with \(x \in \{0,1,2,3\}\); \({\hat{q}}=\frac{1}{\sqrt{2}}\left( {\hat{a}}^{\dagger }+{\hat{a}}\right)\), where \({\hat{a}}\) and \({\hat{a}}^{\dagger }\) are the annihilation and creation operators of a single-mode state, respectively; \({\hat{p}}=\frac{i}{\sqrt{2}}\left( {\hat{a}}^{\dagger }-{\hat{a}}\right)\); \({\hat{n}}=\frac{1}{2}\left( {\hat{q}}^{2}+{\hat{p}}^{2}-1\right) ={\hat{a}}^{\dagger } {\hat{a}}\); \({\hat{d}}={\hat{q}}^{2}-{\hat{p}}^{2}={\hat{a}}^{2}+\left( {\hat{a}}^{\dagger }\right) ^{2}\); \(\langle {\hat{q}}\rangle _{x}\), \(\langle {\hat{p}}\rangle _{x}\), \(\langle {\hat{n}}\rangle _{x}\) and \(\langle {\hat{d}}\rangle _{x}\) denote the expectation values of the operators \({\hat{q}}\), \({\hat{p}}\), \({\hat{n}}\) and \({\hat{d}}\) on \(\rho _{B}^{x}\), respectively; \(\rho _{B}^{x}=\frac{1}{p_{x}} {\text {Tr}}_{A}\left[ \rho _{A B}\left( |x\rangle \langle x|_{A} \otimes \mathrm {id}_{B}\right) \right]\) is Bob’s state after Alice performs the measurement \(|x\rangle \langle x|_{A}\) on \(\rho _{A B}\), where \(p_x\) is the corresponding probability; and \(\mathrm {id}_{B}\) is the identity transformation acting on system B.

The first four constraints in Eq. (2) are derived from experimental observations. The fifth and sixth constraints are conditions that the density matrix must satisfy. The seventh constraint comes from the fact that Alice’s states do not change because they do not go through insecure quantum channels.

The optimization problem in Eq. (2) is to find the optimal \(\rho _{A B}\) in \(\mathbf {S}\) such that \(R^{\infty }\) is minimized. \(\rho _{A B}\) is infinite-dimensional because an eavesdropper can arbitrarily perturb the optical mode sent by Alice into an infinite-dimensional state before it reaches Bob. To solve this optimization problem numerically, we apply the photon-number cutoff assumption to \(\rho _{A B}\) to keep the number of variables in a reasonable range. A detailed description of this method can be found in Ref.44.
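Under the photon-number cutoff, the operators appearing in the constraints of Eq. (2) become finite matrices in the truncated Fock basis. The following numpy sketch (an illustration, not the solver of Ref.44) builds them for a given cutoff:

```python
import numpy as np

def fock_operators(n_cut):
    """Quadrature and photon-number operators in a truncated Fock basis.

    n_cut is the photon-number cutoff: the Hilbert space is spanned by
    |0>, ..., |n_cut>, as in the numerical method of Ref. 44.
    """
    a = np.diag(np.sqrt(np.arange(1, n_cut + 1)), k=1)  # annihilation operator
    ad = a.conj().T                                     # creation operator
    q = (ad + a) / np.sqrt(2)
    p = 1j * (ad - a) / np.sqrt(2)
    n = ad @ a                                          # photon number
    d = a @ a + ad @ ad                                 # equals q^2 - p^2
    return q, p, n, d
```

With these matrices, each expectation-value constraint of Eq. (2) is a linear trace condition on the truncated \(\rho_{AB}\).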

After applying the photon-number cutoff assumption, the optimization problem in Eq. (2) can be solved by applying the numerical method in Refs.33,44, but this is very time consuming. In this work, to reduce the time to predict secure key rates, we use the key rates obtained by the numerical method in Refs.33,44 as labels to train our neural network.

Neural networks for predicting the key rates

We use an artificial neural network to predict the key rates of discrete-modulated CV-QKD. The general spirit of the work is to encode the optimization problem in Eq. (2) into the loss function of a feedforward neural network and train the network by minimizing this loss function. The trained neural network can be seen as a mapping that has learned the structure of the training set. For new instances, the neural network outputs the results directly via this mapping, unlike traditional numerical methods, which perform complex searches. As a result, the trained neural network saves a great deal of time while maintaining a good level of accuracy. A more detailed description of neural networks can be found in Ref.55.

Figure 1

Schematic diagram of our neural network model. We preprocess each training input \(\mathbf {x}_i\) and its corresponding label \(y_i\) to obtain \(\mathbf {x}_i^*\) and \(y_i^*\). The neural network receives \(\mathbf {x}_i^*\) and outputs the corresponding \(y_i^{*p}\). The numbers of neurons in the first hidden layer and the second hidden layer of the neural network are 400 and 200, respectively. \(y_i^{*p}\) and \(y_i^*\) are used to compute the loss function designed by us. Minimization of the loss function completes the training process.

A four-layer neural network model is designed to predict the key rates of discrete-modulated CV-QKD (Fig. 1). The input layer of the network has 29 neurons, which are used to receive the training inputs. The first hidden layer and the second hidden layer of the network have 400 and 200 neurons respectively, and their activation functions are the tanh function and sigmoid function, respectively. The output layer has only one neuron, which is used to predict secure key rates.
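The forward pass of this architecture can be sketched in plain numpy; the random weights below are placeholders standing in for trained parameters (a hedged illustration, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 29 inputs, hidden layers of 400 (tanh) and
# 200 (sigmoid) neurons, one linear output neuron. The random weights are
# placeholders; in practice they are obtained by training.
W1, b1 = rng.normal(0, 0.05, (29, 400)), np.zeros(400)
W2, b2 = rng.normal(0, 0.05, (400, 200)), np.zeros(200)
W3, b3 = rng.normal(0, 0.05, (200, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Forward pass of the four-layer network on input(s) x."""
    h1 = np.tanh(x @ W1 + b1)     # first hidden layer, tanh activation
    h2 = sigmoid(h1 @ W2 + b2)    # second hidden layer, sigmoid activation
    return h2 @ W3 + b3           # predicted (preprocessed) key rate
```

The same pass works on a single 29-dimensional vector or on a batch of rows, thanks to numpy broadcasting.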

Figure 2

Relative deviations before and after data preprocessing. We use the network structure shown in Fig. 1 with the mean square error as the loss function to compare the results with data preprocessing (a) and without data preprocessing (b). The data set is generated under excess noise of 0.002–0.005 and is split into a training set containing 158,000 samples and a test set containing 2000 samples. The horizontal coordinate represents the different samples in the test set. The vertical coordinate represents the relative deviation between the key rate predicted by our neural network and the key rate obtained by the numerical method for each sample.

To train our neural network, we generate a data set containing 552,000 input instances \(\left\{ \mathbf {x}_{i}\right\}\) and 552,000 corresponding labels \(\left\{ {y}_{i}\right\}\) using the numerical method in Refs.33,44. Each \(\mathbf {x}_{i} \in \left\{ \mathbf {x}_{i}\right\}\) is a vector of 29 variables, and the label \({y}_{i}\) is the corresponding key rate. In each \(\mathbf {x}_{i}\), 16 variables are the right-hand sides of the first four constraints of Eq. (2), 12 variables are the off-diagonal elements of the matrix on the right-hand side of the last constraint of Eq. (2), and the remaining variable is the excess noise \(\xi\). The 29 variables in each \(\mathbf {x}_i\) can be calculated in an experiment from the experimental parameters and observations. In our simulation, these random input instances \(\left\{ \mathbf {x}_{i}\right\}\) are generated directly from seven experimental parameters (transmission distance L, light intensity \(\mu\), excess noise \(\xi\), and probabilities \(p_0\), \(p_1\), \(p_2\) and \(p_3\)) by the following method.

When the excess noise \(\xi\) is within 0.002–0.014, we first generate a two-dimensional grid with excess noise and distance on the horizontal and vertical coordinates, respectively. Specifically, the distance takes values between 0 and 100 km in steps of 5 km, and the excess noise takes values between 0.002 and 0.014 in steps of 0.001. Each grid point is then sampled 80 times. In each sampling, the excess noise fluctuates within ±0.0005 of the exact grid value. Once the excess noise for a sampling is determined, the light intensity takes values from 0.35 to 0.60 in steps of 0.01. Each sampling must produce 25 input instances with a positive key rate; otherwise, the current round of sampling is discarded and restarted. In this way, 2000 input instances are generated at each grid point, and a total of 520,000 training inputs are generated on this two-dimensional grid. When the excess noise \(\xi\) is 0.015, a similar two-dimensional grid is generated, but we only sample up to 80 km, so only 32,000 instances are generated. In total, we collect 552,000 samples with excess noise \(\xi\) between 0.002 and 0.015. Using the numerical approach in Refs.33,44, we calculate the corresponding key rate for each sample as its label on the blade cluster system of the High Performance Computing Center of Nanjing University. This consumed over 40,000 core hours on nodes containing 4 Intel Xeon Gold 6248 CPUs each, representing immense computational power.
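The sampling grid can be reproduced schematically as follows. The exact distance endpoints are our assumption (5–100 km), chosen so that 13 noise values × 20 distances × 2000 instances reproduce the quoted total of 520,000; the per-sample noise fluctuation and light-intensity sweep are omitted for brevity:

```python
import numpy as np

# Sketch of the (excess noise, distance) grid described in the text.
noises = np.round(0.002 + 0.001 * np.arange(13), 3)  # 0.002 .. 0.014
dists = np.arange(5, 101, 5)                         # 5 .. 100 km (assumed)
grid = [(xi, L) for xi in noises for L in dists]

# 80 samplings per grid point, each yielding 25 positive-rate instances.
samples_per_point = 80 * 25
total = len(grid) * samples_per_point
```

Each of these instances is then labeled with the key rate returned by the numerical method.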

To improve the convergence speed and accuracy of our neural network, we preprocess the input instances \(\left\{ \mathbf {x}_{i}\right\}\) and the corresponding labels \(\left\{ {y}_{i}\right\}\). To demonstrate the necessity of this preprocessing, we use the network structure shown in Fig. 1 to perform a controlled experiment with the mean square error as the loss function. With excess noise of 0.002–0.005, the absolute values of the relative deviations between the key rates predicted by our neural network and the corresponding key rates obtained by the numerical method do not exceed \(25\%\) after data preprocessing (Fig. 2), whereas they exceed \(400\%\) without it. Here, the relative deviation is the absolute deviation between the predicted and true values divided by the true value. A detailed description of the data preprocessing can be found in “Details of data preprocessing” in Methods.

A new loss function is specifically designed to make the key rates predicted by our neural network as information-theoretically secure as possible, rather than using the traditional mean squared error. The loss function is expressed as follows:

$$\begin{aligned} \begin{aligned} C&=\frac{1}{n} \sum _{i=1}^{n}\left[ \gamma \left( e_{i}^{* 2}+\max \left( e_{i}^{*},-\log _{10}(\varepsilon )\right) \right) -(1-\gamma ) \min \left( e_{i}^{*}, 0\right) \right] \end{aligned} \end{aligned}$$
(3)

where \(n\) is the number of training inputs and \(e_i^*=y_{i}^{*p}-y_{i}^{*}\) is the residual error between the preprocessed label \(y_{i}^{*}\) and the corresponding output \(y_{i}^{*p}\) of the neural network.

The minimum function part in Eq. (3) is the penalty term and is used to make the key rates predicted by the neural network as information-theoretically secure as possible. On the other hand, the part consisting of the maximum function and the squared term in Eq. (3) is used to bound the upper limit of \(e_i^*\) to obtain higher key rates. The parameter \(\gamma\) is used to balance the effects of the two parts. With the help of this loss function, we expect that the relative deviations between predicted value and true value can be bound in \((\varepsilon -1,0)\) after choosing the proper \(\varepsilon\) and \(\gamma\).
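The loss of Eq. (3) translates directly into numpy; this is a plain transcription on the preprocessed scale (in training it would be written in the deep learning framework's tensor operations):

```python
import numpy as np

def key_rate_loss(y_pred, y_true, gamma=0.20, eps=0.80):
    """Loss of Eq. (3) on the preprocessed scale.

    The min(e, 0) term is the security penalty described in the text; the
    squared and max terms bound the residual from above so that the
    predicted key rates are not overly conservative.
    """
    e = y_pred - y_true  # residual e_i^*
    per_sample = (gamma * (e ** 2 + np.maximum(e, -np.log10(eps)))
                  - (1.0 - gamma) * np.minimum(e, 0.0))
    return float(np.mean(per_sample))
```

Note that even a zero residual incurs the constant floor \(\gamma\,(-\log_{10}\varepsilon)\), while residuals on either side of zero are penalized with the asymmetric weights \(\gamma\) and \(1-\gamma\).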

Figure 3

Performance comparison of neural networks with different hyperparameters. (a) The results of the neural network with the hyperparameters \(\gamma =0.20\) and \(\varepsilon =0.80\) in predicting 2000 samples with excess noise between 0.002 and 0.005 in the test set. The predicted key rates are strictly below the key rates obtained by the numerical method in Refs.33,44. (b) The histogram of the relative deviation distribution in (a). The absolute values of the relative deviations remain roughly in the region of 5–20%. (c–f) The corresponding results for the hyperparameters \(\gamma =0.20\), \(\varepsilon =0.90\) and \(\gamma =0.80\), \(\varepsilon =0.80\), respectively.

The performance of the neural networks depends on the hyperparameters \(\gamma\) and \(\varepsilon\). Without loss of generality, we take as examples neural networks trained with excess noise \(\xi\) between 0.002 and 0.005 (Fig. 3). When \(\gamma =0.20\) and \(\varepsilon =0.80\), the key rates predicted by the neural network are strictly lower than those obtained by the numerical method in Refs.33,44, which means that the predicted key rates are information-theoretically secure. Meanwhile, the absolute values of the relative deviations are mainly distributed between 0.05 and 0.20 (Fig. 3a,b). Figure 3c–f plot the corresponding results for the hyperparameters \(\gamma =0.20\), \(\varepsilon =0.90\) and \(\gamma =0.80\), \(\varepsilon =0.80\), respectively. Note that some of the key rates predicted under these two settings are higher than the key rates obtained by the numerical method, indicating that these networks do not perform as well as the one trained with \(\gamma =0.20\) and \(\varepsilon =0.80\). Therefore, we need to tune the hyperparameters carefully to ensure stable performance.

The 552,000 data points generated by the numerical method are split into a training set containing 524,400 points and a test set containing 27,600 points. The test set is sampled from the original data set and covers instances generated under all combinations of excess noise and distance. Data preprocessing follows data splitting. The Adam optimization algorithm56 is used to train our neural network, with the initial learning rate set to 0.001. Each training runs for 200 epochs with a batch size of 256. In addition, techniques such as early stopping and dropout57 are used to prevent overfitting. The relative deviations of the trained network on the test set and the training set have similar distributions, which indicates that the model generalizes well.
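For reference, a single Adam update (Ref.56) with the stated initial learning rate can be sketched in plain numpy; this stands in for the framework optimizer actually used in training:

```python
import numpy as np

def adam_step(w, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; lr = 0.001 is the initial rate quoted in the text.

    state = (m, v, t) holds the running first/second moment estimates and
    the step counter; pass the returned state into the next call.
    """
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)  # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)
```

Iterating this step on the gradient of the loss in Eq. (3) with respect to the network weights constitutes the training loop.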

Key rate comparison

After training the neural network under \(\gamma =0.20\) and \(\varepsilon =0.80\) according to the method described in “Methods” above, we use it to predict, at the optimal light intensity, the key rates of discrete-modulated CV-QKD at different distances and excess noises. As shown in Fig. 4, we compare these key rates with the corresponding key rates obtained by the numerical method in Refs.33,44. All key rates predicted by the neural network are strictly lower than those obtained by the numerical method. It is worth noting that the relative deviations between them are essentially within \(20\%\) (relevant data can be found in “Detailed data” in Methods).

To illustrate the more general case, we evaluate the full test set of 27,600 samples mentioned at the end of “Methods”. The key rates predicted by the neural network are lower than the corresponding results calculated by the numerical method for 27,379 of these samples. That is, the probability that a key rate predicted by the neural network on the test set is secure is as high as \(99.2\%\).
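The quoted probability is simply the fraction of such samples:

```python
# Counts from the test-set evaluation above.
secure = 27_379   # samples with predicted rate below the numerical result
total = 27_600    # size of the test set
p_secure = secure / total  # fraction of securely predicted samples
```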

Our neural network shows even greater advantages over the numerical method in terms of time and resource consumption. We compare the time required to predict the key rates with our neural network against the time required to calculate them with the numerical method on a high-performance personal computer with a 3.3 GHz AMD Ryzen 9 4900H and 16 GB of RAM (Fig. 5). The neural network is 6–8 orders of magnitude faster than the numerical method when predicting the key rates of discrete-modulated CV-QKD within 0–100 km for excess noise \(\xi\) = 0.008–0.012. Moreover, as the excess noise increases, the speed advantage of the neural network grows further. Refer to “Detailed data” for more detailed data.

Figure 4

Secure key rate versus the transmission distance for homodyne detection discrete-modulated CV-QKD with excess noise \(\xi\) of 0.002, 0.004, 0.008, 0.011 and 0.014 using our neural network (circles) and the numerical method in Refs.33,44 (triangles). The light intensity is chosen to be optimal in the interval [0.35, 0.6]. The transmission efficiency \(\eta =10^{-0.02 L}\). The reconciliation efficiency \(\beta =0.95\). The neural network used for comparison is trained by setting the hyperparameters \(\gamma =0.20\) and \(\varepsilon =0.80\). The cutoff photon number in the numerical method is set as 10.
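The transmittance model in the caption corresponds to the standard fiber attenuation of 0.2 dB/km; a one-line helper (our illustration) makes the relation explicit:

```python
def transmission(L_km, loss_db_per_km=0.2):
    """Transmittance eta = 10^(-loss*L/10); with 0.2 dB/km this is 10^(-0.02 L)."""
    return 10.0 ** (-loss_db_per_km * L_km / 10.0)
```

For example, at 50 km the channel transmits one tenth of the signal power.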

Discussion

We have constructed neural networks and shown that they can predict the information-theoretically secure key rates of homodyne detection discrete-modulated CV-QKD with high probability (up to \(99.2\%\)) at distances of 0–100 km and excess noise of no more than 0.015. In particular, with excess noise of 0.008 or more, our method is at least six orders of magnitude faster than the numerical method in Refs.33,44. For example, it takes an average of 190 s to numerically calculate a point with excess noise \(\xi\) around 0.008, which greatly limits the efficiency of QKD systems in calculating the secure key rate. In contrast, a neural network can calculate tens of thousands of key rates in 1 s. Considering that the QKD system needs a certain amount of time to collect data, the speed at which the neural network predicts key rates fully meets the requirements of practical applications. This advantage brings us one step closer to achieving low latency for discrete-modulated CV-QKD on a low-power platform. Our method is applicable in principle to any protocol that already has reliable numerical methods. However, for protocols such as the 16/64/256 QAM DM-CVQKD protocols, for which analytical methods achieve results very close to those of numerical methods, the method proposed in this paper is not necessary.

Recently, machine learning has been used in QKD mainly in two ways: for experimental parameter optimization58,59 and to assist experimental control60,61,62. Both use machine learning to replace traditional optimization or feedback control algorithms, which differs significantly from our work. To the best of our knowledge, this is the first attempt to apply machine learning methods to predict the key rates of QKD. This poses a greater challenge than parameter optimization with machine learning. In parameter optimization, the parameters predicted by the neural networks are substituted into numerical or analytical methods to find the corresponding key rates, which naturally ensures that the key rates are information-theoretically secure. The key rates output directly by neural networks carry no such guarantee, which forced us to redesign the loss function and seek better data preprocessing methods to make the acquired key rates information-theoretically secure. Note that the probability (\(0.8\%\)) of our neural network predicting an insecure key rate is large compared to conventional security parameters of QKD protocols (e.g. \(10^{-6}\)). In practice, however, we need to sample thousands of data points and calculate their respective key rates to obtain a usable key string. The key point here is that when we average the key rates of all data points predicted by our neural network, the insecure probability of this averaged key rate becomes very low. With enough data points, this insecure probability can approach conventional security parameters of the QKD protocol.

Figure 5

Time consumption comparison between the neural network method and the numerical method. The comparison results with excess noise of 0.008, 0.010 and 0.012 are shown as diamonds, circles and triangles, respectively. Each point represents the logarithm of the ratio of the running time of the numerical method to that of the neural network method. The neural network used for comparison is trained by setting the hyperparameters \(\gamma =0.20\) and \(\varepsilon =0.80\). The cutoff photon number in the numerical method is set as 10.

We expect that larger excess noises and longer distances will require a deeper network, more sophisticated loss functions, and more refined data preprocessing to improve the performance of the neural networks on the training set. More training data are also necessary to improve their generalization ability. For deep neural networks, exploding or vanishing gradients hinder the optimization process; therefore, the debugging process is highly technical. It can be guided by monitoring the activation values of the neurons and histograms of the gradients55.

Our machine learning approach is at least six orders of magnitude faster than the numerical method at predicting the secure key rates of homodyne detection discrete-modulated CV-QKD with excess noise of 0.008 or more. However, training our neural network is still time consuming, because we need traditional numerical methods to generate the key rates that form the training set. In particular, the performance of our neural network depends on the choice of the hyperparameters \(\gamma\) and \(\varepsilon\) and the initial learning rate, so we may need to train several times to obtain a suitable network. To make our machine learning method more intelligent, further work is needed to design another neural network that automatically finds the most suitable hyperparameters. We have also tried other machine learning methods, such as boosted decision trees. These methods have smaller relative deviations but greater variances. We leave the fusion of these methods to future research.

The important contribution of our work is that it opens the door to using classical machine learning to predict QKD key rates. In particular, our ideas and methods are very easy to generalize to other QKD protocols. We expect that our work will stimulate further research to help most QKD systems run on low-power chips63 in mobile devices64.

Methods

Discrete-modulated CV-QKD

According to Ref.44, homodyne detection discrete-modulated CV-QKD is described below:

(1) State preparation.-Alice prepares a coherent state \(\left| \psi _k\right\rangle\) from the set \(\{|\alpha \rangle ,|-\alpha \rangle ,|i \alpha \rangle ,|-i \alpha \rangle \}\) according to the probabilities \([p_A/2,p_A/2,(1-p_A)/2,(1-p_A)/2]\), where \(\alpha \in R\) is a predetermined amplitude and k is the round number. Then Alice sends the state \(\left| \psi _k\right\rangle\) to Bob.

(2) Measurement.-Bob performs a homodyne measurement on the received state. He chooses to measure one of the quadratures (q or p) with probabilities \([p_B,1-p_B]\). If q is chosen, Bob notes \(b_k=0\); otherwise, he notes \(b_k=1\). Then, Bob records his measurement outcome \(y_{k} \in R\).

(3) Announcement and sifting.-After repeating the first two steps N times, Alice and Bob communicate via the classical authentication channel and divide the obtained data into the following four subsets:

$$\begin{aligned} \begin{aligned} \mathscr {I}_{q q}&=\left\{ k \in [N]:\left| \psi _{k}\right\rangle \in \{|\alpha \rangle ,|-\alpha \rangle \}, b_{k}=0\right\} , \\ \mathscr {I}_{q p}&=\left\{ k \in [N]:\left| \psi _{k}\right\rangle \in \{|\alpha \rangle ,|-\alpha \rangle \}, b_{k}=1\right\} , \\ \mathscr {I}_{p q}&=\left\{ k \in [N]:\left| \psi _{k}\right\rangle \in \{|i \alpha \rangle ,|-i \alpha \rangle \}, b_{k}=0\right\} , \\ \mathscr {I}_{p p}&=\left\{ k \in [N]:\left| \psi _{k}\right\rangle \in \{|i \alpha \rangle ,|-i \alpha \rangle \}, b_{k}=1\right\} , \end{aligned} \end{aligned}$$
(4)

where [N] denotes the set of all integers from 1 to N. Then Alice and Bob randomly select a subset \(\mathscr {I}_{\text{ key } }\) of size m from \(\mathscr {I}_{q q}\) for generating keys. The key string \(\mathbf {X}=\left( x_{1}, x_{2}, \ldots , x_{m}\right)\) at Alice is also determined according to the following rules:

$$\begin{aligned} \forall j \in [m], \quad x_{j}=\left\{ \begin{array}{ll} 0 &{} \text{ if } \left| \psi _{f(j)}\right\rangle =|\alpha \rangle , \\ 1 &{} \text{ if } \left| \psi _{f(j)}\right\rangle =|-\alpha \rangle , \end{array}\right. \end{aligned}$$
(5)

where f maps the index \(j \in [m]\) to the corresponding round in \(\mathscr {I}_{\text{ key } }\). The remaining data in \(\mathscr {I}_{q q}\), \(\mathscr {I}_{q p}\), \(\mathscr {I}_{p q}\) and \(\mathscr {I}_{pp}\) are merged into the set \(\mathscr {I}_{\text{ test } }\) and used for parameter estimation.
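A hypothetical NumPy simulation of steps (1)-(3) illustrates the sifting into the four subsets of Eq. (4). All parameter values (N, \(p_A\), \(p_B\)) are illustrative, and rounds are indexed from 0 rather than 1 for programming convenience:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative parameters, not the values used in the paper.
N, p_A, p_B = 10_000, 0.5, 0.5

# Alice's state choice per round: 0:|a>, 1:|-a>, 2:|ia>, 3:|-ia>
states = rng.choice(4, size=N, p=[p_A / 2, p_A / 2, (1 - p_A) / 2, (1 - p_A) / 2])
# Bob's quadrature bit per round: b_k = 0 (q) or 1 (p)
bases = rng.choice(2, size=N, p=[p_B, 1 - p_B])

rounds = np.arange(N)
alice_q = states < 2  # |+-alpha> carry information in the q quadrature

# The four subsets of Eq. (4): Alice's quadrature x Bob's quadrature.
I_qq = rounds[alice_q & (bases == 0)]
I_qp = rounds[alice_q & (bases == 1)]
I_pq = rounds[~alice_q & (bases == 0)]
I_pp = rounds[~alice_q & (bases == 1)]

# The subsets partition the N rounds.
assert len(I_qq) + len(I_qp) + len(I_pq) + len(I_pp) == N
```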

(4) Parameter estimation. Alice and Bob perform parameter estimation based on the data in \(\mathscr {I}_{\text{ test } }\). First, they calculate the first and second moments of the q and p quadratures for each of the four coherent states sent by Alice. They then calculate the secret key rate by solving the convex optimization problem in Eq. (8).

If the resulting key rate is 0, Alice and Bob abort the protocol and start over. Otherwise, they continue with the next step.

(5) Reverse reconciliation key map. The key string \(\mathbf {Z}=\left( z_{1}, z_{2}, \ldots , z_{m}\right)\) at Bob is determined from his measurement outcomes \(y_k\) in step (2) according to the following rule:

$$\begin{aligned} z_{j}=\left\{ \begin{array}{ll} 0 &{} \text{ if } y_{f(j)} \in \left[ \Delta _{c}, \infty \right) , \\ 1 &{} \text{ if } y_{f(j)} \in \left( -\infty ,-\Delta _{c}\right] , \\ \perp &{} \text{ if } y_{f(j)} \in \left( -\Delta _{c}, \Delta _{c}\right) , \end{array}\right. \end{aligned}$$
(6)

where \(\Delta _{c} \ge 0\) is a postselection parameter.

Alice and Bob then identify the positions of the symbol \(\perp\) via classical communication and remove the data at those positions. The strings \(\mathbf {X}\) and \(\mathbf {Z}\) after removing \(\perp\) form the raw keys.
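The key map of Eq. (6) and the subsequent postselection can be sketched as follows; \(\Delta_c\) and the sample outcomes are chosen purely for illustration, and the integer −1 stands in for the discard symbol \(\perp\):

```python
import numpy as np

def key_map(y, delta_c):
    """Map Bob's homodyne outcomes y to bits per Eq. (6).

    Returns an integer array with entries 0, 1, or -1 (the discard symbol).
    """
    z = np.full(len(y), -1)      # -1 plays the role of the symbol ⊥
    z[y >= delta_c] = 0          # y in [Δc, ∞)  -> 0
    z[y <= -delta_c] = 1         # y in (-∞,-Δc] -> 1
    return z                     # y in (-Δc,Δc) stays -1 (discard)

# Hypothetical measurement outcomes and postselection threshold.
y = np.array([1.3, -0.2, -2.1, 0.05])
z = key_map(y, delta_c=0.5)

keep = z != -1                   # postselection: drop ⊥ positions on both sides
# z[keep] -> [0, 1]
```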

(6) Error correction and privacy amplification. Alice and Bob apply a suitable error-correction protocol and a suitable privacy-amplification protocol to distill the final secret keys.

The key rate can be calculated using the well-known Devetak-Winter formula65 in the asymptotic limit under collective attacks. To apply this formula, we transform the prepare-and-measure protocol into an equivalent entanglement-based protocol.

Alice prepares the state according to the ensemble \(\left\{ \left| \varphi _{x}\right\rangle , p_{x}\right\}\) in the prepare-and-measure protocol. In the equivalent entanglement-based protocol, Alice prepares the bipartite state in the form of \(|\Psi \rangle _{A A^{\prime }}=\sum _{x} \sqrt{p_{x}}|x\rangle _{A}\left| \varphi _{x}\right\rangle _{A^{\prime }}\). Here Alice keeps \(|x\rangle _{A}\) in register A and sends \(\left| \varphi _{x}\right\rangle _{A^{\prime }}\) to Bob. \(\left| \varphi _{x}\right\rangle _{A^{\prime }}\) changes as it passes through an insecure quantum channel. The process can be described by a completely positive and trace-preserving map \(\mathscr {E}_{A^{\prime } \rightarrow B}\). The bipartite state \(\rho _{A B}\) thus transforms into

$$\begin{aligned} \rho _{A B}=\left( \mathrm {id}_{A} \otimes \mathscr {E}_{A^{\prime } \rightarrow B}\right) \left( |\Psi \rangle \langle \Psi |_{A A^{\prime }}\right) , \end{aligned}$$
(7)

where \(\mathrm {id}_{A}\) is the identity transformation acting on A. Under reverse reconciliation66, the key rate formula can be expressed according to Refs.32,33 as

$$\begin{aligned} R^{\infty }=\min _{\rho _{A B} \in \mathbf {S}} D\left( \mathscr {G}\left( \rho _{A B}\right) \Vert \mathscr {Z}\left[ \mathscr {G}\left( \rho _{A B}\right) \right] \right) -p_{\mathrm {pass}} \delta _{\mathrm {EC}}. \end{aligned}$$
(8)
Algorithm 1 and Algorithm 2 (rendered as pseudocode figures in the original article).

Details of data preprocessing

To improve the performance of our neural network, we preprocess the training inputs \(\left\{ \mathbf {x}_{i}\right\}\) before training the neural network. The process can be expressed as

$$\begin{aligned} x_{i j}^{*}=\frac{x_{i j}-{\bar{x}}_{j}}{ \sigma _{j}}, \end{aligned}$$
(9)

where \(x_{i j}\) represents the j-th component of the i-th sample; \({\bar{x}}_{j}\) and \(\sigma _{j}\) are the mean and standard deviation of the j-th component over all samples, respectively; \(x_{i j}^{*}\) is the j-th component of the i-th sample after preprocessing.

After this standardization, each component of the preprocessed data \(\{\mathbf {x}_i^*\}\) has a mean of 0 and a variance of 1. The process removes the differing scales of the features and makes features of different dimensions directly comparable. Since the key rates in these samples span four orders of magnitude, we also preprocess the labels as follows to speed up the training of the neural networks:

$$\begin{aligned} y_{i}^{ *}=-\log _{10}\left( y_{i}\right) , \end{aligned}$$
(10)

where \(y_{i}^{ *}\) is the preprocessed label of the i-th sample. Note that the outputs predicted by neural networks trained on the preprocessed labels \(\{y_{i}^{ *}\}\) must be inverted using the following equation:

$$\begin{aligned} y_{i}^{ p}=10^{-y_{i}^{*p}}, \end{aligned}$$
(11)

where \(y_{i}^{*p}\) and \(y_{i}^{ p}\) are the output value and the predicted key rate of the neural networks for the i-th sample, respectively.
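Equations (9)-(11) amount to a small preprocessing and postprocessing pipeline. A minimal NumPy sketch, with hypothetical sample values:

```python
import numpy as np

def preprocess_inputs(X):
    """Standardize each feature column per Eq. (9): zero mean, unit variance."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std, mean, std

def preprocess_labels(y):
    """Compress the dynamic range of the key rates per Eq. (10)."""
    return -np.log10(y)

def postprocess_predictions(y_star):
    """Invert Eq. (10) to recover predicted key rates per Eq. (11)."""
    return 10.0 ** (-y_star)

# Hypothetical samples: rows are samples, columns are features
# (e.g. distance and excess noise); values are illustrative only.
X = np.array([[10.0, 0.008], [50.0, 0.010], [100.0, 0.012]])
y = np.array([1e-2, 1e-4, 1e-6])   # key rates spanning 4 orders of magnitude

X_star, mu, sigma = preprocess_inputs(X)
y_star = preprocess_labels(y)       # -> [2., 4., 6.]

# The inverse transform recovers the original labels exactly.
assert np.allclose(postprocess_predictions(y_star), y)
```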

Algorithms 1 and 2 show the detailed training process of the neural networks and the procedure for using the trained networks to predict new samples, respectively.
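The prediction stage of Algorithm 2 reduces to a forward pass through the trained network followed by the inverse transform of Eq. (11). The sketch below uses a tiny feedforward network with random, untrained weights purely to illustrate the data flow; the layer sizes and parameters are hypothetical stand-ins, not the architecture of the paper:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def predict_key_rates(X_star, params):
    """Forward pass on preprocessed inputs, then invert Eq. (10)."""
    W1, b1, W2, b2 = params
    h = relu(X_star @ W1 + b1)       # hidden layer
    y_star = h @ W2 + b2             # network predicts -log10(key rate)
    return 10.0 ** (-y_star)         # Eq. (11): back to key rates

# Hypothetical (untrained) weights for a 3 -> 8 -> 1 network.
rng = np.random.default_rng(1)
params = (rng.normal(size=(3, 8)), np.zeros(8),
          rng.normal(size=(8, 1)), np.zeros(1))

rates = predict_key_rates(rng.normal(size=(5, 3)), params)
assert rates.shape == (5, 1) and np.all(rates > 0)
```

Because the network outputs \(-\log_{10}\) of the key rate, the predicted rates are positive by construction; a separate check against 0 (protocol abort) is still needed in practice.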

Table 1 Relative deviations between key rates predicted by our neural network and the corresponding key rates obtained by the numerical method for the given optimal light intensity at different distances and different excess noises.
Table 2 Time consumption of the neural network versus the numerical method with excess noise \(\xi\) of 0.008, 0.010 and 0.012.

Detailed data

Table 1 shows the relative deviations between the key rates predicted by our neural network and the corresponding key rates obtained by the numerical method for the given optimal light intensity at different distances and different excess noises. This table is a supplement to Fig. 4.

Table 2 shows the specific time consumption of the neural network and the numerical method with excess noise \(\xi\) of 0.008, 0.010 and 0.012. With the numerical method, each point with excess noise \(\xi\) of approximately 0.01 takes 200 s on average, which greatly limits the efficiency with which a QKD system can calculate the secure key rate. In contrast, the neural network can calculate tens of thousands of key rates in 1 s. Given that the QKD system in any case needs a certain amount of time to collect data, the speed at which the neural network predicts key rates fully meets the needs of practical applications.

References

  1. Lloyd, S., Mohseni, M. & Rebentrost, P. Quantum principal component analysis. Nat. Phys. 10, 631–633 (2014).

  2. Ciliberto, C. et al. Quantum machine learning: a classical perspective. Proc. R. Soc. A 474, 20170551 (2018).

  3. Beer, K. et al. Training deep quantum neural networks. Nat. Commun. 11, 808 (2020).

  4. Bondarenko, D. & Feldmann, P. Quantum autoencoders to denoise quantum data. Phys. Rev. Lett. 124, 130502 (2020).

  5. Farhi, E. & Neven, H. Classification with quantum neural networks on near term processors. arXiv preprint arXiv:1802.06002 (2018).

  6. Mitarai, K., Negoro, M., Kitagawa, M. & Fujii, K. Quantum circuit learning. Phys. Rev. A 98, 032309 (2018).

  7. Wan, K. H., Dahlsten, O., Kristjánsson, H., Gardner, R. & Kim, M. Quantum generalisation of feedforward neural networks. npj Quantum Inf. 3, 36 (2017).

  8. Chen, Z.-B. Quantum neural network and soft quantum computing. arXiv preprint arXiv:1810.05025 (2018).

  9. Jerbi, S., Trenkwalder, L. M., Nautrup, H. P., Briegel, H. J. & Dunjko, V. Quantum enhancements for deep reinforcement learning in large spaces. PRX Quantum 2, 010328 (2021).

  10. Abbas, A. et al. The power of quantum neural networks. Nat. Comput. Sci. 1, 403–409 (2021).

  11. Torlai, G. et al. Neural-network quantum state tomography. Nat. Phys. 14, 447–450 (2018).

  12. Smith, A. W., Gray, J. & Kim, M. Efficient quantum state sample tomography with basis-dependent neural networks. PRX Quantum 2, 020348 (2021).

  13. Quek, Y., Fort, S. & Ng, H. K. Adaptive quantum state tomography with neural networks. npj Quantum Inf. 7, 105 (2021).

  14. Gao, J. et al. Experimental machine learning of quantum states. Phys. Rev. Lett. 120, 240501 (2018).

  15. Ma, Y.-C. & Yung, M.-H. Transforming Bell’s inequalities into state classifiers with machine learning. npj Quantum Inf. 4, 34 (2018).

  16. Yang, M. et al. Experimental simultaneous learning of multiple nonclassical correlations. Phys. Rev. Lett. 123, 190401 (2019).

  17. Hentschel, A. & Sanders, B. C. Efficient algorithm for optimizing adaptive quantum metrology processes. Phys. Rev. Lett. 107, 233601 (2011).

  18. Fiderer, L. J., Schuff, J. & Braun, D. Neural-network heuristics for adaptive Bayesian quantum estimation. PRX Quantum 2, 020303 (2021).

  19. Cimini, V. et al. Calibration of multiparameter sensors via machine learning at the single-photon level. Phys. Rev. Appl. 15, 044003 (2021).

  20. Bukov, M. et al. Reinforcement learning in different phases of quantum control. Phys. Rev. X 8, 031086 (2018).

  21. Wise, D. F., Morton, J. J. & Dhomkar, S. Using deep learning to understand and mitigate the qubit noise environment. PRX Quantum 2, 010316 (2021).

  22. Coyle, B., Doosti, M., Kashefi, E. & Kumar, N. Variational quantum cloning: Improving practicality for quantum cryptanalysis. arXiv preprint arXiv:2012.11424 (2020).

  23. Bennett, C. H. & Brassard, G. Quantum cryptography: Public key distribution and coin tossing. In Proc. IEEE International Conference on Computers, Systems and Signal Processing, Bangalore, India, 175–179 (1984).

  24. Ekert, A. K. Quantum cryptography based on Bell’s theorem. Phys. Rev. Lett. 67, 661 (1991).

  25. Xu, F., Ma, X., Zhang, Q., Lo, H.-K. & Pan, J.-W. Secure quantum key distribution with realistic devices. Rev. Mod. Phys. 92, 025002 (2020).

  26. Xie, Y.-M. et al. Breaking the rate-loss bound of quantum key distribution with asynchronous two-photon interference. PRX Quantum 3, 020315 (2022).

  27. Yin, H.-L., Zhu, W. & Fu, Y. Phase self-aligned continuous-variable measurement-device-independent quantum key distribution. Sci. Rep. 9, 49 (2019).

  28. Yin, H.-L. et al. Experimental composable security decoy-state quantum key distribution using time-phase encoding. Opt. Express 28, 29479–29485 (2020).

  29. Tang, G.-Z., Li, C.-Y. & Wang, M. Polarization discriminated time-bin phase-encoding measurement-device-independent quantum key distribution. Quant. Eng. 3, e79 (2021).

  30. Cui, Z.-X., Zhong, W., Zhou, L. & Sheng, Y.-B. Measurement-device-independent quantum key distribution with hyper-encoding. Sci. China Phys. Mech. Astron. 62, 1–10 (2019).

  31. Matsuura, T., Maeda, K., Sasaki, T. & Koashi, M. Finite-size security of continuous-variable quantum key distribution with digital signal processing. Nat. Commun. 12, 252 (2021).

  32. Coles, P. J., Metodiev, E. M. & Lütkenhaus, N. Numerical approach for unstructured quantum key distribution. Nat. Commun. 7, 11712 (2016).

  33. Winick, A., Lütkenhaus, N. & Coles, P. J. Reliable numerical key rates for quantum key distribution. Quantum 2, 77 (2018).

  34. Primaatmaja, I. W., Lavie, E., Goh, K. T., Wang, C. & Lim, C. C. W. Versatile security analysis of measurement-device-independent quantum key distribution. Phys. Rev. A 99, 062332 (2019).

  35. Tan, E. Y.-Z., Schwonnek, R., Goh, K. T., Primaatmaja, I. W. & Lim, C. C.-W. Computing secure key rates for quantum cryptography with untrusted devices. npj Quantum Inf. 7, 158 (2021).

  36. Pirandola, S. et al. Advances in quantum cryptography. Adv. Opt. Photon. 12, 1012–1236 (2020).

  37. Zhang, Y. et al. Long-distance continuous-variable quantum key distribution over 202.81 km of fiber. Phys. Rev. Lett. 125, 010502 (2020).

  38. Grosshans, F. & Grangier, P. Continuous variable quantum cryptography using coherent states. Phys. Rev. Lett. 88, 057902 (2002).

  39. Weedbrook, C. et al. Quantum cryptography without switching. Phys. Rev. Lett. 93, 170504 (2004).

  40. Zhao, Y.-B., Heid, M., Rigas, J. & Lütkenhaus, N. Asymptotic security of binary modulated continuous-variable quantum key distribution under collective attacks. Phys. Rev. A 79, 012307 (2009).

  41. Leverrier, A. & Grangier, P. Unconditional security proof of long-distance continuous-variable quantum key distribution with discrete modulation. Phys. Rev. Lett. 102, 180504 (2009).

  42. Hirano, T. et al. Implementation of continuous-variable quantum key distribution with discrete modulation. Quantum Sci. Tech. 2, 024010 (2017).

  43. Ghorai, S., Grangier, P., Diamanti, E. & Leverrier, A. Asymptotic security of continuous-variable quantum key distribution with a discrete modulation. Phys. Rev. X 9, 021059 (2019).

  44. Lin, J., Upadhyaya, T. & Lütkenhaus, N. Asymptotic security analysis of discrete-modulated continuous-variable quantum key distribution. Phys. Rev. X 9, 041064 (2019).

  45. Lin, J. & Lütkenhaus, N. Trusted detector noise analysis for discrete modulation schemes of continuous-variable quantum key distribution. Phys. Rev. Appl. 14, 064030 (2020).

  46. Liu, W.-B. et al. Homodyne detection quadrature phase shift keying continuous-variable quantum key distribution with high excess noise tolerance. PRX Quantum 2, 040334 (2021).

  47. Upadhyaya, T., van Himbeeck, T., Lin, J. & Lütkenhaus, N. Dimension reduction in quantum key distribution for continuous- and discrete-variable protocols. PRX Quantum 2, 020325 (2021).

  48. Kanitschar, F. & Pacher, C. Tight secure key rates for CV-QKD with 8PSK modulation. arXiv preprint arXiv:2107.06110 (2021).

  49. Kaur, E., Guha, S. & Wilde, M. M. Asymptotic security of discrete-modulation protocols for continuous-variable quantum key distribution. Phys. Rev. A 103, 012412 (2021).

  50. Denys, A., Brown, P. & Leverrier, A. Explicit asymptotic secret key rate of continuous-variable quantum key distribution with an arbitrary modulation. Quantum 5, 540 (2021).

  51. Hu, H., Im, J., Lin, J., Lütkenhaus, N. & Wolkowicz, H. Robust interior point method for quantum key distribution rate computation. arXiv preprint arXiv:2104.03847 (2021).

  52. Bunandar, D., Govia, L. C., Krovi, H. & Englund, D. Numerical finite-key analysis of quantum key distribution. npj Quantum Inf. 6, 104 (2020).

  53. George, I., Lin, J. & Lütkenhaus, N. Numerical calculations of the finite key rate for general quantum key distribution protocols. Phys. Rev. Res. 3, 013274 (2021).

  54. Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous systems (2015). Software available from https://www.tensorflow.org/.

  55. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, Cambridge, 2016).

  56. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).

  57. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014).

  58. Lu, F.-Y. et al. Parameter optimization and real-time calibration of a measurement-device-independent quantum key distribution network based on a back propagation artificial neural network. J. Opt. Soc. Am. B 36, B92–B98 (2019).

  59. Wang, W. & Lo, H.-K. Machine learning for optimal parameter prediction in quantum key distribution. Phys. Rev. A 100, 062334 (2019).

  60. Liu, W., Huang, P., Peng, J., Fan, J. & Zeng, G. Integrating machine learning to achieve an automatic parameter prediction for practical continuous-variable quantum key distribution. Phys. Rev. A 97, 022316 (2018).

  61. Liu, J.-Y., Ding, H.-J., Zhang, C.-M., Xie, S.-P. & Wang, Q. Practical phase-modulation stabilization in quantum key distribution via machine learning. Phys. Rev. Appl. 12, 014059 (2019).

  62. Chin, H.-M., Jain, N., Zibar, D., Andersen, U. L. & Gehring, T. Machine learning aided carrier recovery in continuous-variable quantum key distribution. npj Quantum Inf. 7, 20 (2021).

  63. Kwek, L.-C. et al. Chip-based quantum key distribution. AAPPS Bull. 31, 15 (2021).

  64. Wang, X.-F. et al. Transmission of photonic polarization states from geosynchronous earth orbit satellite to the ground. Quant. Eng. 3, e73 (2021).

  65. Devetak, I. & Winter, A. Distillation of secret key and entanglement from quantum states. Proc. R. Soc. A 461, 207–235 (2005).

  66. Grosshans, F., Cerf, N. J., Wenger, J., Tualle-Brouri, R. & Grangier, P. Virtual entanglement and reconciliation protocols for quantum cryptography with continuous variables. arXiv preprint arXiv:quant-ph/0306141 (2003).


Acknowledgements

We gratefully acknowledge the support from the Natural Science Foundation of Jiangsu Province (No. BK20211145), the Fundamental Research Funds for the Central Universities (No. 020414380182), the Key Research and Development Program of Nanjing Jiangbei New Area (No. ZDYD20210101), and the Key-Area Research and Development Program of Guangdong Province (No. 2020B0303040001). We are grateful to the High Performance Computing Center of Nanjing University for performing the numerical calculations in this paper on its blade cluster system.

Author information


Contributions

H.-L.Y. and Z.-B.C. conceived the research. M.-G.Z., Z.-P.L. and H.-L.Y. devised the neural network architecture and carried out the numerical simulations. M.-G.Z., Z.-P.L., W.-B.L., C.-L.L., J.-L.B., Y.-R.X, Y.F. and H.-L.Y. developed the theory and calculated the secure key rate. All authors discussed the results and prepared the manuscript. M.-G.Z. and Z.-P.L. contributed equally to this work.

Corresponding authors

Correspondence to Hua-Lei Yin or Zeng-Bing Chen.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zhou, MG., Liu, ZP., Liu, WB. et al. Neural network-based prediction of the secret-key rate of quantum key distribution. Sci Rep 12, 8879 (2022). https://doi.org/10.1038/s41598-022-12647-x

