An Approach to Cryptography Based on Continuous-Variable Quantum Neural Network

An efficient cryptography scheme is proposed based on a continuous-variable quantum neural network (CV-QNN), in which a specified CV-QNN model is introduced for designing the quantum cryptography algorithm. It indicates an approach to designing a quantum neural cryptosystem that contains the processes of key generation, encryption and decryption. Security analysis demonstrates that our scheme is secure. Several simulation experiments are performed on the Strawberry Fields platform, processing the classical data "Quantum Cryptography" with the CV-QNN, to demonstrate the feasibility of our method. Three sets of representative experiments are presented; the second set of experimental results confirms that our scheme can correctly and effectively encrypt and decrypt both classical and quantum data with the optimal learning rate 8e-2, and better performance can be achieved with the method of learning rate adaption (with increase factor R1 = 2 and decrease factor R2 = 0.8). Indeed, the scheme with learning rate adaption shortens the encryption and decryption time according to the simulation results presented in Figure 12. It can be considered a valid quantum cryptography scheme and has potential applications on quantum devices.

According to the discrete and continuous spectra of quantum eigenstates, quantum states can be divided into two categories, discrete variables and continuous variables, and discrete-variable quantum information theory has been widely researched. This has inspired the continuous-variable quantum fields, including the extension of quantum information communication from finite to infinite dimensions. In continuous-variable fields, information represented by qumodes is carried in the quantum states of bosonic modes, and the continuous quadrature amplitudes of the quantized electromagnetic field can be applied to implement quantum state preparation, unitary manipulation and quantum measurement 33,34 . Unlike discrete-variable quantum models, which perform unitary operations such as Pauli matrices, continuous-variable quantum models often utilize Gaussian and non-Gaussian operators 33 to transform quantum states. For a qumode described by two real-valued quadratures (x, p) ∈ R^2, the transformations on phase space with Gaussian operation gates 34 can be summarized as follows: the simplest single-mode Gaussian gates R(φ), D(α) and S(r) are the rotation gate, displacement gate and squeezing gate respectively, and the (phaseless) beamsplitter BS(θ) is the basic two-mode Gaussian gate. The ranges for the parameter values are φ, θ ∈ [0, 2π), α ∈ C ≅ R^2, and r ≥ 0. A general CV-QNN model 34 is presented in Fig. 1. The width of the later layers can be decreased (increased) by tracing out qumodes (adding ancillary qumodes), and the output of the last layer can be measured to obtain valued information. Moreover, a classical neural network can be embedded into the general CV-QNN model by fixing gate parameters so that the formalism does not create any superposition or entanglement.
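As an illustration, the phase-space action of these Gaussian gates on the quadrature means can be sketched with plain Python (a classical simulation of the Gaussian means only, under a common convention where S(r) contracts x and stretches p; the scaling factor for D(α) varies between conventions and is omitted here):

```python
import math

def rotate(x, p, phi):
    """R(phi): rotate the quadrature pair (x, p) in phase space."""
    return (math.cos(phi) * x - math.sin(phi) * p,
            math.sin(phi) * x + math.cos(phi) * p)

def displace(x, p, alpha):
    """D(alpha): shift the means by Re(alpha), Im(alpha)
    (a convention-dependent sqrt(2) factor is omitted)."""
    return (x + alpha.real, p + alpha.imag)

def squeeze(x, p, r):
    """S(r): contract x and stretch p (r >= 0)."""
    return (math.exp(-r) * x, math.exp(r) * p)

def beamsplit(x1, p1, x2, p2, theta):
    """Phaseless BS(theta): mix the quadratures of two modes."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x1 - s * x2, c * p1 - s * p2,
            s * x1 + c * x2, s * p1 + c * p2)
```

For example, a rotation by π/2 maps the x quadrature onto p, matching the parameter range φ ∈ [0, 2π) given above.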
In other words, the CV-QNN can deal with classical data: an input state |c〉 encoding classical data c can be created by applying the displacement operator D(c) to the vacuum state |0〉. In addition, different QNN models, such as a recurrent quantum neural network, can reasonably be constructed with the changeable structure in Fig. 1, and a neuron of the quantum neural network needs to be specified as well to achieve different functions.
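A minimal sketch of this encoding idea, simulating only the Gaussian means: the state D(c)|0〉 has a quadrature expectation proportional to c, so classical data can be written in as displacements and read back from homodyne-style expectations (function names and scaling here are illustrative, not part of the original scheme):

```python
def encode(values):
    """Encode classical values as displacement means on vacuum modes.
    Vacuum has mean (x, p) = (0, 0); D(c) with real c shifts x to c."""
    return [(c, 0.0) for c in values]

def decode(modes):
    """Read the data back as the expectation of the x quadrature."""
    return [x for x, _ in modes]

data = [0.25, 0.5, 1.0]
assert decode(encode(data)) == data
```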
Training algorithms for quantum neural network. An initial neural network is required to be trained so that it can handle practical problems, such as correctly encrypting and decrypting data or classifying images. The methods for training a QNN roughly fall into two main categories:
• Optimize neural network parameters with existing quantum algorithms, for example, utilizing the quantum search algorithm to find optimal weights for the network 35 .
• Generate quantum training algorithms corresponding to the classical training algorithms to find the optimal value of the target function.
Gradient descent, belonging to the second category, can be applied to quantum computation; it is so widely used that many modules on programming software platforms can automatically compute gradients. In this scheme, we perform experiments on Strawberry Fields 32 and adopt the Adam algorithm to optimize the CV-QNN. Adam is a stochastic gradient descent algorithm, which is suitable for optimizing the quantum neural cryptosystem due to its non-deterministic but optimized output. Specifically, optimizing the quantum neural network can be implemented by adjusting the parameters of the transformation matrices. Taking the rotation operator R(φ) as an example, the corresponding transformation can be derived after training the QNN according to Eq. (1).

Cryptography algorithm based on continuous-variable quantum neural network. The specific model design for the cryptography algorithm and the processes of secret-key generation, encryption and decryption with the CV-QNN model are provided in this section.
Design of CV-QNN for cryptography algorithm. The mathematical isomorphism between the input and output of a neuron verifies that a CV-QNN can be utilized to encrypt and decrypt data. The general function expression of a classical neural network is Y = f(W * X + b), where W, X and b are the weight matrix, input vector and bias vector respectively, and Y is the output vector of the classical neural network. Similarly, a theoretical expression between neurons of the CV-QNN 34 can be obtained, where x(j) represents the jth input of the neuron (or the jth output of a neuron in the last layer), ŷ(k) represents the kth output of the neuron (or the kth input of a neuron in the next layer), α(k) represents the parameter of the displacement D(α(k)), and ϕ(·) is a nonlinear function. A corresponding mathematical isomorphism holds between the layers of the CV-QNN. In addition, the initial inputs of the network can easily be recovered by taking the inverse of the unitary matrix.

Figure 1. A general continuous-variable quantum neural network model. L(ι) for ι ∈ {1, 2, ..., n} represents a single layer of the quantum neural network. The width of layers can be decreased by tracing out some qumodes or increased by adding some auxiliary qumodes. The output of the last layer can be measured to obtain valued information. (2020) 10:2107 | https://doi.org/10.1038/s41598-020-58928-1

In order to design the cryptography model effectively and practically so that it conforms to Eq. (8), Gaussian and non-Gaussian operators are fixed to construct a neuron of the quantum neural network. Fig. 2(a) introduces the schematic of general neurons of the CV-QNN 34 corresponding to neurons of the layer L(ι), and the schematic of the specific neurons for the cryptography model is presented in Fig. 2(b), where rotation operators R1 and R2 take the place of U1 and U2 in Fig. 2(a) respectively.
Hence, a neuron can be defined as in Fig. 2(b): the first four components implement an affine transformation, followed by a final nonlinear transformation. The above discussion demonstrates that a quantum neural network can properly be applied as a cryptosystem with secret key W. Thus, a cryptography model can be designed with the multi-layer CV-QNN presented in Fig. 3, where the inputs x(k) are obtained from the plaintext M according to Eq. (5), and the x(k) are then computed by the CV-QNN. Besides, the CV-QNN has two kinds of outputs: ĥ(k), the outputs of the hidden layer, and ŷ(k), the direct outputs of the last layer. ĥ(k) can be used to verify the integrity of the data, and ŷ(k) can be utilized to construct the cipher block (detailed in the next section).
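A classical toy analogue of this layer structure, sketched in NumPy under the assumption of an invertible pointwise nonlinearity (tanh here, purely for illustration), shows why the secret affine parameters (W, b) let a legitimate receiver invert the transformation, echoing the remark that the inputs can be recovered by taking the inverse of the matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(4, 4))   # secret weight matrix (invertible with high prob.)
b = rng.normal(size=4)        # secret bias (displacement analogue)

def layer(x):
    """Affine transformation followed by a pointwise nonlinearity."""
    return np.tanh(W @ x + b)

def layer_inverse(y):
    """Recover the input: invert the nonlinearity, then the affine map."""
    return np.linalg.solve(W, np.arctanh(y) - b)

x = np.array([0.1, -0.2, 0.3, 0.05])
assert np.allclose(layer_inverse(layer(x)), x)
```

Without W and b, an outsider sees only the transformed values; with them, the round trip is exact, which is the isomorphism the cryptosystem relies on.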
Key generation. The weights may be random before training the CV-QNN, so the quantum neural network is required to be trained with many training sets and a training algorithm in order to process data correctly. During the process of training, the weights W in Eq. (8) are updated, i.e., the secret keys are generated. In addition, the network architecture, the chosen optimization algorithm and the training sets, which are unrevealed, determine the distributions of the weights of the hidden layers 36 . In other words, the network architecture and related configuration can also be regarded as keys for the quantum neural cryptosystem. Hence, multiple keys are contained in the cryptosystem, so it is difficult for adversaries to obtain all of the above configurations and thereby acquire the secret keys. Moreover, the dimensions of the input and output and the hierarchy of hidden layers decide the length and complexity of the keys. Thus, valid users can change the length of the keys accordingly to satisfy the security requirements of communications 37 .
Encryption. If the plaintext M is classical data, the data are first preprocessed into qumodes in accordance with the dimension m of the input vector, which determines the total number of encryptions. The whole process of encryption can be simply presented as Eq. (11).
The dimension m can change to m′ in each layer by tracing some qumodes out 34 or adding ancillary qumodes, and n represents the size of the hidden layers. Let the output state of the circuit be |ψ(x)〉 for the given input D(M)|0〉; the expectation value of the quadrature operator ŷ, namely the output of the neural network, is 〈ŷ〉. Hence the error function, or cost function, can be written as Eq. (13).
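The exact form of Eq. (13) is not reproduced here, but a standard choice for such a cost is the squared error between the target quadratures and the measured expectations. A self-contained sketch of optimizing a single gate parameter with the Adam update rule (the optimizer the scheme adopts; the target value and iteration count below are illustrative, while 8e-2 is the paper's reported optimal learning rate):

```python
import math

def cost(phi, target=0.7):
    """Illustrative squared-error cost over one gate parameter."""
    return (phi - target) ** 2

def grad(phi, target=0.7):
    """Analytic gradient of the cost above."""
    return 2.0 * (phi - target)

# Adam update with the usual default moment coefficients.
phi, m, v = 0.0, 0.0, 0.0
lr, b1, b2, eps = 8e-2, 0.9, 0.999, 1e-8
for t in range(1, 501):
    g = grad(phi)
    m = b1 * m + (1 - b1) * g          # first-moment estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    phi -= lr * m_hat / (math.sqrt(v_hat) + eps)

assert cost(phi) < 0.05   # the parameter has moved close to the target
```

In the actual scheme the gradient is computed automatically by the Strawberry Fields platform rather than analytically as above.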
The process of encryption is shown in Fig. 4, where the qumodes x(k) can be input into the CV-QNN in batches or all at once. ŷ(k) are the final outputs of the neural network, which are computed to obtain E(k). ĥ(k), from the outputs of the hidden layer of the CV-QNN, can serve as the message authentication code (MAC) [37][38][39] , and the cipher block can then be constructed from them. Apparently, the cryptosystem implements both information encryption and the features of a MAC.
Decryption. The process of decryption is shown in Fig. 5 40 , where ε is the fault-tolerance limit; if the verification passes, Bob can accept the integrated x(k). The whole communication between Alice and Bob is illustrated in Fig. 6. Alice and Bob communicate with each other using an identical neural network. The first stage is that Alice and Bob synchronize measurement bases (MB) together (the synchronized MB are denoted as LMB). The process of synchronization can be described as the following steps: (i) Alice sends quantum states generated by random sets of MB (denoted MB_A) with m sets of MB. The process of data verification is shown in the top dotted box, where h′(k) is used to verify the integrity of the data received by the receiver. Fig. 8 demonstrates that more LMB and cipher blocks can reduce the probability of successfully intercepting the cipher; in general communications, just two sets of LMB can already provide high security. For the message replay attack, assume that Eve wants to cheat the receiver with prepared quantum states instead of the real cipher and sends the fake cipher to the receiver. Specifically, Eve changes ĥ(k) and/or E(k) and sends them to Bob for the purpose of message replay. Bob then decrypts and obtains data x′(k); meanwhile, x′(k) are used as the inputs of the neural network to obtain h′(k). According to the comparison between h′(k) and ĥ(k), on the order of 2^n attempts are required for replaying an n-bit cipher. Therefore, the encrypted information cannot be eavesdropped by an attacker lacking the corresponding LMB, and a cipher replay attack cannot succeed because of the exponential difficulty of passing the whole MAC. The small probability of these attacks succeeding gives the scheme high security. This kind of attack is even less feasible for the CV-QNN with continuous variables, because the attacker cannot brute-force the continuous cipher.
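The integrity check described above can be sketched as follows: the receiver recomputes the hidden-layer output h′(k) from the decrypted data and accepts the block only if it matches the transmitted MAC ĥ(k) within the fault tolerance ε (the function name and the default value of ε below are illustrative):

```python
def verify_mac(h_received, h_recomputed, eps=1e-3):
    """Accept the block only if every component of the recomputed
    hidden-layer output matches the transmitted MAC within eps."""
    if len(h_received) != len(h_recomputed):
        return False
    return all(abs(a - b) < eps for a, b in zip(h_received, h_recomputed))

# An unmodified block passes; a tampered component is rejected.
h = [0.12, -0.34, 0.56]
assert verify_mac(h, [0.12, -0.34, 0.56])
assert not verify_mac(h, [0.12, -0.34, 0.99])
```

A tolerance is needed here (rather than exact equality) because the recomputed quadrature expectations are continuous values subject to measurement noise.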
It is also impossible for an invader to synchronize an unknown neural network to crack the cipher unless he knows the structure of the neural network very clearly 41 . Thus the scheme can resist cipher attacks and ensure the security of the proposed cryptography algorithm to the maximum extent.

Resistance of system forgery attack.
In the situation where the private key is static during an encryption, a cryptanalyst can analyze the key by intercepting numerous plaintexts with corresponding available ciphers, even in a classical large-scale neural network cryptosystem. To simulate a neural network similar to the cryptosystem, the attacker can train a new neural network with the intercepted data and compare the outputs of the network with the available ciphertext, adjusting the training algorithm, network architecture, etc., to obtain the plaintext directly. Furthermore, this is a non-negligible attack for synchronizing network cryptosystems 42 .
Suppose that a hacker can copy the intercepted quantum plaintext and corresponding cipher to construct a similar cryptography model; this seems to be a threat to our scheme and is worth considering. The neural network can be kept unstable so that the generated cipher is chaotic and unpredictable, resisting the attack. Similar to the TCP congestion control mechanism, learning rate adaption, which adjusts the learning rate during the process of encryption, contributes to solving the problem 37 . Define a parameter ξ ∈ R and compare ξ with the value of the loss function E(k) to control the learning rate η. When ξ is less than E(k), the learning rate is increased (i.e., η is multiplied by the increase factor R1 in Table 1); otherwise it is reduced (i.e., η is multiplied by the decrease factor R2). The unstable neural network, which generates chaotic cipher, cannot be successfully simulated by a hacker who cannot find the laws of the encryption. In addition, each plaintext block is encrypted with a pair of corresponding secret keys denoted by τ(k″), where k″ = 1, 2, ..., ⌈L(M)/m⌉, and the total length of the keys is the sum of the τ(k″). According to Eq. (11), the composition of the key K_all can be expressed accordingly.

Figure 8. The success probability of cipher eavesdropping for an attacker. When the number of sets of LMB is 2 and the number of cipher blocks is greater than 10, the success probability of intercepting the cipher tends to 0. When the number of sets of LMB is 3, the success probability of cipher eavesdropping is already 0 once the number of cipher blocks is just larger than 6.
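The adaption rule described above amounts to a few lines of code; this sketch uses the paper's values ξ = 0.04, R1 = 2, R2 = 0.8 and its optimal base learning rate 8e-2 (the surrounding training loop and the loss values are illustrative):

```python
XI, R1, R2 = 0.04, 2.0, 0.8   # control value, increase/decrease factors

def adapt_learning_rate(eta, loss):
    """Increase eta when the loss exceeds the control value, else decay it,
    keeping the network unstable enough to produce chaotic cipher."""
    return eta * R1 if loss > XI else eta * R2

eta = 8e-2
eta = adapt_learning_rate(eta, loss=0.10)   # loss above xi: eta doubles
eta = adapt_learning_rate(eta, loss=0.01)   # loss below xi: eta shrinks
```

The multiplicative-increase/multiplicative-decrease shape is what makes the analogy to TCP congestion control apt: the learning rate never settles, so neither do the weights that serve as keys.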
This indicates that κ is perfectly confidential. Hence the scheme can resist the chosen-plaintext attack.
Performance analysis. Due to quantum properties, more classical information can be encoded into the multiple degrees of freedom of a quantum state; hence a quantum neural network can carry more information than a classical cryptosystem. For the sake of simplicity, classical information and quantum states are in one-to-one correspondence in our scheme. Compared to cryptosystems that always require a new private key for a "one-time pad", which increases communication time, the cryptography algorithm based on the CV-QNN performs effectively with parallel computational power 44 and high key utilization. Define the total number of neurons as mn, where m is the number of neurons per neural layer and n is the number of neural layers, and the average number of operators in a neuron as O_p. The minimum key utilization ratio can then be expressed as μ.
With the assistance of the learning process of the quantum neural network, as the number of encryptions increases, the number of changing weights slowly decreases. This means the neural network converges and encrypts faster, especially when correlations exist between plaintexts. Fig. 9 shows the weight changes at different steps; all configuration parameters are from the fourth simulation experiment in the "Simulation" subsection of the paper. We can see that from the 100th step to the 500th step the weights gradually converge, i.e., O_p becomes small. In particular, as the value of O_p reduces, the key utilization μ increases. Hence, compared with other cryptography models not based on neural networks, the quantum neural network uses fewer keys to encrypt more data.

Experiment
Experiment  Hidden layers  Learning rate  Iterations  Learning rate adaption
First       8              8e-2           500         -
Second      8              8e-2           500         Control value ξ = 0.04, increase factor R1 = 2, decrease factor R2 = 0.8
Third       8              >2.0           500         Control value ξ = 0.04, increase factor R1 = 2, decrease factor R2 = 0.8
Table 1. Configuration parameters for the first, second, and third experiments.

Results and Discussion
A CV-QNN model is designed to construct a cryptosystem for encryption and decryption, characterized by quantum properties and the data-processing parallelism of neural networks. The multiple continuous variables, such as the phase parameters of the rotation operation, make the system difficult for attackers to crack. Moreover, no additional key negotiation process is required, since the learning process of the CV-QNN for encryption and decryption generates the keys; thus it is more efficient than other cryptography systems that require key negotiation. The capability of LMB is introduced in the pre-processing, which can solve the problem of cipher eavesdropping during communications, though it may increase overheads. A cryptosystem based on an ANN is most threatened when attackers capture a large amount of information to simulate a similar neural network to process data. Hence, a method analogous to "TCP congestion control" is applied to keep the network unstable, resisting system forgery attacks. The simulated encryption results demonstrate that the security can be improved by adapting parameters (the depth, the learning rate and so on), and the decryption results show that the original plaintext can be recovered without any error.

Simulation.
Simulation results are presented with the continuous-variable quantum simulation platform Strawberry Fields 32 to validate the feasibility of the scheme. The simulated neural network consists of 8 layers, and the cutoff dimension (the Hilbert-space truncation dimension) is 2. Several experimental simulations are performed with different learning rates, and three representative groups of experiments are selected to explore the specific cryptography task. In Table 1, ξ is the control value used to adapt the learning rate for keeping the neural network unstable, and the optimal learning rate for the experiments is 8e-2. The training algorithm is Adam, an automatic optimization algorithm on the simulation platform. It is worth mentioning that the quantum neural network can accept both quantum information and classical information; during the experimental simulations, the classical plaintext "Quantum Cryptography" is preprocessed into a 139-bit binary string, which is taken as an example input for the CV-QNN.
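The preprocessing of a classical plaintext into a binary string can be sketched as below. Note that the exact bit count depends on the character encoding chosen: 7-bit ASCII yields 140 bits for the 20-character string, so the paper's 139-bit figure presumably reflects a slightly different encoding (e.g. dropping a leading zero); the helper below is an illustration, not the paper's exact procedure.

```python
def text_to_bits(text, bits_per_char=7):
    """Encode a plaintext string as a binary string, 7-bit ASCII per char."""
    return "".join(format(ord(ch), "0{}b".format(bits_per_char))
                   for ch in text)

plain = "Quantum Cryptography"
bits = text_to_bits(plain)
assert len(bits) == 7 * len(plain)   # 140 bits under this encoding
assert set(bits) <= {"0", "1"}
```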
The first experimental results are shown in Fig. 10. Cipher1 ĥ(k) (shown in Fig. 10(a)) is the output of the penultimate layer of the neural network. Cipher2 E(k) (shown in Fig. 10(b)) is the two-dimensional function between input and output, where "times" represents the density scale of the displayed data. Note that the maximum error rate between x(k) and ŷ(k) is only 0.3% according to Fig. 10(b), which verifies that the quantum neural network can correctly encrypt data. Although cipher1 approximates the plaintext, it is difficult for attackers to steal all of the correct cipher by intercepting information, owing to the LMB known only to the sender and receiver. Consider that static secret keys may expose a quantum neural cryptosystem to a system forgery attack. Hence, in the second experiment, shown in Fig. 11, we introduce the "TCP congestion" solution to keep the neural network unstable and resist the attack. Specifically, the neural network is trained during the process of encryption; when the neural network tends to become stable, the method of learning rate adaption is invoked to obtain chaotic cipher. In Fig. 11(a), cipher1 obviously approximates the plaintext x(k) after 20 steps. At about 80 steps, the method of learning rate adaption is applied and unpredictable cipher1 is then generated. Similarly, the chaotic cipher2 shown in Fig. 11(b) can also be obtained.

Figure 12. The comparison between "run time with learning rate adaption" (RT) and "run time without learning rate adaption" (RT-N), i.e., the first experiment and the second experiment, where the dominant frequency of the running CPU is 3.70 GHz. From the 100th step to the 500th step, RT is always less than RT-N; for example, RT is less than RT-N by around 0.1 s at the 300th step, which demonstrates that introducing learning rate adaption can accelerate the process of encryption.
Fig. 11 demonstrates that learning rate adaption can indeed improve security and can also reduce the time of the encryption process (see Fig. 12). The third experimental results are used to analyze the relation between the learning rate and security, and we find that an overly large learning rate cannot correctly produce cipher effects. In Fig. 13, when the learning rate is large (e.g., greater than 2.0, referring to Table 1), cipher1 ĥ(k) := 0 (shown in Fig. 13(a)) and cipher2 Ê := x (shown in Fig. 13(b)), which are insensitive to the plaintext and cannot provide any information for decryption.
In these experiments, if the attacker wants to intercept the correct cipher1 and cipher2, he cannot have a corresponding quantum neural network cryptosystem and LMB for decryption, so brute force must be his optimal weapon 45 . Thus he needs to try 2^139 operators to guess each of ĥ(k) and E(k), and he is expected to match ĥ(k) and E(k) for 2^139 * 2^139 times to obtain the plaintext; the probability of correctly guessing the cipher is correspondingly negligible. Thus the classical information encrypted with our CV-QNN is intractable to crack according to the above discussion. In the other situation, when the inputs of the CV-QNN are continuous-variable quantum states, theoretically unconditional security can be derived from the quantum characteristics, the continuity of the continuous-variable quantum states, and the private key. Hence the security of our system can be ensured regardless of whether the information is classical or quantum. Besides, a decryption simulation with the configuration parameters of the second experiment, except for the method of learning rate adaption, is shown in Fig. 14, where the input plaintext and the decrypted plaintext are perfectly matched, which demonstrates that constructing a cryptosystem with the CV-QNN is effective.
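The counting argument above can be checked directly; Python's arbitrary-precision integers make the size of the search space explicit (the matching-attempt model follows the text, and the success probability is simply the reciprocal of the number of attempts):

```python
# Guessing a 139-bit ciphertext component: 2**139 possibilities each.
guesses_per_component = 2 ** 139

# Matching both h(k) and E(k): 2**139 * 2**139 = 2**278 attempts.
total_attempts = guesses_per_component * guesses_per_component
assert total_attempts == 2 ** 278

# Success probability of a single random guess: astronomically small.
p_success = 1 / float(total_attempts)
assert p_success < 1e-83
```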

Conclusions
An available and secure cryptography algorithm has been proposed, in which an extended cryptography model based on the CV-QNN is utilized to encrypt and decrypt data. Security and performance analysis shows that the cryptography algorithm can resist cipher eavesdropping, message replay, system forgery and chosen-plaintext attacks, guaranteeing information security while speeding up the encryption process. Moreover, the algorithm inherits the merits of quantum properties, and the experimental results simulated on the Strawberry Fields platform show that the scheme can correctly and effectively encrypt and decrypt both classical and quantum data. This represents a first attempt at combining the CV-QNN with quantum cryptography, and it inspires more potential applications of quantum neural networks on quantum devices, such as quantum key distribution (QKD) implemented via the synchronization of QNNs.