Abstract
Gate-based quantum computations are essential for realizing near-term quantum computer architectures. A gate-model quantum neural network (QNN) is a QNN implemented on a gate-model quantum computer, realized via a set of unitaries with associated gate parameters. Here, we define a training optimization procedure for gate-model QNNs. By deriving the environmental attributes of the gate-model quantum network, we prove the corresponding constraint-based learning models. We show that the optimal learning procedures differ depending on the directions in which side information is available, and on whether side information is accessible about the previous running sequences of the gate-model QNN. The results are particularly convenient for gate-model quantum computer implementations.
Introduction
Gate-based quantum computers represent an implementable way to realize experimental quantum computations on near-term quantum computer architectures^{1,2,3,4,5,6,7,8,9,10,11,12,13}. In a gate-model quantum computer, the transformations are realized by quantum gates, such that each quantum gate is represented by a unitary operation^{14,15,16,17,18,19,20,21,22,23,24,25,26}. An input quantum state is evolved through a sequence of unitary gates, and the output state is then assessed by a measurement operator^{14,15,16,17}. Focusing on gate-model quantum computer architectures is motivated by the successful demonstration of practical implementations of gate-model quantum computers^{7,8,9,10,11}, and several important developments for near-term gate-model quantum computations are currently in progress. Another important aspect is the application of gate-model quantum computations in the near-term quantum devices of the quantum Internet^{27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43}.
A quantum neural network (QNN) is formulated by a set of quantum operations and connections between the operations with a particular weight parameter^{14,25,26,44,45,46,47}. Gate-model QNNs refer to QNNs implemented on gate-model quantum computers^{14}. As a corollary, gate-model QNNs have a crucial experimental importance, since these network structures are realizable on near-term quantum computer architectures. The core of a gate-model QNN is a sequence of unitary operations. A gate-model QNN consists of a set of unitary operations and communication links that are used for the propagation of quantum and classical side information in the network for the related calculations of the learning procedure. The unitary transformations represent quantum gates parameterized by a variable referred to as the gate parameter (weight). The inputs of the gate-model QNN structure are a computational basis state and an auxiliary quantum system that serves as the readout state in the output measurement phase. Each input state is associated with a particular label. In the modeled learning problem, the training of the gate-model QNN aims to learn the values of the gate parameters associated with the unitaries so that the predicted label is close to the true label of the input (i.e., the difference between the predicted and true values is minimal). This problem therefore formulates an objective function that is subject to minimization. In this setting, the training of the gate-model QNN aims to learn the label of a general quantum state.
In artificial intelligence, machine learning^{4,5,6,19,23,45,46,48,49,50,51,52,53} utilizes statistical methods with measured data to achieve a desired value of an objective function associated with a particular problem. A learning machine is an abstract computational model for the learning procedures. A constraint machine is a learning machine that works with constraints, such that the constraints are characterized and defined by the actual environment^{48}.
The proposed model of a gate-model quantum neural network assumes that quantum information can be propagated only in the forward direction, from the input to the output, while classical side information is available via classical links. The classical side information is processed further via a post-processing unit after the measurement of the output. In the general gate-model QNN scenario, it is assumed that classical side information can be propagated arbitrarily in the network structure, and there is no available side information about the previous running sequences of the gate-model QNN structure. The situation changes if side information propagates only in the backward direction and side information about the previous running sequences of the network is also available. The resulting network model is called a gate-model recurrent quantum neural network (RQNN).
Here, we define a constraint-based training optimization method for gate-model QNNs and RQNNs, and derive their computational models from the attributes of the gate-model quantum network environment. We show that these structural distinctions lead to significantly different computational models and learning optimization. By using the constraint-based computational models of the QNNs, we prove that the optimal learning methods vary for the two networks, i.e., for non-recurrent and recurrent gate-model QNNs. Finally, we characterize optimal learning procedures for each variant of gate-model QNNs.
The novel contributions of our manuscript are as follows.

We study the computational models of non-recurrent and recurrent gate-model QNNs realized via an arbitrary number of unitaries.

We define learning methods for non-recurrent and recurrent gate-model QNNs.

We prove the optimal learning for non-recurrent and recurrent gate-model QNNs.
This paper is organized as follows. In Section 2, the related works are summarized. Section 3 defines the system model and the parameterization of the learning optimization problem. Section 4 proves the computational models of gatemodel QNNs. Section 5 provides learning optimization results. Finally, Section 6 concludes the paper. Supplemental information is included in the Appendix.
Related Works
Gate-model quantum computers
A theoretical background on the realizations of quantum computations in a gate-model quantum computer environment can be found in^{15} and^{16}. For a summary of the related references^{1,2,3,13,15,16,17,54,55}, we suggest^{56}.
Quantum neural networks
In^{14}, the formalism of a gate-model quantum neural network is defined. The gate-model quantum neural network is a quantum neural network implemented on a gate-model quantum computer. A particular problem analyzed by the authors is the classification of classical data sets that consist of bit strings with binary labels.
In^{44}, the authors studied the subject of quantum deep learning. As the authors found, the application of quantum computing can reduce the time required to train a deep restricted Boltzmann machine. The work also concluded that quantum computing provides a strong framework for deep learning, and that its application can lead to significant performance improvements in comparison to classical computing.
In^{45}, the authors defined a quantum generalization of feedforward neural networks. In the proposed system model, the classical neurons are generalized to be quantum reversible. As the authors showed, the defined quantum network can be trained efficiently using gradient descent to perform quantum generalizations of classical tasks.
In^{46}, the authors defined a model of a quantum neuron to perform machine learning tasks on quantum computers. The authors proposed a small quantum circuit to simulate neurons with threshold activation. As the authors found, the proposed quantum circuit realizes a "quantum neuron". The authors showed an application of the defined quantum neuron model in feedforward networks. The work concluded that the quantum neuron model can learn a function if trained with superpositions of inputs and the corresponding outputs. The proposed training method also suffices to learn the function on all individual inputs separately.
In^{25}, the authors studied the structure of artificial quantum neural networks. The work focused on the model of quantum neurons and studied the logical elements and tests of convolutional networks. The authors defined a model of an artificial neural network that uses quantum-mechanical particles as neurons, and set up a Monte Carlo integration method to simulate the proposed quantum-mechanical system. The work also studied the implementation of logical elements based on the introduced quantum particles, and the implementation of a simple convolutional network.
In^{26}, the authors defined the model of a universal quantum perceptron as an efficient unitary approximator. The authors studied the implementation of a quantum perceptron with a sigmoid activation function as a reversible many-body unitary operation. In the proposed system model, the response of the quantum perceptron is parameterized by the potential exerted by other neurons. The authors showed that the proposed quantum neural network model is a universal approximator of continuous functions, with at least the same power as classical neural networks.
Quantum machine learning
In^{57}, the authors analyzed a Markov process connected to a classical probabilistic algorithm^{58}. A performance evaluation was also included in the work to compare the quantum and classical algorithms.
In^{19}, the authors studied quantum algorithms for supervised and unsupervised machine learning. This particular work focuses on the problem of cluster assignment and cluster finding via quantum algorithms. As a main conclusion of the work, via the utilization of quantum computers and quantum machine learning, an exponential speedup can be reached over classical algorithms.
In^{20}, the authors defined a method for the analysis of an unknown quantum state. The authors showed that it is possible to perform "quantum principal component analysis" by creating quantum coherence among different copies, and the relevant attributes can be revealed exponentially faster than is possible with any existing algorithm.
In^{21}, the authors studied the application of a quantum support vector machine in Big Data classification. The authors showed that a quantum version of the support vector machine (an optimized binary classifier) can be implemented on a quantum computer. As the work concluded, the complexity of the quantum algorithm is only logarithmic in the size of the vectors and the number of training examples, which provides a significant advantage over classical support vector machines.
In^{22}, the problem of quantum-based analysis of big data sets is studied by the authors. As the authors concluded, the proposed quantum algorithms provide an exponential speedup over classical algorithms for topological data analysis.
The problem of quantum generative adversarial learning is studied in^{51}. In generative adversarial networks, a generator entity creates statistics for data that mimic those of a valid data set, and a discriminator unit distinguishes between valid and non-valid data. As a main conclusion of the work, a quantum computer allows us to realize quantum adversarial networks with an exponential advantage over classical adversarial networks.
In^{54}, superpolynomial and exponential improvements for quantum-enhanced reinforcement learning are studied.
In^{55}, the authors proposed strategies for quantum computing molecular energies using the unitary coupled cluster ansatz.
The authors of^{56} provided demonstrations of quantum advantage in machine learning problems.
In^{57}, the authors studied the subject of quantum speedup in machine learning. As a particular problem, the work focuses on finding Boolean functions for classification tasks.
System Model
Gate-model quantum neural network
Definition 1 A QNN_{QG} is a quantum neural network (QNN) implemented on a gate-model quantum computer with a quantum gate structure QG. It contains quantum links between the unitaries and classical links for the propagation of classical side information. In a QNN_{QG}, all quantum information propagates forward from the input to the output, while classical side information can propagate arbitrarily (forward and backward) in the network. In a QNN_{QG}, no side information is available about the previous running sequences of the structure.
Using the framework of^{14}, a QNN_{QG} is formulated by a collection of L unitary gates, such that the ith, i = 1, …, L, unitary gate U_{i}(θ_{i}) is

$${U}_{i}({\theta }_{i})=\exp (-i{\theta }_{i}P),$$
where P is a generalized Pauli operator formulated by a tensor product of Pauli operators {X, Y, Z}, while θ_{i} is referred to as the gate parameter associated with U_{i}(θ_{i}).
In a QNN_{QG}, a given unitary gate U_{i}(θ_{i}) acts sequentially on the output of the previous unitary gate U_{i−1}(θ_{i−1}), without any nonlinearities^{14}. The classical side information of a QNN_{QG} is used in calculations related to error derivation and gradient computations, such that side information can propagate arbitrarily in the network structure.
The sequential application of the L unitaries formulates a unitary operator \(U(\overrightarrow{\theta })\) as

$$U(\overrightarrow{\theta })={U}_{L}({\theta }_{L}){U}_{L-1}({\theta }_{L-1})\cdots {U}_{1}({\theta }_{1}),$$

where U_{i}(θ_{i}) identifies the ith unitary gate, and \(\overrightarrow{\theta }\) is the gate parameter vector

$$\overrightarrow{\theta }={({\theta }_{1},\ldots ,{\theta }_{L})}^{T}.$$
At (2), the evolution of the system of QNN_{QG} for a particular input system \(|\psi ,\varphi \rangle \) is

$$|Y\rangle =U(\overrightarrow{\theta })|\psi ,\varphi \rangle ,$$

where \(|Y\rangle \) is the (n + 1)-length output quantum system, and \(|\psi \rangle =|z\rangle \) is a computational basis state, where z is an n-length string

$$z={z}_{1}{z}_{2}\cdots {z}_{n},$$

where each z_{i} represents a classical bit with values

$${z}_{i}\in \{-1,1\},$$

while the (n + 1)th quantum state is initialized as

$$|\varphi \rangle =|1\rangle ,$$

and is referred to as the readout quantum state.
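For illustration, the input system and the sequential action of the unitaries can be sketched numerically. The Pauli strings, gate parameters, and the exponential gate form \(\exp (-i\theta P)\) used below are assumed example choices for the sketch, not part of the formal model; the input bits are indexed as 0/1 basis labels for convenience.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def basis_state(bits):
    """|z, 1>: the n-bit computational basis state joined with the readout qubit |1>."""
    vecs = [np.eye(2)[:, b] for b in list(bits) + [1]]
    return reduce(np.kron, vecs).astype(complex)

def pauli_string(ops):
    """Generalized Pauli operator P: a tensor product of single-qubit Paulis."""
    return reduce(np.kron, ops)

def gate(theta, P):
    """U(theta) = exp(-i theta P); for an involutory Pauli string P this equals
    cos(theta) I - i sin(theta) P (sign convention assumed for this sketch)."""
    return np.cos(theta) * np.eye(P.shape[0]) - 1j * np.sin(theta) * P

# Hypothetical 2-bit input string and two parameterized unitaries
psi = basis_state([0, 1])                    # |z, 1> with z = 01
layers = [(0.3, pauli_string([X, I2, Z])),   # (theta_1, P_1), hypothetical
          (1.1, pauli_string([Z, X, X]))]    # (theta_2, P_2), hypothetical
for theta, P in layers:                      # |Y> = U_2(theta_2) U_1(theta_1) |z, 1>
    psi = gate(theta, P) @ psi
assert np.isclose(np.linalg.norm(psi), 1.0)  # unitarity preserves the norm
```

The identity used in `gate` holds because any tensor product of Pauli matrices squares to the identity, which avoids a general matrix exponential.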
Objective function
The \(f(\overrightarrow{\theta })\) objective function subject to minimization is defined for a QNN_{QG} as

$$f(\overrightarrow{\theta })= {\mathcal L} ({x}_{0},\tilde{l}(z)),$$

where \( {\mathcal L} ({x}_{0},\tilde{l}(z))\) is the loss function^{14}, defined as

$$ {\mathcal L} ({x}_{0},\tilde{l}(z))=1-l(z)\tilde{l}(z),$$

where \(\tilde{l}(z)\) is the predicted value of the binary label

$$l(z)\in \{-1,1\}$$

of the string z, defined as^{14}

$$\tilde{l}(z)=\langle z,1|{U}^{\dagger }(\overrightarrow{\theta }){Y}_{n+1}U(\overrightarrow{\theta })|z,1\rangle ,$$

where Y_{n+1} ∈ {−1, 1} is a measured Pauli operator on the readout quantum state (7), while x_{0} is as

$${x}_{0}=|z,1\rangle .$$
The \(\tilde{l}\) predicted value in (11) is a real number between −1 and 1, while the label l(z) and the measured value Y_{n+1} are −1 or 1. Precisely, the \(\tilde{l}\) predicted value as given in (11) represents an average of several measurement outcomes if Y_{n+1} is measured via R output system instances \(|Y\rangle ^{(r)}\), r = 1, …, R^{14}.
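The averaging can be made concrete with a minimal sketch. The loss form \(1-l(z)\tilde{l}(z)\) is assumed here following the loss-function family of ref.^{14}, and the outcome distribution is hypothetical.

```python
import numpy as np

def predicted_label(outcomes):
    """tilde-l(z): the average of R measured readout outcomes, each in {-1, +1}."""
    return float(np.mean(outcomes))

def loss(true_label, l_tilde):
    """Sketch of the loss, assuming the form 1 - l(z) * tilde-l(z) of ref. 14."""
    return 1.0 - true_label * l_tilde

rng = np.random.default_rng(7)
R = 1000                                              # number of measurement rounds
outcomes = rng.choice([-1, 1], size=R, p=[0.1, 0.9])  # hypothetical readout statistics
l_tilde = predicted_label(outcomes)
assert -1.0 <= l_tilde <= 1.0                         # tilde-l lies in [-1, 1]
assert loss(+1, +1.0) == 0.0                          # perfect agreement: zero loss
assert loss(-1, +1.0) == 2.0                          # full disagreement: maximal loss
```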
The learning problem for a QNN_{QG} is, therefore, as follows. Given an \({{\mathscr{S}}}_{T}\) training set formulated via R input strings and labels

$${{\mathscr{S}}}_{T}=\{({z}^{(r)},l({z}^{(r)})):r=1,\ldots ,R\},$$

where r refers to the rth measurement round and R is the total number of measurement rounds, the goal is to find the gate parameters (3) of the L unitaries of the QNN_{QG} such that \(f(\overrightarrow{\theta })\) in (8) is minimal.
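The minimization over the gate parameters can be sketched with a generic gradient-descent loop. The surrogate objective, step size, and finite-difference gradient below are hypothetical stand-ins for the measured QNN objective, used only to illustrate the optimization structure.

```python
import numpy as np

def finite_diff_grad(f, theta, eps=1e-6):
    """Central-difference gradient of the objective f at the gate-parameter vector theta."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

def train(f, theta0, lr=0.1, steps=200):
    """Plain gradient descent on f(theta), standing in for the QNN training loop."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        theta -= lr * finite_diff_grad(f, theta)
    return theta

# Hypothetical smooth surrogate objective with minimum at theta = (0.5, -0.3)
f = lambda th: (th[0] - 0.5) ** 2 + (th[1] + 0.3) ** 2
theta_opt = train(f, [0.0, 0.0])
assert np.allclose(theta_opt, [0.5, -0.3], atol=1e-3)
```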
Recurrent gate-model quantum neural network
Definition 2 An RQNN_{QG} is a QNN implemented on a gate-model quantum computer with a quantum gate structure QG, such that the connections of the RQNN_{QG} form a directed graph along a sequence. It contains quantum links between the unitaries and classical links for the propagation of classical side information. In an RQNN_{QG}, all quantum information propagates forward, while classical side information can propagate only in the backward direction. In an RQNN_{QG}, side information is available about the previous running sequences of the structure.
The classical side information of an RQNN_{QG} is used in error derivation and gradient computations, such that side information can propagate only in the backward direction. Similar to the QNN_{QG} case, in an RQNN_{QG}, a given ith unitary U_{i}(θ_{i}) acts on the output of the previous unitary U_{i−1}(θ_{i−1}). Thus, the quantum evolution of the RQNN_{QG} contains no nonlinearities^{14}. It follows that, for an RQNN_{QG} network, the objective function can be defined similarly to (8). On the other hand, the structural differences between QNN_{QG} and RQNN_{QG} allow the characterization of different computational models for the description of the learning problem. The structural differences also lead to different optimal learning methods for the QNN_{QG} and RQNN_{QG} structures, as will be revealed in Section 4 and Section 5.
Comparative representation
For a simple graphical representation, the schematic models of a QNN_{QG} and an RQNN_{QG} for the (r − 1)th and rth measurement rounds are compared in Fig. 1. The (n + 1)-length input systems are depicted by \(|{\psi }_{r-1}\rangle |1\rangle \) and \(|{\psi }_{r}\rangle |1\rangle \), while the output systems are denoted by \(|{Y}_{r-1}\rangle \) and \(|{Y}_{r}\rangle \). The results of the M measurement operator in the (r − 1)th and rth measurement rounds are denoted by \({Y}_{n+1}^{(r-1)}\) and \({Y}_{n+1}^{(r)}\). In Fig. 1(a), the structure of a QNN_{QG} is depicted for the (r − 1)th and rth measurement rounds. In Fig. 1(b), the structure of an RQNN_{QG} is illustrated. In a QNN_{QG}, in a particular rth measurement round, no side information is available about the previous, (r − 1)th measurement round. For an RQNN_{QG}, side information about the (r − 1)th measurement round is available in a particular rth measurement round (depicted by the dashed gray arrows). The side information in the RQNN_{QG} setting refers to information about the gate parameters and the measurement results of the (r − 1)th measurement round.
Parameterization
Constraint machines
The tasks of machine learning can be modeled via its mathematical framework and the constraints of the environment^{4,5,6}. A \({\mathscr{C}}\) constraint machine is a learning machine working with constraints^{48}. A constraint machine can be formulated by a particular function f or via some elements of a functional space \( {\mathcal F} \). The constraints model the attributes of the environment of \({\mathscr{C}}\).
The learning problem of a \({\mathscr{C}}\) constraint machine can be represented via a \({\mathscr{G}}=(V,S)\) environmental graph^{48,59,60,61,62}. The \({\mathscr{G}}\) environmental graph is a directed acyclic graph (DAG), with a set V of vertexes and a set S of arcs. The vertexes of \({\mathscr{G}}\) model associated features, while the arcs between the vertexes describe the relations of the vertexes.
The \({\mathscr{G}}\) environmental graph formalizes factual knowledge via modeling the relations among the elements of the environment^{48}. In the environmental graph representation, the \({\mathscr{C}}\) constraint machine has to decide based on the information associated with the vertexes of the graph.
For any vertex v of V, a perceptual space element x and its identifier 〈x〉, which addresses x in the computational model, can be defined as a pair

$$(\langle x\rangle ,x),$$

where \(x\in {\mathscr{X}}\) is an element (vector) of the perceptual space \({\mathscr{X}}\subset {{\mathbb{C}}}^{d}\). For missing features, the ◊ symbol can be used. Therefore, \({\mathscr{X}}\) is initialized as \({{\mathscr{X}}}_{0}\),

$${{\mathscr{X}}}_{0}={\mathscr{X}}\cup \{\diamond \}.$$

The environment is populated by individuals, and the \( {\mathcal I} \) individual space is defined via V and \({{\mathscr{X}}}_{0}\) as

$$ {\mathcal I} =V\times {{\mathscr{X}}}_{0},$$
such that the existing features are associated with a subset \(\tilde{V}\) of V.
The features can be associated with the 〈x〉 identifier via a \({f}_{{\mathscr{P}}}\) perceptual map as
If the condition
holds, then \({f}_{{\mathscr{P}}}\) is yielded as
A given individual \(\iota \in {\mathcal I} \) is defined as a feature vector \(x\in {\mathscr{X}}\). An \(\iota \in {\mathcal I} \) individual of the individual space \( {\mathcal I} \) is defined as
where + is the sum operator in \({{\mathbb{C}}}^{d}\), ¬ is the negation operator, while Υ is a constraint as
where \({{\mathscr{X}}}_{0}\) is given in (15). Thus, from (20), an individual ι is a feature vector x of \({\mathscr{X}}\) or a vertex v of \({\mathscr{G}}\).
Let \({\iota }^{\ast }\in {\mathcal I} \) be a specific individual, and let f be an agent represented by the function \(f: {\mathcal I} \to {{\mathbb{C}}}^{n}\). Then, at a given environmental graph \({\mathscr{G}}\), the \({\mathscr{C}}\) constraint machine is defined via the function f as a machine in which learning and inference are represented via enforcing procedures on the constraints \({C}_{{\iota }^{\ast }}\) and C_{ι}. For a \({\mathscr{C}}\) constraint machine, the learning procedure requires the satisfaction of the constraints over all \({ {\mathcal I} }^{\ast }\), while in inference the satisfaction of the constraint is enforced over the given \({\iota }^{\ast }\in {\mathcal I} \)^{48}, by theory. Thus, \({\mathscr{C}}\) is defined in a formalized manner as

where \(\tilde{ {\mathcal I} }\) is a subset of \( {\mathcal I} \), ι^{*} refers to a specific individual, vertex or function, χ(⋅) is a compact constraint function, while v^{*} and f^{*}(ι^{*}) refer to the vertex and function at ι^{*}, respectively.
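The distinction between learning (constraint satisfaction over all individuals) and inference (satisfaction at a specific \({\iota }^{\ast }\)) can be illustrated with a toy example; the constraint χ, the individuals, and the agents below are hypothetical.

```python
def satisfies_all(chi, f, individuals, tol=1e-9):
    """Learning-side requirement: chi(iota, f(iota)) must vanish on every individual."""
    return all(abs(chi(i, f(i))) <= tol for i in individuals)

def satisfies_at(chi, f, iota_star, tol=1e-9):
    """Inference-side requirement: the constraint is enforced only at iota_star."""
    return abs(chi(iota_star, f(iota_star))) <= tol

# Toy individuals and a hypothetical constraint chi(x, y) = y - 2x,
# i.e. the agent f must realize y = 2x
individuals = [0.0, 1.0, 2.5]
chi = lambda x, y: y - 2.0 * x
f_good = lambda x: 2.0 * x             # satisfies the constraint everywhere
f_bad = lambda x: x + 1.0              # satisfies it only at x = 1
assert satisfies_all(chi, f_good, individuals)
assert not satisfies_all(chi, f_bad, individuals)
assert satisfies_at(chi, f_bad, 1.0)   # inference at iota* = 1 still succeeds
```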
Calculus of variations
Some elements from the calculus of variations^{63,64} are utilized in the learning optimization procedure.
Euler-Lagrange Equations: The Euler-Lagrange equations are second-order partial differential equations whose solutions are functions. These equations are useful in optimization problems, since a differentiable functional is stationary at its local maxima and minima, and the Euler-Lagrange equations characterize such stationary functions^{63}. As a corollary, they can also be used in machine learning problems.
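As a textbook illustration (a standard case, not specific to this paper), a differentiable functional \(J[f]=\int L(x,f,f^{\prime} )\,dx\) is stationary exactly at functions f satisfying the Euler-Lagrange equation:

```latex
\frac{\partial L}{\partial f}-\frac{d}{dx}\,\frac{\partial L}{\partial f'}=0 .
```

For instance, with \(L={(f^{\prime} )}^{2}\), the equation reduces to \(f^{\prime\prime} =0\), so the stationary functions are straight lines.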
Hessian Matrix: A Hessian matrix H is a square matrix of the second-order partial derivatives of a scalar-valued function, or scalar field^{63}. In theory, it describes the local curvature of a function of many variables. In a machine learning setting, it is a useful tool for deriving certain attributes and critical points of loss functions.
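A numerical sketch of the Hessian via central differences; the test function is a hypothetical quadratic whose Hessian is known in closed form.

```python
import numpy as np

def hessian(f, x, eps=1e-5):
    """Numerical Hessian of a scalar-valued f at x via central differences."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

# f(x, y) = x^2 + 3xy has the constant Hessian [[2, 3], [3, 0]]
f = lambda v: v[0] ** 2 + 3 * v[0] * v[1]
H = hessian(f, np.array([1.0, -2.0]))
assert np.allclose(H, [[2, 3], [3, 0]], atol=1e-4)
```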
Constraint-based Computational Model
In this section, we derive the computational models of the QNN_{QG} and RQNN_{QG} structures.
Environmental graph of a gate-model quantum neural network
Proposition 1 The \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}=(V,S)\) environmental graph of a QNN_{QG} is a DAG, where V is a set of vertexes, in our setting defined as

$$V={{\mathscr{S}}}_{in}\cup {\mathscr{U}}\cup {\mathscr{Y}},$$
where \({{\mathscr{S}}}_{in}\) is the input space, \({\mathscr{U}}\) is the space of unitaries, \({\mathscr{Y}}\) is the output space, and S is a set of arcs.
Let \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) be an environmental graph of QNN_{QG}, and let \({v}_{{U}_{i}}\) be a vertex, such that \({v}_{{U}_{i}}\in V\) is related to the unitary U_{i}(θ_{i}), where index i = 0 is associated with the \(|z,1\rangle \) input system with vertex v_{0}. Then, let \({v}_{{U}_{i}}\) and \({v}_{{U}_{j}}\) be connected vertices via a directed arc s_{ij}, s_{ij} ∈ S, such that a particular θ_{ij} gate parameter is associated with the forward-directed arc (note: the notation U_{j}(θ_{ij}) refers to the selection of θ_{j} for the unitary U_{j} to realize the operation U_{i}(θ_{i})U_{j}(θ_{j}), i.e., the application of U_{j}(θ_{j}) on the output of U_{i}(θ_{i}) at a particular gate parameter θ_{j}), as
such that arc s_{0j} is associated with θ_{0j} = θ_{j}.
Then a given state \({x}_{{U}_{i}({\theta }_{i})}\) of \({\mathscr{X}}\) associated with U_{i}(θ_{i}) is defined as
where \({v}_{{U}_{i}}\) is a label for unitary U_{i} in the environmental graph \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) (serves as an identifier in the computational structure of (25)), while parameter \({a}_{{U}_{i}({\theta }_{i})}\) is defined for a U_{i}(θ_{i}) as
where Ξ(i) refers to the parent set of \({v}_{{U}_{i}}\), U_{i}(θ_{hi}) refers to the selection of θ_{i} for unitary U_{i} for a particular input from U_{h}(θ_{h}), while \({b}_{{U}_{i}({\theta }_{i})}\) is the bias relative to \({v}_{{U}_{i}}\).
Applying an f_{∠} topological ordering function on \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) yields an ordered graph structure \({f}_{\angle }({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}})\) of the L unitaries. Thus, a given output \(|Y\rangle \) of QNN_{QG} can be rewritten in a compact form as
where the term \({x}_{0}\in {{\mathscr{S}}}_{in}\) is associated with the input system as defined in (12).
A particular state \({x}_{{U}_{l}({\theta }_{l})}\), l = 1, …, L, is evaluated as a function of \({x}_{{U}_{l-1}({\theta }_{l-1})}\) as
The environmental and ordered graphs of a gate-model quantum neural network are illustrated in Fig. 2. In Fig. 2(a), the \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) environmental graph of a QNN_{QG} is depicted, while the ordered graph \({f}_{\angle }({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}})\) is shown in Fig. 2(b).
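A minimal sketch of the ordered evaluation: once a topological ordering of the environmental graph is fixed, each vertex state is computed from its parent states. The graph, the linear update rule, and all parameters below are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical environmental graph: each key lists the parent set of the vertex
parents = {"U1": ["x0"], "U2": ["U1"], "U3": ["U1", "U2"]}
theta = {"U1": 0.5, "U2": -1.0, "U3": 2.0}   # hypothetical gate parameters
bias = {"U1": 0.1, "U2": 0.0, "U3": -0.2}    # hypothetical biases b_{U_i}

state = {"x0": 1.0}                           # input state x0
for v in TopologicalSorter(parents).static_order():
    if v in state:                            # skip the already-known input vertex
        continue
    # toy linear update: aggregate parent states, weight by the gate parameter, add bias
    state[v] = theta[v] * sum(state[p] for p in parents[v]) + bias[v]
assert abs(state["U3"] - (-0.2)) < 1e-12
```

`graphlib.TopologicalSorter` (Python 3.9+) emits parents before children, mirroring the role of the f_{∠} ordering function.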
Computational model of gate-model quantum neural networks
Theorem 1
The computational model of a QNN_{QG} is a \({\mathscr{C}}({\rm{Q}}N{N}_{QG})\) constraint machine with linear transition functions f_{T}(QNN_{QG}).
Proof. Let \({\mathscr{G}}({\rm{Q}}N{N}_{QG})=(V,S)\) be the environmental graph of a QNN_{QG}, and assume that the number of types of the vertexes is p. Then, the vertex set V can be expressed as a collection

$$V=\mathop{\cup }\limits_{i=1}^{p}{V}_{i},$$
where V_{i} identifies a set of vertexes and p is the total number of the V_{i} sets, such that \({V}_{i}\cap {V}_{j}=\varnothing \) whenever i ≠ j^{48}. For a vertex v ∈ V_{i} from set V_{i}, an \({f}_{T}:{{\mathbb{C}}}^{{{\rm{\dim }}}_{in}}\to {{\mathbb{C}}}^{{{\rm{\dim }}}_{out}}\) transition function^{48} can be defined as

where \({{\mathscr{X}}}_{{V}_{i}}\) is the perceptual space \({\mathscr{X}}\) of V_{i}; x_{v} is an element of \({{\mathscr{X}}}_{{V}_{i}}\), \({x}_{v}\in {{\mathscr{X}}}_{{V}_{i}}\), associated with a unitary U_{v}(θ_{v}); \({\mathscr{Z}}\) is the state space; \({{\mathscr{Z}}}_{{V}_{i}}\subset {{\mathbb{C}}}^{{\rm{\dim }}({{\mathscr{Z}}}_{{V}_{i}})}\) is the state space of V_{i}, where \({\rm{\dim }}({{\mathscr{Z}}}_{{V}_{i}})\) is the dimension of the space \({{\mathscr{Z}}}_{{V}_{i}}\); Γ(v) refers to the children set of v, and |Γ(v)| is the cardinality of the set Γ(v); \(\gamma \in {\mathscr{Z}}\) is a state variable in the state space \({\mathscr{Z}}\) that serves as side information to process the vertices v of V in \({\mathscr{G}}({{\rm{QNN}}}_{QG})\); while \({\gamma }_{{\rm{\Gamma }}(v)}\in {{\mathscr{Z}}}_{{V}_{i}}^{|{\rm{\Gamma }}(v)|}\subset {{\mathbb{C}}}^{|{\rm{\Gamma }}(v)|}\) and \({\gamma }_{{\rm{\Gamma }}(v)}=({\gamma }_{{\rm{\Gamma }}(v),1},\ldots ,{\gamma }_{{\rm{\Gamma }}(v),|{\rm{\Gamma }}(v)|})\), by theory^{48,62}. Thus, the f_{T} transition function in (30) is a complex-valued function that maps an input pair (γ, x) from the space \({\mathscr{X}}\times {\mathscr{Z}}\) to the state space \({\mathscr{Z}}\).
Similarly, for any V_{i}, an \({f}_{O}:{{\mathbb{C}}}^{{{\rm{\dim }}}_{in}}\to {{\mathbb{C}}}^{{{\rm{\dim }}}_{out}}\) output function^{48} can be defined as

where \({{\mathscr{Y}}}_{{V}_{i}}\) is the output space \({\mathscr{Y}}\) of V_{i}, and γ_{v} is a state variable associated with v, \({\gamma }_{v}\in {{\mathscr{Z}}}_{{V}_{i}}\), such that γ_{v} = γ_{0} if \({\rm{\Gamma }}(v)=\varnothing \). The f_{O} output function in (31) is therefore a complex-valued function that maps an input pair (γ, x) from the space \({\mathscr{X}}\times {\mathscr{Z}}\) to the output space \({\mathscr{Y}}\).
From (30) and (31), it follows that for any V_{i}, there exists the associated function pair ϕ(V_{i}) as
Let us specify the generalized functions of (30) and (31) for a QNN_{QG}.
Let \(U(\overrightarrow{\theta })\) of QNN_{QG} be defined as given in (2). Since in QNN_{QG}, a given ith unitary U_{i}(θ_{i}) acts on the output of the previous unitary U_{i−1}(θ_{i−1}), the network contains no nonlinearities^{14}. As a corollary, the state transition function f_{T}(QNN_{QG}) in (30) is also linear for a QNN_{QG}.
Let \(|{\gamma }_{v}\rangle \) be the quantum state associated with the γ_{v} state variable of a given v. Then, the constraints on the transition function and output function of a QNN_{QG} can be evaluated as follows.
Let f_{T}(QNN_{QG}) be the transition function of a QNN_{QG} defined for a given v ∈ V of \({\mathscr{G}}({{\rm{QNN}}}_{QG})\) via (30) as
The F_{O}(QNN_{QG}) output function of a QNN_{QG} for a given v of \({\mathscr{G}}({{\rm{QNN}}}_{QG})\) via (31) is
Since f_{T}(QNN_{QG}) in (33) and F_{O}(QNN_{QG}) in (34) correspond to the data-flow computational scheme of a QNN_{QG} with linear transition functions, (33) and (34) represent an expression of the constraints of a QNN_{QG}. These statements can be formulated in a compact form.
Let ζ_{v} be a constraint on f_{T}(QNN_{QG}) of QNN_{QG} as
Thus, the f_{T}(QNN_{QG}) transition function is constrained as
With respect to the output function, let φ_{v} be a constraint on F_{O}(QNN_{QG}) of QNN_{QG} as
where ∘ is the composition operator, such that \((f\circ g)(x)=f(g(x))\), and \({\wp }_{v}\) is another constraint such that \({\wp }_{v}({F}_{O}({{\rm{QNN}}}_{QG}))=0\).
Then let π_{v} be a compact constraint on f_{T}(QNN_{QG}) and F_{O}(QNN_{QG}) defined via constraints (35) and (37) as
Since a learning machine that enforces the constraint in (38) is, in fact, a constraint machine, the constraints (33) and (34), along with the compact constraint (38), define a \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) constraint machine for a QNN_{QG} with linear functions f_{T}(QNN_{QG}) and F_{O}(QNN_{QG}).■
Diffusion machine
Let \({\mathscr{C}}\) be a constraint machine with linear transition function f_{T}(γ_{Γ(v)}, x_{v}), and let γ_{v} be a state variable such that \(\forall \,v\in V\)
and let F_{O}(γ_{v}, x_{v}) be the output function of \({\mathscr{C}}\), such that ∀v ∈ V
where c_{v} is a constraint.
Then, the \({\mathscr{C}}\) constraint machine is a \({\mathscr{D}}\) diffusion machine^{48} if \({\mathscr{C}}\) enforces the constraint \({C}_{{\mathscr{D}}}\) as
Computational model of recurrent gate-model quantum neural networks
Theorem 2
The computational model of an RQNN_{QG} is a \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) diffusion machine with linear transition functions f_{T}(RQNN_{QG}).
Proof. Let \({\mathscr{C}}({{\rm{RQNN}}}_{QG})\) be the constraint machine of RQNN_{QG} with linear transition function f_{T}(RQNN_{QG}) = f_{T}(γ_{Γ(v)}, x_{v}). Using the \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) environmental graph, let Λ_{v} be a constraint on f_{T}(RQNN_{QG}) of RQNN_{QG}, v ∈ V as
where \(|{\gamma }_{v}\rangle \) is the quantum state associated with the γ_{v} state variable of a given v of the RQNN_{QG}. With respect to the output function F_{O}(RQNN_{QG}) = F_{O}(γ_{v}, x_{v}) of the RQNN_{QG}, let ω_{v} be a constraint on F_{O}(RQNN_{QG}) of the RQNN_{QG}, as
where Ω_{v} is another constraint as Ω_{v}(F_{O}(RQNN_{QG})) = 0.
Since RQNN_{QG} is a recurrent network, for all v ∈ V of \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\), a diffuse constraint λ(Q(x)) can be defined via constraints (42) and (43), as
where x = (x_{1}, …, x_{|V|}), and Q(x) = (Q(x_{1}), …, Q(x_{|V|})) is a function that maps all vertexes of \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\). Therefore, in the presence of (44), the relation

follows for an RQNN_{QG}, where \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) is the diffusion machine of the RQNN_{QG}. This is because a constraint machine \({\mathscr{C}}({{\rm{RQNN}}}_{QG})\) that satisfies (44) is, in fact, a diffusion machine \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\); see also (41).
In (42), the f_{T}(RQNN_{QG}) state transition function can be defined for a \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) via constraint (42) as
Then, let H_{t} be a unit vector for a unitary U_{t}(θ_{t}), t = 1, …, L − 1, defined as

$${H}_{t}={({x}_{t},{y}_{t})}^{T},$$

where x_{t} and y_{t} are real values.
Then, let Z_{t+1} be defined via \(U(\overrightarrow{\theta })\) and (47) as
where E is a basis vector matrix^{60}.
Then, rewriting \(U(\overrightarrow{\theta })\) as
where ϕ, φ are real parameters, allows us to evaluate \(U(\overrightarrow{\theta }){H}_{t}\) as
with
where H_{t+1} is normalized at unity, and function \({f}_{\sigma }^{{{\rm{RQNN}}}_{QG}}(\cdot )\) is defined as
where \({\Vert \cdot \Vert }_{1}\) is the L1-norm.
Since the RQNN_{QG} has a linear transition function, (52) is also linear, which allows us to rewrite it via the environmental graph representation for a particular (γ_{Γ(v)}, x_{v}), as
where f_{T}(γ_{Γ(v)}, x_{v}) is given in (50).
Thus, by setting t = ν, the term H_{t} can be rewritten via (50) and (52) as
Then, the Y_{t}(RQNN_{QG}) output of RQNN_{QG} is evaluated as
where W is an output matrix^{60}.
Then, let Γ(v) = L; therefore, for a particular objective function f(θ) of the RQNN_{QG}, the derivative \(\frac{df(\theta )}{d{x}_{\nu }}\) can be evaluated as
where
is a Jacobian matrix^{60}. For the norms, the relation
holds, where
The proof is concluded here.■
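As a numerical illustration of the linear transition with unit-normalized states used in the proof, the following sketch assumes that the normalization function \({f}_{\sigma }^{{{\rm{RQNN}}}_{QG}}(\cdot )\) simply rescales a vector by its L1-norm; the toy two-dimensional unitary and all variable names are illustrative, not the paper's exact construction:

```python
import numpy as np

def f_sigma(v):
    # Assumed normalization: rescale by the L1-norm so that the state
    # stays normalized to unity.
    return v / np.linalg.norm(v, ord=1)

def linear_transition(U, H_t):
    # Linear state transition H_{t+1} = f_sigma(U H_t), a sketch of (50)-(52).
    return f_sigma(U @ H_t)

# Toy 2x2 rotation standing in for a parameterized unitary (illustrative only).
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H_t = np.array([0.5, 0.5])          # L1-normalized unit vector
H_next = linear_transition(U, H_t)  # remains L1-normalized
```

The key property mirrored here is that the transition is linear up to the final rescaling, which is what permits the graph-based rewriting of (52).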
Optimal Learning
Gate-model quantum neural network
Theorem 3
Supervised learning is an optimal learning method for a \({\mathscr{C}}({{\rm{QNN}}}_{QG})\).
Proof. Let π_{v} be the compact constraint on f_{T}(QNN_{QG}) and F_{O}(QNN_{QG}) of \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) from (38), and let A be a constraint matrix. Then, (38) can be reformulated as
where b(x) is a smooth vector-valued function with compact support^{48}, \({f}^{\ast }: {\mathcal I} \to {{\mathbb{C}}}^{n}\),
is the compact function to be determined such that
The problem formulated via (60) can be rewritten as
It follows that the learning of the functions f_{T}(QNN_{QG}) and F_{O}(QNN_{QG}) of \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) can be reduced to the determination of the function f^{*}(x), a problem that is solvable via the Euler–Lagrange equations^{48,63,64}.
Then, let \({{\mathscr{S}}}_{L({\rm{QNN}})}\) be a nonempty supervised learning set defined as a collection
where (x_{κ}, y_{κ}), with y_{κ} = f^{*}(x_{κ}), is a supervised pair, and \(|{\mathscr{X}}|\) is the cardinality of the perceptual space \({\mathscr{X}}\) associated with \({{\mathscr{S}}}_{L({\rm{QNN}})}\).
Since \({{\mathscr{S}}}_{L({\rm{QNN}})}\) is a nonempty set, f^{*}(x) can be evaluated via the Euler–Lagrange equations^{48,63,64}, as
where A^{T} is the transpose of the constraint matrix A, and \(\ell \) is a differential operator as
where the c_{κ} are constants, and \({\nabla }^{2}\) is the Laplacian operator such that \({\nabla }^{2}f(x)={\sum }_{i}{\partial }_{i}^{2}f(x)\); while Υ is as
where \({\mathscr{G}}(\cdot )\) is the Green function of differential operator \(\ell \). Since function \({\mathscr{G}}(\,\cdot \,)\) is translation invariant, the relation
follows. Since the constraint that has to be satisfied over the perceptual space \({\mathscr{X}}\) is given in (62), the \( {\mathcal L} \) Lagrangian can be defined as
where 〈⋅,⋅〉 is the inner product operator, while P is defined via (66) as
where \({P}^{\dagger }\) is the adjoint of P, while λ(x) is the Lagrange multiplier as
where
and \(\ell b\) is as
Then, (65) can be rewritten using (71) and (73) as
where H(x) is as
and Φ is as
where I_{n} is an identity matrix.
Therefore, after some calculations, f^{*}(x) can be expressed as
where χ_{κ} is as
The compact constraint of \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) determined via (77) is optimal, since (77) is the optimal solution of the Euler–Lagrange equations.
The proof is concluded here.■
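The Green-function expansion of f^{*}(x) in (77) can be sketched numerically. The snippet below assumes, purely for illustration, a Gaussian kernel as the translation-invariant Green function \({\mathscr{G}}\) and solves for the χ_{κ} coefficients on a toy supervised set; it is a sketch of the representer-style solution, not the paper's exact differential operator:

```python
import numpy as np

def green(x, xk, sigma=1.0):
    # Hypothetical translation-invariant Green function G(x - x_k);
    # a Gaussian is assumed purely for illustration.
    return np.exp(-np.sum((x - xk) ** 2) / (2 * sigma ** 2))

def fit_chi(X, y):
    # Solve G @ chi = y so that f*(x_kappa) = y_kappa holds on the
    # supervised learning set S_L(QNN) of pairs (x_kappa, y_kappa).
    G = np.array([[green(xi, xj) for xj in X] for xi in X])
    return np.linalg.solve(G, y)

def f_star(x, X, chi):
    # Expansion f*(x) = sum_kappa chi_kappa * G(x - x_kappa), cf. (77).
    return sum(c * green(x, xk) for c, xk in zip(chi, X))

X = [np.array([0.0]), np.array([1.0]), np.array([2.0])]  # toy inputs
y = np.array([0.0, 1.0, 0.0])                            # toy labels
chi = fit_chi(X, y)
```

On the supervised pairs themselves, the fitted expansion reproduces the labels, mirroring the interpolation constraint of (62).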
Lemma 1
There exists a supervised learning for a \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) with complexity \({\mathscr{O}}(S)\), where S is the number of arcs (i.e., the number of gate parameters) of \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\).
Proof. Let \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) be the environmental graph of QNN_{QG}, such that QNN_{QG} is characterized via \(\overrightarrow{\theta }\) (see (3)).
The optimal supervised learning method of a \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) is derived through the utilization of the \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) environmental graph of QNN_{QG}, as follows.
The \({{\mathscr{A}}}_{{\mathscr{C}}({{\rm{QNN}}}_{QG})}\) learning process of \({\mathscr{C}}({{\rm{QNN}}}_{QG})\) in the \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) structure is given in Algorithm 1.
The optimality of Algorithm 1 arises from the fact that in Step 4, the gradient computation involves all the gate parameters of the QNN_{QG}, and the gate parameter updating procedure has a computational complexity of \({\mathscr{O}}(S)\). This complexity of QNN_{QG} follows from the gate parameter updating mechanism, which utilizes backpropagated classical side information for the learning method.
The proof is concluded here.■
Description and method validation
The detailed steps and validation of Algorithm 1 are as follows.
In Step 1, the number R of measurement rounds is set.
Step 2 is the quantum evolution phase of QNN_{QG}, which yields an output quantum system \(|Y\rangle \) via forward propagation of quantum information through the unitary sequence \(U(\overrightarrow{\theta })\) realized via the L unitaries. Then, a parameterization follows for each \({x}_{{U}_{i}({\theta }_{i})}\), and the terms \({W}_{{U}_{i}({\theta }_{i})}\) and \({Q}_{{U}_{i}({\theta }_{i})}\) are defined to characterize the θ_{i} angles of the U_{i}(θ_{i}) unitary operations in the QNN_{QG}.
In Step 3, side information initializations are made for the error computations. A given \({W}_{{U}_{i}({\theta }_{i})}\) is set as a cumulative quantity with respect to the parent set Ξ(i) of unitary U_{i}(θ_{i}) in QNN_{QG}.
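The forward side-information pass of Steps 2–3 can be sketched as follows; the identity activation and all container names are assumptions made for a minimal linear illustration of the cumulative quantities \({W}_{{U}_{i}({\theta }_{i})}\) and \({Q}_{{U}_{i}({\theta }_{i})}\):

```python
def forward_side_information(order, parents, theta, x):
    # order: topological ordering of the unitaries; parents[i] is the
    # parent set Xi(i); theta[(i, j)] is the gate parameter on the arc
    # from parent j to unitary i; x holds the inputs of source unitaries.
    W, Q = {}, {}
    for i in order:
        if not parents[i]:
            W[i] = x[i]  # input unitary with empty parent set
        else:
            # cumulative quantity over the parent set Xi(i), cf. (81)
            W[i] = sum(theta[(i, j)] * Q[j] for j in parents[i])
        Q[i] = W[i]      # identity activation assumed for this sketch
    return W, Q

# Toy chain U_1 -> U_2 -> U_3 (illustrative only).
W, Q = forward_side_information([1, 2, 3], {1: [], 2: [1], 3: [2]},
                                {(2, 1): 2.0, (3, 2): 0.5}, {1: 1.0})
```

Only forward steps over the arcs are needed here, consistent with the description of the error computations below.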
Note that (80) and (81) represent side information; thus, the gate parameter θ_{hi} is used to identify a particular unitary U(θ_{hi}).
Let \({{\mathscr{G}}^{\prime} }_{{{\rm{QNN}}}_{QG}}\) be the environmental graph of QNN_{QG} such that the directions of the quantum links are reversed. It can be verified that for a \({{\mathscr{G}}^{\prime} }_{{{\rm{QNN}}}_{QG}}\), \({\delta }_{{U}_{i}({\theta }_{i})}\) from (82) can be rewritten as
and \({\delta }_{{U}_{L}({\theta }_{L})}\) can be evaluated as given in (83)
while the term \({\delta }_{{U}_{i}({\theta }_{i})}{W}_{{U}_{j}({\theta }_{j})}\) for each U_{i}(θ_{i}) can be rewritten as
Since (85) and (86) are defined via the non-reversed \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\), the Γ children set is used for a given unitary. The utilization of the Ξ parent set with reversed link directions in \({{\mathscr{G}}^{\prime} }_{{{\rm{QNN}}}_{QG}}\) (see (89), (90), (91)) is therefore analogous to the use of the Γ children set with non-reversed link directions in \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\). This is because classical side information is available in arbitrary directions in \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\).
First, we consider the situation in which i = 1, …, L − 1; thus, the error calculations are associated with unitaries U_{1}(θ_{1}), …, U_{L−1}(θ_{L−1}), while the output unitary U_{L}(θ_{L}) is treated in the i = L case.
In \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\), the error quantity \({\delta }_{{U}_{i}({\theta }_{i})}\) associated with U_{i}(θ_{i}) is determined, where \({W}_{{U}_{L}({\theta }_{L})}\) is associated with the output unitary U_{L}(θ_{L}). Only forward steps are required to yield \({W}_{{U}_{L}({\theta }_{L})}\) and \({Q}_{{U}_{L}({\theta }_{L})}\). Then, utilizing the chain rule and the children set Γ(i) of a particular unitary U_{i}(θ_{i}), the term \(d{W}_{{U}_{L}({\theta }_{L})}/d{Q}_{{U}_{i}({\theta }_{i})}\) in \({\delta }_{{U}_{i}({\theta }_{i})}\) can be rewritten as \(\frac{d{W}_{{U}_{L}({\theta }_{L})}}{d{Q}_{{U}_{i}({\theta }_{i})}}={\sum }_{h\in {\rm{\Gamma }}(i)}\frac{d{W}_{{U}_{L}({\theta }_{L})}}{d{Q}_{{U}_{h}({\theta }_{h})}}\frac{d{Q}_{{U}_{h}({\theta }_{h})}}{d{W}_{{U}_{i}({\theta }_{i})}}\frac{d{W}_{{U}_{i}({\theta }_{i})}}{d{Q}_{{U}_{i}({\theta }_{i})}}\). In fact, this term equals \({Q}_{{U}_{i}({\theta }_{i})}{\sum }_{h\in {\rm{\Gamma }}(i)}{\theta }_{hi}{\delta }_{{U}_{h}({\theta }_{h})}\), where \({\delta }_{{U}_{h}({\theta }_{h})}\) is the error associated with a U_{h}(θ_{h}), such that U_{h}(θ_{h}) is a child unitary of U_{i}(θ_{i}). The \({\delta }_{{U}_{h}({\theta }_{h})}\) error quantity associated with a child unitary U_{h}(θ_{h}) of U_{i}(θ_{i}) can be determined in the same manner, which yields \({\delta }_{{U}_{h}({\theta }_{h})}=d{W}_{{U}_{L}({\theta }_{L})}/d{Q}_{{U}_{h}({\theta }_{h})}\). It follows that utilizing side information in \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\) allows us to determine \({\delta }_{{U}_{i}({\theta }_{i})}\) via the \( {\mathcal L} (\,\cdot \,)\) loss function and the Γ(i) children set of unitary U_{i}(θ_{i}), which yields the quantity given in (82).
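The chain-rule computation of the \({\delta }_{{U}_{i}({\theta }_{i})}\) error quantities described above can be sketched as a backward sweep over the children sets; the concrete form δ_{i} = Q_{i} Σ_{h∈Γ(i)} θ_{hi} δ_{h} follows the text, while all container names are illustrative:

```python
def backward_errors(order, children, theta, Q, delta_L, L):
    # delta[i] stands for dW_{U_L}/dQ_{U_i}, computed backwards from
    # the output unitary U_L via the children sets Gamma(i), cf. (82).
    delta = {L: delta_L}
    for i in reversed(order):
        if i == L:
            continue
        # delta_i = Q_i * sum_{h in Gamma(i)} theta_{h,i} * delta_h
        delta[i] = Q[i] * sum(theta[(h, i)] * delta[h] for h in children[i])
    return delta

# Toy chain U_1 -> U_2 -> U_3 with delta_L = 1 (illustrative only).
delta = backward_errors([1, 2, 3], {1: [2], 2: [3], 3: []},
                        {(2, 1): 2.0, (3, 2): 0.5},
                        {1: 1.0, 2: 2.0, 3: 1.0}, 1.0, 3)
```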
The situation differs if the error computations are made with respect to the output system, i.e., for the Lth unitary U_{L}(θ_{L}). In this case, the utilization of the loss function \( {\mathcal L} ({x}_{0},\tilde{l}(z))\) allows us to use the simplified formula \({\delta }_{{U}_{L}({\theta }_{L})}=d {\mathcal L} ({x}_{0},\tilde{l}(z))/d{Q}_{{U}_{L}({\theta }_{L})}\), as given in (83). Taking the derivative \(\frac{d {\mathcal L} ({x}_{0},\tilde{l}(z))}{d{\theta }_{ij}}\) of the loss function \( {\mathcal L} ({x}_{0},\tilde{l}(z))\) with respect to the angle θ_{ij} yields \(\frac{d {\mathcal L} ({x}_{0},\tilde{l}(z))}{d{Q}_{{U}_{i}({\theta }_{i})}}\frac{d{Q}_{{U}_{i}({\theta }_{i})}}{d{\theta }_{ij}}\), which, in fact, equals \({\delta }_{{U}_{i}({\theta }_{i})}{W}_{{U}_{j}({\theta }_{j})}\).
In Step 4, the quantities defined in the previous steps are utilized in the QNN_{QG} for the error calculations. The errors are evaluated and updated in a backpropagated manner from unitary U_{L}(θ_{L}) to U_{1}(θ_{1}). Since only side information is required, these steps can be achieved via a \({\rm{P}}({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}})\) post-processing (along with Step 3). First, a gate parameter modification vector \(\overrightarrow{{\rm{\Delta }}}\theta \) is defined, such that its ith element, \(\overrightarrow{{\rm{\Delta }}}{\theta }_{i}\), is associated with the modification of the θ_{i} gate parameter of an ith unitary U_{i}(θ_{i}).
The ith element \(\overrightarrow{{\rm{\Delta }}}{\theta }_{i}\) is initialized as \(\overrightarrow{{\rm{\Delta }}}{\theta }_{i}={W}_{{U}_{i}({\theta }_{i})}\). If \(\overrightarrow{{\rm{\Delta }}}{\theta }_{i}\) equals 1, then no modification is required in the θ_{i} gate parameter of U_{i}(θ_{i}). In this case, the \({\delta }_{{U}_{i}({\theta }_{i})}\) error quantity of U_{i}(θ_{i}) can be determined via a simple summation over the children set of U_{i}(θ_{i}), as \({\delta }_{{U}_{i}({\theta }_{i})}={\sum }_{j\in {\rm{\Gamma }}(i)}{\theta ^{\prime} }_{ij}{\delta }_{{U}_{j}({\theta }_{j})}\), where U_{j}(θ_{j}) is a child of U_{i}(θ_{i}), as given in (85). On the other hand, if \(\overrightarrow{{\rm{\Delta }}}{\theta }_{i}\ne 1\), then the θ_{i} gate parameter of U_{i}(θ_{i}) requires a modification. In this case, the summation \({\sum }_{j\in \Gamma (i)}{\theta }_{ij}{\delta }_{{U}_{j}({\theta }_{j})}\) has to be weighted by the actual \(\overrightarrow{{\rm{\Delta }}}{\theta }_{i}\) to yield \({\delta }_{{U}_{i}({\theta }_{i})}\). This case is given in (86).
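The two update cases of (85) and (86) can be written as a single backward loop; the branch on the modification entry \(\overrightarrow{{\rm{\Delta }}}{\theta }_{z}\) follows the prose above, and the variable names are illustrative:

```python
def backward_error_update(L, children, theta_p, delta_L, dtheta):
    # Backward update of the delta quantities from U_{L-1} down to U_1:
    # if dtheta[z] == 1 the plain summation over Gamma(z) is used (85);
    # otherwise the summation is weighted by dtheta[z] (86).
    delta = {L: delta_L}
    for z in range(L - 1, 0, -1):
        s = sum(theta_p[(z, j)] * delta[j] for j in children[z])
        delta[z] = s if dtheta[z] == 1 else dtheta[z] * s
    return delta

# Toy chain U_1 -> U_2 -> U_3; only U_1 needs a modification (illustrative).
delta = backward_error_update(3, {1: [2], 2: [3]},
                              {(1, 2): 2.0, (2, 3): 0.5},
                              1.0, {1: 2.0, 2: 1})
```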
According to the update mechanism of (84–86), for z = L − 1, …, 1, the errors are updated via (88) as follows. At z = L and \(\overrightarrow{{\rm{\Delta }}}{\theta }_{z}=1\), \({\delta }_{{U}_{z}({\theta }_{z})}\) is as
while at \(\overrightarrow{{\rm{\Delta }}}{\theta }_{z}\ne 1\), \({\delta }_{{U}_{z}({\theta }_{z})}\) is updated as
For z = L − 1, …, 1, if \(\overrightarrow{{\rm{\Delta }}}{\theta }_{z}=1\), then \({\delta }_{{U}_{z}({\theta }_{z})}\) is as
while, if \(\overrightarrow{{\rm{\Delta }}}{\theta }_{z}\ne 1\), then
In Step 5, for a given unitary U_{i}(θ_{i}), i = 2, …, L, and for its parent U_{j}(θ_{j}), the \({g}_{{U}_{i}({\theta }_{i}),{U}_{j}({\theta }_{j})}\) gradient is computed via the \({\delta }_{{U}_{i}({\theta }_{i})}\) error quantity derived from (85–86) for U_{i}(θ_{i}), and via the \({W}_{{U}_{j}({\theta }_{j})}\) quantity associated with the parent U_{j}(θ_{j}). (For U_{1}(θ_{1}) the parent set Ξ(1) is empty, thus i > 1.) The computation of \({g}_{{U}_{i}({\theta }_{i}),{U}_{j}({\theta }_{j})}\) is performed for all parents U_{j}(θ_{j}) of U_{i}(θ_{i}); thus, (87) is determined for ∀j, j ∈ Ξ(i). By the chain rule,
Since for i = L, \({\delta }_{{U}_{L}({\theta }_{L})}\) is as given in (83), the gradient can be rewritten via (91) as
Finally, Step 6 utilizes the number R of measurements to extend the results to all measurement rounds, r = 1, …, R. Note that in each round a measurement operator is applied; for simplicity, it is omitted from the description.
Since the algorithm requires no reversed quantum links (i.e., no \({{\mathscr{G}}^{\prime} }_{{{\rm{QNN}}}_{QG}}\)) for the computations of (85–86), the gradient of the loss in (87) with respect to the gate parameter can be determined in an optimal way for QNN_{QG} networks by utilizing side information in \({{\mathscr{G}}}_{{{\rm{QNN}}}_{QG}}\).
The steps and quantities of the learning procedure (Algorithm 1) of a QNN_{QG} are illustrated in Fig. 3. The QNN_{QG} network realizes the unitary \(U(\overrightarrow{\theta })\). The quantum information is propagated through quantum links (solid lines) between the unitaries, while the auxiliary classical information is propagated via classical links in the network (dashed lines). An ith node is represented via unitary U_{i}(θ_{i}).
For an ith unitary U_{i}(θ_{i}), the parameters \({W}_{{U}_{i}({\theta }_{i})}\), \({Q}_{{U}_{i}({\theta }_{i})}\) and, for i < L, \({\delta }_{{U}_{i}({\theta }_{i})}=d{W}_{{U}_{L}({\theta }_{L})}/d{Q}_{{U}_{i}({\theta }_{i})}\) are computed, where \({W}_{{U}_{L}({\theta }_{L})}={\sum }_{j\in \Xi (L)}{\theta }_{Lj}{V}_{{U}_{j}({\theta }_{j})}\). For the output unitary, \({\delta }_{{U}_{L}({\theta }_{L})}=d {\mathcal L} ({x}_{0},\tilde{l}(z))/d{Q}_{{U}_{L}({\theta }_{L})}\). The parameters \({W}_{{U}_{i}({\theta }_{i})}\) and \({Q}_{{U}_{i}({\theta }_{i})}\) are determined via forward propagation of side information, while the \({\delta }_{{U}_{i}({\theta }_{i})}\) quantities are evaluated via backward propagation of side information. Finally, the gradients \({g}_{{U}_{i}({\theta }_{i}),{U}_{j}({\theta }_{j})}={\delta }_{{U}_{i}({\theta }_{i})}{W}_{{U}_{j}({\theta }_{j})}\) are computed.
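The final gradient step, \({g}_{{U}_{i}({\theta }_{i}),{U}_{j}({\theta }_{j})}={\delta }_{{U}_{i}({\theta }_{i})}{W}_{{U}_{j}({\theta }_{j})}\), reduces to one product per arc; a minimal sketch over the parent sets follows (all names illustrative):

```python
def gradients(parents, delta, W):
    # g_{U_i, U_j} = delta_i * W_j for every parent j in Xi(i); the
    # parent set of U_1 is empty, so no gradient is emitted for i = 1.
    return {(i, j): delta[i] * W[j]
            for i in delta if i in parents
            for j in parents[i]}

# Toy chain U_1 -> U_2 -> U_3 (illustrative values).
g = gradients({1: [], 2: [1], 3: [2]},
              {1: 2.0, 2: 1.0, 3: 1.0},
              {1: 1.0, 2: 2.0, 3: 1.0})
```

One product per arc means the updating cost scales with the number S of arcs, consistent with the \({\mathscr{O}}(S)\) complexity of Lemma 1.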
Recurrent gate-model quantum neural network
In classical neural networks, backpropagation^{59,60,61} (backward propagation of errors) is a supervised learning method that allows one to determine the gradients needed to learn the weights in the network. In this section, we show that for a recurrent gate-model QNN, a backpropagation method is optimal.
Theorem 4
A backpropagation in \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) is an optimal learning in the sense of gradient descent.
Proof. In an RQNN_{QG}, the backward classical links provide feedback side information for the forward propagation of quantum information in multiple measurement rounds. The backpropagated side information is analogous to feedback loops, i.e., to recurrent cycles over time. The aim of the learning method is to optimize the gate parameters of the unitaries of the RQNN_{QG} quantum network via supervised learning, using the side information available from the previous k = 1, …, r − 1 measurement rounds at a particular measurement round r.
Let \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) be the environmental graph of RQNN_{QG}, and f_{T}(RQNN_{QG}) be the transition function of an RQNN_{QG}. Then the γ_{v} constraint is defined via \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) as
while the constraint Ω_{v} on the output F(γ_{v}, x_{v}) of RQNN_{QG} is defined via ω_{v} = 0 as^{48,61,62}
Utilizing the structure of the \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) environmental graph allows us to define a modified version of the backpropagation through time algorithm^{59} for the RQNN_{QG}.
The learning of \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) with constraints (42), (43), and (44) is given in Algorithm 2, depicted as \({{\mathscr{A}}}_{{\mathscr{D}}({{\rm{RQNN}}}_{QG})}\).
As a corollary, the training of \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) can be reduced to a backpropagation method via the environmental graph of RQNN_{QG}.■
Description and method validation
The detailed steps and validation of Algorithm 2 are as follows.
In Step 1, the number R of measurement rounds is set for RQNN_{QG}. For each measurement round, the initialization steps (100) and (101) are performed.
Step 2 provides the quantum evolution phase of RQNN_{QG}, and produces the output quantum system \(|{Y}_{r}\rangle \) (102) via forward propagation of quantum information through the unitary sequence \(U({\overrightarrow{\theta }}_{r})\) of the L unitaries.
Step 3 initializes the P^{(r)}(RQNN_{QG}) post-processing method via the definition of (105) for the gradient computations. In (106), the quantity \({{\rm{\Phi }}}_{r}={z}^{(r)}+U({\overrightarrow{\theta }}_{r-1})+{B}_{r}\) connects the side information of the rth measurement round with the side information of the (r − 1)th measurement round, where \(U({\overrightarrow{\theta }}_{r-1})\) is the unitary sequence of the (r − 1)th round, and B_{r} is a bias of the current measurement round. The quantity ξ_{r,k} = dΦ_{r}/dΦ_{k} in (107) utilizes the Φ_{i} quantities (see (106)) of the ith measurement rounds, such that i = k + 1, …, r, where k < r.
Step 4 determines the g_{r} loss function gradient of the rth measurement round. In (108), the g_{r} gradient is determined as \({\sum }_{k=1}^{r}\frac{d {\mathcal L} ({x}_{\mathrm{0,}r},\tilde{l}({z}^{(r)}))}{d{{\rm{\Phi }}}_{r}}\frac{d{{\rm{\Phi }}}_{r}}{d{{\rm{\Phi }}}_{k}}\frac{d{\tilde{{\rm{\Phi }}}}_{k}}{d{\mathscr{S}}({\overrightarrow{\theta }}_{r})}\), i.e., via the utilization of the side information of the k = 1, …, r measurement rounds at a particular r.
In Step 5, the gate parameters are updated via the gradient descent rule^{59} by utilizing the gradients of the k = 1, …, r measurement rounds at a particular r. Since in (111) all the gate parameters of the L unitaries are updated by ω_{r} as given in (112), for a particular unitary U_{i}(θ_{r,i}), the gate parameter is updated via \({\overrightarrow{\alpha }}_{r}\) (114) to θ_{r+1,i} as
Finally, Step 6 outputs the G final gradient of the total R measurement rounds in (116), as a summation of the g_{r} gradients (108) determined in the r = 1, …, R rounds.
The steps of the learning method of an RQNN_{QG} (Algorithm 2) are illustrated in Fig. 4. The \({\overrightarrow{\theta }}_{r}\) gate parameters of the unitaries of the unitary sequence \(U({\overrightarrow{\theta }}_{r})\) are set as \({\overrightarrow{\theta }}_{r}={\overrightarrow{\theta }}_{r-1}-{\omega }_{r-1},\) where \({\overrightarrow{\theta }}_{r-1}\) is the gate parameter vector associated with sequence \(U({\overrightarrow{\theta }}_{r-1})\), while α_{r−1,i} = ω_{r−1} is the gate parameter modification coefficient, and \({\omega }_{r-1}=\frac{\lambda }{r-1}{\sum }_{k=1}^{r-1}\frac{d {\mathcal L} ({x}_{\mathrm{0,}k},\tilde{l}({z}^{(k)}))}{d{\mathscr{S}}({\overrightarrow{\theta }}_{k})}\).
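The round-by-round gate-parameter update, in which \({\omega }_{r-1}\) averages the loss gradients of the earlier measurement rounds, can be illustrated as follows; λ and the scalar gradients are assumed toy values:

```python
def update_gate_parameters(theta_r, round_gradients, lam):
    # omega_r = (lam / r) * sum of the loss gradients of rounds k = 1..r;
    # every gate parameter is then shifted by omega_r (gradient descent).
    r = len(round_gradients)
    omega_r = (lam / r) * sum(round_gradients)
    return [t - omega_r for t in theta_r]

# Two earlier measurement rounds with toy scalar gradients (illustrative).
theta_next = update_gate_parameters([1.0, 2.0], [0.5, 1.5], lam=1.0)
```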
Closed-form error evaluation
Lemma 2
The δ quantity of the unitaries of a \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) can be expressed in a closed form via the \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) environmental graph of RQNN_{QG}.
Proof. Let \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) be the environmental graph of RQNN_{QG}, such that RQNN_{QG} is characterized via \(\overrightarrow{\theta }\) (see (3)). Utilizing the structure \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) of RQNN_{QG} allows us to express the square error in a closed form as follows.
Let Y and Z refer to the output realizations \(|Y\rangle \) and \(|Z\rangle \) of RQNN_{QG}, \({\mathscr{Y}}\in \{Y,Z\}\), with an output set \({\mathscr{Y}}\), and let \( {\mathcal L} ({x}_{0},\tilde{l}(z))\) be the loss function. Then, let \({{\bf{H}}}_{{{\rm{RQNN}}}_{QG}}\) be a Hessian matrix^{48} of the RQNN_{QG} structure, with a generic coordinate \({\hslash }_{ij,lm}^{{{\rm{RQNN}}}_{QG}}\), as
where \({W}_{{U}_{i}({\theta }_{i})}\) is given in (81), f_{i∠m}(⋅) is a topological ordering function on \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\), the indices Y and Q are associated with the output realizations \(|Y\rangle \) and \(|Q\rangle \), while \({({\delta }_{{U}_{l}({\theta }_{l}),{U}_{i}({\theta }_{i})}^{Q})}^{2}\) is the square error between unitaries U_{l}(θ_{l}) and U_{i}(θ_{i}) at a particular output \(|Q\rangle \) as
where \({Q}_{{U}_{i}({\theta }_{i})}\) is as in (80). Note that the relation \({({\delta }_{{U}_{l}({\theta }_{l}),{U}_{i}({\theta }_{i})}^{Q})}^{2}\ne 0\) in (119) holds only if there is an edge s_{il} between \({v}_{{U}_{i}}\in V\) and \({v}_{{U}_{l}({\theta }_{l})}\in V\) in the environmental graph \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) of RQNN_{QG}. Thus,
Since \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\) contains all information for the computation of (119) and \({\mathscr{D}}({{\rm{RQNN}}}_{QG})\) is defined through the structure of \({{\mathscr{G}}}_{{{\rm{RQNN}}}_{QG}}\), the proof is concluded here.■
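The sparsity stated in the proof, namely that \({({\delta }_{{U}_{l}({\theta }_{l}),{U}_{i}({\theta }_{i})}^{Q})}^{2}\) is nonzero only where an edge s_{il} exists in the environmental graph, can be mirrored in a short sketch (the edge set and error values are illustrative):

```python
def squared_errors(edges, raw_errors):
    # The square error (delta^Q_{l,i})^2 is kept only if the edge s_{il}
    # exists in the environmental graph; otherwise the entry is zero.
    return {(l, i): (d ** 2 if (i, l) in edges else 0.0)
            for (l, i), d in raw_errors.items()}

# One existing edge s_{12}; the (3, 1) pair has no edge (illustrative).
se = squared_errors({(1, 2)}, {(2, 1): 0.5, (3, 1): 0.7})
```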
Conclusions
Gate-model QNNs allow an experimental implementation on near-term gate-model quantum computer architectures. Here, we examined the problem of learning optimization of gate-model QNNs. We defined the constraint-based computational models of these quantum networks and proved the optimal learning methods. We revealed that the computational models are different for non-recurrent and recurrent gate-model quantum networks. We proved that for non-recurrent and recurrent gate-model QNNs, the optimal learning is a supervised learning. We showed that for a recurrent gate-model QNN, the learning can be reduced to backpropagation. The results are particularly useful for the training of QNNs on near-term quantum computers.
References
Preskill, J. Quantum Computing in the NISQ era and beyond. Quantum 2, 79 (2018).
Harrow, A. W. & Montanaro, A. Quantum Computational Supremacy. Nature 549, 203–209 (2017).
Aaronson, S. & Chen, L. Complexity-theoretic foundations of quantum supremacy experiments. Proceedings of the 32nd Computational Complexity Conference, CCC '17, 22:1–22:67 (2017).
Biamonte, J. et al. Quantum Machine Learning. Nature 549, 195–202 (2017).
LeCun, Y., Bengio, Y. & Hinton, G. Deep Learning. Nature 521, 436–444 (2015).
Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning. MIT Press. Cambridge, MA (2016).
Debnath, S. et al. Demonstration of a small programmable quantum computer with atomic qubits. Nature 536, 63–66 (2016).
Monz, T. et al. Realization of a scalable Shor algorithm. Science 351, 1068–1070 (2016).
Barends, R. et al. Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature 508, 500–503 (2014).
Kielpinski, D., Monroe, C. & Wineland, D. J. Architecture for a large-scale ion-trap quantum computer. Nature 417, 709–711 (2002).
Ofek, N. et al. Extending the lifetime of a quantum bit with error correction in superconducting circuits. Nature 536, 441–445 (2016).
IBM. A new way of thinking: The IBM quantum experience. URL, http://www.research.ibm.com/quantum (2017).
Brandao, F. G. S. L., Broughton, M., Farhi, E., Gutmann, S. & Neven, H. For Fixed Control Parameters the Quantum Approximate Optimization Algorithm's Objective Function Value Concentrates for Typical Instances. arXiv:1812.04170 (2018).
Farhi, E. & Neven, H. Classification with Quantum Neural Networks on Near Term Processors. arXiv:1802.06002v1 (2018).
Farhi, E., Goldstone, J., Gutmann, S. & Neven, H. Quantum Algorithms for Fixed Qubit Architectures. arXiv:1703.06199v1 (2017).
Farhi, E., Goldstone, J. & Gutmann, S. A Quantum Approximate Optimization Algorithm. arXiv:1411.4028 (2014).
Farhi, E., Goldstone, J. & Gutmann, S. A Quantum Approximate Optimization Algorithm Applied to a Bounded Occurrence Constraint Problem. arXiv:1412.6062 (2014).
Lloyd, S. The Universe as Quantum Computer, A Computable Universe: Understanding and exploring Nature as computation, H. Zenil ed., World Scientific, Singapore, 2012, arXiv:1312.4455v1 (2013).
Lloyd, S., Mohseni, M. & Rebentrost, P. Quantum algorithms for supervised and unsupervised machine learning. arXiv:1307.0411v2 (2013).
Lloyd, S., Mohseni, M. & Rebentrost, P. Quantum principal component analysis. Nature Physics 10, 631 (2014).
Rebentrost, P., Mohseni, M. & Lloyd, S. Quantum Support Vector Machine for Big Data Classification. Phys. Rev. Lett. 113 (2014).
Lloyd, S., Garnerone, S. & Zanardi, P. Quantum algorithms for topological and geometric analysis of data. Nat. Commun. 7, arXiv:1408.3106 (2016).
Schuld, M., Sinayskiy, I. & Petruccione, F. An introduction to quantum machine learning. Contemporary Physics 56, pp. 172–185, arXiv:1409.3097 (2015).
Imre, S. & Gyongyosi, L. Advanced Quantum Communications: An Engineering Approach. Wiley-IEEE Press (New Jersey, USA) (2012).
Dorozhinsky, V. I. & Pavlovsky, O. V. Artificial Quantum Neural Network: quantum neurons, logical elements and tests of convolutional nets, arXiv:1806.09664 (2018).
Torrontegui, E. & GarciaRipoll, J. J. Universal quantum perceptron as efficient unitary approximators, arXiv:1801.00934 (2018).
Lloyd, S. et al. Infrastructure for the quantum Internet. ACM SIGCOMM Computer Communication Review 34, 9–20 (2004).
Gyongyosi, L., Imre, S. & Nguyen, H. V. A Survey on Quantum Channel Capacities. IEEE Communications Surveys and Tutorials 99, 1, https://doi.org/10.1109/COMST.2017.2786748 (2018).
Van Meter, R. Quantum Networking, John Wiley and Sons Ltd, ISBN 1118648927, 9781118648926 (2014).
Gyongyosi, L. & Imre, S. Multilayer Optimization for the Quantum Internet. Scientific Reports, Nature, https://doi.org/10.1038/s41598-018-30957-x (2018).
Gyongyosi, L. & Imre, S. Entanglement Availability Differentiation Service for the Quantum Internet. Scientific Reports, Nature, https://doi.org/10.1038/s41598-018-28801-3 (2018).
Gyongyosi, L. & Imre, S. Entanglement-Gradient Routing for Quantum Networks. Scientific Reports, Nature, https://doi.org/10.1038/s41598-017-14394-w (2017).
Gyongyosi, L. & Imre, S. Decentralized Base-Graph Routing for the Quantum Internet. Physical Review A, American Physical Society, https://doi.org/10.1103/PhysRevA.98.022310 (2018).
Pirandola, S., Laurenza, R., Ottaviani, C. & Banchi, L. Fundamental limits of repeaterless quantum communications, Nature Communications, 15043, https://doi.org/10.1038/ncomms15043 (2017).
Pirandola, S. et al. Theory of channel simulation and bounds for private communication. Quantum Sci. Technol. 3, 035009 (2018).
Laurenza, R. & Pirandola, S. General bounds for senderreceiver capacities in multipoint quantum communications. Phys. Rev. A 96, 032318 (2017).
Pirandola, S. Capacities of repeater-assisted quantum communications. arXiv:1601.00966 (2016).
Pirandola, S. Endtoend capacities of a quantum communication network. Commun. Phys. 2, 51 (2019).
Cacciapuoti, A. S. et al. Quantum Internet: Networking Challenges in Distributed Quantum Computing. arXiv:1810.08421 (2018).
Shor, P. W. Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A 52, R2493–R2496 (1995).
Petz, D. Quantum Information Theory and Quantum Statistics. Springer-Verlag, Heidelberg (2008).
Bacsardi, L. On the Way to Quantum-Based Satellite Communication. IEEE Comm. Mag. 51(08), 50–55 (2013).
Gyongyosi, L. & Imre, S. A Survey on Quantum Computing Technology. Computer Science Review, Elsevier, https://doi.org/10.1016/j.cosrev.2018.11.002, ISSN: 1574-0137 (2018).
Wiebe, N., Kapoor, A. & Svore, K. M. Quantum Deep Learning. arXiv:1412.3489 (2015).
Wan, K. H. et al. Quantum generalisation of feedforward neural networks. npj Quantum Information 3, 36, arXiv:1612.01045 (2017).
Cao, Y., Giacomo Guerreschi, G. & Aspuru-Guzik, A. Quantum Neuron: an elementary building block for machine learning on quantum computers. arXiv:1711.11240 (2017).
Lloyd, S. & Weedbrook, C. Quantum generative adversarial learning. Phys. Rev. Lett. 121, arXiv:1804.09139 (2018).
Gori, M. Machine Learning: A ConstraintBased Approach, ISBN: 9780081006597, Elsevier (2018).
Hyland, S. L. & Ratsch, G. Learning Unitary Operators with Help From u(n). arXiv:1607.04903 (2016).
Dunjko, V. et al. Superpolynomial and exponential improvements for quantumenhanced reinforcement learning. arXiv: 1710.11160 (2017).
Romero, J. et al. Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz. arXiv: 1701.02691 (2017).
Riste, D. et al. Demonstration of quantum advantage in machine learning. arXiv:1512.06069 (2015).
Yoo, S. et al. A quantum speedup in machine learning: finding an Nbit Boolean function for a classification. New Journal of Physics 16(10), 103014 (2014).
Farhi, E. & Harrow, A. W. Quantum Supremacy through the Quantum Approximate Optimization Algorithm. arXiv:1602.07674 (2016).
Crooks, G. E. Performance of the Quantum Approximate Optimization Algorithm on the Maximum Cut Problem. arXiv:1811.08419 (2018).
Gyongyosi, L. & Imre, S. Dense Quantum Measurement Theory. Scientific Reports, Nature, https://doi.org/10.1038/s41598-019-43250-2 (2019).
Farhi, E., Kimmel, S. & Temme, K. A Quantum Version of Schöning's Algorithm Applied to Quantum 2-SAT. arXiv:1603.06985 (2016).
Schöning, U. A probabilistic algorithm for k-SAT and constraint satisfaction problems. Foundations of Computer Science, 1999. 40th Annual Symposium on, pp. 410–414. IEEE (1999).
Salehinejad, H., Sankar, S., Barfett, J., Colak, E. & Valaee, S. Recent Advances in Recurrent Neural Networks. arXiv:1801.01078v3 (2018).
Arjovsky, M., Shah, A. & Bengio, Y. Unitary Evolution Recurrent Neural Networks. arXiv:1511.06464 (2015).
Goller, C. & Küchler, A. Learning task-dependent distributed representations by backpropagation through structure. Proc. of the ICNN'96, pp. 347–352, Bochum, Germany, IEEE (1996).
Baldan, P., Corradini, A. & König, B. Unfolding Graph Transformation Systems: Theory and Applications to Verification. In: Degano, P., De Nicola, R. & Meseguer, J. (eds) Concurrency, Graphs and Models. Lecture Notes in Computer Science, vol 5065. Springer, Berlin, Heidelberg (2008).
Roubicek, T. Calculus of variations. Mathematical Tools for Physicists. (Ed. Grinfeld, M.) J. Wiley, Weinheim, ISBN 9783527411887, pp. 551–588 (2014).
Binmore, K. & Davies, J. Calculus Concepts and Methods. Cambridge University Press. p. 190. ISBN 9780521775410. OCLC 717598615. (2007).
Acknowledgements
The research reported in this paper has been supported by the National Research, Development and Innovation Fund (TUDFO/51757/2019-ITM, Thematic Excellence Program). This work was partially supported by the National Research, Development and Innovation Office of Hungary (Project No. 2017-1.2.1-NKP-2017-00001), by the Hungarian Scientific Research Fund (OTKA K-112125), and in part by the BME Artificial Intelligence FIKP grant of EMMI (BME FIKP-MI/SC).
Author information
Contributions
L.GY. designed the protocol and wrote the manuscript. L.GY. and S.I. analyzed the results. All authors reviewed the manuscript.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gyongyosi, L., Imre, S. Training Optimization for Gate-Model Quantum Neural Networks. Sci Rep 9, 12679 (2019). https://doi.org/10.1038/s41598-019-48892-w