Introduction

Quantum computers1,2,3,4,5,6,7,8,9,10 utilize the fundamentals of quantum mechanics to perform computations11,12,13,14,15,16,17,18,19. For experimental gate-model quantum computer architectures and the near-term quantum devices of the quantum Internet20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60, gate-based architectures provide an implementable solution to realize quantum computations2,3,4,9,10,23,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85. In a gate-model quantum computer the operations are realized via a sequence of quantum gates, and each quantum gate represents a unitary transformation10,23,62,63,64,65,66,67,68,69,70,71,72,86,87,88,89,90,91. The input of a quantum computer is a quantum system realized via several quantum states, and the unitaries of the quantum computer change the initial system state into a specific state9,10,62,63. The output quantum system is then measured by a measurement array.

A computational problem fed into a quantum computer defines an objective function with a particular connectivity (computational pathway)10. Solving this computational problem on the quantum computer involves identifying an objective function value that reaches a pre-determined target value. To achieve the target objective function value, the quantum computer must reach a particular system state such that the gate parameters of the unitary operations satisfy the target value. These optimal gate parameter values of the unitary operations of the quantum computer identify the optimal state of the quantum computer. This optimal system state is referred to as the target system state of the quantum computer. Finding the target system state involves multiple measurement rounds and iterations, with high-cost system state preparations (note that the term "quantum state preparation" in the current context refers to a quantum state determination method, since the aim of the proposed procedure is to determine an optimal state of the quantum computer, i.e., the optimal values of the gate parameters of the unitaries of the quantum computer; see also10), quantum computations, and measurement procedures. Therefore, optimizing the determination procedure of the target system state is essential for gate-model quantum computers.

Here, we define a method for state determination and computational path evaluation for gate-model quantum computers. The aim of state determination is to find a target system state for a quantum computer such that the pre-determined target objective function value is reached. The aim of the computational path evaluation is to find the connectivity of the objective function in the target system state on the fixed hardware architecture10 of the quantum computer. To resolve these issues, we define a framework that utilizes the theory of kernel methods92,93,94,95,96,97,98,99,100,101,102 and high-dimensional Hilbert spaces. In traditional theoretical computer science, kernel methods represent a useful and low computational-cost tool in statistical learning, signal processing theory and machine learning. We prove that these methods can also be utilized in gate-model quantum computations for particular problems.

The novel contributions of our manuscript are as follows:

  1. We define a method for optimal quantum state determination and computational path evaluation for near-term quantum computers.

  2. The proposed state determination method finds a target system state for a quantum computer at a given target objective function value.

  3. The computational pathway evaluation finds the connectivity of the objective function in the target system state on the fixed hardware architecture of the quantum computer.

  4. The proposed solution evolves the target system state of the quantum computer without requiring the preparation of intermediate system states between the initial and target states of the quantum computer.

  5. The method avoids high-cost system state preparations, expensive running procedures and measurement rounds in gate-model quantum computers.

  6. The results are useful for gate-model quantum computers and the near-term quantum devices of the quantum Internet.

This paper is organized as follows. In Section 1, related works are summarized. Section 2 presents the problem statement. Section 3 discusses the results. Finally, Section 4 concludes the paper. Supplemental information is included in the Appendix.

Related Works

The related works are summarized as follows.

Gate-model quantum computers

The model of gate-model quantum computer architectures and the construction of algorithms for qubit architectures are studied in10. The proposed system model of the work also serves as a reference for our system model. Some related preliminaries can also be found in62,63.

In9, the authors defined a gate-model quantum neural network. The proposed system model is a quantum neural network realized via a gate-model quantum computer.

In61, the authors studied a gate-model quantum algorithm called the “Quantum Approximate Optimization Algorithm” (QAOA) and its connection with the Sherrington-Kirkpatrick (SK)103 model. The results serve as a framework for analyzing the QAOA, and can be used for evaluating the performance of QAOA on more general problems.

The behavior of the objective function value of the QAOA algorithm for some specific cases has been studied in74. As the authors concluded, for some fixed parameters and instances drawn from a particular distribution, the objective function value is concentrated such that typical instances have almost the same value of the objective function.

Further performance analyses of the QAOA algorithm can be found in76,77. Practical implementations connected to gate-model quantum computing and the QAOA algorithm can be found in78,79.

In104, the authors studied hybrid quantum-computing-based solution methods for large-scale discrete-continuous optimization problems. The results are straightforwardly applicable to gate-model quantum computers. As the authors concluded, the proposed quantum computing methods achieve high computational efficiency in terms of solution quality and computation time by utilizing the unique features of both classical and quantum computers.

A recent experimental quantum computer implementation has been demonstrated in1. The results of the work confirmed the quantum supremacy2,3 of quantum computers over traditional computers in particular problems.

The work of4 gives a summary on quantum computing technologies in the NISQ (Noisy Intermediate-Scale Quantum) era and beyond.

Quantum state preparation

In105, the authors studied the utilization of reinforcement learning in different phases of quantum control. They analyzed the performance of reinforcement learning on the problem of finding short, high-fidelity driving protocols from an initial state to a target state in non-integrable many-body quantum systems of interacting qubits. As the authors concluded, the performance of the proposed reinforcement learning method is comparable to that of optimal control methods.

In106, the authors studied the question of efficient variational simulation of non-trivial quantum states. The results represent an efficient and general route for preparing non-trivial quantum states that are not adiabatically connected to unentangled product states. The system model integrates a feedback loop between a quantum simulator and a classical computer. As the authors concluded, the proposed results are experimentally realizable on near-term quantum devices of synthetic quantum systems.

In107, the problem of simulated quantum computation of molecular energies is studied. While the calculation time for the energy of atoms and molecules scales exponentially with system size on a traditional computer, it scales polynomially on a quantum computer. The authors demonstrated that such chemical problems can be solved via quantum algorithms using modest numbers of qubits.

In108, the authors studied the modeling and feedback control design for quantum state preparation. The work describes the modeling methods of controlled quantum systems under continuous observation, and studies the design of feedback controls that prepare particular quantum states. In the proposed analysis, the field-theoretic model is subjected to statistical inference and is ultimately controlled.

For an information theoretical analysis of quantum optimal control, see109. In this work, the authors studied quantum optimal control problems and their solution methods. The authors showed that if an efficient classical representation of the dynamics exists, then optimal control problems on many-body quantum systems can be solved efficiently with finite precision. As the authors concluded, the size of the parameter space necessary to solve quantum optimal control problems defined on pure states, mixed states, and unitaries is polynomially bounded by the size of the set of states reachable in polynomial time.

In110, the authors studied the complexity of controlling quantum many-body dynamics. As the authors found, arbitrary time evolutions of many-body quantum systems can be reversed even in cases when only part of the Hamiltonian can be controlled. The authors also determined a lower bound on the control complexity of a many-body quantum dynamics for some particular cases.

System Model and Problem Statement

System model

Let QG be the quantum gate structure of a gate-model quantum computer, defined with L unitary gates, where an i-th, i = 1, …, L unitary gate \({U}_{i}\left({\theta }_{i}\right)\) is

$${U}_{i}\left({\theta }_{i}\right)=\exp \left(-i{\theta }_{i}{P}_{i}\right),$$
(1)

where Pi is a generalized Pauli operator formulated by the tensor product of Pauli operators \(\left\{X,Y,Z\right\}\), while θi is the gate parameter associated with \({U}_{i}\left({\theta }_{i}\right)\).

The L unitary gates formulate a system state \(| \vec{\theta }\rangle \) of the quantum computer, as

$$| \vec{\theta }\rangle ={U}_{L}\left({\theta }_{L}\right){U}_{L-1}\left({\theta }_{L-1}\right)\ldots {U}_{1}\left({\theta }_{1}\right),$$
(2)

where \({U}_{i}\left({\theta }_{i}\right)\) identifies an i-th unitary gate and \(\vec{\theta }\) is the collection of the gate parameters of the unitaries, defined as

$$\vec{\theta }={\left({\theta }_{1},\ldots ,{\theta }_{L}\right)}^{T}.$$
(3)

The system state in (2) identifies a unitary \(U(\vec{\theta })\) resulting from the product of the L unitary operations \({U}_{L}\left({\theta }_{L}\right){U}_{L-1}\left({\theta }_{L-1}\right)\ldots {U}_{1}\left({\theta }_{1}\right)\) of the quantum computer. For an input quantum system \(| \varphi \rangle \), the output quantum system \(| \psi \rangle \) of QG is

$$\begin{array}{ll}| \psi \rangle & =\ | \vec{\theta }\rangle | \varphi \rangle \\ & =\ U(\vec{\theta })| \varphi \rangle \\ & =\ {U}_{L}\left({\theta }_{L}\right){U}_{L-1}\left({\theta }_{L-1}\right)\ldots {U}_{1}\left({\theta }_{1}\right)| \varphi \rangle .\end{array}$$
(4)
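
To make the gate structure in (1)-(4) concrete, the following sketch (Python/NumPy) builds \(U(\vec{\theta })\) as a product of exponentiated generalized Pauli operators and applies it to an input state. The two-qubit register, the generator labels and the parameter values are illustrative assumptions of the example, not taken from this work.

    # A minimal sketch of Eqs. (1)-(4); generator labels such as "ZZ" are
    # hypothetical choices for illustration only.
    import numpy as np
    from functools import reduce

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

    def generalized_pauli(label):
        """Tensor product of single-qubit Paulis, e.g. 'ZZ' -> Z (x) Z."""
        return reduce(np.kron, (PAULI[c] for c in label))

    def unitary_gate(theta, label):
        """U_i(theta_i) = exp(-i theta_i P_i), via the eigendecomposition of P_i."""
        P = generalized_pauli(label)
        w, V = np.linalg.eigh(P)
        return V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T

    def system_unitary(thetas, labels):
        """U(theta) = U_L(theta_L) ... U_1(theta_1), applied from right to left."""
        dim = 2 ** len(labels[0])
        U = np.eye(dim, dtype=complex)
        for theta, label in zip(thetas, labels):
            U = unitary_gate(theta, label) @ U
        return U

    # |psi> = U(theta)|phi> for a two-qubit input |phi> = |00>
    labels = ["ZZ", "XI", "IX"]        # hypothetical generators P_1, P_2, P_3
    thetas = [0.3, 1.1, 0.7]           # gate parameters theta_1, theta_2, theta_3
    phi = np.zeros(4, dtype=complex)
    phi[0] = 1.0
    psi = system_unitary(thetas, labels) @ phi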

The \(f(\vec{\theta })\) objective function subject to a maximization is defined as

$$f(\vec{\theta })=\langle \vec{\theta }| C(z)| \vec{\theta }\rangle ,$$
(5)

where \(C\left(z\right)\) identifies a classical objective function10 of a computational problem, while z is a bitstring resulting from a measurement M.

The C classical objective function represents the objective function of a computational problem \({\mathscr{P}}\) fed into the quantum computer. The C objective function is subject to maximization via the quantum computer. Examples of such objective functions are the objective functions of combinatorial optimization problems9 and of large-scale programming problems104, such as the graph coloring problem, the molecular conformation problem, the job-shop scheduling problem, the manufacturing cell formation problem, and the vehicle routing problem104.
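
In practice, the expectation value in (5) can be estimated by repeatedly measuring the output system and averaging the classical objective over the observed bitstrings. The sketch below assumes an illustrative edge-wise objective of MaxCut type; the particular \({C}_{ij}\left(z\right)\), the graph, and the shot count are assumptions of the example and not part of this work.

    # A hedged sketch of Eq. (5): estimate f(theta) = <theta|C(z)|theta> by
    # sampling bitstrings z from the output state and averaging an edge-wise
    # classical objective C(z) = sum_{ij} C_ij(z). The MaxCut-style C_ij is
    # an illustrative choice only.
    import numpy as np

    def sample_bitstrings(psi, shots, rng):
        """Sample measurement outcomes z with probability |<z|psi>|^2."""
        probs = np.abs(psi) ** 2
        probs /= probs.sum()
        n = int(np.log2(len(psi)))
        idx = rng.choice(len(psi), size=shots, p=probs)
        return [format(int(i), f"0{n}b") for i in idx]

    def objective_C(z, edges):
        """C(z) = sum over edges (i, j) of C_ij(z); here C_ij = 1 if z_i != z_j."""
        return sum(1.0 for (i, j) in edges if z[i] != z[j])

    def estimate_f(psi, edges, shots=2000, seed=0):
        rng = np.random.default_rng(seed)
        samples = sample_bitstrings(psi, shots, rng)
        return float(np.mean([objective_C(z, edges) for z in samples]))

    # Usage with the two-qubit |psi> from the previous sketch and a single edge:
    # f_est = estimate_f(psi, edges=[(0, 1)])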

At a target value \({f}^{\ast }(\vec{\theta })\),

$${f}^{\ast }(\vec{\theta })=f(\vec{{\theta }^{\ast }})=\langle \vec{{\theta }^{\ast }}| {C}^{\ast }(z)| \vec{{\theta }^{\ast }}\rangle ,$$
(6)

the problems are therefore to find a \(\vec{{\theta }^{\ast }}\) that reaches the target state \(| \vec{{\theta }^{\ast }}\rangle \) of the quantum computer and to identify the optimal \({C}^{\ast }\left(z\right)\) computational pathway for \(| \vec{{\theta }^{\ast }}\rangle \).

Definition 1.

(Computational pathway). The connectivity of \(C\left(z\right)\) defines a computational pathway as the sum of \({C}_{ij}\left(z\right)\) objective function values evaluated between quantum states ij in the QG structure:

$$C\left(z\right)=\sum _{ij\in QG}{C}_{ij}\left(z\right).$$
(7)

The \(C\left(z\right)\) computational pathway between quantum states ij sets the connectivity of the objective function in a given state \(| \vec{\theta }\rangle \) of the quantum computer.

Definition 2.

(Optimal computational pathway). The \({C}^{\ast }\left(z\right)\) optimal computational pathway of the quantum computer is the computational pathway associated with the optimal (target) state \(| \vec{{\theta }^{\ast }}\rangle \). The \({C}^{\ast }\left(z\right)\) computational pathway sets the connectivity of the objective function in the target state \(| \vec{{\theta }^{\ast }}\rangle \) of the quantum computer.

Definition 3.

(Connectivity graph of the quantum hardware). The \({\mathscr{G}}=\left(V,S\right)\) connectivity graph refers to the fixed connectivity of the hardware of the QG quantum gate structure, where the nodes \(v\in V\) are quantum systems, while the edges \(s\in S\) are the connections between them. An edge si,j with index pair \(\left(i,j\right)\) identifies a physical connection between quantum systems vi and vj.

Problem statement

The problem statement is given in Problems 1 and 2, as follows.

Problem 1.

(Target state determination of the quantum computer). For a given target objective function value \(f(\vec{{\theta }^{\ast }})\), find the \(| \vec{{\theta }^{\ast }}\rangle \) target state of the quantum computer from an initial state \(| {\vec{\theta }}_{0}\rangle \) and an initial objective function \(f({\vec{\theta }}_{0})\).

Problem 2.

(Computational path of the quantum computer in the target state). Determine the connectivity of the objective function \({C}^{\ast }\left(z\right)\) of \(f(\vec{{\theta }^{\ast }})\) for the target quantum state \(| \vec{{\theta }^{\ast }}\rangle \) of the quantum computer.

Our solutions for Problems 1 and 2 are proposed in Theorems 1 and 2 and in Lemma 1.

Results

Evaluation of the target state of the quantum computer

Theorem 1.

(Target system state evaluation). The \(| \vec{{\theta }^{\ast }}\rangle \) system state associated with the \(f(\vec{{\theta }^{\ast }})\) target objective function can be evaluated from an initial state \(| {\vec{\theta }}_{0}\rangle \) via a decomposition of the initial objective function \(f({\vec{\theta }}_{0})\).

Proof.

Let \(f({\vec{\theta }}_{0})\) be the initial objective function value associated with \(| {\vec{\theta }}_{0}\rangle \) and with gate parameters \({\vec{\theta }}_{0}\). The \(f({\vec{\theta }}_{0})\) value can be rewritten as

$$f({\vec{\theta }}_{0})={({\vec{\theta }}_{0})}^{T}\chi ,$$
(8)

where χ is a vector of regression coefficients being evaluated via a \({\mathscr{K}}\) kernel machine (see (33)), while \({\vec{\theta }}_{0}\) is decomposed as

$${\vec{\theta }}_{0}=F({\vec{\theta }}_{0})+F\left(U\right),$$
(9)

where \(F({\vec{\theta }}_{0})\) and \(F\left(U\right)\) are orthogonal components, such that \(F({\vec{\theta }}_{0})\) depends on the actual objective function value, while \(F\left(U\right)\) is a component independent from the current value of the objective function (i.e., \(F\left(U\right)\) is a fixed component for an arbitrary \(\vec{\theta }\)) that lies in the null space. Since \({\vec{\theta }}_{0}\) and \(f({\vec{\theta }}_{0})\) are known, the χ regression coefficient vector can be determined from (8).

Using (9), the initial objective function in (8) can be rewritten at a particular χ as

$$f({\vec{\theta }}_{0})={(F({\vec{\theta }}_{0})+F(U))}^{T}\chi ,$$
(10)

where the \(F({\vec{\theta }}_{0})\) component is evaluated at a given χ as

$$F({\vec{\theta }}_{0})={\chi }^{+}f({\vec{\theta }}_{0}),$$
(11)

where + is the Moore–Penrose pseudoinverse92,102. Since \(F\left(U\right)\) has no dependence on the actual system state, it can be expressed from (9) and (11) as

$$F\left(U\right)={\vec{\theta }}_{0}-F({\vec{\theta }}_{0}).$$
(12)

Then, let \(\vec{{\theta }^{\ast }}\) be the parameter vector associated with the target state \(| \vec{{\theta }^{\ast }}\rangle \) of the target objective function \(f(\vec{{\theta }^{\ast }})\).

Applying the same decomposition steps for the target \(f(\vec{{\theta }^{\ast }})\), the component \(F(\vec{{\theta }^{\ast }})\) at a given χ is

$$F(\vec{{\theta }^{\ast }})={\chi }^{+}f(\vec{{\theta }^{\ast }}).$$
(13)

Therefore, the target vector \(\vec{{\theta }^{\ast }}\) can be rewritten via (13) and (12) as

$$\vec{{\theta }^{\ast }}=F(\vec{{\theta }^{\ast }})+F(U)={\vec{\theta }}_{0}+({\chi }^{+}f(\vec{{\theta }^{\ast }})-{\chi }^{+}f({\vec{\theta }}_{0})).$$
(14)

Using the \(\vec{{\theta }^{\ast }}\) gate parameters in (14), the target system state \(| \vec{{\theta }^{\ast }}\rangle \) can be built up to achieve the target objective function \(f(\vec{{\theta }^{\ast }})\). The target system state \(| \vec{{\theta }^{\ast }}\rangle \) of a given \(f(\vec{{\theta }^{\ast }})\) is therefore evolvable from the initial values \({\vec{\theta }}_{0}\), \(f({\vec{\theta }}_{0})\), and χ that can be computed from (8).

Algorithm 1 summarizes the steps of the target system state evolution method. ■

Algorithm 1. System state evolution of the quantum computer for a target objective function.
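
As a complement to Algorithm 1, the following sketch illustrates the decomposition steps (8)-(14), assuming the χ regression coefficient vector has already been obtained (e.g., via the kernel machine in (33)); all numerical values are illustrative assumptions.

    # A minimal sketch of Theorem 1 / Algorithm 1 (Eqs. (8)-(14)); chi is
    # assumed to be given, and the numbers below are purely illustrative.
    import numpy as np

    def target_gate_parameters(theta0, f0, f_target, chi):
        """theta* = theta0 + chi^+ (f(theta*) - f(theta0)), Eq. (14)."""
        chi = np.asarray(chi, dtype=float).reshape(-1, 1)    # column vector
        chi_pinv = np.linalg.pinv(chi)                       # Moore-Penrose pseudoinverse, Eq. (11)
        F_theta0 = (chi_pinv * f0).ravel()                   # F(theta_0) = chi^+ f(theta_0)
        F_U = np.asarray(theta0, dtype=float) - F_theta0     # Eq. (12): state-independent component
        F_target = (chi_pinv * f_target).ravel()             # Eq. (13)
        return F_target + F_U                                # Eq. (14)

    # Illustrative usage with hypothetical values:
    theta0 = np.array([0.3, 1.1, 0.7])    # initial gate parameters
    chi = np.array([0.5, -0.2, 0.8])      # regression coefficients, Eq. (33)
    f0 = float(theta0 @ chi)              # initial objective value, Eq. (8)
    theta_star = target_gate_parameters(theta0, f0, f_target=1.5, chi=chi)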

The results on the determination of the connectivity of the objective function in the target state are included in Theorem 2.

Connectivity of the objective function in the target state

Theorem 2.

(Connectivity of the objective function in the target state). The \(\left(i,j\right)\) pairs of the si,j edges of \({\mathscr{G}}\), \({s}_{i,j}\in S\), in a target objective function \({C}^{\ast }\left(z\right)={\sum }_{\forall {s}_{i,j}\in S}{C}_{{s}_{i,j}}^{\ast }\left(z\right)\) associated with \({f}^{\ast }(\vec{\theta })\) can be determined from \(\vec{{\theta }^{\ast }}\), where \({C}_{{s}_{i,j}}^{\ast }\left(z\right)\) is an objective function component associated with si,j.

Proof.

Let \({\mathscr{G}}=\left(V,S\right)\) be the connectivity graph10 associated with the QG quantum gate structure of the quantum computer (see Definition 3), and let \(\vec{{\theta }^{\ast }}\) be evaluated as given in (14). Let \({\mathscr{X}}\) be the input space and let \({\mathscr{K}}\) be a kernel machine, defined for a given \(x,y\in {\mathscr{X}}\) via a kernel function88 as

$${\mathscr{K}}\left(x,y\right)=\Gamma {\left(x\right)}^{T}\Gamma \left(y\right),$$
(15)

where

$$\Gamma :{\mathscr{X}}\to {\mathscr{H}}$$
(16)

is a nonlinear map from \({\mathscr{X}}\) to the high-dimensional Reproducing Kernel Hilbert Space (RKHS) \({\mathscr{H}}\) associated with \({\mathscr{K}}\). Without loss of generality,

$$\dim \left({\mathscr{H}}\right){\rm{\gg }}\dim \left({\mathscr{X}}\right),$$
(17)

and we assume that the map Γ in (16) has no inverse.
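
The kernel \({\mathscr{K}}\) is left generic in this work. Purely as an illustration, the sketch below uses a Gaussian (RBF) kernel, whose implicit feature map Γ points into an infinite-dimensional RKHS, consistent with (17), and admits no explicit inverse; the kernel choice and the bandwidth parameter are assumptions of the example.

    # A hedged illustration of Eqs. (15)-(17) with a Gaussian (RBF) kernel;
    # the kernel is only an example, since the paper leaves K generic.
    import numpy as np

    def rbf_kernel(x, y, gamma=1.0):
        """K(x, y) = exp(-gamma ||x - y||^2) = Gamma(x)^T Gamma(y) in the RKHS."""
        d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return float(np.exp(-gamma * (d @ d)))

    def kernel_matrix(X, gamma=1.0):
        """Gram matrix K_ij = K(Upsilon_i, Upsilon_j) over the training data (rows of X)."""
        X = np.asarray(X, dtype=float)
        sq = np.sum(X ** 2, axis=1)
        D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
        return np.exp(-gamma * D2)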

The connectivity of the objective function and the pairwise connectivity of the quantum computer’s hardware are not related, since these connections are represented in different layers10. While the physical-layer connectivity is determined by the QG quantum gate structure of the fixed quantum hardware, the connectivity of the \(C\left(z\right)\) objective function is determined in the logical layer, which formulates a computational pathway. As a corollary, the proposed algorithm works on fixed quantum hardware and iterates in the logical layer to determine the connectivity of the objective function such that the objective function is maximized.

Let \(\vec{\kappa }\) be the vector of si,j edges, \({s}_{i,j}\in S\), and let \(\vec{\Omega }\) be the vector of the actual \({C}_{{s}_{i,j}}\left(z\right)\) objective function values associated with the si,j edges. The initial computational path of the quantum computer is therefore

$$C\left(z\right)=\sum _{{\kappa }_{i}}{\Omega }_{{\kappa }_{i}}=\sum _{\forall {s}_{i,j}\in S}{C}_{{s}_{i,j}}\left(z\right),$$
(18)

where κi and \({\Omega }_{{\kappa }_{i}}\) identify the i-th elements of \(\vec{\kappa }\) and \(\vec{\Omega }\), respectively.

Then, let ϒ0 be an element of the input space \({\mathscr{X}}\), defined as

$${\Upsilon }_{0}={(\vec{\kappa },\vec{\Omega })}^{T},$$
(19)

and let τ0 be the map of ϒ0 in \({\mathscr{H}}\), as

$${\tau }_{0}=\Gamma \left({\Upsilon }_{0}\right)=\lambda {\vec{\theta }}_{0},$$
(20)

where λ is a matrix of eigenvectors associated with the edge and objective function values in \(| {\vec{\theta }}_{0}\rangle \).

Then, let ϒ* be the target element in \({\mathscr{X}}\) to be determined,

$${\Upsilon }^{\ast }={(\vec{{\kappa }^{\ast }},\vec{{\Omega }^{\ast }})}^{T},$$
(21)

where \(\vec{{\kappa }^{\ast }}\) and \(\vec{{\Omega }^{\ast }}\) are target vectors that identify the connectivity of the \({C}_{{s}_{i,j}}^{\ast }\left(z\right)\) objective function values in the target state \(| \vec{{\theta }^{\ast }}\rangle \), such that the \({C}^{\ast }\left(z\right)\) computational path can be evaluated as

$${C}^{\ast }\left(z\right)=\sum _{{\kappa }_{i}^{\ast }}{\Omega }_{{\kappa }_{i}^{\ast }}^{\ast }=\sum _{\forall {s}_{i,j}\in S}{C}_{{s}_{i,j}}^{\ast }\left(z\right),$$
(22)

where \({\kappa }_{i}^{\ast }\) and \({\Omega }_{{\kappa }_{i}^{\ast }}^{\ast }\) refer to the i-th elements of \(\vec{{\kappa }^{\ast }}\) and \(\vec{{\Omega }^{\ast }}\), respectively.

Then, let τ* be the map of the target \({\Upsilon }^{\ast }\in {\mathscr{X}}\) in \({\mathscr{H}}\), defined as

$${\tau }^{\ast }=\Gamma \left({\Upsilon }^{\ast }\right)={\lambda }^{\ast }\vec{{\theta }^{\ast }},$$
(23)

where λ* is a matrix of eigenvectors associated with the edge and objective function values in state \(| \vec{{\theta }^{\ast }}\rangle \).

Since (23) is linear, in the \(| \vec{{\theta }^{\ast }}\rangle \) state, the maps \(\Gamma \left(\vec{\kappa }\right)\) and \(\Gamma \left(\vec{\Omega }\right)\) of \(\vec{{\kappa }^{\ast }}\) and \(\vec{{\Omega }^{\ast }}\), can be rewritten as

$$\Gamma \left(\vec{\kappa }\right)=\mu \vec{{\theta }^{\ast }}$$
(24)

and

$$\Gamma (\vec{\Omega })=\nu \vec{{\theta }^{\ast }}$$
(25)

with

$${\lambda }^{\ast }={\left(\mu ,\nu \right)}^{T}.$$
(26)

Since (23) can be evaluated from (20) in \({\mathscr{H}}\), the task is therefore to identify ϒ* in \({\mathscr{X}}\) from τ*. Once ϒ* is determined, the target vectors \(\vec{{\kappa }^{\ast }}\) and \(\vec{{\Omega }^{\ast }}\) for the target objective function in (22) are also found.

Since the map Γ in (16) has no inverse, finding ϒ* in \({\mathscr{X}}\) from τ* defines an ill-posed problem93,94,99,100,101. In this setting, the determination of ϒ* from τ* requires the use of a \({\mathscr{P}}\) projector on τ0 (20) in \({\mathscr{H}}\), which yields a \({\mathscr{P}}\left({\tau }_{0}\right)\) element in \({\mathscr{H}}\). If τ* lies in (or close to) the span of \(\left\{\Gamma \left({\Upsilon }_{i}\right)\right\}\), where ϒi is an i-th training data, \({\Upsilon }_{i}\in {\mathscr{X}}\), from a training set \({{\mathscr{S}}}_{{\mathscr{X}}}\) of N training data,

$${{\mathscr{S}}}_{{\mathscr{X}}}=\left\{{\Upsilon }_{1},\ldots ,{\Upsilon }_{N}\right\},$$
(27)

then τ* can be represented as a linear combination of the training data93,94,95. As a corollary, \({\mathscr{P}}\left({\tau }_{0}\right)\) yields a close approximation of τ* in \({\mathscr{H}}\):

$${\tau }^{\ast }\approx {\mathscr{P}}\left({\tau }_{0}\right).$$
(28)

The \({\mathscr{P}}\left({\tau }_{0}\right)\) projection is defined as

$${\mathscr{P}}\left({\tau }_{0}\right)=\mathop{\sum }\limits_{i=1}^{n}{\beta }_{i}{V}_{i},$$
(29)

where Vi is a matrix of normalized eigenvectors of \({\mathscr{K}}\), while the βi are projection coefficients, given as

$${\beta }_{i}=\mathop{\sum }\limits_{j=1}^{N}{\alpha }_{j}^{i}{\mathscr{K}}\left({\Upsilon }^{\ast },{\Upsilon }_{j}\right),$$
(30)

while αi is an i-th coefficient in the eigenvector V as

$$V=\mathop{\sum }\limits_{i=1}^{N}{\alpha }_{i}{\tau }_{i},$$
(31)

where τi is the map of training data ϒi, as

$${\tau }_{i}=\Gamma \left({\Upsilon }_{i}\right).$$
(32)

Then, based on (30) and (31), a j-th component of χ from (8), \(\chi ={\{{\chi }_{j}\}}_{j=1}^{N}\), can be determined as

$${\chi }_{j}=\mathop{\sum }\limits_{i=1}^{N}{\widetilde{\alpha }}_{i}^{j}{\mathscr{K}}({\Upsilon }^{\ast },{\widetilde{\Upsilon }}_{i}),$$
(33)

where \({\widetilde{\Upsilon }}_{i}\) is a training data from a training set \({\widetilde{{\mathscr{S}}}}_{{\mathscr{X}}}\), such that the constraint92,93 of

$$\mu (\Gamma ({\widetilde{{\mathscr{S}}}}_{{\mathscr{X}}}))=\frac{1}{N}\mathop{\sum }\limits_{j=1}^{N}\,\Gamma \,({\widetilde{\Upsilon }}_{j})=0$$
(34)

holds for \({\widetilde{{\mathscr{S}}}}_{{\mathscr{X}}}\), where \(\mu (\Gamma ({\widetilde{{\mathscr{S}}}}_{{\mathscr{X}}}))\) is the mean of the Γ-mapped training points \({\widetilde{{\mathscr{S}}}}_{{\mathscr{X}}}\), while \({\widetilde{\alpha }}_{i}^{j}\) is an i-th coefficient of a j-th eigenvector \({\widetilde{V}}_{j}\),

$${\widetilde{V}}_{j}=\mathop{\sum }\limits_{i=1}^{N}{\widetilde{\alpha }}_{i}^{j}\Gamma ({\widetilde{\Upsilon }}_{i}).$$
(35)

As can be proven92,93,94, the constraint in (34) is satisfied if the relation

$$\left\langle \vec{K}\right\rangle \vec{\alpha }=N\lambda \vec{\alpha },$$
(36)

holds for a particular training set \({{\mathscr{S}}}_{{\mathscr{X}}}\), where \(\vec{\alpha }\) is the set of eigenvectors of \(\vec{K}\) with eigenvalues λ, while \(\left\langle \vec{K}\right\rangle \) is the centered kernel matrix of \({\mathscr{K}}\), defined as

$$\left\langle \vec{K}\right\rangle =\vec{K}-{\mathscr{I}}\vec{K}-\vec{K}{\mathscr{I}}+{\mathscr{I}}\vec{K}{\mathscr{I}},$$
(37)

where \(\vec{K}\) is the kernel matrix of \({\mathscr{K}}\), while \({\mathscr{I}}\) is as

$${\mathscr{I}}=I-\vec{J},$$
(38)

where I is the identity matrix, while \(\vec{J}\) is an N × N matrix of ones.

Therefore, χ from (8) can be determined via the use of \(\left\langle \vec{K}\right\rangle \) in (36) for a given \({{\mathscr{S}}}_{{\mathscr{X}}}\), which guarantees that (34) is satisfied, i.e., the \(\Gamma \left({{\mathscr{S}}}_{{\mathscr{X}}}\right)\) mapped training data have zero mean, which allows us to evaluate χ in an exact form.
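
The centering condition (34) and the eigenproblem (36) correspond to the standard kernel-PCA preprocessing step. The following sketch uses the common convention in which the all-ones matrix is scaled by 1/N; it illustrates how a centered Gram matrix and the normalized eigenvector coefficients \({\widetilde{\alpha }}^{j}\) can be obtained, and it is an illustration of the standard procedure rather than the exact implementation of this work.

    # A sketch of the centering and eigendecomposition behind Eqs. (33)-(38),
    # following the usual kernel-PCA convention (all-ones matrix scaled by 1/N).
    import numpy as np

    def center_kernel_matrix(K):
        """Centered Gram matrix <K>, so the Gamma-mapped data have zero mean, Eq. (34)."""
        N = K.shape[0]
        J = np.full((N, N), 1.0 / N)            # scaled all-ones matrix
        return K - J @ K - K @ J + J @ K @ J

    def kernel_eigenvectors(K):
        """Solve <K> alpha = N lambda alpha, Eq. (36); return normalized coefficients."""
        Kc = center_kernel_matrix(K)
        eigvals, eigvecs = np.linalg.eigh(Kc)    # ascending order
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        # normalize alpha_j so that the feature-space eigenvector
        # V_j = sum_i alpha_i^j Gamma(Upsilon_i) has unit norm
        alphas = np.array([eigvecs[:, j] / np.sqrt(max(eigvals[j], 1e-12))
                           for j in range(len(eigvals))])
        return eigvals / K.shape[0], alphas      # lambdas of Eq. (36), coefficients alpha^j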

The goal of projection \({\mathscr{P}}\) is to minimize the \({f}_{d}\left({\tau }^{\ast },{\mathscr{P}}\left({\tau }_{0}\right)\right)\) distance in \({\mathscr{H}}\), where

$${f}_{d}\left({\tau }^{\ast },{\mathscr{P}}\left({\tau }_{0}\right)\right)={\left\Vert {\tau }^{\ast }-{\mathscr{P}}\left({\tau }_{0}\right)\right\Vert }^{2}={\left\Vert \Gamma \left({\Upsilon }^{\ast }\right)-{\mathscr{P}}\left({\tau }_{0}\right)\right\Vert }^{2}.$$
(39)

Thus, at a given (29) and (39), the term in (21) can be rewritten as an optimality criterion

$${\Upsilon }^{\ast }=\mathop{\arg \,\min }\limits_{{\Upsilon }^{\ast }\in {\mathscr{X}}}\,{f}_{d}({\tau }^{\ast },{\mathscr{P}}({\tau }_{0})).$$
(40)

By introducing a non-negative regularization parameter Φ93 to weight the distance of \({\left\Vert {\Upsilon }^{\ast }-{\Upsilon }_{0}\right\Vert }^{2}\), the result in (39) at a given \({\Upsilon }_{0}\in {\mathscr{X}}\) can be rewritten as

$$\begin{array}{ll}{f}_{d}({\tau }^{\ast },{\mathscr{P}}\left({\tau }_{0}\right)) & \\ & =\ {\left\Vert \Gamma \left({\Upsilon }^{\ast }\right)-{\mathscr{P}}\left({\tau }_{0}\right)\right\Vert }^{2}+\Phi {\left\Vert {\Upsilon }^{\ast }-{\Upsilon }_{0}\right\Vert }^{2}\\ & =\ {\mathscr{K}}\left({\Upsilon }^{\ast },{\Upsilon }^{\ast }\right)-2\mathop{\sum }\limits_{i=1}^{N}{\ell }_{i}{\mathscr{K}}({\Upsilon }^{\ast },{\Upsilon }_{i})\\ & +\ \Phi ({\left({\Upsilon }^{\ast }\right)}^{T}{\Upsilon }^{\ast }+{\left({\Upsilon }_{0}\right)}^{T}{\Upsilon }_{0}-2{\Upsilon }^{\ast }{\Upsilon }_{0})+\zeta ,\end{array}$$
(41)

where ζ refers to terms independent of ϒ*, while \({\ell }_{i}\) is defined as

$${\ell }_{i}=\mathop{\sum }\limits_{k=1}^{n}{\beta }_{k}{\alpha }_{i}^{k},$$
(42)

where n is associated with the projection \({\mathscr{P}}\left({\tau }_{0}\right)\), since τ0 is projected onto the subspace spanned by the first n eigenvectors V1, …, Vn.
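
For completeness, the sketch below illustrates how the projection coefficients βk in (30) and the combined coefficients ℓi in (42) can be computed from the eigenvector coefficients of the centered Gram matrix. The test point y, the kernel callable, and the omission of the centering of the kernel evaluations are simplifying assumptions of the example.

    # A sketch of Eqs. (29)-(32) and (42): project a mapped point onto the
    # first n kernel eigenvectors and collect the combined coefficients l_i.
    # (Centering of the kernel evaluations k_y is omitted for brevity.)
    import numpy as np

    def projection_coefficients(y, X, alphas, kernel, n):
        """beta_k = sum_j alpha_j^k K(y, Upsilon_j), for the first n eigenvectors, Eq. (30)."""
        k_y = np.array([kernel(y, x) for x in X])       # K(y, Upsilon_j)
        return np.array([alphas[k] @ k_y for k in range(n)])

    def ell_coefficients(betas, alphas):
        """l_i = sum_k beta_k alpha_i^k, Eq. (42)."""
        n = len(betas)
        N = alphas.shape[1]
        return np.array([sum(betas[k] * alphas[k][i] for k in range(n))
                         for i in range(N)])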

The result in (41) can be simplified by removing all terms independent of ϒ*, such that \({f}_{d}({\tau }^{\ast },{\mathscr{P}}\left({\tau }_{0}\right))\) can be minimized for arbitrary \({\mathscr{K}}\), as

$$\begin{array}{ll}{f}_{d}({\tau }^{\ast },{\mathscr{P}}\left({\tau }_{0}\right))= & {\mathscr{K}}\left({\Upsilon }^{\ast },{\Upsilon }^{\ast }\right)\\ & -2\mathop{\sum }\limits_{i=1}^{N}{\ell }_{i}{\mathscr{K}}({\Upsilon }^{\ast },{\Upsilon }_{i})+\Phi ({\left({\Upsilon }^{\ast }\right)}^{T}{\Upsilon }^{\ast }-2{\Upsilon }^{\ast }{\Upsilon }_{0}),\end{array}$$
(43)

where

$${\mathscr{K}}\left({\Upsilon }^{\ast },{\Upsilon }^{\ast }\right)=\Gamma {\left({\Upsilon }^{\ast }\right)}^{T}\Gamma \left({\Upsilon }^{\ast }\right)={\left({\tau }^{\ast }\right)}^{T}{\tau }^{\ast }.$$
(44)

At a \({\mathscr{P}}\left({\tau }_{0}\right)\) with relation (43), ϒ* is determined as follows. Using (43) with an arbitrary \({\mathscr{K}}\), ϒ* can be evaluated as

$${\Upsilon }^{\ast }=\frac{1}{{\tau }^{\ast }{\mathscr{P}}\left({\tau }_{0}\right)+\Phi }\mathop{\sum }\limits_{i=1}^{N}{\ell }_{i}{\mathscr{K}}({\Upsilon }^{\ast },{\Upsilon }_{i}){\Upsilon }_{i}+\Phi {\Upsilon }_{0},$$
(45)

where the Φ regularization coefficient ensures the stability of ϒ*, while

$${\tau }^{\ast }{\mathscr{P}}\left({\tau }_{0}\right)=\Gamma \left({\Upsilon }^{\ast }\right){\mathscr{P}}\left(\Gamma \left({\Upsilon }_{0}\right)\right)=\mathop{\sum }\limits_{i=1}^{N}{\ell }_{i}{\mathscr{K}}({\Upsilon }^{\ast },{\Upsilon }_{i}),$$
(46)

where \({\mathscr{P}}\left({\tau }_{0}\right)\) is defined in (29).

Then let \({\mathscr{K}}{\prime} \) be the derivative of \({\mathscr{K}}\) such that it formulates the gradient with respect to ϒ* as

$$\begin{array}{ll} & {\nabla }_{{\Upsilon }^{\ast }}({f}_{d}({\tau }^{\ast },{\mathscr{P}}\left({\tau }_{0}\right)))\\ & =\ \mathop{\sum }\limits_{i=1}^{N}{\ell }_{i}{\mathscr{K}}{\prime} ({\Upsilon }^{\ast },{\Upsilon }_{i})({\Upsilon }^{\ast }-{\Upsilon }_{i})+\Phi ({\Upsilon }^{\ast }-{\Upsilon }_{0}).\end{array}$$
(47)

It follows that, for a given \(\vec{{\theta }^{\ast }}\), the target \(\vec{{\kappa }^{\ast }}\) and \(\vec{{\Omega }^{\ast }}\) can be determined for an arbitrary \({\mathscr{K}}\) via a stable solution ϒ* (45), such that \(\vec{{\kappa }^{\ast }}\) contains the \(\left(i,j\right)\) pairs of the si,j edges for \({C}_{{s}_{i,j}}^{\ast }\left(z\right)\), while \(\vec{{\Omega }^{\ast }}\) identifies the values of \({C}_{{s}_{i,j}}^{\ast }\left(z\right)\) in \(| \vec{{\theta }^{\ast }}\rangle \).

The proof is concluded here. ■
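
A minimal sketch of the regularized pre-image step in the spirit of (43)-(47) is given below, specialized to a Gaussian kernel. The grouping of the Φϒ0 term in the update follows the standard kernel pre-image literature, and the kernel choice, parameter values, and stopping rule are assumptions of the illustration.

    # A sketch of a regularized fixed-point pre-image iteration in the spirit
    # of Eqs. (45)-(47) for a Gaussian kernel; illustrative only.
    import numpy as np

    def rbf_vec(x, Y, gamma=1.0):
        """K(x, Upsilon_i) for every training point Upsilon_i (rows of Y)."""
        d = Y - x
        return np.exp(-gamma * np.sum(d * d, axis=1))

    def preimage_fixed_point(ell, Y, upsilon0, phi=0.1, gamma=1.0,
                             n_iter=100, tol=1e-9):
        """Iterate Upsilon* <- (sum_i l_i K(Upsilon*, Y_i) Y_i + phi Upsilon_0)
        / (sum_i l_i K(Upsilon*, Y_i) + phi) until convergence."""
        Y = np.asarray(Y, dtype=float)
        upsilon0 = np.asarray(upsilon0, dtype=float)
        ups = upsilon0.copy()
        for _ in range(n_iter):
            w = ell * rbf_vec(ups, Y, gamma)       # l_i K(Upsilon*, Upsilon_i)
            ups_new = (w @ Y + phi * upsilon0) / (w.sum() + phi)
            if np.linalg.norm(ups_new - ups) < tol:
                return ups_new
            ups = ups_new
        return ups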

Computational pathway of the optimal state of the quantum computer

Lemma 1.

The \({C}^{\ast }\left(z\right)\) computational pathway of the optimal quantum state \(| \vec{{\theta }^{\ast }}\rangle \) can be determined for an arbitrary \({\mathscr{K}}\).

Proof.

To construct an iteration method for the determination of \(| \vec{{\theta }^{\ast }}\rangle \) via ϒ*, some preliminary conditions are set as follows. For the \({\mathscr{P}}\left({\tau }_{0}\right)\) projection, we set the condition

$${\mathscr{P}}\left({\tau }_{0}\right)\ne \vec{0},$$
(48)

therefore

$${\tau }^{\ast }{\mathscr{P}}\left({\tau }_{0}\right) > 0.$$
(49)

Then, let \(\varepsilon \left({\Upsilon }^{\ast }\right)\) be the extremum of ϒ* defined94,95 as

$$\varepsilon \left({\Upsilon }^{\ast }\right)=\frac{1}{{\sum }_{j}{\sigma }_{j}}{\sum }_{i}{\Upsilon }_{i}{\sigma }_{i},$$
(50)

where

$${\sigma }_{i}={\ell }_{i}{\mathscr{K}}{\prime} (\varepsilon \left({\Upsilon }^{\ast }\right),{\Upsilon }_{i}).$$
(51)

The gradient with respect to \(\varepsilon \left({\Upsilon }^{\ast }\right)\) is

$${\nabla }_{\varepsilon \left({\Upsilon }^{\ast }\right)}({f}_{d}(\Gamma (\varepsilon \left({\Upsilon }^{\ast }\right)),{\mathscr{P}}\left({\tau }_{0}\right)))=0.$$
(52)

As \({\mathscr{K}}\) is smooth, it can be shown that the condition of (49) always holds, since there is a neighborhood of the extremum93,94 of \({f}_{d}(\Gamma (\varepsilon \left({\Upsilon }^{\ast }\right)),{\mathscr{P}}\left({\tau }_{0}\right))\).

To ensure the stability of \({\Upsilon }_{i}^{\ast }\) in the i-th iteration step, we utilize the Φ regularization coefficient from (43) for the evaluation of \({\Upsilon }_{i}^{\ast }\); in the computation, \({f}_{d}^{\left(i\right)}(\cdot )\) denotes the distance function associated with the i-th iteration step.

The steps are given in Algorithm 2. ■

Algorithm 2. Computational pathway of the optimal state of the quantum computer.
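
A schematic driver of the Algorithm 2 iteration is sketched below; it assumes that an update step (for instance, the Gaussian-kernel step sketched after Theorem 2) and the per-iteration distance \({f}_{d}^{\left(i\right)}(\cdot )\) from (43) are supplied as callables, and the stopping rule is an assumption of the sketch.

    # A schematic Algorithm 2 loop: repeat a regularized pre-image update from
    # Upsilon_0 while the per-iteration distance f_d^(i) keeps decreasing.
    import numpy as np

    def iterate_pathway(upsilon0, update_step, distance, max_iter=200, tol=1e-9):
        """update_step(ups) returns the next iterate (cf. Eq. (45));
        distance(ups) returns f_d^(i) (cf. Eq. (43)) used as the stopping criterion."""
        ups = np.asarray(upsilon0, dtype=float)
        best = distance(ups)
        for _ in range(max_iter):
            ups_new = update_step(ups)
            cur = distance(ups_new)
            if best - cur < tol:                   # no further decrease of f_d^(i)
                break
            ups, best = ups_new, cur
        return ups                                 # approximates Upsilon* = (kappa*, Omega*)^T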

Conclusions

Gate-model quantum computers represent an implementable way to realize near-term experimental quantum computations. The resolution of a computational problem fed into a quantum computer can be modeled via reaching the target value of an objective function. The objective function is determined by the actual computational problem. To satisfy the target objective function value, a quantum computer must reach a target system state. In the target system state, the gate parameters of the unitaries take values that set the objective function to its target value. Finding the target system state is a challenge that requires several rounds of measurement and system state preparation via the quantum computer. Here, we proved that the target state of the quantum computer can be evaluated from an initial system state and an initial objective function. The solution significantly reduces the cost of objective function evaluation, since the proposed method does not require the preparation of intermediate system states between the initial and target system states of the quantum computer. We defined a method for the evaluation of the computational path of the quantum computer in the target state, and an algorithm to solve the computational path problem in an iterative manner.

Ethics statement

This work did not involve any active collection of human data.