A regression algorithm for accelerated lattice QCD that exploits sparse inference on the D-Wave quantum annealer

We propose a regression algorithm that utilizes a learned dictionary optimized for sparse inference on a D-Wave quantum annealer. In this regression algorithm, we concatenate the independent and dependent variables into a combined vector, and encode the high-order correlations between them into a dictionary optimized for sparse reconstruction. On a test dataset, the dependent variable is initialized to its average value and then a sparse reconstruction of the combined vector is obtained in which the dependent variable is typically shifted closer to its true value, as in a standard inpainting or denoising task. Here, a quantum annealer, which can presumably exploit a fully entangled initial state to better explore the complex energy landscape, is used to solve the highly non-convex sparse coding optimization problem. The regression algorithm is demonstrated on lattice quantum chromodynamics simulation data using a D-Wave 2000Q quantum annealer, and good prediction performance is achieved. The regression test is performed using six different values for the number of fully connected logical qubits, between 20 and 64. The scaling results indicate that a larger number of qubits gives better prediction accuracy.

Sparse coding refers to a class of unsupervised learning algorithms for finding an optimized set of basis vectors, or dictionary, that accurately reconstructs inputs drawn from a given dataset using the fewest nonzero coefficients. Sparse coding explains the self-organizing response properties of simple cells in the mammalian primary visual cortex 1,2 , and has been successfully applied in various fields including image classification 3,4 , image compression 5 , and compressed sensing 6,7 . Optimizing a dictionary φ ∈ ℝ^{M×N_q} for a given dataset and inferring optimal sparse representations a^(k) ∈ ℝ^{N_q} of input data X^(k) ∈ ℝ^M involves finding solutions of the following minimization problem:

$$\min_{\phi,\,\{a^{(k)}\}} \sum_k \Bigl( \bigl\| X^{(k)} - \phi\, a^{(k)} \bigr\|_2^2 + \lambda \bigl\| a^{(k)} \bigr\|_0 \Bigr), \tag{1}$$

where k is the index of the input data and λ is the sparsity penalty parameter. Note that the convergence of the solution is guaranteed only when the norm of the column vectors of the dictionary φ is constrained by an upper bound, which is unity in this study. Because of the L_0-norm, the minimization problem falls into an NP-hard complexity class with multiple local minima 8 in the energy landscape.
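For concreteness, the per-sample objective of Eq. (1) can be written in a few lines of numpy; this is an illustrative sketch, not the authors' code:

```python
import numpy as np

def sparse_coding_energy(X, phi, a, lam):
    """Per-sample objective of Eq. (1): squared reconstruction error
    plus an L0 penalty counting the nonzero coefficients."""
    return np.sum((X - phi @ a) ** 2) + lam * np.count_nonzero(a)
```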
Recently, we developed a mapping of the a^(k)-optimization in Eq. (1) to the quadratic unconstrained binary optimization (QUBO) problem that can be solved on a quantum annealer, and demonstrated its feasibility on the D-Wave systems 9-11 . The quantum processing unit of the D-Wave systems realizes the quantum Ising spin system in a transverse field and finds the lowest or near-lowest energy states of the classical Ising model,

$$E(s) = \sum_i h_i\, s_i + \sum_{i<j} J_{ij}\, s_i s_j, \tag{2}$$

using quantum annealing 12-14 . Here s_i = ±1 is the binary spin variable, h_i and J_ij are the qubit biases and coupling strengths that can be controlled by a user, and optimization of the Ising model is isomorphic to a QUBO problem with a_i = (s_i + 1)/2 . By mapping the sparse coding to a QUBO structure, the sparse coefficients are restricted to binary variables a_i ∈ {0, 1} , which makes the L_0-norm equivalent to the L_1-norm. Despite this restriction, it was able to provide good sparse representations of the MNIST 9,11,15 and CIFAR-10 10,16 images. In this paper, we propose a regression algorithm using the sparse coding on a D-Wave 2000Q in Sect. Regression algorithm using sparse coding on D-Wave 2000Q and apply the algorithm to the prediction of a quantum chromodynamics (QCD) simulation observable in Sect. Application to lattice QCD.
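The QUBO–Ising isomorphism mentioned above follows from the substitution a_i = (s_i + 1)/2. A minimal sketch of this change of variables (illustrative only; the QUBO matrix Q is assumed strictly upper triangular):

```python
import numpy as np

def qubo_to_ising(b, Q):
    """Rewrite E(a) = sum_i b_i a_i + sum_{i<j} Q_ij a_i a_j in spin
    variables s_i = 2 a_i - 1, returning (h, J, offset) such that
    E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j + offset."""
    J = Q / 4.0
    # each coupling Q_ij contributes Q_ij / 4 to both h_i and h_j
    h = b / 2.0 + (Q.sum(axis=1) + Q.sum(axis=0)) / 4.0
    offset = b.sum() / 2.0 + Q.sum() / 4.0
    return h, J, offset
```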

Regression algorithm using sparse coding on D-Wave 2000Q
Regression model. Consider a training dataset of N pairs {(X^(i), y^(i))}, where X^(i) = {x_d^(i) | d = 1, 2, …, D} is an input vector known as the independent variable, and y^(i) is an output variable known as the dependent variable. A regression model F can be built by learning correlations between the input and output variables on the training dataset, so that it can make a prediction ŷ of y for an unseen input datum X as ŷ = F(X). Such a regression model can be built using the sparse coding learning implemented on a quantum annealer described below.
• Pre-training
(1) Normalize x_d^(i) and y^(i) so that their standard deviations become comparable. One possible choice is rescaling the data to have zero mean and unit variance using the sample mean and sample variance of the training dataset. This procedure is an essential step for the regression algorithm, as it makes the reconstruction error of each component comparable.
(2) Using the X of the test dataset (M data points) or those of the combined training and test datasets (N + M data points), perform sparse coding training and obtain a dictionary φ for X.

• Training
(3) Concatenate the input and output variables of the training dataset and build the concatenated vectors X_o^(i) = (x_1^(i), …, x_D^(i), y^(i)). Extend the dictionary matrix φ ∈ ℝ^{D×N_q} obtained in the pre-training to φ_o ∈ ℝ^{(D+1)×N_q}, filling the new elements with zeros.
(4) Taking X_o^(i) as the input signal and φ_o as an initial guess of the dictionary, perform sparse coding training on the training dataset and obtain the dictionary φ. Through this procedure, φ encodes the correlations between x_d^(i) and y^(i).

• Prediction
(5) For the test dataset, for which only X^(j) is given, build a vector X_o^(j) = (x_1^(j), …, x_D^(j), ȳ^(j)), where ȳ^(j) is an initial guess of y^(j). One possible choice of ȳ^(j) is the average value of y^(i) over the training dataset.
(6) Using the dictionary φ obtained in (4), find a sparse representation a^(j) for X_o^(j) and calculate the reconstruction X′^(j) = φ a^(j). This replaces the outlier components of X_o^(j), including ȳ^(j), by values that can be described by φ; the last element of X′^(j) is taken as the prediction ŷ^(j).
In this regression model, D should be sufficiently large so that the initial guess of the dependent variable ȳ^(j) does not bias the reconstruction. This procedure can be extended to predict multiple variables by increasing the dimension of y, at the cost of prediction accuracy.
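A minimal sketch of the prediction steps (5) and (6) is given below; `solve_sparse` is a hypothetical placeholder for the annealer-based sparse inference described in the next subsection:

```python
import numpy as np

def predict(phi, X_test, solve_sparse, y_bar=0.0):
    """Steps (5)-(6): append the initial guess y_bar, infer a sparse
    binary code, and read the prediction off the reconstruction."""
    preds = []
    for x in X_test:
        x_o = np.append(x, y_bar)    # step (5): extended input vector
        a = solve_sparse(phi, x_o)   # step (6): sparse inference (annealer)
        preds.append((phi @ a)[-1])  # prediction = last reconstructed element
    return np.array(preds)
```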
Sparse coding on a D-Wave quantum annealer. The a^(k)-optimization of the sparse coding problem in Eq. (1) can be mapped onto the D-Wave problem in Eq. (2) by the following transformations 9-11 : expanding Eq. (1) for binary coefficients a_i ∈ {0, 1} at fixed φ gives, up to a constant,

$$E(a) = \sum_i \bigl(\lambda + \phi_i \cdot \phi_i - 2\,\phi_i \cdot X\bigr)\, a_i + 2 \sum_{i<j} \bigl(\phi_i \cdot \phi_j\bigr)\, a_i a_j, \tag{4}$$

where φ_i denotes the i-th column of the dictionary, and the biases h_i and couplings J_ij of Eq. (2) follow from the substitution a_i = (s_i + 1)/2. In this mapping, each neuron (sparse coefficient) of the sparse coding model corresponds to a qubit. After a measurement, the quantum state of a qubit collapses to 0 or 1, which indicates that the neuron can have only two states: firing (1) or silent (0). Here the qubit-qubit coupling J plays a role similar to the lateral neuron-neuron inhibition in the locally competitive algorithm 17 , and the constant λ makes the solution sparse by acting as a constant field forcing the qubits to stay in the a_i = 0 ( s_i = −1 ) state. By performing the quantum annealing for a given dictionary φ and input data vector X with the transformations given in Eq. (4), one can obtain the optimal sparse representation a.
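Following the expansion in Eq. (4), the QUBO coefficients can be assembled from the dictionary and the input vector; this is an illustrative numpy sketch of the mapping, not the SAPI2 code used in the paper:

```python
import numpy as np

def sparse_coding_qubo(phi, X, lam):
    """QUBO form of Eq. (4), up to a constant:
    linear terms    b_i  = lam + phi_i . phi_i - 2 phi_i . X,
    quadratic terms Q_ij = 2 phi_i . phi_j  (i < j)."""
    G = phi.T @ phi                   # Gram matrix of dictionary columns
    b = lam + np.diag(G) - 2.0 * (phi.T @ X)
    Q = 2.0 * np.triu(G, k=1)         # keep only i < j couplings
    return b, Q
```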
An ideal D-Wave 2000Q consists of 2048 qubits, and the entire coupling graph of this 2048-qubit system is called the perfect Chimera 2000Q, a 1/16 subset of which is illustrated in Fig. 1. However, the graph is sparsely connected: one qubit can couple to at most 6 other qubits. With this limited connectivity between the qubits, a perfect 2048-qubit Chimera has 6016 couplers. To map a general Ising model problem with arbitrary bipartite couplings to a D-Wave Chimera, in many cases one requires an additional step called the embedding. In the case of the sparse coding problem, the embedding translates a graph of fully connected logical qubits to the Chimera graph of partially connected physical qubits by chaining groups of physical qubits together with a certain chain strength ξ . One example of such a mapping, embedding 6 fully connected logical qubits into the D-Wave 2000Q by chaining 14 physical qubits, is shown in Fig. 1. This embedding procedure results in a significant reduction of the total available logical qubits that represent the mapped problem; on a perfect D-Wave 2000Q QPU, only up to 65 fully connected logical qubits can be mapped. In practice, however, some qubits on the QPU are inoperable after calibration, and the maximum number of logical qubits that can be embedded decreases. For example, the D-Wave 2000Q quantum annealer at Los Alamos National Laboratory (LANL) has only 2032 active qubits with 5924 active couplers. We find that an arbitrary QUBO problem with up to 64 fully connected logical qubits can be embedded on the LANL D-Wave 2000Q.
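For illustration, an embedding of a fully connected 64-variable problem into an ideal Chimera graph can be searched with the minorminer package of the current Ocean SDK; the paper itself used the SAPI2 client, so the package names and calls here come from the modern toolchain:

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

# Ideal Chimera C16 topology of a perfect D-Wave 2000Q (2048 qubits).
chimera = dnx.chimera_graph(16, 16, 4)
K64 = nx.complete_graph(64)

# Heuristic search for chains of physical qubits; may need a few tries.
embedding = minorminer.find_embedding(K64.edges, chimera.edges)
chain_lengths = [len(chain) for chain in embedding.values()]
print(max(chain_lengths))  # longest chain of physical qubits
```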

Application to lattice QCD
QCD is the theory of quarks and gluons, the fundamental particles composing hadrons such as pions and protons, and their interactions. It is a part of the Standard Model of particle physics, and the theory has been confirmed by a large class of experiments over the decades 18,19 . Lattice QCD is a discrete formulation of QCD on a Euclidean spacetime lattice, which allows us to solve low-energy QCD problems in computer simulations by carrying out the Feynman path integral with Monte Carlo methods 20,21 .
In lattice QCD simulations, a large number of observables are calculated over an ensemble of Gibbs samples of the gluon fields, called the lattices, and the computational cost of calculating those observables is high in modern simulations. However, the observables' fluctuations over the statistical samples of the lattices are correlated because they share the same background lattice. By exploiting the correlation between them, the Gradient Tree Boosting (GTB) regression algorithm of Ref. 22 was able to replace the computationally expensive direct calculation of some observables by computationally cheap machine learning predictions of them from other observables.
In this section, we apply the regression algorithm proposed in Sect. Regression algorithm using sparse coding on D-Wave 2000Q to the lattice QCD simulation data used for the calculation of the charge-parity (CP) symmetry violating phase α_CPV of the neutron 23 .

Method. The goal of our regression problem is to predict the imaginary part of C_2pt^{P,CPV} at t = 10a from the real and imaginary parts of the two-point correlation functions calculated without CPV interactions, C_2pt and C_2pt^P, at t = 8a, 9a, 10a, 11a, and 12a, where a is the lattice spacing. This forms a problem with a single output variable (y) and 20 input variables (X): two observables, with real and imaginary parts, at 5 timeslices. In this application, we use 15616 data points of these observables measured in Refs. 25,26 , divided into 6976 training data and 8640 test data. Using these datasets, we follow the regression procedure proposed in Sect. Regression algorithm using sparse coding on D-Wave 2000Q to predict y on the test dataset of around 9 K data points.
The procedure can be summarized as follows. First, we standardize the whole dataset using the mean and variance of the training dataset for normalization. Then, we perform the pre-training and obtain φ for the 20 elements of X using only the test dataset. After appending y to X as the 21st element in the training dataset, we perform the sparse coding dictionary learning and update φ to encode the correlations between X and y. For prediction, the input vectors X in the test dataset are augmented to 21-dimensional vectors by appending the average value of y, which is 0 after standardization. Finally, sparse coefficients a for the augmented input vectors are calculated with the fixed dictionary φ obtained above, and predictions of y are obtained by taking the 21st element of the reconstructed vectors on the test dataset.
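A minimal sketch of this standardization and augmentation step, assuming X_train and X_test are the 20-column arrays of observables (illustrative only):

```python
import numpy as np

def standardize_and_augment(X_train, X_test):
    """Standardize with training-set statistics, then append a zero
    column to the test inputs: the initial guess for y, whose
    standardized training average is 0."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    X_std = (X_test - mu) / sigma
    return np.hstack([X_std, np.zeros((len(X_std), 1))])
```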
Note that a sparse coding problem solves for the sparsest representation a and the dictionary φ simultaneously by minimizing Eq. (1). First, our optimization for a is performed using the D-Wave 2000Q at a given φ , whose initial guess is given, in general, by random numbers or via an imprinting technique. Then, the optimization for φ is performed on classical CPUs. The latter step is an offline learning for the fixed values of a obtained using the D-Wave 2000Q. In the offline learning procedure, φ is learned using the batch stochastic gradient descent (SGD) algorithm,

$$\phi \;\leftarrow\; \phi - \eta\, \Bigl\langle \frac{\partial E}{\partial \phi} \Bigr\rangle_{\text{batch}},$$

where E is the sparse coding energy function for a given input datum, given in Eq. (1), and η is the learning rate. In this study, η is initially set to 0.01 and gradually decreased during the training procedure. Batch learning is used with a batch size of n_b = 50 . We repeat the iterative update of the quantum D-Wave inference for a and the SGD learning for φ until convergence is attained. On average, we find convergence after 4 or 5 iterations. In this study, we use the SAPI2 python client libraries 27 for implementing the D-Wave operations.
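The offline dictionary update can be sketched as follows; the gradient uses the reconstruction term of Eq. (1) (the L0 term is independent of φ), and the column rescaling enforces the unit upper bound on the column norms. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def sgd_dictionary_step(phi, X_batch, A_batch, eta):
    """One batch SGD step: dE/dphi = -2 (X - phi a) a^T, averaged over
    the batch; X_batch has shape (D, n_b), A_batch shape (N_q, n_b)."""
    residual = X_batch - phi @ A_batch
    grad = -2.0 * residual @ A_batch.T / X_batch.shape[1]
    phi = phi - eta * grad
    # rescale any column whose norm exceeds the unit upper bound
    norms = np.linalg.norm(phi, axis=0)
    return phi / np.maximum(norms, 1.0)
```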
The sparsity of the sparse representation a associated with the sparsity penalty parameter λ is measured by the ratio of nonzero elements in a . In this study, λ is tuned to values that make the average sparsity about 20%, because we find that 20% sparsity provides optimal prediction performance, after examining a few different values of λ . This corresponds to λ ∈ [0.06, 0.1], varied across the different N_q studies in our experiments. Although the prediction performance could be further optimized by an extensive parameter search, such as that performed in Ref. 28 , the procedure is computationally expensive and is therefore omitted in this proof-of-principle study.
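For reference, the average sparsity used to tune λ is simply the fraction of nonzero (firing) coefficients over the inferred codes:

```python
import numpy as np

def average_sparsity(A):
    """Fraction of nonzero coefficients in a batch of binary codes A;
    lambda is tuned until this is roughly 0.2."""
    return np.count_nonzero(A) / A.size
```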
Note that the definition of the overcompleteness is not straightforward for the D-Wave inferred sparse coding, because the input signal X may take arbitrary real values, while the sparse coefficients a can take only the binary values 0 or 1. Ignoring this subtlety, for simplicity, the overcompleteness γ for an input signal of dimension 20 (or 21 for the extended vectors) can be calculated as γ = N_q/20.

Results.
Examples of the reconstruction and prediction for randomly chosen test data points are visualized in Fig. 2. In the plot, the first 20 elements are the input variables, and element 21 is the output of the prediction algorithm. As one can see, the reconstruction of the 21st element, which was 0 in the initial guess, is successfully shifted close to its ground truth, as expected.
In order to investigate the prediction accuracy for different N_q , we explore the prediction algorithm with six different numbers of qubits, N_q = 20, 29, 38, 47, 55 and 64, which corresponds to γ ≈ 1–3. Note that a larger N_q implies a more difficult optimization problem, and N_q = 64 is the maximum number of logical qubits that can be embedded onto the D-Wave 2000Q. In the experiments with the D-Wave 2000Q, we use an annealing time of τ = 20 µs, which is relatively short. In addition, we run the experiments with 10 different values of the chain strength ξ for each input data point to obtain an optimal solution, in exchange for a longer wallclock time. We performed our D-Wave experiments using 20 reads for each value of ξ and take the lowest-energy solution. In Fig. 3, we show the distribution of the normalized original data of the dependent variable y^(i) and its prediction error Δ^(i), defined as the difference between the ground truth y^(i) and its prediction ŷ^(i): Δ^(i) = y^(i) − ŷ^(i). It is clearly demonstrated that (1) the prediction error is much smaller than the fluctuation of the original data, (2) the prediction error is sharply distributed near 0, which indicates no obvious bias in the prediction, and (3) the prediction error tends to be smaller as N_q becomes larger.
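The paper's D-Wave operations used the SAPI2 python client; a rough analogue of the chain-strength scan with the current Ocean SDK might look as follows (Q is a dict mapping index pairs to QUBO coefficients; the call signatures are those of dwave-system, not of SAPI2):

```python
from dwave.system import DWaveSampler, EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())

def best_solution(Q, chain_strengths, num_reads=20, annealing_time=20):
    """Scan several chain strengths xi with 20 reads each and keep the
    lowest-energy sample across the whole scan."""
    best = None
    for xi in chain_strengths:
        result = sampler.sample_qubo(Q, num_reads=num_reads,
                                     chain_strength=xi,
                                     annealing_time=annealing_time)
        first = result.first  # lowest-energy read at this chain strength
        if best is None or first.energy < best.energy:
            best = first
    return best.sample
```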
To evaluate the prediction quality, i.e., the recovery of the 21st element of the extended input vector, quantitatively, we calculate the ratio of the standard deviation of the prediction error to that of the original data: Q ≡ σ(Δ)/σ(y). Q converges to 0 when the prediction is precise, and Q ≥ 1 indicates no predictive power for statistical data. Note that this definition of the prediction quality does not account for the bias of the prediction, because the bias for the prediction of statistical data can be removed by following the procedure introduced in Ref. 22 based on the variance reduction technique for lattice QCD calculations 29,30 .
Figure 4 shows the prediction error Q as a function of the number of qubits. It clearly demonstrates a systematic decrease of the prediction error as N_q is increased. Although no theory explaining the scaling is known, we find that the scaling roughly follows the exponential decay ansatz Q(N_q) = Q_∞ + B · exp[−C · N_q]. By fitting the ansatz to the data points, an asymptotic value of the prediction quality is obtained as Q_∞ ≈ 0.18 or 0.23 for N_q → ∞ , depending on whether or not the N_q = 20 data point is included in the fit. For comparison, regression algorithms provided by the scikit-learn library 31 on a classical computer were investigated for the same dataset, and the GTB regression algorithm 32-34 showed the best prediction performance, with Q = 0.15(1). The data points in Fig. 4 are obtained with fixed training and test datasets, without cross validation, because of the limited D-Wave 2000Q resources. However, the classical regression algorithms applied to the same datasets showed prediction quality Q between 0.15 and 0.33 with uncertainties between 2% and 8.6%, estimated using the bootstrap resampling method following Ref. 22 . Based on this observation, we expect uncertainties smaller than 10% for the data points presented in Fig. 4.
Figure 2. Original, or ground truth, data (blue circles) and the reconstruction from the missing-21st-element data using the D-Wave 2000Q with N_q = 64 (red squares) for two randomly chosen data points. Here the 21st element is the dependent variable of the prediction, whose initial value before the reconstruction is 0.
Pre-training is demonstrated to significantly lower the prediction error of this regression algorithm. When the prediction with N_q = 64 qubits is performed without the pre-training procedure, we find Q = 0.34 , while it becomes Q = 0.254 with the pre-training. Without the pre-training, furthermore, the required number of iterative updates of the D-Wave inference for a and the SGD learning for φ increases to about 10 iterations.
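A short sketch of the quality metric and the exponential fit, assuming numpy arrays of ground truths, predictions, and the six (N_q, Q) points; illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def prediction_quality(y_true, y_pred):
    """Q = sigma(Delta) / sigma(y), with Delta = y - y_hat."""
    return np.std(y_true - y_pred) / np.std(y_true)

def fit_scaling(nq, q):
    """Fit Q(N_q) = Q_inf + B exp(-C N_q) to the measured qualities."""
    ansatz = lambda n, q_inf, b, c: q_inf + b * np.exp(-c * n)
    popt, _ = curve_fit(ansatz, np.asarray(nq, float),
                        np.asarray(q, float), p0=(0.2, 1.0, 0.05))
    return popt  # (Q_inf, B, C)
```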

Conclusion
In this paper, we proposed a regression algorithm using sparse coding dictionary learning that can be implemented on a quantum annealer, based on the formulation of regression as an inpainting problem. A pre-training technique is introduced to improve the prediction quality. The procedure is described in Sect. Regression model. The regression algorithm was numerically demonstrated on a set of lattice QCD simulation observables and was able to predict the correlation function calculated in the presence of the CPV interactions from those calculated without the CPV interaction. The regression experiment was carried out on the D-Wave 2000Q quantum annealer using the minor embedding technique to obtain fully connected logical qubits. The study was performed for six different values of the number of qubits between 20 and 64, and it showed a systematic decrease of the prediction error as the number of qubits is increased (see Fig. 4). With a larger number of qubits and an elaborately tuned sparsity penalty parameter λ, we expect further improved performance in the future.
Figure 4. Prediction quality Q of the regression algorithm applied to the prediction of the lattice QCD simulation data as a function of N_q (red squares). An exponential ansatz is fitted to the data points for all N_q (blue solid line) and those excluding N_q = 20 (green dashed line).