Generative and discriminative training of Boltzmann machines through quantum annealing

A hybrid quantum-classical method for learning Boltzmann machines (BMs) for generative and discriminative tasks is presented. BMs are undirected graphical models with a network of visible and hidden nodes, where the former serve as reading sites while the latter are used to manipulate the probability of the visible states. In a generative BM, samples of the visible data imitate the probability distribution of a given data set. In contrast, the visible sites of a discriminative BM are treated as input/output (I/O) reading sites, where the conditional probability of the output state is optimized for a given set of input states. The cost function for learning BMs is defined as a weighted sum of the Kullback-Leibler (KL) divergence and the Negative Conditional Log-likelihood (NCLL), adjusted using a hyper-parameter. Here, the KL divergence is the cost for generative learning, and the NCLL is the cost for discriminative learning. A stochastic Newton-Raphson optimization scheme is presented. The gradients and the Hessians are approximated using direct samples of the BM obtained through quantum annealing. Quantum annealers are hardware realizations of the Ising model that operate at low but finite temperatures. This temperature affects the probability distribution of the BM; however, its value is unknown. Previous efforts have focused on estimating this unknown temperature through regression of the theoretical Boltzmann energies of sampled states against the probabilities of states sampled by the actual hardware. These approaches assume that changes in the control parameters do not affect the system temperature; however, this is usually not true. Instead of using the energies, the probability distribution of the samples is employed here to estimate the optimal parameter set, ensuring that the optimal set can be obtained from a single set of samples. The KL divergence and the NCLL are optimized with respect to the system temperature, and the result is used to rescale the control parameter set.
The performance of this approach, as tested against the theoretically expected distributions, shows promising results for Boltzmann training on quantum annealers.


Introduction
A Boltzmann machine (BM) is an energy-based model defined on an undirected graph and is used for unsupervised learning. The graph vertices are segregated into a set of visible and hidden nodes. The probability of each state depends on the total energy of the graph for that state. Moreover, only the state of a visible node is "visible" to the user. Therefore, the marginalized probabilities of these visible states are a non-linear function of the energy parameters and can be used to model complicated distributions. BMs can be trained using either Maximum-Likelihood (ML) learning or Contrastive Divergence (CD) learning. It is well known that ML learning of Markov random fields (MRFs) is a challenging task due to the large state space. Due to this complexity, Markov chain Monte Carlo (MCMC) methods typically take a long time to converge to unbiased estimates. CD learning, on the other hand, provides a computationally inexpensive way of training MRFs; however, it generally produces biased estimates [1].
A subclass of BM called the Restricted Boltzmann Machine (RBM) (see Fig 1(b)) was proposed by Hinton (2002) [2], in which the hidden and visible nodes form a bipartite structure. This structure allows an independent update of the visible states conditioned on knowledge of the hidden states, and vice versa. This property makes training of RBMs very efficient on a classical computer. Boltzmann machines have received much attention as building blocks of multi-layer learning architectures for speech and image recognition [3,4]. The idea is that features from one RBM can serve as input to another RBM. By stacking RBMs in this way, one can construct the architecture of a Deep Boltzmann Machine (see Fig 1(c)). It is known that approximate inference in deep Boltzmann machines can handle uncertainty better and deal with ambiguous data [5].
A comparison between ML- and CD-based training of RBMs is presented in [1]. The authors suggested that an initial CD-based training followed by a final ML-based fine-tuning is the most computationally efficient way of training RBMs with less bias. This bias issue was further studied in [6], where Persistent Contrastive Divergence (PCD) was developed. In this approach, the Markov chain is not reset between parameter updates. This step brings the approximate gradient closer to the exact estimate in the limit of a small learning step. This method shows better performance on testing data than the classical approach; however, it suffers from slow learning rates. A relatively faster approach was provided in [7] with the Fast Persistent Contrastive Divergence (FPCD) method. A tutorial on different training strategies is given in [8]. It is intuitive to see that a general BM has more representative power than the RBM and its derivatives. However, the efficiency of the above-mentioned training methods is not expected to translate to the general case, as the data-dependent expectations are not easy to compute, at least using classical techniques. Quantum annealers (QA) have provided a promising way forward to tackle this problem of expectation estimation [9]. QA are physical devices that operate on quantum mechanical laws and are designed to minimize the Ising model's energy [10]. As they operate at finite temperatures, simulations on QA result in sampling from the Boltzmann distribution of the Ising energies [11].
Researchers have recently employed this property of QA to train BMs with a slightly more complicated structure than RBMs. For instance, [12] trained a Limited Boltzmann Machine (LBM) to recover missing data in images of chemical vapor deposition (CVD) growth for a MoS2 monolayer. An LBM allows sparse connectivity among the hidden units and, due to this complexity, is not easy to handle in a classical setting.
Another direction that researchers have taken is the training of specialized RBMs that can be better represented on the QA architecture, e.g., the Chimera RBM, which emulates the topology of DWave quantum annealers [13]. This allows the model expectations to be estimated in a single sampling step instead of the k-step CD method. Meanwhile, the data-dependent expectations are estimated as in the 1-step CD method, owing to the RBM's favorable topology. The result of this progress can be seen in the outburst of new applications of RBMs in modern machine learning architectures, for instance: sampling the latent space in Quantum Variational Autoencoders (QVAE) [14], the RBM as an associative memory between the generative and discriminative parts of the Associative Adversarial Network model (AAN) [15,16], and a hybrid Autoencoder-RBM approach to learn a reduced-dimension space [17].
In this paper, an ML-based approach is studied for a general BM. As discussed earlier, the topology of a highly connected graph is not conducive to CD-based approaches. The major hurdle of generating independent samples of a BM is circumvented using QA. At present, the two popular QA devices are the "DWave 2000Q" system, with approximately 2000 qubits connected in a Chimera topology, and the "DWave Advantage" system, with approximately 5000 qubits connected in a Pegasus topology. Considering the physical devices' sparsity, the largest complete graph that can be simulated on these systems has size 64 on the 2000Q and 128 on the Advantage system. The past growth in these systems' computational power suggests the prospect of solving large-scale problems in the near future. Taking this prospect for granted, large and arbitrarily connected BMs can benefit from unbiased estimation via QA. The method developed in this work does not use the graph's topology and is numerically sub-optimal in cases where such structures are present (e.g., a bipartite graph). For such cases, readers are encouraged to peruse the literature listed above and the bibliography therein.
This paper aims to demonstrate the use of quantum annealers for discriminative and generative tasks involving Boltzmann machines. Generative tasks involve sampling a state from a probability distribution, while a discriminative BM acts as a classifier for a given dataset. A BM trained for both generative and discriminative purposes can be used to sample a labeled dataset from a probability distribution. For example, [18] developed a generative BM for sampling vertically and horizontally striped images. The theoretical aspects of generative and discriminative training in the context of RBMs can be found in [19]. The second focus of this work is to analyze the effect of the annealing temperature on training. The Boltzmann distribution depends on a sampling temperature, and the sampling temperature in QA is shown to be instance-dependent [13]. An overview of a few temperature-estimation methods for QA is provided in [20]. In the training strategy proposed by [21], the problem of temperature estimation is circumvented by assuming certain restrictions on the model and data distributions. However, their approach is machine-specific, in the sense that knowledge of the annealing temperature is required to use the trained model on a new QA machine. A temperature-estimation strategy similar, in principle, to the one presented in [22] is adopted here. However, an approach that works on the probability distribution of the samples, instead of the energies, is proposed. This ensures that the optimal set can be obtained from a single run. A new technique to search for control parameters with better training cost is also proposed, where the KL divergence and the NCLL are optimized with respect to the inverse system temperature. This method is employed to estimate the behavior of the generative and discriminative costs and to further refine the Ising model parameters.

Notations and mathematical prerequisites
A Boltzmann machine is a probabilistic graphical model defined on a complete graph which is partitioned into "visible" nodes, taking up values observed during training and denoted by the vector v, and "hidden" nodes, whose values must be inferred and are denoted by the vector h. These states collectively define the energy, and consequently the probability, of each state. Next, the definition of a graph is stated to introduce useful terminology and notation.

Graph
A graph, G, is a pair of sets (V, C), where V is the set of vertices, with N_V = |V|, and C is the set of edges/connections, with N_C = |C|. For each element e ∈ C there is a corresponding ordered pair (x, y); x, y ∈ V. This work additionally requires the graph to be finite, i.e., N_V < ∞. Next, the definition of the Ising energy is introduced.

Ising model
The Ising model is a type of discrete pairwise energy model on an undirected simple graph, G(V, C). Each vertex V_i ∈ V is assigned a state s_i ∈ {0, 1} for all i ∈ 1, ..., N_V. This determines the complete state of the graph as an ordered tuple S = (s_1, ..., s_i, ..., s_{N_V}) ∈ {0, 1}^{N_V}. The set of all possible states is referred to as S = {0, 1}^{N_V}, with the total number of states denoted by N_TS = |S| = 2^{N_V}. The Ising energy, E, for a particular state S can be evaluated as:

E(S) = Σ_{i=1}^{N_V} H_i s_i + Σ_{k=(i,j) ∈ C} J_k s_i s_j    (1)

where the first term represents the energy of labeling a vertex with label s_i, and the second term is the energy of labeling two connected vertices as s_i and s_j. The parameters H_i and J_k are referred to as the field strength and the interaction strength, respectively.
The parameter set is represented as a vector, θ = (θ_1, ..., θ_{N_V + N_C})^T. In this work, it is specialized to the following form: θ = (H_1, ..., H_{N_V}, J_1, ..., J_{N_C})^T. This notation allows the energy to be described as a matrix product, E(S|θ) = ε(S)θ, where ε(S) = (s_1, ..., s_{N_V}, ..., s_i s_j, ...) is the row vector of state features corresponding to each parameter. The distribution of equilibrated states can be modeled, at least approximately, as a Boltzmann distribution:

p(S) = e^{−βE(S|θ)} / Z    (2)

Here, Z denotes the partition function and is estimated as Z = Σ_S e^{−βE(S)}.
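As a concrete illustration, the Boltzmann distribution of Eq (2) can be computed exactly for a small graph by enumerating all 2^{N_V} states. The following sketch uses made-up field and interaction strengths on a hypothetical 3-vertex graph:

```python
import itertools
import math

# Hypothetical small Ising model on {0,1} states: H_i are field strengths,
# J holds the interaction strength for each edge (i, j).
H = [0.5, -0.2, 0.1]
J = {(0, 1): -1.0, (0, 2): 0.3, (1, 2): 0.7}

def ising_energy(state):
    """E(S) = sum_i H_i s_i + sum_{(i,j) in C} J_ij s_i s_j."""
    e = sum(H[i] * s for i, s in enumerate(state))
    e += sum(Jij * state[i] * state[j] for (i, j), Jij in J.items())
    return e

def boltzmann_distribution(beta):
    """Exact p(S) = exp(-beta E(S)) / Z by enumerating all 2^N_V states."""
    states = list(itertools.product([0, 1], repeat=len(H)))
    weights = [math.exp(-beta * ising_energy(s)) for s in states]
    Z = sum(weights)
    return {s: w / Z for s, w in zip(states, weights)}

p = boltzmann_distribution(beta=1.0)
assert abs(sum(p.values()) - 1.0) < 1e-12  # probabilities normalize
```

Exact enumeration is of course only feasible for small N_V; the role of the annealer is precisely to sample from this distribution when enumeration is impossible.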

Generative Boltzmann Machines
The key idea behind a Boltzmann machine is the segregation of the vertices into visible and hidden nodes. This allows any state S of the graph to be written as the concatenation S = (v, h), where v denotes the state of the visible nodes and h denotes the state of the hidden nodes. Only the visible states are observed by the user, and their probability can be estimated by marginalizing over all hidden states. Therefore, the probability of a particular visible state, v, is given as

p(v) = Σ_h e^{−βE(v,h)} / Z.    (3)

This marginalization allows the BM to represent complex probability distributions. Consider a data set of N_DS visible states, {v_k}. Each data state occurs with a probability q(v_k) for all k ∈ {1, ..., N_DS}, referred to as the true probability of the distribution. The performance of a BM can be judged by comparing the model distribution, p(v), with the true distribution. This comparison can be carried out using the Kullback-Leibler divergence, D_KL(q||p), defined as

D_KL(q||p) = Σ_k q(v_k) log( q(v_k) / p(v_k) ).

The KL divergence is always non-negative, with D_KL(q||p) = 0 if and only if q = p almost everywhere. For this property, D_KL is chosen as the cost function for training generative BMs.
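The marginalization and the KL cost can be sketched by exact enumeration on a tiny model; the 2-visible/1-hidden layout and all parameter values below are made up for illustration:

```python
import itertools
import math

# Hypothetical BM: nodes 0, 1 visible, node 2 hidden.
H = [0.2, -0.4, 0.6]
J = {(0, 1): -0.5, (0, 2): 0.8, (1, 2): -0.3}

def energy(s):
    return (sum(H[i] * si for i, si in enumerate(s))
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def visible_marginal(beta, n_visible=2):
    """p(v) = sum_h exp(-beta E(v,h)) / Z: marginalize the hidden node."""
    states = list(itertools.product([0, 1], repeat=len(H)))
    Z = sum(math.exp(-beta * energy(s)) for s in states)
    p = {}
    for s in states:
        v = s[:n_visible]
        p[v] = p.get(v, 0.0) + math.exp(-beta * energy(s)) / Z
    return p

def kl_divergence(q, p):
    """D_KL(q||p) = sum_v q(v) log(q(v)/p(v)): the generative training cost."""
    return sum(qv * math.log(qv / p[v]) for v, qv in q.items() if qv > 0)

p = visible_marginal(beta=1.0)
q = {(0, 0): 0.5, (1, 1): 0.5}   # hypothetical target data distribution
cost = kl_divergence(q, p)
assert cost >= 0.0               # KL divergence is non-negative
```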
Remark: In case there is no meaningful notion of a probability distribution over the data states, the true probability distribution can be substituted as q(v_k) = 1/N_DS. In this case, the KL divergence equals, up to an additive constant, the negative log-likelihood of the data set normalized by the cardinality of the data set, N_DS.

Discriminative Boltzmann Machines
It is often desired to generate a labelled data set, which entails assigning a classification to each visible data point. This classifier can be included in our notation by further segregating the visible state into an input-output pair. Consequently, the state of the BM is represented as S = (v_I, v_O, h), where v_I and v_O denote the "input" and "output" visible states. The state v_O is used to encode the classification of the state v_I. Discriminative BMs, also referred to as conditional BMs in the literature, are trained for classification using a labelled data set. The cost function in this case is taken as the Negative Conditional Log-likelihood, N, defined as

N = − Σ_k log p(v_O^k | v_I^k),

where the conditional probability, p(v_O|v_I), is estimated as:

p(v_O|v_I) = Σ_h e^{−βE(v_I, v_O, h)} / Σ_{v_O', h} e^{−βE(v_I, v_O', h)}.
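The conditional probability and the NCLL can be sketched the same way; the node roles and parameters below are hypothetical (node 0 as input, node 1 as output, node 2 hidden):

```python
import itertools
import math

H = [0.2, -0.4, 0.6]
J = {(0, 1): -0.5, (0, 2): 0.8, (1, 2): -0.3}
beta = 1.0

def energy(s):
    return (sum(H[i] * si for i, si in enumerate(s))
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def conditional(v_in, v_out):
    """p(v_O|v_I): restrict the Boltzmann sum to states sharing the input."""
    num = sum(math.exp(-beta * energy((v_in, v_out, h))) for h in (0, 1))
    den = sum(math.exp(-beta * energy((v_in, vo, h)))
              for vo in (0, 1) for h in (0, 1))
    return num / den

def ncll(labelled):
    """Negative conditional log-likelihood over (input, label) pairs."""
    return -sum(math.log(conditional(vi, vo)) for vi, vo in labelled)

data = [(0, 1), (1, 0)]          # hypothetical (input, label) pairs
assert abs(conditional(0, 0) + conditional(0, 1) - 1.0) < 1e-12
```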

Training Method
For a general-purpose training strategy, the cost is set as a weighted average of the KL divergence and the negative conditional log-likelihood,

C = α D_KL + (1 − α) N / N_DS,

where α = 1 signifies a generative BM while α = 0 signifies a discriminative BM. Gradient-based techniques are used to carry out the optimization procedure. The gradient is estimated as:

∇_θ C = −αβ E(∇_θE) + β Σ_k [ α q(v_k) + (1−α)/N_DS ] E(∇_θE | v_k) − β (1−α)/N_DS Σ_k E(∇_θE | v_I^k)    (5)

And the Hessian is estimated as:

∇²_θ C = αβ² Cov(∇_θE) − β² Σ_k [ α q(v_k) + (1−α)/N_DS ] Cov(∇_θE | v_k) + β² (1−α)/N_DS Σ_k Cov(∇_θE | v_I^k)    (6)

The definitions of all statistical quantities are presented in Appendix A, and the derivatives of the cost functions are derived in Appendix B.
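The structure of Eq (5) can be checked on a tiny model where all expectations are computable exactly. The sketch below evaluates the generative (α = 1) part of the gradient as a difference of clamped and free expectations of ∇_θE and verifies it against a finite-difference derivative of the KL cost; the graph and data distribution are hypothetical:

```python
import itertools
import math
import numpy as np

edges = [(0, 1), (0, 2), (1, 2)]
beta, n_vis = 1.0, 2
q = {(0, 0): 0.5, (1, 1): 0.5}                # hypothetical data distribution

def features(s):                               # eps(S) = grad_theta E
    return np.array([*s] + [s[i] * s[j] for i, j in edges], dtype=float)

def model(theta):
    states = list(itertools.product([0, 1], repeat=3))
    w = np.array([math.exp(-beta * features(s) @ theta) for s in states])
    return states, w / w.sum()

def kl(theta):
    states, p = model(theta)
    pv = {}
    for s, ps in zip(states, p):
        pv[s[:n_vis]] = pv.get(s[:n_vis], 0.0) + ps
    return sum(qv * math.log(qv / pv[v]) for v, qv in q.items())

def kl_grad(theta):
    """beta * (sum_k q(v_k) E(grad E | v_k) - E(grad E))."""
    states, p = model(theta)
    free = sum(ps * features(s) for s, ps in zip(states, p))
    clamped = np.zeros_like(theta)
    for v, qv in q.items():
        idx = [k for k, s in enumerate(states) if s[:n_vis] == v]
        pc = p[idx] / p[idx].sum()             # p(h | v)
        clamped += qv * sum(pc[k] * features(states[i])
                            for k, i in enumerate(idx))
    return beta * (clamped - free)

theta = np.array([0.2, -0.3, 0.5, -0.1, 0.4, 0.3])
g = kl_grad(theta)
h = 1e-6
num = (kl(theta + h * np.eye(6)[0]) - kl(theta - h * np.eye(6)[0])) / (2 * h)
assert abs(g[0] - num) < 1e-5                  # matches finite differences
```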

Optimization Scheme
Stochastic gradient and Newton methods have been widely employed in such problems. The comparative performance of many variants of such stochastic methods is studied in [23]. For stochastic optimization methods, it has been shown that both Newton and gradient-based methods have a local linear convergence rate. However, [24] developed a Newton method whose rate is independent of the condition number, in contrast to gradient-based schemes.
These developments motivate the use of the Hessian in the optimization process. Such schemes are very useful in problems concerning sampling from sensitive devices like quantum annealers. Analyzing the different variations of stochastic methods is out of the scope of this work; a mini-batch momentum-based approach is adopted, which can easily be substituted with one of the more sophisticated schemes presented in [23,24]. The following momentum-based update rule is used:

∆θ(t) = ν ∆θ(t−1) − η ( r(t) + λ θ(t) ),    θ(t+1) = θ(t) + ∆θ(t)    (7)

The parameter η defines the learning rate of the method and ν defines a momentum rate. A higher momentum rate means that the current learning direction stays closer to the learning direction of the previous time step. In general, the momentum is kept low at the beginning and slowly increased as the learning progresses. This technique is employed to reduce oscillations in the final stages of training. The parameter λ modifies the cost function to penalize the magnitude of the learned parameters. In this work, this parameter is identically set to 0 in all test cases and is mentioned only for completeness. The variable r denotes the rate of update. In the gradient-based method, it is estimated as r(t) = ∇_θ C(θ(t)). In the Newton method, the rate of update is estimated as r(t) = [∇²_θ C(θ(t))]^{−1} ∇_θ C(θ(t)).

Remark 1: The Hessian matrix estimated from the sampling process is usually rank-deficient, mainly due to under-sampling. Therefore, the inversion of these matrices poses a challenge. In this work, Tikhonov regularization is used, where the singularity of ∇²_θ C(t) is alleviated by adding positive terms to the diagonal, ∇²_θ C(t) → ∇²_θ C(t) + εI, where I is the identity matrix. This regularization has the useful property that as ε grows, the rate of update approaches the (scaled) gradient direction, so the Newton scheme degrades gracefully toward gradient descent.

Remark 2: The above update rule works for unconstrained optimization. A constrained optimization problem could be considered by employing Lagrange multipliers. In this study, the constraints are much simpler: |H_i| ≤ H_max and |J_k| ≤ J_max. These constraints represent the practical range of parameters for quantum annealers. These bounds are implemented by using the following scaling parameter:

δ = max( max_i |H_i| / H_max, max_k |J_k| / J_max )

In any optimization step, if δ > 1, then the parameters are scaled as θ(t) → θ(t)/δ and the corresponding ∆θ(t) is updated accordingly. The optimization procedure is presented in Algorithm 1. The procedure uses a subroutine, EstimateDerivatives (Algorithm 2), to estimate the gradients and the Hessian. This subroutine is discussed in the next section. For gradient-based approaches, the estimation of the Hessian can be avoided in Algorithm 2 by ignoring all covariance terms; the Hessian in this case is set to the identity matrix.
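One optimization step, combining the momentum rule, the Tikhonov-regularized Newton rate, and the δ-scaling of Remark 2, can be sketched as follows. The bounds H_max = 2 and J_max = 1 and all sampled quantities are stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def newton_rate(grad, hess, eps=1e-3):
    """r = (hess + eps I)^-1 grad; eps (Tikhonov regularization) removes
    the rank deficiency of the sampled Hessian."""
    return np.linalg.solve(hess + eps * np.eye(len(grad)), grad)

def update(theta, delta_prev, grad, hess, eta=0.1, nu=0.7, lam=0.0,
           h_max=2.0, j_max=1.0, n_fields=3):
    """One momentum step; the whole parameter set is rescaled if it
    leaves the box |H_i| <= h_max, |J_k| <= j_max."""
    r = newton_rate(grad, hess)
    delta = nu * delta_prev - eta * (r + lam * theta)
    theta = theta + delta
    scale = max(np.max(np.abs(theta[:n_fields])) / h_max,
                np.max(np.abs(theta[n_fields:])) / j_max)
    if scale > 1.0:                     # delta-scaling back into bounds
        theta, delta = theta / scale, delta / scale
    return theta, delta

theta = rng.normal(size=6)              # 3 fields + 3 couplers (hypothetical)
delta = np.zeros(6)
grad = rng.normal(size=6)               # placeholder sampled gradient
hess = np.eye(6)                        # placeholder sampled Hessian
theta, delta = update(theta, delta, grad, hess)
assert np.max(np.abs(theta[:3])) <= 2.0 and np.max(np.abs(theta[3:])) <= 1.0
```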

Numerical Estimation of Gradient and Hessian
QA can be treated as a black box that samples states for a given set of parameters. The DWave quantum annealer used in this work operates on {+1, −1} states. The energy parameters for the {+1, −1} states and the {0, 1} states can be transformed into one another using the relation presented in Appendix C2. The user can specify the number of samples; the probability of a particular sampled state is then estimated as the ratio of the number of occurrences of that state to the total number of samples. Note that the gradients and the Hessian are described in terms of the statistics of the energy gradient ∇_θE. The first terms in the gradient and the Hessian require the estimation of E(∇_θE) and Cov(∇_θE).
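The {+1, −1} ↔ {0, 1} transformation can be verified numerically: substituting s = 2q − 1 into the spin-basis energy yields binary-basis parameters H_i = 2h_i − 2 Σ_{edges ∋ i} J and J' = 4J, plus a constant offset. The sketch below (a stand-in for the Appendix C relation, on a made-up graph) checks the correspondence by enumeration:

```python
import itertools

# Spin-basis parameters (hypothetical 3-node graph).
h = {0: 0.5, 1: -0.3, 2: 0.2}
J = {(0, 1): -1.0, (1, 2): 0.4}

# Binary-basis parameters derived from s = 2q - 1.
H = {i: 2 * h[i] - 2 * sum(Jij for e, Jij in J.items() if i in e) for i in h}
Jb = {e: 4 * Jij for e, Jij in J.items()}
offset = -sum(h.values()) + sum(J.values())   # constant energy shift

def e_spin(s):
    return (sum(h[i] * s[i] for i in h)
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

def e_bin(qs):
    return (sum(H[i] * qs[i] for i in H)
            + sum(Jij * qs[i] * qs[j] for (i, j), Jij in Jb.items()))

# Both parameterizations give identical energies up to the constant offset.
for qs in itertools.product([0, 1], repeat=3):
    s = tuple(2 * qi - 1 for qi in qs)
    assert abs(e_bin(qs) + offset - e_spin(s)) < 1e-12
```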
In the notation described in Section 2.2, given a sample state S, the energy gradient is estimated as ∇_θE(S) = ε(S)^T; that is, ∂E/∂H_i = s_i and ∂E/∂J_k = s_i s_j for the edge k = (i, j). The latter terms in Eq (5) and Eq (6) are more complicated to compute, as they require a summation over each visible data state.
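Estimating E(∇_θE) and Cov(∇_θE) from annealer output then reduces to the empirical mean and covariance of the feature vectors ε(S) over the samples. A sketch with mock samples standing in for the annealer:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2)]        # hypothetical complete 3-vertex graph

def energy_gradient(state):
    """grad_theta E = eps(S): s_i for each field, s_i s_j for each coupler."""
    s = np.asarray(state, dtype=float)
    return np.concatenate([s, [s[i] * s[j] for i, j in edges]])

def sample_statistics(samples):
    """Empirical E(grad E) and Cov(grad E) over annealer samples."""
    G = np.array([energy_gradient(s) for s in samples])   # N_S x (N_V + N_C)
    return G.mean(axis=0), np.cov(G, rowvar=False, bias=True)

samples = [(1, 0, 1), (1, 1, 0), (0, 0, 1), (1, 0, 1)]    # mock annealer output
mean, cov = sample_statistics(samples)
assert mean.shape == (6,) and cov.shape == (6, 6)
```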
Two possible strategies can be employed in this case. In the first approach, all sampled states are grouped according to their visible/input states. The conditional probabilities are then estimated by treating each data state's respective group as the sample set. Since the QA samples are independent, this estimation is unbiased. In theory, the accuracy of these conditional probabilities increases with sample size. However, in practice, the number of samples is finite and should be kept to the minimum possible value to reduce computational cost. This is a critical drawback of this approach: not every data state need appear in the samples, and in such cases the KL divergence cannot be estimated.
The other approach is to run independent simulations for each data state on the subgraph of hidden nodes, G_H, and the subgraph of hidden and output nodes, G_HO. The energy parameters of these subgraphs depend on the visible states (for G_H) and the input states (for G_HO). The field terms are augmented to include the energy of fixing the states of the removed nodes. An illustration of this procedure is presented in Fig 2. One can observe that this process leads to a shift in the energy states. For instance, the same states in Fig 2(a)&(b) differ in energy by a constant offset equal to the energy contribution of the clamped visible nodes. However, the Boltzmann distribution remains unchanged by a uniform shift in energies. The drawback of this method is that its computational cost grows with the training data size. This growth by itself is not a problem; however, the sampling step is usually the most time-consuming. In general, CD-1 steps are used to determine this term quickly in RBMs. However, this method is not extendable to general BMs, and the authors are currently unaware of any scheme that circumvents this computation. This second approach, of running independent simulations for each data state, is adopted from here onward. The procedure for estimating the gradient and the Hessian from the sampled data is presented in Algorithm 2. It should be noted that the estimation of the RHS of Eq (5) and Eq (6) only yields the direction of the gradient and the Hessian, respectively. The size of the update can be subsumed in the learning rates. However, the value of β influences the probability distribution and, consequently, the values of the trained parameters. This issue is discussed in the next section.
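The augmented-field construction for the subgraphs can be sketched as follows: fixing a set of nodes folds each coupling to a free node into the field of that free node, while couplings among fixed nodes become a constant energy offset (which leaves the Boltzmann distribution unchanged). The graph layout is hypothetical, with nodes 0–1 visible and node 2 hidden:

```python
H = {0: 0.5, 1: -0.2, 2: 0.1}
J = {(0, 1): -1.0, (0, 2): 0.3, (1, 2): 0.7}

def clamp(H, J, clamped):
    """Return fields/couplers of the subgraph on the free nodes, with
    fields augmented by the clamped-node couplings."""
    H_sub = {i: h for i, h in H.items() if i not in clamped}
    J_sub = {}
    for (i, j), Jij in J.items():
        if i in clamped and j in clamped:
            pass                              # absorbed into the constant offset
        elif i in clamped:
            H_sub[j] += Jij * clamped[i]      # augmented field on free node j
        elif j in clamped:
            H_sub[i] += Jij * clamped[j]
        else:
            J_sub[(i, j)] = Jij
    return H_sub, J_sub

# Clamping v = [1, 0] leaves only hidden node 2, with field 0.1 + 0.3*1 + 0.7*0.
H_sub, J_sub = clamp(H, J, clamped={0: 1, 1: 0})
assert abs(H_sub[2] - 0.4) < 1e-12 and J_sub == {}
```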
It is also worth noting that the exact scaling of the computational complexity of these problems is very hard to ascertain, and most likely not well-defined, because the computational cost of estimating the gradient and the Hessian scales with the sample size, while the optimal sample size is problem-dependent [25]. However, for any graph with N_V vertices and N_S samples, the estimation of the gradient requires O(N_S N_V) computations while the estimation of the Hessian requires at most O(N_S N_V²) computations. Due to the limited number of vertices in quantum annealers (∼10³), this quadratic cost is not expected to be a bottleneck in the near term.

Effect of Annealing Temperature
Experimental evidence has shown that the apparent annealing temperature, i.e., the temperature corresponding to the Boltzmann distribution of the samples, is instance-dependent [13]. The corresponding inverse temperature is referred to as β* in this section. The consequence of this instance-dependence is that quantum annealing systems cannot be rated for specific temperatures, and β* has to be estimated from the samples for each instance of the graph. The knowledge of β* is crucial in developing models capable of being implemented on different computational devices. Even on the same machine, two different embeddings of the same logical graph may lead to different annealing temperatures and consequently show disparities in performance.

[Algorithm 2, EstimateDerivative: for each data state, sample the subgraphs G_H and G_HO, estimate E(∇_θE|v), Cov(∇_θE|v), E(∇_θE|v_I), Cov(∇_θE|v_I), and evaluate Eq (5) and Eq (6).]

The key idea behind the estimation of β* is that the Boltzmann distribution of a state can be equivalently written, by taking the log on both sides, as log p(S) = −βE(S) − log Z. β* is then estimated as the slope of this line. The exact form is as follows, where E(.) denotes the expectation of the variable over all sampled states:

β* = − [ E( E(S) log p(S) ) − E( E(S) ) E( log p(S) ) ] / [ E( E(S)² ) − E( E(S) )² ]    (8)

A similar approach was developed by [13] using binning of energies. They estimated the value of β through regression on the ratio of the probabilities of two bins of different energies, sampled at two different control parameter sets. The statistics of the different energy levels can be succinctly written as p(E; β) = D_g(E) e^{−βE} / Z, where D_g(E) is the degeneracy of states with energy E.
The authors used the log of the ratio of probabilities, l(β) = log[ p(E_1; β) / p(E_2; β) ], for some fixed energy levels. They manipulated the value of β by rescaling the parameters and were able to estimate β from the slope of l(β) and the scaling factor. Readers should notice that although these two approaches are based on a similar argument, they differ greatly in their application. In the former approach, the intuition is that β represents the slope of the sampled data (E(S), log p(S)). Meanwhile, the latter approach uses the probability distribution of the energy levels. Both methods have their pros and cons. The second method requires sampling at rescaled parameters, assuming that the effective annealing temperature remains invariant under rescaling. This is not usually the case, as shown by [20], where a nonlinear relation was estimated between the rescaling of the energy and the effective β, attributed to the 'non-Boltzmann' nature of the probability distribution.
Furthermore, the variation in the distribution due to small parameter changes is overshadowed by the noise in the Boltzmann machine, while large parameter changes may lead to empty bins, making the probability ratios impossible to compute. The first approach, which is proposed here, is equivalent to creating a bin for each unique sampled state; this step is computationally more expensive than binning energy levels, especially in the limit of large graphs.
On the favorable side, it requires only one set of samples, as a rescaling of the parameters is not needed. Note that the probability of the samples may be obtained as a direct output from current quantum annealing hardware, as is the case for the D-Wave annealer used here, making this approach fairly easy to implement compared to the energy-binning method.
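The proposed estimator is an ordinary least-squares slope of log p(S) against −E(S). The sketch below generates probabilities from an exact Boltzmann distribution at a known β and recovers it; with hardware samples, empirical state probabilities would replace the exact ones:

```python
import numpy as np

rng = np.random.default_rng(1)
energies = rng.normal(size=8)            # hypothetical state energies
beta_true = 1.7
w = np.exp(-beta_true * energies)
p = w / w.sum()                          # exact Boltzmann probabilities

def estimate_beta(energies, probs):
    """Least-squares slope of log p = -beta E - log Z, i.e.
    beta* = -Cov(E, log p) / Var(E)."""
    E, logp = np.asarray(energies), np.log(probs)
    return -np.cov(E, logp, bias=True)[0, 1] / np.var(E)

beta_star = estimate_beta(energies, p)
assert abs(beta_star - beta_true) < 1e-9   # exact recovery for exact probs
```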

Examples
The gradients of the cost used in the training algorithm remain invariant to the system temperature up to a scaling, as seen in Eq (5). Assuming that the distribution of states at the optimal solution can be effectively modelled as a Boltzmann distribution for some β, one can extract useful statistics that help further refine the model parameters for better performance (in terms of training cost).
The key idea is that the variation of the cost components (D_KL and N) with respect to β (close to β*) can be approximated using the statistics of samples obtained at the optimal parameter set. This information indicates, in an approximate sense, the β at which the BM's performance is optimal, say β_o, for a given choice of parameters (θ*). The user cannot control the annealing temperature, but the parameters can be rescaled to achieve the same overall effect: it can be seen via Eq (2) that the scalings β → cβ* and θ → cθ* have the same effect on the probability distribution. Since the hardware temperature cannot be modified, β_o can then be used to rescale the parameters as θ → θ β_o/β* to further reduce the cost.

Example: Temperature based scaling of training parameters
This effect is experimentally evaluated in Fig 3, where the costs are computed for a fully connected graph with 10 vertices (7 visible and 3 hidden), trained with data representing the truth table of a 2-bit adder circuit. The net cost of optimization is estimated for α = 0.5. Here, a scaling factor of 1 represents the initial parameters θ*. The individual cost components simultaneously decrease up to a scaling factor of 3. These rescaled optimal parameters may not lie within the hardware-specified bounds; this may, in fact, be the reason why these parameters are not found by the training procedure in the first place. Next, we demonstrate a method that evaluates this optimal scaling with only the statistics of samples at θ*. Unlike previous studies, this method assumes Boltzmann behavior only in the vicinity of β*, while circumventing the problem of multiple evaluations of samples at different but close parameters.
Here, Taylor expansions of the KL divergence and the negative conditional log-likelihood as functions of β are employed to compute the optimal temperature. The advantage is that all coefficients in the following expressions can be estimated from the states sampled at β*. The Taylor expansion up to the second-order term is as follows, with all derivatives evaluated at β*:

D_KL^app(β) = D_KL(β*) + (∂D_KL/∂β)(β − β*) + ½ (∂²D_KL/∂β²)(β − β*)²
N^app(β) = N(β*) + (∂N/∂β)(β − β*) + ½ (∂²N/∂β²)(β − β*)²    (9)

The exact behavior is estimated by evaluating the Boltzmann distribution, Eq (2). The value of β_o is estimated as the minimizer of the approximated quadratic cost function, C^app = α D_KL^app + (1 − α) N^app / N_DS. Therefore, when the approximated cost is convex, β_o is estimated as:

β_o = β* − (∂C/∂β) / (∂²C/∂β²)

This procedure requires only one set of samples, in contrast to sampling at all possible β.
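The minimizer of the quadratic model and the subsequent rescaling can be sketched in a few lines; the coefficient values below are made up for illustration:

```python
def beta_optimal(beta_star, dC, d2C):
    """Minimizer of the quadratic model
    C(beta) ~ C(beta*) + dC (beta - beta*) + 0.5 d2C (beta - beta*)^2,
    valid when the model is convex (d2C > 0)."""
    assert d2C > 0, "quadratic model must be convex"
    return beta_star - dC / d2C

def rescale(theta, beta_star, beta_o):
    """theta -> theta * beta_o / beta* emulates annealing at beta_o."""
    return [t * beta_o / beta_star for t in theta]

beta_o = beta_optimal(beta_star=2.5, dC=-1.0, d2C=0.5)   # hypothetical values
assert beta_o == 4.5
theta_rescaled = rescale([0.4, -0.2], beta_star=2.5, beta_o=beta_o)
```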

Example: Generative and Discriminative training example
As a toy problem, the data illustrated in Fig 5 is used to train the BM models. The first data set (Fig 5(a)) is unlabelled and has 11 data points. The second data set (Fig 5(b)) has 40 data points, with 11 labelled as '0' and 29 labelled as '1'.
In all cases, the training parameters of Eq (7) have the constant values η = 0.1, ν = 0.7, and λ = 0. The Hessian is inverted using Tikhonov regularization with ε = 10⁻³. The run-time data (see Fig 6) shows that the Newton approach performs better than the gradient-based approach. The fluctuations in the deterministic training case (NumBatch = 1) are due to the stochastic nature of the DWave sampling step. It should also be remarked that the parameters were not optimized for the individual cases and were picked arbitrarily from a suitable range.
The stochastic training method with 2 batches was employed for the first data set (generative learning). Two BMs with 3 hidden nodes were considered: the first with complete connectivity and the second with a Restricted BM architecture. The variation of the KL divergence (training cost) with the annealing temperature is shown in Fig 7. It is observed that the trained BM of the general type performs better than the restricted type. This is an intuitive result, as the RBM is a specialized case of the general BM and has less representational capability in comparison.

Conclusion
Quantum annealing has the potential to improve the training of General Boltzmann machines significantly.The stochastic Newton and gradient-based training methods can be employed using direct sampling from quantum states.
In the proposed approach, the inclusion of the Hessian in training increases the computational cost from O(N_V) to O(N_V²). For present quantum annealers, this quadratic scaling does not pose any practical limitation due to the limited number of qubits. This procedure can accelerate the training of a general Boltzmann machine, which has higher representational capability. The use of quantum annealers is promising for quantum/classical training since many qubits are available, and the training takes advantage of measurements on the physical realization of the Boltzmann machine [14,26]. Unlike other popular methods such as Contrastive Divergence, this method does not utilize any special topology of the BM. However, in practice, having sparse connections is desirable, as that allows embedding larger graphs in the DWave architecture. These methods were employed to carry out generative and discriminative training on toy problems.
The numerical results suggested that stochastic Newton optimization performs better than gradient-based optimization.
It was also observed that adding a small weight to the KL divergence in the discriminative cost greatly improves the BM's performance. A major contribution of this paper is a procedure to approximate the behavior of a BM at slightly perturbed temperatures. Due to the instance-dependence of the QA sampler, even the same Ising problem with different hardware embeddings may behave as two different quantum annealing systems. Our procedure is useful in approximating a refined set of rescaled parameters for a BM for a given embedding, using the statistics of a single sample set of annealed states. In addition, once the hardware parameters are optimized for a given generative or discriminative task using a gradient algorithm, the final parameter set can be further improved using the rescaling strategy, in which the cost is additionally optimized with respect to the system temperature. It is shown, by comparison against exact analytical distributions, that the approach improves Boltzmann machine training. In the future, this work will be tested on other practically relevant benchmark problems, along with a rigorous analysis of the training parameters as a function of the hardware embedding.

A Definition of statistical quantities
For completeness of notation, the definition of each statistical quantity is provided in the context of the random variables used in this paper. Conditional quantities require the following conditional probabilities: p(h|v) = e^{−βE(v,h)} / Σ_{h'} e^{−βE(v,h')} and p(v_O, h | v_I) = e^{−βE(v_I,v_O,h)} / Σ_{v_O', h'} e^{−βE(v_I,v_O',h')}.

Covariances
The dependence on θ and β is dropped for notational convenience.
The gradient of the log-likelihood for a single data point is estimated as:

$$\frac{\partial \ln p(v)}{\partial \theta} = -\beta\, \mathrm{E}\!\left(\frac{\partial E}{\partial \theta}\,\Big|\, v\right) + \beta\, \mathrm{E}\!\left(\frac{\partial E}{\partial \theta}\right)$$

The gradient of the KL divergence is estimated as:

$$\frac{\partial D_{KL}}{\partial \theta} = -\sum_{v\in\{v_1,\dots,v_D\}} q(v)\, \frac{\partial \ln p(v)}{\partial \theta} = \beta \sum_{v\in\{v_1,\dots,v_D\}} q(v)\left[\mathrm{E}\!\left(\frac{\partial E}{\partial \theta}\,\Big|\, v\right) - \mathrm{E}\!\left(\frac{\partial E}{\partial \theta}\right)\right]$$

B.2 Gradient of Negative Conditional Log-likelihood

The gradient of the negative conditional log-likelihood for a single data point, v ≡ [v_I, v_O], follows analogously, with the free expectation replaced by the input-clamped expectation:

$$-\frac{\partial \ln p(v_O|v_I)}{\partial \theta} = \beta\left[\mathrm{E}\!\left(\frac{\partial E}{\partial \theta}\,\Big|\, v\right) - \mathrm{E}\!\left(\frac{\partial E}{\partial \theta}\,\Big|\, v_I\right)\right]$$
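A sample-based estimator of such gradients might look as follows. This is our sketch, not the paper's code: `clamped_samples` stands for annealer reads with the visible spins fixed to data (positive phase), `free_samples` for unconstrained reads (negative phase), and the Ising convention E = -Σ h_i s_i - Σ J_ij s_i s_j is assumed, so ∂E/∂h_i = -s_i and ∂E/∂J_ij = -s_i s_j.

```python
# Sketch (our illustration): KL-divergence gradient w.r.t. Ising fields and
# couplings from two sample sets of +/-1 spins (rows = samples).
import numpy as np

def kl_gradient(clamped_samples, free_samples, beta=1.0):
    """grad = beta * (model expectation - data expectation) of dE/dtheta.
    With dE/dh_i = -s_i, this reduces to spin/spin-pair moment differences."""
    s_data = clamped_samples.mean(axis=0)
    s_model = free_samples.mean(axis=0)
    ss_data = clamped_samples.T @ clamped_samples / len(clamped_samples)
    ss_model = free_samples.T @ free_samples / len(free_samples)
    grad_h = beta * (s_model - s_data)
    grad_J = beta * (ss_model - ss_data)
    return grad_h, grad_J
```

When the two sample sets come from the same distribution, both gradients vanish in expectation, which is the fixed point of training.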

B.3 Hessian of KL Divergence
The Hessian of the log-likelihood for a single data point is estimated first. It uses the fact that, for an Ising-type energy, $\frac{\partial^2 E}{\partial\theta_i\,\partial\theta_j} = 0$ for all i and j.
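Because the second derivatives of the energy vanish, only covariance terms survive in the Hessian. The sketch below (ours; function name and sampling setup are assumptions) estimates the single-data negative log-likelihood Hessian with respect to the field parameters from two sample sets.

```python
# Sketch (our illustration): sample-based Hessian of the single-data negative
# log-likelihood w.r.t. field parameters h_i. For Ising energies,
# d2E/dtheta_i dtheta_j = 0, so
#   -d2 ln p(v)/dh_i dh_j = beta^2 [Cov(s_i, s_j) - Cov(s_i, s_j | v)],
# using dE/dh_i = -s_i (the sign cancels inside the covariance).
import numpy as np

def nll_hessian_fields(clamped_samples, free_samples, beta=1.0):
    """clamped_samples: +/-1 reads with visibles fixed to the data point v;
    free_samples: unconstrained reads. Rows are samples, columns are spins."""
    cov_clamped = np.cov(clamped_samples, rowvar=False, bias=True)
    cov_free = np.cov(free_samples, rowvar=False, bias=True)
    return beta ** 2 * (cov_free - cov_clamped)
```

The coupling-parameter block is analogous, with spin products s_i s_j in place of single spins.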
The Hessian of the KL divergence is estimated as:

$$\frac{\partial^2 D_{KL}}{\partial\theta_i\,\partial\theta_j} = \beta^2 \sum_{v\in\{v_1,\dots,v_D\}} q(v)\left[\mathrm{Cov}\!\left(\frac{\partial E}{\partial\theta_i},\frac{\partial E}{\partial\theta_j}\right) - \mathrm{Cov}\!\left(\frac{\partial E}{\partial\theta_i},\frac{\partial E}{\partial\theta_j}\,\Big|\,v\right)\right]$$

B.4 Hessian of Negative Conditional Log-likelihood

The Hessian for a single data point, v ≡ [v_I, v_O], is estimated by evaluating the clamped and input-clamped terms separately and combining them:

$$-\frac{\partial^2 \ln p(v_O|v_I)}{\partial\theta_i\,\partial\theta_j} = \beta^2\left[\mathrm{Cov}\!\left(\frac{\partial E}{\partial\theta_i},\frac{\partial E}{\partial\theta_j}\,\Big|\,v_I\right) - \mathrm{Cov}\!\left(\frac{\partial E}{\partial\theta_i},\frac{\partial E}{\partial\theta_j}\,\Big|\,v\right)\right]$$

Figure 2: Illustration for estimating parameters of the subgraph. The input, output, and hidden nodes are represented with red, blue, and grey colors, respectively. The field parameters and interaction parameters are written in blue and red fonts, respectively. The subgraphs are enclosed in a yellow box. (a) Cyclic graph G with 2 input nodes, 1 output node, and 2 hidden nodes. (b) Subgraph of output and hidden nodes, G_HO: fixing the visible input v_I = [v_1, v_2] results in an augmented field term on the output and hidden nodes. (c) Subgraph of hidden nodes, G_H: fixing the visible data v = [v_1, v_2, v_3] results in an augmented field term on the hidden nodes.
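The augmented-field construction in Figure 2 can be made concrete with a small sketch (our own; it assumes an upper-triangular coupling matrix J and ±1 spins): clamping a set of spins shifts the fields of the remaining nodes by the couplings into the clamped values, plus a constant energy offset.

```python
# Sketch (our illustration): build the clamped subgraph of an Ising model.
import numpy as np

def ising_energy(h, J, s):
    """E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j, with J upper-triangular."""
    s = np.asarray(s, dtype=float)
    return float(-(s @ h) - s @ J @ s)

def clamp_subgraph(h, J, clamped_idx, clamped_vals):
    """Clamp spins at clamped_idx to clamped_vals. Each free node j then sees
    the augmented field h_j + sum_i J_ij v_i, and clamped-clamped terms
    become a constant offset, so E_full = E_sub + offset."""
    n = len(h)
    free = [i for i in range(n) if i not in clamped_idx]
    v = dict(zip(clamped_idx, clamped_vals))
    Jsym = J + J.T  # pick up couplings regardless of index order
    h_aug = np.array([h[j] + sum(Jsym[i, j] * v[i] for i in clamped_idx)
                      for j in free])
    J_sub = J[np.ix_(free, free)]
    offset = -sum(h[i] * v[i] for i in clamped_idx) \
             - sum(J[i, k] * v[i] * v[k]
                   for i in clamped_idx for k in clamped_idx if i < k)
    return h_aug, J_sub, offset
```

Applying this once with the inputs clamped yields G_HO of panel (b); applying it with all visibles clamped yields G_H of panel (c).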

Figure 3: Cost components for different scaling parameters. The initial parameter set (θ*) is for a BM with 10 vertices, trained for a 2-bit adder circuit. The cost corresponds to the mean cost over 20 sample runs. The error bars represent the minimum and maximum values in the sample set for each scaling parameter.

Figure 4: Comparison of approximate and exact values of the training cost of a BM with 10 vertices, trained for a 2-bit adder circuit.

Figure 4 shows the individual cost components for the 2-bit adder example as a function of β. The analytical model (dashed line) is estimated by enumerating all possible states of the Ising model and evaluating the Boltzmann distribution. The approximated model is estimated using Eq. (9). The estimated coefficients are: β* = 2.5251, β_o = 5.2321, D_KL = 1.4182, N = 21.2122, ∂D_KL/∂β = −0.1245, ∂²D_KL/∂β² = 0.0559, ∂N/∂β = −1.9677, ∂²N/∂β² = 0.7170. The figure verifies the expected behaviour that both cost components are simultaneously lowered at β = β_o. The cost components for the rescaled parameters are D_KL = 1.2193 and N = 16.5163. The scaling factor for the optimal parameters is approximated as β_o/β* ≈ 2.1, which is reasonably close to the experimentally evaluated value of ∼3.
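The quadratic approximation behind these coefficients can be sketched as follows. This is our illustration: the function names are assumptions, and the coefficients in the usage note are the D_KL values quoted above.

```python
# Sketch (our illustration): second-order Taylor model of a cost component
# around the hardware inverse temperature beta_star, and the stationary
# point of that model.
def taylor_cost(beta, beta_star, c0, dc, d2c):
    """c0 + dc*(beta - beta_star) + 0.5*d2c*(beta - beta_star)**2."""
    db = beta - beta_star
    return c0 + dc * db + 0.5 * d2c * db ** 2

def quadratic_minimizer(beta_star, dc, d2c):
    """Stationary point of the Taylor model (a minimum when d2c > 0)."""
    return beta_star - dc / d2c
```

For instance, `quadratic_minimizer(2.5251, -0.1245, 0.0559)` gives the β minimizing the Taylor model of D_KL alone; the paper's β_o additionally balances the NCLL component.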

Figure 5: Visible data states, D, for training Boltzmann machines. Each row represents a single data point. (a) Each data point represents the phase of a state at 10 spatial points. The '0' phase is accumulated on the left and the '1' phase on the right, with at most one boundary. (b) Labeled data set: data points as described in part (a) are labeled '0', and data points with a random spatial distribution are labeled '1'. The label is appended at the end of each data point in a black box.

Figure 6: Training of a General BM with 3 hidden nodes. The cost of training is defined by Eq. (4) with α = 0.5.

Figure 8: Performance of the trained BM with 4 hidden nodes with respect to the cost parameter α.

$$\frac{d^2 D_{KL}}{d\beta^2} = \sum_{v\in\{v_1,\dots,v_D\}} q(v)\,\big(-\mathrm{Var}(E|v) + \mathrm{Var}(E)\big)$$

B.6 Derivative of Negative Conditional Log-likelihood w.r.t. Inverse Temperature

The derivative of the log-likelihood of the conditional probability for a single data point, v ≡ [v_I, v_O], is calculated first:

$$\frac{d \ln p(v_O|v_I)}{d\beta} = \frac{d}{d\beta}\left[\ln \sum_h e^{-\beta E(v,h)} - \ln \sum_{v_O',h'} e^{-\beta E(v_I,v_O',h')}\right]$$
$$= -\sum_h E(v,h)\,\frac{e^{-\beta E(v,h)}}{\sum_{h'} e^{-\beta E(v,h')}} + \sum_{v_O',h'} E(v_I,v_O',h')\,\frac{e^{-\beta E(v_I,v_O',h')}}{\sum_{v_O'',h''} e^{-\beta E(v_I,v_O'',h'')}}$$
$$= -\sum_h E(v,h)\,p(h|v) + \sum_{v_O',h'} E(v_I,v_O',h')\,p(v_O',h'|v_I) = -\mathrm{E}(E|v) + \mathrm{E}(E|v_I)$$

The second derivative is estimated as:

$$\frac{d^2 \ln p(v_O|v_I)}{d\beta^2} = -\underbrace{\sum_h E(v,h)\,\frac{d}{d\beta}\!\left[\frac{e^{-\beta E(v,h)}}{\sum_{h'} e^{-\beta E(v,h')}}\right]}_{\text{Term I}} + \underbrace{\sum_{v_O',h'} E(v_I,v_O',h')\,\frac{d}{d\beta}\!\left[\frac{e^{-\beta E(v_I,v_O',h')}}{\sum_{v_O'',h''} e^{-\beta E(v_I,v_O'',h'')}}\right]}_{\text{Term II}}$$

The derivative inside Term I is evaluated as:

$$\frac{d}{d\beta}\!\left[\frac{e^{-\beta E(v,h)}}{\sum_{h'} e^{-\beta E(v,h')}}\right] = -E(v,h)\,\frac{e^{-\beta E(v,h)}}{\sum_{h'} e^{-\beta E(v,h')}} + \frac{e^{-\beta E(v,h)} \sum_{h'} E(v,h')\,e^{-\beta E(v,h')}}{\left(\sum_{h'} e^{-\beta E(v,h')}\right)^2}$$
$$= -E(v,h)\,p(h|v) + p(h|v)\sum_{h'} E(v,h')\,p(h'|v) = -E(v,h)\,p(h|v) + p(h|v)\,\mathrm{E}(E|v)$$

Term II is evaluated analogously with the input-clamped distribution p(v_O', h'|v_I). Combining the two terms:

$$\frac{d^2 \ln p(v_O|v_I)}{d\beta^2} = \mathrm{Var}(E|v) - \mathrm{Var}(E|v_I)$$

A graph G = (V, C) is undirected if an edge does not have any directionality, i.e., (x, y) ≡ (y, x). A graph is simple if (x, x) ∉ C for all x ∈ V. The number of vertices is denoted by N_V = |V| and the number of edges by N_C = |C|. The indices of connections and vertices are related through the maps π_1 and π_2 such that, for a connection with index k ∈ {1, ..., N_C}, the indices of the corresponding vertices are π_1(k) and π_2(k).
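The identity d ln p(v_O|v_I)/dβ = −E(E|v) + E(E|v_I) can be checked by exact enumeration on a small model. The sketch below is ours, with assumed helper names; spins are ±1, the hidden spins occupy the last positions, and J is upper-triangular.

```python
# Sketch (our illustration): verify the beta-derivative of the conditional
# log-likelihood by exact enumeration over hidden (and output) spins.
import itertools
import numpy as np

def energy(h_f, J, s):
    """E(s) = -sum_i h_f_i s_i - sum_{i<j} J_ij s_i s_j, J upper-triangular."""
    s = np.asarray(s, dtype=float)
    return float(-(s @ h_f) - s @ J @ s)

def clamped_stats(h_f, J, prefix, n_free, beta):
    """Partition function Z and energy-weighted sum over states whose first
    len(prefix) spins are clamped to prefix; the last n_free spins vary."""
    Z, EZ = 0.0, 0.0
    for tail in itertools.product([-1, 1], repeat=n_free):
        s = np.concatenate([prefix, tail])
        E = energy(h_f, J, s)
        w = np.exp(-beta * E)
        Z += w
        EZ += E * w
    return Z, EZ

def dlnp_dbeta(h_f, J, v, n_input, beta):
    """d ln p(v_O|v_I)/dbeta = -E(E|v) + E(E|v_I), by exact enumeration.
    v is the full visible assignment; its first n_input entries are inputs."""
    n, n_vis = len(h_f), len(v)
    Z_v, EZ_v = clamped_stats(h_f, J, v, n - n_vis, beta)          # clamp all visibles
    Z_vI, EZ_vI = clamped_stats(h_f, J, v[:n_input], n - n_input, beta)  # clamp inputs only
    return -EZ_v / Z_v + EZ_vI / Z_vI
```

A finite-difference check of ln p(v_O|v_I) = ln Z_v − ln Z_{v_I} against this formula confirms the sign convention of the derivation above.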