Potts model solver based on hybrid physical and digital architecture

The Potts model describes Ising-model-like interacting spin systems with multivalued spin components, and ground-state search problems of the Potts model can be efficiently mapped onto various integer optimization problems thanks to the rich expression of the multivalued spins. Here, we demonstrate a solver of this model based on hybrid computation using physical and digital architectures, wherein a digital computer updates the interaction matrices in the iterative calculations of the physical Ising-model solvers. This update of interactions corresponds to learning from the Ising solutions, which allows us to save resources when embedding a problem in a physical system. We experimentally solved integer optimization problems (graph coloring and graph clustering) with this hybrid architecture in which the physical solver consisted of coupled degenerate optical parametric oscillators. Hybrid computing seeks to divide operations based on the strengths of digital, analogue or physical architectures. Here, approximate solutions to the multi-state Potts model are found using a physical Ising solver, networked degenerate optical parametric oscillators, repeatedly with learning processes.

The connection between searches for the ground states of physical systems and optimization problems has stimulated research and development into new types of computation 1. The realization of such architectures has been pioneered by Ising-model solvers [2][3][4][5][6][7][8][9]. These physical-model-solver architectures have solved certain problems much faster than conventional digital architectures such as a CPU [10][11][12][13]. Here, embedding a problem in a physical model sometimes requires an overhead of resources, which can be large enough to become a computational bottleneck. However, the embedding overhead can be reduced by choosing a more appropriate physical model as a solver instead of the Ising model [14][15][16][17][18].
The Potts model is a fundamental model describing various physical and mathematical problems 19, such as those of percolation theory 20. This model is a generalization of the Ising model to multivalued spins; its Hamiltonian is given by

$$H_{\mathrm{Potts}} = \sum_{ij} J_{ij}\,\delta(S_i, S_j), \qquad (1)$$

where $S_i \in \{0, 1, 2, \ldots, M-1\}$ is an M-component spin on the ith node of the model, with $i \in \{1, 2, 3, \ldots, N\}$, and $\delta(a, b)$ is the Kronecker delta function. Since multivalued spins naturally express integers, various integer optimization problems can be straightforwardly mapped onto ground-state search problems based on this model 19. For example, graph coloring can be described as a Potts model with a smaller Hilbert space than that of the standard Ising model 21. In the standard Ising-model mapping, M colors in each node are represented as M Ising spins 21; thus, the size of its Hilbert space is $2^{NM}$, which is larger than that of the original M-color problem, i.e., $M^N\ (= 2^{N \log_2 M})$. The Ising Hamiltonian needs constraint terms to reduce the enlarged space to the size of the original problem, whereas the Potts model, without any constraint Hamiltonian, has the same size as the original. Thus, the Potts-model mapping allows us to avoid the embedding overhead (see Supplementary Note 1). On the other hand, there are several challenges in realizing a physical Potts solver, namely the implementation of multivalued spins and of interactions described by a Kronecker delta within physical systems. Recently, it has been proposed that physical systems based on a lattice of nonequilibrium Bose-Einstein condensates 17 and a network of three-photon down-conversion oscillators 18 can be used to solve specific Potts problems.
In this study, we demonstrated a scheme to solve the Potts problem using a hybrid architecture of a physical Ising-model solver and digital processing (Fig. 1a). The Potts problem can be approximately solved by iterative calculations of Ising problems with updated interactions evaluated from one-way feedforward connections. Hybrid computation enjoys the advantages of physical solvers through the aid of digital computers 22,23. The physical solver quickly obtains a low-energy solution of a complex Ising problem (known to be NP 1) [10][11][12][13], while the digital computer can accurately handle input and output, such as interactions and energy, and also run the learning logic (see Fig. 1a). We implemented a Potts solver by using a coherent Ising machine (CIM) 5,10,[24][25][26] and a standard CPU (Fig. 1b). The CIM is a physical Ising-model solver based on coupled degenerate optical parametric oscillators (DOPOs) 24, in which Ising spins are encoded by utilizing the bifurcation transition in each DOPO. We experimentally solved two integer optimization problems, clustering and coloring, on the same graph (see Fig. 2).

Results and discussion
Theoretical framework. First, we explain how to map a Potts problem onto iterative calculations of Ising models. Given an integer $L = \lceil \log_2 M \rceil$, multivalued spin $S_i$ can be written as a set of Ising spins $\sigma^{(l)}_i \in \{-1, +1\}$ with a standard binary representation,

$$S_i = \sum_{l=1}^{L} 2^{l-1}\,\frac{\sigma^{(l)}_i + 1}{2},$$

where the delta-functional Potts interaction is transformed into multibody Ising-spin interactions,

$$\delta(S_i, S_j) = \prod_{l=1}^{L} \frac{1 + \sigma^{(l)}_i \sigma^{(l)}_j}{2}.$$

This complicated interaction can be simplified by decomposing it into sets of two-body-interaction Ising problems with one-way feedforward connections: $H^{(l)}_{\mathrm{Ising}} = \sum_{ij} J^{(l)}_{ij} \sigma^{(l)}_i \sigma^{(l)}_j$. Here, l represents an iteration number and is called a stage. Interaction matrix $J^{(l+1)}_{ij}$ is determined recursively from interaction $J^{(l)}_{ij}$ and solution $s^{(l)}_i$ of the previous stage:

$$J^{(l+1)}_{ij} = J^{(l)}_{ij}\,\delta(s^{(l)}_i, s^{(l)}_j) = J^{(l)}_{ij}\,\frac{1 + s^{(l)}_i s^{(l)}_j}{2}, \qquad (2)$$

where the initial Ising interactions are the same as the original Potts interactions, $J^{(1)}_{ij} = J_{ij}$. Figure 1a illustrates the framework of the Potts solver based on hybrid computation. We repeat operation stages comprising two parts: the Ising-solver part, which obtains solution $s^{(l)}_i$, and the digital part, which computes the next interaction matrix through the feedforward connections in Eq. 2. The digital part also calculates the Potts energy, defined by $E^{(l)}_{\mathrm{Potts}} = \sum_{ij} J_{ij}\,\delta(S^{(l)}_i, S^{(l)}_j)$. From Eq. 2, we can derive two other expressions of the Potts energy,

$$E^{(l)}_{\mathrm{Potts}} = \sum_{ij} J^{(l+1)}_{ij} = \tfrac{1}{2}\left(E^{(l)}_{\mathrm{FM}} + E^{(l)}_{\mathrm{Ising}}\right),$$

where $E^{(l)}_{\mathrm{Ising}} = \sum_{ij} J^{(l)}_{ij} s^{(l)}_i s^{(l)}_j$ is the Ising energy of a solution in stage l, and $E^{(l)}_{\mathrm{FM}} = \sum_{ij} J^{(l)}_{ij}$ is the energy of the ferromagnetic states in stage l (e.g., ferromagnetic means $s^{(l)}_i = -1$ for all i). Since $E^{(l-1)}_{\mathrm{Potts}} = \sum_{ij} J^{(l)}_{ij} = E^{(l)}_{\mathrm{FM}}$, the change per stage is

$$\Delta E^{(l)}_{\mathrm{Potts}} = E^{(l)}_{\mathrm{Potts}} - E^{(l-1)}_{\mathrm{Potts}} = \tfrac{1}{2}\left(E^{(l)}_{\mathrm{Ising}} - E^{(l)}_{\mathrm{FM}}\right). \qquad (3)$$

We can conclude that $E^{(l)}_{\mathrm{Potts}}$ decreases in iteration l if each stage yields a solution whose energy is lower than that of the ferromagnetic states, $E^{(l)}_{\mathrm{Ising}} < E^{(l)}_{\mathrm{FM}}$. Note that the ferromagnetic states are trivially obtained for any $J^{(l)}_{ij}$.
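As a concrete check of the decomposition described above, the Kronecker delta between two multivalued spins equals a product of two-body Ising terms over the binary bits. The sketch below (the helper names are ours, not from the paper) verifies this identity numerically for L = 2, i.e., M = 4.

```python
import itertools

# Binary representation: an M-valued spin S in {0, ..., 2**L - 1} corresponds
# to L Ising spins sigma_l in {-1, +1}, where bit l of S is (sigma_l + 1) / 2.
def to_ising_bits(S, L):
    return [2 * ((S >> l) & 1) - 1 for l in range(L)]

# Kronecker delta as a product of two-body Ising terms:
# delta(S_i, S_j) = prod_l (1 + sigma_i^(l) * sigma_j^(l)) / 2
def delta_from_ising(Si, Sj, L):
    prod = 1.0
    for si, sj in zip(to_ising_bits(Si, L), to_ising_bits(Sj, L)):
        prod *= (1 + si * sj) / 2
    return prod

# Verify the identity for all spin pairs with L = 2 (M = 4)
L = 2
for Si, Sj in itertools.product(range(2**L), repeat=2):
    assert delta_from_ising(Si, Sj, L) == (1.0 if Si == Sj else 0.0)
```

Each factor is 1 when the two bits agree and 0 otherwise, so the product is 1 only when every bit, and hence the whole integer, agrees.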

Meanwhile, convergence condition $\Delta E^{(l)}_{\mathrm{Potts}} = 0$ is satisfied when the obtained solution is the same as the ferromagnetic state. Equation 3 indicates the convergence of a change in spins for the ferromagnetic solutions (see Methods for details).
Low-energy solution $S^*_i$ is, however, not assured to be the ground state of the original Potts model. This problem is mainly attributed to the one-way feedforward connection of $J^{(l)}_{ij}$ described by Eq. 2. Namely, as l increases, the interaction matrix and graph are divided into more and more submatrices and subgraphs, as illustrated in Fig. 1a. The loss of information from $J_{ij}$ and the reduction of the graph degrade the solution accuracy. Such errors can be circumvented by implementing two kinds of feedback. One is recurrent Potts-problem feedback with a new (learned) interaction matrix $J^{\mathrm{new}}_{ij}$ (long arrow in Fig. 1a). The other is digital feedback: in each digital operation, $J^{(l)}_{ij}$ and $S^{(l)}_i$ are modified to improve Potts energy $E^{(l)}_{\mathrm{Potts}}$ (rounded arrows in Fig. 1a). The simplest example of digital feedback is filtering: for $\Delta E^{(l)}_{\mathrm{Potts}} > 0$, we can filter out a bad solution without additional calculations by choosing a better solution from the previous stage. In the next section, we experimentally demonstrate that heuristic feedback algorithms clearly improve the performance of the Potts solver.
Finally, we generalize $J^{(l)}_{ij}$ defined in Eq. 2 to accommodate the feedback algorithms. For convenience, we introduce weight matrix $W^{(l)}_{ij}$, defined by $W^{(1)}_{ij} = 1$ and $W^{(l+1)}_{ij} = W^{(l)}_{ij}\,\delta(s^{(l)}_i, s^{(l)}_j)$, which gives a more general form, $J^{(l)}_{ij} = W^{(l)}_{ij} J_{ij}$, than Eq. 2. In summary, weights for interactions $J^{(l)}_{ij}$ are "learned" from the solutions of the previous Potts-model computation so as to decrease the Potts energy. This framework can be regarded as an artificial-neural-network-like algorithm using a physical Ising solver, where a decrease in the energy cost function (the Potts energy) is assured if the solver can find a low-energy Ising solution (see Supplementary Note 9). Note that this property of convergence may allow us to utilize the advantages of physical solvers that can find low-energy solutions quickly.
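The iterative stages with feedforward weight updates can be sketched as follows. Here `ising_solver` is a hypothetical stand-in for the physical solver (the CIM in the experiments), and the update rule follows $J^{(l)}_{ij} = W^{(l)}_{ij} J_{ij}$ with $W^{(l+1)}_{ij} = W^{(l)}_{ij}\,\delta(s^{(l)}_i, s^{(l)}_j)$ as defined above; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def run_potts_stages(J, ising_solver, n_stages):
    """Iterative Ising stages with one-way feedforward connections.

    J: (N, N) original Potts interaction matrix.
    ising_solver: callable J -> s, returning Ising spins in {-1, +1}
                  (a stand-in for the physical CIM; hypothetical interface).
    Returns multivalued spins S assembled bit by bit from the stage solutions.
    """
    N = J.shape[0]
    W = np.ones((N, N))           # weight matrix, W^(1)_ij = 1
    S = np.zeros(N, dtype=int)    # multivalued spins built up over stages
    for l in range(n_stages):
        s = ising_solver(W * J)               # solve H^(l) = sum_ij W_ij J_ij s_i s_j
        S += ((s + 1) // 2) << l              # bit l of the binary representation
        W = W * (s[:, None] == s[None, :])    # W^(l+1)_ij = W^(l)_ij delta(s_i, s_j)
    return S
```

Because each stage multiplies W by a Kronecker delta, the effective graph only ever loses edges, which is exactly the one-way feedforward structure discussed in the text.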
Graph clustering. We solved a graph clustering problem, which is the task of finding the best grouping of nodes. This problem arises widely in various fields, such as community detection in social 27 and biological networks 28,29. Modularity Q is a good measure for graph clustering problems 30, and the task of maximizing Q can be directly mapped onto a search for the ground state of the Potts model in Eq. 1 31. Multivalued spin $S_i$ identifies the group number to which the ith node belongs. The interaction matrix for this problem is defined as $J_{ij} = B_i B_j - C A_{ij}$, where $A_{ij}$ is the adjacency matrix, $B_i$ is the degree of the ith node, and $C = 2N_{\mathrm{edge}}$ (see Methods). There is room to study the definition of $J_{ij}$ 32, but that is beyond the scope of this paper. Competition between antiferromagnetic and ferromagnetic correlations (namely, positive $B_i B_j$ and negative $-C A_{ij}$ in $J_{ij}$, respectively) is the intrinsic difficulty of this problem. Although the number of groups is not given, an optimized M is obtained spontaneously through this ferromagnetic-antiferromagnetic competition, as discussed below.
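The mapping from modularity maximization to the interaction matrix above can be made concrete with a short sketch. It assumes, as in Methods, that $B_i$ is the node degree and $C = 2N_{\mathrm{edge}}$; the function names are ours.

```python
import numpy as np

# Build the clustering interaction matrix J_ij = B_i B_j - C * A_ij from an
# adjacency matrix A, with B_i the degree of node i and C = 2 * N_edge.
def clustering_interactions(A):
    B = A.sum(axis=1)      # node degrees
    C = A.sum()            # 2 * N_edge for an undirected graph
    return np.outer(B, B) - C * A

# Modularity of a grouping S (S[i] is the group number of node i):
# Q = (1 / 2m) * sum_ij (A_ij - B_i * B_j / 2m) * delta(S_i, S_j)
def modularity(A, S):
    two_m = A.sum()
    B = A.sum(axis=1)
    delta = np.asarray(S)[:, None] == np.asarray(S)[None, :]
    return ((A - np.outer(B, B) / two_m) * delta).sum() / two_m
```

Minimizing $\sum_{ij} (B_i B_j - C A_{ij})\,\delta(S_i, S_j)$ is equivalent (up to the positive factor $1/C^2$ and sign) to maximizing Q, which is why the ground-state search of this Potts model yields the best grouping.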
We solved a clustering problem on the graph shown in Fig. 2, compiled from the prefectures of Japan, which has N = 47 nodes and $N_{\mathrm{edge}} = 92$ edges. The physical solver used here was a CIM implemented with 512 fully connectable nodes (see Methods and Fig. 1b), in which 470 nodes were used in parallel. It has been demonstrated that, if $J_{ij}$ is a dense matrix, a CIM shows better performance than a standard CPU running simulated annealing 10 and than the D-Wave system 33. This suggests that a CIM is appropriate for clustering problems, in which $J_{ij}$ is dense owing to the $B_i B_j$ term with $B_i \neq 0$. Each CIM calculation took 500 μs (100 5-μs steps), and each digital part took about 30 μs or less. However, the current setup used a slow serial communications interface, and the transfer of $J^{(l)}_{ij}$ between the CIM and CPU took a few seconds. This bottleneck can be removed by coding the $J_{ij}$-update logic in field-programmable gate array (FPGA) modules (see Methods).
Figure 3a-d shows the evolution of the 47 DOPOs during 100 operation steps (circulations in the cavity) in four stages (l = 1, 2, 3, and 4). Positive (negative) DOPO amplitudes represent up (down) Ising spins. The black lines in Fig. 3e show the change in modularity Q and the number of groups M over the same operation steps as in Fig. 3a-d. For l = 1 and 2, the DOPO amplitudes show that antiferromagnetic states appear after several tens of steps (Fig. 3a, b). As a result, each group split in two, and M doubled in value (Fig. 3e). At l = 3, down-spin DOPOs were the majority (Fig. 3c), indicating that ferromagnetic correlations were dominant, and M converged to an optimized value of 5 (Fig. 3e). At the beginning of the steps in each stage, Q decreased drastically, while at the end of the steps, the CIM selected a higher value of Q than in the former stage. At l = 4, the complete ferromagnetic state finally prevailed (Fig. 3d), meaning that the stationary condition was satisfied. As mentioned in Methods, the obtained grouping with Q of 0.646 (see Fig. 2) is the same as the best solution obtained by reliable algorithms, such as the Louvain greedy 34 and Infomap 35 algorithms. Figure 3f shows that the rate of reaching the highest Q over 1000 trials is about 20%. We can conclude that the ferromagnetic-antiferromagnetic competition solved by the CIM provided good groupings with high modularity (see Methods and Supplementary Note 6).
We found that two digital feedback algorithms, domain separation and group reunion, improved the performance of our Potts solver. Figure 3e, f (red and blue lines) reveal that they almost doubled the rate of reaching the highest Q. In addition, the highest Q was reached at earlier stages, so the calculation time was shortened. The digital processing, including both the domain separation and group reunion algorithms, takes at most 30 μs; both are of order $O(N^2)$ (or $O(N_{\mathrm{edge}})$ for sparse $A_{ij}$).

Domain separation. By detecting magnetic domains (connected regions in which spins are in the same state), we can recalculate $S^{(l)}_i$ to decrease the Potts energy. As illustrated in Fig. 3g, an Ising solver sometimes yields a solution consisting of two or more separated magnetic domains (see Supplementary Notes 3, 4, and also ref. 25). Nodes in separated domains should be in different groups owing to the lack of ferromagnetic correlations ($-CA_{ij}$). Note that the antiferromagnetic connections ($B_i B_j$) still remain. Namely, the subgraph consisting of red nodes in Fig. 3g is separated, while the corresponding matrix (red square) is not block diagonal because of the antiferromagnetic interactions (see Supplementary Note 4). As depicted in Fig. 3g, by removing the antiferromagnetic elements from $J^{(l)}_{ij}$, this domain-separation feedback yields a new $J^{(l)\prime}_{ij}$. As a result, the Potts energy decreases in proportion to the sum of the removed antiferromagnetic elements. Detecting domains and numbering them requires a calculation time of $O(N^2)$ or $O(N_{\mathrm{edge}})$. The updated $S^{(l)\prime}_i$ is determined from the domain number. Now the number of groups is unlimited, whereas it was limited to at most $2^l$ in the case without feedback. As shown in Fig. 3e, f (red lines), domain-separation feedback allows us to reach the best solution with M = 5 in the early stages $l < 3$; thus, the calculation time can be shortened by up to half.
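Detecting domains and numbering them amounts to finding connected components among nodes that share a group and a spin state; a breadth-first search does this in the quoted $O(N^2)$ (or $O(N_{\mathrm{edge}})$ with adjacency lists). The sketch below is our hedged rendering of this feedback step, not the authors' code.

```python
from collections import deque

def domain_separation(A, s, S):
    """Split each group into its connected magnetic domains.

    A: adjacency matrix (list of lists or array); s: Ising solution of the
    current stage (+1/-1); S: current group labels. Nodes that are adjacent,
    in the same group, and carrying the same spin form one domain; each
    domain becomes its own new group.
    """
    N = len(s)
    domain = [-1] * N
    n_dom = 0
    for start in range(N):
        if domain[start] >= 0:
            continue
        domain[start] = n_dom
        queue = deque([start])
        while queue:  # BFS over same-group, same-spin neighbours
            i = queue.popleft()
            for j in range(N):
                if A[i][j] and domain[j] < 0 and S[j] == S[i] and s[j] == s[i]:
                    domain[j] = n_dom
                    queue.append(j)
        n_dom += 1
    return domain  # new group number for each node
```

Because each domain receives a fresh label, the number of groups is no longer bounded by $2^l$, matching the observation in the text.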
Group reunion. The group-reunion feedback algorithm is shown schematically in Fig. 3h. On the basis of the obtained grouping described by $S^{(l)}_i$, we can calculate the group-group interactions $J_{gg'} = \sum_{i \in g,\, j \in g'} J_{ij}$. As shown in Fig. 3h, this interaction matrix is defined on a new graph with $M^{(l)}$ nodes. The new Potts Hamiltonian $H_{\mathrm{Potts}} = \sum_{gg'} J_{gg'}\,\delta(S_g, S_{g'})$ describes the task of finding the best way to decrease the Potts energy by executing reunions of groups. A ferromagnetic group-group interaction $J_{gg'} < 0$ calls for reunion. Group-reunion feedback can restore the information lost through the approximation: namely, $J_{gg'}$ includes information about the original $J_{ij}$, whereas a block-diagonal $J^{(l)}_{ij}$ loses it.
The Potts problem for the group-reunion task can be efficiently solved by making the following approximation. We unify two groups, $g_a$ and $g_b$, for the negative and minimum $J_{g_a g_b}$ without considering the other negative elements, and repeat the same calculation after updating $J_{gg'}$. In each step, the Potts energy is reduced by $2J_{g_a g_b}$. This approximation works very well for small M (see Supplementary Note 5). The calculation takes $O(N^2)$ or $O(N_{\mathrm{edge}})$ time. Group-reunion feedback combined with domain-separation feedback improves the rate of reaching the highest Q, as shown in Fig. 3e, f.
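The greedy merging just described can be sketched as follows, assuming $J_{gg'} = \sum_{i \in g, j \in g'} J_{ij}$ as above; this is our approximation-level sketch, not the authors' implementation.

```python
import numpy as np

def group_reunion(J, S):
    """Greedily reunify groups with the most negative group-group interaction.

    J: original (N, N) interaction matrix; S: group labels. Repeatedly merges
    the pair (g_a, g_b) with the smallest (most negative) J_{gg'}, recomputing
    the group-group matrix after each merge, until no ferromagnetic
    (negative) inter-group element remains.
    """
    S = np.asarray(S).copy()
    while True:
        groups = np.unique(S)
        if len(groups) < 2:
            break
        # group-group interactions J_{gg'} = sum_{i in g, j in g'} J_ij
        Jgg = np.array([[J[np.ix_(S == g, S == h)].sum() for h in groups]
                        for g in groups])
        np.fill_diagonal(Jgg, 0)        # only inter-group pairs matter
        a, b = np.unravel_index(np.argmin(Jgg), Jgg.shape)
        if Jgg[a, b] >= 0:
            break                       # no ferromagnetic pair left
        S[S == groups[b]] = groups[a]   # unify g_b into g_a
    return S
```

Each merge lowers the Potts energy by $2J_{g_a g_b}$ (the element plus its symmetric counterpart), consistent with the energy bookkeeping in the text.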
Despite the improvement from the domain-separation and group-reunion feedback schemes, the experimental results indicated that the success probability of our machine is still lower than those of the Infomap and Metropolis algorithms (see Methods and Supplementary Note 6). We consider that one main reason for the relatively low success probability is instability in the optical system, which caused the operational conditions to fluctuate from trial to trial. This includes instability of the optical parametric oscillation caused by thermal fluctuations in the long-distance fiber of the cavity, and instability of the relative phase between the DOPOs and the injected light. The optical stability can be improved by implementing precise temperature control of the long-distance fiber in the cavity and by suppressing the phase noise of the pump laser used for second-harmonic generation (see Fig. 1b).
Graph coloring. Graph coloring is the task of coloring nodes such that connected nodes have different colors 19. We experimentally solved a four-color problem on the graph in Fig. 2a, setting L = 2 for M = 4. The interaction matrix was totally antiferromagnetic, $J_{ij} = A_{ij}\ (> 0)$, requiring that adjacent nodes be given different colors. The four-color theorem 36 assures the existence of a ground state with $E^*_{\mathrm{Potts}} = 0$. The CIM operated under the same conditions as described above (see Supplementary Note 7).
Figure 4a shows the conditional success rates for 50 instances of $J^{(2)}_{ij,k}$ ($k = 1, 2, \ldots, 50$), which were obtained as follows: identical Ising models with $J_{ij}$ in stage one were solved 50 times, yielding 50 solutions $s^{(1)}_{i,k}$ and hence 50 Ising models $J^{(2)}_{ij,k}$ in stage two. Then, the conditional success rate was estimated by solving each of the 50 Ising models with $J^{(2)}_{ij,k}$ 100 times. The total success rate averaged over k is about 50%. Figure 4a also shows the energy in stage one, $E^{(1)}_{\mathrm{Potts},k}$, for each $s^{(1)}_{i,k}$. Successful and failed instances in stage two are clearly separated irrespective of the energy in stage one. This result can be understood from the reduction of the graph in stage two, described by $W^{(2)}_{ij,k} A_{ij}$ (see Supplementary Note 3). Coloring fails with 100% probability, regardless of the energy in stage one, if the reduced graph in stage two has geometrical frustrations, such as a triangular structure (see Supplementary Note 8).
Such frustrations can be dissolved by implementing recurrent Potts-problem feedback based on a "learning by mistake" approach. We iteratively execute the Potts solver with

$$J^{\mathrm{new}}_{ij} = J^{\mathrm{old}}_{ij} + w L_{ij},$$

where w and $w_0$ are weights that control the learning ($w_0$ sets the initial interaction strength). The feedback matrix is defined by $L_{ij} = W^{(2)}_{ij} A_{ij}\ (\in \{0, 1\})$ and is related to the Potts energy as $E^*_{\mathrm{Potts}} = \sum_{ij} L_{ij}$. Here, $L_{ij} = 1$ represents adjacent nodes with the same color, while $L_{ij} = 0$ represents nodes having different colors (or not adjacent). Thus, a finite $L_{ij}$ directly represents a mistake. In the new (learned) Potts problem with $J^{\mathrm{new}}_{ij}$, interactions on the "mistaken edges" are enlarged, so the pairs of nodes on those edges are correctly colored with high priority in stage one. As a result, frustrations caused by the reduction of the graph in stage two are eliminated by learning.
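The "learning by mistake" update is a one-line matrix operation; the sketch below (our naming) also returns the mistake matrix L, whose sum equals the residual Potts energy, so an all-zero L signals a valid coloring.

```python
import numpy as np

def learn_from_mistakes(J_old, A, W2, w):
    """'Learning by mistake' update, J_new = J_old + w * L.

    L_ij = W2_ij * A_ij is 1 on 'mistaken edges' (adjacent nodes left with
    the same color after stage two) and 0 elsewhere. W2 is the stage-two
    weight matrix; w controls the learning strength.
    """
    L = W2 * A
    return J_old + w * L, L

# sum_ij L_ij equals the residual Potts energy E*_Potts, so L doubles as a
# success check: L.sum() == 0 means no adjacent nodes share a color.
```

In the experiments, L was taken from the worst of 50 trials in each learning step, concentrating the enlarged interactions on the intrinsically frustrated edges.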
Figure 4b shows how learning affects the success rate for w = 5, 10, 20, and 40 with $w_0 = 40$. Each success rate was obtained by performing 50 trials of the two-stage experiment. In each learning step, $L_{ij}$ is determined from the worst-case instance, i.e., the one with the largest $E^*_{\mathrm{Potts}}$ among the 50 trials. As shown in Fig. 4b, the success rates improved from 50% to over 80% after a few learning steps, eventually reaching nearly 100%. A larger w provides faster but less stable improvement (see Supplementary Note 10).
Figure 4c shows the sum of $L_{ij}$, corresponding to the total count of mistakes, for four independent learning processes. In each learning process, there was no more than one mistake on the same edge; thus, an edge accumulated at most four mistakes in total. The red-colored edges and nodes in Fig. 4c are those frequently involved in these mistakes: they can be regarded as the intrinsic origins of frustration in the graph. By detecting such nodes and edges, the learning process increased the success rate to over 80%.
The present Potts solver can be applied to general coloring problems with M colors 37. For instance, simple scheduling problems 38 and number puzzles such as Sudoku 39 can be described as graph colorings. It is straightforward to apply the present solver to cases with $M = 2^L$, while a small number of additional nodes (at most N) is required to deal with cases in which $M \neq 2^L$ (see Supplementary Note 2). For comparison, the usual Ising mapping 21 requires MN nodes (up to $N^2$). Thus, the Potts solver can save node resources, which benefits physical solvers with limited node resources.
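For any of these coloring applications, validating an output is cheap: a solution is proper exactly when $E_{\mathrm{Potts}} = \sum_{ij} A_{ij}\,\delta(S_i, S_j) = 0$. A minimal checker (our naming, not from the paper):

```python
def is_proper_coloring(A, S):
    """True if no edge connects two nodes of the same color,
    i.e., E_Potts = sum_ij A_ij * delta(S_i, S_j) = 0."""
    N = len(S)
    return all(not (A[i][j] and S[i] == S[j])
               for i in range(N) for j in range(i + 1, N))
```

The same check, restricted to the stage-two reduced graph, is what the mistake matrix $L_{ij}$ encodes in the learning scheme above.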

Conclusions
We demonstrated a Potts model solver based on a hybrid architecture composed of a physical Ising solver and digital processing. The Potts problem is mapped onto iterative Ising problems with learning of weights for the interactions, where convergence is assured if the physical solver can find a low-energy Ising solution. We experimentally realized it with a CIM and a standard CPU (Intel(R) Xeon(R)). We showed that graph coloring and clustering problems can be solved by using simple Ising models (no magnetic fields were needed to introduce constraints, and there was no need for a large number of spins). The resource overhead for embedding the problem is significantly suppressed. As a tradeoff, iterative calculations with learning (namely, additional computational time) are required. We expect this additional time to be insignificant if the physical solver is fast enough. The cost of communication between the physical and digital systems is an essential problem of hybrid computation, but it can be significantly suppressed by directly coding the learning logic in the CIM's measurement-feedback system.
The proposed method approximates an M-state Potts problem with L (= log₂ M) Ising problems, which means that it does not guarantee that the ground state of the given Potts problem will be obtained. Although the heuristic feedback schemes that we call domain separation and group reunion significantly improved the solution accuracy, the feedback presumably cannot completely compensate for the information lost in dividing a Potts problem into Ising problems for general instances. Nevertheless, we consider the method important as a scheme for obtaining approximate but useful solutions to integer optimization problems in a short time, by utilizing the very fast computation speed of physical-system-based Ising machines and also related algorithms on non-CPU digital devices 40,41.

Methods
Ferromagnetic solutions as a sign of convergence. By considering $\Delta E^{(l)}_{\mathrm{Potts}}$, we can find a convergence condition characterized by ferromagnetic solutions. Note that there are degenerate ferromagnetic solutions due to spin-inversion symmetry, and the degeneracy $d_{\mathrm{FM}}$ increases as $d_{\mathrm{FM}} = 2^l$ (or, more precisely, $d_{\mathrm{FM}} = 2M^{(l)}$) because of the reduction of the graph and interaction matrix. Equation 3 indicates that these ferromagnetic solutions, except for the complete-down state ($s^{(l)}_i = -1$ for all i), cause only a (trivial) change in the multivalued spins, which can be reduced to steady spin states. Accordingly, we can find another expression for the convergence condition.

Experimental setup of CIM. As shown in Fig. 1b, the CIM contains a phase-sensitive amplifier (PSA), a 1-km fiber ring cavity, and an FPGA module. We employ a periodically poled lithium niobate (PPLN) waveguide as the PSA, which amplifies only light with the 0 or π phase component relative to the pump phase, as a result of signal-idler degenerate optical parametric amplification 42,43. These two amplified components express the two Ising spin states. Because the cavity round-trip time is 5 μs and the pump pulse interval is 1 ns, over 5000 DOPO pulses are generated inside the 1-km cavity, from which 512 DOPO pulses are used as artificial Ising spins. The 512 DOPO pulses are mutually coupled by using the measurement-and-feedback scheme with the FPGA module 10,26. We can encode interaction matrix $J_{ij}$ in the FPGA module with eight-bit integers ranging from −128 to 128. For solving the clustering problem, $\max |J_{ij}| \sim C = 2N_{\mathrm{edge}} = 184$ exceeds the range of the FPGA module. Thus, $J_{ij}$ in the CIM is rounded off as $R(B_i B_j - C A_{ij})$ with $R = 1/2$. By performing simulated-annealing 44 calculations without round-off, we confirmed that the error caused by this rounding is not critical to the ground-state search.
Actual computational time. The Ising-solver process of the CIM is completed in 500 μs, which is the time for 100 round trips of the DOPO pulses in the 1-km cavity. The obtained spin configurations are stored in the FPGA module and transferred to the CPU to update the interaction matrix for the next stage. The calculations in the digital part take at most 30 μs when both the domain separation and group reunion algorithms are used simultaneously. Since the current FPGA module uses a slow serial communications interface (RS-232C), it takes a few seconds to transfer the annealing results and the updated matrix between the FPGA module and the CPU. Although this technical issue is beyond the current scope, it is worth discussing how much the transfer time could be shortened.
For example, by using 10 Gigabit Ethernet (10 Gbps), the transfer time for $J^{(l)}_{ij}$, consisting of $8 \times 512^2$ bits, is ideally estimated to be 0.2 ms. Note that $J^{(l)}_{ij}$ can be written as $J^{(l)}_{ij} = W^{(l)}_{ij} J_{ij}$ with $W^{(l+1)}_{ij} = W^{(l)}_{ij}\,\delta(S^{(l)}_i, S^{(l)}_j)$. Except for the first stage, it is therefore enough to transfer $S^{(l)}_i$, i.e., $512 \log_2 M$ bits, which takes only a few microseconds. Furthermore, we can directly write the $J^{(l)}_{ij}$-update logic in an FPGA module, which requires no data transfer except for the first input and final output. System-on-chip FPGA devices may be used to implement more complicated feedback algorithms.
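The transfer-time estimates above follow from simple bit-count arithmetic (ideal link rate only, protocol overhead ignored); the sketch below reproduces them.

```python
# Sanity check of the transfer-time estimates: full 8-bit interaction matrix
# for 512 spins vs. only the spin configuration (log2(M) bits per node).
def transfer_time_s(n_bits, rate_bps):
    return n_bits / rate_bps

J_bits = 8 * 512**2   # 2,097,152 bits for the full J matrix
S_bits = 2 * 512      # spin configuration for M = 4 (log2(4) = 2 bits/node)
RATE = 10e9           # 10 Gigabit Ethernet

print(transfer_time_s(J_bits, RATE))  # ~2.1e-4 s, i.e., ~0.2 ms
print(transfer_time_s(S_bits, RATE))  # ~1e-7 s, well under a microsecond
```

The three-orders-of-magnitude gap is why transferring only $S^{(l)}_i$ after the first stage, or keeping the update logic inside the FPGA, removes the communication bottleneck.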
Comparison with other algorithms. We compared our experimental clustering results with those of other algorithms running on a standard CPU. We used reliable algorithms 45,46, namely the Louvain greedy 34 and Infomap 35 algorithms. The greedy algorithm reached the same best solution of Q = 0.646 at a small rate (about 2%) and frequently reached the second- and third-best solutions with Q ≈ 0.643 (over 70%). The Infomap algorithm reached the best solution with the highest probability, about 60% (see Supplementary Note 6). Louvain ran in a few milliseconds, while Infomap took a few seconds on an Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70 GHz. However, the number of nodes was too small to evaluate the run times of these algorithms. A further benchmark study like that in ref. 45 is left for future work, because the number of nodes is strictly limited in the current setup.

Fig. 1
Fig. 1 Schematic view of the Potts model solver and experimental setup of the coherent Ising machine (CIM). a Potts model solver composed of the hybrid architecture of an iterated physical Ising-model solver and a digitally processed update of interaction matrix $J^{(l)}_{ij}$. The Ising solver sends a solution (set of up or down spins) to digital processing, which computes the next interactions $J^{(l+1)}_{ij}$. In each iterative stage, the graph is divided into disconnected subgraphs, and the interaction matrix becomes block diagonal. The nodes in each subgraph belong to a certain group (color) described by a multivalued spin. Digital processing can be used to implement various feedback algorithms to decrease Potts energy $E^{(l)}_{\mathrm{Potts}}$, as discussed in the main text. b Experimental setup of the CIM. 512 degenerate optical parametric oscillators (DOPOs) in a 1-km-long fiber ring cavity are mutually coupled with $J_{ij}$ through a measurement-feedback scheme assisted by a field-programmable gate array (FPGA) module. The solution of the l-th CIM computation is transferred to the CPU, and the updated $J^{(l+1)}_{ij}$ is embedded in the CIM again. The CIM (CPU) computation takes 500 μs (at most 30 μs) in each stage. PPLN: periodically poled lithium niobate. SHG: second-harmonic generation. PSA: phase-sensitive amplifier. FS: piezo-based fiber stretcher. PM-DSF: polarization-maintained dispersion-shifted fiber. BHD: balanced homodyne detection.

Fig. 2
Fig. 2 Graph structure of the problem solved in this study and the best solutions found in each stage of the experiments. a Graph of the prefectures of Japan, where the numbers of nodes and edges are 47 and 92, respectively. b One of the solutions obtained in each stage of graph clustering based on modularity. The dotted lines are the edges removed in each stage. In the third stage, the prefectures of Japan are grouped into five regions with a modularity of 0.646, which is the same as the best solution obtained by the other algorithms. c One of the successful solutions of the four-color map problem. In stage two, there are no adjacent same-color nodes.

Fig. 3
Fig. 3 Experimental results for graph clustering and illustration of the feedback algorithms. a-d Amplitudes of the degenerate optical parametric oscillators (DOPOs) of the 47 nodes as a function of operation step at stages l = 1, 2, 3, 4. Each color represents a different node. Positive (negative) amplitudes represent up (down) Ising spins. A crossover from antiferromagnetic to ferromagnetic solutions is found as l increases. e Modularity Q and number of groups M without feedback (w/o FB) (thick and thin black lines, respectively), with the domain separation (DS) algorithm (red lines), and with both the group reunion (GR) and DS algorithms (blue lines). f Success rate of reaching the highest Q (= 0.646), estimated by sampling 1000 trials. g, h Schematic views of the DS and GR feedback algorithms.

Fig. 4
Fig. 4 Experimental results for the map coloring problem with four colors. a Conditional success rates for 50 instances of $J^{(2)}_{ij,k}$ in stage two with k = 1, 2, ..., 50, and the Potts energy in stage one, $E^{(1)}_{\mathrm{Potts},k}$. The total success rate averaged over k is about 50%. b Change in success rates caused by learning, defined as $J^{\mathrm{new}}_{ij} = J^{\mathrm{old}}_{ij} + w L_{ij}$, for weights w = 5, 10, 20, and 40 with initial weight $w_0 = 40$. Matrix $L_{ij}$ characterizes the edges that failed in coloring, as detailed in the main text. c Total number of mistakes (sum of $L_{ij}$) in independent learning processes with w = 5, 10, 20, and 40. Red nodes and edges were frequently involved in the mistakes. Success rates can be improved by learning such frequently incorrect edges.