Abstract
Recently, several platforms have been proposed and have demonstrated a proof-of-principle for finding the global minimum of spin Hamiltonians, such as the Ising and XY models, using gain-dissipative quantum and classical systems. The implementation of dynamical adjustment of the gain and coupling strengths has been established as a vital feedback mechanism for analog Hamiltonian physical systems that aim to simulate spin Hamiltonians. Based on the principle of operation of such simulators, we develop a novel class of gain-dissipative algorithms for the global optimisation of NP-hard problems and compare its performance with that of classical global optimisation algorithms. These algorithms can be used to study the ground state and statistical properties of spin systems and as a direct benchmark for the performance testing of the gain-dissipative physical simulators. Our theoretical and numerical estimates suggest that, for large problem sizes, the analog simulator, when built, might outperform classical computer computations by several orders of magnitude under certain assumptions about the simulator operation.
Introduction
Finding the global minimum of spin Hamiltonians has been instrumental in many areas of modern science. Such Hamiltonians were initially introduced in condensed matter to study magnetic materials^{1,2} and by now they have become fundamentally important in a wide range of other disciplines such as quantum gravity^{3}, combinatorial optimisation^{4}, neural networks^{5}, protein structures^{6}, error-correcting codes^{7}, X-ray crystallography^{8}, diffraction imaging^{9}, astronomical imaging^{10}, optics^{11}, microscopy^{12}, biomedical applications^{13}, percolation clustering^{14} and machine learning^{15}.
The spin degrees of freedom in spin models are either discrete or continuous. In particular, we will be concerned with the XY model, where spins lie on a unit circle, s_{j} = cos θ_{j} + i sin θ_{j}; the Ising model, where spins take values s_{j} = ±1; and the q-state planar Potts model, where spins take q discrete values. For N spins the classical Hamiltonians for these models can be written as

\[H = -\sum_{i,j;j\ne i} J_{ij}\cos({\theta}_{i}-{\theta}_{j}) - \sum_{i} g_{i}\cos {\theta}_{i}, \qquad (1)\]

where the elements J_{ij} of the matrix J define the strength of the coupling between the i-th and j-th spins, represented by the phases θ_{i} and θ_{j}, and g_{i} is the strength of the external field acting on spin i. For the continuous XY model θ_{j} ∈ [0, 2π), for the Ising model θ_{j} ∈ {0, π}, and for the q-state planar Potts model θ_{j} = 2πj/q, j = 1, …, q.
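The Hamiltonian of Eq. (1) is straightforward to evaluate numerically. The sketch below is our own illustration, assuming the double-sum convention over i ≠ j, a symmetric coupling matrix J with zero diagonal, and the sign convention above; the XY, Ising and q-state Potts cases are obtained simply by restricting the allowed phases:

```python
import numpy as np

def spin_hamiltonian(theta, J, g=None):
    """H = -sum_{i,j; j!=i} J_ij cos(theta_i - theta_j) - sum_i g_i cos(theta_i).

    theta: XY -> any angles in [0, 2*pi); Ising -> 0 or pi; q-state Potts -> 2*pi*k/q.
    J is assumed symmetric with zero diagonal, so the diagonal contributes nothing.
    """
    diff = np.subtract.outer(theta, theta)   # matrix of theta_i - theta_j
    H = -np.sum(J * np.cos(diff))            # double sum over i != j
    if g is not None:
        H -= np.sum(g * np.cos(theta))
    return H
```

Note that with this double-sum convention each pair of spins is counted twice; conventions that sum over i < j differ by a factor of two.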
For a general matrix of coupling strengths, J, finding the global minimum of such problems is known to be strongly NP-complete^{16}, meaning that an efficient way of solving them could be used to solve all problems in the complexity class NP, which includes a vast number of important problems such as partitioning, the travelling salesman problem, graph isomorphism, factoring, nonlinear optimisation beyond quadratic, etc. For instance, a travelling salesman problem of record size 85,900 was solved by the state-of-the-art Concorde algorithm in around 136 CPU-years^{17}. The actual time required to find the solution also depends on the matrix structure. For instance, for positive definite matrices, finding the global minimum of the XY model remains NP-hard due to the non-convex constraints, but it can be effectively approximated using an SDP relaxation^{18} with the performance guarantee π/4^{16}. Sparsity also plays an important role: for sufficiently sparse matrices fast methods exist^{19}. As for many other hard optimisation problems, there are three types of algorithms for minimising spin Hamiltonians on a classical computer: exact methods that find the optimal solution to machine precision, approximation algorithms that generate a solution within a performance guarantee, and heuristic algorithms whose suitability for solving a particular problem comes from empirical testing^{20}. Exact methods can be used to solve small to medium matrix instances only, as they typically involve branch-and-bound algorithms with exponential worst-case runtime. Heuristic algorithms such as simulated annealing can quickly deliver a decent, but suboptimal (and possibly infeasible), solution^{21}. Finally, global minimisation of the XY and Ising models is known to be APX-hard^{22}, so no polynomial-time approximation algorithm can produce a value of the objective function that is arbitrarily close to the optimal one (unless P = NP).
The problem becomes even more challenging when the task is to find not only an approximation to the global minimum of the objective function but also the minimisers themselves, as needed, for instance, in image reconstruction: two configurations can have values of the objective function that are very close while their sets of minimisers are entirely different.
Recently, several platforms were proposed and demonstrated a proof-of-principle of finding the global minimum of spin Hamiltonians such as the Ising and XY models using gain-dissipative quantum and classical systems: injection-locked lasers^{23}, networks of optical parametric oscillators^{24,25}, coupled lasers^{26}, polariton condensates^{27}, and photon condensates^{28}. In gain-dissipative simulators, the phase of a so-called coherent centre (CC) is mapped onto the "spin" of the simulator. Such a CC can be a condensate^{27,28} or a coherent state generated in a laser cavity^{25,26}. The underlying operational principle of such simulators relies on a gain that is raised from below until a nonzero occupation appears via a supercritical Hopf bifurcation and the system becomes globally coherent across many CCs. Coherence occurs at the maximum occupancy for the given gain. It was suggested and experimentally verified that the maximum occupancy of the system is related to the corresponding spin Hamiltonian^{27}. When the heterogeneity in the densities of the CCs is removed by dynamically adjusting the gain, coherence is established at the ground state of the corresponding spin Hamiltonian^{29}. We refer to these platforms as gain-dissipative analog Hamiltonian optimisers^{30}: despite relying on different quantum hardware, they share a basic principle of operation that suggests convergence to the global minimum of the spin Hamiltonian.
Here, motivated by the operation of such physical systems, we develop a new class of classical gain-dissipative algorithms for solving large-scale optimisation problems based on the Fokker-Planck-Langevin gain-dissipative equations written for a set of CCs. We show how the algorithm can be modified to cover various spin models, continuous and discrete alike. We demonstrate the robustness of such iterative algorithms and show that their parameters can be tuned to work efficiently across various problem sizes and coupling structures. We show that such algorithms can outperform the standard global optimiser algorithms and have the potential to become state of the art. Most importantly, these algorithms can be used as a benchmark for the performance of the physical gain-dissipative simulators. Finally, this framework allows us to estimate the operational time a physical realisation of such simulators would need to achieve the global minimum.
The paper is organised as follows. We formulate a general classical gain-dissipative algorithm for finding the global minimum of various spin Hamiltonians in Section 1. In Sections 2 and 3 we investigate its performance on global optimisation of the XY and Ising Hamiltonians by comparing it to the standard built-in global optimisers of the Scipy optimisation library in Python and to the results of the breakout local search and GRASP algorithms. We discuss the performance of actual physical systems in Section 4 and conclude in Section 5.
Gain-dissipative approach for minimising the spin Hamiltonians
The principle of operation of a gain-dissipative simulator with N CCs for minimisation of the spin Hamiltonians given by Eq. (1) is described by the following set of rate equations^{29,31}

\[\frac{d{\Psi}_{i}}{dt} = {\Psi}_{i}({\gamma}_{i}^{\rm inj} - {\gamma}_{c} - |{\Psi}_{i}{|}^{2}) + \sum_{j;j\ne i} {{\rm{\Delta}}}_{ij}{K}_{ij}{\Psi}_{j} + \sum_{q=1}^{n} {h}_{qi}({\Psi}_{i}^{\ast})^{q-1} + D{\xi}_{i}(t), \qquad (2)\]
where Ψ_{i}(t) is a classical complex function that describes the state of the i-th CC, \({\gamma }_{i}^{{\rm{inj}}}\) is the rate at which particles are injected non-resonantly into the i-th state, γ_{c} is the linear rate of losing particles, and the coupling strengths are represented by Δ_{ij}K_{ij}, where we separated the effect of the particle injection that changes the strength of coupling, represented by Δ_{ij}, from the other coupling mechanisms, represented by K_{ij}. We consider two cases: Δ_{ij} = 1, which physically corresponds to site-dependent dissipative coupling, and \({{\rm{\Delta }}}_{ij}={\gamma }_{i}^{{\rm{inj}}}(t)+{\gamma }_{j}^{{\rm{inj}}}(t)\), appropriate for the description of geometrically coupled condensates^{29}. We also include a complex function Dξ_{i}(t) that represents white noise with a diffusion coefficient D, which disappears at the threshold. The coefficients h_{qi} represent the strength of the external field with the resonance q:1^{31}. Compared to the actual physical description^{29,31}, in writing Eq. (2) we neglected possible self-interactions within a CC, rescaled Ψ_{i} so that the coefficient of the nonlinear dissipation term |Ψ_{i}|²Ψ_{i} is 1, and allowed for several (n) resonant terms to be included. By writing \({{\rm{\Psi }}}_{i}=\sqrt{{\rho }_{i}}\exp [{\rm{i}}{\theta }_{i}]\) and separating real and imaginary parts in Eq. (2) we get the equations for the time evolution of the number density ρ_{i} and the phase θ_{i}

\[{\dot{\rho}}_{i} = 2{\rho}_{i}({\gamma}_{i}^{\rm inj} - {\gamma}_{c} - {\rho}_{i}) + 2\sqrt{{\rho}_{i}}\,\sum_{j;j\ne i} {{\rm{\Delta}}}_{ij}{K}_{ij}\sqrt{{\rho}_{j}}\,\cos {\theta}_{ij} + 2\sum_{q} {h}_{qi}\,{\rho}_{i}^{q/2}\,\cos (q{\theta}_{i}), \qquad (3)\]

\[{\dot{\theta}}_{i} = -\sum_{j;j\ne i} {{\rm{\Delta}}}_{ij}{K}_{ij}\sqrt{\frac{{\rho}_{j}}{{\rho}_{i}}}\,\sin {\theta}_{ij} - \sum_{q} {h}_{qi}\,{\rho}_{i}^{q/2-1}\,\sin (q{\theta}_{i}), \qquad (4)\]

where θ_{ij} = θ_{i} − θ_{j} and the noise terms are omitted for brevity.
As we have previously shown^{29,31}, individual control of the pumping rates \({\gamma }_{i}^{{\rm{inj}}}\) is required to guarantee that the fixed points of the system coincide with the minima of the spin Hamiltonian given by Eq. (1). As the injection rates \({\gamma }_{i}^{{\rm{inj}}}\) rise from zero, they have to be adjusted in time to bring all CCs to condense at the same specified number density ρ_{th}. Mathematically, this is achieved by

\[\frac{d{\gamma}_{i}^{\rm inj}}{dt} = \varepsilon ({\rho}_{\rm th} - {\rho}_{i}), \qquad (5)\]
where ε controls the speed of the gain adjustments. If we take Δ_{ij} = 1, we assign K_{ij} = J_{ij}. If Δ_{ij} depends on the injection rates, the coupling strengths are modified, so they have to be adjusted as well to bring the coupling to the required J_{ij} at the fixed point by

\[\frac{d{K}_{ij}}{dt} = \hat{\epsilon}\,({J}_{ij} - {{\rm{\Delta}}}_{ij}{K}_{ij}), \qquad (6)\]
where \(\hat{\epsilon }\) controls the rate of the coupling strength adjustments. Equation (6) indicates that the couplings need to be reconfigured depending on the injection rate: if the coupling strength scaled by the gain at time t is lower (higher) than the objective coupling J_{ij}, it has to be increased (decreased) at the next iteration. We shall refer to the numerical realisation of Eqs (2 and 5) with Δ_{ij} = 1 and K_{ij} = J_{ij} as the 'GainD algorithm', and to that of Eqs (2, 5 and 6) with \({{\rm{\Delta }}}_{ij}={\gamma }_{i}^{{\rm{inj}}}(t)+{\gamma }_{j}^{{\rm{inj}}}(t)\) and K_{ij} ≠ J_{ij} as the 'GainDmod algorithm'.
The fixed points of Eqs (3–6) are characterised by ρ_{i} = ρ_{th} for all i and Δ_{ij}K_{ij} = J_{ij},
with the total number of particles in the system given by \(M=N{\rho }_{{\rm{th}}}={\sum }_{i}\,{\gamma }_{i}^{{\rm{inj}}}-N{\gamma }_{c}\) + \(\sum _{i,j;j\ne i}\,{J}_{ij}\,\cos \,{\theta }_{ij}\) + \(\sum _{q}\,{\rho }_{{\rm{th}}}^{\frac{q}{2}-1}\,\sum _{i}\,{h}_{qi}\,\cos \,(q{\theta }_{i})\). Such a value of the total number of particles is first reached at the minimum of \({\sum }_{i}\,{\gamma }_{i}^{{\rm{inj}}}\) and, therefore, at the minimum of the spin Hamiltonian given by

\[{H}_{s} = -\sum_{i,j;j\ne i} {J}_{ij}\,\cos {\theta}_{ij} - \sum_{q} {\rho}_{\rm th}^{\frac{q}{2}-1}\,\sum_{i} {h}_{qi}\,\cos (q{\theta}_{i}). \qquad (8)\]
Eq. (8) represents the general functional that our GainD and GainDmod algorithms optimise. By choosing which h_{qi} are nonzero we can emulate a variety of spin Hamiltonians. If h_{qi} = 0, then H_{s} represents the XY Hamiltonian. If only h_{2i} = h_{2} are nonzero with \({h}_{2} > {\sum }_{j;j\ne i}\,{J}_{ij}\) for any i, then the second term on the right-hand side of the Hamiltonian (8) represents a penalty forcing the phases to be 0 or π. This implies that the minima of H_{s} coincide with the minima of the Ising Hamiltonian. If only h_{qi} = h_{q} for q > 2 are nonzero, then the minima of H_{s} coincide with the minima of the q-state planar Potts Hamiltonian with phases restricted to the discrete values θ_{i} = 2πi/q. Finally, introducing nonzero h_{1i} together with nonzero h_{q} for q > 1 brings the effect of an external field of strength \({g}_{i}={h}_{1i}/\sqrt{{\rho }_{{\rm{th}}}}\), in agreement with Eq. (1).
The "NP-hardness assumption" suggests that not only any classical algorithm but also any physical simulator cannot escape the exponential growth of the number of operations with the size of the problem^{32}. To find the global minimum by evolving Eqs (2 and 5), one would need to span an exponentially growing number of phase configurations. This can be achieved either by introducing an exponentially slow increase in the pumping rates when approaching the threshold, or by an exponentially growing number of runs with different noise seeds. In what follows we focus on the second option, as it is more practical and corresponds to the operation of actual physical simulators.
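The second option amounts to a simple restart loop: rerun the stochastic solver with different noise seeds and keep the lowest minimum found. A generic sketch of this strategy, with a toy stand-in objective in place of a real GainD run (`toy_run` is our own illustrative placeholder):

```python
import numpy as np

def best_of_seeds(run_once, n_seeds):
    """Keep the lowest-energy result over independent runs with different noise seeds.

    `run_once(seed) -> (energy, configuration)` is any single run of a stochastic
    solver; here it stands in for one GainD run.
    """
    return min((run_once(seed) for seed in range(n_seeds)), key=lambda r: r[0])

def toy_run(seed):
    """Toy placeholder: sample random phases and report a simple energy."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, 5)
    return float(np.sum(np.cos(theta))), theta
```

Because the runs are independent, the loop parallelises trivially, which is exactly the property the physical simulators exploit.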
Global minimization of the XY Hamiltonian: h_{qi} = 0
To find the global minimum of the XY Hamiltonian we numerically evolve Eqs (2 and 5) with h_{qi} = 0 using the 4th-order Runge-Kutta integration scheme.
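A minimal classical realisation of this scheme is sketched below. For brevity it uses a simple Euler-Maruyama step rather than the 4th-order Runge-Kutta scheme, takes Δ_ij = 1 and K_ij = J_ij, and all parameter values are illustrative rather than those used in the paper:

```python
import numpy as np

def gain_d_xy(J, n_steps=20000, dt=1e-3, gamma_c=1.0, rho_th=1.0,
              eps=5.0, D=0.05, seed=0):
    """Toy GainD run for the XY model (h_qi = 0): Eqs (2) and (5) with Delta_ij = 1."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    psi = np.zeros(N, dtype=complex)     # all CCs start empty
    gamma_inj = np.zeros(N)              # injection rates raised from zero
    for _ in range(n_steps):
        drift = (gamma_inj - gamma_c - np.abs(psi) ** 2) * psi + J @ psi
        noise = D * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
        psi = psi + dt * drift + np.sqrt(dt) * noise
        # feedback of Eq. (5): pump each CC towards the common density rho_th
        gamma_inj = gamma_inj + dt * eps * (rho_th - np.abs(psi) ** 2)
    return np.angle(psi)
```

For a ferromagnetic pair, J_12 = J_21 > 0, the returned phases align (θ_1 ≈ θ_2), which minimises −∑ J_ij cos θ_ij, illustrating how the feedback drives the phases towards a minimum of the XY Hamiltonian.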
To illustrate the operational principle of the GainD algorithm for minimising the XY Hamiltonian, we consider N = 20 nodes and coupling strengths J_{ij} randomly distributed between −10 and 10; see Fig. 1. Starting from the zero initial condition Ψ_{i} = 0, in the first stage of the evolution (while t < 120) the densities are well below the threshold (Fig. 1a), the phases span various configurations (Fig. 1b), and all injection rates are the same (Fig. 1c). When the nodes start reaching, and in some cases overcoming, the threshold, the injection rates are individually adjusted to bring all the nodes to the same density while the phases stabilise to realise the minimum of the XY Hamiltonian.
A numerical approach for solving NP-hard optimisation problems depends on the scale of the problem: intermediate-scale problems can be solved with general programming tools, while large-scale problems require sophisticated algorithms that exploit the structure of a particular type of objective function and are usually solved by iterative algorithms. Since the proposed GainD method based on Eqs (2 and 5) is an iterative algorithm, we aim to investigate its two main aspects. First, we conduct a global convergence analysis on small and mid-scale problems and verify that, with a sufficient number of runs, the algorithm finds the global minimum. That the minimum is truly global we confirm by exploiting other optimisation methods. As for any heuristic iterative algorithm, such convergence properties can be established with confidence only by performing numerous numerical experiments on different problems. Second, we perform a complexity analysis on large-scale problems with a focus on how fast the algorithm converges per run.
To characterise the performance of the GainD algorithm, we compared it to heuristic global optimisation solvers: direct Monte Carlo sampling (MC) and the basin-hopping (BH) algorithm. Both methods are built-in optimisation algorithms of the well-known Scipy optimisation library in Python, so their performance has been carefully tested. The BH algorithm depends on a local minimisation algorithm performing the optimal descent to a local minimum at each iteration. We considered several local minimisation methods as applied to the minimisation of the XY Hamiltonians and determined that the limited-memory quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (L-BFGS-B)^{33,34} shows the best performance (see Supplementary Information). The L-BFGS-B algorithm is a local minimisation solver designed for large-scale problems that shows good performance even for non-smooth optimisation problems^{33,34}. At each run of the MC algorithm, we generate a random starting point and use the L-BFGS-B algorithm to find the nearest local minimum; these minima are then compared to find the global minimum. Applying the simple Monte Carlo method to the random coupling matrices allows one to understand how hard these instances are, and the hardness or easiness of the problems is a critical issue to address for understanding the performance of a newly suggested algorithm such as the GainD algorithm. The BH algorithm is a global minimisation method that has been shown to be extremely efficient for a wide variety of problems in physics and chemistry^{35} and to give better performance on spin Hamiltonian optimisation problems than other heuristic methods such as simulated annealing^{36}. It is an iterative stochastic algorithm that at each iteration applies a random perturbation of the coordinates followed by a local minimisation and an acceptance test of the new coordinates based on the Metropolis criterion.
Again, the L-BFGS-B algorithm showed the best performance as a local optimiser at each step of the BH algorithm. Both the BH and MC algorithms were supplied with the analytical Jacobian of the objective function for better performance.
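For reference, this baseline takes only a few lines of Scipy. The snippet below is our own illustration on an arbitrary random instance: it runs basin-hopping with an L-BFGS-B local minimiser and the analytical Jacobian of the XY Hamiltonian supplied via `jac=True`:

```python
import numpy as np
from scipy.optimize import basinhopping

def xy_energy_and_grad(theta, J):
    """XY energy H = -sum_{i,j} J_ij cos(theta_i - theta_j) and its gradient."""
    diff = np.subtract.outer(theta, theta)
    H = -np.sum(J * np.cos(diff))
    grad = 2.0 * np.sum(J * np.sin(diff), axis=1)  # dH/dtheta_i for symmetric J
    return H, grad

rng = np.random.default_rng(1)
N = 8
upper = np.triu(rng.uniform(-10, 10, (N, N)), 1)
J = upper + upper.T                                # random symmetric couplings

res = basinhopping(
    lambda th: xy_energy_and_grad(th, J),
    rng.uniform(0, 2 * np.pi, N),
    minimizer_kwargs={"method": "L-BFGS-B", "jac": True},
    niter=50,
)
print(res.fun)   # best XY energy found
```

The MC baseline is recovered by replacing the basin-hopping call with repeated L-BFGS-B minimisations from independent random starting points.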
To confirm global convergence, we compared the GainD algorithm, described by Eqs (2 and 5), to the BH and MC algorithms by minimising the XY Hamiltonian for various matrices. In particular, we generated 50 real symmetric coupling matrices J = {J_{ij}} of two types. We considered 'dense' matrices, with elements randomly distributed in [−10, 10], and 'sparse' matrices, in which each CC is randomly connected to exactly three other CCs with coupling strengths randomly generated from an interval with bounds randomly taken from {−10, −3, 3, 10}. For each such matrix, we ran the GainD, BH and MC algorithms: with 500 random initial conditions for the BH and MC algorithms, and with zero initial conditions and 500 different noise seeds for the GainD algorithm. The values of the global minimum of the objective function found by the GainD algorithm and by the comparison methods matched to ten significant digits. For 'dense' matrices the success probabilities of the GainD algorithm were similar to those of both comparison methods. The distribution of success probabilities over the 'dense' matrix instances is shown in Fig. 2(a–c) for N = 50 and suggests that for such matrices the systems have a very narrow spectral gap, so the distributions are densely packed at probabilities above 93% for the MC, 96% for the BH, and 95% for the GainD algorithm. It is known that the spectrum of random matrices is simple^{37}, so more difficult instances can be specifically constructed, as illustrated in Fig. 2(d–f), where the GainD algorithm greatly outperforms the comparison algorithms on 'sparse' matrices. Thus, we established the global convergence properties of the proposed GainD algorithm on various problems and verified that it finds the global minimum.
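Instances of the 'sparse' type can be generated, for example, with a configuration-model sketch. The variant below is our own simplification: each weight is drawn uniformly from [3, 10] and given a random sign, rather than from intervals with bounds sampled from {−10, −3, 3, 10}:

```python
import numpy as np

def sparse_couplings(N, degree=3, seed=0):
    """Coupling matrix in which each CC couples to exactly `degree` others.

    Requires N * degree to be even. Weights are drawn uniformly from [3, 10]
    with a random sign (a simplified stand-in for the paper's 'sparse' instances).
    """
    rng = np.random.default_rng(seed)
    while True:                                    # configuration model with rejection
        stubs = np.repeat(np.arange(N), degree)
        rng.shuffle(stubs)
        pairs = stubs.reshape(-1, 2)
        if np.all(pairs[:, 0] != pairs[:, 1]):     # reject self-loops
            edges = {tuple(sorted(p)) for p in pairs}
            if len(edges) == pairs.shape[0]:       # reject repeated edges
                break
    J = np.zeros((N, N))
    for i, j in edges:
        J[i, j] = J[j, i] = rng.uniform(3, 10) * rng.choice([-1, 1])
    return J
```

The rejection loop resamples until the pairing forms a simple graph; for small degree this succeeds after a handful of attempts.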
Further advantages of the GainD algorithms over the best classical optimisers for some particular types of coupling matrices are elucidated elsewhere^{38}.
Global minimization of the Ising Hamiltonian: h_{2i} = h_{2} ≠ 0
To find the global minimum of the Ising Hamiltonian we solve Eqs (2 and 5) numerically with h_{qi} = 0 for q ≠ 2 and h_{2i} = h_{2}. Based on these equations, we test the GainD algorithm by finding the maxima of the MaxCut optimisation problem on the well-known GSet instances^{39} and summarise our findings in Fig. 3. The optimal MaxCut values^{40} are plotted with coloured rectangles and the solutions of the GainD algorithm are shown with scatter points for 100 runs for each G instance. The algorithm demonstrates good performance, with the average found cuts being within 0.2–0.3% of the optimal solutions for G_{1}–G_{5} and within 1.1–1.8% for G_{6}–G_{10}. The same numerical parameters were used for all simulations. The computational time for finding each cut was limited to the same value of 35–40 s (single-core simulations on a MacBook Pro, 2.7 GHz Intel Core i7, 16 GB 2133 MHz LPDDR3). The time performance of the state-of-the-art algorithms depends on the particular problem: for G_{1}–G_{10} it varies from 13 s to 317 s for a C++ implementation of the BLS algorithm run on an Intel Xeon E5440 at 2.83 GHz with 2 GB of RAM^{40}, and is within 100–854 s for the GRASP-tabu search algorithms^{41}, programmed in C and compiled with GNU, although their solutions deviate much less from the optimal values. We also confirmed that for a coupling matrix of size N = 10000, namely the G70 problem from the GSet, all solutions of the MaxCut problem found in 100 runs of 1000 time iterations are within 1.1% of the known optimal solution^{40}. These results were achieved with an average computational time per run of 530 s, compared to 11365 s for the BLS algorithm. Therefore, the proposed GainD algorithm is highly competitive with the existing state-of-the-art MaxCut algorithms, at least regarding computational time.
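For reference, a GSet instance is a plain-text edge list (to our understanding: a first line 'n m', then one '1-indexed i j w' line per edge), and a cut value can be read off the converged phases by projecting them onto Ising spins. A minimal sketch:

```python
import numpy as np

def read_gset(path):
    """Parse a GSet-style file: first line 'n m', then m lines 'i j w', 1-based indices."""
    with open(path) as f:
        n, m = map(int, f.readline().split())
        J = np.zeros((n, n))
        for _ in range(m):
            i, j, w = f.readline().split()
            J[int(i) - 1, int(j) - 1] = J[int(j) - 1, int(i) - 1] = float(w)
    return J

def cut_value(J, theta):
    """Project phases onto spins s_i = sign(cos theta_i) and evaluate
    cut = sum_{i<j} w_ij (1 - s_i s_j) / 2."""
    s = np.where(np.cos(theta) >= 0, 1, -1)
    return 0.25 * np.sum(J * (1 - np.outer(s, s)))  # double sum counts each pair twice
```

Maximising this cut is equivalent to minimising the Ising Hamiltonian on the same weights, which is exactly the correspondence exploited above.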
The deviation of the solutions from the optimal values can be further reduced by tuning the parameters ρ_{th} and ε, or by investigating extensions of the GainD algorithm. Among such possible modifications is the introduction of individual dynamic rates of gain adjustment ε_{i}(t).
Projected performance of the GainD simulators
So far we have discussed the implementation of the GainD algorithms on a classical computer. An actual physical implementation of these algorithms on simulators will enjoy superfast operation and parallelism in processing various phase configurations as the system approaches the global minimum from below, even if the system behaves entirely classically. Further acceleration could be expected if quantum fluctuations and quantum superpositions contribute to scanning the phase configurations. The times involved in the hardware operation of the GainD simulators vary on scales from pico- to milliseconds. For instance, in the system of non-degenerate optical parametric oscillators (NOPO), the time-division multiplexing method is used to connect a large number of nodes and the couplings are realised by mutual injection through optical delay lines, with the cavity round-trip time being of the order of μs^{25}; it takes of the order of 100 picoseconds for polariton graphs to condense^{27} and 10 ps to 1 ns for photon condensates^{28}. The feedback mechanism can be implemented via optical delay lines (in the NOPO system), by holographic reconfiguration of the injection via a spatial light modulator or mirror light masks (e.g. by DLP high-speed spatial light modulators) in solid-state condensates, or by electrical injection (e.g. in polariton lattices^{42}).
The number of runs needed to reliably find the global minimum grows with the size of the problem N. This growth is expected to be exponential for any algorithm (if P ≠ NP). However, we can compare how the time per run grows with the problem size for the considered algorithms. We perform the complexity analysis on mid- and large-scale problems and summarise the results in Fig. 4. The GainD algorithm demonstrates a consistent speed-up over the BH algorithm for all problem sizes N in Fig. 4(a). The log plot in Fig. 4(b) indicates that both algorithms take polynomial time per run, with the complexity of the GainD algorithm being close to O(N^{2.29}). Note that such polynomial scaling per run does not preclude the exponential growth of the total runtime due to the "NP-hardness assumption"^{32}.
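The O(N^{2.29}) estimate is simply the slope of a least-squares fit of log(time per run) against log(N). A sketch with synthetic timings (illustrative numbers generated to follow the quoted power law, not our measured data):

```python
import numpy as np

def fitted_exponent(sizes, times):
    """Slope of log(time) vs log(N): the empirical polynomial order of time per run."""
    slope, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    return slope

# synthetic timings generated to follow t ~ N^2.29 (illustrative, not measured data)
sizes = np.array([100, 200, 400, 800, 1600])
times = 1e-6 * sizes ** 2.29
print(round(fitted_exponent(sizes, times), 2))   # → 2.29
```

With real measurements the points scatter around the fitted line, and the uncertainty of the slope can be obtained from the fit residuals.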
Next we estimate the time scale on which the analog physical simulators can be expected to operate. For instance, the polariton (photon) lattices span the phase configurations on a picosecond scale, which can be neglected in comparison with the feedback and adjustment times. By taking an upper limit on this feedback time of 0.1 ms^{43} and counting the number of such adjustments in the GainD and GainDmod algorithms, i.e. the average number of time iterations required to reach a fixed point per run, we can estimate an upper bound on the time needed by the physical implementation of the GainD simulator to find the global minimum. Since the physical simulators benefit from the built-in parallelism in choosing among various phase configurations as the system is pumped from below, we assume that the time it takes to bring the system to the threshold is independent of the number of nodes N. Under these assumptions, the estimated time performance of real physical simulators in minimising the Ising or XY Hamiltonians is shown in Fig. 4 by the solid green (yellow) lines for the GainD (GainDmod) simulators. For large N, from Fig. 4 we estimate the speed-up of the GainD simulators in comparison with the classical computations to be of the order of 10^{−5}N^{2}–10^{−7}N^{3}. Due to the adaptive setting of the coupling matrix in the GainDmod algorithm, the number of time iterations grows more slowly with the problem size N than for the GainD algorithm, so that the performance of the GainD simulator may be surpassed by the GainDmod simulator for large N.
Conclusions
Motivated by the recent emergence of a new type of analog Hamiltonian optimisers – the gain-dissipative simulators – we formulated a novel gain-dissipative algorithm for solving large-scale optimisation problems which is easily parallelisable and can be efficiently simulated on classical computers. We showed its computational advantages in comparison with the built-in methods of Python's Scipy optimisation library in minimising the XY Hamiltonian and with the state-of-the-art methods in solving the MaxCut problem. We argue that the GainD algorithm can be generalised to solve different classes of NP-hard problems, both continuous and discrete, and demonstrate this by solving quadratic continuous and binary optimisation problems. The GainD algorithm has the potential to become a new optimisation algorithm superior to other global optimisers. This algorithm allows us to formulate the requirements for simulator hardware built on systems of gain-dissipative oscillators of various nature. Our algorithm, therefore, can be used to benchmark the existing gain-dissipative simulators. When the runtime of the classical algorithm is interpreted in terms of the operation time of the actual physical system, one might expect such simulators to greatly outperform the classical computer.
Finally, we would like to comment on the classical vs quantum operation of such simulators. When a condensate (a coherent state) is formed, the system behaves classically, as many bosons occupy the same single-particle mode and the non-commutativity of the field operators can be neglected. However, the condensation process by which the global minimum of the spin models is found involves quantum effects. It was shown before that the condensation process can be described by a fully classical evolution of the nonlinear Schrödinger equation that takes into account only stimulated scattering effects and neglects spontaneous scattering^{44}. Whether gain-dissipative simulators should be regarded as classical or quantum depends on whether quantum fluctuations and spontaneous scattering effects during condensation provide a speed-up in comparison with entirely classical noise and stimulated scattering. This is an important question to address in future research on such simulators, and comparison with the classical algorithm that we developed based on the gain-dissipative simulator architecture allows one to see whether the time to find the solution scales better than with the best classical algorithms.
References
1. Baxter, R. J. Exactly Solvable Models in Statistical Mechanics. (Academic Press Limited, 1982).
2. Gallavotti, G. Statistical Mechanics: A Short Treatise. (Springer Science & Business Media, 2013).
3. Ambjorn, J. A., Anagnostopoulos, K. N., Loll, R. & Pushkina, I. Shaken, but not stirred – Potts model coupled to quantum gravity. Nucl. Phys. B 807, 251 (2009).
4. Lucas, A. Ising formulations of many NP problems. Frontiers in Physics 2, 5 (2014).
5. Rojas, R. Neural Networks. A Systematic Introduction. (Springer-Verlag, 1996).
6. Bryngelson, J. D. & Wolynes, P. G. Spin glasses and the statistical mechanics of protein folding. Proc. Natl. Acad. Sci. USA 84, 7524 (1987).
7. Nishimori, H. Statistical Physics of Spin Glasses and Information Processing: An Introduction. (Oxford Univ. Press, 2001).
8. Harrison, R. W. Phase problem in crystallography. JOSA A 10(5), 1046–1055 (1993).
9. Bunk, O. et al. Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels. Acta Crystallographica Section A: Foundations of Crystallography 63(4), 306–314 (2007).
10. Fienup, C. & Dainty, J. Phase retrieval and image reconstruction for astronomy. Image Recovery: Theory and Application 231, 275 (1987).
11. Walther, A. The question of phase retrieval in optics. Optica Acta: International Journal of Optics 10(1), 41–49 (1963).
12. Miao, J., Ishikawa, T., Shen, Q. & Earnest, T. Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes. Annu. Rev. Phys. Chem. 59, 387–410 (2008).
13. Dierolf, M. et al. Ptychographic X-ray computed tomography at the nanoscale. Nature 467(7314), 436 (2010).
14. Nishimori, H. & Ortiz, G. Elements of Phase Transitions and Critical Phenomena. (Oxford Univ. Press, 2011).
15. Lokhov, A. Y. et al. Optimal structure and parameter learning of Ising models. Science Advances 4, e1700791 (2018).
16. Zhang, S. & Huang, Y. Complex quadratic optimization and semidefinite programming. SIAM J. Optim. 16, 871 (2006).
17. Applegate, D. L., Bixby, R. E., Chvatal, V. & Cook, W. J. The Traveling Salesman Problem: A Computational Study. (Princeton University Press, 2006).
18. Candes, E. J., Eldar, Y. C., Strohmer, T. & Voroninski, V. Phase retrieval via matrix completion. SIAM Review 57(2), 225–251 (2015).
19. Shechtman, Y., Beck, A. & Eldar, Y. C. GESPAR: Efficient phase retrieval of sparse signals. IEEE Transactions on Signal Processing 62(4), 928–938 (2014).
20. Dunning, I., Gupta, S. & Silberholz, J. What works best when? A systematic evaluation of heuristics for Max-Cut and QUBO. To appear in INFORMS Journal on Computing (2018).
21. Kochenberger, G. et al. The unconstrained binary quadratic programming problem: a survey. J. Comb. Optim. 28, 58–81 (2014).
22. Papadimitriou, C. H. & Yannakakis, M. Optimization, approximation, and complexity classes. J. Comput. Syst. Sci. 43(3), 425–440 (1991).
23. Utsunomiya, S., Takata, K. & Yamamoto, Y. Mapping of Ising models onto injection-locked laser systems. Opt. Express 19, 18091 (2011).
24. Marandi, A., Wang, Z., Takata, K., Byer, R. L. & Yamamoto, Y. Network of time-multiplexed optical parametric oscillators as a coherent Ising machine. Nat. Phot. 8, 937–942 (2014).
25. Takeda, Y. et al. Boltzmann sampling for an XY model using a non-degenerate optical parametric oscillator network. Quantum Science and Technology 3(1), 014004 (2017).
26. Nixon, M., Ronen, E., Friesem, A. A. & Davidson, N. Observing geometric frustration with thousands of coupled lasers. Phys. Rev. Lett. 110, 184102 (2013).
27. Berloff, N. G. et al. Realizing the classical XY Hamiltonian in polariton simulators. Nat. Mat. 16(11), 1120 (2017).
28. Dung, D. et al. Variable potentials for thermalized light and coupled condensates. Nat. Phot. 11(9), 565 (2017).
29. Kalinin, K. P. & Berloff, N. G. Networks of non-equilibrium condensates for global optimization. New J. Phys. 20, 113023 (2018).
30. Kalinin, K. P. & Berloff, N. G. Blockchain platform with proof-of-work based on analog Hamiltonian optimisers. arXiv:1802.10091 (2018).
31. Kalinin, K. P. & Berloff, N. G. Simulating Ising, Potts and external fields by gain-dissipative systems. Phys. Rev. Lett., in press; arXiv:1806.01371 (2018).
32. Aaronson, S. Guest column: NP-complete problems and physical reality. ACM SIGACT News 36(1), 30–52 (2005).
33. Byrd, R. H., Lu, P., Nocedal, J. & Zhu, C. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing 16(5), 1190–1208 (1995).
34. Zhu, C., Byrd, R. H., Lu, P. & Nocedal, J. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Transactions on Mathematical Software (TOMS) 23(4), 550–560 (1997).
35. Wales, D. J. & Doye, J. P. Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms. The Journal of Physical Chemistry A 101(28), 5111–5116 (1997).
36. Kirkpatrick, S., Gelatt, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science 220(4598), 671–680 (1983).
37. Tao, T. & Vu, V. Random matrices have simple spectrum. Combinatorica 37(3), 539–553 (2017).
38. Kalinin, K. P. & Berloff, N. G. Gain-dissipative simulators for large-scale hard classical optimisation. arXiv:1805.01371 (2018).
39. GSets are freely available for download at https://web.stanford.edu/yyye/yyye/Gset/?C=N;O=A.
40. Benlic, U. & Hao, J. K. Breakout local search for the max-cut problem. Eng. Appl. of Art. Int. 26(3), 1162–1173 (2013).
41. Wang, Y., Lü, Z., Glover, F. & Hao, J. K. Probabilistic GRASP-tabu search algorithms for the UBQP problem. Computers & Operations Research 40(12), 3100–3107 (2013).
42. Suchomel, H. et al. An electrically pumped polaritonic lattice simulator. arXiv:1803.08306 (2018).
43. Phillips, D. B. et al. Adaptive foveated single-pixel imaging with dynamic supersampling. Science Advances 3 (2017).
44. Berloff, N. G. & Svistunov, B. V. Scenario of strongly nonequilibrated Bose-Einstein condensation. Physical Review A 66(1), 013603 (2002).
Acknowledgements
N.G.B. acknowledges financial support from the NGP MIT-Skoltech. K.P.K. acknowledges financial support from the Cambridge Trust and EPSRC.
Author information
Contributions
N.B. devised and supervised the research, K.K. ran the computer simulations and prepared the figures. N.B. and K.K. wrote the manuscript.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kalinin, K. P., Berloff, N. G. Global optimization of spin Hamiltonians with gain-dissipative systems. Sci Rep 8, 17791 (2018). https://doi.org/10.1038/s41598-018-35416-1
Keywords
 Spin Hamiltonian
 Global Minimum
 Photon Condensate
 Noise Seeds
 Breakout Local Search
Further reading
 Discrete Polynomial Optimization with Coherent Networks of Condensates and Complex Coupling Switching. Physical Review Letters (2021).
 High-performance combinatorial optimization based on classical mechanics. Science Advances (2021).
 Superpolynomial quantum enhancement in polaritonic neuromorphic computing. Physical Review B (2021).
 Noise-enhanced spatial-photonic Ising machine. Nanophotonics (2020).
 Physics successfully implements Lagrange multiplier optimization. Proceedings of the National Academy of Sciences (2020).