Abstract
Boolean satisfiability^{1} (k-SAT) is one of the most studied optimization problems, as an efficient (that is, polynomial-time) solution to k-SAT (for k≥3) implies efficient solutions to a large number of hard optimization problems^{2,3}. Here we propose a mapping of k-SAT into a deterministic continuous-time dynamical system with a unique correspondence between its attractors and the k-SAT solution clusters. We show that beyond a constraint-density threshold, the analog trajectories become transiently chaotic^{4,5,6,7}, and the boundaries between the basins of attraction^{8} of the solution clusters become fractal^{7,8,9}, signalling the appearance of optimization hardness^{10}. Analytical arguments and simulations indicate that the system always finds solutions for satisfiable formulae, even in the frozen regimes of random 3-SAT (ref. 11) and of locked occupation problems^{12} (considered among the hardest algorithmic benchmarks), a property partly due to the system's hyperbolic^{4,13} character. The system finds solutions in polynomial continuous time, however, at the expense of exponential fluctuations in its energy function.
Main
Boolean satisfiability^{1} (k-SAT, k≥3) is the quintessential constraint-satisfaction problem, lying at the basis of many decision, scheduling, error-correction and computational applications. k-SAT is in NP (refs 1, 2, 3), that is, its solutions are efficiently (polynomial-time) checkable, but no efficient (polynomial-time) algorithms are known for computing those solutions. If such an algorithm were found for k-SAT, all NP problems would become efficiently computable, because k-SAT is NP-complete^{2,3}.
In k-SAT one is given N Boolean variables {x_{1},…,x_{N}}, x_{i}∈{0,1}, and M clauses (constraints), each clause being the disjunction (OR, denoted ∨) of k variables or their negations. One has to find an assignment of the variables such that all clauses (collectively called a formula) are satisfied (TRUE=‘1’). When the number of constraints is small, it is easy to find solutions; when there are too many constraints, it is easy to decide that the formula is unsatisfiable (UNSAT). In the intermediate range, however, deciding satisfiability can be very hard: the worst-case complexity of all known algorithms for k-SAT is exponential in N.
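The asymmetry described above — solutions are cheap to check but, in the worst case, all known exact methods cost exponential time — can be made concrete with a short sketch. The instance, variable names and helper functions below are ours (hypothetical), not from the paper; literals are encoded as signed 1-based integers, a common SAT convention:

```python
from itertools import product

def clause_satisfied(clause, assignment):
    """A clause is a list of signed 1-based literals, e.g. [1, -2, 3] meaning
    (x1 OR NOT x2 OR x3); it is satisfied if any of its literals is TRUE."""
    return any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)

def formula_satisfied(formula, assignment):
    """Checking a candidate assignment costs only O(k*M): k-SAT is in NP."""
    return all(clause_satisfied(c, assignment) for c in formula)

def brute_force_solve(formula, n):
    """Exhaustive search over all 2^n assignments: exponential in n, which is
    exactly the worst-case behaviour that motivates the analog approach."""
    for bits in product([False, True], repeat=n):
        if formula_satisfied(formula, bits):
            return bits
    return None  # UNSAT

formula = [[1, -2, 3], [-1, 2, 3], [1, 2, -3]]  # hypothetical 3-SAT instance
print(brute_force_solve(formula, 3))  # -> (False, False, False)
```

Here all three clauses happen to be satisfied by the all-FALSE assignment through their negated literals, so the search stops at the first candidate; on hard instances near the satisfiability threshold no such shortcut exists.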
Inspired by the mechanisms of information processing in biological systems, analog computing has received increasing interest from both the theoretical^{14,15,16} and engineering^{17,18,19,20,21} communities. Although the theoretical possibility of efficient computation using chaotic dynamical systems has been shown previously^{15}, nonlinear dynamical systems theory has not been exploited for NP-complete problems, despite the fact that, as shown previously, k-SAT can be formulated as a continuous global optimization problem^{19} and even cast as an analog dynamical system^{20,21}.
Here we present a continuous-time dynamical system for k-SAT, with a dynamics that is rather different from previous approaches. Let us introduce the continuous variables^{19} s_{i}∈[−1,1], such that s_{i}=−1 if the ith variable (x_{i}) is FALSE and s_{i}=1 if it is TRUE. We define c_{mi}=1 if variable x_{i} appears in clause m in direct form (x_{i}), c_{mi}=−1 if it appears negated, and c_{mi}=0 if it is absent from clause m. Defining the constraint function corresponding to clause m as K_{m}(s)=2^{−k}∏_{i=1}^{N}(1−c_{mi}s_{i}) (the product effectively running over the k variables of clause m), we have K_{m}∈[0,1] and K_{m}=0 if and only if clause m is satisfied. The goal would be to find a solution s^{*} with s_{i}^{*}∈{−1,1} to E(s^{*})=0, where E is the energy function E(s)=∑_{m=1}^{M}K_{m}(s)^{2}. If such an s^{*} exists, it is a global minimum of E and a solution to the k-SAT problem. However, finding s^{*} by a direct minimization of E(s) will typically fail owing to non-solution attractors trapping the search dynamics. To avoid such traps, here we define a modified energy function V(s,a)=∑_{m=1}^{M}a_{m}K_{m}(s)^{2}, using auxiliary variables a_{m}>0 similar to Lagrange multipliers^{20,21}. The dynamics in s takes place in the continuous domain [−1,1]^{N}, whose boundary is the N-hypercube with vertex set {−1,1}^{N}. The set of solutions for a given k-SAT formula, called the solution space, occupies a subset of this vertex set. Solution clusters are formed by solutions that can be connected through single-variable flips, always staying within satisfying assignments^{22}. Clearly, V ≥0, and V(s,a)=0 within [−1,1]^{N} if and only if s is a k-SAT solution, for any a_{m}>0. We now introduce a continuous-time dynamical system on the phase space Ω of the variables (s,a) through:

ds/dt = −∇_{s}V(s,a), that is, ds_{i}/dt = ∑_{m=1}^{M} 2a_{m}c_{mi}K_{mi}K_{m},  i=1,…,N  (1a)

da_{m}/dt = a_{m}K_{m},  m=1,…,M  (1b)
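The constraint and energy functions can be sketched numerically. The product form K_{m}(s)=2^{−k}∏_{i}(1−c_{mi}s_{i}) used below is inferred from the stated properties (K_{m}∈[0,1], with K_{m}=0 exactly when the clause is satisfied) and from the relation K_{mi}=K_{m}/(1−c_{mi}s_{i}) given after equation (1); treat the code as a reconstruction, not a quotation:

```python
import numpy as np

def K(s, C):
    """Constraint functions K_m(s) = 2^{-k} * prod_i (1 - c_mi * s_i),
    taking the product only over variables actually present in clause m.
    C is the M x N matrix of c_mi in {-1, 0, +1}."""
    k = np.count_nonzero(C, axis=1)                 # clause lengths
    factors = np.where(C != 0, 1.0 - C * s, 1.0)    # absent variables contribute 1
    return factors.prod(axis=1) / 2.0 ** k

def V(s, a, C):
    """Modified energy V(s, a) = sum_m a_m * K_m(s)^2."""
    return np.sum(a * K(s, C) ** 2)

# Clause (x1 OR NOT x2 OR x3) encoded as a row of c_mi:
C = np.array([[1, -1, 1]])
s_sat = np.array([1.0, 1.0, 1.0])      # x1 TRUE satisfies the clause
s_unsat = np.array([-1.0, 1.0, -1.0])  # all three literals FALSE
print(K(s_sat, C), K(s_unsat, C))      # -> [0.] [1.]
```

At a satisfying corner the factor of the true literal vanishes, driving K_{m} to 0; at the fully violating corner every factor equals 2, so K_{m}=2^{k}/2^{k}=1, matching the stated range K_{m}∈[0,1].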
where ∇_{s} is the gradient operator with respect to s and K_{mi}=K_{m}/(1−c_{mi}s_{i}). The initial conditions for s are arbitrary; for a, however, they have to be strictly positive, a_{m}(0)>0 (for example, a_{m}(0)=1). The k-SAT solutions are fixed points of (1) for any a. The k-SAT solution clusters span piecewise compact, connected sets in Q_{N}, and every point in them is a fixed point of (1) (Supplementary Section SA). System (1) has a number of key properties (see Supplementary Information). First, the dynamics in s stays confined to [−1,1]^{N}. Second, the k-SAT solutions are attractive fixed points of (1). In particular, every point s from the orthant of a k-SAT solution s^{*} with the property s^{2}≥N−1+(k−1)^{2}/(k+1)^{2} is guaranteed to flow into the attractor corresponding to s^{*}. Third, there are no limit cycles. Fourth, for satisfiable formulae the only fixed-point attractors of the dynamics are the global minima of V with V=0. Note that, in principle, the projection of the dynamics onto [−1,1]^{N} could be stuck in some point s while da/dt≠0 indefinitely. This does not happen here, as shown in Supplementary Section SE. Moreover, analytical arguments supported by simulations indicate that the trajectory will leave any domain that does not contain solutions; see the discussion in Supplementary Section SE1. Note that the constraint functions (hence their satisfiability) depend directly only on the location of the trajectory in [−1,1]^{N}, K_{m}=K_{m}(s), and not on the auxiliary variables. The dynamics in the a-space is simple expansion, and for this reason the features of the full phase space Ω lie within its projection onto [−1,1]^{N}. One can actually eliminate the auxiliary variables entirely from the equations, by first solving (1b) to give a_{m}(t)=a_{m}(0) exp(∫_{0}^{t}K_{m}(s(τ)) dτ) and then inserting this into (1a).
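A minimal numerical sketch of the search dynamics (1) follows. We use a naive forward-Euler integrator purely for illustration (the paper's simulations use an adaptive fifth-order Runge–Kutta; see Methods), the product-form K_{m} inferred from K_{mi}=K_{m}/(1−c_{mi}s_{i}), and a simpler capture criterion — stopping as soon as the discretized assignment sign(s) satisfies the formula — instead of waiting for full convergence to the attractor. The small instance is hypothetical:

```python
import numpy as np

def rhs(s, a, C):
    """Right-hand side of (1): gradient descent on V(s,a) = sum_m a_m K_m^2
    in s, and exponential growth da_m/dt = a_m K_m for the auxiliary variables."""
    M, N = C.shape
    ds, da = np.zeros(N), np.zeros(M)
    for m in range(M):
        idx = np.flatnonzero(C[m])
        f = 1.0 - C[m, idx] * s[idx]            # the k factors (1 - c_mi s_i)
        Km = f.prod() / 2.0 ** len(idx)
        da[m] = a[m] * Km
        for j, i in enumerate(idx):
            Kmi = np.delete(f, j).prod() / 2.0 ** len(idx)  # K_m / (1 - c_mi s_i)
            ds[i] += 2.0 * a[m] * C[m, i] * Kmi * Km
    return ds, da

def satisfies(sd, C):
    """A discrete assignment sd in {-1,1}^N satisfies clause m iff some
    literal is TRUE, i.e. c_mi * sd_i = +1 for some i."""
    return all((C[m] * sd > 0).any() for m in range(C.shape[0]))

def solve(C, s0, dt=0.05, t_max=200.0):
    s, a = s0.astype(float).copy(), np.ones(C.shape[0])
    for _ in range(int(t_max / dt)):
        sd = np.where(s >= 0.0, 1.0, -1.0)
        if satisfies(sd, C):
            return sd                            # trajectory reached a solution orthant
        ds, da = rhs(s, a, C)
        s = np.clip(s + dt * ds, -1.0, 1.0)      # clip against Euler overshoot
        a += dt * da
    return None

C = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])
print(solve(C, np.full(3, -0.5)))
```

Starting from the symmetric point s=(−0.5,−0.5,−0.5), the all-positive clause grows its auxiliary weight a_{1} fastest, tilting the flow towards the solution (1,1,1); this illustrates how the a-dynamics (1b) provides the feedback that drags trajectories out of otherwise balanced regions.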
Another fundamental feature of (1) is that it is deterministic: for a given formula f, any initial condition generates a unique trajectory, and any set from Ω has a unique preimage arbitrarily far back in time. Hence, the characteristics of the solution space are reflected in the properties of the invariant sets^{7} of the dynamics (1) within the hypercube [−1,1]^{N}. The deterministic nature of (1) allows us to define basins of attraction of solution clusters by colouring every point in [−1,1]^{N} according to which cluster the trajectory flows to if started from there. These basins fill the hypercube up to a set of zero (Lebesgue) measure, which forms the basin boundary^{7}, from where the dynamics (by definition) cannot flow to any of the attractors. A k-SAT formula f can be represented as a hypergraph G(f) (or, equivalently, a factor graph) in which nodes are variables and hyperedges are clauses connecting the nodes/variables in the clause. Pure literals are those that participate in one or more clauses but always in the same form (direct or negated); hence, they can always be chosen so as to satisfy those clauses. The core of G(f) is the subgraph left after sequentially removing all of the hyperedges having pure literals^{23}. For simple formulae (such as those without a core), the dynamics of (1) is a laminar flow and the basin boundaries form smooth, non-fractal sets (Figs 1a,c and 2, top two rows). Adding more constraints develops a core, the spin equations (1a) become mutually coupled, and the trajectories may become chaotic (Fig. 1b, Supplementary Section SF and Fig. S8), with the basin boundaries becoming fractal^{7,8,9} (Figs 1d, 2 and Supplementary Fig. S4). Therefore, as the constraint density α=M/N is increased within predefined ensembles of formulae (random k-SAT, occupation problems, k-XOR-SAT and so on), a sharp change to chaotic behaviour is expected at a chaotic transition point α_{χ}, where a chaotic core appears with non-zero statistical weight in the ensemble as N→∞. As an example, let us consider 3-XOR-SAT.
In this case, owing to its inherently linear nature, it is actually better to work directly with the parity-check equations as constraints, instead of their conjunctive normal form. The chaotic core here is a small finite hypergraph, and thus α_{χ} coincides with the so-called dynamical transition point α_{d} computed exactly in ref. 24 (see Supplementary Section SG and Fig. S4). Note that a core can be non-chaotic; the existence of a core is thus only a necessary condition for the appearance of chaos, and in general the two transitions might not coincide. On further increasing the number of constraints (within any formula ensemble), unsatisfiability appears at a threshold value α_{s}>α_{χ} beyond which almost all formulae are unsatisfiable (UNSAT regime)^{11,12,22,24,25,26,27,28}. The closer α is to α_{s}, the harder it is to find solutions, and beyond the so-called freezing transition point α_{f}<α_{s} (in the frozen regime) all known algorithms take exponentially long times or simply fail to find solutions^{11,12}. A variable is frozen if it takes on the same value for all solutions within a cluster, and a cluster is frozen if an extensive number of its variables are frozen. In the frozen regime all clusters are frozen and they are also far apart (separated by a Hamming distance of order N)^{11,12}. For random 3-SAT (clauses chosen uniformly at random for fixed α), α_{s}≈4.267 (ref. 27) and α_{f}≈4.254 (ref. 28), all known local search algorithms become exponential or fail beyond α=4.21 (ref. 29), and survey-propagation-based algorithms fail beyond α=4.25 (ref. 28). As the frozen regime is very thin in random 3-SAT, the so-called locked occupation problems (LOPs) have been introduced^{12}. In LOPs all clusters are formed by exactly one solution; hence, they are completely frozen, and the frozen regime extends from the clustering (dynamical) transition point ℓ_{d} to the satisfiability threshold ℓ_{s} and is thus very wide^{12}. An example LOP is random ‘+1-in-3-SAT’ (ref. 12), made of constraints that contain no negated variables, a constraint being satisfied only if exactly one of its variables is 1 (TRUE). In +1-in-3-SAT, beyond ℓ_{d} all known algorithms have exponential search times or fail to find solutions (here ℓ=3M/N).
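The locked structure of +1-in-3-SAT is easy to see by exhaustive enumeration on a toy instance (a hypothetical one with four variables; the point is that every pair of solutions differs in at least two variables, so no single-variable flip connects them and each solution forms its own cluster):

```python
from itertools import product

def one_in_three_satisfied(triples, assignment):
    """+1-in-3-SAT: each clause is a triple of (un-negated) variables and is
    satisfied iff exactly one of the three is 1 (TRUE)."""
    return all(sum(assignment[i] for i in t) == 1 for t in triples)

triples = [(0, 1, 2), (1, 2, 3)]  # hypothetical instance over x0..x3
sols = [bits for bits in product([0, 1], repeat=4)
        if one_in_three_satisfied(triples, bits)]
print(sols)  # -> [(0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 1)]
```

The three solutions are pairwise at Hamming distance at least 2: flipping any single variable of a solution unbalances some clause, which is exactly the locking mechanism that makes these ensembles hard for local search.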
As chaos is present for satisfiable formulae, that is, when system (1) has attracting fixed points, it is necessarily of transient type. Transient chaos^{4,5,6,7} is ubiquitous in systems with many degrees of freedom such as fluid turbulence^{30}. It appears as the result of homoclinic/heteroclinic intersections of the invariant manifolds of hyperbolic (unstable) fixed points of (1) lying within the basin boundary^{7,8,9}, leading to complex (fractal) foliations of the phase space (see Supplementary Section SF). We observed the prevalence of transient chaos in the whole region α_{χ}<α<α_{s} for all of the problem classes we studied. Interestingly, the velocity fluctuations of trajectories in the chaotic regime are qualitatively similar to those of fluid parcels in turbulent flows as shown in Supplementary Section SK. Our findings indicate that chaotic behaviour may be a generic feature of algorithms searching for solutions in hard optimization problems, corroborating previous observations^{10} using a heuristic algorithm based on iterated maps.
In the following we show results for random 3-SAT and +1-in-3-SAT formulae in the frozen regime; the same conclusions, however, hold for the other ensembles that we tested. To investigate the complexity of computation by the flow (1), we monitored the fraction p(t) of problems not solved by continuous time t, as a function of N and α. Figure 3a,c shows that even in the frozen phase the fraction of problems unsolved by time t decays exponentially with t, that is, by a law p(t)=re^{−λ(N)t}. The decay rate λ(N) obeys λ(N)=bN^{−β}, with β≈1.6 in both cases; see Fig. 3b,d. From these two equations, the continuous time t(p,N) needed to solve a fixed (1−p)th fraction of random formulae (or to miss solving the pth fraction of them) is:

t(p,N) = λ(N)^{−1} ln(r/p) = (N^{β}/b) ln(r/p),  (2)
indicating that the continuous time needed to find solutions scales as a power law with N. Equation (2) also implies power-law scaling for almost all hard instances in the limit N→∞ (Supplementary Section SH). The length in [−1,1]^{N} of the corresponding continuous trajectories also scales as a power law with N (Supplementary Fig. S7b and Section SJ). Note, however, that this does not mean that the algorithm itself is a polynomial-cost algorithm, as the energy function V can have exponentially large fluctuations. As the numerical integration happens on a digital machine, it approximates the continuous trajectory with discrete points. Monitoring the fraction of formulae left unsolved as a function of the number of discretization steps n_{step} in the frozen phase, we find exponential behaviour for n_{step}(p,N) (Supplementary Sections SI, SJ and Fig. S6). The difference between the continuous- and discrete-time complexities is due to the wildly fluctuating nature of the chaotic trajectories (see Fig. 1b and Methods) in the frozen phase. Compounding this, we also observe the appearance of the Wada property^{7,8} in the basin boundaries (Fig. 4). A fractal basin boundary has the Wada property if its points lie simultaneously on the boundary of at least three colours/basins. (An amusing method that creates such sets uses four Christmas ball ornaments^{7}.) Although the Wada property does not affect the true (mathematical) analog trajectories, numerical errors may switch the numerical trajectories between the basins. As the clusters are far apart, the switched trajectory will flow towards another cluster, into a practically opposing region of [−1,1]^{N}, until it may come close to the basin boundary again, and so on, partially randomizing the trajectory in [−1,1]^{N}.
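The practical meaning of equation (2) — polynomial continuous time — can be checked with a two-line computation. The constants b and r below are placeholders, not values fitted in the paper:

```python
import math

def t_solve(p, N, b=1.0, r=1.0, beta=1.6):
    """Continuous time needed so that only a fraction p of formulae remains
    unsolved, from p(t) = r * exp(-lambda(N) * t) with lambda(N) = b * N^-beta.
    b and r are hypothetical placeholder constants."""
    return (N ** beta / b) * math.log(r / p)

# Doubling N multiplies the continuous search time by 2^beta ~ 3.03 --
# a fixed power-law factor, not the ~const^N blow-up of an exponential
# algorithm (the r/p factor cancels in the ratio).
ratio = t_solve(0.01, 200) / t_solve(0.01, 100)
print(round(ratio, 2))  # -> 3.03
```

This is why the distinction drawn in the text matters: the continuous time is polynomial in N, while the discretization cost n_{step} of following the wildly fluctuating trajectory on a digital machine can still grow exponentially.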
We conjecture that the power-law scaling of the continuous search times (2) is due in part to a generic property of the dynamical system (1), namely that it is hyperbolic^{4,6,13} or near-hyperbolic. It has been shown that for hyperbolic systems the trajectories escape from regions far away from the attractors towards the attractors at an exponential rate, for almost all initial conditions^{4,6,13}. That is, the fraction of trajectories still searching for a solution after time t decays as e^{−κt} (Supplementary Fig. S9), where κ is the escape rate. Thus, κ^{−1} can be considered a measure of hardness for a given formula. When taken over an ensemble at a given α, this property generates the exponential decay of p(t) with an average escape rate λ.
The form of the energy function V incorporates the influence of all of the clauses at all times, and in this sense (1) is a non-local search algorithm. As shown above, the auxiliary variables can be eliminated; they nevertheless offer a convenient interpretation of the dynamics: one can think of them as providing extra dimensions along which the trajectories escape from local wells, with form (1b) supplying the positive feedback that guarantees this escape. Clearly, these equations are not unique, and other forms based on the same principles may work just as well.
Methods
To simulate (1), we use a fifthorder adaptive Cash–Karp Runge–Kutta method with monitoring of local truncation error to ensure accuracy. To keep the numerical trajectory within a tube of small, preset thickness around the true analog trajectory in Ω (Supplementary Fig. S5), the Runge–Kutta algorithm occasionally carries out an exponentially large number of discretization steps n_{step}. However, this happens only for hard formulae, when the analog trajectory has wild, chaotic fluctuations. For easy formulae, both p(t) and p(n_{step}) decay exponentially as shown in Supplementary Fig. S6a, inset.
References
Cook, S. in Proc. Third Ann. Symp. Theory of Computing 151–158 (ACM, 1971).
Fortnow, L. The status of the P versus NP problem. Commun. ACM 52, 78–86 (2009).
Garey, M. R. & Johnson, D. S. Computers and Intractability: A Guide to the Theory of NP-Completeness (W. H. Freeman, 1990).
Kadanoff, L. P. & Tang, C. Escape from strange repellers. Proc. Natl Acad. Sci. 81, 1276–1279 (1984).
Tél, T. & Lai, Y-C. Chaotic transients in spatially extended systems. Phys. Rep. 460, 245–275 (2008).
Lai, Y-C. & Tél, T. Transient Chaos: Complex Dynamics on Finite-Time Scales (Springer, 2011).
Ott, E. Chaos in Dynamical Systems 2nd edn (Cambridge Univ. Press, 2002).
Nusse, H. E. & Yorke, J. A. Basins of attraction. Science 271, 1376–1380 (1996).
Grebogi, C., Ott, E. & Yorke, J. A. Basin boundary metamorphoses: Changes in accessible boundary orbits. Physica D 24, 243–262 (1987).
Elser, V., Rankenburg, I. & Thibault, P. Searching with iterated maps. Proc. Natl Acad. Sci. 104, 418–423 (2007).
Achlioptas, D. & Ricci-Tersenghi, F. Random formulas have frozen variables. SIAM J. Comput. 39, 260–280 (2009).
Zdeborová, L. & Mézard, M. Locked constraint satisfaction problems. Phys. Rev. Lett. 101, 078702 (2008).
Cvitanović, P., Artuso, R., Mainieri, R., Tanner, G. & Vattay, G. Chaos: Classical and Quantum (Niels Bohr Institute, 2010); ChaosBook.org/version13.
Branicky, M. IEEE Workshop on Physics and Computation 265–274 (IEEE Computer Society Press, 1994).
Siegelmann, H. T. Computation beyond the Turing limit. Science 268, 545–548 (1995).
Moore, C. Recursion theory on the reals and continuous-time computation. Theor. Comput. Sci. 162, 23–44 (1996).
Liu, S-C., Kramer, J., Indiveri, G., Delbruck, T. & Douglas, R. Analog VLSI: Circuits and Principles (MIT Press, 2002).
Chua, L. O. & Roska, T. Cellular Neural Networks and Visual Computing: Foundations and Applications (Cambridge Univ. Press, 2005).
Gu, J., Gu, Q. & Du, D. On optimizing the satisfiability (SAT) problem. J. Comput. Sci. Technol. 14, 1–17 (1999).
Nagamatu, M. & Yanaru, T. On the stability of Lagrange programming networks for satisfiability problems of propositional calculus. Neurocomputing 13, 119–133 (1996).
Wah, B. W. & Chang, Y-J. Trace-based methods for solving nonlinear global optimization and satisfiability problems. J. Glob. Opt. 10, 107–141 (1997).
Achlioptas, D., Coja-Oghlan, A. & Ricci-Tersenghi, F. On the solution-space geometry of constraint satisfaction problems. Random Struct. Algorithms 38, 251–268 (2011).
Molloy, M. Cores in random hypergraphs and Boolean formulas. Random Struct. Algorithms 27, 124–135 (2005).
Mézard, M., Ricci-Tersenghi, F. & Zecchina, R. Two solutions to diluted p-spin models and XORSAT problems. J. Stat. Phys. 111, 505–533 (2003).
Mézard, M., Parisi, G. & Zecchina, R. Analytic and algorithmic solution of random satisfiability problems. Science 297, 812–815 (2002).
Achlioptas, D., Naor, A. & Peres, Y. Rigorous location of phase transitions in hard optimization problems. Nature 435, 759–764 (2005).
Mertens, S., Mézard, M. & Zecchina, R. Threshold values of random k-SAT from the cavity method. Random Struct. Algorithms 28, 340–373 (2006).
Parisi, G. Some remarks on the survey decimation algorithm for k-satisfiability. http://arxiv.org/abs/cs/0301015 (2003).
Seitz, S., Alava, M. & Orponen, P. Focused local search for random 3-satisfiability. J. Stat. Mech. P06006 (2005).
Hof, B., de Lozar, A., Kuik, D. J. & Westerweel, J. Repeller or attractor? Selecting the dynamical model for the onset of turbulence in pipe flow. Phys. Rev. Lett. 101, 214501 (2008).
Acknowledgements
We thank T. Tél and L. Lovász for valuable discussions and for a critical reading of the manuscript.
Author information
Contributions
M.E-R. and Z.T. conceived and designed the research and contributed analysis tools equally. M.E-R. carried out all simulations and collected and analysed all of the data; Z.T. wrote the paper.
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Supplementary information
Supplementary Information (PDF 2565 kb)
About this article
Cite this article
Ercsey-Ravasz, M. & Toroczkai, Z. Optimization hardness as transient chaos in an analog approach to constraint satisfaction. Nature Phys. 7, 966–970 (2011). https://doi.org/10.1038/nphys2105