Vertex coloring of graphs via phase dynamics of coupled oscillatory networks

While Boolean logic has been the backbone of digital information processing, there exist classes of computationally hard problems wherein this paradigm is fundamentally inefficient. Vertex coloring of graphs, belonging to the class of combinatorial optimization, represents one such problem. It is well studied for its applications in data sciences, life sciences, social sciences and technology, and hence motivates alternative, more efficient non-Boolean pathways towards its solution. Here we demonstrate a coupled relaxation-oscillator-based dynamical system that exploits the insulator-metal transition in Vanadium Dioxide (VO2) to efficiently solve vertex coloring of graphs. Pairwise coupled VO2 oscillator circuits have been analyzed before for basic computing operations, but using complex networks of VO2 oscillators, or any other oscillators, for more complex tasks has been challenging both in theory and in experiments. The proposed VO2 oscillator network harnesses the natural analogue between optimization problems and energy minimization processes in highly parallel, interconnected dynamical systems to approximate optimal coloring of graphs. We further indicate a fundamental connection between the spectral properties of linear dynamical systems and spectral algorithms for graph coloring. Our work not only elucidates a physics-based computing approach but also presents tantalizing opportunities for building customized analog co-processors for solving hard problems efficiently.

• Section 7 describes necessary background for the experimental implementation of such coupled relaxation oscillators using VO2 (Vanadium Dioxide) devices.
• The Appendix contains some results useful for analyses in Section 5.

Dynamics of a system of coupled relaxation oscillators
We consider a system of n coupled VO2 oscillators, where each oscillator is a series combination of a VO2 device and a parallel combination of a series conductance g_s and a loading capacitance c_l. The VO2 device is an MIT (metal-insulator transition) device which switches between a metallic state and an insulating state depending on the voltage v across it. When v > v_h the device switches to the metallic state, and when v < v_l the device switches to the insulating state. Here v_l < v_h, and there is hysteresis, i.e. the device retains its last state when v_l ≤ v ≤ v_h. When a VO2 device is connected in series with a resistance of appropriate magnitude, it shows self-sustained oscillations. As can be seen in figure 1.1b, because the stable points of the circuit in both the states (metallic and insulating) lie outside the region of operation, i.e. they are preceded by a transition, the system never settles to a point.
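The self-sustained oscillation described here can be sketched with a hysteretic two-state model. This is a minimal illustration, not a device model: all parameter values (conductances, capacitance, thresholds) are assumptions chosen only so that both fixed points lie outside the hysteresis window.

```python
# A minimal sketch of one relaxation oscillator. The state x is the node
# voltage normalized to VDD; s = 1 means the device is metallic (node
# charging), s = 0 insulating (node discharging). Hysteresis: switch to
# insulating when x reaches v_h, back to metallic when x falls to v_l.
g_i, g_s, c = 100.0, 1.0, 1.0   # metallic conductance, series conductance, capacitance (illustrative)
v_l, v_h = 0.3, 0.7             # hysteresis thresholds (normalized, illustrative)

def simulate(t_end=20.0, dt=1e-4, x0=0.0):
    """Forward-Euler integration; returns the switching-event times."""
    x, s = x0, 1                # device voltage is high at x = 0, so start metallic
    switch_times = []
    for step in range(int(t_end / dt)):
        # charging pulls x toward g_i/(g_i + g_s) ~ 1; discharging decays toward 0
        dx = ((g_i * (1.0 - x) - g_s * x) if s else -g_s * x) / c
        x += dx * dt
        if s == 1 and x >= v_h:     # charged past the upper threshold -> insulating
            s = 0
            switch_times.append(step * dt)
        elif s == 0 and x <= v_l:   # discharged past the lower threshold -> metallic
            s = 1
            switch_times.append(step * dt)
    return switch_times

spikes = simulate()
print(len(spikes))              # many switching events: self-sustained oscillation
```

Because both fixed points (≈ 0.99 when charging, 0 when discharging) lie outside the window [v_l, v_h], every approach to a fixed point is interrupted by a transition, which is exactly why the oscillation never dies out.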
The dynamics of the coupled system, with n oscillators coupled pairwise to each other using capacitances, can be written as

(C_i + C_c + C_l) v'(t) = H(s) − G(s) v(t)    (1.1)

where C_i and C_l are the diagonal matrices of internal and loading capacitances, and C_c is the coupling capacitance matrix constructed from the coupling capacitances c_c,ij between the i-th and j-th oscillators, with the diagonal entries representing the sums of the rows (or columns).
When all the coupling capacitances are equal to c_c, C_c is simply the scaled Laplacian matrix L of the graph: C_c = c_c L = c_c (D − A), where D is the diagonal matrix of vertex degrees and A is the adjacency matrix of the graph. It should be noted that the loading capacitances are chosen such that diag(C_c + C_l) is constant. We envision a system where the oscillators are connected in a graph which is topologically equivalent to the input graph; as such, the coupling matrix is programmed by the incidence matrix of the input graph. For each row i of C_c, every absent edge ij in the graph adds a loading capacitance of magnitude c_c to the i-th node to maintain a constant diag(C_c + C_l). This ensures an equal loading effect for all the nodes and symmetric dynamics.
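The programming of the coupling network can be sketched as follows; the 4-node graph and the value of c_c are illustrative assumptions. Padding each node with one loading capacitor per absent edge makes diag(C_c + C_l) the same constant everywhere:

```python
import numpy as np

# Build C_c and C_l from an input graph so that diag(C_c + C_l) is the same
# constant for every node (example graph and c_c are illustrative).
c_c = 1.0
A = np.array([[0, 1, 1, 0],     # adjacency matrix of a 4-node input graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
n = A.shape[0]
deg = A.sum(axis=1)
C_c = c_c * (np.diag(deg) - A)          # scaled graph Laplacian c_c(D - A)
C_l = np.diag(c_c * ((n - 1) - deg))    # one loading cap c_c per absent edge
print(np.diag(C_c + C_l))               # constant c_c(n-1) on every node
```

With this choice each node sees the total capacitance c_c (n − 1) from coupling plus loading, so the dynamics stays symmetric regardless of node degree.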
G(s) and H(s) are state-dependent terms, with g_ik and g_sk being the internal conductance and the series conductance of the k-th oscillator, respectively.
This can be written as

x'(t) = (C_i + C_c + C_l)^(-1) (H(s) − G(s) x(t))

where the voltages are normalized to V_DD. In the rest of the text, the state vector will be represented by x(t) instead of v(t).

A symmetric system with identical oscillators
Let us first consider a symmetric system, i.e. equal internal capacitances (c_i), coupling capacitances (c_c), internal conductances (g_i) and series conductances (g_s). In this case C_c = c_c (D − A), where A is the adjacency matrix of the graph and D is the diagonal matrix of vertex degrees. One simple choice of C_l is C_l = c_c (nI − D), so that diag(C_c + C_l) = c_c n I, which is constant. Hence the coefficient matrix becomes (c_c A − (c_i + c_c n)I)^(-1). Let us define B = (c_c A − (c_i + c_c n)I)^(-1). Also let Ŝ be a diagonal matrix where diag(Ŝ) = s. Then H(s) = g_i s and G(s) = g_s I + g_i Ŝ, where I is the identity matrix. The system of (1.1) can then be written as

x'(t) = B (G(s) x(t) − H(s))    (1.2)

We note two important features about the charging transitions: (a) the charging processes are very fast compared to the period of oscillations (figure 1.2), which is why we also refer to them as "charging spikes", and (b) the charging of one oscillator has a weak (but finite) effect on the other oscillators. Hence, we study the dynamics of the coupled relaxation oscillator system in terms of two distinct interacting systems: the linear dynamics in the discharging state s = 0, and the charging transitions.
As the charging processes are very fast, the relative phases of the oscillators are the same as the relative times of the charging spikes in the oscillator waveforms. This gives a good way to visualize how the relative phases of oscillators evolve with time. For all oscillators, we first note all the time instants when the charging spikes start. The time differences between consecutive charging spikes should settle to a constant value if the oscillators settle, say ∆t_i for the i-th oscillator. If all the oscillators synchronize to a common frequency then ∆t_i = ∆t_0 for all i. Then at any n-th charging spike, which occurs at time instant t_n, we can calculate the relative phase of an oscillator w.r.t. a hypothetical oscillator whose charging spikes occur at regular intervals of ∆t_i from the start (t = 0) as

φ(n) = (t_n − n∆t_i) / ∆t_i

When all ∆t_i are equal, i.e. the oscillators synchronize, φ(n) calculates the relative phases w.r.t. a common ∆t_0 for all oscillators. We plot φ(n) vs n for all oscillators in figure 1.3.

[Figure 1.3: The phases φ(n) plotted against n∆t_i for four relaxation oscillator systems solving 3-colorable graphs with the same color partition (5, 5, 5) but different connectivities. Case (a) is a complete 3-partite graph, and the graphs become sparser from (a) to (d). The phase clustering degrades as the graphs become sparser, and for very sparse graphs (d) the oscillators do not synchronize. The number of colors detected using our algorithm is shown with each graph, and the nodes assigned the same color are indicated.]

What we observe is that the phases φ(n) converge and cluster together for dense graphs, but as the graphs become sparse, which are considered harder, the phases do not converge. In the intermediate region between dense and very sparse graphs, the phases do converge but they do not cluster together in groups. In these cases our proposed algorithm and reformulation of vertex coloring is particularly useful because it does not rely on the clustering of phases. Our algorithm does an O(n^2) post-processing on the steady-state order of phases and calculates a color assignment which is always correct but can be non-optimal, i.e. the number of colors can be more than the chromatic number.
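The spike-time bookkeeping above can be sketched as follows. The spike trains here are synthetic examples, and φ is computed as the offset of the n-th spike from a hypothetical regular clock, a reconstruction consistent with the definition in the text:

```python
import numpy as np

# phi(n) = (t_n - n*dt_i)/dt_i for each oscillator, with dt_i estimated as
# the mean inter-spike interval (spike trains here are synthetic examples).
def relative_phases(spike_times):
    phases = []
    for t in spike_times:
        dt_i = np.mean(np.diff(t))      # settles to a constant if the oscillator settles
        n = np.arange(len(t))
        phases.append((t - n * dt_i) / dt_i)
    return phases

# two oscillators locked to the same period, offset by a third of a period
t0 = np.arange(10) * 3.0
t1 = t0 + 1.0
p0, p1 = relative_phases([t0, t1])
print(p1[-1] - p0[-1])                  # constant relative phase of 1/3 cycle
```

When the oscillators synchronize, all ∆t_i agree and the φ values of the two trains differ by a constant, which is what makes the clustering in figure 1.3 visible.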
2 Linear dynamics in the discharge phase s = 0

In the state s = 0, where all the oscillators are in the discharging state, the system is an autonomous linear dynamical system. Hence, the time evolution of this dynamical system is governed by the spectral properties of the coefficient matrix. In an identical system, the equation is

x'(t) = g_s B x(t)

Let the eigenvalues of B be µ_k.
Proposition 1. The eigenvectors of the coefficient matrix B of the identical system are the same as those of the adjacency matrix A. The eigenvalues µ_k of B are related to the eigenvalues λ_k of A as

µ_k = 1 / (c_c (λ_k − c_i/c_c − n))

Proof. For any matrix M with an eigenvalue m, the eigenvectors of M + αI and of β (M + αI)^(-1) are the same as those of M, for any scalars α and β. This can be seen as follows: (M + αI)x = (m + α)x for any eigenvector x of M, and eigenvectors remain unchanged under matrix inversion. Also, the eigenvalues of β (M + αI)^(-1) are β/(m + α).

Substituting α = −(c_i/c_c + n) and β = 1/c_c gives us the required relation between µ_k and λ_k. Now, Perron-Frobenius theory [1] implies that the largest eigenvalue of A is less than or equal to the maximum row sum, which is less than n, i.e. λ_max ≤ r_max < n. Hence λ_k − c_i/c_c − n < 0 for all k, which implies that µ_k < 0 for all k.
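Proposition 1 can be checked numerically on a random graph; the parameter values below are illustrative:

```python
import numpy as np

# B = (c_c A - (c_i + c_c n) I)^(-1) shares eigenvectors with A, and
# mu_k = 1/(c_c(lambda_k - c_i/c_c - n)) < 0 for all k.
rng = np.random.default_rng(0)
n, c_c, c_i = 8, 1.0, 2.0
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T                                  # symmetric 0/1 adjacency matrix
B = np.linalg.inv(c_c * A - (c_i + c_c * n) * np.eye(n))

lam, V = np.linalg.eigh(A)                   # eigenpairs of A
mu = 1.0 / (c_c * (lam - c_i / c_c - n))     # predicted eigenvalues of B
print(np.allclose(B @ V, V @ np.diag(mu)))   # True: same eigenvectors
print(mu.max() < 0)                          # True: all mu_k negative
```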

Asymptotic trajectories and asymptotic order of components of the state vector in a linear dynamical system
In a linear dynamical system with the state variable x(t), the order of the components of x(t) defines a permutation at any time instant t. In the state s = 0, the linear dynamical system is

x'(t) = B x(t)

where B is real and symmetric, and the initial state of the system is x(0) = x_0.

Geometry of permutation regions
For any ordering P of components x_i1 > x_i2 > ... > x_in, the region that corresponds to this ordering is given by

R_P = {x ∈ R^n : x_i1 > x_i2 > ... > x_in}    (2.1)

which, together with its mirror image −R_P, is a pair of n-dimensional simplexes with one vertex at the origin, mirror images of each other about the origin. As such, any line that passes through the origin either passes through both of them, or through none.

Asymptotic direction of trajectories
In a linear dynamical system, the asymptotic order of components is hence governed by the asymptotic direction to which the system state converges.

Proposition 2.
In the linear dynamical system x'(t) = Bx(t), where the coefficient matrix B is real, symmetric and full-rank, the system trajectory always converges asymptotically to a particular direction. Moreover, if the asymptotic direction is given by d(x_0, B) where x(0) = x_0, then d(x_0, B) lies in the eigenspace of B with the largest eigenvalue (including the sign) almost everywhere, i.e. when the system starts from anywhere except on a set of measure 0.
Proof. Let x(t, x_0) be the solution of the dynamical system when the initial state is x(0) = x_0. As the fixed point is 0, the asymptotic direction d(x_0, B) to which the system state converges can be written as

d(x_0, B) = lim_{t→∞} x(t, x_0) e^(−λ(x_0) t)

where λ(x_0) is the Lyapunov exponent of the trajectory starting from x_0. As B is real and symmetric, all its eigenvalues are real and the matrix is diagonalizable. Let B = QΛQ^T, where Λ is the diagonal matrix of all eigenvalues. Then x(t, x_0) = Q e^(Λt) Q^T x_0. The growth rate of this solution is set by the largest eigenvalue whose eigenspace is not orthogonal to x_0; this means λ(x_0) = λ_1 almost everywhere, i.e. everywhere except on a set of measure 0. Hence

d(x_0, B) = Q diag(1, ..., 1, 0, ..., 0) Q^T x_0 = (q_1a q_1a^T + q_1b q_1b^T + ...) x_0

Here, the diagonal elements of the middle matrix are ones only for the rows corresponding to the eigenvalue λ_1, and q_1a, q_1b, ... are orthogonal vectors that span E_1. Hence d(x_0, B) ∈ E_1 almost everywhere. In case the largest eigenvalue λ_1 of B has multiplicity 1, d(x_0, B) is simply q_1 a.e.
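Proposition 2 can be checked by integrating x' = Bx with renormalization at every step (equivalent to power iteration on I + dt·B); the symmetric matrix and the starting point below are random illustrative examples:

```python
import numpy as np

# Integrate x' = Bx and compare the limiting direction with the eigenvector
# of the largest (most positive) eigenvalue of B.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
B = (M + M.T) / 2                   # real symmetric, full rank almost surely
lam, Q = np.linalg.eigh(B)
q_top = Q[:, -1]                    # eigenvector of the largest eigenvalue

x = rng.standard_normal(5)          # generic x0 (measure-zero exceptions aside)
dt = 1e-2
for _ in range(50000):
    x = x + dt * (B @ x)
    x = x / np.linalg.norm(x)       # renormalize: only the direction matters
align = abs(q_top @ x)
print(align)                        # close to 1: d(x0, B) lies in E_1
```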

Asymptotic order of components
The asymptotic order of components of x(t) is determined by the permutation region in which d(x_0, B) lies. The asymptotic order becomes a little more complex when d(x_0, B) lies at the boundary of two or more permutation regions, i.e. some of the components of d(x_0, B) are equal. In such cases, T(d(x_0, B)) is only a partial order as determined by d(x_0, B). T(d(x_0, B)) can be extended to a total order by the asymptotic direction of the system in the remaining space E_2 ⊕ E_3 ⊕ ... ⊕ E_l; let us denote this by d(x_0, B\E_1). Also, let P_E1 be the projection matrix on E_1; then d(x_0, B\E_1) = d(x_0 − P_E1 x_0, B). When d(x_0, B) is at the boundary of some permutation regions, the disambiguation among these regions, i.e. the ordering among the components which are equal, is done by d(x_0, B\E_1), as it is perpendicular to d(x_0, B). Hence, the asymptotic order is determined by both d(x_0, B) and d(x_0, B\E_1). If d(x_0, B\E_1) lies at the boundary of some other permutation regions, then the argument can be extended in a similar way and the asymptotic order of components is determined by d(x_0, B), d(x_0, B\E_1) and d(x_0, B\(E_1 ⊕ E_2)) together, and so on.
The extension of the partial order T(d(x_0, B)) using T(d(x_0, B\E_1)) is similar to the ordinal sum T(d(x_0, B)) ⊕ T(d(x_0, B\E_1)), but a preferential one, i.e. the orders determined by T(d(x_0, B)) are preferred over those determined by T(d(x_0, B\E_1)). Let us denote this operation by the binary operator ⊕, which acts on an ordered pair of two partial orders and gives another partial or total order.
The asymptotic direction of the system in E_2 ⊕ ... ⊕ E_l is simply determined by the eigenvectors and eigenvalues corresponding to E_2, E_3, ..., E_l. Hence d(x_0, B\E_1) = (q_2a q_2a^T + q_2b q_2b^T + ...) x_0 ∈ E_2, where q_2a, q_2b, ... are the eigenvectors corresponding to λ_2. Extending the argument, we have d(x_0, B\(E_1 ⊕ E_2)) ∈ E_3 and so on. Hence, we have the following:

Proposition 3. The asymptotic order of components of x(t) in the linear dynamical system x'(t) = Bx(t), where the coefficient matrix B is real, symmetric and full-rank, is determined by T(d(x_0, B)). In case d(x_0, B) lies on the boundary of some permutation regions, then T(d(x_0, B)) is a partial order which can be extended to a total order as T(d(x_0, B)) ⊕ T(d(x_0, B\E_1)). And in case d(x_0, B\E_1) lies at some boundary, then the asymptotic order is determined by T(d(x_0, B)) ⊕ T(d(x_0, B\E_1)) ⊕ T(d(x_0, B\(E_1 ⊕ E_2))), and so on. Hence, the asymptotic order of components is determined as

T(P_E1 x_0) ⊕ T(P_E2 x_0) ⊕ ... ⊕ T(P_El x_0)

3 Linear dynamics in the charging states s ≠ 0

When s ≠ 0 the system is still a linear dynamical system, but the fixed point is not 0. The identical system in a charging state s can be described as

x'(t) = B G(s) (x(t) − p)

The fixed point of the system in a state s is p = G(s)^(-1) H(s) = (g_i / (g_i + g_s)) s, where B = (c_c A − (c_i + c_c n)I)^(-1) as before (section §2). When g_i ≫ g_s, i.e. the chargings are much faster than the dischargings, the fixed points of the system are close to s, which are corners of the unit cube in n dimensions.
Following the arguments of section §2, even in this case the system trajectory converges to an asymptotic direction. The asymptotic ordering of components depends first on the fixed point, and in case the fixed point has equal components, it also depends on the asymptotic direction of the trajectory. This is captured in the following: Proposition 4. In the linear dynamical system of the charging states x'(t) = B G(s) (x(t) − p), where p = (g_i / (g_i + g_s)) s is the fixed point and the coefficient matrix B is real, symmetric and full-rank, the asymptotic permutation of the components will be the same as the permutation of the components of the fixed point, i.e. T(p). In case the fixed point p lies at (or close to) the boundary of some permutation regions, i.e. some components of p are equal, the disambiguation of ordering among these components can be done by considering the linear dynamics of x'(t) = B G(s) x(t) with the fixed point shifted to 0, and following Proposition 3. Hence, the asymptotic order of components is given by

T(p) ⊕ T(P_sE1 (x_0 − p)) ⊕ T(P_sE2 (x_0 − p)) ⊕ ...

where P_sE1, P_sE2, ... are the projections on the eigenspaces of B G(s).
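A small numerical sketch of Proposition 4 (complete graph, illustrative parameters): with g_i ≫ g_s the charging-state trajectory heads toward a fixed point near the corner s of the unit cube, so the charging nodes end up on top of the ordering.

```python
import numpy as np

# Charging-state dynamics x' = B G(s) (x - p), p = g_i/(g_i + g_s) * s.
n, c_c, c_i = 5, 1.0, 2.0
g_i, g_s = 100.0, 1.0
A = np.ones((n, n)) - np.eye(n)          # complete graph as an example
B = np.linalg.inv(c_c * A - (c_i + c_c * n) * np.eye(n))

s = np.array([1.0, 0.0, 1.0, 0.0, 0.0])  # nodes 0 and 2 are charging
G = g_s * np.eye(n) + g_i * np.diag(s)
p = (g_i / (g_i + g_s)) * s              # fixed point, close to the corner s
M = B @ G                                # coefficient matrix of this state

x = np.array([0.5, 0.4, 0.5, 0.5, 0.5])  # some starting state
dt = 1e-3
for _ in range(40000):
    x = x + dt * (M @ (x - p))
print(np.argsort(-x)[:2])                # the two charging nodes lead the order
```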
In case the matrix B in the equation x'(t) = B G(s) (x(t) − p) is not full rank, the system trajectory does not converge to the point p. If N is the null space of the matrix B and P_N is the projection on the null space N, then the convergence limit point for the trajectory starting from x_0 is p + P_N x_0. Also, N is the null space of B G(s) for all s. Hence, Proposition 4 can be modified for matrices B which are not full-rank as follows:

Proposition 5. In the linear dynamical system as described in Proposition 4, but where B is not full rank, the asymptotic order of components is given by

T(p + P_sN x_0) ⊕ T(P_sE1 (x_0 − p)) ⊕ T(P_sE2 (x_0 − p)) ⊕ ...

where P_sN is the projection matrix on the null space of B G(s).

When x_0 is close to the eigenspaces, i.e. the magnitude of P_N x_0 is very small, the additive term P_N x_0 in the first term does not change the order determined by s. Formally, when max_i {(P_N x_0)_i} < g_s / (g_i + g_s), we have

T((g_i / (g_i + g_s)) s + P_N x_0) = T(s) ⊕ T(P_N x_0)

and hence the asymptotic order becomes T(s) ⊕ T(P_N x_0) ⊕ T(P_sE1 (x_0 − p)) ⊕ ...

Approximation by instantaneous charging
If the chargings are very fast, i.e. g_s/g_i → 0, we can approximate a charging by an instantaneous change in the state from x to x + ∆x, by linearizing the system at the time instant when the state changes from s = 0 to the charging state. Let Ŝ denote a diagonal matrix such that diag(Ŝ) = s, where s is the state vector. When s ≠ 0 we have from (1.2)

x'(t) = B (g_s x(t) + g_i Ŝ x(t) − g_i s)

If the k-th node charges then Ŝx = v_l e_k, where e_k is the k-th axis vector, all of whose components are 0 except the k-th, which is 1. If the k-th node charges completely from v_l to v_h without any state transition in between, we have

∆x = (dv / B_kk) B e_k,  dv = v_h − v_l

which is just a scaled k-th column vector of B. We have the following: Proposition 6. In the dynamical system of (1.2), when s ≠ 0 and only a single node charges, the charging can be approximated by linearizing the system. If the transition occurs from x to x + ∆x, then ∆x is given by ∆x = (dv / B_kk) B e_k. Remark 1. An important point to note here is that this change is independent of x.
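Proposition 6 and Remark 1 can be checked directly; the graph and the thresholds are illustrative:

```python
import numpy as np

# The instantaneous-charging jump dx = (dv / B_kk) * B e_k is a scaled k-th
# column of B and does not depend on the current state x.
rng = np.random.default_rng(2)
n, c_c, c_i = 6, 1.0, 2.0
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T
B = np.linalg.inv(c_c * A - (c_i + c_c * n) * np.eye(n))

v_l, v_h = 0.3, 0.7                 # illustrative hysteresis thresholds
k = 2
dv = v_h - v_l
dx = dv / B[k, k] * B[:, k]         # scaled k-th column of B; no x appears
print(np.isclose(dx[k], dv))        # True: the charging node moves by exactly dv
```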

Vertex Color-Sorting
As can be seen in the system equation of the capacitively coupled oscillators, the discharge phase (where all the oscillators are discharging) is a simple linear differential equation with H(s) = 0. The matrix C_c is just the scaled Laplacian matrix of the graph of the oscillators, and the system dynamics is governed simply by the eigenspectrum of the Laplacian matrix of the graph. As such, there are interesting connections between spectral algorithms for graph coloring and the coupled relaxation oscillator circuit. 1. Any ordering of nodes S is a proper k-Color-Sorting for some k such that χ_A ≤ k ≤ n. Moreover, the system should not only be directed towards a correct color-sorting, but should also have a lower tendency to change the order when T(x_0) is any circular permutation of Q_0(x_0).
Why these conditions hold in the prototypical case of a complete graph with an equal number of nodes in each color class can be seen as follows.
Explanation for condition 1: The adjacency matrix A in the prototypical case is a low-rank matrix with rank equal to the number of colors, i.e. if it is a k-partite graph then the rank is k. The adjacency matrix is a block matrix with k^2 equal-sized blocks, where the diagonal blocks are 0 and the non-diagonal blocks are 1. One eigenvector of the matrix A is the constant vector [1, 1, 1, ...], which is the diagonal of the n-dimensional cube [v_l, v_h]^n and also lies at the intersection of all the simplexes of the permutation regions (equation 2.1), and hence does not affect the asymptotic order of components of x. Hence all the other eigenvectors, which correspond to the non-positive eigenvalues, decide the asymptotic order.
The eigenvectors of B with the least negative eigenvalues (which are the eigenvectors of A with the most negative eigenvalues) have components which are equal on each color class (Appendix A.1), and hence should direct the system towards a correct vertex color-sorting in the state s = 0. We also know that all the eigenvalues of the coefficient matrix in the state s = 0 are negative, and hence, if the system starts with the correct order of components, i.e. T(x_0) = Q_0(x_0), then the system state x will continue to lie in the same permutation region with time.
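The eigenvector structure invoked here can be verified on the prototypical complete 3-partite graph; the class count and class size are illustrative:

```python
import numpy as np

# Eigenvectors of A for the most negative eigenvalue (-m) are constant on
# each color class in the complete k-partite graph with equal classes.
k, m = 3, 5
n = k * m
A = np.ones((n, n)) - np.kron(np.eye(k), np.ones((m, m)))  # complete k-partite
lam, V = np.linalg.eigh(A)

neg = V[:, np.isclose(lam, -m)]      # eigenspace of the most negative eigenvalue
per_class = neg.reshape(k, m, -1)    # nodes grouped by color class
spread = np.ptp(per_class, axis=1).max()
print(neg.shape[1])                  # multiplicity k - 1
print(spread)                        # ~0: components equal within each class
```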
Explanation for condition 2: Assuming very fast charging and using the instantaneous charging approximation, we see from Proposition 6 that the state transition ∆x is in the direction of the k-th column vector of B when the k-th node charges. As shown in appendices A.2 and A.3, in the case of weak coupling, i.e. c_i ≫ c_c, the k-th column vector is constant over all the non-charging components, and hence ∆x does not change the order of the non-charging components.
The variation in the non-charging components of ∆x is inversely proportional to n + m, and hence with larger n and m the charging transition x → x + ∆x preserves the order of the non-charging components better (Appendix A.3). As shown in figure 5.1, the effect of the charging transitions can be seen as small kinks in the waveforms of the non-charging components. The magnitude of these kinks is negligible for weak coupling (a), and is clearly visible for stronger coupling (c). Even though the charging transitions affect the non-charging components in the case of stronger coupling, the order of the non-charging components is not disturbed, i.e. the change in all the non-charging components is almost the same (Appendix A.3).
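The weak-coupling behavior of the columns of B can be probed numerically; the random graph and the capacitance values are illustrative. The spread of the non-charging entries of a column shrinks as c_i/c_c grows, which is why charging jumps preserve their order:

```python
import numpy as np

# Spread of the non-charging entries of the k-th column of B shrinks as the
# coupling is made weaker (c_i >> c_c).
rng = np.random.default_rng(3)
n, k = 10, 3
A = rng.integers(0, 2, (n, n))
A = np.triu(A, 1)
A = A + A.T

def col_spread(c_i, c_c):
    B = np.linalg.inv(c_c * A - (c_i + c_c * n) * np.eye(n))
    col = np.delete(B[:, k], k)      # entries seen by the non-charging nodes
    return col.max() - col.min()

weak, strong = col_spread(100.0, 1.0), col_spread(1.0, 1.0)
print(weak < strong)                 # True: weaker coupling, flatter column
```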
Explanation for condition 3: If the system state x is close to the eigenspace of B with the least negative eigenvalue, say E_1, then x has components which are close for the same color class (Appendix A.1), and components of different color classes have more separation between them by comparison. If the components of x are ordered in increasing order then they follow a pattern {x_a1, x_a2, ..., x_b1, x_b2, ..., x_c1, x_c2, ...}, where a_i are the indices of one color class, b_i of another, etc. If the order among the color classes is changed, say to {x_b1, x_b2, ..., x_a1, x_a2, ..., x_c1, x_c2, ...}, even then x will be close to the eigenspace E_1 because of the multiplicity of the least negative eigenvalue (Appendix A.1). The charging transitions of the nodes of the same color class occur consecutively, with short time intervals between them. These short intervals do not allow the s = 0 dynamics occurring between the transitions to change the order. When all the nodes of one particular class have undergone charging, the system state x again comes close to the eigenspace E_1, because the components of x belonging to the same color class are again close to each other. Hence, the state s = 0 does not disturb this order either. The cycle repeats with very fast consecutive charging processes of the next color class. This also gives rise to clustering of the phases of the nodes w.r.t. their color classes. This property has been explored with mathematical detail in works related to spectral algorithms for graph coloring [3,4,5].

When viewed from the perspective of the coupled relaxation oscillator system of (1.2), the above-mentioned property of the eigenvectors of the adjacency matrix A with the most negative eigenvalues is shared by the eigenvectors of B with the least negative eigenvalues, because of Proposition 1. As shown above, the asymptotic order of the components of the system state in the discharge phase s = 0 of the coupled relaxation oscillator system depends on the least negative eigenvalues of B. Hence, the relaxation oscillator system in the state s = 0 is expected to direct the system towards a correct vertex color-sorting, which satisfies condition 1 of Proposition 8. Conditions 2 and 3 also depend on eigenvectors, and hence similar matrix-perturbation arguments can be applied.

Prototypical experiments and validation
Vanadium dioxide (VO2) is a prototypical insulator-metal transition material system with strong electron-electron and electron-phonon interactions that has been the subject of intense fundamental and applied research. The above-room-temperature phase transition (transition temperature = 340 K) in VO2 has an electronic component characterized by an abrupt change in resistivity (and carrier concentration) of up to five orders of magnitude; the large increase in carrier concentration can be attributed to the collapse of the 0.6 eV band gap (optically measured) across the insulator-to-metal transition. Further, the phase transition also has a structural component wherein the crystal structure evolves from the monoclinic M1 phase with dimerized vanadium atoms in the low-temperature insulating state to the rutile crystal structure in the high-temperature metallic phase.
Despite intense research efforts, the origin of the phase transition in VO2 has been a subject of debate, with competing theories suggesting that the driving force behind the transition could be Mott physics, Peierls physics, or a weighted combination of both mechanisms. Further, the electrically induced phase transition in VO2, which is relevant to electronic VO2 devices like the relaxation oscillators discussed here, is debated to be carrier-density driven or of electro-thermal nature.
With respect to the relaxation oscillators discussed here, the unknown nature of the origin of the electrically induced phase transition in VO2 entails that the critical voltage (V_h, V_l in the figure of the main text) / current cannot be quantitatively predicted, even though empirical measurements indicate that typical critical electric field values are in the 20-60 kV/cm range. However, we emphasize that knowing V_l and V_h, the oscillators can be designed in a deterministic manner. The details of the experiments, the experimental conditions and the theory connecting these experiments with linear dynamical systems for the case of a single oscillator and a coupled pair of oscillators can be found in the authors' earlier publications [6].

A.1 Eigenvectors of B in the prototypical case
For n nodes and k color classes, let U be an n × k matrix where each column vector corresponds to one color class: the components belonging to that particular class are k/n and the rest are 0. As such, U^T A U is a k × k matrix with each entry equal to the average of the entries of the corresponding block in A. In the simple case of a complete multipartite graph with an equal number of nodes (m = n/k) in each class, U^T A U = J − I, where J is a square matrix of all ones and I is the identity matrix. If x is an eigenvector of U^T A U with eigenvalue λ, then

U U^T A U x = λ U x

Now U U^T A is just the scaled version A/m of A, and hence

A (U x) = mλ (U x)

Therefore, if x is an eigenvector of U^T A U then U x is an eigenvector of A. Also, the number of non-zero eigenvalues of A is k, which is equal to the rank of U^T A U, which is full-rank. Hence all the eigenvectors of A with non-zero eigenvalues can be described in this way.

Here U, G and E are the same matrices that describe F, c_ic is as defined above, and D is a k × k matrix given by

D = (1 / (c_i + (n + m) c_c)) (β J_k − I_k),  β = (c_i + n c_c) / (c_i + m c_c)

Proof. As described above, F = U ⊗ G + V ⊗ E. Here G is an identity matrix and E is a rank-1 matrix. Hence, as shown in [10], the inverse of F can be calculated in closed form, with r = c_i / c_c. As can be seen, this difference can be made very small by weak coupling, i.e. c_c ≪ c_i, but more importantly, for increasing n and m this difference reduces.
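The A.1 computation can be verified numerically; the class count and class size below are illustrative:

```python
import numpy as np

# For the complete k-partite graph with equal classes, U^T A U = J - I, and
# an eigenvector x of U^T A U lifts to the eigenvector Ux of A (with its
# eigenvalue scaled by the class size m).
k, m = 3, 4
n = k * m
A = np.ones((n, n)) - np.kron(np.eye(k), np.ones((m, m)))
U = np.kron(np.eye(k), np.ones((m, 1)) / m)     # n x k class-indicator matrix
print(np.allclose(U.T @ A @ U, np.ones((k, k)) - np.eye(k)))  # True

lam0, X = np.linalg.eigh(np.ones((k, k)) - np.eye(k))
v = U @ X[:, 0]                                 # lift an eigenvector of U^T A U
print(np.allclose(A @ v, m * lam0[0] * v))      # True: eigenvector of A
```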