Unified treatment of synchronization patterns in generalized networks with higher-order, multilayer, and temporal interactions

When describing complex interconnected systems, one often has to go beyond the standard network description to account for generalized interactions. Here, we establish a unified framework to simplify the stability analysis of cluster synchronization patterns for a wide range of generalized networks, including hypergraphs, multilayer networks, and temporal networks. The framework is based on finding a simultaneous block diagonalization of the matrices encoding the synchronization pattern and the network topology. As an application, we use simultaneous block diagonalization to unveil an intriguing type of chimera states that appear only in the presence of higher-order interactions. The unified framework established here can be extended to other dynamical processes and can facilitate the discovery of emergent phenomena in complex systems with generalized interactions.

DOI: 10.1038/s42005-021-00695-0

Over the past two decades, networks have emerged as a versatile description of interconnected complex systems [1,2], generating crucial insights into myriad social [3], biological [4], and physical [5] systems. However, it has also become increasingly clear that the original formulation of a static network representing a single type of pairwise interaction has its limitations. For instance, neuronal networks change over time due to plasticity and comprise both chemical and electrical interaction pathways [6]. For this reason, the original formulation has been generalized in different directions, including hypergraphs that account for nonpairwise interactions involving three or more nodes simultaneously [7,8], multilayer networks that accommodate multiple types of interactions [9,10], and temporal networks whose connections change over time [11]. Naturally, with the increased descriptive power comes increased analytical complexity, especially for dynamical processes on these generalized networks.
One important class of dynamical processes on networks is cluster synchronization. Many real-world networks show intricate cluster synchronization patterns, where one or more internally coherent but mutually independent clusters coexist [12][13][14][15][16][17][18][19]. Maintaining the desired dynamical patterns is critical to the function of those networked systems [20,21]. For instance, long-range synchronization in the theta frequency band between the prefrontal cortex and the temporal cortex has been shown to improve working memory in older adults [22].
Up until now, synchronization (and other dynamical processes) in hypergraphs [23][24][25], multilayer networks [26][27][28], and temporal networks [29][30][31] have been studied mostly on a case-by-case basis. Recently, it was shown that, in synchronization problems, simultaneous block diagonalization (SBD) optimally decouples the variational equation and enables the characterization of arbitrary synchronization patterns in large networks [32]. However, aside from multilayer networks, for which the multiple layers naturally translate into multiple matrices [32,33], the full potential of SBD for analyzing dynamical patterns in generalized networks is yet to be realized. As a technique, SBD has also found applications in numerous fields such as semidefinite programming [34], structural engineering [35], signal processing [36], and quantum algorithms [37].
In this Article, we develop a versatile SBD-based framework that allows the stability analysis of synchronization patterns in generalized networks, including hypergraphs, multilayer networks, and temporal networks. This framework enables us to treat all three classes of generalized networks in a unified fashion. In particular, we show that different generalized interactions can all be represented by multiple matrices (as opposed to a single matrix as in the case of standard networks), and we introduce a practical method for finding the SBD of these matrices to simplify the stability analysis. As an application of our unified framework, we use it to discover higher-order chimera states: intriguing cluster synchronization patterns that only emerge in the presence of nonpairwise couplings.

RESULTS AND DISCUSSION
General formulation and the SBD approach. Consider a general set of equations describing N interacting oscillators:

x_i[t + 1] = F(x_i[t]) + h_i(x_1[t], · · · , x_N[t]),    (1)

where F describes the intrinsic node dynamics and h_i specifies the influence of other nodes on node i. We present our framework assuming discrete-time dynamics, although it works equally well for systems with continuous-time dynamics. For a static network with a single type of pairwise interaction, h_i = σ Σ_j C_ij H(x_j), where σ is the coupling strength, the (potentially weighted) coupling matrix C reflects the network structure, and H is the interaction function. When the network is globally synchronized, x_1 = · · · = x_N = s, assuming H only depends on x_j, the synchronization stability can be determined through the Lyapunov exponents associated with the variational equation

δ[t + 1] = (I_N ⊗ JF(s) + σ C ⊗ JH(s)) δ[t],    (2)

where δ = (x_1 − s, · · · , x_N − s) is the perturbation vector, I_N is the identity matrix, ⊗ represents the Kronecker product, and J is the Jacobian operator. In the case of undirected networks, Eq. (2) can always be decoupled into N independent low-dimensional equations by switching to coordinates that diagonalize the coupling matrix C [38]. For more complex synchronization patterns, however, additional matrices encoding information about dynamical clusters are inevitably introduced into the variational equation. In particular, the identity matrix I_N is replaced by diagonal matrices D^(m) defined by

D^(m)_ii = 1 if node i ∈ C_m, and D^(m)_ii = 0 otherwise,    (3)

where C_m represents the mth dynamical cluster (oscillators within the same dynamical cluster are identically synchronized). Moreover, as we show below, when h_i(·) includes nonpairwise interactions, multilayer interactions, or time-varying interactions, it leads to additional coupling matrices C^(k) in the variational equation.
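As a concrete illustration of Eq. (3), the indicator matrices D^(m) can be assembled directly from a cluster assignment. The following is a minimal NumPy sketch; the helper name `cluster_indicators` and the example partition are ours, chosen for illustration only:

```python
import numpy as np

def cluster_indicators(clusters, N):
    """Build the diagonal indicator matrices D^(m), with D^(m)_ii = 1
    if and only if node i belongs to the m-th dynamical cluster C_m."""
    Ds = []
    for C in clusters:
        D = np.zeros((N, N))
        idx = np.fromiter(C, dtype=int)
        D[idx, idx] = 1.0       # ones on the diagonal entries of C_m
        Ds.append(D)
    return Ds
```

For any partition of the N nodes into clusters, the matrices D^(m) sum to the identity I_N, recovering the globally synchronized case as a special instance.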
Consequently, the variational equations for complex synchronization patterns on generalized networks share the following form:

δ[t + 1] = ( Σ_m D^(m) ⊗ JF(s_m) + Σ_m Σ_k C^(k) D^(m) ⊗ JH^(m,k)(s_m) ) δ[t],    (4)

where s_m is the synchronized state of the oscillators in the mth dynamical cluster, and JH^(m,k)(s_m) is a Jacobian-like matrix whose expression depends on the class of generalized networks being considered. For Eq. (4), diagonalizing any one of the matrices D^(m) or C^(k) generally does not lead to optimal decoupling of the equation. Instead, all of the matrices D^(m) and C^(k) should be considered concurrently and be simultaneously block diagonalized to reveal independent perturbation modes. In particular, the new coordinates should separate the perturbation modes parallel to and transverse to the cluster synchronization manifold, and decouple transverse perturbations to the fullest extent possible.
For this purpose, we propose a practical algorithm to find an orthogonal transformation matrix P that simultaneously block diagonalizes multiple matrices. Given a set of symmetric matrices {B^(1), B^(2), . . . , B^(L)}, the algorithm consists of three simple steps:
i. Compute the eigenvectors v_1, · · · , v_N of a random linear combination B = Σ_ℓ ξ_ℓ B^(ℓ), where the ξ_ℓ are independent random coefficients which can be drawn from a Gaussian distribution.
ii. Generate B̄ = Σ_ℓ |V^T B^(ℓ) V|, where V = [v_1, · · · , v_N], and identify the common blocks from the nonzero pattern of B̄.
iii. Set P = [v_π(1), · · · , v_π(N)], where π(1), · · · , π(N) is a permutation of 1, · · · , N such that indices in the same block are sorted consecutively (i.e., the basis vectors v_i corresponding to the same block are grouped together).
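These steps admit a compact NumPy realization. The following is a sketch, not the published reference implementation; here block membership is identified via connected components of the combined nonzero pattern of the transformed matrices, which is one possible way to carry out the grouping:

```python
import numpy as np

def sbd(matrices, tol=1e-8):
    """Find an orthogonal P that simultaneously block diagonalizes the
    given symmetric matrices (finest SBD in the nondegenerate case)."""
    N = matrices[0].shape[0]
    rng = np.random.default_rng()
    # Step i: eigenvectors of a random linear combination of the matrices.
    B = sum(rng.normal() * M for M in matrices)
    _, V = np.linalg.eigh(B)
    # Step ii: combined nonzero pattern of the transformed matrices.
    pattern = np.zeros((N, N), dtype=bool)
    for M in matrices:
        pattern |= np.abs(V.T @ M @ V) > tol
    pattern |= pattern.T
    # Step iii: group eigenvectors belonging to the same common block
    # (connected components of the pattern graph) consecutively.
    labels = -np.ones(N, dtype=int)
    n_blocks = 0
    for i in range(N):
        if labels[i] < 0:
            stack = [i]
            labels[i] = n_blocks
            while stack:
                j = stack.pop()
                for k in np.flatnonzero(pattern[j]):
                    if labels[k] < 0:
                        labels[k] = n_blocks
                        stack.append(k)
            n_blocks += 1
    order = np.argsort(labels, kind="stable")
    return V[:, order]
```

Applying `P = sbd(mats)` to matrices that share a hidden common block structure returns an orthogonal P such that every `P.T @ M @ P` is block diagonal with the same block layout.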
The proposed algorithm is inspired by and adapted from Murota et al. [39]. There, the authors use the eigendecomposition of a random linear combination of the given matrices to find a partial SBD, but the operations needed for refining the blocks can be cumbersome. Here, we show that the simplified algorithm above is guaranteed to find the finest SBD when there are no degeneracies, i.e., when no two eigenvectors v_i share the same eigenvalue (see Methods for a proof). Intuitively, this is because a random linear combination of the B^(ℓ) contains all the information about their common block structure in the absence of degeneracy, which can be efficiently extracted through eigendecomposition. When there is a degeneracy, cases exist for which the proposed algorithm does not find the finest SBD (see Methods for details). However, these cases are rare in practice and are a small price to pay for the improved simplicity and efficiency of the algorithm.
We note that the algorithm can be adapted to asymmetric matrices, and in all nondegenerate cases it finds the finest SBD that can be achieved by orthogonal transformations. However, this does not exclude the possibility that more general similarity transformations could result in finer blocks for certain asymmetric matrices. (For symmetric matrices, general similarity transformations do not have an advantage over orthogonal transformations.) In Fig. 1, we compare the proposed algorithm with two previous state-of-the-art SBD algorithms [32,34]. The algorithms are tested on sets of N × N matrices, each set consisting of 10 random matrices with predefined common block structures (see Methods for how the matrices are generated). For each algorithm and each matrix size N, 100 independent matrix sets are tested. Figure 1 shows the mean CPU time from each set of tests (the standard deviations are smaller than the size of the symbols). The algorithm presented here finds the finest SBD in all cases tested and has the most favorable scaling in terms of computational complexity. For instance, it can process matrices with N ≈ 1000 in under 10 seconds (tested on Intel Xeon E5-2680v3 processors), which is orders of magnitude faster than the other methods. The Python and MATLAB implementations of the proposed SBD algorithm are available online as part of this publication (see Code availability).

Cluster synchronization and chimera states in hypergraphs
Hypergraphs [7] and simplicial complexes [40] provide a general description of networks with nonpairwise interactions and have been widely adopted in the literature. However, the associated tensors describing those higher-order structures are more involved than matrices, especially when combined with the analysis of dynamical processes [64][65][66][67][68][69]. There have been several efforts to generalize the master stability function (MSF) formalism [38] to these settings, for which different variants of an aggregated Laplacian have been proposed [24,25,70,71]. The aggregated Laplacian captures interactions of all orders in a single matrix, whose spectral decomposition allows the stability analysis to be divided into structural and dynamical components, just like the standard MSF for pairwise interactions. However, such a powerful reduction comes at an inevitable cost: simplifying assumptions must be made about the network structure (e.g., all-to-all coupling), node dynamics (e.g., fixed points), and/or interaction functions (e.g., linear) in order for the aggregation to a single matrix to be valid.
Here, we consider general oscillators coupled on hypergraphs without the aforementioned restrictions. For ease of presentation and without loss of generality, we focus on networks with interactions that involve up to three oscillators simultaneously:

x_i[t + 1] = F(x_i[t]) + σ_1 Σ_j A^(1)_ij H^(1)(x_i, x_j) + σ_2 Σ_{j,k} A^(2)_ijk H^(2)(x_i, x_j, x_k),    (5)

where the adjacency matrix A^(1) and the adjacency tensor A^(2) represent the pairwise and the three-body interactions, respectively. To make progress, we use the following key insight from Gambuzza et al. [72]: for noninvasive coupling [i.e., H^(1)(s, s) = 0 and H^(2)(s, s, s) = 0] and global synchronization, synchronization stability in hypergraphs is determined by Eq. (4) with the coupling matrices given by generalized Laplacians. Here, L^(1) is the standard graph Laplacian, L^(1)_ij = δ_ij Σ_k A^(1)_ik − A^(1)_ij, and L^(2) retains the zero row-sum property and is defined as L^(2)_ij = 2δ_ij k^(2)_i − Σ_k A^(2)_ijk, where k^(2)_i = (1/2) Σ_{j,k} A^(2)_ijk is the generalized degree. Higher-order generalized Laplacians for k > 2 can be defined similarly [72].
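Assuming a symmetric three-body adjacency tensor A^(2) (every permutation of a 2-simplex's nodes carries the same entry, one common convention consistent with ref. [72]), both generalized Laplacians take only a few lines of NumPy. This is a sketch; `generalized_laplacians` is a hypothetical helper name:

```python
import numpy as np

def generalized_laplacians(A1, A2):
    """Generalized Laplacians for a hypergraph with pairwise adjacency
    matrix A1 (N x N) and three-body adjacency tensor A2 (N x N x N)."""
    # L1 is the standard graph Laplacian.
    L1 = np.diag(A1.sum(axis=1)) - A1
    # k2 is the generalized degree counting the 2-simplices at each node.
    k2 = A2.sum(axis=(1, 2)) / 2.0
    # L2 keeps the zero row-sum property:
    # L2_ij = 2*delta_ij*k2_i - sum_k A2_ijk.
    L2 = 2.0 * np.diag(k2) - A2.sum(axis=2)
    return L1, L2
```

By construction, each row of L^(2) sums to zero, so a globally synchronized state is annihilated by the higher-order coupling term, mirroring the pairwise case.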
Crucially, we can show that the generalized Laplacians are sufficient for the stability analysis of cluster synchronization patterns provided that the clusters are nonintertwined [73,74] (see Supplementary Note 1 for a mathematical derivation). Thus, in these cases, the problem reduces to applying the SBD algorithm to the set formed by the matrices {D^(m)} (determined by the synchronization pattern) and {L^(k)} (encoding the hypergraph structure). For the most general case that includes intertwined clusters, SBD still provides the optimal reduction, as long as the generalized Laplacians are replaced by matrices that encode more nuanced information about the relations between different clusters [75]. The resulting SBD coordinates significantly simplify the calculation of Lyapunov exponents in Eq. (4) and can provide valuable insight into the origin of instability, as we show below.
As an application to nontrivial synchronization patterns, we study chimera states [76,77] on hypergraphs.
Here, chimera states are defined as spatiotemporal patterns that emerge in systems of identically coupled identical oscillators in which some of the oscillators are mutually synchronized while the others are desynchronized. For a comprehensive review of the different notions of chimeras, see the recent survey by Haugland [78].
The hypergraph in Fig. 2a consists of two subnetworks of optoelectronic oscillators. Each subnetwork is a simplicial complex, in which a node is coupled to its four nearest neighbors through pairwise interactions of strength σ_1 and also participates in three-body interactions of strength σ_2. The two subnetworks are all-to-all coupled through weaker links of strength κσ_1, and in our simulations we take κ = 1/5. The individual oscillators are modeled as discrete maps x_i[t + 1] = β sin^2(x_i[t] + π/4), where β is the self-feedback strength that is tunable in experiments [79,80]. The full dynamical equation of the system combines this local map with the pairwise and three-body couplings described above. Since couplings in previous optoelectronic experiments are implemented through a field-programmable gate array that can realize three-body interactions, we expect that our predictions below can be explored and verified experimentally on the same platform.
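The uncoupled node dynamics can be iterated directly. Below is a minimal sketch; the helper name `opto_map` is ours, and β = 1.5 matches the value used later for Fig. 2c:

```python
import numpy as np

def opto_map(x, beta):
    """Discrete optoelectronic map: x[t+1] = beta * sin^2(x[t] + pi/4)."""
    return beta * np.sin(x + np.pi / 4) ** 2

# Iterate a single uncoupled oscillator; since sin^2 lies in [0, 1],
# the trajectory is confined to [0, beta].
beta = 1.5
x = 0.1
traj = []
for _ in range(200):
    x = opto_map(x, beta)
    traj.append(x)
```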
To characterize chimera states for which one subnetwork is synchronized and the other is incoherent, we are confronted with 10 noncommuting matrices in Eq. (4). Eight of them are {D^(1), · · · , D^(8)}, corresponding to one dynamical cluster with 7 synchronized nodes and seven dynamical clusters with 1 node each (distinguished by colors in Fig. 2a). The other two matrices are {L^(1), L^(2)}, which describe the pairwise and three-body interactions, respectively. Applying the SBD algorithm to these matrices reveals the common block structure depicted in Fig. 2b. The gray block corresponds to perturbations parallel to the cluster synchronization manifold and does not affect the chimera stability. The other blocks control the transverse perturbations (all localized within the synchronized subnetwork C_1) and are included in the stability analysis. This allows us to focus on one 1 × 1 block at a time and to efficiently calculate the maximum transverse Lyapunov exponent (MTLE) Λ of the chimera state using previously established procedures [81,82].

(Caption of Fig. 3: Reduction in computational complexity achieved by the SBD algorithm for cluster size n = 7, intracluster link density p = 0.5, and intercluster link density q = 0.5 as the cluster number M is varied. The box covers the 25th to 75th percentile range, the whiskers mark the 5th to 95th percentile range, and the dots indicate the remaining 10% outliers. Each boxplot is based on 1000 independent network realizations.)

For the system in Fig. 2, SBD coordinates offer not only dimension reduction but also analytical insights. As we show in Supplementary Note 2, because the transverse blocks (colored pink in Fig. 2b) found by the SBD algorithm are all 1 × 1, the Lyapunov exponents associated with the chimera stability are given by a simple formula [Eqs. (7) and (8)], in which λ is a finite constant determined by the synchronous trajectory s[t] of the coherent subnetwork C_1, which in turn is influenced by both σ_1 and σ_2. Using Eqs. (7) and (8), we can calculate the MTLE in the σ_1-σ_2 parameter space to map out the stable chimera region. As can be seen from Fig. 2c, where we fix β = 1.5, chimera states are unstable when oscillators are coupled only through pairwise interactions (i.e., when σ_2 = 0), but they become stable in the presence of three-body interactions of intermediate strength. Figure 2d shows the typical chimera dynamics for β = 1.5, σ_1 = 0.6, and σ_2 = 0.4. According to Eqs. (7) and (8), the higher-order interaction stabilizes chimera states solely by changing the dynamics in the incoherent subnetwork C_2, which in turn influences the synchronous trajectory in C_1 and thus the value of Γ. This insight highlights the critical role played by the incoherent subnetwork in determining chimera stability [82].
To test the complexity reduction capability of the SBD algorithm systematically, we consider networks consisting of M dynamical clusters, each with n nodes (Fig. 3a), such that: (1) each cluster is a random subnetwork with link density p, to which three-body interactions are added by transforming triangles into 2-simplices; and (2) each pair of clusters is either all-to-all connected (with probability q) or fully disconnected (with probability 1 − q).
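The two construction rules can be sketched as follows. This hypothetical helper covers the pairwise part only; promoting triangles to 2-simplices would be a separate step acting on the resulting adjacency matrix:

```python
import numpy as np

def benchmark_network(M, n, p, q, rng=None):
    """Adjacency matrix of M clusters of n nodes each: rule 1 places an
    Erdos-Renyi(p) subnetwork inside every cluster; rule 2 connects each
    pair of clusters all-to-all with probability q (else not at all)."""
    rng = rng or np.random.default_rng()
    N = M * n
    A = np.zeros((N, N))
    for m in range(M):
        # Rule 1: random intracluster links with density p.
        block = np.triu(rng.random((n, n)) < p, k=1).astype(float)
        s = slice(m * n, (m + 1) * n)
        A[s, s] = block + block.T
    for a in range(M):
        for b in range(a + 1, M):
            # Rule 2: all-to-all intercluster coupling with probability q.
            if rng.random() < q:
                A[a * n:(a + 1) * n, b * n:(b + 1) * n] = 1.0
                A[b * n:(b + 1) * n, a * n:(a + 1) * n] = 1.0
    return A
```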
For the analysis of the M-cluster synchronization state in these networks, the reduction in computational complexity yielded by the SBD algorithm can be measured using r^(α) = Σ_i n_i^α / N^α, where n_i is the size of the ith common block of the transformed matrices. If the computational complexity of analyzing Eq. (4) in its original form scales as O(N^α), then r^(α) gives the fraction of time needed to analyze Eq. (4) in its decoupled form under the SBD coordinates. Given that the computational complexity of finding the Lyapunov exponents for a fixed point in an n-dimensional space typically lies between O(n^2) and O(n^3), here we set α = 3 as a reference for the more challenging task of calculating the Lyapunov exponents for periodic or chaotic trajectories.
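Computing r^(α) from the common block sizes is a one-liner (sketch; the helper name is ours):

```python
def complexity_ratio(block_sizes, alpha=3):
    """r^(alpha) = sum_i n_i^alpha / N^alpha, with N = sum_i n_i."""
    N = sum(block_sizes)
    return sum(n ** alpha for n in block_sizes) / N ** alpha
```

For a single N × N block r^(α) = 1 (no reduction), while full diagonalization into N blocks of size 1 gives r^(α) = N^(1−α).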
In Fig. 3b, we apply the SBD algorithm to {D^(1), · · · , D^(M), L^(1), L^(2)} and plot r^(3) against the number of clusters M in the networks. We see a reduction in complexity of at least two orders of magnitude (r^(3) ≤ 10^−2) for M ≥ 10. This reduction does not depend sensitively on the other parameters in our model (n, p, and q).

(Caption of Fig. 4, panels b to d: b Eq. (4) under the SBD coordinates. The entries of the transformed matrices that are not required to be zero are represented by solid circles. The gray block corresponds to perturbations that do not affect the chimera stability, and the pink blocks represent transverse perturbations that determine the chimera stability. c Linear stability analysis of chimera states based on the SBD coordinates for a range of coupling strengths σ and self-feedback strengths β. Chimeras are stable when the maximum transverse Lyapunov exponent Λ is negative. d Chimera dynamics for σ = 0.9 and β = 1.1 (green dot in c). Here, x_i is the dynamical state of the ith oscillator, and the vertical axis indexes the oscillators in the respective subnetworks marked in a.)

Synchronization patterns in multilayer and temporal networks
The coexistence of different types (i.e., layers) of interactions in a network [9,10,83] can dramatically influence the underlying dynamical processes, such as percolation [84,85], diffusion [86,87], and synchronization [28,88,89]. Multilayer networks of N oscillators diffusively coupled through K different types of interactions can be described by

x_i[t + 1] = F(x_i[t]) − Σ_{k=1}^{K} σ_k Σ_j L^(k)_ij H^(k)(x_j[t]),    (9)

where L^(k) is the Laplacian matrix representing the links mediating interactions of the form H^(k) and coupling strength σ_k. It is easy to see that the corresponding variational equation for a given synchronization pattern [32,90] is a special case of Eq. (4) and can be readily addressed using the SBD framework.
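A single update of this multilayer dynamics can be sketched generically (hypothetical helper names; any zero-row-sum Laplacians preserve a globally synchronized state):

```python
import numpy as np

def multilayer_step(x, F, Hs, Ls, sigmas):
    """One update: x_i' = F(x_i) - sum_k sigma_k * sum_j L^(k)_ij H^(k)(x_j)."""
    out = F(x)
    for sigma, L, H in zip(sigmas, Ls, Hs):
        out = out - sigma * (L @ H(x))
    return out
```

Because each L^(k) has zero row sums, a uniform state x_i = s is mapped to the uniform state F(s), so global synchronization is preserved for any choice of the coupling strengths σ_k.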
Temporal networks [11] are another class of systems that can naturally be addressed using our SBD framework. Such networks are ubiquitous in nature and society [91,92], and their time-varying nature has been shown to significantly alter many dynamical characteristics, including controllability [91,93] and synchronizability [94][95][96][97].
Consider a temporal network whose connection pattern at time t is described by the Laplacian matrix L^(t),

x_i[t + 1] = F(x_i[t]) − σ Σ_j L^(t)_ij H(x_j[t]).

Here, the stability analysis of synchronization patterns can be simplified by simultaneously block diagonalizing {D^(m)} and {L^(t)}. This framework generalizes existing master stability methods for synchronization in temporal networks [98], which assume that synchronization is global and that the set of all L^(t) is commutative. We also do not require a separation of time scales between the evolution of the network structure and the internal dynamics of the oscillators, which was assumed in various previous studies in exchange for analytical insights [30,31]. It is worth noting that {L^(t)} can in principle contain infinitely many distinct matrices. This would pose a challenge to the SBD algorithm unless there are relations among the matrices that can be exploited. Here, for simplicity, we assume that the L^(t) are selected from a finite set of matrices. This class of temporal networks is also referred to as switched systems in the engineering literature and has been widely studied [29].
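Under the finite-set assumption, the largest Lyapunov exponent of a periodically switched variational block can be estimated by standard norm renormalization. The following is a sketch with the hypothetical helper `switched_mtle`; it is a textbook estimator, not the specific procedure of refs. [81,82]:

```python
import numpy as np

def switched_mtle(jacobians, T=2000, seed=1):
    """Largest Lyapunov exponent of the switched linear system
    v[t+1] = J_(t mod K) v[t], estimated by repeated renormalization."""
    v = np.random.default_rng(seed).normal(size=jacobians[0].shape[0])
    v /= np.linalg.norm(v)
    acc = 0.0
    for t in range(T):
        v = jacobians[t % len(jacobians)] @ v    # cycle through configurations
        norm = np.linalg.norm(v)
        acc += np.log(norm)
        v /= norm                                 # prevent overflow/underflow
    return acc / T
```

Note that switching can change stability in ways no single configuration predicts; in the simple alternating example below, growth in one configuration is exactly canceled by the other, giving a zero exponent.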
As an application, we characterize chimera states on a temporal network that alternates between two different configurations. Figure 4a illustrates the temporal evolution of the network, which has intracluster coupling of strength σ and intercluster coupling of strength κσ (again with κ = 1/5 and the same optoelectronic oscillators and pairwise interaction function as in Fig. 2). This system has a variational equation with noncommuting matrices {D^(1), · · · , D^(6), L^(1), L^(2)}, where L^(1) and L^(2) correspond to the network configurations at odd and even t, respectively. Applying the SBD algorithm reveals one 6 × 6 parallel block and two 2 × 2 transverse blocks (Fig. 4b), effectively reducing the dimension of the stability analysis problem from 10 to 2.
Despite the transverse blocks not being 1 × 1, by inspecting the transformation matrix P one can still gather insights about the nature of the instability. For example, the first pink block consists of transverse perturbations (localized in the synchronized subnetwork) of the form (a, 0, −a, b, −b), while perturbations in the second pink block are constrained to be of the form (c, −2(c + d), c, d, d). Depending on which block becomes unstable first, the synchronized subnetwork (and thus the chimera state) loses stability through different routes. The chimera region based on the MTLE calculated under the SBD coordinates is shown in Fig. 4c, and the typical chimera dynamics for σ = 0.9 and β = 1.1 are presented in Fig. 4d.
To further demonstrate the utility of the SBD framework, we systematically consider temporal networks that alternate between two different configurations. The network construction is similar to that in Fig. 3, except that here each cluster has time-varying instead of nonpairwise interactions. In the example shown in Fig. 5a, each cluster has red links active at odd t and blue links active at even t, while the black links are always active. Figure 5b confirms that the SBD algorithm consistently leads to substantial reduction in computational complexity. Moreover, as in the case of hypergraphs (Fig. 3), the complexity reduction increases as the number of clusters M is increased. Again, the results do not depend sensitively on cluster size and link densities.

CONCLUSION
In this work, we established SBD as a versatile tool to analyze complex synchronization patterns in generalized networks with nonpairwise, multilayer, and time-varying interactions. The method can be easily applied to other dynamical processes, such as diffusion [87], random walks [60], and consensus [99]. Indeed, the equations describing such processes on generalized networks often involve two or more noncommuting matrices, whose SBD naturally leads to an optimal mode decoupling and the simplification of the analysis.
The usefulness of our framework also extends beyond the generalized networks discussed here. Many real-world networks are composed of different types of nodes and can experience nonidentical delays in the communications among nodes. These heterogeneities can be represented through additional matrices and are automatically accounted for by our SBD framework in the stability analysis [100]. Finally, we suggest that our results may find applications beyond network dynamics, since SBD is also a powerful tool for addressing other problems involving multiple matrices in which dimension reduction is desired, such as independent component analysis and blind source separation [101,102]. The flexibility and scalability of our framework make it adaptable to various practical situations, and we thus expect it to facilitate the exploration of collective dynamics in a broad range of complex systems.

METHODS

Proof that the SBD algorithm finds the finest SBD in the absence of degeneracy. Without loss of generality, we can assume all matrices B^(ℓ) to be in their finest common block form. Our goal is then to prove that, when there is no degeneracy, each eigenvector v_i of B is localized within a single (square) block, meaning that the indices of the nonzero entries of v_i are limited to the rows of one of the common blocks shared by {B^(ℓ)} (Fig. 6).
We first notice that B inherits the common block structure of {B^(ℓ)}. Thus, for each n_i × n_i block shared by {B^(ℓ)}, we can always find n_i eigenvectors of B that are localized within that block. When the eigenvalues of B are nondegenerate, the eigenvectors are unique (up to scale), and thus all N = Σ_i n_i eigenvectors of the matrix B are localized within individual blocks.
Based on the results above, it follows that after computing the eigenvectors v_i of the matrix B (step i of the SBD algorithm) and sorting them according to their associated blocks (steps ii and iii of the SBD algorithm), the resulting orthogonal matrix P = [v_π(1), · · · , v_π(N)] will reveal the finest common block structure. Here, finest means that the number of common blocks is maximal (which is equivalent to the sizes of the blocks being minimal), and the block sizes are unique up to permutations.
In the presence of degeneracies (i.e., when there are distinct eigenvectors with the same eigenvalue), no theoretical guarantee can be given that the strategy above will find the finest SBD [39]. To see why, consider the matrices B^(ℓ) = diag(b^(ℓ), b^(ℓ), . . . , b^(ℓ)) formed by the direct sum of duplicate blocks. In this case, a generic B has eigenvalues with multiplicity M, where M is the number of duplicate blocks. For example, if u is an eigenvector corresponding to the first block of B, then (ξ_1 u, . . . , ξ_M u) is also an eigenvector of B (with the same eigenvalue) for any set of random coefficients {ξ_m}. As a result, the eigenvectors of B are no longer guaranteed to be localized within a single block.
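This mixing argument is easy to verify numerically (a minimal sketch with M = 2 duplicate blocks and arbitrary mixing coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=(3, 3))
b = b + b.T                               # random symmetric block
B = np.kron(np.eye(2), b)                 # direct sum of two duplicate blocks
lam, u = np.linalg.eigh(b)
xi1, xi2 = 0.6, -1.3                      # arbitrary mixing coefficients
v = np.concatenate([xi1 * u[:, 0], xi2 * u[:, 0]])
# v has support on both blocks, yet it is an eigenvector of B:
assert np.allclose(B @ v, lam[0] * v)
```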
Generating random matrices with predefined block structures. In order to compare the computational costs of different SBD algorithms, we generate sets of random matrices with predefined common block structures. For each set, we start with L = 10 matrices of size N. The ℓth matrix is constructed as the direct sum of smaller random matrices, B^(ℓ) = diag(b