Chaotic, informational and synchronous behaviour of multiplex networks

Understanding the relationship between topology and behaviour in interconnected networks would allow us to characterise and predict behaviour in many real complex networks, since both are usually not simultaneously known. Most previous studies have focused on the relationship between topology and synchronisation. In this work, we provide analytical formulas that show how topology drives complex behaviour: chaos, information, and weak or strong synchronisation, in multiplex networks with constant Jacobian. We also study this relationship numerically in multiplex networks of Hindmarsh-Rose neurons. Whereas behaviour in the analytically tractable network is a direct but not trivial consequence of the spectrum of eigenvalues of the Laplacian matrix, where behaviour may strongly depend on the breaking of symmetry in the topology of interconnections, in Hindmarsh-Rose neural networks the nonlinear nature of the chemical synapses breaks the elegant mathematical connection between the spectrum of eigenvalues of the Laplacian matrix and the behaviour of the network, creating networks whose behaviour strongly depends on the nature (chemical or electrical) of the inter synapses.


Introduction
Complex networks [1][2][3] serve as a model for a broad range of phenomena. The brain, 4,5 social interactions, 6 and linguistics 7 are all examples of systems represented by complex networks. In general, networks are useful models for studying systems that have a spatial extension, for instance, insect populations whose mutual interaction produces the extinction of one of them, 8 the interaction between proteins, 9 and the interaction between gears. 10 Networks can themselves be interconnected with other networks, forming multiplex networks. 13-18 In the case of the brain, 5 interconnections between complex subnetworks are typically made by chemical synapses, while intraconnections can be formed by both chemical and electric synapses. 19 For brain research 19,20 and brain-based cryptography, 21 the interest is to understand the inter and intra couplings such that the units in the complex networks are sufficiently independent (unsynchronous) to achieve independent computations. However, the networks must be sufficiently connected (synchronous) such that information is exchanged between subnetworks and integrated into coherent patterns. 22 The academic community has dedicated much attention to elucidating the interplay between topology and behaviour in multiplex networks. 23-30 Authors have shown an intricate interplay between different aspects of the network topology and weak or strong (not full) synchronisation, which was shown to depend on the ratio between interlinks and all the links in networks of phase oscillators, 27 on the number of interlinks in networks of Rössler oscillators 24 and neural networks, 30 and on the ratio between inter and intra links in networks of heterogeneous maps. 26 Synchronisation was also shown to depend exclusively or complementarily on the electric or chemical couplings in two coupled neurons 31 and in neural networks.
20,28,30,32 In particular, in the work of Ref. 28, it was shown semi-analytically that the stability of the completely synchronous manifold depends on the Laplacian matrix of the electric synapses, the degree of chemical synapses, and the type of chemical synapses (inhibitory or excitatory). The relationship between topology and the diffusive behaviour in multiplex networks composed of two coupled complex networks of ODEs with constant Jacobian was made clear in Refs. 11,12. In this work, we elucidate the interplay between the topological aspects previously described as relevant in the study of synchronisation (i.e., the eigenvalues of the Laplacian, the ratio α between the inter degree and the number of nodes of the subnetworks, and the inter and intra coupling strengths) and complex behaviour in multiplex networks formed by two undirected coupled equal complex networks. We show analytically how topology drives, and is related to, not only weak or strong forms of synchronisation but also other complex forms of behaviour: chaos and information transmission. We thus provide an innovative set of mathematical tools to study how complex behaviour emerges in multiplex networks. This achievement was possible because we were able to analytically calculate, for the first time, one of the most challenging quantities in nonlinear systems, the complete spectrum of Lyapunov exponents, for a class of multiplex networks with constant Jacobian. This intricate relationship was also studied numerically in multiplex neural networks.
Our results show that, in fact, the ratio α is the determinant factor for the complex behaviour of the network, which also explains why the ratio between inter and intra links or the number of interlinks had previously been seen to drive synchronisation. 24,26,27,30 We also show that synchronisation and information, whose quantifiers depend on the spectral gap of the Laplacian, depend exclusively or complementarily on the inter and intra coupling strengths, as observed in Refs. 30,31 and demonstrated in Ref. 28. For networks with constant Jacobian, synchronisation and information depend exclusively on either the intra or the inter coupling strengths if the two networks have symmetric interconnections, and depend complementarily on both intra and inter connections if the two networks have asymmetric interconnections. For the multiplex neural networks, we find that intra and inter couplings complementarily cooperate towards complex behaviour if the two neural complex networks are coupled by inter chemical and excitatory synapses. If the inter couplings are of an inhibitory nature, behaviour mainly depends on the intra coupling. Therefore, it is the excitatory chemical synapses that promote integration between intra (local) and inter (global) synapses in neural networks. On the other hand, in the networks with constant Jacobian, integration between inter and intra couplings comes about by the breaking of symmetry caused by the asymmetric configuration. Moreover, for this configuration, a bottleneck effect appears for an appropriately rescaled inter coupling strength. In this case, an increase in the synchronisation level of the network leads to an increase in the capacity of the network to exchange information.

Methods
The two complex networks connect with each other in one of two ways: a symmetric or an asymmetric interlink configuration. In the symmetric case, each node in a subnetwork can have at most one connection with a corresponding node in the other, equal subnetwork (see Fig. 1). The general asymmetric configuration presents nodes in one network that can randomly connect to other nodes in the other network. The considered network configurations are models of extended space-time chaotic systems [33][34][35][36] or chemical chaos. 37,38 They are also models for two types of structures found in real neural networks: 39 one with stronger community structure (small first non-zero eigenvalue of the Laplacian matrix, or strong intra couplings), and one with a high level of bipartiteness, i.e., two similar complex networks strongly connected by inter couplings (large last eigenvalue of the Laplacian matrix, or strong inter coupling).
We consider two types of dynamics for the nodes of the network: the shift map (see Sec. "Extension to continuous networks" for networks with continuous-time descriptions), forming a discrete network of diffusively connected nodes, and the Hindmarsh-Rose (HR) neuron, 40 connected by inter chemical and intra electrical synapses.
Let X represent the state variables of a network with N = 2N1 nodes formed by two equal coupled complex networks, each composed of N1 nodes, that are coupled by l12 "long-range" interconnections. The dynamical description of the nodes is given by either the discrete-time shift map F(x_n) = 2x_n (mod 1) or the continuous-time function f(x^(i)), representing the Hindmarsh-Rose neuron model.
The discrete network of shift maps is described by

x_{n+1} = 2x_n − [εG + γL] x_n (mod 1), (1)

where α = l12/N1 represents an effective inter degree of the network, ε is the intra coupling strength, and γ is the inter coupling strength. The network can be written in matricial form through the matrices

G = [A 0; 0 A], L = [D1 −B; −B^T D2], (2)

which are Laplacian matrices, where T stands for the transpose. G represents the Laplacian of the two uncoupled complex networks and their intra links (through the Laplacian matrix A), and L represents the inter-coupling Laplacian matrix between the complex networks. D1 and D2 represent the diagonal degree matrices of the adjacency matrices B and B^T, respectively, which describe the inter couplings. Their components are defined as (D1)_ii = Σ_j B_ij and (D2)_ii = Σ_j (B^T)_ij, with null off-diagonal terms. The network can be written in an even more compact form as

x_{n+1} = [2I + M] x_n (mod 1), where M = −(εG + γL). (3)

The network of HR neurons, with coupling in the first coordinate, is described by

ẋ1^(i) = f1(x^(i)) − ε Σ_j G_ij x1^(j) − γ (x1^(i) − V_syn) Σ_j C_ij S(x1^(j)),

where f1 represents the first component of the HR vector flow dynamics, x^(i) is a vector with components (x1, x2, x3) representing the variables of neuron i, G is the Laplacian for the intra electrical couplings, and C (with components C_ij) is an adjacency matrix representing the inter chemical couplings. The chemical synapse function S is modelled by the sigmoidal function S(x1) = 1/[1 + e^(−λ(x1 − Θ_syn))], with Θ_syn = −0.25, λ = 10, and V_syn = 2.0 for excitatory or V_syn = −2.0 for inhibitory synapses. In the brain, short-range connections among neurons happen through electric synapses, due to the potential difference between two neighbouring neuron body cells; in this work, the intra electrical synapses mimic this local interaction. Long-range connections are made by chemical synapses, the inter connections in this work. However, to compare results between the HR networks and the discrete networks, the two subnetworks of HR neurons have equal topologies, a configuration unlikely to be found in the brain, but one that can be interpreted as a paradigmatic model of small brain circuits.
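To make the construction above concrete, the following is a minimal numerical sketch (assuming NumPy) of the matrices G, L and M for two coupled rings in the symmetric configuration; the helper names (`ring_laplacian`, `supra_matrices`, `shift_map_step`) are ours, not from the paper:

```python
import numpy as np

def ring_laplacian(n1):
    # Laplacian A of one subnetwork: an undirected ring of n1 nodes
    A = 2 * np.eye(n1)
    for i in range(n1):
        A[i, (i + 1) % n1] -= 1
        A[i, (i - 1) % n1] -= 1
    return A

def supra_matrices(A, B, eps, gam):
    # G = diag(A, A) holds the intra links; L is built from the inter
    # adjacency B and the diagonal degree matrices D1, D2 (Eq. 2)
    n1 = A.shape[0]
    Z = np.zeros((n1, n1))
    D1 = np.diag(B.sum(axis=1))
    D2 = np.diag(B.sum(axis=0))
    G = np.block([[A, Z], [Z, A]])
    L = np.block([[D1, -B], [-B.T, D2]])
    return G, L, -(eps * G + gam * L)   # M = -(eps*G + gam*L), Eq. (3)

def shift_map_step(x, M):
    # one iteration of the coupled shift-map network: x_{n+1} = [2I + M] x_n mod 1
    return ((2 * np.eye(len(x)) + M) @ x) % 1.0

n1 = 5
A = ring_laplacian(n1)
B = np.eye(n1)                       # symmetric configuration: l12 = N1
G, L, M = supra_matrices(A, B, eps=0.1, gam=0.2)
x = shift_map_step(np.linspace(0.1, 0.9, 2 * n1), M)
print(np.allclose(M.sum(axis=1), 0))  # rows of M sum to zero, as for a Laplacian
```

Replacing `B = np.eye(n1)` with a matrix holding a single nonzero entry gives the bottleneck configuration discussed later.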
As a measure of chaos, we consider the sum of the positive Lyapunov exponents of the network, denoted by H_KS. As a measure of the ability of the network to exchange information, we consider an upper bound for the Mutual Information Rate (MIR) between any two nodes in the network,

I_C = λ1 − λ2, (4)

in which λ1 and λ2 represent the two largest positive Lyapunov exponents of the network. We assume that these two largest Lyapunov exponents are approximations for the two largest expansion rates (or finite-time, finite-resolution Lyapunov exponents) calculated in a bi-dimensional space 41 composed of any two nodes of the network. Equation (4) is constructed under the hypothesis that, given two time series x1(t) and x2(t), an observer is not able to make an infinite-resolution measurement of a trajectory point, but can only specify the location of an x1 × x2 point within a cell belonging to an order-T Markov partition; thus, the correlation of points decays to approximately zero after T iterations. For dynamical networks such as the ones we are working with, measurements can be made with higher resolution, and it is typical to expect that the expansion rates on any 2D subspace formed by the state variables of two nodes are very good approximations of the two largest Lyapunov exponents of the network. Such a choice implies that I_C in Eq. (4) is an invariant of the network and represents the maximal rate of mutual information that can be realised when measurements are made in any two nodes of the network and no time-delay reconstruction is performed. Details about the equivalence between Lyapunov exponents and expansion rates can be seen in Ref. 41, and an explicit numerical comparison can be seen in Ref. 42. An extension of Eq. (4) to measure upper bounds of the MIR in larger subspaces of a network (composed of groups of nodes or multivariable subspaces) can also be seen in Ref. 41.
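Equation (4) is simple enough to transcribe directly; below is a sketch with illustrative exponent values (the function name is ours):

```python
import math

def mir_upper_bound(l1, l2):
    # Eq. (4): I_C = lambda_1 - lambda_2 for the two largest positive
    # Lyapunov exponents; if lambda_2 <= 0, only one direction expands
    # and the bound reduces to lambda_1
    return l1 - l2 if l2 > 0 else l1

# illustrative values: lambda_1 = log 2 (the shift map), lambda_2 = 0.3
print(mir_upper_bound(math.log(2.0), 0.3))
print(mir_upper_bound(math.log(2.0), -0.1))
```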
Synchronisation is detected by various approaches. The linear stability of the synchronous manifold for complete synchronisation in the discrete network is calculated analytically. For both types of networks, the level of weak synchronisation is estimated by the value of H_KS, since the higher H_KS is (and the larger it is with respect to I_C), the less synchronous the nodes in the network are. Notice also that if H_KS = I_C, the network is generalised synchronous and possesses only one positive Lyapunov exponent. For the network of Hindmarsh-Rose neurons, we measure synchronisation by calculating the order parameter r and the local order parameter r_link, as introduced in Ref. 43: the order parameter is calculated considering the phase difference between all pairs of nodes in the continuous network, as an estimate of the synchrony level of the network. The phase φ_i of a node i is calculated using the equation for its derivative, φ̇ = (ẋ2 x1 − ẋ1 x2)/(x1^2 + x2^2), derived in Refs. 44 and 45.
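The following is a sketch of how the phase and the order parameter r can be computed from time series, assuming NumPy; we use the arctangent-based phase, whose time derivative matches the expression above, together with the standard Kuramoto definition of r. All names and test signals are illustrative:

```python
import numpy as np

def phase(x1, x2):
    # continuous phase whose time derivative equals
    # (x2'*x1 - x1'*x2) / (x1**2 + x2**2), as in the text
    return np.unwrap(np.arctan2(x2, x1))

def order_parameter(phi):
    # Kuramoto order parameter r, averaged over time;
    # phi has shape (num_nodes, num_samples)
    return float(np.abs(np.exp(1j * phi).mean(axis=0)).mean())

t = np.linspace(0.0, 20.0, 2000)
p = phase(np.cos(t), np.sin(t))                      # uniformly rotating node
r_sync = order_parameter(np.vstack([t, t]))          # identical phases -> r = 1
r_anti = order_parameter(np.vstack([t, t + np.pi]))  # opposite phases -> r = 0
print(r_sync, r_anti)
```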

Shift map networks
To calculate the Lyapunov exponents of the discrete network (see Sec. "Extension to continuous networks" for an extension to continuous networks), we recall that, since the map produces a constant Jacobian ([2I + M]), the Lyapunov spectrum of the synchronisation manifold (described by x^(1)_n = x^(2)_n = ... = x^(N)_n = x_n) is equal to the spectrum of Lyapunov exponents of the network. In addition, the Lyapunov exponents of the synchronisation manifold are simply the Lyapunov exponents of the Master Stability Function (MSF), 46 the equations that describe the variational equations of Eq. (1) linearly expanded around the synchronisation manifold and diagonalised, producing N equations in the m eigenmodes,

ξ^(m)_{n+1} = [2 − µm] ξ^(m)_n, (5)

where µm represents the eigenvalues of −M, ordered by magnitude, i.e., µ0 = 0 ≤ µ1 ≤ µ2 ≤ ... ≤ µ_{N−1}. The ordered Lyapunov exponents are given by the logarithm of the absolute value of the derivative of the MSF in (5), which leads to

λ_{m+1} = log |2 − µm|. (6)

In this work, we consider two network configurations: firstly, the symmetric configuration, where the two networks are connected by l12 undirected interlinks and each node in a network connects to at most one corresponding node in the other subnetwork (so that D1 = D2); secondly, the asymmetric configuration, where the two networks are connected by only one undirected random interlink.
For the symmetric configuration 12 (see also Ref. 28), we have that

µ̃_{2i} = εωi, µ̃_{2i+1} = εωi + 2γα, (7)

where µ̃_{2i} and µ̃_{2i+1} are the unordered eigenvalues of −M (i = 0, 1, 2, ..., N1 − 1) and ωi represents the ordered set of eigenvalues of the matrix A (such that ω_{i+1} ≥ ωi, and ω0 = 0). The unordered spectrum of A is given by ω̃i = 2[1 − cos(2πi/N1)] for a closed ring topology; by ω0 = 0, ωi = 1 (for i = 1, ..., N1 − 2), and ω_{N1−1} = N1 for a star topology; and by ω0 = 0, ωk = N1 (for k = 1, ..., N1 − 1) for an all-to-all topology. The inter degree α represents the effective connection that every node in one subnetwork has with the other. If 2γα < εω1, then µ1 = 2γα; otherwise, µ1 = εω1. Complete synchronisation of the shift map network is linearly stable if |2 − µ1| < 1; notice, however, that our study considers coupling ranges outside of the complete stability region. The second largest eigenvalue, µ1, and therefore I_C (and the stability of the synchronous manifold), will only depend on the inter connections if

2γα < εω1, (8)

and these quantities will only depend on the intra connections if this inequality is not satisfied. It is fundamental to mention that the eigenvalues obtained in Eq. (7) using the expansion of Ref. 12 are exact for the topologies considered in this work (demonstration to appear elsewhere). Consequently, the Lyapunov exponents calculated by Eq. (6) are also exact.
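Equation (7) can be checked numerically by comparing the spectrum of εG + γL with the predicted pairs {εωi, εωi + 2γα}; the following is a sketch for two coupled rings with B equal to the identity (assuming NumPy; parameter values are illustrative):

```python
import numpy as np

n1, eps, gam = 8, 0.3, 0.5
alpha = 1.0                          # l12 = N1: one interlink per node
# intra Laplacian A of a ring of n1 nodes
A = 2 * np.eye(n1)
for i in range(n1):
    A[i, (i + 1) % n1] -= 1
    A[i, (i - 1) % n1] -= 1
Z, I = np.zeros((n1, n1)), np.eye(n1)
# eigenvalues mu_m of eps*G + gam*L (i.e., of -M), with B = identity
S = eps * np.block([[A, Z], [Z, A]]) + gam * np.block([[I, -I], [-I, I]])
mu = np.sort(np.linalg.eigvalsh(S))
w = np.sort(np.linalg.eigvalsh(A))
# Eq. (7): the spectrum is {eps*w_i} together with {eps*w_i + 2*gam*alpha}
pred = np.sort(np.concatenate([eps * w, eps * w + 2 * gam * alpha]))
print(np.allclose(mu, pred))  # True
```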
For the symmetric configuration, if inequality (8) is satisfied, then λ2 = log|2 − 2γα|, and the upper bound for the MIR exchanged between any two nodes in this network, assuming λ2 > 0, is given by

I_C = log 2 − log|2 − 2γα|, if inequality (8) is satisfied, or I_C = log 2 − log|2 − εω1| otherwise. (9)

Therefore, the upper bound for the MIR will either depend on γ or on ε. If λ2 ≤ 0, then I_C = λ1 = log 2. For the asymmetric configuration, 12 we have that the second largest eigenvalue, and therefore I_C (and the stability of the synchronous manifold), will only depend on the inter connections if inequality (12) is satisfied; if this inequality is not satisfied, these quantities will depend mutually on both types of connections. Since α always appears in the second largest eigenvalue, the smaller its value, the larger I_C will be. Our analytical results are valid for all asymmetric configurations considered in Ref. 12; however, in this paper we focus on the "bottleneck" configuration, where there is only one random interlink. For the asymmetric bottleneck configuration, µ1 depends only on the inter coupling if inequality (12) is satisfied, and on both inter and intra couplings otherwise. Since λ1 = log 2, the upper bound for the MIR exchanged between any two nodes in this network, assuming λ2 > 0, is given by

I_C = log 2 − log|2 − µ1|, (13)

with µ1 evaluated in the corresponding regime. Therefore, the upper bound for the MIR will depend either on γ, if inequality (12) is satisfied, or on both couplings if this inequality is not satisfied. If λ2 ≤ 0, then I_C = log 2. Figures 2(A-D) are parameter spaces (ε × γ) showing whether inequality (8) (A-C) or inequality (12) (D) is satisfied (white) or not (black). Figures 2(E-H) show the value of I_C. In Figs. 2(E-G), we show results for the symmetric configuration: I_C will only depend on the inter coupling γ if inequality (8) is satisfied, and will only depend on the intra coupling ε if this inequality is not satisfied. In Fig. 2(H), for the bottleneck configuration, I_C will only depend on the inter coupling if inequality (12) is satisfied, but will depend on both inter and intra couplings if this inequality is not satisfied. The sum of the positive Lyapunov exponents is given by

H_KS = Σ_{m=0}^{P−1} log|2 − µm|,
where P represents the number of positive Lyapunov exponents of the network. From this equation, it becomes clear that if N1 is increased and the topology considered makes ωi increase proportionally to N1, while the ratio α is maintained (meaning that the inter connections grow only proportionally to N1), then the term εωi becomes predominant in H_KS and, as a consequence, chaos in the network becomes more dependent on ε than on γ. To illustrate this argument, let us consider the symmetric configuration and assume that ε and γ are sufficiently small that all Lyapunov exponents are positive. Then, the summation to calculate H_KS has N terms and, expanding the logarithms to first order in the couplings,

H_KS ≈ N log 2 − ε Σ_{i=0}^{N1−1} ωi − γαN1.

Thus, the term with ε dominates for larger N1. This becomes even more evident if the topology is all-to-all, for which Σ_i ωi = N1(N1 − 1):

H_KS ≈ N log 2 − εN1(N1 − 1) − γαN1,

so the ε term grows quadratically with N1 while the γ term grows only linearly. The predominance of the intra coupling can be seen in all panels of Figs. 2(I-L), for a network of two coupled ring networks. Similar results for other network configurations can be seen in the Supplementary Material.
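The dominance of the intra coupling can be illustrated directly from Eqs. (6) and (7); the following is a sketch for the symmetric ring configuration with couplings small enough that all exponents remain positive (parameter values are illustrative):

```python
import numpy as np

def ring_spectrum(n1):
    # eigenvalues omega_i of the ring Laplacian
    return 2.0 * (1.0 - np.cos(2.0 * np.pi * np.arange(n1) / n1))

def hks(n1, eps, gam, alpha):
    # H_KS from Eqs. (6)-(7): sum of the positive Lyapunov exponents
    w = ring_spectrum(n1)
    mu = np.concatenate([eps * w, eps * w + 2.0 * gam * alpha])
    lam = np.log(np.abs(2.0 - mu))
    return float(lam[lam > 0].sum())

# couplings small enough that every exponent stays positive
n1, eps, gam, alpha, d = 50, 0.01, 0.01, 0.5, 0.001
base = hks(n1, eps, gam, alpha)
drop_eps = base - hks(n1, eps + d, gam, alpha)  # sensitivity to intra coupling
drop_gam = base - hks(n1, eps, gam + d, alpha)  # sensitivity to inter coupling
print(drop_eps > drop_gam > 0)  # the intra term dominates
```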

Extension to continuous networks
These results can be extended to linear networks of ODEs. As an example, consider a continuous network of 1D coupled linear ODEs described by ẋ = [αI + M] x. Then, the Lyapunov exponents of this system are equal to the Lyapunov exponents of the synchronisation manifold and its transversal modes, and are therefore given by λ_{m+1} = α − µm.
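A quick numerical check of this statement, assuming NumPy; the matrix M is built as in the discrete case and the constant (denoted `a` below, to avoid clashing with the inter degree α in code) is illustrative:

```python
import numpy as np

n1, eps, gam, a = 6, 0.2, 0.3, 0.1
# ring Laplacian A for each subnetwork
A = 2 * np.eye(n1)
for i in range(n1):
    A[i, (i + 1) % n1] -= 1
    A[i, (i - 1) % n1] -= 1
Z, I = np.zeros((n1, n1)), np.eye(n1)
M = -(eps * np.block([[A, Z], [Z, A]]) + gam * np.block([[I, -I], [-I, I]]))
mu = np.sort(np.linalg.eigvalsh(-M))        # mu_0 <= mu_1 <= ...
# xdot = [a*I + M] x  =>  Lyapunov exponents a - mu_m
J = a * np.eye(2 * n1) + M
lams = np.sort(np.linalg.eigvalsh(J))[::-1]  # ordered exponents
print(np.allclose(lams, a - mu))  # True
```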

Hindmarsh-Rose networks
The Lyapunov Exponents of the HR neural networks are calculated numerically.
For symmetric HR neural networks with inhibitory inter connections, H_KS is mostly dependent only on the electrical intra coupling, as can be seen from Figs. 3(A), (E), and (I) (for coupled ring complex networks) and from the results shown in the Supplementary Material for other networks. The quantity I_C is also mostly dependent on the electrical intra coupling in symmetric configurations with inter inhibitory synapses (see Figs. 3(B) and 3(F)), but for the asymmetric and inhibitory configuration (Fig. 3(J)), the I_C values depend mutually on both inter and intra couplings. Therefore, in most of the cases studied, neural networks formed by complex networks connected with inhibitory connections have a behaviour (H_KS and I_C) that mainly depends on the intra electric coupling. If inter connections are excitatory, both H_KS and I_C are non-trivial functions of the inter and intra couplings, as can be seen in Figs. 3(C-D), (G-H), and (K-L). The inter degree α is also determinant for the similar behaviours observed in symmetric neural networks (both inhibitory and excitatory) of different sizes, as one can check by verifying how similar the parameter spaces of Figs. 3(A-B) are to the ones in Figs. 3(E-F), or the parameter spaces of Figs. 3(C-D) to the ones in Figs. 3(G-H).
To understand why different neural networks with equal inter degree l12/N1 have similar parameter spaces for H_KS and I_C, we consider the conjecture of Ref. 47, which shows that the Lyapunov exponents and the Lyapunov exponents of the synchronisation manifold (LESM) (defined by x^(i) = x^(j) = x^s) are connected. We then remark that if each neuron in the network has the same inter degree k, then l12/N1 = k. This is a necessary condition to obtain a Master Stability Function (MSF) of the network, as derived in Ref. 28. The linear stability of this network and the ith LESM depend on the function Γ = −kγS(x1^s) − γ(x1^s − V_syn)S'(x1^s)(k − µi) − εωi, where µi represents the eigenvalues of the Laplacian matrix of B. Inhibition or excitation contributes to the stability of the MSF and to the LESM through the term γ(x1^s − V_syn)S'(x1^s)(k − µi). If the coupling is inhibitory, all the terms in the function Γ are negative, and they all typically contribute to making the network more stable and to producing smaller values of the LESM. But both terms, kγS(x1^s) and γ(x1^s − V_syn)S'(x1^s)(k − µi), can be neglected, since S is nonzero only during a spike and S' is only nonzero at the moment of the beginning of a spike. Therefore, the stability of the synchronisation manifold, as well as the LEs and I_C (using Ref. 47), mainly depends on the value of the intra coupling ε (see also Fig. 5 in Ref.
28). If, however, the coupling is excitatory, we cannot neglect the term γ(x1^s − V_syn)S'(x1^s)(k − µi). If two networks with different sizes have the same k for each neuron, then the eigenvalues of B for the larger network will be the same as those for the smaller network, but appearing with multiplicity given by the dimension of the matrix. If the two different networks have the same topology, then some of the smallest eigenvalues of A for the larger network might be similar. These smallest eigenvalues contribute to making the term εωi small, but with a magnitude comparable to that of the term γ(x1^s − V_syn)S'(x1^s)(k − µi). Thus, if k is kept constant, larger networks might present similar parameter spaces for H_KS and I_C.
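The sign argument above can be probed numerically with the sigmoid S of the Methods section; in the sketch below, the synchronised state x^s and the topology terms k, µi, ωi are illustrative values, not taken from the paper's simulations:

```python
import numpy as np

lam, theta = 10.0, -0.25          # sigmoid parameters from the Methods

def S(x):
    return 1.0 / (1.0 + np.exp(-lam * (x - theta)))

def dS(x):
    s = S(x)
    return lam * s * (1.0 - s)    # derivative of the sigmoid

def Gamma(xs, k, mu, omega, gam, eps, v_syn):
    # stability function from the text:
    # Gamma = -k*gam*S(xs) - gam*(xs - v_syn)*S'(xs)*(k - mu_i) - eps*omega_i
    return -k * gam * S(xs) - gam * (xs - v_syn) * dS(xs) * (k - mu) - eps * omega

# same illustrative state and topology, inhibitory vs excitatory reversal
args = dict(xs=0.0, k=1.0, mu=0.5, omega=1.0, gam=0.1, eps=0.1)
g_inh = Gamma(v_syn=-2.0, **args)
g_exc = Gamma(v_syn=+2.0, **args)
print(g_inh < g_exc < 0.0)  # inhibitory synapses push Gamma more negative
```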

The bottleneck effect
In the bottleneck configuration, the inter degree decreases to α = 1/N1. This results in a smaller value of γα when compared to its value for symmetric configurations. Consequently, given two networks, one symmetric and another asymmetric, both with the same N1 and the same γ, the value of λ2 for the asymmetric bottleneck configuration will be larger than λ2 for the symmetric configuration, which means that I_C for the asymmetric case is smaller than I_C for the symmetric case. However, if we rescale the γ used in the asymmetric bottleneck configuration to keep the quantity γα constant in all our simulations, the term εω1 appearing in µ1 will compensate λ2 when inequality (12) is satisfied, finally producing an asymmetric network that has a larger value of I_C than the corresponding symmetric one. Regarding the neuronal networks, the bottleneck effect is evident as one compares Fig.

Discussion
A topic of research that has attracted great attention in multiplex networks is the search for a better understanding of how weak or strong (not full) synchronisation is linked to the various aspects of the network topology. Previous works have provided complementary, but not unified, conclusions regarding this relationship. One of the difficulties in clarifying this matter is that the relationship between the spectrum of eigenvalues of the connecting Laplacian matrix and the synchronous behaviour of the network is poorly understood when the network is in a typical natural state and there is no full synchronisation. Our main contribution in this work is to clarify this relationship when a multiplex network is out of full synchronisation. We went a step further and also showed how relevant aspects of the network topology are related to chaos and information transmission, thus providing an innovative set of mathematical tools to study how complex behaviour emerges in multiplex networks.

Figure 1. [Colour online] Examples of symmetric network topologies with N = 10 and l12 = 10 considered in this work. Subnetworks have a ring topology in (A), a star topology in (B), and an all-to-all topology in (C). Black lines represent intra links, and grey (red online) lines represent inter links.

Figure 2. [Colour online] Results for networks of shift maps, with two coupled rings. (A-C) White (black) regions indicate values of ε and γ for which inequality (8) is satisfied (not satisfied). (D) Colour code the same as in (A-C), but based on inequality (12). (E-H) The colour code shows the value of I_C. (I-L) Sum of the positive Lyapunov exponents. N = 10 and l12 = 5 in (A), (E), and (I); N = 20 and l12 = 10 in (B), (F), and (J); N = 30 and l12 = 15 in (C), (G), and (K); N = 10 and l12 = 1 in (D), (H), and (L). In (L), the maximal value of γ, equal to 2.5, was chosen so that the range of values of the quantity γα is the same in figures (I)-(L).

3(L) (asymmetric) with Figs. 3(D) and 3(F). No bottleneck effect was verified for inhibitory inter synapses. Concluding, a decrease in synchronisation can increase the capacity of the network to exchange information.
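The effect of the reduced inter degree on the spectral gap µ1 (and hence on λ2 and I_C) can be seen numerically; below is a sketch comparing the symmetric and bottleneck configurations for two coupled rings, with parameters chosen so that the symmetric network satisfies inequality (8) (assuming NumPy; values are illustrative):

```python
import numpy as np

def mu1(A, B, eps, gam):
    # second smallest eigenvalue of eps*G + gam*L: the spectral gap mu_1
    n1 = A.shape[0]
    Z = np.zeros((n1, n1))
    D1, D2 = np.diag(B.sum(axis=1)), np.diag(B.sum(axis=0))
    S = eps * np.block([[A, Z], [Z, A]]) + gam * np.block([[D1, -B], [-B.T, D2]])
    return float(np.sort(np.linalg.eigvalsh(S))[1])

n1, eps, gam = 10, 0.5, 0.05
A = 2 * np.eye(n1)
for i in range(n1):
    A[i, (i + 1) % n1] -= 1
    A[i, (i - 1) % n1] -= 1

B_sym = np.eye(n1)                            # symmetric: alpha = 1
B_bn = np.zeros((n1, n1)); B_bn[0, 0] = 1.0   # bottleneck: alpha = 1/N1
m_sym = mu1(A, B_sym, eps, gam)
m_bn = mu1(A, B_bn, eps, gam)
# smaller mu_1 -> larger lambda_2 = log|2 - mu_1| -> smaller I_C
print(m_bn < m_sym)  # True

# rescaling gamma so that gamma*alpha is kept constant raises the gap again
m_bn_rescaled = mu1(A, B_bn, eps, gam * n1)
print(m_sym, m_bn, m_bn_rescaled)
```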