Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies.

We denote the nominal graph Laplacian by L := [l_ij] ∈ R^(N×N) and the uncertain graph Laplacian by L_R := [l_R(ij)] ∈ R^(N×N), where l_ij = −µ_ij if i ≠ j and e_ij ∈ E, and l_ii = Σ_{e_ij ∈ E} µ_ij; the entries of L_R are defined analogously. The nominal graph Laplacian L is the sum of the graph Laplacian for the purely deterministic graph (V, E_D) and the mean Laplacian for the purely uncertain graph (V, E_U). Hence, L may be written as L = L_D + L_U, where L_D is the Laplacian for the graph over V with edge set E_D, and L_U is the mean Laplacian for the graph over V with edge set E_U. We combine the individual systems to create the network system (x_t), where I_N is the N × N identity matrix. Given the stochastic nature of the network system, we propose the following definition for mean square exponential synchronization [1].
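The decomposition L = L_D + L_U can be made concrete with a short numerical sketch. The 4-node graph, the single uncertain edge, and the unit weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian: -mu_ij off the diagonal, row sums zero."""
    L = np.zeros((n, n))
    for (i, j), mu in edges.items():
        L[i, j] -= mu
        L[j, i] -= mu
        L[i, i] += mu
        L[j, j] += mu
    return L

# deterministic edge set E_D and mean weights of the uncertain edge set E_U
E_D = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
E_U = {(0, 3): 1.0}

L_D = laplacian(4, E_D)
L_U = laplacian(4, E_U)
L = L_D + L_U                          # nominal Laplacian L = L_D + L_U
assert np.allclose(L.sum(axis=1), 0)   # every Laplacian row sums to zero
```

The row-sum property is what places the vector of all ones in the null space of L, which is the synchronization direction used below.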

Mean Square Synchronization Result
Since the subsystems are identical, the synchronization manifold is spanned by the vector 1 = [1, …, 1]^T. The dynamics on the synchronization manifold are decoupled from the dynamics off the manifold and are essentially described by the dynamics of the individual system, which could be stable, oscillatory, or complex in nature. We now apply a change of coordinates to decompose the system dynamics on and off the synchronization manifold. Let L = V Λ V^T, where V = [1/√N, U] is an orthonormal set of vectors and U is a set of orthonormal vectors that are also orthogonal to 1. Furthermore, Λ = diag{λ_1, …, λ_N}, where 0 = λ_1 < λ_2 ≤ … ≤ λ_N are the eigenvalues of L. Let z̃_t = V^T x_t and w̃_t = V^T ṽ_t. Multiplying (6) from the left by V^T decouples the coordinate along 1 from the off-manifold coordinates ẑ_t, whose dynamics in (12) involve the reduced Laplacian Λ̂ = diag{λ_2, …, λ_N}. For the synchronization of system (6), we only need to demonstrate mean square stability to the origin of the ẑ dynamics, as given in (12). This feature is exploited to derive the sufficiency condition for mean square synchronization of the coupled system, as shown in the following lemma.

Lemma 3 The system described by Eq. (6) is mean square synchronizing, as given by Definition 1, if there exist L > 0, K > 0, and 0 < β < 1 such that E[‖ẑ_t‖²] ≤ K β^t ‖ẑ_0‖² for all initial conditions with ‖ẑ_0‖ ≤ L.

Proof. To prove this result, we show the second moment of the ẑ_t dynamics is equivalent to the mean square error dynamics for each pair of systems. Then, we apply the stability results to the error dynamics to complete the proof. Consider Eq. (12). Writing the pairwise error between systems S_k and S_l, one can show its second moment is bounded by a constant multiple of E[‖ẑ_t‖²]. Now, suppose there exist K > 0 and 0 < β < 1 such that E[‖ẑ_t‖²] ≤ K β^t ‖ẑ_0‖². Thus, from (18) we obtain, for all systems S_k and S_l, a bound of the form β^t K̃(ẽ_0), where K̃(ẽ_0) depends on the initial error. Hence the proof.
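The change of coordinates can be checked numerically. The 4-node path Laplacian below is an illustrative choice; the eigendecomposition splits the state into the manifold coordinate and the off-manifold coordinates ẑ.

```python
import numpy as np

# Laplacian of a 4-node path graph with unit weights
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
lam, V = np.linalg.eigh(L)            # L = V diag(lam) V^T, lam sorted
assert abs(lam[0]) < 1e-10            # lambda_1 = 0 for a connected graph

x = np.array([0.9, 1.1, 1.0, 1.2])
z = V.T @ x                            # z[0]: synchronization-manifold coord
z_hat = z[1:]                          # off-manifold coordinates (hat-z)
# x lies on the manifold iff z_hat = 0, i.e. all agent states are equal
assert np.allclose(V[:, 0] ** 2, 0.25) # first eigenvector is +-1/sqrt(N)
```

Driving z_hat to zero in mean square is exactly the reduced stability problem the lemma addresses.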
We will now provide a slightly modified Eq. (12). We know L_R = Σ_{e_ij ∈ E_U} ξ_t^{ij} ε_ij ε_ij^T, where ε_ij ∈ R^N has the values 1 and −1 in its i-th and j-th entries, respectively, and zeros elsewhere. Thus, ε_ij^T ε_ij = 2 for all e_ij ∈ E. Hence, defining ε̂_ij := U^T ε_ij, we have ε̂_ij^T ε̂_ij = 2, and we can write Eq. (12) as (21). In Lemma 3, we proved that mean square exponential stability of (21) guarantees mean square synchronization of the coupled network of Lur'e systems, as given by (6). We will now utilize this to provide the sufficiency condition for mean square synchronization of Lur'e systems interacting over a network.
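A quick numerical check of the edge-vector identity ε_ij^T ε_ij = 2 and its preservation under the projection U^T. The network size, edge choice, and construction of U below are illustrative.

```python
import numpy as np

def edge_vector(n, i, j):
    """Vector with +1 in the i-th entry and -1 in the j-th entry."""
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    return e

n = 5
eps = edge_vector(n, 1, 3)
assert eps @ eps == 2.0                 # eps^T eps = 2 for every edge

# orthonormal basis U of the complement of the synchronization direction
ones = np.ones((n, 1)) / np.sqrt(n)
rng = np.random.default_rng(0)
Q = np.linalg.qr(np.hstack([ones, rng.standard_normal((n, n - 1))]))[0]
U = Q[:, 1:]                            # columns orthonormal, orthogonal to 1
eps_hat = U.T @ eps
# eps sums to zero, so it is orthogonal to 1 and its norm survives projection
assert np.isclose(eps_hat @ eps_hat, 2.0)
```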
Theorem 4 The network system in Eq. (6) is mean square synchronizing if there exists a positive constant p satisfying δ > p together with the condition stated in (22), where λ_N^U is the maximum eigenvalue of L_U and λ_2^D is the second smallest eigenvalue of L_D.
Proof. We first construct an appropriate Lyapunov function, V(ẑ_t) = ẑ_t^T P ẑ_t, that guarantees mean square stability. From (21), defining ∆V := E[V(ẑ_{t+1})] − V(ẑ_t), we obtain (23). Now suppose, for some R_P > 0, P satisfies (24). Using (24) and algebraic manipulations as given in [3], we can rewrite ∆V as (25), where η_t(ξ_t) is given by η_t(ξ_t) = W^{−1/2}(P A(ξ_t) − I_{N−1}) ẑ_t − W^{1/2} ψ̂_t and W := δ I_{N−1} − P. Since φ(·) is monotonic and globally Lipschitz, using (26) and writing ρ = trace(P), we obtain (27). Hence, (24) is sufficient for mean square stability of the reduced system. Furthermore, the equation in (24) can be rewritten using [4] (Proposition 12.1.1) as (29), where A_0(ξ_t) = a_0 I_{N−1} − gΛ̂ − g U^T L_R U and a_0 = a − 1/δ. We observe this condition requires us to find a symmetric Lyapunov function matrix, P, with N(N−1)/2 unknowns. We now reduce the order of computation by using network properties. For this, consider P = p I_{N−1}, where p < δ is a positive scalar. This gives us δ I_{N−1} > P. Using this and (21), we rewrite the condition in (29) as (30). We know ε̂_ij^T ε̂_ij = ε_ij^T U U^T ε_ij = ε_ij^T ε_ij = 2. Then, bounding L_U in (31) using (32), we obtain (33). Substituting (33) into (30), a sufficient condition for inequality (30) to hold is given by (34). Equation (34) is a diagonal matrix equation that provides a sufficient condition for mean square stability for all eigenvalues λ_j of Λ̂. This is simplified as (36), where δ > p > 0 and α_0² = (a_0 − gλ)² + 2γτλg² for all λ ∈ {λ_2, …, λ_N}, the nonzero eigenvalues of the nominal graph Laplacian. Now, for each of these conditions to hold, we must satisfy condition (36) for the minimum value of α_0² with respect to all admissible λ. The minimizer λ* of α_0² is found by setting dα_0²/dλ = 0. Using λ*, we conclude that if (36) is to be satisfied for all λ ∈ {λ_2, …, λ_N}, it must be satisfied for the λ farthest from λ*. Since the eigenvalues of a graph Laplacian are nonnegative and monotonically non-decreasing, it suffices to satisfy (36) at λ_2 and λ_N. In the following discussion we provide a system theoretic interpretation of the proposed definition of mean square synchronization margin. For completeness, we restate the definition of the mean square synchronization margin.
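The shape of α_0²(λ) = (a_0 − gλ)² + 2γτλg² and its interior minimizer can be verified numerically; the parameter values below are arbitrary illustrations.

```python
import numpy as np

def alpha0_sq(lam, a, delta, g, gamma, tau):
    """alpha_0^2 = (a_0 - g*lam)^2 + 2*gamma*tau*lam*g^2, a_0 = a - 1/delta."""
    a0 = a - 1.0 / delta
    return (a0 - g * lam) ** 2 + 2 * gamma * tau * lam * g ** 2

a, delta, g, gamma, tau = 1.125, 2.0, 0.01, 2.0, 1.0   # assumed values
lams = np.linspace(0.0, 100.0, 100001)
vals = alpha0_sq(lams, a, delta, g, gamma, tau)

# setting d(alpha_0^2)/d(lam) = 0 gives lam* = a_0/g - gamma*tau
lam_star = (a - 1.0 / delta) / g - gamma * tau
assert abs(lams[np.argmin(vals)] - lam_star) < 1e-2
```

Because α_0² is convex (quadratic) in λ, checking the condition at the two extreme eigenvalues λ_2 and λ_N covers the whole spectrum, as the proof notes.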

Definition 5 (Mean Square Synchronization Margin)
The margin for synchronization for network system (6) is defined as

System Theoretic Interpretation of Synchronization Condition and Margin
The first step towards the system theoretic interpretation of the margin definition is to show that the stability condition derived in Theorem 4, i.e., Eq. (22), has the equivalent form (39). The main point of the equivalence is that the equivalent stability condition on the right-hand side is independent of p. We observe from (36) that if p = q is a valid solution, so is p = 1/q. Applying the inequality of arithmetic and geometric means (AM-GM inequality) to p and 1/p, from Eq. (36) we obtain (40). We know that if p = q > 1 is a solution of (48), then p = 1/q and p = 1 are also solutions, since we obtain (41) from (40). Hence, we have, for some r > 0, (42). Now, consider some ε > 0 such that (43). From (43), we obtain (44). Using (42) and (44), and setting p = 1 + ε > 1, we obtain (45). Hence, (40) is a necessary and sufficient condition for (22), implying the equivalence (39). One observes that the sufficient condition provided in Theorem 4 (i.e., Eq. (22)) is a Riccati equation in one dimension. For a scalar i.i.d. random variable ζ, it can be rewritten as (46). In fact, (46) is a sufficient condition for the stability of the scalar nonlinear system (47). We would like to relate the above stability condition to results from robust stability theory. The central premise of this theory is that if the product of the two gains, of the system in the forward loop and of the system in the feedback loop, is less than unity, then the feedback interconnection is stable. The product of the two gains is referred to as the loop gain. In Fig. 1(a), we represent the dynamics of the scalar system (47) as a feedback interconnection of a mean linear system with two feedback loops, consisting of the nonlinearity and the stochastic uncertainty. In the following, we analyze each of these feedback loops separately (as shown in Figs. 1(b) and 1(c)) to derive the stability condition in terms of loop gain.
Then, we combine the two separate stability conditions to show that the main result of this paper can alternatively be interpreted in terms of loop gains, thereby leading to the proposed definition of a synchronization margin.

Robust Stability to Norm-bounded Nonlinearity: In Fig. 1(b), we have a scalar linear system in the forward loop and a norm-bounded nonlinearity in the feedback loop. The scalar system is given by (49), where a_0 = a − 1/δ is the dynamics of the linear system with mean uncertainty in feedback, and G represents the system with transfer function G(z) = 1/(z − â). In our context, the feedback loop with gain g comes from the nominal network, but it is not shown in the figure for simplicity of exposition. Furthermore, the linear dynamics is assumed stable. Results from robust stability and the Small Gain Theorem [5,6] can be used to study the stability of the closed loop. If ‖G‖_∞ represents the H_∞ norm of the system and ‖φ_0‖_∞ denotes the H_∞ norm of the feedback nonlinearity, the closed loop is stable if ‖G‖_∞ ‖φ_0‖_∞ < 1. The condition for robust stability provided in (50) can be reformulated as (51).

Robust Stability to Stochastic Uncertainty: Using results developed in [7] in the setting of a linear time invariant system with vector states, we can analyze the stability of the feedback interconnection of a scalar linear system with stochastic uncertainty (as shown in Fig. 1(c)). The condition for mean square stability of the closed loop system, as obtained from the Small Gain Theorem in [7], is (52), where ‖G‖_MS and ‖ξ_t g‖_MS denote the mean square norms of the system and of the stochastic uncertainty, respectively. It is shown in [7] that ‖G‖_MS = ‖G‖_2² for a scalar system, where ‖G‖_2 denotes the H_2-norm of the transfer function G(z).
The H_2-norm of the scalar system G(z) with impulse response h(k) = â^k is given by ‖G‖_2² = Σ_{k≥0} â^{2k} = 1/(1 − â²). Thus, the mean square norm of the linear system is given by ‖G‖_MS = 1/(1 − â²). The mean square norm of the stochastic uncertainty with zero mean is simply given by its variance, i.e., ‖ξ_t g‖²_MS = σ²g². The stability condition (52) for mean square stability of the feedback loop for the system in Fig. 1(c) can now be formulated as (55), i.e., σ²g²/(1 − â²) < 1.

Robust Stability to Norm-bounded Nonlinearity and Stochastic Uncertainty: The stability conditions from the two individual feedback loops discussed in the previous two sections can be combined to obtain the stability condition in terms of the loop gain for the entire system, as shown in Fig. 1(a). In particular, the condition for mean square synchronization of the network, where both the norm-bounded nonlinearity and the stochastic uncertainty are present, can be written as (56). The stability condition in Eq. (56) is the correct way to combine the stability conditions for the two individual feedback loops, as expressed in Eqs. (51) and (55), for the following reasons. Using E_ξt[(â − ξ_t g)²] = â² + σ²g² and the equivalence (39), we notice that (56) is exactly the mean square synchronization condition derived in the main results of the paper. Furthermore, when σ = 0 (i.e., no stochastic uncertainty), condition (56) reduces to condition (51). Similarly, when δ = ∞ (i.e., zero nonlinearity), condition (56) reduces to (55). Hence, Eq. (56) can be viewed as the proper generalization of the stability conditions from the two individual loops to the entire system with the two loops operating in tandem.

Discussion on Synchronization Margin: The mean square synchronization condition in Eq. (56) can be used to derive the expression for the mean square synchronization margin.
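The H_2-norm computation and the resulting mean square small-gain condition can be checked numerically; â, σ, and g below are assumed illustrative values.

```python
import numpy as np

a_hat = 0.8          # stable scalar system, |a_hat| < 1 (assumed value)
sigma, g = 0.3, 1.0  # uncertainty std-dev and coupling gain (assumed)

# H2 norm squared of G(z) = 1/(z - a_hat): geometric sum of a_hat^(2k)
h2_sq = sum(a_hat ** (2 * k) for k in range(10_000))
assert abs(h2_sq - 1.0 / (1.0 - a_hat ** 2)) < 1e-12

# mean square small-gain condition: sigma^2 g^2 / (1 - a_hat^2) < 1
loop_gain = (sigma ** 2 * g ** 2) / (1.0 - a_hat ** 2)
assert loop_gain < 1.0
```

The truncated sum matches the closed form 1/(1 − â²) because the impulse response â^k decays geometrically for a stable system.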
In particular, we note that condition (56), after algebraic manipulation, can equivalently be written as (57). The equivalent condition (57) has a nice system theoretic interpretation in terms of loop gain. In particular, the quantity 1/((1 − 1/δ)² − â²) can be thought of as the gain of the mean linear system with the nonlinearity in the feedback loop, and g²σ² as the gain of the stochastic uncertainty. The system has a larger margin of stability if the product of these two quantities is further from one. In particular, the smaller the quantity 1/((1 − 1/δ)² − â²), the greater the variance σ²g² that can be tolerated while maintaining stability, and hence the more robust the system is to stochastic uncertainty, and vice versa. This motivates us to propose the following definition of the synchronization margin as a quantity that measures how far the loop gain is from one.
This is precisely the definition of synchronization margin as proposed in Definition 5.
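A minimal sketch of the margin implied by condition (57), under the assumption that ρ_SM is one minus the loop gain; the exact expression is the one in Definition 5, and all parameter values here are illustrative.

```python
def sync_margin(a_hat, delta, g, sigma):
    """One minus the loop gain: the gain of the mean linear system with the
    norm-bounded nonlinearity, times the gain of the stochastic uncertainty."""
    gain_nl = 1.0 / ((1.0 - 1.0 / delta) ** 2 - a_hat ** 2)
    return 1.0 - gain_nl * g ** 2 * sigma ** 2

# a small uncertainty variance leaves a positive margin ...
assert sync_margin(a_hat=0.5, delta=4.0, g=0.1, sigma=1.0) > 0
# ... while a large variance drives the loop gain past one
assert sync_margin(a_hat=0.5, delta=4.0, g=0.1, sigma=10.0) < 0
```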

Interplay of Internal Dynamics, Network Topology and Uncertainty Characteristics
We now study the interplay of the internal dynamics (a), nonlinearity bound (δ), network topology (λ), and the uncertainty characteristics (γ̃) through simulations, using a set of parameter values. To nullify the bias of uncertain link locations, we choose to work with a large number of uncertain links to obtain τ ≈ 1. In Fig. 2, we provide different orientations of the 3-dimensional plots used to discuss the interplay between the various system, network, and uncertainty parameters over a network with 1000 nodes. In Figs. 2(a) and 2(b), we plot the boundary of the region with a positive ρ_SM in the a–λ–γ̃ space for g = 0.01 and δ = 2. In Figs. 2(c) and 2(d), we plot the boundary of the region with a positive ρ_SM in the δ–λ–γ̃ space for g = 0.01 and a = 1.125.

Location of Uncertainty and Network Topology
In the main document, we briefly discussed the impact of the parameter, τ , on the synchronization margin, ρ SM .
There exists an inverse relation between τ and ρ_SM, making higher τ detrimental to robustness (low value of ρ_SM). In this section, we study the interplay between the network topology and the location of the uncertainty within the network. In particular, we wish to analyze the average robustness of networks to any single interconnection being uncertain. The idea is to observe whether certain connectivity patterns make a network more robust to uncertainty at a single interconnection, averaged over all possible interconnections being individually uncertain. These ideas are aimed at understanding the average impact of localized cyber-attacks on networks. To put things in perspective, a simple spatially invariant network will have the same τ value for any particular link being uncertain, as the network structure is identical when any of the links is removed. However, if some of the links were changed to form different interconnections while keeping the number of edges the same, it might be possible to increase the robustness of the network. To isolate the impact of location in this study, we assume unit link weights for all network links. For this analysis, we choose networks that have emerged from the Small World phenomenon, a class of networks that arise naturally in physical and social scenarios. Small World networks, classified as random networks [13], were first introduced in [8], and are constructed from nearest neighbor networks by randomly rewiring links between nodes with a chosen probability. The fundamental idea of the Small World phenomenon is that as networks emerge in a natural setting, even though most nodes connect with neighbouring nodes, there is a chance of long distance interaction between nodes. The propensity of such long distance interactions is measured by the rewiring probability e. When the rewiring probability e = 0, the network is a nearest neighbor network.
As e increases, the network loses its nearest neighbor property and has an increasing number of long distance interconnections. This can be visualized through the schematic in Fig. 3, which shows the change in network connectivity as the rewiring probability increases from e = 0 to e = 1 [8]. For the purpose of this study, we wish to understand whether a higher proclivity for long distance interconnections is beneficial for emerging networks, especially when, after a network has emerged, any particular interconnection could be subject to uncertainty in the form of attacks or fluctuations in the link weight. We would like to clarify that the rewiring probability of a Small World network has no relation to the interconnection uncertainty; the link uncertainty is only studied for a fixed network that has emerged through the Small World phenomenon for a given rewiring probability e. The procedure for this study is outlined in the following paragraph. We initially consider a network of 50 nodes with 8 nearest neighbors per node. Then, we increase the rewiring probability from e = 0 to e = 1 in steps of 0.1 to obtain various random networks. For each such network, we cycle through all the individual links, making them uncertain. Then, we compute the value of τ corresponding to the particular uncertain link with a unit mean value for the interconnection weight. These values of τ are used to find the average value τ_avg for the given network. Since these networks are random in nature and the interconnections are formed by probabilistically rewiring links from a nearest neighbor network, we obtain the τ_avg values for 50 samples of random networks for a chosen probability e, which are then used to estimate the mean value of τ_avg, denoted τ̄_avg. Then we plot the τ̄_avg values as a function of the rewiring probability in Fig. 4. A curve connecting the data points (blue line in Fig. 4) indicates the trend.
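The rewiring experiment can be sketched as follows. This is a from-scratch Watts-Strogatz-style construction, not the paper's exact code, and the τ computation is omitted since its closed form is stated elsewhere in the paper; the seed and rewiring probability are arbitrary.

```python
import numpy as np

def ring_lattice(n, k):
    """Nearest-neighbour ring: each node linked to k/2 nodes on each side."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            A[i, j] = A[j, i] = 1.0
    return A

def rewire(A, p, rng):
    """Watts-Strogatz-style rewiring of each edge with probability p."""
    n = len(A)
    B = A.copy()
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] and rng.random() < p:
                choices = np.flatnonzero(B[i] == 0)
                choices = choices[choices != i]
                if len(choices):
                    m = rng.choice(choices)       # new endpoint for the edge
                    B[i, j] = B[j, i] = 0.0
                    B[i, m] = B[m, i] = 1.0
    return B

rng = np.random.default_rng(1)
A = rewire(ring_lattice(50, 8), 0.3, rng)
L = np.diag(A.sum(1)) - A
lam = np.sort(np.linalg.eigvalsh(L))
lam2, lamN = lam[1], lam[-1]    # eigenvalues entering the margin
```

Sweeping p from 0 to 1 and averaging λ_2, λ_N (and τ) over samples reproduces the kind of trend plotted in Fig. 4.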
We plot the trend of the average synchronization margin, ρ_SM, in Fig. 4 (black markers with a red curve connecting them to indicate the trend). It can be observed that as the value of the average τ decreases, the synchronization margin increases, indicating that increased randomness (high rewiring probability) in the network interconnections makes the network more robust to link uncertainty. It should be noted that there are small variations in the trend shown by ρ_SM, which we attribute to the computation of the average values of τ, λ_2, and λ_N over 50 samples of Small World networks. Overall, this trend suggests that as the rewiring probability is increased, the network robustness increases. Thus, we conclude that nearest neighbour and Small World networks are less robust to the injection of uncertainty at a random link location in the network, compared to random networks.

Optimal Gain Result
In this subsection we provide the lemma and proof for the result, which provides a method to design the optimal coupling gain for synchronization.
Lemma 6 For the network system in Eq. (6) with synchronization margin given by Eq. (38), the optimal gain g* that achieves the maximum synchronization margin is given by (61).

Proof. We observe from (40) that to maximize the synchronization margin with respect to the coupling gain g, we must minimize α_0² with respect to g and maximize α_0² with respect to λ. This is a regular saddle-point optimization problem [9]. Hence, for a given λ, setting the derivative of α_0² with respect to g to zero provides the optimal gain g*(λ) and the corresponding α_0²(λ, g*(λ)). The only important eigenvalues of the graph Laplacian that impose limitations on synchronization (small magnitude of ρ_SM) are λ_2 and λ_N. Hence, we obtain g*(λ_2) and g*(λ_N). Since λ_N ≥ λ_2, we have g*(λ_2) ≥ g*(λ_N) and α_0²(λ_2, g*(λ_2)) ≥ α_0²(λ_N, g*(λ_N)).
This gives g*(λ_2) as the optimal gain. Furthermore, at the optimal gain, we always have λ_sup = λ_2. Defining χ := max{λ_N, λ_2 + 2γτ}, we can write the optimal gain as in (66). Hence, for λ_sup = λ_2, we obtain (67). We will now provide a lemma to help readers understand the discussion in this paper on the significance of the Laplacian eigenvalues. For a graph G(V, E) with node set V and edge set E, let E_V be the set of all possible connections between nodes in V. Then, for Ẽ = E_V \ E, the graph G̃ = (V, Ẽ) is the complement of G. Let 0 = λ_1 < λ_2 ≤ … ≤ λ_N be the eigenvalues for G and 0 = λ̃_1 < λ̃_2 ≤ … ≤ λ̃_N be the eigenvalues for G̃. We state below the lemma connecting the eigenvalues of the graph G and its complement G̃ [10].
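The first-order condition behind the optimal gain can be verified numerically. Using the α_0² expression from the proof of Theorem 4, setting ∂α_0²/∂g = 0 at λ = λ_2 yields g*(λ_2) = a_0/(λ_2 + 2γτ); the parameter values below are illustrative assumptions.

```python
import numpy as np

def alpha0_sq(lam, g, a0, gamma, tau):
    """alpha_0^2 as a (convex quadratic) function of the coupling gain g."""
    return (a0 - g * lam) ** 2 + 2 * gamma * tau * lam * g ** 2

a0, gamma, tau, lam2 = 1.0, 0.5, 1.0, 2.0    # assumed values
g_star = a0 / (lam2 + 2 * gamma * tau)        # closed-form stationary point

gs = np.linspace(0.0, 1.0, 100001)
g_num = gs[np.argmin(alpha0_sq(lam2, gs, a0, gamma, tau))]
assert abs(g_num - g_star) < 1e-4             # grid minimiser matches
```

Since α_0² is a convex quadratic in g, the stationary point is the unique minimizer, so the grid search and the closed form agree.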

Simulation Results
In this subsection, we verify the sufficient condition obtained for mean square synchronization through simulation results. We consider the following 1D system, where a = 1.125, δ = 8, and v_t is additive white Gaussian noise with zero mean and variance ω². Here, φ(x) is a monotone, globally Lipschitz saturating nonlinearity. The internal dynamics of the system, as described by Eq. (74), consist of a double-well potential, with an unstable equilibrium point at the origin and two stable equilibrium points at x* = ±0.5237. So, with no network coupling, i.e., g = 0, the internal dynamics of the agents converge to the positive equilibrium point, x* > 0, for positive initial conditions. Similarly, if the initial condition is negative, the systems converge to the negative equilibrium point, x* < 0. The double-well potential system is a prototypical example for modeling synchronization phenomena in the natural sciences and engineering systems. For example, collective motion in molecular dynamics [11] and synchronization of generators in the power grid [12] can essentially be modeled using a double-well potential.
Effect of coupling gain: We couple this system over a network of 100 nodes, generated as a random network with the Small World property. We choose 60% of the links to be uncertain, making τ ≈ 1. The coupling gain for this system is g = 0.005. The nominal Laplacian for the network is a standard Laplacian with unit weights; thus, for all links e_ij connecting nodes i and j, µ_ij = 1. This network has λ_N = 52.55 and λ_2 = 26.23. We now choose 50% of the links in the network to have uncertain weights. The uncertainty in the network link weights is chosen as a uniform random variable with zero mean and variance σ² = 2, such that both these eigenvalues satisfy the required condition from the main result. The CoD of the link uncertainty is γ̃ = σ²/µ = 2. In Fig. 5(a), we plot the results for the synchronization of these 100 systems with simulated additive white Gaussian noise with zero mean and variance ω² = 0.1, which show that the systems synchronize in an interval around the equilibrium point. For systems over the network with parameters identical to those in the previous case and identical link noise variance, if the coupling gain is decreased to g = 0.001, which does not satisfy the requirement of the main result, we observe that the system is unable to synchronize (Fig. 5(b)).

Effect of number of neighbors: Next, we study the effect of a change in the number of neighbors per agent on the group's ability to synchronize. For this, we choose a nearest neighbor network with 100 nodes. The simulation parameters are chosen as a = 1.05, δ = 16, g = 0.05, γ = 1/12, and τ ≈ 1. The variance for the uncertainty in the links is chosen small, to clearly observe the effects due to a change in neighbors. As discussed previously, increasing the link uncertainty adds numerical inaccuracies to the system, causing an additive noise-like effect. Furthermore, the additive noise is also assumed absent to facilitate a clear observation of synchronization.
We first choose a network with 6 neighbors per agent. For this network, we observe that the system is unable to synchronize, and the agents break into multiple clusters, each with a small number of agents (Fig. 6(a)). Due to the small number of neighbors, the agents do not obtain sufficient state information to bind them to the synchronization manifold. Now, we increase the number of neighbors to 20 for each agent. As the number of neighbors increases, the agents synchronize to the synchronization manifold extremely well, with very little noise (Fig. 6(b)). Furthermore, the rate of synchronization is very high, as observed from the simulations, where the agents appear to synchronize within the first 100 seconds and then collectively move to the synchronization manifold. Synchronization of the agents is observed for numbers of neighbors from 16 up to 28. As the number of neighbors increases, we observe significant oscillations before the agents synchronize. Finally, we increase the number of neighbors to 32 for each agent. This increase seems to benefit synchronization initially, since all agents quickly coalesce. However, as they approach the synchronization manifold, the high number of neighbors causes the systems to fluctuate significantly about the manifold, leading to an oscillating band of desynchronized agent states (Fig. 6(c)). This is the beginning of a desynchronized state for the agents. Additional neighbors would further destabilize the individual system dynamics.
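The qualitative setup of these experiments can be reproduced with a toy discrete-time model. The tanh nonlinearity, the ring topology, and the noise levels below are stand-ins chosen for illustration, not the paper's exact φ(·) or network; the sketch only demonstrates the off-manifold spread shrinking under sufficient coupling, not the double-well bistability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, g, steps = 100, 1.125, 0.005, 500
phi = np.tanh                       # stand-in sector nonlinearity (assumed)

# nearest-neighbour ring with 8 neighbours per node (4 per side)
A = np.zeros((n, n))
for i in range(n):
    for d in range(1, 5):
        A[i, (i + d) % n] = A[(i + d) % n, i] = 1.0
L = np.diag(A.sum(1)) - A

x = rng.uniform(0.3, 0.7, n)        # positive initial conditions
for _ in range(steps):
    xi = rng.normal(0.0, 0.1)       # common multiplicative link noise
    x = a * x - phi(x) - g * (1.0 + xi) * (L @ x)

spread = np.max(np.abs(x - x.mean()))   # distance from the sync manifold
```

With these values every off-manifold mode of the linearization is contracting, so the spread collapses; lowering g or raising the noise variance pushes the system toward the desynchronized regimes described above.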