## Abstract

Networks in nature do not act in isolation, but instead exchange information and depend on one another to function properly^{1,2,3}. Theory has shown that connecting random networks may very easily result in abrupt failures^{3,4,5,6}. This finding reveals an intriguing paradox^{7,8}: if natural systems organize in interconnected networks, how can they be so stable? Here we provide a solution to this conundrum, showing that the stability of a system of networks relies on the relation between the internal structure of a network and its pattern of connections to other networks. Specifically, we demonstrate that if interconnections are provided by network hubs, and the connections between networks are moderately convergent, the system of networks is stable and robust to failure. We test this theoretical prediction on two independent experiments of functional brain networks (in task and resting states), which show that brain networks are connected with a topology that maximizes stability according to the theory.

## Main

The theory of networks of networks relies largely on unstructured patterns of connectivity between networks^{3,4,6}. When two stable networks are fully interconnected with one-to-one random connections, such that every node in a network depends on a randomly chosen node in the other network, small perturbations in one network are amplified by the interaction between networks^{3,6}. This process leads to cascading failures, which are thought to underpin catastrophic outcomes in man-made infrastructures, such as blackouts in power grids^{3,4}.

By contrast, many stable living systems, including the brain^{9} and cellular networks^{10}, are organized in interconnected networks. Random networks are very efficient mathematical constructs to develop theory, but the majority of networks observed in nature are correlated^{11,12}. Correlations, in turn, provide structure and are known to influence the dynamical and structural properties of interconnected networks, as has been recently shown^{13}.

Most natural networks form hubs, increasing the relevance of certain nodes. This adds a degree of freedom to the system: whether hubs broadcast information to other networks or, conversely, cross-network communication is governed by nodes with less influence in their own network.

We develop a full theory for systems of structured networks, identifying a structural communication protocol that ensures the system of networks is stable (less susceptible to catastrophic failure) and optimized for fast communication across the entire system. The theory establishes concrete predictions of a regime of correlated connectivity between the networks composing the system.

We test these predictions with two different systems of brain connectivity based on functional magnetic resonance imaging (fMRI) data. The brain organizes in a series of interacting networks^{9,14}, presenting a paradigmatic case study for a theory of connected correlated networks. We show that for two independent experiments of functional networks in task and resting states in humans, the systems of brain networks organize optimally, as predicted by the theory.

Our results provide a plausible explanation for the observation that natural networks do not show frequent catastrophic failure as expected by theory. They offer a specific theoretical prediction of how structured networks should be interconnected to be stable. And they demonstrate, using two examples of functional brain connectivity, that the structure of cross-network connections coincides with theoretical predictions of stability for different functional architectures.

We present a theory based on a recursive set of equations to study the cascading failure and percolation process for two correlated interconnected networks. The theory is a generalization of an analytical approach for single networks previously developed^{15} to study cascading behaviour in interconnected correlated networks (analytic details in Supplementary Section I). Here we refer to the most important aspects of the theory and the corresponding set of predictions. The theory can be extended to *n*-interconnected networks by following ref. 16.

We consider two interconnected networks, each one having a power-law degree distribution characterized by exponent *γ*, *P*(*k*_{in}) ∼ *k*_{in}^{−γ}, valid up to a cutoff *k*_{max} imposed by their finite size. Here *k*_{in} is the number of links of a node towards nodes in the same network. This power law implies that a few nodes will be vastly connected within the network (hubs), whereas the majority of nodes will be weakly connected to other nodes in the network.
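As a concrete illustration (a sketch with hypothetical parameters, not the authors' construction), a bounded power-law in-degree sequence with *γ* = 2.5 and *k*_{max} = 100 can be sampled by weighting each degree *k* by *k*^{−γ}; most sampled degrees are small while a few approach the cutoff:

```python
import random
from collections import Counter

def sample_powerlaw_degrees(n, gamma=2.5, k_max=100, seed=0):
    """Draw n in-degrees from P(k) ~ k^(-gamma) for k = 1..k_max."""
    rng = random.Random(seed)
    ks = list(range(1, k_max + 1))
    weights = [k ** (-gamma) for k in ks]  # unnormalized P(k)
    return rng.choices(ks, weights=weights, k=n)

degrees = sample_powerlaw_degrees(10_000)
counts = Counter(degrees)
# The majority of nodes are weakly connected (k_in = 1),
# while a handful of hubs reach degrees near the cutoff.
fraction_leaves = counts[1] / len(degrees)
largest_hub = max(degrees)
```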

The structure between interconnected networks can be characterized by two parameters: *α* and *β* (Fig. 1a). The parameter *α*, defined as

*k*_{out} ∼ *k*_{in}^{α},    (1)

where *k*_{out} is the degree of a node towards nodes in the other network, determines the likelihood that hubs of each network are also the principal nodes connecting both networks. For *α* > 0 the nodes in network A and B which connect both networks will typically be hubs in A and B respectively (Fig. 1a, right panels). Instead, for *α* < 0 the two networks will be connected preferentially by nodes of low degree within each network (Fig. 1a, left panels).

The parameter *β* defines the indegree–indegree internetwork correlations as^{11,12}:

*k*_{in}^{nn} ∼ *k*_{in}^{β},    (2)

where *k*_{in}^{nn} is the average indegree of the nearest neighbours of a node in the other network. It determines the convergence of connections between networks—that is, the likelihood that links connecting networks A and B converge on the same type of node. Intuitively, equations (1) and (2) can be seen as a compromise between redundancy and reach of the connections between both networks. For *β* > 0 connections between networks are convergent (assortative, Fig. 1a, top panels), whereas for *β* < 0 they are divergent (disassortative, Fig. 1a, bottom panels). Uncorrelated networks have *α* = 0 and *β* = 0.
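In practice both exponents reduce to log–log slopes. The sketch below (hypothetical toy data, not the authors' pipeline) estimates *α* from *k*_{out} versus *k*_{in} and *β* from *k*_{in}^{nn} versus *k*_{in} by least squares in log space:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Hypothetical toy data: out-degree scales exactly as k_out = k_in,
# so the fitted alpha should be 1 (hubs carry the cross-links).
k_in = list(range(1, 51))
k_out = [k for k in k_in]
alpha = loglog_slope(k_in, k_out)

# Toy nearest-neighbour indegrees scaling as k_in^0.5: convergent
# (assortative) internetwork connections with beta = 0.5.
k_in_nn = [k ** 0.5 for k in k_in]
beta = loglog_slope(k_in, k_in_nn)
```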

We analyse how the system of two correlated networks breaks down after random failure (random attack) of a fraction 1 − *p* of nodes for different patterns of between-network connectivity characterized by (*α*, *β*). We adopt the conventional percolation criterion of stability and connectivity measuring how the largest connected component breaks down following the attack^{3}. In classic percolation of single networks, two nodes of a network are randomly linked with probability *p* (ref. 17). For low *p*, the network is fragmented into subextensive components. Percolation theory of random networks demonstrates that as *p* increases, there is a critical phase transition in which a single extensive cluster or giant component spans the system (the critical *p* is referred to as *p*_{c}).
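The classic single-network transition can be demonstrated numerically. The following stdlib-only sketch (with hypothetical sizes) builds Erdős–Rényi graphs below and above the critical mean degree of 1 and measures the fraction of nodes in the largest component:

```python
import random
from collections import deque

def largest_component_fraction(n, mean_degree, seed=0):
    """Build an Erdos-Renyi graph with ~mean_degree*n/2 random edges and
    return the fraction of nodes in its largest connected component."""
    rng = random.Random(seed)
    adj = {i: [] for i in range(n)}
    for _ in range(int(mean_degree * n / 2)):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].append(v)
            adj[v].append(u)
    seen, best = set(), 0
    for s in range(n):           # BFS over each unvisited component
        if s in seen:
            continue
        seen.add(s)
        size, q = 1, deque([s])
        while q:
            for w in adj[q.popleft()]:
                if w not in seen:
                    seen.add(w)
                    size += 1
                    q.append(w)
        best = max(best, size)
    return best / n

# Below mean degree 1 the graph is fragmented into small components;
# above it a single giant component spans the system.
sub = largest_component_fraction(5000, 0.5)   # subcritical
sup = largest_component_fraction(5000, 3.0)   # supercritical
```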

A robust notion of stability in a system of networks can be obtained by identifying the *p*_{c} at which a cohesive mutually connected network breaks down into disjoint subcomponents under different forms of attack. Network topologies with low *p*_{c} are robust, since most of the nodes must be removed before the system breaks apart. In contrast, high values of *p*_{c} indicate a fragile network, which breaks down after the removal of only a few nodes.

Here we analyse two qualitatively different manners in which the networks interact and propagate failure. In one mode (conditional interaction, Fig. 1b) a node in network B cannot function (and hence is removed) if it loses all connectivity with network A after the attack^{3}. In the second condition (redundant interaction, Fig. 1c) a node in network B may survive even if it is completely decoupled from network A, provided it remains attached to the largest component of network B (ref. 4). To understand why these two responses to failure are pertinent in real networks, consider the interaction between power and data networks. If electricity can flow only through the cables of the power network, a node in the data network unplugged from the power system shuts off and stops functioning. This situation corresponds to two networks coupled in a conditional manner; a case treated in ref. 3 considering one-to-one random connections between networks. Consider instead the case of a printer or any peripheral which can be plugged into the main electricity network but can also receive power through a USB cable from the computer. Such a node may still function even if it is disconnected from the other network, as long as it remains connected to its local network. This corresponds to the redundant interaction as treated by ref. 4 in the unstructured case.

We first investigate the stability of two interacting scale-free networks for a value of *γ* set arbitrarily to 2.5 and *k*_{max} = 100, in a regime where each isolated network is stable and robust to attack^{18}. The attack starts with the removal of a fraction 1 − *p* of nodes chosen at random from both networks. This attack produces extra failures of, for instance, nodes in B: under conditional interaction, a node in B fails if it disconnects from the giant component of network A or from the giant component of network B; under redundant interaction, it fails only if it disconnects from both the giant component of network A and the giant component of network B. In the conditional mode these failures may in turn disconnect nodes in network A from network B, producing new failures in A. Other nodes in A may also fail as they become disconnected from the giant component of A, and the cascading process iterates until it converges to a final configuration. By definition, only the conditional mode can produce cascading effects, not the redundant mode. The theoretical analysis of this process leads to a set of recursive equations (Supplementary Section I) that provides a stability phase diagram for the critical percolation threshold *p*_{c}(*α*, *β*) under attack for redundant and conditional failures at a given (*γ*, *k*_{max}), as seen in Fig. 2.
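The conditional failure rule can be sketched on toy networks (a minimal illustration of the cascade iteration, not the paper's recursive equations). A node survives a round only if it belongs to the giant component of its own network and keeps at least one cross-link into the other network's giant component; the rule is iterated to a fixed point:

```python
from collections import deque

def giant(adj, alive):
    """Largest connected set among the alive nodes (BFS)."""
    best, seen = set(), set()
    for s in alive:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            for v in adj[q.popleft()]:
                if v in alive and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    q.append(v)
        if len(comp) > len(best):
            best = comp
    return best

def conditional_cascade(adjA, adjB, crossAB, crossBA, attackedA, attackedB):
    """Iterate the conditional rule until no further node fails."""
    aliveA, aliveB = set(adjA) - attackedA, set(adjB) - attackedB
    while True:
        gA, gB = giant(adjA, aliveA), giant(adjB, aliveB)
        newA = {u for u in gA if any(v in gB for v in crossAB.get(u, ()))}
        newB = {u for u in gB if any(v in gA for v in crossBA.get(u, ()))}
        if (newA, newB) == (aliveA, aliveB):
            return aliveA, aliveB
        aliveA, aliveB = newA, newB

# Toy example: two 3-node chains 0-1-2 coupled one-to-one.
chain = {0: [1], 1: [0, 2], 2: [1]}
one_to_one = {i: [i] for i in range(3)}
# Attacking the middle node of A fragments A, and the failure
# cascades into B even though B itself was not attacked.
survA, survB = conditional_cascade(chain, chain, one_to_one, one_to_one,
                                   attackedA={1}, attackedB=set())
```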

Figure 2 reveals that the relation between a network’s internal structure and the pattern of connection between networks critically determines whether attacks lead to catastrophic cascading failures (high *p*_{c}) or not (low *p*_{c}). For conditional interactions, the system of networks is stable when *α* < 0 (indicated by low *p*_{c}(*α*, *β*), left-blue region in Fig. 2a) or for *α* ≳ 0.5 and *β* > 0 (light blue top-right quadrant), and becomes particularly unstable for intermediate values of 0 < *α* < 0.5 and *β* < 0. This result shows that the system of networks is stable when the hubs are protected (*α* < 0) by being isolated from network–network connectivity or when, in contrast, the bulk of connectivity within and across networks is sustained exclusively by a very small set of hubs (large *α*, *β*). Intermediate configurations, where hubs interconnect with low-degree nodes, are highly unstable because hubs can be easily attacked via conditional interactions, and lead to catastrophic cascading after attack. Similar unstable configurations appear in the one-to-one random interconnectivity^{3}.

When two networks interact in a redundant manner, the system of networks is less vulnerable to attacks (Fig. 2b). As expected, even for small values of *p* ∼ 0.1 the system of networks remains largely connected for any (*α*, *β*). The non-intuitive observation is that the relation between network internal structure and the pattern of connection between networks which optimizes stability differs from the conditional interaction (Fig. 2a). In fact, *α* < 0 now leads to the least stable configurations (larger values of *p*_{c} in Fig. 2b, red region), and the only region which maximizes stability corresponds to high values of *α* and *β* > 0 (blue region in Fig. 2b)—that is, an interaction where connection between networks is highly redundant and carried only by a few hubs of each network. Thus, the parameters that maximize stability for both interactions lie in the region *α* ≈ 1 and *β* > 0.

Systems of brain networks present an ideal candidate to examine this theory for two reasons. First, local brain networks organize according to a power-law degree distribution^{19,20}. Second, some aspects of local function are independent of long-range global interactions with other networks (as in the redundant interaction), such as the processing of distinct sensory features, whereas other aspects of local connectivity can shut down when connectivity to other networks is lost (as in the conditional interaction), such as integrative perceptual processing^{21}. Hence, the theory predicts that, to ensure stability for both modes of dependency, brain networks ought to be connected with high positive values of *α* and positive values of *β*.

Next, we examine this hypothesis in two independent fMRI experiments: human resting-state data obtained from the NYU public repository^{22} and human dual-task data^{23} previously used to investigate brain network topology^{14,24,25} (see Methods and Supplementary Section II for details). We first identify functional networks (resting state, Fig. 3a, and dual task, Fig. 3b) made of nodes connected by strong links—that is, by highly correlated fMRI signals^{14}. These networks are interconnected by weak links (low correlation in the fMRI signal), following the methods of ref. 14. The indegree distribution of the system of networks follows a bounded power law (Fig. 3c, d and Table 1), and the exponents *α* and *β* show high positive values for both experiments (Fig. 3e, f and Table 1).

To examine whether these values are optimal for the specific (*γ*, *k*_{max})-parameters of these networks, for each experiment, we projected the measured values of *α* and *β* to the theoretically constructed stability phase diagram quantified by *p*_{c}(*α*, *β*) in the conditional and redundant modes (Fig. 4). Remarkably, the experimental values of *α* and *β* (white circles) lie within the relatively narrow region of parameter space that minimizes failure for conditional and redundant interaction. Overall these results demonstrate that brain networks tested under distinct mental states share the topological features that confer stability to the system.

Our results hence provide a theoretical revision to the current view that systems of networks are highly unstable. We show that for structured networks, if the interconnections are provided by hubs of the network (*α* > 0.5) and the convergence of internetwork connections is moderate (*β* > 0), the system of networks is stable. This stability holds both in the conditional interaction^{3} and in the more robust topology of the redundant interaction^{4}. The redundant condition is equivalent to stating that the system of networks merges into a single network (ingoing and outgoing links are treated as the same); hence the condition of optimality for this topology equates to maximizing the size of the giant component formed by the connection of both networks. As a consequence, maximizing robustness for both conditions is equivalent to maximizing robustness in the more conventional conditional interaction, where links of one network are strictly necessary for the proper function of the other, together with a notion of information flow and storage given by the classic percolation definition of the size of the maximal mutual component across both networks. In other words, these parameters yield a set of interacting nodes that is maximally large and robust to failure.

The most natural metaphors for man-made systems of networks are electricity (wires) and the Internet or voice connectivity (data). A more direct analogue to this case in a living system such as the brain would be the interaction between anatomic, metabolic and vascular networks (wires) and their coupling to functional correlations (data)^{26}. Here instead we adopted the theory of networks of networks to investigate the optimality of coupled functional brain modules. The consistency between experimental data and theoretical predictions even in this broader notion of coupled networks is suggestive of the possible broad scope of the theory, making it a candidate to study a wider range of interconnected networks^{27}.

## Methods

### Experimental analysis.

The interdependent functional brain networks are constructed from fMRI data following the methods of ref. 14. First, the blood oxygen level dependent (BOLD) signal from each brain voxel (node) is used to construct the functional network topology based on standard methods^{19,20} using the equal-time cross-correlation matrix, *C*_{ij}, of the activity of pairs of voxels (Supplementary Section II).

The derivation of a binary graph from a continuous connectivity matrix relies on a threshold *T*, where the links between two nodes (voxels) *i* and *j* are occupied if *T* < *C*_{ij} (refs 14, 19), such as in bond percolation. A natural and non-arbitrary choice of threshold can be derived from a clustering bond percolation process. The size of the largest connected component of voxels as a function of *T* reveals clear percolation-like transitions^{14} in the two data sets, identified by the jumps in the size of the largest component in Fig. 3a, b. The emergent networks in resting state correspond to the medial prefrontal cortex, posterior cingulate and lateral temporoparietal regions, all of them part of the default mode network (DMN) typically seen in resting state data^{22}. In dual task, as expected for an experiment involving visual and auditory stimuli and bi-manual responses, the responsive regions include bilateral visual occipito-temporal cortices, bilateral auditory cortices, motor, premotor and cerebellar cortices, and a large-scale bilateral parieto-frontal structure.
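The thresholding step can be illustrated on a hypothetical correlation matrix (not the fMRI data) with two strongly correlated modules joined by weak links; as *T* drops below the weak-link correlation the modules merge, mimicking the percolation-like jump in the largest component:

```python
from collections import deque

def largest_component_size(C, T):
    """Link voxels i, j whenever C[i][j] > T and return the size of the
    largest connected component of the resulting binary graph."""
    n = len(C)
    seen, best = set(), 0
    for s in range(n):
        if s in seen:
            continue
        seen.add(s)
        size, q = 1, deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in seen and C[u][v] > T:
                    seen.add(v)
                    size += 1
                    q.append(v)
        best = max(best, size)
    return best

# Hypothetical correlation matrix: two 4-voxel modules with strong
# internal correlation (0.8) and weak cross-module correlation (0.2).
n = 8
C = [[0.8 if (i < 4) == (j < 4) else 0.2 for j in range(n)] for i in range(n)]
for i in range(n):
    C[i][i] = 1.0

high_T = largest_component_size(C, 0.5)  # only strong links: one module
low_T = largest_component_size(C, 0.1)   # weak links included: all merged
```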

### Scaling of correlations in the brain.

We identify functional networks (see Fig. 3a, b, right panels) made of nodes connected by strong links (strong BOLD signal correlation *C*_{ij}) which are interconnected by weak links (weak BOLD signal correlation)^{14,28}. Statistical analysis based on standard maximum likelihood and KS methods^{29} (Supplementary Section IIA) yields the values of the indegree exponents of each functional brain network: *γ* = 2.85 ± 0.04, *k*_{max} = 133 for resting state and *γ* = 2.25 ± 0.07, *k*_{max} = 139 for dual task (Fig. 3c, d). The obtained exponents *α* show high positive values for both experiments: *α* = 1.02 ± 0.02 and 0.92 ± 0.02 for resting-state and dual-task data, respectively (Fig. 3e). The internetwork connections show positive exponents for both systems: *β* = 0.66 ± 0.03 and *β* = 0.79 ± 0.04 for resting state and dual task, respectively (Fig. 3f).
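The maximum-likelihood estimator for a power-law exponent can be sketched in its continuous approximation (following the general approach of ref. 29), here applied to synthetic degrees rather than the fMRI data:

```python
import math
import random

def powerlaw_mle(samples, k_min=1.0):
    """Continuous ML estimate of the exponent:
    gamma_hat = 1 + n / sum(ln(k / k_min)) over samples k >= k_min."""
    xs = [x for x in samples if x >= k_min]
    return 1.0 + len(xs) / sum(math.log(x / k_min) for x in xs)

# Synthetic degrees drawn from P(k) ~ k^(-2.5), k >= 1, by
# inverse-transform sampling: k = (1 - u)^(-1 / (gamma - 1)).
rng = random.Random(42)
gamma_true = 2.5
data = [(1.0 - rng.random()) ** (-1.0 / (gamma_true - 1.0))
        for _ in range(20_000)]
gamma_hat = powerlaw_mle(data)  # should recover ~2.5
```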

Hence, in accordance with the predictions of the theory, these two interdependent brain networks derived from qualitatively distinct mental states (resting state and strong engagement in a task which actively coordinates visual, auditory and motor function) show consistently high values of *α* and positive values of *β*. Figure 4 shows the theoretical phase diagram *p*_{c}(*α*, *β*) in conditional and redundant modes calculated for coupled networks with the experimental values *γ* = 2.25 and 2.85. Left panels show the prediction of *p*_{c}(*α*, *β*) in the conditional mode of failure and right panels correspond to the redundant mode. The experimental (*α*, *β*) are shown as white circles lying in stable regions of the phase diagram (low *p*_{c}). Interestingly, the convergence of internetwork connections, *β*, is slightly higher under task conditions. This points to an additional degree of freedom of the system of networks: the dynamic allocation of functional connections, governed in the brain by context-dependent processes such as attention or learning. Further research is planned to investigate the neuronal mechanisms underlying the internetwork communication routines specified by *β*.

## References

1. Little, R. G. Controlling cascading failure: Understanding the vulnerabilities of interconnected infrastructures. *J. Urban Technol.* **9**, 109–123 (2002).
2. Rosato, V. Modeling interdependent infrastructures using interacting dynamical models. *Int. J. Crit. Infrastruct.* **4**, 63–79 (2008).
3. Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Catastrophic cascade of failures in interdependent networks. *Nature* **464**, 1025–1028 (2010).
4. Leicht, E. A. & D’Souza, R. M. Percolation on interacting networks. Preprint at http://arxiv.org/abs/0907.0894 (2009).
5. Brummitt, C. D., D’Souza, R. M. & Leicht, E. A. Suppressing cascades of load in interdependent networks. *Proc. Natl Acad. Sci. USA* **109**, E680–E689 (2012).
6. Gao, J., Buldyrev, S. V., Stanley, H. E. & Havlin, S. Networks formed from interdependent networks. *Nature Phys.* **8**, 40–48 (2012).
7. Bianconi, G., Dorogovtsev, S. N. & Mendes, J. F. F. Mutually connected component of network of networks. Preprint at http://arxiv.org/abs/1402.0215 (2014).
8. Bianconi, G. & Dorogovtsev, S. N. Multiple percolation transitions in a configuration model of network of networks. *Phys. Rev. E* **89**, 062814 (2014).
9. Dosenbach, N. U. F. et al. Distinct brain networks for adaptive and stable task control in humans. *Proc. Natl Acad. Sci. USA* **104**, 11073–11078 (2007).
10. Vidal, M., Cusick, M. E. & Barabási, A-L. Interactome networks and human disease. *Cell* **144**, 986–998 (2011).
11. Pastor-Satorras, R., Vázquez, A. & Vespignani, A. Dynamical and correlation properties of the Internet. *Phys. Rev. Lett.* **87**, 258701 (2001).
12. Gallos, L. K., Song, C. & Makse, H. A. Scaling of degree correlations and its influence on diffusion in scale-free networks. *Phys. Rev. Lett.* **100**, 248701 (2008).
13. Radicchi, F. Driving interconnected networks to supercriticality. *Phys. Rev. X* **4**, 021014 (2014).
14. Gallos, L. K., Makse, H. A. & Sigman, M. A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. *Proc. Natl Acad. Sci. USA* **109**, 2825–2830 (2012).
15. Moore, C. & Newman, M. E. J. Exact solution of site and bond percolation on small-world networks. *Phys. Rev. E* **62**, 7059–7064 (2000).
16. Gao, J., Buldyrev, S. V., Havlin, S. & Stanley, H. E. Robustness of a network formed by *n* interdependent networks with a one-to-one correspondence of dependent nodes. *Phys. Rev. E* **85**, 066134 (2012).
17. Bollobás, B. *Random Graphs* (Academic, 1985).
18. Cohen, R., Ben-Avraham, D. & Havlin, S. Percolation critical exponents in scale-free networks. *Phys. Rev. E* **66**, 036113 (2002).
19. Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M. & Apkarian, A. V. Scale-free brain functional networks. *Phys. Rev. Lett.* **94**, 018102 (2005).
20. Bullmore, E. & Sporns, O. Complex brain networks: Graph theoretical analysis of structural and functional systems. *Nature Rev. Neurosci.* **10**, 186–198 (2009).
21. Sigman, M. et al. Top-down reorganization of activity in the visual pathway after learning a shape identification task. *Neuron* **46**, 823–835 (2005).
22. Shehzad, Z., Kelly, A. M. C. & Reiss, P. T. The resting brain: Unconstrained yet reliable. *Cereb. Cortex* **10**, 2209–2229 (2009).
23. Sigman, M. & Dehaene, S. Brain mechanisms of serial and parallel processing during dual-task performance. *J. Neurosci.* **28**, 7585–7598 (2008).
24. Russo, R., Herrmann, H. J. & de Arcangelis, L. Brain modularity controls the critical behavior of spontaneous activity. *Sci. Rep.* **4**, 4312 (2014).
25. Gallos, L. K., Sigman, M. & Makse, H. A. The conundrum of functional brain networks: small-world efficiency or fractal modularity. *Front. Physiol.* **3**, 123 (2012).
26. Honey, C. J. et al. Predicting human resting-state functional connectivity from structural connectivity. *Proc. Natl Acad. Sci. USA* **106**, 2035–2040 (2009).
27. Schneider, C. M., Yazdani, N., Araújo, N. A. M., Havlin, S. & Herrmann, H. J. Towards designing robust coupled networks. *Sci. Rep.* **3**, 1969 (2013).
28. Schneidman, E., Berry, M. J., Segev, R. & Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. *Nature* **440**, 1007–1012 (2006).
29. Clauset, A., Shalizi, C. R. & Newman, M. E. J. Power-law distributions in empirical data. *SIAM Rev.* **51**, 661–703 (2009).

## Acknowledgements

This work was funded by NSF-PoLS PHY-1305476 and NIH-NIGMS 1R21GM107641. We thank N. A. M. Araújo, S. Havlin, L. Parra, L. Gallos, A. Salles and T. Bekinschtein for clarifying discussions. Additional financial support was provided by the Brazilian agencies CNPq, CAPES and FUNCAP, the Spanish MINECO BFU2012-39958, CONICET and the James McDonnell Foundation 21st Century Science Initiative in Understanding Human Cognition—Scholar Award. The Instituto de Neurociencias is a Severo Ochoa center of excellence.

## Author information

### Contributions

All authors contributed equally to the work presented in this paper.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Supplementary information

### Supplementary Information

Supplementary Information (PDF 780 kb)

## About this article

### Cite this article

Reis, S., Hu, Y., Babino, A. *et al.* Avoiding catastrophic failure in correlated networks of networks.
*Nature Phys* **10**, 762–767 (2014). https://doi.org/10.1038/nphys3081
