Main

The theory of networks of networks relies largely on unstructured patterns of connectivity between networks3,4,6. When two stable networks are fully interconnected with one-to-one random connections, such that every node in a network depends on a randomly chosen node in the other network, small perturbations in one network are amplified by the interaction between networks3,6. This process leads to cascading failures, which are thought to underpin catastrophic outcomes in man-made infrastructures, such as blackouts in power grids3,4.

By contrast, many stable living systems, including the brain9 and cellular networks10, are organized as interconnected networks. Random networks are convenient mathematical constructs for developing theory, but the majority of networks observed in nature are correlated11,12. Correlations, in turn, provide structure and are known to influence the dynamical and structural properties of interconnected networks, as has recently been shown13.

Most natural networks form hubs, increasing the relevance of certain nodes. This adds a degree of freedom to the system, in determining whether hubs broadcast information to other networks or, conversely, whether cross-network communication is governed by nodes with less influence in their own network.

We develop a full theory for systems of structured networks, identifying a structural communication protocol that ensures the system of networks is stable (less susceptible to catastrophic failure) and optimized for fast communication across the entire system. The theory establishes concrete predictions of a regime of correlated connectivity between the networks composing the system.

We test these predictions with two different systems of brain connectivity based on functional magnetic resonance imaging (fMRI) data. The brain organizes in a series of interacting networks9,14, presenting a paradigmatic case study for a theory of connected correlated networks. We show that for two independent experiments of functional networks in task and resting states in humans, the systems of brain networks organize optimally, as predicted by the theory.

Our results provide a plausible explanation for the observation that natural networks do not show frequent catastrophic failure as expected by theory. They offer a specific theoretical prediction of how structured networks should be interconnected to be stable. And they demonstrate, using two examples of functional brain connectivity, that the structure of cross-network connections coincides with theoretical predictions of stability for different functional architectures.

We present a theory based on a recursive set of equations to study the cascading failure and percolation process for two correlated interconnected networks. The theory generalizes an analytical approach previously developed for single networks15 to the study of cascading behaviour in interconnected correlated networks (analytic details in Supplementary Section I). Here we refer to the most important aspects of the theory and the corresponding set of predictions. The theory can be extended to n interconnected networks by following ref. 16.

We consider two interconnected networks, each one having a power-law degree distribution characterized by exponent γ, P(kin) ∼ kin^(−γ), valid up to a cutoff kmax imposed by their finite size. Here kin is the number of links of a node towards nodes in the same network. This power law implies that a few nodes will be vastly connected within the network (hubs), whereas the majority of nodes will be weakly connected to other nodes in the network.
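
As a rough illustration of this ingredient, the following Python sketch draws in-degrees from a bounded power law with exponent γ and cutoff kmax; the function name, the seed and the lower bound k_min are our own choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_bounded_power_law(n, gamma=2.5, k_min=2, k_max=100):
    """Draw n in-degrees from P(k) ~ k**(-gamma), truncated at k_max."""
    ks = np.arange(k_min, k_max + 1)
    weights = ks.astype(float) ** (-gamma)
    weights /= weights.sum()
    return rng.choice(ks, size=n, p=weights)

k_in = sample_bounded_power_law(10_000)
# a handful of highly connected hubs, many weakly connected nodes
print(k_in.max(), np.median(k_in))
```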

The structure between interconnected networks can be characterized by two parameters: α and β (Fig. 1a). The parameter α, defined as

kout ∝ kin^α,  (1)
where kout is the degree of a node towards nodes in the other network, determines the likelihood that hubs of each network are also the principal nodes connecting both networks. For α > 0, the nodes of networks A and B that connect the two networks will typically be hubs in A and B, respectively (Fig. 1a, right panels). For α < 0, by contrast, the two networks will be connected preferentially by nodes of low degree within each network (Fig. 1a, left panels).
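
A minimal sketch of how such an α-correlation could be imposed when building a synthetic system of networks; the function name, the rescaling to a target mean out-degree and the clamping to at least one outgoing link are our own assumptions.

```python
import numpy as np

def assign_out_degrees(k_in, alpha, mean_k_out=1.0):
    """Give each node an inter-network degree k_out proportional to k_in**alpha.

    alpha > 0 routes inter-network links through hubs; alpha < 0 routes them
    through low-degree nodes. Clamping to >= 1 link per node is an arbitrary
    simplification for this sketch.
    """
    raw = np.asarray(k_in, dtype=float) ** alpha
    scale = mean_k_out / raw.mean()
    return np.maximum(1, np.round(scale * raw)).astype(int)

# example: hubs carry the cross-network links when alpha = 1
# k_out = assign_out_degrees(k_in, alpha=1.0)
```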

Figure 1: Modelling degree–degree correlations between interconnected networks.

a, Hubs (red nodes) and non-hubs (blue nodes) have kout outgoing links (wiggly blue links) according to the parameter α. When α < 0, the outgoing links are more likely to be found attached to non-hub nodes. When α > 0, hubs are favoured over non-hub nodes. Nodes from different networks are connected according to β. When β > 0, nodes with similar degree prefer to connect between themselves, and when β < 0, nodes connect disassortatively. For simplicity we exemplify the outgoing links emanating from only a few nodes in network A according to (α, β). b, Conditional mode of failure: a node fails every time it becomes disconnected from the largest component of its own network, or loses all its outgoing links. All stable nodes have at least one outgoing link. We exemplify only one cascading path for simplicity. In reality, we investigate the cascade produced by removal of a fraction 1 − p of nodes from both networks. With the failure of the hub indicated in the figure (Stage 1), all its non-hub neighbours also fail because they become isolated from the giant component in A (Stage 2). In Stage 3 the upper hub from network B fails, owing to the conditional interaction, because it loses connectivity with network A even though it is still connected in B. With the failure of this second hub all its non-hub neighbours become isolated, leading to their failure (Stage 4). This leads to a further removal of the second outgoing link and the cascading failure propagates back to network A (Stage 5). Because no more nodes become isolated, the cascading failure stops with the mutual giant component shown in Stage 5. At this point we measure the fraction of nodes in the giant component of A and B. c, Redundant interaction: the failure of a node only leads to further failure if its removal isolates its neighbours in the same network. The failure of the hub (Stage 1) does not propagate the damage to the other network (Stages 2 and 3) and therefore there is no cascading in this interaction. We measure the fraction of nodes in the mutually connected giant component. We note that nodes can be stable even if they do not have outgoing links, as long as they belong to the mutually connected component. Thus, the mutually connected giant component may contain nodes which are not part of the single giant component of one of the networks, as shown in Stage 3, network A.

The parameter β defines the indegree–indegree internetwork correlations as11,12:

kinnn ∝ kin^β,  (2)
where kinnn is the average indegree of the nearest neighbours of a node in the other network. It determines the convergence of connections between networks, that is, the likelihood that a link connecting networks A and B joins nodes of similar degree in both. Intuitively, equations (1) and (2) can be seen as a compromise between redundancy and reach of connections between both networks. For β > 0 connections between networks are convergent (assortative, Fig. 1a, top panels), whereas for β < 0 they are divergent (disassortative, Fig. 1a, bottom panels). Uncorrelated networks have α = 0 and β = 0.
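
The exponent β can also be estimated empirically from a list of cross-network links. The sketch below follows our own conventions (edges given as (i, j) index pairs and an unbinned log–log fit); in practice one would bin kin logarithmically before fitting, as in Fig. 3f.

```python
import numpy as np

def estimate_beta(k_in_A, k_in_B, edges_AB):
    """Slope of log(k_in^nn) versus log(k_in), measured across the inter-network links.

    k_in_A, k_in_B : in-degrees of the nodes of networks A and B
    edges_AB       : list of (i, j) pairs linking node i in A to node j in B
    """
    k_in_A = np.asarray(k_in_A, dtype=float)
    k_in_B = np.asarray(k_in_B, dtype=float)
    nn_sum = np.zeros_like(k_in_A)
    nn_cnt = np.zeros_like(k_in_A)
    for i, j in edges_AB:              # accumulate neighbour in-degrees across networks
        nn_sum[i] += k_in_B[j]
        nn_cnt[i] += 1
    has_nbr = nn_cnt > 0
    k = k_in_A[has_nbr]
    k_nn = nn_sum[has_nbr] / nn_cnt[has_nbr]
    ok = (k > 0) & (k_nn > 0)
    slope, _ = np.polyfit(np.log(k[ok]), np.log(k_nn[ok]), 1)
    return slope
```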

We analyse how the system of two correlated networks breaks down after random failure (random attack) of a fraction 1 − p of nodes, for different patterns of between-network connectivity characterized by (α, β). We adopt the conventional percolation criterion of stability and connectivity, measuring how the largest connected component breaks down following the attack3. In classic percolation of single networks, two nodes of a network are randomly linked with probability p (ref. 17). For low p, the network is fragmented into subextensive components. Percolation theory of random networks demonstrates that as p increases, there is a critical phase transition in which a single extensive cluster or giant component spans the system (the critical p is referred to as pc).

A robust notion of stability in a system of networks can be obtained by identifying the pc at which a cohesive mutually connected network breaks down into disjoint subcomponents under different forms of attack. Network topologies with low pc are robust, since a majority of nodes must be removed to break them apart. In contrast, high values of pc indicate a fragile network, which breaks down when only a few nodes are removed.
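
For a single network, this stability criterion can be probed numerically with a simple helper (our own code, not the paper's): keep each node with probability p, measure the relative size of the largest component, and locate where it vanishes as p is lowered.

```python
import numpy as np
import networkx as nx

def giant_component_fraction(G, p, rng):
    """Keep each node with probability p and return the size of the largest
    connected component as a fraction of the original network."""
    kept = [v for v in G.nodes() if rng.random() < p]
    H = G.subgraph(kept)
    if H.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

# sweeping p and locating where the fraction vanishes gives an estimate of p_c
# G = nx.barabasi_albert_graph(10_000, 2)
# rng = np.random.default_rng(0)
# curve = [giant_component_fraction(G, p, rng) for p in np.linspace(0.0, 1.0, 21)]
```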

Here we analyse two qualitatively different manners in which the networks interact and propagate failure. In one mode (conditional interaction, Fig. 1b) a node in network B cannot function (and hence is removed) if it loses all connectivity with network A after the attack3. In the second mode (redundant interaction, Fig. 1c) a node in network B may survive even if it is completely decoupled from network A, provided it remains attached to the largest component of network B (ref. 4). To understand why these two responses to failure are pertinent in real networks, it helps to consider the interaction between power and data networks. If electricity can flow only through the cables of the power network, a node in the data network unplugged from the power system shuts off and stops functioning. This situation corresponds to two networks coupled in a conditional manner, a case treated in ref. 3 considering one-to-one random connections between networks. Consider instead a printer or any peripheral that can be plugged into the mains electricity network but can also receive power through a USB cable from the computer. Such a node may still function even if it is disconnected from the other network, as long as it remains connected to its local network. This corresponds to the redundant interaction as treated by ref. 4 in the unstructured case.
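
For the redundant interaction, the surviving structure can be read off by merging intra- and inter-network links into a single graph and taking its largest component (the mutually connected giant component of Fig. 1c; the equivalence with a merged network is stated later in the text). A minimal sketch under our own data conventions, with node labels tagged by network so that A and B stay disjoint:

```python
import networkx as nx

def redundant_giant_fraction(A, B, cross, alive_A, alive_B):
    """Fraction of all nodes in the mutually connected giant component when
    ingoing and outgoing links are treated on the same footing (redundant mode)."""
    M = nx.Graph()
    M.add_edges_from((("A", u), ("A", v)) for u, v in A.edges()
                     if u in alive_A and v in alive_A)
    M.add_edges_from((("B", u), ("B", v)) for u, v in B.edges()
                     if u in alive_B and v in alive_B)
    M.add_edges_from((("A", a), ("B", b)) for a, b in cross
                     if a in alive_A and b in alive_B)
    total = A.number_of_nodes() + B.number_of_nodes()
    if M.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(M)) / total
```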

We first investigate the stability of two interacting scale-free networks for a value of γ set arbitrarily to 2.5 and kmax = 100, in a regime where each isolated network is stable and robust to attack18. The attack starts with the removal of a fraction 1 − p of nodes chosen at random from both networks. This attack produces extra failures of, for instance, nodes in B: in the conditional interaction, a node in B fails if it disconnects from the giant component of network A or from the giant component of B; in the redundant interaction, it fails only if it disconnects from both the giant component of network A and the giant component of network B. In the conditional mode, these failures may in turn cause nodes in A that lose all connectivity with B to fail, producing a cascade; other nodes in A may also fail as they become disconnected from the giant component of A, and the cascading process iterates until it converges to a final configuration. By definition, only the conditional mode can produce cascading effects; the redundant mode cannot. The theoretical analysis of this process leads to a set of recursive equations (Supplementary Section I) that provides a stability phase diagram for the critical percolation threshold pc(α, β) under attack in redundant and conditional failures for a given (γ, kmax), as seen in Fig. 2.
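
The conditional cascade can also be simulated directly. The sketch below is our own illustrative implementation of the rule of Fig. 1b (a node survives only while it belongs to the giant component of its own network and retains at least one outgoing link to a surviving node); it is not the recursive theory of Supplementary Section I.

```python
import networkx as nx

def conditional_cascade(A, B, cross, p, rng):
    """Iterate the conditional failure rule after a random attack removing a
    fraction 1 - p of the nodes of both networks; returns the surviving sets.

    cross : list of (a, b) inter-network links between nodes of A and B
    """
    alive_A = {v for v in A.nodes() if rng.random() < p}
    alive_B = {v for v in B.nodes() if rng.random() < p}

    def giant(G, alive):
        H = G.subgraph(alive)
        if H.number_of_nodes() == 0:
            return set()
        return max(nx.connected_components(H), key=len)

    while True:
        g_A, g_B = giant(A, alive_A), giant(B, alive_B)
        # a node keeps functioning only if it sits in its own giant component
        # and still has at least one outgoing link to a surviving node
        has_out_A = {a for a, b in cross if b in alive_B}
        has_out_B = {b for a, b in cross if a in alive_A}
        new_A, new_B = g_A & has_out_A, g_B & has_out_B
        if new_A == alive_A and new_B == alive_B:
            return alive_A, alive_B
        alive_A, alive_B = new_A, new_B
```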

Figure 2: Stability phase diagram of pc(α, β) for conditional and redundant failure.

Percolation threshold pc(α, β) predicted by theory for coupled networks for generic values γ = 2.5 and kmax = 100 in conditional interaction (a) and redundant interaction (b). We use a bounded power law for closer comparison with experimental data. For a given system, the results are independent of the cutoff, provided it is large enough. For the conditional interaction the system is more stable (low value of pc) when α < 0 as well as for α ≈ 1 and β > 0, and exhibits a maximum in pc (unstable) close to α ≈ 0.25 and β < 0. The redundant interaction is instead more unstable for α < 0 and becomes stable for α ≈ 1 and β > 0. Thus the best compromise between both modes of failure is for values located in the upper-right quadrant (α ≈ 1, β > 0).

Figure 2 reveals that the relation between a network’s internal structure and the pattern of connection between networks critically determines whether attacks lead to catastrophic cascading failures (high pc) or not (low pc). For conditional interactions, the system of networks is stable when α < 0 (indicated by low pc(α, β), blue region on the left of Fig. 2a) or for α ≳ 0.5 and β > 0 (light blue top-right quadrant), and becomes particularly unstable for intermediate values of 0 < α < 0.5 and β < 0. This result shows that the system of networks is stable when the hubs are protected (α < 0) by being isolated from network–network connectivity or when, in contrast, the bulk of connectivity within and across networks is sustained exclusively by a very small set of hubs (large α, β). Intermediate configurations, where hubs interconnect with low-degree nodes, are highly unstable because hubs can be easily attacked via conditional interactions, and lead to catastrophic cascading after attack. Similar unstable configurations appear in the one-to-one random interconnectivity3.

When two networks interact in a redundant manner, the system of networks is less vulnerable to attacks (Fig. 2b). This expected result is manifested by the fact that, even for small values of p ≈ 0.1, the system of networks remains largely connected for any (α, β). The non-intuitive observation is that the relation between network internal structure and the pattern of connection between networks which optimizes stability differs from the conditional interaction (Fig. 2a). In fact, α < 0 leads to the least stable configurations (larger value of pc in Fig. 2b, red region), and the only region which maximizes stability corresponds to high values of α and β > 0 (blue region in Fig. 2b), that is, an interaction where connection between networks is highly redundant and carried only by a few hubs of each network. Thus, the parameters that maximize stability for both interactions lie in the region α ≈ 1 and β > 0.
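
Numerically, a single point of such a phase diagram can be estimated by bisecting on p with the cascade sketch above (again our own illustrative code; the trial count, tolerance and survival cutoff are arbitrary choices). Repeating the estimate while sweeping (α, β), and rebuilding the coupled networks for each pair, traces out a diagram analogous to Fig. 2a.

```python
import numpy as np

def estimate_pc(A, B, cross, rng, trials=10, tol=0.02, survive_frac=0.01):
    """Bisection estimate of p_c for the conditional mode, reusing
    conditional_cascade() from the sketch above."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = 0.5 * (lo + hi)
        fractions = []
        for _ in range(trials):
            alive_A, _ = conditional_cascade(A, B, cross, p, rng)
            fractions.append(len(alive_A) / A.number_of_nodes())
        if np.mean(fractions) > survive_frac:
            hi = p   # a mutual giant component survives: p_c lies below p
        else:
            lo = p   # the system fragments: p_c lies above p
    return 0.5 * (lo + hi)
```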

Systems of brain networks present an ideal candidate to examine this theory for the following two reasons. First, local brain networks organize according to a power-law degree distribution19,20. Second, some aspects of local function are independent of long-range global interactions with other networks (as in the redundant interaction), such as the processing of distinct sensory features, whereas other aspects of local function are shut down when connectivity to other networks is lost (as in the conditional interaction), such as integrative perceptual processing21. Hence, the theory predicts that, to ensure stability for both modes of dependency, brain networks ought to be connected with positive and high values of α and positive values of β.

Next, we examine this hypothesis for two independent functional magnetic resonance imaging (fMRI) experiments: human resting-state data obtained from the NYU public repository22 and human dual-task data23 previously used to investigate brain network topology14,24,25 (see Methods and Supplementary Section II for details). We first identify functional networks (resting state, Fig. 3a and dual task, Fig. 3b) made of nodes connected by strong links—that is, by highly correlated fMRI signals14. These networks are interconnected by weak links (low correlation in the fMRI signal) following the methods of ref. 14. The indegree distribution of the system of networks follows a bounded power law (Fig. 3c, d and Table 1) and the exponents α and β show high positive values for both experiments (Fig. 3e, f and Table 1).

Figure 3: Analysis of interconnected functional brain networks.

a, Clustering analysis to obtain the system of networks for resting-state data for a typical subject out of 12 scans analysed. Left plot shows the fraction of nodes in the largest network versus T. We identify one percolation-like transition with the jump at Tc = 0.854. Strong ingoing links define the networks and correspond to T > Tc (ref. 14). At Tc, the two largest networks, shown in the right panel in the network representation and in the inset in the brain, merge. Interconnecting weak outgoing links are defined for 0.781 ≤ T < Tc (plotted in grey). b, The same clustering analysis is done to identify the interconnected network in dual task14. We show a typical scan out of a total of 16 subjects. The strong ingoing links have T > Tc = 0.914, and weak outgoing links are defined for 0.864 ≤ T < Tc. c, Indegree kin distribution for the resting-state experiment. d, Indegree kin distribution for the dual-task experiment. The black lines in c and d are fits to the data in accordance with the methods presented in Supplementary Section IIA. The tails of the distributions follow P(kin) ∼ kin^(−γ), with γ = 2.85 and γ = 2.25 respectively. e, Outdegree kout as a function of kin for resting-state and dual-task experiments, according to equation (1). f, kinnn as a function of kin for resting-state and dual-task experiments, according to equation (2). The black lines in e and f are linear fits to the data.

Table 1 Parameters characterizing the studied human brain networks.

To examine whether these values are optimal for the specific (γ, kmax) parameters of these networks, we projected, for each experiment, the measured values of α and β onto the theoretically constructed stability phase diagram quantified by pc(α, β) in the conditional and redundant modes (Fig. 4). Remarkably, the experimental values of α and β (white circles) lie within the relatively narrow region of parameter space that minimizes failure for both conditional and redundant interactions. Overall these results demonstrate that brain networks tested under distinct mental states share the topological features that confer stability to the system.

Figure 4: Stability phase diagram for brain networks.

Percolation threshold pc(α, β) obtained from theory for two coupled networks with power-law exponents and cutoff given by the brain networks in resting state (a) and dual task (b). The left panels are for conditional interactions and the right panels for redundant interactions. The white circles represent the data points of the real brain networks. They indicate that the brain structure results from a compromise of optimal stability between both modes of failure.

Our result hence provides a theoretical revision to the current view that systems of networks are highly unstable. We show that for structured networks, if the interconnections are provided by the hubs of each network (α > 0.5) and the convergence of internetwork connections is moderate (β > 0), the system of networks is stable. This stability holds in the conditional interaction3 and in the more robust topology of the redundant interaction4. The redundant condition is equivalent to stating that the system of networks merges into a single network (ingoing and outgoing links are treated as the same), so optimality for this topology amounts to maximizing the size of the giant component formed by the connection of both networks. As a consequence, maximizing robustness under both conditions combines robustness in the more conventional conditional interaction, where links of one network are strictly necessary for the proper function of the other network, with a notion of information flow and storage given by the classic percolation definition of the size of the maximal mutual component across both networks. In other words, these parameters produce a set of interacting nodes that is maximally large and robust to failure.

The most natural metaphors for man-made systems of networks are electricity (wires) and the Internet or voice connectivity (data). A more direct analogue to this case in a living system such as the brain would be the interaction between anatomic, metabolic and vascular networks (wires) and their coupling to functional correlations (data)26. Here instead we adopted the theory of networks of networks to investigate the optimality of coupled functional brain modules. The consistency between experimental data and theoretical predictions even in this broader notion of coupled networks is suggestive of the possible broad scope of the theory, making it a candidate to study a wider range of interconnected networks27.

Methods

Experimental analysis.

The interdependent functional brain networks are constructed from fMRI data following the methods of ref. 14. First, the blood oxygen level dependent (BOLD) signal from each brain voxel (node) is used to construct the functional network topology based on standard methods19,20 using the equal-time cross-correlation matrix, Cij, of the activity of pairs of voxels (Supplementary Section II).
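
In code, this first step amounts to a single call; a minimal sketch assuming the preprocessed BOLD data is already arranged as a voxels-by-time array (the function and variable names are ours).

```python
import numpy as np

def equal_time_correlation(bold):
    """Equal-time cross-correlation matrix C_ij of voxel activity.

    bold : array of shape (n_voxels, n_timepoints), one preprocessed BOLD
           time series per voxel (node).
    """
    return np.corrcoef(bold)
```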

The derivation of a binary graph from a continuous connectivity matrix relies on a threshold T, where the link between two nodes (voxels) i and j is occupied if T < Cij (refs 14, 19), as in bond percolation. A natural and non-arbitrary choice of threshold can be derived from a clustering bond percolation process. The size of the largest connected component of voxels as a function of T reveals clear percolation-like transitions14 in the two data sets, identified by the jumps in the size of the largest component in Fig. 3a, b. The emergent networks in resting state correspond to the medial prefrontal cortex, posterior cingulate and lateral temporoparietal regions, all of them part of the default mode network (DMN) typically seen in resting-state data22. In dual task, as expected for an experiment involving visual and auditory stimuli and bi-manual responses, the responsive regions include bilateral visual occipito-temporal cortices, bilateral auditory cortices, motor, premotor and cerebellar cortices, and a large-scale bilateral parieto-frontal structure.
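
A sketch of the thresholding and percolation scan used to pick Tc (our own helper; the threshold grid in the commented usage is illustrative):

```python
import numpy as np
import networkx as nx

def largest_cluster_fraction(C, T):
    """Fraction of voxels in the largest connected component of the graph
    whose links i-j are occupied whenever C[i, j] > T."""
    n = C.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    i, j = np.where(np.triu(C, k=1) > T)       # upper triangle only, no self-links
    G.add_edges_from(zip(i.tolist(), j.tolist()))
    return max(len(c) for c in nx.connected_components(G)) / n

# the percolation-like jump in this curve identifies T_c (Fig. 3a, b)
# thresholds = np.linspace(0.5, 0.99, 50)
# curve = [largest_cluster_fraction(C, T) for T in thresholds]
```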

Scaling of correlations in the brain.

We identify functional networks (see Fig. 3a, b right panels) made of nodes connected by strong links (strong BOLD signal correlation Cij) which are interconnected by weak links (weak BOLD signal correlation)14,28. Statistical analysis based on standard maximum likelihood and KS methods29 (Supplementary Section IIA) yields the values of the indegree exponents of each functional brain network: γ = 2.85 ± 0.04 and kmax = 133 for resting state and γ = 2.25 ± 0.07, kmax = 139 for dual task (Fig. 3c, d). The obtained exponents α show high positive values for both experiments: α = 1.02 ± 0.02 and 0.92 ± 0.02 for resting-state and dual-task data, respectively (Fig. 3e). The internetwork connections show positive exponents for both systems: β = 0.66 ± 0.03 and β = 0.79 ± 0.04 for resting state and dual task, respectively (Fig. 3f).
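
These fits can be reproduced in outline as follows. This is a simplified stand-in for the maximum-likelihood plus Kolmogorov–Smirnov procedure of ref. 29 (a continuous-approximation MLE for γ and an unbinned least-squares slope for α and β), not the exact pipeline of Supplementary Section IIA.

```python
import numpy as np

def powerlaw_exponent_mle(degrees, k_min=1):
    """Continuous-approximation maximum-likelihood estimate of gamma for a
    power-law tail k >= k_min (Clauset-style estimator)."""
    k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
    return 1.0 + len(k) / np.sum(np.log(k / (k_min - 0.5)))

def loglog_slope(x, y):
    """Least-squares slope of log y against log x, used here for alpha
    (k_out versus k_in) and beta (k_in^nn versus k_in)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    ok = (x > 0) & (y > 0)
    slope, _ = np.polyfit(np.log(x[ok]), np.log(y[ok]), 1)
    return slope
```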

Hence, in accordance with the predictions of the theory, these two interdependent brain networks derived from qualitatively distinct mental states (resting state and strong engagement in a task that actively coordinates visual, auditory and motor function) show consistently high values of α and positive values of β. Figure 4 shows the theoretical phase diagram pc(α, β) in conditional and redundant modes calculated for coupled networks with the experimental values γ = 2.25 and 2.85. Left panels show the prediction of pc(α, β) in the conditional mode of failure and right panels correspond to the redundant mode. The experimental (α, β) are shown as white circles lying in stable regions of the phase diagram (low pc). Interestingly, the convergence of internetwork connections, β, is slightly higher under task conditions, suggesting an additional degree of freedom of the system of networks: the dynamic allocation of functional connections, governed in the brain by context-dependent processes such as attention or learning. Further research is planned to investigate the neuronal mechanisms underlying the internetwork communication routines specified by β.