Introduction

A key characteristic of complex systems, as large collections of interconnected entities, is their robustness against damage, whether it is genetic mutations in gene–gene interaction networks1, extinction of species in ecosystems2, failure of internet routers3, or unavailability of transportation means4. This common property might be deeply rooted in the unexpected resistance of their structures to disintegration5,6,7,8,9. The structure is often represented as a network, where nodes play the role of entities and links specify the connections between them. In this framework, network integrity—defined as the availability of link sequences connecting every pair of nodes—and its maintenance have proven fundamental for the correct functioning of the system as a whole4,10.

Disintegration, in contrast, is often caused by internal failures or external attacks, widely modeled in terms of the progressive removal of nodes or links. Consequently, a network contracts and dismantles into a number of components of different sizes, each containing a number of interconnected nodes while being disconnected from the other components. As the shrinking process proceeds, the size of the largest connected component (LCC) decays until it vanishes and the network dismantles into isolated nodes.

The size of the LCC has been widely adopted as a proxy for network robustness under random or targeted removals5,7,10. The latter involves identifying the central nodes and detaching them according to their ranking, aiming for the maximum possible damage. Asking which set of nodes is relevant for a fast disintegration has led to various definitions and proxies, none of which outperforms the others in every scenario—e.g., ranking based on the betweenness centrality proves more effective for certain classes of networks7,11, but is surpassed by the degree centrality in other cases10. More recent, sophisticated descriptors are available and often work well in a range of scenarios12,13,14. A systematic evaluation of state-of-the-art methods reveals that one of the best approaches is based on adaptive betweenness, one of the oldest measures, while being computationally expensive15 compared with other effective and fast measures such as CoreHD16. The problem of optimal percolation and network dismantling remains open, and the aim of this article is to provide a novel framework to study the disintegration process, emphasizing two points seemingly missing in the literature. First, most centrality measures rely on network descriptors such as degree or shortest path. Evidently, the information content of a network as a whole cannot be fully captured by these proxies. Second, the importance of network integrity, and of maintaining it under damage, lies in sustaining node–node communication. Thus, understanding the information exchange among the nodes beyond shortest-path communication, and how it is affected in the disintegration process, requires a multiscale framework—e.g., to differentiate between short- and long-range signaling between the nodes, not necessarily passing through the shortest paths as captured by the betweenness descriptor. The goal of our paper is to better understand the effectiveness of macroscopic functions describing information dynamics within complex systems17,18,19 for assessing the centrality of nodes and the robustness of networks. To this end, we propose using network entanglement as a new centrality measure to design targeted attack strategies and study network robustness. Network entanglement is rooted in a recently introduced statistical field theory that describes the information dynamics within networks20. In this framework, the diffusion of information can be calculated using a propagator with a tunable parameter β encoding the propagation time and playing the role of a multiscale lens. Our findings unravel the non-trivial relationship between the topological importance of nodes and their centrality for information dynamics, characterized by entanglement, at the micro-, meso-, and macro-scale. We use the entanglement to study the disintegration of a range of synthetic networks, as well as real-world social, biological, and transportation systems, highlighting the sensitivity of networks’ macroscopic functions to their topological properties. Our results show that entanglement, despite being computationally slow, outperforms the considered existing centrality measures in breaking the networks up to their critical fraction.

Results

Theoretical grounds

Information flow between components of complex systems can be mapped into the dynamics of a field on top of the network, governed by a general differential equation which, after linearization, reduces to a Schrödinger-like equation with a quasi-Hamiltonian \(\hat{{\bf{H}}}\). This directly leads to the propagator \({e}^{-\beta \hat{{\bf{H}}}}\), from which an ensemble of operators acting like information streams can be obtained, directing the flow of the field from unit to unit20. The superposition of the information streams, weighted by their activation probabilities, shapes a Gibbsian-like density matrix that has been used to describe the state of interconnected systems and investigate their macroscopic properties, such as the Von Neumann entropy.

Remarkably, when the dynamical process governing the evolution of the field is continuous diffusion, the Von Neumann entropy of the ensemble coincides with the spectral entropy proposed to analyze networks from an information-theoretic perspective17,21 and used successfully to investigate a range of empirical systems, from transportation systems19 to the human microbiome17, pan-viral interactomes22, and the brain23. Consequently, for a network G of N nodes, with connectivity represented by the adjacency matrix \(\hat{{\bf{A}}}\) (Aij = 1 if nodes i and j are connected and 0 otherwise), the quasi-Hamiltonian takes the shape of the combinatorial Laplacian \(\hat{{\bf{L}}}=\hat{{\bf{D}}}-\hat{{\bf{A}}}\), where \(\hat{{\bf{D}}}\) is the diagonal degree matrix, Dij = δijki, with ki = ∑jAij denoting the degree of the ith node (and \(\bar{k}\) the average degree of the nodes). One obtains the density matrix:

$${\hat{{\boldsymbol{\rho }}}}_{\beta }=\frac{{e}^{-\beta \hat{{\bf{L}}}}}{\,{\text{Tr}}\,\left({e}^{-\beta \hat{{\bf{L}}}}\right)},$$
(1)

in terms of the ratio between the propagator of the diffusion dynamics on top of the network, with β encoding the propagation time, and its trace, which encodes the partition function \({Z}_{\beta }=\,{\text{Tr}}\,\left({e}^{-\beta \hat{{\bf{L}}}}\right)\) and measures the dynamical trapping19—i.e., the tendency of the structure to trap the flow of information. Using Eq. (1), the Von Neumann entropy can be obtained as:

$${S}_{\beta }(G)=-\,{\text{Tr}}\,\left({\hat{{\boldsymbol{\rho }}}}_{\beta }{{\rm{log}}}_{2}{\hat{{\boldsymbol{\rho }}}}_{\beta }\right).$$
(2)

The entropy measures the mixedness of the streams, quantifying the diversity of information dynamics in the system. Interestingly, the functional diversity of nodes, as senders or receivers of information, has been shown to be proportional to the Von Neumann entropy20.
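As a concrete illustration, the density matrix of Eq. (1) and the entropy of Eq. (2) can be computed directly from the combinatorial Laplacian. The following minimal sketch in Python (using numpy, scipy, and networkx; the function names are ours and not part of the original implementation) shows one possible realization:

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def density_matrix(G, beta):
    """Gibbsian-like density matrix of Eq. (1): rho = exp(-beta L) / Tr[exp(-beta L)]."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    P = expm(-beta * L)        # propagator of the diffusion dynamics
    return P / np.trace(P)     # normalize by the partition function Z_beta

def von_neumann_entropy(G, beta):
    """Von Neumann (spectral) entropy of Eq. (2), in bits."""
    rho = density_matrix(G, beta)
    nu = np.linalg.eigvalsh(rho)       # eigenvalues of the density matrix
    nu = nu[nu > 1e-15]                # discard numerically zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(nu * np.log2(nu)))

G = nx.erdos_renyi_graph(100, 0.1, seed=1)
print(von_neumann_entropy(G, beta=1.0))
```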

To quantify the importance of a single node x in the interconnected system, we first detach it from the network G together with its incident edges. The removed node and its incident edges form a star network, indicated by δGx, of size kx + 1, where kx is the degree of node x. The remainder of G forms the perturbed network \({G}_{x}^{\prime}\), which has N − 1 nodes (see Fig. 1). Consequently, the effect of node removal on the mixedness of information streams or, in other words, the functional diversity of the system defines the entanglement:

$${M}_{\beta }(x)=[{S}_{\beta }({G}_{x}^{\prime})+{S}_{\beta }(\delta {G}_{x})]-{S}_{\beta }(G),$$
(3)
Fig. 1: Detachment process and entanglement centrality.

a The process of detaching node x and its incident edges from the original network G is shown, forming a perturbed network \({G}_{x}^{\prime}\), colored in blue, and a star network δGx including the detached node x and its neighbors, colored in red for clarity. b The entanglement of each node is shown as a function of the propagation time β, for a random geometric network with N = 100 nodes and radius 0.15, where each trajectory is colored according to the degree of the detached node. In this plot, the vertical axis shows the propagation time β and the horizontal axis encodes the entanglement. The collective entanglement \({\bar{M}}_{\beta }\), defined as the average entanglement of all the nodes, is represented by the orange dashed line and reaches a minimum around the middle scales.

as a function of the node position in the network, the topology, the dynamical process governing the flow, and the propagation time β (see Fig. 1). It is worth noting that the entanglement cannot be captured by network centrality measures such as degree and betweenness (see Fig. 2).
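A direct numerical implementation of Eq. (3), building on the entropy routine sketched above (again, our own illustrative code rather than the authors' implementation), could read:

```python
def entanglement(G, x, beta):
    """Node-network entanglement of Eq. (3): M_beta(x) = [S(G'_x) + S(dG_x)] - S(G)."""
    # star network dG_x: the detached node x plus its incident edges only
    dGx = G.subgraph([x] + list(G.neighbors(x))).copy()
    dGx.remove_edges_from([(u, v) for u, v in dGx.edges() if x not in (u, v)])
    # perturbed network G'_x with N - 1 nodes
    Gx = G.copy()
    Gx.remove_node(x)
    return (von_neumann_entropy(Gx, beta)
            + von_neumann_entropy(dGx, beta)
            - von_neumann_entropy(G, beta))
```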

Fig. 2: Centrality correlations.

The four heatmaps represent the pairwise Spearman correlation between a number of centrality measures, including entanglement, for different topologies such as the random geometric graph (RGG), Watts–Strogatz (WS), Erdős–Rényi (ER), and Barabási–Albert (BA) models. The results indicate no specific similarity between entanglement—at small, mid, and large scales—and the other descriptors. In this figure, the vertical and horizontal axes encode the considered centrality measures and the colors indicate their pairwise Spearman correlation.

Here, we analytically show that at extremely small temporal scales the entropy approaches the logarithm of the network size, \(\mathop{\lim }\limits_{\beta \to 0}{S}_{\beta }(G)={{\rm{log}}}_{2}N\) (see “Methods”). Also, we introduce a mean-field approximation of the Von Neumann entropy, based on the continuous diffusion process, as:

$${S}_{\beta }^{MF}=\frac{1}{{\rm{log}}2}\left(\beta \bar{k}\frac{{Z}_{\beta }-1}{{Z}_{\beta }}+{\rm{log}}{Z}_{\beta }\right),$$
(4)

demonstrating that it has higher precision at large temporal scales, where it can be approximated as \(\mathop{\lim }\limits_{\beta \to \infty }{S}_{\beta }^{MF}\approx \beta \bar{k}{{\rm{log}}}_{2}C\) (see “Methods”), with C being the number of disconnected components in the network G.

We use the above analytical treatments to uncover the behavior of node-network entanglement introduced in Eq. (3), at extremely small and large scales (see “Methods”):

  • β → 0: \({M}_{\beta }(x)\approx {{\rm{log}}}_{2}({k}_{x}+1)\)

  • β → ∞: \({M}_{\beta }(x)\approx \beta \bar{k}{{\rm{log}}}_{2}{C}_{x}^{\prime}\)

where kx is the degree of the removed node and \({C}_{x}^{\prime}\) is the number of disconnected components in the perturbed network \({G}_{x}^{\prime}\).

Clearly, at small scales entanglement is determined by the degree of the node, while at the large scale entanglement centrality evaluates the direct role of nodes in keeping the global integrity of the network, by considering the number of disconnected components generated by their detachment. It is worth remarking here that a network has the highest integrity if it has only one connected component—i.e., for every pair of nodes, there is at least one link or sequence of links (path) that connects them. The presented findings analytically show that, in the extreme cases of short- and long-range communication, topological metrics capture the effect of node removal on the state of information dynamics.
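These limiting behaviors are easy to verify numerically with the sketches above; for instance, for the small-scale limit (a sanity check on an arbitrary connected test graph, with a hypothetical choice of β = 10⁻⁴):

```python
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(100, 3, seed=0)   # any connected test topology works here
x = max(G.nodes, key=G.degree)                 # e.g., the highest-degree node

# small-scale limit: M_beta(x) -> log2(k_x + 1) as beta -> 0
print(entanglement(G, x, beta=1e-4), np.log2(G.degree(x) + 1))
```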

Entanglement analysis of synthetic networks

We consider five different classes of networks, including the Barabási–Albert24, Erdős–Rényi, random geometric, stochastic block model with four communities, and Watts–Strogatz25 models, frequently used to mimic the topology of natural and man-made complex systems26. For each model, an ensemble of ten independent realizations of N = 256 nodes has been considered. We have kept the average degree approximately equal to 12, to allow for a more meaningful comparison across models. We adopt a variety of centrality measures in static and adaptive strategies—the latter involving the recalculation of a measure after each action on the system—to identify the important nodes and attack the networks, assessing their robustness.
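To make the static and adaptive protocols concrete, the sketch below (our own illustrative code; the degree-based ranking is a stand-in for any of the centrality measures considered here, and the ER parameters follow the values stated above) tracks the relative size of the LCC during an attack:

```python
import networkx as nx

def dismantle(G, rank_nodes, adaptive=False):
    """Remove nodes by decreasing centrality and record the relative size of the LCC."""
    H = G.copy()
    order = rank_nodes(H)                 # ranking computed once (static strategy)
    lcc_sizes = []
    while H.number_of_nodes() > 0:
        if adaptive:
            order = rank_nodes(H)         # re-rank after every removal (adaptive strategy)
        H.remove_node(order.pop(0))
        if H.number_of_nodes() == 0:
            lcc_sizes.append(0.0)
            break
        lcc = max(nx.connected_components(H), key=len)
        lcc_sizes.append(len(lcc) / G.number_of_nodes())
    return lcc_sizes

def by_degree(H):
    return sorted(H.nodes, key=H.degree, reverse=True)

G = nx.erdos_renyi_graph(256, 12 / 255)   # ER model with average degree close to 12
curve_static = dismantle(G, by_degree, adaptive=False)
curve_adaptive = dismantle(G, by_degree, adaptive=True)
```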

The results clearly show that the entanglement centrality at large scales performs as effectively as, or faster than, the other measures considered here in dismantling the network up to its critical fraction, the point at which the network starts to break into disconnected components (see Figs. 3 and 4). Remarkably, at this scale the entanglement becomes more sensitive to the transport properties of the network (see “Methods”).

Fig. 3: Robustness analysis of synthetic networks, using static centrality measures.

Disintegration of different network topologies, including the Barabási–Albert, Erdős–Rényi, random geometric, stochastic block model, and Watts–Strogatz models, is considered. The robustness of an ensemble of each network model is tested against random failures and targeted attacks based on measures of node centrality including betweenness, load, clustering, eigenvector, PageRank, closeness, current flow closeness, harmonic, subgraph, degree, and entanglement at three temporal scales (-small, -mid, -large), the last one introduced in this study and represented by dashes for more clarity (a). All these centrality measures are used in a static fashion—i.e., the ranking of the nodes according to each measure is computed only once, at the beginning. Entanglement centrality, tuned at a relatively large propagation time-scale, performs as well as or faster than the other measures in breaking the network up to its critical fraction (b). Except for entanglement at small and middle scales, the centrality measures presented in the boxes at the bottom are ordered according to the overall performance of the considered measures, across all numerical experiments.

Fig. 4: Robustness analysis of synthetic networks, using adaptive centrality measures.

Disintegration of different network topologies, including the Barabási–Albert, Erdős–Rényi, random geometric, stochastic block model, and Watts–Strogatz models, is considered. The robustness of an ensemble of each network model is tested against random failures and targeted attacks based on measures of node centrality including betweenness, load, clustering, eigenvector, PageRank, closeness, current flow closeness, harmonic, subgraph, degree, and entanglement at three temporal scales (-small, -mid, -large), the last one introduced in this study and represented by dashes for more clarity (a). In contrast with Fig. 3, here all these centrality measures are used in an adaptive fashion—i.e., the ranking of the nodes according to every measure is updated after each node removal. Similar to the static analysis, the entanglement centrality, tuned at a relatively large propagation time-scale, performs as well as or faster than the other measures in breaking the network up to its critical fraction (b). Except for entanglement at small and middle scales, the centrality measures presented in the boxes at the bottom are ordered according to the overall performance of the considered measures, across all numerical experiments.

Entanglement analysis of real-world networks

To go beyond classical synthetic models, which are not characterized by complex topological correlations, we apply our framework to a variety of real-world complex networks, representing the structure of biological, transportation, and social systems, under progressive attacks based on the discussed centrality measures.

Data sets include the neural network of the nematode worm C. elegans (N = 297) representing neurons linked by their neural junctions25, the US airports network (N = 500) representing the busiest commercial airports in the United States in 2002 with links encoding the flights between them27, the protein–protein interaction network of yeast28,29 (N = 1458), the network of Internet pages of EPA30 (N = 4253), and the citation network corresponding to the Digital Bibliography & Library Project31 (N = 12,495). All network sizes reported here are the number of nodes in the largest connected component, before the dismantling begins.
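Since all analyses start from the largest connected component of each data set, a small helper of the following form can be used (illustrative only; the edge-list file name is a hypothetical placeholder):

```python
import networkx as nx

def largest_connected_component(path):
    """Load an undirected edge list and keep only its largest connected component."""
    G = nx.read_edgelist(path)
    lcc_nodes = max(nx.connected_components(G), key=len)
    return G.subgraph(lcc_nodes).copy()

# e.g. G = largest_connected_component("us_airports_2002.edgelist")  # hypothetical file name
```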

As expected, all these real-world networks show higher robustness against random node removals than against targeted attacks, implying their ability to maintain their function under random failures. However, adopting the right targeted attack strategy can effectively disintegrate them (see Fig. 5). Interestingly, our analysis indicates that, for all the considered empirical systems, the entanglement centrality provides an effective dismantling strategy (see Fig. 5), comparable with the best measures.

Fig. 5: Robustness analysis of empirical networks.

The robustness of biological, social, and transportation empirical networks against random failures and targeted attacks is shown. More specifically, the analyzed networks include the neural network of the nematode worm C. elegans, the US airport network, the protein–protein interactions of yeast, the Internet pages of the Environmental Protection Agency, and the citation network corresponding to the Digital Bibliography & Library Project. Static attacks based on entanglement at small, middle (mid), and large scale are shown as dashed lines and compared against other centrality measures. Similar to the synthetic networks, here entanglement at large temporal scales is effective in driving the network to its critical point. Interestingly, in some cases, small-scale entanglement is faster in causing the total dismantling, yet it performs as well as or worse than the large-scale version in reaching the critical fraction.

Conclusion

Analyzing the robustness of complex systems is still a challenging task. Here, we have used Gibbsian-like density matrices to quantify the entanglement between nodes and their networks, in order to characterize the impact of node removal on system function. To this aim, we measure the change in the Von Neumann entropy of a network caused by the detachment of nodes and their incident edges. It has been previously shown that the Von Neumann entropy measures the functional diversity of nodes as senders of information20. Therefore, entanglement is a proxy for a node’s importance for the functional diversity of the system.

Our framework is multiscale, with the propagation time β playing the role of a tunable parameter which allows one to study the response of the network at micro-, meso-, and macro-scales. To this aim, we have developed a mean-field approximation of the entropy to analytically explain the behavior of entanglement centrality in the extreme cases, and performed numerical experiments to extensively study its effect on system robustness at different scales. Our results indicate that for small temporal scales, β → 0, the degree of each node determines its entanglement with the network, and entanglement centrality coincides with the well-known degree centrality. At very large scales, β → ∞, entanglement centrality measures the direct role of each node in the integrity of the network—i.e., how many disconnected components will appear if the node is detached. Finally, we have shown that the collective entanglement—i.e., the average entanglement of all the nodes with the network—reaches its minimum at a specific choice of β = βc (see “Methods”). Interestingly, at this scale, we demonstrate that entanglement centrality is rather sensitive to a node’s impact on the diffusion dynamics on top of the network, and not to the structure alone. More specifically, according to our measure, a node is ranked higher if its detachment causes a larger increase in the partition function of the system. The partition function provides a proxy for dynamical trapping, an important transport property that indicates the tendency of the network to hinder the flow of information19; therefore, strategies can be designed to lower the partition function and, consequently, enhance the diffusive flow among nodes. Conversely, at this scale, targeting the nodes according to entanglement centrality increases the partition function and, consequently, hinders the transport properties.

Here, we used entanglement centrality to define static—i.e., non-adaptive—and adaptive attack strategies, demonstrating that under both setups it is possible to efficiently dismantle the networks. It is worth mentioning that our approach requires solving an eigenvalue problem, which has high computational complexity. Our derivations provide a fast and simple way to calculate entanglement at extremely large and small temporal scales. More theoretical investigations are still required to generalize our derivations to the meso-scale and reduce the computational cost. We developed an approximated version of our algorithm, showing that it is significantly faster and also highly accurate (see “Methods”). However, we acknowledge that this approximated version is still slower than a number of other effective centrality measures, especially in the case of large networks. Nevertheless, our results clearly demonstrate the sensitivity of the statistical physics of complex information dynamics in detecting the nodes that are central for network integration, which is the main aim of the present study. In fact, our analysis of both synthetic and real-world networks, where different attack strategies are compared to network entanglement at large scale, indicates that our measure performs as well as or faster than other measures in damaging the network up to its critical fraction, across a range of scenarios. However, it becomes slower than some other measures after the critical fraction is reached, yet remains comparable to the others. This result indicates that entanglement can be used to quickly disrupt the flow exchange, but cannot be used to disintegrate a system faster than more traditional approaches.

Overall, our results provide more insight into the relationship between structure and the dynamics of information. More specifically, the nodes that are critical for the dynamics of the information field on top of the network are those keeping the structure integrated. Moreover, the presented framework opens the door for further investigation of the network contraction process, from a multiscale perspective, and of its relation with the dynamics and transport properties of complex systems.

Methods

Mean-field entropy

A mean-field approximation of the network Von Neumann entropy has been recently suggested for the random walk-based density matrices19. Similarly, here, we derive a mean-field entropy for the case of continuous diffusion. The eigenvalue spectrum of the Laplacian follows:

  • 0 = λ1 ≤ λ2 ≤ … ≤ λN

  • \(\,{\text{Tr}}\,\left(\hat{{\bf{L}}}\right)=\mathop{\sum }\limits_{i=1}^{N}{\lambda }_{i}=\mathop{\sum }\limits_{i=1}^{N}{k}_{i}=2m\)

where m is the number of links in the network, assuming no self-loops exist.

At this step, it is worth noting that \({\hat{{\boldsymbol{\rho }}}}_{\beta }\) and \(\hat{{\bf{L}}}\) can be eigen-decomposed as follows:

$$\hat{{\bf{L}}}=\hat{{\bf{Q}}}\hat{{\boldsymbol{\Lambda }}}{\hat{{\bf{Q}}}}^{-1},$$
(5)
$${\hat{{\boldsymbol{\rho }}}}_{\beta }=\hat{{\bf{Q}}}\frac{{e}^{-\beta \hat{{\mathbf{\Lambda }}}}}{{Z}_{\beta }}{\hat{{\bf{Q}}}}^{-1},$$
(6)

where the columns of \(\hat{{\bf{Q}}}\) are the eigenvectors of the Laplacian matrix and \(\hat{{\mathbf{\Lambda }}}\) is the diagonal matrix of its eigenvalues. For the density matrix, the eigenvalues follow \({\nu }_{i}(\beta )={e}^{-\beta {\lambda }_{i}}/{Z}_{\beta }\), i = 1, 2, ..., N. Hence, the Laplacian matrix and the density matrix can be eigen-decomposed simultaneously, in the basis of eigenvectors of the Laplacian matrix.
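In practice, this means the entropy of Eq. (2) can be evaluated from the Laplacian spectrum alone, without exponentiating the full matrix; a minimal sketch (equivalent to the routine given earlier, under the same assumptions and with our own function name):

```python
import numpy as np
import networkx as nx

def entropy_from_spectrum(G, beta):
    """Von Neumann entropy of Eq. (2) computed from the Laplacian eigenvalues only."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.linalg.eigvalsh(L)      # 0 = lambda_1 <= lambda_2 <= ... <= lambda_N
    w = np.exp(-beta * lam)
    nu = w / w.sum()                 # density-matrix eigenvalues nu_i = e^{-beta lambda_i} / Z_beta
    nu = nu[nu > 0]                  # guard against underflow before taking the logarithm
    return float(-np.sum(nu * np.log2(nu)))
```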

Furthermore, Eq. (2) can be rewritten as:

$${S}_{\beta }(G) = \, -\,{{\text{Tr}}}\,\left({\hat{{\boldsymbol{\rho }}}}_{\beta }{{\rm{log}}}_{2}{\hat{{\boldsymbol{\rho }}}}_{\beta }\right)\\ = \, \frac{\beta }{{\rm{log}}2}\,{{\text{Tr}}}\,\left(\hat{{\bf{L}}}{\hat{{\boldsymbol{\rho }}}}_{\beta }\right)+{{\rm{log}}}_{2}{Z}_{\beta },$$
(7)

where \({\rm{log}}\) without indicating its base refers to the natural logarithm and the trace in the first term can be written as the following summation:

$$\,{\text{Tr}}\,\left(\hat{{\bf{L}}}{\hat{{\boldsymbol{\rho }}}}_{\beta }\right)=\mathop{\sum }\limits_{i=1}^{N}{\lambda }_{i}{\nu }_{i}(\beta )=\mathop{\sum }\limits_{i=C+1}^{N}{\lambda }_{i}\frac{{e}^{-\beta {\lambda }_{i}}}{{Z}_{\beta }},$$
(8)

where the last step is justified by the fact that λ1, ..., λC = 0 for a network with C connected components. It is worth mentioning that isolated nodes are considered to be separate components and are included in C.

A mean-field approximation of the above summation can be obtained by neglecting the higher-order terms as follows:

$$\langle \lambda \nu (\beta )\rangle = \, \langle (\lambda -\bar{\lambda }+\bar{\lambda })(\nu (\beta )-\bar{\nu }(\beta )+\bar{\nu }(\beta ))\rangle \\ = \, \bar{\lambda }\bar{\nu }(\beta )+\langle (\lambda -\bar{\lambda })(\nu (\beta )-\bar{\nu }(\beta ))\rangle \\ \approx \, \bar{\lambda }\bar{\nu }(\beta ).$$
(9)

To increase the precision, the terms in the summation corresponding to λi = 0 must be excluded from the mean values of both sets of eigenvalues. Consequently, the mean value for the Laplacian eigenvalues follows:

$$\bar{\lambda }=\frac{1}{N-C}\mathop{\sum }\limits_{i=C+1}^{N}{\lambda }_{i}=\frac{2m}{N-C},$$
(10)

and for the density matrix:

$$\bar{\nu }(\beta )=\frac{1}{N-C}\mathop{\sum }\limits_{i=C+1}^{N}\frac{{e}^{-\beta {\lambda }_{i}}}{{Z}_{\beta }}=\frac{1}{N-C}\frac{{Z}_{\beta }-C}{{Z}_{\beta }}.$$

It follows that:

$$\,{{\text{Tr}}}\,\left(\hat{{\bf{L}}}\hat{{\boldsymbol{\rho }}}\right)= \, (N-C)\langle \lambda \nu (\beta )\rangle \\ \approx \, \frac{2m}{N-C}\frac{{Z}_{\beta }-C}{{Z}_{\beta }}$$
(11)

which, for a network with no isolated nodes, only one connected component (C = 1), and comparably large size N ≫ 1, reduces to:

$$\,{{\text{Tr}}}\,\left(\hat{{\bf{L}}}{\hat{{\boldsymbol{\rho }}}}_{\beta }\right)\approx \bar{k}\frac{{Z}_{\beta }-1}{{Z}_{\beta }},$$
(12)

where \(\frac{2m}{N-1}\approx \frac{2m}{N}=\bar{k}\) is the average degree of nodes.

From here, it is straightforward to combine Eqs. (7) and (12) to obtain the mean-field entropy:

$${S}_{\beta }^{MF}=\frac{1}{{\rm{log}}2}(\beta \bar{k}\frac{{Z}_{\beta }-1}{{Z}_{\beta }}+{\rm{log}}{Z}_{\beta }).$$
(13)

Whereas for networks with isolated nodes and disconnected components the mean-field entropy reads:

$${S}_{\beta }^{MF}=\frac{1}{{\rm{log}}2}(\beta \frac{2m}{N-C}\frac{{Z}_{\beta }-C}{{Z}_{\beta }}+{\rm{log}}{Z}_{\beta }).$$
(14)
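The mean-field expressions of Eqs. (13) and (14) require only the partition function, the number of links, and the number of components, and can be checked against the exact spectral entropy; a brief sketch (our own, reusing the spectral routine sketched above):

```python
import numpy as np
import networkx as nx

def mean_field_entropy(G, beta):
    """Mean-field entropy of Eq. (14); it reduces to Eq. (13) when C = 1."""
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    Z = np.exp(-beta * lam).sum()                 # partition function Z_beta
    C = nx.number_connected_components(G)
    m, N = G.number_of_edges(), G.number_of_nodes()
    return (beta * 2 * m / (N - C) * (Z - C) / Z + np.log(Z)) / np.log(2)

G = nx.barabasi_albert_graph(256, 6, seed=0)
for beta in (0.1, 1.0, 10.0):
    print(beta, entropy_from_spectrum(G, beta), mean_field_entropy(G, beta))
```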

First-order correction

The precision of the mean-field entropy can be quantified using a correction term, indicating its difference from the exact entropy, \({S}_{\beta }={S}_{\beta }^{MF}+\delta {S}_{\beta }^{MF}\). In Eq. (9), the cross-correlation term \(\langle (\lambda -\bar{\lambda })(\nu (\beta )-\bar{\nu }(\beta ))\rangle\) was neglected to derive the mean-field entropy; this term yields the correction:

$$\delta {S}_{\beta }^{MF} = \, \frac{\beta }{{\rm{log}}2}\langle (\lambda -\bar{\lambda })(\nu (\beta )-\bar{\nu }(\beta ))\rangle \\ = \, \frac{\beta }{{\rm{log}}2}\langle (\lambda -\bar{\lambda })\nu (\beta )\rangle \\ = \, \frac{\beta }{{\rm{log}}2}\frac{{e}^{-\beta \bar{k}}}{{Z}_{\beta }}\mathop{\sum }\limits_{i=2}^{N}{\delta }_{i}{e}^{-{\delta }_{i}\beta },$$
(15)

where \({\delta }_{i}={\lambda }_{i}-\bar{\lambda }\) and \(\bar{\lambda }=\bar{k}\), as shown in the previous section.

According to Eq. (15), at large propagation times β, the correction term decays exponentially with exponent \(\bar{k}\). Consequently, the precision of the mean-field approximation is higher when long-range interactions between the nodes are under consideration. Similar results about the precision of the mean-field approximation have been previously obtained for density matrices obtained from random walk dynamics19.

Multiscale derivations

For small scales the partition function can be written as \({Z}_{\beta }=\,{\text{Tr}}\,\left({e}^{-\beta \hat{{\bf{L}}}}\right)\approx \,{\text{Tr}}\,\left(\hat{{\bf{I}}}\right)-\beta \,{\text{Tr}}\,\left(\hat{{\bf{L}}}\right)=N-2\beta m\) and the density matrix follows \({\hat{{\boldsymbol{\rho }}}}_{\beta }=\frac{1}{{Z}_{\beta }}{e}^{-\beta \hat{{\bf{L}}}}\approx \frac{1}{{Z}_{\beta }}(\hat{{\bf{I}}}-\beta \hat{{\bf{L}}})\).

In the limit of vanishing propagation time, β → 0, it can be shown that the density matrix is \({\hat{{\boldsymbol{\rho }}}}_{0}=\hat{{\bf{I}}}/N\) and the Von Neumann entropy depends only on the network size, \({S}_{0}=-\mathop{\sum }\limits_{i=1}^{N}1/N{{\rm{log}}}_{2}1/N={{\rm{log}}}_{2}N\).

Assume the size of the original network G is N. Then the size of the perturbed network after the removal of a node, \({G}_{x}^{\prime}\) (see Fig. 1), is N − 1, and the size of the star network corresponding to the detached node, δGx, is kx + 1, where kx is its degree. Therefore, the entanglement at β → 0 reads:

$${M}_{0}(x) = \, [{{\rm{log}}}_{2}(N-1)+{{\rm{log}}}_{2}({k}_{x}+1)]-{{\rm{log}}}_{2}(N)\\ = \, {{\rm{log}}}_{2}\left(\frac{N-1}{N}\right)+{{\rm{log}}}_{2}({k}_{x}+1)\\ \approx \,{{\rm{log}}}_{2}({k}_{x}+1)$$
(16)

which is determined solely by the degree of the removed node. Since the logarithm is monotonic, this proves that entanglement centrality and degree centrality coincide for very small β.

Note that, for a network with C connected components, the Laplacian matrix has exactly C zero eigenvalues, while all other eigenvalues are greater than zero. Therefore, the partition function can generally be rewritten as \({Z}_{\beta }=C+\mathop{\sum }\limits_{i=C+1}^{N}{e}^{-\beta {\lambda }_{i}}\) and approximated as Zβ ≈ C for large β. Also, Taylor expanding the logarithm of the partition function around this point, one finds \({\rm{log}}{Z}_{\beta }\approx \frac{{Z}_{\beta }-C}{{Z}_{\beta }}\). We insert this result into Eq. (14) to find the mean-field entropy at large β:

$${S}_{\beta }^{MF}\approx (\beta \frac{2m}{N-C}+1){{\rm{log}}}_{2}{Z}_{\beta },$$
(17)

which, in the case of N ≫ C, becomes \((\beta \bar{k}+1){{\rm{log}}}_{2}{Z}_{\beta }\) and can be approximated as:

$${S}_{\beta }^{MF}\approx \beta \bar{k}{{\rm{log}}}_{2}{Z}_{\beta },$$
(18)

since \(\beta \bar{k}\gg 1\). Also, in the limit β → ∞, the above equation becomes:

$$\mathop{\rm{lim}}\limits_{\beta \to \infty }{S}_{\beta }^{MF}\approx \beta \bar{k}{{\rm{log}}}_{2}C.$$
(19)

The star network corresponding to the removed node has only one connected component, Cx = 1. As \({{\rm{log}}}_{2}1=0\), the entropy of the star network follows \({S}_{\infty }^{MF}(\delta {G}_{x})=0\). Let the number of connected components in G and \(G^{\prime}\) be, respectively, C and \(C^{\prime}\), and their average degrees be indicated by \(\bar{k}\) and \(\bar{k}^{\prime}\). The entanglement in the limit of large β → ∞ follows:

$${M}_{\beta }(x)=\beta (\bar{k}^{\prime} {{\rm{log}}}_{2}C^{\prime} -\bar{k}{{\rm{log}}}_{2}C).$$
(20)

If the network is large, the removal of one node does not change its average degree dramatically, \(\bar{k}\approx \bar{k}^{\prime}\). Therefore, the entanglement reduces to:

$${M}_{\beta }(x)=\beta \bar{k}{{\rm{log}}}_{2}\frac{{C}_{x}^{\prime}}{C}.$$
(21)

Of course, in case the initial network consists of a single connected component (C = 1), we obtain:

$${M}_{\beta }(x)=\beta \bar{k}{{\rm{log}}}_{2}{C}_{x}^{\prime},$$
(22)

which is the case for all the synthetic networks considered in this work.
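In practice, Eq. (22) implies that, at large scales, ranking nodes only requires counting the connected components left by each tentative removal; a minimal sketch (our own helper name, assuming the original network is connected so that C = 1):

```python
import numpy as np
import networkx as nx

def large_scale_entanglement(G, x, beta):
    """Approximate entanglement at large beta via Eq. (22): beta * <k> * log2(C'_x)."""
    k_bar = 2 * G.number_of_edges() / G.number_of_nodes()   # average degree
    Gx = G.copy()
    Gx.remove_node(x)                                        # perturbed network G'_x
    Cx = nx.number_connected_components(Gx)                  # components left after the removal
    return beta * k_bar * np.log2(Cx)
```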

Finally, using the approximation \({S}_{\beta }^{MF}\approx (\beta \bar{k}+1){{\rm{log}}}_{2}{Z}_{\beta }\) obtained above, one can write the entanglement of node x as:

$${M}_{\beta }(x)\approx (\beta \bar{k}+1){{\rm{log}}}_{2}\frac{{Z}_{\beta }^{\prime}(x)}{{Z}_{\beta }}$$
(23)

at the meso-scale. Consequently, the collective entanglement (see Fig. 1) follows:

$${\bar{M}}_{\beta }\approx \frac{\beta \bar{k}+1}{N}\mathop{\sum }\limits_{x=1}^{N}{{\rm{log}}}_{2}\frac{{Z}_{\beta }^{\prime}(x)}{{Z}_{\beta }}.$$
(24)

Taylor expanding each term in the summation around its minimum (\({Z}_{\beta }^{\prime}(x)={Z}_{\beta }\)) and keeping only the first-order term, we obtain:

$${\bar{M}}_{\beta } \approx \, \frac{\beta \bar{k}+1}{N{\rm{log}}2}\mathop{\sum }\limits_{x=1}^{N}\left[0+\frac{{Z}_{\beta }^{\prime}(x)-{Z}_{\beta }}{{Z}_{\beta }}\right]\\ = \, \frac{\beta \bar{k}+1}{N{Z}_{\beta }{\rm{log}}2}\mathop{\sum }\limits_{x=1}^{N}{{\Delta }}{Z}_{\beta }(x),$$
(25)

where \({{\Delta }}{Z}_{\beta }(x)={Z}_{\beta }^{\prime}(x)-{Z}_{\beta }\). As \({\bar{M}}_{\beta }\) nears its minimum, higher precision of the above linearization is expected. The scale at which the collective entanglement is at its minimum defines βc. Finally, the entanglement centrality of node x at βc follows:

$${M}_{{\beta }_{c}}(x)=\frac{{\beta }_{c}\bar{k}+1}{N{Z}_{{\beta }_{c}}{\rm{log}}2}{{\Delta }}{Z}_{{\beta }_{c}}(x).$$
(26)

Accordingly, βc appears to be a suitable characteristic scale for the disintegration of networks (see Fig. 6 for more information), due to its importance for the transport properties of the network. However, its value cannot be calculated analytically, and its numerical estimation therefore adds to the computational cost of the algorithm. Instead, we use the inverse of the second eigenvalue of the Laplacian, which defines the diffusion time, to study the small, middle, and large temporal scales. More specifically, the size of the second information stream is proportional to \({e}^{-\beta {\lambda }_{2}}\), where λ2 is the second eigenvalue. The small, middle, and large scales correspond to the values of β at which, respectively, \({e}^{-\beta {\lambda }_{2}}=0.9\), \({e}^{-\beta {\lambda }_{2}}=0.33\), and \({e}^{-\beta {\lambda }_{2}}=0.01\). At the large scale, the assumptions leading to Eq. (26) are valid and, similarly, the entanglement is expected to be sensitive to information transport.
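Both the numerical search for βc and the λ2-based choice of scales are simple to express in code; a sketch under the assumptions above (a connected network, a log-spaced grid of candidate scales, and the entanglement routine sketched earlier; all helper names are ours):

```python
import numpy as np
import networkx as nx

def find_beta_c(G, betas=np.logspace(-2, 2, 30)):
    """Locate beta_c as the minimizer of the collective (average) entanglement on a log grid.

    Expensive: every grid point requires the entanglement of all nodes, so this is
    intended for small networks only.
    """
    collective = [np.mean([entanglement(G, x, b) for x in G.nodes]) for b in betas]
    return betas[int(np.argmin(collective))]

def beta_scales(G):
    """Small, middle, and large scales from e^{-beta*lambda_2} = 0.9, 0.33, 0.01 (connected G)."""
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    lam2 = lam[1]                        # second-smallest Laplacian eigenvalue
    return {name: -np.log(target) / lam2
            for name, target in (("small", 0.9), ("mid", 0.33), ("large", 0.01))}
```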

Fig. 6: Collective entanglement.

Collective network entanglement \({\bar{M}}_{\beta }\) as a function of the propagation time β for (a) a regular lattice, (c) an Erdős–Rényi, and (e) a Barabási–Albert network. The green dashed line corresponds to the minimum collective entanglement and defines the time-scale βc. Centrality of each node according to different measures (entanglement for β = βc, betweenness, closeness, and PageRank) for (b) the regular lattice, (d) the Erdős–Rényi, and (f) the Barabási–Albert network. Comparing centrality measures on the same network, it is evident that network entanglement is not trivially related to existing centrality measures.

Algorithmic complexity

The algorithmic complexity of calculating the Von Neumann entropy is the same as that of solving the eigenvalue problem, lying between \({\mathcal{O}}({N}^{2})\) and \({\mathcal{O}}({N}^{3})\), depending on the sparseness of the adjacency matrix. Since the entropy must be recalculated after detaching every node, the complexity of the entanglement algorithm is one order higher than that of the entropy. Thus, the entanglement centrality approach is computationally costly for large networks, compared to measures such as degree and betweenness that directly analyze the structure rather than the perturbations of the dynamics within it.

Our theoretical derivations and numerical results indicate that entanglement centrality is highly effective at relatively large temporal scales β. This fact can be exploited to provide an approximated version of entanglement centrality, which is more scalable than the exact version. Note that the eigenvalues of the propagator, defining the size of the information streams, are exponential functions of the eigenvalues of the Laplacian matrix, \({e}^{-\beta {\lambda }_{i}}\), i = 1, 2, ..., N. At large β, we assume that all these exponentials have decayed, except those corresponding to the smallest eigenvalue λ1 = 0 and the second-smallest eigenvalue λ2. Therefore, instead of computing the full spectrum, one can approximate the entropy using the first two eigenvalues of the Laplacian matrix. We show that this approximation not only reduces the computational complexity of the Von Neumann entropy by roughly one order, but also can be used to calculate the entanglement centrality accurately (see Fig. 7).
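A possible realization of this approximation uses a sparse eigensolver to extract only the two smallest Laplacian eigenvalues (a sketch of the idea, with our own function name, not the authors' exact implementation):

```python
import numpy as np
import networkx as nx
from scipy.sparse.linalg import eigsh

def approx_entropy(G, beta):
    """Approximate Von Neumann entropy at large beta, keeping only the two smallest eigenvalues."""
    L = nx.laplacian_matrix(G).astype(float)
    lam = eigsh(L, k=2, which="SA", return_eigenvectors=False)  # two smallest eigenvalues of L
    w = np.exp(-beta * np.sort(lam))   # streams from larger eigenvalues are assumed to have decayed
    nu = w / w.sum()
    nu = nu[nu > 0]
    return float(-np.sum(nu * np.log2(nu)))
```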

Fig. 7: Algorithmic complexity.

The algorithmic complexity of entropy computations is compared between the (a) exact and (b) approximated entropy. Red dots indicate the average computation time over five realizations of Barabási–Albert networks with different numbers of nodes, with attachment parameter m = 3. Note that the error bars, defined as the standard deviation, corresponding to the exact entropy plot are negligible. The dashed lines show the regression with polynomials of orders 1, 2, and 3. Below, four panels show the high Spearman correlation between the rankings provided by the exact and approximated versions of the entanglement for (c) Barabási–Albert, (d) Erdős–Rényi, (e) stochastic block model, and (f) Watts–Strogatz networks. Each plot shows one realization of one of the members of the synthetic network ensemble used throughout this paper. In these plots, each dot corresponds to a node, where the y and x axes indicate the approximated and exact entanglement of the node, respectively. The Spearman correlations between the exact and approximate entanglement of nodes are reported on top of the plots.

Centrality measures

A variety of centrality measures have been adopted in the literature to find the relative importance of nodes for network integrity. We used a set of centrality measures to compare against entanglement, including degree, clustering32, eigenvector33, closeness34, Katz35, betweenness, load36, harmonic37, PageRank, subgraph38, current flow closeness39, and CoreHD16.