Unraveling the effects of multiscale network entanglement on disintegration of empirical systems

Complex systems are large collections of entities that organize themselves into non-trivial structures, which can be represented by networks. A key emergent property of such systems is robustness against random failures or targeted attacks, i.e., the capacity of a network to maintain its integrity under the removal of nodes or links. Here, we introduce network entanglement to study network robustness through a multi-scale lens, encoded by the time required to diffuse information through the system. Our measure's foundation lies in a recently proposed framework, manifestly inspired by quantum statistical physics, where networks are interpreted as collections of entangled units and can be characterized by Gibbsian-like density matrices. We show that at the smallest temporal scales entanglement reduces to node degree, whereas at the largest scales it measures the role played by each node in network integrity. At the meso-scale, entanglement incorporates information beyond the structure, such as the system's transport properties. As an application, we show that the dismantling of empirical social, biological and transportation systems unveils the existence of an optimal temporal scale driving the network to disintegration. Our results open the door to novel multi-scale analyses of the network contraction process and its impact on dynamical processes.

and network dismantling remains open. The aim of this article is to provide a novel framework for studying the disintegration process, emphasizing two points seemingly missing in the literature.
First, most centrality measures rely on network descriptors such as degree or shortest paths. Evidently, the information content of a network as a whole cannot be fully captured by these proxies.
Second, the importance of network integrity, and of maintaining it under damage, lies in sustaining node-node communications. Thus, understanding the information exchange among the nodes beyond shortest-path communication, and how it is affected by the disintegration process, requires a multi-scale framework, e.g., to differentiate between short- and long-range signalling between the nodes, not necessarily passing through the shortest paths captured by the betweenness descriptor. Therefore, the main research question of our work is not limited to defining a novel centrality measure and comparing its impact on network robustness. Instead, we are interested in better understanding whether operators such as the network density matrix, inspired by quantum statistical physics and information theory 16-18, are able to capture the main features of communication flows, beyond shortest paths, and to exploit them to better characterize a system's resilience to targeted attacks.
To this aim, we propose network entanglement, described by a Gibbsian-like density matrix 16 derived from the propagator of diffusion dynamics, with a tunable parameter β encoding the propagation time and playing the role of a multi-scale lens. In the following, we show the properties of our measure at the micro-, meso- and macro-scale, while demonstrating the existence of an information-theoretic optimal scale, β_c, at which a node's impact is determined by its role in the transport properties of the system. At this scale, we study the disintegration of a range of synthetic networks as well as real-world social, biological and transportation networks, to show that the resulting dismantling is always comparable with that obtained from other approaches across different scenarios.

Results
Theoretical grounds. The information content of complex networks cannot be fully captured by means of traditional descriptors such as the degree distribution and the diameter. For this reason, a variety of tools and methods have been introduced with roots in statistical physics and information theory 19 .
Recently, it has been shown that networks can be viewed as collections of entangled entities, represented by a grounded density matrix resembling the Gibbs state 16,20 , which has been successfully used to analyse a range of empirical networks, from transportation systems 18 to the human microbiome 16 and the brain 21 .
The Gibbsian density matrix of a network G with N nodes, represented by an adjacency matrix A (A_ij = 1 if nodes i and j are connected, 0 otherwise), has been originally proposed 16 as the exponential function of the combinatorial Laplacian matrix L = D − A, where D is the diagonal matrix defined by $D_{ii} = k_i$ and $k_i = \sum_j A_{ij}$ denotes the degree of the i-th node:

$$\rho_\beta = \frac{e^{-\beta L}}{Z_\beta}, \qquad (1)$$

i.e., the ratio between the propagator of diffusion dynamics on top of the network, with β encoding the diffusion time, and its trace, the partition function $Z_\beta = \mathrm{Tr}\, e^{-\beta L}$, which plays an important role in the transport properties of networks 18 . Using Eq. 1, the Von Neumann entropy can be obtained as

$$S_\beta = -\mathrm{Tr}\left(\rho_\beta \log_2 \rho_\beta\right). \qquad (2)$$

Recently, a mean-field approximation of the Von Neumann entropy has been introduced to simplify the many-term summation and allow for analytical derivations 18 . However, that approximation is limited to the case of random walk dynamics and cannot be used for the purpose of this article. Consequently, here we derive a mean-field entropy (See Methods) that is valid for the case of continuous diffusion:

$$S^{MF}_\beta = \log_2 Z_\beta + \frac{2m\beta \log_2 e}{Z_\beta}\, e^{-\frac{2m\beta}{N - C}}, \qquad (3)$$

where C is the number of disconnected components of the network and m is the overall number of links. In most networks, the number of nodes is much larger than the number of disconnected components, N ≫ C, and therefore Eq. 3 can be approximated as

$$S^{MF}_\beta \approx \log_2 Z_\beta + \frac{N \bar{k} \beta \log_2 e}{Z_\beta}\, e^{-\beta \bar{k}}, \qquad (4)$$

where $\bar{k}$ is the mean degree of nodes. Also, in the case of large β, the mean-field entropy reduces to $S^{MF}_\infty = \log_2 C$.

Defining network entanglement. To quantify the importance of a single node x in the interconnected system, we first detach it from the network G together with its incident edges. The removed node and its incident edges form a star network, indicated by δG_x, of size k_x + 1, where k_x is the degree of node x. The remainder of G shapes the perturbed network G_x, which has N − 1 nodes (See Fig. 1).
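As an illustrative sketch (not part of the original derivations), the density matrix of Eq. 1 and the Von Neumann entropy of Eq. 2 can be computed directly from the Laplacian spectrum; the toy adjacency matrix below is a hypothetical example:

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected network."""
    return np.diag(A.sum(axis=1)) - A

def density_matrix(A, beta):
    """Gibbsian density matrix rho_beta = exp(-beta*L)/Z_beta and Z_beta."""
    lam, V = np.linalg.eigh(laplacian(A))      # L is symmetric
    expL = (V * np.exp(-beta * lam)) @ V.T     # matrix exponential e^{-beta L}
    Z = np.trace(expL)                         # partition function
    return expL / Z, Z

def von_neumann_entropy(A, beta):
    """S_beta = -Tr(rho_beta log2 rho_beta), from the Laplacian spectrum."""
    lam = np.linalg.eigvalsh(laplacian(A))
    w = np.exp(-beta * lam)
    p = w / w.sum()                            # eigenvalues of rho_beta
    p = p[p > 1e-15]                           # 0*log(0) = 0 convention
    return -np.sum(p * np.log2(p))

# Hypothetical toy network: a triangle (0-1-2) with a pendant node 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rho, Z = density_matrix(A, beta=1.0)
print(np.trace(rho))                  # unit trace, as for any density matrix
print(von_neumann_entropy(A, 0.0))    # log2(N) = 2 at beta -> 0
```

At β → 0 the entropy depends only on the network size, while at large β it is governed by the number of connected components, in line with the limits discussed in the text.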
We define the entanglement between each node x and the network as

$$M_\beta(x) = S_\beta(G_x) + S_\beta(\delta G_x) - S_\beta(G). \qquad (5)$$

By tuning the propagation time β, the entanglement between the nodes and the network is expected to change. Using Eqs. 2 and 5, we show (See Methods) that in the extreme cases the entanglement centrality follows:
• β → 0 : M_β(x) ≈ log₂(k_x + 1)
• β → ∞ : M_β(x) ∝ log₂(C_x/C)
Figure 1: Detachment process. The process of detaching node x and its incident edges from the original network G is plotted (top). The entanglement of each node is shown as a function of Markov time β, for an arbitrary network (bottom). Each trajectory is colored according to the degree of the detached node, to highlight that there is no trivial relationship between entanglement and degree across scales. The collective entanglement $\bar{M}_\beta$, defined as the average entanglement of nodes, is shown by orange dashes.
where k_x is the degree of the removed node and C_x is the number of disconnected components in the perturbed network G_x. Clearly, if the entanglement is used as a centrality measure, it coincides with the degree centrality at small scales. It is worth remarking here that a network has the highest integrity if it has only one connected component, i.e., for every pair of nodes there is at least one link or sequence of links (path) that connects them. Therefore, at the large scale, entanglement centrality evaluates the direct role of nodes in keeping the integrity of the network, by considering the number of disconnected components generated as a consequence of their detachment.
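To make the detachment procedure concrete, a minimal numerical sketch follows; it assumes the entropy-difference form M_β(x) = S_β(G_x) + S_β(δG_x) − S_β(G), an assumption chosen to be consistent with the two limits stated above (the toy network is hypothetical):

```python
import numpy as np

def entropy(A, beta):
    """Von Neumann entropy from the Laplacian spectrum of adjacency matrix A."""
    lam = np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A)
    w = np.exp(-beta * lam)
    p = w / w.sum()
    p = p[p > 1e-15]                       # 0*log(0) = 0 convention
    return -np.sum(p * np.log2(p))

def star(k):
    """Adjacency matrix of a star with k leaves, i.e. k + 1 nodes."""
    S = np.zeros((k + 1, k + 1))
    S[0, 1:] = S[1:, 0] = 1.0
    return S

def entanglement(A, x, beta):
    """Assumed form: M_beta(x) = S(G_x) + S(dG_x) - S(G) after detaching x."""
    k = int(A[x].sum())
    keep = [i for i in range(len(A)) if i != x]
    Ax = A[np.ix_(keep, keep)]             # perturbed network G_x
    return entropy(Ax, beta) + entropy(star(k), beta) - entropy(A, beta)

# Hypothetical toy network: triangle 0-1-2 with pendant node 3 attached to 2
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
print(entanglement(A, 3, 1e-9))   # small beta: tracks log2(k_x + 1) up to a size correction
print(entanglement(A, 2, 50.0))   # large beta: detaching node 2 leaves two components
```

For small β the value approaches log₂(N−1) + log₂(k_x+1) − log₂ N, recovering the degree-based limit for large N; for large β it counts the components created by the detachment.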
The intermediate scales exhibit even richer information. To better characterize it, we define the collective entanglement as the average entanglement of all the nodes, $\bar{M}_\beta = \frac{1}{N}\sum_{x=1}^{N} M_\beta(x)$ (See Fig. 1).
Let us assume that this collective variable reaches its minimum at some optimal scale β_c, which is still unknown. We analytically show (See Methods) that the centrality of any node x, near β_c, is proportional to the change in the partition function Z_β caused by its detachment,

$$M_{\beta_c}(x) \propto \Delta Z_{\beta_c}(x), \qquad (6)$$

where ∆Z_β(x) = Z_β(x) − Z_β, and Z_β(x) is the partition function of the system after the detachment of node x.
The partition function Z_β has been recently related to the dynamical trapping of information flow within a system's topology, to assess the transport properties of complex networks 18 . Therefore, at this scale, a node is more central if its removal hinders diffusion within the network more effectively than the removal of other nodes.
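Under the same entropy-difference assumption for M_β(x) (a sketch, not the authors' code), the optimal scale β_c can be located numerically by scanning the collective entanglement over a grid of Markov times and keeping the minimiser:

```python
import numpy as np

def lap_spectrum(A):
    """Eigenvalues of the combinatorial Laplacian L = D - A."""
    return np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A)

def entropy_from_spectrum(lam, beta):
    w = np.exp(-beta * lam)
    p = w / w.sum()
    p = p[p > 1e-15]                      # 0*log(0) = 0 convention
    return -np.sum(p * np.log2(p))

def detach(A, x):
    """Split G into the perturbed network G_x and the star dG_x."""
    keep = [i for i in range(len(A)) if i != x]
    k = int(A[x].sum())
    S = np.zeros((k + 1, k + 1))
    S[0, 1:] = S[1:, 0] = 1.0
    return A[np.ix_(keep, keep)], S

def collective_entanglement(A, beta):
    """Average of M_beta(x) = S(G_x) + S(dG_x) - S(G) over all nodes."""
    SG = entropy_from_spectrum(lap_spectrum(A), beta)
    M = [entropy_from_spectrum(lap_spectrum(Ax), beta)
         + entropy_from_spectrum(lap_spectrum(Sx), beta) - SG
         for Ax, Sx in (detach(A, x) for x in range(len(A)))]
    return float(np.mean(M))

# Hypothetical toy network: triangle with a pendant node
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
betas = np.logspace(-2, 2, 100)                  # grid of Markov times
Mbar = np.array([collective_entanglement(A, b) for b in betas])
beta_c = betas[Mbar.argmin()]                    # scale minimising the collective entanglement
print(beta_c, Mbar.min())
```

For large empirical networks, a coarse grid followed by local refinement around the minimum keeps the cost of the repeated spectral decompositions manageable.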
In the following, we study the dismantling process at the temporal scale β_c. Yet, it is worth mentioning that entanglement centrality provides a meaningful measure for other choices of β as well, and it is evident that network entanglement is not trivially related to the existing centrality measures discussed so far. We computed the collective entanglement (See Fig. 1) for each network to find its β_c, and used it to obtain the entanglement centrality of each node, according to Eq. 6. The procedure is schematically represented in Fig. 2.
The results clearly show that the entanglement centrality performs as effectively as, or faster than, the other measures considered here in dismantling the network up to its critical fraction, the point at which the network starts to break into disconnected components (See Fig. 3). Remarkably, for random geometric and stochastic block model networks, the disintegration happens significantly faster when using network entanglement. In the case of Barabási-Albert networks, after the critical fraction, betweenness and entanglement centrality at β_c act significantly more slowly than degree and PageRank centrality. As expected, all the real-world networks considered here show high robustness against random node removals, implying their ability to maintain their function under random failures. However, adopting the right targeted attack strategy can effectively disintegrate them (See Fig. 4). Although degree and PageRank centrality perform better than other classical measures in dismantling the transportation networks, such as the NYC metro and US airport networks, they are outperformed by betweenness and closeness centrality. This result highlights the lack of a universal attack strategy that can be considered always valid, regardless of network features. Interestingly, our analysis indicates that, for all the considered empirical systems, the entanglement centrality provides an effective dismantling strategy (See Fig. 4), comparable with the best measures and outperforming the others, thus providing a promising candidate for such a universal attack strategy.
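The static attack protocol used in the comparisons can be sketched as follows: rank the nodes once by any score, remove them in that order, and track the size of the largest connected component (degree scores and a hypothetical star network are used here purely for illustration):

```python
import numpy as np

def largest_cc(A):
    """Size of the largest connected component, via BFS on the adjacency matrix."""
    n = len(A)
    seen = np.zeros(n, bool)
    best = 0
    for s in range(n):
        if not seen[s]:
            stack, size = [s], 0
            while stack:
                u = stack.pop()
                if not seen[u]:
                    seen[u] = True
                    size += 1
                    stack.extend(np.flatnonzero(A[u]).tolist())
            best = max(best, size)
    return best

def static_attack(A, score):
    """Static (non-adaptive) attack: rank nodes once by `score`, remove them in
    decreasing order, and record the largest-component size after each removal."""
    order = np.argsort(-score)
    alive = np.ones(len(A), bool)
    curve = []
    for x in order:
        alive[x] = False
        idx = np.flatnonzero(alive)
        curve.append(largest_cc(A[np.ix_(idx, idx)]) if idx.size else 0)
    return curve

# Hypothetical example: a star with 5 leaves; a degree attack removes the hub first
A = np.zeros((6, 6))
A[0, 1:] = A[1:, 0] = 1.0
print(static_attack(A, A.sum(axis=1)))   # -> [1, 1, 1, 1, 1, 0]
```

Any centrality (degree, betweenness, PageRank, or entanglement at β_c) can be plugged in as the `score` array, so the same harness reproduces the comparison across attack strategies.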

Discussion
Analyzing the robustness of complex systems is still a challenging task. Here, we have used network entanglement to study the robustness of networks through a multi-scale lens. At the smallest scales, β → 0, the degree of a node determines its entanglement with the network, and entanglement centrality coincides with the well-known degree centrality. At very large scales, β → ∞, entanglement centrality measures the direct role of each node in the integrity of the network, i.e., how many disconnected components will appear if the node is detached. Finally, we have shown that the collective entanglement, i.e., the average entanglement of all the nodes with the network, reaches its minimum at a specific choice of β = β_c. Interestingly, at this scale, we demonstrate that entanglement centrality is rather sensitive to the node's impact on the diffusion dynamics on top of the network, and not only on the structure. More specifically, according to our measure, a node is ranked higher if its detachment causes a larger increase in the partition function of the system. The partition function provides a proxy for dynamical trapping, an important transport property that indicates the tendency of a network to hinder the flow of information 18 : therefore, strategies can be designed to lower the partition function and, consequently, enhance the diffusive flow among nodes. Conversely, here, we target the nodes according to entanglement centrality, aiming for the maximum increase in the partition function that, consequently, hinders the transport properties.
Of course, the detachment of nodes during the disintegration process alters the topology and, as a consequence, the importance of the remaining nodes. For this reason, adaptive attack strategies, where the centrality of each node is re-calculated after each perturbation is applied, become interesting. However effective, they are computationally slow, especially in the case of large networks 15 . Thus, we adopt a static, i.e., non-adaptive, attack strategy in this work: the ranking of nodes according to each centrality measure is calculated only once, at the beginning of the disintegration process. Despite this apparent limitation, we show that network entanglement is still able to capture higher-order interactions that can be exploited to efficiently dismantle a network.
The analysis of both synthetic and real-world networks, where different attack strategies are compared to network entanglement at β_c, indicates that our measure performs as well as or faster than other measures in damaging the network up to its critical fraction, across a range of scenarios.
However, it becomes slower than some other measures after the critical fraction is reached, yet remains comparable to them. This result indicates that entanglement can be used to quickly disrupt the flow exchange, but cannot be used to disintegrate a system faster than more traditional approaches.
As mentioned before, the entanglement centrality at β c aims to disrupt the dynamics on top of the network, by hindering the diffusive flow. Therefore, a plausible interpretation of our numerical experiments is that disrupting the dynamics comes along with the dismantling of the structure, up to the critical fraction.
Overall, the presented framework opens the door for further investigation of the network contraction process, from a multi-scale perspective, and of its relation with the dynamics and transport properties of complex systems.

Methods
Mean-field entropy. A mean-field approximation of the network Von Neumann entropy has been recently suggested for random-walk-based density matrices 18 . Similarly, here we derive a mean-field entropy for the case of continuous diffusion. The eigenvalue spectrum of the Laplacian satisfies $\sum_{i=1}^{N} \lambda_i = \mathrm{Tr}\, L = 2m$, where m is the number of links in the network, assuming no self-loops exist.
At this step, it is worth noting that ρ_β and L can be eigen-decomposed in the same basis, with eigenvalues $e^{-\beta\lambda_i}/Z_\beta$ and $\lambda_i$, respectively. Furthermore, Eq. 2 can be rewritten as

$$S_\beta = \beta \log_2(e)\, \mathrm{Tr}\left(\rho_\beta L\right) + \log_2 Z_\beta,$$

where the trace in the first term can be written as the following summation:

$$\mathrm{Tr}\left(\rho_\beta L\right) = \sum_{i=1}^{N} \lambda_i \frac{e^{-\beta\lambda_i}}{Z_\beta} = \sum_{i=C+1}^{N} \lambda_i \frac{e^{-\beta\lambda_i}}{Z_\beta};$$

the last step is justified by the fact that λ_1 = ... = λ_C = 0 for a network with C connected components.
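The rewriting of Eq. 2 in terms of Tr(ρ_β L) and log₂ Z_β can be checked numerically; a sketch on a hypothetical toy network:

```python
import numpy as np

# Verify S_beta = beta*log2(e)*Tr(rho_beta L) + log2(Z_beta) on a toy network
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A
lam = np.linalg.eigvalsh(L)

beta = 0.7
w = np.exp(-beta * lam)
Z = w.sum()                                  # partition function
p = w / Z                                    # eigenvalues of rho_beta

S_direct = -np.sum(p * np.log2(p))           # -Tr(rho log2 rho)
S_split = beta * np.log2(np.e) * np.sum(p * lam) + np.log2(Z)
print(S_direct, S_split)                     # equal up to rounding
```

Note that the λ_i = 0 modes contribute nothing to Tr(ρ_β L), as the summation over i > C in the text makes explicit.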
It is worth mentioning that the isolated nodes are considered to be separate components and are included in C.
A mean-field approximation of the above summation can be obtained by neglecting the higher-order terms as follows: $\mathrm{Tr}(\rho_\beta L) \approx (N - C)\,\bar{\lambda}\,\bar{\rho}$. To increase the precision, the terms in the summation corresponding to λ_i = 0 must be excluded from the mean values of both sets of eigenvalues. Consequently, the mean value for the Laplacian matrix follows $\bar{\lambda} = \frac{2m}{N - C}$ and, for the density matrix, $\bar{\rho} = \frac{e^{-\beta\bar{\lambda}}}{Z_\beta}$. It follows that

$$\mathrm{Tr}\left(\rho_\beta L\right) \approx \frac{2m}{Z_\beta}\, e^{-\frac{2m\beta}{N - C}}, \qquad (15)$$

which, for a network with no isolated nodes and only one connected component (C = 1), and comparably large size N ≫ 1, reduces to $\mathrm{Tr}(\rho_\beta L) \approx \frac{N\bar{k}}{Z_\beta}\, e^{-\beta\bar{k}}$, where $\frac{2m}{N-1} \approx \frac{2m}{N} = \bar{k}$ is the average degree of nodes.
From here, it is straightforward to combine Eq. 2 and Eq. 15 to obtain the mean-field entropy $S^{MF}_\beta = \log_2 Z_\beta + \frac{N\bar{k}\beta \log_2 e}{Z_\beta}\, e^{-\beta\bar{k}}$, whereas, for networks with isolated nodes and disconnected components, the mean-field entropy reads $S^{MF}_\beta = \log_2 Z_\beta + \frac{2m\beta \log_2 e}{Z_\beta}\, e^{-\frac{2m\beta}{N - C}}$.

Multiscale derivations. For small scales, the partition function can be written as $Z_\beta = \mathrm{Tr}\, e^{-\beta L} \approx \mathrm{Tr}(I) - \beta\, \mathrm{Tr}(L) = N - 2\beta m$, and the density matrix follows $\rho_\beta \approx \frac{I - \beta L}{N - 2\beta m}$. If the propagation time goes to the zero limit, β → 0, it can be shown that the density matrix is ρ_0 = I/N and the Von Neumann entropy depends only on the network size, $S_0 = \log_2 N$. Assume the size of the original network G is N. Then the size of the perturbed network after the removal of a node, G_x (See Fig. 1), is N − 1, and the size of the star network corresponding to the detached node, δG_x, is k_x + 1. Therefore, the entanglement at β → 0 follows $M_0(x) = \log_2(N - 1) + \log_2(k_x + 1) - \log_2 N \approx \log_2(k_x + 1)$, proportional to the degree of the removed node. This proves that entanglement centrality and degree centrality coincide for very small β.
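The small-β expansion Z_β ≈ N − 2βm can be verified numerically; the toy network below is a hypothetical example:

```python
import numpy as np

# Toy network: a triangle with a pendant node (4 nodes, 4 links)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A
N, m = len(A), int(A.sum() / 2)

beta = 1e-3
Z = np.sum(np.exp(-beta * np.linalg.eigvalsh(L)))   # exact partition function
print(Z, N - 2 * beta * m)                          # agree to O(beta^2)
```

The discrepancy is of order β²·Tr(L²)/2, which is negligible at this scale.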
Note that, for a network with C connected components, the Laplacian matrix has exactly C zero eigenvalues, while all other eigenvalues are greater than zero. Therefore, the partition function can generally be rewritten as $Z_\beta = C + \sum_{i=C+1}^{N} e^{-\beta\lambda_i}$ and approximated as Z_β ≈ C for large β. Also, Taylor expanding the logarithm of the partition function around this point, one can find $\log_2 Z_\beta \approx \log_2 C + \frac{\log_2 e}{C}\left(Z_\beta - C\right)$. We put this result into Eq. 3 to find the mean-field entropy at large β, which, in the case of N ≫ C, becomes $(\beta\bar{k} + 1)\log_2 Z_\beta$ and can be approximated as $\beta\bar{k}\log_2 Z_\beta$, since $\beta\bar{k} \gg 1$. Also, substituting the limit Z_β ≈ C, the above equation becomes $\beta\bar{k}\log_2 C$. The star network corresponding to the removed node has only one connected component; as log₂ 1 = 0, the entropy follows $S^{MF}_\infty(\delta G_x) = 0$ for the star network. Let the numbers of connected components in G and G_x be, respectively, C and C_x, and their average degrees be indicated by $\bar{k}$ and $\bar{k}_x$. The entanglement in the limit of large β then follows $M_\beta(x) \approx \beta\left(\bar{k}_x\log_2 C_x - \bar{k}\log_2 C\right)$. In case the network is large, the removal of one node does not change its average degree dramatically, $\bar{k}_x \approx \bar{k}$; therefore, the entanglement can be reduced to $M_\beta(x) \approx \beta\bar{k}\log_2\frac{C_x}{C}$. Of course, in case the initial network is completely connected (C = 1), we obtain $M_\beta(x) \approx \beta\bar{k}\log_2 C_x$, which is the case for all the synthetic networks considered in this work.
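The spectral facts used above (the Laplacian of a network with C components has exactly C zero eigenvalues, and Z_β → C at large β) can be checked numerically on a hypothetical example:

```python
import numpy as np

# Hypothetical example: two triangles plus one isolated node -> C = 3
A = np.zeros((7, 7))
tri = np.ones((3, 3)) - np.eye(3)
A[:3, :3] = tri
A[3:6, 3:6] = tri
L = np.diag(A.sum(axis=1)) - A

lam = np.linalg.eigvalsh(L)
n_zero = int(np.sum(np.abs(lam) < 1e-9))   # multiplicity of the zero eigenvalue
Z_large = np.sum(np.exp(-50.0 * lam))      # partition function at large beta
print(n_zero, Z_large)                     # 3 components, Z_beta -> C = 3
```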
Finally, using Eq. 5, one can write the entanglement of node x at the meso-scale in terms of the partition functions of the original and perturbed networks. Consequently, the collective entanglement (See Fig. 1) follows $\bar{M}_\beta = \frac{1}{N}\sum_{x=1}^{N} M_\beta(x)$. Taylor expanding each term in the summation around its minimum (Z_β(x) = Z_β) and keeping only the first-order term, we obtain $\bar{M}_\beta \approx \frac{\log_2 e}{N Z_\beta}\sum_{x=1}^{N} \Delta Z_\beta(x)$, where ∆Z_β(x) = Z_β(x) − Z_β. As $\bar{M}_\beta$ nears its minimum, higher precision of the above linearization is expected. The scale at which the collective entanglement is at its minimum defines β_c. Finally, the entanglement centrality of node x at β_c follows $M_{\beta_c}(x) \propto \Delta Z_{\beta_c}(x)$.

Centrality measures. A variety of centrality measures have been adopted in the literature to find the relative importance of the nodes for network integrity. In this section, we briefly review those used throughout this paper, including degree, betweenness, closeness, eigenvector, PageRank and clustering centrality.
Degree Centrality. In an undirected network, the degree of each node is the number of its connections. Consequently, degree centrality considers a node with a higher number of connections more influential and, therefore, more important. Let A be the adjacency matrix, where A_ij = 1 encodes a connection between nodes i and j, while A_ij = 0 shows that they are not connected. Thus, the degree of node i is given by $k_i = \sum_j A_{ij}$. Closeness Centrality. It measures the importance of a node based on its average distance from the others, determined by the shortest-path length. The shortest path between two nodes i and j is a path, i.e., a sequence of links connecting them, that has the minimum number of links. Let the average length of the shortest paths connecting node i to all the other nodes of the network be g_i; the closeness centrality of node i is given by c_i = 1/g_i, indicating how close the node is to the other nodes on average.
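A minimal implementation of closeness centrality via breadth-first search, assuming a connected network (the path graph is a hypothetical example):

```python
import numpy as np
from collections import deque

def closeness(A):
    """Closeness c_i = 1/g_i, with g_i the mean shortest-path length from node i,
    computed by breadth-first search; assumes a connected network."""
    n = len(A)
    c = np.zeros(n)
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        c[s] = (n - 1) / dist[dist > 0].sum()
    return c

# Hypothetical example: path 0-1-2; the middle node is closest to the rest
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(closeness(A))   # -> [2/3, 1, 2/3]
```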
Betweenness Centrality. According to betweenness centrality, a node's importance is determined by the number of shortest paths that pass through it, connecting other nodes. In other words, assuming the shortest path to be the dominant pathway of information flow between the nodes, a node with high betweenness centrality is fundamental for node-node communications.
Eigenvector Centrality. This centrality measure assesses the importance of a node by the importance of its neighbors. Let e_i be the eigenvector centrality of node i, which depends on the sum of the eigenvector centralities of its neighbors, $e_i = \frac{1}{\alpha}\sum_j A_{ij} e_j$, where α is a constant.
Interestingly, it leads to an eigenvalue problem A e = α e, for which the largest eigenvalue is considered to ensure the positivity of the components of the eigenvector.
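The eigenvalue problem can be solved by power iteration; a sketch (the toy network is hypothetical, and convergence requires a strictly dominant leading eigenvalue, which holds for connected, non-bipartite networks):

```python
import numpy as np

def eigenvector_centrality(A, iters=200):
    """Leading eigenvector of A via power iteration; for a connected,
    non-bipartite network the iteration converges to a positive vector."""
    v = np.ones(len(A))
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

# Hypothetical toy network: triangle 0-1-2 with pendant node 3 attached to 2
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
e = eigenvector_centrality(A)
print(e)   # the best-connected node (2) scores highest
```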
Page Rank Centrality. Originally, this measure was designed to investigate the World Wide Web. It is based on non-absorbing random walks, governed by the Google matrix, on top of networks. According to this measure, the centrality of each node is proportional to the probability that the random walker visits it 32 . Clustering Centrality. Clustering centrality is based upon the local clustering coefficient of nodes, which measures how densely the neighbors of a node are connected 23 . More specifically, the clustering coefficient of each node is proportional to the number of triads it forms with the other nodes.
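A sketch of PageRank by power iteration on the Google matrix, for an undirected toy network with no dangling nodes (hypothetical example; the damping factor d = 0.85 is the customary choice):

```python
import numpy as np

def pagerank(A, d=0.85, iters=100):
    """PageRank via power iteration on the Google matrix
    G = d*T + (1 - d)/n, with T the column-stochastic transition matrix.
    Assumes every node has at least one link (no dangling nodes)."""
    n = len(A)
    T = A / A.sum(axis=0)            # column-normalise: T[i, j] = A[i, j]/k_j
    G = d * T + (1 - d) / n
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = G @ p                    # probability of visiting each node
    return p

# Hypothetical toy network: triangle 0-1-2 with pendant node 3 attached to 2
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
p = pagerank(A)
print(p)                             # the hub (node 2) is visited most often
```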
Contributions. AG performed the theoretical analysis, the numerical experiments and wrote the paper. MS performed the numerical experiments and the data analysis. JB performed part of the theoretical analysis and wrote the manuscript. MDD conceived and designed the study and wrote the manuscript.
Competing financial interests. The authors declare no competing financial interests.