Geometric renormalization of weighted networks

The geometric renormalization technique for complex networks has successfully revealed the multiscale self-similarity of real network topologies and can be applied to generate replicas at different length scales. In this letter, we extend the geometric renormalization framework to weighted networks, where the intensities of the interactions play a crucial role in their structural organization and function. Our findings demonstrate that weights in real networks exhibit multiscale self-similarity under a renormalization protocol that selects the connections with the maximum weight across increasingly longer length scales. We present a theory that elucidates this symmetry, and that sustains the selection of the maximum weight as a meaningful procedure. Based on our results, scaled-down replicas of weighted networks can be straightforwardly derived, facilitating the investigation of various size-dependent phenomena in downstream applications.

that preserve the weighted structure of the network in the flow. Here, we propose a theory for the renormalization of weighted networks that supports the selection of the maximum, or supremum, as an effective approximation to allocate weights in the renormalized layers of real networks. Our theory is sustained by the renormalizability of the WS^D model, which entails that the GRW transformation should be a rescaled p-norm on the set of weights to be renormalized. Alternatively, the GR technique was recently extended to weighted networks using an ad hoc approach that treats weights as currents or resistances in a parallel circuit, renormalizing by the sum of the weights or by the inverse of the sum of their inverses, respectively [21]. The two methods are recovered as particular limits of our theory.
To begin with, we provide evidence that self-similarity is a pervasive symmetry not only in the multiscale organization of real network topologies but also in the multiscale unfolding of their weights. To that end, we implement a GRW transformation, which requires the preliminary application of the GR technique to unweighted networks [14].
The GR technique operates on the geometric embedding of a network, as described in previous works [10,13], obtained by maximizing the likelihood that the network topology is generated by the geometric soft configuration model S^D [22]. In this model, nodes are assigned coordinates representing popularity and similarity dimensions, and the distances between them determine the probability of connection p_ij = 1/(1 + χ_ij^β), where χ_ij = d_ij/(µκ_iκ_j)^{1/D}. The parameter µ controls the average degree, and β > D controls the level of clustering and quantifies the coupling between the network topology and the geometry. The hidden degree κ_i of node i ∈ [1, N], equivalent to a radial coordinate in the hyperbolic plane in the purely geometric formulation of the model, named H^{D+1} [23], measures the popularity of the node, with higher values indicating a greater likelihood of connecting to other nodes. In D = 1, the similarity subspace is represented as a circle of radius R = N/(2π) with unit density. Each node i is assigned an angular coordinate θ_i on the circle, and angular distances d_ij = R∆θ_ij between pairs of nodes account for factors other than degrees that influence the tendency to form connections. Nodes closer in the similarity subspace have a higher likelihood of being connected. Hyperbolic embeddings of unweighted networks can be obtained using the Mercator mapping tool [13], which employs statistical inference techniques to identify the hidden degrees and angular coordinates while adjusting the parameters β and µ accordingly.
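For concreteness, the connection probability of the S^1 model can be evaluated directly from the embedding variables. The following sketch is illustrative only; the function name and parameter values are our own choices, not part of Mercator or of any reference implementation.

```python
import numpy as np

def connection_probability(theta_i, theta_j, kappa_i, kappa_j, N, beta, mu, D=1):
    """S^D connection probability p_ij = 1/(1 + chi_ij^beta),
    with chi_ij = d_ij / (mu * kappa_i * kappa_j)^(1/D); here D = 1."""
    R = N / (2 * np.pi)                                   # similarity circle radius (unit density)
    dtheta = np.pi - abs(np.pi - abs(theta_i - theta_j))  # angular separation on the circle
    d_ij = R * dtheta                                     # distance in the similarity subspace
    chi = d_ij / (mu * kappa_i * kappa_j) ** (1.0 / D)
    return 1.0 / (1.0 + chi ** beta)
```

Nodes that are angularly close, or that have large hidden degrees, obtain a connection probability close to one, as the model prescribes.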
Once the geometric map of a real network is generated, GR divides the similarity circle into non-overlapping blocks of consecutive nodes of size r. These blocks are then coarse-grained, forming supernodes in a new layer. Each supernode is positioned within the angular region defined by the corresponding block, preserving the order of nodes. Any links between nodes in one supernode and nodes in another are renormalized into a single link connecting the two supernodes. This way, GR eliminates short-range couplings and produces a new network topology that is self-similar to the original except for the average degree, which increases in the renormalization flow [14].
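One GR step can be sketched as follows, assuming nodes are described by their angular coordinates and an adjacency matrix; names and data layout are ours, not from Refs. [10,13,14].

```python
import numpy as np

def coarse_grain_topology(theta, A, r=2):
    """One GR step: group blocks of r consecutive nodes along the similarity
    circle into supernodes, and connect two supernodes iff any pair of their
    constituent nodes is linked in A."""
    order = np.argsort(theta)                       # consecutive nodes on the circle
    blocks = [order[i:i + r] for i in range(0, len(order), r)]
    n = len(blocks)
    A_super = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if A[np.ix_(blocks[i], blocks[j])].any():   # any inter-block link survives
                A_super[i, j] = A_super[j, i] = 1
    return A_super, blocks
```

Links internal to a block disappear, which is how GR eliminates short-range couplings.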
The GRW technique involves assigning intensities to the links in the new layer based on the weights in the original layer, following a specific prescription. This transformation can be iterated starting from the original network at layer l = 0, with the iteration bounded to approximately l_max ∝ log(N) steps due to the finite size of real networks. As a result, a sequence of self-similar network layers l, each r times smaller than the previous one, is produced, forming a multiscale weighted shell of the original network. The process is visually depicted in Fig. S1 of the Supplemental Material (SM). The crux of GRW lies in how the weights are renormalized to ensure that their characteristics, such as the global and local weight distributions and the relationship between strength and degree, are preserved throughout the renormalization flow.
An effective and simple prescription, referred to as sup-GRW, is to define the weight of the link between two supernodes as the maximum, or supremum, of the weights of the existing links between their constituent nodes in the original layer. We applied the sup-GRW technique to 12 different real weighted networks from different domains including biology, transportation, knowledge, and social systems. The networks were processed using blocks of size r = 2. Additional details can be found in the SM.
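The sup-GRW prescription amounts to taking a maximum over the inter-block weight submatrix. A minimal sketch, with our own naming, assuming a symmetric weight matrix W with W[m, n] = 0 for missing links:

```python
import numpy as np

def sup_grw_step(theta, W, r=2):
    """One sup-GRW step: the weight between two supernodes is the supremum of
    the weights of existing links between their constituent nodes."""
    order = np.argsort(theta)
    blocks = [order[i:i + r] for i in range(0, len(order), r)]
    n = len(blocks)
    W_super = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w_sup = W[np.ix_(blocks[i], blocks[j])].max()  # supremum prescription
            W_super[i, j] = W_super[j, i] = w_sup
    return W_super
```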
The behavior of the weights in the renormalization flow of two of the networks is shown in Fig. 1(a)-(d), while Figs. S2-S5 present the corresponding results for the remaining networks. The strength-degree relations are shown in Fig. S5. The probability density functions (pdf) of weights and strengths in the different layers collapse once rescaled by the average weight and average strength, respectively, of the corresponding layer. Furthermore, the power-law relations between strength and degree also overlap once the degrees are rescaled by the average degree of the layer, as demonstrated in Figs. S4 and S5. To quantify the local heterogeneity of the weights, we measured their disparity around nodes as a function of the degree, as described in the Methods section of the SM. The results show, again, statistical invariance across layers.
Notice that, by construction, the average weight and the average strength in the sup-GRW layers grow with l. While this behavior does not provide fundamental information for characterizing the weighted structure of the network, it may still be interesting to understand how ⟨w⟩ and ⟨s⟩ depend on the scale of observation l. This is particularly relevant considering that weights in real networks are often expressed in real-world units. The corresponding results are presented in Figs. S6 and S7. Furthermore, the sup-GRW transformation exhibits a semigroup structure with respect to composition, similar to the behavior observed in GR for unweighted networks. This means that a certain number of iterations with a given coarse-graining factor is equivalent to a single transformation with a higher coarse-graining factor. The findings shown in Fig. S8 support this claim.
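The semigroup property of sup-GRW can be verified numerically: because the maximum over nested blocks equals the maximum over their union, two iterations with r = 2 coincide with a single iteration with r = 4. The following self-contained check (our own construction, with nodes already ordered along the circle) illustrates this:

```python
import numpy as np

def sup_step(W, r):
    """One sup-GRW step on a symmetric weight matrix whose rows are already
    ordered along the similarity circle (blocks of r consecutive indices)."""
    n = W.shape[0] // r
    W_super = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w = W[i * r:(i + 1) * r, j * r:(j + 1) * r].max()
            W_super[i, j] = W_super[j, i] = w
    return W_super

rng = np.random.default_rng(1)
W = rng.random((16, 16)) * (rng.random((16, 16)) < 0.3)   # sparse random weights
W = np.triu(W, 1)
W = W + W.T                                               # symmetric, zero diagonal

twice = sup_step(sup_step(W, 2), 2)   # two iterations with r = 2
once = sup_step(W, 4)                 # one iteration with r = 4
print("semigroup holds:", np.allclose(twice, once))       # prints "semigroup holds: True"
```

The identity is exact (no floating-point tolerance is actually needed), since taking maxima commutes with nesting the blocks.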
We also tested an alternative prescription, referred to as sum-GRW, where weights in the new layer are assigned by summing the weights of the existing links between the nodes in the supernodes, following the prescription described in Ref. [21]. While this strategy proves effective for many real networks, there are certain cases in which self-similarity is not maintained in the renormalization flow. When sum-GRW is applied, the global distribution of weights, the local heterogeneity of weights in nodes, and the relation between strength and degree become increasingly heterogeneous compared to the original graph. This is observed in the Openflights and the scientific collaboration networks, as illustrated in Fig. 1(e)-(h), and in Figs. S9 and S10 for the remaining networks.
The reported results are supported by a theoretical framework that clarifies the conditions under which each of the two weight assignment prescriptions, selecting the supremum of the weights between supernodes or their sum, yields good performance. Our theory is based on the WS^D model [15], which uses the S^D model to mimic the topology of real networks. In the WS^D model, weights are assigned to the connection between two connected nodes i and j as ω_ij = ε_ij ν σ_i σ_j / [(κ_i κ_j)^{1−α/D} d_ij^α]. Similar to k̄_i ∝ κ_i in the S^D model, the WS^D model ensures that the expected strength of node i, s̄_i, is proportional to the hidden strength σ_i, s̄_i ∝ σ_i. When α = 0, the weights are independent of the underlying geometry and primarily influenced by node degrees, while α = D implies that weights are maximally coupled to the underlying metric space with no direct contribution of the degrees. Finally, ε_ij is a random variable with mean equal to one, whose variance regulates the level of noise in the network. In the subsequent analysis, we assume the noiseless version of the model to simplify analytical calculations, which means ε_ij = 1 ∀(i, j).
To control the correlation between strength and degree and, consequently, adjust the strength distribution, we assume a deterministic relation between the hidden variables σ and κ of the form σ = aκ^η, yielding s̄(k) ∼ ak^η as observed in real complex networks. Working under this assumption, a valid GRW transformation should preserve the relation between strength and degree, and in particular the exponent η, meaning that the renormalized hidden degree and strength should satisfy σ' = a'(κ')^η (to simplify notation, we use primes to denote quantities in the renormalized layer). Using Eq. (1) and the GR equations for the topological model [14], this requirement leads to the following expression for the renormalized weights, ω'_ij = C (Σ_{(m,n)} ω_mn^φ)^{1/φ}, where the sum runs over the links between nodes within supernodes i and j (derivation in the SM). The parameter φ ≡ β/[D(η−1)+α] depends on both the weighted and unweighted structure of the network, and C = (ν'/ν)(a'/a)² r^{α/D}. In practice, however, we rescale weights by the average weight in each layer, rendering the constant C irrelevant.
According to the weighted model, for a network with a specific value of φ, the GRW transformation of weights in Eq. (2), denoted φ-GRW, preserves the exponent η that characterizes the relation between strength and degree. At the same time, since the distribution of hidden degrees is assumed to be preserved by GR, the distribution of hidden strengths and the distribution of weights are also preserved. This is valid as long as β > (γ − 1)/2. Otherwise, the power-law distribution of hidden degrees loses its self-similarity in the unweighted renormalization flow, and this breaks the self-similarity of the weights. Also, note that the φ-GRW transformation has a semigroup structure with respect to composition, regardless of the value of φ.
We validated the self-similarity of the φ-GRW transformation, including its semigroup property, in real and synthetic networks, Figs. S11-S12 and Figs. S14-S17, respectively. In all cases, the self-similar behavior of the distributions of weights and strengths, and of the power-law relation between strength and degree, is clear across length scales in the renormalization flow, which validates our analytic calculations.
Notice that the transformation in Eq. (2) is a φ-norm, a generalization of the Euclidean norm. As φ increases, the φ-norm becomes progressively dominated by the supremum of the terms w_mn in Eq. (2). In fact, the sup-GRW prescription is recovered in the limit φ → ∞ of φ-GRW. In addition, renormalizing by the sum is equivalent to setting φ = 1, and the renormalization of weights by the inverse of the sum of inverse values corresponds to φ = −1.
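These limits are easy to check numerically. In this sketch (function name ours), Eq. (2) is implemented as a rescaled φ-norm:

```python
import numpy as np

def phi_norm(weights, phi, C=1.0):
    """Renormalized weight of Eq. (2): w' = C * (sum_e w_e^phi)^(1/phi)."""
    w = np.asarray(weights, dtype=float)
    return C * (w ** phi).sum() ** (1.0 / phi)

w = [1.0, 3.0, 7.0]
print(phi_norm(w, 1))      # sum-GRW limit: 11.0
print(phi_norm(w, -1))     # parallel-resistor limit: 1/(1/1 + 1/3 + 1/7)
print(phi_norm(w, 50))     # already dominated by the supremum, close to 7.0
print(max(w))              # sup-GRW limit (phi -> infinity): 7.0
```

Even for moderate φ, the φ-norm is close to the maximum weight, which is why sup-GRW is effectively reached at relatively low values of φ.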
To clarify the efficacy of approximating φ-GRW as sup-GRW, we checked the asymptotic behavior of the φ-norm as a function of the number of elements E in the set of coarse-grainable weights and of the level of heterogeneity in the weights; see Section VI in the SM for more details. Figure 2 shows the result of applying the supremum and the sum prescriptions as compared with renormalizing weights using φ-GRW in two of the real networks analyzed in this letter; see Figs. S20 and S21 for the rest. In synthetic networks, we simulated weights using a distribution p(ω_mn) ∼ ω_mn^{−δ}, where δ allowed us to tune the level of heterogeneity, and produced sets of weights that were renormalized using Eq. (2) with C = 1 and different values of φ. We also renormalized the same sets using the alternative sum and supremum prescriptions; the results are shown in Figs. S18 and S19.

FIG. 2. (c) Asymptotics of the φ-norm. We used the weights {ω_mn} in the Openflights and Collaboration networks, and performed an iteration of φ-GRW to calculate the renormalized weight ω' with Eq. (2). Note that when r = 2, the number of links E between the nodes in two supernodes can be 1, 2, 3, or 4. We display the renormalized weights ω' (φ = 1) and ω' (φ = ∞) versus ω' (φ*) for different E, where φ* = β/(η − 1 + α) is the inferred value. Sup-GRW corresponds to the case φ = ∞, while sum-GRW corresponds to φ = 1.
In heterogeneous networks with a markedly scale-free weight distribution, only very small deviations from the supremum are observed, primarily for very low values of φ and low weight values. As the number of elements E increases and the weight distribution becomes more homogeneous, these deviations progressively become larger. As expected, higher values of φ reduce the discrepancy between the φ-norm and the supremum estimator. Nevertheless, across a wide range of parameter values, which encompasses those of realistic networks, there is generally good agreement between the φ-norm and the selection of the supremum, with any existing deviations being quite minor. While for some empirical weight distributions sup-GRW and sum-GRW yield the same renormalized weights, e.g., the JCN in Figs. S20 and S21, it is important to note that, in general, the relation between hidden strength and hidden degree is not preserved under sum-GRW. See the Methods section in the SM for more details.
The preservation of the relation σ = aκ^η allows us to approximate analytically the flow of the average strength from the flow of the average degree. In GR, the average degree changes from layer to layer approximately as ⟨k^(l+1)⟩ = r^ξ ⟨k^(l)⟩, with a scaling factor ξ that depends on the connectivity structure of the original network [14]. Combining this with Eq. (2) and imposing that the rescaling constant of weights C does not change in the flow, we obtain the scaling of the average hidden strength which, due to the proportionality between observed and hidden strength, implies that the flow of the average observed strength follows the same scaling. Therefore, in D = 1, the strength increases with a scaling factor that depends on the exponent η, on the coupling α between topology and geometry, and on the scaling factor ξ for the flow of the average degree; see Methods in the SM for details. This leads to an analytic approximation for the growth of the average strength as a function of the average degree which agrees with measurements in synthetic networks, where the average weight may increase, stay flat, or decrease in the flow, as shown in Fig. 3.

Altogether, our results suggest that sup-GRW is a good approximation for real networks and offers certain advantages over φ-GRW. One advantage is that it avoids the need to estimate the parameters that capture the coupling between the weighted structure of the network and the underlying geometry, which can be challenging in practice. Sup-GRW is equivalent to setting φ = ∞ and, due to the nature of the transformation, this limit is effectively reached for relatively low values of φ. In addition, renormalizing by the sum is equivalent to setting φ = 1, which in general does not preserve the exponent η of the relation between σ and κ; see the Methods section in the SM for analytical calculations.
Beyond theoretical considerations, the practical application of GRW extends to the generation of scaled-down replicas of weighted networks. These replicas can serve as valuable testbeds for evaluating the scalability of computationally intensive protocols or for studying processes where the size of a real network plays a role. The generation of a scaled-down replica involves obtaining a reduced version of the topology, as described in Ref. [14], and subsequently rescaling the weights in the renormalized network layer to match the level of the original network. The detailed procedure can be found in the SM, and the results for the scaled-down replicas of real weighted networks are presented in Figs. S25-S28.
In summary, the extension of the geometric renormalization framework to weighted networks demonstrates that multiscale self-similarity characterizes not only the topology but also the weighted structure of real networks, provided the appropriate renormalization scheme is applied. Moreover, the weights in these networks result from processes that determine the intensities of interactions, and our findings suggest that these processes follow the same underlying principles across different length scales. Notably, the transformation implied by the theory is closely approximated by the maximum-weight prescription, a highly effective approach that can be readily applied to real networks despite the presence of significant noise affecting their weights. This observation justifies our confidence that noise will not fundamentally alter the qualitative results reported in this study.
The present work represents a significant step towards establishing a comprehensive framework for the renormalization of network structure and opens up possibilities for renormalizing dynamical processes on real networks. In future research, it will be essential to incorporate not only the topology of connections and their weights but also their directionality, which is crucial in many real-world processes.
C. Results for sup-GRW

FIG. S1. Geometric renormalization transformation for weighted networks. Each layer is obtained after a GRW step with resolution r starting from the original network in l = 0. Each node i in red is placed at an angular position on the similarity circle and has a size proportional to the logarithm of its hidden degree. Straight solid lines represent the links in each layer, with weights denoted by their thickness. Coarse-graining blocks correspond to the blue shadowed areas, and dashed lines connect nodes to their supernodes in layer l + 1. Two supernodes in layer l + 1 are connected if and only if some node of one supernode in layer l is connected to some node of the other, with the supremum among the weights of the links between the constituent nodes as the weight of the new connection (dark blue links give an example). The GRW transformation has semigroup structure with respect to the composition. In the figure, the transformation with r = 4 goes from l = 0 to l = 2 in a single step.
B: Methods

Description of empirical data sets
• Cargo ships. The international network of global cargo ship movements consists of the number of shipping journeys between pairs of major commercial ports in the world in 2007 [1].
• E. coli. Weights in the metabolic network of the bacterium E. coli K-12 MG1655 consist of the number of different metabolic reactions in which two metabolites participate [2,3].
• US commute. The commuting network reflects the daily flow of commuters between counties in the United States in 2000 [4].
• Facebook-like Social Network (Facebook). The Facebook-like Social Network originates from an online community for students at the University of California, Irvine, in the period between April and October 2004 [5,6]. In this network, the nodes are students and ties are established when online messages are exchanged between the students. The weight of a directed tie is defined as the number of messages sent from one student to another.
We discard the direction of every link and preserve the weight ω_ij as the sum of bidirectional messages, i.e., ω_ij = ω_{i→j} + ω_{j→i}. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
• Collaboration. This is the co-authorship network based on preprints posted to the Condensed Matter section of the arXiv E-Print Archive between 1995 and 1999 [7]. Authors are identified with nodes, and an edge exists between two scientists if they have coauthored at least one paper. The weights are the number of joint papers. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
• Openflights. Network of flights among all commercial airports in the world in 2010, derived from the Openflights.org database [8]. Nodes represent the airports. The weights in this network refer to the number of routes between two airports. We discard the direction of every link and preserve the weight ω_ij as the sum of bidirectional weights, i.e., ω_ij = ω_{i→j} + ω_{j→i}. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
• Journal Citation Network (JCN). The citation networks from 1900 to 2013 were reconstructed from data on citations between scientific articles extracted from the Thomson Reuters Citation Index [9]. A node corresponds to a journal with publications in the given time period. An edge connects journal i to journal j if an article in journal i cites an article in journal j, and the weight of this link is taken to be the number of such citations. In this work, we use undirected and weighted networks generated from 3 different time windows: 2008-2013, 1985-1990, and 1965-1975. The data are obtained from Ref. [10].
• New Zealand Collaboration Network (NZCN). This is a network of scientific collaborations among institutions in New Zealand. Nodes are institutions (universities, organizations, etc.) and edges represent collaborations between them. In particular, two nodes i, j are connected if Scopus lists at least one publication with authors at institutions i and j in the period 2010-2015. The weights of the edges record the number of such collaborations. The data are obtained from Ref. [11]. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
• Poppy and foxglove hypocotyl cellular interaction networks. These networks capture global cellular connectivity within the hypocotyl (embryonic stem) of poppy and foxglove. Nodes represent cells and edges are their physical associations in 3D space. Edges are weighted by the size of the shared intercellular interfaces, and nodes are annotated with cell type. The data are obtained from Ref. [12].
Network statistics can be found in Table S1.
TABLE S1. Overview of the considered real-world networks. Columns are: the name of each network (Name), the number of nodes (N), the average degree (⟨k⟩), the average local clustering coefficient (⟨c⟩), the hyperbolic embedding parameters β and µ, the fitting exponent (γ) of the degree distribution, the fitting parameters (a and η) of the strength-degree relation, the trade-off between the contribution of degrees and geometry to weights (α), the noise level (⟨ε²⟩), the parameter φ = β/(η − 1 + α), and the references to the data sources (Ref.).

Network embedding to produce geometric network maps
We embed each considered network into hyperbolic space using the algorithm introduced in Ref. [13], named Mercator. Mercator takes the network adjacency matrix A_ij (A_ij = A_ji = 1 if there is a link between nodes i and j, and A_ij = A_ji = 0 otherwise) as input and returns the inferred hidden degrees, angular positions of nodes, and global model parameters. More precisely, the hyperbolic maps were inferred by finding the hidden degree and angular position of each node, {κ_i} and {θ_i}, that maximize the likelihood L = Π_{i<j} p_ij^{A_ij} (1 − p_ij)^{1−A_ij} that the structure of the network was generated by the S^1 model, where p_ij = 1/(1 + χ_ij^β) is the connection probability and χ_ij = R∆θ_ij/(µκ_iκ_j).
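For illustration, the log-likelihood maximized by Mercator can be written down directly; the function below is a naive O(N²) sketch with our own naming, not Mercator's actual implementation.

```python
import numpy as np

def s1_log_likelihood(theta, kappa, A, beta, mu):
    """ln L = sum_{i<j} [A_ij ln p_ij + (1 - A_ij) ln(1 - p_ij)]
    for the S^1 model with p_ij = 1/(1 + chi_ij^beta)."""
    N = len(theta)
    R = N / (2 * np.pi)
    logL = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))  # angular separation
            chi = R * dtheta / (mu * kappa[i] * kappa[j])
            p = 1.0 / (1.0 + chi ** beta)
            logL += A[i, j] * np.log(p) + (1 - A[i, j]) * np.log(1.0 - p)
    return logL
```

Configurations in which links join angularly close nodes score a higher likelihood than configurations with the same number of links between distant nodes, which is what drives the inference of the angular coordinates.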

The definition of disparity
The disparity of nodes. The disparity quantifies the local heterogeneity of the weights attached to a given node i and is defined as Y_i = Σ_j (ω_ij/s_i)², where ω_ij is the weight of the link between node i and its neighbor j, and s_i is the strength of node i. From this definition, we see that the disparity scales as Y ∼ k_i^{−1} whenever the weights are roughly homogeneously distributed among the links. Conversely, a disparity that decreases more slowly than k_i^{−1} implies that the weights are heterogeneous and that the large strength of a node is due to a handful of links with large weights.
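The disparity can be computed in a few lines; the sketch below (our own naming) assumes a symmetric weight matrix in which every node has at least one link, so that all strengths are positive.

```python
import numpy as np

def disparity(W):
    """Y_i = sum_j (w_ij / s_i)^2, with s_i = sum_j w_ij the node strength."""
    s = W.sum(axis=1)                      # node strengths
    return ((W / s[:, None]) ** 2).sum(axis=1)

# Homogeneous weights give Y ~ 1/k; a single dominant weight drives Y towards 1.
W = np.array([[0.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0]])
print(disparity(W)[0])   # node 0 has k = 3 equal weights, so Y = 1/3
```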

Theoretical derivation of the renormalized weights
Under GR, the hidden variables of supernodes in the resulting layer, κ' and θ', are calculated as a function of the hidden variables of the constituent nodes; in particular, κ'_i = (Σ_{m∈i} κ_m^β)^{1/β} [14]. The expressions above and Eq. (1) in the main text altogether imply that the renormalized weight should be ω'_ij = C (Σ_{(m,n)} ω_mn^φ)^{1/φ}, Eq. (B6). In the last step, we have assumed that, for every pair of nodes (m, n), we can obtain the product κ_m κ_n from the corresponding weight ω_mn, which is not true in general, as some links might not exist. However, this should be a reasonable approximation, since it only misses the smallest products of hidden degrees. Now, the above transformation cannot be performed without the precise distances in the embedding, as it depends on d_mn. Recalling that d_mn = R∆θ_mn, where ∆θ_mn stands for the angular separation between the nodes, and that all such distances are approximately equal to the angular separation between the supernodes to which the nodes belong (∆θ_mn ≈ ∆θ_ij), we can see that fixing α' = α removes all dependency on the distance. Therefore, the weighted model predicts that the exponent η characterizing the relation between strength and degree is preserved in the renormalized network if weights are transformed following Eq. (B6) (in the noiseless case) and the value of φ that corresponds to the considered network is used.

Theoretical derivation of the flow of the average strength
We start from Eq. (B6) (D = 1) and impose that the rescaling variable C is constant in the flow, such that the transformation of weights keeps the same units at all scales of observation. This fixes the transformation of the relation between hidden strength and hidden degree. We can also obtain the transformation of the free parameter ν using its expression from [15] together with the expression for the parameter µ, and therefore the flow of the average hidden strength, where we have used the expression for the flow of the average degree. We use ⟨κ^η⟩ = (γ−1)/(γ−1−η) κ_0^η (valid for η < γ−1) to compute its flow, and imposing C = 1 we obtain the scaling exponent ψ, from which ψ > 0 implies an increasing average strength in the flow, while the average strength decreases if ψ < 0.

The transformation sum-GRW does not preserve the relation between strength and degree
The sum-GRW transformation is given by Eq. (B8), where e runs over all pairs of nodes (m, n) with m in supernode i and n in supernode j, and d_mn = R∆θ_mn, where ∆θ_mn stands for the angular separation between the nodes. All such distances are approximately equal to the angular separation between the supernodes to which the nodes belong (∆θ_mn ≈ ∆θ_ij), and one can take α' = α. Comparing Eq. (1) in the main text with Eq. (B8), we can write Eq. (B9), and using R' = R/r^{1/D} and Eq. (B9) altogether, we obtain the renormalized weights. Therefore, in the noiseless version (ε_mn = 1 ∀(m, n)), we can obtain the hidden strength σ'_i in the supernodes layer, which proves that, in general, the relation between hidden strength and hidden degree is not preserved under sum-GRW.

Scale down replicas
1. We obtain a renormalized network layer by applying the sup-GRW method with a given value of r and number of iterations to match the target network size.
2. Typically, the average degree of the renormalized network layer is higher than that of the original one. Thus, to obtain a scaled-down network replica of the topology, we decrease the average degree in the renormalized layer to that of the original network as explained in Ref. [14], such that ⟨k'⟩ ≈ ⟨k^(0)⟩. The main idea is to reduce the value of µ^(l) to a new one, µ_new, which lowers the connection probability of every pair of nodes (i, j) to p_ij,new. Therefore, the probability for a link to be kept in the pruned network reads p_ij,new/p_ij. In particular, we prune the links using µ^(l) with h = 1 as initial value. After an iteration over all the links in the layer, we give h a new value and repeat until the relative deviation of the average degree from its target value is below a given threshold, which we set to 0.1.
3. Finally, we rescale the weights in the resulting network by a global factor to match the average weight of the original network. Specifically, we calculate the average weight ⟨w⟩ of the resulting network from step (2) and the average weight ⟨w^(0)⟩ in the original network, and then rescale the weight of each link by the factor c = ⟨w^(0)⟩/⟨w⟩.

The geometric renormalization transformation has Abelian semigroup structure with respect to the composition, meaning that a certain number of iterations with a given resolution are equivalent to a single transformation of higher resolution. We here validate the semigroup structure of the sup-GRW transformation with synthetic and empirical networks. Given an original network, we performed sup-GRW with r = 2 and with r = 4. When the renormalization flows reach the same network size, we compare the resulting network properties. Figure S8 shows the results for a representative synthetic network.
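Step (2) above can be sketched as a Bernoulli thinning of the existing links: a link present with probability p_ij in the renormalized layer survives with probability p_ij,new/p_ij, so that the surviving topology follows the lower connection probability. This is our reading of the pruning procedure of Ref. [14], with our own naming, not a verbatim implementation:

```python
import numpy as np

def prune_links(W, p_old, p_new, rng=None):
    """Keep each existing link with probability p_new/p_old (Bernoulli
    thinning), so that kept links follow the lower connection probability
    p_new. W, p_old and p_new are symmetric matrices."""
    rng = np.random.default_rng() if rng is None else rng
    ratio = np.where(p_old > 0, p_new / np.maximum(p_old, 1e-300), 0.0)
    keep = rng.random(W.shape) < ratio
    keep = np.triu(keep, 1)
    keep = keep | keep.T                  # one symmetric decision per link
    return W * keep
```

With p_new = p_old every link survives, and with p_new = 0 the layer is fully pruned; intermediate ratios lower the average degree towards the target.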

FIG. 1. Self-similarity of real weighted networks along GRW flows. The first row shows the sup-GRW flow of the pdf of weights and their disparity in nodes in Openflights (a)-(b) and Collaboration (c)-(d). The same is shown in the second row for the sum-GRW flow. The number of layers in each shell is determined by the original network size, and r = 2 in all cases.
used that R' = R/r^{1/D}. Finally, we can choose any appropriate relation between primed and unprimed global parameters, leading to the transformation of weights ω'_ij in Eq. (B6).

FIG. S2. Network properties for sup-GRW in different empirical networks. First column: complementary cumulative degree distribution P_c^(l)(k_res^(l)). Second column: degree-dependent clustering coefficient c̄^(l)(k_res^(l)). Last column: normalized average nearest-neighbour degree k̄_nn,n^(l)(k_res^(l)).

FIG. S3. Network properties for sup-GRW in different empirical networks. First column: complementary cumulative degree distribution P_c^(l)(k_res^(l)). Second column: degree-dependent clustering coefficient c̄^(l)(k_res^(l)). Last column: normalized average nearest-neighbour degree k̄_nn,n^(l)(k_res^(l)).

FIG. S4. Network properties for sup-GRW in different empirical networks. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).

FIG. S5. Network properties for sup-GRW in different empirical networks. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).

FIG. S6. (a) Average clustering coefficient, (b) average degree, (c) rescaled average weight and (d) corresponding average strength, (e) unrescaled average weight and (f) corresponding average strength for different layers l.

FIG. S9. Network properties for sum-GRW in different empirical networks. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).

FIG. S10. Network properties for sum-GRW in different empirical networks. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).

FIG. S11. Network properties for φ-GRW in different empirical networks. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).

FIG. S12. Network properties for φ-GRW in different empirical networks. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).

FIG. S17. φ-GRW in synthetic networks with different η. First column: complementary cumulative weight distributions P_c^(l)(w_res^(l)) of rescaled weights w_res^(l) = w^(l)/⟨w^(l)⟩ for different layers l. Second column: complementary cumulative strength distributions P_c^(l)(s_res^(l)) of rescaled strengths s_res^(l) = s^(l)/⟨s^(l)⟩ for different layers l. Third column: average rescaled strengths as a function of rescaled degrees, i.e., s̄_res^(l)(k_res^(l)).