Abstract
We consider two elementary (max-flow and uniform-flow) and two realistic (max-min fairness and proportional fairness) congestion control schemes, and analyse how the algorithms and network structure affect throughput, the fairness of flow allocation, and the location of bottleneck edges. The more realistic proportional fairness and max-min fairness algorithms have similar throughput, but path flow allocations are more unequal in scale-free than in random regular networks. Scale-free networks have lower throughput than their random regular counterparts in the uniform-flow algorithm, which is favoured in the complex networks literature. We show, however, that this relation is reversed in all other congestion control algorithms for a region of the parameter space given by the degree exponent γ and average degree 〈k〉. Moreover, the uniform-flow algorithm severely underestimates the network throughput of congested networks, and a rich phenomenology of path flow allocations is only present in the more realistic α-fair family of algorithms. Finally, we show that the number of paths passing through an edge characterises the location of a wide range of bottleneck edges in these algorithms. Such identification of bottlenecks could provide a bridge between the two fields of complex networks and congestion control.
Introduction
Twenty-first century life depends on the reliability of critical infrastructure networks. Because of high construction costs, these networks often end up operating close to the unstable region: a small increase in flow leads to network congestion or even shutdown^{1,2,3,4}. Without properly designed congestion control, the consequences can be catastrophic, as in the congestion collapse on the Internet^{5}. In 1986, the Internet (then ARPANet) was a slow (56 Kbps) and small network with a large number of hosts (5,089)^{6}. In October that year, the link between the University of California, Berkeley and Lawrence Berkeley National Laboratory (360 m long) suffered a drop in flow rate of three orders of magnitude, from 32 Kbps to 40 bps. The reason for the collapse was the control mechanism implemented at the time, which focused on congestion at the receiver. The bottleneck, however, was congestion on the network. Two years later, Van Jacobson redesigned the TCP congestion control algorithm^{7}, enabling the Internet to expand in size and speed. Today, we need algorithms to share scarce network resources during times of crisis^{8,9}. In the future, we will require algorithms to share the capacity of electrical distribution networks for the charging of electric vehicles^{4}. Moreover, when transport becomes autonomous, we may need algorithms to ease traffic congestion^{10,11}, and an understanding of the role of fairness, efficiency and network structure in such algorithms could improve the way society manages transportation. These challenges revive the topic of congestion control and uncover a new range of problems of network design at the interface between physics, engineering and the social sciences^{4,8,12,13,14,15}. Although much work has been done to characterise congestion control mechanisms^{5,16,17,18} and the topology of large random networks^{19,20,21,22,23}, little is known about the effect of network structure on congestion control.
Furthermore, while congestion control methods have been in operation in communication networks since the 1980s, an understanding of the relative performance of these algorithms on large random networks remains elusive.
A network is at the onset of congestion when at least one edge is carrying traffic at its capacity^{24}. When there is an attempt to increase traffic on that edge beyond its capacity, the network becomes congested, and the flow on the edge does not increase any further, even if the traffic load presented to the edge increases. In modelling congested complex networks, researchers typically look for the value of a control parameter at which the network reaches the onset of congestion. Studies have focused on the onset of congestion as a function of network structure and parameters^{25}, optimal topologies for local search with congestion^{26,27,28}, scaling of fluctuations in a model of an M/M/1 queueing system^{29}, improved routeing protocols^{30}, the impact of community structure on the transport of information^{31}, an edge weighting rule to lower node-capacity costs and increase the packet generation rate at the onset of congestion^{32}, and the emergence of extreme events in interdependent networks^{33}. These studies share the limitation that the sending frequency of packets (or rate) is uniform across the network and, consequently, the transition from free flow to congestion is determined by the nodes with the largest betweenness centrality. Hence, only the node(s) with the largest betweenness are fully utilised at the onset of congestion, and this method therefore considerably underestimates the flow that congested networks can transport (see Methods, Section ‘Uniform-flow’). While traditionally network flows are modelled by maximising the network throughput (max-flow) or minimising the costs (minimum-cost), such efficient allocations can leave some users with zero flow, an unfair solution from the user’s point of view. Congestion control algorithms solve these problems through cost-effective and scalable network protocols that make good use of the network capacity, sharing it among users in a fair way.
These algorithms allocate path flows to paths connecting source to sink nodes. In doing so, they capture fairness by a family of user utility functions, called α-fair^{18,34}:

\({U}_{j}({f}_{j},\alpha )=\frac{{f}_{j}^{1-\alpha }}{1-\alpha }\,\,(\alpha \ge 0,\,\alpha \ne 1);\qquad {U}_{j}({f}_{j},1)=\,\mathrm{log}\,{f}_{j},\qquad (1)\)

where j = 1, …, R is a path (or user), and f_{ j } is the path flow assigned to path j. The algorithms maximise the aggregate utility \(U(\alpha )={\sum }_{j=1}^{R}{U}_{j}({f}_{j},\alpha )\), under the constraint that the path flows are feasible, i.e., all path flows are non-negative and no edge flow exceeds edge capacity.
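To make the trade-off encoded in Eq. (1) concrete, the following minimal sketch (our own illustration, not the paper’s Methods; the toy instance and function names are ours) maximises the aggregate α-fair utility on a three-path example in which one long path shares two unit-capacity edges with two short paths. Because any increasing utility uses residual capacity in full, the optimisation reduces to a one-dimensional search over the long-path flow.

```python
import numpy as np

def alpha_utility(f, alpha):
    """alpha-fair utility of a single path flow f > 0, as in Eq. (1)."""
    if alpha == 1.0:
        return np.log(f)
    return f ** (1.0 - alpha) / (1.0 - alpha)

def toy_allocation(alpha, grid=200001):
    """Toy instance: a long path (flow f0) crosses two unit-capacity
    edges, each also used by one short path.  Feasibility bounds each
    short-path flow by 1 - f0, and a monotone utility is maximised by
    taking the full residual capacity, so only f0 must be searched."""
    f0 = np.linspace(1e-6, 1.0 - 1e-6, grid)
    aggregate = alpha_utility(f0, alpha) + 2.0 * alpha_utility(1.0 - f0, alpha)
    best = f0[np.argmax(aggregate)]
    throughput = best + 2.0 * (1.0 - best)
    return best, throughput
```

On this instance the long-path allocation moves from 0 at α = 0 (max-flow starves it), through 1/3 at α = 1 (proportional fairness), towards the max-min split of 1/2 as α grows, while throughput decreases, a small-scale picture of the efficiency–fairness trade-off analysed in the Results.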
For α = 0, we recover the max-flow (MF) allocation that maximises the network throughput^{35} \(U(0)={\sum }_{j=1}^{R}{f}_{j}\). For α = 1, we find the proportional fairness (PF) allocation, an algorithm that manages congestion via Lagrange multipliers, which can be interpreted as edge prices. The proportional fairness optimisation problem is convex, and Slater’s constraint qualification implies that its primal and dual formulations are equivalent^{36}. The primal problem is solved for the path flows, whereas the dual is solved for the Lagrange multipliers or shadow prices. Both the primal and the dual problems can be posed as decentralised optimisation problems and solved as a system of coupled ODEs^{37}, which is much more efficient in large real-world networks than centralised control. Algorithmically, in the primal problem, source nodes ramp up the path flow additively but decrease it multiplicatively if at least one edge of the path is used close to capacity. The size of the system of coupled ODEs in the primal is determined by the number of paths in the network; in contrast, the number of ODEs in the dual is given by the number of network edges and thus depends only on network structure. Hence, if the number of paths is much larger than the number of network edges, it is preferable to solve the dual instead of the primal^{5,8,37}. The max-min fairness (MMF) allocation is defined by α = ∞ in Eq. (1); it is typically found, however, with a more efficient algorithm that maximises the use of network resources by the users with the minimum allocation. Once these ‘poor’ users get the largest possible allocation, the process repeats iteratively for the next least well-off users^{16,38}. Intuitively, a set of path flows is max-min fair if the wealthy can only get wealthier by making the poor even poorer.
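The iterative max-min fairness procedure just described can be sketched as a progressive-filling (water-filling) loop. This is an illustrative implementation under simplifying assumptions (dense NumPy arrays, every path crossing at least one edge), not the paper’s code: all path flows grow at a common rate, and paths freeze as soon as one of their edges saturates.

```python
import numpy as np

def max_min_fair(H, c):
    """Progressive-filling sketch of the max-min fair allocation.
    H is the E x R edge-path incidence matrix (H[i, j] = 1 if path j
    uses edge i), c the vector of edge capacities."""
    E, R = H.shape
    f = np.zeros(R)
    active = np.ones(R, dtype=bool)
    cap = c.astype(float).copy()
    while active.any():
        # number of still-growing paths on each edge
        n = H @ active
        # the first edge to saturate limits the common increment
        with np.errstate(divide="ignore", invalid="ignore"):
            inc = np.where(n > 0, cap / n, np.inf).min()
        f[active] += inc
        cap -= H @ (active * inc)
        # freeze every path that crosses a (numerically) saturated edge
        saturated = cap <= 1e-12
        active &= ~(H.T @ saturated).astype(bool)
    return f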
The uniform-flow (UF) problem is determined by the maximisation of the aggregate utility U(α) for any α ≥ 0, with the added constraint that all path flows are the same, which implies that the optimum is independent of α. The α-fairness utility function provides a social planner with a way to understand the trade-off between efficiency (α = 0) and a continuum of models of fairness, such as proportional fairness (α = 1) and max-min fairness (α = ∞)^{16,17,18,37,38,39,40,41,42}. The proportional fairness allocation is special because the system and the users simultaneously maximise their utility functions, and because it is implemented in communication networks^{5} (see Methods, Section ‘The mathematics of congestion control’).
Results
To gain insight into the behaviour of the α-fairness family of algorithms, and to illustrate the phenomenon of congestion collapse, we first analyse the network throughput on a ring lattice. We consider a simple protocol that distributes edge capacity proportionally to the flows on the paths that pass through an edge (see Methods, Section ‘Avoiding congestion collapse on the ring lattice’). A long path, which uses all network edges, competes for flow with a set of short paths that use only two edges each. Individual paths may increase the flow they inject into the network with the aim of raising their edge capacity quota; queues then build up at the nodes, and the lattice becomes congested. Surprisingly, as the injected flow grows, the network throughput does not converge to an upper bound, as intuitively expected, but to zero. This collapse, however, can be avoided if we control congestion with the α-fair family of algorithms of Eq. (1). Intuitively, network throughput should decrease with an increase in α, so that it is larger or equal for max-flow than for proportional fairness, greater or equal for proportional fairness than for max-min fairness, and in turn larger or the same for max-min fairness than for uniform-flow. In other words, we expect that the price to pay for increasing equity is a decrease in throughput, such that the proportional fairness allocation is a trade-off between efficiency (max-flow) and fairness (max-min fairness and uniform-flow). Our intuition is right for small ring lattices, but as the number of nodes in the ring grows, the throughputs of the proportionally fair and max-flow allocations converge. Indeed, proportional fairness penalises long paths because they use more network resources than short paths. As the size of the ring grows, the long path uses a higher proportion of network capacity, thus getting less and less flow, and proportional fairness converges to max-flow.
In contrast, max-min fairness yields a lower throughput than these two protocols because it assigns the same allocation to all paths (see Methods, Section ‘Avoiding congestion collapse on the ring lattice’). Hence, the ring lattice illustrates the counterintuitive phenomenon of congestion collapse, as well as, in the presence of congestion control, the surprising convergence of proportional fairness to max-flow as the ring size grows. These observations, made on a regular network structure with a regular structure of paths, are in sharp contrast with our findings on random networks.
We next study the effect of controlling congestion on scale-free (SF) (with exponent 2 < γ < 3), Erdős–Rényi (ER), and random regular (RR) substrate networks with average node degree 3 ≤ 〈k〉 ≤ 8. Flows take place on a transport overlay network, which is the subgraph formed by a set of R shortest paths, chosen with uniform probability among all possible shortest paths on the substrate network (see Methods, Section ‘Network Models’). From now on, we consider only the transport overlay network when we refer to random networks and omit this term from the text. We now ask the question: to what extent is being fair compatible with maximising network throughput on random networks?
To analyse the interplay between algorithms and network structure, we next compute ‘the price of fairness’^{43}, that is, the relative system efficiency loss under a ‘fair’ allocation compared to the one that maximises the sum of user utilities:

\(P( {\mathcal F} ,{\mathscr{N}})=1-\frac{F( {\mathcal F} ,{\mathscr{N}})}{F({\rm{MF}},{\mathscr{N}})},\qquad (2)\)

where \( {\mathcal F} \in \{{\rm{MF}},{\rm{PF}},{\rm{MMF}},{\rm{UF}}\}\) is the algorithm (max-flow, proportional fairness, max-min fairness, or uniform-flow), and \(F( {\mathcal F} ,{\mathscr{N}})\) is the throughput of the algorithm \( {\mathcal F} \) for the chosen network structure. We denote the network structure by \({\mathscr{N}}\), such that for scale-free networks we write \({\mathscr{N}}:={\rm{SF}}(\gamma ,\langle k\rangle ,R)\), and we characterise Erdős–Rényi networks (γ = ∞) by \({\mathscr{N}}:={\rm{ER}}(\infty ,\langle k\rangle ,R)\). Moreover, we write \({\mathscr{N}}:={\rm{RR}}(\gamma ,\langle k\rangle ,R)\) to denote the corresponding random regular networks both for scale-free and Erdős–Rényi networks (see Methods, Section ‘Network Models’). The efficient algorithm (max-flow) has a price of fairness of zero, whereas an algorithm that results in zero network throughput has a price of fairness of one. Figure 1(A) shows that, in contrast to the ring lattice, the price of fairness of proportional fairness in random networks is larger than zero and of comparable magnitude to the price of fairness of max-min fairness for all network structures we analysed, showing that the throughput of proportional fairness now approaches that of max-min fairness. To characterise the fairness of each algorithm, we show in Fig. 1B the inequality of path flows, measured by the Gini coefficient (see Methods, Section ‘Gini coefficient’).
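The two summary statistics used in Fig. 1 can be computed from a throughput value and a vector of path flows. A minimal sketch (function names are ours) following the price-of-fairness definition above and the standard Gini formula for a sorted sample:

```python
import numpy as np

def price_of_fairness(throughput_fair, throughput_maxflow):
    """Relative efficiency loss of a 'fair' allocation, Eq. (2):
    0 for max-flow itself, 1 for an allocation with zero throughput."""
    return 1.0 - throughput_fair / throughput_maxflow

def gini(f):
    """Gini coefficient of a path-flow allocation f with positive total:
    0 means perfect equality; values near 1 mean a few paths hold
    almost all the flow."""
    f = np.sort(np.asarray(f, dtype=float))  # ascending order
    n = f.size
    cum = np.cumsum(f)
    # standard formula based on the normalised cumulative distribution
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n
```

For instance, an allocation in which one of four paths carries all the flow has a Gini coefficient of 0.75, the maximum possible for four paths.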
An ideal congestion control algorithm would have high throughput (low price of fairness) and low inequality (low Gini coefficient) for any network structure. However, such a general algorithm does not exist, because the maximisation of throughput leads to inequality. Indeed, to maximise throughput in a network with constant edge capacity, a few paths receive all the capacity of the edges they pass through, whereas the majority of paths are allocated zero path flow (see Fig. 2C). The coexistence of both types of paths leads to a vast inequality in path flows. The α-fairness family of algorithms increases the equity of path flows with increasing α. As a consequence, however, α-fairness lowers network throughput as α increases from α = 0 (max-flow) to α = ∞ (max-min fairness), and this mechanism captures the efficiency–fairness trade-off.
Figure 1(A) and (B) show that the efficient algorithm (max-flow) has a high level of inequality (high Gini coefficient) and uniform-flow has low throughput (high price of fairness), thus illustrating why there is little incentive to implement these algorithms in the real world, despite the recent interest in uniform-flow^{25,26,27,28,29,30,31,32,33}. In contrast, proportional fairness and max-min fairness are trade-offs between efficiency and fairness, as illustrated by the mid-range values of the price of fairness and Gini coefficient for all network structures analysed. Taken together, these features uncover the effect on network throughput and fairness of elementary (max-flow and uniform-flow) versus elaborate (proportional fairness and max-min fairness) congestion control algorithms.
We observe in Fig. 1A that for max-flow, max-min fairness and proportional fairness, the price of fairness is largely independent of network structure. Similarly, Fig. 1B shows that for max-flow, the inequality of path flows (measured by the Gini coefficient) is also largely independent of network structure. These observations suggest that proportional fairness and max-min fairness are similar algorithms with only a minor dependence on network structure. Surprisingly, however, the inequality of path flow allocations for proportional fairness and max-min fairness depends mainly on network structure (see Fig. 1B). Hence, network designers that implement congestion control should be aware that scale-free and random regular network structures have similar throughput, but scale-free topologies induce larger inequality in path flows. This is especially important because proportional fairness is often implemented in real-world networks (e.g., the Internet), and the effect of network structure on the inequality of path flows is revealed by our study of the α-fairness family of algorithms, but cannot be disentangled from an analysis of max-flow or uniform-flow only. Thus, previous studies of max-flow (large inequality)^{44,45} and uniform-flow^{25,26,27,28,29,30,31,32,33} (no inequality) miss the effect of network structure on the inequality of path flow allocations, and our study is a natural extension to congestion control algorithms of the body of work in the complex networks literature.
To study the effect of demand on the throughput and inequality of path flows, we analyse how these quantities vary with the number R of shortest paths in the network, and we study networks with 〈k〉 = 3 and γ = 2.1. Figure 2A is a plot of network throughput as the number R of shortest paths grows. The Gini coefficient, plotted in Fig. 2B, quantifies the growth in the inequality of path flows as a function of R (see Methods, Section ‘Gini coefficient’). Network throughput increases with the number of paths, since the capacity of more edges is used. Because the network size is fixed, however, the growth in throughput inevitably slows down as more paths are added to the network. The asymptotic value of throughput, and the way this slowing down takes place, characterise the efficiency of the algorithm and network structure. Figure 2A shows that the increase in throughput with R is much slower for uniform-flow than for the other algorithms. This result illustrates the poor performance of uniform-flow for a broad range of R values and thus complements Fig. 1, which compares algorithms only for R = 1500. The throughput and Gini coefficient curves do not intersect in Fig. 2, and thus the relative performance of the algorithms does not change much with R.
In max-flow, the path flow allocations share edge capacity on min-cuts among a relatively small number of paths, leaving most paths with zero flow (see Fig. 2C), thus creating a large inequality in the assignment of path flows. Although max-flow is an extreme case, because it is the only analysed algorithm that can leave paths with zero flow, inequality is present in all congestion control algorithms. Indeed, the increase in throughput with R is also accompanied by a rise in the inequality of path flow allocations in max-flow, proportional fairness and max-min fairness.
Traditionally, congestion control algorithms have been designed to counterbalance the phenomenon that max-flow may exclude some paths (i.e., users) from using the network. However, little is known about the behaviour of these algorithms as a function of network structure. Here we take a step towards filling this gap by analysing the effect on network throughput of varying γ and 〈k〉. To do this, we consider the relative throughput,

\(\rho ( {\mathcal F} ,{\mathscr{N}})=\frac{F( {\mathcal F} ,{\mathscr{N}})}{F( {\mathcal F} ,{\rm{RR}}(\gamma ,\langle k\rangle ,R))},\qquad (3)\)

where F(·) is the network throughput, \( {\mathcal F} \) is the fairness algorithm, and \({\mathscr{N}}\) identifies the network structure and parameters (γ = ∞ denotes Erdős–Rényi and their random regular networks). The ratio \(\rho ( {\mathcal F} ,{\mathscr{N}})\) isolates the effect of the node degree distribution on throughput by comparing scale-free and Erdős–Rényi networks against the null model of random regular networks (see Methods, Section ‘Network Models’). We observe that \({\mathrm{lim}}_{\langle k\rangle \to (N-1)}\rho ( {\mathcal F} ,{\mathscr{N}})=1\) because both networks in the ratio converge to fully connected graphs in this limit. Together with the relative network throughput, we consider the number \(\phi ({\mathscr{N}},i)\) of paths passing through edge i:

\(\phi ({\mathscr{N}},i)={\sum }_{j=1}^{R}{H}_{ij},\qquad (4)\)
where H is the edge-path incidence matrix (see Methods, Section ‘The mathematics of congestion control’). Because edge capacity is one (c _{i} = 1), the path flows assigned by uniform-flow are given by \(1/{\rm{\max }}\,\{\phi ({\mathscr{N}},i):i=1,\ldots ,E\}\). Thus we have the exact relation between ρ and φ for uniform-flow:

\(\rho ({\rm{UF}},{\mathscr{N}})=\frac{{{\rm{\max }}}_{i}\,\phi ({\rm{RR}}(\gamma ,\langle k\rangle ,R),i)}{{{\rm{\max }}}_{i}\,\phi ({\mathscr{N}},i)}.\qquad (5)\)
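For unit capacities, the uniform-flow allocation and the ratio in Eq. (5) follow directly from row sums of the incidence matrix. A short illustrative sketch (function names are ours):

```python
import numpy as np

def path_counts(H):
    """phi(N, i) of Eq. (4): the number of paths crossing each edge,
    i.e. the row sums of the edge-path incidence matrix H."""
    return H.sum(axis=1)

def uniform_flow_allocation(H, capacity=1.0):
    """Uniform-flow path flows for constant edge capacity: every path
    receives the capacity share of the single most crowded edge."""
    phi = path_counts(H)
    share = capacity / phi.max()
    return np.full(H.shape[1], share)

def relative_throughput_uf(H_sf, H_rr):
    """Eq. (5): relative uniform-flow throughput of a network (e.g. SF)
    against its random regular counterpart, from path counts alone."""
    return path_counts(H_rr).max() / path_counts(H_sf).max()
```

Because every path gets the same flow, a single heavily loaded edge caps the whole allocation, which is why the heavier path concentration of scale-free networks makes this ratio smaller than one.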
We found \(\rho ({\rm{UF}},{\mathscr{N}}) < 1\) for all γ and 〈k〉, due to the higher maximum concentration of paths in scale-free networks than in random regular networks, i.e., \(\mathop{{\rm{\max }}}\limits_{i}\,\phi ({\rm{SF}}(\gamma ,\langle k\rangle ,R),i) > \mathop{{\rm{\max }}}\limits_{i}\,\phi ({\rm{RR}}(\gamma ,\langle k\rangle ,R),i)\), as illustrated in Fig. 3A.
Similarly to uniform-flow, we could expect \(\rho ( {\mathcal F} ,{\mathscr{N}}) < 1\) for all values of γ and 〈k〉. Surprisingly, however, as Fig. 3B–D show, \(\rho ( {\mathcal F} ,{\mathscr{N}})\) can be smaller or larger than one depending on the region of the parameter space (γ, 〈k〉). Moreover, the dividing line \(\rho ( {\mathcal F} ,{\mathscr{N}})=1\) is largely independent of the algorithm, indicating that network structure is the primary factor behind the relative throughput \(\rho ( {\mathcal F} ,{\mathscr{N}})\) in the α-fairness family. The network structure is, however, not the only factor influencing \(\rho ( {\mathcal F} ,{\mathscr{N}})\). To show the effect of algorithms on \(\rho ( {\mathcal F} ,{\mathscr{N}})\), we analyse small values of γ and 〈k〉 in Fig. 2A and Fig. 3B–D. For γ = 2.1, 〈k〉 = 3 and R = 15 000, throughput in max-flow is 24% higher for scale-free than for random regular networks. For proportional fairness and max-min fairness, however, this value increases to 63% and 68%, respectively, disentangling flow in scale-free and random regular networks in this region of parameter space, as can be observed in the highlighted cells of the heatmaps in Fig. 3B–D. Figure 2A shows that this happens because for R = 15 000 proportionally fair and max-min fair throughput saturate in random regular networks, while throughput steadily grows with R in max-flow and in all algorithms in scale-free networks.
Figure 3B–D show the dividing line between \(\rho ( {\mathcal F} ,{\mathscr{N}})\) smaller and larger than one in parameter space. This dividing line is approximately the same in all α-fairness algorithms. Hence, we make use of structural network measures to gain insight into system behaviour on both sides of the line. We consider two main factors that influence network throughput. First, path length affects network throughput because paths transport a constant path flow on each of their edges. Hence, paths consume capacity from each edge they pass through, and the longer they are, the larger the number of edges that have their available capacity reduced. Second, the pattern of path intersections influences throughput because these networks have limited edge capacity (c = 1). Indeed, if a large number of paths pass through a limited set of edges, network throughput is restricted by the pattern of path intersections, because the limited capacity of these edges is shared among this large set of paths. In contrast, path flows and network throughput are larger if routeing is such that paths broadly avoid each other. We use \(\phi ({\mathscr{N}},i)\) as a simple measure to characterise the pattern of path intersections. To uncover the behaviour of path length and path intersections, we select two cells in the heatmap: (γ = 2.5, 〈k〉 = 8) to represent \(\rho ( {\mathcal F} ,{\mathscr{N}}) < 1\) and (γ = 2.1, 〈k〉 = 3) for \(\rho ( {\mathcal F} ,{\mathscr{N}}) > 1\).
To shed light on the mechanisms that explain the surprising \(\rho ( {\mathcal F} ,{\mathscr{N}}) > 1\) region, we show in Fig. 3E and G the histograms of path length, and in Fig. 3F and H the histograms of \(\phi ({\mathscr{N}},i)\), for the two selected representative cells. Why do random regular networks accommodate higher flow than scale-free for \(\rho ( {\mathcal F} ,{\mathscr{N}}) < 1\)? An analysis of the cell (γ = 2.5, 〈k〉 = 8) shows that the probability distribution of path length is similar in scale-free and random regular networks (see Fig. 3E). However, the distribution of the number \(\phi ({\mathscr{N}},i)\) of paths passing through an edge i is heavy-tailed for scale-free, but not for random regular networks: more edges are crossed by many paths in scale-free than in random regular networks (see Fig. 3F). This heavy-tailed distribution of \(\phi ({\mathscr{N}},i)\) is an indicator of edge congestion in scale-free networks. Moreover, in random regular networks, we observe that a small number of paths pass through many edges, but not, as one would expect in congested networks, that a large number of paths pass through a limited number of edges. Hence, the distribution of \(\phi ({\mathscr{N}},i)\) illustrates why congestion tends to be higher in scale-free than in random regular networks for \(\rho ( {\mathcal F} ,{\mathscr{N}}) < 1\). Why do scale-free networks accommodate higher flow than random regular for \(\rho ( {\mathcal F} ,{\mathscr{N}}) > 1\)? An analysis of the cell (γ = 2.1, 〈k〉 = 3) shows two effects. First, paths are significantly longer in random regular (\(\langle l\rangle =8.1\)) than in scale-free networks (\(\langle l\rangle =4.3\)). Longer paths in random regular networks consume more network resources and also intersect with other paths more often than in the region \(\rho ( {\mathcal F} ,{\mathscr{N}}) < 1\), and thus will be more congested than shorter paths.
Second, we only observe higher values of \(\phi ({\mathscr{N}},i)\) in scale-free networks than in random regular networks for a small number of edges. This small number of congested edges is not, however, large enough to invert the ratio \(\rho ( {\mathcal F} ,{\mathscr{N}})\). Taken together, these two effects make it possible for scale-free networks to accommodate larger flow than random regular networks for low values of γ and 〈k〉.
Currently, researchers find the onset of congestion in complex networks from betweenness centrality^{26,27,28,29,30,31,32,33}, a measure that captures the number of paths that cross nodes or edges^{22}. This uniform-flow approach finds the onset of congestion by locating the node or edge with the highest betweenness centrality, which is crossed by the largest number of paths. Figures 1 and 2 illustrate, however, that uniform-flow severely underestimates throughput in the α-fair family of algorithms at the onset of congestion, because it allocates path flows by sharing only the capacity of the most congested node or edge among the paths that pass through it. The uniform-flow algorithm thus allocates path flows globally by maximising locally the path flows that cross the most congested edge. Hence, structural measures, such as betweenness centrality, that determine the uniform-flow allocation analytically might not be good predictors of network throughput in more realistic congestion control algorithms.
Here we are interested in the question of whether the number of paths passing through individual edges can be used to locate bottleneck edges in the α-fair family of algorithms. To investigate this problem, we use \(\phi ({\mathscr{N}},i)\) as a measure of edge load that, similarly to betweenness centrality, captures the interaction between paths on network edges. Edge betweenness centrality counts the number of shortest paths, connecting all possible source-sink pairs, that pass through an edge. In contrast, \(\phi ({\mathscr{N}},i)\) counts the number of paths passing through the edge under the particular routeing in use, which need not be shortest-path routeing. Here, however, edge betweenness correlates with \(\phi ({\mathscr{N}},i)\) because we select shortest paths with uniform probability from all shortest paths.
Figure 4(A–C) show that if the number R of paths is sufficiently large, edges with a high value of \(\phi ({\mathscr{N}},i)\) are used up to capacity (i.e., are bottleneck edges), with negligible standard deviation. To further relate the number of paths that cross each edge with the location of bottleneck edges, we show in Fig. 4D–F the frequency of bottleneck (shaded) and non-bottleneck (clear) edges as a function of \(\phi ({\mathscr{N}},i)\) for proportional fairness (see Fig. S1 of the Supplementary Information for max-min fairness and max-flow). An analysis of the shaded area on the tail of the distributions shows that a large percentage of edges with high \(\phi ({\mathscr{N}},i)\) are bottlenecks. For example, if we consider the 10% of edges with the largest value of \(\phi ({\mathscr{N}},i)\) in SF(2.1, 3, 15 000) (ER(∞, 4, 15 000)), we find that on average [MF = 95.3, PF = 95.3, MMF = 95.1]% ([MF = 99.0, PF = 99.3, MMF = 95.6]%) of these are bottlenecks, representing [12.8, 21.5, 22.8]% ([12.4, 21.5, 27.7]%) of all bottleneck edges (the first value, enclosed in square brackets, corresponds to max-flow, the second to proportional fairness and the third to max-min fairness). Thus we find that, apart from a few exceptions, edges with high \(\phi ({\mathscr{N}},i)\) are bottlenecks. These results are largely independent of the congestion control algorithm (parameter α) and of the network topology (exponent γ and average node degree 〈k〉). Our numerical analysis generalises the reasoning that congestion can be characterised by the structure of paths, and can be interpreted as an extension of the analytical results for uniform-flow^{26,27,28,29,30,31,32,33} to the more realistic α-fair family of congestion control algorithms, combined with routeing that is determined or approximated by shortest paths.
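The bottleneck test underlying this analysis amounts to comparing edge loads with capacities. A minimal sketch of the check (our own illustration, not the paper’s code), for any feasible allocation f produced by one of the algorithms above:

```python
import numpy as np

def bottleneck_edges(H, f, c, tol=1e-9):
    """Indices of bottleneck edges: edges whose load under the path-flow
    allocation f (the vector H @ f) reaches capacity c, up to a small
    numerical tolerance."""
    load = H @ f
    return np.flatnonzero(load >= c - tol)
```

Cross-tabulating this set against the edges with the highest \(\phi ({\mathscr{N}},i)\) yields the bottleneck percentages reported above.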
The relation between the routeing of paths and the location of congested edges is crucial for both network designers and operators. Network designers wish to anticipate the location of bottlenecks during the design stage, so as to avoid weak links in the areas with the highest expected traffic, and to place the sensor and communication network infrastructure so as to minimise expenses with the overlaid control network. Likewise, network operators wish to determine the links that require a capacity upgrade if routeing changes. Hence, predicting the location of bottleneck edges from the routeing of paths may be important in real-world networks that implement congestion control algorithms (e.g., the TCP/IP Internet congestion control protocol implements proportional fairness).
Discussion
We first analysed the trade-off between the efficiency and fairness of the α-fair family of congestion control algorithms in random networks. We found that the proportional and max-min fairness algorithms generate similar throughput when results are averaged over the range of network parameters and benchmarked against the null model of random regular networks. This is significant because, in real-world systems that resemble random networks, a network operator can choose to implement proportional fairness instead of max-min fairness (the fair algorithm) with little sacrifice in fairness and throughput, and with surprisingly simple decentralised algorithms^{5}. We also found that the inequality of path flows in proportional and max-min fairness depends on the structure of the network: path flows are considerably more unequal in scale-free than in random regular networks. Moreover, we showed that max-flow creates high inequality in path flow allocations and uniform-flow generates low throughput, and thus these two algorithms are too elementary to be implemented in real-world networks.
We next characterised the growth in the network throughput and Gini coefficient as a function of the number R of shortest paths for a chosen scale-free network structure (γ = 2.1, 〈k〉 = 3) and the corresponding random regular structure. We found that the price to pay for the increase in throughput as we independently increase R or decrease α is an increase in the inequality of path flow allocations. We found inequality present in all algorithms, but most prevalent in max-flow. Indeed, we showed that max-flow assigns zero path flow to a substantial fraction of paths, thus creating a significant inequality in the allocation of path flows. Our analysis indicates that these results are consistent across a wide range of values of the number R of paths in the network.
Whereas this broad analysis over network parameters or a chosen network structure starts to disentangle the fairness of algorithms as a function of network structure, we next showed that it is not enough to fully describe the network throughput. We compared the network throughput of congestion control algorithms with the null model of random regular networks in the parameter space formed by the node degree distribution exponent γ and the average node degree 〈k〉. For the uniform-flow algorithm, we found that random regular networks, which have a more homogeneous node degree distribution than scale-free networks, systematically transport more flow than scale-free networks at the onset of congestion. Surprisingly, for the α-fair family of algorithms, we found that random regular networks can support less or more flow than scale-free networks, depending on the region of the parameter space. Moreover, we showed that the dividing line between these two regions of parameter space can be justified by structural network measures, but that it is broadly independent of the congestion algorithm. Real-world networks could uncover further insights about the interplay between the α-fair family of algorithms and network topology.
An analysis of the effect of network structure based solely on uniformflow would conclude that random regular networks have higher throughput than scalefree networks for all values of γ and 〈k〉. Our results show that this conclusion is misleading. The uniformflow approach leaves networks severely underutilised in comparison with more elaborate congestion control algorithms. We showed that uniformflow is too crude an algorithm to gain insights into the network throughput of complex networks, and our findings highlight the limitations of the current line of work^{25,26,27,28,29,30,31,32,33} on complex networks. Congestion control protocols such as maxmin fairness or proportional fairness avert congestion by allocating path flows that are determined as the outcome of an optimisation procedure. Although the result is a higher level of inequality than in uniformflow, these protocols significantly increase the network throughput and thus are superior to uniformflow. The price to pay for elaborate congestion control algorithms is that the rate λ of packet production becomes source node dependent, and the critical rate is no longer found analytically. It would be hard to argue, however, that these are important factors in the modelling of realworld congested networks. Previous work on congested complex networks with uniformflow^{25,26,27,28,29,30,31,32,33} identifies congestion with the appearance of the first bottleneck. Inspired by this idea, we investigated whether the number of paths passing through individual edges can locate bottleneck edges in the more realistic αfair family of algorithms. We found that congestion on complex networks can be found not only on the edge with the largest number of paths, but on a larger set of edges. Such edges are crossed by a high number of paths, and thus have high edge betweenness, if the routeing also follows the shortest paths.
In summary, we combined two very well established and related, but so far separate, research areas: congestion control and complex networks. We explained the main milestones in the more than 30-year-old line of work on congestion control, and we compared the results from this body of literature with the congestion control algorithms studied in the complex networks community in the last 15 years, which identify the onset of congestion by considering homogeneous (uniform) path flows. On the one hand, our results show the severe limitations of the uniformflow approach, which is the conventional algorithm to study congestion in the complex networks literature. On the other hand, we illustrated that structural characteristics typically favoured in complex networks can characterise congested edges for the αfair family of control algorithms, an approximation that has not received enough attention in the field of congestion control. We believe that our paper has the potential to open the work in congestion control to complex networks scientists and, vice versa, that it will reveal the rich field of network science to researchers working on congestion control.
Methods
The mathematics of congestion control
Let \({\mathscr{G}}=({\mathscr{V}},\varepsilon )\) be an undirected and connected graph, with node set \({\mathscr{V}}\) and edge set \(\varepsilon \), such that edge \(i\in \varepsilon \) has capacity c _{ i }. The network has N nodes and E edges, and a set of R source and sink pairs (s _{ j }, t _{ j }) with \({s}_{j},{t}_{j}\in {\mathscr{V}}\) for j = 1, …, R. Each source and sink pair (s _{ j }, t _{ j }) is connected by a path r _{ j }, such that \( {\mathcal R} ={\cup }_{j=1}^{R}\{{r}_{j}\}\) is the set of all source to sink paths on the network. The relationship between edges and paths is given by the edgepath incidence matrix H, such that H _{ ij } = 1 if edge i belongs to path r _{ j }, and H _{ ij } = 0 otherwise. Matrix H has dimensions E × R, and maps paths to the edges contained in these paths. All edges of a path r _{ j } transport the same path flow f _{ j }. The flow F _{ i } on edge i is then the sum of path flows over all paths that cross the edge: \({F}_{i}={\sum }_{j=1}^{R}{H}_{ij}{f}_{j}.\)
A vector f of path flows is feasible if \(H\,f\le c\) and \({f}_{j}\ge 0\) for \(j=1,\ldots ,R\), where c is the vector of edge capacities. An edge is a bottleneck if the flow passing through it is equal to the edge capacity. We define the network congestion control problem^{5}: maximise \({\sum }_{j=1}^{R}{U}_{j}({f}_{j},\alpha )\) over \(f\ge 0\), subject to \(Hf\le c\),
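As a concrete illustration of these definitions, here is a minimal sketch on a hypothetical toy network with E = 3 edges and R = 2 paths (the network and capacities are invented for illustration):

```python
# Sketch: the edge-path incidence matrix H maps each of R paths to the E
# edges it crosses; edge flows are F = Hf, and a path-flow vector f is
# feasible if Hf <= c and f >= 0.

def edge_flows(H, f):
    """Edge flows F_i = sum_j H_ij f_j."""
    return [sum(row[j] * f[j] for j in range(len(f))) for row in H]

def is_feasible(H, f, c):
    """True if every edge flow is within capacity and all path flows are non-negative."""
    return all(F_i <= c_i for F_i, c_i in zip(edge_flows(H, f), c)) and min(f) >= 0

# Toy example: path 0 crosses edges 0 and 1; path 1 crosses edges 1 and 2.
H = [[1, 0],
     [1, 1],
     [0, 1]]
c = [1.0, 1.0, 1.0]
print(is_feasible(H, [0.5, 0.5], c))  # True: edge 1 is exactly at capacity (a bottleneck)
print(is_feasible(H, [0.8, 0.5], c))  # False: edge 1 would carry 1.3 > c_1
```

The shared edge 1 is the bottleneck in this toy network, a pattern reused in the allocation examples below.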
where α ≥ 0 is a parameter and \({U}_{j}({f}_{j},\alpha )\) is defined by Eq. (1).
In maxflow (\(\alpha =0\)), to increase a path flow by ε, we have to decrease a set of other path flows such that the sum of the decreases is greater than or equal to ε. In contrast, in maxmin fairness (\(\alpha \to \infty \)), to increase a path flow by ε, we have to decrease by at least ε a set of other path flows that are less than or equal to the former. Finally, to increase a path flow by a percentage ε in proportional fairness (α = 1), we have to decrease a set of other path flows such that the sum of the percentage decreases is greater than or equal to ε^{5}.
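These three regimes can be made concrete numerically. A minimal sketch, assuming Eq. (1) takes the standard αfair utility form of ref. 34, \(U(f,\alpha )={f}^{1-\alpha }/(1-\alpha )\) for α ≠ 1 and \(U(f,1)=\,\mathrm{log}\,f\):

```python
import math

def alpha_fair_utility(f, alpha):
    """alpha-fair utility of a single path flow f (assumed Mo-Walrand form
    of Eq. (1)): f**(1 - alpha) / (1 - alpha) for alpha != 1, and log f for
    alpha = 1 (proportional fairness)."""
    if alpha == 1:
        return math.log(f)
    return f ** (1 - alpha) / (1 - alpha)

# alpha = 0 reduces to U = f, so maximising the aggregate utility maximises
# total flow (maxflow); large alpha penalises small flows ever more strongly,
# approaching maxmin fairness in the limit.
print(alpha_fair_utility(2.0, 0))   # 2.0
print(alpha_fair_utility(4.0, 2))   # -0.25, i.e. -1/f
```

The monotone, concave shape of each member of the family is what makes the optimisation problem below well behaved.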
Maxmin fairness
Formally, a vector f of path flows is maxmin fair if it is feasible and if, for any other feasible vector f′ of path flows, the existence of a path \({r}_{j}\in {\mathcal R} \) with \({f'}_{j} > {f}_{j}\) implies the existence of another path \({r}_{l}\in {\mathcal R} \) with \({f'}_{l} < {f}_{l}\) and \({f}_{l}\le {f}_{j}\)^{17}. The maxmin fairness allocation is the solution of problem (7) for \(\alpha \to \infty \). The allocation is typically found, however, with an iterative algorithm^{17} that locates the bottleneck edges. The algorithm first increases all path flows uniformly from zero until it maximises the smallest path flows, that is, until it finds the first bottleneck edges. The path flows on paths that pass through these bottlenecks cannot be increased because the edges are used to their full capacity, and hence the algorithm fixes these path flows and updates the residual capacity still available to other paths. Next, the process is repeated for the paths that do not yet have a fixed path flow. To describe the algorithm formally, we define \({ {\mathcal R} }^{(m)}\) to be the set of paths on the network at iteration m, and \({ {\mathcal R} }_{i}^{(m)}\) to be the subset of paths in \({ {\mathcal R} }^{(m)}\) that cross edge \(i\). Before we start the algorithm, we assign \({ {\mathcal R} }^{(1)}= {\mathcal R} \) and \({c}_{i}^{(1)}={c}_{i}\) for all edges, and a path flow \({f}_{j}^{(0)}=0\) to each path \({r}_{j}\in { {\mathcal R} }^{(1)}\). Next, we initialise the iteration counter m = 1. In the first step of the MMF algorithm, for each edge \(i\) with nonzero capacity that belongs to at least one path, we define the edge capacity divided equally among all paths that pass through the edge at iteration m of the algorithm as:
\({s}_{i}^{(m)}={c}_{i}^{(m)}/|{ {\mathcal R} }_{i}^{(m)}|\) for all \({c}_{i}^{(m)}\ne 0\). We then find the minimum of \({s}_{i}^{(m)}\), given by \({\rm{\Delta }}{f}^{(m)}={\min }_{i}\,{s}_{i}^{(m)}\).
In the second step of the MMF algorithm, we increase all path flows of paths in \({ {\mathcal R} }^{(m)}\) by \({\rm{\Delta }}{f}^{(m)}\), such that \({f}_{j}^{(m)}={f}_{j}^{(m-1)}+{\rm{\Delta }}{f}^{(m)}\) for all \({r}_{j}\in { {\mathcal R} }^{(m)}\).
The effect is to saturate the set of bottleneck edges \({\varepsilon }_{B}^{(m)}=\{i\in \varepsilon :{\sum }_{j=1}^{R}{H}_{ij}{\rm{\Delta }}{f}^{(m)}={c}_{i}^{(m)}\}\), and consequently also to saturate the set of paths that contain at least one bottleneck edge. Next, we create a residual network by subtracting the capacity used by the path flows, \({c}_{i}^{(m+1)}={c}_{i}^{(m)}-{\sum }_{{r}_{j}\in { {\mathcal R} }^{(m)}}{H}_{ij}{\rm{\Delta }}{f}^{(m)}.\)
Note that all bottleneck edges will be saturated, that is, each will have \({c}_{i}^{(m+1)}=0\) after this step. We also say that all paths that contain at least one bottleneck edge are saturated paths, to mean that their path flow will not be increased in subsequent iterations of the MMF algorithm. We say that \({ {\mathcal R} }^{(m+1)}\) is the set of augmenting paths because the path flows of paths in \({ {\mathcal R} }^{(m+1)}\) can still be increased in subsequent iterations of the algorithm, and update it following: \({ {\mathcal R} }^{(m+1)}=\{{r}_{j}\in { {\mathcal R} }^{(m)}:{H}_{ij}=0\,{\rm{for}}\,{\rm{all}}\,i\in {\varepsilon }_{B}^{(m)}\}.\)
Finally, if \({ {\mathcal R} }^{(m+1)}\) is not empty, we increase the iteration counter \(m\leftarrow m+1\) and go back to the first step; otherwise we stop.
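The two steps above can be sketched in code (a toy-scale progressive-filling implementation; the incidence matrix H and capacities c are hypothetical inputs):

```python
def maxmin_fair(H, c, tol=1e-12):
    """Progressive-filling sketch of the MMF algorithm described above.
    H: E x R 0/1 edge-path incidence matrix; c: edge capacities."""
    E, R = len(H), len(H[0])
    f = [0.0] * R                      # f_j^(0) = 0
    cap = list(c)                      # residual capacities c_i^(m)
    active = set(range(R))             # augmenting paths R^(m)
    while active:
        # Step 1: equal share of residual capacity on every used edge.
        shares = [cap[i] / sum(H[i][j] for j in active)
                  for i in range(E) if any(H[i][j] for j in active)]
        if not shares:
            break
        df = min(shares)               # smallest share = Delta f^(m)
        # Step 2: raise all augmenting path flows and form the residual network.
        for j in active:
            f[j] += df
        for i in range(E):
            cap[i] -= df * sum(H[i][j] for j in active)
        # Freeze every path that crosses a freshly saturated (bottleneck) edge.
        bottlenecks = [i for i in range(E)
                       if cap[i] < tol and any(H[i][j] for j in active)]
        active = {j for j in active if not any(H[i][j] for i in bottlenecks)}
    return f

H = [[1, 0], [1, 1], [0, 1]]            # path 0: edges 0,1; path 1: edges 1,2
print(maxmin_fair(H, [1.0, 1.0, 1.0]))  # [0.5, 0.5]: the shared edge is the bottleneck
```

Each iteration saturates at least one edge and removes its paths from the augmenting set, so the loop terminates after at most E iterations.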
Proportional fairness
A vector of path flows \({f}^{\ast }=({f}_{1}^{\ast },\ldots ,{f}_{R}^{\ast })\) is proportionally fair if it is feasible and if, for any other feasible vector of path flows f, the sum of proportional changes in the path flows is nonpositive^{37, 46}: \({\sum }_{j=1}^{R}({f}_{j}-{f}_{j}^{\ast })/{f}_{j}^{\ast }\le 0.\)
The proportionally fair allocation is found from problem (7) with the utility function in Eq. (1) for α = 1, and we refer to this problem as the primal^{5}. The optimisation problem is convex because the aggregate utility U(f) is concave and the inequality constraints are convex. Thus, any locally optimal point is also a global optimum, and we can use results from the theory of convex optimisation to find the proportionally fair flow allocation (see refs 47 and 48 for a brief introduction to Lagrange multipliers, and ref. 36 on convex optimisation). The Lagrangian is given by^{37, 46} \(L(f,\mu )={\sum }_{j=1}^{R}\,\mathrm{log}\,{f}_{j}+{\mu }^{T}(c-Hf),\)
where \(\mu =({\mu }_{1},\ldots ,{\mu }_{E})\) is a vector of Lagrange multipliers. The Lagrange dual function^{36} is then given by \({{\rm{\sup }}}_{f}L(f,\mu )\), which is easily determined analytically from \(\partial L({f}^{\ast },{\mu }^{\ast })/\partial f=0\) as \({f}_{j}^{\ast }=1/{\sum }_{i=1}^{E}{H}_{ij}{\mu }_{i}^{\ast },\)
and thus \({{\rm{\sup }}}_{f}\,L(f,\mu )={\mu }^{T}c-{\sum }_{j=1}^{R}\,\mathrm{log}\,({\sum }_{i=1}^{E}{H}_{ij}{\mu }_{i})-R.\)
After removing the constant term in equation (16) and converting to a maximisation problem, we obtain the dual problem^{37, 46} \(\mathop{{\rm{\max }}}\limits_{\mu \ge 0}\,{\sum }_{j=1}^{R}\,\mathrm{log}\,({\sum }_{i=1}^{E}{H}_{ij}{\mu }_{i})-{\mu }^{T}c,\)
where \(\mu =({\mu }_{1},\ldots ,{\mu }_{E})\) is a vector of dual variables. The primal problem is convex and the inequality constraints are affine. Hence, Slater's condition is verified and thus strong duality holds. This means that the duality gap, i.e., the difference between the optimum of the primal problem (7) and the optimum of the dual problem (17), is zero^{36}. The primal objective function depends on R variables (the path flows) and is constrained by an affine system of inequalities, whereas the dual objective function depends on E variables (one per edge) and is constrained only by the condition that the dual variables are nonnegative. Thus, the dual problem (17) is more efficient to solve than the primal when the number of paths exceeds the number of network edges. The optimal path flows can then be recovered from the optimal Lagrange multipliers with Eq. (15).
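As an illustration of this primal-dual relationship, the dual can be solved numerically by projected gradient ascent on μ. This is a sketch with a hypothetical step size and iteration count, on the toy three-edge network used earlier; the primal flows are recovered as \({f}_{j}^{\ast }=1/{\sum }_{i}{H}_{ij}{\mu }_{i}^{\ast }\):

```python
def proportional_fair(H, c, iters=20000, step=0.01, floor=1e-9):
    """Sketch: projected gradient ascent on the PF dual
    max_{mu >= 0} sum_j log((H^T mu)_j) - c^T mu,
    then recovery of the primal path flows f_j* = 1 / (H^T mu*)_j."""
    E, R = len(H), len(H[0])
    mu = [1.0] * E
    for _ in range(iters):
        q = [sum(H[i][j] * mu[i] for i in range(E)) for j in range(R)]   # (H^T mu)_j
        # dg/dmu_i = sum_j H_ij / (H^T mu)_j - c_i = (edge flow at f*) - c_i
        grad = [sum(H[i][j] / q[j] for j in range(R)) - c[i] for i in range(E)]
        mu = [max(m + step * g, floor) for m, g in zip(mu, grad)]        # project mu >= 0
    q = [sum(H[i][j] * mu[i] for i in range(E)) for j in range(R)]
    return [1.0 / qj for qj in q]

H = [[1, 0], [1, 1], [0, 1]]
f = proportional_fair(H, [1.0, 1.0, 1.0])
print([round(x, 3) for x in f])  # [0.5, 0.5]: only the shared edge is binding
```

At the optimum, the multipliers of the slack edges are driven to (nearly) zero, while the multiplier of the shared bottleneck edge settles where the edge flow meets its capacity, consistent with complementary slackness.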
The decentralised implementation of proportional fairness relies on a feedback mechanism on path flows^{5}: multiplicatively decrease the path flows of paths that pass through bottlenecks, and additively increase all other path flows. The combination of the fast correction (multiplicative decrease) and slow ramp-up (additive increase) is the mechanism behind the TCP Internet congestion control protocol. Crucially, this mechanism requires each bottleneck to send a feedback signal to the sender of each path, with the information that the path flow should be additively increased or multiplicatively decreased. Knowledge of where to place sensors and where to connect to the communication network that sends the feedback signals is thus important for the network designer and operator.
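A minimal simulation of this additive-increase/multiplicative-decrease feedback, with hypothetical step sizes and two paths sharing a single bottleneck of capacity c:

```python
def simulate_aimd(steps=20000, c=1.0, inc=0.001, dec=0.5, f1=0.1, f2=0.8):
    """AIMD sketch: when the shared edge exceeds capacity, the bottleneck
    signals both senders to halve their path flows (multiplicative decrease);
    otherwise each sender ramps up additively. Additive increase preserves
    the difference f1 - f2, while each halving halves it, so the two path
    flows converge towards the fair share c/2."""
    for _ in range(steps):
        if f1 + f2 > c:                  # feedback signal from the bottleneck
            f1, f2 = f1 * dec, f2 * dec  # fast correction
        else:
            f1, f2 = f1 + inc, f2 + inc  # slow ramp-up
    return f1, f2

f1, f2 = simulate_aimd()
print(round(abs(f1 - f2), 6))  # 0.0: the allocation has become fair
```

This convergence-to-fairness argument is the classical Chiu-Jain analysis (ref. 39), here reduced to its simplest two-sender form.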
Uniformflow
The uniformflow allocation can be found for any \(\alpha \ge 0\) from problem (7) and Eq. (1), with an additional set of constraints ensuring that path flows are uniform: \({f}_{1}={f}_{2}=\cdots ={f}_{R}.\)
The optimal uniformflow allocation is αinvariant, because \({U}_{j}({f}_{j},\alpha )\) is a monotonically increasing function of f _{ j } for any \(\alpha \ge 0\). Algorithmically, the uniformflow allocation can also be found as the solution to the first iteration (m = 1) of the maxmin fairness algorithm, since the algorithm maximises the minimum path flow allocation, and all path flows are the same at the end of the first iteration.
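Equivalently, the uniformflow value is set by the first edge to saturate when all R path flows rise together, which reduces to one line (a sketch on toy inputs):

```python
def uniform_flow(H, c):
    """Uniform-flow sketch: every path carries the same flow f, maximised
    until the first edge saturates, i.e. f = min_i c_i / n_i, where n_i is
    the number of paths crossing edge i (iteration m = 1 of the MMF algorithm)."""
    shares = [c_i / sum(row) for row, c_i in zip(H, c) if sum(row) > 0]
    return min(shares)

H = [[1, 0], [1, 1], [0, 1]]
print(uniform_flow(H, [1.0, 1.0, 1.0]))  # 0.5, set by the shared edge
```

The network throughput under uniformflow is then simply R times this value, which is why a single early bottleneck caps the whole allocation.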
The onset of congestion in complex networks is often determined by the uniformflow allocation^{25,26,27,28,29,30,31,32,33}. At each time step, source node n generates a packet with probability λ and sends it towards the sink node along a shortest path. The expected number of packets in the network at each time step is \(\lambda ND\), where D is the average shortest path length. Moreover, the probability that a packet will pass through a node \({n}_{max}\) with the largest betweenness is \({B}_{{n}_{max}}/{\sum }_{n=1}^{N}{B}_{n}\) (here, the betweenness centrality \({B}_{n}\) of a node \(n\) equals the number of shortest paths between all pairs of nodes in the network going through node \(n\) [22, page 28]). The average number of packets that node \({n}_{max}\) receives at each time step is thus \({Q}_{in}=\lambda ND{B}_{{n}_{max}}/(N(N-1)D)=\lambda {B}_{{n}_{max}}/(N-1)\), where we used the simplification that the sum of the betweenness values of all nodes is the number of pairs of nodes on the network multiplied by the average path length, \({\sum }_{n=1}^{N}{B}_{n}=N(N-1)D\). At each time step, the node with the highest betweenness can deliver \({Q}_{out}={c}_{{n}_{max}}\) packets, and hence the onset of congestion is given by \({Q}_{out}={Q}_{in}\), that is \({\lambda }_{c}=(N-1)\,{c}_{{n}_{max}}/{B}_{{n}_{max}}.\)
This deduction considers a network of capacitated nodes, but we can have capacity constraints on the links instead, and packets may queue at the nodes for service. Congestion control algorithms are similar for node and link capacity, and here we analyse random networks with link capacity, because this is the standard in the modelling of communication networks^{5}.
The reasoning leading to Eq. (19) assumes that the number \({\lambda }_{c}/(N-1)\) of packets injected into the network per path at each time step is the same for all paths, and that it is determined by the ratio between the node capacity \({c}_{{n}_{max}}\) and the number of paths \({B}_{{n}_{max}}/(N-1)\) passing through node \({n}_{max}\). The obvious drawback of this approach is that the estimate of \({\lambda }_{c}\) in Eq. (19) considers only the first bottleneck to appear in the network, and thus underestimates the load typically present in congested networks.
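For small graphs, Eq. (19) can be checked by brute force. A sketch follows; the betweenness convention used here (ordered pairs, endpoints excluded) is our assumption about the convention of ref. 22, and the star graph is an invented toy example:

```python
from collections import deque

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest s-t path by breadth-first search (tiny graphs only)."""
    best, found = None, []
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        if path[-1] == t:
            best = len(path)
            found.append(path)
            continue
        for v in adj[path[-1]]:
            if v not in path:
                queue.append(path + [v])
    return [p for p in found if len(p) == best]

def betweenness(adj):
    """B_n: number of shortest paths between ordered node pairs that pass
    through n as an intermediate node (endpoints excluded -- an assumption)."""
    B = {n: 0 for n in adj}
    for s in adj:
        for t in adj:
            if s != t:
                for path in all_shortest_paths(adj, s, t):
                    for n in path[1:-1]:
                        B[n] += 1
    return B

# Star of N = 5 nodes: every leaf-to-leaf shortest path crosses the hub, so
# B_hub = 4 * 3 = 12 and Eq. (19) gives lambda_c = c (N - 1) / B_hub = 1/3.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
B = betweenness(star)
N, c_max = 5, 1.0
lambda_c = c_max * (N - 1) / max(B.values())
print(B[0], round(lambda_c, 4))  # 12 0.3333
```

On the star, the hub is the first and only bottleneck, so the uniformflow estimate happens to be exact; on heterogeneous networks with many near-critical edges it underestimates the sustainable load, as argued above.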
Avoiding congestion collapse on the ring lattice
Consider a ring lattice of N nodes, each connected to its nearest neighbours by an edge with finite capacity c, as illustrated in Fig. 5. We relax the constraint that all edges of path \({r}_{j}\) transport the same path flow \({f}_{j}\), allow instead queues to build up at the nodes, and thus edge flows to differ on edges along a path. User j injects a flow \({f}^{(0)}\) into the network at node \(j\in \{1,\ldots ,N\}\) on a short path j; the flow \({f}_{j}^{(1)}\) passes from node j to node \((j+1)\,({\rm{mod}}\,N)\); the flow \({f}_{j}^{(2)}\) passes from node \((j+1)\,({\rm{mod}}\,N)\) to node \((j+2)\,({\rm{mod}}\,N)\) and exits the network. Consider also that node \(1\) is a source and sink of a long path, user N + 1, that passes through all nodes, with flow \({f}_{N+1}^{(j)}\) on the edge linking node j to node \((j+1)\,({\rm{mod}}\,N)\). The subscript j identifies the short path, as well as its first node. The superscripts (1) and (2) index the edges the short paths pass through, and the superscript \((j)\) indexes the edges the long path crosses.
To illustrate the mechanism of congestion collapse, we assume a simple congestion control scheme that distributes the total flow \({F}_{j}={f}^{(0)}+{f}_{(N+j-2)({\rm{mod}}\,N)+1}^{(1)}+{f}_{N+1}^{(j-1)}\) at node j proportionally to the paths that pass through the node:
The network is not congested for \({f}^{(0)} < (c-{f}_{N+1}^{(0)})/2\), and in this case the throughput is \(N{f}^{(0)}+{f}_{N+1}^{(0)}\). If the network is congested, flows decrease along a path, that is \({f}^{(0)} > {f}_{j}^{(1)} > {f}_{j}^{(2)}\), and queues build up at each node. The proportional allocation of edge capacities may motivate individual users to increase f^{(0)} in order to receive a larger share of network capacity. However, as the flow f^{(0)} injected at each short path grows, the length of the queues at the nodes also grows and the network throughput decreases. This congestion collapse is a consequence of the collapse of throughput for both short and long paths. Indeed, in the limit \(\{{f}^{(0)},{f}_{N+1}^{(0)}\}\to \infty \), the system of Eqs (20–22) yields \({f}_{1}^{(1)}=c/2\), \({f}_{j}^{(1)}=c\) for \(j\ge 2\) and \({f}_{j}^{(2)}=0\) for all short paths; and \({f}_{N+1}^{(1)}=c/2\), and \({f}_{N+1}^{(j)}=0\) for \(j\ge 2\) on the long path. Hence, \({\sum }_{j=1}^{N}{f}_{j}^{(2)}+{f}_{N+1}^{(N)}\to 0\).
Let us now assume the more restrictive condition that \(N\ge 2\) paths carry a path flow, and consider the effect of controlling congestion. Because the path flow is constant on all edges along a path, we have a path flow f for every short path, and a path flow \({f}_{N+1}\) for the long path, such that \({f}_{N+1}+2f=c\) at every edge, and thus \(f=(c-{f}_{N+1})/2.\)
One maxflow allocation is \(f=c/2\) and \({f}_{N+1}=0\), leaving user N + 1 with no access to the network, with a network throughput of Nc/2. The maxmin fair solution is \(f={f}_{N+1}=c/3\), with a network throughput of \((N+1)c/3\). The proportionally fair solution is found by maximising \(U=\,\mathrm{log}\,{f}_{N+1}+{\sum }_{j=1}^{N}(\mathrm{log}\,(c-{f}_{N+1})-\,\mathrm{log}\,2)\) over \({f}_{N+1}\), yielding the path flow on the long path \({f}_{N+1}=c/(N+1).\)
Combining Eqs (23) and (24) yields the proportionally fair path flow on the short paths \(f=Nc/(2(N+1)).\)
Hence, the proportionally fair network throughput is \(({N}^{2}+2)c/(2(N+1))\). As the size of the network increases, the proportionally fair allocation approaches the maxflow solution, leaving the long path with an ever smaller flow. In contrast, the maxmin fair protocol assigns the same allocation to all paths independently of network size, at the expense of a lower throughput than proportional fairness and maxflow. Thus, this example shows that proportional fairness and maxflow generate the same throughput on an infinitely large ring lattice, and that this is higher than the throughput provided by maxmin fairness.
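The closed-form allocations derived above are easy to verify numerically (a sketch; N and c are arbitrary):

```python
def ring_throughputs(N, c=1.0):
    """Throughputs on the N-node ring with N short two-edge paths and one
    long path, under the three allocations derived above."""
    maxflow = N * c / 2                    # f = c/2 on short paths, f_{N+1} = 0
    maxmin = (N + 1) * c / 3               # f = f_{N+1} = c/3
    f_long = c / (N + 1)                   # proportional fairness, Eq. (24)
    f_short = (c - f_long) / 2             # Eq. (23)
    pf = N * f_short + f_long              # equals (N^2 + 2) c / (2 (N + 1))
    return maxflow, maxmin, pf

mf, mm, pf = ring_throughputs(10)
print(mm < pf < mf)  # True: maxmin < proportional fairness < maxflow
print(round(pf, 4))  # 4.6364, i.e. (100 + 2) / 22
```

Increasing N in this sketch shows pf/mf approaching 1, the infinite-size limit stated above.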
Network Models
We are interested in global congestion patterns, and thus require connected networks. We generate undirected, unweighted (i.e., unit capacity) and connected scalefree (SF) networks following the static model^{49}, and call these the substrate networks. We start with N = 2000 disconnected nodes and assign a weight \({w}_{i}={i}^{-\beta }\) to each node i (\(i=1,\ldots ,N\)), where \(\beta \in [0,1)\). We randomly select two nodes i and j with probability proportional to \({w}_{i}\) and \({w}_{j}\), respectively, and connect them if they are not yet connected, avoiding selfloops and multiedges. We repeat this procedure until the average node degree of the largest connected component is 〈k〉, and keep only the largest connected component. The degree distribution follows a powerlaw with exponent γ = (1 + β)/β. We generate scalefree networks with average degree 〈k〉 ∈ {3, 4, 5, 6, 7, 8} and γ ∈ {2.1, 2.3, 2.5, 2.7, 2.9, 3.1}. We treat Erdős–Rényi networks as a special case of scalefree networks for β = 0 (γ = ∞). This procedure generates networks with different numbers of nodes and edges, depending on γ and 〈k〉. To overcome this limitation, we compare each network generated by the static model with a connected random regular (RR) graph with the same average degree as the scalefree (SF) or Erdős–Rényi (ER) network. This RR network has the same number of nodes and edges as the corresponding SF network generated with the static model, and thus can be seen as a rewired graph in which each node has a fixed degree. We then use the RR network as a null model, against which we analyse the features of the corresponding SF or ER network.
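A simplified sketch of the static-model construction (it draws a fixed number of edges and omits the connectivity and average-degree bookkeeping described above):

```python
import random

def static_model_edges(N, M, beta, seed=0):
    """Static-model sketch (ref. 49): node i = 1..N has weight i**(-beta);
    endpoints are drawn with probability proportional to the weights, and
    self-loops and multi-edges are rejected until M distinct edges exist.
    The degree exponent is gamma = (1 + beta) / beta; beta = 0 recovers an
    Erdos-Renyi-like graph."""
    rng = random.Random(seed)
    weights = [i ** (-beta) for i in range(1, N + 1)]
    edges = set()
    while len(edges) < M:
        u, v = rng.choices(range(N), weights=weights, k=2)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return edges

edges = static_model_edges(N=200, M=400, beta=1 / (2.5 - 1))  # gamma = 2.5
print(len(edges))  # 400
```

Low-index nodes carry most of the weight and therefore accumulate high degrees, which is what produces the powerlaw tail.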
In realworld flow networks, most of the flow between a pair of source and sink nodes is carried over only one route^{50}, typically the shortest path, because it minimises the cost of transport^{51, 52}. Researchers have explored a variety of alternatives to shortest path routeing^{30, 51, 53}, yet no clear alternative has emerged from this effort, and these algorithms have often been designed for specific models and scalefree networks. An alternative way to determine the routeing would be to find all elementary paths (paths that do not traverse any node more than once) between source and sink node pairs, but this is only practical for small networks because the number of paths grows exponentially with network size^{54}. Hence, we analyse routeing between a source and sink pair along shortest paths only. We choose R shortest paths with uniform probability from the set of all shortest paths, and extract the transport overlay network^{55} composed of the edges that are crossed by at least one path on the substrate network. This transport overlay network is the union of all R shortest paths, and is the subgraph of the substrate network that carries flow.
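The overlay extraction can be sketched as follows (BFS returns a single shortest path per pair and assumes source and sink are connected; sampling uniformly among the R shortest paths is omitted for brevity):

```python
from collections import deque

def shortest_path(adj, s, t):
    """One shortest s-t path in an unweighted graph, by breadth-first search
    (assumes t is reachable from s)."""
    prev = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path = []
    while t is not None:
        path.append(t)
        t = prev[t]
    return path[::-1]

def overlay_edges(adj, pairs):
    """Transport overlay sketch: the union of the edges crossed by the
    chosen source-sink paths on the substrate network."""
    edges = set()
    for s, t in pairs:
        p = shortest_path(adj, s, t)
        edges.update((min(a, b), max(a, b)) for a, b in zip(p, p[1:]))
    return edges

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(sorted(overlay_edges(star, [(1, 2), (1, 3)])))  # [(0, 1), (0, 2), (0, 3)]
```

The overlay is the subgraph on which the congestion control problem of the Methods is then posed, so edges outside it carry no flow by construction.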
Gini coefficient
To characterise inequalities in the flow allocations, we analyse the Gini coefficient of path flows. The Gini coefficient is defined as^{56} \(G=\frac{1}{2\mu }\int \int |u-v|\,g(u)g(v)\,du\,dv,\)
where u and v are independent identically distributed random variables with probability density g and mean μ. In other words, the Gini coefficient is one half of the mean difference in units of the mean. The difference between the two variables receives a small weight in the tail of the distribution, where \(g(u)g(v)\) is small, but a relatively large weight near the mode. Hence, G is more sensitive to changes near the mode than to changes in the tails. For a random sample (x _{l}, \(l=1,2,\ldots ,n\)) with sample mean μ, the empirical Gini coefficient \(\widehat{G}\) may be estimated by a sample mean \(\widehat{G}=\frac{1}{2{n}^{2}\mu }{\sum }_{l=1}^{n}{\sum }_{m=1}^{n}|{x}_{l}-{x}_{m}|.\)
The Gini coefficient is used as a measure of inequality because a sample where the only nonzero value is x has \(\mu =x/n\) and hence \(\widehat{G}=(n-1)/n\to 1\) as \(n\to \infty \), whereas \(\widehat{G}=0\) if all data points have the same value.
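The estimator above can be written directly (a sketch, checking both limiting cases just mentioned):

```python
def gini(x):
    """Empirical Gini coefficient: half the mean absolute difference over
    all ordered sample pairs, in units of the sample mean."""
    n = len(x)
    mu = sum(x) / n
    mean_abs_diff = sum(abs(a - b) for a in x for b in x) / (n * n)
    return mean_abs_diff / (2 * mu)

print(gini([1.0, 1.0, 1.0, 1.0]))  # 0.0: perfect equality
print(gini([0.0, 0.0, 0.0, 1.0]))  # 0.75, i.e. (n - 1)/n for n = 4
```

The double loop is quadratic in the sample size; for the path-flow samples considered here that is unproblematic, and a sort-based linear-time variant exists if needed.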
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
 1.
Ottino, J. M. Engineering complex systems. Nature 427, 399–399 (2004).
 2.
Scala, A. et al. Power Grids, Smart Grids and Complex Networks, 97–110. NATO Science for Peace and Security Series C: Environmental Security (Springer Netherlands, 2014).
 3.
RosasCasals, M. & Solé, R. Analysis of major failures in Europe's power grid. International Journal of Electrical Power & Energy Systems 33, 805–808 (2011).
 4.
Carvalho, R., Buzna, L., Gibbens, R. & Kelly, F. Critical behaviour in charging of electric vehicles. New J. Phys. 17, 095001 (2015).
 5.
Kelly, F. & Yudovina, E. Stochastic Networks (Cambridge University Press, 2014).
 6.
Bidgoli, H. The Internet Encyclopedia, vol. II (John Wiley & Sons Inc, 2004).
 7.
Jacobson, V. Congestion avoidance and control. In Proceedings of SIGCOMM 88, 314–329 (ACM, 1988).
 8.
Carvalho, R. et al. Resilience of Natural Gas Networks during Conflicts, Crises and Disruptions. PLoS One 9, e90265 (2014).
 9.
Carvalho, R. et al. Robustness of trans-European gas networks. Phys. Rev. E 80, 016106 (2009).
 10.
Giridhar, A. & Kumar, P. R. Scheduling Automated Traffic on a Network of Roads. IEEE Transactions on Vehicular Technology 55, 1467–1474 (2006).
 11.
Tachet, R. et al. Revisiting Street Intersections Using SlotBased Systems. PLOS ONE 11, 1–9 (2016).
 12.
Liu, Y.Y., Slotine, J.J. & Barabási, A.L. Controllability of complex networks. Nature 473, 167–173 (2011).
 13.
Devine, M. T., Gleeson, J. P., Kinsella, J. & Ramsey, D. M. A Rolling Optimisation Model of the UK Natural Gas Market. Networks and Spatial Economics 14, 209–244 (2014).
 14.
Nepusz, T. & Vicsek, T. Controlling edge dynamics in complex networks. Nature Physics 8, 568–573 (2012).
 15.
Szolnoki, A., Perc, M. & Szabo, G. Accuracy in strategy imitations promotes the evolution of fairness in the spatial ultimatum game. Epl 100, 28005 (2012).
 16.
Bertsekas, D. P. & Gallager, R. Data Networks (Prentice Hall, 1992).
 17.
Pioro, M. & Medhi, D. Routing, Flow, and Capacity Design in Communication and Computer Networks (Morgan Kaufmann, 2004).
 18.
Srikant, R. The Mathematics of Internet Congestion Control (Birkhäuser, Boston, 2004).
 19.
Albert, R. & Barabasi, A. L. Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97 (2002).
 20.
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D. U. Complex networks: Structure and dynamics. Phys. Rep.-Rev. Sec. Phys. Lett. 424, 175–308 (2006).
 21.
Caldarelli, G. ScaleFree Networks: Complex Webs in Nature and Technology (Oxford University Press, New York, 2007).
 22.
Cohen, R. & Havlin, S. Complex Networks: Structure, Robustness and Function (Cambridge University Press, New York, 2010).
 23.
Newman, M. Networks: An Introduction (Oxford University Press, New York, 2010).
 24.
Kobayashi, H., Mark, B. L. & Turin, W. Probability, Random Processes, and Statistical Analysis: Applications to Communications, Signal Processing, Queueing Theory and Mathematical Finance (Cambridge University Press, 2011).
 25.
Zhao, L., Lai, Y. C., Park, K. & Ye, N. Onset of traffic congestion in complex networks. Phys. Rev. E 71, 026125 (2005).
 26.
Guimerà, R., DíazGuilera, A., VegaRedondo, F., Cabrales, A. & Arenas, A. Optimal network topologies for local search with congestion. Physical Review Letters 89, 248701 (2002).
 27.
Guimera, R., Arenas, A., DiazGuilera, A. & Giralt, F. Dynamical properties of model communication networks. Phys. Rev. E 66, 026704 (2002).
 28.
Cholvi, V., Laderas, V., Lopez, L. & Fernandez, A. Selfadapting network topologies in congested scenarios. Phys. Rev. E 71, 035103(R) (2005).
 29.
Duch, J. & Arenas, A. Scaling of fluctuations in traffic on complex networks. Physical Review Letters 96, 218702 (2006).
 30.
Sreenivasan, S., Cohen, R., Lopez, E., Toroczkai, Z. & Stanley, H. E. Structural bottlenecks for communication in networks. Phys. Rev. E 75, 036105 (2007).
 31.
Danon, L., Arenas, A. & DiazGuilera, A. Impact of community structure on information transfer. Phys. Rev. E 77, 036103 (2008).
 32.
Yang, R., Wang, W. X., Lai, Y. C. & Chen, G. R. Optimal weighting scheme for suppressing cascades and traffic congestion in complex networks. Phys. Rev. E 79, 026112 (2009).
 33.
Chen, Y. Z. et al. Extreme events in multilayer, interdependent complex networks and control. Scientific Reports 5, 17277–17277 (2015).
 34.
Mo, J. H. & Walrand, J. Fair end-to-end window-based congestion control. IEEE/ACM Trans. Netw. 8, 556–567 (2000).
 35.
Ahuja, R. K., Magnanti, T. L. & Orlin, J. B. Network Flows: Theory, Algorithms, and Applications (Prentice Hall, 1993).
 36.
Boyd, S. & Vandenberghe, L. Convex Optimization (Cambridge University Press, New York, 2004).
 37.
Kelly, F. P., Maulloo, A. K. & Tan, D. K. H. Rate control for communication networks: shadow prices, proportional fairness and stability. Journal of the Operational Research Society 49, 237–252 (1998).
 38.
Carvalho, R., Buzna, L., Just, W., Helbing, D. & Arrowsmith, D. K. Fair sharing of resources in a supply network with constraints. Phys. Rev. E 85, 046101 (2012).
 39.
Chiu, D. M. & Jain, R. Analysis of the increase and decrease algorithms for congestion avoidance in computer networks. Computer Networks and ISDN Systems 17, 1–14 (1989).
 40.
Johnson, S. D. & D’Souza, R. M. Inequality and Network Formation Games. Internet Mathematics 11, 253–276 (2015).
 41.
Low, S. H., Paganini, F. & Doyle, J. C. Internet congestion control. IEEE Control Systems Magazine 22, 28–43 (2002).
 42.
Massoulie, L. & Roberts, J. Bandwidth sharing: Objectives and algorithms. IEEE/ACM Trans. Netw. 10, 320–328 (2002).
 43.
Bertsimas, D., Farias, V. F. & Trichakis, N. The Price of Fairness. Oper. Res. 59, 17–31 (2011).
 44.
Carmi, S., Wu, Z., López, E., Havlin, S. & Eugene Stanley, H. Transport between multiple users in complex networks. The European Physical Journal B 57, 165–174 (2007).
 45.
Carmi, S., Wu, Z., Havlin, S. & Stanley, H. E. Transport in networks with multiple sources and sinks. Epl 84, 28005 (2008).
 46.
Tan, D. K. H. Mathematical Models of Rate Control for Communication Networks. Ph.D. thesis, Statistical Laboratory, University of Cambridge (1999).
 47.
Courant, R. & Hilbert, D. Methods of Mathematical Physics, vol. 1 (Wiley-Interscience, 1989).
 48.
Ball, K. Optimization and Lagrange Multipliers, chap. III. 64, 255–257 (Princeton University Press, New Jersey, 2008).
 49.
Goh, K. I., Kahng, B. & Kim, D. Universal behavior of load distribution in scalefree networks. Physical Review Letters 87, 278701 (2001).
 50.
Nace, D. & Pioro, M. Max-Min Fairness and Its Applications to Routing and Load-Balancing in Communication Networks: A Tutorial. IEEE Commun. Surv. Tutor. 10, 5–17 (2008).
 51.
Danila, B., Yu, Y., Marsh, J. A. & Bassler, K. E. Optimal transport on complex networks. Phys. Rev. E. 74, 046106 (2006).
 52.
Barthelemy, M. Spatial networks. Phys. Rep.-Rev. Sec. Phys. Lett. 499, 1–101 (2011).
 53.
Goh, K. I., Noh, J. D., Kahng, B. & Kim, D. Load distribution in weighted complex networks. Phys. Rev. E. 72, 4 (2005).
 54.
Ogryczak, W., Luss, H., Pioro, M., Nace, D. & Tomaszewski, A. Fair Optimization and Networks: A Survey. J. Appl. Math. 25 (2014).
 55.
Wang, H. J., Hernandez, J. M. & Van Mieghem, P. Betweenness centrality in a weighted network. Phys. Rev. E. 77, 046105 (2008).
 56.
Ullah, A. & Giles, D. E. A. Handbook of Applied Economic Statistics. (CRC Press, New York, 1998).
Acknowledgements
We thank Matej Cebecauer for conducting numerical experiments and analysing results in early versions of the manuscript. We thank Dirk Helbing for granting access to the ETHZ Brutus highperformance cluster. This work was supported by the Alan Turing Institute, call for collaboration in the Lloyd’s Register Foundation Programme to support datacentric engineering under grant number LRF1605, by the Engineering and Physical Sciences Research Council under grant number EP/I016023/1, by VEGA (project 1/0463/16), APVV (project APVV150179) and by FP 7 (project ERAdiate 621386).
Author information
Affiliations
University of Žilina, Univerzitná 8215/1, 01026, Žilina, Slovakia
 Ľuboš Buzna
School of Engineering and Computing Sciences, Durham University, Lower Mountjoy, South Road, Durham, DH1 3LE, UK
 Rui Carvalho
Contributions
Ľ.B. and R.C. conceived the experiment(s), Ľ.B. and R.C. wrote the manuscript
Competing Interests
The authors declare that they have no competing interests.
Corresponding author
Correspondence to Ľuboš Buzna.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.