Local dominance unveils clusters in networks

Clusters or communities can provide a coarse-grained description of complex systems at multiple scales, but their detection remains challenging in practice. Community detection methods often define communities as dense subgraphs, or subgraphs with few connections in-between, via concepts such as the cut, conductance, or modularity. Here we consider another perspective built on the notion of local dominance, where low-degree nodes are assigned to the basin of influence of high-degree nodes, and design an efficient algorithm based on local information. Local dominance gives rise to community centers and uncovers local hierarchies in the network. Community centers have a larger degree than their neighbors and are sufficiently distant from other centers. The strength of our framework is demonstrated on synthetic and empirical networks with ground-truth community labels. The notion of local dominance and the associated asymmetric relations between nodes are not restricted to community detection, and can be utilised in clustering problems, as we illustrate on networks derived from vector data.


Introduction
Many real-world datasets can be viewed as a collection of objects embedded in a global metric space, thereby providing a vector representation 1 . Alternatively, networks have become another fundamental way to model complex systems, with a focus on direct pairwise interactions between constituents [2][3][4] . In the case of social systems, for instance, these complementary representations may correspond to a set of socio-demographic variables for each individual, e.g., in a Blau space 5 , or to a social network of interactions between individuals, e.g., via a mobile communication network 6 or spatio-temporal co-occurrence interactions 7 . In each representation, real-world systems tend to exhibit groups: regions of high density in the spatial representation, known as clusters, or high-density subgraphs in the network, known as communities. Such cluster or community structure provides a coarse-grained representation of the underlying complex system [8][9][10][11] , is often associated with different functions, impacts collective behaviours [12][13][14] , and its unsupervised detection is thus essential in different areas of data science 1,10 .
In the vector representation, the introduction of a dissimilarity function, and ideally of a distance in a metric space, provides a natural way to identify the center of a cluster, e.g., the medoid in a general metric space 15,16 . A hierarchy then forms within a cluster between central and more peripheral nodes, implying an asymmetric relationship between them. On the other hand, in networks, asymmetric pairwise interactions, which can be associated with an implicit hierarchy 17 , have long been recognized [18][19][20][21][22] in various systems, yet community detection methods place much less emphasis on the concepts of community center and of hierarchy within communities. We can always use network centrality measures on the subgraphs identified as communities to identify core and peripheral nodes a posteriori, but these roles are not central to community detection 23,24 , in stark contrast to clustering methods based on embedding the data in a metric space.
In this paper, we propose a community detection algorithm for networks, Local Search (LS), that explicitly uses the notion of local dominance and identifies community centers based on local information. In our method, every node is given at most one parent node deemed to be higher up in a partial ranking. Nodes that have a dominant position in their immediate neighborhood 18 or even beyond are identified as local leaders 18 . This defines a rooted tree that spans the network and gives rise to community centers: local leaders 18 with both a larger degree than the nodes in their basin of attraction and a relatively long distance to other local leaders higher up in the ranking. Our approach possesses several interesting properties. Firstly, it provides a new perspective on community detection: it delivers community centers, a hierarchy within each community, and even a hierarchy among communities as an explicit part of the algorithm, thereby mimicking advantageous features of methods based on embedding data in a metric space. Secondly, the identification of communities through local dominance is highly efficient, as it uses purely local topological information and breadth-first search, and runs in linear time. The method does not require the heuristic optimization of an objective function that relies on a global null model 9,[25][26][27][28][29] or computationally costly spreading dynamics [30][31][32] . Our method also does not rely on a similarity measure, for which there is a wide choice with an associated uncertainty and variability in results, as found in hierarchical clustering based methods 8,14,33,34 . Finally, LS is not as susceptible to noise as most methods 10,35 , and is therefore less likely to find spurious communities in realisations of random graph models 36 .
We demonstrate the strength of LS on several classical but challenging synthetic benchmarks and on standard empirical networks with known ground-truth community labels. Our numerical evaluation also includes network representations derived from vector data. As the LS method naturally provides community centers and local hierarchies, it creates an explicit analogy with the notions of cluster centers and distances within clusters that are found in vector clustering methods. Moreover, we also show that applying LS to a discretised version of a point cloud outperforms classical unsupervised vector data clustering methods on benchmarks 16 .

Local search algorithm
Cluster analysis and community detection share many conceptual similarities, but often have a contrasting focus. Cluster analysis puts emphasis on the center of a cluster 15,16 , while community boundaries often play a more predominant role in community detection 37 . Community centers can be inferred from the outputs of some community detection algorithms: for example, the nodes associated with the largest absolute weights of the leading eigenvector of the modularity matrix, or those exhibiting a higher density of connections inside their communities, are deemed community centers, core members or provincial hubs 23,38 . But such centers are only a by-product of those algorithms, rather than at the core of their methodologies.
The approach that we propose here explicitly focuses on community centers to identify clusters. It is motivated by the existence of underlying asymmetries between nodes [19][20][21] and the concept of local leaders 18 in networks, and borrows ideas from density- and distance-based clustering algorithms on vector data 16 . In our Local Search (LS) algorithm, local dominance refers to a leader-follower relation, with the further restriction that each node eventually has at most one out-going link, pointing to its leader. We hypothesise that communities are organized around centers: nodes with a dominant position in their neighborhood (e.g., a larger degree, or another centrality measure, than their neighbors) that are also distant enough from other potential centers. A partition then naturally ensues from the community centers. Our LS algorithm involves four steps. Firstly, we calculate the degree k_u of each node (see digits in Fig. 1A). Secondly, we point each node u to its largest-degree neighbor v, provided this neighbor's degree is no smaller than its own (i.e., k_v ≥ k_u and k_v = max {k_j | j ∈ V(u)}, where V(u) is the set of neighboring nodes of u). Nodes with in-going edge(s) and no out-going edge are local leaders 18 that dominate their vicinity (see nodes f, m, and p in Fig. 1B). Such local leaders are rich-among-poor and are potential community centers. Thirdly, for each local leader u, we use a local breadth-first search (LBFS) to find its nearest local leader v with k_v ≥ k_u, and record the shortest path length to node v as l_u = d_uv, which is larger than one (see long-dash arrows in Fig. 1C, and Fig. 1D for the extracted local dominance relation, which runs in the reverse direction of the arrows). The LBFS process stops after finding such a local leader v, hence the name local BFS: it generally searches a small region and does not traverse the whole network. For local leader(s) with the maximal degree, we do not perform an LBFS, and directly assign the maximal l_u among the other local leaders (see mathematical descriptions in Methods).
After performing the LBFS for all local leaders except the maximal-degree node, we can determine community centers according to the degree k_u and the distance l_u along the local dominance relation (in the network in Fig. 1, nodes f and m stand out as centers, having both a large k_u and a large l_u, see Fig. 1E). Note that for all nodes other than local leaders, l_u = 1. By multiplying the normalized k_u and the normalized l_u, we can quantitatively identify community centers via a notable gap between candidates (see Fig. 1F, and details in Methods and Supplementary Note 1). Lastly, after the identification of community centers, the group label is assigned to their followers along the local dominance relation (i.e., the reverse direction of the arrows in Fig. 1C) in one single step.
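The four steps above can be sketched in plain Python. This is a minimal illustration of the procedure as described, not the authors' reference implementation; the deterministic tie-break and the gap-based cut on the decision graph are simplifying assumptions.

```python
from collections import deque

def ls_communities(adj):
    """Sketch of the LS steps on an undirected graph given as
    {node: set_of_neighbors}. Returns (label, centers)."""
    deg = {u: len(adj[u]) for u in adj}
    # Step 2: point each node to its largest-degree neighbour v if k_v >= k_u;
    # nodes left without a parent are local leaders.
    parent = {}
    for u in adj:
        if adj[u]:
            v = max(adj[u], key=lambda w: (deg[w], str(w)))  # deterministic tie-break
            if deg[v] >= deg[u]:
                parent[u] = v
    leaders = {u for u in adj if u not in parent}
    # Step 3: local BFS from each leader to the nearest leader with k_v >= k_u.
    l = {u: 1 for u in adj}   # l_u = 1 for all non-leaders
    up = {}                   # dominance link between leaders
    for u in leaders:
        seen, frontier, found = {u}, deque([(u, 0)]), None
        while frontier and found is None:
            w, d = frontier.popleft()
            for x in adj[w]:
                if x in seen:
                    continue
                seen.add(x)
                if x in leaders and deg[x] >= deg[u]:
                    found = (x, d + 1)
                    break
                frontier.append((x, d + 1))
        if found:
            up[u], l[u] = found
        else:
            l[u] = None       # globally dominant leader: assigned max l below
    lmax = max(d for d in l.values() if d is not None)
    for u in leaders:
        if l[u] is None:
            l[u] = lmax
    # Step 4 (decision graph): rank leaders by k_u * l_u and cut at the
    # largest gap; everything above the gap is a community centre.
    ranked = sorted(leaders, key=lambda u: deg[u] * l[u], reverse=True)
    gaps = [(deg[ranked[i]] * l[ranked[i]] - deg[ranked[i + 1]] * l[ranked[i + 1]], i)
            for i in range(len(ranked) - 1)]
    cut = max(gaps)[1] + 1 if gaps and max(gaps)[0] > 0 else len(ranked)
    centers = set(ranked[:cut])
    # Non-centre leaders follow the dominance link found by the local BFS,
    # then every node inherits the label of the centre it reaches.
    for u in leaders - centers:
        parent[u] = up[u]
    label = {}
    for u in adj:
        w = u
        while w not in centers:
            w = parent[w]
        label[u] = w
    return label, centers
```

On a toy graph with two hubs bridged by a path, this sketch identifies the two hubs as centers and assigns every other node to its hub's community by following the parent pointers.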
We name our framework the Local Search (LS) algorithm, since it only requires local information about the nodes and relies on efficient LBFS processes for the local leaders, which make up a very small fraction of the whole network (see Supplementary Table 1). The identification of the local dominance relation is quite resilient to missing and noisy links. Our LS algorithm has linear time complexity in the number of edges (see Methods and Supplementary Table 1) and does not need to iteratively optimize an objective function that relies on a global null model, as in other state-of-the-art methods 9,[26][27][28][29] , or to resort to spreading dynamics 30,31 . In addition, our LS algorithm is capable of identifying multiscale community structure, as the local dominance relation also provides hierarchies between communities via the asymmetric relationships between community centers. The strength of our framework is demonstrated on several classical, challenging synthetic test cases (see Figs. 2-3) and on empirical network datasets with ground-truth community labels (see Table 2). Finally, we also show how it provides a connection to clusters in a metric space: our LS algorithm outperforms current state-of-the-art unsupervised clustering and community detection methods when applied to discretised vector data clouds (see Figs. 5-6 and Table 4).

[Fig. 2 caption, continued: In a Ravasz-Barabási network 39 , which displays stronger heterogeneity, (C) the LS algorithm groups all first-level nodes and all sixteen second-level peripheral clusters into one community, and four small communities emerge (see Supplementary Fig. 4 for more details); (F) the Louvain algorithm partitions each second-level branching as a separate community and misclassifies a first-level peripheral cluster into its own community, a result of the traversal order and of the modularity optimization process in the Louvain algorithm.]
As the implementation of our algorithm was done in Python, we use the NetworkX implementation of the Louvain algorithm, our main point of comparison, to obtain a fair comparison of running times, as both algorithms have linear time complexity (see Table 2). We also compare against a broader range of popular community detection algorithms on partition performance (see Table 3), some of which are slower but more accurate. Our LS algorithm still ranks first or second on partition performance for five out of seven networks. More details on our LS algorithm can be found in Methods and Supplementary Note 1.

Synthetic networks
Here, we use well-known benchmark networks to illustrate how the LS method functions and in which situations it performs well. For illustration, we mainly contrast the results obtained by the LS method with those obtained by the Louvain method 9 , which is widely applied due to its good performance and high efficiency. In addition, both algorithms have linear time complexity, making the performance comparison between them fairer. We first look at a circular regular network, where all nodes are equivalent and thus no community structure should be discovered. LS correctly identifies a single community (Fig. 2A); by contrast, modularity forces community structure to exist and finds five communities (Fig. 2D). Let us look in detail at why LS finds a single community. First, each node will point to all its adjacent neighbors, as they all have the same degree; since nodes are traversed sequentially and do not point to their followers, loops cannot be formed (see Supplementary Fig. 1C and Supplementary Note 1.1.1 for a proof). After all nodes have been considered, each node keeps only one outgoing link, chosen with equal probability, and eventually a tree structure is formed. Because of the homogeneity of the graph, the tree only allows the identification of a single community center and therefore of a single community. Because all nodes are equivalent, the labeling, and thus the order in which nodes are visited, is irrelevant. We note that in the case of a clique, an extreme case of a regular network, the mapping of the local hierarchy can yield a range of structures from a chain to a star (see Supplementary Fig. 1B for more details). In all cases, only one center is identified. By contrast, the Louvain method partitions a homogeneous regular network into several communities by optimizing modularity (see Fig. 2D).
Our second application focuses on Erdős-Rényi (ER) random graphs, which are relatively, though not strictly, homogeneous. While in the limit of an infinite random graph no community structure exists, in finite-size ER graphs fluctuations may create spurious community structures 11,36 , as well as weak or spurious hierarchies between nodes. In this example, the LS method detects fewer communities than the Louvain algorithm (see Fig. 2B and Fig. 2E). In ER random networks, the degree distribution is concentrated around its average, but the system nonetheless exhibits fluctuations in the degrees. Large-degree nodes are more likely to connect to each other, as the connection probability between them, k_i k_j / 2E, is among the highest. When two large-degree nodes are connected, a directed out-going link points from one node to the other, making one of them a follower. Thus the LS method detects fewer communities. On the other hand, when we fix the size of the network and increase the connection probability p, the number of communities detected by the Louvain algorithm also decreases, but it consistently finds more communities than the LS method (see Supplementary Fig. 2). In addition, we are able to detect isolated nodes as noise (see grey nodes in Fig. 2B), as these nodes have a small degree but infinite l_i.
We also consider an extension of the ER random graph model, the stochastic block model, shown in Supplementary Fig. 3 and discussed in Supplementary Note 1.1.2. For random networks generated by the stochastic block model [40][41][42] with two blocks and zero inter-connection probability, c_out = 0, the Louvain algorithm detects two communities that align with the ground truth, but this is a reflection of the resolution limit 43 : when analyzing each community (each ER graph) on its own, it may partition it into more than ten communities (see Supplementary Fig. 2). By contrast, the LS algorithm detects as many community centers as when looking at each individual random network. This result can be understood by the local nature of the algorithm: the structure of one disconnected cluster does not affect the communities found in the other, and thus the LS algorithm does not suffer from the resolution limit. When fixing the intra-connection probability c_in and increasing c_out, the boundary between the two communities becomes blurred. We find that when slightly increasing c_out, the number of communities given by the Louvain algorithm increases drastically (see Supplementary Fig. 3B). By contrast, the F_1-score of the LS algorithm is relatively stable, though not very high, and outperforms Louvain when c_out is larger (see Supplementary Fig. 3).
Finally, we consider a hierarchical benchmark, the Ravasz-Barabási network model 39 with two layers, which naturally provides a hierarchy between central and peripheral nodes. The clustering proposed by the LS method explicitly reflects the hierarchical nature of the model by grouping first-level nodes and all sixteen second-level peripheral clusters into one community centered at the original seed node, as it dominates their neighborhood; four small communities emerge due to the existence of four centers, which have a degree larger than their neighbours and a longer path length to the original seed node, i.e., l_i > 1 (see Supplementary Fig. 4C for the decision graph that identifies community centers). The Louvain algorithm offers an alternative partitioning that ignores the hierarchical nature of the model and finds five communities of roughly equal size, misclassifying a peripheral cluster as a separate small community (see Fig. 2F). This example is interesting in that the clustering provided by Louvain here is a reasonable, yet alternative, answer that ignores one aspect of the data. It reminds us that different clustering methods rely on different underlying mechanisms and, as often occurs when using unsupervised methods, the outputs are rarely strictly right or wrong. The outputs should be understood not only in terms of the data, but of the methods as well. Still, it is worth noting that the Louvain algorithm misclassifies a first-level peripheral cluster into another community (see the blue cluster in Fig. 2F), due to the traversal order used by the algorithm and the modularity optimization process (see Supplementary Note 1.1.3 for more details). When we further modify the network generated by the Ravasz-Barabási model by adding a third-level branching to one of the second-level central clusters, and add noise in the connectivity to other second-level central clusters, the LS method still detects meaningful hierarchical structure (see Supplementary Fig. 4B and Supplementary Fig. 4D).

Detection of multiscale community structure
As partially reflected in the decision graph of the LS algorithm for the Ravasz-Barabási network (see Supplementary Fig. 4C-D), the reliance of our method on local dominance to identify local leaders naturally lends itself to detecting multiscale community structure 14,34,45 . To illustrate this point, we generate a multiscale network made of two levels: four top-level communities with 400 nodes each and inter-connection probability p_1 = 0.0002, where each top-level community contains four second-level communities with 100 nodes each and p_2 = 0.035 14,34 . Each second-level community is generated by the standard Barabási-Albert model 44 with m = 7, which yields ⟨k⟩ = 14 (see Fig. 3A). The LS method correctly identifies two levels of community structure, with a notable gap between the first four top-level centers, which have similar k_i × l_i, and other potential centers, as shown in Fig. 3B. Taking the twelve subsequent centers, the sixteen centers together correspond to the sixteen second-level communities, and their affiliations within the top-level communities are correct (see the tree structure for local leaders in Fig. 3C). As all sixteen second-level communities are statistically equivalent, the directionality between community centers (Fig. 3C) is determined by fluctuations in the network generating mechanism. The partition obtained by the LS method has an F_1-score of 0.99 at the top level and of 0.56 at the second level. Misclassifications at the second level mainly come from the relatively large inter-connection probability p_2, which blurs the boundary between communities. In comparison, the Louvain algorithm only detects the four large top-level communities, with an F_1-score equal to 1, but it cannot detect the smaller second-level communities due to the resolution limit 43 . This demonstrates the strength of the LS method in detecting smaller-scale community structure.
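The two-level benchmark described above can be generated, for instance, as follows. This is a sketch using NetworkX; the function name and the wiring loop are ours, with the parameters taken from the text.

```python
import itertools
import random

import networkx as nx

def multiscale_benchmark(n_top=4, n_sub=4, n_nodes=100, m=7,
                         p1=0.0002, p2=0.035, seed=0):
    """Two-level benchmark: n_top top-level communities, each made of
    n_sub Barabasi-Albert blocks (m=7, so <k> ~ 14 inside each block),
    wired with probability p2 inside a top-level community and p1 across."""
    rng = random.Random(seed)
    G = nx.Graph()
    blocks = {}
    for t in range(n_top):
        for s in range(n_sub):
            B = nx.barabasi_albert_graph(n_nodes, m, seed=rng.randrange(2**31))
            mapping = {u: (t, s, u) for u in B}  # keep block identity in node names
            G.add_edges_from((mapping[u], mapping[v]) for u, v in B.edges())
            blocks[(t, s)] = list(mapping.values())
    for a, b in itertools.combinations(blocks, 2):
        # p2 between sub-blocks of the same top-level community, p1 otherwise
        p = p2 if a[0] == b[0] else p1
        for u in blocks[a]:
            for v in blocks[b]:
                if rng.random() < p:
                    G.add_edge(u, v)
    return G
```

Each Barabási-Albert block contributes m(n - m) = 651 edges, and the inter-block wiring adds the edges that blur the second-level boundaries.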
One reason that LS succeeds at detecting multiscale structure is that the average path length between nodes is governed by the connection probability 46 . The distance between nodes from different second-level communities within the same top-level community is on average shorter than the distance between nodes from different top-level communities, and thus the hierarchical structure is uncovered by the LS method. Another reason is the intrinsic degree heterogeneity within each second-level community.
By contrast, when keeping the average degree and the inter-connection probabilities (p_1 and p_2) the same, and replacing the second-level communities by ER random networks with p = 0.14, which also yields ⟨k⟩ = 14 (see Fig. 3D), the whole network becomes more homogeneous (see Supplementary Fig. 5). In this case, the LS method can still detect the four top-level communities (see Fig. 3E), but it misidentifies some second-level communities (e.g., communities c2 and d1 are missing in this example, see Fig. 3F) and detects more, smaller communities (29 second-level communities are detected instead of 16). The misidentification of some second-level communities occurs when the largest-degree node u in a ground-truth community is directly connected to a node v in another community with k_v ≥ k_u, so that u is considered a follower. This is more common in such a random setting, as there are more nodes with a relatively large degree beyond the reference value (i.e., the smallest of the largest degrees over the ground-truth second-level communities; see Supplementary Fig. 5 for more details). By contrast, in the scale-free case, there are fewer nodes beyond the reference value. For example, in the random multiscale network in Fig. 3D, the reference value is 34 and there are 60 nodes beyond it; in comparison, in the scale-free one, there are only 31 nodes beyond its reference value. The homogeneity makes the detection of such communities harder: if this minimum value became only slightly smaller, there would be many more nodes beyond the reference value in the random setting (see Supplementary Fig. 5B). Mis-affiliation, i.e., one local leader in community b3 following the center of d4 instead of other centers in community d, is partially due to a similar reason and partially due to randomness. The discussion above also implies that the LS method is vulnerable to targeted failures: connecting two community centers demotes one center to a follower, and their corresponding communities merge into one (see Supplementary Fig. 24). In addition, due to randomness, two or more local leaders might emerge in the same second-level community, which leads to a split of the community (e.g., there are two local leaders in community c2). These constitute cases where the LS method is not appropriate.

[Table 1 caption: N is the number of nodes in the network, E is the number of edges, ⟨k⟩ is the average degree of the network, ⟨d⟩ refers to the average shortest path length between all node pairs, ⟨CC⟩ refers to the average clustering coefficient, ρ refers to assortativity, and α refers to the power-law exponent of the degree distribution, when it can be reasonably well fitted by a power law.]

Real-world benchmark networks
We now test the LS algorithm and demonstrate its strength on several empirical benchmark networks (see Table 1) with known ground-truth community labels (see Table 2). We compare with the Louvain algorithm in Table 2 because both algorithms are linear, making them well-suited for large-scale networks and facilitating a more meaningful comparison; moreover, the Louvain method is the most widely used community detection algorithm, implemented in most network packages. LS is faster than Louvain on 7 of the 8 benchmarks, and the speed advantage becomes more noticeable as the networks get larger (see Table 2). For example, for the DBLP network 48 with 317,080 nodes and 1,049,866 edges, our LS method takes 45 seconds, while Louvain takes 256 seconds.

The LS method is not only faster, but also classifies better than the Louvain algorithm, as measured by the F_1-score, for 5 out of 7 examples with ground-truth community labels (see Supplementary Fig. 9 and Supplementary Note 2 for more details and discussion of the evaluation by F_1-score; we also compare the algorithms on performance evaluated by conductance 48 , see Supplementary Table 4). In Table 3, we extend our comparison of the LS algorithm to other popular algorithms with different perspectives on community detection, some of which are more accurate albeit slower. For example, the GDG (geodesic density gradient) algorithm 49 first embeds the network into a vector space based on the shortest path lengths between nodes and then applies an iterative clustering algorithm similar to the mean-shift algorithm 50 , both steps being computationally costly, to obtain partitions into communities. The GDG algorithm achieves the best performance on two out of seven networks, and has an obvious advantage over other methods on Citeseer (see Table 3 and Supplementary Table 2 for more details). Another important family of methods is the inferential one, which provides powerful tools without arbitrariness and further advances our understanding of the community structure of networks 51 . Inferential algorithms, which generally rely on the SBM as a generative model, can explain the inability to detect communities in the very sparse limit 52 , help eliminate the resolution limit of the Louvain algorithm, and detect the hierarchical structure of complex networks 53 . They can be adapted to different types of networks, ranging from weighted networks 54 to directed ones 55 and hypergraphs 56 . However, inferential methods are generally computationally expensive 53,57 . The inferential algorithm described in Refs. 53,57 has a much better performance than LS on the Football network (see Supplementary Table 3), whose degree distribution is quite homogeneous. Other algorithms typically only attain the first position on one out of seven networks, while our LS algorithm takes the lead on two out of seven networks and secures second place on an additional two. We note that the best performing algorithms are well distributed among the benchmarks, which reflects that real networks generally have different generating mechanisms that are better captured by some algorithms than others 26,27 . This suggests that achieving optimal performance across all scenarios is highly improbable 58 , in line with the No Free Lunch theorem 27 . It is, however, interesting that LS consistently ranks first or second on four out of seven benchmark networks, and is overall the best classifier, suggesting that the notions of local dominance, hierarchy and community centers are pervasive in real networks, whose degree distributions are generally heterogeneous. It is also instructive to understand why LS does not perform well on the Football network 8,47,59 . The Football network is fairly homogeneous, and we have already explained why the LS method does not perform well in this situation (see the subsection on multiscale community detection). There is also significant connectivity between the largest-degree nodes in the ground-truth communities, so some of them become direct followers of others and their communities are merged. If a portion of the links between the largest-degree nodes were removed, the partition given by the LS method would be much closer to the ground truth.
Targeted link removal and addition can significantly change the structure of a network and the outcome of community detection algorithms. The LS algorithm is not immune to this effect, as it relies on local leaders to separate communities: an intentional, targeted link addition between two community centers makes one of them a follower and leads to a single community, which dramatically reduces classification performance. For example, if we connect the president and the instructor in the Zachary Karate Club network, the LS method yields a single community (Supplementary Fig. 24), which was indeed the case before the split 60 . This also gives us a way to identify critical links for merging or splitting communities 61 . The identification of the local hierarchy is, on the other hand, more robust against links missing or added at random (see Supplementary Figs. 22-23). In addition, the number of communities detected by LS is closer to the ground truth (see Table 2). For example, for the Zachary Karate Club network, Louvain detects four communities, while LS detects two, which is consistent with reality. More potential centers can usually be detected in real networks (see Supplementary Fig. 6), and these might correspond to meaningful multiscale structure. In the Polblogs network, LS finds three communities instead of the two indicated by the current labeling, and there is debate over whether three groups should be considered the ground truth (i.e., apart from liberal and conservative, there is a neutral community) 62 . This partially explains why LS does not work that well on this example. It also reflects the importance and difficulty of obtaining ground-truth labels, if there are any 27 . Although evaluating the classification performance of an algorithm against a ground truth is standard practice 63 , establishing the ground truth for community assignment usually requires detailed surveys, which can be difficult for very large networks 41,63 , and the ground truth is usually regarded as distinct from available metadata 27,41 . The choice of the ground truth(s) is crucial, and there might be "alternative" ground truths that emerge from unsupervised clustering analysis and are validated a posteriori. The notion of alignment between ground truth and structure is indeed crucial to obtaining good clusters 64 . For example, in the well-known Zachary Karate Club network 60 , the metadata of nodes could also be their gender, age, major, or ethnicity; most of these are irrelevant to the community structure when the aim is to understand the split of the club 27,41 , but might be relevant to other types of community structure. Apart from evaluations based on ground-truth labels, various evaluation criteria based purely on network structure (e.g., optimizing modularity, conductance, or cut) have been proposed; however, they may deviate from the real generating process of networks and will not be suitable for all scenarios. For example, maximizing modularity cannot generate good partitions in ecological networks, as herbivores in the same community do not prey on each other, so there are no dense connections within the same ecological community. In this sense, when ground-truth labels are available, the F_1-score is a more objective evaluation. Overall, our LS algorithm performs well: it ranks first on two out of seven networks and second on another two when compared to other popular algorithms, some of which are slower but more accurate, and our algorithm is the fastest.

[Table 3 caption: For the GDG algorithm, d is the dimension of the embedding space, and t is the number of iterations.]
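The targeted-addition effect on the Karate Club network can be checked directly. This is a sketch: in the NetworkX version of the data, node 0 is the instructor and node 33 the president.

```python
import networkx as nx

G = nx.karate_club_graph()
deg = dict(G.degree())

# The instructor (node 0) and the president (node 33) are the two
# largest-degree nodes and are not directly connected:
assert not G.has_edge(0, 33)

# Under the LS pointing rule, node 0 has no neighbour with a degree
# at least as large as its own, so it is a local leader:
assert max(deg[v] for v in G[0]) < deg[0]

# A single targeted link between the two centres changes that:
G.add_edge(0, 33)
deg = dict(G.degree())
best = max(G[0], key=deg.get)
# Node 0 now points to node 33 and is demoted to a follower,
# so the two communities merge into one.
assert best == 33 and deg[best] >= deg[0]
```

The same check, run in reverse on candidate links, is one way to locate the critical links mentioned above.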

Applications to urban systems
Our final example of real-world networks uncovers the structure of spatial interactions in cities. It also showcases the capacity of LS to adapt to weighted networks, with the node degree replaced by the node strength, and distances computed along least-weight shortest paths, where the distance between two adjacent nodes is the inverse of the volume of the mobility flow. Many cities have evolved, or will evolve, from a monocentric to a polycentric structure 67 , which can be inferred from the patterns in human mobility data. We use human mobility flow networks derived from massive cellphone data at the cellphone-tower resolution, with careful noise filtering and stay-location detection [68][69][70] , for three cities on different continents: Dakar 71 , Abidjan 7,72 , and Beijing 73,74 (see the references here and the Supplementary Material of ref. 7 for more details on obtaining the mobility flow network from cellphone data). The LS algorithm detects both communities with strong internal interactions and meaningful community centers (see Supplementary Fig. 8 for the decision graph). We find that for the smaller cities, Dakar and Abidjan, communities are more spatially compact, while in the larger city, Beijing, they are more spatially mixed (see Fig. 4). This indicates that in Beijing, interactions are less constrained by geometric distance, which might be due to a more advanced transportation infrastructure and a superlinearly stronger and more diversified interaction tendency in larger cities 7,75,76 . In addition, the identified community centers correspond to important interaction spaces in cities, see Fig.
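The two substitutions underlying the weighted adaptation can be sketched in a few lines. The toy flows and tower names below are hypothetical; only the degree-to-strength and flow-to-distance replacements from the text are shown.

```python
import networkx as nx

# Hypothetical flow network between three cellphone towers; edge weights
# stand in for mobility-flow volumes.
G = nx.Graph()
G.add_weighted_edges_from([("t1", "t2", 50), ("t2", "t3", 5), ("t1", "t3", 20)])

# Degree -> node strength (total flow through each tower).
strength = dict(G.degree(weight="weight"))

# Distance between adjacent towers = inverse of the flow volume, so that
# strongly interacting towers are "close"; path lengths then use Dijkstra.
for u, v, data in G.edges(data=True):
    data["dist"] = 1.0 / data["weight"]
length = dict(nx.all_pairs_dijkstra_path_length(G, weight="dist"))
```

The LS steps then proceed as before, with `strength` in place of the degree and the Dijkstra `length` in place of hop counts.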

Clustering vector data via the LS algorithm
Community detection and vector data clustering share many similarities, but are often considered separately and have a contrasting focus. Our use of local leaders identified by local dominance was directly inspired by the concept of the center of a cluster, which is characterised by a higher centrality measure in its vicinity/neighborhood (e.g., density or degree) and a relatively long distance (i.e., a large l i ) to the nearest object with a larger centrality. Local dominance concretely and explicitly identifies the fundamental asymmetric leader-follower relations between objects, which naturally give rise to centers. This creates a direct link between the two viewpoints of network science and data science. It is therefore natural to ask whether LS would perform as well as, or even better than, vector data clustering methods on a discretised version of a data cloud.
To cluster vector data with the LS method, we first need to discretise it into a network. Many methods exist to perform this task, including the ε-ball method, k-nearest-neighbors (kNN) and its variants (such as mutual kNN and continuous kNN), the relaxed maximum spanning tree 77 , percolation- or threshold-related methods 35,78 , and more sophisticated ones 79 . Here, we employ the commonly used ε-ball method, which sets a distance threshold ε and connects vectors, which become nodes, whose ε-balls overlap, see Fig. 5A and inset. This process can be accelerated using R-trees and implemented with a time complexity of O(N log N) 76,80 (see Supplementary Note 1.4). After traversing all nodes, a network encoding geometric closeness within ε between nodes is obtained, see Fig. 5B. The ε-ball method preserves spatially local information, e.g., the vector density in the metric space can be interpreted as degree in the constructed network, and coarse-grains the continuous distance between objects into discrete values. This makes the determination of centers clearer (see Fig. 5C and D). The choice of ε greatly influences the structure of the obtained network; here we chose ε to be near the network percolation threshold to ensure a minimally connected graph [81][82][83] . More details on determining ε can be found in Supplementary Note 3.1.
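A minimal sketch of this discretisation step follows (our own illustrative code, not the paper's implementation). The brute-force version below connects every pair of points within a distance ε of each other; depending on the ball-radius convention this corresponds to overlapping balls of radius ε/2, and it makes a node's degree equal to the local density ρ i used in Fig. 5.

```python
import math

def epsilon_ball_graph(points, eps):
    """Connect every pair of points within distance eps of each other.
    Brute-force O(N^2) for clarity; the paper accelerates this step with
    R-trees to reach O(N log N)."""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                edges.append((i, j))
    return edges

# Four points: a small triangle plus one distant outlier.
pts = [(0, 0), (1, 0), (0.5, 0.8), (5, 5)]
edges = epsilon_ball_graph(pts, 1.0)  # the outlier stays isolated
```

The triangle of nearby points becomes a connected triple, while the distant point remains an isolated node, illustrating how density in the metric space translates into degree in the network.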
Applying the LS algorithm on the constructed network for a series of well-known two-dimensional benchmark data (Fig. 6A and E, and Supplementary Fig. 10 for more cases) yields the expected clusters (Fig. 6B and F, and Supplementary Fig. 11). By contrast, the Louvain algorithm generally obtains more and smaller clusters in a relatively fragmented way (Fig. 6C and G, and Supplementary Fig. 13 for more examples) on the same networks. The reason is that the Louvain algorithm overlooks the transitivity of local relations 85 . The state-of-the-art unsupervised density and distance based (DDB) clustering algorithm 16 applied to the original vector data yields the expected clusters in most cases, see Fig. 6D and ref. 16 for other examples. This confirms the universality of the local hierarchy between objects and the analogy between our community centers and cluster centers. However, the DDB algorithm fails in the test case 84 in Fig. 6H due to a mixture of local and global metrics in its association rule 84 , which does not affect the LS method (see Fig. 6F). From a network perspective, certain dynamics can give rise to meaningful clusters with arbitrary shapes in metric space (e.g., synchronization or spreading dynamics are usually only

Figure 5. Conversion from vector data to a network via the ε-ball method, and the analogy between the community centers of networks and the cluster centers of vector data. (A) An example of a data cloud and (B) its discretised network representation obtained by (Inset) the ε-ball method. (C) The decision graph by the density and distance based (DDB) algorithm 16 . (D) The decision graph by the LS method. Cluster centers are data points with both a higher density ρ i than their neighbors and a relatively long distance to other points with a larger density (i.e., a large d i ) 16 . The density ρ i of a data point i is simply the number of nodes within a certain radius ε, and it is equivalent to the degree of node i in the corresponding network (i.e., k i = ρ i ). The network construction process is a coarse-graining and discretisation process, where the absolute distance value is not preserved (e.g., in the Inset, d 32 > d 34 for the original vector data, but l 32 = l 34 = 1 in the network). The Euclidean distance between any data points is based on a global metric, but the topological path length between two nodes is based on a local metric. For example, d 24 is only slightly larger than d 34 , but in the network, l 24 = 2 and l 23 = 1 (see the Inset); and though d 21 ≈ 2d 23 according to the global metric, node 2 and node 1 are not reachable from one another in the network based on the local metric. Cluster centers identified by the DDB algorithm match the community centers identified by the LS method, which are all marked as stars.
Figure 6. Comparisons of the clustering performance of the LS, Louvain, and DDB algorithms on two-dimensional benchmark vector data. (A) and (E) represent networks constructed from vector data using the ε-ball method (see Supplementary Note 3.1 and Supplementary Fig. 10 for details on the network constructions). (B) and (F) show the results of the LS method, which correctly identifies clusters that align with common consensus (see Supplementary Fig. 11 for more cases). In addition, LS can detect noisy points (marked in grey), which have low degrees but long l i . (C) and (G) show the partitions obtained from the Louvain method, which are more fragmented than the LS results (see Supplementary Fig. 13 for more cases). (D) and (H) show the results obtained from the DDB method, which provides correct partitions for most benchmark data, see (D) and ref. 16 for other cases, but fails in the test case in (E), where both a low-density manifold and a high-density cluster exist, due to its local association rule 84 being affected by a mixture of local and global metrics. The LS and Louvain methods are performed on the constructed networks shown in (A) and (E), and the DDB algorithm is performed on the original vector data.
turns out to be an advantage for cluster analysis (see more discussions in Supplementary Note 3). Table 4. Comparisons of the clustering performance of the LS, Louvain, and DDB algorithms on high-dimensional vector data. D denotes the dimension of the dataset, N denotes the number of objects, and N c denotes the number of clusters from the ground truth or identified by the algorithms. The hand-written digits in MNIST are images of dimension 28×28=784 pixels; in Olivetti, each face image is of dimension 92×112=10,304 pixels. The Olivetti dataset with N=100 comprises the first 100 images of 10 people from the original dataset, which comprises 400 images of 40 different people. Our LS algorithm outperforms DDB on all high-dimensional and large-scale datasets except Iris, whose dimension is quite low. Note that as the DDB algorithm does not give a clear indication of the number of clusters (i.e., no clear gaps between centers in the decision graph) 16 for MNIST and Olivetti, the numbers of clusters identified by DDB are putative and based on the ground truth (i.e., selecting the top ten or forty nodes in the decision graph), as marked in brackets. Digits highlighted in bold are the ones closest to the ground truth among all three algorithms.

Discussion
Community detection and cluster analysis are analogous, as both aim to group objects into categories based on some notion of similarity. In this work, we develop a fast and scalable community detection method based on the notion of a community center, which echoes the commonly used concept of a cluster center. The identification of community and cluster structures requires a heterogeneous system: uniformly distributed data points and strictly regular networks do not possess meaningful mesoscopic cluster structure. Heterogeneity leads to the emergence of more important loci in a data space, or central nodes in a network. The notion of a center is pervasive in cluster analysis, but underused in community detection. We define community centers as local leaders that both have a high degree, corresponding to a high density in cluster analysis, and are relatively distant from other local leaders, corresponding to cluster separability. The nodes belonging to each community defined by its center are identified through basins of attraction 34 based on the dominance relations between nodes, which express the asymmetric leader-follower relationship and define a local hierarchy. While dominance is an explicit characteristic of edges in a directed network, it can be seen as an intrinsic, hidden, higher-order directionality between nodes even in undirected networks. The resulting local hierarchy reflects asymmetric interactions between objects inferred from the local connectivity of nodes, which then naturally defines leaders and community affiliations, as well as hierarchies among communities. In addition, the position of local leaders and the distribution of the shortest path lengths l u between local leaders can be developed into indicators characterizing network structure. Building on the concept of local leaders and the corresponding local hierarchy, automated discovery 24 and the evolution dynamics of communities 61 are natural follow-up studies.
The local hierarchy structure is quite robust against random noise, and is based on local information. In contrast to most state-of-the-art clustering and community detection methods, the LS method does not depend on the structure of the entire network 24 . We are thus able to detect communities in a small region and avoid the computational burden of analysing the whole network 24 . In cluster analysis, approximating similarity relations between objects by a distance matrix effectively assumes that every object is in a direct relation with all others; this is also the case for modularity optimization algorithms built on a random null model, which assumes that each node has a probability of interacting with every other node 10 . In addition, community detection methods generally assume a mutual relation between objects, which is an important formal metric property and an implicit feature of an undirected connectivity matrix. Local hierarchy implicitly violates this assumption, but it turns out that abandoning this restriction gives the clustering method greater flexibility (see Supplementary Note 3 for more details). Finally, our LS algorithm is fast and scalable with a linear time complexity, which is crucial for analyzing large-scale networks, and it also performs well on most benchmarks, except the ones that do not possess the type of heterogeneity exploited by the LS method (e.g., the football network 47 ).
Overall, the performance of the LS method is particularly good given its simplicity. On benchmark network models, it outperforms the currently most widely used community detection method, the Louvain modularity optimisation algorithm. The LS method consistently ranks higher than any other method when the performance is averaged over several data sets, see Table 3. We have also shown that the LS method is naturally able to detect the multiscale structure of communities in complex networks. This implies that, while not necessarily identifying the partition defined by some existing ground truth, it finds a good approximation of it, and its output can then be used as a starting point for slower but more accurate and dedicated community detection methods, offering a significant speed-up.
Given the similarity in spirit between LS and clustering methods, we applied LS to ε-ball discretised versions of benchmark vector data, both low- and high-dimensional. For low-dimensional data, we find that it provides the expected clusters and outperforms the Louvain modularity optimisation algorithm run on the discretised data, which generally yields too many communities and performs poorly. LS also outperforms DDB, a state-of-the-art unsupervised clustering method, on some challenging cases in the presence of low-density manifolds. For high-dimensional data, LS still outperforms DDB; on closer inspection, Louvain obtains a better F 1 -score, but it again suffers from providing too many communities, which outweighs its advantage in F 1 -score.
We hypothesise that the discretisation step of creating a network from vector data acts as a topological filter, which enhances the key property of the data that makes cluster detection work: the existence of well-defined cluster centers and a clearer identification of the local hierarchy. The performance of any community detection algorithm will be influenced by the discretisation method used, and more work is needed to understand the relationship between topological denoising and the performance of community detection algorithms, as different community detection methods might respond differently to different discretisation schemes.
Another area for future work is to adapt LS to find "halo" nodes residing at the boundary of two or more communities (e.g., node d in Fig. 1), to detect overlapping communities 13 , potentially by producing line graphs [88][89][90] or clique graphs 59 , and to identify critical links responsible for the merging or splitting dynamics of communities 61 . Another point that could be improved is the case where two or more local leaders are equivalent for a node in both degree and distance. We currently assign the node to one of these local leaders at random, but other options could be explored.
Finally, another possible direction for future research concerns the definition of dominance itself. In this article, it was built on a specific network property, the degrees of the nodes. For a weighted network, it would be appropriate to use strength rather than degree, and we would retain all the benefits of the LS method. Extending the LS algorithm to directed networks is worth closer investigation in the future. In directed networks, two types of local leaders, the "integrators" (determined by in-degree) and the "influencers" (determined by out-degree), might be needed, which can lead to two types of clustering. The influence of edge directionality should be closely examined, as influence may propagate in the reverse direction of a directed edge; for example, on Twitter, information often flows from a user to their followers. Additionally, directionality affects the calculation of path lengths between nodes. Apart from degree, dominance could also be based on other node centrality measures, but most of these require global network calculations, which would slow the algorithm considerably. If dominance were based on non-structural properties, such as numerical node attributes already defined in the data, the LS approach would still work well.

Methods
The Local Search (LS) algorithm
Cluster analysis and community detection share many conceptual similarities, but often have a contrasting focus. Cluster analysis puts the emphasis on the center of a cluster 15,16 , while community boundaries often play a more predominant role in community detection 37 . Community centers can be inferred from the outputs of some community detection algorithms: for example, the nodes associated with the largest absolute weights of the leading eigenvector of the modularity matrix, or exhibiting a higher density of connections inside the communities, are deemed to be community centers, core members, or provincial hubs 23,38 . But such centers are only a by-product of these algorithms, rather than at the core of their methodology.
The approach that we propose here explicitly focuses on community centers to identify clusters. It is motivated by the existence of underlying asymmetries between nodes [19][20][21] and the concept of local leaders 18 in networks, and borrows ideas from density and distance based clustering algorithms for vector data 16 . We hypothesise that a community center is a local leader that has a comparatively larger degree than its neighbors, thus "dominating" them, and lies at a relatively long shortest-path distance from other local leaders.
Our algorithm consists of four steps that we now detail. We start with an undirected network with N nodes and E edges, see Fig. 1A for an example. For better clarity, nodes are labeled and traversed in lexicographical order (see Fig. 1B).
Step 1 First, we calculate the degree k u of each node u (see the digits in Fig. 1A), which is an operation of linear time complexity O(E). Our algorithm neglects self-loops by default, but if self-loops are meaningful for calculating node degrees, setting the input parameter "self_loop" of the algorithm to True will increase the degree of the corresponding nodes accordingly; nodes with self-loops are still not considered as neighbors of themselves.
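Step 1 can be sketched as follows (illustrative code, not the reference implementation; the adjacency-list format is our assumption, and we count each self-loop once toward the degree, though some conventions add two):

```python
def node_degrees(adj, self_loop=False):
    """adj: dict mapping node -> list of neighbors; a self-loop appears as
    the node listed among its own neighbors."""
    degrees = {}
    for u, nbrs in adj.items():
        deg = sum(1 for v in nbrs if v != u)
        if self_loop:
            # count self-loops toward the degree, as enabled by the
            # "self_loop" parameter described in the text
            deg += sum(1 for v in nbrs if v == u)
        degrees[u] = deg
    return degrees

adj = {"a": ["b", "a"], "b": ["a", "c"], "c": ["b"]}
deg_default = node_degrees(adj)               # self-loop on "a" ignored
deg_loops = node_degrees(adj, self_loop=True)
```

In both modes, "a" is never treated as its own neighbor in the later dominance step; only its degree changes.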
Step 2 Second, we traverse each node u and point u to any adjacent node v with k v ≥ k u and k v = max{k z |z ∈ V(u)} (i.e., v has the largest degree in the neighborhood of u). For example, in Fig. 1B, node g will point to f instead of p as k f > k p > k g ; and c points to both b and m as k b = k m = max{k z |z ∈ V(c)} > k c . Note that a node cannot point to its follower: since nodes are traversed in lexicographical order, when node b is traversed, it will point to m as k m = max{k z |z ∈ V(b)} ≥ k b ; later, when m is traversed, it will not point to any of its followers (e.g., b). This process naturally avoids the creation of loops and ensures we only obtain directed acyclic graphs (DAGs), see Supplementary Fig. 1 and the proof in Supplementary Note 1.1.1 for more details. If such a v does not exist, u will not have any outgoing edge and is identified as a local leader (see the dark grey nodes f, p, and m in Fig. 1B). We denote the set of local leaders as C.
After traversing all nodes, for nodes with multiple out-going links, we randomly retain one (see the short-dash arrows in Fig. 1C for a possible mapping). Mathematically, we have obtained a forest of trees, where the root of each tree is a local leader and also a potential community center. For all nodes except local leaders, this process identifies a local hierarchy with an asymmetric leader-follower relation (see the short-dash arrows in Fig. 1B). This step is completed in O(E).
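Step 2 and the pruning to a single out-going link can be sketched together as follows (our illustrative code; ties among equal-max-degree neighbors are broken deterministically by max(), whereas the paper retains one link uniformly at random):

```python
def find_local_leaders(adj, degrees):
    """Point each node to its highest-degree dominating neighbor and
    collect local leaders (nodes with no dominating neighbor)."""
    pointer = {}    # u -> the single retained dominating neighbor
    followers = {}  # v -> set of nodes already pointing to v
    leaders = []
    for u in sorted(adj):  # traverse in lexicographical order, as in the text
        cands = [v for v in adj[u]
                 if degrees[v] >= degrees[u]
                 and v not in followers.get(u, set())]  # never point to a follower
        if cands:
            v = max(cands, key=lambda x: degrees[x])
            pointer[u] = v
            followers.setdefault(v, set()).add(u)
        else:
            leaders.append(u)  # no dominating neighbor: u is a local leader
    return pointer, leaders

# Toy network: a triangle attached to a hub, plus a pendant node.
adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "d"], "d": ["b", "c"]}
degrees = {u: len(vs) for u, vs in adj.items()}
pointer, leaders = find_local_leaders(adj, degrees)
```

The hub b dominates all its neighbors and emerges as the only local leader, while every other node keeps exactly one out-going link, yielding the forest of trees described above.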
Step 3 Third, to identify the upper level of the hierarchy for local leaders, we use a local breadth-first search (BFS) starting from each local leader u; we stop the search when encountering the first local leader v with k v ≥ k u , point u to v, and assign the shortest path length d uv on the original network to l u , which is the length of the out-going link of node u. Note that l u ≥ 2 for all local leaders, while all pure followers have l u = 1. For example, node p is a local leader; in the second iteration of the BFS, it encounters another local leader f with k f > k p . We stop the local BFS and point p to f, with l p = d p f = 2. Similarly, f → m with l f = d f m = 4. The out-going links of local leaders thus go beyond the direct connections of the original network (see the long-dash arrows in Fig. 1C).
When several local leaders have a degree no smaller than that of the local leader u in the l u -th iteration, the one with the largest degree is chosen; if multiple nodes have the same largest degree, one is picked uniformly at random. For the local leader(s) with the maximal degree in the whole network, denoted as M, a subset of C, there is no need to perform the BFS, and we directly assign l x∈M = max u∈C\M (l u ).
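A sketch of the local BFS of Step 3 follows (illustrative code; for brevity it takes the first dominating leader encountered rather than the largest-degree one at that BFS level, and falls back to 2 when C\M is empty):

```python
from collections import deque

def leader_distances(adj, degrees, leaders):
    """For each local leader u (except those of globally maximal degree),
    BFS outward on the undirected network until the first local leader v
    with degrees[v] >= degrees[u] is met; record l_u and the link u -> v."""
    leader_set = set(leaders)
    kmax = max(degrees[u] for u in leaders)
    l, up = {}, {}
    for u in leaders:
        if degrees[u] == kmax:
            continue  # top leaders are assigned after the loop
        seen, frontier, dist, found = {u}, deque([u]), 0, None
        while frontier and found is None:
            dist += 1
            for _ in range(len(frontier)):  # expand one BFS level
                x = frontier.popleft()
                for y in adj[x]:
                    if y in seen:
                        continue
                    seen.add(y)
                    if y in leader_set and degrees[y] >= degrees[u]:
                        found = y
                        break
                    frontier.append(y)
                if found:
                    break
        if found is not None:
            l[u], up[u] = dist, found
    for u in leaders:  # leaders of maximal degree get the largest l found
        if degrees[u] == kmax:
            l[u] = max(l.values(), default=2)
    return l, up

# Two hubs p (degree 3) and f (degree 4) joined through a bridge node g.
adj = {"p": ["x1", "x2", "g"], "x1": ["p"], "x2": ["p"], "g": ["p", "f"],
       "f": ["g", "y1", "y2", "y3"], "y1": ["f"], "y2": ["f"], "y3": ["f"]}
degrees = {u: len(vs) for u, vs in adj.items()}
l, up = leader_distances(adj, degrees, ["p", "f"])
```

Here p reaches the stronger leader f in two hops, so l p = 2 and p → f, while f, having the maximal degree, inherits the largest l found.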
Community centers can be easily identified as local leaders with both a large k u and a long l u (see Fig. 1E), and they naturally emerge from the rooted tree revealed by local dominance (see all dash arrows in Fig. 1C and the explicit tree structure in Fig. 1D). We use the product of the rescaled degree ki and rescaled distance li to quantitatively measure the "centerness" of each node (see more details and discussions in Supplementary Note 1.2). Community centers can be determined via visual inspection for obvious gaps in the decision graph or, possibly, by sophisticated automated gap-detection methods in the future (see Fig. 1F). For example, the community centers identified by the LS method in Fig. 1 are nodes f and m. In the Zachary Karate Club network, the identified community centers correspond to the president and the instructor, which is consistent with reality 60 (see Supplementary Fig. 6).
Theoretically, this third step takes O((|C| − |M|)⟨k⟩⟨l⟩ C\M ) = O((|C| − |M|)E), where ⟨k⟩ is the average degree of the network, ⟨l⟩ C\M = ∑ u∈C\M l u /(|C| − |M|), and the size of the set of potential centers |C| is usually much smaller than N (see Supplementary Table 1). In practice, ⟨k⟩⟨l⟩ C\M is bounded above by E as it mimics a local-BFS process. As indicated by numerical results, even (|C| − |M|)⟨k⟩⟨l⟩ C\M is usually smaller than E (see Supplementary Table 1). In addition, the local-BFS processes for all local leaders can be run in parallel to further speed up the algorithm in practice.
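The centerness score used for the decision graph (Fig. 1F) can be sketched as below (illustrative; we use min-max rescaling for the degree and distance, while the exact rescaling is specified in Supplementary Note 1.2):

```python
def centerness(degrees, l):
    """Product of min-max rescaled degree and rescaled leader distance.
    Pure followers are absent from l and default to l = 1; the min-max
    normalisation here is our assumption, not the paper's exact formula."""
    kmin, kmax = min(degrees.values()), max(degrees.values())
    lmax = max(l.values())
    score = {}
    for u in degrees:
        k_hat = (degrees[u] - kmin) / (kmax - kmin) if kmax > kmin else 1.0
        l_hat = l.get(u, 1) / lmax
        score[u] = k_hat * l_hat
    return score

# Degrees and leader distances loosely inspired by Fig. 1 (values made up).
degrees = {"m": 5, "f": 4, "p": 3, "a": 1}
l = {"m": 4, "f": 4, "p": 2}  # followers such as "a" are absent, i.e. l = 1
score = centerness(degrees, l)
ranked = sorted(score, key=score.get, reverse=True)  # centers rank first
```

Nodes combining a large degree and a long leader distance separate from the rest of the ranking, which is the gap one looks for in the decision graph.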
Step 4 Finally, for all identified community centers, we remove their out-going links, if any. Community labels are then assigned along the reverse direction of the links u ← v from the community centers. This step again takes linear time O(N).
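Step 4 amounts to propagating each center's label down its tree, against the direction of the retained pointers (our illustrative sketch below; pointer maps each node to the node it points to, and centers seed their own labels, so any out-going link a center may still have is never followed, which plays the role of removing it):

```python
def assign_labels(pointer, centers):
    """Label every node with the center at the root of its dominance tree."""
    labels = {c: c for c in centers}

    def label_of(u):
        if u not in labels:
            labels[u] = label_of(pointer[u])  # walk up toward the center
        return labels[u]

    for u in pointer:
        label_of(u)
    return labels

# Pointers from the running example with centers m and f; the link f -> m
# is neutralised by seeding f with its own label.
pointer = {"a": "b", "b": "m", "c": "b", "g": "p", "p": "f", "f": "m"}
labels = assign_labels(pointer, ["m", "f"])
```

Each node inherits the label of the center reached by following its chain of pointers, so the two basins of m and f are recovered in a single linear pass.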
Taken together, the time complexity of our LS algorithm is linear in the number of edges: O(E + (|C| − |M|)⟨k⟩⟨l⟩ C\M + N) = Θ(E), which makes it among the fastest community detection algorithms. Our framework provides a new perspective on community detection. It relies only on the notion of local dominance, which is identified solely from local topological information. It does not need to iteratively optimize an objective function 9,[26][27][28][29] based on a globally randomized null model 9,23,27 or resort to iterative spreading dynamics 30,31 as other state-of-the-art algorithms do. It is important to emphasise that the communities uncovered by LS are not necessarily associated with a high density of links, as in modularity optimisation, or with specific patterns of connectivity inside versus across groups, as in methods based on stochastic block models [40][41][42] ; they are instead obtained as groups of nodes dominated by the same leader.

Figure 1. Schematic illustration of the Local Search (LS) algorithm. (A) An example network where the digits on nodes and the size of nodes indicate the degree. (B) The identification of local leaders based on local dominance by creating a forest of DAGs, as indicated by short-dash directed edges. Each node u points to any adjacent neighbor v with k v ≥ k u and k v = max{k z |z ∈ V(u)}, where V(u) is the set of neighboring nodes. In this example, nodes are traversed in lexicographical order: when node b is traversed, it points to m as k m = max{k z |z ∈ V(b)} ≥ k b ; later, when m is traversed, it has no out-going link, and so m is identified as a local leader: it does not point to any of its followers, and its remaining neighbors all have smaller degrees. When there is more than one neighbor with the same largest degree, more than one directed edge is temporarily added, e.g., node c points to both b and m as k b = k m = max{k z |z ∈ V(c)} ≥ k c ; nodes d and l also have more than one out-going link. The local leaders, which are potential community centers, are f, m, and p (indicated by dark grey color). (C) Each node randomly retains just one out-going edge, shown as a short-dash directed edge (e.g., c can point to b or m with equal probability, and similarly for l and d). Then, for each local leader u, a local BFS is performed to find its nearest local leader v with k v ≥ k u , and the shortest path length d uv on the original network is assigned to l u . Here, p → f with l p = 2, and f → m with l f = 4. In (C), short-dash arrows and long-dash arrows correspond to pure followers (with l u = 1) and local leaders (with l u ≥ 2), respectively. Each node has at most one out-going link (u → v), which can go beyond direct connections. The local leader(s) with the maximal degree has no out-going link (here node m). (D) The corresponding tree structure formed by local dominance. The scale on the left is a visual aid for reading off l i between connected nodes in the DAG. (E) The scatter plot of k i and l i for all nodes. Community centers have both a larger degree k i and a longer l i . (F) The decision graph for quantitatively determining community centers (indicated by triangles) based on the product of the rescaled degree ki and rescaled distance li (see more details in Supplementary Note 1.2). Community centers can be detected by visual inspection for obvious gaps or by sophisticated automatic detection methods. Here, two centers, nodes m and f, are identified. The color of the nodes in (C) and (D) represents the community partition, and community centers are highlighted by a darker hue of the same color.

Figure 3. Detection of multiscale community structure with different heterogeneity. The network in (A) comprises four top-level communities (labeled a, b, c, and d) with 400 nodes each and an inter-connection probability p 1 = 0.0002, each of which further comprises four second-level communities with 100 nodes and p 2 = 0.035 (e.g., community c comprises c1, c2, c3, and c4). The second-level communities are generated by the Barabási-Albert model 44 with m = 7, which leads to an average degree ⟨k⟩ = 14. (B) shows the decision graph of the LS method when analyzing the network in (A). (C) displays the tree structure formed by the local dominance between the identified centers of each community. For better clarity, community centers are named by the community label instead of the real index of the node, and we only show the tree structure of these centers. The height difference indicates the l i of the lower node. (D)-(F) are the same as (A)-(C), only changing the generation process of the second-level communities to Erdős-Rényi random networks with a connection probability p = 0.14, which leads to the same average degree ⟨k⟩ = 14. In this setting, similar to the SBM, nodes in the network are again relatively homogeneous. For better clarity, in (E) and (F) only the top sixteen centers are labeled and their affiliation relations visualized; in total, LS detects 29 centers at the second level of this network. For the multiscale network in (A), the LS method detects four top-level communities with F 1 = 0.99 and 16 second-level communities with F 1 = 0.56. For the network in (D), the LS method detects four top-level communities with F 1 = 0.89 and 29 second-level communities with F 1 = 0.29. In both cases, the Louvain algorithm only obtains four communities, corresponding to the first-level ones, with F 1 = 1; however, it cannot detect the second-level partitions. By comparing the results in (A)-(C) and in (D)-(F), we can see that our LS algorithm works better on networks with stronger heterogeneity. The results shown here correspond to a single realization; over multiple realizations, as all first- and second-level communities are equivalent, the label sequences in (B) and (E) and the tree structures in (C) and (F) may vary but have a consistent structure.

Figure 4. The community structure detected by our LS algorithm on mobility flow networks in three cities across different continents. (A) Dakar in Senegal, Africa. (B) Abidjan in Côte d'Ivoire, Africa. (C) Beijing in China, Asia. Each dot represents a location, which corresponds to a region obtained by Voronoi tessellation of the cellphone towers. Communities are indicated by different colors, and their centers are marked as stars. The decision graphs are shown in Supplementary Fig. 8.

Table 1 .
Basic statistics of networks.

Table 2 .
Comparison between the LS and Louvain algorithms on networks with ground-truth community labels. N c denotes the number of ground-truth communities in the network or identified by the different methods; the F 1 -score is a common performance measure in machine learning between predictions and ground-truth labels (see more details in Supplementary Note 2); and t (ms) is the running time of the algorithm when implemented in Python. As there are no ground-truth labels but only metadata for DBLP 48 (see Supplementary Note 2 for more discussion), we are unable to report the F 1 -score. As LS is able to detect multiscale structure, we report the number of communities detected with notable gaps: 8 large communities and 1859 smaller ones. Both the Louvain and LS algorithms are of linear time complexity, and our LS method is faster. In addition, the LS method performs better in most cases. The algorithm with the better performance is highlighted in bold. Comparisons with a broader range of classical community detection algorithms are shown in Table 3.

Table 3 .
Comparisons with classical community detection algorithms on real networks with ground-truth community labels. The algorithm with the highest F 1 -score is highlighted in bold, and the second highest is underlined.