## Abstract

Virtually all network analyses involve structural measures between pairs of vertices, or of the vertices themselves, and the large amount of symmetry present in real-world complex networks is inherited by such measures. This has practical consequences that have not yet been explored in full generality, nor systematically exploited by network practitioners. Here we study the effect of network symmetry on arbitrary network measures, and show how this can be exploited in practice in a number of ways, from redundancy compression, to computational reduction. We also uncover the spectral signatures of symmetry for an arbitrary network measure such as the graph Laplacian. Computing network symmetries is very efficient in practice, and we test real-world examples up to several million nodes. Since network models are ubiquitous in the Applied Sciences, and typically contain a large degree of structural redundancy, our results are not only significant, but widely applicable.

## Introduction

Network models of real-world complex systems have been extremely successful at revealing structural and dynamical properties of these systems^{1}. The success of this approach is due to its simplicity, versatility, and surprising universality, with common properties and principles shared by many disparate systems^{2,3,4}.

One property of interest is the presence of structural redundancies, which manifest themselves as symmetries in a network model. Symmetries relate to system robustness^{5,6}, as they identify structurally equivalent nodes, and can arise from replicative growth processes such as duplication^{7}, evolution from basic principles^{8}, or functional optimisation^{9}, and can be arbitrarily generated in model graphs^{10}. It has been shown that real-world networks possess a large number of symmetries^{8,11,12,13,14}, and that this has important consequences for network structural^{11}, spectral^{13}, and dynamical^{15,16,17,18,19} properties, for instance cluster synchronisation^{14,20,21,22,23,24,25}.

Crucially, network symmetries are inherited by any measure or metric on the network, that is, any structural measurement between pairs of vertices (such as distances), vertex-valued measurements (such as centrality) or even matrices derived from the network (such as the graph Laplacian). However, the effects of symmetry on arbitrary network measures are not yet fully understood nor exploited in network analysis, even though the network symmetry of the large but sparse graphs typically found in applications can be effectively computed and manipulated.

In this article, we show how a network representation of an arbitrary pairwise measure inherits the symmetries of the original network, and uncover the structural and spectral signatures of symmetry on this network representation. Namely, for an arbitrary network measure, we identify subgraphs where the symmetry is generated (symmetric motifs) and their structure (Fig. 1a), use the network quotient to quantify the redundancy due to symmetry (Fig. 1b), develop general compression algorithms that eliminate this redundancy, and study the reduction in computational time obtained by exploiting the presence of symmetries. The eigenvalues and eigenvectors of a network measure also reflect the presence of symmetry: we show how symmetry explains most of the discrete spectrum of an arbitrary network measure, predict the most significant eigenvalues due to symmetry, and use this to develop a fast symmetry-based eigendecomposition algorithm. We achieve remarkable empirical results in our real-world test networks: compression factors up to 26% of the original size, over 90% of the discrete spectrum explained by symmetry, and full eigendecomposition computations in up to 13% of the original time, demonstrating the practical use of symmetry in network analysis. We also discuss the implications of network symmetry for vertex measures. We illustrate our approach on several network measures, providing novel results of independent interest for the shortest path distance, communicability, the graph Laplacian, closeness centrality, and eigenvector centrality. To facilitate dissemination, we provide full implementations of all the algorithms described in this article^{26}. Our results supersede^{11,13} and help to understand other network symmetry results thereafter^{12,27,28,29,30,31}. We focus on structural and spectral properties, and symmetries commonly found in real-world networks; for a more general study of arbitrary symmetry in (networks of) dynamical systems, see refs. ^{15,16,17,18}. To keep our account as self-contained as possible, we include material well known in the algebraic graph theory literature, e.g.,^{32,33,34,35}, without any originality claim.

## Results

### Symmetry in complex networks

The notion of network symmetry is captured by the mathematical concept of *graph automorphism*^{32}. This is a permutation of the vertices (nodes) preserving adjacency, and can be expressed in matrix form using the adjacency matrix of the network. If a network (mathematically, a finite simple graph) \({\mathscr{G}}\) has *n* vertices, labelled 1 to *n*, its *adjacency matrix* *A* = (*a*_{ij}) is an *n* × *n* matrix with (*i*, *j*)-entry *a*_{ij} = 1 if there is an edge between nodes *i* and *j*, and zero otherwise. A graph automorphism *σ* is then a permutation, or relabelling, of the vertices *v* ↦ *σ*(*v*) such that (*σ*(*i*), *σ*(*j*)) is an edge only if (*i*, *j*) is an edge, or, equivalently, *a*_{ij} = *a*_{σ(i)σ(j)} for all *i*, *j*. In matrix terms, this can be written as
$$A = PAP^{T},\quad \text{equivalently}\quad AP = PA,\qquad (1)$$
where *P* is the permutation matrix corresponding to *σ*, that is, the matrix with (*i*, *j*)-entry 1 if *σ*(*i*) = *j*, and 0 otherwise. The automorphisms of a graph form a mathematical structure called a group, the *automorphism group* of \({\mathscr{G}}\). In principle, any (finite) group *G* is the automorphism group of some graph \({\mathscr{G}}\)^{32}, but, in practice, real-world networks exhibit very specific types of symmetries generated at some small subgraphs called *symmetric motifs*^{11}. Namely, we can partition the vertex set *V* into the *asymmetric core* of fixed points *V*_{0} (an automorphism *σ** moves* a vertex *i* ∈ *V* if *σ*(*i*) ≠ *i*, and *fixes* it otherwise), and the vertex sets *M*_{i} of the symmetric motifs,
$$V = {V}_{0}\cup {M}_{1}\cup \cdots \cup {M}_{k},\qquad (2)$$
as shown in Fig. 1a for a toy example. Equation (2) is called the *geometric decomposition* of the network^{11}.
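As a concrete illustration of the matrix form of an automorphism, the following minimal NumPy sketch (a hypothetical three-vertex path, not the toy network of Fig. 1a) checks that swapping the two leaves of a path is an automorphism:

```python
import numpy as np

# Hypothetical toy graph: a path 1-0-2, so the two leaves are interchangeable.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# The permutation sigma swapping the leaves: 0 -> 0, 1 -> 2, 2 -> 1.
sigma = [0, 2, 1]

# Permutation matrix P with [P]_ij = 1 iff sigma(i) = j.
n = len(sigma)
P = np.zeros((n, n), dtype=int)
for i, j in enumerate(sigma):
    P[i, j] = 1

# sigma is an automorphism iff A = P A P^T, equivalently A P = P A.
assert (P @ A @ P.T == A).all()
assert (A @ P == P @ A).all()
```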

Real-world networks typically exhibit a core of fixed points (asymmetric core), and a large number of relatively small symmetric motifs, where all the network symmetry is generated; hence the size of the automorphism group is often extremely large, in stark contrast to random graphs, which are typically asymmetric^{38}. However, each symmetry is the product (composition) of automorphisms permuting a very small number of vertices within a symmetric motif. For example, the toy graph in Fig. 1a has 2^{7} × 3! × 4! = 18,432 symmetries (size of the automorphism group), but they are generated by (all combinations of) just ten permutations, each permuting a few vertices within a symmetric motif (one permutation per motif except two for *M*_{4}, *M*_{5}, and *M*_{7}).

Each symmetric motif can be further subdivided into orbits of structurally indistinguishable nodes (shown by colour in Fig. 1a), which play the same structural role in the network and, therefore, contribute to network redundancy and thus to the robustness of the underlying system. Our notion of structurally indistinguishable nodes (nodes in the same orbit of the automorphism group) extends the notion of structurally equivalent nodes found in the social sciences^{39}, that is, nodes with the same set of neighbours. It is not equivalent: nodes in the same orbit may not have the same neighbours (e.g., *M*_{1}, *M*_{6}, or *M*_{7} in Fig. 1a).

Network symmetries of (possibly very large) real-world networks can be effectively computed, stored and manipulated (see “Methods”). For instance, we computed generators of the automorphism group, and the subsequent geometric decomposition, for real-world networks up to several million nodes and edges in a few seconds (see *t*_{1} and *t*_{2} in Table 1).

Most symmetric motifs in real-world networks (typically over 90%, see the *bsm* column in Table 1) are of a very specific type, called *basic*^{11}: they are made of one or more orbits of the same size, and every permutation of the vertices in each orbit is realisable, that is, can be extended to a network automorphism (see Fig. 1a). Basic symmetric motifs (BSMs) have a very constrained structure^{13}, which we will generalise to arbitrary network measures and exploit throughout this article. Non-basic symmetric motifs (typically branched trees, such as *M*_{7} in Fig. 1a) are called complex; they are rare and can either be studied on a case-by-case basis, or removed from the symmetry computation altogether (by ignoring the symmetries generated by them).

The definition of network automorphism Eq. (1) carries to an arbitrary *n* × *n* real matrix *A* = (*a*_{ij}). Any such matrix can be seen as the adjacency matrix of a network with *n* vertices labelled 1 to *n*, and an edge (link) from node *i* to node *j* with weight *a*_{ij} if *a*_{ij} ≠ 0, and no such edge if *a*_{ij} = 0. This means that an automorphism does not only preserve edges, but also their weights and directions. This may not be a realistic assumption for real-world weighted networks, where the weights often come from observational or experimental data, but it applies to the matrix representing a network structural measure, as we illustrate in Fig. 2 and explain next.

### Structural network measures

A *(pairwise) structural network measure* is a function *F*(*i*, *j*) on pairs of vertices which satisfies
$$F(i,j)=F(\sigma (i),\sigma (j))\qquad (3)$$
for all automorphisms \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\). Since automorphisms identify structurally indistinguishable vertices (*i* and *σ*(*i*)) and, similarly, edges ((*i*, *j*) and (*σ*(*i*), *σ*(*j*))), structural network measures are (edge) functions that depend on the network structure alone, and not, for example, on node or edge labels, or other meta-data. Most network measures are structural, including graph metrics (e.g., shortest path), and matrices algebraically derived from the adjacency matrix (e.g., Laplacian matrix). (We identify matrices *M* with pairwise measures via *F*(*i*, *j*) = [*M*]_{ij}.) In particular, structural measures are independent of the ordering or labelling of the vertices. In contrast, functions depending, explicitly or implicitly, on some vertex ordering or labelling, are not structural, for example the shortest path length through a given node.
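For example, the shortest-path distance is structural: relabelling the vertices by any automorphism leaves the distance matrix invariant, *F*(*i*, *j*) = *F*(*σ*(*i*), *σ*(*j*)). A small self-contained check (a hypothetical star graph, with BFS distances computed from scratch):

```python
import numpy as np
from collections import deque

def bfs_distances(A):
    """All-pairs shortest-path matrix of an unweighted graph with adjacency A."""
    n = len(A)
    D = np.full((n, n), np.inf)
    for s in range(n):
        D[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if A[u][v] and D[s, v] == np.inf:
                    D[s, v] = D[s, u] + 1
                    q.append(v)
    return D

# Star with centre 0 and leaves 1..3: every leaf permutation is an automorphism.
A = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]])
D = bfs_distances(A)

sigma = [0, 2, 3, 1]  # cyclic permutation of the leaves
for i in range(4):
    for j in range(4):
        assert D[i, j] == D[sigma[i], sigma[j]]  # F(i,j) = F(sigma(i), sigma(j))
```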

We can encode a structural measure *F* as a network with adjacency matrix [*F*(*A*)]_{ij} = *F*(*i*, *j*) (see Fig. 2a for the adjacency matrix and Fig. 2b, c for two examples of structural measures), and write (3) in matrix form as
$$F(A)=P\,F(A)\,{P}^{T},\quad \text{equivalently}\quad F(A)P=PF(A),\qquad (4)$$
where *P* is the permutation matrix corresponding to *σ*. Comparing this to Eq. (1), we see that the network representation of *F*, \(F({\mathscr{G}})\), with adjacency matrix *F*(*A*), inherits all the symmetries of \({\mathscr{G}}\). In particular, the network \(F({\mathscr{G}})\) has the same decomposition into symmetric motifs Eq. (2), and orbits, as \({\mathscr{G}}\). The BSMs in \(F({\mathscr{G}})\) must occur on the same vertices *M*_{i}, although they are now all-to-all weighted subgraphs in general (Fig. 2b). Nevertheless, they have a very constrained structure: the intra- and inter-orbit connectivity depends on two parameters only. Namely, each orbit in a BSM is uniquely determined by *β* = *F*(*v*_{i}, *v*_{i}) (the connectivity of a vertex with itself) and *α* = *F*(*v*_{i}, *v*_{j}), *i* ≠ *j* (the connectivity of a vertex with every other vertex in the orbit), for all *v*_{i}, *v*_{j} in the orbit. Similarly, the connectivity between two orbits Δ_{1} and Δ_{2} in the same BSM also depends on two parameters: after a suitable reordering Δ_{1} = {*v*_{1}, …, *v*_{n}} and Δ_{2} = {*w*_{1}, …, *w*_{n}}, we have *δ* = *F*(*v*_{i}, *w*_{i}) and *γ* = *F*(*v*_{i}, *w*_{j}), *i* ≠ *j*, for all 1 ≤ *i*, *j* ≤ *n*. (For a proof, see Theorem 1 in “Methods.”) This can be observed in Fig. 2c and is represented schematically in Fig. 3a, b. In particular, each BSM takes a very constrained form in the quotient, as shown schematically in Fig. 3c, d.

The results in this article apply to arbitrary structural measures, although the two most common cases in practice are the following. We call *F* *full* if *F*(*i*, *j*) ≠ 0 for all *i* ≠ *j* ∈ *V* (e.g., a graph metric), and *sparse* if *F*(*i*, *j*) = 0 if *a*_{ij} = 0, for all *i* ≠ *j* ∈ *V* (e.g., the graph Laplacian). The graph representation of \(F({\mathscr{G}})\) is an all-to-all weighted graph if *F* is full, and has a sparsity similar to \({\mathscr{G}}\) if *F* is sparse (cf. Fig. 2c).

From now on, we will assume that \({\mathscr{G}}\) is undirected and *F* is symmetric, *F*(*i*, *j*) = *F*(*j*, *i*), which may not be the case even if \({\mathscr{G}}\) is undirected (e.g., the transition probability of a random walker \(F(i,j)=\frac{{a}_{ij}}{\,{\text{deg}}\,(i)}\)), and discuss directed networks and asymmetric measures in the “Methods” section.

### Quotient network

The formal procedure to quantify and eliminate structural redundancies in a network is via its *quotient network*. This is the graph with one vertex per orbit or fixed point (see Fig. 1b) and edges representing average connectivity. Formally, if *A* is the *n* × *n* adjacency matrix of a graph \({\mathscr{G}}\), the *quotient network* with respect to a partition of the vertex set *V* = *V*_{1} ∪ … ∪ *V*_{m} is the graph \({\mathscr{Q}}\) with *m* × *m* adjacency matrix the *quotient matrix* *Q*(*A*) = (*b*_{kl}) defined by
$${b}_{kl}=\frac{1}{{n}_{k}}\sum _{i\in {V}_{k}}\sum _{j\in {V}_{l}}{a}_{ij},\qquad (5)$$
the average connectivity from a vertex in *V*_{k} to all vertices in *V*_{l}. There is an explicit matrix equation for the quotient. Consider the *n* × *m* *characteristic matrix* *S* of the partition, that is, [*S*]_{ik} = 1 if *i* ∈ *V*_{k}, and zero otherwise, and the diagonal matrix Λ = diag(*n*_{1}, …, *n*_{m}), where *n*_{k} = ∣*V*_{k}∣. Then
$$Q(A)={\Lambda }^{-1}{S}^{T}AS.\qquad (6)$$
The quotient network is a directed and weighted network in general. An alternative is to use the *symmetric quotient*, with adjacency matrix *Q*_{sym}(*A*) = Λ^{−1/2}*S*^{T}*A* *S*Λ^{−1/2}, which is weighted but undirected. Note that *Q*(*A*) and *Q*_{sym}(*A*) are spectrally equivalent matrices: they have the same eigenvalues, with eigenvectors related by the transformation **v** ↦ Λ^{1/2}**v**.
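Both quotients reduce to a couple of products with very sparse matrices. A minimal NumPy sketch on a hypothetical star graph (orbits: the centre and the three leaves), verifying their spectral equivalence:

```python
import numpy as np

# Star graph on 4 vertices: orbits are the centre {0} and the leaves {1,2,3}.
A = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], dtype=float)
orbits = [[0], [1, 2, 3]]

n, m = len(A), len(orbits)
S = np.zeros((n, m))                         # characteristic matrix of the partition
for k, orb in enumerate(orbits):
    S[orb, k] = 1
sizes = S.sum(axis=0)                        # orbit sizes n_k

Q = np.diag(1 / sizes) @ S.T @ A @ S         # left quotient Lambda^{-1} S^T A S
Li = np.diag(1 / np.sqrt(sizes))
Qsym = Li @ S.T @ A @ S @ Li                 # symmetric quotient

# Q and Q_sym are spectrally equivalent: same eigenvalues.
assert np.allclose(sorted(np.linalg.eigvals(Q).real),
                   sorted(np.linalg.eigvalsh(Qsym)))
```

For the star, both quotients have eigenvalues ±√3, the non-redundant part of the star's spectrum {±√3, 0, 0}.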

In the context of symmetries, we will always refer to the quotient with respect to the partition of the vertex set into orbits. This quotient removes all the original symmetries from the network: if *σ*(*v*_{i}) = *v*_{j}, then *v*_{i} and *v*_{j} are in the same orbit and hence represented by the same vertex in the quotient network, which is then fixed by *σ*. We can, therefore, infer and quantify properties arising from redundancy alone by comparing a network to its quotient. The quotients of real-world networks are often significantly smaller (in vertex and edge size) than the original networks^{11,12} (see \({\widetilde{n}}_{{\mathscr{Q}}}\) and \({\widetilde{m}}_{{\mathscr{Q}}}\) in Table 1), and this reduction quantifies the structural redundancy present in an empirical network. Not every real-world network is equally symmetric, and, in our test networks, we give examples of network quotient reductions ranging from about 50% to just 2%. Computing the network quotient involves multiplication by very sparse matrices (Λ is diagonal and *S* has one non-zero element per row) and hence is computationally efficient (a few seconds in all our test networks).

### Redundancy in network measures

The amount of structural redundancy in a network (measured by \({\widetilde{n}}_{{\mathscr{Q}}}={n}_{{\mathscr{Q}}}/{n}_{{\mathscr{G}}}\)) is amplified in the computation of a typical (full) network measure (see Eq. (7) below). It is therefore natural to ask how to quantify, and eliminate, the symmetry-induced redundancy. If a network has \({n}_{{\mathscr{G}}}\) vertices and \({n}_{{\mathscr{Q}}}\) orbits, there are \({n}_{{\mathscr{G}}}^{2}\) pairs of vertices but only \({n}_{{\mathscr{Q}}}^{2}\) pairs of orbits, achieving a reduction, or compression ratio, of
$${c}_{\text{full}}=\frac{{n}_{{\mathscr{Q}}}^{2}}{{n}_{{\mathscr{G}}}^{2}}\qquad (7)$$
for a full network measure, typically much smaller than the ratio \({\widetilde{n}}_{{\mathscr{Q}}}={n}_{{\mathscr{Q}}}/{n}_{{\mathscr{G}}}\). On the other hand, for a sparse network measure, we only need to consider edge values, hence the reduction is the ratio between the number of edges in the graph and in its quotient
$${c}_{\text{sparse}}=\frac{{m}_{{\mathscr{Q}}}}{{m}_{{\mathscr{G}}}}={\widetilde{m}}_{{\mathscr{Q}}}.\qquad (8)$$
For an arbitrary network measure, its compression ratio, which measures the redundancy present (zero values excluded), will range between *c*_{full} and *c*_{sparse}. The compression ratios *c*_{full} and \({c}_{\text{sparse}}={\widetilde{m}}_{{\mathscr{Q}}}\) are shown in Table 1 for our test networks. We found a remarkable amount of redundancy (up to 70%) due to symmetry alone (Fig. 4).

### Symmetry compression

A natural question, with practical consequences for network analysis, is whether we can easily “eliminate” the symmetry-induced redundancies. This means storing only one value of a network function for each orbit of structurally indistinguishable nodes or edges, all sharing the same such value. Although this has been explored in particular cases, such as shortest path distances^{27}, here we present a general treatment. A simple method is to use the quotient matrix
$$B={S}^{T}AS,\qquad (9)$$
which is easier to store than Λ^{−1}*S*^{T}*A**S*. This matrix achieves a compression ratio between *c*_{full} and *c*_{sparse} (by using a sparse representation of *B*), as explained before. From this matrix, we can recover all but the internal connectivity inside a symmetric motif, which is replaced by the average connectivity. Namely, let us define
$${\overline{a}}_{ij}=\frac{{b}_{kl}}{{n}_{i}{n}_{j}},\quad {v}_{i}\in {\Delta }_{k},\ {v}_{j}\in {\Delta }_{l},\qquad (10)$$
where *n*_{i}, respectively, *n*_{j}, is the size of the orbit containing *v*_{i}, respectively, *v*_{j} (note that these orbit sizes can be obtained as the column sums of the characteristic matrix *S*). Then one can show (“Methods”, Theorem 2) that
$${\overline{a}}_{ij}={a}_{ij}\quad \text{for every external pair}\ ({v}_{i},{v}_{j}),\qquad (11)$$
where we call a pair of vertices *external* if they belong to two different symmetric motifs, and *internal* otherwise (here Δ_{k} and Δ_{l} denote the orbits containing *v*_{i} and *v*_{j}, respectively). Hence, if we are not interested in the exact internal connectivity (inside a symmetric motif), or it can be recovered easily by other means (e.g., one motif at a time), we can use this simple method to eliminate all the symmetry-induced redundancies on an arbitrary network measure encoded as a matrix *A*. We have included simple *average symmetry compression* and *decompression* algorithms (Algs. 1 and 2), where *A*_{avg} is the matrix with entries \({\overline{a}}_{ij}\). The original \({n}_{{\mathscr{G}}}\times {n}_{{\mathscr{G}}}\) matrix *A* is stored using the \({n}_{{\mathscr{Q}}}\times {n}_{{\mathscr{Q}}}\) quotient matrix *B* plus a very sparse (\({n}_{{\mathscr{G}}}\) non-zero elements) characteristic matrix *S*.

### Algorithm 1.

Average symmetry compression.

### Algorithm 2.

Average symmetry decompression.
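The algorithm listings above are provided as figures; the following NumPy sketch (our own illustration, not the reference MATLAB implementation^{26}, and assuming the stored quotient matrix is *B* = *S*^{T}*A**S*) captures the idea: compress to *B* plus the sparse characteristic matrix *S*, and decompress to the average matrix, which agrees with *A* on external pairs:

```python
import numpy as np

def compress(A, orbits):
    """Average symmetry compression (sketch of Alg. 1):
    store the m x m matrix B = S^T A S plus the sparse characteristic matrix S."""
    n, m = len(A), len(orbits)
    S = np.zeros((n, m))
    for k, orb in enumerate(orbits):
        S[orb, k] = 1
    return S.T @ A @ S, S

def decompress(B, S):
    """Average symmetry decompression (sketch of Alg. 2):
    a_avg[i, j] = b[k, l] / (n_k * n_l) for v_i in orbit k, v_j in orbit l."""
    Li = np.diag(1 / S.sum(axis=0))      # orbit sizes = column sums of S
    return S @ (Li @ B @ Li) @ S.T

# Hypothetical star graph: centre {0} (asymmetric core) and leaf orbit {1,2,3}.
A = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], dtype=float)
B, S = compress(A, [[0], [1, 2, 3]])
A_avg = decompress(B, S)
assert np.allclose(A_avg[0, 1:], A[0, 1:])   # external entries recovered exactly
```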

The vast majority of edges in the network representation of a network measure are external (at least 99.999% for a full measure in our test networks, see *int*_{f} in Table 1), and hence the information loss by using *A*_{avg} instead of *A* is minimal. We can nevertheless enforce *lossless compression*, by storing the intra-motif connectivity separately. Indeed, we can exploit the fact that most symmetric motifs in empirical networks are basic, and hence each orbit, or pair of orbits, is uniquely determined by two parameters (Fig. 3). If we disregard the symmetries generated at non-basic symmetric motifs, the corresponding quotient, called *basic quotient*, written \({{\mathscr{Q}}}_{\text{basic}}\), leaves non-basic motifs unchanged and retains most of the symmetry in a typical real-world network. By annotating this quotient, we can recover the original network representation of the network measure exactly. We have implemented lossless compression and decompression algorithms (“Methods”, Algorithms 6 and 7), and evaluated them in our test networks (Fig. 4).

### Computational reduction

Network symmetries can also reduce the computational time of evaluating an arbitrary network measure *F*. By Eq. (3), we only need to evaluate *F* on orbits, resulting in a computational reduction ratio between \({\widetilde{m}}_{{\mathscr{Q}}}\) and \({\widetilde{n}}_{{\mathscr{Q}}}^{2}\) (Table 1) for sparse, respectively full, network measures. Of course, this assumes that the computation on each pair of vertices *F*(*i*, *j*) is independent of one another, which is often not the case. Moreover, the calculation of *F*(*i*, *j*) is still performed on the whole network \({\mathscr{G}}\).

A more substantial computational reduction can be obtained by evaluating *F* on the (often much smaller) quotient network instead. We call *F* *quotient recoverable* if it can be applied to the quotient network \({\mathscr{Q}}\), and \(F({\mathscr{G}})\) can be recovered from \(F({\mathscr{Q}})\), for all networks \({\mathscr{G}}\). Note that this may involve, beyond evaluating \(F({\mathscr{Q}})\), an independent (hence parallelizable) computation on each symmetric motif (typically a very small graph). By evaluating *F* in the quotient network, we can obtain very substantial computational time savings, depending on the amount of symmetry present and the computational complexity of *F*. Depending on the network measure, it may not be possible to recover \(F({\mathscr{G}})\) exactly from \(F({\mathscr{Q}})\), but only partially. We call a network measure *F* *partially quotient recoverable* if it can be applied to a quotient network \({\mathscr{Q}}\) of a network \({\mathscr{G}}\), and all the external edges of \(F({\mathscr{G}})\) can be recovered from \(F({\mathscr{Q}})\), for all networks \({\mathscr{G}}\). Since the quotient averages the network connectivity, we can often recover the average values of *F* within symmetric motifs. We call *F* *average quotient recoverable* if, in addition to external edges, the average intra-motif edges can be recovered from \(F({\mathscr{Q}})\). A typical situation is when \(F({\mathscr{Q}})\) equals the quotient of *F*, that is, in symbols,
$$F(Q(A))=Q(F(A)).\qquad (12)$$
In the “Applications” section, we will show that communicability is average quotient recoverable, and shortest path distance is partially, but not average, quotient recoverable. Not every measure can be (partially) recovered from the quotient, for example the number of distinct paths between two vertices, as the internal connectivity within each symmetric motif is lost, and replaced by its average connectivity, in the quotient. Note that the word “partially” can be misleading: typically almost all edges are external (see ext_{s} and int_{f} in Table 1).
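As a toy instance of this commutation, consider the walk-counting measure *F*(*A*) = *A*^{3} (communicability with *f*(*x*) = *x*^{3}): equitability of the orbit partition (*A**S* = *S**B*) gives *A*^{k}*S* = *S**B*^{k}, so the quotient of *F*(*A*) equals *F* of the quotient. A quick numerical check on a hypothetical star graph:

```python
import numpy as np

# Star graph; orbits {0} and {1,2,3}; F(A) = A^3 counts walks of length 3.
A = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], dtype=float)
S = np.array([[1,0],[0,1],[0,1],[0,1]], dtype=float)   # characteristic matrix
Li = np.diag(1 / S.sum(axis=0))                        # Lambda^{-1}

Q = lambda M: Li @ S.T @ M @ S                         # left quotient of M

# Q(A^3) == Q(A)^3: the measure is (average) quotient recoverable.
assert np.allclose(Q(np.linalg.matrix_power(A, 3)),
                   np.linalg.matrix_power(Q(A), 3))
```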

The resulting computational time reduction obtained by evaluating *F* in the quotient can be very substantial, as illustrated by several popular network measures in our test networks (Fig. 5).

### Spectral signatures of symmetry

The spectrum of the network’s adjacency matrix relates to a multitude of structural and dynamical properties^{1}. The presence of symmetries is reflected in the spectrum of the network^{13}, and indeed in the spectrum of any network measure. Symmetries give rise to high-multiplicity eigenvalues (shown as “peaks” in the spectral density) and, in fact, we can explain and predict most of the discrete part of the spectrum of an arbitrary network measure on a typical real-world network.

Let *A* be the *n* × *n* adjacency matrix of a (possibly weighted) network (such as the network representation of a network measure). First, note that symmetry naturally produces high-multiplicity eigenvalues, since
$$A(P{\bf{v}})=P(A{\bf{v}})=\lambda (P{\bf{v}}),\qquad (13)$$
where (*λ*, **v**) is an eigenpair of *A* and *P* the permutation matrix of a network automorphism (Eq. (1)). This gives another eigenpair (*λ*, *P***v**) whenever **v** and *P***v** are linearly independent (obviously not always the case).

Let *B* = *Q*(*A*) be the *m* × *m* quotient of *A* (Eq. (6)) with respect to the partition of the vertex set into orbits. This partition satisfies a regularity condition called *equitability*^{35}, which can be written in matrix form as *A**S* = *S**B*, where *S* is the characteristic matrix of the partition. In particular, if (*λ*, **v**) is a quotient eigenpair, then (*λ*, *S***v**) is a parent eigenpair,
$$A(S{\bf{v}})=SB{\bf{v}}=\lambda (S{\bf{v}}).\qquad (14)$$
In fact, one can show (“Methods”, Theorem 3) that *A* has an eigenbasis of the form
$$\{S{{\bf{v}}}_{1},\ldots ,S{{\bf{v}}}_{m},{{\bf{w}}}_{1},\ldots ,{{\bf{w}}}_{n-m}\},\qquad (15)$$
where {**v**_{1}, …, **v**_{m}} is any eigenbasis of *B*, and *S*^{T}**w**_{j} = 0 for all *j*. We can think of a vector \({\bf{v}}\in {{\mathbb{R}}}^{m}\), respectively \({\bf{w}}\in {{\mathbb{R}}}^{n}\), as a vector on (the vertices of) the quotient, respectively the parent, network. Then, each vector *S***v**_{i} equals the vector **v**_{i} *lifted* to the parent network by repeating the value on each orbit. Similarly, *S*^{T}**w**_{j} = 0 means that the sum of the entries of **w**_{j} on each orbit is 0. All in all, we can always find an eigenbasis of *A* consisting of *non-redundant* eigenvectors {*S***v**_{1}, …, *S***v**_{m}} arising from a quotient eigenbasis by repeating values on each orbit, and *redundant* eigenvectors {**w**_{1}, …, **w**_{n−m}} arising from the network symmetries, which add up to zero on each orbit (hence “disappearing” in the quotient). We call the corresponding eigenvalues *non-redundant* and *redundant*, respectively.

Analogous to the way that symmetry is generated at symmetric motifs, the redundant eigenvectors and eigenvalues arise directly from certain eigenvectors and eigenvalues of the symmetric motifs, considered as networks on their own (Fig. 6). In fact, each symmetric motif \({\mathscr{M}}\) contributes the same (called *redundant*) eigenpairs to *any* network containing \({\mathscr{M}}\) as a symmetric motif: One can show (“Methods”, Theorem 4) that if \({\mathscr{M}}\) is a symmetric motif of a network \({\mathscr{G}}\) and (*λ*, **w**) is a redundant eigenpair of \({\mathscr{M}}\) (that is, the values of **w** add up to zero on each orbit of \({\mathscr{M}}\)), then \((\lambda ,\widetilde{{\bf{w}}})\) is an eigenpair of \({\mathscr{G}}\), where \(\widetilde{{\bf{w}}}\) is equal to **w** on (the vertices of) \({\mathscr{M}}\), and zero elsewhere. We call such a vector \(\widetilde{{\bf{w}}}\)*localised* on the motif \({\mathscr{M}}\)^{13}, as it is zero outside the motif. Moreover, if \({\mathscr{M}}\) has *n* vertices and *k* orbits, then it has an eigenbasis consisting of *n* − *k* redundant eigenpairs, which are inherited by any network containing \({\mathscr{M}}\) as a symmetric motif (Fig. 6, Theorem 4 in “Methods”).
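A minimal check of this localisation property, on a hypothetical five-vertex graph whose only symmetric motif is a pair of leaves attached to the same vertex:

```python
import numpy as np

# Graph with symmetric motif M = {1, 2} (two leaves on vertex 0) and an
# asymmetric path 0-3-4 outside the motif.
A = np.array([[0,1,1,1,0],
              [1,0,0,0,0],
              [1,0,0,0,0],
              [1,0,0,0,1],
              [0,0,0,1,0]])

# The motif's redundant eigenpair (0, (1, -1)) lifts to the whole network as a
# localised eigenvector: zero outside the motif, summing to zero on the orbit.
w = np.array([0, 1, -1, 0, 0])
assert np.allclose(A @ w, 0 * w)   # (0, w) is an eigenpair of the full network
```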

Furthermore, since most symmetric motifs in real-world networks are basic, and thus have a very constrained structure (Fig. 3), we can in fact determine the redundant spectrum of BSMs with up to a few orbits, that is, we can predict where the most significant “peaks” in the spectral density of an arbitrary network function will occur. The formulae for the redundant spectra of BSMs with one or two orbits (which covers most BSMs, up to 99% of them in our test networks) are given in Table 2.

We now give more details of the computation of the redundant spectrum of BSMs up to two orbits (Table 2), with full details in the “Methods” section. A BSM with one orbit is an (*α*, *β*)-uniform graph \({K}_{n}^{\alpha ,\beta }\) with adjacency matrix \({A}_{n}^{\alpha ,\beta }=({a}_{ij})\) given by *a*_{ij} = *α* and *a*_{ii} = *β* for all *i* ≠ *j*. Then \({K}_{n}^{\alpha ,\beta }\) has eigenvalues (*n* − 1)*α* + *β* (non-redundant), with multiplicity 1, and − *α* + *β* (redundant), with multiplicity *n* − 1. The corresponding eigenvectors are **1**, the constant vector 1 (non-redundant), and **e**_{i}, the vectors with non-zero entries 1 at position 1, and − 1 at position *i*, 2 ≤ *i* ≤ *n* (redundant). For unweighted graphs without loops (*β* = 0, *α* ∈ {0, 1}), we recover the redundant eigenvalues 0 and − 1 predicted in ref. ^{13}.
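The one-orbit formulae are easy to verify numerically; here with illustrative weights *α* = 2, *β* = 0.5 and *n* = 5:

```python
import numpy as np

n, alpha, beta = 5, 2.0, 0.5
# (alpha, beta)-uniform graph K_n^{alpha,beta}: alpha off-diagonal, beta diagonal.
A = alpha * (np.ones((n, n)) - np.eye(n)) + beta * np.eye(n)

evals = np.sort(np.linalg.eigvalsh(A))
# Redundant eigenvalue -alpha + beta with multiplicity n - 1 ...
assert np.allclose(evals[:n-1], beta - alpha)
# ... and non-redundant eigenvalue (n-1)*alpha + beta with multiplicity 1.
assert np.allclose(evals[-1], (n - 1) * alpha + beta)
```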

A BSM with two orbits must be a uniform join of the form \({K}_{n}^{{\alpha }_{1},{\beta }_{1}}\mathop{\leftrightarrow }\limits^{\gamma ,\delta }{K}_{n}^{{\alpha }_{2},{\beta }_{2}}\) (Fig. 3). Let *κ*_{1} and *κ*_{2} be the two solutions of the quadratic equation *c**κ*^{2} + (*b* − *a*)*κ* − *c* = 0, where *a* = *α*_{1} − *β*_{1}, *b* = *α*_{2} − *β*_{2} and *c* = *γ* − *δ*. Then, the redundant eigenvalues of this BSM are (“Methods”, Theorem 5)
$${\lambda }_{i}=-(b+c{\kappa }_{i}),\quad i=1,2,\qquad (16)$$
each with multiplicity *n* − 1, with eigenvectors (*κ*_{1}**e**_{i}∣**e**_{i}) and (*κ*_{2}**e**_{i}∣**e**_{i}), respectively, 2 ≤ *i* ≤ *n*. For unweighted graphs without loops, we recover the redundant eigenvalues predicted in ref. ^{13}, that is,
$$\lambda \in \left\{0,\ \pm 1,\ -2,\ \frac{-1\pm \sqrt{5}}{2}\right\}=\left\{0,\ \pm 1,\ -2,\ \varphi -1,\ -\varphi \right\},\qquad (17)$$
where \(\varphi =\frac{1+\sqrt{5}}{2}\), the golden ratio.
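The two-orbit case can likewise be checked numerically. The sketch below (with illustrative, hypothetical weights) builds a uniform join, solves the quadratic for *κ*_{1,2}, and verifies that the corresponding redundant eigenvalues, written here as −(*b* + *c**κ*_{i}) as follows from the eigenvector form (*κ*_{i}**e**_{i}∣**e**_{i}), each occur with multiplicity *n* − 1:

```python
import numpy as np

# Uniform join of two one-orbit BSMs, orbit size n, with illustrative weights.
n = 3
a1, b1, a2, b2, g, d = 2.0, 0.5, 1.0, 0.0, 0.7, 0.3   # alpha_i, beta_i, gamma, delta
K1 = a1 * (np.ones((n, n)) - np.eye(n)) + b1 * np.eye(n)
K2 = a2 * (np.ones((n, n)) - np.eye(n)) + b2 * np.eye(n)
C  = g  * (np.ones((n, n)) - np.eye(n)) + d  * np.eye(n)
A = np.block([[K1, C], [C, K2]])

# Quadratic c*k^2 + (b - a)*k - c = 0, with a = a1-b1, b = a2-b2, c = g-d.
a, b, c = a1 - b1, a2 - b2, g - d
k1, k2 = np.roots([c, b - a, -c])

evals = np.linalg.eigvalsh(A)
for k in (k1, k2):
    lam = -(b + c * k)                               # redundant eigenvalue
    assert np.isclose(evals, lam).sum() == n - 1     # multiplicity n - 1
```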

### Eigendecomposition algorithm

Decoupling the contribution to the network spectrum from the symmetric motifs and from the quotient network, as explained above, naturally leads to an eigendecomposition algorithm that exploits the presence of symmetries: the spectrum and eigenbasis of an undirected network (equivalently, a diagonalisation of its adjacency matrix *A* = *U**D**U*^{T}) can be obtained from those of the quotient, and of the symmetric motifs, reducing the computational time (cubic in the number of vertices) to up to a third in our test networks (Fig. 5, left column of the spectral case), in line with our predictions (\(sp={\widetilde{n}}_{{\mathscr{Q}}}^{3}\) in Table 1). The algorithm is shown and explained below. A MATLAB implementation is available at a public repository^{26}.

Our eigendecomposition algorithm (Algorithm 3) applies to any symmetric matrix with symmetries (identifying a matrix with the network it represents). It first computes the eigendecomposition of the quotient matrix and then, for each motif, the redundant eigenpairs. Namely, it first computes the spectral decomposition eig of the symmetric quotient *B*_{sym} = Λ^{−1/2}*S*^{T}*A**S*Λ^{−1/2}, where Λ is the diagonal matrix of the orbit sizes (which can be obtained as the column sums of *S*). This matrix is symmetric and has the same eigenvalues as the left quotient. Moreover, if \({B}_{\text{sym}}={U}_{q}{D}_{q}{U}_{q}^{-1}\) then the left quotient eigenvectors are the columns of Λ^{−1/2}*U*_{q}. These become, in turn, eigenvectors of *A* by repeating their values on each orbit, and can be obtained mathematically by left multiplying by the characteristic matrix *S*. Then, for each motif, we compute the redundant eigenpairs using a null space matrix (explained below), storing eigenvalues and localised (zero outside the motif) eigenvectors.

Only redundant eigenvectors of a symmetric motif (that is, those which add up to zero on each orbit) become eigenvectors of *A* by extending them as zero outside the symmetric motif. Therefore, we need to construct redundant eigenvectors from the output of eig on each motif (the spectral decomposition of the corresponding submatrix). If the columns of \({U}_{\lambda }=\left(\begin{array}{ccc}{{\bf{v}}}_{1}&\ldots &{{\bf{v}}}_{k}\end{array}\right)\) are *λ*-eigenvectors of a symmetric motif with characteristic matrix of the orbit partition *S*_{sm}, we need to find linear combinations **z** such that
$${S}_{\,\text{sm}\,}^{T}({U}_{\lambda }{\bf{z}})={\bf{0}}.\qquad (18)$$
Therefore, if the matrix *Z* ≠ 0 represents the null space of \({S}_{\,\text{sm}\,}^{T}{U}_{\lambda }\), that is, \({S}_{\,\text{sm}\,}^{T}{U}_{\lambda }Z=0\) and *Z*^{T}*Z* = *I*, then the columns of *U*_{λ}*Z* are precisely the redundant *λ*-eigenvectors. This is implemented in Algorithm 3 within the innermost **for** loop.

### Algorithm 3.

Eigendecomposition algorithm.
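As the listing is provided as a figure, here is an independent NumPy sketch of the algorithm's structure (our own simplification, not the reference MATLAB code^{26}): eigendecompose *B*_{sym}, lift with *S*Λ^{−1/2}, then extract motif-localised redundant eigenpairs via a null-space computation (done here with an SVD):

```python
import numpy as np

def symmetry_eig(A, orbits, motifs):
    """Sketch of Algorithm 3: full eigendecomposition of a symmetric matrix A
    from (i) its symmetric quotient and (ii) redundant, motif-localised
    eigenpairs. `orbits` lists vertex orbits; `motifs` groups orbit indices."""
    n, m = len(A), len(orbits)
    S = np.zeros((n, m))
    for k, orb in enumerate(orbits):
        S[orb, k] = 1
    Li = np.diag(1 / np.sqrt(S.sum(axis=0)))           # Lambda^{-1/2}

    # Non-redundant eigenpairs: from B_sym, lifted to A via S @ Lambda^{-1/2}.
    dq, Uq = np.linalg.eigh(Li @ S.T @ A @ S @ Li)
    evals = list(dq)
    evecs = [S @ Li @ Uq[:, i] for i in range(m)]

    # Redundant eigenpairs: per motif, keep eigenvector combinations summing
    # to zero on each orbit, extended by zero outside the motif.
    for motif_orbits in motifs:
        verts = sorted(v for k in motif_orbits for v in orbits[k])
        dm, Um = np.linalg.eigh(A[np.ix_(verts, verts)])
        Ssm = S[np.ix_(verts, motif_orbits)]
        for lam in np.unique(np.round(dm, 10)):
            Ul = Um[:, np.isclose(dm, lam)]
            _, sv, Vt = np.linalg.svd(Ssm.T @ Ul)      # null space via SVD
            rank = int(np.sum(sv > 1e-10))
            for col in (Ul @ Vt[rank:].T).T:           # redundant combinations
                w = np.zeros(n)
                w[verts] = col
                evals.append(lam)
                evecs.append(w)
    return np.array(evals), np.column_stack(evecs)

# Hypothetical example: star graph, orbits {0} and {1,2,3}, one motif (orbit 1).
A = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], dtype=float)
evals, V = symmetry_eig(A, orbits=[[0], [1, 2, 3]], motifs=[[1]])
assert np.allclose(A @ V, V * evals)       # every column is an eigenvector
assert np.allclose(V.T @ V, np.eye(4))     # orthonormal eigenbasis
```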

### Vertex measures

We have so far considered network measures of the form *F*(*i*, *j*), where *i* and *j* are vertices. However, many important network measurements are vertex based, that is, of the form *G*(*i*) for each vertex *i*. We say that a vertex measure *G* is *structural* if it only depends on the network structure and, therefore, satisfies

$$G(\sigma (i))=G(i)$$

for each automorphism \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\), that is, it is constant on orbits (Fig. 1).

Although for vertex measures we do not have a network representation, we can still exploit the network symmetries. First, *G* only needs to be computed/stored once per orbit, resulting in a reduction/compression ratio of \({\widetilde{n}}_{{\mathscr{Q}}}={n}_{{\mathscr{Q}}}/{n}_{{\mathscr{G}}}\) (Table 1).

Secondly, when quotient recovery holds (that is, we can recover *G* from its values on the quotient and symmetry information alone), we obtain a further computational reduction (Fig. 5), depending on the computational complexity of *G*. Finally, many vertex measures nevertheless arise from a pairwise function, such as *G*(*i*) = *F*(*i*, *i*) (subgraph centrality from communicability), or \(G(i)=\frac{1}{n}{\sum }_{j}F(i,j)\) (closeness centrality from shortest path distance), allowing the symmetry-induced results on *F* to carry over to *G*.

### Applications

We illustrate our methods on several popular pairwise and vertex-based network measures. Although novel and of independent interest, these are example applications: our methods are general, and the reader should be able to adapt our results to the network measure of their choice.

*Adjacency matrix*: the methods in this paper can be applied to the network itself, that is, to its adjacency matrix. We recover the structural and spectral results in refs. ^{11,13}, and the quotient compression ratio reported in ref. ^{12}, here \({c}_{\text{sparse}}={\widetilde{m}}_{{\mathscr{Q}}}\) in Table 1. The network (adjacency) eigendecomposition can be significantly sped up by exploiting symmetries (Fig. 5).

*Communicability*: communicability is a very general choice of structural measure, consisting of any analytical function *f*(*x*) = ∑*a*_{n}*x*^{n} applied to the adjacency matrix, \(f(A)=\mathop{\sum }\nolimits_{n = 0}^{\infty }{a}_{n}{A}^{n}\), and it is a natural measure of network connectivity, since the matrix power *A*^{k} counts walks of length *k*^{37}. The most common choice of coefficients is \({a}_{n}=\frac{1}{n!}\), which gives the exponential matrix \({e}^{A}=\mathop{\sum }\nolimits_{n = 0}^{\infty }\frac{{A}^{n}}{n!}\). Communicability is a structural network measure and its network representation, the graph \(f({\mathscr{G}})\) with adjacency matrix *f*(*A*), inherits all the symmetries of \({\mathscr{G}}\) and thus has the same symmetric motifs and orbits. The BSMs are uniform joins of orbits, and each orbit is a uniform graph (Figs. 3 and 2b) characterised by the communicability of a vertex to itself (a natural measure of centrality^{36}) and the communicability between distinct vertices. As a full network measure, the compression ratio *c*_{full} applies (Table 1), indicating the fraction of storage needed by using the quotient to eliminate redundancies (Fig. 4). Moreover, average quotient recovery holds for communicability since *f*(*Q*(*A*)) = *Q*(*f*(*A*)) (Methods, Theorem 6). Alternatively, we can use the spectral decomposition algorithm on the adjacency matrix (*A* = *U**D**U*^{T} implies *f*(*A*) = *U**f*(*D*)*U*^{T}), reducing the computation, typically cubic in the number of vertices, by \(sp={\widetilde{n}}_{{\mathscr{Q}}}^{3}\) (Table 1 and Fig. 5). For the spectral results, note that *f*(*A*) = *U**f*(*D*)*U*^{T} has eigenvalues *f*(*λ*) and the same eigenvectors as *A*. Thus, the values *f*(*λ*), with *λ* ranging over the redundant eigenvalues of *A*, account for most of the discrete part of the spectrum of *f*(*A*), for the adjacency matrix *A* of a typical (undirected, unweighted) real-world network (Eq. (18)).
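These symmetry properties of communicability are easy to check numerically (a sketch under the assumption that *A* is symmetric, computing *e*^{A} via *f*(*A*) = *U**f*(*D*)*U*^{T}; the toy star graph is ours):

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential e^A of a symmetric matrix via f(A) = U f(D) U^T."""
    evals, U = np.linalg.eigh(A)
    return U @ np.diag(np.exp(evals)) @ U.T

# Star K_{1,3}: centre 0, leaves {1,2,3} form one orbit.
A = np.zeros((4, 4)); A[0, 1:] = A[1:, 0] = 1.0
C = expm_sym(A)
# Within the orbit: one diagonal value (beta) and one off-diagonal value (alpha).
assert np.allclose(C[1, 1], C[2, 2]) and np.allclose(C[2, 2], C[3, 3])
assert np.allclose(C[1, 2], C[1, 3]) and np.allclose(C[1, 3], C[2, 3])
# The centre, outside the orbit, joins it uniformly (property (iii) of Theorem 1).
assert np.allclose(C[0, 1], C[0, 2]) and np.allclose(C[0, 2], C[0, 3])
```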

*Shortest path distance*: this is the simplest metric on a (connected) network, namely the length of a shortest path between vertices. A *path* of *length* *n* is a sequence (*v*_{1}, *v*_{2}, …, *v*_{n+1}) of distinct vertices, except possibly *v*_{1} = *v*_{n+1}, such that *v*_{i} is connected to *v*_{i+1} for all 1 ≤ *i* ≤ *n*. The *shortest path distance* \({d}^{{\mathscr{G}}}(u,v)\) is the length of a shortest (minimal length) path from *u* to *v*. If *p* = (*v*_{1}, *v*_{2}, …, *v*_{n}) is a path and \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\), we define *σ*(*p*) = (*σ*(*v*_{1}), *σ*(*v*_{2}), …, *σ*(*v*_{n})), also a path since *σ* is a bijection.

One can show that (i) automorphisms preserve shortest paths and their lengths; (ii) shortest paths between vertices in different symmetric motifs do not contain intra-orbit edges; and (iii) shortest path distance is a partially quotient recoverable structural measure (“Methods”, Theorem 7). In particular, automorphisms *σ* preserve the shortest path metric, \(d(i,j)=d\left(\sigma (i),\sigma (j)\right)\), and we can compute shortest distances from the quotient,

$${d}^{{\mathscr{G}}}(i,j)={d}^{{\mathscr{Q}}}({V}_{i},{V}_{j}),$$

whenever *V*_{i} and *V*_{j} are orbits in different symmetric motifs. This accounts for all but the (small) intra-motif distances and reduces the computation as shown in Fig. 5.
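A minimal illustration of inter-motif distance recovery (the toy graph and helper names are ours; the quotient is treated as an unweighted graph on the orbits, dropping intra-orbit edges):

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted BFS distances from source over an adjacency-set dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Parent: two "double leaf" motifs {1,2} and {5,6} joined by the path 0-3-4.
edges = [(0, 1), (0, 2), (0, 3), (3, 4), (4, 5), (4, 6)]
adj = {u: set() for u in range(7)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

# Quotient over the orbits {0}, {1,2}, {3}, {4}, {5,6}.
orbits = [[0], [1, 2], [3], [4], [5, 6]]
orbit_of = {v: k for k, orbit in enumerate(orbits) for v in orbit}
qadj = {k: set() for k in range(len(orbits))}
for u, v in edges:
    if orbit_of[u] != orbit_of[v]:        # intra-orbit edges are dropped
        qadj[orbit_of[u]].add(orbit_of[v]); qadj[orbit_of[v]].add(orbit_of[u])

# Inter-motif distances agree with quotient distances.
dq = bfs_distances(qadj, orbit_of[1])
dg = bfs_distances(adj, 1)
assert dg[5] == dq[orbit_of[5]] == 4
assert dg[4] == dq[orbit_of[4]] == 3
```

The BFS now runs on \({n}_{{\mathscr{Q}}}\) orbits instead of \({n}_{{\mathscr{G}}}\) vertices, which is the source of the computational reduction.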

Distances between points within the same motif cannot in general be directly recovered from the quotient, not even for BSMs. (Consider for instance the double star, motif *M*_{1}, in Fig. 1: the distance from the top red to the bottom blue vertex is three, while in the quotient it is one.) In general, therefore, the shortest path distance is partially, but not average, quotient recoverable. Intra-motif distances, if needed, could still be recovered one motif at a time.

Note that these results can be exploited for other graph-theoretic notions defined in terms of distance, for example eccentricity (and thus radius or diameter), which only depends on maximal distances and thus can be computed directly in the quotient.

In terms of symmetry compression, the compression ratio *c*_{full} applies, accounting for the amount of structural redundancy due solely to symmetries. The spectral results, although perhaps less relevant, still apply to \(d({\mathscr{G}})\), the graph encoding pairwise shortest path distances. The adjacency matrix \(d(A)=({d}^{{\mathscr{G}}}(i,j))\) is non-zero outside the diagonal, hence \(d({\mathscr{G}})\) is an all-to-all weighted network without self-loops and with integer weights, and so is each symmetric motif. Using the formula in Table 2, we can easily compute values of the most significant part of the discrete spectrum (redundant eigenvalues) of *d*(*A*), namely −3, −2, −1, 0, \(-2\pm \sqrt{2}\), \(-3\pm \sqrt{2}\), \(\frac{-3\pm \sqrt{5}}{2}\), \(\frac{-5\pm \sqrt{5}}{2}\), and \(\frac{-5\pm \sqrt{13}}{2}\).

*Laplacian matrix*: the Laplacian matrix of a network, *L* = *D* − *A*, where *D* is the diagonal matrix of vertex degrees, is a (sparse) network measure and therefore inherits all the symmetries of the network. The matrix *L* can be seen as the adjacency matrix of a network \({\mathscr{L}}\) with identical symmetric motifs, except that all edges are weighted by −1 and all vertices have self-loops weighted by their degrees in \({\mathscr{G}}\) (Fig. 2c). In particular, the motif structure (namely, the self-loop weights) depends on how the motif is embedded in the network \({\mathscr{G}}\).

Quotient compression and computational reduction are less useful in this case; however, the spectral results are more interesting. We can compute redundant Laplacian eigenvalues directly from Table 2, for instance positive integers for BSMs with one orbit (“Methods”, Corollary 2). This explains and predicts most of the “peaks” (high-multiplicity eigenvalues) in the Laplacian spectral density, confirmed on our test networks (Fig. 7). Using the formula in Table 2, one can similarly compute the redundant spectrum for 2-orbit BSMs, and for other versions of the Laplacian (e.g., normalised, vertex weighted). Finally, the spectral decomposition applies, so Algorithm 3 provides an efficient way of computing the Laplacian eigendecomposition with an expected \(sp={\tilde{n}}_{{\mathscr{Q}}}^{3}\) (see Table 1) computational time reduction.
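A small numerical check of the predicted Laplacian peaks (the toy star graph is ours): a single-orbit BSM of *n* leaves contributes the redundant eigenvalue 1 (a positive integer, as predicted) with multiplicity *n* − 1, with eigenvectors localised on the motif:

```python
import numpy as np

# Star K_{1,5}: the 5 leaves form a single-orbit BSM.
n_leaves = 5
A = np.zeros((n_leaves + 1, n_leaves + 1))
A[0, 1:] = A[1:, 0] = 1.0
L = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A
evals = np.linalg.eigvalsh(L)
# The redundant eigenvalue 1 appears with multiplicity n - 1 = 4: a "peak".
assert int(np.isclose(evals, 1.0).sum()) == n_leaves - 1
# Its eigenvectors can be chosen localised on the leaves, e.g. e_1 - e_2.
w = np.zeros(n_leaves + 1); w[1], w[2] = 1.0, -1.0
assert np.allclose(L @ w, 1.0 * w)
```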

*Commute distance and matrix inversion*: the commute distance is the expected time for a random walker to travel between two vertices and back^{44}. In contrast to the shortest path distance, it is a global metric, which takes into account all possible paths between two vertices. The commute distance is equal, up to a constant (the volume of the network), to the resistance metric *r*^{45}, which can be expressed in terms of \({L}^{\dagger }=({l}_{ij}^{\dagger })\), the pseudoinverse (or Moore–Penrose inverse) of the Laplacian, as \(r(i,j)={l}_{ii}^{\dagger }+{l}_{jj}^{\dagger }-2{l}_{ij}^{\dagger }\). The commute (or resistance) distance is a (full) structural measure, and all our structural and spectral results apply. Crucially, we can use the eigendecomposition algorithm to obtain *L* = *U**D**U*^{T} (and hence *L*^{†} = *U**D*^{†}*U*^{T}, and *r*) from the quotient and symmetric motifs, resulting in significant computational gains (Fig. 5). More generally, if *M*_{F} is the matrix representation of a network measure, its pseudoinverse \({M}_{F}^{\dagger }\) is also a network measure, and the comments above apply. Note that \({M}_{F}^{\dagger }\) is generally a full measure even if *M*_{F} is sparse (the inverse of a sparse matrix is not generally sparse).
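A sketch of the resistance metric from the Laplacian pseudoinverse (the helper name and toy example are ours; `numpy.linalg.pinv` plays the role of *L*^{†}):

```python
import numpy as np

def resistance_matrix(A):
    """Resistance distance r(i,j) = l+_ii + l+_jj - 2 l+_ij, from the
    Moore-Penrose pseudoinverse of the Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Star K_{1,3}: resistance is a structural measure, so it is constant on
# orbit pairs: every leaf is at the same resistance from the centre.
A = np.zeros((4, 4)); A[0, 1:] = A[1:, 0] = 1.0
r = resistance_matrix(A)
assert np.allclose(r[0, 1], 1.0)          # a single edge has resistance 1
assert np.allclose(r[0, 1], r[0, 2]) and np.allclose(r[0, 2], r[0, 3])
assert np.allclose(r[1, 2], r[1, 3]) and np.allclose(r[1, 3], r[2, 3])
assert np.allclose(r[1, 2], 2.0)          # two unit edges in series
```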

*Vertex symmetry compression*: as a vertex measure *G* is constant on orbits, we only need to store one value per orbit. Let *S* be the characteristic matrix of the partition of the vertex set into orbits, and Λ the diagonal matrix of orbit sizes (column sums of *S*). If *G* is represented by a vector **v** = (*G*(*i*)) of length \({n}_{{\mathscr{G}}}\), we can store one value per orbit by taking **w** = Λ^{−1}*S*^{T}**v**, a vector of length \({n}_{{\mathscr{Q}}}\), and recover **v** = *S***w** (“Methods”, Theorem 9).
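A minimal sketch of this compression/recovery round trip (the helper name and toy values are ours):

```python
import numpy as np

def compress_vertex_measure(v, orbits, n):
    """Store one value per orbit: w = Lambda^{-1} S^T v; recover v = S w."""
    S = np.zeros((n, len(orbits)))
    for k, orbit in enumerate(orbits):
        S[orbit, k] = 1.0
    Lam_inv = np.diag(1.0 / S.sum(axis=0))
    w = Lam_inv @ S.T @ v          # length n_Q: one entry per orbit
    return w, S @ w                # compressed vector and recovered vector

# Degrees of the star K_{1,3} are constant on orbits: [3, 1, 1, 1].
v = np.array([3.0, 1.0, 1.0, 1.0])
w, v_rec = compress_vertex_measure(v, [[0], [1, 2, 3]], 4)
assert np.allclose(w, [3.0, 1.0])
assert np.allclose(v_rec, v)
```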

*Degree centrality*: the degree of a node (in- or out-degree if the network is directed) is a natural measure of vertex centrality. As expected, the degree is preserved by any automorphism *σ*, which can also be checked directly,

$$\deg (\sigma (i))=\sum _{j\in V}{a}_{\sigma (i)j}=\sum _{j\in V}{a}_{\sigma (i)\sigma (j)}=\sum _{j\in V}{a}_{ij}=\deg (i),$$

as automorphisms permute the vertex set (so *j* ∈ *V* and *σ*(*j*) ∈ *V* are the same elements but in a different order). In particular, the degree is constant on orbits. We recover the degree centrality from the quotient as the out-degree (“Methods”, Proposition 2).

*Closeness centrality*: the closeness centrality of a node *i* in a graph \({\mathscr{G}}\), \(c{c}^{{\mathscr{G}}}(i)\), is the average shortest path length to every node in the graph. As symmetries preserve distance, they also preserve closeness centrality, explicitly,

$$c{c}^{{\mathscr{G}}}(\sigma (i))=\frac{1}{n}\sum _{j}d(\sigma (i),j)=\frac{1}{n}\sum _{j}d(\sigma (i),\sigma (j))=\frac{1}{n}\sum _{j}d(i,j)=c{c}^{{\mathscr{G}}}(i),$$

and centrality is constant on each orbit, as expected. Moreover, closeness centrality can be recovered from the quotient (shortest paths do not contain intra-orbit edges, except between vertices in the same symmetric motif, see above), as

$$c{c}^{{\mathscr{G}}}(i)=\frac{1}{n}\sum _{{V}_{l}\not\subset {\mathscr{M}}}| {V}_{l}| \,{d}^{{\mathscr{Q}}}({V}_{k},{V}_{l})+\frac{| {\mathscr{M}}| }{n}{d}_{k}$$

if *i* belongs to the orbit *V*_{k} and *d*_{k} is the average intra-motif distance, that is, the average distance from a vertex in *V*_{k} to the vertices of \({\mathscr{M}}\), the motif containing *V*_{k}. By annotating each orbit by *d*_{k}, we can recover closeness centrality exactly. Alternatively, as *d*_{k} ≪ *n* (note that *d*_{k} ≤ *m* if \({\mathscr{M}}\) has *m* orbits), we can approximate \(c{c}^{{\mathscr{G}}}(i)\) by the first summand, or simply by the quotient centrality \(c{c}^{{\mathscr{Q}}}(\alpha )\), in most practical situations.

*Betweenness centrality*: this is the sum of proportions of shortest paths between pairs of vertices containing a given vertex. It can be computed from shortest path distances and numbers of shortest paths^{46}, both pairwise structural measures, reducing the computation of a naive *O*(*n*^{3}) time, *O*(*n*^{2}) space implementation by \({\widetilde{n}}_{{\mathscr{Q}}}^{3}\) and \({\widetilde{n}}_{{\mathscr{Q}}}^{2}\), respectively. It would be interesting to adapt a faster algorithm, e.g., ref. ^{46}, to exploit symmetries, but this is beyond our scope.

*Eigenvector centrality*: eigenvector centrality is obtained from a Perron–Frobenius eigenvector (i.e., of the largest eigenvalue) of the adjacency matrix of a connected graph^{1}. Since this eigenvalue must be simple, it cannot be a redundant eigenvalue. Hence it is a quotient eigenvalue, and, as those are a subset of the parent eigenvalues, it must still be the largest (hence the Perron–Frobenius) eigenvalue of the quotient. Its eigenvector can then be lifted to the parent network, by repeating entries on orbits. That is, if (*λ*, *v*) is the Perron–Frobenius eigenpair of the quotient, then (*λ*, *S**v*) is the Perron–Frobenius eigenpair of the parent network. In practice, we use the symmetric quotient *B*_{sym} = Λ^{−1/2}*S*^{T}*A**S*Λ^{−1/2} for numerical reasons (Algorithm 4). Hence the computation (quadratic time by power iteration) can be reduced by \({\widetilde{n}}_{{\mathscr{Q}}}^{2}\) (Fig. 5).
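A sketch of Algorithm 4's idea (our helper name and toy graph; power iteration on *B*_{sym}, then lifting by *S*Λ^{−1/2}):

```python
import numpy as np

def eigenvector_centrality_quotient(A, orbits, iters=200):
    """Leading eigenpair of A from the symmetric quotient.

    Power iteration runs on the m x m matrix B_sym instead of the n x n
    matrix A; the eigenvector is then lifted by repeating entries on orbits.
    """
    n, m = A.shape[0], len(orbits)
    S = np.zeros((n, m))
    for k, orbit in enumerate(orbits):
        S[orbit, k] = 1.0
    Lh_inv = np.diag(1.0 / np.sqrt(S.sum(axis=0)))
    B_sym = Lh_inv @ S.T @ A @ S @ Lh_inv
    u = np.ones(m)
    for _ in range(iters):                 # power iteration, O(m^2) per step
        u = B_sym @ u
        u /= np.linalg.norm(u)
    lam = u @ B_sym @ u                    # Rayleigh quotient
    v = S @ (Lh_inv @ u)                   # lift: repeat entries on orbits
    return lam, v / np.linalg.norm(v)

# Star K_{1,3}: the Perron-Frobenius eigenvalue is sqrt(3).
A = np.zeros((4, 4)); A[0, 1:] = A[1:, 0] = 1.0
lam, v = eigenvector_centrality_quotient(A, [[0], [1, 2, 3]])
assert np.isclose(lam, np.sqrt(3.0))
assert np.allclose(A @ v, lam * v)
```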

### Algorithm 4.

Eigenvector centrality from the quotient network.

## Discussion

We have presented a general theory to describe and quantify the effects of network symmetry on arbitrary network measures, and explained how this can be exploited in practice in a number of ways.

Network symmetry of the large but sparse graphs typically found in applications can be effectively computed and manipulated, making it an inexpensive pre-processing step. We showed that the amount of symmetry is amplified in a pairwise network measure but can be easily discounted using the quotient network. We can, for instance, eliminate the symmetry-induced redundancies, or use them to simplify the calculation by avoiding unnecessary computations. Symmetry also has a profound effect on the spectrum, explaining the characteristic “peaks” observed in the spectral densities of empirical networks, and occurring at values we are able to predict.

Our framework is very general and applies to any pairwise or vertex-based network measure beyond the ones we discuss as examples. We emphasised practical and algorithmic aspects throughout, and provide pseudocode and full implementations^{26}. Since real-world network models and data are very common, and typically contain a large degree of structural redundancy, our results should be relevant to any network practitioner.

## Methods

### Geometric decomposition and symmetric motifs

We write \(\,{\text{Aut}}\,({\mathscr{G}})\) for the automorphism group of an (unweighted, undirected, possibly very large) network \({\mathscr{G}}=(V,E)\) (see below for a discussion of directed and weighted networks). Each automorphism (symmetry) \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\) is a permutation of the vertices and its *support* is the set of vertices moved by *σ*,

$$\,{\text{supp}}\,(\sigma )=\{i\in V\ | \ \sigma (i)\,\ne\, i\}.$$

Two automorphisms *σ* and *τ* are *support-disjoint* if the intersection of their supports is empty, \(\,{\text{supp}}\,(\sigma )\cap \,{\text{supp}}\,(\tau )={{\emptyset}}\). The *orbit* of a vertex *i* is the set of vertices to which *i* can be moved by an automorphism, that is,

$$\Delta (i)=\{\sigma (i)\ | \ \sigma \in \,{\text{Aut}}\,({\mathscr{G}})\}.$$
One can show^{11} that there is a partition of a set *X* of generators of \(\,{\text{Aut}}\,({\mathscr{G}})\) into its finest support-disjoint classes *X* = *X*_{1} ∪ … ∪ *X*_{m}, which is unique up to permutation of the sets *X*_{i}. The vertex sets \({M}_{i}={\cup }_{\sigma \in {X}_{i}}\,{\text{supp}}\,(\sigma )\) give the geometric decomposition Eq. (2), and the subgraphs induced by them are, by definition, the *symmetric motifs* of \({\mathscr{G}}\). (The next section explains how to compute the geometric decomposition in practice.) Since support-disjoint automorphisms must commute (the order in which they are composed is irrelevant), the subgroups of \(\,{\text{Aut}}\,({\mathscr{G}})\) generated by *X*_{1} to *X*_{m}, call them *H*_{1} to *H*_{m}, give a direct product decomposition \(\,{\text{Aut}}\,({\mathscr{G}})={H}_{1}\times \ldots \times {H}_{m}\). The geometric decomposition is defined from the finest support-disjoint partition of a special set of generators (called *essential*), as explained in ref. ^{11}. However, the results in this article are valid for any support-disjoint decomposition of any set of generators (essential or not) of \(\,{\text{Aut}}\,({\mathscr{G}})\).

If all the orbits of a symmetric motif have the same size *k* and every permutation of the vertices in each orbit can be extended to a network automorphism supported on the motif, we call the symmetric motif *basic* (or BSM) of *type* *k*. (In particular, the corresponding subgroup *H*_{i} must be Sym(*k*), the symmetric group of all permutations of *k* elements.) If a symmetric motif is not basic, we call it *complex* or of *type* 0.

### Network symmetry computation

First, we compute a list of generators of the automorphism group from an edge list (we use saucy3^{47}, which is extremely fast for the large but sparse networks typically found in applications). Then, we partition the set of generators *X* into support-disjoint classes *X* = *X*_{1} ∪ … ∪ *X*_{m}, that is, *σ* and *τ* are support-disjoint whenever *σ* ∈ *X*_{i}, *τ* ∈ *X*_{j} and *i* ≠ *j*. To find the finest such partition, we use a bipartite graph representation of vertices *V* and generators *X*. Namely, let \({\mathscr{B}}\) be the graph with vertex set *V* ∪ *X* and edges between *i* and *σ* whenever *i* ∈ supp(*σ*). Then *X*_{1}, …, *X*_{m} are the connected components of \({\mathscr{B}}\) (as vertex sets intersected with *X*). Each *X*_{i} corresponds to the vertex set *M*_{i} of a symmetric motif \({{\mathscr{M}}}_{i}\), as \({M}_{i}={\bigcup }_{\sigma \in {X}_{i}}\,{\text{supp}}\,(\sigma ).\) Finally, we use GAP^{48} to compute the orbits and type of each symmetric motif (Algorithm 5). Full implementations of all the procedures outlined above are available at a public repository^{26}.
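The grouping of generators into support-disjoint classes can be sketched without saucy3 or GAP (our helper, using union-find in place of the bipartite graph's connected components, and assuming generators are given as permutation arrays):

```python
def symmetric_motifs(generators, n):
    """Vertex sets M_i of the symmetric motifs: group generators into
    support-disjoint classes (the connected components of the bipartite
    vertex-generator graph, here computed via union-find over moved vertices)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    supports = [{i for i in range(n) if sigma[i] != i} for sigma in generators]
    supports = [s for s in supports if s]   # drop identity generators
    for supp in supports:                   # vertices moved together are merged
        verts = list(supp)
        for v in verts[1:]:
            parent[find(v)] = find(verts[0])
    motifs = {}
    for supp in supports:
        root = find(next(iter(supp)))
        motifs.setdefault(root, set()).update(supp)
    return sorted(sorted(m) for m in motifs.values())

# sigma swaps the leaves 1,2 of one star; tau swaps the leaves 4,5 of another:
# two support-disjoint classes, hence two symmetric motifs.
sigma = [0, 2, 1, 3, 4, 5]
tau   = [0, 1, 2, 3, 5, 4]
assert symmetric_motifs([sigma, tau], 6) == [[1, 2], [4, 5]]
# Overlapping supports merge into a single motif.
assert symmetric_motifs([[0, 2, 1, 3], [0, 1, 3, 2]], 4) == [[1, 2, 3]]
```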

### Algorithm 5.

Orbits and type of a symmetric motif.

### Structural network measures

We prove below the structural result for BSMs for arbitrary graphs and network measures. The proof generalises the argument in ref. ^{49} (p. 48) to weighted directed graphs with symmetries.

**Theorem 1** *Let M be the vertex set of a BSM of a network* \({\mathscr{G}}\), *and F a structural network measure. Then the graph induced by M in* \(F({\mathscr{G}})\) *is a BSM of* \(F({\mathscr{G}})\)*, and satisfies*:

- (i) *for each orbit* Δ = {*v*_{1}, …, *v*_{n}}, *there are constants α and β such that the orbit internal connectivity is given by* *α* = *F*(*v*_{i}, *v*_{j}) *for all* *i* ≠ *j* *and* *β* = *F*(*v*_{i}, *v*_{i}) *for all* *i*;
- (ii) *for every pair of orbits* Δ_{1} *and* Δ_{2}, *there is a labelling* Δ_{1} = {*v*_{1}, …, *v*_{n}}, Δ_{2} = {*w*_{1}, …, *w*_{n}} *and constants* *γ*_{1}, *γ*_{2}, *δ*_{1}, *δ*_{2} *such that* *γ*_{1} = *F*(*v*_{i}, *w*_{j}), *γ*_{2} = *F*(*w*_{j}, *v*_{i}), *δ*_{1} = *F*(*v*_{i}, *w*_{i}), *and* *δ*_{2} = *F*(*w*_{i}, *v*_{i}), *for all* *i* ≠ *j*;
- (iii) *every vertex v not in the BSM is joined uniformly to all the vertices in each orbit* {*v*_{1}, …, *v*_{n}} *in the BSM, that is*, *F*(*v*, *v*_{i}) = *F*(*v*, *v*_{j}) *and* *F*(*v*_{i}, *v*) = *F*(*v*_{j}, *v*) *for all* *i*, *j*.

*Moreover, property* (iii) *holds in general for any symmetric motif.*

If \({\mathscr{G}}\) is undirected and *F* is symmetric, then *γ*_{1} = *γ*_{2} = *γ* and *δ*_{1} = *δ*_{2} = *δ*, each orbit is an (*α*, *β*)-uniform graph \({K}_{n}^{\alpha ,\beta }\), and each pair of orbits forms a (*γ*, *δ*)-uniform join, explaining Fig. 3a, b.

** Proof** As \(F({\mathscr{G}})\) inherits all the symmetries of \({\mathscr{G}}\), *M* has the same orbit decomposition and the symmetric group *S*_{n} acts in the same way, hence *M* induces a BSM in \(F({\mathscr{G}})\) too. For the internal connectivity, note that every permutation of the vertices *v*_{i} is realisable. Thus, given arbitrary 1 ≤ *i*, *j*, *k*, *l* ≤ *n*, we can find \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\) such that *σ*(*v*_{k}) = *v*_{i} and, if *j* ≠ *i* and *l* ≠ *k*, additionally satisfies *σ*(*v*_{l}) = *v*_{j}. This gives *F*(*v*_{i}, *v*_{j}) = *F*(*σ*(*v*_{k}), *σ*(*v*_{l})) = *F*(*v*_{k}, *v*_{l}), as *F* is a structural network measure. The other case, *i* = *j* and *k* = *l*, gives *F*(*v*_{i}, *v*_{i}) = *F*(*σ*(*v*_{k}), *σ*(*v*_{k})) = *F*(*v*_{k}, *v*_{k}). For the orbit connectivity result (ii), we generalise the argument in ref. ^{49} (p. 48) to weighted directed graphs with symmetries, in particular \(F({\mathscr{G}})\). We assume some basic knowledge and terminology about group actions^{50} and symmetric groups *S*_{n}. Given two orbits Δ_{1} = {*v*_{1}, …, *v*_{n}} and Δ_{2} = {*w*_{1}, …, *w*_{n}} and 1 ≤ *i* ≤ *n*, define

$${{{\Gamma }}}_{i}=\{w\in {\Delta }_{2}\ | \ F({v}_{i},w)\,\ne\, 0\},$$

the vertices in Δ_{2} joined to *v*_{i} in \(F({\mathscr{G}})\). If a finite group *G* acts on a set *X*, the stabiliser of a point, *G*_{x} = {*g* ∈ *G* ∣ *g**x* = *x*}, is a subgroup of *G* of index \([G:{G}_{x}]=\frac{| G| }{| {G}_{x}| }\) equal to the size of the orbit of *x*. Hence, the stabilisers \({G}_{{v}_{i}}\) and \({G}_{{w}_{j}}\) are subgroups of *S*_{n} of index *n*, for all *i*, *j*. The group *S*_{n} has a unique, up to conjugation, subgroup of index *n* if *n* ≠ 6. In this case, \({G}_{{v}_{1}}\) is conjugate to \({G}_{{w}_{1}}\), so \({G}_{{v}_{1}}=\sigma {G}_{{w}_{1}}{\sigma }^{-1}={G}_{\sigma {w}_{1}}\) for some *σ* ∈ *S*_{n}. Relabelling *σ**w*_{1} as *w*_{1}, we have \({G}_{{v}_{1}}={G}_{{w}_{1}}\). Similarly, we can relabel the remaining vertices in Δ_{2} so that \({G}_{{v}_{i}}={G}_{{w}_{i}}\) for all *i*: write *v*_{2} = *σ*_{2}*v*_{1}, *v*_{3} = *σ*_{3}*v*_{1}, … and relabel *w*_{2} = *σ*_{2}*w*_{1}, *w*_{3} = *σ*_{3}*w*_{1}, …, noticing there cannot be repetitions, as *σ*_{k}*w*_{1} = *σ*_{l}*w*_{1} for *k* ≠ *l* implies \({\sigma }_{l}^{-1}{\sigma }_{k}\in {G}_{{w}_{1}}={G}_{{v}_{1}}\) and thus *v*_{k} = *σ*_{k}*v*_{1} = *σ*_{l}*v*_{1} = *v*_{l}, a contradiction. Fix 1 ≤ *i* ≤ *n*. The stabiliser \({G}_{{v}_{i}}\) fixes *v*_{i} but may permute vertices in Δ_{2}. In fact, the set *Γ*_{i} above must be a union of orbits of \({G}_{{v}_{i}}\) on Δ_{2}: if *w* ∈ *Γ*_{i} and \(\sigma \in {G}_{{v}_{i}}\), then 0 ≠ *F*(*v*_{i}, *w*) = *F*(*σ**v*_{i}, *σ**w*) = *F*(*v*_{i}, *σ**w*), so *σ**w* also belongs to *Γ*_{i}. The orbits of \({G}_{{v}_{i}}={G}_{{w}_{i}}\) in Δ_{2} are {*w*_{i}} and Δ_{2}⧹{*w*_{i}}, as \({G}_{{w}_{i}}\) fixes *w*_{i} and freely permutes all other vertices in Δ_{2}. The case *n* = 6 is similar, except that *S*_{6} has two conjugacy classes of subgroups of index 6: one as above, and the other a subgroup acting transitively on the 6 vertices, which gives a unique orbit Δ_{2}.
In all cases, the set Δ_{2}⧹{*w*_{i}} is part of a \({G}_{{v}_{i}}\)-orbit, which gives the connectivity result, as follows. Fix 1 ≤ *i* ≤ *n*. For 1 ≤ *j*, *k* ≤ *n* different from *i*, the vertices *w*_{j} and *w*_{k} are in the same \({G}_{{v}_{i}}\)-orbit, so there is \(\sigma \in {G}_{{v}_{i}}\) with *σ**w*_{j} = *w*_{k} and, therefore, *F*(*v*_{i}, *w*_{j}) = *F*(*σ**v*_{i}, *σ**w*_{j}) = *F*(*v*_{i}, *w*_{k}). The argument is general, so we have shown that *a*_{i} = *F*(*v*_{i}, *w*_{j}) is constant for all *j* ≠ *i*. It is enough to show *a*_{i} = *a*_{1} for all *i*. Choose *j* ≠ *i*, then

$${a}_{i}=F({v}_{i},{w}_{j})=F({\sigma }_{i}^{-1}{v}_{i},{\sigma }_{i}^{-1}{w}_{j})=F({v}_{1},{\sigma }_{i}^{-1}{\sigma }_{j}{w}_{1})={a}_{1},$$

as long as \({\sigma }_{i}^{-1}{\sigma }_{j}{w}_{1}\,\ne\, {w}_{1}\), which cannot happen, as otherwise \({\sigma }_{i}^{-1}{\sigma }_{j}\in {G}_{{w}_{1}}={G}_{{v}_{1}}\) implies \({\sigma }_{i}^{-1}{\sigma }_{j}{v}_{1}={v}_{1}\), that is, *v*_{j} = *σ*_{j}*v*_{1} = *σ*_{i}*v*_{1} = *v*_{i}, a contradiction. Hence, we have shown *F*(*v*_{i}, *w*_{j}) is a constant, call it *γ*_{1}, for all *i* ≠ *j*. In addition, *F*(*v*_{i}, *w*_{i}) = *F*(*σ*_{i}*v*_{1}, *σ*_{i}*w*_{1}) = *F*(*v*_{1}, *w*_{1}) is also a constant, call it *δ*_{1}, for all *i*. The cases *γ*_{2} = *F*(*w*_{j}, *v*_{i}) and *δ*_{2} = *F*(*w*_{i}, *v*_{i}) are identical, reversing the roles of Δ_{1} and Δ_{2}.

Property (iii) holds for any symmetric motif, not necessarily basic, as follows. By the definition of orbit, for each pair *i*, *j* we can find an automorphism *σ* in the geometric factor such that *σ*(*v*_{j}) = *v*_{i}. Since *v* is not in the support of that geometric factor, it is fixed by *σ*, that is, *σ*(*v*) = *v*. Therefore *F*(*v*, *v*_{i}) = *F*(*σ*(*v*), *σ*(*v*_{j})) = *F*(*v*, *v*_{j}), and similarly *F*(*v*_{i}, *v*) = *F*(*v*_{j}, *v*). □

### Average compression

**Theorem 2** *Let A* = (*a*_{ij}) *be the n* × *n adjacency matrix of a network with vertex set V. Let S be the n* × *m characteristic matrix of the partition of V into orbits of the automorphism group, and Λ the diagonal matrix of column sums of S. Define B* = *S*^{T}*AS and* \({A}_{{\rm{avg}}}=RB{R}^{T}=({\bar{a}}_{ij})\) *where R* = *SΛ*^{−1}*. Then*,

- (i) *if i*, *j* ∈ *V belong to different symmetric motifs*, \({\bar{a}}_{ij}={a}_{ij}\);
- (ii) *if i*, *j* ∈ *V belong to orbits i* ∈ Δ_{1} *and j* ∈ Δ_{2} *in the same symmetric motif*,
$${\bar{a}}_{ij}=\frac{1}{| {\Delta }_{1}| }\frac{1}{| {\Delta }_{2}| }\sum _{u\in {\Delta }_{1},\,v\in {\Delta }_{2}}{a}_{uv}.$$(28)

Before proving this statement, we make a few observations. The column sums of *S* equal the sizes of the vertex partition sets, hence Λ is the same as in the definition of the quotient matrix (6), and can be obtained easily from *S*. The matrix *S* is very sparse (each row has a unique non-zero entry) and can be stored very efficiently. Case (i) covers the vast majority of vertex pairs (external edges) for a network measure (see ext_{s} and int_{f} in Table 1). In (ii), the case Δ_{1} = Δ_{2} is allowed. The matrix *B* = *S*^{T}*A**S* is symmetric with integer entries if *A* is too, hence generally easier to store than *Q*(*A*) = Λ^{−1}*S*^{T}*A**S*.

** Proof** Let *V* = Δ_{1} ∪ … ∪ Δ_{m} be the partition into orbits, and write *n*_{k} = ∣Δ_{k}∣. Clearly, the column sums of *S* equal *n*_{1}, …, *n*_{m}. Writing [*M*]_{ij} for the (*i*, *j*)-entry of a matrix *M*, matrix multiplication gives

$${[B]}_{kl}={[{S}^{T}AS]}_{kl}=\sum _{u\in {\Delta }_{k},\,v\in {\Delta }_{l}}{a}_{uv}.$$

Similarly, assuming *i* ∈ Δ_{k} and *j* ∈ Δ_{l}, we have

$${\bar{a}}_{ij}={[RB{R}^{T}]}_{ij}=\frac{1}{{n}_{k}}\frac{1}{{n}_{l}}\sum _{u\in {\Delta }_{k},\,v\in {\Delta }_{l}}{a}_{uv}.$$

This expression reduces to *a*_{ij} if the orbits belong to different symmetric motifs, since in this case all the summands in \({\sum }_{u\in {\Delta }_{k},v\in {\Delta }_{l}}{a}_{uv}\) are equal to one another. Indeed, given *i*_{1}, *i*_{2} ∈ Δ_{k}, and *j*_{1}, *j*_{2} ∈ Δ_{l}, we can find, by the definition of orbit and symmetric motif, automorphisms *σ* and *τ* such that *σ*(*i*_{1}) = *i*_{2} while fixing *j*_{1}, and *τ*(*j*_{1}) = *j*_{2} while fixing *i*_{2}. This gives

$${a}_{{i}_{1}{j}_{1}}={a}_{\sigma ({i}_{1})\sigma ({j}_{1})}={a}_{{i}_{2}{j}_{1}}={a}_{\tau ({i}_{2})\tau ({j}_{1})}={a}_{{i}_{2}{j}_{2}}.$$ □

A similar proof shows that we can recover exact inter-motif connectivity (external edges), and average intra-motif connectivity (average internal edges) from the quotient network, as follows.

**Corollary 1** *Let A* = (*a*_{ij}) *be the n* × *n adjacency matrix of a network and Q*(*A*) = (*b*_{kl}) *its quotient with respect to the partition into orbits of the automorphism group V* = Δ_{1} *∪* *…* *∪* Δ_{m}*. Suppose that i* *∈* Δ_{k}*, j* *∈* Δ_{l}*. Then*,

- (i) *if the orbits* Δ_{k} *and* Δ_{l} *belong to different symmetric motifs*, *b*_{kl} = *a*_{ij};
- (ii) *if the orbits* Δ_{k} *and* Δ_{l} *belong to the same symmetric motif*,
$${b}_{kl}=\frac{1}{| {\Delta }_{k}| }\sum _{u\in {\Delta }_{k},\,v\in {\Delta }_{l}}{a}_{uv}.$$(29)

### Lossless compression

We can achieve lossless compression by exploiting the structure of BSMs, which account for most of the symmetry in real-world networks. If the motif is basic, we can preserve the exact parent network connectivity in an annotated quotient, as follows. Each orbit in a BSM is a uniform graph \({K}_{n}^{\alpha ,\beta }\), which appears in the quotient as a single vertex with a self-loop weighted by (*n* − 1)*α* + *β* (Fig. 3c). Hence, if we annotate this vertex in the quotient by not only *n* but also *α*, or *β*, we can recover the internal connectivity. Similarly, the connectivity between two orbits in the same symmetric motif is given by two parameters *γ*, *δ*, and appears in the quotient as an edge weighted (*n* − 1)*γ* + *δ* (Fig. 3d) and thus can also be recovered from a quotient with edges annotated by *γ*, or *δ*.
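A minimal sketch of the annotation arithmetic for a single orbit (helper names ours):

```python
def orbit_selfloop_weight(n, alpha, beta):
    """Quotient self-loop weight of a uniform orbit K_n^{alpha,beta}."""
    return (n - 1) * alpha + beta

def recover_beta(n, alpha, w):
    """Undo the annotation: recover beta from the orbit size n, the
    annotation alpha, and the quotient self-loop weight w = (n-1)*alpha + beta."""
    return w - (n - 1) * alpha

# Orbit of size 4 with off-diagonal value alpha = 2 and diagonal beta = 5.
w = orbit_selfloop_weight(4, 2.0, 5.0)
assert w == 11.0
assert recover_beta(4, 2.0, w) == 5.0
```

Annotating the quotient vertex by (*n*, *α*) therefore suffices to reconstruct the orbit's full internal connectivity; the inter-orbit case with (*γ*, *δ*) is analogous.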

Since there is no general formula for an arbitrary non-basic symmetric motif, we can work with the *basic quotient* \({{\mathscr{Q}}}_{\text{basic}}\) instead, that is, the quotient with respect to the partition of the vertex set into orbits in BSMs only (vertices in non-basic symmetric motifs become fixed points, hence part of the asymmetric core). The annotated (as above) basic quotient achieves most of the symmetry reduction in a typical empirical network (\({\tilde{n}}_{{\mathscr{Q}}}^{\,\text{basic}\,}\approx {\tilde{n}}_{{\mathscr{Q}}}\) and \({\tilde{m}}_{{\mathscr{Q}}}^{\,\text{basic}\,}\approx {\tilde{m}}_{{\mathscr{Q}}}\); see Table 1, caption). To maintain the same vertex labelling as in the parent network, we record, for each pair of orbits in the same symmetric motif, the corresponding permutation of the second orbit (else we recover the adjacency matrix only up to permutations of the orbits).

Algorithms for lossless compression and recovery based on the basic quotient are shown below (Algorithms 6 and 7), and MATLAB implementations for BSMs up to two orbits are available at a public repository^{26}. The results reported in Fig. 4 are with respect to these implementations, and the actual compression ratios reported include the size of the annotation data for lossless compression with vertex identity (a very small fraction of the size of the quotient in practice, adding at most 0.02% to the basic full compression ratio in all our test cases).

### Algorithm 6.

Lossless symmetry compression.

### Algorithm 7.

Lossless symmetry decompression.

### Spectral signatures of symmetry

The partition into orbits satisfies the following regularity condition^{34,35}. A partition of the vertex set *V* = *V*_{1} ∪ … ∪ *V*_{m} is *equitable* if

$$\sum _{j\in {V}_{l}}{a}_{ij}=\sum _{j\in {V}_{l}}{a}_{i^{\prime} j}\quad \,\text{for all}\,\ i,i^{\prime} \in {V}_{k}\,\ \text{and all}\ k,l.$$

**Proposition 1** *Let V* = *V*_{1} *∪* *…* *∪* *V*_{m} *be a partition of the vertex set of a graph with adjacency matrix A* = (*a*_{ij}), *and let S be the characteristic matrix of the partition*. *Write Q*(*A*) *for the quotient with respect to the partition*.

- (i) *The partition is equitable if and only if AS* = *SQ*(*A*);
- (ii) *The partition into orbits of the automorphism group is equitable*.

** Proof** (i) Fix 1 ≤ *i* ≤ *n* and 1 ≤ *k* ≤ *m*, and suppose *i* ∈ *V*_{l}. Then

$${[AS]}_{ik}=\sum _{j\in {V}_{k}}{a}_{ij}\quad \,\text{and}\,\quad {[SQ(A)]}_{ik}={[Q(A)]}_{lk}=\frac{1}{| {V}_{l}| }\sum _{u\in {V}_{l}}\sum _{v\in {V}_{k}}{a}_{uv}$$

and, using the equitable condition,

$${[SQ(A)]}_{ik}=\frac{1}{| {V}_{l}| }\sum _{u\in {V}_{l}}\sum _{v\in {V}_{k}}{a}_{iv}=\sum _{v\in {V}_{k}}{a}_{iv}={[AS]}_{ik}.$$

For the converse, note that [*A**S*]_{il} does not depend on *i* but on the orbit of *i*. Namely, given *i*_{1}, *i*_{2} ∈ *V*_{k},

$$\sum _{j\in {V}_{l}}{a}_{{i}_{1}j}={[AS]}_{{i}_{1}l}={[SQ(A)]}_{{i}_{1}l}={[Q(A)]}_{kl}={[SQ(A)]}_{{i}_{2}l}={[AS]}_{{i}_{2}l}=\sum _{j\in {V}_{l}}{a}_{{i}_{2}j}.$$

(ii) Given *i*_{1} and *i*_{2} in the same orbit Δ_{k}, choose an automorphism *σ* such that *σ*(*i*_{1}) = *i*_{2}. Then, since automorphisms respect the adjacency matrix, *a*_{ij} = *a*_{σ(i)σ(j)} for all *i*, *j*, we have

$$\sum _{j\in {\Delta }_{l}}{a}_{{i}_{1}j}=\sum _{j\in {\Delta }_{l}}{a}_{\sigma ({i}_{1})\sigma (j)}=\sum _{j\in {\Delta }_{l}}{a}_{{i}_{2}\sigma (j)}=\sum _{j\in {\Delta }_{l}}{a}_{{i}_{2}j},$$

where the last equality follows from the fact that an element in a group permutes orbits, in this case, {*j* : *j* ∈ Δ_{l}} = {*σ*(*j*) : *j* ∈ Δ_{l}}. Hence, the partition into orbits is equitable. □
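The proposition is easy to verify numerically (our helper; the star and path examples are ours):

```python
import numpy as np

def is_equitable(A, partition):
    """Check the matrix condition AS = SQ(A) of Proposition 1(i)."""
    n = A.shape[0]
    S = np.zeros((n, len(partition)))
    for k, cell in enumerate(partition):
        S[cell, k] = 1.0
    Q = np.diag(1.0 / S.sum(axis=0)) @ S.T @ A @ S   # left quotient Q(A)
    return bool(np.allclose(A @ S, S @ Q))

# Star K_{1,3}: the orbit partition {0} | {1,2,3} is equitable (part (ii)),
# and so is this finer partition of the leaves, here.
A = np.zeros((4, 4)); A[0, 1:] = A[1:, 0] = 1.0
assert is_equitable(A, [[0], [1, 2, 3]])
assert is_equitable(A, [[0], [1], [2, 3]])
# Path 0-1-2: the cell {0,1} is not equitable (vertices 0 and 1 have a
# different number of neighbours in {2}).
P = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
assert not is_equitable(P, [[0, 1], [2]])
```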

It follows immediately that the quotient eigenvalues are a subset of the eigenvalues of the parent network,

$$Q(A){\bf{v}}=\lambda {\bf{v}}\ \Rightarrow \ A(S{\bf{v}})=SQ(A){\bf{v}}=\lambda (S{\bf{v}}).$$

(Note that *S***v** ≠ 0 if **v** ≠ 0.) That is, the spectrum of the quotient is a subset of the spectrum of the graph, with eigenvectors lifted from the quotient by repeating entries on orbits. Moreover, we can complete an eigenbasis with eigenvectors orthogonal to the partition (adding up to zero on each orbit).

**Theorem 3** *Suppose that A is an n* × *n real symmetric matrix and B the m* × *m quotient matrix with respect to an equitable partition V*_{1} *∪* *…* *∪* *V*_{m} *of the set* {1, 2, *…*, *n*}*. Let S be the characteristic matrix of the partition. Then A has an eigenbasis of the form*

$$\left\{S{v}_{1},\ldots ,S{v}_{m},{w}_{1},\ldots ,{w}_{n-m}\right\},$$

*where* {*v*_{1}, *…*, *v*_{m}} *is any eigenbasis of B, and S*^{T}*w*_{i} = 0 *for all i.*

** Proof** Recall that *S**v* ≠ 0 if *v* ≠ 0 (*S* lifts the vector *v* from the quotient by repeating entries on each orbit), so the linear map

$$S:{{\mathbb{R}}}^{m}\to {{\mathbb{R}}}^{n},\quad v\mapsto Sv,$$

has trivial kernel and hence is an isomorphism onto its image. In particular, \({\mathscr{B}}=\{S{v}_{1},\ldots ,S{v}_{m}\}\) is also a linearly independent set, and its elements are all eigenvectors of *A*, since *A**S* = *S**B* as the partition is equitable. To finish the proof, we need to complete \({\mathscr{B}}\) to a basis \(\left\{S{v}_{1},\ldots ,S{v}_{m},{w}_{1},\ldots ,{w}_{n-m}\right\}\) such that each *w*_{j} is an *A*-eigenvector orthogonal to all *S**v*_{i}. As \({\mathscr{B}}\) is a basis of Im(*S*), this would imply *w*_{i} ∈ Im(*S*)^{⊥} = Ker(*S*^{T}), giving *S*^{T}*w*_{i} = 0 for all *i*, as desired. Since *A* is diagonalisable, \({{\mathbb{R}}}^{n}\) decomposes as an orthogonal direct sum of eigenspaces, \({{\mathbb{R}}}^{n}{ = \bigoplus }_{\lambda }{E}_{\lambda }\). In each *E*_{λ}, we can find vectors *w*_{j} that complete \({V}_{\lambda }=\{S{v}_{i}\in {\mathscr{B}}\ | \ {v}_{i}\ \text{a}\ \lambda \text{-eigenvector}\}\) to a basis of *E*_{λ} and that are orthogonal to all vectors in *V*_{λ} (consider the orthogonal complement of the subspace generated by *V*_{λ} in *E*_{λ}). Repeating this procedure on each *E*_{λ}, we find vectors {*w*_{1}, …, *w*_{n−m}} as needed. □

The statement and proof above hold for arbitrary matrices *A* by replacing “eigenbasis” with “maximal linearly independent set” and removing the condition *S*^{T}*w*_{i} = 0. It would be interesting to know whether the condition *S*^{T}*w*_{i} = 0 holds for motif eigenvectors in the directed case as well (the proof above is no longer valid).
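Theorem 3 is easy to verify numerically. The sketch below (using `numpy`; the star graph, its orbit partition, and all variable names are illustrative choices, not from the paper) builds the characteristic matrix *S* and quotient *B*, and checks that every eigenpair of *B* lifts to an eigenpair of *A*:

```python
import numpy as np

# Star graph K_{1,3}: centre 0, leaves 1, 2, 3 (illustrative example).
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1

# Orbits of the automorphism group: {0} and {1,2,3}; an equitable partition.
orbits = [[0], [1, 2, 3]]
S = np.zeros((4, 2))
for k, orb in enumerate(orbits):
    S[orb, k] = 1
Lam = S.T @ S                          # diagonal matrix of orbit sizes
B = np.linalg.inv(Lam) @ S.T @ A @ S   # quotient matrix

# Every eigenpair (lam, v) of B lifts to an eigenpair (lam, S v) of A,
# so spec(B) = {±sqrt(3)} is a subset of spec(A) = {±sqrt(3), 0, 0}.
lams, V = np.linalg.eig(B)
for lam, v in zip(lams, V.T):
    assert np.allclose(A @ (S @ v), lam * (S @ v))
```

The remaining eigenvectors of *A* (here, for the eigenvalue 0) can be chosen to sum to zero on each orbit, as the theorem states.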

**Theorem 4** *Let* \({\mathscr{M}}\) *be a symmetric motif of a* (*possibly weighted*) *undirected graph* \({\mathscr{G}}\)*. If* (*λ*, *w*) *is a redundant eigenpair of* \({\mathscr{M}}\) *then* \((\lambda ,\widetilde{w})\) *is an eigenpair of* \({\mathscr{G}}\), *where* \(\widetilde{w}\) *is equal to w on (the vertices of)* \({\mathscr{M}}\), *and zero elsewhere*.

**Proof** Since (*λ*, *w*) is an \({\mathscr{M}}\)-eigenpair, \({A}_{{\mathscr{M}}}w=\lambda w\), where \({A}_{{\mathscr{M}}}\) is the adjacency matrix of \({\mathscr{M}}\). We can decompose \({\mathscr{M}}\) into orbits, \(V({\mathscr{M}})={V}_{1}\cup \ldots \cup {V}_{K}\), and, by the spectral decomposition theorem above applied to \({\mathscr{M}}\), *w* is orthogonal to each orbit, that is, \({\sum }_{j\in {V}_{k}}{w}_{j}=0\) for each *k*. We need to show that \((\lambda ,\widetilde{w})\) is a \({\mathscr{G}}\)-eigenpair. Let us write *A* for the adjacency matrix of \({\mathscr{G}}\) (recall \({\mathscr{M}}\) is a subgraph so *A* restricts to \({A}_{{\mathscr{M}}}\) on \({\mathscr{M}}\)). We need to show \(A\widetilde{w}=\lambda \widetilde{w}\). Given \(i\in V({\mathscr{G}})\), we have two cases. First, if \(i\in V({\mathscr{M}})\),

\({(A\widetilde{w})}_{i}={\sum }_{j\in V({\mathscr{M}})}{A}_{ij}{w}_{j}={({A}_{{\mathscr{M}}}w)}_{i}=\lambda {w}_{i}=\lambda {\widetilde{w}}_{i},\)

since \(\widetilde{w}\) equals *w* on \({\mathscr{M}}\), and is zero outside \({\mathscr{M}}\). The second case, when \(i\in V({\mathscr{G}})\setminus V({\mathscr{M}})\), gives \({(A\widetilde{w})}_{i}={\sum }_{j\in V({\mathscr{M}})}{A}_{ij}{w}_{j}\) as before, and then we use the decomposition of \({\mathscr{M}}\) into orbits,

\({(A\widetilde{w})}_{i}={\sum }_{k}{\sum }_{j\in {V}_{k}}{A}_{ij}{w}_{j}={\sum }_{k}{\alpha }_{k}{\sum }_{j\in {V}_{k}}{w}_{j}.\)

Here, we have used that the vertex *i*, outside the motif, connects uniformly to each orbit, that is, \({A}_{i{j}_{1}}={A}_{i{j}_{2}}\) for all *j*_{1}, *j*_{2} ∈ *V*_{k}, and we call this quantity *α*_{k}. Finally, recall that *w* is orthogonal to each orbit, to conclude \({(A\widetilde{w})}_{i}=0=\lambda {\widetilde{w}}_{i}\). □

Therefore, the redundant spectrum of \({\mathscr{G}}\) is the union of the redundant eigenvalues of the symmetric motifs, together with their redundant eigenvectors localised on them. Since most symmetric motifs in real-world networks are basic, most symmetric motifs in the network representation of a network measure will be basic too. Given their constrained structure, one can in fact determine the redundant spectrum of BSMs with up to a few orbits, for arbitrary undirected networks with symmetry.
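Localisation is straightforward to check numerically. A minimal sketch (illustrative graph, not from the paper): two non-adjacent twin leaves form an "empty" 1-orbit BSM, whose redundant eigenvalue −*α* + *β* = 0 lifts, with its eigenvector extended by zeros, to the whole graph:

```python
import numpy as np

# Path 0-3-4 with two twin leaves 1, 2 attached to vertex 0 (illustrative).
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1

# The symmetric motif is the orbit {1, 2}: an empty 1-orbit BSM, so its
# redundant eigenvalue is -alpha + beta = 0, with eigenvector w = e_1 - e_2.
w_tilde = np.array([0.0, 1.0, -1.0, 0.0, 0.0])  # w on the motif, 0 elsewhere
assert np.allclose(A @ w_tilde, 0.0 * w_tilde)  # (0, w~) is a G-eigenpair
```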

**Redundant spectrum of a 1-orbit BSM**: A BSM with one orbit is an (*α*, *β*)-uniform graph \({K}_{n}^{\alpha ,\beta }\) with adjacency matrix \({A}_{n}^{\alpha ,\beta }=({a}_{ij})\) given by *a*_{ij} = *α* and *a*_{ii} = *β* for all *i* ≠ *j*. Then \({K}_{n}^{\alpha ,\beta }\) has eigenvalues (*n* − 1)*α* + *β* (non-redundant), with multiplicity 1, and −*α* + *β* (redundant), with multiplicity *n* − 1. The corresponding eigenvectors are **1**, the constant vector 1 (non-redundant), and **e**_{i}, the vectors with non-zero entries 1 at position 1, and −1 at position *i*, 2 ≤ *i* ≤ *n* (redundant). This can be shown directly by computing \({A}_{n}^{\alpha ,\beta }{\bf{1}}\) and \({A}_{n}^{\alpha ,\beta }{{\bf{e}}}_{i}\), and noting that **1**, **e**_{2}, …, **e**_{n} are linearly independent (although not orthogonal), and thus form an eigenbasis. Indeed, \({A}_{n}^{\alpha ,\beta }{\bf{1}}\) is the vector of column sums of the matrix \({A}_{n}^{\alpha ,\beta }\), which are constant (*n* − 1)*α* + *β*, and \({A}_{n}^{\alpha ,\beta }{{\bf{e}}}_{i}\) is the constant 0 vector, except possibly at positions 1, which equals *β* − *α*, and *i*, which equals *α* − *β*.
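This spectrum can be confirmed directly; a short sketch (the weights *α* = 2, *β* = −1 and size *n* = 5 are arbitrary illustrative choices):

```python
import numpy as np

# (alpha, beta)-uniform graph K_n^{alpha,beta}: alpha off-diagonal, beta diagonal.
n, alpha, beta = 5, 2.0, -1.0   # illustrative values
A = alpha * np.ones((n, n)) + (beta - alpha) * np.eye(n)

evals = np.sort(np.linalg.eigvalsh(A))
# Redundant eigenvalue -alpha + beta = -3 with multiplicity n - 1 ...
assert np.allclose(evals[:n - 1], beta - alpha)
# ... and non-redundant eigenvalue (n-1)*alpha + beta = 7 with multiplicity 1.
assert np.isclose(evals[-1], (n - 1) * alpha + beta)
```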

**Redundant spectrum of a 2-orbit BSM**: A BSM with two orbits is a uniform join of the form \({K}_{n}^{{\alpha }_{1},{\beta }_{1}}\mathop{\leftrightarrow }\limits^{\gamma ,\delta }{K}_{n}^{{\alpha }_{2},{\beta }_{2}}\), that is, two uniform graphs on *n* vertices each, joined by the uniform cross-connectivity matrix \({A}_{n}^{\gamma ,\delta }\).

Define *a* = *α*_{1} − *β*_{1}, *b* = *α*_{2} − *β*_{2}, *c* = *γ* − *δ*, and note that *c* ≠ 0: otherwise *γ* = *δ* and we can freely permute one orbit while fixing the other, that is, this would not be a BSM with two orbits but rather two BSMs with one orbit each. As above, let **e**_{i} be the vector with non-zero entries 1 at position 1, and −1 at position *i*, for any 2 ≤ *i* ≤ *n*.

**Lemma 1** *The following set of vectors is linearly independent* {(*κ*_{1} *e*_{i} ∣ *e*_{i}), (*κ*_{2} *e*_{i} ∣ *e*_{i})∣*2* ≤ *i* ≤ *n*}*, for all* \({\kappa }_{1}\, \ne \, {\kappa}_{2}\in {\mathbb{R}}\).

**Proof** Define the (*n* − 1) × *n* matrix *B*_{n} = (**1** ∣ −Id_{n−1}), where **1** is a constant 1 column vector, and Id_{n−1} the identity matrix of size *n* − 1. The set of vectors in the statement can be arranged in block matrix form as

\(\left(\begin{array}{cc}{\kappa }_{1}{B}_{n}&{B}_{n}\\ {\kappa }_{2}{B}_{n}&{B}_{n}\end{array}\right).\)

This matrix has a minor of order 2(*n* − 1), obtained by deleting the first column of each block,

\(\left(\begin{array}{cc}-{\kappa }_{1}{\,\text{Id}\,}_{n-1}&-{\,\text{Id}\,}_{n-1}\\ -{\kappa }_{2}{\,\text{Id}\,}_{n-1}&-{\,\text{Id}\,}_{n-1}\end{array}\right).\)

Using that \(\det \left(\begin{array}{cc}A&B\\ C&D\end{array}\right)=\det (AD-BC)\) whenever *A*, *B*, *C*, *D* are square blocks of the same size and *C* commutes with *D*^{51}, this minor equals \(\det \left(({\kappa }_{1}-{\kappa }_{2}){\,\text{Id}\,}_{n-1}\right)={({\kappa }_{1}-{\kappa }_{2})}^{n-1}\,\ne\, 0\), since *κ*_{1} ≠ *κ*_{2}, hence the vectors are linearly independent. □

Next, we derive conditions for a vector *v*_{i} = (*κ**e*_{i}∣*e*_{i}) to be an eigenvector of the uniform join (32), that is, *A**v*_{i} = *λ**v*_{i}, for some \(\lambda \in {\mathbb{R}}\), where *A* is the (symmetric) adjacency matrix of the uniform join,

\(A=\left(\begin{array}{cc}{A}_{n}^{{\alpha }_{1},{\beta }_{1}}&{A}_{n}^{\gamma ,\delta }\\ {A}_{n}^{\gamma ,\delta }&{A}_{n}^{{\alpha }_{2},{\beta }_{2}}\end{array}\right).\)

Since \({A}_{n}^{\alpha ,\beta }{{\bf{e}}}_{i}=(\beta -\alpha ){{\bf{e}}}_{i}\), we have \(A{v}_{i}=\left((-a\kappa -c){{\bf{e}}}_{i}\ |\ (-c\kappa -b){{\bf{e}}}_{i}\right)\). Comparing these with the entries of the vector *λ**v*_{i}, we obtain −*a**κ* − *c* = *λ**κ* and −*c**κ* − *b* = *λ*. These equations are satisfied if and only if *λ* = −*κ**c* − *b* and *κ* is a solution of the quadratic equation

\(c{\kappa }^{2}+(b-a)\kappa -c=0,\)

which has two distinct real solutions

\(\kappa =\frac{(a-b)\pm \sqrt{{(a-b)}^{2}+4{c}^{2}}}{2c},\)

since *c* ≠ 0, as explained above. Altogether with the lemma, we have shown the following.

**Theorem 5** *The redundant spectrum of a symmetric motif with two orbits* \({K}_{n}^{{\alpha }_{1},{\beta }_{1}}\mathop{\leftrightarrow }\limits^{\gamma ,\delta }{K}_{n}^{{\alpha }_{2},{\beta }_{2}}\) *is given by the eigenvalues*

\({\lambda }_{1}=-{\kappa }_{1}c-b\quad \,\text{and}\,\quad {\lambda }_{2}=-{\kappa }_{2}c-b,\)

*each with multiplicity n* − *1, and eigenvectors* (*κ*_{1}*e*_{i}∣*e*_{i}) *and* (*κ*_{2}*e*_{i}∣*e*_{i})*, respectively, where κ*_{1} *and κ*_{2} *are the two solutions of the quadratic equation cκ*^{2} *+* *(b* − *a)κ* *−* *c* = *0, a* = *α*_{1} *−* *β*_{1}*, b* = *α*_{2} − *β*_{2} *and c* = *γ* − *δ* ≠ *0*.

For unweighted graphs without loops, we recover the redundant eigenvalues for BSMs with two orbits predicted in ref. ^{13}, as follows. We have *β*_{1} = *β*_{2} = 0, *α*_{1}, *α*_{2}, *γ*, *δ* ∈ {0, 1}, and thus *a*, *b* ∈ {0, 1} and *c* ∈ { −1, 1}. If *a* = *b*, the quadratic equation becomes *κ*^{2} − 1 = 0 with solutions *κ* = ±1 and thus *λ* = −*b* − *c**κ* ∈ {−2, −1, 0, 1}. If *a* ≠ *b* we can assume *a* = 1, *b* = 0 and the quadratic *c**κ*^{2} − *κ* − *c* = 0 has solutions *φ* and 1 − *φ* if *c* = 1, −*φ* and *φ* − 1 if *c* = −1, where \(\varphi =\frac{1+\sqrt{5}}{2}\) is the golden ratio. In either case, the redundant eigenvalues *λ* = −*b* − *c**κ* = −*c**κ* are −*φ* and *φ* − 1. Altogether, the redundant eigenvalues for 2-orbit BSMs are {−2, −*φ*, −1, 0, *φ* − 1, 1}, which equals the redundant eigenvalues RSpec_{2} in the notation of ref. ^{13}.
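Theorem 5 can be sanity-checked numerically; the sketch below (an illustrative unweighted 2-orbit BSM, chosen so that the golden-ratio case *a* = 1, *b* = 0, *c* = 1 occurs) compares the predicted redundant eigenvalues against the full spectrum:

```python
import numpy as np

n = 3
J, I = np.ones((n, n)), np.eye(n)
# Illustrative 2-orbit BSM: orbit 1 a complete graph, orbit 2 empty, joined
# by all cross edges except a perfect matching (gamma = 1, delta = 0).
A = np.block([[J - I, J - I],
              [J - I, np.zeros((n, n))]])

a, b, c = 1 - 0, 0 - 0, 1 - 0        # a = alpha1 - beta1, etc.
kappas = np.roots([c, b - a, -c])    # c*k^2 + (b - a)*k - c = 0
lams = -kappas * c - b               # predicted: {-phi, phi - 1}

evals = np.linalg.eigvalsh(A)
for lam in lams:
    assert np.sum(np.isclose(evals, lam)) >= n - 1   # multiplicity n - 1 each
```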

We omit the calculation of the redundant spectrum of BSMs with three (or more) orbits, as it becomes much more elaborate, and its relevance to real-world networks is less clear (for example, <1% of BSMs in each of our test networks, Table 1, have three or more orbits).

### Applications

**Theorem 6** (Communicability) *Let Q*(*A*) *be the quotient of the adjacency matrix A of a network with respect to the partition into orbits of the automorphism group. Let f*(*x*) = *∑a*_{n}*x*^{n} *be an analytic function. Then f*(*Q*(*A*)) = *Q*(*f*(*A*))*.*

**Proof** Call *B* = *Q*(*A*) and recall that *A**S* = *S**B* by Proposition 1(i), where *S* is the characteristic matrix of the partition. Therefore, *A*^{n}*S* = *S**B*^{n} for all *n* ≥ 0 and

\(Q(f(A))={\Lambda }^{-1}{S}^{T}f(A)S={\Lambda }^{-1}{S}^{T}\left({\sum }_{n}{a}_{n}{A}^{n}\right)S={\Lambda }^{-1}{S}^{T}S\left({\sum }_{n}{a}_{n}{B}^{n}\right)=f(B)=f(Q(A)),\)

since Λ^{−1}*S*^{T}*S* is the identity matrix. □
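A minimal numerical sketch of Theorem 6 with *f* = exp, the communicability function (the star graph, the orbit partition, and the truncated-series helper `mexp` are illustrative choices, not from the paper):

```python
import numpy as np

def mexp(M, terms=40):
    # truncated power series for the matrix exponential (illustration only)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative graph: star K_{1,3}, with orbits {0} and {1, 2, 3}.
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1
S = np.zeros((4, 2)); S[0, 0] = 1; S[1:, 1] = 1
Q = lambda M: np.linalg.inv(S.T @ S) @ S.T @ M @ S   # quotient of a matrix

# Theorem 6: communicability can be computed on the (smaller) quotient,
# since f(Q(A)) = Q(f(A)).
assert np.allclose(mexp(Q(A)), Q(mexp(A)))
```

The practical point is the left-hand side: `mexp(Q(A))` works on a 2 × 2 matrix rather than the full 4 × 4 one.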

**Theorem 7** (Shortest path distance) *Let A* = (*a*_{ij}) *be as above. Then*

- (i) *if* (*v*_{1}, *v*_{2}, *…*, *v*_{n}) *is a shortest path from v*_{1} *to v*_{n} *and* \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\)*, then* \(\left(\sigma ({v}_{1}),\sigma ({v}_{2}),\ldots ,\sigma ({v}_{n})\right)\) *is a shortest path from σ*(*v*_{1}) *to σ*(*v*_{n});
- (ii) *if* (*v*_{1}, *v*_{2}, *…*, *v*_{n}) *is a shortest path from v*_{1} *to v*_{n}*, and v*_{1} *and v*_{n} *belong to different symmetric motifs, then v*_{i} *and v*_{i+1} *belong to different orbits, for all 1* ≤ *i* ≤ *n* − *1*;
- (iii) *if u and v belong to orbits U, respectively V, in different symmetric motifs, then the distance from u to v in* \({\mathscr{G}}\) *equals the distance from U to V in the unweighted* (*or skeleton*) *quotient* \({\mathscr{Q}}\).

**Proof** (i) Since automorphisms are bijections and preserve adjacency, (*σ*(*v*_{1}), *σ*(*v*_{2}), …, *σ*(*v*_{n})) is a path from *σ*(*u*) to *σ*(*v*) of the same length. If there were a shorter path (*σ*(*u*) = *w*_{1}, *w*_{2}, …, *σ*(*v*) = *w*_{m}), *m* < *n*, the same argument applied to *σ*^{−1} gives a shorter path (*u* = *σ*^{−1}(*w*_{1}), *σ*^{−1}(*w*_{2}), …, *v* = *σ*^{−1}(*w*_{m})) from *u* to *v*, a contradiction.

(ii) Any subpath of a minimal length path is also of minimal length between its endpoints. Arguing by contradiction, there exists a subpath *p* = (*w*_{1}, *w*_{2}, …, *w*_{n}) (or *p* = (*w*_{n}, *w*_{n−1}, …, *w*_{1})), such that *w*_{1} and *w*_{2} belong to the same orbit, and *w*_{n} belongs to a different symmetric motif. Hence, we can find \(\sigma \in \,{\text{Aut}}\,({\mathscr{G}})\) with *σ*(*w*_{2}) = *w*_{1} and fixing *w*_{n}. This implies *σ*(*p*) = (*σ*(*w*_{1}), *σ*(*w*_{2}) = *w*_{1}, *σ*(*w*_{3}), …, *σ*(*w*_{n}) = *w*_{n}), a shortest path by (i), of length *n* − 1. The subpath (*w*_{1}, *σ*(*w*_{3}), …, *w*_{n}) has length *n* − 2, contradicting *p* being a minimal length path from *w*_{1} to *w*_{n}. (The case *p* = (*w*_{n}, *w*_{n−1}, …, *w*_{1}) is analogous.)

(iii) Let *p* = (*u* = *v*_{1}, *v*_{2}, …, *v*_{n+1} = *v*) be a shortest path from *u* to *v*, so that \({d}^{{\mathscr{G}}}(u,v)=n\). Let *V*_{k} be the orbit containing *v*_{k}, for all *k*. By (ii), *V*_{k} ≠ *V*_{k+1} for all 1 ≤ *k* ≤ *n*, thus *q* = (*U* = *V*_{1}, *V*_{2}, …, *V*_{n+1} = *V*) is a path in \({\mathscr{Q}}\) and \({d}^{{\mathscr{Q}}}(U,V)\le n\). By contradiction, assume there is a shorter path in \({\mathscr{Q}}\) from *U* to *V*, that is, (*U* = *W*_{1}, *W*_{2}, …, *W*_{m+1} = *V*) with *m* < *n*. Then we can construct a path in \({\mathscr{G}}\) from *u* to *v* of length *m* (a contradiction), as follows. For each 1 ≤ *i* ≤ *m*, *W*_{i} is connected to *W*_{i+1} in \({\mathscr{Q}}\), hence there is a vertex in *W*_{i} connected to at least one vertex in *W*_{i+1}. Since vertices in an orbit are structurally indistinguishable, *any* vertex in *W*_{i} is then connected to at least one vertex in *W*_{i+1} (formally, if *w* ∈ *W*_{i} is connected to \(w^{\prime} \in {W}_{i+1}\) then *σ*(*w*) ∈ *W*_{i} is connected to \(\sigma (w^{\prime} )\in {W}_{i+1}\)). This allows us to construct a path in \({\mathscr{G}}\) from *u* to *v* of length *m* < *n*, a contradiction. □
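Part (iii) is the computationally useful statement: shortest-path queries between different motifs can be answered on the smaller quotient. A minimal sketch (the graph, its orbit labelling, and the BFS helper are illustrative choices made by hand, not computed from the automorphism group):

```python
from collections import deque

def bfs_dist(adj, s):
    # unweighted shortest-path distances from s by breadth-first search
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Twin leaves {0,1} on vertex 2, path 2-3-4, twin leaves {5,6} on vertex 4.
edges = [(0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6)]
adj = {i: set() for i in range(7)}
for i, j in edges:
    adj[i].add(j); adj[j].add(i)

orbit = {0: 0, 1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 4}   # orbit label per vertex
# Skeleton quotient: two orbits adjacent iff some pair of their vertices is.
q_adj = {k: set() for k in range(5)}
for i, j in edges:
    if orbit[i] != orbit[j]:
        q_adj[orbit[i]].add(orbit[j]); q_adj[orbit[j]].add(orbit[i])

# Vertices 0 and 5 lie in different symmetric motifs, so d_G(0, 5)
# equals the distance between their orbits in the skeleton quotient.
assert bfs_dist(adj, 0)[5] == bfs_dist(q_adj, orbit[0])[orbit[5]]
```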

We call the *external degree* of a vertex the number of adjacent vertices outside the motif it belongs to. The proof of the following is straightforward from the definitions.

**Theorem 8** (Symmetric motif Laplacian) *A symmetric motif* \({\mathscr{M}}\) *in* \({\mathscr{G}}\) *induces a symmetric motif in* \({\mathscr{L}}\) *with adjacency matrix*

\({L}_{{\mathscr{M}}}+{d}_{1}{I}_{{m}_{1}}\oplus \ldots \oplus {d}_{k}{I}_{{m}_{k}},\)

*where* \({L}_{{\mathscr{M}}}\) *is the ordinary Laplacian matrix of* \({\mathscr{M}}\) *considered as a graph on its own, and d*_{1}, *…*, *d*_{k} *are the external degrees of the k orbits of* \({\mathscr{M}}\) *of sizes m*_{1}, *…*, *m*_{k}*. (Here, I*_{n} *is the identity matrix of size n and we use* *⊕* *to construct a block diagonal matrix*).

Recall that each orbit in a BSM (in an undirected, unweighted graph) is either a complete or an empty graph.

**Corollary 2** (Redundant Laplacian eigenvalues) *Let* \({\mathscr{G}}\) *be an undirected*, *unweighted network. If* \({\mathscr{M}}\) *is a 1-orbit BSM with m vertices of external degree d*, *then the redundant Laplacian eigenvalue induced by* \({\mathscr{M}}\) *is d if* \({\mathscr{M}}\) *is an empty graph, and d* *+* *m if* \({\mathscr{M}}\) *is a complete graph, in both cases with multiplicity m* *−* *1*.

**Proof** By Theorem 8, the Laplacian of the motif in \({\mathscr{L}}\) is \({L}_{{\mathscr{M}}}+d{I}_{m}\). The redundant eigenvalues of this matrix are the redundant eigenvalues of \({L}_{{\mathscr{M}}}\) (0 if \({\mathscr{M}}\) is empty and *m* if \({\mathscr{M}}\) is a complete graph, in both cases with multiplicity *m* − 1) plus *d*. All in all, the redundant eigenvalues for 1-orbit BSMs occur at the positive integers \({{\mathbb{Z}}}^{+}\). □
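Both cases of Corollary 2 can be checked on toy graphs (illustrative examples, not from the paper); the redundant eigenvector is again the difference of the twin vertices:

```python
import numpy as np

def laplacian(edges, n):
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return np.diag(A.sum(axis=1)) - A

# Empty 1-orbit BSM: non-adjacent twins {1, 2} on vertex 0, external degree d = 1.
L1 = laplacian([(0, 1), (0, 2), (0, 3)], 4)
w = np.array([0.0, 1.0, -1.0, 0.0])
assert np.allclose(L1 @ w, 1 * w)   # redundant Laplacian eigenvalue d = 1

# Complete 1-orbit BSM: adjacent twins {1, 2}, so eigenvalue d + m = 1 + 2 = 3.
L2 = laplacian([(0, 1), (0, 2), (1, 2), (0, 3)], 4)
assert np.allclose(L2 @ w, 3 * w)
```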

**Theorem 9** (Vertex compression) *If v is a vector of length* \({n}_{{\mathscr{G}}}\) *that is constant on orbits, then S*Λ^{−1}*S*^{T}*v* = *v*.

**Proof** First, note that *S*^{T}*S* = Λ (this holds for any partition of the vertex set). As *v* is constant on orbits, it is already of the form *v* = *S**w* for some *w*. Therefore *S*Λ^{−1}*S*^{T}*v* = *S*Λ^{−1}*S*^{T}*S**w* = *S**w* = *v*. □
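In practice this means an orbit-constant vector (of any vertex measure) can be stored as one value per orbit and recovered exactly; a minimal sketch (illustrative partition and values):

```python
import numpy as np

# Orbits of an illustrative 5-vertex network: {0}, {1, 2}, {3, 4}.
orbits = [[0], [1, 2], [3, 4]]
S = np.zeros((5, 3))
for k, orb in enumerate(orbits):
    S[orb, k] = 1
Lam = S.T @ S                 # S^T S = diag(orbit sizes), for any partition

# A vector constant on orbits is fully recovered from its compressed form,
# the shorter vector Lam^{-1} S^T v holding one value per orbit.
v = np.array([7.0, 1.0, 1.0, 4.0, 4.0])
v_q = np.linalg.inv(Lam) @ S.T @ v        # compressed: [7, 1, 4]
assert np.allclose(S @ v_q, v)            # S Lam^{-1} S^T v = v
```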

**Proposition 2** (Degree centrality) *Let B* = *(b*_{αβ}*) be the adjacency matrix of the quotient, and V* = *V*_{1} *∪* *…* *∪* *V*_{m} *the partition of the vertex set into orbits. If i* *∈* *V*_{α}, *then* \({d}_{i}^{{\mathscr{G}}}={d}_{\alpha }^{{\mathscr{Q}},\,\text{out}\,}\)*.*

**Proof** If *i* ∈ *V*_{α}, then by definition of the quotient \({b}_{\alpha \beta }={\sum }_{j\in {V}_{\beta }}{a}_{ij}\), hence \({d}_{i}^{{\mathscr{G}}}={\sum }_{j}{a}_{ij}={\sum }_{\beta }{\sum }_{j\in {V}_{\beta }}{a}_{ij}={\sum }_{\beta }{b}_{\alpha \beta }={d}_{\alpha }^{{\mathscr{Q}},\,\text{out}\,}\). □
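A short sketch of Proposition 2 (the twin-leaf graph and orbit list are illustrative choices): the quotient adjacency is built from one representative vertex per orbit, and its row sums reproduce every vertex degree.

```python
import numpy as np

# Twin leaves {0, 1} attached to vertex 2, plus edge 2-3 (illustrative).
A = np.zeros((4, 4))
for i, j in [(0, 2), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1
orbits = [[0, 1], [2], [3]]

# Quotient adjacency: b[alpha, beta] = number of edges from ONE vertex of
# orbit alpha into orbit beta (well defined, as orbit vertices are equivalent).
B = np.array([[A[orb_a[0], orb_b].sum() for orb_b in orbits]
              for orb_a in orbits])

# Degree of any vertex = out-degree of its orbit in the quotient.
for a, orb in enumerate(orbits):
    for i in orb:
        assert A[i].sum() == B[a].sum()
```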

### Weighted and directed networks

The adjacency matrix of a network can encode arbitrary weights and directions, making a general *n* × *n* real matrix *A* the adjacency matrix of some (weighted, directed) network. The definitions of automorphism group, geometric decomposition, symmetric motif, and orbit, and their properties, as they are defined only in terms of *A*, carry over verbatim to arbitrarily weighted and directed networks. In this setting, a symmetry (automorphism) respects not only adjacency, but also weights and directions. In particular, the automorphism group is a subgroup of (hence no larger than) the automorphism group of the underlying undirected, unweighted network. By introducing edge weights or directions, some symmetries will disappear, removing (and occasionally subdividing) symmetric motifs and orbits, as the next result shows.

**Theorem 10** *Let A*_{w} = (*w*_{ij}) *be the adjacency matrix of an arbitrarily weighted and directed network* \({{\mathscr{G}}}_{\text{w}}\), *and A* = (*a*_{ij}) *the adjacency matrix of the underlying undirected and unweighted network* \({\mathscr{G}}\)*, that is, a*_{ij} = *sgn*(∣*w*_{ij}∣ + ∣*w*_{ji}∣)*. Consider the symmetric motifs of* \({\mathscr{G}}\)*, respectively* \({{\mathscr{G}}}_{\text{w}}\)*, with vertex sets M*_{1}, *…*, *M*_{m}, *respectively* \({M}_{1}^{\prime}\), *…*, \({M}_{m^{\prime} }^{\prime}\)*. Then for every* \(1\le i\le m^{\prime}\) *there is a unique 1* *≤* *j* *≤* *m such that* \({M}_{i}^{\prime}\subseteq {M}_{j}\)*. Similarly, each vertex orbit in* \({{\mathscr{G}}}_{\text{w}}\) *is a subset of a vertex orbit in* \({\mathscr{G}}\).

**Proof** First, we show that the automorphism group of \({{\mathscr{G}}}_{\text{w}}\) is a subgroup of the automorphism group of \({\mathscr{G}}\). If *σ*: *V* → *V* is a permutation of the vertices, then *w*_{σ(i)σ(j)} = *w*_{ij} ⇒ *a*_{σ(i)σ(j)} = *a*_{ij}, by considering two cases: *w*_{ij} ≠ 0 implies *w*_{σ(i)σ(j)} ≠ 0, which gives *a*_{ij} = *a*_{σ(i)σ(j)} = 1; *w*_{ij} = 0 implies *w*_{σ(i)σ(j)} = 0, which gives *a*_{ij} = *a*_{σ(i)σ(j)} = 0 (note *w*_{ij} ≠ 0 ⇔ *a*_{ij} = 1). Hence, \(\,{\text{Aut}}\,({{\mathscr{G}}}_{\text{w}})\subset \,{\text{Aut}}\,({\mathscr{G}})\), which immediately gives the result on orbits.

Let us choose essential^{11} sets of generators *X*, respectively \(X^{\prime}\), of \(\,{\text{Aut}}\,({\mathscr{G}})\), respectively \(\,{\text{Aut}}\,({{\mathscr{G}}}_{\text{w}})\), with support-disjoint partitions

\(X={X}_{1}\cup \ldots \cup {X}_{m}\quad \,\text{and}\,\quad X^{\prime} ={X}_{1}^{\prime}\cup \ldots \cup {X}_{m^{\prime} }^{\prime}.\)

It is enough to prove the statement for these sets: given *i*, there is a unique *j* such that \({X}_{i}^{\prime}\subseteq {X}_{j}\). Let \(x^{\prime} \in {X}_{i}^{\prime}\subseteq \,{\text{Aut}}\,({{\mathscr{G}}}_{\text{w}})\subseteq \,{\text{Aut}}\,({\mathscr{G}})\), thus we can write \(x^{\prime} ={h}_{1}\cdot \ldots \cdot {h}_{m}\) with *h*_{k} ∈ *H*_{k} = 〈*X*_{k}〉. Since \(X^{\prime}\) is an essential set of generators, there is an index *j* such that *h*_{k} = 1 (the identity, or trivial permutation) for all *k* ≠ *j*, so that \(x^{\prime} ={h}_{j}\). Given any other \(y^{\prime} \in {X}_{i}^{\prime}\), the same argument gives \(y^{\prime} ={h}_{l}\) for some 1 ≤ *l* ≤ *m*. We claim *j* = *l*, as follows. The partitions of *X*, respectively \(X^{\prime}\), above are the equivalence classes of the equivalence relation generated by *σ* ~ *τ* if *σ* and *τ* are not support-disjoint permutations. Since \(x^{\prime} ,y^{\prime}\) are in the same equivalence class, so are *h*_{j} and *h*_{l}, and thus *j* = *l*. □

The same result applies to networks with other additional structure, not necessarily expressed in terms of the adjacency matrix, such as arbitrary vertex or edge labels, by restricting to automorphisms preserving the additional structure. We obtain fewer symmetries, and a refinement of the geometric decomposition, symmetric motifs, and orbits as above. The results in this paper, although applicable in theory, become less useful in practice as further restrictions are imposed, reducing the number of available network symmetries.

### Asymmetric measures

In the case of an asymmetric network measure (*F*(*i*, *j*) ≠ *F*(*j*, *i*)), its network representation \(F({\mathscr{G}})\) is directed even if \({\mathscr{G}}\) is not. However, \(F({\mathscr{G}})\) still inherits all the symmetries of \({\mathscr{G}}\), that is, every automorphism of \({\mathscr{G}}\) respects weights *and* edge directions in \(F({\mathscr{G}})\). Therefore, \(F({\mathscr{G}})\) has the same symmetric motifs (as vertex sets) and orbits as \({\mathscr{G}}\), and the structural results in this paper apply verbatim.

## Data availability

The data sets analysed during the current study are available at the locations stated in the caption to Table 1. The data sets generated during the current study can be found at https://doi.org/10.6084/m9.figshare.11619792.

## Code availability

The code used to process the data sets can be found at https://bitbucket.org/rubenjsanchezgarcia/networksymmetry/.

## References

- 1. Newman, M. *Networks: An Introduction* (Oxford University Press, Oxford, 2010).
- 2. Watts, D. & Strogatz, S. Collective dynamics of small-world networks. *Nature* **393**, 440–442 (1998).
- 3. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. *Science* **286**, 509–512 (1999).
- 4. Milo, R. et al. Network motifs: simple building blocks of complex networks. *Science* **298**, 824–827 (2002).
- 5. Tononi, G., Sporns, O. & Edelman, G. M. Measures of degeneracy and redundancy in biological networks. *Proc. Natl Acad. Sci. USA* **96**, 3257–3262 (1999).
- 6. Albert, R. & Barabási, A.-L. Statistical mechanics of complex networks. *Rev. Mod. Phys.* **74**, 47 (2002).
- 7. Chung, F., Lu, L., Dewey, T. G. & Galas, D. J. Duplication models for biological networks. *J. Comput. Biol.* **10**, 677–687 (2003).
- 8. Xiao, Y., Xiong, M., Wang, W. & Wang, H. Emergence of symmetry in complex networks. *Phys. Rev. E* **77**, 066108 (2008).
- 9. Nishikawa, T. & Motter, A. E. Network-complement transitions, symmetries, and cluster synchronization. *Chaos* **26**, 094818 (2016).
- 10. Klickstein, I. & Sorrentino, F. Generating symmetric graphs. *Chaos* **28**, 121102 (2018).
- 11. MacArthur, B. D., Sánchez-García, R. J. & Anderson, J. W. Symmetry in complex networks. *Discret. Appl. Math.* **156**, 3525–3531 (2008).
- 12. Xiao, Y., MacArthur, B. D., Wang, H., Xiong, M. & Wang, W. Network quotients: structural skeletons of complex systems. *Phys. Rev. E* **78**, 046102 (2008).
- 13. MacArthur, B. D. & Sánchez-García, R. J. Spectral characteristics of network redundancy. *Phys. Rev. E* **80**, 026117 (2009).
- 14. Pecora, L. M., Sorrentino, F., Hagerstrom, A. M., Murphy, T. E. & Roy, R. Cluster synchronization and isolated desynchronization in complex networks with symmetries. *Nat. Commun.* **5**, 4079 (2014).
- 15. Golubitsky, M. & Stewart, I. *The Symmetry Perspective: from Equilibrium to Chaos in Phase Space and Physical Space*. Vol. 200 (Springer Science & Business Media, Basel, 2003).
- 16. Stewart, I., Golubitsky, M. & Pivato, M. Symmetry groupoids and patterns of synchrony in coupled cell networks. *SIAM J. Appl. Dynamical Syst.* **2**, 609–646 (2003).
- 17. Golubitsky, M., Stewart, I. & Török, A. Patterns of synchrony in coupled cell networks with multiple arrows. *SIAM J. Appl. Dynamical Syst.* **4**, 78–100 (2005).
- 18. Aguiar, M. A. & Dias, A. P. S. Synchronization and equitable partitions in weighted networks. *Chaos* **28**, 073105 (2018).
- 19. Sorrentino, F., Siddique, A. B. & Pecora, L. M. Symmetries in the time-averaged dynamics of networks: reducing unnecessary complexity through minimal network models. *Chaos* **29**, 011101 (2019).
- 20. Nicosia, V., Valencia, M., Chavez, M., Díaz-Guilera, A. & Latora, V. Remote synchronization reveals network symmetries and functional modules. *Phys. Rev. Lett.* **110**, 174102 (2013).
- 21. Sorrentino, F., Pecora, L. M., Hagerstrom, A. M., Murphy, T. E. & Roy, R. Complete characterization of the stability of cluster synchronization in complex dynamical networks. *Sci. Adv.* **2**, e1501737 (2016).
- 22. Sorrentino, F. & Pecora, L. Approximate cluster synchronization in networks with symmetries and parameter mismatches. *Chaos* **26**, 094823 (2016).
- 23. Schaub, M. T. et al. Graph partitions and cluster synchronization in networks of oscillators. *Chaos* **26**, 094821 (2016).
- 24. Pecora, L. M., Sorrentino, F., Hagerstrom, A. M., Murphy, T. E. & Roy, R. in *Advances in Dynamics, Patterns, Cognition* 145–160 (Springer, 2017).
- 25. Siddique, A. B., Pecora, L., Hart, J. D. & Sorrentino, F. Symmetry- and input-cluster synchronization in networks. *Phys. Rev. E* **97**, 042217 (2018).
- 26. Bitbucket repository. https://bitbucket.org/rubenjsanchezgarcia/networksymmetry/ (2019).
- 27. Xiao, Y., Wu, W., Pei, J., Wang, W. & He, Z. Efficiently indexing shortest paths by exploiting symmetry in graphs. In *Proc. 12th International Conference on Extending Database Technology: Advances in Database Technology* 493–504 (ACM, 2009).
- 28. Wang, J., Huang, Y., Wu, F.-X. & Pan, Y. Symmetry compression method for discovering network motifs. *IEEE/ACM Trans. Comput. Biol. Bioinf.* **9**, 1776–1789 (2012).
- 29. Karalus, S. & Krug, J. Symmetry-based coarse-graining of evolved dynamical networks. *Europhys. Lett.* **111**, 38003 (2015).
- 30. Nyberg, A., Gross, T. & Bassler, K. E. Mesoscopic structures and the Laplacian spectra of random geometric graphs. *J. Complex Netw.* **3**, 543–551 (2015).
- 31. Do, A.-L., Höfener, J. & Gross, T. Engineering mesoscale structures with distinct dynamical implications. *N. J. Phys.* **14**, 115022 (2012).
- 32. Biggs, N. *Algebraic Graph Theory* (Cambridge University Press, Cambridge, 1993).
- 33. Godsil, C. & Royle, G. F. *Algebraic Graph Theory* (Springer, New York, 2013).
- 34. Cvetkovic, D. M., Rowlinson, P. & Simic, S. *An Introduction to the Theory of Graph Spectra* (Cambridge University Press, Cambridge, 2010).
- 35. Brouwer, A. E. & Haemers, W. H. *Spectra of Graphs* (Springer, New York, 2011).
- 36. Estrada, E. & Rodriguez-Velazquez, J. A. Subgraph centrality in complex networks. *Phys. Rev. E* **71**, 056103 (2005).
- 37. Estrada, E. & Hatano, N. Communicability in complex networks. *Phys. Rev. E* **77**, 036111 (2008).
- 38. Erdős, P. & Rényi, A. Asymmetric graphs. *Acta Math. Hung.* **14**, 295–315 (1963).
- 39. Borgatti, S. & Grosser, T. Structural equivalence: meaning and measures. *International Encyclopedia of the Social & Behavioral Sciences* (2015).
- 40. *Internet topology network dataset–KONECT*. http://konect.uni-koblenz.de/networks/[openflights|opsahl_powergrid|ca-AstroPh|topology|wordnet-words|com-amazon|actor-collaboration|as-skitter,|roadNet-CA|livejournal-links] (2016).
- 41. *A Network Graph for Human Diseases*. http://exploring-data.com/info/human-disease-network/ (2018).
- 42. *Yeast Interactome Project*. http://interactome.dfci.harvard.edu/S_cerevisiae/ (2018).
- 43. *Human Protein Reference Database, Release9_062910*. http://www.hprd.org/ (2018).
- 44. Lovász, L. *Random Walks on Graphs: A Survey* (Bolyai Society Mathematical Studies, 1993).
- 45. Klein, D. J. & Randić, M. Resistance distance. *J. Math. Chem.* **12**, 81–95 (1993).
- 46. Brandes, U. A faster algorithm for betweenness centrality. *J. Math. Socio.* **25**, 163–177 (2001).
- 47. *Saucy 3.0*. http://vlsicad.eecs.umich.edu/BK/SAUCY/ (2012).
- 48. *GAP–Groups, Algorithms, and Programming, Version 4.8.7*. http://www.gap-system.org (2017).
- 49. Liebeck, M. W. Graphs whose full automorphism group is a symmetric group. *J. Aust. Math. Soc.* **44**, 46–63 (1988).
- 50. Rotman, J. J. *An Introduction to the Theory of Groups* (Springer, New York, 2012).
- 51. Horn, R. A. & Johnson, C. R. *Matrix Analysis* (Cambridge University Press, Cambridge, 1990).

## Acknowledgements

Special thanks to Ben MacArthur for support and advice during the writing of this article. The Newton Institute in Cambridge supported the author during the programme “Theoretical foundations for statistical network analysis” (EPSRC grant EP/K032208/1). Conor Wild’s undergraduate project “Topics in Network Symmetries” contains (unpublished) results about the graph Laplacian in the context of symmetries generalised here. Thanks to Yamir Moreno and Emanuele Cozzo, whose “simple” question inadvertently prompted this lengthy answer.

## Author information

### Affiliations

### Contributions

R.J.S.G. conceived the project, carried out the mathematical and numerical analysis, and wrote the article.

### Corresponding author

Correspondence to Rubén J. Sánchez-García.

## Ethics declarations

### Competing interests

The author declares no competing interests.

## Additional information

**Publisher’s note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Sánchez-García, R.J. Exploiting symmetry in network analysis. *Commun Phys* **3**, 87 (2020). https://doi.org/10.1038/s42005-020-0345-z

Received:

Accepted:

Published:
