Compressing local atomic neighbourhood descriptors

Many atomic descriptors are currently limited by their unfavourable scaling with the number of chemical elements $S$. For example, the length of body-ordered descriptors, such as the Smooth Overlap of Atomic Positions (SOAP) power spectrum (3-body) and the Atomic Cluster Expansion (ACE) (multiple body-orders), scales as $(NS)^\nu$, where $\nu+1$ is the body-order and $N$ is the number of radial basis functions used in the density expansion. We introduce two distinct approaches which can be used to overcome this scaling for the SOAP power spectrum. Firstly, we show that the power spectrum is amenable to lossless compression with respect to both $S$ and $N$, so that the descriptor length can be reduced from $\mathcal{O}(N^2S^2)$ to $\mathcal{O}\left(NS\right)$. Secondly, we introduce a generalized SOAP kernel, where compression is achieved through the use of the total, element-agnostic density in combination with radial projection. The ideas used in the generalized kernel are equally applicable to any other body-ordered descriptor, which we demonstrate for the Atom Centered Symmetry Functions (ACSF). Finally, both compression approaches are shown to offer comparable performance to the original descriptor across a variety of numerical tests.


I. INTRODUCTION
Increases in computational power have made it possible to use quantum mechanical methods [1][2][3] to model materials at a level of accuracy where chemical transformations can be usefully studied. This ability has opened the door for in silico materials modelling to usefully complement, and sometimes even replace, the need for costly experiments. A major drawback of such computational techniques is the limited simulation sizes and timescales that can be reached. Empirical potentials do away with the electrons and model the total energy as a sum of local atomic energies which depend only on the local environment of each atom. These simplifications lead to potentials that are typically many orders of magnitude faster and, crucially, a computational cost that scales linearly with the number of atoms. Moving from hand-crafted potentials with tens of parameters to machine-learned potentials [4,5] has led to a substantial increase in the accuracy and transferability of such potentials in recent years [6][7][8][9][10]. This improvement has enabled vast, and accurate, simulations involving hundreds of thousands of atoms [11] that would otherwise be completely inaccessible.
Another area which has seen rapid development is the application of both supervised and unsupervised learning techniques to large-scale materials data [12][13][14]. Such techniques have already seen great success and will undoubtedly prove useful in expediting material design and discovery [15], as well as in understanding observed trends. Both potential fitting and, more broadly, materials machine learning require mathematical descriptions of material structures that can be used as inputs to models. Many different types of descriptors have been proposed [16], almost all of which are invariant to translations, rotations, and permutation of equivalent atoms. Incorporating these symmetries within the descriptor avoids models having to "learn" them, leading to greater data efficiency. Another commonality is the rapid increase in descriptor size for environments composed of multiple elements. For instance, the size of the SOAP power spectrum [17] scales quadratically with the number of elements $S$, whilst the length of the bispectrum scales as $S^3$. This increase poses challenges for interatomic potentials, both in terms of evaluation speed and the memory required for fitting, as well as more generally for the storage of descriptors in databases. Before introducing some of the existing approaches it is worth noting that the problem can be somewhat mitigated by exploiting sparsity where possible. For instance, the SOAP power spectrum is sparse with respect to elements, so that even if there are $S$ total elements present across a given dataset, only those present in a given environment, $S_{\mathrm{env}}$, need be considered when computing an individual descriptor. Whilst this can facilitate a storage saving, it does not always lead to a reduction in the number of model parameters, e.g. in a linear model, nor does it fully solve the problem for higher body-order descriptors; the corresponding SOAP bispectrum for a high entropy alloy liquid environment with $S_{\mathrm{env}} = 8$ is ∼500 times larger than if a single element were present.

* jpd47@cam.ac.uk
A great deal of effort has already gone into ways of reducing this scaling, with many interesting approaches being used. In refs. [18] and [19] constant complexity in $S$ is achieved by concatenating two descriptors, one of which is element-agnostic and another where the contributions from each element are weighted. This effectively amounts to embedding the elemental information into two dimensions, rather than keeping different elements entirely distinct. A similar strategy was used in refs. [20] and [21], except here the element embeddings were optimised during model fitting, so that the final embeddings contained a data-driven measure of chemical similarity between the elements. Approaching the problem from an information content perspective, the recent work of ref. [22] demonstrated that a model fitted to as few as 10% of the power spectrum components led to negligible degradation in force errors on an independent test set, suggesting that significant compression can be achieved. Similar results were seen in ref. [23], where descriptors were selected using CUR matrix decomposition [24]; in ref. [25], where state-of-the-art model performance was achieved on the QM9 dataset with a heavily compressed descriptor obtained through repeated tensor products and reduction using Principal Component Analysis (PCA); and also in ref. [26], where a data-driven approach to constructing an optimal radial basis was employed with great success.
A complementary strategy is to focus on general, non-data-driven ways of reducing the scaling with $S$ whilst minimizing the loss of information. Such compression strategies could prove useful in situations where the dataset evolves over time, e.g. adding configurations with new elements, or for the storage of descriptors in databases such as NOMAD [14][27], where the use-case for the descriptors is not yet known. Analytic compression strategies could also be used before applying data-driven compressions, allowing for larger and more diverse datasets to be treated in this way. The recently introduced Moment Invariant Local Atomic Descriptors (MILAD) [19] follow this philosophy by using a minimal set of rotational invariants as the descriptor, such that the environment can be recovered by inverting the descriptor. Whilst the primary focus of that work was not to reduce the scaling with $S$, we stress that the core ideas used are highly pertinent and that similar themes are present here.
In this work we introduce two novel, non-data-driven approaches for compressing the SOAP power spectrum; the power spectrum is available through the quippy [28][29], dscribe [30] and librascal [31] packages. Firstly, by considering the ability to recover the density expansion coefficients from the power spectrum, we introduce a compressed power spectrum and show that, under certain conditions, it is possible to recover the original descriptor from the compressed version. Secondly, we introduce a generalisation of the SOAP kernel which affords compression with respect to both $S$ and the number of radial basis functions $N$ used in the density expansion. This kernel retains a useful physical interpretation and the ideas used are applicable to all body-ordered descriptors, which is demonstrated using the ACSFs [32]. Finally, we evaluate the performance of the compressed descriptors across a variety of datasets using numerical tests which probe the information content, the sensitivity to small perturbations and the accuracy of fitted energy models and force-fields.

A. SOAP
The SOAP kernel, introduced in ref. [17], provides a way of computing the similarity between a pair of atomic environments. Invariance to permutation of equivalent atoms is achieved by forming densities, where a separate density is constructed for each element $\alpha$,

$$\rho^\alpha(\mathbf{r}) = \sum_{i \,:\, s_i = \alpha} f_{\mathrm{cut}}(r_i) \exp\left(-\frac{|\mathbf{r} - \mathbf{r}_i|^2}{2\sigma^2}\right), \quad (1)$$

where $\sigma$ controls the width of the Gaussians used to construct the density, $f_{\mathrm{cut}}(r)$ is a cutoff function that ensures atoms enter the environment smoothly, and $\mathbf{r}_i$ and $s_i$ denote the position and element of the $i$th neighbour atom respectively. The kernel is made invariant to the orientation of environments by integrating over all possible rotations $\hat{R} \in SO(3)$, with the full multi-species kernel [33] defined as

$$k(\rho, \rho') = \int \mathrm{d}\hat{R} \left| \sum_\alpha \int \rho^\alpha(\mathbf{r}) \, \rho'^\alpha(\hat{R}\mathbf{r}) \, \mathrm{d}\mathbf{r} \right|^\nu. \quad (2)$$

Expanding the density using the spherical harmonics $Y_{lm}(\hat{\mathbf{r}})$ and a set of orthogonal radial basis functions $g_n(r)$, and substituting into the definition of the SOAP kernel with $\nu = 2$, yields

$$k(\rho, \rho') = \sum_{\alpha\beta} \sum_{nn'l} p^{\alpha\beta}_{nn'l} \, p'^{\alpha\beta}_{nn'l} = \mathbf{p} \cdot \mathbf{p}', \qquad p^{\alpha\beta}_{nn'l} = \sum_m c^{\alpha*}_{nlm} c^{\beta}_{n'lm}, \quad (3)$$

where $\mathbf{p}$ and $\mathbf{p}'$ are the SOAP power spectra of the two environments. In this form the rotational invariance of the power spectrum can be seen by noting that the density expansion coefficients transform under rotations as $c^\alpha_{nlm} \to \sum_{m'} D^{l}_{mm'}(\hat{R}) \, c^\alpha_{nlm'}$, where $D^l$ is a unitary Wigner-D matrix, so that the terms $\mathbf{c}^{\alpha*}_{nl} \cdot \mathbf{c}^{\beta}_{n'l}$ are individually invariant [17]. Taking $\sigma \to 0$ (for convenience) in Equation 2 with $\nu = 2$ reveals that the power spectrum is a 3-body descriptor,

$$k(\rho, \rho') = \int \mathrm{d}\hat{R} \sum_{ijpq} \delta_{s_i s_p} \delta(\mathbf{r}_i - \hat{R}\mathbf{r}_p) \, \delta_{s_j s_q} \delta(\mathbf{r}_j - \hat{R}\mathbf{r}_q), \quad (4)$$

where $i, j$ and $p, q$ index atoms in the first and second environment respectively. The integration over all rotations ensures that each matching pair of triangles between the environments, where one vertex is the central atom and the others are neighbour atoms, contributes to the kernel, so that the power spectrum is effectively a histogram of triangles. Increasing $\nu$ increases the body order of the descriptor, so that the bispectrum ($\nu = 3$) corresponds to a histogram of tetrahedra, and so on.
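As a concrete, heavily simplified illustration of the dot-product form of the kernel, the sketch below builds a power spectrum from randomly generated, real-valued expansion coefficients. The array layout `c[element, n, l, m]` and all sizes are illustrative assumptions, not the conventions used by quippy or dscribe.

```python
import numpy as np

S, N, L = 2, 3, 2                      # elements, radial functions, max l

rng = np.random.default_rng(0)
# coefficients stored as c[s, n, l, m], with m padded to 2L+1 entries;
# real coefficients are used for simplicity
c = rng.normal(size=(S, N, L + 1, 2 * L + 1))
for l in range(L + 1):                 # zero out the unused m components
    c[:, :, l, 2 * l + 1:] = 0.0

def power_spectrum(c):
    """Return p[a, b, n, n', l] = sum_m c[a, n, l, m] c[b, n', l, m]."""
    return np.einsum('anlm,bklm->abnkl', c, c)

p = power_spectrum(c)

def kernel(c1, c2):
    """The nu = 2 kernel as a dot product of flattened power spectra."""
    return np.dot(power_spectrum(c1).ravel(), power_spectrum(c2).ravel())
```

Exchanging the two element indices together with the two radial indices leaves the spectrum unchanged, which is the symmetry exploited in the next section to halve the descriptor length.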
With multiple elements, the neighbour atoms at the corner of each triangle must also match, so that there is a histogram of triangles for each pair of elements. In general, the length of the descriptor scales as $S^\nu$, where $S$ is the number of elements and the body-order is $\nu + 1$, rendering these descriptors impractical for large $S$. In this work we investigate a number of alternatives which aim to circumvent this scaling, with a particular focus on compressing the power spectrum.

B. Information Content
After exploiting the symmetry $p^{\alpha\beta}_{nn'l} = p^{\beta\alpha}_{n'nl}$, the length of the power spectrum is $\frac{1}{2}NS(NS+1)(L+1)$, where $N$, $L$ and $S$ are the number of radial basis functions, the highest order of spherical harmonic and the total number of elements respectively. Here we show that the power spectrum is amenable to lossless compression, with the final descriptor having length $NS(L+1)^2$. We start by highlighting that the sum over $m$ in Equation 3 can instead be viewed as a dot product between density expansion coefficient vectors $\mathbf{c}^\alpha_{nl}$ of length $2l+1$. Then, because all products between coefficient vectors with equal $l$ index are taken, the power spectrum can be re-shaped from a vector into a sequence of "$l$-slices" $P_l$, where the $l$ index on the coefficient vectors is suppressed as it is the same for all vectors in a given slice. Viewed as such, $P_l$ is a Gram matrix between the coefficient vectors, $P_l = X_l^T X_l$, where $X_l = (\mathbf{c}^1_1, \mathbf{c}^1_2, \ldots, \mathbf{c}^S_N)$. As $P_l$ only contains information on the relative orientations of the $\mathbf{c}^\alpha_n$, the $\mathbf{c}^\alpha_n$ can, at best, be recovered from $P_l$ up to a global rotation in the $(2l+1)$-dimensional space. A direct consequence of this is that simultaneously rotating all coefficient vectors with equal $l$ index does not affect the power spectrum, so that there are many different densities which share the same power spectrum. Fortunately, the space of atomic densities, i.e. densities which correspond to physically plausible atomic configurations, is much smaller than the space of all densities, so that this effect is somewhat mitigated in practice.
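As a quick arithmetic check of these counts, the two lengths can be written as simple helper functions; the parameter values below are purely illustrative.

```python
# The compressed form stores (2l + 1) x NS values per l-slice, and
# sum_{l=0}^{L} (2l + 1) = (L + 1)^2, giving N S (L + 1)^2 in total.

def len_full(N, S, L):
    """Symmetrised power spectrum: N S (N S + 1)(L + 1) / 2 components."""
    return N * S * (N * S + 1) * (L + 1) // 2

def len_lossless(N, S, L):
    """Losslessly compressed power spectrum: N S (L + 1)^2 components."""
    return N * S * (L + 1) ** 2

# representative (illustrative) parameters: N = 10, S = 8, L = 6
full, lossless = len_full(10, 8, 6), len_lossless(10, 8, 6)
```

For these parameters the compressed descriptor is shorter by roughly a factor of six, and the saving grows with $NS$.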
Interestingly, whilst $P_l$ is an $NS \times NS$ matrix, its rank is at most $2l+1$, as the rank of the $(2l+1) \times NS$ matrix $X_l$ is at most $2l+1$. This means that, provided the first $2l+1$ columns of $X_l$ form a basis, all terms in $P_l$ can be recovered from only the first $2l+1$ rows. This observation suggests that the power spectrum is amenable to non-data-driven lossless compression when $2l+1 < NS$. For situations with many different elements $NS \gg 2l+1$, so that the potential compression factor is large. Rather than simply storing the first $2l+1$ rows, we propose storing $2l+1$ random linear combinations of all rows,

$$\tilde{P}_l = W^T P_l, \quad (5)$$

where $W$ is an $NS \times (2l+1)$ matrix of randomly chosen weights.
A set of density expansion coefficients consistent with the original power spectrum can then be recovered from $W^T P_l$ by diagonalising $W^T P_l W = U \Lambda U^T$, taking $(\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_{2l+1}) = \Lambda^{\frac{1}{2}} U^T$ and then forming $\tilde{X}_l = \Lambda^{-\frac{1}{2}} U^T W^T P_l$, where $U$ is a unitary matrix whose columns are the eigenvectors of $W^T P_l W$ and $\Lambda$ is a diagonal matrix of the eigenvalues [34]. This procedure fails if $(\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_{2l+1})$ does not form a basis, so that $\Lambda$ has one or more zero eigenvalues. This could arise because $\mathrm{rank}(X_l W) < \mathrm{rank}(X_l) \le 2l+1$, which is highly unlikely because of the random weights, or because $\mathrm{rank}(X_l W) = \mathrm{rank}(X_l) = r < 2l+1$, which occurs frequently, as explained below. From a recovery perspective the latter is not problematic, as the same procedure can be carried out using $(\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_r)$ as a basis by discarding rows and columns from $W^T P_l W$ as required.
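The recovery procedure for a single $l$-slice can be sketched numerically as follows; the sizes and random data are illustrative, and generic random weights are assumed so that no eigenvalue vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)
l, NS = 2, 8
d = 2 * l + 1                           # rank bound on P_l

X = rng.normal(size=(d, NS))            # coefficient vectors as columns of X_l
P = X.T @ X                             # full l-slice of the power spectrum
W = rng.normal(size=(NS, d))            # random mixing weights
stored = W.T @ P                        # compressed slice, shape (d, NS)

# recovery: diagonalise W^T P W = U diag(lam) U^T, then
# X_rec = diag(lam)^(-1/2) U^T W^T P reproduces P up to a d-dim rotation
lam, U = np.linalg.eigh(W.T @ P @ W)
X_rec = np.diag(lam ** -0.5) @ U.T @ stored
P_rec = X_rec.T @ X_rec                 # equals P when W^T P W has full rank
```

Only $d \times NS$ values are stored per slice, yet the full $NS \times NS$ Gram matrix is reproduced exactly.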
By compressing all "$l$-slices" in this way only $NS(L+1)^2$ invariants need be stored, compared to $\frac{1}{2}NS(NS+1)(L+1)$ for the uncompressed power spectrum. Interestingly, slightly more compression can be achieved by forming the symmetric matrix $Q^T P_l Q$, where $Q$ is an $NS \times NS$ matrix of random weights, and then storing only the upper-right triangular portion; the original power spectrum can still be recovered in an entirely analogous fashion. The symmetry of $Q^T P_l Q$ means that there are actually fewer invariants than density expansion coefficients; per $l$-slice there are $l(2l+1)$ fewer invariants, which corresponds exactly to the number of distinct rotations in $2l+1$ dimensions. However, the cost of this additional compression is a loss of sparsity. Whereas only the entries of $W^T P_l$ (and $P_l$) corresponding to elements present in the local atomic environment will be non-zero, $Q^T P_l Q$ will be a dense matrix, as all the $\mathbf{c}^\alpha_{nl}$ are "mixed" together. Retaining this sparsity is exceptionally important for efficient descriptor storage as often $S \gg S_{\mathrm{env}}$, where $S$ is the total number of elements present in a dataset whereas $S_{\mathrm{env}}$ is the typical number present in any given environment.

Figure 1. Moving from left to right, $P_l$, $W^T P_l$ and $Q^T P_l Q$ are shown for $l = 0, 1, 2$ and $NS = 8$. Typically $NS$ will be significantly larger than shown here. Only the elements shown in colour need be stored to determine all remaining elements, which are shown in grey. The total length of each descriptor is listed underneath.
Finally, we note that there is an additional restriction on the rank of $P_l$, namely $\mathrm{rank}(P_l) \le n_{\mathrm{neighb}}$, where $n_{\mathrm{neighb}}$ is the number of neighbouring atoms contained within the cutoff. This occurs because, for the density of a single neighbour atom, the expansion vectors corresponding to different radial basis functions are all parallel. This is shown explicitly in the Supporting Information, but can be understood by noting that the projections of a Gaussian onto different radial shells vary only in magnitude, so that the angular part of the expansion, which determines the direction of $\mathbf{c}^\alpha_{nl}$, will be identical for all $n$. This observation is consistent with intuition (the information content should depend on the number of neighbours) and may prove useful when constructing concise descriptors for datasets where $n_{\mathrm{neighb}}$ is known to be bounded.
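This rank bound can be checked with a small synthetic example, in which random vectors stand in for the angular and radial parts of each neighbour's expansion rather than real spherical-harmonic projections.

```python
import numpy as np

rng = np.random.default_rng(5)
N, l, n_neighb = 6, 2, 2
d = 2 * l + 1

# each neighbour contributes an outer product angular x radial: the columns
# (one per radial function n) are parallel, i.e. a rank-one contribution
X = np.zeros((d, N))
for _ in range(n_neighb):
    angular = rng.normal(size=d)       # direction set by the neighbour position
    radial = rng.normal(size=N)        # only the magnitude varies with n
    X += np.outer(angular, radial)

P = X.T @ X                            # single-element l-slice
rank = np.linalg.matrix_rank(P)        # bounded by n_neighb (here 2), not d
```

With two neighbours the slice has rank at most two, even though $2l+1 = 5$ would otherwise permit rank five.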

C. Generalized Kernel
In the previous section we showed that it is possible to compress the SOAP power spectrum in a lossless manner. Here we introduce a family of physically interpretable compression options based on generalising the SOAP kernel. The first step is to generalise Equation 2 to

$$k(\rho, \rho') = \int \mathrm{d}\hat{R} \left| \sum_\alpha \int \rho^\alpha(\mathbf{r}) \, \rho'^\alpha(\hat{R}\mathbf{r}) \, \mathrm{d}\mathbf{r} \right|^\nu \left| \int \rho(\mathbf{r}) \, \rho'(\hat{R}\mathbf{r}) \, \mathrm{d}\mathbf{r} \right|^\mu, \quad (6)$$

where the first factor is as before and the second factor involves the total density $\rho(\mathbf{r}) = \sum_\alpha \rho^\alpha(\mathbf{r})$. The advantage of this approach is that the total body order is now partitioned into element-sensitive and element-agnostic terms, so that the scaling with $S$ can be controlled separately from the overall body-order. For instance, $\nu, \mu = 1, 1$ results in a modified power spectrum $p^\alpha_{nn'l}$ with a length that scales linearly with $S$ and is related to the original power spectrum, $\nu = 2$, via

$$p^\alpha_{nn'l} = \sum_\beta p^{\alpha\beta}_{nn'l}. \quad (7)$$

The $\nu, \mu = 1, 1$ power spectrum still corresponds to a histogram of triangles, but now only one vertex of each triangle is element-sensitive, as shown in Figure 3. Clearly, this idea can be applied just as well at higher body-orders to provide compression with respect to $S$. In the previous section we also achieved compression with respect to $N$. Following a similar approach as for $S$, we further generalise the kernel to

$$k(\rho, \rho') = \int \mathrm{d}\hat{R} \left| \sum_\alpha \int \rho^\alpha \rho'^\alpha \right|^\nu \left| \sum_\alpha \int \hat{\rho}^\alpha \hat{\rho}'^\alpha \right|^{\bar\nu} \left| \int \rho \rho' \right|^\mu \left| \int \hat{\rho} \hat{\rho}' \right|^{\bar\mu}, \quad (8)$$

where $\hat{\rho}(\mathbf{r})$ is the projection of the density onto the surface of the unit sphere and the rotation and integration variables are suppressed for brevity. As before, Equation 8 still corresponds to comparing all possible triangles for $\nu + \bar\nu + \mu + \bar\mu = 2$. However, now some of the vertices may be projected onto the surface of the unit sphere, as well as potentially being element-insensitive. The full range of 3-body compression options is depicted in Figure 3, with the interpretation of the original power spectrum, $\nu = 2$ (only non-zero index values are listed), shown in the upper right corner.
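The relation between the compressed and full spectra can be checked numerically: summing the full $\nu = 2$ spectrum over one element index reproduces the $\nu, \mu = 1, 1$ spectrum built from the total density. The `c[element, n, l, m]` layout is an illustrative assumption.

```python
import numpy as np

S, N, L = 3, 4, 2
rng = np.random.default_rng(2)
c = rng.normal(size=(S, N, L + 1, 2 * L + 1))
for l in range(L + 1):                   # zero out the unused m components
    c[:, :, l, 2 * l + 1:] = 0.0

C = c.sum(axis=0)                        # total, element-agnostic density

# nu, mu = 1, 1 spectrum: one element-sensitive factor, one agnostic factor
p_11 = np.einsum('anlm,klm->ankl', c, C)

# full nu = 2 spectrum; summing over the second element index gives p_11
p_2 = np.einsum('anlm,bklm->abnkl', c, c)
```

The compressed length $SN^2(L+1)$ is a factor of $S$ shorter than the full $S^2N^2(L+1)$.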
Utilising these compressions offers a significant reduction in descriptor size, but does result in a descriptor that is typically less informative. An example of such an information loss is shown in Figure 2, where the two environments shown are distinguished by $\nu = 2$ but not by $\nu, \mu = 1, 1$, as now there is only one element-sensitive vertex per triangle, rather than two. Returning to the idea of the previous section, we see that $\nu, \mu = 1, 1$ corresponds to using $\{\mathbf{C}_n\}$, where $\mathbf{C}_n = \sum_\alpha \mathbf{c}^\alpha_n$, in place of the random basis $(\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_{2l+1})$. In this case the mirror symmetry of the total density means that $\mathrm{rank}(\mathbf{C}_1, \mathbf{C}_2, \ldots, \mathbf{C}_N) = 2 < \mathrm{rank}(\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_{NS}) = 4$, so that the full power spectrum cannot be recovered from the compression. However, this pair of environments is a hand-crafted example and, more generally, in the absence of similar symmetries we would expect $\{\mathbf{C}_n\}$ to span the required space, provided $N \ge 2l+1$, so that the compression would be lossless. This line of thought motivates the introduction of a further compression, denoted $\nu, \mu^* = 1, 1$, where the kernel is as defined in Equation 6, but $2l+1$ radial basis functions are used for the total density expansion. This choice achieves the same level of compression as $W^T P_l$ and, provided $\mathrm{rank}(\mathbf{C}_1, \mathbf{C}_2, \ldots, \mathbf{C}_{2l+1}) = \mathrm{rank}(\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_{NS})$, it will also be lossless. Such arguments suggest that compressing beyond $\nu, \mu^* = 1, 1$ will necessarily be lossy, with the analysis for higher body orders being left to future work. At first glance the degeneracies introduced by choosing $\nu, \mu = 1, 1$ appear catastrophic. However, it should be noted that the original power spectrum $\nu = 2$ has similar degeneracies. A simple, single-element example was given in ref. [37], which is trivially modified to contain multiple elements in Figure S1.
Despite this, the full $\nu = 2$ power spectrum has been widely used with great success across a wide variety of applications [6,38,39], as have other descriptors which are known to be mathematically incomplete [32,40]. As such, whilst it is useful to understand the origin of any additional degeneracies in compressed descriptors, the most practically interesting question is whether or not they lead to a significant degradation in model performance across typical datasets. We assess this for the variants of the SOAP power spectrum listed in Table I by performing numerical tests which are discussed in the results section. Finally, whilst we only test compressions of the power spectrum in this work, these compression ideas can be applied just as easily to higher body-orders. For instance, choosing $\nu, \bar\nu, \mu = 1, 1, 1$ would result in a compressed version of the bispectrum that scales quadratically with $S$ and $N$, rather than cubically. In general, the length of the descriptor will scale as $S^{\nu+\bar\nu} N^{\nu+\mu}$, with the total body order given by $\nu + \bar\nu + \mu + \bar\mu$, so that the scaling with $S$ and $N$ can be chosen independently from the overall body-order. We anticipate these compressions being particularly useful for descriptors such as ACE [41][9], where $\nu \ge 4$ is commonly used.

D. Element Embedding
An alternative approach to constructing concise descriptors for systems with large $S$ was used in refs. [18][19][42]. Rather than using a separate density for each element, they instead used two density channels: the total, element-agnostic density and an element-weighted density defined as $\rho_Z(\mathbf{r}) = \sum_\alpha w_\alpha \rho^\alpha(\mathbf{r})$, where $w_\alpha$ is an element-dependent weight. The descriptor can then be formed using these two density channels, so that the length of the power spectrum is $N(2N+1)(L+1)$, independent of $S$.
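The two-channel construction can be sketched as follows; the weights, sizes and `c[element, n, l, m]` layout are illustrative assumptions rather than the choices made in the cited works.

```python
import numpy as np

S, N, L = 5, 3, 2
rng = np.random.default_rng(3)
c = rng.normal(size=(S, N, L + 1, 2 * L + 1))   # per-element coefficients
w = rng.normal(size=S)                          # element-dependent weights

# two channels replace the S per-element densities
channels = np.stack([
    c.sum(axis=0),                              # total, element-agnostic density
    np.einsum('a,anlm->nlm', w, c),             # element-weighted density rho_Z
])

# power spectrum over the two channels only; after exploiting the channel
# symmetry its length is N(2N+1)(L+1), independent of S
p = np.einsum('anlm,bklm->abnkl', channels, channels)
```

However many elements the dataset contains, the descriptor size is fixed by the two channels alone.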
The element-weighted power spectrum is an instance of a more general type of constant-complexity approach, where each element $\alpha$ is represented by a vector $\mathbf{u}_\alpha$ with $\dim(\mathbf{u}_\alpha) = d_J < S$, so that the chemical elements are effectively embedded into a lower-dimensional space. The $\mathbf{u}_\alpha$ can then be optimised during model fitting, so that the alchemical similarity between different elements, $k_{\alpha\beta} = \mathbf{u}_\alpha \cdot \mathbf{u}_\beta$ [21], is learned from the data. This approach was used in refs. [21] and [20], where in both cases the optimised embedding was consistent with known chemical trends, and more recently similar learnable mappings have been used in refs. [43], [44] and [45][46][47].
This approach was taken further still in ref. [26], where PCA was used to determine a reduced optimal radial basis for a given dataset. By allowing basis changes, followed by truncations, that also mix different elemental channels, this approach can be seen as a simultaneous embedding of both the elemental and radial information into a lower-dimensional space. Interestingly, the random weight matrix $W$ in Equation 5 can be interpreted as performing an analogous embedding, in which the elemental and radial channels are mixed together with random, rather than optimised, weights. This identification connects the approaches and motivates further work where $W$ is optimised for a given dataset.
Using element embedding, with fixed or optimisable embedding vectors, offers an alternative route to reducing the scaling with $S$, and we include a corresponding compressed power spectrum, denoted as $\nu = 2, d_J$, in our numerical tests for comparison. A summary of all the considered compressions of the power spectrum is given in Table I, with the results given in the next section. The datasets used are described briefly in section IV A.

E. Distance-Distance Correlation and Information Imbalance
The similarity between a pair of environments can be quantified by the Euclidean distance between their respective descriptor vectors, so that different descriptors give different measures of similarity. These distances can be used to quantify the relative information content of descriptors [22] and, more generally, to compare how different descriptors encode a given dataset. Following ref. [48], we use distance-distance correlation plots to compare the distances implied by the compressed descriptors to those of the full power spectrum, where it is desirable for a compression to preserve the distances as faithfully as possible. In Figure 4 the distance-distance (left) and ranked distance-ranked distance (right) correlations between the $\nu = 2$ power spectrum and two of the compressed alternatives are shown for all pairs of environments within liquid configurations, $S \ge 3$, in the HEA dataset. The advantage of comparing the ranked distances is that the ranking process eliminates scaling and monotonic transformations of the distances, leaving only the correlation structure behind [22,49].
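The ranked-distance comparison can be sketched in a few lines, with random arrays standing in for real descriptors; the truncation used to mimic a "compressed" descriptor is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_env = 30
full = rng.normal(size=(n_env, 20))       # stand-in for the nu = 2 descriptor
compressed = full[:, :8]                  # stand-in for a compressed variant

def pair_dists(X):
    """All pairwise Euclidean distances, upper triangle only."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d[np.triu_indices(len(X), k=1)]

def ranks(x):
    """Rank of each entry (0 = smallest); removes monotonic transformations."""
    return np.argsort(np.argsort(x))

d_full, d_comp = pair_dists(full), pair_dists(compressed)
r_full, r_comp = ranks(d_full), ranks(d_comp)

# a Spearman-style rank correlation in [-1, 1]; values near 1 mean the
# compression preserves the ordering of pairwise distances
rank_corr = np.corrcoef(r_full, r_comp)[0, 1]
```

A compression that faithfully preserves the distance structure would give a correlation close to one; a near-zero value indicates the structure is lost, as seen for the constant-complexity descriptors on the HEA liquids.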
The correlations between both the distances and the ranked distances for the $\nu, \mu = 1, 1$ compression and $\nu = 2$ are strong, with a notable absence of points in the top-left and bottom-right of each plot. The same is not true of the two constant-complexity alternatives, where there are many environments deemed well separated by $\nu = 2$ but which are poorly distinguished by $\nu, d_J = 2, 2$ (and $\mu = 2$). An example of such an environment is shown in Figure S2, with the distances according to each descriptor listed alongside. The most striking difference is seen in the ranked-distance correlation plots, which are near uniform for $\nu, d_J = 2, 2$ (and $\mu = 2$), demonstrating that the original correlation structure is almost completely lost.
Table I. Various compressions of the SOAP power spectrum. $N$, $S$, $L$, $d_J$ and $u_{\mathrm{max}}$ are the number of radial basis functions, elements, maximum order of spherical harmonics, embedding dimensions and optimised radial basis functions respectively. The circumflex indicates projection of the density onto the surface of the unit sphere. The kernel for $\nu, \mu^* = 1, 1$ is identical to that for $\nu, \mu = 1, 1$, except that $2l+1$ radial basis functions are used in the total density expansion, so that $n = 1, 2, \ldots, 2l+1$.

$\nu = 2$, full SOAP kernel [17,33]: $p^{\alpha\beta}_{nn'l} = \sum_m c^{\alpha*}_{nlm} c^{\beta}_{n'lm}$
Optimal radial basis [26], with mixed-species basis: $p_{uu'l} = \sum_m c^{*}_{ulm} c_{u'lm}$, length $\frac{1}{2} u_{\mathrm{max}}(u_{\mathrm{max}}+1)(L+1)$

We also compute the information imbalance (introduced in ref. [22]) between the different descriptors; the information imbalance is a way of measuring the relative information content of different distance measures, see Section IV C for details. The information imbalance planes for the HEA and amino acid datasets are shown in Figures 5 and 6, where points in the bottom left, top left and along the diagonal indicate descriptors which encode the same, less and orthogonal (different) information to $\nu = 2$ respectively. The results for the HEA dataset provide quantitative evidence for the trends seen in Figure 4, demonstrating that for this dataset the $\nu, d_J = 2, 2$ (and $\mu = 2$) descriptors are significantly less informative than the others. Conversely, these descriptors carry almost identical information to $\nu = 2$ on the amino acid dataset. We believe this is because all the amino acid molecules are geometry optimised, so that the atom type information can be inferred from the atomic positions alone; this is backed up by $\Delta(\mu = 2 \to \nu = 2) \ll 1$. The stark differences seen between these two datasets suggest that whilst low-dimensional element embeddings are undoubtedly useful, they are not suitable as information-preserving compressions for all datasets.
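The information imbalance itself can be sketched as follows, using one common form of the definition from ref. [22]: $\Delta(A \to B)$ is $2/n$ times the mean rank, under distance $B$, of each point's nearest neighbour under distance $A$. The data below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
X = rng.normal(size=(n, 10))

def dist_matrix(X):
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def information_imbalance(dA, dB):
    """Delta(A -> B): near 0 if A predicts B's neighbourhoods, near 1 if not."""
    n = len(dA)
    total = 0.0
    for i in range(n):
        a = dA[i].copy(); a[i] = np.inf    # exclude the point itself
        j = np.argmin(a)                   # nearest neighbour of i under A
        b = dB[i].copy(); b[i] = np.inf
        total += 1 + np.sum(b < b[j])      # rank of j under B (1 = nearest)
    return 2.0 * total / n ** 2

dA = dist_matrix(X)
dB = dist_matrix(X[:, :3])                 # a "compressed" view of X

imb_self = information_imbalance(dA, dA)   # minimal value 2/n for ties-free data
imb_ab = information_imbalance(dA, dB)
```

Comparing $\Delta(A \to B)$ with $\Delta(B \to A)$ gives one point in the imbalance plane: the bottom-left, top-left and diagonal regions correspond to equivalent, less informative and orthogonal descriptors respectively.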

F. Fitting to Energies
The ultimate test of any descriptor is the accuracy and overall performance of the models which use it. In this work we test the proposed SOAP compressions by fitting interatomic potentials to total energies for four separate datasets using both Ridge Regression (RR) and Kernel Ridge Regression (KRR) models, see Section IV E for details.
Learning curves for all models and datasets are shown in Figure 7, whilst the total length of each descriptor is shown in the bottom left of each plot. It is important to note that all descriptors other than $\nu = 2, d_J$ are sparse with respect to the elements, so that in practice their storage requirements scale with $S_{\mathrm{env}}$, the typical number of elements present within each environment, rather than $S_{\mathrm{total}}$, the total number of elements present in the dataset. For the elpasolite dataset $S_{\mathrm{env}} = 4$ for all structures, so in this case the lengths of the sparse descriptors are indicated in white. However, whilst the descriptors themselves are sparse, the number of model parameters depends on the full length of the descriptor. As such, we stress that both sparsity in $S_{\mathrm{env}}$ and reducing the scaling with respect to $S_{\mathrm{total}}$ are highly desirable. As a final point, the amino acid and Li-TM datasets were also used in ref. [18] to test the accuracy achievable by combining a constant-complexity descriptor and a neural network potential. We stress that the errors reported there are on the training set, acting as a proof of principle, whilst the larger errors reported here are on a distinct test set, so that the numbers are not comparable.
The trends seen across the QM9, Li-TM and amino acid datasets are broadly similar, with $\nu, \mu = 1, 1$ achieving the same accuracy as $\nu = 2$ with a significantly reduced descriptor size. The results with $W^T P_l$ and $\nu, \mu^* = 1, 1$ are ∼5-15% less accurate than with $\nu = 2$ but, crucially, both these descriptors offer additional compression with respect to the radial channels and, as outlined in the previous section, contain sufficient information to recover the full power spectrum under certain conditions (see Figure S5 for the correlations for $\mu = 2$, $\nu, \bar\mu = 1, 1$ and $\nu, \mu^* = 1, 1$). Compressing beyond the recoverable limit with $\nu, \bar\mu = 1, 1$, i.e. projecting the element-agnostic vertex onto the unit sphere, offers an additional factor of ∼5 in terms of compression. On the elpasolite dataset this does not compromise the accuracy at all, whilst for QM9 this compression incurs a 66% increase in the MAE, which is limited to ∼30% for both the Li-TMs and the amino acids. Comparing these results to the errors achieved using $\mu = 2$ and $\nu = 2$ hints at the relative importance of atom type and geometric information for the different datasets. Unsurprisingly, knowledge of the atom types is much more important for the elpasolites, where all structures are in the same crystal prototype, whilst for the chemically reasonable organic molecules found in the QM9 and amino acid datasets a reasonable model can be fit using geometric information alone (MAE of 0.62 kcal/mol for $\mu = 2$ on QM9). This is consistent with the information imbalance analysis on the amino acids and suggests that for these molecules, in their equilibrium geometries, it is possible to use known bond lengths and coordination numbers to infer the atom type information from the geometry alone.
The (unoptimised) embedding $\nu, d_J = 2, 2$ performs relatively well, with errors 27%, 57% and 26% larger than $\nu = 2$ for the QM9, Li-TM and amino acid datasets respectively. This performance is most comparable to $\nu, \bar\mu = 1, 1$, which, particularly after exploiting sparsity, offers a greater level of compression. A clear, qualitative difference in behaviour is seen for the embedding approaches on the elpasolite dataset, which contains a much larger number of elements. Here, the unoptimised embedding performs no better than using $\mu = 2$ and has a greatly diminished final learning rate compared to $\nu = 2$. For comparison, the optimised embeddings, denoted using $\tilde{d}_J$, from ref. [21], which use similar SOAP parameters ($r_{\mathrm{cut}} = 5$ Å, $N = 12$, $L = 9$), are overlaid. This shows that optimising the embedding with only two dimensions leads to minimal improvement, although a significant gain can be made using $\tilde{d}_J = 4$. However, the final learning rate for $\tilde{d}_J = 4$ is still diminished relative to $\nu = 2$, again indicating a qualitative difference in behaviour from the compressions which keep separate densities for each element. It is also worth noting that, as $S_{\mathrm{env}} = 4$ for the elpasolites, using $\tilde{d}_J = 4$ does not offer any descriptor compression compared to $\nu = 2$ once sparsity is exploited. The embedding approaches do, however, offer a clear advantage in the low-data regime, which we believe is due to the reduced dimensionality of the descriptor space. If each data point occupies a certain volume of descriptor space, then such embeddings greatly increase the relative fraction of "physically reasonable" descriptor space which is occupied by each point. As such, the relevant descriptor space is "covered" faster, so that the shortest distance from a test data point to any point in the training set decreases faster with new training data.

G. Fitting a Force-field
The compression options illustrated in Figure 3 were tested further by training sparse KRR models on energies, forces and stresses for the HEA dataset using the gap_fit program [50], with the same parameters used in ref. [51]. This dataset is of particular interest because in previous work, ref. [51], it was found that a model fitted using a 2-body + 3-body descriptor performed better than one using a 2-body descriptor + the full multi-element power spectrum, ν = 2. This result is explained by the power spectrum being a higher-dimensional descriptor, so that much more training data is required to span the full descriptor space corresponding to all physically relevant atomic environments. In the low-data regime, i.e. before this condition is satisfied, the model will generalise very poorly outside of the training set. However, provided sufficient data, one would expect a model trained on a more informative descriptor to provide greater accuracy. This effect can be seen in the left-hand panel of Figure 8, where the force errors for ν, µ̄ = 1, 1 are initially better than for ν = 2 but then plateau quickly. In contrast, the errors for the ν = 2 model continue to decrease; however, here the very large descriptor meant that only 4000 sparse points could be used due to a memory restriction of 1.5 TB of RAM.
Interestingly, the best model, subject to the practical restrictions on memory and quantity of training data, makes use of the ν, µ = 1, 1 compression. Whilst the increase in performance compared to the 2b+3b model is modest across the entire test set, it is much more dramatic on the quinary alloy liquid configurations, where the energy RMSE is reduced from 96 meV/atom to 12.3 meV/atom, Figure S3.

H. Sensitivity Analysis
When using atomic descriptors for tasks such as regression it is highly desirable that they are sensitive to small perturbations of a given atomic environment; in-depth analyses of well-known descriptors are carried out in refs. [23] and [48]. The latter introduced the concept of the sensitivity matrix Λ, see section IV D, where the eigenvalues λ can be used to determine if there are any perturbations of the local environment to which a descriptor is insensitive.
The results in Figures 9 and S4 show that the proposed compressions do not significantly affect the sensitivity. In particular, no perturbations were found for which any of the compressed descriptors had near-zero sensitivity whilst the original ν = 2 did not. (Note that the 2b+3b model in Figure 8 is from ref. [51], where it was reported that increasing the number of sparse points did not improve performance.) This behaviour is not entirely unexpected as, for instance, choosing µ = 2 is equivalent to using the original power spectrum with a single-element environment. Perhaps more surprising is that using ν, µ̄ = 1, 1 (projecting one atom onto the unit sphere) did not reduce the sensitivity more drastically. Of course, these tests are not exhaustive and it is probable that special environments, likely closely related to any additional degeneracies, exist where this is not the case [52]. However, we find these results promising and leave further investigation to future work.

I. Atom Centered Symmetry Functions
Whilst the compression presented in Section II B is specific to the SOAP power spectrum, the ideas used to form the generalized SOAP kernel are equally applicable to all body-ordered descriptors. To demonstrate this we fit KRR models to the QM9 and Li-TM datasets using compressed versions of the popular ACSFs introduced in ref. [32]. We also fit models based on element-agnostic and element-weighted ACSFs [18, 42] for comparison. As before, a polynomial kernel, k(x_i, x_j) = (x_i · x_j)^ζ, was employed and the regularisation strength was chosen using k-fold cross-validation with k = 10. The G2 (2-body) and G4 (3-body) ACSF functions were used, with the traditional parameters [23, 32] scaled to cutoffs of 4 Å and 6 Å for the QM9 and Li-TM datasets respectively. Compression was applied to the 3-body terms only and the descriptors are labelled using the notation outlined in Figure 3, so that ν = 2 denotes the conventional multi-element descriptor. For consistency with the existing literature we label the element-weighted ACSF as wACSF [42], rather than ν, d_J = 2, 2. The dscribe [30] python package was used to compute the full ACSF descriptor, ν = 2, and the compressed variants were formed by summing over elements as required.
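The element sums used to form the compressed variants can be sketched in a few lines of numpy. Here a random, symmetric array stands in for the per-element-pair G4 block of a single atom (in practice this block would come from dscribe), and the labels follow Figure 3; this is an illustration of the bookkeeping only.

```python
import numpy as np

rng = np.random.default_rng(0)
S, P = 5, 8                                  # number of elements, number of G4 parameter sets

# stand-in for the full 3-body ACSF block of one atom, resolved over neighbour-element pairs
G4 = rng.normal(size=(S, S, P))
G4 = G4 + G4.transpose(1, 0, 2)              # symmetric in the two neighbour elements

iu = np.triu_indices(S)
g_full = G4[iu].ravel()                      # nu = 2: one block per unordered pair, S(S+1)/2 * P
g_11 = G4.sum(axis=1).ravel()                # nu, mu = 1, 1: one element-sensitive neighbour, S * P
g_2 = G4.sum(axis=(0, 1)).ravel()            # mu = 2: fully element-agnostic, P

print(len(g_full), len(g_11), len(g_2))      # 120 40 8
```

The same summation pattern applies per atom across a whole dataset, which is why the compressed variants need no extra descriptor evaluations.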
Learning curves are shown in Figure 10, where it can be seen that using the ν, µ = 1, 1 compression does not cause a noticeable decrease in model accuracy on either dataset. For the Li-TM dataset models were fit using ζ=1, 2, 4, 8 and 16 to assess the effect of varying the flexibility of the model. As expected, the shorter descriptors, µ = 2 and wACSF, benefited more from increasing ζ but still never achieved comparable accuracy to the others. More interestingly, using ν, µ = 1, 1 provided the same accuracy as ν = 2 even with ζ = 1, despite the full descriptor being more than 5× as long. These results clearly show how the ideas behind the generalized SOAP kernel can be successfully applied to other body-ordered descriptors.

III. DISCUSSION
As the number of elements S increases, the length of many atomic descriptors increases drastically, with S^ν scaling common for (ν+1)-body-order descriptors. In this work we have sought a non-data-driven way to reduce this scaling, with a focus on the SOAP power spectrum. We started by investigating the degree to which the density expansion coefficients can be recovered from the power spectrum. This analysis revealed that the power spectrum can be viewed as a collection of Gram matrices P_l, one for each total angular momentum number l, and that storing only a subset of components, W^T P_l, is sufficient to preserve full information. This compression reduces the descriptor size from ½NS(NS + 1)(L + 1) to NS(L + 1)², where N and L are the number of radial basis functions and the highest order of spherical harmonic used in the density expansion respectively. Compressing the power spectrum in this way requires a single matrix of random weights W to be stored and used consistently to compress all descriptors for a given dataset.
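The recoverability can be made concrete with a short numpy sketch. Random coefficients stand in for a real density expansion of one environment, and the reconstruction step uses a Nyström-type identity to invert the random projection; this illustrates why the compression is lossless, though the released notebook may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, l = 4, 3, 2                     # radial functions, elements, angular momentum channel
NS, r = N * S, 2 * l + 1              # P_l is an NS x NS Gram matrix of rank at most 2l+1

C = rng.normal(size=(NS, r))          # stand-in density expansion coefficients c_{nlm,s}
P_l = C @ C.T                         # the l-block of the power spectrum

# compression: store only W^T P_l, i.e. (2l+1)*NS numbers instead of NS(NS+1)/2
W = rng.normal(size=(NS, r))          # one fixed random weight matrix, shared across the dataset
Y = W.T @ P_l

# recovery, valid whenever W^T C is invertible (generic for random W):
#   P_l = Y^T (Y W)^{-1} Y
P_rec = Y.T @ np.linalg.solve(Y @ W, Y)

print(np.allclose(P_rec, P_l))        # True: the compression is lossless
```

Summing the compressed block sizes (2l+1)·NS over l = 0…L gives the NS(L+1)² total quoted above.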
Next, we introduced the generalized SOAP kernel. The standard power spectrum, ν = 2, is a 3-body descriptor corresponding to a histogram of triangles, with one histogram for each pair of species. In the generalised power spectrum the number of triangle vertices which are element-sensitive can be varied, as can the number of vertices which are projected onto the surface of the unit sphere. These modifications allow the scaling of the descriptor size with S and N to be set independently of the overall body-order and are applicable to all body-ordered descriptors. By again considering the ability to reconstruct the original power spectrum from the generalized one, we showed that, subject to certain conditions, choosing ν, µ* = 1, 1 (or ν, µ = 1, 1 where N ≥ 2l + 1) does not lead to any loss of information. We also stress that descriptors based on the generalized kernel retain the element-wise sparsity of the original kernel, so that the number of non-zero components scales only with the number of elements present in a given environment, rather than the total number present across a dataset.
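To illustrate how the generalised kernel decouples the S-scaling from the body-order, the sketch below builds the full ν = 2 blocks and the ν, µ = 1, 1 blocks from random stand-in coefficients; normalisation factors and m-ordering conventions are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
S, N, L = 3, 4, 3                                  # elements, radial functions, max angular momentum

# stand-in density expansion coefficients c[s, n, l, m]
c = rng.normal(size=(S, N, L + 1, 2 * L + 1))

def nu2(c):
    """Full nu = 2 power spectrum: element-pair resolved, length (SN)^2 (L+1)."""
    S, N, L1, _ = c.shape
    return np.concatenate([
        (c[:, :, l, :2 * l + 1].reshape(S * N, -1) @
         c[:, :, l, :2 * l + 1].reshape(S * N, -1).T).ravel()
        for l in range(L1)])

def nu_mu_11(c):
    """nu, mu = 1, 1: one element-sensitive vertex paired with the total density, length S N^2 (L+1)."""
    c_tot = c.sum(axis=0)                          # element-agnostic total density
    S, N, L1, _ = c.shape
    return np.concatenate([
        (c[:, :, l, :2 * l + 1].reshape(S * N, -1) @
         c_tot[:, l, :2 * l + 1].T).ravel()
        for l in range(L1)])

full, compressed = nu2(c), nu_mu_11(c)
print(len(full), len(compressed))                  # 576 192: S^2 vs S scaling
```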
The real-world performance of the compressions was tested using multiple numerical tests across a total of 5 pre-existing datasets. First, the information content relative to the original power spectrum was analysed using the information imbalance approach of ref. [22]. This analysis indicated that retaining element sensitivity on only one vertex was sufficient to ensure a minimal loss of information across all tested datasets. The constant-complexity compression approaches performed well for the geometry-optimised amino acids but incurred severe information loss on the liquid environments within the high-entropy alloy dataset. We believe this is because the atom-type information for the QM9 and amino acid datasets is effectively encoded in the equilibrium geometry of the molecules, which suggests that such datasets are not well suited to assessing the ability of a given descriptor to encode multi-element information.
Secondly, models were fitted to the total energies for the QM9, Li-TM, amino acid and elpasolite datasets using linear and kernel ridge regression. The most promising compressions achieved very similar results to the full power spectrum across all datasets, whilst being significantly shorter. A notable deviation in behaviour was seen for the element-embedding fits to the elpasolite dataset, highlighting the differences between compression approaches. KRR models were also fitted to the QM9 and Li-TM datasets using compressed versions of ACSFs, where, as before, the errors achieved with the ν, µ = 1, 1 compression were almost identical to those achieved with the full descriptor. Following this, KRR models using the generalized SOAP power spectrum were fitted to energies, forces and virials for the HEA dataset using the gap_fit program [50]. The most accurate model that could be fitted, subject to practical restrictions on the quantity of training data and available memory, made use of the ν, µ = 1, 1 compression, providing concrete evidence that these compressions will be useful when fitting force-fields. Finally, the sensitivity of all descriptors to small perturbations was evaluated using the sensitivity matrix introduced in ref. [48]. None of the compressions were found to significantly reduce the sensitivity of the descriptor, which is unsurprising given their relation to the single-element power spectrum.
We anticipate that these compressions will prove useful across a wide range of applications and that some of the ideas may be applicable to other body-ordered atomic descriptors, as was shown explicitly for ACSFs. In particular, the generalised kernel could provide compression for approaches such as ACE [9,41], where the high body-orders, ν ≥ 4, that are needed limit the number of different elements that can be treated. Furthermore, we stress that the compression ideas presented here can often be combined with pre-existing techniques, such as element-embedding, and that, in general, choosing the appropriate compression method for a given situation is crucial.

IV. METHODS

A. Datasets
The datasets used for the numerical tests are:

1. QM9: The QM9 dataset [53] contains ∼140,000 geometry-optimized organic molecules containing only H, C, N, O and F. In this work we fit the internal energy at 0 K, reported as U0, and hereafter refer to this as energy.
2. HEA: A quinary (W, Ta, Nb, V and Mo) high-entropy alloy dataset from ref. [51] containing 2329 configurations including bulk crystals, surfaces, vacancies, alloys and liquid structures. The training set was used for the distance-distance correlation, information imbalance and sensitivity analyses whilst the independent test set from ref. [51] was used to assess the accuracy of the force-fields.

5. Elpasolites: A collection of ∼10,500 geometry-optimised structures from ref. [55]. All main group elements up to Bi are present, 39 elements in total, and all structures share the elpasolite structural prototype.

B. Element Embedding Parameters
For the element embeddings, d_J = 2 was used for most datasets with an additional test performed using d_J = 3 for the elpasolite dataset. These embeddings were not optimised, as in refs. [18] and [19], and were both constructed using w_{1α} = 1, so that the first density channel is element agnostic. For d_J = 2 the weights for the second channel were chosen by ordering the elements according to atomic number and then assigning weights of 1, 2, 3, ..., so that for QM9 the w_{2α} used were 1, 2, 3, 4 and 5 for H, C, N, O and F respectively. For d_J = 3 the weights for the second and third density channels were assigned using the group and period of each element in the periodic table, so that for sulphur w_{1α} = 1, w_{2α} = 3 and w_{3α} = 16. This was done in an attempt to capture the chemical similarity encoded in the periodic table and is similar to the encoding used in ref. [55]. For the elpasolite dataset the results of using optimised embeddings from ref. [21], denoted using d̂_J = 2 and d̂_J = 4, are also shown.
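The unoptimised embedding amounts to a small weight matrix applied to the per-element density channels; below is a minimal sketch for the QM9 elements with d_J = 2, using random stand-ins for the expansion coefficients.

```python
import numpy as np

elements = ["H", "C", "N", "O", "F"]            # ordered by atomic number

# d_J = 2: first channel element-agnostic, second weighted 1, 2, 3, ... by atomic-number order
w = np.array([[1.0, 1.0, 1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0, 4.0, 5.0]])       # shape (d_J, S)

rng = np.random.default_rng(2)
c = rng.normal(size=(len(elements), 8))         # stand-in per-element expansion coefficients

# embedded densities are linear combinations of the per-element densities: c_J = sum_a w[J, a] c_a
c_embedded = w @ c
print(c_embedded.shape)                         # (2, 8): two channels instead of five
```

The first row of w reproduces the element-agnostic total density exactly, so the embedding always contains µ-summed information as a special case.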

C. Information Imbalance
The information imbalance is a way of measuring the relative information content of different distance measures. Whilst the distance-distance correlation compares all pairs of distances, the information imbalance is only concerned with whether the nearest neighbours of each environment are the same according to descriptor A and descriptor B. More precisely, for each environment the distances to all other environments are computed using both distance A and distance B. These distances are then sorted and ranked from 0 to N, so that each environment has a rank according to A, r_A, and according to B, r_B. Then the information imbalance from A to B is defined as

∆_{A→B} = (2/N) ⟨r_B | r_A = 1⟩,

where N is the number of environments in the dataset and ⟨r_B | r_A = 1⟩ is the conditional average of r_B given that r_A = 1. Defined in this way, ∆_{A→B} is statistically confined to lie between 0 (A contains the information in B) and 1 (A is not informative about B). By comparing ranks, rather than distances, ∆_{A→B} is insensitive to changes in scale, and by considering only nearest-neighbour distances ∆_{A→B} is also well suited to handle nonlinear relationships. Please refer to ref. [22] for more details.
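The definition above translates directly into numpy; the sketch below uses Euclidean distances, with random data standing in for descriptor vectors.

```python
import numpy as np

def information_imbalance(X_a, X_b):
    """Delta_{A->B} = (2/N) <r_B | r_A = 1>: for each point, find its nearest
    neighbour under distance A and average the rank that neighbour has under B."""
    n = len(X_a)
    d_a = np.linalg.norm(X_a[:, None] - X_a[None], axis=-1)
    d_b = np.linalg.norm(X_b[:, None] - X_b[None], axis=-1)
    np.fill_diagonal(d_a, np.inf)                        # exclude self-distances
    np.fill_diagonal(d_b, np.inf)
    ranks_b = d_b.argsort(axis=1).argsort(axis=1) + 1    # rank 1 = nearest neighbour
    nn_a = d_a.argmin(axis=1)                            # nearest neighbour under A
    return 2.0 / n * ranks_b[np.arange(n), nn_a].mean()

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 5))
y = rng.normal(size=(200, 5))
print(information_imbalance(x, x))    # 2/N exactly: A fully predicts B
print(information_imbalance(x, y))    # ~1: A is uninformative about unrelated B
```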

D. Sensitivity Matrix
Here we give a brief explanation of how the sensitivity matrix Λ is constructed; please refer to ref. [48] for full details. The distance d between the original environment and the perturbed environment is given by

d² = Σ_i (∆x_i)²,

where ∆x_i is the change in component i of the descriptor x. In terms of the atomic displacements this can be rewritten as

d² = ∆Rᵀ Λ ∆R,

where ∆R is a vector of length 3N containing the small perturbations to the atomic positions. Defined as such, the distance between the original environment and one perturbed along an eigenvector u of Λ is given by d = √λ |u|, where Λu = λu. Thus, by examining the eigenvalues of Λ we can detect if there are any perturbations that a given descriptor is insensitive to. We note that there will always be six zero eigenvalues, corresponding to three translations and three rotations, and that we expect additional zero eigenvalues for perturbations about symmetric atomic configurations [52]. This is demonstrated in Figure S4, where there are only 6 zero eigenvalues for the asymmetric liquid environments from the HEA dataset but many more zero eigenvalues for the symmetric environments found in the elpasolite dataset.
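The construction can be sketched with a numerical Jacobian, using a toy invariant descriptor (all pairwise distances) in place of SOAP; the six zero modes then appear directly in the spectrum of Λ.

```python
import numpy as np

def descriptor(R):
    """Toy translation- and rotation-invariant descriptor: all pairwise distances."""
    n = len(R)
    return np.array([np.linalg.norm(R[i] - R[j]) for i in range(n) for j in range(i + 1, n)])

def sensitivity_matrix(R, h=1e-5):
    """Lambda = J^T J, with J the (numerical) Jacobian of the descriptor w.r.t. positions."""
    flat = R.ravel()
    cols = []
    for k in range(flat.size):
        plus, minus = flat.copy(), flat.copy()
        plus[k] += h
        minus[k] -= h
        cols.append((descriptor(plus.reshape(R.shape)) -
                     descriptor(minus.reshape(R.shape))) / (2 * h))
    J = np.stack(cols, axis=1)               # shape (descriptor length, 3N)
    return J.T @ J

rng = np.random.default_rng(4)
R = rng.normal(size=(5, 3))                  # an asymmetric 5-atom configuration
lam = np.linalg.eigvalsh(sensitivity_matrix(R))
print((lam < 1e-8).sum())                    # 6: three translations + three rotations
```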

E. KRR Models
A simple linear model was used to fit the average chemical potential µ_α for each element, so that the predicted energy Ê_j for configuration j, denoted by A_j, was given by

Ê_j = Σ_α n_{jα} µ_α + β · k(A_j),

where n_{jα} is the number of atoms of type α in A_j, β is a coefficient vector, and k(A_j) is shorthand for the vector of kernels between A_j and the structures in the training set,

k(A_i, A_j) = Σ_{a ∈ A_i} Σ_{b ∈ A_j} k(x_a, x_b),

where x_a is the descriptor of atom a, so that the kernel between structures is the sum over all pairwise atomic kernels. A polynomial kernel was used throughout, k(x_i, x_j) = (x_i · x_j)^ζ, where ζ = 1 is equivalent to RR and ζ = 2, 4, 8 or 16 were used for KRR. We follow ref. [56] in using the following loss function, motivated by the Gaussian process regression view,

L = (E − Ê)ᵀ Σ⁻¹ (E − Ê) + βᵀ K β,

where K_ij = k(A_i, A_j), E is the vector of total energies for structures in the training set, Ê are the predicted energies and Σ_ij = σ n_i δ_ij, where n_i is the total number of atoms in A_i. Minimizing L is equivalent to minimising the sum of the RMSE per atom on the training set and an L2 regression penalty on β. Doing so yields the following well-known solution [57],

β = (K + Σ)⁻¹ E.

In all cases multiple models were trained using a randomly selected train:test split, with all data not in the training set used as test data. The average MAE achieved across these models is reported, with error bars indicating one standard deviation, whilst the regularisation strength σ was chosen using k-fold cross validation, k ∼ 10–20.
V. DATA AVAILABILITY

All datasets used are freely available and the data used to generate all plots is available at https://doi.org/10.5281/zenodo.5793851.

VI. CODE AVAILABILITY
The compressions outlined in Figure 3 have been implemented in the Gaussian Approximation Potential (GAP) fitting code gap_fit [50]. A Jupyter notebook demonstrating how the W^T P_l compressed power spectrum is computed, and how the original power spectrum can be recovered from it, is available at https://doi.org/10.5281/zenodo.5793851.

Figure S1. Three pairs of environments are shown. The top row shows a pair of distinct environments that are not distinguished by the SOAP power spectrum, taken from ref. [37]. The middle and bottom rows show two possible variations using two different elements. The "coloring" on the bottom row breaks the degeneracy for ν = 2 (and ν, µ = 1, 1) whilst that shown in the middle row does not. There is no coloring of this pair that breaks the degeneracy for ν = 2 but not ν, µ = 1, 1.

Type | Length | d
ν = 2 | 11760 | 1.07
ν, µ = 1, 1 | 5760 | 0.856
ν, µ* = 1, 1 | 4800 | 0.849
ν, µ̄ = 1, 1 | 480 | 0.856
ν, d_J = 2, 2 | 3000 | 0.106
W^T P_l | 4800 | 0.91 ± 0.02

Figure S2. Two environments, modified from the HEA dataset, containing V (grey), Ta (blue), Mo (green) and Nb (gold) are shown. The central atom is marked in red for clarity and the central weight has been set to 0 so that only the neighbour atoms affect the distance between the environments. The density expansion was performed using N = 12, L = 9, r_cut = 3.2 Å, σ = 0.4 Å and a cut-off transition width of 0.1 Å. All of the neighbouring atoms are within 3 Å of the central atom so there are no cut-off effects present. The maximum distance across the dataset for all descriptors was ≈ √2, so that a direct comparison between distances is meaningful. The distances between the two environments are shown in the accompanying table; for W^T P_l the mean and standard deviation from 20 sets of randomly chosen weights are shown.

Figure S3. The energy and force errors on the test set, stratified by configuration type, are shown for the 2b+3b (blue) and 2b+SOAP models, ν, µ̄ = 1, 1 (orange), ν, µ = 1, 1 (green) and ν = 2 (red). Note the big improvement in the energy errors for the quinary alloy liquid. That the ν, µ = 1, 1 model achieves lower errors on the quinary liquid than the simpler W-Mo and W-Mo-Ta liquids can be attributed to a lack of training data; there are 55 training configurations for the former whilst for the others there are only 3.

Figure S4. The eigenvalues of the sensitivity matrix are shown for 50 randomly chosen liquid environments from the HEA dataset (left) and 3 randomly chosen environments from the elpasolite dataset (right). Values below 10⁻¹² have been increased to this value and have had a small level of noise added to the y coordinate. All points have had random noise in the range [−0.2, 0.2] added to the x coordinate to better see the data.
The grey dashed line marks the 6 smallest eigenvalues, which are all zero to within numerical tolerance.

Figure S5. The top two rows show the distance-distance correlations between all pairs of liquid environments in the HEA dataset. The distance according to ν = 2 is shown on the x-axis whilst the distance according to the compressed descriptor is shown on the y-axis. For the bottom two rows the distances have been ranked, per environment, and the rank-rank correlation is shown. All descriptors were computed using r_cut = 5 Å, n_max = 12, l_max = 9, σ = 0.4 Å and a cut-off transition width of 0.5 Å. The contribution of the central atom to the density was set to zero to allow for meaningful comparisons between environments with differing central atoms. All distances have been scaled so that the largest distance according to each descriptor is 1. The zero distances between environments and themselves have been excluded.