Data segmentation based on the local intrinsic dimension

One of the founding paradigms of machine learning is that a small number of variables is often sufficient to describe high-dimensional data. The minimum number of variables required is called the intrinsic dimension (ID) of the data. Contrary to common intuition, there are cases where the ID varies within the same data set. This fact has been highlighted in technical discussions, but seldom exploited to analyze large data sets and obtain insight into their structure. Here we develop a robust approach to discriminate regions with different local IDs and segment the points accordingly. Our approach is computationally efficient and can be proficiently used even on large data sets. We find that many real-world data sets contain regions with widely heterogeneous dimensions. These regions host points differing in core properties: folded versus unfolded configurations in a protein molecular dynamics trajectory, active versus non-active regions in brain imaging data, and firms with different financial risk in company balance sheets. A simple topological feature, the local ID, is thus sufficient to achieve an unsupervised segmentation of high-dimensional data, complementary to the one given by clustering algorithms.

In this work, we propose a method to segment data based on the local ID which aims at overcoming the aforementioned limitations. Building on TWO-NN 23, a recently proposed ID estimator which is insensitive to density variations and uses only the distances between points, we develop a Bayesian framework that allows identifying, by Gibbs sampling, the regions in the data landscape in which the ID can be considered constant. Our approach works even if the data are embedded in highly curved and twisted manifolds, topologically complex and not isomorphic to hyperplanes, and if they are harvested from a non-uniform probability density. Moreover, it is specifically designed to use only the distances between data points, and not their coordinates. These features, as we will show, make our approach computationally efficient and more robust than other methods. Applying our approach to data of various origins, we show that ID variations between different regions are common. These variations often reveal fundamental properties of the data: for example, unfolded states in a molecular dynamics trajectory of a protein lie on a manifold of lower dimension than the one hosting the folded states. Identifying regions of different dimensionality in a dataset can thus be a way to perform an unsupervised segmentation of the data. In contrast with common approaches, we do not group the data according to their density but perform segmentation based on a geometric property defined on a local scale: the number of independent directions along which neighboring data are spread.

Methods
A Bayesian approach for discriminating manifolds with different IDs. The starting point of our approach is the recently proposed TWO-NN estimator 23, which infers the ID from the statistics of the distances to the first two neighbors of each point. Let the data (x_1, x_2, ..., x_N), with N indicating the number of points, be i) independent and identically distributed samples from a probability density ρ(x) with support on a manifold of unknown dimension d, such that ii) for each point x_i, ρ(x_i) is approximately constant in the spherical neighborhood with center x_i and radius given by the distance between x_i and its second neighbor. Assumption i) is shared by all methods that infer the ID from the distribution of distances (e.g., Refs. 5, 6, 9-12, 14). Assumption ii) is the key assumption of TWO-NN: while of course it may not be perfectly satisfied, other methods used to infer the ID from distances require uniformity of the density on a larger scale, including the first k neighbors, usually with k ≫ 2. Let r_i1 and r_i2 be the distances from x_i to its first and second neighbor; then µ_i := r_i2 / r_i1 follows the Pareto distribution f(µ_i | d) = d µ_i^(-(d+1)). This readily allows the estimation of d from µ_i, i = 1, 2, ..., N. We can write the global likelihood of µ ≡ {µ_i} as

P(µ | d) = d^N exp(-(d + 1) V),     (1)

where V := Σ_{i=1}^N log(µ_i). From Eq. (1), and upon specifying a suitable prior on d, a Bayesian estimate of d is immediately obtained. This estimate is particularly robust to density variations: in Ref. 23 it was shown that restricting the uniformity requirement only to the region containing the first two neighbors allows properly estimating the ID also in cases where ρ varies strongly.
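As a sanity check of Eq. (1), the maximum-likelihood estimate it implies, d = N/V, can be reproduced in a few lines of Python (a brute-force sketch of the estimator, not the published implementation):

```python
import math
import random

def two_nn_id(points):
    """Maximum-likelihood TWO-NN estimate: d = N / V, with V = sum_i log(r_i2 / r_i1)."""
    n, V = len(points), 0.0
    for i, p in enumerate(points):
        sq = sorted(sum((a - b) ** 2 for a, b in zip(p, x))
                    for j, x in enumerate(points) if j != i)
        V += 0.5 * math.log(sq[1] / sq[0])  # log(r2 / r1) from squared distances
    return n / V

random.seed(0)
# 500 points sampled from a 5-dimensional Gaussian
sample = [[random.gauss(0.0, 1.0) for _ in range(5)] for _ in range(500)]
est = two_nn_id(sample)
print(round(est, 2))  # close to the true ID of 5
```

Note that only the two smallest distances per point are used, which is what makes the estimator insensitive to density variations on larger scales.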
TWO-NN can be extended to yield a heterogeneous-dimensional model with an arbitrarily high number of components. Let x = {x_i} be i.i.d. samples from a density ρ(x) with support on the union of K manifolds of varying dimension. This multi-manifold framework is common to many previous works investigating heterogeneous dimension in a dataset 8-11, 14-16, 18-20, 24, 25. Formally, let ρ(x) = Σ_{k=1}^K p_k ρ_k(x), where each ρ_k(x) has support on a manifold of dimension d_k and p := (p_1, p_2, ..., p_K) are the a priori probabilities that a point belongs to manifolds 1, ..., K. We shall first assume that K is known, and later show how it can be estimated from the data. The distribution of the µ_i is then simply a mixture of Pareto distributions:

P(µ_i | d, p) = Σ_{k=1}^K p_k d_k µ_i^(-(d_k+1)).     (2)

Here, as in Ref. 23, we assume that the variables µ_i are independent. This condition could be rigorously imposed by restricting the product in Eq. (1) to data points sharing no first or second neighbors. In fact, one does not find significant differences between the empirical distribution of µ obtained with all points and the one obtained by restricting the sample to the independent points (i.e., the empirical distribution of µ obtained from the entire sample is well fitted by the theoretical Pareto law). However, restricting the estimation to independent samples generally worsens the estimates, as one is left with a small fraction (typically ≈ 15%) of the original points. Therefore, for the purpose of parametric inference, we decide to include all points in the estimation. In Supplementary Table S1 we show that restricting the estimation to independent points yields results that are qualitatively in good agreement with those obtained using all points, but with generally worse dimension estimates. Following the customary approach 26, we introduce latent variables z := (z_1, z_2, ..., z_N), where z_i = k indicates that point i belongs to manifold k.
We have P(µ_i | d, p, z) = P(µ_i | z_i, d) P_pr(z_i | p), with P(µ_i | z_i, d) = d_{z_i} µ_i^(-(d_{z_i}+1)) and P_pr(z_i | p) = p_{z_i}. This yields the posterior

P_post(z, d, p | µ) ∝ P(µ | z, d) P_pr(z | p) P_pr(d) P_pr(p).     (3)

We use independent Gamma priors on d, d_k ∼ Gamma(a_k, b_k), and a joint Dirichlet prior on p ∼ Dir(c_1, ..., c_K). We fix a_k = b_k = c_k = 1 ∀k, corresponding to a maximally non-informative prior on p and a prior expectation E[d_k] = 1 for all k. If one has a different prior expectation on d and p, other choices of prior may be more convenient. The posterior (3) does not have an analytically simple form, but it can be sampled by standard Gibbs sampling 27, allowing for the joint estimation of d, p, z. However, model (3) has a serious limitation: Pareto distributions with (even largely) different values of d overlap to a great extent. Therefore, the method cannot be expected to correctly estimate the z_i: a given value µ_i may be compatible with several manifold memberships. This issue can be addressed by correcting an unrealistic feature of model (3), namely the independence of the z_i. We assume that the neighborhood of a point is more likely to contain points from the same manifold than from different manifolds. This requirement can be enforced with an additional term in the likelihood that penalizes local inhomogeneity (see also Ref. 14). Consider the q-neighbor matrix N^(q), with nonzero entries N^(q)_ij only if j ≠ i is among the first q neighbors of i. Let ξ be the probability to sample the neighbor of a point from the same manifold, and 1 - ξ the probability to sample it from a different manifold, with ξ > 0.5. Define n_i^in as the number of neighbors of i with the same manifold membership, n_i^in = Σ_j N^(q)_ij δ_{z_i z_j}. Then we introduce a probability distribution for N^(q) as

P(N^(q) | z) = (1/Z_q) Π_{i=1}^N ξ^{n_i^in} (1 - ξ)^{q - n_i^in},     (4)

where Z_q is a normalization factor that depends also on the sizes of the manifolds (see Supplementary Information for its explicit expression).
This term favors homogeneity within the q-neighborhood of each point. With this addition, the model now reads

P_post(z, d, p | µ, N^(q)) ∝ P(µ | z, d) P(N^(q) | z) P_pr(z | p) P_pr(d) P_pr(p).     (5)

The posterior (5) is sampled with Gibbs sampling starting from a random configuration of the parameters. We repeat the sampling M times starting from different random configurations of the parameters, keeping the chain with the highest maximum log-posterior value. The parameters d and p can be estimated by their posterior averages. For all estimations, we only include the last 10% of the sampled configurations, which ensures that the initial burn-in period is excluded 28. Moreover, visual inspection of the MCMC output is performed to ensure that we do not incur any label-switching issues 29. Indeed, we observed that once the chains converge to one mode of the posterior distribution they do not transition to different modes. As for the z, we estimate the value of π_ik ≡ P_post(z_i = k). Point i can be safely assigned to manifold k if π_ik > 0.8; otherwise we consider its assignment uncertain. Note that our definition of "manifold" requires compactness: two spatially separated regions with approximately the same dimension are recognized as distinct manifolds by our approach.
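For concreteness, a single-label Gibbs update can be sketched as follows. This is only an illustration: the normalization factor Z_q, which couples all the labels, is ignored here, and the function name sample_z_i is ours rather than part of the published code.

```python
import random

def sample_z_i(mu_i, neighbor_labels, d, p, xi=0.8):
    """One Gibbs update of a single label z_i: Pareto likelihood of mu_i times
    the local-homogeneity term. Simplified sketch: the global normalization
    Z_q, which couples all labels, is deliberately ignored."""
    weights = []
    for k in range(len(d)):
        n_in = sum(z == k for z in neighbor_labels)
        lik = p[k] * d[k] * mu_i ** (-(d[k] + 1))
        hom = xi ** n_in * (1 - xi) ** (len(neighbor_labels) - n_in)
        weights.append(lik * hom)
    r = random.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if r < acc:
            return k
    return len(weights) - 1

random.seed(2)
# a point with mu close to 1 (typical of a high-ID manifold) whose q = 3
# neighbors all carry label 1
draws = [sample_z_i(1.05, [1, 1, 1], d=[4.0, 9.0], p=[0.5, 0.5])
         for _ in range(1000)]
frac = sum(draws) / len(draws)
print(frac)  # the homogeneity term pushes the assignment toward manifold 1
```

Without the homogeneity factor (ξ = 0.5), the two Pareto likelihoods alone would leave the assignment far more ambiguous, which is exactly the overlap problem the neighborhood term is designed to resolve.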
We name our method Hidalgo (Heterogeneous Intrinsic Dimension Algorithm). Hidalgo has three free parameters: the number of manifolds K, the local homogeneity range q, and the local homogeneity level ξ. We fix q and ξ based on preliminary tests conducted on several artificial data sets, which show that the optimal "working point" of the method is q = 3, ξ = 0.8 (Supplementary Fig. S1). As is common for mixture models, the value of K can be estimated by model selection: we compare the models with K and K + 1 components for increasing K, starting from K = 1 and stopping when there is no longer a significant improvement, as measured by the average log-posterior.
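The model-selection loop just described can be sketched as a simple stopping rule (illustrative only: avg_log_posterior stands in for a full Hidalgo fit at a given K, which we do not reproduce here):

```python
def select_k(avg_log_posterior, k_max=10):
    """Increase K from 1 and stop as soon as the average log-posterior of the
    fitted model no longer improves. `avg_log_posterior` is a hypothetical
    callable wrapping a full Hidalgo fit at a given K."""
    best_k = 1
    best_l = avg_log_posterior(1)
    for k in range(2, k_max + 1):
        l = avg_log_posterior(k)
        if l <= best_l:
            break  # no further significant improvement
        best_k, best_l = k, l
    return best_k

# toy log-posterior curve peaking at K = 5, qualitatively like Fig. 1e
curve = {1: -900.0, 2: -700.0, 3: -650.0, 4: -600.0, 5: -580.0, 6: -595.0}
print(select_k(curve.get, k_max=6))  # prints 5
```

In practice one may want a tolerance threshold on the improvement rather than a strict inequality, since the average log-posterior is itself a Monte Carlo estimate.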
The computational complexity of the algorithm depends on the total number of points N, the number of manifolds K, and the parameter q. As input to the Gibbs sampling, the method requires the neighbor matrix and the µ vector. To compute the latter, one must identify the first q neighbors of each point, which, starting from the distance matrix, requires O(qN^2) steps (possibly reducible to O(qN log N) with efficient nearest-neighbor-search algorithms 30). The computation of a distance matrix from a coordinate matrix takes O(DN^2) steps, where D is the total number of coordinates. In all practical examples with N ≲ 10^6 points, these preprocessing steps require much less computational time than the Gibbs sampling. The complexity of the Gibbs sampler is dominated by the sampling of the z, which scales linearly with N and K, and quadratically with q (in total, O(KNq^2) per iteration). The number of iterations required for convergence cannot be estimated in general; in practice, we found it to be roughly linear in N (Supplementary Fig. S2). Thus, in practice the total complexity of Hidalgo is roughly quadratic in N. The code implementing Hidalgo is freely available at https://github.com/micheleallegra/Hidalgo.
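As a sketch of the preprocessing stage, the following brute-force routine computes the µ vector and the q-neighbor lists from coordinates (our illustration; the released code may organize this step differently):

```python
import math

def preprocess(points, q=3):
    """Compute the mu vector and q-neighbor lists needed by the Gibbs sampler.
    Brute force, O(q N^2); tree-based neighbor search would give O(q N log N)."""
    mu, nbrs = [], []
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, x), j)
                       for j, x in enumerate(points) if j != i)
        mu.append(dists[1][0] / dists[0][0])      # r_i2 / r_i1
        nbrs.append([j for _, j in dists[:q]])    # first q neighbors of i
    return mu, nbrs

pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.1], [2.0, 0.1], [0.9, 1.0]]
mu, nbrs = preprocess(pts, q=3)
print(nbrs[0], round(mu[0], 3))
```

Only distances enter the computation, consistent with the method's design: the coordinates themselves are never used downstream of this step.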

Results
Validation of the method on artificial data. We first test Hidalgo on artificial data for which the true manifold partition of the data is known. We start from the simple case of two manifolds with different IDs, d_1 and d_2. We consider several examples, varying the higher dimension d_1 from 5 to 9 while fixing the lower dimension d_2 = 4. On both manifolds, N = 1000 points are sampled from a multivariate Gaussian with covariance matrix given by 1/d_i (i = 1, 2) times the identity matrix of the proper dimension. The two manifolds are embedded in a space of dimension equal to the higher dimension d_1, with their centers at a distance of 0.5 standard deviations, so they partly overlap. In Fig. 1a,b we illustrate the results obtained with fixed ξ = 0.5, equivalent to the absence of any statistical constraint on neighborhood uniformity (note that for ξ = 0.5 the parameter q is irrelevant). The estimates of the two dimensions are shown together with the normalized mutual information (NMI) between the estimated assignment of points and the true one. The latter is defined as the mutual information between the assignment and the ground-truth labeling divided by the entropy of the ground-truth labeling. As expected, without a constraint on the assignment of neighbors, the method is not able to correctly separate the points, and thus to estimate the dimensions of the two manifolds, even in the case of quite different IDs (NMI < 0.001). As soon as we consider ξ > 0.5, results improve. A detailed analysis of the influence of the hyperparameters q and ξ is reported in Supplementary Information. Based on this analysis, we identify the optimal parameter choice as q = 3, ξ = 0.8. In Fig. 1c,d we repeat the same tests as in Fig. 1a,b but with q = 3 and ξ = 0.8.
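The NMI variant used throughout (mutual information divided by the entropy of the ground-truth labeling) is easy to compute directly; a minimal sketch:

```python
import math
from collections import Counter

def nmi(pred, truth):
    """Mutual information between two labelings, normalized by the entropy
    of the ground-truth labeling (the convention used in the text)."""
    n = len(truth)
    joint = Counter(zip(pred, truth))
    px, py = Counter(pred), Counter(truth)
    mi = sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
             for (a, b), c in joint.items())
    h = -sum((c / n) * math.log(c / n) for c in py.values())
    return mi / h

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # a pure relabeling still gives NMI = 1
print(nmi([0, 1, 0, 1], [0, 0, 1, 1]))  # independent labelings give NMI = 0
```

Because the score is invariant under permutations of the predicted labels, it is well suited to comparing an unsupervised segmentation against a ground-truth partition.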
Scientific Reports 10:16449 (2020), https://doi.org/10.1038/s41598-020-72222-0

Now the NMI between the estimated and ground-truth assignment is almost 1 in all cases and, correspondingly, the estimation of d_1 and d_2 is accurate. To verify whether our approach can discriminate between more than two manifolds (K > 2), we consider a more challenging scenario consisting of five Gaussians with unitary variance in dimensions 1, 2, 4, 5, and 9, respectively. Some of the Gaussians have similar IDs, as in the case of dimensions 1 and 2, or 4 and 5. Moreover, they can be very close to each other: the centers of those in dimensions 4 and 5 are at a distance of only half a standard deviation, and they are crossed by the Gaussian in dimension 1.
To analyze such a dataset we again choose the hyperparameters q = 3 and ξ = 0.8. We do not fix the number of manifolds K to its ground-truth value K = 5, but let the method estimate K without relying on a priori information. We perform the analysis with different values K = 1, ..., 6 and compute an estimate of the average log-posterior L for each K. Results are shown in Fig. 1e. We see that L increases up to K = 5 and then decreases, from which we infer that the optimal number of manifolds is K = 5. In Fig. 1f we illustrate the final assignment of points to the respective manifolds together with the estimated dimensions, upon setting the number of manifolds to K = 5. The separation of the manifolds is very good: only a few points of the manifold with dimension 1 are incorrectly assigned to the one with dimension 2, and vice versa. The normalized mutual information between the ground truth and our classification is 0.89. Finally, we show that our method can handle data embedded in curved manifolds, or with complex topologies. Such data should not pose additional challenges to the method: Hidalgo is sensitive only to the local structure of the data. In terms of the distribution of the first and second nearest-neighbor distances, a 2-d torus looks exactly like a 2-d plane, and thus both objects, though topologically very different, should be correctly classified as 2-d regardless of their shape. We generated a dataset with five Gaussians in dimensions 1, 2, 4, 5, and 9 (Fig. 2a). Each Gaussian is then embedded on a curved nonlinear manifold: a 1-dimensional circle, a 2-dimensional torus, a 4-dimensional Swiss roll, a 5-dimensional sphere, and a 9-dimensional sphere. None of these manifolds is topologically isomorphic to a hyperplane, except for the Swiss roll. Moreover, the manifolds intersect.
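The claim that a torus is locally indistinguishable from a plane can be checked numerically: sampling a flat 2-d torus embedded in 4 dimensions and applying the TWO-NN maximum-likelihood estimate d = N/V recovers a dimension close to 2 (our own minimal check, not taken from the paper):

```python
import math
import random

def two_nn_id(points):
    """Maximum-likelihood TWO-NN estimate d = N / sum_i log(r_i2 / r_i1)."""
    V = 0.0
    for i, p in enumerate(points):
        sq = sorted(sum((a - b) ** 2 for a, b in zip(p, x))
                    for j, x in enumerate(points) if j != i)
        V += 0.5 * math.log(sq[1] / sq[0])
    return len(points) / V

random.seed(3)
torus = []
for _ in range(800):
    # flat torus: two independent angles embedded in R^4
    u = random.uniform(0.0, 2.0 * math.pi)
    v = random.uniform(0.0, 2.0 * math.pi)
    torus.append([math.cos(u), math.sin(u), math.cos(v), math.sin(v)])
est = two_nn_id(torus)
print(round(est, 2))  # close to 2, the torus's intrinsic dimension
```

The estimator never sees the global topology; only the first two neighbor distances enter, so the curvature and the absence of a boundary leave the result essentially unchanged.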

Figure 1.
Results on simple artificial data sets. We consider sets of points drawn from mixtures of multivariate Gaussians in different dimensions. In all cases, we perform 10^5 iterations of the Gibbs sampling and repeat the sampling M = 10 times starting from different random configurations of the parameters. We then keep the sampling with the highest maximum average log-posterior. Panels (a,b): points are drawn from two Gaussians in different dimensions. The higher dimension varies from d_1 = 5 to d_1 = 9; the lower dimension is fixed at d_2 = 4. N = 1000 points are sampled from each manifold. We fix K = 2. We show results obtained with ξ = 0.5, namely without enforcing neighborhood uniformity (here q = 1, but since ξ = 0.5 the value of q is irrelevant). In panel (a) we plot the estimated dimensions of the manifolds (dots: posterior means; error bars: posterior standard deviations) and the NMI between our classification and the ground truth. In panel (b) we show the assignment of points to the low-dimensional (blue) and high-dimensional (orange) manifolds for the case d_1 = 9 (points are projected onto the first 3 coordinates). Similar figures are obtained for other values of d_1. Panels (c,d): the same setting as in panels (a,b), but now we enforce neighborhood uniformity, using ξ = 0.8 and q = 3. Points are now correctly assigned to the manifolds, whose IDs are properly estimated. Panels (e,f): points drawn from five Gaussians in dimensions 1, 2, 4, 5, and 9.

As shown in Fig. 2b, our approach can distinguish the five manifolds. The dimensions are correctly retrieved (Fig. 2b), and points are assigned to the right manifold with an accuracy corresponding to a normalized mutual information (NMI) of 0.84. Only some points of the 2-d torus are misassigned to the 4-d Swiss roll. As in the previous example, L increases up to K = 5 and then decreases, from which we infer that the optimal number of manifolds is K = 5 (Fig. 2c).
ID variability in a protein folding trajectory. As a first real application of Hidalgo, we address ID estimation for a dynamical system. Asymptotically, dynamical systems are usually confined to a low-dimensional manifold in phase space, called an attractor. Much effort has been devoted to characterizing the ID of such attractors 4. However, in the presence of multiple metastable states, an appropriate description of the visited phase space may require the use of multiple IDs. Here, we consider the dynamics of the villin headpiece (PDB entry: 2F4K). Due to its small size and fast folding kinetics, this protein is a prototypical system for molecular dynamics simulations. Our analysis is based on the longest available simulated trajectory of the system, from Ref. 33. During the simulated 125 µs, the protein performs approximately 10 transitions between the folded and the unfolded state. We expect to find different dimensions in the folded and unfolded states, since these two states are metastable and would be considered distinct attractors in the language of dynamical systems. Moreover, they are characterized by different chemical and physical features: the folded state is compact and dry in its core, while the unfolded state is swollen, with most of the residues interacting with a large number of water molecules. We extract the values of 32 key observables (the backbone dihedral angles) for all the N = 31,000 states in the trajectory and apply Hidalgo to these data of extrinsic dimension D = 32. We obtain a vector of estimated intrinsic dimensions d and an assignment of each point i to one of the K manifolds. We find four manifolds: three low-dimensional ones (d_1 = 11.8, d_2 = 12.9, d_3 = 13.2) and a high-dimensional one (d_4 = 22.7). We recall that two spatially separated regions with approximately the same dimension (in this case, those with d_2 and d_3) are recognized as distinct manifolds by our approach.
To test whether this partition into manifolds reflects the separation between the folded and the unfolded state, we relate the partition to the fraction of native contacts Q, which can be straightforwardly estimated for each configuration of the system. Q is close to one only if the configuration is folded, while it approaches zero when the protein is unfolded. In Fig. 3 we plot the probability distribution of Q restricted to each of the four manifolds. We find that the vast majority of the folded configurations (Q > 0.8) are assigned to the high-dimensional manifold. Conversely, the unfolded configurations (Q ≤ 0.8) are most of the time assigned to one of the low-dimensional manifolds. This implies that a configuration belonging to the low-dimensional manifolds is almost surely unfolded. The normalized mutual information between the manifold assignment and the folded/unfolded classification is 0.60. Thus, we can essentially identify the folded state using the intrinsic dimension, a purely topological observable unaware of any chemical detail.
ID variability in time series from brain imaging. In the next example, we analyze a set of time series from functional magnetic resonance imaging (fMRI) of the brain, representing the BOLD (blood-oxygen-level-dependent) signal of each voxel, which captures the activity of a small part of the brain 34. fMRI time series are often projected onto a lower dimension through linear projection techniques like PCA 35, a step that assumes a uniform ID. However, the gross features of the signal (e.g., power spectrum and entropy) are often highly variable across different parts of the brain, and non-uniformities in the ID may well be present, too. Here, we consider a single-subject fMRI recording containing D = 202 images collected while the subject was performing a visuomotor task 31,36.
From the images we extract the N = 29,851 time series corresponding to the BOLD signals of each voxel. Applying Hidalgo, we find two manifolds with very different dimensions, d_1 = 16.2 and d_2 = 31.9. Again, we relate the identified manifolds to a completely independent quantity, the clustering frequency introduced in Refs. 31, 32, which measures the temporal coherence of the signal of a voxel with the signals of other voxels in the brain. Voxels with non-negligible clustering frequency (> 0.2) are likely to belong to brain areas involved in the cognitive task at hand. In Fig. 4 we show the probability distribution of the clustering frequency restricted to the two manifolds. We find that the "task-related" voxels (clustering frequency > 0.2) almost invariably belong to the manifold with high dimensionality. These voxels appear concentrated in the occipital, parietal, and temporal cortex (Fig. 4, bottom), and belong to a task-relevant network of coherent activity 31. The normalized mutual information between the manifold assignment and the task-relevant/task-irrelevant classification is 0.24; this value is lower than in the protein folding case because the low-dimensional manifold is "contaminated" by many points with low clustering frequency. This result finds an interesting interpretation: the subset of "relevant" voxels gives rise to patterns that are not only coherent but also characterized by a larger ID than the remainder of the voxels. On the contrary, the incoherent voxels exhibit a lower ID, hence a reduced variability, consistent with the fact that the corresponding time series are dominated by low-dimensional noise. Again, this feature emerges from the global topology of the data, revealed by our ID analysis, without exploiting any knowledge of the task that the subject is performing.

ID variability in financial data.
Our final example is in the realm of economics. We consider firms in the well-known Compustat database (N = 8309). For each firm, we consider D = 31 balance-sheet variables from the fiscal year 2016 (for details, see Supplementary Table S2). Applying Hidalgo, we find four manifolds of dimensions d_1 = 5.4, d_2 = 6.3, d_3 = 7.0, and d_4 = 9.1. To understand this result, we relate our classification to common indexes describing the type and financial stability of a firm. We start with the Fama-French classification 37, which assigns each firm to one of twelve categories depending on the firm's trade. In Fig. 5 (top) we separately consider firms belonging to the different Fama-French classes and compute the fraction of firms assigned to each of the four manifolds identified by Hidalgo. The two classifications are not independent, since the fractions for different Fama-French classes are highly non-uniform. More precisely, the normalized mutual information between the two classifications is 0.19, rejecting the hypothesis of statistical independence (p-value < 10^-5). In particular, firms in the utilities and energy sectors show a preference for low dimensions (d_1 and d_2), while firms producing or selling products (nondurables, durables, manufacturing, chemicals, equipment, wholesale) are concentrated in the manifold with the highest dimension d_4. The manifold with intrinsic dimension d_3 mostly includes firms in the financial and health care sectors. Different dimensions are related not only to the classification of a firm, but also to its financial robustness. We consider the S&P quality ratings for the firms assigned to each manifold (also from Compustat; ratings are available for 2894 firms). In Fig. 5 (bottom) we show the distribution of ratings for the different manifolds. These distributions appear to be different.
We cannot predict the rating based on the manifold assignment (the normalized mutual information between the manifold assignment and the rating is 0.04), but companies with worse ratings show a preference for low-dimensional manifolds: converting the ratings into numerical scores from 1 to 8, we found a Pearson correlation of 0.22 between the local ID and the rating (p < 10^-10). We suggest a possible interpretation of this phenomenon: a low ID may imply a more rigid balance structure, which may entail a higher sensitivity to market shocks, which, in turn, may trigger domino effects in contagion processes. This result shows that information about the S&P rating can be derived using only the topological properties of the data landscape, without any in-depth financial analysis. For example, no information on the commercial relationships between the firms or the nature of their business is used.
Comparison with other methods. We have compared our method with two state-of-the-art methods from the literature, considering the synthetic datasets as well as the protein folding dataset (for which the folded/unfolded classification yields an approximate "ground truth"). The first method we consider is "sparse manifold clustering and embedding" (SMCE, Ref. 19). The method creates a graph by connecting neighboring points, assumed to lie on the same manifold, and then uses spectral clustering on this graph to retrieve the manifolds.
As output, it returns a manifold assignment for each point, together with an ID estimate. We resort to the implementation provided by Ehsan Elhamifar (http://khoury.neu.edu/home/eelhami/codes.htm). For SMCE, we can compute the NMI between the manifold assignment and the ground-truth manifold label. Furthermore, we can report the estimated local ID for points with a given ground-truth assignment, identified with the ID of the manifold to which they are assigned. SMCE requires the number of manifolds K as input and depends on a free parameter λ. To present a fair comparison, we fixed K at its ground-truth value, explored values in the range 0.01 ≤ λ ≤ 20, and report the results corresponding to the highest NMI with the ground truth. Furthermore, we repeated the estimation M = 10 times and kept the results with the highest NMI with the ground truth. The second method is the local ID (LID) estimation by Carter et al. 10, which combines ID estimation restricted to a (large) neighborhood with local smoothing to produce an integer ID estimate for each point. We used the implementation provided by Kerstin Johnsson 38 (https://github.com/kjohnsson/intrinsicDimension) with a neighborhood size k = 50, which is a standard value. Note that the LID estimation is deterministic, so we do not need to repeat it several times. For LID, we can compute the NMI between the (integer) label given by the local ID estimate and the ground truth; moreover, we can compute the average ID for the points with a given ground-truth assignment.
The results of the comparison are presented in Table 1.
For the examples with two manifolds, SMCE 19 is able to correctly retrieve the manifolds with high accuracy (NMI ≥ 0.94). The estimate of the lower dimension d_2 = 4 is always correct, while the higher dimension is underestimated for d_1 ≥ 7. Overall, SMCE performs very well (even better than Hidalgo) on the datasets with two manifolds, as the two manifolds are well separated, matching the assumptions of the model. On the examples with five manifolds, the performance of SMCE is worse, as the method cannot correctly retrieve intersecting manifolds. In the example with five linear manifolds, SMCE cannot retrieve the manifold of dimension d_1 = 1, which intersects the manifolds of dimension d_3 = 4 and d_4 = 5. Moreover, the ID estimates are quite poor. In the example with five curved manifolds, SMCE merges the manifolds with d_4 = 5 and d_5 = 9, and cannot retrieve the manifolds with d_1 = 1 and d_3 = 4, which get split. Again, the dimension estimates are poor. For the protein folding data, where regions with different IDs are probably not well separated, SMCE finds a single manifold with d = 11, and is thus unable to detect any ID variation.
For the examples with two manifolds, the LID method 10 is generally able to correctly retrieve regions with different IDs with high accuracy (NMI ≥ 0.92), except for the case d_1 = 5, d_2 = 4, where results are inaccurate. The estimate of the lower dimension d_2 = 4 is always precise, while the higher dimension is always underestimated, and the degree of underestimation increases with d_1. This is expected, since the ID estimates are produced by assuming a uniform density in a large neighborhood of each point, an assumption that fails in these data, especially for points on the manifold with higher dimension. Overall, LID performs quite well (comparably with Hidalgo) in separating regions with different IDs in the datasets with two manifolds, but gives worse ID estimates than Hidalgo. In the examples with five manifolds, the performance of LID is slightly worse. In the example with five linear manifolds, LID merges the manifolds of dimension d_3 = 4 and d_4 = 5, and misassigns some of their points to the manifold of dimension 1. This is because relatively large neighborhoods are used for the estimation of the local ID, which makes it difficult to discriminate close regions with similar IDs (d_3 = 4 and d_4 = 5) and leads to a "contamination" of the results by points of the manifold with d_1 = 1 intersecting the two. In the example with five curved manifolds, LID can correctly identify the manifolds with d_2 = 2, d_4 = 5, and d_5 = 9, even if it underestimates the ID in the latter two cases (high dimension, large density variations). Many points in the manifold of dimension d_3 = 4 are misassigned to the manifold of dimension d_1 = 1, due to the fact that the two manifolds intersect extensively. Finally, for the protein folding data, where the density is highly variable, LID shows a tendency for folded configurations to have a higher local ID than unfolded ones. However, it cannot discriminate between folded and unfolded configurations as well as Hidalgo does.
The ID of the folded configurations is highly underestimated compared to the Hidalgo estimate, probably because of density variations.
In Fig. 6 we compare the results of the three methods (Hidalgo, SMCE, and LID) for the dataset with five curved manifolds.

Discussion
The increasing availability of large amounts of data has considerably expanded the opportunities and challenges for unsupervised data analysis. Often data come in the form of a completely uncharted "point cloud" for which no model is at hand. The primary goal of the analyst is to uncover some structure within the data. For this purpose, a typical approach is dimensionality reduction, whereby the data are simplified by projecting them onto a low-dimensional space.
The appropriate intrinsic dimension (ID) of the space onto which one should project the data is not constant everywhere. In this work, we developed an algorithm (Hidalgo) to find manifolds of different IDs in the data. Applying Hidalgo, we observed large variations of the ID in datasets of diverse origin (a molecular dynamics simulation, a set of time series from brain imaging, a dataset of firm balance sheets). This finding suggests that a highly non-uniform ID is not an oddity, but a rather common feature. Generally speaking, ID variations can be expected whenever the system under study can be in various "macrostates" characterized by different numbers of degrees of freedom due, for instance, to differences in the constraints. As an example, in the folded state the protein is able to locally explore the phase space in many independent directions, while in the unfolded state it only performs wider movements in fewer directions. In the case of companies, a financially stable company may have more degrees of freedom to adjust its balance sheet.
In the cases we have analyzed, regions characterized by different dimensions were found to host data points differing in important properties. Thus, variations of the ID within a dataset can be used to classify the data into different categories. It is remarkable that such a classification is achieved by looking at a simple topological property, the local ID. Let us stress that ID-based segmentation should not be considered an alternative to clustering methods. In fact, in most cases (e.g., most classical benchmark sets for clustering) different clusters do not correspond to significantly different intrinsic dimensions; rather, they correspond to regions of high density of points within a single manifold of well-defined ID. In such cases, clusters cannot be identified as different manifolds by Hidalgo. Conversely, when the ID is variable, regions of different IDs do not necessarily correspond to clusters in the standard meaning of the word: they may or may not correspond to regions of high density of points. A typical example is protein folding: while the folded configurations are quite similar to one another, and hence constitute a cluster in the traditional sense, the unfolded configurations may be very heterogeneous, hence quite distant in data space, and therefore not a "cluster" in the standard meaning of the word.
The idea that the ID may vary within the same data is not new. In fact, many works have discussed the possibility of a variable ID and developed methods to estimate multiple IDs 7-18,20-22. Our method builds on these previous contributions but is designed with the specific goal of overcoming the technical limitations of other available approaches and making ID-based segmentation a general-purpose tool. Our scheme uses only the distances between the data points, and not their coordinates, which significantly enlarges its scope of applicability. Moreover, the scheme uses only the distances between a point and its q nearest neighbors, with q ≤ 5. We thus circumvent the notoriously difficult problem of defining a globally meaningful metric 3, only needing a consistent metric on a small local scale. Finally, Hidalgo is not hindered by density variations or curvature. For these reasons, Hidalgo is competitive with other manifold learning and variable-ID estimation methods and, in particular, can yield better ID estimates and manifold reconstruction. We have compared our method with two state-of-the-art methods in the literature, the local ID method (LID 10) and the sparse manifold clustering and embedding (SMCE 19) method. Both methods show issues in the cases of intersecting manifolds and variable densities, and yield worse ID estimates than Hidalgo, especially for large IDs.
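The TWO-NN estimator on which our scheme builds can be sketched in a few lines. The snippet below is a minimal illustration of the published TWO-NN maximum-likelihood formula (the function name and the brute-force distance computation are our own choices, not the reference implementation): for each point one takes the ratio µ = r2/r1 of the distances to its second and first nearest neighbors, and the global ID estimate is N / Σi log µi.

```python
import numpy as np

def two_nn_id(X):
    """Global TWO-NN ID estimate: for each point, take the ratio
    mu_i = r2/r1 of its second- and first-nearest-neighbor distances;
    the maximum-likelihood estimate of the dimension is N / sum_i log(mu_i)."""
    X = np.asarray(X, dtype=float)
    # brute-force pairwise distances: fine for a sketch; a k-d tree scales better
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)   # exclude each point from its own neighbors
    dist.sort(axis=1)
    mu = dist[:, 1] / dist[:, 0]     # r2 / r1 for every point
    return len(X) / np.log(mu).sum()
```

Because only the ratio of the two smallest neighbor distances enters, the estimate requires the density to be approximately constant only on the scale of the second neighbor, which is what makes it insensitive to the density variations that hamper estimators based on larger neighborhoods.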
Hidalgo is computationally efficient and therefore suitable for the analysis of large data sets. For example, it takes ≈30 minutes to perform the analysis of the neuroimaging data, which includes 30,000 data points, on a standard computer using a single core. Implementing the algorithm with parallel Gibbs sampling 39 may considerably reduce the computing time and thus yield a fully scalable method. Obviously, Hidalgo has some limitations. Some are intrinsic to the way the data are modeled: Hidalgo is not suitable to cover cases in which the ID is a continuously varying parameter 21, or in which sparsity is so strong that points cannot be assumed to be sampled from a continuous distribution. In particular, Hidalgo is not suitable for discrete data, for which the basic assumptions of the method are violated; for example, ties in the sample could lead to null distances between neighbors, jeopardizing the computation of µ. Others are technical issues related to the estimation procedure and, at least in principle, susceptible to improvement in refined versions of the algorithm: for instance, one may improve the estimation with suitable enhanced sampling techniques. Finally, let us point out some further implications of our work. Our findings suggest a caveat with respect to common practices of dimensionality reduction, which assume a uniform ID. In the case of significant ID variations, a global dimensionality reduction scheme may become inaccurate. In principle, the partition into manifolds obtained with Hidalgo may be the starting point for applying standard dimensionality reduction schemes: for example, one can imagine applying PCA 1, Isomap 3, or sketch-map 40 separately to each manifold. However, we point out that a feasible scheme to achieve this goal does not come as an immediate byproduct of our method. Once a manifold with a given ID is identified, it is highly nontrivial to provide a suitable parameterization thereof, especially because the manifolds may be highly nonlinear, and even topologically nontrivial. How to suitably integrate our approach with a dimensionality reduction scheme remains a topic for further research.
Figure 6. Results of other variable-ID analysis methods on topologically complex artificial data. We consider sets of points drawn from mixtures of multivariate Gaussians embedded in curved nonlinear manifolds, as detailed in Fig. 2, and compare the results of Hidalgo (a), LID (b), and SMCE (c). In panel (a) we plot again, for ease of comparison, the results of Fig. 2b. In panel (b), we show the local ID estimates given by LID 10, considering k = 50 nearest neighbors for local ID estimation and smoothing. The method assigns an integer dimension to each point: in the plot, the dimension of each point is represented by its color, according to the color bar shown. The method correctly retrieves the manifolds of dimension 5 and 9, even though the points in the manifold of dimension 5 are assigned a local ID that oscillates between 4 and 5, and points in the manifold of dimension 9 are assigned a local ID that oscillates between 7 and 8. The manifold of dimension 2 is also well identified. The manifolds of dimension 1 and 4, however, cannot be correctly discriminated: points of the manifold of dimension 4 are assigned ID estimates of 1 or 3, without a clear separation of the manifolds. The NMI between the ground truth and the integer local ID is 0.77. In panel (c) we show the assignment of points given by SMCE 19. The manifolds of dimension 5 and 9 are merged. The manifold of dimension 2 is identified, but it is contaminated by points from the manifold of dimension 4. SMCE cannot correctly retrieve the manifolds of dimension 1 and 4. The NMI between the ground truth and the assignment is 0.61. ID estimates were obtained according to the prescription given in 19.
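The generic pointwise strategy underlying local-ID estimators of the kind compared above can be sketched with the classical k-NN maximum-likelihood estimator of Levina and Bickel. This is an illustrative stand-in, not the exact algorithm of ref. 10: each point receives its own dimension estimate from the distances to its k nearest neighbors, which also illustrates why large neighborhoods can blur nearby manifolds of similar dimension.

```python
import numpy as np

def local_id_knn(X, k=20):
    """Pointwise ID via the k-NN maximum-likelihood estimator of
    Levina & Bickel: d_i = (k-1) / sum_{j<k} log(r_k / r_j),
    with r_j the distance from point i to its j-th nearest neighbor."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)   # exclude self-distances
    dist.sort(axis=1)
    r = dist[:, :k]                  # distances to the k nearest neighbors
    # inverse mean log-ratio of the k-th to the closer neighbor distances
    return (k - 1) / np.log(r[:, -1:] / r[:, :-1]).sum(axis=1)
```

On a toy mixture of a 1D segment and a 3D cloud, the per-point estimates separate the two regions; thresholding them (or feeding them to a mixture model, as Hidalgo effectively does within a Bayesian framework) yields a dimension-based segmentation.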
Another implication is that a simple topological invariant, the ID, can be a very powerful tool for unsupervised data analysis, lending support to current efforts at characterizing topological properties of the data 41,42 .