Estimating the success of re-identifications in incomplete datasets using generative models

While rich medical, behavioral, and socio-demographic data are key to modern data-driven research, their collection and use raise legitimate privacy concerns. Anonymizing datasets through de-identification and sampling before sharing them has been the main tool used to address those concerns. We here propose a generative copula-based method that can accurately estimate the likelihood that a specific person will be correctly re-identified, even in a heavily incomplete dataset. On 210 populations, our method obtains AUC scores for predicting individual uniqueness ranging from 0.84 to 0.97, with a low false-discovery rate. Using our model, we find that 99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes. Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.

In the last decade, the ability to collect and store personal data has exploded. With two-thirds of the world population having access to the Internet 1, electronic medical records becoming the norm 2, and the rise of the Internet of Things, this is unlikely to stop anytime soon. Collected at scale from financial or medical services, when filling in online surveys or liking pages, this data has an incredible potential for good. It drives scientific advancements in medicine 3, social science 4,5, and AI 6 and promises to revolutionize the way businesses and governments function 7,8.
However, the large-scale collection and use of detailed individual-level data raise legitimate privacy concerns. The recent backlashes against the sharing of NHS [UK National Health Service] medical data with DeepMind 9 and the collection and subsequent sale of Facebook data to Cambridge Analytica 10 are the latest evidence that people are concerned about the confidentiality, privacy, and ethical use of their data. In a recent survey, >72% of U.S. citizens reported being worried about sharing personal information online 11. In the wrong hands, sensitive data can be exploited for blackmailing, mass surveillance, social engineering, or identity theft.
De-identification, the process of anonymizing datasets before sharing them, has been the main paradigm used in research and elsewhere to share data while preserving people's privacy 12-14. Data protection laws worldwide no longer consider anonymous data to be personal data 15,16, allowing it to be freely used, shared, and sold. Academic journals are, e.g., increasingly requiring authors to make anonymous data available to the research community 17. While standards for anonymous data vary, modern data protection laws, such as the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), consider that each and every person in a dataset has to be protected for the dataset to be considered anonymous 18-20. This new higher standard for anonymization is further made clear by the introduction in GDPR of pseudonymous data: data that does not contain obvious identifiers but might be re-identifiable and is therefore within the scope of the law 16,18.
Yet numerous supposedly anonymous datasets have recently been released and re-identified 15,21-31. In 2016, journalists re-identified politicians in an anonymized browsing history dataset of 3 million German citizens, uncovering their medical information and their sexual preferences 23. A few months before, the Australian Department of Health publicly released de-identified medical records for 10% of the population only for researchers to re-identify them 6 weeks later 24. Before that, studies had shown that de-identified hospital discharge data could be re-identified using basic demographic attributes 25 and that diagnostic codes, year of birth, gender, and ethnicity could uniquely identify patients in genomic studies data 26. Finally, researchers were able to uniquely identify individuals in anonymized taxi trajectories in NYC 27, bike sharing trips in London 28, subway data in Riga 29, and mobile phone and credit card datasets 30,31.
Statistical disclosure control researchers and some companies are disputing the validity of these re-identifications: as datasets are always incomplete, journalists and researchers can never be sure they have re-identified the right person even if they found a match 32-35. They argue that this provides strong plausible deniability to participants and reduces the risks, making such de-identified datasets anonymous, including under GDPR 36-39. De-identified datasets can be intrinsically incomplete, e.g., because the dataset only covers patients of one of the hospital networks in a country or because they have been subsampled as part of the de-identification process. For example, the U.S. Census Bureau releases only 1% of their decennial census, and sampling fractions for international censuses range from 0.07% in India to 10% in South American countries 40. Companies are adopting similar approaches with, e.g., the Netflix Prize dataset including <10% of their users 41.
Imagine a health insurance company that decides to run a contest to predict breast cancer and publishes a de-identified dataset of 1000 people, 1% of their 100,000 insureds in California, including people's birth date, gender, ZIP code, and breast cancer diagnosis. John Doe's employer downloads the dataset and finds one (and only one) record matching Doe's information: male living in Berkeley, CA (94720), born on January 2nd, 1968, and diagnosed with breast cancer (self-disclosed by John Doe). This record also contains the details of his recent (failed) stage IV treatments. When contacted, the insurance company argues that matching does not equal re-identification: the record could belong to 1 of the 99,000 other people they insure or, if the employer does not know whether Doe is insured by this company or not, to anyone else of the 39.5M people living in California.
Our paper shows how the likelihood that a specific individual has been correctly re-identified can be estimated with high accuracy even when the anonymized dataset is heavily incomplete. We propose a generative graphical model that can be accurately and efficiently trained on incomplete data. Using socio-demographic, survey, and health datasets, we show that our model exhibits a mean absolute error (MAE) of 0.018 on average in estimating population uniqueness 42 and an MAE of 0.041 in estimating population uniqueness when the model is trained on only a 1% population sample. Once trained, our model allows us to predict whether the re-identification of an individual is correct with an average false-discovery rate of <6.7% for a 95% threshold (ξ̂_x > 0.95) and an error rate 39% lower than the best achievable population-level estimator. With population uniqueness increasing fast with the number of attributes available, our results show that the likelihood of a re-identification being correct, even in a heavily sampled dataset, can be accurately estimated, and is often high. Our results reject the claims that, first, re-identification is not a practical risk and, second, sampling or releasing partial datasets provides plausible deniability. Moving forward, they question whether current de-identification practices satisfy the anonymization standards of modern data protection laws such as GDPR and CCPA and emphasize the need to move, from a legal and regulatory perspective, beyond the de-identification release-and-forget model.

Results
Using Gaussian copulas to model uniqueness. We consider a dataset D, released by an organization, and containing a sample of n_D individuals extracted at random from a population of n individuals, e.g., the US population. Each row x^(i) is an individual record, containing d nominal or ordinal attributes (e.g., demographic variables, survey responses) taking values in a discrete sample space X. We consider the rows x^(i) to be independent and identically distributed, drawn from the probability distribution X with P(X = x), abbreviated p(x).
Our model quantifies, for any individual x, the likelihood ξ_x for this record to be unique in the complete population and therefore always successfully re-identified when matched. From ξ_x, we derive the likelihood κ_x for x to be correctly re-identified when matched, which we call correctness. If Doe's record x^(d) is unique in D, he will always be correctly re-identified (κ_{x^(d)} = 1 and ξ_{x^(d)} = 1). However, if two other people share the same attributes (x^(d) not unique, ξ_{x^(d)} = 0), Doe would still have one chance out of three to have been successfully re-identified (κ_{x^(d)} = 1/3). We model ξ_x as:

ξ_x ≜ P(x unique in (x^(1), …, x^(n)) | ∃i, x^(i) = x)

and κ_x as:

κ_x ≜ P(x correctly matched in (x^(1), …, x^(n)) | ∃i, x^(i) = x)

with proofs in "Methods". We model the joint distribution of X_1, X_2, …, X_d using a latent Gaussian copula 43. Copulas have been used to study a wide range of dependence structures in finance 44, geology 45, and biomedicine 46 and allow us to model the density of X by specifying separately the marginal distributions, easy to infer from limited samples, and the dependency structure. For a large sample space X and a small number n_D of available records, Gaussian copulas provide a good approximation of the density using only d(d − 1)/2 parameters for the dependency structure and no hyperparameter.
The density of a Gaussian copula C_Σ is expressed as:

c_Σ(u) = (det Σ)^{−1/2} exp(−½ φ(u)^T (Σ^{−1} − I) φ(u)),  with φ(u) = (Φ^{−1}(u_1), …, Φ^{−1}(u_d))

with a covariance matrix Σ, u ∈ [0, 1]^d, and Φ the cumulative distribution function (CDF) of a standard univariate normal distribution. We estimate from D the marginal distributions Ψ (marginal parameters) for X_1, …, X_d and the copula distribution Σ (covariance matrix), such that p(x) is modeled by

q(x | Σ, Ψ) = ∫_{F_1(x_1 − 1)}^{F_1(x_1)} … ∫_{F_d(x_d − 1)}^{F_d(x_d)} c_Σ(u) du

with F_j the CDF of the discrete variable X_j. In practice, the copula distribution is a continuous distribution on the unit cube, and p(x) its discrete counterpart on X (see Supplementary Methods). We select, using maximum likelihood estimation, the marginal distributions from categorical, logarithmic, and negative binomial count distributions (see Supplementary Methods). Sampling the complete set of covariance matrices to estimate the association structure of copulas is computationally expensive for large datasets. We rely instead on a fast two-step approximate inference method: we infer separately each pairwise correlation factor Σ_ij and then project the constructed matrix Σ on the set of symmetric positive definite matrices to accurately recover the copula covariance matrix (see "Methods").
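To make the latent-copula construction concrete, the following sketch samples discrete records through a latent Gaussian: latent normals are mapped to the unit cube and then pushed through inverted discrete marginal CDFs. The function name, the toy negative binomial marginals, and the latent correlation value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def sample_discrete_copula(cov, marginal_cdfs, n, seed=None):
    """Sample n discrete records through a latent Gaussian copula.

    cov: d x d latent correlation matrix (unit diagonal).
    marginal_cdfs: one callable F_j per attribute, mapping an integer
        value k to its cumulative probability F_j(k); these stand in
        for the fitted marginal parameters Psi.
    """
    rng = np.random.default_rng(seed)
    d = cov.shape[0]
    # Latent Gaussian draws, mapped to the unit cube: u ~ C_Sigma.
    z = rng.multivariate_normal(np.zeros(d), cov, size=n)
    u = stats.norm.cdf(z)
    # Invert each discrete marginal CDF: x_j = min{k : F_j(k) >= u_j}.
    records = np.empty((n, d), dtype=int)
    support = np.arange(200)  # large enough for these toy marginals
    for j, F in enumerate(marginal_cdfs):
        cdf = np.array([F(k) for k in support])
        records[:, j] = np.searchsorted(cdf, u[:, j])
    return records

# Toy example: two negative binomial marginals, latent correlation 0.6.
F = lambda k: stats.nbinom.cdf(k, 5, 0.5)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
X = sample_discrete_copula(cov, [F, F], 10_000, seed=0)
```

The sampled records inherit the marginals exactly while the latent correlation induces a positive association between the two discrete attributes, which is the separation of marginals and dependency structure the model relies on.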
We collect five corpora from publicly available sources: population census (USA and MERNIS) as well as surveys from the UCI Machine Learning repository (ADULT, MIDUS, HDV). From each corpus, we create populations by selecting subsets of attributes (columns) uniformly. The resulting 210 populations cover a large range of uniqueness values (0-0.96), numbers of attributes, and numbers of records (7108 to 9M individuals). For readability purposes, we report in the main text the numerical results for all five corpora but will show figures only for USA. Figure 1a shows that, when trained on the entire population, our model correctly estimates population uniqueness

Ξ_X = Σ_{x∈X} p(x) (1 − p(x))^{n−1},

i.e., the expected percentage of unique individuals in (x^(1), x^(2), …, x^(n)). The MAE between the empirical uniqueness of our population Ξ_X and the estimated uniqueness Ξ̂_X is 0.028 ± 0.026 [mean ± s.d.] for USA and 0.018 ± 0.019 on average across every corpus (see Table 1). Figure 1a and Supplementary Fig. 1 furthermore show that our model correctly estimates uniqueness across all values of uniqueness, with low within-population s.d. (Supplementary Table 3). Figure 1b shows that our model estimates population uniqueness very well even when the dataset is heavily sampled (see Supplementary Fig. 2 for other populations). For instance, our model achieves an MAE of 0.029 ± 0.015 when the dataset only contains 1% of the USA population and an MAE of 0.041 ± 0.053 on average across every corpus. Table 1 shows that our model reaches a similarly low MAE, usually <0.050, across corpora and sampling fractions.
Likelihood of successful re-identification. Once trained, we can use our model to estimate the likelihood of his employer having correctly re-identified John Doe, our 50-year-old male from Berkeley with breast cancer. More specifically, given an individual record x, we can use the trained model to compute the likelihood

ξ̂_x = (1 − q(x | Σ, Ψ))^{n−1}

for this record x to be unique in the population. Our model takes into account information on both marginal prevalence (e.g., breast cancer prevalence) and global attribute association (e.g., gender and breast cancer). Since the CDF of a Gaussian copula distribution has no closed-form expression, we evaluate q(x | Σ, Ψ) with a numerical integration of the latent continuous joint density inside the hyper-rectangle defined by the d components (x_1, x_2, …, x_d) 47,48. We assume no prior knowledge of the order of outcomes inside marginals for nominal attributes and randomize their order. Figure 2a shows that, when trained on 1% of the USA populations, our model predicts individual uniqueness very well, achieving a mean AUC (area under the receiver operating characteristic (ROC) curve) of 0.89. For each population, to avoid overfitting, we train the model on a single 1% sample, then select 1000 records, independent of the training sample, to test the model. For re-identifications that the model predicts to be always correct (ξ̂_x > 0.95, estimated individual uniqueness >95%), the likelihood of them being incorrect (false-discovery rate) is 5.26% (see bottom-right inset in Fig. 2a). ROC curves for the other populations are available in Supplementary Fig. 3 and have overall a mean AUC of 0.93 and a mean false-discovery rate of 6.67% for ξ̂_x > 0.95 (see Supplementary Table 1). Finally, Fig. 2b shows that our model outperforms even the best theoretically achievable prediction using only population uniqueness, i.e., assigning the score ξ_x^{(pop)} = Ξ_X to every individual (ground-truth population uniqueness, see Supplementary Methods). We use the Brier score (BS) 49 to measure the calibration of our predictions:

BS = (1/N) Σ_{i=1}^{N} (ξ_{x^(i)} − ξ̂_{x^(i)})²

with, in our case, ξ_{x^(i)} the actual uniqueness of the record x^(i) (1 if x^(i) is unique and 0 if not) and ξ̂_{x^(i)} the estimated likelihood. Our model obtains scores on average 39% lower than the best theoretically achievable prediction using only population uniqueness, emphasizing the importance of modeling individuals' characteristics.
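Evaluating q(x | Σ, Ψ) amounts to estimating the probability mass of a multivariate normal inside a hyper-rectangle. The paper uses Genz-Bretz randomized quasi Monte Carlo for this; the sketch below substitutes plain Monte Carlo, which is slower but simple and adequate for illustration. The function names are assumptions for this example.

```python
import numpy as np

def mvn_rectangle_prob(corr, lower, upper, n_samples=200_000, seed=0):
    """Plain Monte Carlo estimate of P(lower < Z <= upper), Z ~ N(0, corr).

    A simple stand-in for the Genz-Bretz randomized quasi Monte Carlo
    integration used in the paper.
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n_samples)
    return np.all((z > lower) & (z <= upper), axis=1).mean()

def uniqueness(q_x, n):
    """Estimated individual uniqueness (1 - q(x))^(n - 1) for a record
    of estimated mass q_x in a population of n individuals."""
    return (1.0 - q_x) ** (n - 1)

# Sanity check: with independent latent dimensions the rectangle
# probability factorizes, P(-1 < Z <= 1)^2 = (Phi(1) - Phi(-1))^2 ~ 0.466.
est = mvn_rectangle_prob(np.eye(2), np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

The rarer the record (small q_x), the closer its estimated uniqueness gets to 1, which is what pushes ξ̂_x above thresholds like 0.95.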
Appropriateness of the de-identification model. Using our model, we revisit the (successful) re-identification of Gov. Weld 25. We train our model on the 5% Public Use Microdata Sample (PUMS) files using ZIP code, date of birth, and gender and validate it using the last national estimate 50. We show that, as a male born on July 31, 1945 and living in Cambridge (02138), the information used by Latanya Sweeney at the time, William Weld was unique with a 58% likelihood (ξ_x = 0.58 and κ_x = 0.77), meaning that Latanya Sweeney's re-identification had a 77% chance of being correct. Figure 3a shows that the same combination of attributes (ZIP code, date of birth, gender, and number of children) would also identify 79.4% of the population in Massachusetts with high confidence (ξ̂_x > 0.80). We finally evaluate the impact of specific attributes on William Weld's uniqueness. We either change the value of one of his baseline attributes (ZIP code, date of birth, or gender) or add one extra attribute, in both cases picking the attribute at random from its distribution (see Supplementary Methods). Figure 3c shows, for instance, that individuals with 3 cars or no car are harder to re-identify than those with 2 cars. Similarly, it shows that it would not take much to re-identify people living in Harwich Port, MA, a city of <2000 inhabitants.
Modern datasets contain a large number of points per individual. For instance, the data broker Experian sold Alteryx access to a de-identified dataset containing 248 attributes per household for 120M Americans 51; Cambridge University researchers shared anonymous Facebook data for 3M users collected through the myPersonality app and containing, among other attributes, users' age, gender, location, status updates, and results on a personality quiz 52. These datasets do not necessarily share all the characteristics of the one studied here. Yet, our analysis of the re-identification of Gov. Weld by Latanya Sweeney shows that few attributes are often enough to render the likelihood of correct re-identification very high. For instance, Fig. 3b shows that the average individual uniqueness increases fast with the number of collected demographic attributes and that 15 demographic attributes would render 99.98% of people in Massachusetts unique.
Our results, first, show that few attributes are often sufficient to re-identify with high confidence individuals in heavily incomplete datasets and, second, reject the claim that sampling or releasing partial datasets, e.g., from one hospital network or a single online service, provides plausible deniability. Third, they show that even if population uniqueness is low (an argument often used to justify that data are sufficiently de-identified to be considered anonymous 53), many individuals are still at risk of being successfully re-identified by an attacker using our model.
As standards for anonymization are being redefined, including by national and regional data protection authorities in the EU, it is essential for them to be robust and to account for new threats like the one we present in this paper. They need to take into account the individual risk of re-identification and the lack of plausible deniability, even if the dataset is incomplete, as well as legally recognize the broad range of provable privacy-enhancing systems and security measures that would allow data to be used while effectively preserving people's privacy 54,55.

Discussion
In this paper, we proposed and validated a statistical model to quantify the likelihood that a re-identification attempt will be successful, even if the disclosed dataset is heavily incomplete.
Beyond the claim that the incompleteness of the dataset provides plausible deniability, our method also challenges claims that a low population uniqueness is sufficient to protect people's privacy 53,56 . Indeed, an attacker can, using our model, correctly re-identify an individual with high likelihood even if the population uniqueness is low (Fig. 3a). While more advanced guarantees like k-anonymity 57 would give every individual in the dataset some protection, they have been shown to be NP-Hard 58 , hard to achieve in modern high-dimensional datasets 59 , and not always sufficient 60 .
While developed to estimate the likelihood of a specific re-identification being successful, our model can also be used to estimate population uniqueness. We show in Supplementary Note 1 that, while not its primary goal, our model performs consistently better than existing methods to estimate population uniqueness on all five corpora (Supplementary Fig. 4, P < 0.05 in 78 cases out of 80 using Wilcoxon's signed-rank test) 61-66 and consistently better than previous attempts to estimate individual uniqueness 67,68. Existing approaches, indeed, exhibit unpredictably large over- and under-estimation errors. Finally, a recent work quantifies the correctness of individual re-identifications in incomplete (10%) hospital data using complete population frequencies 24. Compared to this work, our approach requires no external data, nor does it assume such external data to be complete.
To study the stability and robustness of our estimations, we perform further experiments ( Supplementary Notes 2-8).
First, we analyze the impact of marginal and association parameters on the model error and show how to use exogenous information to lower it. Table 1 and Supplementary Note 7 show that, at very small sampling fractions (below 0.1%), where the error is the largest, the error is mostly determined by the marginals and converges after a few hundred records when the exact marginals are known. The copula covariance parameter estimates exhibit no significant bias, and their error decreases quickly as the sample size increases (Supplementary Note 8).
As our method separates marginal and association structure inference, exogenous information from larger data sources could also be used to estimate marginals with higher accuracy. For instance, count distributions for attributes such as date of birth or ZIP code could be directly estimated from national surveys. We replicate our analysis on the USA corpus using a subsampled dataset to infer the association structure along with the exact counts for marginal distributions. Incorporating exogenous information reduces, e.g., the mean MAE of uniqueness across all corpora by 48.6% (P < 0.01, Mann-Whitney) for a 0.1% sample. Exogenous information becomes less useful as the sampling fraction increases (Supplementary Table 2).

Second, our model assumes that D is either uniformly sampled from the population of interest X or, as several census bureaus do, released with post-stratification weights to match the overall population. We believe this to be a reasonable assumption, as biases in the data would greatly affect its usefulness and any application of the data, including our model. To overcome an existing sampling bias, the model can be (i) further trained on a random sample from the population (e.g., microdata census or survey data) and then applied to a non-uniformly sampled released dataset (e.g., hospital data, not uniformly sampled from the population) or (ii) trained using better, potentially unbiased, estimates for marginals or association structure coming from other sources (see above).
Third, since D is a sample from the population X, only the records that are unique in the sample can be unique in the population. Hence, we further evaluate the performance of our model only on records that are sample-unique and show that this restriction only marginally decreases the AUC (Supplementary Note 5). We nevertheless prefer not to restrict our predictions to sample-unique records because (a) our model needs to perform well on non-sample-unique records for us to be able to estimate correctness and (b) this keeps the method robust if oversampling or sampling with replacement were used.

Methods
Inferring marginal distributions. Marginals are either (i) unknown, in which case they are estimated from the marginals of the population sample X_S (the assumption used in the main text), or (ii) known, with their exact distribution and cumulative density function directly available.
In the first case, we fit marginal counts to categorical (naive plug-in estimator), negative binomial, and logarithmic distributions using maximum log-likelihood. We compare the obtained distributions and select the best fit according to its Bayesian information criterion (BIC):

BIC = k ln(n_D) − 2 ln(L̂)

where L̂ is the maximized value of the likelihood function, n_D the number of individuals in the sample D, and k the number of parameters of the fitted marginal distribution.
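This selection step can be sketched as follows. The example, a simplification rather than the paper's implementation, compares the categorical plug-in estimator with a negative binomial fitted by numerical maximum likelihood; the logarithmic family would be handled the same way. All function names are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def bic(loglik, k, n):
    """Bayesian information criterion: k * ln(n_D) - 2 * ln(L-hat)."""
    return k * np.log(n) - 2.0 * loglik

def select_marginal(values):
    """Select a marginal family by BIC (lower is better)."""
    values = np.asarray(values)
    n = len(values)
    scores = {}

    # Categorical plug-in: empirical frequencies; free parameters are
    # the outcome probabilities minus one (they sum to one).
    outcomes, freq = np.unique(values, return_counts=True)
    probs = freq / n
    scores["categorical"] = bic(np.sum(freq * np.log(probs)), len(outcomes) - 1, n)

    # Negative binomial: two-parameter MLE via bounded optimization.
    nll = lambda th: -stats.nbinom.logpmf(values, th[0], th[1]).sum()
    res = optimize.minimize(nll, x0=[1.0, 0.5],
                            bounds=[(1e-3, 1e3), (1e-3, 1 - 1e-3)])
    scores["nbinom"] = bic(-res.fun, 2, n)

    return min(scores, key=scores.get), scores

# Data actually drawn from a negative binomial: BIC should prefer the
# 2-parameter family over the many-parameter categorical fit.
data = stats.nbinom.rvs(5, 0.5, size=2000, random_state=0)
best, scores = select_marginal(data)
```

The categorical fit always achieves the highest likelihood, but its per-outcome parameters are heavily penalized by the k ln(n_D) term, which is why a well-fitting parametric count distribution wins on BIC.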
Inferring the parameters of the latent copula. Each cell Σ_ij of the covariance matrix Σ of a multivariate copula distribution is the correlation parameter of a pairwise copula distribution. Hence, instead of inferring Σ from the set of all covariance matrices, we separately infer every cell Σ_ij ∈ [0, 1] from the joint sample of D_i and D_j. We first measure the mutual information I(D_i; D_j) between the two attributes and select σ = Σ̂_ij minimizing the Euclidean distance between the empirical mutual information and the mutual information of the inferred joint distribution.
In practice, since the CDF of a Gaussian copula is not tractable, we use a bounded Nelder-Mead minimization algorithm. For a given (σ, (Ψ_i, Ψ_j)), we sample from the distribution q(⋅ | σ, (Ψ_i, Ψ_j)) and generate a discrete bivariate sample Y from which we measure the objective:

Σ̂_ij = argmin_σ | I(Y_i; Y_j) − I(D_i; D_j) |

We then project the obtained Σ̂ matrix on the set of SDP matrices by solving the following optimization problem:

minimize ‖Σ − Σ̂‖_F  subject to  Σ positive definite, Σ_ii = 1

Modeling the association structure using mutual information. We use the pairwise mutual information to measure the strength of association between attributes. For a dataset D, we denote by I_D the mutual information matrix where each cell I(D_i; D_j) is the mutual information between attributes D_i and D_j. When evaluated from small samples, mutual information scores often overestimate the strength of association. We apply a correction for randomness using a permutation model 69:

I_adj(D_i; D_j) = I(D_i; D_j) − E[I(D_i; π(D_j))]

with π a random permutation of D_j. In practice, we estimate the expected mutual information between D_i and D_j with successive permutations of D_j. We found that the adjusted mutual information provides a significant improvement over the naive estimator for small samples and large support sizes |X|.
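A matrix assembled cell by cell from pairwise estimates need not be positive definite. A minimal sketch of the projection step, using eigenvalue clipping followed by diagonal re-normalization, is shown below; this is a common approximation to the nearest-correlation-matrix problem, and the paper's exact optimization may differ.

```python
import numpy as np

def project_to_spd(sigma_hat, eps=1e-6):
    """Project a pairwise-estimated matrix onto positive definite
    correlation matrices via eigenvalue clipping (an approximation to
    the Frobenius-norm projection)."""
    sym = (sigma_hat + sigma_hat.T) / 2.0           # enforce symmetry
    w, v = np.linalg.eigh(sym)
    spd = v @ np.diag(np.clip(w, eps, None)) @ v.T  # clip eigenvalues
    d = np.sqrt(np.diag(spd))
    return spd / np.outer(d, d)                     # restore unit diagonal

# Cell-by-cell inference can yield an indefinite matrix, e.g.:
sigma_hat = np.array([[ 1.0,  0.9, -0.9],
                      [ 0.9,  1.0,  0.9],
                      [-0.9,  0.9,  1.0]])
sigma = project_to_spd(sigma_hat)
```

The re-normalization by the outer product of the square-rooted diagonal is a congruence transform, so it preserves positive definiteness while restoring the unit diagonal a correlation matrix requires.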
Theoretical and empirical population uniqueness. For n individuals x^(1), x^(2), …, x^(n) drawn from X, the uniqueness Ξ_X is the expected percentage of unique individuals. It can be estimated either (i) by computing the mean of individual uniqueness or (ii) by sampling a synthetic population of n individuals from the copula distribution. In the former case, let T_x = [∃!i, x^(i) = x], which equals one if there exists a single individual i such that x^(i) = x and zero otherwise, so that

Ξ_X = E[(1/n) Σ_{x∈X} T_x]

The number of records equal to x follows a binomial distribution B(n, p(x)). Therefore

E[T_x] = n p(x) (1 − p(x))^{n−1}

and

Ξ_X = Σ_{x∈X} p(x) (1 − p(x))^{n−1}

This requires iterating over all combinations of attributes, whose number grows exponentially as the number of attributes increases, and quickly becomes computationally intractable. The second method is therefore often more tractable, and we use it to estimate population uniqueness in the paper.
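The two estimation routes can be compared on a toy distribution where the exact sum is still tractable. The sketch below uses a skewed Zipf-like distribution over a small sample space; both the distribution and the function names are illustrative assumptions.

```python
import numpy as np

def uniqueness_exact(p, n):
    """Expected uniqueness: sum over the sample space of
    p(x) * (1 - p(x))^(n - 1)."""
    return float(np.sum(p * (1.0 - p) ** (n - 1)))

def uniqueness_synthetic(p, n, seed=None):
    """Empirical uniqueness of one synthetic population of n records:
    the fraction of records whose cell is drawn exactly once."""
    rng = np.random.default_rng(seed)
    sample = rng.choice(len(p), size=n, p=p)
    _, counts = np.unique(sample, return_counts=True)
    return (counts == 1).sum() / n

# Toy sample space: 5000 cells with a skewed, Zipf-like distribution.
p = 1.0 / np.arange(1, 5001)
p /= p.sum()
n = 1000
exact = uniqueness_exact(p, n)
synthetic = np.mean([uniqueness_synthetic(p, n, seed=s) for s in range(20)])
```

The empirical uniqueness of a synthetic population is an unbiased estimator of Ξ_X, so averaging a handful of synthetic draws closely tracks the exact sum while never enumerating the sample space.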
For cumulative marginal distributions F_1, F_2, …, F_d and copula correlation matrix Σ, Algorithm 1 (Supplementary Methods) samples n individuals from q(⋅ | Σ, Ψ) using the latent copula distribution. From the n generated records (y^(1), y^(2), …, y^(n)), we compute the empirical uniqueness

Ξ̂_X = (1/n) Σ_{i=1}^{n} [y^(i) unique in (y^(1), …, y^(n))]

Individual likelihood of uniqueness and correctness. The probability distribution q(⋅ | Σ, Ψ) can be computed by integrating over the latent copula density. Note that the marginal distributions X_1 to X_d are discrete, causing the inverses F_1^{−1} to F_d^{−1} to have plateaus. When estimating p(x), we integrate the latent copula distribution inside the hyper-rectangle [x_1 − 1, x_1] × [x_2 − 1, x_2] × … × [x_d − 1, x_d]:

q(x | Σ, Ψ) = ∫_{Φ^{−1}(F_1(x_1 − 1))}^{Φ^{−1}(F_1(x_1))} … ∫_{Φ^{−1}(F_d(x_d − 1))}^{Φ^{−1}(F_d(x_d))} ϕ_Σ(z) dz

with ϕ_Σ the density of a zero-mean multivariate normal (MVN) with correlation matrix Σ. Several methods have been proposed in the literature to estimate MVN rectangle probabilities. Genz and Bretz 47,48 proposed a randomized quasi Monte Carlo method, which we use to estimate the discrete copula density. The likelihood ξ_x for an individual's record x to be unique in a population of n individuals can be derived from p_X(X = x):

ξ_x ≜ P(x unique in (x^(1), …, x^(n)) | ∃i, x^(i) = x)
    = P(x unique in (x^(1), …, x^(n)) | x^(1) = x)
    = P(∀i ∈ [2, n], x^(i) ≠ x)
    = (1 − p(x))^{n−1}

Similarly, the likelihood κ_x for an individual's record x to be correctly matched in a population of n individuals can be derived from p_X(X = x). With T ≜ Σ_{i=1}^{n} [x^(i) = x] − 1, the number of potential false positives in the population, we have:

κ_x ≜ P(x correctly matched in (x^(1), …, x^(n)) | ∃i, x^(i) = x)
    = E[1/(T + 1)]
    = Σ_{k=0}^{n−1} 1/(k + 1) (n−1 choose k) p(x)^k (1 − p(x))^{n−1−k}

Note that, since records are independent, T follows a binomial distribution B(n − 1, p(x)).
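The derivation above can be checked numerically: the binomial sum for κ_x evaluates in closed form to (1 − (1 − p(x))^n)/(n p(x)), and drawing the false-positive count T directly confirms both quantities. The function names are illustrative.

```python
import numpy as np

def xi(p_x, n):
    """Likelihood that a record with population mass p_x is unique
    among n individuals: (1 - p_x)^(n - 1)."""
    return (1.0 - p_x) ** (n - 1)

def kappa(p_x, n):
    """Expected correctness of a match, E[1 / (T + 1)] with
    T ~ B(n - 1, p_x); the sum evaluates to (1 - (1 - p_x)^n) / (n p_x)."""
    return (1.0 - (1.0 - p_x) ** n) / (n * p_x)

# Monte Carlo check: draw the false-positive count T and compare the
# averages of 1/(T+1) and of [T == 0] with the closed forms.
rng = np.random.default_rng(0)
p_x, n = 0.002, 1000
t = rng.binomial(n - 1, p_x, size=200_000)
mc_kappa = np.mean(1.0 / (t + 1))
mc_xi = np.mean(t == 0)
```

As p(x) → 0, κ_x → 1: a vanishingly rare record, once matched, is almost surely the right person, which is the quantitative core of the paper's argument against sampling-based plausible deniability.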
Evaluating the binomial sum and substituting the expression for ξ_x, we obtain:

κ_x = (1 − (1 − p(x))^n) / (n p(x)) = (1 − ξ_x^{n/(n−1)}) / (n (1 − ξ_x^{1/(n−1)}))

Data availability