Introduction

Government regulations, such as the roll-out of the General Data Protection Regulation in the European Union (EU) (https://gdpr-info.eu), the California Consumer Privacy Act (https://oag.ca.gov/privacy/ccpa), and the development of the Data Sharing and Release Bill in Australia (https://www.pmc.gov.au/public-data/data-sharing-and-release-reforms), increasingly prohibit sharing customers' data without explicit consent1.

A strong candidate for ensuring privacy is differential privacy. Intuitively, differential privacy uses randomization to provide plausible deniability for the data of an individual by ensuring that the statistics of privacy-preserving outputs do not change significantly when the data of an individual varies2,3. Companies like Apple (https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf), Google (https://developers.googleblog.com/2019/09/enabling-developers-and-organizations.html), Microsoft (https://www.microsoft.com/en-us/ai/ai-lab-differential-privacy), and LinkedIn (https://engineering.linkedin.com/blog/2019/04/privacy-preserving-analytics-and-reporting-at-linkedin) have rushed to develop projects and to integrate differential privacy into their products. Even the US Census Bureau has decided to implement differential privacy in the 2020 Census4. This decision has created much controversy, with critics pointing to a "ripple effect on the many public and private organizations that conduct surveys based on census data"5.

A variant of differential privacy is local differential privacy, in which all data points are randomized before being used by the aggregator, who attempts to infer the data distribution or some of its properties6,7,8. This is in contrast with differential privacy, in which the data is first processed and then obfuscated by noise. Local differential privacy ensures that the data is kept private from the aggregator by adding noise to the individual data entries before the aggregation process. This is a preferred choice when dealing with untrusted aggregators, e.g., third-party service providers or commercial retailers with financial interests, or when it is desired to release an entire dataset publicly for research in a privacy-preserving manner9. Differential privacy is in spirit close to randomized response methods, originally introduced in10 to reduce potential bias due to non-response and social desirability when asking questions about sensitive topics. Randomized response can be used to conceal individual responses (i.e., protect individual privacy) so that respondents are more inclined to answer truthfully11,12,13,14. In fact, for questions with binary answers, the randomized response method with forced response (i.e., based on flipping a biased coin, a respondent either answers a sensitive question truthfully or gives a forced yes/no answer) is differentially private, and the probability of heads for the coin determines the privacy budget in differential privacy15. However, differential privacy is a more general and flexible methodology that can be used for categorical and non-categorical (i.e., continuous-domain) questions4,16,17. This paper specifically considers the problem of analyzing privacy-preserving data on a continuous domain, which is outside the scope of randomized response methodology.

Locally differentially private data can significantly distort our estimates of the probability density of the data because of the additive noise used to ensure privacy. The density of privacy-preserving data can become flatter than the density function of the original data points because the original density is convolved with the privacy-preserving noise density. The situation is even more troubling for slowly decaying privacy-preserving noises, such as Laplace noise. This distortion persists irrespective of how many samples are gathered. It can result in under- or over-estimation of heavy hitters, a common and worrying criticism of using differential privacy in the US Census18.

Estimating probability distributions/densities under differential privacy is extremely important as it is often the first step toward gaining deeper insights into the data, such as regression analysis. However, most of the existing work on probability distribution estimation from locally differentially private data focuses on categorical data19,20,21,22,23. For categorical data (in contrast with numerical data), the privacy-preserving noise is no longer additive; e.g., the so-called exponential mechanism24 or other boutique differential privacy mechanisms25 are often employed, which are not on offer in the 2020 US Census. The density estimation results for categorical data are also related to de-noising results in randomized response methods12. The work on continuous domains is often done by binning or quantizing the domain. However, finding the optimal number of bins or quantization resolution as a function of the privacy parameters, the data distribution, and the number of data points is a challenging task.

In this paper, we take a different approach to density estimation by using kernels, thus eliminating the need to quantize the domain. Kernel density estimation is a non-parametric way to estimate the probability density function of a random variable from its samples, proposed independently by Parzen26 and Rosenblatt27. This methodology was extended to multivariate variables in28,29. These estimators work on batches of data; however, they can also be made recursive30,31,32. When the data samples are noisy because of measurement noise or, as in this paper, privacy-preserving noise, we need to eliminate the effect of the additive noise on kernel density estimation by deconvolution33. Therefore, we use the framework of deconvoluting kernel density estimators33,34,35,36 to remove the effect of privacy-preserving noise, which is often Laplace noise37. This approach also allows us to adapt results from non-parametric regression with errors-in-variables38,39,40 to develop regression models based on locally differentially private data. This is the first time that deconvoluting kernel density estimators have been used to analyze differentially private data, an important challenge facing social science researchers and demographers following the changes administered in the 2020 Census in the United States4.

Methods

Consider independently distributed data points \(\{\mathbf{x }[i]\}_{i=1}^n\subset {\mathbb {R}}^q\), for some fixed dimension \(q\ge 1\), drawn from a common probability density function \(\phi _{\mathbf{x }}\). Each data point \(\mathbf{x }[i]\in {\mathbb {R}}^q\) belongs to an individual. Under no privacy restrictions, the data points can be provided to the central aggregator to construct an estimate of the density \(\phi _{\mathbf{x }}\), denoted by \({\widehat{\phi }}_{\mathbf{x }}\). We may use a kernel K, which is a bounded even probability density function, to generate the density estimate \({\widehat{\phi }}_{\mathbf{x }}\). A widely recognized example of a kernel is the Gaussian kernel41 in

$$\begin{aligned} K({\mathbf{x }})=\frac{1}{\sqrt{(2\pi )^q}}\exp \left( -\frac{1}{2}\mathbf{x }^\top \mathbf{x }\right) . \end{aligned}$$
(1)

In the big data regime \(n\gg 1\), the choice of the kernel is not crucial to the accuracy of kernel density estimators so long as it meets the conditions in34. In this paper, we keep the kernel general. By using kernel K, we can construct the estimate

$$\begin{aligned} {\widehat{\phi }}^{\text {np}}_{\mathbf{x }}(\mathbf{x })=\frac{1}{nh^q}\sum _{i=1}^{n}K((\mathbf{x }-\mathbf{x }[i])/h), \end{aligned}$$
(2)

where \(h>0\) is the bandwidth. The bandwidth is often selected such that \(h\rightarrow 0\) as \(n\rightarrow \infty\). The optimal rate of decay for the bandwidth has been established for families of distributions33,34.
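To make the construction concrete, the following is a minimal NumPy sketch of the estimator in (2) with the Gaussian kernel (1); the function names are ours, and the bandwidth is taken as given here, with its selection discussed below.

```python
import numpy as np

def gaussian_kernel(u):
    """Gaussian kernel (1): K(u) = (2*pi)^(-q/2) exp(-u'u / 2), applied row-wise."""
    q = u.shape[-1]
    return (2 * np.pi) ** (-q / 2) * np.exp(-0.5 * np.sum(u ** 2, axis=-1))

def kde(x_eval, samples, h):
    """Kernel density estimate (2) at the rows of x_eval from the rows of samples."""
    q = samples.shape[1]
    diffs = (x_eval[:, None, :] - samples[None, :, :]) / h  # shape (m, n, q)
    return gaussian_kernel(diffs).mean(axis=1) / h ** q
```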

Remark 1

The problem formulation in this paper considers real-valued data as opposed to categorical data. This distinguishes the paper from the computer science literature on this topic, which primarily focuses on categorical data19,20,21,22,23. Real-valued data can arise in two situations. First, the posed question can be non-categorical, e.g., credit rating for loans or the interest rates of loans. We will consider this in one of our experimental results. However, aggregated categorical data can also be real-valued. For instance, the 2020 US Census reports the aggregate number of individuals from a race or ethnicity group within different counties. These numbers will be made differentially private as part of the US Census Bureau's privacy initiative4. Therefore, the methods developed in this paper are still relevant to categorical data, albeit in aggregated forms.

As discussed in the introduction, due to privacy restrictions, the exact data points \(\{\mathbf{x }[i]\}_{i=1}^n\) might not be available to generate the density estimate in (2). The aggregator may only have access to noisy versions of these data points:

$$\begin{aligned} \mathbf {z}[i]=\mathbf{x }[i]+\mathbf {n}[i], \end{aligned}$$
(3)

where \(\mathbf {n}[i]\) is a privacy-preserving additive noise. To ensure differential privacy, Laplace additive noise is often used37. For any probability density \(\phi\), we use the notation \(\text {supp}(\phi )\) to denote its support set, i.e., \(\text {supp}(\phi ):=\{\xi :\phi (\xi )>0\}\).

Assumption 1

(Bounded support) \(\text {supp}(\phi _{\mathbf{x }})\subseteq \prod _{i=1}^q [{\underline{x}}_i,{\overline{x}}_i]\) for finite constants \({\underline{x}}_i\le {\overline{x}}_i\).

Assumption 1 is without loss of generality in practice, as social science applications deal with bounded domains with a priori known bounds on the data (e.g., the population of a region).

Definition 1

(Local differential privacy) The reporting mechanism in (3) is \(\epsilon\)-(locally) differentially private for \(\epsilon \ge 0\) if

$$\begin{aligned} {\mathbb {P}}\{\mathbf{x }[i]+\mathbf {n}[i]\in {\mathcal {Z}}\,|\,\mathbf{x }[i]=\mathbf{x }\}\le \exp (\epsilon )\, {\mathbb {P}}\{\mathbf{x }[i]+\mathbf {n}[i]\in {\mathcal {Z}}\,|\,\mathbf{x }[i]=\mathbf{x }'\},\quad \forall \mathbf{x },\mathbf{x }'\in \text {supp}(\phi _{\mathbf{x }}), \end{aligned}$$

for any Borel-measurable set \({\mathcal {Z}}\subseteq {\mathbb {R}}^q\).

Definition 1 ensures that the statistics of the privacy-preserving output \(\mathbf{x }[i]+\mathbf {n}[i]\), determined by its distribution, do not change "significantly" (the magnitude of the change is bounded by the privacy parameter \(\epsilon\)) if the data of individual i changes. As \(\epsilon \rightarrow 0\), the output becomes noisier and a stronger privacy guarantee is achieved. Laplace additive noise is commonly used to ensure differential privacy. This is formalized in the following theorem, which is borrowed from37.

Theorem 1

Let \(\{\mathbf {n}[i]\}_{i=1}^n\) be distributed according to the common multivariate Laplace density:

$$\begin{aligned} \phi _{\mathbf {n}}(\mathbf {n})=\frac{1}{2^q\prod _{j=1}^q b_j}\exp \left( -\sum _{j=1}^q\frac{|n_j|}{b_j}\right) , \end{aligned}$$

where \(n_j\) is the j-th component of \(\mathbf {n}\in {\mathbb {R}}^q\). The reporting mechanism in (3) is \(\epsilon\)-locally differentially private if \(b_j=q({\overline{x}}_j-{\underline{x}}_j)/\epsilon\) for \(j\in \{1,\dots ,q\}\).
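A minimal sketch of this reporting mechanism in Python follows; the function name and array layout are our own conventions.

```python
import numpy as np

def privatize(x, lower, upper, epsilon, rng=None):
    """Report (3) under Theorem 1: add Laplace noise with scale
    b_j = q * (upper_j - lower_j) / epsilon to each coordinate."""
    rng = np.random.default_rng() if rng is None else rng
    n, q = x.shape  # one row per individual
    b = q * (np.asarray(upper) - np.asarray(lower)) / epsilon
    return x + rng.laplace(loc=0.0, scale=b, size=(n, q))
```

Note that each individual can apply this mechanism locally to their own row before sharing anything, which is exactly the local model of Definition 1.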

In what follows, we assume that the reporting policy in Theorem 1 is used to generate locally differentially private data points. Since \(\{\mathbf {n}[i]\}_{i=1}^n\) are distributed according to the common density \(\phi _{\mathbf {n}}\), \(\{\mathbf {z}[i]\}_{i=1}^n\) also follow a common probability density, which is denoted by \(\phi _{\mathbf {z}}\). Note that

$$\begin{aligned} \Phi _{\mathbf {z}}(\mathbf {t})=\Phi _{\mathbf{x }}(\mathbf {t})\Phi _{\mathbf {n}}(\mathbf {t}), \end{aligned}$$
(4)

where \(\Phi _{\mathbf {z}}\), \(\Phi _{\mathbf{x }}\), and \(\Phi _{\mathbf {n}}\) are the characteristic functions of \(\phi _{\mathbf {z}}\), \(\phi _{\mathbf{x }}\), and \(\phi _{\mathbf {n}}\), respectively. By (4), any approximation of \(\Phi _{\mathbf {z}}\) can be transformed into an approximation of \(\Phi _{\mathbf{x }}\) and thus into an estimate of \(\phi _{\mathbf{x }}\). If we use kernel K for estimating the density of \(\mathbf {z}[i]\), \(\forall i\), we get

$$\begin{aligned} {\widehat{\phi }}_{\mathbf {z}}(\mathbf {z})=\frac{1}{nh^q}\sum _{i=1}^{n}K((\mathbf {z}-\mathbf {z}[i])/h). \end{aligned}$$

Here, \({\widehat{\phi }}_{\mathbf {z}}\) is used to denote the approximation of \(\phi _{\mathbf {z}}\). The characteristic function of \({\widehat{\phi }}_{\mathbf {z}}\) is given by

$$\begin{aligned} {\widehat{\Phi }}_{\mathbf {z}}(\mathbf {t}) = \Phi _{K}(h\mathbf {t}){\widehat{\Phi }}(\mathbf {t}), \end{aligned}$$

where \(\Phi _{K}(\mathbf {t})\) is the characteristic function of K and \({\widehat{\Phi }}(\mathbf {t})\) is the empirical characteristic function of measurements \(\{\mathbf {z}[i]\}_{i=1}^n\), defined as

$$\begin{aligned} {\widehat{\Phi }}(\mathbf {t})=\frac{1}{n}\sum _{i=1}^{n}\exp \left( i \mathbf {t}^\top \mathbf {z}[i] \right) . \end{aligned}$$
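Computationally, the empirical characteristic function is a one-line average over the samples; a scalar (q = 1) sketch, with a function name of our own choosing:

```python
import numpy as np

def empirical_cf(t, z):
    """Empirical characteristic function of samples z at frequencies t (q = 1)."""
    return np.exp(1j * np.outer(t, z)).mean(axis=1)
```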

Therefore, the characteristic function of \({\widehat{\phi }}_{\mathbf{x }}\) is given by

$$\begin{aligned} {\widehat{\Phi }}_{\mathbf{x }}(\mathbf {t})=\frac{\Phi _{K}(h\mathbf {t}){\widehat{\Phi }}(\mathbf {t})}{\Phi _{\mathbf {n}}(\mathbf {t})}. \end{aligned}$$

Further, note that

$$\begin{aligned} \Phi _{\mathbf {n}}(\mathbf {t}) =&{\mathbb {E}}\left\{ \exp \left( i \mathbf {t}^\top \mathbf {n}\right) \right\} \\ =&{\mathbb {E}}\left\{ \exp \left( i t_{1} n_{1}\right) \exp \left( i t_{2} n_{2}\right) \cdots \exp \left( i t_{q} n_{q}\right) \right\} \\ =&{\mathbb {E}}\left\{ \exp \left( i t_{1} n_{1}\right) \right\} {\mathbb {E}}\left\{ \exp \left( i t_{2} n_{2}\right) \right\} \cdots {\mathbb {E}}\left\{ \exp \left( i t_{q} n_{q}\right) \right\} \\ =&\prod _{j=1}^q \frac{1}{1+b_j^2 t_j^2}, \end{aligned}$$

where \(t_j\) is the j-th component of \(\mathbf {t}\in {\mathbb {R}}^q\). We get

$$\begin{aligned} {\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })=\frac{1}{nh^q} \sum _{i=1}^n {\widehat{K}}_h((\mathbf{x }-\mathbf {z}[i])/h), \end{aligned}$$
(5)

where

$$\begin{aligned} {\widehat{K}}_h(\mathbf{x })&=\frac{1}{(2\pi )^q}\int _{{\mathbb {R}}^q}\exp (-i \mathbf {t}^\top \mathbf{x })\frac{\Phi _{K}(\mathbf {t})}{\Phi _{\mathbf {n}}(\mathbf {t}/h)}\text {d}\mathbf {t}\\&=\frac{1}{(2\pi )^q}\int _{{\mathbb {R}}^q}\exp (-i \mathbf {t}^\top \mathbf{x })\prod _{j=1}^q \left( 1+\frac{b_j^2}{h^2} t_j^2\right) \Phi _{K}(\mathbf {t})\text {d}\mathbf {t}\\&=\prod _{j=1}^q \left( 1-\frac{b_j^2}{h^2} \frac{\partial ^2}{\partial x_j^2}\right) K(\mathbf{x }), \end{aligned}$$

where \(x_j\) is the j-th component of \(\mathbf{x }\in {\mathbb {R}}^q\).
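As an illustration, here is a minimal scalar (q = 1) sketch of the resulting estimator (5), assuming the Gaussian kernel (1), for which the second derivative \(K''(u)=(u^2-1)K(u)\) is available in closed form; b is the Laplace scale from Theorem 1, and the function names are ours.

```python
import numpy as np

def gaussian_kernel_1d(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def adjusted_gaussian_kernel(u, b, h):
    """Adjusted kernel (1 - (b/h)^2 d^2/du^2) K(u), using K''(u) = (u^2 - 1) K(u)."""
    return (1.0 - (b / h) ** 2 * (u ** 2 - 1.0)) * gaussian_kernel_1d(u)

def deconvoluting_kde(x_eval, z, b, h):
    """Estimate (5): density estimate built from privatized samples z."""
    u = (x_eval[:, None] - z[None, :]) / h  # shape (m, n)
    return adjusted_gaussian_kernel(u, b, h).mean(axis=1) / h
```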

Under appropriate conditions on the kernel K34, we can see that

$$\begin{aligned} {\mathbb {E}}\{{\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })\,|\,\{\mathbf{x }[i]\}_{i=1}^n\}={\widehat{\phi }}^{\text {np}}_{\mathbf{x }}(\mathbf{x }). \end{aligned}$$
(6)

Therefore, \({\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })\) in (5) is effectively an unbiased estimate of \({\widehat{\phi }}^{\text {np}}_{\mathbf{x }}(\mathbf{x })\) in (2); on average, we cancel the effect of the differential privacy noise. Selecting the bandwidth (or smoothing parameter) h is an important aspect of kernel estimation. In26, it was shown that \(\lim _{n\rightarrow \infty } h=0\) guarantees asymptotic unbiasedness (i.e., point-wise convergence of the kernel density estimate to the true density function) while \(\lim _{n\rightarrow \infty } nh=+\infty\) is required to ensure asymptotic consistency. Many studies have focused on finding the optimal bandwidth29,42,43,44. Numerical methods based on cross-validation for setting the bandwidth are proposed in45,46. Often, it is recommended to compare the results from different bandwidth selection algorithms to avoid misleading conclusions caused by over-smoothing or under-smoothing of the density estimate47. These results have also been extended to noisy measurements with deconvoluting kernel density estimation34,48. If h scales according to \(n^{-1/5}\), \({\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })\) is a consistent estimator of \(\phi _{\mathbf{x }}\) as \(n\rightarrow \infty\), i.e., \({\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })\) converges to \(\phi _{\mathbf{x }}(\mathbf{x })\) point-wise for all \(\mathbf{x }\in \text {supp}(\phi _{\mathbf{x }})\)34. Note that by selecting \(h={\mathcal {O}}(n^{-1/5})\), we get

$$\begin{aligned} \int {\mathbb {E}}\{{\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })-\phi _{\mathbf{x }}(\mathbf{x })\}^2\,\text {d}\mathbf{x }={\mathcal {O}}(n^{-4/5}), \end{aligned}$$

where \({\mathcal {O}}\) denotes the Bachmann–Landau notation. Therefore, \(\int {\mathbb {E}}\{{\widehat{\phi }}_{\mathbf{x }}(\mathbf{x })-\phi _{\mathbf{x }}(\mathbf{x })\}^2\,\text {d}\mathbf{x }\rightarrow 0\) as \(n\rightarrow \infty\). This means that the effect of the differential-privacy noise is effectively negligible on large datasets.

Figure 1

The kernel regression model (dashed black) and the true regression curve (solid black) for the Gaussian mixture data made differentially private with \(\epsilon =10\).

Figure 2

The kernel regression model (dashed black) and true regression curve (solid black) for chi-squared data made differentially private with \(\epsilon =10\).

For regression analysis, we consider independently distributed data points \(\{(\mathbf{x }[i],\mathbf{y }[i])\}_{i=1}^n\) drawn from a common probability density function. We would like to understand the relationship between the inputs \(\mathbf{x }[i]\) and the outputs \(\mathbf{y }[i]\) for all i. As before, we assume that we can only access noisy privacy-preserving inputs \(\{\mathbf {z}[i]\}_{i=1}^n\) instead of the accurate inputs \(\{\mathbf{x }[i]\}_{i=1}^n\). Following the argument above, we can also construct the Nadaraya–Watson kernel regression (see, e.g.,49) as

$$\begin{aligned} {\widehat{m}}(\mathbf{x }):=\frac{\sum _{i=1}^n {\widehat{K}}_h((\mathbf{x }-\mathbf {z}[i])/h)\mathbf{y }[i]}{\sum _{i=1}^n {\widehat{K}}_h((\mathbf{x }-\mathbf {z}[i])/h)}. \end{aligned}$$
(7)

Under appropriate conditions on the kernel K and the bandwidth h40, \({\widehat{m}}(\mathbf{x })\) converges to \({\mathbb {E}}\{\mathbf{y }|\mathbf{x }\}\) almost surely. In practice, the bandwidth can be computed by minimizing the cross-validation cost, i.e., the error of estimating each \(\mathbf{y }[j]\) using the Nadaraya–Watson kernel regression constructed from \(\{(\mathbf {z}[i],\mathbf{y }[i])\}_{i\in \{1,\dots ,n\}\setminus \{j\}}\), averaged over all choices of j. The optimal bandwidth is given by

$$\begin{aligned} \mathop {\hbox {arg min}}\limits _{h} \sum _{j=1}^n \ell (\mathbf{y }[j],{\widehat{m}}_{-j}(\mathbf{x }[j])), \end{aligned}$$
(8)

where \(\ell\) is a fitness function, e.g., \(\ell (\mathbf{y },\mathbf{y }')=\Vert \mathbf{y }-\mathbf{y }'\Vert _2^2\), and \({\widehat{m}}_{-j}(\mathbf{x })\) is the Nadaraya–Watson kernel regression constructed from \(\{(\mathbf {z}[i],\mathbf{y }[i])\}_{i\in \{1,\dots ,n\}\setminus \{j\}}\):

$$\begin{aligned} {\widehat{m}}_{-j}(\mathbf{x }):=\frac{\sum _{i\in \{1,\dots ,n\}\setminus \{j\}} {\widehat{K}}_h((\mathbf{x }-\mathbf {z}[i])/h)\mathbf{y }[i]}{\sum _{i\in \{1,\dots ,n\}\setminus \{j\}} {\widehat{K}}_h((\mathbf{x }-\mathbf {z}[i])/h)}. \end{aligned}$$

This approach has been widely used for setting the bandwidth in non-parametric regression38.
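A sketch of (7) and the leave-one-out search (8) with squared-error loss, reusing the scalar adjusted Gaussian kernel sketched above. Two caveats we flag as our own: since the adjusted kernel is not nonnegative, the denominator in (7) can vanish and may need guarding in practice; and since the aggregator only sees z[j], we evaluate the left-out regression at z[j] rather than x[j].

```python
import numpy as np

def nw_regression(x_eval, z, y, b, h):
    """Nadaraya-Watson estimate (7) with the adjusted kernel."""
    w = adjusted_gaussian_kernel((x_eval[:, None] - z[None, :]) / h, b, h)
    return w @ y / w.sum(axis=1)

def cv_bandwidth(z, y, b, grid):
    """Pick h from `grid` by leave-one-out cross-validation as in (8)."""
    best_h, best_cost = None, np.inf
    for h in grid:
        w = adjusted_gaussian_kernel((z[:, None] - z[None, :]) / h, b, h)
        np.fill_diagonal(w, 0.0)          # exclude sample j when predicting y[j]
        pred = w @ y / w.sum(axis=1)
        cost = np.sum((y - pred) ** 2)
        if cost < best_cost:
            best_h, best_cost = h, cost
    return best_h
```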

Results

In this section, we demonstrate the performance of the developed methods on multiple datasets. We first use a synthetic dataset for illustration purposes and then utilize real financial and demographic datasets. Throughout this section, we use the following original kernel:

$$\begin{aligned} K(x)=\frac{1}{\pi } \frac{1}{1+x^2}. \end{aligned}$$

Note that \(\mathbf{x }=x\) is a scalar as we are only considering a single input; this kernel is the density of the Cauchy distribution. We get the adjusted kernel in

$$\begin{aligned} {\widehat{K}}_h(x)&=\left( 1-\frac{b^2}{h^2} \frac{\text {d}^2}{\text {d} x^2}\right) K(x)\\&=\frac{1}{\pi }\left[ \frac{1}{1+x^2} - \frac{b^2}{h^2}\frac{8x^2}{(x^2 + 1)^3} +\frac{b^2}{h^2} \frac{2}{(x^2 + 1)^2}\right] . \end{aligned}$$
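For reference, a direct transcription of this closed form in Python (the function name is ours):

```python
import numpy as np

def cauchy_adjusted_kernel(x, b, h):
    """Adjusted Cauchy kernel: the closed form derived above."""
    r = (b / h) ** 2
    s = 1.0 + x ** 2
    return (1.0 / s - 8.0 * r * x ** 2 / s ** 3 + 2.0 * r / s ** 2) / np.pi
```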

We use the cross-validation procedure in (8) to find the bandwidth in the following simulation and experiments.

Figure 3

Estimates of the probability density function of the credit score using the original noiseless data with the original kernel \({\widehat{\phi }}^{\text {np}}_{x}(x)=\frac{1}{nh}\sum _{i=1}^{n}K((x-x[i])/h)\) (solid gray), \(\epsilon\)-locally differentially private data with the original kernel \({\widetilde{\phi }}_{x}(x)=\frac{1}{nh}\sum _{i=1}^{n}K((x-z[i])/h)\) (dashed black), and \(\epsilon\)-locally differentially private data with the adjusted kernel \({\widehat{\phi }}_{x}(x)=\frac{1}{nh}\sum _{i=1}^{n}{\widehat{K}}_h((x-z[i])/h)\) (solid black) for \(\epsilon =5.0\) and bandwidth \(h=0.1\).

Figure 4

The kernel regression model (solid black) and the linear regression model (dashed black) based on the original data with bandwidth \(h=0.02\) superimposed on the original noiseless data (gray dots). The mean squared error for the kernel regression model is 4.42 and the mean squared error for the linear regression model is 4.61.

Figure 5

The kernel regression model (solid black) and the linear regression model (dashed black) based on the \(\epsilon\)-locally differentially private data with \(\epsilon =5\) and bandwidth \(h=0.20\) superimposed on the original noiseless data (gray dots). The mean squared error for the kernel regression model is 5.70 and the mean squared error for the linear regression model is 7.11.

Figure 6

The mean squared error for the kernel regression model and the linear regression model based on the \(\epsilon\)-locally differentially private (\(\epsilon\)-LDP in the legend) data versus the privacy budget \(\epsilon\). The horizontal lines show the mean squared error for the kernel regression model and the linear regression model based on the original noiseless data.

Synthetic dataset

We use a simulation study to illustrate the performance of the Nadaraya–Watson kernel regression in (7) on privacy-preserving data. We consider multiple scenarios. We use two distributions for \(\{x[i]\}_{i=1}^n\). The first is a Gaussian mixture \((1/3){\mathcal {N}}(-1,1)+(2/3){\mathcal {N}}(3/2,1/2)\) truncated to \([-3,3]\). The second is a chi-squared distribution with three degrees of freedom, \(\chi ^2(3)\), truncated to [0, 3]. The truncation in both cases ensures that Assumption 1 is satisfied. We also consider two regression curves: \(g_1:x\mapsto x^2(1-x^2)/5\) and \(g_2:x\mapsto 4.5\sin (x)-5\). Finally, we assume Gaussian measurement noise \({\mathcal {N}}(0,1)\), i.e., \(y[i]=g_j(x[i])+v[i]\) for \(j=1,2\), where v[i] is a zero-mean Gaussian random variable with unit variance.
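A sketch of how such a scenario can be generated, under one assumption we make explicit: we read the second mixture component \({\mathcal {N}}(3/2,1/2)\) as having variance 1/2 (the notation could equally denote the standard deviation).

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated(draw, low, high, n):
    """Rejection-sample n points from `draw`, truncated to [low, high]."""
    out = np.empty(0)
    while out.size < n:
        batch = draw(n)
        out = np.concatenate([out, batch[(batch >= low) & (batch <= high)]])
    return out[:n]

def mixture(k):  # (1/3) N(-1, 1) + (2/3) N(3/2, 1/2)
    pick = rng.random(k) < 1.0 / 3.0
    return np.where(pick, rng.normal(-1.0, 1.0, k), rng.normal(1.5, np.sqrt(0.5), k))

n, epsilon = 1000, 10.0
x = truncated(mixture, -3.0, 3.0, n)
y = x ** 2 * (1 - x ** 2) / 5 + rng.normal(0.0, 1.0, n)  # curve g1 plus N(0, 1) noise
z = x + rng.laplace(0.0, (3.0 - (-3.0)) / epsilon, n)    # Theorem 1 with q = 1
```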

Figure 1 shows the kernel regression model (dashed black) and the true regression curve (solid black) for the Gaussian mixture data made differentially private with \(\epsilon =10\). Here, we consider two dataset sizes of \(n=1000\) and n = 10,000 and the two regression curves \(g_1\) and \(g_2\) introduced earlier. The Nadaraya–Watson kernel regression using differentially private data provides fairly accurate predictions, and the accuracy improves as the dataset gets larger. Figure 2 illustrates the kernel regression model (dashed black) and the true regression curve (solid black) for the chi-squared data made differentially private with \(\epsilon =10\). This shows that the fitness of the Nadaraya–Watson kernel regression is somewhat independent of the underlying distribution of the data.

Lending Club dataset

The dataset contains information on 2,260,701 accepted and 27,648,741 rejected loan applications on Lending Club, a peer-to-peer lending platform, from 2007 to 2018. The dataset is available for download on Kaggle50. For the accepted loans, the dataset contains the annual interest rate of each loan, loan attributes, such as total loan size, and borrower information, such as the number of credit lines, credit rating, state of residence, and age. Here, we only focus on data from 2010 (to avoid possible yearly fluctuations of the interest rate), which contains 12,537 accepted loans. We also focus on the relationship between the FICO (https://www.fico.com/en/products/fico-score) credit score (low range) and the interest rate of the loan. This is an interesting relationship pointing to the value of credit rating reports51. The FICO credit score is very sensitive (as it relates to the financial health of an individual) and possesses significant commercial value (as it is sold by a for-profit corporation). Thus, we assume that it is made available publicly in a privacy-preserving manner using (3). Note that the original data in50 provides this information in an anonymized manner without privacy-preserving noise.

Figure 3 illustrates estimates of the probability density function of the credit score \(\phi _x(x)\) using the original noiseless data with the original kernel \({\widehat{\phi }}^{\text {np}}_{x}(x)\) in (2) (solid gray), \(\epsilon\)-locally differentially private data with the original kernel \({\widetilde{\phi }}_{x}(x)=\frac{1}{nh}\sum _{i=1}^{n}K((x-z[i])/h)\) (dashed black), and \(\epsilon\)-locally differentially private data with the adjusted kernel in (5) (solid black) for \(\epsilon =5.0\) and bandwidth \(h=0.1\). Note that \({\widetilde{\phi }}_{x}(x)=\frac{1}{nh}\sum _{i=1}^{n}K((x-z[i])/h)\) is a naive density estimate as it does not attempt to cancel the effect of the privacy-preserving noise. Clearly, using the original kernel on the noisy privacy-preserving data flattens the density estimate \({\widetilde{\phi }}_{x}(x)\); this is because we are in fact observing a convolution of the original probability density with the probability density of the Laplace noise. Upon using the adjusted kernel \({\widehat{K}}_h(x)\), the estimate of the probability density using the noisy privacy-preserving data matches the estimate of the probability density with the original data (with additional fluctuations due to the presence of noise). This provides a numerical validation of (6).
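The naive and deconvoluting estimates compared in Figure 3 can be reproduced along the following lines, reusing the hypothetical cauchy_adjusted_kernel helper sketched earlier, with z the privatized scores and b the Laplace scale from Theorem 1:

```python
import numpy as np

def cauchy_kernel(x):
    return 1.0 / (np.pi * (1.0 + x ** 2))

def naive_and_adjusted(x_grid, z, b, h):
    """Density estimates from privatized samples z: naive (flattened) vs. adjusted."""
    u = (x_grid[:, None] - z[None, :]) / h
    naive = cauchy_kernel(u).mean(axis=1) / h
    adjusted = cauchy_adjusted_kernel(u, b, h).mean(axis=1) / h
    return naive, adjusted
```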

Now, let us focus on the regression analysis. Figure 4 shows the kernel regression model (solid black) and the linear regression model (dashed black) based on the original data with bandwidth \(h=0.02\), superimposed on the original noiseless data (gray dots). The mean squared error for the kernel regression model is 4.42 and the mean squared error for the linear regression model is 4.61. The kernel regression model is thus slightly superior (by roughly 4%) to the linear regression model; however, the gap is narrow. Figure 5 illustrates the kernel regression model (solid black) and the linear regression model (dashed black) based on the \(\epsilon\)-locally differentially private data with \(\epsilon =5\) and bandwidth \(h=0.20\), superimposed on the original noiseless data (gray dots). The mean squared error for the kernel regression model is 5.70 and the mean squared error for the linear regression model is 7.11. In this case, the kernel regression model is considerably (roughly 20%) better. In Fig. 6, we observe the mean squared errors for the kernel regression model and the linear regression model based on the \(\epsilon\)-locally differentially private data versus the privacy budget \(\epsilon\). Clearly, the kernel regression model is consistently superior to the linear regression model. As \(\epsilon\) grows larger, the performances of the kernel and linear regression models based on the \(\epsilon\)-locally differentially private data converge to the performances of the corresponding models based on the original noiseless data. This intuitively makes sense: by increasing the privacy budget, the magnitude of the privacy-preserving noise becomes smaller.

Figure 7

The kernel regression model (solid black) and the logistic regression model (dashed black) based on the original data with bandwidth \(h=0.17\). The logarithm of the likelihood for the kernel regression model is \(-0.49\) and the logarithm of the likelihood for the logistic regression model is \(-0.50\).

Figure 8

The kernel regression model (solid black) and the logistic regression model (dashed black) based on the \(\epsilon\)-locally differentially private data with \(\epsilon =5.0\) and bandwidth \(h=2.98\). The logarithm of the likelihood for the kernel regression model is \(-0.51\) and the logarithm of the likelihood for the logistic regression model is \(-0.53\).

Figure 9

The logarithm of the likelihood for the kernel regression model and the logistic regression model based on the \(\epsilon\)-locally differentially private data versus the privacy budget \(\epsilon\). The horizontal lines show the logarithm of the likelihood for the kernel regression model and the logistic regression model based on the original noiseless data.

Adult dataset

The dataset contains information on 32,561 individuals from the 1994 Census database. The dataset is available for download on UCI52. The dataset contains attributes, such as education, age, work type, gender, and race, and a binary report of whether the individual earns more than $50,000 per year. We focus on the relationship between education (in years) and an individual's ability to earn more than $50,000 per year. Education is assumed to be made public in a privacy-preserving form following (3). This information can be considered private as it can be used in conjunction with other information to de-anonymize the dataset.

Figure 7 shows the kernel regression model (solid black) and the logistic regression model (dashed black) based on the original data with bandwidth \(h=0.17\). The logarithm of the likelihood for the kernel regression model is \(-0.49\) and the logarithm of the likelihood for the logistic regression model is \(-0.50\). The kernel regression model is thus slightly superior (by roughly 2%) to the logistic regression model; however, the gap is almost negligible. Figure 8 illustrates the kernel regression model (solid black) and the logistic regression model (dashed black) based on the \(\epsilon\)-locally differentially private data with \(\epsilon =5.0\) and bandwidth \(h=2.98\). The logarithm of the likelihood for the kernel regression model is \(-0.51\) and the logarithm of the likelihood for the logistic regression model is \(-0.53\). In this case, the kernel regression model is slightly (roughly 4%) better. In Fig. 9, we observe the logarithm of the likelihood for the kernel regression model and the logistic regression model based on the \(\epsilon\)-locally differentially private data versus the privacy budget \(\epsilon\). The horizontal lines show the logarithm of the likelihood for the kernel regression model and the logistic regression model based on the original noiseless data. Again, the kernel regression model is consistently superior to the logistic regression model. However, the effect is not as pronounced as for the linear regression in the previous subsection. Finally, again, as \(\epsilon\) grows larger, the performances of the kernel and logistic regression models based on the \(\epsilon\)-locally differentially private data converge to the performances of the corresponding models based on the original noiseless data.

Discussion

The density of privacy-preserving data is always flatter than the density function of the original data points due to convolution with the privacy-preserving noise density. This is certainly a cause for concern given the addition of differential-privacy noise in the 2020 US Census. This unfortunate effect is present irrespective of how many samples we gather because we observe the convolution of the original probability density with the probability density of the privacy-preserving noise. It can result in misestimation of heavy hitters, which often play an important role in the social sciences due to their ties to minority groups. We developed density estimation methods using smoothing kernels and used the framework of deconvoluting kernel density estimators to remove the effect of privacy-preserving noise. This results in superior performance both for estimating probability density functions and for kernel regression in comparison with popular regression techniques, such as linear and logistic regression models. In the case of estimating the probability density function, we could entirely remove the flattening effect of the privacy-preserving noise at the cost of additional fluctuations. These fluctuations, however, can be reduced by gathering more data.