Selective knowledge sharing for privacy-preserving federated distillation without a good teacher

While federated learning (FL) is promising for efficient collaborative learning without revealing local data, it remains vulnerable to white-box privacy attacks, suffers from high communication overhead, and struggles to adapt to heterogeneous models. Federated distillation (FD) emerges as an alternative paradigm to tackle these challenges by transferring knowledge among clients instead of model parameters. Nevertheless, variations in local data distributions and the absence of a well-trained teacher model lead to misleading and ambiguous knowledge sharing that significantly degrades model performance. To address these issues, this paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD, to identify accurate and precise knowledge from local and ensemble predictions, respectively. Empirical studies, backed by theoretical insights, demonstrate that our approach enhances the generalization capabilities of the FD framework and consistently outperforms baseline methods. We anticipate our study to enable a privacy-preserving, communication-efficient, and heterogeneity-adaptive federated training framework.


Introduction
The rapid development of deep learning (DL) 1 has paved the way for its widespread adoption across various application domains, including medical image processing 2 , intelligent healthcare 3 , and robotics 4 . The key ingredients of DL are massive datasets and powerful computing platforms, which makes centralized training a typical approach for building high-performing models. However, regulations such as the General Data Protection Regulation (GDPR) 5 and the California Consumer Privacy Act (CCPA) 6 have been implemented to limit data collection and storage, since the data may contain sensitive personal information. For instance, collecting chest X-ray images from multiple hospitals to curate a large dataset for pneumonia detection is practically difficult, since it would violate patient privacy laws and regulations such as the Health Insurance Portability and Accountability Act (HIPAA) 7 . While these restrictions and regulations are essential for privacy protection, they hinder the use of centralized training in practice. Meanwhile, many domains face the "data islands" problem. For instance, individual hospitals may possess only a limited number of data samples for rare diseases, which makes it difficult to develop accurate and robust models.
Federated learning (FL) [8][9][10] is a promising technique that can effectively utilize distributed data while preserving privacy. In particular, multiple data-owning clients collaboratively train a DL model by updating models locally on private data and aggregating them globally. These two steps iterate until convergence, while the private data are kept local. Despite these benefits, FL faces several challenges. First, the periodic model exchange in FL entails communication overhead that scales with the model size. This prohibits the use of large models in FL 11,12 , which severely limits the model accuracy. Besides, standard federated training methods force local models to adopt the same architecture, which cannot adapt well to heterogeneous clients equipped with different computation resources 13,14 . Furthermore, although the raw data are not directly shared among clients, the model parameters may encode private information about the datasets. This makes the shared models vulnerable to white-box privacy attacks 15,16 .
Resolving the above difficulties of FL requires us to rethink the fundamental problem of privacy-preserving collaborative learning: how to effectively share knowledge among distributed clients while preserving privacy. Knowledge distillation (KD) 17,18 is an effective technique for transferring knowledge from well-trained teacher models to student models by leveraging proxy samples. The inference results of the teacher models on the proxy samples represent privileged knowledge, which supervises the training of the student models. In this way, high-quality student models can be obtained without accessing the training data of the teacher models. Applying KD to collaborative learning gives rise to a paradigm called federated distillation (FD) [19][20][21][22] , where the ensemble of clients' local predictions on the proxy samples serves as privileged knowledge. By sharing the hard labels (i.e., predicted results) 23 of proxy samples instead of model parameters, the FD framework largely reduces the communication overhead, supports heterogeneous local models, and is free from white-box privacy attacks. Nevertheless, without a well-trained teacher, FD relies on the ensemble of local predictors for distillation, making it sensitive to the training state of local models, which may suffer from poor quality and underfitting. Besides, the non-independent and identically distributed (non-IID) data 24,25 across clients exacerbate this issue, since the local models cannot output accurate predictions on proxy samples that lie outside their local distributions 26 . To mitigate the negative impact of misleading knowledge, an alternative is to incorporate soft labels (i.e., normalized logits) 17 during knowledge distillation to enhance the generalization performance. Soft labels provide rich information about the relative similarity between different classes, enabling student models to generalize effectively to unseen examples. However, a previous study 20 pointed out that ensemble predictions may be ambiguous and exhibit high entropy when the local predictions of clients are inconsistent. Sharing soft labels can exacerbate this problem, as they are less certain than hard labels. This misleadingness and ambiguity harm knowledge distillation and local training. For instance, in our experiments on image classification tasks, existing FD methods barely outperform random guessing under highly non-IID distributions.
This work aims to tackle the challenge of knowledge sharing in FD without a good teacher, and our key idea is to filter out misleading and ambiguous knowledge. We propose a selective knowledge sharing mechanism for federated distillation (named Selective-FD) to identify accurate and precise knowledge during the federated training process. As shown in Fig. 1 and Fig. 2, this mechanism comprises client-side selectors and a server-side selector. At each client, we construct a selector to identify out-of-distribution (OOD) samples 27,28 from the proxy dataset based on density-ratio estimation 29 . This method detects outliers by quantifying the difference in density between the inlier distribution and the outlier distribution. If the density ratio of a particular sample is close to zero, the client considers it an outlier and refrains from sharing the predicted result to prevent misleading other clients. On the server side, we average the uploaded predictions from the clients and filter out the ensemble predictions with high entropy. The remaining ensemble predictions are then returned to the clients for knowledge distillation. We provide theoretical insights into the impact of our selective knowledge sharing mechanism on training convergence, and we evaluate Selective-FD on a pneumonia detection task and three benchmark image classification tasks. Extensive experimental results show that Selective-FD excels at handling non-IID data and significantly improves test accuracy compared to the baselines. Remarkably, Selective-FD with hard labels achieves performance close to that with soft labels. Furthermore, our proposed Selective-FD significantly reduces the communication cost during federated training compared with the conventional FedAvg approach. We anticipate that the proposed method will serve as a valuable tool for training large models in the federated setting for future applications.

Performance Evaluation
The experiments are conducted on a pneumonia detection task 30 and three benchmark image classification tasks [31][32][33] . The pneumonia detection task aims to detect pneumonia from chest X-ray images. It is based on the COVIDx dataset 30 , which contains three classes: normal, non-COVID-19 infection, and COVID-19 viral infection. We consider four clients (e.g., hospitals) in the federated distillation framework. To simulate non-IID data across clients, each client only has one or two classes of chest X-ray images, and each class contains 1,000 images. Besides, we construct a proxy dataset for knowledge transfer, which contains all three classes with 500 unlabeled samples per class. The non-IID data distribution is visualized in Fig. 3. The test dataset consists of 100 normal images and 100 images of pneumonia infection, where half of the pneumonia infections are non-COVID-19 and the other half are COVID-19. Moreover, we evaluate the proposed method on three benchmark image datasets: MNIST 31 , Fashion MNIST 32 , and CIFAR-10 33 . Each dataset contains ten classes, with 50,000 or more training samples in total. To transfer knowledge in the federated distillation setting, we randomly select 10% to 20% of the training data from each class as unlabeled proxy samples. In the experiments, ten clients participate in the distillation process, and we evaluate the model's performance under two non-IID distribution settings: a strong non-IID setting and a weak non-IID setting, where each client has one unique class and two classes, respectively. Several representative federated distillation methods are compared, including FedMD 13 , FedED 19 , DS-FL 20 , FKD 34 , and PLS 26 . Among them, FedMD, FedED, and DS-FL rely on a proxy dataset to transfer knowledge, while FKD and PLS are data-free KD approaches that share class-wise average predictions among users. Besides, we report the performance of independent learning (abbreviated as IndepLearn), where each client trains the model only on its local dataset. The comparison includes the results of sharing hard labels (predicted labels) and soft labels (normalized logits). To evaluate the training performance of these methods, we report the classification accuracy on the test set. More details of the datasets and model structures are deferred to Supplementary Information.

Fig. 4: Test accuracy of different methods on the pneumonia detection task. The error bar represents the mean ± standard deviation of five repetitions. The results show that the proposed Selective-FD method achieves the best performance, and the accuracy gain is more significant when using hard labels to transfer knowledge. Some baselines perform even worse than the independent learning scheme, demonstrating that knowledge sharing among clients can mislead and negatively influence local training.
The average test accuracy of clients on the pneumonia detection task is depicted in Fig. 4. It is observed that the proposed Selective-FD method outperforms all the baseline methods by a substantial margin, and the performance gain is more significant when using hard labels to transfer knowledge. For example, sharing knowledge by hard labels and soft labels results in improvements of 19.42% and 4.00%, respectively, over the best-performing baseline. This is because the proposed knowledge selection mechanism can adapt to the heterogeneous characteristics of local data, making it effective at selecting useful knowledge among clients. In contrast, some baselines perform even worse than the independent learning scheme. This finding highlights the potential negative influence of knowledge sharing among clients, which can mislead the local training. Notably, while hard label sharing provides a stronger privacy guarantee 35 , soft label sharing provides additional performance gains. This is because soft labels provide more information about the relationships between classes than hard labels, alleviating errors from misleading knowledge.

Table 1: Test accuracy of different methods. Each experiment is repeated five times. The results in bold indicate the best performance, while the underlined results represent the second-best performance. In the non-IID settings, our Selective-FD method performs better than the baseline methods, and the accuracy gain is more significant when using hard labels in knowledge distillation than soft labels. In the IID scenario, all the methods achieve satisfactory accuracy.
We also evaluate the performance of different federated distillation methods on the benchmark image datasets. As shown in Table 1, all the methods achieve satisfactory accuracy in the IID setting. In contrast, the proposed Selective-FD method consistently outperforms the baselines when local datasets are heterogeneous. The improvement of our method becomes more significant as the severity of the non-IID problem increases. Specifically, it is observed that the FKD and PLS methods degrade to IndepLearn in the strong non-IID setting. This is because each client only possesses one unique class, and the local predictions are always that unique class. Such misleading knowledge leads to significant performance degradation.

Effectiveness of Density-Ratio Estimation
We verify the effectiveness of the density-ratio estimation in detecting incorrect predictions of local models. Specifically, as an ablation study, we replace the density-ratio based selector in Selective-FD with confidence-based methods 27 and energy-based models (EBMs) 28 , respectively. The confidence score refers to the maximum probability of the logits in the classification task, which reflects the reliability of the prediction. The EBMs distinguish in-distribution samples from out-of-distribution samples by learning an underlying probability distribution over the training samples. The predictions of proxy samples detected as out-of-distribution samples are ignored.
Our experiments are conducted on the benchmark datasets under the strong non-IID setting, where hard labels are shared among clients for distillation. We use the area under the receiver operating characteristic curve (AUROC) to measure the capability of the selectors to detect incorrect predictions. In addition, we evaluate the performance of the various selectors by reporting test accuracy. As shown on the left-hand side of Fig. 5, the AUROC score of our method is much higher than those of the baselines. In particular, the confidence-based method and the energy-based model perform only marginally better than random guessing (AUROC = 0.5) on the MNIST and Fashion MNIST datasets. This is because neural networks tend to be over-confident 36 in their predictions, and thus the confidence score may not reflect an accurate probability of correctness. Besides, the energy-based model fails to detect the incorrect predictions because EBMs often suffer from overfitting when no out-of-distribution samples are available 37 . The right-hand side of Fig. 5 shows the test accuracy after federated distillation. As the density-ratio estimation can effectively identify unknown classes among the proxy samples, the ensemble knowledge is less misleading and our Selective-FD approach achieves a significant performance gain.

Fig. 5: The results show that both the AUROC score and the accuracy of our Selective-FD method are much higher than the baselines, indicating its effectiveness in identifying unknown classes from the proxy dataset. This results in a remarkable performance gain in federated distillation.

Ablation Study on Thresholds of Selectors
In Selective-FD, the client-side selectors and the server-side selector are designed to remove misleading and ambiguous knowledge, respectively. Two important parameters are the thresholds τ_client and τ_server. Specifically, each client reserves a portion of its local data as a validation set. The threshold of the client-side selector is defined as the τ_client quantile of the estimated density ratios over this set. When the density ratio of a sample falls below this threshold, the corresponding prediction is considered misleading. Besides, the server-side selector filters out ambiguous knowledge according to the confidence score of the ensemble prediction. Specifically, when a confidence score is smaller than 1 − τ_server/2, the corresponding proxy sample is not used for knowledge distillation.
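The two filtering rules above can be sketched in a few lines of Python. The numeric values below are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def client_threshold(val_ratios, tau_client):
    """Threshold = the tau_client quantile of the estimated density
    ratios on a held-out validation split of the client's local data."""
    return np.quantile(val_ratios, tau_client)

def client_shares(proxy_ratio, threshold):
    """A client shares its prediction only when the estimated density
    ratio of the proxy sample is at or above the threshold."""
    return proxy_ratio >= threshold

def server_keeps(ensemble_probs, tau_server):
    """The server keeps an ensemble prediction only when its confidence
    (max probability) is at least 1 - tau_server / 2."""
    return np.max(ensemble_probs) >= 1.0 - tau_server / 2.0

# Illustrative values.
val_ratios = np.array([0.2, 0.5, 0.8, 1.1, 1.4, 2.0, 2.3, 3.0])
thr = client_threshold(val_ratios, tau_client=0.25)

print(thr)                          # the 0.25 quantile of the validation ratios
print(client_shares(0.1, thr))      # low ratio -> treated as OOD, not shared
print(server_keeps(np.array([0.9, 0.05, 0.05]), tau_server=0.5))  # confident -> kept
print(server_keeps(np.array([0.4, 0.35, 0.25]), tau_server=0.5))  # ambiguous -> dropped
```

Both rules are per-sample, so they compose naturally: a proxy sample contributes to distillation only if it passes the client-side check at the sharing clients and the server-side check after aggregation.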
We investigate the effect of the thresholds τ_client and τ_server on performance. We conduct experiments on the three benchmark image datasets in the strong non-IID setting, where the predictions shared among clients are soft labels. Fig. 6 displays the results, with p_proxy representing the percentage of proxy samples used to transfer knowledge during the whole training process. The threshold τ_client is set to 0.25 when evaluating the effect of τ_server, and the threshold τ_server is set to 2 when evaluating the effect of τ_client. We observe that both τ_client and τ_server have a considerable impact on performance. When τ_client is set too high or τ_server is set too low, a significant number of proxy samples are filtered out by the selectors, which decreases the test accuracy. On the other hand, setting τ_client too low may leave the client-side selectors unable to remove inaccurate local predictions, negatively impacting knowledge distillation. When τ_server is set too high, the server-side selector fails to identify the ensemble predictions with high entropy, which results in a drop in accuracy. These empirical results align with Theorem 2 and the analysis presented in Remark 1.

Comparison with FedAvg
Compared with the standard FL setting, such as FedAvg 8 , Selective-FD shares knowledge instead of model parameters during the training process. This alternative approach offers several advantages. First, Selective-FD naturally adapts to heterogeneous models, eliminating the need for local models to share the same architecture. Second, Selective-FD largely reduces the communication overhead in comparison to FedAvg, since the size of the shared knowledge is significantly smaller than that of the model. Third, Selective-FD provides a stronger privacy guarantee than FedAvg: the local models, which might encode information from the private datasets 22 , remain inaccessible to other clients and the server. To demonstrate the advantages in communication efficiency and privacy protection offered by the proposed method, we provide quantitative comparisons between Selective-FD and FedAvg below.

Communication Overhead
Fig. 6: We denote the percentage of proxy samples selected for knowledge distillation as p_proxy. When τ_client is too large or τ_server is too small, the selectors filter out most of the proxy samples, leading to a small batch size and increased training variance. Conversely, when τ_client is too small, the local outputs may contain an excessive number of incorrect predictions, reducing the effectiveness of knowledge distillation. Besides, when τ_server is too large, the ensemble predictions may exhibit high entropy, indicating ambiguous knowledge that could degrade local model training. These empirical results align with the analysis in Remark 1.

We compare the communication overhead of Selective-FD with FedAvg on the benchmark datasets in the strong non-IID setting. In each communication round of FedAvg, the clients train their models locally and upload them to the server for aggregation, which requires all local models to have the same architecture. For the MNIST classification task, the local models consist of two convolutional layers and two fully-connected layers. In the case of Fashion MNIST, we initialize each model as a multilayer perceptron (MLP) with two hidden layers, each containing 1,024 neurons. Furthermore, we employ ResNet18 38 as the local model to classify CIFAR-10 images. Our Selective-FD method requires the clients to obtain the proxy samples before training, which incurs an additional one-time communication overhead. However, Selective-FD significantly reduces the amount of data uploaded and downloaded per communication round compared with FedAvg, because the size of the predictions used for knowledge distillation is much smaller than that of the model updates used for aggregation. Fig. 7 plots the test accuracy and communication overhead with respect to the communication round.
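To make the gap concrete, the following back-of-envelope sketch compares per-round, per-client payloads. The parameter count, proxy-set size, and label encodings are illustrative assumptions, not the paper's measured figures.

```python
# Back-of-envelope per-round, per-client upload payloads (illustrative).
BYTES_PER_FLOAT = 4
resnet18_params = 11_700_000          # ~11.7M parameters (approximate)
fedavg_upload = resnet18_params * BYTES_PER_FLOAT   # full model update per round

num_proxy = 5_000                     # proxy samples queried in one round (assumed)
num_classes = 10
soft_label_upload = num_proxy * num_classes * BYTES_PER_FLOAT  # one logit vector each
hard_label_upload = num_proxy * 1     # one class index per sample (1 byte suffices for C = 10)

print(f"FedAvg     : {fedavg_upload / 1e6:.1f} MB")
print(f"Soft labels: {soft_label_upload / 1e6:.2f} MB")
print(f"Hard labels: {hard_label_upload / 1e6:.3f} MB")
```

Under these assumptions, prediction sharing is two to four orders of magnitude cheaper per round than exchanging a ResNet18-sized model, which is the source of the communication savings reported in Fig. 7.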
It is observed that our Selective-FD method achieves accuracy comparable to, or slightly lower than, that of FedAvg, while considerably improving communication efficiency during federated training. Further improving the performance of Selective-FD is a promising direction for future research. Additional information regarding the experiments can be found in Supplementary Information.

Privacy Leakage
We compare the privacy leakage of Selective-FD and FedAvg under model inversion attacks 18,39 . The objective of the attacker is to reconstruct the private training data based on the information shared by clients. In FedAvg, a semi-honest server can perform white-box attacks 39 based on the model updates. In contrast, our Selective-FD method is free from such attacks since the clients' models cannot be accessed by the server. However, our method remains vulnerable to black-box attacks, where the attacker can infer local samples by querying the clients' models 18 . To assess the privacy risk quantitatively, we employ GMI 39 and IDEAL 18 to attack FedAvg and Selective-FD, respectively. This experiment is conducted on MNIST, and the results are shown in Fig. 8. It is observed that the quality of the images reconstructed from FedAvg is better than that of the images reconstructed from Selective-FD. This demonstrates that sharing model parameters leads to higher privacy leakage than sharing knowledge. Besides, compared with sharing soft labels in Selective-FD, the images reconstructed from hard labels have a lower peak signal-to-noise ratio (PSNR). This indicates that sharing hard labels in Selective-FD exposes less private information than sharing soft labels, consistent with Hinton's analysis 40 that soft labels provide more information per training case. In federated training where the local data are privacy-sensitive, such as large genomic datasets 41 , it becomes crucial to share hard labels rather than soft labels, as a protective measure against potential membership inference attacks 42 . Finally, although knowledge sharing methods provide stronger privacy guarantees than FedAvg, malicious attackers can still infer the label distributions of clients from the shared information. Developing a privacy-enhancing federated training scheme is a promising but challenging direction for future research.
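For reference, the PSNR metric used above to score reconstruction quality can be computed as follows. The images here are synthetic stand-ins, not outputs of the GMI or IDEAL attacks.

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer
    reconstruction (i.e., more privacy leakage in this context)."""
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((28, 28))   # stand-in for a private 28x28 grayscale image
close = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0, 1)  # mild corruption
far = np.clip(img + 0.50 * rng.standard_normal(img.shape), 0, 1)    # heavy corruption

# A reconstruction closer to the private image yields a higher PSNR.
print(psnr(img, close) > psnr(img, far))
```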

Discussion
This work introduces the Selective-FD method for federated distillation, which includes a selective knowledge sharing mechanism that identifies accurate and precise knowledge from clients for effective knowledge transfer. In particular, it comprises client-side selectors and a server-side selector. The client-side selectors use density-ratio estimators to identify out-of-distribution samples from the proxy dataset. If a sample exhibits a density ratio that approaches zero, it is identified as an outlier; to prevent the propagation of potentially misleading information to other clients, this sample is not used for knowledge distillation. Besides, as the local models could be underfitting at the beginning of the training process, the local predictions could be inconsistent among clients. To prevent the negative influence of ambiguous knowledge, the server-side selector filters out the ensemble predictions with high entropy.
Extensive experiments are conducted on both the pneumonia detection and benchmark image classification tasks to investigate the impact of hard labels and soft labels on the performance of knowledge distillation. The results demonstrate that Selective-FD significantly improves test accuracy compared to FD baselines, and the accuracy gain is more prominent for hard label sharing than for soft label sharing. In comparison with the standard FL framework that shares model parameters among clients, sharing knowledge in FD may not achieve the same performance level, but this line of work is communication-efficient, privacy-preserving, and heterogeneity-adaptive. When performing federated training on large language models (LLMs) 43,44 , the FD framework is especially useful, since the clients do not need to upload a huge number of model parameters and it is difficult for attackers to infer the private texts. We envision that our proposed method can serve as a template framework for various applications and inspire future research to improve the effectiveness and responsibility of intelligent systems.
However, we must acknowledge that our proposed method is not without limitations. First, the federated distillation method relies on proxy samples to transfer knowledge. If the proxy dataset is biased towards certain classes, the client models may become biased toward these classes, leading to poor performance and generalization. Second, the complexity of the client-side selector in our Selective-FD method increases quadratically with the number of samples and the size of the sample space, which may limit its practical applicability. Some studies in open-set learning have shown that other low-complexity outlier detection methods, while lacking theoretical guarantees, can achieve comparable performance. This finding motivates us to explore more efficient selectors in future research. Third, while our proposed method keeps models local, it cannot guarantee perfect privacy. Attackers may infer information about the private dataset from the shared knowledge. To further strengthen privacy guarantees in FD, we could employ defense methods such as differential privacy mechanisms and secure aggregation protocols. However, these defense methods might lead to performance degradation or increased training complexity. Future research should develop more efficient and effective defensive techniques that protect clients against attacks while maintaining optimal performance.

Ethical and Societal Impact
It is essential to note that the proposed federated distillation framework, like other AI algorithms, is a dual-use technology. Although the objective is to facilitate the training of deep learning models by effectively sharing knowledge among clients, this technology can also be misused. First, federated distillation could be used by malicious entities to train algorithms that identify individuals illegally and monitor people's speech patterns without their consent 45 . This poses threats to individual freedom and civil liberties. Second, it is assumed that all the participants in federated distillation, including the clients and the server, are trusted and behave honestly. However, if a group of clients or the server is malicious, they could manipulate the predictions and inject poisoned knowledge during the training process, heavily degrading convergence 46,47 . Third, while federated distillation is free from white-box attacks, it is still potentially vulnerable to black-box attacks such as membership inference attacks 35 . This is because the local predictions are shared among clients, which may allow an attacker to infer whether a particular sample is part of a client's training set. To mitigate this vulnerability, additional privacy-preserving techniques such as differential privacy or secure aggregation can be employed. Furthermore, there are concerns about the collection of proxy samples to transfer knowledge between clients, which could lead to breaches of privacy and security, as well as ethical concerns regarding informed consent and data ownership. In summary, while the proposed federated distillation framework has great potential for facilitating collaborative learning among multiple clients, it is important to be aware of the potential risks and to take measures to ensure that the technology is used ethically and responsibly.

Methods
In this section, we provide an in-depth introduction to our proposed method. We first define the problem studied in this paper, then introduce the details of our method, and finally provide theoretical insights into the impact of the selective knowledge sharing mechanism on generalization.

Notations and Problem Definition
Consider a federated setting for multi-class classification where each input instance has one ground-truth label from C categories. We index the clients as 1, 2, . . . , K, where the k-th client stores a private dataset D_k consisting of m_k data points sampled from the distribution 𝒟_k. In addition, there is a proxy dataset D_proxy containing m_proxy samples from the distribution 𝒟_proxy to transfer knowledge among clients. We denote X as the input space and Y ∈ V(Δ^{C−1}) as the label space, where V(Δ^{C−1}) is the vertex set of the (C−1)-simplex Δ^{C−1} and each vertex corresponds to a one-hot vector. We assume that all the local datasets and the test dataset share the same labeling function ĥ* : X → V(Δ^{C−1}). Client k learns a local predictor h_k : X → Δ^{C−1} to approximate ĥ* in the training phase and outputs a one-hot vector via ĥ_k : X → V(Δ^{C−1}) in the test phase. The hypothesis spaces of h_k and ĥ_k are H_k and Ĥ_k, respectively, which are determined by the parameter spaces of the personalized models. During knowledge sharing, we define the labeling function of the proxy samples as h*_proxy : X → Δ^{C−1}, which is determined by both the local predictors and the knowledge selection mechanism. To facilitate the theoretical analysis, we denote ĥ*_proxy : X → V(Δ^{C−1}) as the one-hot output of h*_proxy. In the experiments, we use cross entropy as the loss function, while for mathematical tractability, the ℓ1 norm is adopted to measure the difference between two predictors, denoted by L_D(ĥ, ĥ′) := E_{x∼D} ‖ĥ(x) − ĥ′(x)‖_1, where D is a data distribution. Specifically, L_{D_test}(ĥ_k, ĥ*), L_{D_k}(ĥ_k, ĥ*), and L_{D_proxy}(ĥ_k, h*_proxy) represent the loss over the test distribution, the local samples, and the proxy samples, respectively. Besides, we define the training loss over both the private and proxy samples as L_{D_k ∪ D_proxy}(ĥ_k) := α L_{D_k}(ĥ_k, ĥ*) + (1 − α) L_{D_proxy}(ĥ_k, h*_proxy), where α ∈ [0, 1] is a weighting coefficient. The notation Pr_D[·] represents the probability of events over the distribution D.

Selective Knowledge Sharing in Federated Distillation
Federated distillation aims to collaboratively train models among clients by sharing knowledge, instead of sharing models as in FL. The training process is summarized in Algorithm 1 and involves two phases. First, the clients train their local models independently on the local data. Then, the clients share knowledge among themselves based on a proxy dataset and fine-tune the local models on both the local and proxy samples. Fig. 1 provides an overview of our Selective-FD framework, which includes a selective knowledge sharing mechanism. The following sections delve into the details of this mechanism.

Client-Side Selector
Federated distillation faces the challenge of misleading ensemble predictions caused by the lack of a well-trained teacher. Local models may overfit the local datasets, leading to poor generalization on proxy samples outside the local distribution, especially with non-IID data across clients. To mitigate this issue, our method develops client-side selectors to identify proxy samples that are out-of-distribution (OOD) with respect to the local data. This is done through density-ratio estimation 48,49 , which computes the ratio of two probability densities. Assuming the input data space X is compact, we define U as a uniform distribution with probability density function u(x) over X. Besides, we denote the probability density function at client k as p_k(x). Our objective is to estimate the density ratio w*_k(x) = p_k(x)/u(x) based on the observed samples. Specifically, in-distribution data x from the local distribution with p_k(x) > 0 yield w*_k(x) > 0, while OOD samples x with p_k(x) = 0 yield w*_k(x) = 0. Therefore, the clients can build density-ratio estimators to identify the OOD samples from the proxy dataset.
Considering the property of statistical convergence, we use a kernelized variant of unconstrained least-squares importance fitting (KuLSIF) 29 to estimate the density ratio. The estimation model in KuLSIF is a reproducing kernel Hilbert space (RKHS) 50 W_k endowed with a Gaussian kernel function. We sample n_k and n_u data points from D_k and U, respectively, and denote the resulting sample sets as S_k and S_u. Defining the norm on W_k as ∥·∥_{W_k}, the density-ratio estimator w_k is obtained as the optimal solution of a regularized least-squares objective, whose analytic-form solution is available in Theorem 1 of the KuLSIF method 29 . The following theorem reveals the convergence rate of the KuLSIF estimator. Theorem 1 (Convergence rate of KuLSIF 29 ). Consider the RKHS W_k to be the Hilbert space with a Gaussian kernel that contains the density ratio w*_k. Given δ ∈ (0, 1) and setting the regularization parameter β_{n_k,n_u} appropriately, the estimation error vanishes at a rate in n_k and n_u given by the KuLSIF analysis, where O_p denotes the order in probability.
The proof is available in Theorem 2 of the KuLSIF method 29 . This theorem demonstrates that as the number of samples increases and the regularization parameter β_{n_k,n_u} approaches zero, the estimator w_k converges to the density ratio w*_k. During federated distillation, we use a threshold τ_client > 0 to distinguish between in-distribution and out-of-distribution samples. If the estimated value w_k(x) of a proxy sample x is below τ_client, it is considered an out-of-distribution sample at client k. In such cases, the client-side selector does not upload the corresponding local prediction, as it could be misleading.

Server-Side Selector
After receiving the local predictions from clients, the server averages them to produce the ensemble predictions. For each proxy sample x, the ensemble prediction is denoted as h*_proxy(x) ∈ Δ^{C−1}, and the corresponding one-hot prediction is represented as ĥ*_proxy(x) ∈ V(Δ^{C−1}). It is important to note that if the local predictions for a specific proxy sample differ greatly among clients, the resulting ensemble prediction h*_proxy(x) could be ambiguous with high entropy. This ambiguity could negatively impact knowledge distillation. To address this issue, we develop a server-side selector that measures sample ambiguity by calculating the ℓ1 distance between h*_proxy(x) and ĥ*_proxy(x). The closer this distance is to zero, the less ambiguous the prediction is. In the proposed Selective-FD framework, the server-side selector applies a threshold τ_server > 0 to filter out ambiguous knowledge: ensemble predictions with an ℓ1 distance greater than τ_server are not sent back to the clients for distillation.
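The server-side rule above can be sketched in a few lines of NumPy. The toy predictions and the τ_server value below are hypothetical; in the framework, the inputs would be the (client-side-filtered) local predictions received from the clients.

```python
import numpy as np

def server_select(local_preds, tau_server):
    # Average the local predictions, then flag ensemble predictions whose
    # l1 distance to their one-hot version exceeds tau_server as ambiguous.
    ensemble = local_preds.mean(axis=0)                      # h*_proxy(x)
    one_hot = np.eye(ensemble.shape[1])[ensemble.argmax(1)]  # one-hot prediction
    ambiguity = np.abs(ensemble - one_hot).sum(axis=1)       # l1 distance
    return ensemble, ambiguity <= tau_server                 # True = send back

# Two clients, two proxy samples, C = 2 classes: the clients agree on the
# first sample but disagree on the second, making its ensemble ambiguous.
preds = np.array([[[0.9, 0.1], [0.9, 0.1]],
                  [[0.8, 0.2], [0.2, 0.8]]])
ensemble, keep = server_select(preds, tau_server=0.6)
# keep -> [True, False]: the second ensemble [0.55, 0.45] has l1 distance 0.9.
```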

Theoretical Insights
In this section, we establish an upper bound on the loss of federated distillation and discuss the effectiveness of the proposed selective knowledge sharing mechanism in the context of domain adaptation 51 . To ensure clarity, we begin by providing relevant definitions before delving into the analysis. Definition 1 (Minimum combined loss). The ideal predictor in the hypothesis space Ĥ_k achieves the minimum combined loss λ over the test and training sets. Two representative losses λ_k, λ_{k,proxy} are defined as λ_k = min_{ĥ_k ∈ Ĥ_k} [ L_{D_test}(ĥ_k, ĥ*) + L_{D_k}(ĥ_k, ĥ*) ] and λ_{k,proxy} = min_{ĥ_k ∈ Ĥ_k} [ L_{D_test}(ĥ_k, ĥ*) + L_{D_proxy}(ĥ_k, ĥ*) ]. The ideal predictor serves as an indicator of the learning ability of the local model. If the ideal predictor performs poorly, it is unlikely that the locally optimized model, which minimizes the training loss, will generalize well on the test set. On the other hand, when the labeling function ĥ* belongs to the hypothesis space Ĥ_k, the minimum losses are λ_k = λ_{k,proxy} = 0. The next two definitions introduce a metric for measuring the distance between distributions. Definition 2 (Hypothesis space G_k). For a hypothesis space Ĥ_k, we define G_k as the set of hypotheses g_k : X → {0, 1} with g_k(x) = (1/2)∥ĥ_k(x) − ĥ′_k(x)∥_1 for ĥ_k, ĥ′_k ∈ Ĥ_k. Definition 3 (G_k-distance 52 ). Given two distributions D and D′ over X, let G_k = {g_k : X → {0, 1}} be a hypothesis space. The G_k-distance between D and D′ is d_{G_k}(D, D′) = 2 sup_{g_k ∈ G_k} | Pr_D[g_k(x) = 1] − Pr_{D′}[g_k(x) = 1] |. With the above preparations, we derive an upper bound on the test loss of the predictor ĥ_k at client k following the process of federated distillation. Theorem 2.
With probability at least 1 − δ, δ ∈ (0, 1), the test loss satisfies the bound given in (3), where the probabilities p^(1)_proxy = Pr_{D_proxy}[ĥ*(x) ≠ ĥ*_proxy(x)] and p^(2)_proxy = Pr_{D_proxy}[ĥ*(x) = ĥ*_proxy(x)] satisfy p^(1)_proxy + p^(2)_proxy = 1, and D^(1)_proxy and D^(2)_proxy represent the distributions of proxy samples satisfying ĥ*(x) ≠ ĥ*_proxy(x) and ĥ*(x) = ĥ*_proxy(x), respectively. In (3), the first term on the right-hand side represents the empirical risk over the local and proxy samples, and the second term is a numerical constraint, which indicates that having more proxy samples, whose number is denoted as m_proxy, is beneficial to the generalization performance. The last two terms in (4) account for the misleading and ambiguous knowledge in distillation. From Theorem 2, two key implications can be drawn. First, under severe data heterogeneity, the resulting high distribution divergences d_{G_k}(D_k, D_test) and d_{G_k}(D_proxy, D_test) undermine the generalization performance. When the proxy distribution is closer to the test set than the local data, i.e., d_{G_k}(D_k, D_test) ≥ d_{G_k}(D_proxy, D_test), federated distillation can improve performance compared with independent training. Second, if the labeling function (i.e., the ensemble prediction) h*_proxy of the proxy samples differs substantially from the labeling function ĥ* of the test samples, the error introduced by the misleading and ambiguous knowledge can be significant, leading to negative knowledge transfer. Our proposed selective knowledge sharing mechanism aims to make the ensemble predictions of unlabeled proxy samples closer to the ground truths. In particular, a large threshold τ_client mitigates the effect of incorrect predictions, while a small threshold τ_server implies that less ambiguous knowledge is used for distillation.
Remark 1. Care must be taken when setting the thresholds τ_client and τ_server, as a τ_client that is too large or a τ_server that is too small could filter out too many proxy samples and result in a small m_proxy, which would enlarge the second term on the right-hand side of (3). Additionally, the threshold τ_server effectively balances the losses caused by the misleading and the ambiguous knowledge, as indicated by the inequalities 2 − τ_server ≤ ∥ĥ*(x) − h*_proxy(x)∥_1 and ∥ĥ*_proxy(x) − h*_proxy(x)∥_1 ≤ τ_server. This property aligns with the empirical results presented in Fig. 6.
Remark 2. The proposed selective knowledge sharing mechanism and its associated thresholds τ_client, τ_server might alter the distributions D_proxy and D̂_proxy, thus influencing the empirical risk, the minimum combined loss, the G_k-distance, and the probabilities p^(1)_proxy, p^(2)_proxy in Theorem 2. A more comprehensive and rigorous analysis of these effects is left to future work.

More Discussion on Data Heterogeneity
In the main text, we evaluated the performance of Selective-FD in the strong non-IID and weak non-IID settings, where each client only has one or two classes of samples. In this part, we consider more general non-IID settings by simulating the non-IID distribution based on the Dirichlet distribution Dir_K(β). The parameter K represents the number of clients, and β > 0 is a concentration parameter; when β is set to a smaller value, the data distribution is more non-IID. Specifically, we sample a vector p_n ∼ Dir_K(β) and allocate a p_{n,k} proportion of the instances of class n to client k. We compare the proposed Selective-FD method with two representative baselines, FedMD and IndepLearn, across various non-IID settings of the MNIST dataset. As shown in Fig. 9, the accuracy of IndepLearn degrades as the parameter β decreases, because the data distribution becomes increasingly non-IID. The FedMD method achieves good performance when β > 10^−1 but experiences performance degradation when β < 10^−1. In contrast, our Selective-FD method consistently maintains satisfactory performance, even when the parameter β decreases to 10^−3. Specifically, the accuracy gain of our method is more significant when using hard labels for knowledge distillation.
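The Dirichlet split described above can be sketched as follows. The toy label array, client count, and β value are illustrative; any labeled dataset can be partitioned the same way.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    # For each class n, draw p_n ~ Dir_K(beta) and allocate a p_{n,k}
    # proportion of the instances of class n to client k; a smaller
    # beta yields a more non-IID split.
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for n in np.unique(labels):
        idx = np.flatnonzero(labels == n)
        rng.shuffle(idx)
        p = rng.dirichlet(beta * np.ones(num_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx

# Toy labels: 10 classes with 100 samples each, split across 5 clients.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=5, beta=0.1)
```

Every sample is assigned to exactly one client, and with β = 0.1 most clients end up holding only a few dominant classes.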

Proof of Theorem 2
Prior to proving Theorem 2, we first present two lemmas.
Lemma 1. Given hypothesis spaces Ĥ := {ĥ : X → V(Δ^{C−1})} and G := {g : X → {0, 1}} with g(x) = (1/2)∥ĥ(x) − ĥ′(x)∥_1 for ĥ, ĥ′ ∈ Ĥ, for any two distributions D and D′ over X we have |L_D(ĥ, ĥ′) − L_{D′}(ĥ, ĥ′)| ≤ sup_{ĥ, ĥ′ ∈ Ĥ} |L_D(ĥ, ĥ′) − L_{D′}(ĥ, ĥ′)| = d_G(D, D′).
Lemma 2. For any δ ∈ (0, 1), with probability at least 1 − δ over the choice of the samples, the empirical loss concentrates around L_{D_k ∪ D_proxy}(ĥ).
Proof. The loss L_{D_k ∪ D_proxy}(ĥ) can be written as a weighted average over the private and proxy samples. Let X^(k)_1, . . ., X^(k)_{m_k} and X^(proxy)_1, . . ., X^(proxy)_{m_proxy} be independent random variables that take on the values of ∥ĥ(x) − ĥ*(x)∥_1 for the private and the proxy samples, respectively. We define X̄ as the mean value of these variables, which represents the empirical loss in (9). By linearity of expectation, E[X̄] equals the loss L_{D_k ∪ D_proxy}(ĥ). Since each variable is bounded in [0, 2], Hoeffding's inequality yields, for any ε > 0, an exponential bound on Pr[|X̄ − E[X̄]| ≥ ε]. Setting the right-hand side of (10) to δ, we obtain the stated inequality.
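Since each ℓ1 term lies in [0, 2] (the distance between two one-hot vectors is either 0 or 2), Hoeffding's inequality applies with range 2, giving Pr[|X̄ − E[X̄]| ≥ ε] ≤ 2 exp(−m ε² / 2) and hence ε = sqrt(2 ln(2/δ)/m) for a failure probability of δ. The Monte-Carlo check below, with an arbitrary loss distribution on [0, 2], illustrates this concentration; the sample size m, the number of trials, and δ are hypothetical choices.

```python
import numpy as np

m, delta = 500, 0.05
# Hoeffding for i.i.d. variables in [0, 2]:
# Pr[|mean - E[mean]| >= eps] <= 2 exp(-m * eps**2 / 2),
# so the threshold below is violated with probability at most delta.
eps = np.sqrt(2 * np.log(2 / delta) / m)

# Monte-Carlo check with losses drawn uniformly from [0, 2] (mean 1.0).
rng = np.random.default_rng(1)
means = rng.uniform(0.0, 2.0, size=(2000, m)).mean(axis=1)
violation_rate = np.mean(np.abs(means - 1.0) > eps)
```

In practice the empirical violation rate is far below δ, since Hoeffding's bound is conservative for well-behaved loss distributions.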
Denote ĥ*_k = argmin_{ĥ_k ∈ Ĥ_k} [ L_{D_test}(ĥ_k, ĥ*) + L_{D_k}(ĥ_k, ĥ*) ] and ĥ*_{k,proxy} = argmin_{ĥ_k ∈ Ĥ_k} [ L_{D_test}(ĥ_k, ĥ*) + L_{D_proxy}(ĥ_k, ĥ*) ]. We are now ready to prove Theorem 2.

Ablation Study on α in Theorem 2
In this part, we investigate the effect of the coefficient α on the error bound in Theorem 2. We first identify the negligible terms.
Consider the case where the deep learning model at client k has enough parameters such that its hypothesis space Ĥ_k contains the ground-truth labeling function ĥ*. In this case, both λ_k and λ_{k,proxy} are zero. Besides, the numerical constraint tends to zero given sufficient training samples. Furthermore, the proxy dataset D_proxy for knowledge distillation is expected to be less heterogeneous than the local heterogeneous dataset D_k at client k. Thus, the distance d_{G_k}(D_proxy, D_test) is much smaller than d_{G_k}(D_k, D_test). If the proxy dataset follows the same distribution as the test set, the distance d_{G_k}(D_proxy, D_test) equals zero. It is now clear that when α approaches 1, the empirical risk and the distance d_{G_k}(D_k, D_test) dominate the error bound, while when α is close to 0, the empirical risk, the misleading knowledge, and the ambiguous knowledge become the dominant factors. To support this analysis, we conduct an ablation study on MNIST under a weak non-IID setting, comparing the classification error across various α values. Our method utilizes the proposed selection mechanism as a pseudo-labeling function h*_proxy of the proxy dataset. The local client k trains its local model by minimizing the empirical risk α L_{D_k}(ĥ_k, ĥ*) + (1 − α) L_{D_proxy}(ĥ_k, h*_proxy). To better assess the negative impact of misleading and ambiguous knowledge, we consider a baseline where the proxy dataset has ground-truth labels ĥ*(x), and the client minimizes the combined loss α L_{D_k}(ĥ_k, ĥ*) + (1 − α) L_{D_proxy}(ĥ_k, ĥ*) to train the local model.
As shown in Fig. 10, the test error rate of the baseline decreases monotonically as the α value decreases, because a small α reduces the negative influence of the local heterogeneous dataset D_k on the training process. In contrast, the error rate of our proposed method first decreases but then increases as α approaches 0. Notably, this error rate is consistently higher than that of the baseline. These results can primarily be attributed to the misleading and ambiguous knowledge, which degrades the training performance, particularly when α is close to 0.

Figure 1. The overall framework of Selective-FD. The federated distillation involves four iterative steps. First, each client trains a personalized model using its local private data. Second, each client predicts the labels of the proxy samples based on the local model. Third, the server aggregates these local predictions and returns the ensemble predictions to the clients. Fourth, the clients update the local models by knowledge distillation based on the ensemble predictions. During the training process, the client-side selectors and the server-side selector aim to filter out misleading and ambiguous knowledge from the local predictions.

Figure 4. Test accuracy of different methods on the pneumonia detection task. The error bars represent the mean ± standard deviation over five repetitions. The results show that the proposed Selective-FD method achieves the best performance, and the accuracy gain is more significant when using hard labels to transfer knowledge. Notably, some baselines perform even worse than the independent learning scheme. These results demonstrate that knowledge sharing among clients can mislead and negatively influence local training.

Figure 5. The AUROC scores for incorrect prediction detection (left) and the test accuracy after federated distillation (right). The error bars represent the mean ± standard deviation across 10 clients. The results show that both the AUROC score and the accuracy of our Selective-FD method are much higher than those of the baselines, indicating its effectiveness in identifying unknown classes from the proxy dataset. This leads to a remarkable performance gain in federated distillation.

Figure 6. Test accuracy and the percentage p_proxy under (a) different values of τ_client and (b) different values of τ_server. We denote the percentage of proxy samples selected for knowledge distillation as p_proxy. When τ_client is too large or τ_server is too small, the selectors filter out most of the proxy samples, leading to a small batch size and increased training variance. Conversely, when τ_client is too small, the local outputs may contain an excessive number of incorrect predictions, reducing the effectiveness of knowledge distillation. Besides, when τ_server is too large, the ensemble predictions may exhibit high entropy, indicating ambiguous knowledge that could degrade local model training. These empirical results align with the analysis in Remark 1.

Figure 7. Test accuracy and communication cost as functions of the communication round. Our Selective-FD method achieves accuracy comparable to FedAvg on the MNIST and CIFAR-10 datasets but is inferior on Fashion MNIST. However, Selective-FD achieves a significant reduction in communication overhead, because the cost of model sharing in FedAvg is much higher than that of knowledge sharing in our method.

Algorithm 1 Selective-FD
1: Set the training round T and the client number K. The server and clients collect the proxy dataset D_proxy.
2: Clients construct client-side selectors by minimizing (1) and initialize the local models.
3: for t in 1, . . ., T do
4:  for Client k in 1, . . ., K (in parallel) do
5:   Train the local model based on the private dataset D_k.
6:  end for
7:  Server randomly selects the indexes of proxy samples and sends them to the clients.
8:  for Client k in 1, . . ., K (in parallel) do
9:   Client k computes the predictions on the proxy samples, filters out misleading knowledge based on the client-side selector, and uploads the local predictions to the server.
10:  end for
11:  Server aggregates the local predictions, removes ambiguous knowledge based on the server-side selector, and sends the ensemble predictions back to the clients.
12:  Clients utilize the proxy samples and ensemble predictions for knowledge distillation.
13: end for

Figure 9. MNIST classification accuracy in different non-IID settings. When β is set to a smaller value, the data distribution is more non-IID. The knowledge is transferred via (a) hard labels and (b) soft labels.

Figure 10. Test error rate as a function of α.
Figure 8. Visualization of the non-IID data distribution. The horizontal axis indexes the proxy dataset and the local datasets, while the vertical axis indicates the class labels. The size of the scattered points denotes the number of samples.

Table 5. Summary of datasets.