Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction

Multi-Instance (MI) learning has been proven to be effective for genome-wide protein function prediction problems in which each training example is associated with multiple instances. Many studies in this literature attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under the usual assumption that the underlying distribution of the testing data (target domain, i.e., TD) is the same as that of the training data (source domain, i.e., SD). However, this assumption may be violated in real practice. To tackle this problem, in this paper, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution toward the target domain distribution by utilizing bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternative optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.

During the past decades, a variety of computational methods have been proposed to tackle the genome-wide protein function prediction problem [1][2][3]. Some research in this literature 4,5 treats protein function prediction as a naturally and inherently multi-instance learning problem. Multi-Instance Learning 6,7 is a recent machine learning framework for learning problems in which each training example is represented by a bag of instances. In MIL, a bag has a positive label if it contains at least one positive instance; otherwise the bag is annotated with a negative label.
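To make the standard MI assumption concrete, the bag-labeling rule can be sketched in a few lines of Python; the instance labels here are purely illustrative:

```python
# Sketch of the standard MI assumption: a bag is positive iff at least
# one of its instances is positive.
def bag_label(instance_labels):
    """Return 1 if any instance in the bag is positive, else 0."""
    return int(any(l == 1 for l in instance_labels))

print(bag_label([0, 0, 1]))  # → 1 (positive bag: one positive instance)
print(bag_label([0, 0, 0]))  # → 0 (negative bag: no positive instance)
```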
MIL has received considerable attention and has been frequently applied in a wide range of real-world applications 2,8 since it is more convenient and natural for representing complicated objects that carry multiple semantic meanings. With the help of MIL, some inherent patterns that are closely related to certain labels may become explicit and clearer. Based on this advantage of the MI representation, a variety of MIL methods have been proposed. The conventional setting of MIL assumes the availability of a large amount of labeled training data to learn a classifier in a source domain, and predicts the labels of test data in the same domain. However, in many real-world applications, labeled data are limited and expensive to obtain. This is especially true for genome data. If we can transfer information from a similar domain to assist MIL, it will be helpful.
Transfer learning [9][10][11][12][13][14][15] has been developed to handle the situation in which a classification task with sufficient training data is considered the source domain (SD), while only limited data are available in a target domain (TD) whose data may lie in a different feature space or follow a different distribution. If we can encode transfer learning methods into MIL, we can reuse the labeled data in the source domain for the target task. Recently, a number of transfer learning methods 16,17 have been developed. Most of these algorithms are designed for single-instance learning (SIL), where each training example is represented by one instance. It has been shown that learning performance can be significantly enhanced if transfer learning techniques are exploited in SIL. However, it is difficult to directly apply these transfer learning methods to the multi-instance situation. Hence, it is urgent to develop an MIL method under the transfer learning setting.
Furthermore, most of these transfer learning methods 16,17 are based on the Euclidean distance, i.e., an objective function is optimized to maintain the class information of examples through their Euclidean distances.

1 School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China. 2 School of Software Engineering, South China University of Technology, Guangzhou, 510006, China. 3 State Key Laboratory for Novel Software Technology, Nanjing University, China. 4 Wuzhou Red Cross Hospital, Wuzhou, 543002, China. * These authors contributed equally to this work. Correspondence and requests for materials should be addressed to Q.W. (email: qyw@scut.edu.cn)

Table 1. Summary of notation.
SD: The source domain dataset.
TD: The target domain dataset.
X_i: The i-th bag, which represents a protein.
x^j: The j-th instance in a bag.
x_i^j: The j-th instance in bag X_i.
X̄_i: The average of all the instances in bag X_i.
Y_i: The Gene Ontology terms assigned to X_i.
c_i: The center of X_i.
D(x_i, c_i): The square of the Mahalanobis distance between instance x_i and c_i.
D(X_i, X_j): The square of the Mahalanobis distance between bags X_i and X_j.
A: The learned Mahalanobis distance metric.
L_exp(J): The expected loss.
δ_S: A constant to limit the maximum distance between the center of a bag and the instances in the bag.
δ_D: A constant to limit the minimum distance between bags from different classes.
ξ, ζ: Two slack vectors to improve the robustness of the algorithm.
ω: The weight vector of bags.

Method
In this section, we first give some definitions corresponding to MIL for genome-wide protein function prediction. Then, we briefly present the traditional multi-instance metric learning approach and discuss its limitation under the transfer learning setting. After the discussion, a bag weight estimation method is presented. Finally, we present the proposed method based on the learned bag weights.

The genome-wide protein function prediction problem aims to find a method to annotate the biological function of a given protein. However, the structure and function of proteins are very complex. In fact, many proteins include several structural domains, and each domain may appear in a number of different proteins. These domains can be treated as distinct functional or structural units of a protein. Multi-domain proteins are likely to create new functions by emerging from selective pressure during evolution. A wide variety of proteins have diverged from common ancestors by combining and associating different domains. Nature often brings together several domains to produce multi-domain and multifunctional proteins with a large number of possibilities 32. To describe the complex structure of proteins, we apply MIL to the genome-wide protein function problem: we represent each domain as an input instance, represent each protein in an organism as a bag of instances, and treat each biological function as an output label. Thus, the protein function prediction problem is naturally and inherently an MIL task.
Formally, we denote by X_i = {x_i^1, …, x_i^{n_i}} the i-th protein in the training set. X_i is a bag of n_i instances, and every instance x_i^j ∈ R^d is a vector of d dimensions. n_bag indicates the number of bags in SD. Y_i ∈ R^L represents the Gene Ontology terms 4 assigned to X_i. Y_i is a binary vector, and Y_i^k, the k-th element of Y_i, equals 1 if the i-th protein is associated with the k-th Gene Ontology term; in other words, the bag X_i is assigned to class ϑ_k, and Y_i^k = 0 otherwise. We assume that bag X_i is assigned to ϑ_k if and only if at least one instance in X_i belongs to ϑ_k. Formally, the MIL task aims to learn a function h: X → Y from the given data set SD. Important definitions are shown in Table 1.

Multi-Instance Metric Learning. After the definitions, we briefly formulate the multi-instance metric learning framework provided by the paper 5. The learning framework aims to find a distance metric from the training data SD. In general, the multi-instance metric learning problem can be formulated as the following optimization problem:

$$\min_{A,\,\xi,\,\zeta}\ r(A) + \lambda \sum_{i}\sum_{j} \xi_i^j + \beta \sum_{Y_i \neq Y_j} \zeta_{ij} \tag{1}$$
$$\text{s.t.}\quad D(x_i^j, c_i) \le \delta_S + \xi_i^j\ \ \forall i, j; \qquad D(X_i, X_j) \ge \delta_D - \zeta_{ij}\ \ \forall Y_i \neq Y_j; \qquad A \succeq 0,\ \xi \ge 0,\ \zeta \ge 0.$$

The first constraint in equation (1) is used to minimize the instance distances within each bag 5. The second constraint in equation (1) is used to maximize the distance between bags corresponding to different labels. r(A) is a regularization term for A, and λ and β are two balance parameters. Sometimes the training data extracted from the proteins may contain noise, which may reduce the performance of the algorithm. To address this problem, two slack vectors ξ and ζ are introduced into the learning framework to improve its robustness. δ_S and δ_D are two constants (δ_S < δ_D): δ_S is used to limit the maximum distance between the center of a bag and the instances in the bag, and δ_D is used to limit the minimum distance between bags from different classes. c_i is the center of X_i.
D(x_i^j, c_i) is the square of the Mahalanobis distance between instance x_i^j and c_i, and D(X_i, X_j) is the square of the Mahalanobis distance between bags X_i and X_j. Here, we can define

$$D(x_i^j, c_i) = (x_i^j - c_i)^T A\, (x_i^j - c_i), \qquad D(X_i, X_j) = (\bar{X}_i - \bar{X}_j)^T A\, (\bar{X}_i - \bar{X}_j),$$

where X̄_i indicates the average of all the instances in bag X_i. According to this method, we can pull together the instances from the same protein and separate the proteins with different biological functions.

Loss Analysis of Multi-Instance Metric Learning. The learning framework presented in equation (1) is designed for the learning problem where training and test data are drawn from the same distribution. However, in many real applications, this assumption cannot be guaranteed. For a more specific analysis of this drawback, we first discuss the loss analysis of equation (1) under the transfer learning setting in this section.
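As a minimal sketch of the bag-level distance just defined, the following snippet computes D(X_i, X_j) = (X̄_i − X̄_j)^T A (X̄_i − X̄_j) with NumPy; the toy bags and the choice A = I are assumptions for illustration:

```python
import numpy as np

def bag_distance(Xi, Xj, A):
    """Squared Mahalanobis distance between two bags, using bag means.

    Xi, Xj: (n_i, d) and (n_j, d) arrays of instances; A: (d, d) PSD matrix.
    """
    diff = Xi.mean(axis=0) - Xj.mean(axis=0)
    return float(diff @ A @ diff)

# With A = I this reduces to the squared Euclidean distance of the means.
Xi = np.array([[0.0, 0.0], [2.0, 0.0]])   # bag mean (1, 0)
Xj = np.array([[4.0, 0.0]])               # bag mean (4, 0)
print(bag_distance(Xi, Xj, np.eye(2)))    # → 9.0
```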
Before we present the loss analysis, we first use an example to explain more intuitively the different distributions between training and test data. Figure 1 gives an example in which the distributions of the source domain and target domain are different. In such a case, the distance metric learned by equation (1) cannot help to minimize the distance within each bag and maximize the distance between bags from different classes, due to the fact that the expected loss of equation (1) on SD is inconsistent with that on TD. Herein, we define the expected loss corresponding to equation (1) on SD as

$$\mathcal{L}_{exp}(J) = \int L(J, X)\, P(X)\, dX, \tag{3}$$

where P(X) indicates the density of bag X in SD. Similarly, the expected loss corresponding to equation (1) on TD can be written as

$$\mathcal{L}'_{exp}(J) = \int L(J, X)\, P'(X)\, dX, \tag{5}$$

where P′(X) represents the density of bag X in TD. From equations (3) and (5), we can find that if P(X) ≠ P′(X), the expected loss on SD is not equal to that on TD,

$$\mathcal{L}_{exp}(J) \neq \mathcal{L}'_{exp}(J). \tag{6}$$

In other words, even if the number of training bags in SD is as large as possible (i.e., tends to infinity), equation (1) still cannot generate an optimal solution for the multi-instance prediction problem in TD.

Bag Weights Estimation.
After the loss analysis of multi-instance metric learning, we find that the traditional learning framework in equation (1) is not suitable for learning under the transfer learning setting. The main reason for this is the divergence between the expected losses of equation (1) on SD and TD. In this section, we propose learning suitable bag weights to solve this problem.
From equations (3), (5) and (6), it is obvious that if there exists a suitable weight for each bag X in SD satisfying

$$P(X)\,\omega(X) = P'(X), \tag{7}$$

we can balance the difference between the expected losses on SD and TD. To estimate the bag weights, we set ω(X) = P′(X)/P(X) and adopt the approach proposed by MICS 31, where the bag weight is considered a function that can be approximated by a linear combination of some basis functions, i.e., ω̂(X) = Σ_j α_j φ_j(X), where the {φ_j} indicate a set of pre-defined basis functions and the {α_j} represent the corresponding nonnegative parameters to be learned. The weights of source-domain bags, ω(X), can be obtained by minimizing the least-square loss between ω̂ and ω, i.e.,

$$\min_{\alpha}\ \int \big(\hat{\omega}(X) - \omega(X)\big)^2\, P(X)\, dX. \tag{8}$$

As shown in MICS 31, equation (8) can be converted to an optimization problem over α alone (equation (9)). The basis functions can be selected as a series of kernels. Following the work of MICS 31, we use the MI-Kernel 33 to measure the similarity or dissimilarity between multi-instance bags,

$$k(X_i, X_j) = \sum_{x \in X_i} \sum_{x' \in X_j} \exp\!\big(-\gamma \|x - x'\|^2\big), \tag{10}$$

where γ is the kernel width.
Since the optimization problem in equation (9) is convex, gradient ascent approaches can be applied to obtain the global solution. In this way, we can learn a weight for each bag. With the learned bag weights, we can balance the difference between the expected losses of equation (1) on SD and TD.
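The MI-Kernel used above (a sum of an RBF base kernel over all instance pairs) can be sketched as follows; the toy one-dimensional bags are assumptions for illustration, and the bag-level normalization that some MI-Kernel variants apply is omitted:

```python
import numpy as np

def mi_kernel(Xi, Xj, gamma=1.0):
    """MI-Kernel: sum of an RBF base kernel over all instance pairs.

    Xi: (n_i, d) bag; Xj: (n_j, d) bag; gamma: RBF kernel width.
    """
    # Pairwise squared Euclidean distances between instances of the two bags.
    d2 = ((Xi[:, None, :] - Xj[None, :, :]) ** 2).sum(axis=2)
    return float(np.exp(-gamma * d2).sum())

Xi = np.array([[0.0], [1.0]])
Xj = np.array([[0.0]])
# exp(-0) + exp(-1) ≈ 1.3679
print(round(mi_kernel(Xi, Xj), 4))  # → 1.3679
```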

Multi-Instance Metric Transfer Learning.
We have balanced the divergence between the expected losses of equation (1) on SD and TD by reweighting the training bags. However, we still do not know how to obtain a Mahalanobis distance metric under the transfer learning setting. In this section, we utilize the learned bag weight vector in our learning framework to obtain such a metric. Based on the loss analysis of multi-instance metric learning and the learned bag weights, we can reformulate equation (1) as follows,

$$\min_{A,\,\xi,\,\zeta}\ r(A) + \lambda \sum_{i=1}^{n_{bag}} \omega(X_i) \sum_{j} \xi_i^j + \beta \sum_{Y_i \neq Y_j} \omega(X_i)\,\omega(X_j)\,\zeta_{ij}, \tag{12}$$

subject to the constraints of equation (1). Note that a preprocessing step to centralize the input data is performed in MIMTL,

$$x_i^j \leftarrow x_i^j - \frac{1}{n_{all}} \sum_{i'} \sum_{j'} x_{i'}^{j'}, \tag{11}$$

where n_all indicates the total number of instances in SD; then D(x_i^j, c_i) can be represented as

$$D(x_i^j, c_i) = (x_i^j - \bar{X}_i)^T A\, (x_i^j - \bar{X}_i).$$

In the following, without special declaration, the data are supposed to be centralized.
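The centralization preprocessing step can be sketched as below; `centralize` is a hypothetical helper name and the toy bags are assumptions for illustration:

```python
import numpy as np

def centralize(bags):
    """Subtract the mean of all instances in SD from every instance.

    bags: list of (n_i, d) arrays. Returns the centered bags and the
    global instance mean that was removed.
    """
    all_instances = np.vstack(bags)          # stack every instance in SD
    mu = all_instances.mean(axis=0)          # global instance mean
    return [X - mu for X in bags], mu

bags = [np.array([[1.0, 2.0]]), np.array([[3.0, 4.0], [5.0, 6.0]])]
centered, mu = centralize(bags)
print(mu)  # → [3. 4.]
```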
For some situations, in addition to the bags from SD, we can obtain a few labeled bags from TD. Hence, we can learn the distance metric A based on the labeled bags in both SD and TD by setting the weights of bags from TD to 1. We then obtain the optimization problem

$$\min_{A,\,\xi,\,\zeta}\ r(A) + \lambda \sum_{i=1}^{n_{bag}+n_{td}} \omega(X_i) \sum_{j} \xi_i^j + \beta \sum_{Y_i \neq Y_j} \omega(X_i)\,\omega(X_j)\,\zeta_{ij}, \tag{13}$$

subject to the constraints of equation (1), where ω(X_i) = 1 for bags from TD and n_td is the number of labeled bags in TD. Note that, if we cannot obtain any labeled bags from TD, we can delete the constraints in equation (13) corresponding to TD and formulate the following method to learn the distance metric A,

$$\min_{A,\,\xi,\,\zeta}\ r(A) + \lambda \sum_{i=1}^{n_{bag}} \omega(X_i) \sum_{j} \xi_i^j + \beta \sum_{Y_i \neq Y_j} \omega(X_i)\,\omega(X_j)\,\zeta_{ij}. \tag{14}$$


Since we have reweighted the bags to balance the expected losses of the learning framework on SD and TD, equation (14) can also generate an optimal solution for the multi-instance prediction problem, even without labeled bags in TD. In other words, labeled bags from TD are not required to guarantee a consistent expected loss on SD and TD.
Loss Analysis of Multi-Instance Metric Transfer Learning. We have provided a new learning framework in equations (13) and (14) to learn a Mahalanobis distance metric under the transfer learning setting. For a more detailed understanding of the new learning framework, we analyze the expected loss of this framework on both SD and TD in this section. We use L_exp(J) to represent the expected loss of equation (13) on SD. Then L_exp(J) can be written as

$$\mathcal{L}_{exp}(J) = \int \omega(X)\, L(J, X)\, P(X)\, dX. \tag{15}$$

Because ω(X) = P′(X)/P(X), equation (15) can be rewritten as

$$\mathcal{L}_{exp}(J) = \int L(J, X)\, P'(X)\, dX = \mathcal{L}'_{exp}(J), \tag{16}$$

where L′_exp(J) indicates the expected loss of equation (13) on TD. From equation (16), we can see that the expected loss of equation (13) on SD is consistent with that on TD. This means that we can balance the difference between the expected losses of equation (13) on SD and TD with the help of the learned bag weights. Hence, equation (13) can guarantee the ability to generalize the predicted model to TD data, and the distance metric A learned by MIMTL can more effectively measure the distance between bags.
Prediction by Using the Learned Metric. After we obtain the Mahalanobis distance metric by solving the optimization problem in equation (13), we can predict the labels of test bags in TD. In this section, we present how to predict using the learned distance metric.
After we obtain the distance metric A, a base multi-instance learner (e.g., the citation-kNN algorithm 26 or the multi-instance multi-label support vector machine 6) can be used in combination with the distance metric A to predict bag labels. Considering that most genome-wide protein function prediction problems are associated with multiple class labels 4,34, we train an independent distance metric A and a base multi-instance learner for each class. We present the two methods for combining the distance metric A with a base multi-instance learner as follows.
The First Method for Prediction. For the first method, we use MIMTL kNN to denote the MIMTL variant that uses the citation-kNN algorithm as the base learner. For a given test bag X_i and the distance metric A, MIMTL kNN computes the distance between X_i and each training bag. Then, we find both the references and the citers of X_i. The class labels of X_i are determined by a majority vote of the r nearest reference bags and the c nearest citing bags corresponding to X_i.
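A sketch of the citation-kNN-style vote with a learned metric might look like the following; this is not the paper's exact implementation, and the binary-label majority vote and tie handling are simplifying assumptions:

```python
import numpy as np

def citation_knn_vote(D_train, d_test, train_labels, r=2, c=2):
    """Citation-kNN-style majority vote (sketch).

    D_train: (n, n) pairwise distances among training bags (learned metric).
    d_test:  (n,) distances from the test bag to each training bag.
    References: the r training bags nearest to the test bag.
    Citers: training bags whose c nearest neighbours (among the other
    training bags plus the test bag) include the test bag.
    """
    n = len(d_test)
    refs = np.argsort(d_test)[:r]
    citers = []
    for i in range(n):
        # Neighbours of training bag i: other training bags + the test bag.
        neigh = np.append(np.delete(D_train[i], i), d_test[i])
        # The test bag is the last entry; is it within bag i's top c?
        if np.argsort(neigh).tolist().index(n - 1) < c:
            citers.append(i)
    votes = [train_labels[j] for j in refs] + [train_labels[j] for j in citers]
    return int(sum(votes) * 2 > len(votes))

D = np.array([[0.0, 5.0, 5.0], [5.0, 0.0, 5.0], [5.0, 5.0, 0.0]])
print(citation_knn_vote(D, np.array([1.0, 2.0, 10.0]), [1, 1, 0], r=2, c=1))  # → 1
```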

Algorithm 1. Step 1: Initialize A_0, ξ_0 and ζ_0.

The Second Method for Prediction. For the second method, we use MIMTL SVM to denote the MIMTL variant that utilizes the MIMLSVM algorithm as the base learner. To predict the label of a given test bag X_i, we first cluster the training bags into k medoids with the learned distance metric A. Each bag X_i is then mapped to a k-dimensional feature vector f_i by computing the Mahalanobis distance between bag X_i and each of the k medoids, which yields a derived single-instance data set Z. With the derived data set Z, we learn a binary classifier by support vector machine 35. For a given test bag X_i, we compute its feature vector f_i in the same way, and predict the label of X_i by predicting the label of f_i with the learned binary classifier.
Optimization. In this section, we derive an approach to solve the optimization problem constructed in equation (13). We first convert the constrained problem to an unconstrained problem by adding penalty functions, which yields a penalized objective f(A, ξ, ζ) (equation (18)), where σ is the penalty coefficient. Then we use the gradient-projection method 36 to solve the optimization problem in equation (18). To be precise, in the first step, we initialize A_0, ξ_0 and ζ_0, and centralize the input data by equation (11). In the second step, we update the values of A, ξ and ζ using gradient descent based on the rules

$$A_{t+1} = A_t - \eta\,\frac{\partial f}{\partial A}, \qquad \xi_{t+1} = \xi_t - \eta\,\frac{\partial f}{\partial \xi}, \qquad \zeta_{t+1} = \zeta_t - \eta\,\frac{\partial f}{\partial \zeta}, \tag{19-21}$$

where η is the step size. The derivatives of the objective f with respect to A, ξ and ζ in equations (19)-(21) are obtained by differentiating the penalized objective of equation (18).

Figure 3. Comparison results with MIMTL and MICS on seven real-world organisms.
Scientific Reports | 7:41831 | DOI: 10.1038/srep41831

We repeat the second step until the change of the objective function f is less than a threshold ε. A detailed procedure is given in Algorithm 1.
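The gradient-projection loop can be sketched generically as below; the learning rate `lr`, the stopping rule, and the toy objective are assumptions, and `grad_f` stands in for the derivatives of the penalized objective (the PSD projection keeps A a valid Mahalanobis metric):

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone by eigenvalue clipping."""
    A = (A + A.T) / 2
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.clip(w, 0, None)) @ V.T

def gradient_projection(grad_f, A0, lr=0.1, eps=1e-6, max_iter=1000):
    """Generic projected-gradient loop: descend, project, stop on small change."""
    A = A0
    for _ in range(max_iter):
        A_new = project_psd(A - lr * grad_f(A))
        if np.abs(A_new - A).max() < eps:
            return A_new
        A = A_new
    return A

# Toy objective: f(A) = ||A - B||_F^2 with B symmetric but indefinite;
# the minimizer over PSD matrices is the PSD projection of B.
B = np.array([[2.0, 0.0], [0.0, -1.0]])
A_star = gradient_projection(lambda A: 2 * (A - B), np.zeros((2, 2)))
print(np.round(A_star, 3))  # ≈ [[2, 0], [0, 0]]
```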

Results
In this section, we verify the effectiveness of the proposed MIMTL approach by conducting extensive experiments on seven real-world organisms which cover the biological three-domain system 37-39 (i.e., archaea, bacteria, and eukaryote). We compare the performance of MIMTL (the source code of MIMTL will be released upon the publication of the paper) with several state-of-the-art multi-instance learning methods including MIMLSVM 6, MIMLNN 6, EnMIMLNN 4 and MICS 31. The results of the comparison show that the proposed MIMTL outperforms the other algorithms.

Data setting. The seven real-world organism datasets (http://lamda.nju.edu.cn/files/MIMLprotein.zip) have been used by many prior researchers on the genome-wide protein function prediction 4 problem. The datasets come from the biological three-domain system (i.e., archaea, bacteria, eukaryote). Detailed information about the datasets is shown in Tables 2 and 3. For each dataset, each bag, containing several instances, represents a protein in the organism, and each instance is described by a 216-dimensional vector where each dimension is the frequency of a triad type 40. A group of GO molecular function terms 41 is associated with each instance. For example, the Haloarcula marismortui dataset contains 304 proteins (bags) and includes 234 Gene Ontology terms (classes) on molecular function (Table 2). The total number of instances in the Haloarcula marismortui dataset is 950. The average number of instances per bag (protein) is 3.13 ± 1.09, and the average number of labels (GO terms) per instance is 3.25 ± 3.02.
For each dataset, we separate the bags into source and target domains by a sampling procedure following MICS 31, which puts a bag into the source domain if it satisfies a threshold criterion on its feature values. Considering that the segmentation of source and target domains provided by MICS is artificially set, the data setting may differ from real applications. In order to give fair comparisons, we also test the performance of our algorithm in a more general setting. In this setting, the original bags are randomly clustered into two clusters according to the centers of the bags. Then, we randomly select one cluster as the source domain and set the rest as the target domain. In this way, the original bag data are naturally split into two domains, and the comparison is fairer.
Evaluation Measure. In our experiments, we use four popular criteria to evaluate the performance of the multi-instance learning approaches, i.e., Ranking Loss (RankLoss) 6, Coverage 42, Average-Recall (avgRecall) 6 and Average-F1 (avgF1) 6. To explain each measure, for a given test set, we denote by h(X_i) the returned labels for X_i; h(X_i, y) is the returned confidence (real value) for label y on X_i; rank_h(X_i, y) is the rank of y derived from h(X_i, y); and Ȳ_i is the complementary set of Y_i. The criterion Ranking Loss measures the average fraction of mis-ordered label pairs generated by each algorithm; the smaller the ranking loss, the better the performance of the algorithm. The criterion Coverage evaluates how far, on average, one needs to go down the ranked label list to cover all of the proper labels of the test bag; the smaller the coverage, the better the performance of the algorithm.
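The two ranking-based criteria can be sketched as follows; the toy GO-term scores are assumptions for illustration, and coverage is reported 0-indexed here, as is common:

```python
def ranking_loss(scores, relevant):
    """Fraction of (relevant, irrelevant) label pairs ordered wrongly
    by the confidence scores (sketch of the RankLoss criterion).

    scores: dict label -> confidence; relevant: set of true labels.
    """
    irrelevant = [l for l in scores if l not in relevant]
    pairs = [(y, z) for y in relevant for z in irrelevant]
    if not pairs:
        return 0.0
    bad = sum(1 for y, z in pairs if scores[y] <= scores[z])
    return bad / len(pairs)

def coverage(scores, relevant):
    """How far down the score-sorted label list we must go to cover
    all relevant labels (0-indexed)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return max(order.index(y) for y in relevant)

scores = {"GO:A": 0.9, "GO:B": 0.2, "GO:C": 0.5}
print(ranking_loss(scores, {"GO:A", "GO:C"}))  # → 0.0 (no mis-ordered pair)
print(coverage(scores, {"GO:A", "GO:C"}))      # → 1 (top 2 labels suffice)
```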
The criterion Average-Recall measures the average fraction of correctly predicted labels; the larger the Average-Recall, the better the performance of the algorithm.
The criterion Average-F1 is a tradeoff between the average precision 6 and the average recall; the larger the Average-F1, the better the performance of the algorithm.
in which avgPrec(h) represents the average precision 6. Note that, in this paper, we do not use average precision to measure the performance of each algorithm. This is because the positive and negative bags of these seven datasets are very unbalanced; the ratio between the numbers of positive and negative instances is shown in Table 3. In this situation, if we set all the test bags to be negative, we can get a very high average precision. Hence, the average precision cannot fairly measure the performance of each algorithm.

Table 6. Comparison results on the dataset where the source and target domains are drawn from different clusters. ↓ (↑) indicates the smaller (larger), the better the performance.
To make a fair comparison, we conduct all the experiments in this paper with 20 random permutations of each dataset. We report the comparison results for each evaluation metric based on the averaged results over those 20 runs.
Comparing Algorithms. In this section, we briefly introduce the comparison methods (MIMLNN, MIMLSVM, EnMIMLNN, MICS) used in our experiments. (The codes of these four MIL algorithms have been shared by their authors: http://lamda.nju.edu.cn/CH.Data.ashx.) On the one hand, considering that MIMTL is designed to tackle the multi-instance learning problem, we compare MIMTL with MIMLNN and MIMLSVM, which are two classical multi-instance learning algorithms. On the other hand, since MIMTL is also a kind of metric-based MIL, we include EnMIMLNN as a comparison method. Moreover, MIMTL and MICS are both designed for multi-instance learning under the transfer learning setting; the difference between them is the distance used to measure bags: MIMTL uses the Mahalanobis distance while MICS uses the Euclidean distance. To verify the contribution of the Mahalanobis distance to MIMTL, we include MICS as a comparison method.
We have introduced two methods to predict labels for the test bags. To investigate which method is better for MIMTL, we also compare MIMTL kNN with MIMTL SVM on all datasets. MIMTL kNN is the variant of MIMTL with citation-kNN as the base learner, and MIMTL SVM is the variant of MIMTL with SVM as the base learner.

Parameter Configurations.
In this section, we present the detailed parameter configurations of each algorithm used in our experiments. To make the comparison fairer, we use the best parameters reported in the original papers for the baseline methods, and select the best parameters for MIMTL by cross-validation.
• MIMLNN: The regularization parameter used to compute the matrix inverse is set to 1, and the number of clusters is set to 40 percent of the number of training bags.
• MIMLSVM: The number of clusters is set to 20 percent of the number of training bags, and the SVM used in MIMLSVM is implemented by the LIBSVM 35 package with a radial basis function whose parameter "-c" is set to 1 and "-g" is set to 0.2.
• EnMIMLNN: The fraction parameter and the scaling factor are set to 0.1 and 0.8, respectively.
• MICS: The number of clusters is set to 80 percent of the number of training bags, and the SVM used in MICS is implemented by the LIBSVM 35 package with a radial basis function whose parameter "-c" is set to 1 and "-g" is set to 0.2.
• MIMTL: We set the balance parameters λ = 1 and β = 1, and set the number of clusters to 40 percent of the number of training bags. We select the radial basis function with "-c = 1" and "-g = 0.2" for the base learner SVM.
Performance Comparison. We have presented the two versions of MIMTL, MIMTL kNN and MIMTL SVM, in detail. Before we compare MIMTL with other state-of-the-art MIL methods, we want to select the better base learner for MIMTL. To this end, we compare the performances of MIMTL kNN and MIMTL SVM on the seven datasets. Figure 2 reports the experimental results. From the figure, we observe that the ranking loss and coverage of MIMTL SVM on all seven datasets are dramatically lower than those of MIMTL kNN. We also note that the avgF1 and avgRecall of MIMTL SVM on most datasets are much higher than those of MIMTL kNN. The experimental results in Fig. 2 suggest that SVM is more suitable than citation-kNN as the base learner for MIMTL.
In the second experiment, we verify the performance of MIMTL by comparing it with three traditional state-of-the-art MIL methods (MIMLNN, MIMLSVM, EnMIMLNN). The comparison results on the seven real-world organisms are shown in Tables 4 and 5. From the tables, we find that MIMTL performs significantly better than the other MIL methods. This is because MIMLNN, MIMLSVM and EnMIMLNN are designed for the traditional MIL problem where the training and test bags are drawn from the same distribution, so a multi-instance classifier trained by these methods on SD cannot suit the TD task well. Different from MIMLNN, MIMLSVM and EnMIMLNN, MIMTL takes into account the distribution difference between SD and TD and utilizes the bag-weighting trick to handle this problem. As a result, MIMTL maintains better performance than the other methods under the transfer learning setting.
In the third experiment, we compare the performance of MIMTL with that of MICS, since both are designed for the multi-instance transfer learning problem. The performance results for each algorithm on the seven datasets are shown in Fig. 3. From the figures, we find that the ranking loss and coverage of MIMTL on six of the seven datasets are lower than those of MICS, and the avgRecall and avgF1 of MIMTL on all seven datasets are higher than those of MICS. Though MIMTL and MICS are both designed for the multi-instance transfer learning problem, the performance of MIMTL is much better than that of MICS. This may be because MICS only uses the Euclidean distance to measure the distance between bags, while MIMTL incorporates the Mahalanobis distance into multi-instance transfer learning. With the advantage of the Mahalanobis distance, MIMTL can preserve more of the intrinsic geometric information of the bags than MICS. Hence, MIMTL can more effectively enhance multi-instance prediction performance for genome-wide protein function prediction.
In the fourth experiment, we also test the performance of MIMTL on the seven datasets with the more general setting. In this experiment, we first randomly clustered each dataset into two clusters, then set one cluster as the source domain and the rest as the target domain. Table 6 shows the experimental results. From the table, we find that our algorithm outperforms the four other baselines on the genomic datasets. Combining the experimental results from Tables 4 and 6, we find that our algorithm maintains excellent performance in both data settings (the setting according to MICS and the setting according to random clustering). This indicates that the performance of our algorithm is not sensitive to the data setting, and also validates the robustness of our algorithm.
In the fifth experiment, we further compare MIMTL with the other state-of-the-art methods based on a robust non-parametric test (the Friedman test 43 with the corresponding Nemenyi post-hoc test 44), as recommended by Demšar 45. This non-parametric test provides a method for comparing multiple algorithms across multiple data sets. The test procedure includes three steps. First, we rank the algorithms on each data set. Then, we compute the average rank (in descending order) of each algorithm over all data sets. Finally, the Nemenyi post-hoc test is utilized to detect whether an algorithm is significantly different from the others according to the average ranks. The performances of two algorithms are significantly different if their corresponding average ranks differ by at least a critical distance (CD), and vice versa. Algorithms that do not differ significantly from each other are usually connected with a bold horizontal line. The value of the critical distance depends on the number of comparing algorithms, the number of data sets, and a significance level p (e.g., p = 0.05). The data setting used in this figure follows the protocol of MICS 31. The test results of MIMLNN, MIMLSVM, EnMIMLNN, MICS and MIMTL are presented in several diagrams in Fig. 4. Each subgraph in Fig. 4 corresponds to a ranking-based measure. From the test results, we observe that the performance of MIMTL is similar to that of MIMLSVM on ranking loss and Coverage, while the performance of MIMTL is significantly better than that of MIMLNN, EnMIMLNN and MICS on all the evaluation measures, which verifies the excellent performance of MIMTL.
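The average-rank and critical-distance computation behind the Nemenyi diagrams can be sketched as follows; tie handling is omitted for simplicity, and the q value 2.728 is the standard p = 0.05 constant for comparing five algorithms:

```python
import math

def average_ranks(results):
    """results[d] = list of k scores (higher is better) on data set d.
    Returns the average rank of each algorithm (rank 1 = best).
    Ties are not handled, a simplifying assumption."""
    k = len(results[0])
    ranks = [0.0] * k
    for scores in results:
        order = sorted(range(k), key=lambda i: scores[i], reverse=True)
        for r, i in enumerate(order):
            ranks[i] += r + 1
    return [s / len(results) for s in ranks]

def nemenyi_cd(k, n, q_alpha=2.728):
    """Critical distance CD = q_alpha * sqrt(k(k+1) / (6n));
    q_alpha = 2.728 is the p = 0.05 value for k = 5 algorithms."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# Five algorithms, seven data sets, identical ordering on every data set.
avg = average_ranks([[0.9, 0.8, 0.7, 0.6, 0.5]] * 7)
print(avg)                        # → [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(nemenyi_cd(5, 7), 3))  # → 2.306
```

Two algorithms whose average ranks differ by more than the CD are declared significantly different; otherwise they are joined by a line in the diagram.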
Note that the performance reported in Table 4 is different from that reported by EnMIMLNN and MIMLDML. This is because we select the training and test data from different domains, while EnMIMLNN selects the training and test data from the same domain. To make the comparison more comprehensive, we also compare MIMTL with the baselines following the evaluation protocol of EnMIMLNN, randomly selecting the training and test data from the same domain.