LogSum + L2 penalized logistic regression model for biomarker selection and cancer classification

Biomarker selection and cancer classification play an important role in knowledge discovery using genomic data. Successful identification of gene biomarkers and biological pathways can significantly improve the accuracy of diagnosis and help machine learning models achieve better performance in classifying different types of cancer. In this paper, we propose a LogSum + L2 penalized logistic regression model and use a coordinate descent algorithm to solve it. The results of simulations and real experiments indicate that the proposed method is highly competitive among several state-of-the-art methods, achieving excellent performance in group feature selection and classification problems.

With the development of DNA microarray technology 1,2 , biological researchers can analyze the expression levels of thousands of genes simultaneously. Many studies have shown that microarray data can be used to classify different types of cancer, including characterizing the length of the incubation period and determining which drugs are effective in the diagnosis and treatment processes.
From a biological point of view 3 , only a small number of genes (biomarkers) strongly indicate the target cancer, while the other genes are not related to the disease. Therefore, data containing unrelated genes may introduce noise and make it harder for machine learning approaches to find the pathogenic genes that cause the disease. Moreover, from a machine learning perspective, the large number of genes (features) combined with few samples in these datasets may cause overfitting 4 and have a negative impact on classification performance. Due to the importance of these issues, effective gene (biomarker) selection methods are needed to help classify different cancer types and improve prediction accuracy.
In recent years, many methods for gene selection in microarray datasets have been developed; they can generally be divided into three categories: filters, wrappers, and embedded methods. Filter methods [5][6][7][8] evaluate genes based on discriminative power without considering their regulation correlations with other genes. The main disadvantage of filter methods is that they examine each gene separately, treating genes as independent and thereby ignoring the possibility that genes have combined and grouping effects. This is a common problem with statistical methods such as the t-test, which also examines each gene individually.
Wrapper methods [9][10][11] utilize feature assessment measures based on learning performance to select subsets of genes. Generally, they can acquire a small number of related genes that notably promote learning ability. In some cases, the results of wrapper methods are better than those of filter methods. However, their main drawback is high computational cost.
A third set of feature selection approaches is the embedded methods 12-26 , which perform feature selection as part of the learning procedure in a single process. Under similar learning performance, embedded methods are computationally more efficient than wrapper approaches. Hence, embedded methods have recently attracted a lot of attention in the literature. Regularization methods are important embedded technologies, which can perform feature selection and model training simultaneously. Many regularization methods have been proposed, such as Lasso 12 , SCAD 13 , adaptive Lasso 14 , MCP 15 , L q (0 < q < 1) 16 , L 1/2 17,18 , LogSum 19 , etc. These methods perform well for independent feature selection. When the features are highly correlated, regularization methods that pay attention to the grouping effect can be used to select groups of relevant features, such as group Lasso 20 and Elastic net 21 . Many researchers have applied logistic regression with optimization methods to binary cancer classification [29][30][31][32][33] . However, the traditional logistic regression model has two obvious shortcomings, mainly in the following two aspects: 1. Feature selection problem. All or most of the feature coefficients obtained by fitting the logistic regression model are non-zero, i.e. almost all of the features are related to the classification target and the solution is not sparse. However, in many practical problems the key factors affecting the model are often only a few. This non-sparseness of logistic models increases computational complexity on the one hand and, on the other, is not conducive to the interpretation of practical problems. 2. Overfitting problem.
Logistic regression models can often obtain good precision on the training data, but on test data outside the training set the classification accuracy is often not ideal. In fact, not only logistic regression but many other data analysis models are affected by overfitting; it has become one of the hot research topics in statistics, machine learning and other fields.
In recent years, there has been growing interest in applying regularization techniques in logistic regression models to address the two shortcomings mentioned above. For example, Tibshirani and Friedman 34,35 proposed sparse logistic regression based on Lasso regularization and coordinate descent methods. Algamal et al. 36,37 proposed the adaptive Lasso and the adjusted adaptive elastic net for gene selection in high-dimensional cancer classification. Like sparse logistic regression with the L 1 regularization method, Cawley and Talbot 30 investigated sparse logistic regression with Bayesian regularization. Liang et al. 38 investigated the sparse logistic regression model with the L 1/2 penalty for gene selection in cancer classification.
Inspired by the above-mentioned methods, in this paper we propose a LogSum + L 2 penalized logistic regression model. The main contributions of this paper include:
1. Our proposed method can not only select sparse features (biomarkers), but also identify groups of relevant features (gene pathways). A coordinate descent algorithm is used to solve the LogSum + L 2 penalized logistic regression model. 2. We evaluate the capability of our proposed method and compare its performance with other regularization methods. The results of simulations and real experiments indicate that the proposed method is highly competitive among several state-of-the-art methods.
The rest of this paper is organized as follows. "Related works" section introduces the related work. "Methods" section presents the LogSum + L 2 penalized logistic regression model and its optimization algorithm. "Experiments, experimental results and discussion" section analyzes the results on the simulated and real data. "Discussion and conclusion" section concludes this paper.

Related works
Sparse penalized logistic regression. We focus on binary classification using logistic regression (LR), a statistical method for modeling a binary classification problem. Suppose we have n samples and p genes. X and y are the gene matrix and the dependent variable, respectively: x_ij denotes the value of gene j for the ith sample, and y_i is the corresponding label that takes a value of 0 or 1, where y_i = 0 indicates the ith sample is in Class 1 and y_i = 1 indicates it is in Class 2. We define a classifier f(x) = e^x / (1 + e^x) such that for any input x with class label y, f(x) predicts y correctly. The LR model is given as follows:

P(y_i = 1 | x_i) = f(β_0 + x_i^T β) = exp(β_0 + x_i^T β) / (1 + exp(β_0 + x_i^T β)), (1)

where β = (β_0, β_1, ..., β_p) are the coefficients to be estimated; note that β_0 is the intercept. The negative log-likelihood derived from Eq. (1) is defined as:

l(β) = −Σ_{i=1}^{n} { y_i log f(β_0 + x_i^T β) + (1 − y_i) log[1 − f(β_0 + x_i^T β)] }. (2)

We obtain the coefficients β by minimizing Eq. (2). In cancer classification problems with high-dimensional, low-sample-size data (p ≫ n), directly solving the logistic model (2) leads to overfitting. To address this, we add a regularization term to (2); the sparse logistic regression model is:

min_β l(β) + λ p(β), (3)

where l(β) is the loss function, p(β) is the penalty function, and λ > 0 is a control parameter.
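As a concrete illustration, the objective in Eq. (3) can be sketched in a few lines of NumPy. The function names below and the choice of the LogSum penalty as an example are our own, not part of the original model specification:

```python
import numpy as np

def sigmoid(z):
    # The classifier f(x) = e^x / (1 + e^x) from Eq. (1).
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(beta0, beta, X, y):
    # Negative log-likelihood l(beta) of Eq. (2).
    p = sigmoid(beta0 + X @ beta)
    eps = 1e-12  # numerical guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def penalized_objective(beta0, beta, X, y, lam, penalty):
    # Sparse logistic regression objective of Eq. (3): l(beta) + lam * p(beta).
    return neg_log_likelihood(beta0, beta, X, y) + lam * penalty(beta)

# Example penalty: LogSum, p(beta) = sum_j log(|beta_j| + eps)
logsum = lambda beta, eps=1e-2: np.sum(np.log(np.abs(beta) + eps))
```

Any of the penalties discussed below can be passed in place of `logsum`, which is what makes the regularized formulation modular.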
A coordinate descent algorithm for different thresholding operators. The coordinate descent algorithm is a "one-at-a-time" approach 40 . Before considering the coordinate descent algorithm for the nonlinear logistic regularization, we first introduce the linear regression case. The objective function of linear regression is as follows:

(1/2) ||y − Xβ||^2, (4)

where y = (y_1, ..., y_n)^T is the vector of n response variables, X_i = (x_i1, x_i2, ..., x_ip) is the ith input variable with dimensionality p, y_i is the corresponding response variable, and ||·|| denotes the L 2 -norm. The "one-at-a-time" coordinate descent algorithm solves for β_j while the other coefficients β_{k≠j} (the coefficients remaining after the jth element β_j is removed) are fixed. Eq. (4) can be rewritten as:

(1/2) Σ_i ( y_i − Σ_{k≠j} x_ik β_k − x_ij β_j )^2, (5)

where the index k runs over the features other than the jth feature. Setting the first-order derivative at β_j to zero yields the univariate estimate

ω_j = Σ_i x_ij ( y_i − Σ_{k≠j} x_ik β_k ) / Σ_i x_ij^2, (6)

where y_i − Σ_{k≠j} x_ik β_k represents the partial residual with respect to the jth feature.
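The single-coordinate update of Eq. (6) can be sketched directly; the function name below is ours, introduced only for illustration:

```python
import numpy as np

def univariate_ls_update(X, y, beta, j):
    # "One-at-a-time" update for linear regression: fix beta_k for k != j,
    # form the partial residuals, and regress them on feature j (Eqs. (5)-(6)).
    r = y - X @ beta + X[:, j] * beta[j]   # partial residual w.r.t. feature j
    return (X[:, j] @ r) / (X[:, j] @ X[:, j])
```

Cycling this update over j = 1, ..., p until convergence gives the unpenalized coordinate descent solver; the penalized variants below simply pass the value through a thresholding operator.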
To consider the correlation of features, the Elastic net ( L EN ) 21 was proposed, which emphasizes a grouping effect. The L EN penalty function is given as follows:

P_EN(β) = λ Σ_{j=1}^{p} [ a|β_j| + (1 − a) β_j^2 / 2 ], (7)

a combination of the L 1 penalty (a = 1) and the ridge penalty (a = 0). Therefore, Eq. (6) leads to the univariate penalized problem

min_{β_j} (1/2)(β_j − ω_j)^2 + λ[ a|β_j| + (1 − a) β_j^2 / 2 ]. (8)

Zou and Hastie have proposed the univariate solution 21 for a L EN penalized regression coefficient as follows:

β_j = S(ω_j, λa) / (1 + λ(1 − a)), (9)

where S(ω_j, λa) is the soft thresholding operator for the L 1 penalty when a is equal to 1. It can be divided into three situations as follows:

S(ω, λ) = ω − λ, if ω > λ; 0, if |ω| ≤ λ; ω + λ, if ω < −λ. (10)

Fan et al. have proposed the SCAD penalty 13 , which produces a sparse set of solutions and approximately unbiased estimates for large coefficients. Its penalty function (with a > 2) is:

p_λ(|β|) = λ|β|, if |β| ≤ λ; −(|β|^2 − 2aλ|β| + λ^2) / (2(a − 1)), if λ < |β| ≤ aλ; (a + 1)λ^2 / 2, if |β| > aλ. (11)

Additionally, the SCAD thresholding operator is given as follows:

β_j = S(ω_j, λ), if |ω_j| ≤ 2λ; [(a − 1)ω_j − sign(ω_j) aλ] / (a − 2), if 2λ < |ω_j| ≤ aλ; ω_j, if |ω_j| > aλ. (12)

Like the SCAD penalty, Zhang proposed the minimax concave penalty (MCP) 15 . Its penalty function (with a > 1) is:

p_λ(|β|) = λ|β| − β^2 / (2a), if |β| ≤ aλ; aλ^2 / 2, if |β| > aλ. (13)

And the MCP thresholding operator is given as follows:

β_j = S(ω_j, λ) / (1 − 1/a), if |ω_j| ≤ aλ; ω_j, if |ω_j| > aλ. (14)

For the L 1/2 penalty λ Σ_j |β_j|^{1/2}, the univariate half thresholding operator for a penalized linear regression coefficient is given as follows 17,18 :

β_j = (2/3) ω_j [ 1 + cos( (2π/3) − (2/3) φ_λ(ω_j) ) ], if |ω_j| > (54^{1/3}/4) λ^{2/3}; 0, otherwise, (15)

where φ_λ(ω) = arccos( (λ/8)(|ω|/3)^{−3/2} ). To consider the correlation of genes, Huang et al. have proposed the HLR ( L 1/2 + L 2 ) regularization 26 , which adds a ridge term λ_2 ||β||^2 to the L 1/2 penalty in Eq. (15); the corresponding univariate HLR solution combines the half thresholding rule with an elastic-net-style ridge shrinkage factor 1/(1 + 2λ_2). Theoretically, the L 0 regularization produces the sparsest solutions, but solving it exactly is an NP-hard problem. Therefore, Candes et al. 19 proposed the LogSum penalty, which approximates the L 0 regularization much more closely. The penalty function of the LogSum regularization can be written as follows:

P_LogSum(β) = λ Σ_{j=1}^{p} log(|β_j| + ε), (19)

where ε > 0 should be set arbitrarily small to make the LogSum penalty closely resemble the L 0 -norm. Eq. (19) is non-convex and has local minima 39 .
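The closed-form operators above translate directly into code. The sketch below (our own naming) implements the soft, SCAD and MCP thresholding rules for a single coefficient:

```python
import numpy as np

def soft(w, lam):
    # Soft thresholding for the L1 penalty: sign(w) * max(|w| - lam, 0).
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def scad(w, lam, a=3.7):
    # SCAD thresholding operator (Fan & Li); requires a > 2.
    aw = abs(w)
    if aw <= 2 * lam:
        return soft(w, lam)               # lasso-like region
    elif aw <= a * lam:
        return ((a - 1) * w - np.sign(w) * a * lam) / (a - 2)  # transition region
    return w                              # large coefficients left unshrunk

def mcp(w, lam, a=3.0):
    # MCP thresholding operator (Zhang); requires a > 1.
    if abs(w) <= a * lam:
        return soft(w, lam) / (1.0 - 1.0 / a)
    return w                              # large coefficients left unshrunk
```

Note how both SCAD and MCP return `w` unchanged for large inputs, which is exactly the (near-)unbiasedness property discussed later for Fig. 2.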

Methods
LogSum + L 2 penalized logistic regression model. In this paper, we propose the LogSum + L 2 penalized logistic regression model for feature group selection. The LogSum + L 2 penalized linear regression problem can be written as follows:

min_β (1/2) ||y − Xβ||^2 + λ_1 Σ_{j=1}^{p} log(|β_j| + ε) + λ_2 ||β||^2, (21)

where ||y − Xβ||^2 is the loss function, (y, X) is a data set, ε > 0 is a constant, and λ_1 ≥ 0 and λ_2 ≥ 0 are regularization parameters that control the complexity of the penalty function. Figure 1 shows the two-dimensional contour plots of the penalty functions of the L 1 , L EN , HLR and LogSum + L 2 approaches. It demonstrates that the LogSum + L 2 penalty is non-convex for the given parameters λ_1 and λ_2 in Eq. (21). To derive the LogSum + L 2 thresholding operator (22), consider the regression model

y = Xβ + e,

where the response y ∈ R^n, the predictors X = (x_1, x_2, ..., x_p) with X ∈ R^{n×p}, and the error terms e = (e_1, e_2, ..., e_n) are i.i.d. with mean 0 and variance σ^2. For this model the LogSum + L 2 regularization is expressed by Eq. (21), and its first partial derivative with respect to β_k is:

(1 + 2λ_2) β_k + λ_1 sign(β_k) / (|β_k| + ε) − ω_k, (25)

where ω_k = x_k^T y. Equation (25) is obtained under the condition that the design matrix X is orthonormal. By setting this first partial derivative equal to zero, we obtain the estimator's kth element β_k: the LogSum + L 2 thresholding operator sets β_k to zero when no nonzero stationary point improves on zero, and otherwise takes the minimizing root.
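Because the stationarity condition in Eq. (25), once multiplied through by (|β_k| + ε), is quadratic in |β_k|, the univariate LogSum + L2 update can be computed exactly. The following sketch (our own naming, under the same orthonormal-design assumption) solves the quadratic on each sign branch and compares the candidate roots against zero:

```python
import numpy as np

def logsum_l2_univariate(w, lam1, lam2, eps=1e-2):
    # Univariate objective: 0.5*(b - w)^2 + lam1*log(|b| + eps) + lam2*b^2.
    obj = lambda b: 0.5 * (b - w) ** 2 + lam1 * np.log(abs(b) + eps) + lam2 * b ** 2
    best_b, best_val = 0.0, obj(0.0)          # zero is always a candidate
    A = 1.0 + 2.0 * lam2
    for s in (1.0, -1.0):
        # Write b = s*t with t = |b| > 0. Stationarity times (t + eps) gives:
        #   A*t^2 + (A*eps - s*w)*t + (lam1 - s*w*eps) = 0
        B = A * eps - s * w
        C = lam1 - s * w * eps
        disc = B * B - 4.0 * A * C
        if disc < 0:
            continue                           # no real stationary point on this branch
        for t in ((-B + np.sqrt(disc)) / (2 * A), (-B - np.sqrt(disc)) / (2 * A)):
            if t > 0 and obj(s * t) < best_val:
                best_b, best_val = s * t, obj(s * t)
    return best_b
```

With lam1 = lam2 = 0 the update reduces to the ordinary least-squares value w, and for small |w| with a sizable lam1 it returns exactly zero, which is the sparsity behavior the operator is designed for.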
According to the different thresholding operators, we also discuss three properties that the coefficient estimator should satisfy, as shown in Fig. 2: (a) Unbiasedness: the resulting estimator is nearly unbiased when the true unknown parameter is large, to avoid unnecessary modeling bias; (b) Sparsity: the resulting estimator is a thresholding rule, which automatically sets small estimated coefficients to zero to reduce model complexity; (c) Continuity: the resulting estimator is continuous, to avoid instability in model prediction. Figure 2 shows the four regularization methods: the L 1 , L EN , HLR and LogSum + L 2 penalties with an orthogonal design matrix in the regression model. The estimators of L 1 and L EN are biased, whereas the HLR penalty is asymptotically unbiased. Similar to the HLR method, the LogSum + L 2 approach also performs better than L 1 and L EN in terms of unbiasedness. All four regularization methods fulfil the requirements of sparsity and continuity.
Redefine the partial residual for fitting the current β_j as Z_i^(j) = Σ_{k≠j} x_ik β̂_k, the fitted value that excludes the contribution of the jth feature. A pseudocode of the coordinate descent algorithm for the LogSum + L 2 penalized logistic regression model is shown in Algorithm 1 (Fig. 3). Figure 3. Flowchart of the coordinate descent algorithm for the LogSum + L 2 penalized logistic regression model.
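The structure of Algorithm 1 can be sketched as an IRLS (iteratively reweighted least squares) outer loop around the one-at-a-time updates. The code below is our own simplified interpretation, not the paper's exact implementation: the thresholding operator is passed in as a parameter, so the soft, SCAD, MCP or LogSum + L2 rule can all be plugged into the same loop:

```python
import numpy as np

def cd_penalized_logistic(X, y, threshold, lam, n_iter=50, tol=1e-6):
    # Coordinate-descent sketch for penalized logistic regression.
    # threshold(w, lam) is any univariate thresholding operator.
    n, p = X.shape
    beta0, beta = 0.0, np.zeros(p)
    for _ in range(n_iter):
        eta = beta0 + X @ beta
        pr = 1.0 / (1.0 + np.exp(-eta))
        w = np.clip(pr * (1 - pr), 1e-5, None)       # IRLS weights
        z = eta + (y - pr) / w                       # working response
        beta_old = beta.copy()
        beta0 = np.average(z - X @ beta, weights=w)  # update the intercept
        for j in range(p):                           # one-at-a-time updates
            # partial residual: remove all fitted contributions except feature j
            r_j = z - beta0 - X @ beta + X[:, j] * beta[j]
            wj = np.sum(w * X[:, j] * r_j) / np.sum(w * X[:, j] ** 2)
            beta[j] = threshold(wj, lam)
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta0, beta
```

With soft thresholding this reduces to a Lasso-penalized logistic fit in the style of Friedman et al.'s coordinate descent; substituting the LogSum + L2 univariate update recovers the spirit of Algorithm 1.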

Experiments, experimental results and discussion
Analysis on simulated data. In this section, we analyze the performance of the proposed method (the LogSum + L 2 penalized logistic regression model) by simulation. We compare the proposed method with three other methods: logistic regression with the L 1 , L EN and HLR regularizations. We simulate data from the true model

y = Xβ + σε,

where X ∼ N(0, 1), ε is the independent random error, and σ is the parameter that controls the signal-to-noise ratio. Two scenarios are presented here; in each, the dimension of features is 1000, and ρ denotes the correlation coefficient of the group features. The details of the two scenarios are as follows.
In this example, there is one set of related features. The ideal sparse regression method should select the 5 real features and set the other 995 noise features to zero. 2. In Scenario 2, we set σ = 0.4; the dataset consists of 400 observations and contains two group features. In this experiment, we set the correlation coefficient ρ of the features to 0.2 and 0.6 respectively, in order to observe the test accuracy under different correlations. The L 1 and L EN approaches were executed using Glmnet (http://web.stanford.edu/~hastie/glmnet_matlab/, MATLAB version 2014a). We use the tenfold cross-validation (CV) approach to optimize the regularization (tuning) parameters, which balance the tradeoff between data fit and model complexity, of the L 1 , L EN , HLR and LogSum + L 2 approaches.
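A design with correlated group features can be reproduced with an equicorrelated construction. The sketch below is our interpretation of the setup: the function name, the unit coefficients on the signal features, and the binarization step are our assumptions, not taken from the paper:

```python
import numpy as np

def simulate_group_data(n, p, group_idx, rho, sigma, seed=0):
    # Features in group_idx share pairwise correlation rho; the rest are
    # independent N(0, 1) noise features.
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    base = rng.standard_normal(n)
    for j in group_idx:
        # Equicorrelated construction: corr(X_j, X_k) = rho for j != k in the group.
        X[:, j] = np.sqrt(rho) * base + np.sqrt(1 - rho) * rng.standard_normal(n)
    beta = np.zeros(p)
    beta[group_idx] = 1.0                       # only group features carry signal
    eta = X @ beta + sigma * rng.standard_normal(n)
    y = (1 / (1 + np.exp(-eta)) > 0.5).astype(float)  # binarize for classification
    return X, y, beta
```

The equicorrelated trick (shared latent `base` plus independent noise) keeps each feature marginally N(0, 1) while giving exactly the target within-group correlation in expectation.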
At the beginning, we divided the datasets at random into training sets and test sets; approximately 70% of the samples are used as training sets, and the rest as test sets. We repeated the simulations 30 times for each penalty method and computed the mean classification accuracy, sensitivity and specificity on the training and test datasets respectively. To evaluate the quality of the features selected by the regularization approaches, the sensitivity and specificity of feature selection 39 were defined as follows:

β-Sensitivity = |β .* β̂|_0 / |β|_0,  β-Specificity = |¬β .* ¬β̂|_0 / |¬β|_0,

where .* is the element-wise product, |·|_0 counts the number of non-zero elements in a vector, and ¬β and ¬β̂ denote the logical "not" operators applied to the vectors β and β̂.
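In code, the two feature-selection metrics are a direct transcription of the definitions above (a small sketch with our own function names):

```python
import numpy as np

def beta_sensitivity(beta_true, beta_hat):
    # |beta .* beta_hat|_0 / |beta|_0: fraction of true features recovered.
    return np.count_nonzero(beta_true * beta_hat) / np.count_nonzero(beta_true)

def beta_specificity(beta_true, beta_hat):
    # |not(beta) .* not(beta_hat)|_0 / |not(beta)|_0:
    # fraction of noise features correctly set to zero.
    nb, nbh = (beta_true == 0), (beta_hat == 0)
    return np.count_nonzero(nb & nbh) / np.count_nonzero(nb)
```

Both metrics equal 1 only when the support of the estimate matches the true support exactly.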
The training results of the different methods on the simulated datasets are reported in Table 1. As can be seen, for all scenarios our proposed LogSum + L 2 procedure generally achieves higher or comparable classification performance than the L 1 , L EN and HLR methods. For example, in Scenario 1 with ρ = 0.6, our proposed method attained 97.86% accuracy, 95.38% sensitivity and 100% specificity, each about 6% higher than the other methods. In both Scenario 1 and Scenario 2, for both ρ = 0.2 and ρ = 0.6, the LogSum + L 2 method always shows the highest training accuracy. In summary, across different scenarios and different values of ρ, the LogSum + L 2 penalized logistic regression model is consistently the best. Table 2 shows the test results of the different methods on the simulated datasets. The performance of the LogSum + L 2 penalized logistic regression model is still the best among the four methods. In Scenario 1, whether ρ = 0.2 or ρ = 0.6, the LogSum + L 2 approach shows similar values; in Scenario 2, the sensitivity of the LogSum + L 2 model differs markedly from the other three methods, while its accuracy and specificity are not much different. Table 3 shows the feature selection results (β-Sensitivity and β-Specificity) of all the competing regularization methods; they are similar to those in the previous two tables.
Analysis of real data. We use three publicly available lung cancer microarray datasets, downloaded from GEO (https://www.ncbi.nlm.nih.gov/geo/). Their details are given below: 1. GSE10072: Series GSE10072 is a gene expression signature of cigarette smoking and its role in lung adenocarcinoma development and survival. Tobacco smoking causes 90% of lung cancer cases, but the molecular changes that lead to cancer development and affect survival are still unclear. 2.
GSE19188: Series GSE19188 is a gene expression dataset for early-stage non-small-cell lung carcinoma (NSCLC). 156 tumor and normal samples are aggregated into the expected groups. A prognostic signature of 17 genes showed the best correlation with post-surgery survival time. 3. GSE19804: Series GSE19804 is a dataset from a genome-wide screening of transcriptional modulation in non-smoking female lung cancer in Taiwan. Although smoking is a major risk factor for lung cancer, only 7% of women with lung cancer in Taiwan have a history of smoking, much lower than the proportion among white women. The researchers extracted RNA from paired tumor and normal tissues for gene expression analysis to explain this phenomenon. This dataset and its reports comprehensively analyze the molecular characteristics of lung cancer in non-smoking women in Taiwan.
The GSE10072 dataset contains 22,284 microarray gene expression profiles, and GSE19188 and GSE19804 both contain 54,675 microarray gene expression profiles. As with the simulated data, we randomly divide each dataset so that 70% of the samples become training samples and 30% become test samples. A brief summary of these datasets is given in Table 4. Table 5 reports the average training and test accuracies obtained by the different variable selection methods on the three datasets. It is easy to see that the performance of the LogSum + L 2 penalized logistic regression model is better than the other three approaches. For example, in terms of training accuracy on the GSE10072 dataset, the LogSum + L 2 approach reached 99.43%, while the other three methods reached 98.32%, 99.04% and 98.21% respectively. On the GSE19188 dataset, the test accuracy of the LogSum + L 2 method is 75%, while the other three methods achieve 51.46%, 47.56% and 46.19% respectively. Regarding the number of selected genes, the LogSum + L 2 penalized logistic regression model always selects the fewest genes and the L EN approach selects the most.
In order to find the common gene signatures selected by the different methods, we used the VENNY software to generate Venn diagrams. As shown in Fig. 4, we consider the common gene signatures selected by the logistic regression model with the L 1 , L EN , HLR and LogSum + L 2 regularizations, which are the signatures most relevant to lung cancer. Many genes selected by the LogSum + L 2 penalized logistic regression model do not appear in the results of the other three regularization methods. For example, the LogSum + L 2 approach selects 5, 6, and 3 unique genes from the GSE10072, GSE19188 and GSE19804 datasets respectively. This means that, compared with the other three regularization methods, the LogSum + L 2 penalized logistic regression model can find different genes and pathways related to lung cancer. Figures 5, 6 and 7 show the interaction networks of all the features selected by the LogSum + L 2 penalized logistic regression model. The integrative networks among these selected features were obtained from cBioPortal using public lung cancer datasets. The circles with thick borders represent the selected genes, and the remaining circles are gradient color-coded according to the genes' alteration frequencies in the databases. The hexagons represent target drugs; those in yellow represent drugs approved by the FDA. The links connecting selected genes indicate that they have regulation correlations with a group effect. In the GSE10072 dataset (Fig. 5), we find the gene EGFR, which has been confirmed as an important target gene of NSCLC 40 . It belongs to the ERBB receptor tyrosine kinase family, which includes other genes such as HER2, HER3 and HER4. Owing to the observed patterns of oncogenic mutation of EGFR and HER2, many research works report them as attractive options for targeted therapy in patients with NSCLC.
As shown in Fig. 6, three important genes, TUBB1, PRKD1 and STK11, were selected from the GSE19188 dataset, and PRKD1 and STK11 show a regulation correlation with a group effect. In fact, many drugs have been developed to target the gene TUBB1, and many research works report that PRKD1 and STK11 significantly influence patients' survival rates across all tumors 41 .
As shown in Fig. 7, four important genes, EPCAM, SMC3, HIST1H2BL and LMNA, together with their regulation correlations with group effect, were selected from the GSE19804 dataset. Many research works report that the epithelial cell adhesion molecule (EPCAM), a tumor-associated calcium signal transducer, acts as a true oncogene, and study the relationship between EPCAM and NSCLC 42 . Table 6 summarizes the genes selected by the LogSum + L 2 penalized logistic regression model. At the beginning of the experiments, the genes were identified by probe set ID; we transformed probe set IDs to gene symbols using the DAVID website (https://david.ncifcrf.gov). According to the experimental results, the LogSum + L 2 penalized logistic regression model can find some unique genes that cannot be identified by the other regularization models but are significantly related to the disease. Therefore, we believe that the LogSum + L 2 penalized logistic regression model can accurately and efficiently identify cancer-related genes.

Discussion and conclusion
Successful identification of gene biomarkers and biological pathways can significantly improve the accuracy of diagnosis and help machine learning models achieve better performance in classifying different types of cancer. Many researchers have used logistic regression with optimization methods for binary cancer classification. However, the traditional logistic regression model has two obvious shortcomings: the feature selection and overfitting problems. In this paper, we proposed the LogSum + L 2 penalized logistic regression model. Our proposed method can not only select sparse features (biomarkers), but also identify groups of relevant features (gene pathways). A coordinate descent algorithm is used to solve the LogSum + L 2 penalized logistic regression model. We also evaluated the capability of our proposed method and compared its performance with other regularization methods. The results of simulations and real experiments indicate that the proposed method is highly competitive among several state-of-the-art methods. A disadvantage of the proposed method is that its three regularization parameters need to be tuned by the k-fold cross-validation approach. In recent years, increasing numbers of associations between microRNAs (miRNAs) and human diseases have been identified. Based on accumulating biological data, many computational models for inferring potential miRNA-disease associations have been developed [43][44][45][46] . As a future direction of our research, we will apply the proposed LogSum + L 2 penalized logistic regression model to identify non-coding RNA biomarkers of human complex diseases.