Protein Secondary Structure Prediction Based on Data Partition and Semi-Random Subspace Method

Protein secondary structure prediction is one of the most important and challenging problems in bioinformatics. Machine learning techniques have been applied to solve the problem and have gained substantial success in this research area. However, there is still room for improvement toward the theoretical limit. In this paper, we present a novel method for protein secondary structure prediction based on data partition and a semi-random subspace method (PSRSM). Data partitioning is an important strategy for our method. First, the protein training dataset was partitioned into several subsets based on the length of the protein sequence. Then we trained base classifiers on the subspace data generated by the semi-random subspace method, and combined the base classifiers into an ensemble classifier on each subset by the majority vote rule. Multiple classifiers were thus trained on different subsets, and each was used to predict the secondary structures of proteins whose sequence lengths matched its subset. Experiments were performed on the 25PDB, CB513, CASP10, CASP11, CASP12, and T100 datasets, and Q3 accuracies of 86.38%, 84.53%, 85.51%, 85.89%, 85.55%, and 85.09% were achieved, respectively. Experimental results showed that our method outperforms other state-of-the-art methods.

Many statistical and machine learning approaches have been developed to predict secondary structure. One of the first approaches for predicting protein secondary structure used a combination of statistical and heuristic rules 4,5 . The GOR 6 method formalizes the secondary structure prediction problem within an information-theoretic framework. Position-specific scoring matrices (PSSM) 7 based on PSI-BLAST 8 reflect evolutionary information and have produced the most significant improvements in protein secondary structure prediction. Many machine learning methods have been developed to predict protein secondary structure and exhibit good performance by exploiting evolutionary information, as well as statistical information about amino acid subsequences 9 . For example, many neural network (NN) [10][11][12][13][14] methods, hidden Markov models (HMM) [15][16][17] , support vector machines (SVM) [18][19][20][21] , and K-nearest neighbors 22 have had substantial success, and Q3 accuracy has reached 80%. The prediction accuracy has been continuously improved over the years, especially by using hybrid or ensemble methods and by incorporating evolutionary information in the form of profiles extracted from alignments of multiple homologous sequences 23 . Recently, several papers used deep learning networks [24][25][26][27][28] to predict protein secondary structure and achieved good results. The highest Q3 accuracy without relying on structure templates is now 82-84% 3 . DeepCNF 27 is a deep learning extension of conditional neural fields (CNF), which integrates conditional random fields and shallow neural networks. The overall performance of DeepCNF is significantly better than that of other state-of-the-art methods, breaking the long-standing ~80% accuracy barrier. Recently, SPIDER3 improved the prediction of protein secondary structure by capturing non-local interactions using long short-term memory bidirectional recurrent neural networks 29 .
In the paper 30 , a new deep inception-inside-inception network, called MUFOLD-SS, was proposed for protein secondary structure prediction. SPIDER3 and MUFOLD-SS achieved better performance than DeepCNF.
In this paper, we present a data partition and semi-random subspace method (PSRSM) for protein secondary structure prediction. The first step was partitioning the protein training dataset into several subsets based on the lengths of the protein sequences. The second step was generating subspaces by the semi-random subspace method, training base classifiers on the subspaces, and then combining them by the majority vote rule on each subset. Fig. 1 illustrates our PSRSM experimental framework.
A key step of our method was to partition the training dataset into several subsets according to the length of the protein. The length of a protein sequence is the number of amino acids (AAs) in the sequence. We then trained base classifiers in parallel on subspace data generated by the semi-random subspace method and combined them on each subset. In the conventional random subspace method, low-dimensional subspaces are generated by random sampling of the original high-dimensional space. In order to obtain a well-performing ensemble, we propose a semi-random subspace method for protein secondary structure prediction. This method ensured that the base classifiers were as accurate and diverse as possible. We used support vector machines (SVMs) as the base classifier. Support vector machines are a popular machine learning method for classification, regression, and other learning tasks. Compared to other machine learning methods, SVMs have the advantages of high performance, absence of local minima, and the ability to deal with multidimensional datasets in which complex relationships exist among data elements. Support vector machines have had substantial success in protein secondary structure prediction.
Experimental results show that the overall performance of PSRSM was better than the current state-of-the-art methods.
In this research, we combined the ASTRAL and CullPDB datasets as our training dataset, i.e., the ASTRAL + CullPDB dataset. The CullPDB dataset was selected based on a percentage identity cutoff of 25%, a resolution cutoff of 3 angstroms, and an R-factor cutoff of 0.25; it contains 12,288 proteins. The ASTRAL dataset had 6,892 proteins with less than 25% sequence identity. After removing all duplicated proteins, our training dataset ASTRAL + CullPDB had 15,696 proteins.
Publicly available datasets CASP10, CASP11, CASP12, CB513, and 25PDB were used to evaluate our method and compare it with SPINE-X 38 , JPRED 39 , PSIPRED 40 , and DeepCNF. 99 proteins of the CASP10 dataset, 81 proteins of the CASP11 dataset, and 19 proteins of the CASP12 dataset were selected according to the availability of crystal structures. The CB513 dataset has 513 protein sequences; any two proteins of CB513 share less than 25% sequence identity with each other. The 25PDB dataset was selected with low sequence similarity of no more than 25% and has 1673 proteins, consisting of 443 all-α, 443 all-β, 346 α/β, and 441 α + β. Note that the number of proteins in these datasets may differ from those reported in other published papers, because we only used the proteins that were available online (http://www.rcsb.org/) and for which PSSMs could be generated.
In addition, we randomly downloaded 100 new proteins (T100) released after 1 January 2018 from http://www.rcsb.org/. The T100 dataset contains 100 proteins with sequence lengths ranging from 18 to 1460. We used T100 to test PSRSM and DeepCNF using our online server and their online server RaptorX-Property, which was ranked first in secondary structure prediction.
Because the T100 dataset was released after 1 January 2018, it shares no duplicated proteins with our training dataset; all our training data were collected before February 2017.
Performance measures. Several different measures can be used to measure secondary structure prediction accuracy, the most common being Q3. The Q3 accuracy is defined as the percentage of residues for which the predicted secondary structures are correct. Q3 is calculated as follows:

Q3 = (N_H + N_E + N_C) / N × 100%,

where N_H, N_E, and N_C are the numbers of correctly predicted secondary structures of type helix, strand, and coil, respectively, and N is the total number of residues (amino acids). We calculate the average accuracy over the whole test dataset and use the average Q3 to evaluate the performance of our model. The average Q3 is defined as

average Q3 = (1/n) Σ_{i=1}^{n} Q3(X_i),

where n is the number of protein sequences that have valid predicted results in the test dataset, X_i denotes a protein sequence, and Q3(X_i) is the Q3 accuracy of X_i.
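As a concrete illustration, the two measures above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; per-residue labels are assumed to be strings over {H, E, C}:

```python
def q3_accuracy(predicted, observed):
    """Q3: percentage of residues whose predicted state (H, E, or C) matches
    the observed one; the number of matches is N_H + N_E + N_C from the text."""
    assert len(predicted) == len(observed)
    correct = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * correct / len(observed)

def average_q3(pairs):
    """Average Q3 over the n proteins with valid predictions; `pairs` holds
    (predicted, observed) label strings, one pair per protein X_i."""
    return sum(q3_accuracy(p, o) for p, o in pairs) / len(pairs)
```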

Performance.
We used Q3 accuracy to compare our PSRSM method with other state-of-the-art methods, SPINE-X, PSIPRED, JPRED, and DeepCNF, on four publicly available datasets (CASP10, CASP11, CASP12, and CB513). Table 1 shows the Q3 accuracy of PSRSM and the other state-of-the-art methods on the four datasets. The experimental results show that PSRSM significantly outperforms SPINE-X, PSIPRED, and JPRED. Moreover, PSRSM had 1-3% higher Q3 accuracy than DeepCNF. We also tested our method on the 25PDB dataset with 1673 proteins, and the Q3 accuracy was 86.38%. In addition, we compared our proposed method to DeepCNF using our online server (http://210.44.144.20:82/protein_PSRSM/default.aspx) and their online server RaptorX-Property (http://raptorx.uchicago.edu/StructurePropertyPred/predict/) on the T100 dataset. Table 2 lists the Q3 accuracy of PSRSM and DeepCNF for each protein. The average Q3 accuracy of PSRSM was 2.5% higher than that of DeepCNF. In addition, we analyzed the Q3 accuracy of predicted secondary structures in internal regions and at boundaries 2 . Here, we defined a helical/sheet residue as internal if its two nearest neighboring residues were also helical/sheet residues; we defined it as a boundary residue if one or both of the nearest neighbors had a different secondary structural assignment. The overall Q3 accuracies of PSRSM and DeepCNF, respectively, were 89.89% and 85.68% in internal regions, and 75.33% and 73.30% at boundaries. We also compared our method with other state-of-the-art methods (SPIDER3, MUFOLD, PSIPRED, and JPRED) using their online servers on the T100 dataset in Table 3. The newly updated MUFOLD and SPIDER3 obtained 89.28% and 88.25% in internal regions, and 74.65% and 70.72% at boundaries. We can see that PSRSM was superior to the current state-of-the-art methods not only in internal regions, but also at boundaries.

Discussion
Reason for partitioning training datasets according to protein length rather than randomly.
Our training data was the ASTRAL + CullPDB dataset, which had 15,696 proteins and 3,863,231 amino acids (AAs). Since training support vector machines on such a large dataset is a very slow process, the first step of our method was partitioning the training data into several different subsets and training SVMs in parallel. If we partitioned the training data randomly, it would only reduce the computation time, not increase the prediction accuracy 41 . The length of a protein sequence is the number of amino acids in the sequence. Protein length is an important feature of a protein because it influences protein structure. For example, the short sequence 'VVDALVR' formed 'EEEEEE' in six proteins: 1by5_A, 1qfg_A, 1qff_A, 1fcp_A, 1fi1_A, and 2fcp_A, whose lengths are 714, 725, 725, 705, 707, and 723, respectively. Meanwhile, 'VVDALVR' formed 'HHHHHH' in one protein (3vtz_A), whose length was 269. These data can be downloaded at prodata.swmed.edu/chseq 42 . An identical amino acid sequence can thus adopt different types of secondary structure in proteins of different lengths, because protein length affects both the local and the long-range interactions of the protein. Based on the above considerations, we partitioned the training datasets according to protein length to cluster similar proteins in the training data.
In order to validate the effectiveness of our data partitioning strategy, we conducted another experiment. We generated a subset of the ASTRAL + CullPDB dataset randomly instead of according to protein length, and similarly trained SVM base classifiers on the subset. Then we combined them into an ensemble (Classifier_C). We compared Classifier_C with our PSRSM_1, and Table 4 shows that the performance of PSRSM_1 is quite similar to that of Classifier_C on the CB513 dataset, but significantly better on the subset with protein length L ∈ [1, 100]. The main difference between the two classifiers was the training set. All training proteins of PSRSM_1 were short proteins with similar lengths, all belonging to the interval [1, 100]; conversely, the lengths of the Classifier_C training data were randomly distributed. Table 5 shows the performance on the T100 dataset for different length ranges based on the 6 PSRSMs. The 6 protein subsets with different lengths achieved their best performance, 79.84%, 84.58%, 87.59%, 87.51%, 83.24%, and 83.93%, respectively, using their corresponding PSRSMs.
Training time analysis. Another advantage of our method is that the training time was short. Because our training data ASTRAL + CullPDB is a large dataset, training an SVM classifier on it directly was very slow; in fact, we failed to train a single SVM classifier on ASTRAL + CullPDB using our server.
The computational complexity of training an SVM 43 is O(N_S^3 + N_S^2 N + N_S N_f N), where N_S is the number of support vectors, N_f is the feature dimension, and N is the size of the training set. After data partitioning and sampling, the number of support vectors N_S, the feature dimension N_f, and the training set size N are all much smaller. Furthermore, since we trained our base classifiers in parallel, the running time was further reduced. Table 6 shows the training time on each subset of ASTRAL + CullPDB, where D_1, D_2, …, D_5 and D_6 are the subsets of ASTRAL + CullPDB (Table 7). After data partitioning but before sampling, we completed training of the SVM classifiers on each subset; training on D_3 required the most time because D_3 had more amino acids than the other subsets. When we used PSRSM, the feature dimension decreased, and the training time was reduced accordingly.

Conclusion and Future Work
In this paper we proposed a novel method, PSRSM, to predict protein secondary structure. The first step of our method was partitioning the training set into several subsets based on protein length. In the second step, we generated k ensemble classifiers using the semi-random subspace method. Given a new query protein sequence, our method selects one, and only one, of the k ensemble classifiers according to the sequence length to predict the protein secondary structure. Experimental results showed that the overall performance of PSRSM was better than that of other current state-of-the-art methods. In particular, PSRSM is superior to other methods not only in internal regions, but also at boundaries.

Methods
Partitioning the training data. We partitioned the training data into k different subsets according to the protein sequence length. Let X denote a protein sequence, and L denote the length of X. We set k − 1 partition points r_1, r_2, …, r_{k−1} that satisfy 0 < r_1 < r_2 < … < r_{k−1}. These partition points partition the interval (0, ∞) into k disjoint intervals.

Table 2. Q3 accuracy of PSRSM and DeepCNF for each protein in the T100. (If a protein sequence has more than 4000 or fewer than 26 amino acids, the DeepCNF online server reports an error.)

Table 3. PSRSM, DeepCNF, SPIDER3, MUFOLD, PSIPRED and JPRED average Q3 accuracies, and Q3 accuracies in the internal regions and at boundary regions of secondary structures, on the T100. (The DeepCNF method is available only for proteins with a length in [26, 4000]; for MUFOLD the range is [30, 700], and for JPRED [20, 800].)

Let D denote the training data ASTRAL + CullPDB. Subsets D_1, D_2, …, D_{k−1} and D_k are defined by these length intervals: D_i = {X ∈ D : L ∈ (r_{i−1}, r_i]}, with r_0 = 0 and r_k = ∞. Table 7 shows the number of proteins and amino acids in each subset.

Training classifiers. We generated t random subspaces of dimension r and trained t SVM base classifiers on each subset D_i; the t feature subsets are used to train the t base classifiers, and each feature subset has r features sampled from the 260-dimensional data.
Therefore we obtained k × t SVM base classifiers on the k subsets. We denote these classifiers as a k × t matrix C = (C_ij), where k is the number of subsets of the training data and C_ij is the SVM base classifier trained on the jth subspace data of subset D_i. On each subset, we combined the classifiers C_i1, C_i2, …, C_it into a final ensemble classifier by the majority vote rule, and thus obtained k ensemble classifiers as the final decision makers, denoted as

PSRSM_i = Voting(C_i1, C_i2, …, C_it), i = 1, 2, …, k. (7)

Here 'Voting' means combining classifiers by the majority vote rule, and PSRSM_i represents the final ensemble classifier on subset D_i. In this study, the number of base classifiers t was set to 12, and the dimension of the subspaces r was 160 in our experiment.
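The majority vote combination can be sketched as below. The base classifiers are stand-in callables here; in the paper they are LIBSVM models, so this is only an illustrative sketch:

```python
from collections import Counter

def majority_vote(labels):
    """Combine the t base classifiers' labels for one residue by majority
    vote; ties are broken in favour of the earliest-seen label."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(base_classifiers, x):
    """PSRSM_i(x): every base classifier C_ij votes on the residue features x."""
    return majority_vote([clf(x) for clf in base_classifiers])
```

Applied residue by residue, this yields the predicted secondary structure string for a protein.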
The publicly available LIBSVM 44 software was used to train the SVM classifiers. Several kernel functions are commonly used in SVMs: linear, polynomial, and radial basis. In this paper, we used the radial basis function (RBF) kernel, K(x_i, x_j) = exp(−γ‖x_i − x_j‖^2), where γ is a parameter. C is another parameter for SVM training; it is the regularization factor that controls the balance between low training error and a large margin. Parameters C and γ were chosen using the grid search method. The optimal values of the two parameters are 0.9956 and 0.065, respectively.
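The RBF kernel and the grid search can be sketched as follows. Here `train_and_score` is a hypothetical stand-in for training a LIBSVM model with a given (C, γ) pair and returning its validation accuracy; it is not part of the paper's code:

```python
import math

def rbf_kernel(x, z, gamma):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def grid_search(train_and_score, Cs, gammas):
    """Evaluate every (C, gamma) pair on a validation score and keep the best."""
    best_score, best_C, best_gamma = max(
        (train_and_score(C, g), C, g) for C in Cs for g in gammas)
    return best_C, best_gamma
```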
Prediction. Given a new query protein sequence X with length L, our method selects one, and only one, ensemble classifier from the k ensemble classifiers {PSRSM_1, PSRSM_2, …, PSRSM_k} according to the length L to predict the protein secondary structure of X. Let Ỹ denote the prediction output by PSRSM. Then Ỹ = PSRSM_i(X) if L ∈ (r_{i−1}, r_i], where PSRSM_i is defined as in (7).
For example, if a new query protein sequence X is a short protein and L ∈ (0, r_1], then the corresponding PSRSM_1, trained on the short-protein subset, is used to predict its secondary structure. In general, if L ∈ (r_{i−1}, r_i], the ith classifier PSRSM_i is selected from the k ensemble classifiers to predict the protein secondary structure of X.
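The length-based dispatch can be sketched as below. The partition points are hypothetical placeholders, since the actual values depend on how ASTRAL + CullPDB was split (Table 7):

```python
from bisect import bisect_left

# Hypothetical partition points r_1 < r_2 < ... < r_{k-1} (k = 6 here,
# matching the six PSRSMs used on T100); NOT the paper's actual values.
PARTITION_POINTS = [100, 200, 300, 400, 500]

def select_classifier(length, classifiers, points=PARTITION_POINTS):
    """Return the one classifier whose interval (r_{i-1}, r_i] contains
    `length`; lengths above r_{k-1} fall into the final interval."""
    return classifiers[bisect_left(points, length)]
```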

Semi-Random Subspace Method (SRSM).
The random subspace method (RSM) is an ensemble construction technique proposed by Ho in 1998 45 . RSM randomly samples a set of low-dimensional subspaces from the original high-dimensional feature space, constructs a classifier on each smaller subspace, and finally applies a combination rule for the final decision.
We proposed a semi-random subspace method for protein secondary structure prediction. In our research, each protein sequence was represented by a 260 × L matrix. The ith column vector represents features of the ith amino acid residue. We generated t feature subsets to train t base classifiers. Each subset had r features sampled from the 260-dimensional dataset.
Because the original PSSM of the associated residue is an important feature for the base classifier, the 20 dimensions at the central location of the 260-dimensional data are kept fixed in every sampling.
Let S represent the 260-dimensional feature vector, with |S| = 260. We generated t subspaces {S_i} (i = 1, …, t) from S, where each S_i is a feature subset sampled from S with |S_i| = r. Two parameters must be determined for the semi-random subspace method: the number of subspaces t, and the dimension of the subspaces r.
Since D_1 was smaller than the other subsets, the training time on D_1 was shorter than on the others. Therefore we conducted a series of experiments on D_1 to determine t and r. We fixed t = 12 because this requires t × r to be divisible by 120, making it easy to set r. Experimental results on the CB513 dataset showed that Q3 accuracy increased with increasing r, but when r > 160 the accuracy increased only slowly (Fig. 2) while the training time became much longer. So we set r = 160 as the dimension of the subspaces in our experiment.
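The sampling scheme can be sketched as follows, assuming (as described under Input features) that the central residue's 20 PSSM values occupy the middle 20 of the 260 dimensions; this is an illustrative sketch, not the paper's implementation:

```python
import random

WINDOW, AA_DIM = 13, 20
TOTAL = WINDOW * AA_DIM                          # 260 features per residue
CENTER = set(range((WINDOW // 2) * AA_DIM,       # indices 120-139: the central
                   (WINDOW // 2 + 1) * AA_DIM))  # residue's own PSSM row

def semi_random_subspace(r, rng=random):
    """Draw an r-dimensional subspace: the 20 central features are always
    kept, and the remaining r - 20 indices are sampled at random."""
    rest = [i for i in range(TOTAL) if i not in CENTER]
    return sorted(list(CENTER) + rng.sample(rest, r - len(CENTER)))

# t = 12 subspaces of dimension r = 160, as in the experiments.
subspaces = [semi_random_subspace(160, random.Random(i)) for i in range(12)]
```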
Input features. The PSSM of a protein sequence represents homology information from its aligned sequences. We used the PSI-BLAST program to generate the PSSM data. PSI-BLAST used the BLOSUM62 evolutionary matrix to search a reduced version of NCBI's non-redundant (NR) database, filtered at 90% sequence similarity, in order to capture the variability of each residue within a multiple sequence alignment. The PSI-BLAST parameters were set to a threshold h = 0.001 and j = 3 iterations. The resulting PSSM is a 20 × L matrix, where L is the protein length and 20 is the number of amino acid types.
A sliding window of consecutive amino acids was used to capture local sequence information and predict the secondary structure of the central residue. Each residue was encoded by a vector of dimension 20 × w, where w is the sliding window size and is an odd number. The window was shifted from residue to residue along the protein chain. In this paper, the sliding window length w was set to 13. To handle the first and last six amino acids, we padded six zero columns before and after each protein sequence. Therefore each protein sequence was represented by a 260 × L matrix, and the ith column vector represented the protein features associated with the ith residue.
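A sketch of the window encoding with NumPy (the PSSM here is an arbitrary L × 20 array; zero padding supplies the missing neighbours at both ends):

```python
import numpy as np

def window_features(pssm, w=13):
    """Encode each residue by the PSSM rows of the length-w window centred
    on it. `pssm` has shape (L, 20); the result has shape (L, 20 * w),
    i.e. 260 features per residue for w = 13."""
    half = w // 2
    pad = np.zeros((half, pssm.shape[1]))
    padded = np.vstack([pad, pssm, pad])
    return np.stack([padded[i:i + w].ravel() for i in range(pssm.shape[0])])
```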
Secondary structure assignment was done with DSSP. The DSSP program defines eight states of secondary structure (H, E, B, T, S, L, G, and I), which are reduced to three states (H, E, and C) by different prediction methods. We used the following reduction: H, G, and I to helix (H); E and B to beta strand (E); all the rest to coil (C).
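The eight-to-three reduction is a simple lookup; a minimal sketch:

```python
# The paper's mapping: H, G, I -> helix (H); E, B -> strand (E);
# everything else (T, S, L, ...) -> coil (C).
DSSP_TO_Q3 = {"H": "H", "G": "H", "I": "H", "E": "E", "B": "E"}

def reduce_to_q3(dssp_states):
    """Map a string of DSSP eight-state labels to three-state H/E/C labels."""
    return "".join(DSSP_TO_Q3.get(s, "C") for s in dssp_states)
```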