Discriminative and Distinct Phenotyping by Constrained Tensor Factorization

Adoption of Electronic Health Record (EHR) systems has led to the collection of massive healthcare data, which creates both opportunities and challenges for studying them. Computational phenotyping offers a promising way to convert the sparse and complex data into meaningful concepts that are interpretable to healthcare givers. We propose a novel supervised nonnegative tensor factorization methodology that derives discriminative and distinct phenotypes. We represented co-occurrences of diagnoses and prescriptions in EHRs as a third-order tensor and decomposed it using the CP algorithm. We evaluated the discriminative power of our models on an intensive care unit database (MIMIC-III) and demonstrated superior performance to state-of-the-art ICU mortality calculators (e.g., APACHE II, SAPS II). Examples of the resulting phenotypes are sepsis with acute kidney injury, cardiac surgery, anemia, respiratory failure, heart failure, cardiac arrest, metastatic cancer (requiring ICU), end-stage dementia (requiring ICU and transitioned to comfort care), intraabdominal conditions, and alcohol abuse/withdrawal.

factorization, decomposes time-series matrix data from EHRs into latent medical concepts [20][21][22]. Most recently, nonnegative tensor factorization (NTF) has become particularly popular due to its ability to capture high-dimensional data. It generates latent medical concepts from interactions between components of multiple information sources [23][24][25][26][27]. Ho et al. first introduced NTF for phenotyping 23,24. They define phenotypes as sets of co-occurring diagnoses and prescriptions, and obtain the phenotypes from a latent representation of the co-occurrences. They use Kullback-Leibler divergence to decompose the observed co-occurrences, which follow a Poisson distribution, based on CP decomposition. Ho et al. also incorporate sparsity constraints by thresholding negligibly small values. Wang et al. enforce orthogonality constraints on NTF to derive less overlapping phenotypes 25. Another NTF approach, based on Tucker decomposition, discovers (high-order) feature subgroups by decomposing the tensor into a core tensor multiplied by an orthogonal factor matrix for each mode. It uses the core tensor to encode interactions among elements in each mode 26,28.
One important characteristic that phenotypes should have is to be discriminative with respect to a certain clinical outcome of interest, such as mortality, readmission, or cost. So far, however, there has been little consideration of discriminative phenotypes associated with specific clinical outcomes. Discriminative phenotypes can benefit clinicians because they can directly apply the phenotypes to their daily practice to improve the clinical outcome of interest. For example, clinicians can use our phenotypes to evaluate patients' risk of hospital death, as the APACHE II or SAPS score does, and improve resource allocation and quality of care in ICUs. Membership in several different phenotypes can provide insight into a patient's situation beyond a single score. In addition, another crucial characteristic for phenotypes is to be distinct from each other, because otherwise clinicians cannot interpret and use the phenotypes easily. For example, let us say a patient suffers from hypertension and diabetes. To represent the patient, we can use a mixture of two phenotypes. We prefer Phenotype 1 = {hypertension, ACE inhibitors}, Phenotype 2 = {diabetes, insulin} to Phenotype 1 = {hypertension, ACE inhibitors, insulin}, Phenotype 2 = {diabetes}, because the former is more distinct and meaningful than the latter. Yet another critical concern about phenotypes is compactness. Generally speaking, a compact representation is preferable to a lengthy one for end users if both have the same discriminative power and distinctness. This paper proposes a new tensor factorization methodology for generating discriminative and distinct phenotypes. We defined phenotypes as sets of co-occurring diagnoses and prescriptions. We used a tensor to represent diagnosis and prescription information from EHRs, and decomposed the tensor into latent medical concepts (i.e., phenotypes).
To discriminate a high-risk group (high mortality), we incorporated the estimated probability of mortality from logistic regression during the decomposition process. We also found cluster structures of diagnoses and prescriptions using contextual similarity between the components, and absorbed the cluster structure into the tensor decomposition process.

Methods
We first describe the computational phenotyping method that we developed (Fig. 1) and the experiment design.
Phenotyping based on tensor factorization. We built a third-order tensor 𝒪 with co-occurrences of patients, diagnoses, and prescriptions from intensive care unit (ICU) EHRs. Detailed tensor construction can be found in Supplementary methods. The co-occurrence is a natural representation of interactions among many diagnoses and prescriptions. We focused only on diagnosis and prescription data, following previous phenotype definitions 29-31, but the tensor can be extended to a higher order (>3) to utilize additional data such as laboratory results and procedures. Specifically, we first built a matrix for each individual patient to represent associations between prescriptions and diagnoses. For example, let us say patient 1 is diagnosed with acute respiratory failure and hypertension, and is ordered the medicine phenylephrine during his or her admission. Then the co-occurrences of acute respiratory failure with phenylephrine and of hypertension with phenylephrine are each one (Fig. 2). Again, let us say patient I is diagnosed with Alzheimer's disease and is ordered the medicine morphine sulfate twice. Then the co-occurrence of Alzheimer's disease and morphine sulfate is 2. We collected the matrices from all patients and built the third-order observed tensor 𝒪. The entry at (i, j, k) of the tensor (i.e., 𝒪_{i,j,k}) is the number of co-occurrences of diagnosis j and prescription k for patient i.

Figure 1. Workflow of our phenotyping method. We constructed a tensor using the number of co-occurrences between diagnoses and prescriptions of each patient in EHRs. We then decomposed the tensor using the proposed constrained tensor factorization that incorporates regularizers for discriminative and distinct phenotypes. We defined a phenotype as a set of co-occurring diagnoses and prescriptions, which can be inferred from the decomposed tensors, and evaluated their discriminative and distinct power. We also selected the top 10 representative phenotypes and presented their meaning and usefulness.
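The construction above can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' preprocessing pipeline; the patient/diagnosis/prescription indices and counts below are hypothetical, mirroring the phenylephrine and morphine sulfate examples in the text.

```python
import numpy as np

# Toy sizes: I patients, J diagnoses, K prescriptions.
I, J, K = 3, 4, 3
tensor = np.zeros((I, J, K))

# One (patient, diagnosis, prescription) triple per co-occurrence event.
records = [
    (0, 0, 0),  # patient 1: acute respiratory failure + phenylephrine
    (0, 1, 0),  # patient 1: hypertension + phenylephrine
    (2, 3, 2),  # patient I: Alzheimer's disease + morphine sulfate
    (2, 3, 2),  # ...ordered twice, so this entry accumulates to 2
]
for i, j, k in records:
    tensor[i, j, k] += 1
```

Each frontal slice `tensor[i]` is one patient's diagnosis-by-prescription co-occurrence matrix.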
To decompose the tensor, we used the CP algorithm 32,33; a detailed description of CP can be found in Supplementary methods. Recently, phenotyping based on the Tucker model has been proposed 26,28. It provides more flexible modeling than CP by allowing subgroups in each mode, but CP has the advantage of being computationally cheap and extensible through regularizers. Using the CP model, the third-order tensor 𝒪 was decomposed into three factor matrices: A for the patient mode, B for the diagnosis mode, and C for the prescription mode (Fig. 3). A phenotype consisted of diagnoses and prescriptions, and patients were involved in each phenotype. That is, the r-th phenotype consisted of J diagnoses and K prescriptions with membership values that describe how much the diagnoses and prescriptions are involved in and contribute to the r-th phenotype. The membership values were normalized to between 0 and 1 and stored in the normalized vectors B_{:r} and C_{:r}, respectively. Meanwhile, patients were involved in the R phenotypes with membership values that represent how much each patient has the characteristics of the phenotypes. The membership values of patients were also normalized to between 0 and 1 and stored in the normalized vector A_{:r}. The ability of the r-th phenotype to capture and describe the data was stored in λ_r = ‖A_{:r}‖_F ‖B_{:r}‖_F ‖C_{:r}‖_F, because large values in A_{:r}, B_{:r}, and C_{:r} mean that the r-th phenotype describes a large portion of the co-occurrence values in 𝒪. Conversely, a phenotype with highly co-occurring diagnoses and prescriptions may have a large λ_r.
For example, patients who survived the ICU (half of the total patients) have Phenotype 1 in Fig. 3, which consists of the first two elements of the diagnosis mode and the first element of the prescription mode. The second diagnosis element has a higher membership in Phenotype 1 than the first element does. The patients who died in the ICU have Phenotype 2, which consists of the third diagnosis and the second prescription. Similarly, the deceased patients and a few patients who survived have Phenotype R, which consists of the fourth diagnosis and the third prescription. Note that in this example, elements in one phenotype do not overlap with elements in the other phenotypes; thus, we can interpret the phenotypes easily. Also note that phenotypes for the deceased patients and for the patients who survived are separated, so we can easily determine which phenotypes are more associated with mortality; consequently, we can further use the phenotypes to evaluate patients' risk according to their memberships in the phenotypes. We introduce two regularizations to make the phenotypes discriminative and distinct in the following sections.
Supervised phenotyping for discriminative power. We propose a supervised approach that encourages the phenotypes to separate according to mortality by adding a logistic regression regularization. In the previous section, patients had membership values in the phenotypes. We used the memberships as a feature vector to represent patients, and used the feature vector to predict mortality. As in previous work on a graph-based phenotyping method 21, we added a regularization for the supervised term. Let y_i be a binary indicator of mortality, i.e., y_i = 1 if the i-th patient dies during the hospital admission and y_i = −1 otherwise. The i-th patient in the training set L (i ∈ L) was represented by the membership values to the phenotypes, A_{i:}, which is the i-th row vector of A. Given logistic regression parameters θ, the probability that the i-th patient's mortality is y_i is

p(y_i | A_{i:}, θ) = 1 / (1 + exp(−y_i A_{i:} θ)).

We then maximized the log-probability, or equivalently minimized the negative log-probability,

∑_{i∈L} log(1 + exp(−y_i A_{i:} θ)).

Figure 2. Constructing the tensor from EHRs. We built a third-order tensor 𝒪 with co-occurrences of patients, diagnoses, and prescriptions from EHRs. Patient I is diagnosed with Alzheimer's disease and is ordered morphine sulfate twice.
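The negative log-probability above can be computed directly; the sketch below is illustrative (function name and toy values are ours, not the paper's).

```python
import numpy as np

def neg_log_likelihood(A_rows, y, theta):
    """sum_{i in L} log(1 + exp(-y_i * A_{i:} theta)), with y_i in {-1, +1}.

    A_rows: (n, R) patient membership rows; theta: (R,) regression weights.
    """
    margins = y * (A_rows @ theta)          # y_i * A_{i:} theta for each patient
    return np.sum(np.log1p(np.exp(-margins)))  # log1p for numerical stability
```

With θ = 0, every patient contributes log 2, the loss of an uninformative classifier.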
Scientific Reports | 7: 1114 | DOI:10.1038/s41598-017-01139-y

Thus, the objective function for updating each row A_{i:} is

f(A_{i:}) = (1/2) ‖vec(𝒪_{i::})^T − A_{i:} (C ⊙ B)^T‖² + ω log(1 + exp(−y_i A_{i:} θ)),

with a weighting constant ω (⊙ refers to the Khatri-Rao product). Note that this objective function is with respect to the row A_{i:}, not the whole patient factor matrix A. The gradient of f(A_{i:}) is

∇f(A_{i:}) = A_{i:} (C ⊙ B)^T (C ⊙ B) − vec(𝒪_{i::})^T (C ⊙ B) − ω y_i σ(−y_i A_{i:} θ) θ^T, (1)

where σ(·) is the logistic function, and the Hessian of f(A_{i:}) is

∇²f(A_{i:}) = (C ⊙ B)^T (C ⊙ B) + ω σ(−y_i A_{i:} θ)(1 − σ(−y_i A_{i:} θ)) θ θ^T.

If i ∉ L, we update A_{i:} as in Eq. (6) with ω = 0, which is a traditional CP decomposition without any regularization. The time complexity of Eq. (6) is bounded by O(JKR²) for i ∈ L; the total time complexity to update A is bounded by O(IJKR²) (Table S1). The supervised term had negligible effects on the total time complexity. This updating rule scales linearly with the size of A. Updating the logistic regression parameters θ followed a typical logistic regression modeling method. We added a ridge penalty to shrink the size of θ and avoid overfitting (c is a weighting constant) 34:

min_θ ∑_{i∈L} log(1 + exp(−y_i A_{i:} θ)) + c ‖θ‖².

Similarity-based phenotyping for distinct power. To derive distinct phenotypes that overlap less with each other, we made phenotypes consist only of similar elements. We first derived components' similarities from contexts in EHRs, used the similarities to infer cluster structures, and let the phenotypes reflect the cluster structures.
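A Newton step for one patient row can be sketched as follows. This is a minimal sketch under our reading of the row-wise objective (least-squares fit plus a logistic term), with nonnegativity enforced by projection; it is not the authors' Matlab implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def update_row(a, x, P, theta, y, omega):
    """One Newton step on f(a) = 0.5*||x - P a||^2 + omega*log(1+exp(-y a.theta)).

    a: (R,) current patient row; x: (J*K,) vectorized tensor slice for the patient;
    P: (J*K, R) Khatri-Rao product of C and B; theta: (R,); y in {-1, +1}.
    """
    s = sigmoid(-y * (a @ theta))                 # logistic residual
    grad = P.T @ (P @ a - x) - omega * y * s * theta
    hess = P.T @ P + omega * s * (1 - s) * np.outer(theta, theta)
    return np.clip(a - np.linalg.solve(hess, grad), 0, None)  # keep nonnegative
```

With ω = 0 the objective is quadratic, so a single Newton step lands on the least-squares solution, matching the unregularized CP update for unlabeled patients.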
Deriving contextual similarity. We derived contextual similarities from EHRs. Farhan et al. generated a vector representation of medical events (or elements in a phenotype) 17. Based on this work, we generated sequences of diagnoses and prescriptions from EHRs in time order (Table 1). We applied Word2Vec, a two-layer neural network from natural language processing for the numerical representation of discrete words 35. We input the time-ordered EHR sequences into Word2Vec and derived a vector for each diagnosis or prescription.
After several trials, we set the cardinality of the vectors to 500 and the window size of the sequence (i.e., the number of diagnoses or prescriptions in a sequence considered contextually similar) to 30. We found that, as the cardinality increases, the distribution of the pairwise similarities spreads more widely (i.e., many similarity values are close to −1 or 1 rather than 0), but computation time also increases rapidly. We also observed that most of the pairwise similarities become close to 0 as the window size decreases, and close to 1 as the window size increases. We then computed cosine similarities between the vector representations of the elements and derived a pairwise similarity matrix (either the J × J matrix S_B for diagnoses or the K × K matrix S_C for prescriptions). For example, let us say the j₁-th and j₂-th diagnoses in our dataset refer to atrial fibrillation and congestive heart failure, respectively, with vector representations atrial fibrillation = (0.1, 0.6, 0.2, 0.1) and congestive heart failure = (0.3, 0.7, 0.1, 0.2). The similarity between them is stored at the (j₁, j₂) entry of S_B, and the value is 0.49 / (‖(0.1, 0.6, 0.2, 0.1)‖ ‖(0.3, 0.7, 0.1, 0.2)‖) ≈ 0.95. We made S sparse for efficiency by ignoring trivial values. Many similarities were close to zero, and their small variance did not provide useful information. Similarities less than zero indicate dissimilarity, which was not the focus of this work. Considering all the less useful similarity values would increase computational overhead, so we used only the highest l similarity values for each element and set the others to 0. We chose l = ⌊log₂ J⌋ for diagnoses and l = ⌊log₂ K⌋ for prescriptions according to previous works 36,37. We converted S into a normalized-cut similarity matrix 38. Incorporating the normalized-cut similarity helped increase both the total dissimilarity between different phenotypes and the total similarity within phenotypes, thus avoiding overlap between the phenotypes.
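The cosine-similarity computation and top-l sparsification above can be sketched as follows; the function name is ours and the example reuses the atrial fibrillation / congestive heart failure vectors from the text.

```python
import numpy as np

def topl_similarity(vectors, l):
    """Pairwise cosine similarities, keeping only the top-l values per element.

    vectors: (n, d) array of Word2Vec-style embeddings (one row per element).
    Negative similarities (dissimilarity) are dropped, as in the text.
    """
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    S = normed @ normed.T
    np.fill_diagonal(S, 0.0)                 # ignore self-similarity
    sparse = np.zeros_like(S)
    for i in range(len(S)):
        keep = np.argsort(S[i])[-l:]         # indices of the l largest values
        sparse[i, keep] = np.clip(S[i, keep], 0, None)
    return sparse
```

Running it on the two example vectors reproduces the ≈0.95 similarity computed above.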
Converting to the normalized-cut similarity matrix is

S̃ = D^{−1/2} S D^{−1/2}, where D is the diagonal degree matrix of row sums of S.

Incorporating cluster structure. With the similarity matrix, we inferred a cluster structure from the similarities and incorporated it into our NTF optimization. The cluster structure contained information on which elements should be in the same phenotype together. We introduced a regularization term for spectral clustering. We increased the sum of pairwise similarities within a phenotype. Because the elements are involved in each phenotype to different degrees, the pairwise similarity was weighted by the two elements' membership values in the phenotype. That is, in terms of the diagnosis similarity matrix S_B, the sum of weighted pairwise similarities within a phenotype r is

∑_{j₁,j₂} S_B(j₁, j₂) B_{j₁,r} B_{j₂,r} = B_{:r}^T S_B B_{:r}.

This transformation is beneficial because it helps the phenotypes be more orthogonal (or distinct) by approximately retaining the orthogonality of B^T B 39. Thus, the objective function with the cluster structure is

g(B) = (1/2) ‖𝒪_(2) − B(C ⊙ A)^T‖² + (μ/4) ‖S_B − BB^T‖²,

with a weighting constant μ, where 𝒪_(2) is the diagnosis-mode unfolding of 𝒪. By incorporating ‖S_B − BB^T‖², our phenotyping method can absorb the spectral clustering structure and improve orthogonality at the same time. Although g is a fourth-order non-convex function and it is difficult to find a global optimum, it can converge to a stationary point 36. To find an optimal value, we derived the gradient of g(B):

∇g(B) = B(C ⊙ A)^T(C ⊙ A) − 𝒪_(2)(C ⊙ A) + μ(BB^T B − S_B B), (2)

and the Hessian of g(B), which may not scale well with large J. In this case, we can use a constant learning rate instead of (∇²g(vec(B)))^{−1}, although this sacrifices convergence rate. Similarly, the factor matrix C for prescriptions followed the same update procedure. We repeated the updating procedures for the factor matrices A, B, and C and the logistic regression parameters θ until convergence. We assumed convergence when the change in fit between consecutive iterations fell below a small threshold, where fit_old is the fit of the previous iteration. After normalizing, we removed trivial values less than a threshold ε, because such values are too small to be meaningful membership values and worsen conciseness.
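The normalized-cut conversion and the gradient of g(B) above can be sketched as follows. This is our numpy reading of the reconstructed equations (with the (1/2) and (μ/4) scalings stated in the docstring), not the authors' implementation.

```python
import numpy as np

def normalized_cut(S):
    """Normalized-cut similarity: D^(-1/2) S D^(-1/2), D = diag of row sums."""
    d = S.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return S * np.outer(d_inv_sqrt, d_inv_sqrt)

def grad_B(B, X2, P, S, mu):
    """Gradient of g(B) = 0.5*||X2 - B P^T||^2 + 0.25*mu*||S - B B^T||^2.

    B: (J, R) diagnosis factor; X2: (J, I*K) mode unfolding of the tensor;
    P: (I*K, R) Khatri-Rao product of C and A; S: (J, J) symmetric similarity.
    """
    return B @ (P.T @ P) - X2 @ P + mu * (B @ (B.T @ B) - S @ B)
```

A constant-step gradient descent with `grad_B` then stands in for the Newton update when the Hessian is too large to invert.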
We summarized the entire updating procedures in Algorithm 1.

Experiment design. Dataset and preprocessing.
We used a large publicly available dataset, MIMIC-III (Medical Information Mart for Intensive Care III) 40. MIMIC-III contains comprehensive de-identified data on around 46,520 patients in critical care units of the Beth Israel Deaconess Medical Center between 2001 and 2012, including demographics, prescriptions, diagnosis ICD codes, and clinical outcomes such as mortality. We selected 10,028 patients: all 5,014 patients who died during admission and a random sample of 5,014 patients who survived. If a patient who survived had multiple admissions, we used the first admission. We used the 202 diagnosis ICD-9 codes that appeared in the charts of at least 5% of the patients and the 316 prescription codes that appeared in at least 10% of the patients. We excluded diagnosis ICD-9 'V' and 'E' codes, which describe supplementary factors for health status. We excluded trivial base-type prescriptions such as 0.9% sodium chloride, 5% dextrose, and sterile water. Most nonzero co-occurrence values were one, and the distribution was right-skewed (Fig. S1). To prevent frequently administered small-dosage medicines from having high co-occurrences, we truncated the co-occurrence values at the top 1% percentile value, 10 (Fig. S1).
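The truncation step amounts to capping counts; a minimal illustration (the count values here are made up, only the cap of 10 comes from the text):

```python
import numpy as np

# Cap raw co-occurrence counts at the top-1% value, 10, so frequently
# ordered small-dosage medicines do not dominate the tensor.
counts = np.array([1, 1, 2, 3, 15, 40])   # hypothetical raw counts
truncated = np.minimum(counts, 10)
```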
Evaluation measures. We evaluated our proposed method in terms of discrimination and distinction. We measured discrimination by the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. We measured distinction by the relative length of the phenotypes and the average overlap. The absolute length of the r-th phenotype is the number of nonzero entries in the membership vectors B_{:r} and C_{:r}. The relative length of the phenotype is the absolute length divided by the maximum length J + K. We averaged the R relative phenotype lengths. The average overlap 41 measures the degree of overlap between all phenotype pairs. It is defined as the average of cosine similarities between phenotype pairs:

average overlap = (1 / (R(R − 1))) ∑_{r₁≠r₂} cos(B_{:r₁}, B_{:r₂}), and analogously for C.

Setting R = 50, we repeatedly ran our models ten times for 10-fold cross validation. We used the training set to compute the regression parameters θ and the likelihood term in supervised phenotyping, and used the test set to measure discrimination (Table S3). Because tensor factorization is not a deterministic method, the factorized tensors differ in each trial; thus, we computed the mean and 95% confidence interval.
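The average-overlap measure can be sketched as follows (our helper, following the pairwise-cosine definition above):

```python
import numpy as np

def average_overlap(B):
    """Mean cosine similarity over all distinct column pairs of a factor matrix."""
    normed = B / np.linalg.norm(B, axis=0, keepdims=True)
    G = normed.T @ normed                    # R x R cosine-similarity matrix
    R = G.shape[0]
    return (G.sum() - np.trace(G)) / (R * (R - 1))  # exclude self-pairs
```

Orthogonal phenotypes give an overlap of 0; identical phenotypes give 1.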

Algorithm 1. Constrained tensor factorization (reconstructed from the updating procedures described above).
1: Initialize A, B, C, and θ
2: repeat
3:    Update A_{i:} by Eq. (6) for all i
4:    Update θ for logistic regression
5:    Update B and C using the gradient in Eq. (2)
6: until convergence
7: Normalize the factor matrices and remove membership values below ε
Baselines. We compared the discrimination and distinction of our proposed methods with those of several baseline methods. The baselines are: • APACHE II, SAPS II, OASIS, and APS III scores: disease severity scores for predicting mortality in the intensive care unit (for comparing discrimination only) [42][43][44][45]. These scores assess the severity of disease using variables from pre-existing conditions, physiological measurements, biochemical/hematological indices, and the source of admission. The weighted sum of the individual values produces the severity scores 46. • CP: the basic NTF model 47,48.
• Rubik: a state-of-the-art computational phenotyping method based on CP. Rubik generates phenotype candidates using counts of diagnoses and treatments. It incorporates orthogonality between phenotypes to derive concise phenotypes 41. We assumed no existing-knowledge term or bias term.
When evaluating the discrimination (AUC, sensitivity, specificity) of the NTF-based models, we used each patient's membership values (i.e., A_{i:} of size 1 × R) as features to fit a binary logistic regression to predict mortality. In particular, for the supervised model, we fitted a separate binary logistic regression (after normalization) rather than reusing the θ learned during the updating procedure. To examine the performance of the supervised and similarity-based phenotyping individually, we compared the discrimination of CP and the supervised phenotyping (without the similarity term), and also compared the distinction of Rubik and the similarity-based phenotyping (without the supervised term). We then combined the supervised and similarity-based approaches to achieve both discrimination and distinction. The weighting constants were selected as ω = 1 and μ = 1000 after several trials. Note that ω is comparably small because it applies sensitively to each row of A, whereas μ applies to the l₂ norm of the whole matrix B or C. We used the Matlab Tensor Toolbox Version 2.5 49 from Sandia National Laboratories to represent tensors and compute tensor operations.

Results
We present the experimental evaluation and phenotypes derived from our method.
Discrimination and distinction power comparison. We found that our methods outperformed the baselines in terms of discrimination and distinction. The supervised phenotyping method showed the highest AUC and sensitivity among all methods, including APACHE II and SAPS II (Table 2). The similarity-based phenotyping method showed the lowest relative length and average overlap among all methods. In particular, compared with Rubik 25, which considers orthogonality for distinction, the similarity-based method improved the distinction significantly (relative length of 0.3934 vs 0.0714).

Phenotypes.
We present the phenotypes derived from the similarity-based phenotyping method, chosen for maximum conciseness. After the tensor decomposition procedures with R = 50, we selected 25 phenotypes by forward feature selection 50 to remove phenotypes that were redundant or not statistically significant for predicting mortality (Table 3). Among them, we report ten representative phenotypes whose coefficients from the feature selection were large enough (absolute value of coefficient >20) to discriminate mortality.

Table 2. Discrimination and distinction power comparison. RMSE, discrimination (AUC, sensitivity, specificity), and distinction (relative length, average overlap) with 95% confidence intervals of the baselines and our proposed models when R = 50. CP = CP decomposition, Supervised = the supervised phenotyping for discriminative power, Sim.-based = the similarity-based phenotyping for distinct power, Supervised + Sim.-based = the final model that incorporates both supervised and similarity-based phenotyping.

Discussion
The objective of this study was to develop a phenotyping method that can generate discriminative and distinct phenotypes. As a result, we derived phenotypes that consist of interactions between related diagnoses and prescriptions, and patients had memberships in each phenotype. The phenotypes from the supervised model were more discriminative than the APACHE II and SAPS II scores and the phenotypes from the CP model 32,33; the phenotypes from the similarity-based model were more distinct than the phenotypes from Rubik 25. We also observed that the supervised phenotyping and the similarity-based phenotyping have opposing effects on each other in terms of discrimination and distinction. The distinct phenotypes from the similarity-based approach lost discriminative power, and the discriminative phenotypes from the supervised approach lost distinction power. A possible explanation for this trade-off is that the similarity-based model tends to ignore less relevant elements in a phenotype to achieve the best distinction, although those "less relevant elements" can contribute to the overall discriminative power. However, the combined phenotypes from both approaches achieved high discrimination and distinction at the same time (Table 2). When combining the supervised and similarity-based phenotyping, discrimination increased (AUC of 0.8389) compared to the similarity model (AUC of 0.7796), and distinction improved (relative length of 0.3958 and average overlap of 0.1267) compared to the supervised model (relative length of 0.6828 and average overlap of 0.3787).
We also described the most representative phenotypes: sepsis with acute kidney injury, cardiac surgery, anemia, respiratory failure, heart failure, cardiac arrest, metastatic cancer (requiring ICU), end-stage dementia (requiring ICU and transitioned to comfort care), intraabdominal conditions, and alcohol abuse/withdrawal. These conditions are fairly consistent with the list of conditions known to require ICU care in US hospitals 51. Our study also had some limitations. One limitation is that our approach used the entire ICU stay to generate the predictive models. Other predictive models, such as SAPS II, use only the first 24 hours of data, as prediction at that point of the hospitalization is more clinically useful. However, our objective was to demonstrate how our approach could be used with a clinically significant outcome. Future work could create additional phenotypes using only the first 24 hours of data. A second limitation is that some of the generated phenotypes are not obvious to clinicians. For example, the main medications in the "anemia" phenotype are diabetic medications. This is likely because non-pharmacologic therapy is the main treatment for anemia and diabetic patients were highly represented in the "anemia" population.
With refinement, future applications of our proposed computational phenotyping method include clinical decision support to quickly identify subgroups of patients at different levels of important clinical outcomes (e.g., mortality, clinical decompensation, hospital readmission). It could also be used in cohort identification for quality improvement or research projects to find patients who share similar characteristics, by representing patients' heterogeneous medical records as memberships in phenotypes. In addition, the phenotypes we derived can provide genomic scientists insight into genotype-phenotype mapping for precision medicine 52,53. In conclusion, computational phenotyping using nonnegative tensor factorization shows promise as an objective method for identifying important cohorts for clinical, quality improvement, and research purposes.

Table 5. Patients' mortality distribution. The distribution is computed as the number of patients who died divided by the total number of patients whose membership value falls in the given range. Values are empty when the number of patients is <10. Note that our dataset contained equal numbers of patients who died and patients who survived.