Machine learning-based derivation and external validation of a tool to predict death and development of organ failure in hospitalized patients with COVID-19

COVID-19 mortality risk stratification tools could improve care, inform accurate and rapid triage decisions, and guide family discussions regarding goals of care. A minority of COVID-19 prognostic tools have been tested in external cohorts. Our objective was to compare machine learning algorithms and develop a tool for predicting subsequent clinical outcomes in COVID-19. We conducted a retrospective cohort study of hospitalized patients with COVID-19 from March 2020 to March 2021, including 712 consecutive patients from the University of Washington (UW) and 345 patients from Tongji Hospital in China. We applied three different machine learning algorithms to clinical and laboratory data collected within the initial 24 h of hospital admission to determine the risk of in-hospital mortality, transfer to the intensive care unit (ICU), shock requiring vasopressors, and receipt of renal replacement therapy (RRT). Mortality risk models were derived and internally validated in the UW dataset and externally validated in the Tongji Hospital dataset. The risk models for ICU transfer, shock and RRT were derived and internally validated in the UW dataset but could not be externally validated due to a lack of data on these outcomes. In the UW dataset, 122 patients (17%) died during hospitalization, and the mean length of hospital stay was 15.7 +/− 21.5 days (mean +/− SD). Elastic net logistic regression yielded a C-statistic for in-hospital mortality of 0.72 (95% CI: 0.64 to 0.81) in the internal validation set and 0.85 (95% CI: 0.81 to 0.89) in the external validation set. Age, platelet count, and white blood cell count were the most important predictors of mortality. In the sub-group of patients > 50 years of age, the mortality prediction model continued to perform well, with a C-statistic of 0.82 (95% CI: 0.76 to 0.87). Prediction models also performed well for shock and RRT in the UW dataset but functioned with lower accuracy for ICU transfer.
We trained, internally validated, and externally validated a prediction model using data collected within 24 h of hospital admission to predict in-hospital mortality, on average two weeks prior to death. We also developed models that predict RRT and shock with high accuracy. These models could be used to improve triage decisions and resource allocation, and to support clinical trial enrichment.

Outcomes. The primary outcome was in-hospital mortality. We developed and internally validated a prediction model for in-hospital mortality and externally validated the model in the Tongji dataset. Secondary outcomes were ICU transfer, shock and receipt of RRT. These secondary outcomes were missing in the Tongji dataset, and so we developed and cross-validated prediction models for secondary outcomes using the UW dataset. Shock was defined as new receipt of vasopressor medications after the first day of hospitalization.

Feature selection.
Since the mortality prediction model was developed in the UW dataset and externally validated in the Tongji dataset, we first selected variables that were present in both datasets. Twenty features overlapped between the two datasets, and these 20 features were used for the mortality prediction model. All clinical and laboratory data were abstracted from the medical record within the first day of hospital admission, and patients were included in the analysis for each outcome only if they did not have the outcome on the first day of hospitalization. An individual prediction model was developed for each outcome.
The following steps were taken for feature selection. First, features were dropped if > 10% of their values were missing. Second, near-zero variance features were removed, as these features almost exclusively had one unique value. Third, pair-wise correlations between all features were calculated. If two features had a correlation larger than 0.8, the feature with the larger mean absolute correlation was dropped. Fourth, missing values were replaced by the mode if the variable was categorical or by the median otherwise. Finally, all continuous variables were standardized.
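The five steps above can be expressed compactly as code. The paper's pipeline was implemented in R; the sketch below is only a Python illustration, assuming a pandas DataFrame of first-day clinical and laboratory features with hypothetical column names:

```python
# Minimal sketch of the five feature-selection steps (illustrative only;
# the paper's actual pipeline was written in R).
import numpy as np
import pandas as pd

def select_features(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # 1. Drop features with > 10% missing values.
    df = df.loc[:, df.isna().mean() <= 0.10]
    # 2. Drop (near-)zero-variance features; here approximated as features
    #    with a single unique value.
    df = df.loc[:, df.nunique() > 1]
    # 3. For each pair with |correlation| > 0.8, drop the feature with the
    #    larger mean absolute correlation.
    num = df.select_dtypes(include=np.number)
    corr = num.corr().abs()
    mean_corr = corr.mean()
    drop = set()
    cols = list(corr.columns)
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            a, b = cols[i], cols[j]
            if a in drop or b in drop:
                continue
            if corr.loc[a, b] > 0.8:
                drop.add(a if mean_corr[a] >= mean_corr[b] else b)
    df = df.drop(columns=list(drop))
    # 4. Impute: mode for categorical features, median for numeric ones.
    for col in df.columns:
        fill = df[col].mode()[0] if df[col].dtype == object else df[col].median()
        df[col] = df[col].fillna(fill)
    # 5. Standardize continuous variables to zero mean, unit variance.
    num_cols = df.select_dtypes(include=np.number).columns
    df[num_cols] = (df[num_cols] - df[num_cols].mean()) / df[num_cols].std()
    return df
```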
Data partitioning, UW dataset. We randomly split the UW dataset into development and internal validation sets by stratified sampling. The training set included 475 patients, and the internal validation set included 237 patients. First, we trained models on the training set and then selected the best model by its performance on the internal validation set. Top models for in-hospital mortality were then tested in the external validation set. We performed cross validation for the three prediction models for ICU transfer, shock and RRT using the UW dataset as follows: (1) patients were randomly split into 10 folds in a stratified fashion using the outcome variable; (2) the model was trained using nine of the ten folds and tested on the remaining fold. The procedure was repeated ten times until each fold had been used as a test fold exactly once.
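A minimal sketch of this partitioning using scikit-learn's stratified utilities, with synthetic data standing in for the actual cohort (the real analysis was done in R):

```python
# Stratified split and tenfold stratified cross-validation (illustrative
# sketch; synthetic data stand in for the 712-patient UW cohort).
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold

seed = 0
rs = np.random.RandomState(seed)
X = rs.normal(size=(712, 20))          # 20 overlapping features
y = rs.binomial(1, 0.17, size=712)     # ~17% in-hospital mortality

# Stratified development / internal-validation split (475 vs. 237 patients).
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=475, stratify=y, random_state=seed)

# Tenfold stratified cross-validation for the secondary-outcome models:
# each fold serves as the held-out test fold exactly once.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
for train_idx, test_idx in cv.split(X, y):
    pass  # fit on X[train_idx], y[train_idx]; evaluate on the held-out fold
```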
Machine learning models. Least absolute shrinkage and selection operator (LASSO) logistic regression is a logistic regression approach with L1 penalties 20. The L1 penalty term encourages sparsity, thus preventing overfitting and yielding a small model. A weighted LASSO logistic regression was used to handle the imbalanced data. The hyperparameter lambda was selected by stratified tenfold cross validation.
Elastic net logistic regression (LR) combines LASSO LR and ridge logistic regression, incorporating both L1 and L2 penalties 21. It can generate sparse models that outperform LASSO logistic regression when highly correlated predictors are present. The hyperparameters alpha and lambda were selected by stratified tenfold cross validation.
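The two penalized regressions can be illustrated as follows. The paper tuned lambda (and alpha) with glmnet-style cross validation in R; this scikit-learn sketch on synthetic data is only an analogue, with `Cs` playing the role of the lambda path and `l1_ratios` the role of alpha:

```python
# Weighted LASSO and elastic net logistic regressions with stratified
# tenfold CV over the penalty strength (scikit-learn analogue; illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold

rs = np.random.RandomState(0)
X = rs.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * rs.normal(size=200) > 0).astype(int)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# Weighted LASSO LR: the L1 penalty encourages sparse coefficients;
# class_weight="balanced" mirrors the inverse-frequency weighting.
lasso = LogisticRegressionCV(
    penalty="l1", solver="saga", Cs=10, cv=cv,
    class_weight="balanced", scoring="roc_auc", max_iter=5000).fit(X, y)

# Elastic net LR: mixes L1 and L2 penalties; l1_ratios ~ glmnet's alpha.
enet = LogisticRegressionCV(
    penalty="elasticnet", solver="saga", Cs=10, l1_ratios=[0.2, 0.5, 0.8],
    cv=cv, class_weight="balanced", scoring="roc_auc", max_iter=5000).fit(X, y)
```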
eXtreme Gradient Boosting (XGBoost). XGBoost is a gradient boosted machine (GBM) based on decision trees that separate patients with and without the outcome of interest using simple yes-no splits, which can be visualized in the form of decision trees 22. GBM builds sequential trees, such that each tree attempts to improve model fit by more heavily weighting the difficult-to-predict patients. The following hyperparameter settings were applied: nrounds = 150, eta = 0.2, colsample_bytree = 0.9, gamma = 1, subsample = 0.9 and max_depth = 4. We also used grid search to select the optimal hyperparameters for XGBoost on the training set. The hyperparameter candidates were generated exhaustively from number of boosting rounds (nrounds) = {150, 250, 350}, eta = {0.1, 0.2, 0.3}, colsample_bytree = {0.5, 0.7, 0.9}, gamma = {0.5, 1} and max_depth = {4, 8, 12}. We used stratified fivefold cross validation to select the optimal hyperparameters that maximized the average AUC for the mortality prediction model. We then retrained the model using the optimal hyperparameters on the training set and tested this model on the internal and external validation sets, respectively.
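The grid-search procedure can be sketched as follows. Since the paper used the XGBoost package, scikit-learn's GradientBoostingClassifier stands in here (with a deliberately reduced grid) to keep the example self-contained; the parameter names are the sklearn analogues of nrounds, eta and max_depth:

```python
# Grid search over GBM hyperparameters with stratified fivefold CV
# maximizing AUC (illustrative stand-in for the XGBoost grid search).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rs = np.random.RandomState(0)
X = rs.normal(size=(200, 8))
y = (X[:, 0] - X[:, 1] + rs.normal(scale=0.5, size=200) > 0).astype(int)

param_grid = {
    "n_estimators": [150, 250],    # analogous to nrounds
    "learning_rate": [0.1, 0.2],   # analogous to eta
    "max_depth": [4, 8],
    "subsample": [0.9],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
).fit(X, y)

# With refit=True (the default), the best configuration is automatically
# retrained on the full training set, mirroring the retraining step above.
best = search.best_estimator_
```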

Class imbalance handling.
A weighted version of each of the three methods above was used to handle imbalanced data. For example, if there were 90 positives and 10 negatives, then a weight of 10/90 was assigned to each positive sample and a weight of one to each negative sample.
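This inverse-frequency weighting can be checked with a short numeric example (illustrative only): both classes then contribute equal total weight to the loss.

```python
# With 90 positives and 10 negatives, each positive gets weight 10/90 and
# each negative weight 1, so both classes contribute equal total weight.
import numpy as np

y = np.array([1] * 90 + [0] * 10)
n_pos, n_neg = (y == 1).sum(), (y == 0).sum()
weights = np.where(y == 1, n_neg / n_pos, 1.0)

print(round(weights[y == 1].sum(), 6), round(weights[y == 0].sum(), 6))
# → 10.0 10.0
```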
Probability calibration. Isotonic regression was used to calibrate the probabilities output by the machine learning models 23. The calibration model was fitted on the training samples only. A calibration plot was created to assess the agreement between predictions and observed outcomes across percentiles of the predicted values; the 45-degree reference line indicates a perfectly calibrated model. If the fitted curve lies below the reference line, the model overestimates the probability of the outcome; conversely, a fitted curve above the reference line reflects underestimation.
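A minimal sketch of this calibration step, assuming a generic classifier trained on synthetic data: the isotonic calibrator learns a monotone map from raw predicted probabilities to observed frequencies on the training samples only, and is then applied to held-out probabilities.

```python
# Isotonic calibration of classifier probabilities (illustrative sketch).
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rs = np.random.RandomState(0)
X = rs.normal(size=(300, 5))
y = (X[:, 0] + rs.normal(scale=0.5, size=300) > 0).astype(int)
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]

model = LogisticRegression().fit(X_tr, y_tr)

# Fit the calibrator on TRAINING predictions only, as described above.
iso = IsotonicRegression(out_of_bounds="clip").fit(
    model.predict_proba(X_tr)[:, 1], y_tr)

# Apply the learned monotone map to held-out probabilities.
calibrated = iso.predict(model.predict_proba(X_va)[:, 1])
```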

Model comparison.
We tested the three machine learning methods (LASSO LR, elastic net LR, and XGBoost) independently to predict each outcome. Model performance was compared using the area under the receiver operating characteristic curve (AUC) and its 95% CI 24,25. Top performing models for in-hospital mortality in the internal validation cohort were then carried forward to the external validation cohort. We also completed a pre-specified sub-group analysis of model performance in patients older than 50 years of age and in patients younger than 50 years of age. Two-sided p values < 0.05 were considered statistically significant. All models were developed using R.
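As an illustration of AUC-based model comparison, the sketch below computes an AUC with a percentile-bootstrap 95% CI. This is a simple stand-in for the dedicated CI methods the paper cites (refs. 24, 25), not the authors' implementation:

```python
# AUC with a percentile-bootstrap 95% CI (illustrative stand-in for the
# CI methods cited in the paper).
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=1000, seed=0):
    rs = np.random.RandomState(seed)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    while len(boots) < n_boot:
        idx = rs.randint(0, n, n)       # resample patients with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                    # AUC needs both classes present
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, lo, hi
```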

Ethics approval.
The University of Washington Institutional Review Board approved this study.

Results
Patient characteristics. A total of 1057 patients were included in the analysis, 712 from UW and 345 from Tongji Hospital. Baseline characteristics for patients in both cohorts who died vs. survived are shown in Tables 1 and 2. In the UW cohort, 10% of patients were treated with hydroxychloroquine, 24% with remdesivir and 4% with tocilizumab during hospitalization. In the UW cohort, patients who died were older (median [IQR] age 66 [54-75] vs. 55 [41-66] years), more likely to be male (70% vs. 61%), and had lower platelet counts (median [IQR] 155 [114-234] vs. 200 [155-265]) and higher white blood cell counts (median [IQR] 9.85 [7.01-14.44] vs. 7.87 [5.64-11.37]). In the Tongji cohort there were similar differences in baseline characteristics between patients who died and those who survived during hospitalization.
Machine learning model for in-hospital mortality. Among 712 patients in the UW dataset, 122 (17%) died. The mean length of hospital stay was 15.7 (standard deviation 21.5) days for all patients and 14.8 (standard deviation 13.7) days for those who died. Among 328 patients from the Tongji Hospital dataset, 159 (46%) died 26. We applied three machine learning methods (LASSO LR, elastic net LR and XGBoost) to the training set and evaluated model performance in the internal validation set. The elastic net LR model had the highest AUC in the internal validation set (0.72, 95% CI: 0.64 to 0.81) for in-hospital mortality. Next, we tested the elastic net LR model in the external validation cohort and obtained an AUC of 0.85 (95% CI: 0.81 to 0.89) for in-hospital mortality (Fig.
1A and B and Table 3). To examine the effect of hyperparameter optimization on the XGBoost algorithm, we trained XGBoost with hyperparameter optimization five times and compared it to our original XGBoost algorithm (fixed hyperparameters). The mean internal validation AUCs with fixed hyperparameters and with hyperparameter optimization were 0.638 and 0.668, respectively, and the difference was not statistically significant (p = 0.08). We also compared the mean AUC in the external validation set and found no significant improvement (p = 0.80). Based on these results, we carried forward the elastic net LR model to predict in-hospital mortality (Table 4).
The top 3 variables in the in-hospital mortality prediction model included age, minimum platelet count, and maximum white blood cell count (Fig. 2A). Partial dependence plots for the most important continuous variables in elastic net LR are shown in Fig. 3A. Older age was associated with a linear increase in mortality. In contrast, platelet count showed a relatively flat risk profile above 500 × 10⁹/L, with the risk of death increasing linearly as platelet counts fell. The predicted risk of in-hospital mortality compared with the observed risk was well calibrated in the test set (Fig. 4). In Table 5, we provide the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) across the three different cohorts for in-hospital mortality. We found that the model thresholds can be personalized to maximize either PPV or NPV. In the external validation cohort, the in-hospital mortality models had a maximum PPV and NPV of 0.84 or higher. Model coefficients are provided in Table S1 for future validation in diverse patient cohorts.
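The threshold-dependent operating characteristics mentioned above can be computed as in this sketch (a generic implementation, not the authors' code): varying the probability threshold trades sensitivity against specificity and shifts PPV and NPV.

```python
# Sensitivity, specificity, PPV and NPV at a chosen probability threshold
# (generic illustration of threshold personalization).
import numpy as np

def operating_characteristics(y_true, y_prob, threshold):
    pred = (y_prob >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```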
To better understand the association between clinical features and in-hospital mortality, we concentrated on patients > 50 years of age and re-trained the models excluding age. The elastic net LR model had the highest AUC (0.82, 95% CI: 0.76 to 0.87) (Table 4). In Table 6, we provide the sensitivity, specificity, PPV and NPV across the three different cohorts for in-hospital mortality in patients > 50 years of age. Partial dependence plots for the most important continuous variables in elastic net LR are shown in Fig. 3B. Platelet count, blood urea nitrogen, haematocrit and white blood cell count were the top 4 variables that predicted in-hospital mortality in patients > 50 years of age (Fig. 2B).
Machine learning models for secondary outcomes. We next developed and cross-validated prediction models for ICU transfer, shock and receipt of RRT. For the outcome of ICU transfer, 419 patients from the UW dataset were included, of whom 45 (11%) were transferred to the ICU within 28 days of admission. A total of 293 patients who were transferred to the ICU within the first day of hospitalization were excluded from this analysis. The mean time to ICU transfer was 7.6 (standard deviation 9.1) days. LASSO LR achieved the highest AUC (0.60, 95% CI: 0.52 to 0.68) for prediction of ICU transfer compared with the other two methods (elastic net LR, XGBoost) (Fig. 5A and Table 7). The two predictors most strongly correlated with subsequent ICU transfer were age and minimum SpO2. For the outcome of shock, 606 patients from the UW dataset were included, and 67 (11%) developed shock within 28 days of admission. A total of 106 patients who had shock within the first day of hospitalization were excluded from this analysis. The mean time to develop shock was 7.0 +/− 6.5 days (mean +/− SD). Elastic net LR achieved the highest AUC of the three methods (0.76, 95% CI: 0.69 to 0.82) (Fig. 5B and Table 7). The three predictors most highly correlated with subsequent development of shock were ICU admission, minimum mean arterial blood pressure and minimum Glasgow coma scale score.
For the outcome of receipt of RRT, 671 patients from the UW dataset were included, and 24 (2.6%) received RRT within 28 days of admission. A total of 41 patients who received RRT within the first day of hospitalization were excluded from this analysis. The mean time to receipt of RRT was 5.8 +/− 7.2 days (mean +/− SD). As shown in Fig. 5C and Table 7, LASSO LR achieved a slightly higher mean AUC compared with the other two methods (0.88, 95% CI: 0.79 to 0.98). The predictor that most strongly influenced need for RRT was minimum serum creatinine. Variable importance plots for all secondary outcomes can be found in Fig. S2. Model calibration plots for each of the secondary outcomes are provided in Fig. 4. Coefficients for variables are provided in Tables S2-S4.

Discussion
In this derivation, internal validation and external validation study of adult hospitalized patients with COVID-19, we developed and validated an in-hospital mortality prediction tool using variables that are routinely collected within 24 h of hospital admission. The mortality prediction model predicted mortality with high accuracy and a 2-week lead time. Of the machine learning models tested, elastic net logistic regression had the highest predictive accuracy and the best calibration. In addition, we derived models for ICU transfer, shock and RRT. Our mortality prediction model provides a simple bedside tool and highlights clinical variables that can inform triage decisions in hospitalized patients with COVID-19.
The mortality prediction tool was derived using 20 variables and exported to an external dataset. The model had higher discrimination in the external dataset, demonstrating its generalizability. Variables that informed model development included age, white blood cell count, and platelet count. These variables have previously been shown to be individually prognostic in COVID-19 hospitalization as well as in sepsis 1,27,28. A machine learning study of mortality prediction in COVID-19 in Germany also found that age and markers of thrombotic activity were predictive of ICU survival 29. An advantage of our model over other studies is that we included not only patients admitted to the ICU but all patients presenting to the hospital. These broad inclusion criteria improve the generalizability of our findings. We found that elastic net regression was the most accurate algorithm for predicting in-hospital mortality in our datasets. A further value of elastic net regression is that the resulting models are interpretable. We provide the variables and coefficients for each model in the supplemental materials to ease future testing in diverse patient cohorts.
The present machine learning models show that a reliable prediction can be made for hospital mortality and organ failure in hospitalized patients with COVID-19. In the external validation set, the AUC of our model was comparable to or better than that of alternative COVID-19 prediction models 12,30-32. One benefit of our model is that it was developed and internally validated in a US population and externally validated in a population from China. This is in contrast to other COVID-19 prediction models that are specific to patients admitted to one healthcare system or hospitalized in one country 12,13,29,30,33,34. The ability to validate our model in a healthcare system outside the US shows the generalizability of the model and the reproducibility of our findings. Our findings also demonstrate the inherent similarities in the patient response to infection and in the clinical variables associated with poor outcomes.
This study has several strengths, including both a discovery and a validation cohort. In addition, we developed models not only for mortality but also for organ-specific failure. Another strength is that the models predicted outcomes up to 2 weeks prior to the outcome occurring. This lead time is essential to help inform clinical care and provides a window during which therapeutics can be tested to change eventual outcomes. Finally, all prediction models were developed using routinely collected data that are available in most electronic medical records, allowing easy replication of our models in diverse patient cohorts. Since age is one of the strongest predictors of mortality in COVID-19, we specifically developed in-hospital mortality prediction models in the population of patients > 50 years of age. We found that clinical biomarkers, such as platelet count, blood urea nitrogen and white blood cell count, in combination continued to accurately predict in-hospital mortality.
There are also several limitations to this work. First, although the model was developed and validated in an external dataset, our findings may not generalize to other settings. For example, the validation set included patients enrolled early in the pandemic, when certain immunomodulatory therapies (e.g., dexamethasone and tocilizumab) were not widely used. However, patients in the discovery set were enrolled over a broad timespan after clinical trials supported the use of corticosteroids in ICU patients with COVID-19. Second, we restricted the models to clinical and laboratory variables collected within 24 h of hospital admission, so that the prediction models could be run on electronic health record data. Moreover, the variables used in the models are regularly collected and seldom missing in the medical record. Third, secondary outcomes, such as ICU transfer, shock and need for RRT, were not available in the external validation set.

Conclusions
We developed prediction models with high discrimination for mortality, shock and RRT. The in-hospital mortality model performed well in the internal validation set and showed improved accuracy in the external validation set. Key variables that informed the in-hospital mortality prediction model included age, white blood cell count and platelet count. The mortality prediction model on average was able to identify future risk of mortality 2 weeks prior to the clinical outcome. All prediction models were developed using clinical variables collected within 24 h of hospital admission.
Scientific Reports (2022) 12:16913. https://doi.org/10.1038/s41598-022-20724-4

Figure 1 .
Figure 1. Receiver operating characteristic curves for mortality prediction. (A) In the internal validation cohort, the elastic net LR model had an AUC of 0.72 (95% CI: 0.64 to 0.81) for in-hospital mortality. (B) In the external validation cohort, the model had an AUC of 0.85 (95% CI: 0.81 to 0.89) for in-hospital mortality.

Figure 2 .
Figure 2. Variable importance plots for mortality in all patients and in patients over 50 years of age. (A) Top predictor variables for mortality in all patients. Mean SHAP values are provided on the x-axis, showing that age, minimum platelet count, maximum white blood cell count, minimum blood urea nitrogen, maximum serum sodium, minimum haematocrit, maximum haematocrit, minimum serum creatinine, sex and minimum glucose are the top 10 variables. (B) Top predictor variables for mortality in patients over 50 years of age. Mean SHAP values are provided on the x-axis for the mortality prediction model in patients over 50 years of age, which includes the five selected variables: maximum platelet count, minimum blood urea nitrogen, maximum haematocrit, minimum white blood cell count, and maximum glucose.

Figure 3 .
Figure 3. Partial dependence plots for the mortality prediction model illustrating the relationship between mortality and the six top predictor variables. (A) Risk of mortality increases with increasing age, platelet counts < 500 × 10⁹/L, and increasing white blood cell count. Risk of mortality increases with increasing blood urea nitrogen, with an inflection point at 50 mg/dL. The risk of mortality increases with decreasing haematocrit levels and increasing sodium levels. (B) Risk of mortality increases with increasing age, platelet counts < 500 × 10⁹/L, and increasing white blood cell count. Risk of mortality increases with increasing blood urea nitrogen until 75 mg/dL and then levels off. The risk of mortality increases with decreasing haematocrit levels.

Figure 4 .
Figure 4. Calibration plots for prediction models. (A) 28-day mortality in the internal validation set. (B) 28-day mortality in the external validation set. (C) 28-day ICU transfer in the internal validation set. (D) 28-day receipt of RRT in the internal validation set. (E) 28-day shock in the internal validation set.

Table 1 .
Features in the UW dataset stratified by survivors and non-survivors. All variables are reported as median and interquartile range unless otherwise specified.

Table 4 .
Model performance in the training, internal validation and external validation sets for in-hospital mortality in patients over 50 years of age. The cutoff threshold used to determine sensitivity and specificity was 0.5.