Introduction

By March 2022, COVID-19 had caused around 457 million cases and 6 million deaths worldwide (ref. 1). Nearly 29 million cases and 654 thousand deaths occurred in Brazil alone, which ranked third worldwide in both confirmed cases and deaths. Several machine learning algorithms have been proposed for predicting COVID-19 diagnosis2,3,4 and prognosis5,6,7,8, using different input data such as imaging or laboratory exams9.

In countries with large socioeconomic inequalities, unequal access to healthcare, and heterogeneous resources10,11, the best strategy for selecting training data for machine learning algorithms is still unknown. While more data may improve the ability of machine learning algorithms to identify detailed pathways linking the predictors to the outcome of interest, it may also introduce noise, as newly learned pathways may not be locally replicable.

Also, collecting a large number of variables may be cost-prohibitive for some hospitals, and differing data collection protocols across hospitals can make data aggregation unfeasible. As the use of machine learning algorithms rapidly advances in healthcare, it will be increasingly important to identify how to improve the generalization of these algorithms across different regions.

To identify the best strategy for selecting training data to predict COVID-19 mortality, we gathered data from 18 distinct and independent hospitals (with no direct connections, such as shared administration or a common EMR system) across the five regions of Brazil, and tested eight strategies for developing predictive models, starting with only local hospital data and then seven different approaches to aggregating external training data.

Results

Summary population characteristics

Table 1 presents the descriptive statistics for the individual characteristics of the patients. The study sample (8477 patients with COVID-19) was composed mostly of men (55.1%). Among patients who self-declared a race, the most common was white (62%), although the majority of the sample (64.6%) did not provide a self-declared race. The average age was 58.4 years, and the average hospital stay was 14 days. Patients who died during the hospital stay were older (mean age 66.7 vs. 55.2 years for survivors) and more likely to be male (60.0% vs. 53.3% for survivors). The list of participating hospitals and the descriptive statistics for each hospital can be found in Supplementary Tables S1 and S2, respectively.

Table 1 Descriptive statistics of the demographic characteristics of the sample.

Algorithmic performance

Figure 1 shows the results of the AUROCs for the best of the three algorithms for each strategy. Overall, the best predictive performances were obtained when using training data from the same hospital, which was the winning strategy for 11 (61%) of the 18 participating hospitals.

Figure 1

Best AUROCs according to strategy, region, and hospital, with the best strategy highlighted.

Figure 2 presents the AUROCs of the winning strategy for each hospital, separated by region. For the southeast region, the most populous region of Brazil and where most of the data was collected, the winning strategy for every hospital was training with only local data. Supplementary Figs. S2 and S3 show the recall and specificity of the best strategies.

Figure 2

AUROCs of the winning strategy per region. (a) Southeast, (b) Northeast, (c) Midwest, (d) South, (e) North.

Table 2 presents a summary of the best algorithm for each strategy. Overall, extreme gradient boosting (XGBoost) was the algorithm with the highest number of winning predictive performances in terms of AUROC (67/144, 46.5%), followed closely by LightGBM with 61 (42.4%) and CatBoost with 16 (11.1%). The list of final hyperparameters for each algorithm is available in Supplementary Table S3. Calibration results for the best models are presented in Supplementary Table S4.

Table 2 Algorithm with the best predictive performance per strategy.

Discussion

We found that the different strategies for training data selection were able to predict COVID-19 mortality with good overall performance using only routinely collected data, achieving an AUROC of 0.7 or higher for each strategy, with few exceptions. The best overall strategy was training and testing using only the reference hospital's data, which achieved the highest predictive performance in 11 of the 18 hospitals.

In this study, while in some cases adding more data from different hospitals and regions improved predictive performance, in most scenarios it decreased the predictive ability of the algorithms. The inclusion of data from other hospitals added noise to the training data, possibly due to heterogeneity in hospital practices12, and in most cases deteriorated predictive performance, as seen in other studies13,14, possibly because of differing patient demographics and variable interactions that are not locally reproducible15. Other studies that included data from different hospitals and found high predictive performance may have benefited from using data from connected hospitals with similar patients, from different techniques, or from larger samples16,17,18. Our study is unique in that we analyzed data from 18 independent hospitals spanning all five regions of a large and unequal country.

This study has some limitations that need to be acknowledged. First, even though we analyzed hospitals from every region of Brazil, they were not equally distributed, with more patients coming from the southeast and northeast regions, which are also the most populous. Another limitation is that, because the 18 hospitals were unconnected and independent, differences in local data collection procedures and sample sizes may have influenced the final results. Finally, some hospitals had small samples but were included for aggregation with other regions to check whether such strategies improved overall performance.

In conclusion, we found that using only local hospital data can yield better predictive results compared with adding data from other regions with different population and socioeconomic characteristics. Algorithms trained with data from other hospitals frequently showed decreased local performance even when the available training data increased considerably. However, models trained with data from other hospitals still presented acceptable performance and could be an option while data for a specific hospital are still being collected.

Methods

Data source

A cohort of 16,236 patients from 18 distinct hospitals across all regions of Brazil was followed between March and August 2020. A map with the geographic locations of the participating hospitals is available in Supplementary Fig. S1. We included only adult patients (>18 years) with a positive RT-PCR diagnostic exam for COVID-19, resulting in 8477 patients. Of these, 2356 (28%) died as a result of complications caused by COVID-19. The mortality outcome referred to the current hospital admission for COVID-19, regardless of the timeframe. Only the hospitalization at the time of COVID-19 diagnosis was analyzed; subsequent hospitalizations were not included in the study. We used as predictors only variables collected in early hospital admission, i.e., within 24 h before and 24 h after the RT-PCR exam. The full list of hospitals is available in Supplementary Table S1.
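The inclusion criteria amount to a simple filter. Below is a minimal sketch, assuming a pandas DataFrame with hypothetical column names (age, rt_pcr_result, exam_time, collection_time); the actual schema of the study data is not published here.

```python
# Minimal sketch of the inclusion criteria; all column names and the file
# path are illustrative assumptions, not the study's actual schema.
import pandas as pd

df = pd.read_csv("cohort.csv", parse_dates=["exam_time", "collection_time"])

# Adults (>18 years) with a positive RT-PCR diagnostic exam for COVID-19.
cohort = df[(df["age"] > 18) & (df["rt_pcr_result"] == "positive")]

# Keep only predictor measurements taken within 24 h before or after the exam.
in_window = (cohort["collection_time"] - cohort["exam_time"]).abs() <= pd.Timedelta(hours=24)
cohort = cohort[in_window]
```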

A total of 22 predictors were selected among routinely collected variables available in all hospitals: age, sex, heart rate, respiratory rate, systolic pressure, diastolic pressure, mean pressure, temperature, hemoglobin, platelets, hematocrit, red cell count, mean corpuscular hemoglobin (MCH), red cell distribution width (RDW), mean corpuscular volume (MCV), leukocytes, neutrophils, lymphocytes, basophils, eosinophils, monocytes, and C-reactive protein. Figure 3 illustrates the overall process.

Figure 3

Process overview. From inclusion criteria to feature selection.

The study was approved by the Institutional Review Board (IRB) of the University of São Paulo (CAAE: 32872920.4.1001.5421), which included a waiver of informed consent. The data and the partnership with all members of IACOV-BR are included in this approval. The study followed the guidelines of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD)19.

Machine learning techniques

Three popular machine learning algorithms for structured data (LightGBM20, CatBoost21, and extreme gradient boosting, XGBoost22) were trained to predict COVID-19 mortality using routinely collected data. Eight different strategies were tested to identify the best data selection strategy for each hospital and each of the three algorithms.
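As an illustration, the sketch below fits the three algorithms on a given training set and returns predicted mortality risks. The function name is ours, and the default hyperparameters shown are placeholders, since the study tunes them per hospital and strategy (see below).

```python
# Illustrative sketch of fitting the three gradient-boosting algorithms;
# hyperparameters are placeholders, as the study tunes them with HyperOpt.
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from xgboost import XGBClassifier

def fit_candidates(X_train, y_train, X_test):
    """Fit each algorithm and return predicted mortality risks on the test set."""
    models = {
        "lightgbm": LGBMClassifier(),
        "catboost": CatBoostClassifier(verbose=0),
        "xgboost": XGBClassifier(eval_metric="logloss"),
    }
    risks = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        risks[name] = model.predict_proba(X_test)[:, 1]  # probability of death
    return risks
```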

Strategies and preprocessing techniques

Initially, we used a single hospital's data as the baseline strategy, splitting the data into 70% for training and 30% for testing, with the latter used to predict mortality risk. We then tested seven different data aggregation strategies to assess the performance of the algorithms with different training data, as presented in Table 3; a code sketch of the general scheme follows the table.

Table 3 Clustering strategies for training and testing.
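A minimal sketch of the general scheme is shown below. The outcome column name and the random seed are our assumptions, and the seven specific aggregation schemes from Table 3 are not reproduced here: `external` stands in for whichever set of outside hospitals a given strategy pools.

```python
# Minimal sketch of the data selection scheme; the random seed is an
# assumption, and `external` represents the outside hospitals pooled by
# whichever aggregation strategy is being tested.
from typing import Optional, Tuple
import pandas as pd
from sklearn.model_selection import train_test_split

def make_strategy_data(local: pd.DataFrame,
                       external: Optional[pd.DataFrame] = None
                       ) -> Tuple[pd.DataFrame, pd.DataFrame]:
    # Baseline: 70/30 split of the reference hospital; the 30% test set is always local.
    train, test = train_test_split(local, test_size=0.30, random_state=42)
    if external is not None:
        # Aggregation strategies add external hospitals to the training data only.
        train = pd.concat([train, external], ignore_index=True)
    return train, test
```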

Variables with more than two categories were represented by a set of dummy variables, with one variable per category. Continuous variables were standardized using the z-score. Variables with a correlation greater than 0.90 were discarded, as were variables with more than 90% missing data. Remaining missing values were imputed with the median. We also evaluated multiple imputation by chained equations (MICE)23, but it did not improve the predictive performance of the models (Supplementary Fig. S4). We used 10-fold cross-validation with Bayesian optimization (HyperOpt) to select the hyperparameters. Random oversampling was performed on the training set to mitigate class imbalance while keeping the test set intact24.
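A condensed sketch of these preprocessing steps follows. The thresholds match the text, but the function name, the `death` target column, and the exact ordering of steps are our assumptions, and the HyperOpt search is omitted for brevity.

```python
# Condensed preprocessing sketch; thresholds follow the text, while names,
# ordering, and the `death` target column are illustrative assumptions.
import numpy as np
import pandas as pd
from imblearn.over_sampling import RandomOverSampler

def preprocess(train: pd.DataFrame, test: pd.DataFrame, target: str = "death"):
    X_train, y_train = train.drop(columns=[target]), train[target]
    X_test = test.drop(columns=[target])

    # Dummy variables for multi-category predictors.
    X_train, X_test = pd.get_dummies(X_train), pd.get_dummies(X_test)
    X_test = X_test.reindex(columns=X_train.columns, fill_value=0)

    # Discard variables with more than 90% missing data.
    keep = X_train.columns[X_train.isna().mean() <= 0.90]
    X_train, X_test = X_train[keep], X_test[keep]

    # Discard one variable from each pair correlated above 0.90.
    corr = X_train.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    drop = [c for c in upper.columns if (upper[c] > 0.90).any()]
    X_train, X_test = X_train.drop(columns=drop), X_test.drop(columns=drop)

    # Median imputation, then z-score standardization with training statistics.
    med = X_train.median()
    X_train, X_test = X_train.fillna(med), X_test.fillna(med)
    mu, sd = X_train.mean(), X_train.std()
    X_train, X_test = (X_train - mu) / sd, (X_test - mu) / sd

    # Random oversampling on the training set only; the test set stays intact.
    X_train, y_train = RandomOverSampler(random_state=42).fit_resample(X_train, y_train)
    return X_train, y_train, X_test
```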

To evaluate the performance of the algorithms, we calculated the following metrics for each strategy: accuracy, recall (sensitivity), specificity, positive predictive value (PPV, or precision), negative predictive value (NPV), and F1 score. The area under the receiver operating characteristic curve (AUROC) was the main metric used to select the best model among the different scenarios. All results reported in this study are from the test set. Confidence intervals for the AUROCs were estimated using the DeLong method for computing the covariance of the unadjusted AUC.
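Most of these metrics are available in scikit-learn, with specificity and NPV derived from the confusion matrix, as in the sketch below; the DeLong confidence intervals require a separate implementation and are not shown.

```python
# Sketch of the evaluation metrics; specificity and NPV are derived from the
# confusion matrix, and the DeLong AUROC confidence interval is not included.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),      # sensitivity
        "specificity": tn / (tn + fp),
        "ppv": precision_score(y_true, y_pred),
        "npv": tn / (tn + fn),
        "f1": f1_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_score),     # main selection metric
    }
```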

Institutional review board statement

The name of the ethics committee is “Comitê de Ética em Pesquisa da Faculdade de Saúde Pública da USP”. The study protocol was approved by this committee, and all methods were carried out in accordance with the relevant guidelines and regulations. The project was approved in June 2020.