An interpretable hybrid predictive model of COVID-19 cases using autoregressive model and LSTM

The Coronavirus Disease 2019 (COVID-19) has had a profound impact on global health and economy, making it crucial to build accurate and interpretable data-driven predictive models for COVID-19 cases to improve public policy making. The extremely large scale of the pandemic and the intrinsically changing transmission characteristics pose a great challenge for effectively predicting COVID-19 cases. To address this challenge, we propose a novel hybrid model in which the interpretability of the Autoregressive model (AR) and the predictive power of the long short-term memory neural networks (LSTM) join forces. The proposed hybrid model is formalized as a neural network with an architecture that connects two composing model blocks, of which the relative contribution is decided data-adaptively in the training procedure. We demonstrate the favorable performance of the hybrid model over its two single composing models as well as other popular predictive models through comprehensive numerical studies on two data sources under multiple evaluation metrics. Specifically, in county-level data of 8 California counties, our hybrid model achieves 4.173% MAPE, outperforming the composing AR (5.629%) and LSTM (4.934%) alone on average. In country-level datasets, our hybrid model outperforms the widely-used predictive models such as AR, LSTM, Support Vector Machines, Gradient Boosting, and Random Forest, in predicting the COVID-19 cases in Japan, Canada, Brazil, Argentina, Singapore, Italy, and the United Kingdom. In addition to the predictive performance, we illustrate the interpretability of our proposed hybrid model using the estimated AR component, which is a key feature that is not shared by most black-box predictive models for COVID-19 cases. 
Our study provides a new and promising direction for building effective and interpretable data-driven models for COVID-19 cases, which could have significant implications for public health policy making and control of the current COVID-19 and potential future pandemics.


Introduction
The coronavirus disease 2019 (COVID-19) pandemic has posed a severe threat to global health and the economy while producing some of the richest data we have ever seen in infectious disease tracking. The quantity and quality of these data have placed epidemic modeling and forecasting at the forefront of worldwide public policy making. Compared to previous infectious diseases, COVID-19 shows special transmission characteristics, yielding significant fluctuations and non-stationarity in the new COVID-19 cases. This poses grand challenges for effective prediction and, on the other hand, draws the attention of the global community to epidemic tracking and prediction.
In the last three years, various models and methods have been developed to predict COVID-19 cases (see the survey in 1 and references therein). These models can be roughly grouped into two categories: mechanistic models and data-driven models. Mechanistic models aim at directly characterizing the underlying mechanisms of COVID-19 transmission. Typical examples of mechanistic models are based on differential equations, such as the compartmental SIR and SEIR models [2][3][4][5]. Data-driven models formulate the prediction of COVID-19 cases primarily as a regression problem and exploit fully data-adaptive approaches to learn the functional relationship between COVID-19 cases and a set of observable variables. Data-driven models include classical statistical models such as Autoregressive models (AR) [6][7][8], Support Vector Machines (SVM) [9][10][11], and deep learning models [12][13][14][15][16][17][18]. In this paper, we focus on data-driven models.
An Autoregressive model expresses the response variable as a linear function of its previous observations 19. Its simple structure and strong interpretability are found to be powerful in capturing short-term changing trends in time series. AR models have been applied in various fields, including infectious disease modeling 20,21. However, they may fail to capture the highly nonlinear patterns and long-term effects in the data-generating dynamics. On the other extreme of the predictive model complexity spectrum, deep learning models, particularly LSTM 22, have demonstrated impressive power in capturing nonlinear patterns, but at the cost of interpretability. Earlier work proposed a hybrid model of ARIMA and artificial neural networks, aiming to capture more patterns in the data and improve forecasting performance. The preprocessed data is used to fit an ARIMA model first, before the residual term is used as input to train a neural network model. Fan et al. 39 followed a similar procedure, using an ARIMA model and an LSTM model. The logic of these methods is to use an ARIMA model to capture the linear pattern of the data first and then rely on the neural network to capture the nonlinear pattern in the residuals. The main goal of these previous works is to explore whether a hybrid model produces better performance than the single models.
In our study, we design a general network architecture that includes both an AR part and an LSTM part additively and train the entire architecture jointly by minimizing the empirical risk. By doing so, we do not arbitrarily give preference to either of the two additive components. Instead, the relative weights of the interpretable AR part and the predictive LSTM part are determined fully by the data.

Our contributions can be summarized as follows:
• Development of a novel approach to hybrid modeling for COVID-19 case prediction: we have designed a general network architecture that combines AR and LSTM models additively and trains the entire architecture jointly, allowing the relative weights of the interpretable AR part and the predictive LSTM part to be fully determined by the data. This approach is a departure from the traditional sequential modeling approach and has the potential to contribute to the literature on sequential data prediction.
• Extensive numerical studies on datasets from two sources that display a rich variety of variability: we have shown that the proposed hybrid model demonstrates better forecasting performance than the single models. This finding is important, as it shows that the hybrid model is an effective way to combine the strengths of different modeling techniques and can be used as a framework for future research.
• Exploration of interpretability: we have also explored the interpretability of the hybrid model, which is an important contribution, as it allows for a better understanding of the model and can lead to improved decision-making based on the model's output. This contribution enhances the practical applicability of our proposed hybrid model.

Methods
In this section, we first review the two building blocks of our additive hybrid model, namely the AR model and the LSTM model, and their relative advantages. Then we present our hybrid model, which combines these two building blocks additively, and we explain intuitively why it is better than either of the two individual components.

Autoregressive (AR) models
In time series, we often observe associations between past and present values. For example, by knowing the price of a stock in the past few days, we can often make a rough prediction about its value tomorrow. An AR(p) model expresses the current value as a linear combination of its p most recent observations,

Y_t = a_0 + a_1 Y_{t-1} + · · · + a_p Y_{t-p} + ε_t,

where a_0 is the intercept and a_1, · · · , a_p represent the coefficients. The AR model is often effective on stationary data. To ensure stationarity, a common trick is to apply the differencing operation to the time series. The first-order difference at time t, Y_t^(1), is defined as

Y_t^(1) = Y_t − Y_{t-1},

and higher-order differencing operations can be defined recursively. However, an AR model is not sufficient to capture the nonlinear dependence structure, which is found to be an important feature of the COVID-19 data, as indicated by Figure 1. A purely AR-based model is thus often insufficient for the task of COVID-19 case prediction.

Long short-term memory networks (LSTM)
RNNs (Recurrent Neural Networks) 52 are known to suffer from the long-term dependency problem: as the network grows larger through time, the gradient decays quickly during backpropagation, making it impractical to train RNN models with long unfolding in time. To solve this problem, Hochreiter and Schmidhuber (1997) introduced a special type of RNN called LSTM, together with a proper gradient-based learning algorithm 22.
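As a minimal illustration of the AR(p) model and the differencing operation above, the following sketch fits the coefficients by ordinary least squares on lagged values. This is not the implementation used in our experiments; the function names (`difference`, `fit_ar`, `ar_predict`) are purely illustrative.

```python
import numpy as np

def difference(y, order=1):
    """Apply first-order differencing `order` times."""
    for _ in range(order):
        y = y[1:] - y[:-1]
    return y

def fit_ar(y, p):
    """Fit an AR(p) model by ordinary least squares.

    Returns (a0, coeffs) for the model
    Y_t = a0 + coeffs[0]*Y_{t-1} + ... + coeffs[p-1]*Y_{t-p} + noise.
    """
    # Column i holds the lag-(i+1) values Y_{t-1-i} for t = p, ..., n-1.
    X = np.column_stack([y[p - 1 - i : len(y) - 1 - i] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])   # intercept column
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return beta[0], beta[1:]

def ar_predict(y_hist, a0, coeffs):
    """One-step-ahead forecast from the p most recent values."""
    p = len(coeffs)
    lags = y_hist[-1 : -p - 1 : -1]   # Y_{t-1}, Y_{t-2}, ..., Y_{t-p}
    return a0 + float(np.dot(coeffs, lags))
```

On data simulated from a true AR(2) process, such a least-squares fit recovers the generating coefficients up to noise, which is what gives the AR component its interpretability.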
We employ an LSTM regression model, represented as

Y_t = G(Y_{t-1}, . . . , Y_{t-p}; θ) + ε_t,

where we use Y_{t-1}, . . . , Y_{t-p} as the sequential input data, G represents the neural network architecture shown in Figure 7, and θ represents the weight parameters of the neural network.
The core concepts of an LSTM cell are the cell state and the associated gates, as illustrated in Figure 8. The cell state C_{t-1} at time step t − 1 acts as a transport highway that transfers relevant information all the way down the sequence chain, which intuitively characterizes the "memory" of the network. The cell states, in principle, carry relevant information throughout the processing of the sequence, so even information from earlier time steps can make its way to later time steps, reducing the effects of short-term memory. The Forget Gate decides what information should be kept. The Input Gate decides what information is to be added from the current step and updates the cell state C_t at time step t. The Output Gate determines what the next hidden state h_t should be. Together, the four gates comprise a fully connected feed-forward neural network.
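The gate computations described above can be sketched as a single LSTM step in plain numpy. This is a simplified illustration of the standard LSTM cell, not our trained model; the stacked parameter layout (W, U, b holding the four gate blocks) is an assumption of this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the parameters of the
    forget (f), input (i), candidate (g), and output (o) gates."""
    z = W @ x_t + U @ h_prev + b          # pre-activations, shape (4*d,)
    d = h_prev.shape[0]
    f = sigmoid(z[0:d])                   # forget gate: what to keep from c_prev
    i = sigmoid(z[d:2 * d])               # input gate: how much new info to add
    g = np.tanh(z[2 * d:3 * d])           # candidate cell update
    o = sigmoid(z[3 * d:4 * d])           # output gate: what to expose as h_t
    c_t = f * c_prev + i * g              # cell state: the "memory highway"
    h_t = o * np.tanh(c_t)                # new hidden state
    return h_t, c_t
```

The additive update `c_t = f * c_prev + i * g` is what lets gradients flow over long horizons, in contrast to the purely multiplicative recurrence of a vanilla RNN.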
To achieve optimal prediction results with an LSTM model, careful hyperparameter tuning is crucial, including the choice of units (the dimension of the hidden state), the number of cells (i.e., the number of time steps), and the number of layers. This is usually a difficult task in practice. For example, too few LSTM cells are unlikely to capture the structure of the sequence, while too many LSTM cells might lead to overfitting. Moreover, just like other neural networks, a well-known limitation of LSTM is its lack of interpretability 23.

The hybrid model
As discussed above, both AR and LSTM have their relative strengths and limitations in their respective domains. We propose to combine the two models additively into one single hybrid model, expressed as

Ŷ_t = α (a_0 + a_1 Y_{t-1} + · · · + a_p Y_{t-p}) + (1 − α) G(Y_{t-1}, . . . , Y_{t-p}; θ),

where p is the lag number and α weights the contribution of the two components: by tuning the value of α, one can strike a balance between the predictions given by the AR and LSTM parts, and thus between linear and nonlinear signals.
We illustrate the structure of the hybrid model in Figure 2. The hybrid model is characterized as one neural network architecture in which the two composing models are added through the last layer. The AR component captures the linear relationships in the time series, while the LSTM component describes the nonlinear patterns. In section Training of the Supplementary material, we show how to train the weights of each of the two components in a fully data-adaptive manner by minimizing the empirical risk. We compare the contributions of the hybrid model's AR component and LSTM component in section Results.
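To make the additive combination concrete, here is a hypothetical sketch of the hybrid one-step forecast, with the AR part and a generic nonlinear part weighted by α. In our actual model, both components and α are trained jointly by minimizing the empirical risk; here the parameters are assumed to be given, and `nonlinear` stands in for the trained LSTM G(·; θ).

```python
import numpy as np

def hybrid_predict(lags, ar_params, nonlinear, alpha):
    """Additive hybrid forecast: alpha weights the AR part against the
    nonlinear part. `lags` holds (Y_{t-1}, ..., Y_{t-p}); `nonlinear` is
    any callable mapping the lag vector to a scalar prediction."""
    a0, coeffs = ar_params
    ar_part = a0 + float(np.dot(coeffs, lags))
    return alpha * ar_part + (1.0 - alpha) * float(nonlinear(lags))
```

Setting α = 1 recovers the pure AR forecast and α = 0 the pure nonlinear forecast; intermediate values interpolate between the two, which is exactly the balance the joint training procedure learns from the data.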

Results
The results include four sections: Model evaluations, Prediction, Interpretability, and Comparative study on the WHO datasets. In Model evaluations, we introduce the metrics we use to evaluate the models and on which we compare the models' performances. In section Prediction, we exhibit visualizations of several interesting trials and compare the numerical predictions and evaluations of the three models. In the Interpretability part, we compare the AR component of the hybrid model with the AR model, to examine how we may interpret the hybrid model. We leave other training details to the Supplementary material. In section Comparative study on the WHO datasets, we further examine the performance of the proposed hybrid model by applying it to data from 7 different countries around the world and comparing its performance with that of its component models and 3 additional models.

Data description and statistical analysis
We utilize two primary data sources. The first is a dataset specific to California counties, available in the CHHS Open Data repository under the title COVID-19 Time-Series Metrics by County and State. This dataset includes information on populations, positive and total tests, number of deaths, and positive cases. We conducted a preliminary statistical analysis to examine correlations between these variables and the number of daily cases. The results of this analysis can be found in Supplementary figure 9 in the Supplementary material, and we anticipate that they will provide valuable insights for future research.
The second data source, used for comparative analysis, can be found in the WHO repository at the WHO Coronavirus (COVID-19) Dashboard. This resource presents official daily counts of COVID-19 cases, deaths, and vaccine utilization, as reported by countries, territories, and areas. In this study, we use data from 7 countries: Japan, Canada, Brazil, Argentina, Singapore, Italy, and the United Kingdom.
All datasets generated and analysed during the current study are also available in the author's Github repository 24 .

Model evaluations
We use a quantitative measure to evaluate and compare the performance of the models: the Mean Absolute Percentage Error (MAPE), defined as

MAPE = (100% / n) × Σ_{t=1}^{n} |Y_t − Ŷ_t| / |Y_t|,

where Ŷ_t denotes the prediction of Y_t and n is the number of predicted values. A model with small MAPE values is preferred. We examine the performance of the three models (hybrid, AR, and LSTM) on different time periods within the available range. This is essential in our research, since the performance of a model is not constant across different trends; by intuition, a model performs better on smooth curves than on steep curves. By repeating our evaluation process on different time periods, and thus different trends, we wish to understand on which trends the models give the best performance. Such understanding helps us decide to what degree we may trust the performance of the models. We evaluate the models repeatedly to reduce the influence of the instability of model training. Specifically, we leave 7 days between the first dates of any two consecutive training data points. Although a larger number of repetitions seems desirable, increasing the repetition number comes at the cost of making neighboring training points closer to each other. The difference in performance between two neighboring training points that are too close to each other would be attributed more to the instability of model training than to the difference in trend. Such results would give us little information about model performance over trends. In the end, we let the step number be the same as our lag number. By doing so, we suppose the concept of a week is important in forecasting.
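The evaluation metric and the rolling scheme described above can be sketched as follows; `rolling_windows` is a hypothetical helper that spaces the start dates of consecutive trials `step` (here 7) days apart.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))

def rolling_windows(n, train_len, test_len, step=7):
    """(train_start, train_end, test_end) index triples for evaluation
    windows whose start dates are `step` days apart."""
    starts = range(0, n - train_len - test_len + 1, step)
    return [(s, s + train_len, s + train_len + test_len) for s in starts]
```

Each triple then slices the series into one training segment and the test segment immediately following it, so repeated trials cover a variety of trends.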

Additional evaluation metrics
In the Supplementary material, we additionally evaluate and compare the above models using the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The evaluation is done on the same datasets across the different comparing methods.
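For completeness, the two supplementary metrics admit equally short definitions (a sketch mirroring the standard formulas, not our evaluation code):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(d)))
```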

Prediction
In this section, we present the visual and numerical results for all three models. We perform a comprehensive comparison of the performance of the three models in multiple counties, showing the advantage of the hybrid model. All predictions are transformed back to the original scale.

Visualization
We compare the three models' performance on COVID-19 case prediction in 8 California counties. For each county, we test the models' performance in several different situations: for example, when the training data has an upward trend and the testing data has a downward trend. From all trials we conducted, we choose the trials presented in Figure 3 and Figure 4 as representatives of different combinations of training and testing data. Figure 3a shows models trained on curved data and tested on down-trend data, as shown in the left and right panels, respectively. Figure 3b shows models trained on up-trend data and tested on down-trend data. Figure 3c shows models trained and tested on up-trend data. Figure 3d shows models trained and tested on down-trend data. Figures 4a and 4b show models trained on down-trend data and tested on up-trend data, where Figure 4a has gently upward testing data and Figure 4b has sharply upward testing data. Figure 4c shows models trained and tested on jagged data.
To ensure the results above are representative, we run each selected trial 100 times, visualize the mean and standard error of these trials, and present the averaged MAPE. While AR outperforms LSTM in some cases, the hybrid model outperforms both in most cases, except in Figure 3b and Figure 4c. The MAPE, averaged over the 100 trials, shows that LSTM (4.469%) slightly outperforms hybrid (4.993%) in Figure 3b. However, as shown in the right panel of Figure 3b, the hybrid model captures the general trend of the ground truth better than LSTM does. Similarly, in Figure 4c, AR (3.675%) slightly outperforms hybrid (3.718%). Yet, as shown in the right panel of Figure 4c, the hybrid model captures the general trend of the ground truth better than AR does.
Besides, interestingly, the hybrid model always seems to capture the ground truth's trend. In fact, the shape of the hybrid model's forecasts resembles either that of the AR model or that of the LSTM model, or a combination of both. When the AR model captures the trend better than the LSTM does, the hybrid model resembles the AR model in forecast shape: for example, in Figure 3b (San Francisco, 2020-02-17 to 2020-05-14) and in Figure 4a (Santa Barbara, 2022-01-17 to 2022-04-14). When the LSTM model captures the trend better than the AR does, the hybrid model resembles the LSTM model in forecast shape: for example, in Figure 3d (San Francisco, 2022-06-10 to 2022-09-05) and in Figure 4b (Riverside, 2022-02-16 to 2022-12-20). On jagged testing data, where AR performs better on some parts and LSTM on others, the hybrid model presents the advantages of both models: for example, in Figure 4c, the hybrid model resembles AR at the two ends, where AR performs better, and it resembles LSTM in shape between day 5 and day 15, where LSTM seems to capture the trend better.

General performance
We evaluated the model performances numerically in the 8 California counties across multiple trials. The results are given in Table 2. We observe that the hybrid model outperforms the AR and LSTM models almost uniformly: it generally yields the smallest average MAPE. Specifically, the general MAPE of each model (AR, LSTM, LSTM with 2 layers, and hybrid), averaged over the results for all 8 counties, is 5.629%, 4.934%, 6.804%, and 4.173%, respectively. In general, the hybrid model has the best overall performance, and it outperforms the AR model by approximately 1.5 percentage points. The LSTM model suffers from overfitting when a second LSTM layer is added. As seen in the Supplementary material, the proposed hybrid model also yields the lowest RMSE and MAE values.

Interpretability
Interpretability of hybrid models can be defined as the ability to provide insight into the relationships they have learned, as introduced by Murdoch et al. 23. Our hybrid model offers a decomposition approach to deciphering the learned data-generating mechanism: the estimated AR component provides an easy-to-understand linear trend, while the LSTM component captures the long-term and nonlinear trends in the time series data. Our hybrid model aims to strike a balance between interpretability and accuracy, enabling us to gain insights into the underlying data while still achieving high predictive performance.
In this section, we study how the AR and LSTM components contribute to the hybrid model when fitting the data. Our purpose is to seek insight into why the hybrid model enjoys better performance in general. More importantly, we seek to use the interpretation of the fitted hybrid model to provide practical guidance to the public health policy making process.
Note that all models are trained on the normalized data as described in section Training. Consequently, all figures below report predictions on the normalized scale.
In Figure 5, we present three settings with different signal strength ratios (represented by the value of α) of the AR and LSTM components in the prediction of the hybrid model. Specifically, a larger value of α indicates that the AR component dominates the LSTM component in prediction, and a smaller value of α indicates otherwise. We found that the component with the stronger signal characterizes the general trend in the data while the other helps to stabilize the variance. This observation sheds light on why the hybrid model generally provides better predictive performance than a single model.
Moreover, the fitted value of α characterizes the intrinsic nonlinearity of the data, and consequently the difficulty of extracting interpretation from the linear component of the fitted hybrid model. The smaller the value of α, the higher the weight the nonlinear LSTM fit has in the final prediction. In such a setting, the coefficients of the AR component should be given less weight in generating interpretations for policy making. Equivalently, for larger values of α, it is more trustworthy to derive coefficient interpretations from the dominant AR part. This observation can help public policy makers distinguish among different virus transmission stages. Finally, we observe interesting patterns in the coefficient estimates of the AR component of the hybrid model compared with the coefficients of the pure AR model. As shown in Table 3, across the three settings of different values of α, the pure AR model tends to put heavier weight on coefficients of larger lags, say Y_{t-7}. In contrast, the AR component of the hybrid model tends to focus on capturing the short history, i.e., the coefficients associated with smaller lags (e.g., Y_{t-1}) tend to have larger estimates. This indicates that the short-history pattern in the data can be well approximated by a simple (say, linear) model, while the longer history possesses a more complicated nonlinear structure that requires an LSTM component to fit.

Comparative study on the WHO datasets
In this section, we compare our proposed hybrid model for COVID-19 prediction with its two component models, the AR and LSTM models, as well as three other commonly used models: Support Vector Machines 53 (SVM), Random Forest 54 (RF), and eXtreme Gradient Boosting 55 (XGBoost). To ensure the effectiveness of our model in different application settings, we use country-level data for this comparative study, focusing on datasets from 7 different countries collected by the World Health Organization.
We provide a brief overview of the three additional comparing methods. Support Vector Machines (SVM) 42,47 is a machine learning model that identifies the optimal hyperplane in a high-dimensional space that maximally separates data points into different classes. SVMs apply to both classification and regression problems. SVM is known to not perform well on noisy or unbalanced data 56,57.
Random Forest [43][44][45] is an ensemble learning method that constructs a multitude of decision trees. A Random Forest is very flexible and can handle complex data types. On the other hand, Random Forests are known for their reduced interpretability, sensitivity to noise, need for hyperparameter tuning, and potential issues with imbalanced data. These factors may impact their performance in the context of COVID-19 prediction [58][59][60].
Extreme Gradient Boosting (XGBoost) 44,46,48 has shown exceptional performance in various tasks. XGBoost is an ensemble learning method based on gradient boosting trees. It is known for its efficiency, scalability, and accuracy. However, like other tree-based ensemble methods, it can be more challenging to interpret, which may make it difficult to understand the driving factors behind predictions. In addition, XGBoost can be prone to overfitting, especially with small datasets or when the hyperparameters are not tuned properly 61,62.
We present the numerical results of the comparative study, visualized in Figure 6. The comparative study is done on data collected by the World Health Organization 63 in Japan (JPN), Canada (CAN), Brazil (BRA), Argentina (ARG), Singapore (SGP), Italy (ITA), and the United Kingdom (GBR).
Overall, the proposed hybrid model performs better than the other models in most cases, as evidenced by its lower MAPE.This suggests that our model is effective in various situations and outperforms other commonly used models for COVID-19 prediction.

Discussion
In this paper, we introduce a novel hybrid model that borrows strength from a highly structured Autoregressive model and an LSTM model for the task of COVID-19 case prediction. Through intensive numerical experiments, we conclude that the hybrid model yields more desirable predictive performance than the AR or LSTM counterpart alone. In principle, the hybrid model enjoys the advantages of each of its two building blocks: the expressive power of LSTM in representing nonlinear patterns in the data and the interpretability of the simple structure of AR. Consequently, the proposed hybrid model is useful in simultaneously providing accurate predictions and shedding light on the transitions between virus transmission phases, thus providing guidance to the public health policy making process.
It is also noteworthy that the predictive performance of the proposed hybrid model can be further improved by properly choosing the hyperparameters.Furthermore, while we considered LSTM as the nonlinear component in the hybrid model, it can be substituted by any other deep learning models.
Differencing. Statistical methods such as AR have guaranteed performance on stationary time series. However, the raw data are typically not stationary. Differencing helps stabilize the mean of a time series by removing changes in its level, thereby eliminating (or reducing) trend and seasonality. This can make the differenced time series stationary. Specifically, we keep differencing the data until stationarity is achieved according to standard tests such as the Augmented Dickey-Fuller Unit Root Test (ADF) 66 or the Kwiatkowski-Phillips-Schmidt-Shin Test (KPSS) 67. We found consistent stationarity among different trials with only one differencing operation.

Rescale. After differencing the data, we rescale the training data into [−1, 1] with

Y_train,scaled = (Y_train − µ_train) / (Y_max,train − Y_min,train),

where µ_train denotes the mean of Y_train, Y_max,train the maximum of Y_train, Y_min,train the minimum of Y_train, and Y_train,scaled the training data after scaling. We apply the same rescaling map to the testing data:

Y_test,scaled = (Y_test − µ_train) / (Y_max,train − Y_min,train).

Thus the testing data does not affect the selection of our scaling. We train the model on Y_train,scaled and make predictions with Y_test,scaled. Finally, we apply the inverse rescaling map to retrieve the original scale, and thus make comparisons with the original data.
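The rescaling map above, and the inverse map used to return predictions to the original scale, can be sketched as follows. `rescale` is an illustrative helper name, and only training statistics are used, as described.

```python
import numpy as np

def rescale(train, test):
    """Scale both series with training statistics only, following
    Y_scaled = (Y - mu_train) / (Y_max,train - Y_min,train),
    which maps the training data into [-1, 1]."""
    mu = train.mean()
    span = train.max() - train.min()
    scale = lambda y: (y - mu) / span
    inverse = lambda y: y * span + mu      # back to the original scale
    return scale(train), scale(test), inverse
```

Because the test data are scaled with the training mean and range, no information leaks from the test period into the preprocessing step.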
Reshaping. After differencing and rescaling the data, we transform the data into supervised learning form, as shown in Supplementary table 4.
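The reshaping step can be sketched as follows, assuming a lag number p and the convention that each input row lists Y_{t-1}, . . . , Y_{t-p} (`to_supervised` is an illustrative name, not from the paper):

```python
import numpy as np

def to_supervised(y, p):
    """Turn a series into (X, targets): row t-p of X holds the p previous
    values Y_{t-1}, ..., Y_{t-p}; the corresponding target is Y_t."""
    X = np.array([y[t - p : t][::-1] for t in range(p, len(y))])
    targets = np.asarray(y[p:])
    return X, targets
```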

Figure 1 .
Figure 1. An example of visualizing daily observations, where the blue line represents the data before smoothing and the orange line represents the data after smoothing. The data are collected from Los Angeles county.

Figure 2 .
Figure 2. Visualization of the hybrid architecture.
[Figure 3 panels] (a) Case 1: curved training data and down-trend testing data (San Diego, 2020-12-03 to 2021-02-28). (b) Case 2: up-trend training and down-trend testing (San Francisco, 2020-02-17 to 2020-05-14). (c) Case 3: up-trend training and up-trend testing (Los Angeles, 2020-09-24 to 2020-12-20). (d) Case 4: down-trend training and down-trend testing (San Francisco, 2022-06-10 to 2022-09-05).

Figure 3 .
Figure 3. The left panels show the training and testing data. The right panels show the ground truth versus the forecasts of the AR, LSTM, and hybrid models, respectively. We display the average prediction (solid line) with 2 times the standard error (shaded region). The standard errors across 100 runs are reported for LSTM and hybrid. The hybrid model is more stable than the LSTM.

Figure 4 .
Figure 4. The left panels show the training and testing data. The right panels show the ground truth versus the forecasts of the AR, LSTM, and hybrid models, respectively. We display the average prediction (solid line) with 2 times the standard error (shaded region). The standard errors across 100 runs are reported for LSTM and hybrid. The hybrid model is more stable than the LSTM.

Figure 5 .
Figure 5.The forecasts of a hybrid model versus the ground truth, and the contribution of its AR and its LSTM component.

Table 1 .
A non-exhaustive list of previous works on data-driven models for COVID-19 case prediction in the past three years. The most commonly used evaluation metrics are RMSE, MAE, and MAPE.
AR is a simple model that utilizes this empirical observation and can yield very accurate predictions in certain applications. It represents the time series values using a linear combination of the past values. The number of past values used is called the lag number, often denoted by p. Let ε_t denote the Gaussian noise at time t with mean 0 and variance σ². The structural equation of the AR(p) model can be represented as

Y_t = a_0 + a_1 Y_{t-1} + · · · + a_p Y_{t-p} + ε_t.

Table 2 .
MAPE (in percentage) for each model in each county. The general performance is averaged over all trials. The inconsistent performance of neural networks has been compensated for by the small step value, which is 7. The latest performance is on the latest trial, from 2022-06-10 to 2022-09-05. The results for LSTM, LSTM double, and hybrid are each averaged over 100 runs to compensate for the inconsistent performance of neural networks. The value in parentheses is the standard error. AR has zero or small standard error for the same trial. The hybrid model is usually the best in performance and has the lowest standard error. Best performances are given in bold.

Table 3 .
Coefficients of the AR model vs. AR coefficients of the hybrid model. When the AR component dominates the prediction (α = 0.817), the MAPE for AR and hybrid is 3.088% and 2.593%, respectively. When the AR and LSTM components have similar weights in prediction (α = 0.547), the MAPE for AR and hybrid is 2.523% and 1.950%. When the LSTM component dominates the prediction (α = 0.174), the MAPE for AR and hybrid is 9.337% and 4.665%. Coefficients with the largest absolute values are given in bold.

Table 5.
RMSE for each model in each county. The general performance is averaged over all trials. The inconsistent performance of neural networks has been compensated for by the small step value, which is 7. The latest performance is on the latest trial, from 2022-06-10 to 2022-09-05. The results for LSTM, LSTM double, and hybrid are each averaged over 100 runs to compensate for the inconsistent performance of neural networks. The value in parentheses is the standard error. AR has zero or small standard error for the same trial. The hybrid model is usually the best in performance and has the lowest standard error.

Table 6.
MAE for each model in each county. The general performance is averaged over all trials. The inconsistent performance of neural networks has been compensated for by the small step value, which is 7. The latest performance is on the latest trial, from 2022-06-10 to 2022-09-05. The results for LSTM, LSTM double, and hybrid are each averaged over 100 runs to compensate for the inconsistent performance of neural networks. The value in parentheses is the standard error. AR has zero or small standard error for the same trial. The hybrid model is usually the best in performance and has the lowest standard error.