Global short-term forecasting of COVID-19 cases

The continuously growing number of COVID-19 cases pressures healthcare services worldwide. Accurate short-term forecasting is thus vital to support country-level policy making. The strategies adopted by countries to combat the pandemic vary, generating different uncertainty levels about the actual number of cases. Accounting for the hierarchical structure of the data and accommodating extra-variability is therefore fundamental. We introduce a new modelling framework to describe the pandemic’s course with great accuracy and provide short-term daily forecasts for every country in the world. We show that our model generates highly accurate forecasts up to seven days ahead and use estimated model components to cluster countries based on recent events. We introduce statistical novelty in terms of modelling the autoregressive parameter as a function of time, increasing predictive power and flexibility to adapt to each country. Our model can also be used to forecast the number of deaths, study the effects of covariates (such as lockdown policies), and generate forecasts for smaller regions within countries. Consequently, it has substantial implications for global planning and decision making. We present forecasts and make all results freely available to any country in the world through an online Shiny dashboard.


Scientific Reports (2021) 11:7555 | https://doi.org/10.1038/s41598-021-87230-x

To support the effective implementation of restrictions or reopening measures, we provide all results as an R Shiny Dashboard, including week-long forecasts for every country in the world whose data are collected and made available by the European Centre for Disease Prevention and Control (ECDC).

Results
Our model displayed an excellent predictive performance for short-term forecasting. We validated the model by fitting it to the data up to 18-November-2020, after removing the last seven time points (from 19-November-2020 until 25-November-2020), and compared the forecasted values with the observed ones (Fig. 1A). We observe that it performs very well (Fig. 1B). The concordance correlation coefficient and the Pearson correlation coefficient (a measure of precision) remain higher than 0.8, even for the seventh day-ahead forecast. We also observed an accuracy close to 1, showing our methodology has a high potential to extend the number of forecast days beyond seven. Interestingly, Spain reported zero cases on the 22nd and 23rd of November, while the forecasts for these days are around 19,950 (≈ 10^{4.3} − 1). In these circumstances, the forecast could be considered a better depiction of reality than the reported observations.

[Figure 1. (A) Logarithm of the observed $y_{it}$ versus the forecasted daily number of cases $y^*_{it}$ for each country, for up to seven days ahead, where each day ahead constitutes one panel. The forecasts were obtained from the autoregressive state-space hierarchical negative binomial model, fitted using data up to 18-November-2020. The first day ahead corresponds to 19-November-2020, and the seventh to 25-November-2020. Each dot represents a country, and the sixteen countries shown in Fig. 2 are represented by blue triangles. We add 1 to the values before taking the logarithm. (B) Observed accuracy, concordance correlation coefficient (CCC) and Pearson correlation (r) between observed ($y_{it}$) and forecasted ($y^*_{it}$) values for each of the days ahead of 18-November-2020.]
We carried out this same type of validation study using data up to 18-November-2020, 13-May-2020, 6-May-2020, and 29-April-2020, and the results were very similar to the ones outlined above, although the validation study carried out in November showed better performance than the one done in May (see Supplementary Materials). Furthermore, even though performance is expected to fall as the number of days ahead increases, there are still many countries where the forecasted daily number of new cases is very close to the observed one.
The autoregressive component in the model directly relates to the pandemic behaviour over time for each country (see Supplementary Materials). It is directly proportional to the natural logarithm of the daily number of cases, given what happened on the previous day. Therefore, it is sensitive to changes and can be helpful to detect a possible second wave. See, for example, its behaviour for Australia, Iceland and Ireland: it shows that the outbreak is decaying, although it may still take time to subside completely (Fig. 2). In Austria, France, Italy, Japan, Serbia, South Korea, Spain and the UK, by contrast, the outbreak is in the middle of another wave, having already peaked in some of these countries or still approaching a peak. Hence, these countries must be very cautious when relaxing restrictions. In Brazil and the United States, the outbreak appears to be taking a long time to subside.
The estimates for the model parameters suggest that about 12.6% of the reported cases can be viewed as contributing extra variability and may consist of outlying observations (see the model estimates in the Supplementary Materials). These observations may be actual outliers, but this is more likely a feature of the data collection process. In many countries, the data recorded for a particular day actually reflect tests that were done over the previous week or even earlier. This generates aggregated data, which is prone to overdispersion; our model accounts for this, but for some observations the variability is even greater, since they reflect a large number of accumulated suspected cases that were then confirmed at once. There is also large variability between countries regarding their underlying autoregressive processes (see the estimates for the variance components in the Supplementary Materials). This corroborates that countries are dealing with the pandemic in different ways and may collect and/or report data differently.
We propose clustering the countries based on the behaviour of their estimated autoregressive parameter over the last 60 days (Fig. 3). This gives governments the opportunity to see which countries have had the most similar recent behaviour of the outbreak, and to study similar or different measures taken by these other countries to help determine policy. We observe, for example, that Spain, the UK and Russia have been experiencing a very similar situation recently; the same can be said about India and the US. Our R Shiny dashboard displays results in terms of forecasts and country clustering. It can be accessed at https://prof-thiagooliveira.shinyapps.

Discussion
Our modelling framework allows for forecasting the daily number of new COVID-19 cases for each country and territory for which data has been gathered by the ECDC. It introduces statistical novelty by modelling the autoregressive parameter as a function of time, making it highly flexible to adapt to changes in the autoregressive structure of the data. In the COVID-19 pandemic, this translates directly into improved predictive power when forecasting future numbers of daily cases. Our objective here is to provide a simple, yet not overly simplistic, framework for country-level decision-making, and we understand this might be easier for smaller countries than for nations of continental dimensions, where state-level decision-making should be more effective 20 . Moreover, the model can be adapted to other types of data, such as the number of deaths, and can also be used to obtain forecasts for smaller regions within a country. A natural extension would be to include an additional hierarchical structure taking into account the nested relationship between cities, states, countries and ultimately continents, while also accommodating spatial autocorrelation. This would allow for capturing the extra variability introduced by aggregating counts over different cities and districts. We remark that one must be very careful when looking at the forecasted number of cases. These values must not be viewed in isolation; it is imperative to consider the entire context and to understand how the data is actually generated 21 . The model obtains forecasts based on what is being reported, and these forecasts are therefore a direct reflection of the data collection process, be it appropriate or not. When data collection is not trustworthy, model forecasts will not be either. Our estimated proportion of outlying observations, approximately 12.6%, is relatively high.
When looking at each country's time series, we observe that countries that display unusual behaviour in terms of more outlying observations are usually related to a poorer model predictive performance as well. This suggests that the data collection process is far from optimal. Therefore, looking at the full context of the data is key to implementing policy based on model forecasts.
As a point of self-criticism, our model may be seen as simplistic in the sense that it relies on the previous observation to predict the next one, rather than on mechanistic biological processes, which may be accommodated by SEIR-type models 5,22,23 . These types of models allow for a better understanding of the pandemic in terms of the disease dynamics and are able to provide long-term estimates 6 . However, as highlighted above, our objective is short-term forecasting, which is already a very difficult task. Long-term forecasting is even more difficult, and we believe it could even be speculative when dealing with its implications pragmatically. A more tangible solution is to combine the short-term forecasting power of our proposed modelling framework with the long-term projections provided by mechanistic models, so as to implement policy that solves pressing problems efficiently while also considering how it may affect society in the long run. We acknowledge all models are wrong, including the one presented here. However, we have shown that it provides excellent forecasts for up to a week ahead, and this may help inform the efficacy of country lockdown and/or reopening policies. This is especially useful when our proposed method is coupled with models that take into account the incubation period, variable lag times associated with disease progression, and the estimation of the impact of public health measures. Therefore, the practical relevance of our proposed method is stronger when linked to complementary long-term information.
Even though the model performs very well, we stress the importance of constantly validating the forecasts, since not only can the underlying process change, but also data collection practices change (to either better or worse). Many countries are still not carrying out enough tests, and hence the true number of infections is sub-notified 24 . This hampers model performance significantly, especially that of biologically realistic models 5 .
There is a dire need for better quality data 21 . This is a disease with a very long incubation period (up to 24 days 25 ), with a median value varying from 4.5 to 5.8 days 26 , which makes it even harder to pinpoint exactly when the virus was actually transmitted from one person to another. Furthermore, around 97.5% of those who develop symptoms will do so between 8.2 and 15.6 days 26 . The number of reported new cases for today reflects a mixture of infections transmitted at any point in time over the last few weeks. The time testing takes also influences this. One possible alternative is to model excess deaths when compared to averages over previous years 27,28 . This is an interesting approach, since it can highlight the effects of the pandemic (see, e.g., the insightful visualisations at Our World in Data 29 , and the Financial Times 30 ). Again, this is highly dependent on the quality of the data collection process, not only for COVID-19 related deaths but also those arising from different sources. Many different teams of data scientists and statisticians worldwide are developing different approaches to work with COVID-19 data 3,9,16,31 . It is the duty of each country's government to collect and provide accurate data. This way, it can be used with the objective of improving healthcare systems and general social wellbeing.
The developed R Shiny Dashboard displays seven-day forecasts for all countries and territories whose data are collected by the ECDC and clustering of countries based on the last 60 days. This can help governments currently implementing or lifting restrictions. It is possible to compare government policies between countries at similar pandemic stages to determine the most effective courses of action. These can then be tailored by each particular government to their own country. The efficiency of measures put in place in each country can also be studied using our modelling framework, since it easily accommodates covariates in the linear predictor. Then, the contribution of these country-specific effects to the overall number of cases can serve as an indicator of how they may be influencing the behaviour of the outbreaks over time.
Government policies are extremely dependent on the reality of each country. It has become clear that there are countries that are well able to withstand a complete lockdown, whereas others cannot cope with the economic downfall 19 . The issue is not only economic, but also of newly emerging health crises that are not due to COVID-19 lethality alone, but to families relying on day-to-day work to guarantee their food supplies. For these countries, there is a trade-off between avoiding a new evil versus amplifying pre-existing problems or even creating new ones. It is indeed challenging to create a one-size-fits-all plan in such circumstances, which makes it even more vital to strive for better data collection practices.
We hope to be able to contribute to the development of an efficient short-term response to the pandemic for countries whose healthcare systems are at capacity, and countries implementing reopening plans. By providing a means of comparing recent behaviour of the outbreak between countries, we also hope to provide a means to opening dialogue between countries going through a similar stage, and those who have faced similar situations before.

Methods
Data acquisition. The data was obtained from the European Centre for Disease Prevention and Control (ECDC) up to 14 December 2020 (after this date, the ECDC started to report aggregated weekly data; however, the methodology proposed here works for any other data source collecting daily data), and the code is implemented such that it downloads and processes the data from https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases. We assumed non-available data to be zero prior to the first case being recorded for each country. Whenever the daily recorded data was negative (reflecting previously confirmed cases being discarded), we replaced that information with a zero. This is the case for only 18 out of 71,904 observations as of the 25th of November 2020. According to the ECDC, the number of cases is based on reports from health authorities worldwide (up to 500 sources), which are screened by epidemiologists for validation prior to being recorded in the ECDC dataset.
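The preprocessing rules above (zero-filling dates before a country's first case, and replacing negative daily counts with zero) can be sketched in Python with pandas; the column names `country`, `date` and `cases` are hypothetical, and the paper's actual implementation is in R:

```python
import pandas as pd

def preprocess_cases(df: pd.DataFrame) -> pd.DataFrame:
    """Clean a daily-cases table with (hypothetical) columns
    'country', 'date', 'cases': negative daily counts are replaced
    with zero, and missing days (including those before a country's
    first recorded case) are filled with zero."""
    df = df.sort_values(["country", "date"]).copy()
    # Negative counts reflect previously confirmed cases being discarded.
    df.loc[df["cases"] < 0, "cases"] = 0
    # Reindex each country to the full date range, zero-filling gaps.
    full_dates = pd.date_range(df["date"].min(), df["date"].max(), freq="D")
    out = []
    for country, g in df.groupby("country"):
        g = g.set_index("date").reindex(full_dates, fill_value=0)
        g["country"] = country  # restore the name on zero-filled rows
        out.append(g)
    return pd.concat(out).rename_axis("date").reset_index()
```

This keeps every country on a common daily grid, which the Markov structure of the model (each day conditioning on the previous one) requires.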
Modelling framework. We introduce a class of state-space hierarchical models for overdispersed count time series. Let $Y_{it}$ be the observed number of newly infected people in country $i$ at time $t$, with $i = 1, \ldots, N$ and $t = 1, \ldots, T$. We model $Y_{it}$ as a negative binomial first-order Markov process, with

$$Y_{it} \mid Y_{i,t-1} \sim \mathrm{NB}(\mu_{it}, \psi),$$

for $t = 2, \ldots, T$. The parameterisation used here results in $\mathrm{E}(Y_{it} \mid Y_{i,t-1}) = \mu_{it}$ and $\mathrm{Var}(Y_{it} \mid Y_{i,t-1}) = \mu_{it} + \mu_{it}^2/\psi$. The mean is modelled on the log scale as the sum of an autoregressive component ($\gamma_{it}$) and a component that accommodates outliers ($\epsilon_{it}$), i.e.

$$\log \mu_{it} = \gamma_{it} + \epsilon_{it}.$$
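As a quick check of this parameterisation (assuming the common NB2 form, with mean $\mu$ and variance $\mu + \mu^2/\psi$), the pair $(\mu, \psi)$ maps onto scipy's `nbinom(n, p)` via the standard identity $n = \psi$, $p = \psi/(\psi + \mu)$; this mapping is a textbook fact, not taken from the paper:

```python
from scipy.stats import nbinom

def nb_mean_var(mu: float, psi: float):
    """Map the (mu, psi) parameterisation, with E(Y) = mu and
    Var(Y) = mu + mu^2 / psi, onto scipy's nbinom(n, p) with
    n = psi and p = psi / (psi + mu), and return (mean, variance)."""
    n, p = psi, psi / (psi + mu)
    dist = nbinom(n, p)
    return dist.mean(), dist.var()

mean, var = nb_mean_var(mu=100.0, psi=4.0)
# mean == 100, var == 100 + 100**2 / 4 == 2600
```

The quadratic variance term is what lets the model absorb overdispersion relative to the Poisson, for which the variance would equal the mean.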
To accommodate the temporal correlation in the series, the non-stationary autoregressive process $\{\gamma_{it}\}$ is set up as

$$\gamma_{it} = \phi_{it}\,\gamma_{i,t-1} + \eta_{it}, \qquad (1)$$

where $\eta_{it}$ is a Gaussian white noise process with mean 0 and variance $\sigma^2_\eta$. Differently from standard AR(1)-type models, here $\phi_{it}$ is allowed to vary over time through an orthogonal polynomial regression linear predictor using time as a covariate, yielding

$$\phi_{it} = \sum_{q=0}^{Q} (\beta_q + b_{qi})\,P_q(t), \qquad (2)$$

where $P_q(\cdot)$ is the function that produces the orthogonal polynomial of order $q$, with $P_0(x) = 1$ for any real number $x$; the $\beta_q$ are the regression coefficients and $\mathbf{b}_i$ is the vector of random effects, which are assumed to be normally distributed with mean vector $\mathbf{0}$ and variance-covariance matrix $\Sigma_b = \mathrm{diag}(\sigma^2_{b_0}, \ldots, \sigma^2_{b_Q})$. By allowing $\phi_{it}$ to vary by country over time, we obtain a more flexible autocorrelation function. Iterating (1) we obtain

$$\gamma_{it} = \left(\prod_{k=2}^{t} \phi_{ik}\right)\gamma_{i1} + \sum_{k=2}^{t-1}\left(\prod_{l=k+1}^{t} \phi_{il}\right)\eta_{ik} + \eta_{it},$$

for $t = 3, \ldots, T$. Note that in the particular case where $\phi_{it} = \phi_i = \beta_0 + b_{0i}$, then

$$\gamma_{it} = \phi_i^{t-1}\gamma_{i1} + \phi_i^{t-2}\eta_{i2} + \phi_i^{t-3}\eta_{i3} + \cdots + \phi_i\,\eta_{i,t-1} + \eta_{it},$$

which is equivalent to a country-specific AR(1) process. On the other hand, if $\phi_{it} = \phi = \beta_0$, then $\gamma_{it} = \phi^{t-1}\gamma_{i1} + \phi^{t-2}\eta_{i2} + \phi^{t-3}\eta_{i3} + \cdots + \phi\,\eta_{i,t-1} + \eta_{it}$, which is equivalent to assuming the same autocorrelation parameter for all countries.
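A minimal simulation sketch of this time-varying AR process for a single hypothetical country is given below; the orthogonal polynomial basis is built by QR decomposition of the Vandermonde matrix (the same construction as R's `poly()`), the β values are illustrative, and the country-level random effects are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def orthogonal_poly(t: np.ndarray, degree: int) -> np.ndarray:
    """Orthogonal polynomial basis P_0..P_Q in the covariate t,
    built via QR decomposition of the Vandermonde matrix, rescaled
    so that P_0(x) = 1 exactly."""
    X = np.vander(t.astype(float), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(X)
    return Q / Q[0, 0]  # first column becomes the constant 1

def simulate_gamma(T=200, Q_deg=2, beta=(0.8, 0.1, -0.05), sigma_eta=0.05):
    """Simulate gamma_t = phi_t * gamma_{t-1} + eta_t with a
    time-varying phi_t = sum_q beta_q P_q(t) (random effects omitted
    for this single-country sketch)."""
    t = np.arange(1, T + 1)
    P = orthogonal_poly(t, Q_deg)       # T x (Q_deg + 1) basis
    phi = P @ np.asarray(beta)          # time-varying AR coefficient
    gamma = np.zeros(T)
    for k in range(1, T):
        gamma[k] = phi[k] * gamma[k - 1] + rng.normal(0.0, sigma_eta)
    return phi, gamma
```

Because the columns of the basis are mutually orthogonal, the polynomial coefficients are estimated with minimal collinearity, which is the usual motivation for orthogonal rather than raw polynomials.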
Finally, to accommodate extra variability we introduce the observational-level random effect

$$\epsilon_{it} = z_{it}\,\omega_{it},$$

where $z_{it} \sim \mathrm{Bernoulli}(\pi)$ and $\omega_{it} \sim \mathrm{N}(0, \sigma^2_\omega)$. When $z_{it} = 1$, observation $y_{it}$ is considered to be an outlier, and the extra variability is modelled by $\sigma^2_\omega$. This can be seen as a mixture component that models the variance associated with outlying observations.
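This mixture component can be sketched as follows; the values of π and σ_ω are illustrative (π = 0.126 echoes the estimated outlier proportion reported in the Results), and marginally the component has mean 0 and variance π·σ_ω²:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_outlier_effect(n: int, pi: float, sigma_omega: float) -> np.ndarray:
    """Simulate the observation-level mixture component: with
    probability pi an observation is flagged as an outlier (z = 1)
    and receives extra noise omega ~ N(0, sigma_omega^2); otherwise
    the component is exactly zero."""
    z = rng.binomial(1, pi, size=n)
    omega = rng.normal(0.0, sigma_omega, size=n)
    return z * omega

eps = simulate_outlier_effect(200_000, pi=0.126, sigma_omega=2.0)
# Marginal variance is pi * sigma_omega**2 = 0.126 * 4 = 0.504
```

Only the flagged observations carry the inflated variance, which is what allows the model to absorb occasional reporting spikes without inflating the dispersion of every observation.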
To forecast future observations $y^*_{i,t+1}$, we use the median of the posterior distribution of $Y_{i,t+1} \mid Y_{it}$. This is reasonable for short-term forecasting, since the error accumulates from one time step to the next. We produce forecasts for up to seven days ahead.
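A Monte Carlo sketch of the multi-day-ahead forecast is shown below. It propagates the latent AR component forward, maps it to the negative binomial mean through the log link, samples counts, and summarises each day by the median. For simplicity it fans out all draws from a single latent value rather than from genuine posterior draws, sets the outlier component to zero, and uses the NB2 mapping assumed earlier; in the paper, forecasts come from the fitted Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(7)

def forecast_paths(gamma_last, phi_future, sigma_eta, psi,
                   n_draws=4000, horizon=7):
    """Monte Carlo h-day-ahead forecast sketch: propagate the latent
    AR component, set mu = exp(gamma) (log link, outlier component
    omitted), sample NB counts, and report the per-day median."""
    gammas = np.full(n_draws, gamma_last, dtype=float)
    medians = []
    for h in range(horizon):
        gammas = phi_future[h] * gammas + rng.normal(0.0, sigma_eta, n_draws)
        mu = np.exp(gammas)              # log-link: log(mu) = gamma
        p = psi / (psi + mu)             # NB2 mapping, as before
        y = rng.negative_binomial(psi, p)
        medians.append(float(np.median(y)))
    return medians
```

The median is robust to the heavy right tail of the negative binomial, which is one reason it is preferable to the mean as a point forecast here.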
We fitted models considering different values for Q. Even though the results for Q = 3 showed that all $\beta_q$ parameters were different from zero when looking at the 95% credible intervals, we opted for Q = 2 for the final model fit, since it improves model convergence and avoids the overfitting associated with a large polynomial degree, while still providing the extra flexibility introduced by the autoregressive function (2). This may change, however, as data become available for a larger number of time steps.
Model validation. We fitted the model without using the observations from the last seven days and obtained the forecasts $y^*_{it}$ for each country and day. We then compared the forecasts with the true observations $y_{it}$ for each day ahead by looking initially at the Pearson correlation between them. We also computed the concordance correlation coefficient 32,33 , an agreement index that lies between $-1$ and 1, given by

$$\rho^{(\mathrm{CCC})}_t = \frac{2\sigma_{12}}{\sigma_1^2 + \sigma_2^2 + (\mu_1 - \mu_2)^2}.$$

It can be shown that $\rho^{(\mathrm{CCC})}_t = \rho_t C_b$, where $\rho_t$ is the Pearson correlation coefficient (a measure of precision) and $C_b$ is the bias correction factor (a measure of accuracy) at the $t$-th day ahead. $\rho_t$ measures how far each observation deviates from the best-fit line, while $C_b \in [0, 1]$ measures how far the best-fit line deviates from the identity line through the origin, and is defined as

$$C_b = \frac{2}{v + 1/v + u^2},$$

where $v = \sigma_1/\sigma_2$ is a scale shift and $u = (\mu_1 - \mu_2)/\sqrt{\sigma_1 \sigma_2}$ is a location shift relative to the scale. When $C_b = 1$ there is no deviation from the identity line.
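The CCC and its decomposition into precision (ρ) and accuracy (C_b) can be computed directly from the two series; a minimal sketch using population (biased) moments so that the identity ρ^(CCC) = ρ · C_b holds exactly:

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def bias_corrector(x: np.ndarray, y: np.ndarray) -> float:
    """C_b = 2 / (v + 1/v + u^2), with v = s_x/s_y (scale shift) and
    u = (mean(x) - mean(y)) / sqrt(s_x * s_y) (location shift)."""
    sx, sy = x.std(), y.std()
    v = sx / sy
    u = (x.mean() - y.mean()) / np.sqrt(sx * sy)
    return 2 / (v + 1 / v + u ** 2)
```

Multiplying `bias_corrector` by the Pearson correlation recovers the CCC, which is how the agreement index separates precision from accuracy.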
Model implementation. The model is estimated using a Bayesian framework. We used 3 MCMC chains, 2,000 adaptation iterations, a burn-in of 50,000 iterations, and 50,000 iterations per chain with a thinning interval of 25. We assessed convergence by inspecting the trace plots, autocorrelation function plots, and the Gelman-Rubin convergence diagnostic 34 .
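The Gelman-Rubin diagnostic mentioned above can be computed directly from the chains; a minimal sketch for a single scalar parameter (the split-chain refinement used by modern samplers is omitted):

```python
import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """Potential scale reduction factor (R-hat) for one scalar
    parameter, from an (m chains x n iterations) array: compares the
    between-chain variance B with the within-chain variance W;
    values near 1 indicate convergence."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)
```

Chains sampling the same target give R-hat ≈ 1, while a chain stuck in a different region inflates the between-chain variance and pushes R-hat well above 1.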
All analyses were carried out using the R software 35 and JAGS 36 . Model fitting takes approximately 14 hours using parallel computing on a Dell Inspiron 17 7000 with a 10th-generation Intel Core i7 processor (1.80 GHz × 8), 16 GB RAM plus 20 GB of swap space, running 64-bit Linux Mint 19.2 Cinnamon (kernel 5.2.2-050202-generic).
Clustering. We used the last 60 values of the estimated autoregressive component to perform clustering, so as to obtain sets of countries that presented similar recent behaviour. First, we computed the dissimilarities between the estimated time series $\hat{\gamma}_i$ for each pair of countries using the dynamic time warping (DTW) distance 37,38 . Let $M$ be the set of all possible sequences $r$ of $m$ pairs preserving the order of observations, of the form $r = ((\hat{\gamma}_{i1}, \hat{\gamma}_{i'1}), \ldots, (\hat{\gamma}_{im}, \hat{\gamma}_{i'm}))$. Dynamic time warping aims to minimise the distance between the coupled observations $(\hat{\gamma}_{it}, \hat{\gamma}_{i't})$. The DTW distance may be written as

$$d_{\mathrm{DTW}}(\hat{\gamma}_i, \hat{\gamma}_{i'}) = \min_{r \in M} \sum_{t=1}^{m} d(\hat{\gamma}_{it}, \hat{\gamma}_{i't}),$$

where $d(\cdot, \cdot)$ is a local distance between coupled observations. By using the DTW distance, we are able to recognise similar shapes in time series, even in the presence of shifting and/or scaling 38 .
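The DTW minimisation is usually solved with the standard dynamic-programming recursion, sketched below with the absolute difference as the local cost (an assumption for illustration; optimised implementations exist in dedicated DTW packages):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two series via the
    classic O(n*m) dynamic program. Couplings preserve the order of
    observations; the local cost is the absolute difference."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

Unlike the pointwise Euclidean distance, the warping lets a coupled pair repeat an observation, so two series tracing the same shape at slightly different speeds still end up close.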
Then, we performed hierarchical clustering on the matrix of DTW distances using Ward's method, which aims to minimise the variability within clusters 39 , and obtained ten clusters. Finally, we produced a dendrogram of the results of the hierarchical clustering analysis, with each cluster coloured differently so as to aid visualisation.
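A minimal sketch of this clustering step with scipy, assuming a precomputed symmetric DTW distance matrix. Note that scipy's Ward linkage formally assumes Euclidean distances, so feeding it DTW distances (as done with similar tooling in R) is a pragmatic approximation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_from_distances(dist_matrix: np.ndarray, k: int = 10) -> np.ndarray:
    """Hierarchical clustering with Ward's method from a precomputed
    (e.g. DTW) distance matrix, cut into k clusters; returns integer
    cluster labels. squareform converts the symmetric matrix into the
    condensed vector form expected by scipy's linkage."""
    condensed = squareform(dist_matrix, checks=False)
    Z = linkage(condensed, method="ward")
    return fcluster(Z, t=k, criterion="maxclust")
```

The linkage matrix `Z` is also what `scipy.cluster.hierarchy.dendrogram` consumes, which corresponds to the coloured dendrogram described above.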