Abstract
The COVID-19 pandemic has strained hospital resources and created a need for predictive models that forecast patient care demands, allowing for adequate staffing and resource allocation. Recently, other studies have examined associations between Google Trends data and the number of COVID-19 cases. Expanding on this approach, we propose a vector error correction model (VECM) for the number of COVID-19 patients in a healthcare system (Census) that incorporates Google search term activity and healthcare chatbot scores. The VECM provided a good fit to Census and very good forecasting performance, as assessed by hypothesis tests and mean absolute percentage prediction error. Although our study and model have limitations, we have conducted a broad and insightful search for candidate Internet variables and employed rigorous statistical methods. We have demonstrated that the VECM can potentially be a valuable component of a COVID-19 surveillance program in a healthcare system.
Introduction
The SARS-CoV-2 coronavirus, which initially emerged in Wuhan, China in December 2019, has spread worldwide in what is now described as the COVID-19 pandemic. The coronavirus outbreak was declared a global public health emergency^{1} by the World Health Organization (WHO) on January 30, 2020, and as of October 17, there were over 39 million confirmed cases worldwide with over a million lives lost^{2}. While evidence supports the effectiveness of guidelines and restrictions^{3,4} in containing the spread of SARS-CoV-2 (“flattening the curve”), the health and economic consequences have been devastating on many levels^{5,6}.
By April 11, 2020, the US had more COVID-19 cases and deaths than any other country^{2}. As of June 30, the US had 4% of the world’s population, but 25% of its coronavirus cases. While most states avoided a rapid surge in cases during the first phase of the pandemic, the majority of them have begun to lift social distancing and gathering restrictions, raising concern that we will see large surges in infection incidence and mortality^{7,8,9}. Without a widely available vaccine, we expect that pandemic activity will continue to rise and fall through the winter, requiring health care systems to remain vigilant as they balance hospital resources.
As has been seen in this pandemic, when SARS-CoV-2 prevalence grows quickly and reaches high levels in a community, large numbers of people develop symptomatic COVID-19 infection. Many require hospitalization, and this has the capacity to overwhelm regional health care resources (e.g., Northern Italy and New York). Acknowledging this risk, health care systems have implemented crisis planning to guide infection management and bed capacity, and to secure vital supplies (e.g., ventilators and personal protective equipment)^{10,11,12}. Ideally, health systems’ efforts to best prepare for COVID-19 demand surges would be informed by data that provide early warning, or “lead time”, on the local prevalence and impact of COVID-19.
Traditional epidemiological models (e.g., the SIR model) do not provide health system leaders with lead time for accurate planning. In searching for leading indicators, researchers have turned to internet data that reflect trends in community behaviors and activity. In recent years, researchers in the field of infodemiology^{13} have utilized various internet search data to predict different health-related metrics, such as dengue incidence^{14}, infectious disease risk communication^{15}, influenza epidemic monitoring^{16}, and malaria surveillance^{17}. Google Trends is one of the most popular tools that allow researchers to pull search query data from a random, representative sample drawn from billions of daily searches on Google-associated search engines^{18}.
In the last six months, several papers have made use of Google Trends data to test the association between the popularity of certain coronavirus-related terms and the number of cases and deaths related to COVID-19^{19,20,21,22}. While these papers do not go as far as using Google Trends data to build a predictive model for COVID-19 applications^{21,23}, they have contributed to our collective understanding of the relationship between the public’s internet search behavior and the pandemic, supporting the notion that search query data can be used for surveillance purposes.
In addition to internet users’ search query data, another source of data that is of importance for public health research is geospatial mobility data. Since the initial outbreak of COVID-19 in Wuhan, China, researchers have believed that population mobility is a major driver of the exponential growth in the number of infected cases^{24,25,26}. It is now well-accepted that mobility reduction and social distancing are timely and effective measures to attenuate the transmission of COVID-19^{27,28}. Thus, it stands to reason that mobility changes may be a predictor of COVID-19 hospital case volume. Many interactive dashboards display up-to-date regional mobility data that are publicly available, most notably from Facebook and Apple Maps^{29,30,31,32,33}. In this paper, we specifically considered Apple Mobility Trend Reports and Facebook Movement Range Maps, which report mobility data in the form of aggregated, privacy-protected information.
Another area with great potential for modeling COVID-19 hospital case volume is the data generated by virtual AI-based triage systems (also known as “healthcare chatbots”). During the COVID-19 pandemic, these chatbots have been deployed to provide virtual consultation to people who are concerned they may have SARS-CoV-2^{34,35}. In particular, Microsoft offers its Health Bot service to healthcare organizations^{36}. Medical content, together with an interactive symptom checker, custom conversational flow, and a system of digital personal assistants, can be integrated into the Health Bot configuration to help screen people for potential coronavirus infection through a risk assessment^{37,38,39,40}. User outcomes (with no personally identifiable information) can be aggregated, and the number of people “flagged” as having COVID-19 could then potentially be used to predict future COVID-19 hospital case volume. Specifically, if a hospital has its own Health Bot for delivering a telehealth COVID-19 risk assessment to the public, then it is reasonable to expect that people who are identified as having COVID-19 are likely to seek treatment from the same hospital.
Atrium Health is a healthcare system operating across North Carolina, South Carolina, and Georgia, with the majority of its hospitals located in the greater Charlotte metropolitan area. Investigators from the Atrium Health Center for Outcomes Research and Evaluation sought to leverage internet search term volumes, mobility data, and Health Bot risk assessment counts, collectively known as “Internet variables”, to provide leadership with information for planning purposes during the COVID-19 pandemic. Specifically, this paper describes the steps taken to characterize and understand the relationships between our Internet variables and the daily total number of COVID-19 patients hospitalized in our hospital system’s primary market. Furthermore, we sought to develop a novel forecast model for these patients to provide advance warning of any anticipated surges in patient care demands.
Methods
Measures
Our interest lies in the population served by Atrium Health’s greater Charlotte market area, which spans approximately 11 counties in western North Carolina and two counties in northern South Carolina. This area includes approximately 400,000 South Carolina residents and 2.5 million North Carolina residents (24% of the North Carolina population), over 1.1 million of whom live in Mecklenburg County and 900,000 within North Carolina’s largest city, Charlotte^{41}.
Because of the focus on health care system capacity, our outcome variable of interest is the total COVID-19 positive census across the 11 Atrium Health hospitals that serve the greater Charlotte market (hereafter referred to as “Census”), along with an additional virtual hospital, Atrium Health Hospital at Home, which provides hospital-level care in a patient's home. Census is a cross-sectional count taken each morning as the total number of patients hospitalized and COVID-19 positive.
Rather than a raw count of “hits”, Google Trends data reflect the relative popularity of a search term, or relative search volume (RSV). Specifically, the RSV of a search term is calculated as the proportion of interest in that particular topic relative to all searches over a specified time range and location. The RSV is normalized to a scale of 0–100: “0” indicates that the term appears in very few searches, and “100” indicates maximum interest in the term for the chosen time range and region^{18}. To retrieve Google Trends data for our analysis, we utilized the gtrendsR package in R (https://cran.r-project.org/web/packages/gtrendsR/gtrendsR.pdf). We performed twelve different queries from 02/21/20 to 08/01/20 for Google Trends’ “Charlotte NC” metro designation (county-level data are unavailable) using a list of terms based on our prior beliefs and the medical expertise of our physicians. Since punctuation can influence the search results^{42,43}, we followed the guidelines from the Google News Initiative^{44} to refine our search queries. Details on the search terms can be found in Table 1.
Apple Mobility Trend Reports collect Apple Maps direction requests from users’ devices and record, on a daily basis, the relative percentage change in driving direction requests compared to the baseline request volume on January 13, 2020. These data are available at the county level, allowing us to pull data specifically for Mecklenburg County, North Carolina from 02/21/20 to 08/01/20. For unknown reasons, data were missing for two days (May 11 and May 12), so we replaced them with estimates obtained using linear interpolation.
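The gap-filling step is straightforward to reproduce. The sketch below is in Python with pandas (the paper's analysis was carried out in R); the dates match the missing days, but the mobility values are made up for illustration. Linear interpolation fills an interior gap on a straight line between its neighbors:

```python
import pandas as pd

# Hypothetical daily mobility series with the two missing days (May 11-12)
# recorded as NaN; the surrounding values are illustrative, not Apple's data.
idx = pd.date_range("2020-05-09", "2020-05-14", freq="D")
mobility = pd.Series([120.0, 118.0, None, None, 130.0, 133.0], index=idx)

# Linear interpolation replaces each NaN with a point on the straight line
# between the nearest observed neighbors (118 -> 122 -> 126 -> 130 here).
filled = mobility.interpolate(method="linear")
print(filled.loc["2020-05-11"], filled.loc["2020-05-12"])
```

Because the series is evenly spaced (daily), positional interpolation and time-based interpolation coincide here.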
Facebook Movement Range Maps include data from Facebook users who access Facebook on a mobile device with Location History and background location collection enabled. A data point for a given region is computed using the aggregate locations of users for a particular day. Specifically, there are two metrics, Change in Movement and Staying Put, that provide slightly different perspectives on movement trends. The Change in Movement metric measures the proportional change in the frequency of travel (relative to the day of the week) compared to the last two weeks of February, recorded on a daily basis, while the Staying Put metric measures the proportion of the regional population who remained in one location for 24 h. Once again, we pulled data from 02/21/20 to 08/01/20 for Mecklenburg County, North Carolina.
In the early days of the pandemic, Atrium Health collaborated with Microsoft Azure to launch its own public-facing Health Bot to converse with people about their COVID-19 symptomology. Generally, a person will respond “Yes/No” to a series of questions on COVID-19 symptoms, whether they belong to a vulnerable group (e.g., elderly people, pregnant women, people with compromised immune systems, etc.), and whether they are scheduled for a medical procedure or surgery. Depending on the user’s answers, the Health Bot uses branched logic to indicate whether the person is at risk of having COVID-19 and prompts appropriate further actions. In this study, we focused on the number of times that people were flagged as “may have COVID-19”. These data are daily counts of users who completed the risk assessment and whom the Health Bot classified as “may have COVID-19”.
After the data were pulled, we generated 16 time plots (12 for Google Trends, 1 for Apple, 2 for Facebook, and 1 for Health Bot). We then computed Spearman’s correlation coefficient between Census at time t and each of the “lagged” Internet variables at times t, t − 1, …, t − 14. A maximum lag of 14 days was chosen because it is consistent with the known maximum incubation period associated with COVID-19^{3}. For each variable, we looked for the maximum absolute correlation coefficient across all 15 values to guide the selection of the most important variables for further study.
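This lagged-correlation screen can be sketched as follows. The code is a Python illustration with synthetic data (the paper's analysis was done in R); the synthetic "Internet variable" is constructed to lead the synthetic Census series by 4 days, so the scan should flag lag 4:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 120

# Synthetic "Internet variable" (a random walk) that leads a synthetic
# Census series by 4 days; the noise is kept tiny so the lead is clean.
internet = pd.Series(np.cumsum(rng.normal(size=n)))
census = internet.shift(4) + rng.normal(scale=0.01, size=n)

# Spearman correlation of Census at time t with the variable at lags 0..14;
# pandas drops the NaN pairs created by shifting.
corrs = {lag: census.corr(internet.shift(lag), method="spearman")
         for lag in range(15)}
best_lag = max(corrs, key=lambda k: abs(corrs[k]))
print(best_lag, round(abs(corrs[best_lag]), 3))
```

On real data one would run this scan for each of the 16 Internet variables and tabulate the winning lag and its absolute correlation, as in Table 2.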
Analytic approach
The analytic approach discussed in this section can be briefly summarized as follows. We first introduce the time series model we used for forecasting and provide background information. After specifying the model, we then fit the model to our data. Goodnessoffit of the model is checked along with its assumptions. Lastly, we generate forecasts of the COVID19 hospital census. Details now follow.
In considering models for observed time series, suppose we have the stochastic process \(\left\{ {y_{t} :t = 0, \pm 1, \pm 2, \ldots } \right\}\), where \(y_{t}\) is oftentimes referred to as the “level of the time series”. A stochastic process \(\left\{ {y_{t} } \right\}\) is (weakly) stationary if the mean \(E\left[ {y_{t} } \right]\) is constant over time, and if the autocovariance \(Cov\left[ {y_{s} , y_{t} } \right] = Cov\left[ {y_{s + k} , y_{t + k} } \right]\) for all times s and t, and lags \(k = 0, \pm 1, \pm 2, \ldots\). Informally, a stationary time series is one whose properties do not depend on the time at which the series is observed. Thus, time series with nonconstant trends, seasonality, changes in variance, etc., are nonstationary. We used the methodology described in Pfaff^{45} and Dickey and Fuller^{46} to determine whether or not a time series is stationary. If it is not, then we further characterize the nature of the nonstationarity.
Suppose \(y_{t}\) can be decomposed into a deterministic linear trend component and a stochastic residual component that is an autoregressive–moving average (ARMA) process. A time series can exhibit a type of nonstationarity, perhaps confusingly, referred to as “difference-stationary”, which means that \(y_{t} - y_{t-1}\) is a stationary stochastic process. A time series can also exhibit a type of nonstationarity referred to as “trend-stationary”: once the data are detrended, the resulting time series is a stationary stochastic process. The difference between these two types of nonstationarity may imply different time series dynamics and hence, different forecasts.
In order to understand the model proposed in this research, we must first define cointegration. We use a broader definition^{47} than is typically used elsewhere in the literature. Specifically, let \({\varvec{y}}_{t}\) be an n × 1 vector of variables \(y_{t}\), where \({\varvec{y}}_{t}\) can contain time series that are either difference-stationary or trend-stationary. This vector is said to be cointegrated if there exists an n × 1 vector \({\varvec{\beta}}_{i}\) (\(\ne {\bf 0}\)) such that \(\user2{\beta}^{\prime}_{i} {\varvec{y}}_{t}\) is trend-stationary. \({\varvec{\beta}}_{i}\) is then called a cointegrating vector. In fact, it is possible that there are r linearly independent vectors \({\varvec{\beta}}_{i}\) (\(i = 1, \ldots , r\)).
We now consider some background behind our time series model. A vector autoregression model of order K (VAR(K)) is defined as:

$${\varvec{y}}_{t} = {\varvec{\mu}} + {\varvec{\varPi}}_{1} {\varvec{y}}_{t-1} + \cdots + {\varvec{\varPi}}_{K} {\varvec{y}}_{t-K} + \user2{\Phi d}_{t} + {\varvec{\varepsilon}}_{t},$$
where \(t = 1, \ldots , T\). Here, \({\varvec{y}}_{t}\) is an n × 1 vector of time series at time t, \({\varvec{\varPi}}_{i}\) (\(i = 1, \ldots , K\)) is an n × n matrix of coefficients for the lagged time series, \({\varvec{\mu}}\) is an n × 1 vector of constants, \({\varvec{d}}_{t}\) is a p × 1 vector of deterministic variables (e.g., seasonal indicators, time, etc.), and \({\varvec{\varPhi}}\) is a corresponding n × p matrix of coefficients. We assume the \({\varvec{\varepsilon}}_{t}\) are independent n × 1 multivariate normal errors with mean \({\bf 0}\) and covariance matrix \({{\varvec{\Sigma}}}\). In order to determine a value for K in practice, one can sequentially fit a VAR model, for \(K = 1, \ldots , 10\), say, and compare Akaike’s Information Criterion (AIC) values^{48}, where smaller values of AIC offer more evidence in support of a specific model^{49}.
One way to respecify the VAR model is as a (transitory) vector error correction model (VECM). Using linear algebra, we can obtain:

$$\Delta {\varvec{y}}_{t} = {\varvec{\mu}} + \user2{\Pi y}_{t-1} + {\varvec{\varGamma}}_{1} \Delta {\varvec{y}}_{t-1} + \cdots + {\varvec{\varGamma}}_{K-1} \Delta {\varvec{y}}_{t-K+1} + \user2{\Phi d}_{t} + {\varvec{\varepsilon}}_{t},$$
where \(\Delta {\varvec{y}}_{t}\) is the (first) difference \({\varvec{y}}_{t} - {\varvec{y}}_{t-1}\), \({\varvec{\varGamma}}_{i} = -\left( {{\varvec{\varPi}}_{i+1} + \cdots + {\varvec{\varPi}}_{K} } \right)\) for \(i = 1, \ldots , K-1\) and \(K \ge 2\), and \({\varvec{\varPi}} = -\left( {{\varvec{I}} - {\varvec{\varPi}}_{1} - \cdots - {\varvec{\varPi}}_{K} } \right)\) for an identity matrix \({\varvec{I}}\) of order n. In effect, a VECM is a VAR model (in the differences of the data) allowing for cointegration (in the levels of the data). The matrix \({\varvec{\varPi}}\) measures the long-run relationships among the elements of \({\varvec{y}}_{t}\), while the \({\varvec{\varGamma}}_{i}\) measure short-run effects. \(\user2{\Pi y}_{t-1}\) is oftentimes called the “error correction term” and it is assumed this term is (trend-)stationary. More rigorous background on cointegration, the VAR model, and the VECM can be found in Pfaff^{45}.
An important part of fitting a VECM is determining the number (r) of cointegrating relationships that are present. It can be shown that the rank of the matrix \({\varvec{\varPi}}\) is equal to r. In practice, the most interesting case is when \(r \in \left( {0,n} \right)\). In this case, we can use a rank factorization to write \({\varvec{\varPi}} = \user2{\alpha \beta}^{\prime}\), where both \({\varvec{\alpha}}\) and \({\varvec{\beta}}\) are of size n × r. Therefore, \(\user2{\Pi y}_{t-1} = \user2{\alpha \beta}^{\prime} {\varvec{y}}_{t-1}\) is (trend-)stationary. Because \({\varvec{\alpha}}\) is a scale transformation, \(\user2{\beta}^{\prime} {\varvec{y}}_{t-1}\) is (trend-)stationary. By our definition of cointegration, the r linearly independent columns of \({\varvec{\beta}}\) are the set of cointegrating vectors, with each of these column vectors describing a long-run relationship among the individual time series. Elements of \({\varvec{\alpha}}\) are often interpreted as “speed of adjustment coefficients” that modify the cointegrating relationships. The number of cointegrating relationships can be formally determined using Johansen’s procedure^{50}.
Following Johansen^{51} and Johansen and Juselius^{52}, we consider how to specify the deterministic terms in the VECM using AIC and a likelihood ratio test on the linear trend. In our case, due to the nature of the research problem and by visual inspection of the time plots, we initially set \(\user2{\Phi d}_{t} = {\bf 0}\) (this form of the model is known as a restricted VECM). We then consider two possibilities for the constant \({\varvec{\mu}}\). The first possibility is to place \({\varvec{\mu}}\) inside the error correction term. Specifically, define an additional restriction \({\varvec{\mu}} = \user2{\alpha \rho}\). Then, the error correction term can be rewritten as \({\varvec{\alpha}}\left( {\user2{\beta}^{\prime} {\varvec{y}}_{t-1} + {\varvec{\rho}}} \right)\) so that the cointegrating relationships have means, or intercepts, \({\varvec{\rho}}\). The second possibility is to leave \({\varvec{\mu}}\) as is, to account for any linear trends in the data.
We used maximum likelihood estimation to fit the VECM and report estimates and standard errors for elements of \({\varvec{\alpha}}\), \({\varvec{\beta}}\), and \({\varvec{\varGamma}}_{1}\), along with corresponding t-tests run at a significance level of 0.05.
When fitting a VECM, it is important to check the goodness-of-fit. Using the fitted VECM, and r as determined by Johansen’s procedure, we backed out estimates of the coefficients \({\varvec{\varPi}}_{i}\) of the corresponding VAR model of order K (in levels). This was then recast as a VAR model of order 1; that is, it was rewritten in “companion matrix” form^{53}. The VECM is stable, i.e., correctly specified with stationary cointegrating relationships, if the modulus of each eigenvalue of the companion matrix is strictly less than 1. Another stability check is to investigate the cointegration relationships for stationarity. For the latter, we again used the methodology described in Pfaff^{45} and Dickey and Fuller^{46}.
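The companion-matrix stability check reduces to an eigenvalue computation. For a VAR(K) in n variables, the companion matrix stacks the coefficient matrices above an identity block. The sketch below (Python, with illustrative made-up VAR(2) coefficients, not the paper's estimates) checks that every eigenvalue modulus is strictly below 1:

```python
import numpy as np

# Hypothetical VAR(2) coefficient matrices (n = 2, K = 2), illustrative only.
Pi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Pi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
n = 2

# Companion matrix of the VAR(1) form: [Pi1 Pi2; I 0].
companion = np.block([
    [Pi1, Pi2],
    [np.eye(n), np.zeros((n, n))],
])

# Stability requires every eigenvalue modulus to be strictly below 1.
moduli = np.abs(np.linalg.eigvals(companion))
stable = bool(np.all(moduli < 1))
print(stable)
```

For these illustrative coefficients all moduli are comfortably inside the unit circle, mirroring the check reported in the Results.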
Residual diagnostics were run to check the assumptions on the errors \({\varvec{\varepsilon}}_{t}\). We computed a multivariate Portmanteau test for serial correlation, and generated autocorrelation function (acf) and cross-correlation function (ccf) plots to guide interpretation. We also computed univariate and multivariate Jarque–Bera tests for normality.
For a VECM, predictions and forecasts for the level of a time series are obtained by transforming the fitted VECM to its VAR form. It can be shown that in-sample (training) predictions are actually one-day-ahead forecasts using estimated model coefficients based on the whole time series. We obtain approximate in-sample prediction intervals by making use of the estimated standard deviation of the errors taken from the Census component of the model. Out-of-sample (test) forecasts are computed recursively using all three time series from the VAR model fit to past data, for horizons equal to 1, 2, …, 7, say. The construction of out-of-sample forecast intervals as a function of the horizon is described elsewhere in the literature^{54}.
In order to assess the out-of-sample forecasting performance of our VECM, we used a time series cross-validation procedure. In this procedure, there is a series of test sets, each consisting of 7 Census observations. The corresponding training set consists only of observations that occurred prior to the first observation that forms the test set. Thus, no future observations can be used in constructing the forecast. We gave ourselves a 2-week head start on the front end of the Census time series and a 1-week runway on the back end. For the 88 days starting from 04/29/20 up to 07/25/20, in one-day increments, we iteratively fit the VECM and computed the 7-days-ahead out-of-sample mean absolute percentage prediction error (MAPE). MAPE is defined here as \(\left( {100/7} \right)\sum\nolimits_{i = 1}^{7} {\left| {O_{i} - E_{i} } \right|/O_{i} }\), where \(O_{i}\) is the observed Census value, \(E_{i}\) is the projected Census value, and \(i = 1, 2, \ldots , 7\) indexes the horizons. Notice that the “origin” on which the forecast is based, and which delineates the training versus test set, rolls forward in time. We chose 7 days because it is in accordance with the weekly cadence of reporting on pandemic behavior and forecast metrics at Atrium Health. In addition, 7 days is a reasonable average timeframe for infection with coronavirus, incubation, and the potential subsequent need for hospitalization. As a baseline for comparison, we also evaluated our VECM against a basic ARIMA model, derived using the approach of Hyndman and Khandakar^{55}, using the same time series cross-validation procedure.
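The rolling-origin procedure can be sketched generically in Python. The forecaster below is a placeholder "naive last value" stand-in for refitting the VECM at each origin, and the series is a toy constant series, so every MAPE comes out zero; only the cross-validation mechanics are the point:

```python
import numpy as np

def mape_7(observed, forecast):
    """7-days-ahead MAPE as defined in the text: (100/7) * sum(|O - E| / O)."""
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 / 7.0 * np.sum(np.abs(observed - forecast) / observed)

def rolling_origin_mapes(series, fit_forecast, start, horizon=7):
    """Time series cross-validation with a forward-rolling origin.

    Train on series[:origin], score the next `horizon` days; `fit_forecast`
    is any callable mapping a training array to a `horizon`-step forecast
    (a stand-in for refitting the VECM each day).
    """
    mapes = []
    for origin in range(start, len(series) - horizon + 1):
        train = series[:origin]
        test = series[origin:origin + horizon]
        mapes.append(mape_7(test, fit_forecast(train, horizon)))
    return np.array(mapes)

# Toy check: a naive "repeat the last value" forecaster on a flat series.
series = np.full(30, 50.0)
naive = lambda train, h: np.repeat(train[-1], h)
out = rolling_origin_mapes(series, naive, start=14)
print(out.round(6))
```

The `start` argument plays the role of the paper's 2-week head start, and the loop bound leaves the final week as the last complete test set.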
All data analysis, including creating plots, was done using R statistical software, version 3.6.2, with the packages tsDyn, vars, and urca being the most important ones for fitting the VECM. The data and code used in the data analysis are publicly available on GitHub (https://github.com/philturk/CovCenVECM).
Results
The 16 time plots for the Internet variables are shown in Fig. 1. The first three rows are those from Google Trends, while the last row contains those from Apple, Facebook, and Health Bot. Clearly, several of the time series are visibly nonstationary.
In looking at the maximum absolute Spearman’s correlation coefficient between Census and each Internet variable across lags 0, −1, …, −14, two variables stood out (Table 2). The first was Health Bot, with a maximum absolute correlation coefficient of 0.865 at time t − 4. The second was the Google Trends search term covid testing + covid test + covid19 testing + covid19 test + covid 19 testing + covid 19 test, henceforth referred to as Testing. Testing had a maximum absolute correlation coefficient of 0.819 at time t. Three other search terms were noted, but their maximum absolute correlation coefficients were substantially lower than those of Health Bot and Testing. For covid and symptoms covid, it was felt their searches might substantially overlap with Testing. The search term coronavirus had a negative correlation, likely attributable to people’s initial interest in the novelty of COVID-19, which waned over time, as reflected from the beginning of June onward when RSV values for coronavirus were quite small. Therefore, for the sake of parsimony, these three other search terms were not considered further in this research.
After examination of scatter plots, and in preparation for modeling, we transformed both Health Bot (by taking the natural logarithm) and Testing (by taking the square root) to linearize the relationship between each of these variables and Census. We then generated longitudinal “cross-correlation”-type profiles for Health Bot and Testing using Pearson’s correlation coefficients for lags 0, −1, …, −14, as shown in Fig. 2. We can see strong correlations, all well above 0.80, throughout the time period under consideration.
To better understand the relationships among the three time series, we normalized both the Health Bot and Census time series to the same [0, 100] scale as Testing, and obtained the results in Fig. 3. Both the Testing and Health Bot time series appear to share common features of the Census time series (e.g., an approximately linear increase from mid-May until mid-July). There is also the suggestion that both the Testing and Health Bot time series “lead” the Census time series. For example, from mid-April until the beginning of May, Health Bot shows a downward linear trend, and this behavior is mirrored in the Census time series roughly one week later.
Using the methodology of Pfaff^{45} and Dickey and Fuller^{46}, all three time series are nonstationary. Specifically, the Census time series is difference-stationary, while the Health Bot and Testing time series are both trend-stationary.
Results from examining AIC values after fitting a VAR model to Census, Testing, and Health Bot, sequentially increasing the lag order up to 10, were inconclusive. Therefore, we chose the minimum value of \(K = 2\). Johansen’s procedure (using the trace test version) indicated that two cointegrating vectors should be used. A comparison of the two AIC values for the restricted VECM specifications described in the Methods suggested placing \({\varvec{\mu}}\) inside the error correction term (AIC = 212.598) as opposed to not doing so (AIC = 217.592). This was also corroborated by a likelihood ratio test for no linear trend (p-value = 0.32).
For the sake of brevity, and because we are most interested in modeling Census, we show only the portion of the fitted VECM pertaining to Census. Both \({\varvec{\alpha}}\) and \({\varvec{\beta}}\) are not unique, so it is typical in practice to normalize them. The normalization we used is the Phillips triangular representation, as suggested by Johansen^{51}. The expression for Census in scalar form using general notation is:

$$\Delta Census_{t} = \alpha_{1} CR_{1,t-1} + \alpha_{2} CR_{2,t-1} + \gamma_{1} \Delta Census_{t-1} + \gamma_{2} \Delta Health\,Bot_{t-1} + \gamma_{3} \Delta Testing_{t-1} + \varepsilon_{t},$$
where \(\gamma_{1}\), \(\gamma_{2}\), and \(\gamma_{3}\) are the corresponding elements of \({\varvec{\varGamma}}_{1}\), \(\alpha_{1}\) and \(\alpha_{2}\) are the corresponding elements of \({\varvec{\alpha}}\), and \(CR_{1}\) and \(CR_{2}\) are the first and second cointegrated relationships. Collectively, \(\alpha_{1} CR_{1,t-1}\) and \(\alpha_{2} CR_{2,t-1}\) are the error correction terms. In our case, we obtained the results shown in Table 3:
An overall omnibus test for the Census component of the VECM was statistically significant (F_{0} = 3.393 on 5 and 101 degrees of freedom; p-value = 0.0071). We see that the long-run effects for both cointegrated relationships were important in modeling the first difference of Census at time t. However, the short-run, transitory effects, as measured by the first differences of Census, Health Bot, and Testing at lag 1, were not statistically significant.
Furthermore, the expressions for the cointegrated relationships are:

$$CR_{1,t-1} = Testing_{t-1} + \beta_{1} Census_{t-1} + \rho_{1},$$

$$CR_{2,t-1} = Health\,Bot_{t-1} + \beta_{2} Census_{t-1} + \rho_{2},$$
where \(\rho_{1}\) and \(\rho_{2}\) are the corresponding elements of \({\varvec{\rho}}\), and \(\beta_{1}\) and \(\beta_{2}\) are the corresponding elements of \({\varvec{\beta}}\). We obtained the results shown in Table 4:
Considering the model and parameter estimates from the previous two tables, and looking at \(CR_{1,t-1}\), we see that if Testing is unusually low relative to Census at time \(t-1\), so that \(Testing_{t-1} < -0.0257\,Census_{t-1} - 1.9911\), then this suggests a decrease in Census at time t. Similarly for \(CR_{2,t-1}\), if Health Bot is unusually low relative to Census at time \(t-1\), so that \(Health\,Bot_{t-1} < -0.0131\,Census_{t-1} - 2.9994\), then this suggests a decrease in Census at time t.
A check of the moduli of all the eigenvalues of the companion matrix associated with the VECM showed them all to be well below 1, suggesting stability of the model. Inspection of the two fitted cointegration relationships \({\varvec{\beta}}_{1}^{\prime} {\varvec{y}}_{t-1}\) and \({\varvec{\beta}}_{2}^{\prime} {\varvec{y}}_{t-1}\) did not suggest any nonstationarity.
Results from the Portmanteau test for serial correlation suggested the presence of serially correlated errors (p-value = 0.0035). Inspection of all nine acf and ccf plots of the residuals for lags from −15 to 15 identified the likely reason: the acf plots for Testing and Health Bot showed mild autocorrelation at lag 7. This can be attributed to a “day of the week” seasonal effect. We address this further in the Discussion. Turning our attention to the normality of the errors, the univariate Jarque–Bera test for Health Bot suggested a departure from this model assumption (p-value = 0.0001). This was attributed to the presence of two mild statistical outliers early in the time series. Since these values were otherwise practically unremarkable and had no assignable cause, we did not remove them.
Figure 4 shows the VECM fit for Census on August 1, 2020. The red line corresponds to the predictions and forecasts (or “fitted values”) from the model, the black dots are the observations, the blue envelope is the approximate in-sample 95% prediction interval band, and the pink envelope is, in this case, the 14-days-ahead out-of-sample forecast interval cone. Up to August 1, the model fit shows quite reasonable accuracy and precision. We have included the 14 actual Census values that were subsequently observed from August 2 to August 15. The corresponding MAPE is 6.4%. While there is clearly a large outlying value on August 4, the VECM forecast captures the salient feature of the Census counts; that is, a declining local trend. It is interesting to observe the declining trend in Testing and Health Bot in late July (Fig. 3).
In Fig. 5, we show the distribution of the 7-days-ahead out-of-sample MAPE using the time series cross-validation procedure described in the Methods (n = 88). The distribution is clearly right-skewed. The median MAPE is 10.5%, while the 95th percentile is 32.9%. In the context of pandemic surveillance and planning, we interpret these results to indicate that, on average, the Census forecasts are highly accurate. Ceteris paribus, a MAPE beyond 32.9% would be unusual and worthy of further investigation.
When we looked at the 7-days-ahead out-of-sample MAPE for the ARIMA model using time series cross-validation, the median MAPE was 8.3%, which was smaller than the value of 10.5% for the VECM. Whether or not this difference is statistically significant would require additional analysis, which we do not pursue here.
Discussion
In this study, a VECM inclusive of Internet variables that reflect human behavior during a pandemic performed very well at 7-day forecasting of a regional health system’s COVID-19 hospital census. In terms of short-run fluctuations, there is insufficient evidence that lag 1 values of the three differenced series are useful for predicting the differenced Census time series. However, in terms of long-run equilibrium, both error correction terms are statistically significant. Although all three time series, Census, Testing, and Health Bot, are nonstationary, their cointegrating relationships allow us to predict the change in Census using the VECM.
There are several advantages to adopting our approach. We have conducted a much more thorough search for candidate Internet variables than what we have observed in the current literature during this pandemic, and employed more rigorous statistical methods. Not only have we used Google Trends search terms, but we have also evaluated mobility data from Facebook and Apple. Further, we have added data from a healthcare chatbot specifically constructed to assess the risk of having COVID-19. Our approach is statistically more rigorous to the extent that we did not stop at stating correlations, but rather provided a formalized multivariate time series model that can potentially be used to provide highly accurate forecasts for health system leaders. We know of no other straightforward approach in statistics that allows one to simultaneously model nonstationary time series in a multivariate framework and subsequently generate forecasts. Lastly, using time series cross-validation in the manner we have described here also provides a way of quantifying forecasting performance for various metrics. The VECM can be easily fit using base R and a few additional packages, and we make our code publicly available on GitHub.
The research we have done here can be extended to look at other potential variables that may be leading indicators for predicting COVID-19 Census. These include the community-level effective reproduction number \(R_{t}\)^{56} and the daily community-level COVID-19 infection incidence, among other examples. Additionally, this same methodology described herein can be extended to look at other health system relevant outcomes, like ICU counts, ventilator counts, or hospital daily admissions.
During our specified time period, both the VECM and the ARIMA model provided very good forecasting performance as measured by MAPE, with the VECM returning a slightly larger MAPE value on average. Other performance metrics (e.g., RMSE) were not considered here. During the 88 days used for this comparison, the Health Bot and Testing time series were relatively stable with respect to linear trend. Using the PELT (Pruned Exact Linear Time) method in the EnvCpt R package, we found two linear trend changepoints for the Health Bot time series (on 05/28/20 and 07/04/20) and one for the Testing time series (on 06/21/20). How the VECM would compare to an ARIMA model if the time series under consideration were to exhibit different types of behavior would require a further sensitivity study using simulation. We add that just prior to submission (September 26) we refit the VECM and compared it to the ARIMA model. Interestingly, the VECM 7-day forecast projections were trending upward, while those from the ARIMA model were trending downward. A week later, when we computed out-of-sample MAPE for the week of September 27th, we obtained 18.1% for the ARIMA model and 6.8% for the VECM. In fact, Census was beginning to climb. Looking at a multivariate time series plot similar to Fig. 3, we observed both Testing and Health Bot start to rise in mid-September and then, roughly a week later, Census started to rise.
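The PELT procedure we used (via the EnvCpt R package) can be illustrated with a minimal pure-Python version for the simplest case of mean shifts under an L2 cost. EnvCpt itself fits richer trend and autocorrelation models, so this sketch is a simplification for intuition only.

```python
import numpy as np

def pelt_mean(y, penalty):
    """Minimal PELT for changes in mean with an L2 (squared-error) cost.

    Minimizes total segment cost plus `penalty` per changepoint, pruning
    candidate split points that can no longer be optimal.
    """
    n = len(y)
    cs = np.concatenate([[0.0], np.cumsum(y)])
    cs2 = np.concatenate([[0.0], np.cumsum(y * y)])

    def cost(s, t):  # within-segment sum of squares for y[s:t]
        return (cs2[t] - cs2[s]) - (cs[t] - cs[s]) ** 2 / (t - s)

    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    cands = [0]
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + penalty for s in cands]
        best = int(np.argmin(vals))
        F[t], last[t] = vals[best], cands[best]
        # PELT pruning: drop s that cannot be optimal for any future t
        cands = [s for s, v in zip(cands, vals) if v - penalty <= F[t]]
        cands.append(t)

    cps, t = [], n  # backtrack the optimal segmentation
    while last[t] > 0:
        cps.append(int(last[t]))
        t = last[t]
    return sorted(cps)

# One clear mean shift at index 50
rng = np.random.default_rng(4)
y = np.concatenate([np.zeros(50), 5.0 * np.ones(50)]) + rng.normal(scale=0.5, size=100)
print(pelt_mean(y, penalty=10.0))
```

The penalty plays the role of a model-selection criterion: larger penalties demand a bigger cost reduction before a changepoint is declared.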
It is worth mentioning that the VECM is no more or less immune than any other model to the problems we can encounter in obtaining good forecasts. For example, in order to have good forecasts, the future must resemble the past. In the midst of a pandemic, other variables can be introduced with the potential to dramatically alter observed behavior. If, say, a shelter-in-place order were to go into effect in the midst of the forecast horizon and significantly dampen infection spread, then forecasting performance in that time frame would likely suffer. In this scenario, no model will work well.
A potential criticism of our work will likely be that the strong correlations we see between Health Bot and Census, and between Testing and Census, are "spurious", being attributable to chance or some underlying unobserved lurking variable. We feel, though, that it is a reasonable assumption that individuals in the greater Charlotte area who are becoming sick with COVID-19 are likely to search Google for a nearby test site (Testing) or take Atrium Health's online risk assessment (Health Bot), and then, as symptoms subsequently progress, proceed to one of Atrium Health's facilities to be hospitalized.
This study had several limitations, the first three of which are more specific to the field of infodemiology^{42,57}. First, in terms of data collection, Google's designation of the Charlotte, NC metro area does not perfectly spatially align with Atrium Health's core market. Also, the Facebook and Apple Maps data are biased towards users who have enabled location history on their mobile devices. Second, the time series in this study were not collected using any probabilistic sampling design; rather, they were collected using convenience sampling. Hence, we should be cautious about the generalizability of our results. Third, when working with data pulled from the Internet, there is always the chance that the data could be made unavailable or be altered in some way, thus threatening the durability of such models. We were fortunate in that one of our two important Internet variables was from Atrium Health's own public-facing Microsoft Azure Health Bot, at least in part mitigating this risk for our model. Lastly, perhaps the biggest limitation is that the relationships we have observed in this research could change at any point in the future, so that our model is no longer predictive. Stated another way, because these time series are nonstationary, they might not stay in sync over long periods of time as their cross-correlations change.
We initially considered other, simpler time series regression models (e.g., the autoregressive distributed lag model). However, this approach requires all time series under consideration to be stationary, which ours were not. A spurious regression will result when one nonstationary time series is regressed against one or more other nonstationary time series. Hence, we initially spent considerable effort trying to stationarize our variables (using differencing, taking logs), and then using lagged versions of the variables before fitting a regression model. In assessing model fit, we were unsuccessful with this approach. Ultimately, the best way to work with nonstationary time series in our case was to acknowledge the cointegration of the variables under study.
Because these are variables derived from the Internet, it would not be unexpected to see evidence of seasonal effects in their time series (e.g., day of the week, weekend versus weekday, etc.). For Testing and Health Bot, we noted the presence of mild autocorrelation in the errors at lag 7. While our VECM results are already very good, incorporating seasonality into our analysis might improve forecasting performance. What are some options to do this? One approach would be to add seasonal effects directly to the VECM (through \({\varvec{d}}_{t}\)). However, with 7 days a week, this would add 18 effect parameters to the model (six day-of-week contrasts in each of the three equations). As we discovered in our case, if many of these effects were unimportant, then this would negatively affect model fit. It is also important to understand that this approach assumes seasonality is deterministic, whereas we may actually have stochastic seasonality. A second approach would be to deseasonalize the time series before modeling, i.e., a two-stage approach. We deseasonalized the three time series using seasonal decomposition by loess^{58}, noting that the seasonal effects were relatively small. After repeating our data analysis, we found that the VECM fit was not as good. A third approach, which we leave as a future research topic, would be to fit a VAR(7) model while disregarding some of the lags (e.g., keeping lags 1 and 7 to address the seasonality, but dropping lags 2–6, say). This would require more intensive programming in R. With any of these approaches, one still has to check the model for goodness-of-fit and assumptions on the errors; specifically, multivariate normality and lack of serial correlation.
Our VECM provides a useful forecasting tool that can guide data-driven decision making as health system leaders continue to navigate the COVID-19 pandemic. In exploring candidate predictors, valuable insight was gained into the relationship between the Internet variables and the hospital census. Both the Health Bot and the Testing time series from the previous 14 days are strongly informative regarding the hospital COVID-19 census and twice gave ample lead time to a substantial change in the census. The VECM provides another model for the hospital COVID-19 positive census in case the simpler ARIMA model no longer exhibits a good fit. It also provides another candidate model that can be used for model-averaged forecasting. While the statistical underpinning of the VECM is somewhat complex, we found the model outputs to be intuitive and thus easily communicated to clinical leaders. Access to this information can help better inform manpower planning and resource allocation throughout the health care system by leveraging insights derived from both of these Internet variables. For these reasons, it is worth considering adding a VECM to the repertoire of a COVID-19 pandemic surveillance program.
Change history
19 August 2021
A Correction to this paper has been published: https://doi.org/10.1038/s41598-021-96363-y
References
World Health Organization. IHR Emergency Committee on Novel Coronavirus (2019-nCoV). https://www.who.int/dg/speeches/detail/whodirectorgeneralsstatementonihremergencycommitteeonnovelcoronavirus(2019ncov) (2020).
COVID-19 Map. Johns Hopkins Coronavirus Resource Center https://coronavirus.jhu.edu/map.html.
Centers for Disease Control and Prevention. Coronavirus Disease 2019 (COVID-19). https://www.cdc.gov/coronavirus/2019ncov/index.html (2020).
Jin, Y.-H. et al. A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-nCoV) infected pneumonia (standard version). Mil. Med. Res. 7, 4 (2020).
Fowler, J. H., Hill, S. J., Obradovich, N. & Levin, R. The effect of stay-at-home orders on COVID-19 cases and fatalities in the United States. Preprint at https://doi.org/10.1101/2020.04.13.20063628 (2020).
Matrajt, L. & Leung, T. Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. Emerg. Infect. Dis. 26, (2020).
Opening Up America Again. The White House https://www.whitehouse.gov/openingamerica/.
Public Health Guidance for Reopening. https://www.alabamapublichealth.gov/covid19/guidance.html.
Yamana, T., Pei, S., Kandula, S. & Shaman, J. Projection of COVID-19 cases and deaths in the US as individual states reopen. Preprint at https://doi.org/10.1101/2020.05.04.20090670 (2020).
Chopra, V., Toner, E., Waldhorn, R. & Washer, L. How should U.S. hospitals prepare for coronavirus disease 2019 (COVID-19)? Ann. Intern. Med. 172, 621–622 (2020).
Murthy, S., Gomersall, C. D. & Fowler, R. A. Care for critically ill patients with COVID-19. JAMA 323, 1499–1500 (2020).
Ng, K. et al. COVID-19 and the risk to health care workers: A case report. Ann. Intern. Med. 172, 766–767 (2020).
Mavragani, A. Infodemiology and infoveillance: Scoping review. J. Med. Internet Res. 22, e16206 (2020).
Althouse, B. M., Ng, Y. Y. & Cummings, D. A. T. Prediction of dengue incidence using search query surveillance. PLoS Negl. Trop. Dis. 5, e1258 (2011).
Husnayain, A., Fuad, A. & Su, E. C.-Y. Applications of Google Search Trends for risk communication in infectious disease management: A case study of the COVID-19 outbreak in Taiwan. Int. J. Infect. Dis. 95, 221–223 (2020).
Santillana, M., Nsoesie, E. O., Mekaru, S. R., Scales, D. & Brownstein, J. S. Using clinicians' search query data to monitor influenza epidemics. Clin. Infect. Dis. 59, 1446–1450 (2014).
Ocampo, A. J., Chunara, R. & Brownstein, J. S. Using search queries for malaria surveillance, Thailand. Malar. J. 12, 390 (2013).
Google Trends. Google Trends https://trends.google.com/trends/?geo=US.
Effenberger, M. et al. Association of the COVID-19 pandemic with internet search volumes: A Google Trends™ analysis. Int. J. Infect. Dis. 95, 192–197 (2020).
Li, C. et al. Retrospective analysis of the possibility of predicting the COVID-19 outbreak from Internet searches and social media data, China, 2020. Euro Surveill. 25 (2020).
Walker, A., Hopkins, C. & Surda, P. Use of Google Trends to investigate loss-of-smell-related searches during the COVID-19 outbreak. Int. Forum Allergy Rhinol. 10, 839–847 (2020).
Yuan, X. et al. Trends and prediction in daily new cases and deaths of COVID-19 in the United States: An internet search-interest based model. Explor. Res. Hypothesis Med. 5, 1–6 (2020).
Nuti, S. V. et al. The use of google trends in health care research: A systematic review. PLoS ONE 9, e109583 (2014).
Cartenì, A., Di Francesco, L. & Martino, M. How mobility habits influenced the spread of the COVID-19 pandemic: Results from the Italian case study. Sci. Total Environ. 741, 140489 (2020).
Jiang, J. & Luo, L. Influence of population mobility on the novel coronavirus disease (COVID-19) epidemic: Based on panel data from Hubei, China. Glob. Health Res. Policy 5, 30 (2020).
Kraemer, M. U. G. et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Science 368, 493–497 (2020).
Badr, H. S. et al. Association between mobility patterns and COVID-19 transmission in the USA: A mathematical modelling study. Lancet Infect. Dis. (2020). https://doi.org/10.1016/S1473-3099(20)30553-3.
Sasidharan, M., Singh, A., Torbaghan, M. E. & Parlikad, A. K. A vulnerability-based approach to human-mobility reduction for countering COVID-19 transmission in London while considering local air quality. Sci. Total Environ. 741, 140515 (2020).
COVID-19 Community Mobility Report. https://www.google.com/covid19/mobility?hl=en.
COVID-19 Social Distancing Scoreboard. Unacast https://www.unacast.com/covid19/socialdistancingscoreboard.
University of Maryland COVID-19 impact analysis platform. https://data.covid.umd.edu/.
COVID-19—Mobility Trends Reports. Apple https://www.apple.com/covid19/mobility.
Facebook Data for Good Mobility Dashboard. COVID-19 Mobility Data Network https://www.covid19mobility.org/dashboards/facebookdataforgood/ (2020).
Bharti, U. et al. Medbot: Conversational artificial intelligence powered chatbot for delivering telehealth after COVID-19. In 2020 5th International Conference on Communication and Electronics Systems (ICCES) 870–875 (2020). https://doi.org/10.1109/ICCES48766.2020.9137944.
Ting, D. S. W., Carin, L., Dzau, V. & Wong, T. Y. Digital technology and COVID-19. Nat. Med. 26, 459–461 (2020).
Microsoft Health Bot Project: AI at work for your patients. Microsoft Research https://www.microsoft.com/enus/research/project/healthbot/.
COVID-19 Symptom Checker. intermountainhealthcare.org https://intermountainhealthcare.org/covid19coronavirus/covid19symptomchecker/.
WHO Health Alert brings COVID-19 facts to billions via WhatsApp. https://web.archive.org/web/20200323042822/https://www.who.int/newsroom/featurestories/detail/whohealthalertbringscovid19factstobillionsviawhatsapp (2020).
Miner, A. S., Laranjo, L. & Kocaballi, A. B. Chatbots in the fight against the COVID-19 pandemic. npj Digit. Med. 3, 1–4 (2020).
Stankiewicz, C. F., Kevin. Apple updated Siri to help people who ask if they have the coronavirus. CNBC https://www.cnbc.com/2020/03/21/appleupdatedsiritohelppeoplewhoaskiftheyhavecoronavirus.html (2020).
Explore—Opendatasoft. https://demography.osbm.nc.gov/explore/?sort=modified.
Eysenbach, G. Infodemiology and infoveillance: Tracking online health information and cyberbehavior for public health. Am. J. Prev. Med. 40, S154–S158 (2011).
Mavragani, A. & Ochoa, G. Google Trends in infodemiology and infoveillance: Methodology framework. JMIR Public Health Surveill. 5, e13439 (2019).
Google News Initiative Training Center. Google News Initiative Training Center https://newsinitiative.withgoogle.com/training/lesson/6043276230524928?image=trends&tool=Google%20Trends.
Pfaff, B. Analysis of Integrated and Cointegrated Time Series with R (Springer-Verlag, 2008). https://doi.org/10.1007/9780387759678.
Dickey, D. A. & Fuller, W. A. Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica 49, 1057–1072 (1981).
Campbell, J. Y. & Perron, P. Pitfalls and opportunities: What macroeconomists should know about unit roots. NBER Macroecon. Annu. 6, 141–201 (1991).
Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 19, 716–723 (1974).
Burnham, K. P. & Anderson, D. R. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (Springer, 2002).
Johansen, S. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551–1580 (1991).
Johansen, S. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models (Oxford University Press, New York, 1995).
Johansen, S. & Juselius, K. Maximum likelihood estimation and inference on cointegration—with applications to the demand for money. Oxf. Bull. Econ. Stat. 52, 169–210 (1990).
Hamilton, J. Time series analysis (Princeton, Princeton University Press, 1994).
Zivot, E. & Wang, J. Modeling Financial Time Series with S-Plus® (Springer, New York, 2003). https://doi.org/10.1007/9780387217635.
Hyndman, R. J. & Khandakar, Y. Automatic time series forecasting: the forecast package for R. J. Stat. Softw. 27 (2008).
Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 178, 1505–1512 (2013).
Barros, J. M., Duggan, J. & Rebholz-Schuhmann, D. The application of internet-based sources for public health surveillance (infoveillance): Systematic review. J. Med. Internet Res. 22, e13680 (2020).
Cleveland, R. B., Cleveland, W. S., McRae, J. E. & Terpenning, I. STL: A seasonal-trend decomposition procedure based on loess. J. Off. Stat. 6, 3–73 (1990).
Author information
Contributions
PT conceived of and conducted the statistical analysis, and wrote much of the manuscript text; TT helped with the statistical analysis and wrote portions of the manuscript text; GR and AW performed critical revision of the manuscript for important intellectual content and offered guidance in the study. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
The original online version of this Article was revised: The original version of this Article contained errors in Table 1, where the first row was incorrectly formatted as a heading.
About this article
Cite this article
Turk, P. J., Tran, T. P., Rose, G. A. et al. A predictive internet-based model for COVID-19 hospitalization census. Sci. Rep. 11, 5106 (2021). https://doi.org/10.1038/s41598-021-84091-2