Flickering in Information Spreading Precedes Critical Transitions in Financial Markets

Like many complex dynamical systems, financial markets exhibit sudden changes or tipping points that can turn into systemic risk. This paper aims to build and validate a new class of early warning signals of critical transitions. We base our analysis on information spreading patterns in dynamic temporal networks, where nodes are connected by short-term causality. Before a tipping point occurs, we observe flickering in information spreading, as measured by clustering coefficients: nodes rapidly switch between "being in" and "being out of" the information diffusion process. Concurrently, stock markets start to desynchronize. To capture these features, we build two early warning indicators, based on the number of regime switches and on the time between two switches. We divide our data into two sub-samples. Over the first one, using the receiver operating characteristic (ROC) curve, we show that we are able to detect a tipping point about one year before it occurs. For instance, our empirical model perfectly predicts the Global Financial Crisis. Over the second sub-sample, used as a robustness check, our two statistical metrics also capture, to a large extent, the 2016 financial turmoil. Our results suggest that our indicators carry informational content about a future tipping point, and therefore have strong policy implications.

In this section, we detail our statistical procedure to build directed temporal networks.

Data normalization
Financial data exhibit auto-regressive conditional heteroskedasticity 1 . Thus, before implementing causality tests, we need to normalize the series by their conditional variance. The conditional variance is modeled as a generalized auto-regressive conditional heteroskedastic (GARCH) process:

$$ r_t^{(i)} = \sigma_t^{(i)} \varepsilon_t^{(i)}, \qquad \varepsilon_t^{(i)} \sim \text{i.i.d.}(0, 1), $$

$$ \left(\sigma_t^{(i)}\right)^2 = \omega_{s_t} + \alpha_{s_t} \left(r_{t-1}^{(i)}\right)^2 + \beta_{s_t} \left(\sigma_{t-1}^{(i)}\right)^2, \tag{3} $$

where r^{(i)} denotes the total return of stock index i. In the variance equation Eq. (3), the parameters are allowed to switch from one value to another according to two different processes: i) recurrent states, i.e., Markov-switching (MS-) GARCH 2 , or ii) non-recurrent states, i.e., change-point (CP-) GARCH 3 . Define S_T = {s_1, s_2, . . . , s_T}, where the latent process {s_t} is a first-order Markov process whose K × K transition matrix is defined either by:

$$ P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1K} \\ p_{21} & p_{22} & \cdots & p_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ p_{K1} & p_{K2} & \cdots & p_{KK} \end{pmatrix} $$

for a MS-GARCH process, or by:

$$ P = \begin{pmatrix} p_{11} & 1-p_{11} & 0 & \cdots & 0 \\ 0 & p_{22} & 1-p_{22} & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & p_{K-1,K-1} & 1-p_{K-1,K-1} \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix} $$

for a CP-GARCH process, where p_{ij} = P[s_t = j | s_{t-1} = i] is the conditional probability of switching from state i at time t − 1 to state j at time t.

We consider MS-GARCH models with up to three regimes, and CP-GARCH models with up to five regimes. Each model is estimated using a Bayesian approach, which combines sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) methods 4 . After convergence in the Geweke sense 5 , we compute the marginal likelihood by bridge sampling (1,000 iterations). The number of particles is set to 150 for change-point models, and to 250 for Markov-switching models. The number of Gibbs iterations is set to 10,000. Finally, we select the model for which the marginal likelihood is maximal.

Tables (S1) and (S2) display the marginal likelihoods for the two sub-periods. Over the first sub-period, results support a no-break model for the AEX index and a two-break model (i.e., three non-recurrent states) for the ASE index. All other stock index series exhibit a recurrent two-state Markov-switching process (high- and low-volatility regimes).
Over the second sub-period, results are slightly different: the AEX, BEL20, CAC40, DAX, UKX and SP500 indexes exhibit a two-regime MS-GARCH process, whereas the IBEX is modeled as a three-regime MS-GARCH model. The ISEQ, PSI and FTSEMIB indexes display a two-regime CP-GARCH representation. Using the above procedure, we build two sets of normalized residuals: $\eta = \{(\eta_t^{(1)}, \ldots, \eta_t^{(10)})' : t \in \mathbb{Z}\}$ relating to the ten European stock indexes, and $\eta^{\text{bench}} = \{\eta_t^{\text{bench}} : t \in \mathbb{Z}\}$ relating to the SP500 index.
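As a simplified illustration of the normalization step, the sketch below filters a return series with a plain single-regime GARCH(1,1), i.e., the special case of the MS-/CP-GARCH models with one state, and standardizes by the conditional volatility. The function name and parameter values are ours, chosen purely for illustration, not estimates from the paper.

```python
import numpy as np

def garch_normalize(r, omega=0.05, alpha=0.1, beta=0.85):
    """Return standardized residuals eta_t = r_t / sigma_t from a GARCH(1,1) filter."""
    T = len(r)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(r)  # initialize the recursion at the sample variance
    for t in range(1, T):
        # GARCH(1,1) variance recursion: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return r / np.sqrt(sigma2)

rng = np.random.default_rng(0)
r = rng.standard_normal(500)        # toy return series
eta = garch_normalize(r)            # normalized residuals fed to the causality tests
```

In the paper's full procedure, the parameters (ω, α, β) would instead switch with the latent state s_t, and the selected model varies by index and sub-period as described above.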

Causal reconstruction
To recover the topology of a causal network, we use Granger non-causality tests which rely on the cross-correlations of the normalized innovations of filtered time series 6 . Hence, the test has a straightforward interpretation in terms of shocks or information diffusion in the network. Moreover, the statistical procedure is robust both to departures from normality and to outliers, features that are characteristic of financial market data.
Let $X^{(1)} = \{X_t^{(1)} : t \in \mathbb{Z}\}$ and $X^{(2)} = \{X_t^{(2)} : t \in \mathbb{Z}\}$ be two sets of innovations of suitable univariate or multivariate time series models, of dimensions d_1 and d_2, for the observed time series. Define the corresponding covariances and cross-covariances at lag k:

$$ C^{(hh')}(k) = \mathrm{Cov}\left(X_t^{(h)}, X_{t-k}^{(h')}\right), \qquad h, h' \in \{1, 2\}. $$

The corresponding correlations and cross-correlations at lag k are defined by:

$$ R^{(hh)}(k) = D_h^{-1} C^{(hh)}(k) D_h^{-1} \quad \text{and} \quad R^{(12)}(k) = D_1^{-1} C^{(12)}(k) D_2^{-1}, $$

where D_h is the diagonal matrix of the standard deviations of X^{(h)}. Then, the null hypothesis of no-correlation or independence can be tested using the following portmanteau statistic:

$$ Q_M = T \sum_{k=-M}^{M} \mathrm{vec}\left(R^{(12)}(k)\right)' \left[R^{(22)}(0) \otimes R^{(11)}(0)\right]^{-1} \mathrm{vec}\left(R^{(12)}(k)\right). $$

Under the null hypothesis of no-significance of cross-correlations at all leads and lags, Q_M is distributed as a chi-square law with (2M + 1)d_1 d_2 degrees of freedom. Using the above statistic, Granger non-causality tests are easily built by summing over {1, . . . , M} or {−M, . . . , −1}. For instance, testing for Granger non-causality from X^{(2)} to X^{(1)} (X^{(2)} ↛ X^{(1)}) amounts to computing the following test statistic:

$$ Q_M^{(2 \to 1)} = T \sum_{k=1}^{M} \mathrm{vec}\left(R^{(12)}(k)\right)' \left[R^{(22)}(0) \otimes R^{(11)}(0)\right]^{-1} \mathrm{vec}\left(R^{(12)}(k)\right). \tag{10} $$

Similarly, to test for non-causality from X^{(1)} to X^{(2)} (X^{(1)} ↛ X^{(2)}), we compute the following statistic:

$$ Q_M^{(1 \to 2)} = T \sum_{k=-M}^{-1} \mathrm{vec}\left(R^{(12)}(k)\right)' \left[R^{(22)}(0) \otimes R^{(11)}(0)\right]^{-1} \mathrm{vec}\left(R^{(12)}(k)\right). \tag{11} $$

Under the null hypothesis, both tests Eq. 10 and Eq. 11 are chi-square distributed with M d_1 d_2 degrees of freedom. Hence, in our model, two nodes will be connected if the p-value of Eq. 10 and/or Eq. 11 is less than a given threshold, set here at 5%. All the above tests are portmanteau tests. For data orthogonalization (see next Section), it is also useful to look at the significance of an individual lead/lag. A natural test statistic is given by:

$$ Q(k) = T\, \mathrm{vec}\left(R^{(12)}(k)\right)' \left[R^{(22)}(0) \otimes R^{(11)}(0)\right]^{-1} \mathrm{vec}\left(R^{(12)}(k)\right), \tag{12} $$

which is also chi-square distributed with d_1 d_2 degrees of freedom.
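The directional test is easiest to see in the scalar case (d_1 = d_2 = 1), where the statistic reduces to T times the sum of squared cross-correlations over positive lags. The sketch below is our own simplified illustration on simulated data (function names and the simulated design are ours, not from the paper); the value 7.815 is the standard 5% critical value of a chi-square with 3 degrees of freedom.

```python
import numpy as np

def granger_stat(eta1, eta2, M):
    """T * sum over k=1..M of squared cross-correlations Corr(eta1_t, eta2_{t-k})."""
    T = len(eta1)
    e1 = (eta1 - eta1.mean()) / eta1.std()
    e2 = (eta2 - eta2.mean()) / eta2.std()
    q = 0.0
    for k in range(1, M + 1):
        r12 = np.mean(e1[k:] * e2[:-k])  # sample cross-correlation at lag k
        q += r12 ** 2
    return T * q

rng = np.random.default_rng(1)
eta2 = rng.standard_normal(1000)
# eta2 leads eta1 by one period, so eta2 Granger-causes eta1
eta1 = 0.5 * np.roll(eta2, 1) + rng.standard_normal(1000)
Q = granger_stat(eta1, eta2, M=3)
# Under non-causality, Q is approximately chi-square with M = 3 degrees of
# freedom; a value above the 5% critical value 7.815 rejects non-causality.
```

In the paper's network construction, an edge from one index to another is drawn whenever this kind of directional statistic is significant at the 5% level.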

Data orthogonalization
The normalization procedure described above leads to two sets of normalized residuals: $\eta = \{(\eta_t^{(1)}, \ldots, \eta_t^{(10)})' : t \in \mathbb{Z}\}$ corresponding to the European stock indexes, and $\eta^{\text{bench}} = \{\eta_t^{\text{bench}} : t \in \mathbb{Z}\}$ corresponding to the SP500 index. To tackle the omitted variable problem, we consider the SP500 index as a common driver influencing all European stock exchanges, and adopt a two-step procedure 7 . First, for each k ∈ {0, . . . , M}, we compute the test statistic given by Eq. 12 between each component of $\eta$ and $\eta^{\text{bench}}$, and keep the significant lags. Then, we regress each component $\{\eta_t^{(i)} : t \in \mathbb{Z}\}$ on the significant lags of $\{\eta_t^{\text{bench}} : t \in \mathbb{Z}\}$ and keep the regression residuals. Iterating the procedure for each European stock index, we build a set of ten series $\{\tilde{\eta}_t^{(i)} : t \in \mathbb{Z}\}$ orthogonalized with regard to the SP500 index. All tests are implemented on this set of orthogonalized European stock index series. Note that, since the causal structure may evolve over time, the orthogonalization process is performed over each rolling window under consideration.
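The regression step of the orthogonalization can be sketched as follows. This is a minimal illustration with simulated residuals: the function name is ours, and the lag set {0, 1} is assumed to be the set of significant lags purely for the example (in the paper it is chosen by the individual-lag test).

```python
import numpy as np

def orthogonalize(eta_eu, eta_bench, lags):
    """Regress eta_eu on the given lags of eta_bench; return the OLS residuals."""
    T = len(eta_eu)
    kmax = max(lags)
    # Design matrix: one column per retained lag of the benchmark residuals
    X = np.column_stack([eta_bench[kmax - k: T - k] for k in lags])
    X = np.column_stack([np.ones(T - kmax), X])   # add an intercept
    y = eta_eu[kmax:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta                           # orthogonalized series

rng = np.random.default_rng(2)
bench = rng.standard_normal(800)                  # stand-in for SP500 residuals
# A "European" series driven by the benchmark at lags 0 and 1, plus noise
eu = 0.6 * bench + 0.3 * np.roll(bench, 1) + rng.standard_normal(800)
eu_orth = orthogonalize(eu, bench, lags=[0, 1])
```

By construction, the residuals are uncorrelated in-sample with the retained benchmark lags, which is exactly the property the causality tests then exploit on the orthogonalized series.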

Clustering measures
A binary network is described by a graph G = (N, A), where N is the number of nodes, here the number of stock indexes, and A = [a_{ij}] is the N × N adjacency matrix, with a_{ij} = 1 if the null hypothesis of non-causality is rejected at the standard threshold, and 0 otherwise. Let (A)_i be the ith row of the matrix; the in-degree (d_i^{in}), out-degree (d_i^{out}) and total degree (d_i^{tot}) of node i are then defined as:

$$ d_i^{in} = (A')_i \mathbf{1} = \sum_{j} a_{ji}, \qquad d_i^{out} = (A)_i \mathbf{1} = \sum_{j} a_{ij}, \qquad d_i^{tot} = d_i^{in} + d_i^{out}, $$

where $\mathbf{1} = (1, 1, \ldots, 1)'$, and the number of bilateral edges of node i by:

$$ d_i^{\leftrightarrow} = (A^2)_{ii} = \sum_{j \neq i} a_{ij} a_{ji}. $$

Let $T_{D_i}$ be the total number of triangles possibly formed by node i; the different measures of triangular clustering 8 are given in Table (S3). In this paper, only the total clustering coefficient $C_i^D$ is used.
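The degree counts and a total clustering coefficient can be computed directly from the adjacency matrix. Since the source defers the exact clustering formulas to Table (S3), the sketch below uses the common Fagiolo-style definition of total directed clustering as an assumption, and the adjacency matrix is a toy example, not one estimated from market data.

```python
import numpy as np

def degrees_and_clustering(A):
    """In/out/total degrees, bilateral edges, and a total clustering coefficient."""
    ones = np.ones(A.shape[0])
    d_out = A @ ones                   # out-degree: row sums
    d_in = A.T @ ones                  # in-degree: column sums
    d_tot = d_in + d_out               # total degree
    d_bil = np.diag(A @ A)             # bilateral (reciprocated) edges
    S = A + A.T
    num = np.diag(S @ S @ S)           # twice the number of directed triangles (all types)
    den = 2.0 * (d_tot * (d_tot - 1) - 2.0 * d_bil)   # twice the maximum possible
    C = np.divide(num, den, out=np.zeros_like(den), where=den > 0)
    return d_in, d_out, d_tot, d_bil, C

# Toy example: a fully reciprocal triangle, so every node closes every
# possible triangle and the total clustering coefficient equals 1.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
d_in, d_out, d_tot, d_bil, C = degrees_and_clustering(A)
# C -> [1.0, 1.0, 1.0]
```

Nodes with a zero denominator (fewer than two neighbours) are assigned a clustering of zero, a conventional choice that keeps the early-warning series well defined on sparse windows.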