Eigenvalues of the covariance matrix as early warning signals for critical transitions in ecological systems

Many ecological systems are subject to critical transitions, which are abrupt changes to contrasting states triggered by small changes in some key component of the system. Temporal early warning signals such as the variance of a time series, and spatial early warning signals such as the spatial correlation in a snapshot of the system’s state, have been proposed to forecast critical transitions. However, temporal early warning signals do not take the spatial pattern into account, and past spatial indicators only examine one snapshot at a time. In this study, we propose the use of eigenvalues of the covariance matrix of multiple time series as early warning signals. We first show theoretically why these indicators may increase as the system moves closer to the critical transition. Then, we apply the method to simulated data from several spatial ecological models to demonstrate the method’s applicability. This method has the advantage that it takes into account only the fluctuations of the system about its equilibrium, thus eliminating the effects of any change in equilibrium values. The eigenvector associated with the largest eigenvalue of the covariance matrix is helpful for identifying the regions that are most vulnerable to the critical transition.

We consider a system whose fluctuations $z$ about a stable equilibrium are governed by the Fokker-Planck equation
\[
\frac{\partial p}{\partial t} = -\sum_{i,j} \frac{\partial}{\partial z_i}\left(F_{ij} z_j p\right) + \frac{1}{2}\sum_{i,j} D_{ij} \frac{\partial^2 p}{\partial z_i \partial z_j},
\]
with a force matrix $F$ and a diffusion matrix $D$. Assume that all of the eigenvalues of $F$ are distinct with negative real parts, and denote them $\lambda_1, \lambda_2, \ldots, \lambda_N$, indexed in order of increasing magnitude of their real parts (i.e., $|\operatorname{Re}(\lambda_1)| \leq \ldots \leq |\operatorname{Re}(\lambda_N)|$). The diffusion matrix $D$ is assumed to have all positive eigenvalues. With these assumptions, the stationary distribution of $z$ is Gaussian, and we denote its covariance matrix by $\Sigma$.
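This setup can be checked numerically. The sketch below (the matrices $F$ and $L$ and all parameter values are illustrative assumptions, not taken from the text) simulates the corresponding linear stochastic differential equation $dz = Fz\,dt + L\,dW$ with $D = LL^{\tau}$, and compares the empirical covariance of the fluctuations with the stationary covariance, which solves the Lyapunov equation $F\Sigma + \Sigma F^{\tau} = -D$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)

# Hypothetical stable force matrix F and noise factor L (D = L L^T)
F = np.array([[-0.2, 0.1],
              [0.05, -1.0]])
L = np.array([[0.3, 0.0],
              [0.1, 0.4]])
D = L @ L.T

# Euler-Maruyama simulation of dz = F z dt + L dW
dt, n_steps = 0.01, 400_000
noise = (rng.normal(size=(n_steps, 2)) @ L.T) * np.sqrt(dt)
z = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    z = z + (F @ z) * dt + noise[t]
    samples[t] = z

Sigma_emp = np.cov(samples[n_steps // 4:].T)  # discard the transient
Sigma = solve_continuous_lyapunov(F, -D)      # F Σ + Σ F^T = -D
print(Sigma_emp)
print(Sigma)
```

The two printed matrices should agree up to sampling error, which shrinks as the simulation is lengthened.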
Our results concern the relationship between the eigenvalues of $F$ and $\Sigma$ as the system undergoes a codimension-1 bifurcation. In such a bifurcation, typically only one of $F$'s eigenvalues (or the real part of one complex conjugate pair of $F$'s eigenvalues) vanishes at the critical transition.

II. EIGENVALUES OF THE COVARIANCE MATRIX
Lemma 1. Let the columns of a matrix $T$ contain the eigenvectors of $F$. Let $\tilde{\Sigma}$ be the covariance of the state variables when the eigenvectors are used as their coordinate basis, that is, $\tilde{\Sigma} = T^{-1}\Sigma T^{-\tau}$. Then the elements of $\tilde{\Sigma}$ satisfy
\[
\tilde{\Sigma}_{ij} = -\frac{\tilde{D}_{ij}}{\lambda_i + \lambda_j}, \tag{1}
\]
where $\tilde{D} = T^{-1} D T^{-\tau}$ and the superscript $\tau$ denotes transposition.

Lemma 2. Suppose that the dominant eigenvalue $\lambda_1$ of $F$ is real, and also suppose that $|\tilde{D}_{11}/(2\lambda_1)| \geq |\max(\tilde{D})/(\epsilon\lambda_2)|$ with $0 < \epsilon \ll 1$. Then the dominant eigenvalue of $\Sigma$ is equal to $|\tilde{D}_{11}/(2\lambda_1)| + O(\epsilon|\tilde{\Sigma}_{11}|)$ and all of the other eigenvalues are $O(\epsilon|\tilde{\Sigma}_{11}|)$.
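Lemma 1 can be verified numerically on a concrete system. In the sketch below, $F$ and $D$ are hypothetical matrices chosen so that the eigenvalues of $F$ are real, distinct, and well separated; the stationary covariance is obtained from the Lyapunov equation $F\Sigma + \Sigma F^{\tau} = -D$ and transformed into the eigenvector basis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable F with distinct, well-separated real eigenvalues,
# and a positive-definite diffusion matrix D (illustrative values only)
F = np.array([[-0.1, 0.02, 0.0],
              [0.03, -0.8, 0.05],
              [0.0, 0.04, -1.5]])
D = np.diag([0.2, 0.3, 0.25])

lam, T = np.linalg.eig(F)
order = np.argsort(np.abs(lam.real))      # index by |Re|, smallest first
lam, T = lam[order], T[:, order]

Sigma = solve_continuous_lyapunov(F, -D)  # stationary covariance: FΣ + ΣF^T = -D
Tinv = np.linalg.inv(T)
Sigma_t = Tinv @ Sigma @ Tinv.T           # covariance in the eigenvector basis
D_t = Tinv @ D @ Tinv.T                   # diffusion in the eigenvector basis

# Lemma 1: each element of the transformed covariance is -D̃_ij/(λ_i + λ_j)
pred = -D_t / (lam[:, None] + lam[None, :])
print(np.allclose(Sigma_t, pred))

# Lemma 2: the top eigenvalue of Σ is close to |D̃_11 / (2 λ_1)|
top = np.max(np.linalg.eigvalsh(Sigma))
print(top, abs(D_t[0, 0] / (2 * lam[0])))
```

Because the dominant eigenvalue here is an order of magnitude slower than the others, the Lemma 2 approximation is already accurate for these values.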
Theorem 4. Suppose that the magnitude of the real part of the dominant eigenvalue of $F$ is small enough that the assumptions of either Lemma 2 or Lemma 3 are satisfied. Then, as this real part approaches zero, the following hold. If the assumptions of Lemma 2 are satisfied, the largest eigenvalue of $\Sigma$ becomes larger in absolute terms and larger relative to all of the other eigenvalues of $\Sigma$. If the assumptions of Lemma 3 are satisfied, the sum of the largest two eigenvalues of $\Sigma$ becomes larger in absolute terms and relative to all of the other eigenvalues of $\Sigma$.
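The behavior described in Theorem 4 (in the real-eigenvalue case of Lemma 2) can be illustrated with a toy two-variable system in which a parameter $\mu$ drives the dominant eigenvalue $-\mu$ of $F$ toward zero; the matrices below are assumptions for illustration only:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

D = 0.1 * np.eye(2)   # assumed diffusion matrix

def cov_spectrum(mu):
    """Eigenvalues (descending) of the stationary covariance for a toy
    system whose dominant eigenvalue of F is -mu (hypothetical matrices)."""
    F = np.array([[-mu, 0.0],
                  [0.3, -1.0]])
    Sigma = solve_continuous_lyapunov(F, -D)  # F Σ + Σ F^T = -D
    return np.sort(np.linalg.eigvalsh(Sigma))[::-1]

# As mu -> 0 the largest eigenvalue of Σ grows, both absolutely and
# relative to the other eigenvalue, as Theorem 4 predicts.
for mu in (0.5, 0.1, 0.02):
    e1, e2 = cov_spectrum(mu)
    print(f"Re(lam1) = {-mu:+.2f}: largest = {e1:.3f}, ratio = {e1 / e2:.1f}")
```

Both the largest eigenvalue and its ratio to the second eigenvalue increase monotonically as the transition is approached, which is what makes these quantities usable as early warning signals.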

III. PROOFS
Lemma 1. Let the columns of a matrix $T$ contain the eigenvectors of $F$. Let $\tilde{\Sigma}$ be the covariance of the state variables when the eigenvectors are used as their coordinate basis, that is, $\tilde{\Sigma} = T^{-1}\Sigma T^{-\tau}$. Then the elements of $\tilde{\Sigma}$ satisfy
\[
\tilde{\Sigma}_{ij} = -\frac{\tilde{D}_{ij}}{\lambda_i + \lambda_j}. \tag{1}
\]

Proof. Kwon and coauthors² show that the covariance matrix $\Sigma$ may be written as
\[
\Sigma = -\tfrac{1}{2} F^{-1}(D + Q), \tag{2}
\]
where $Q$ is an antisymmetric matrix with zeroes on its diagonal which satisfies
\[
FQ + QF^{\tau} = FD - DF^{\tau}. \tag{3}
\]
Next, let $\tilde{Q} = T^{-1}QT^{-\tau}$ and $\Lambda = T^{-1}FT$, the diagonalization of $F$. Equation (3) repeated in terms of these matrices is
\[
\Lambda\tilde{Q} + \tilde{Q}\Lambda = \Lambda\tilde{D} - \tilde{D}\Lambda. \tag{4}
\]
Thus the elements of $\tilde{Q}$ must satisfy
\[
\tilde{Q}_{ij} = \frac{\lambda_i - \lambda_j}{\lambda_i + \lambda_j}\,\tilde{D}_{ij}. \tag{5}
\]
To use (5) to find elements of $\tilde{\Sigma}$, note that (2) holds in transformed coordinates as
\[
\tilde{\Sigma} = -\tfrac{1}{2}\Lambda^{-1}(\tilde{D} + \tilde{Q}). \tag{6}
\]
Putting (5) and (6) together yields (1).
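This decomposition is easy to check numerically. Given $\Sigma$ from the Lyapunov equation $F\Sigma + \Sigma F^{\tau} = -D$, the matrix $Q = -2F\Sigma - D$ (this normalization is an assumption consistent with the conventions used here) is antisymmetric and satisfies condition (3); the matrices below are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable F and positive-definite D (hypothetical values)
F = np.array([[-0.3, 0.1],
              [-0.2, -0.7]])
D = np.array([[0.4, 0.1],
              [0.1, 0.5]])

Sigma = solve_continuous_lyapunov(F, -D)  # F Σ + Σ F^T = -D

# Recover the antisymmetric matrix from F Σ = -(D + Q)/2
Q = -2 * F @ Sigma - D
print(np.allclose(Q, -Q.T))                           # antisymmetric, zero diagonal
print(np.allclose(F @ Q + Q @ F.T, F @ D - D @ F.T))  # condition (3)
```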
Lemma 2. Suppose that the dominant eigenvalue $\lambda_1$ of $F$ is real, and also suppose that $|\tilde{D}_{11}/(2\lambda_1)| \geq |\max(\tilde{D})/(\epsilon\lambda_2)|$ with $0 < \epsilon \ll 1$. Then the dominant eigenvalue of $\Sigma$ is equal to $|\tilde{D}_{11}/(2\lambda_1)| + O(\epsilon|\tilde{\Sigma}_{11}|)$ and all of the other eigenvalues are $O(\epsilon|\tilde{\Sigma}_{11}|)$.
Proof. Our proof uses the same general approach as in the case where $\lambda_1$ is real; here we treat the case in which the dominant eigenvalues $\lambda_1$ and $\lambda_2 = \overline{\lambda_1}$ form a complex conjugate pair. Using (1), we can write $\tilde{\Sigma}$ as a matrix whose dominant entries form the leading $2 \times 2$ block
\[
\tilde{\Sigma}^{(1)} = -\begin{pmatrix} \tilde{D}_{11}/(2\lambda_1) & \tilde{D}_{12}/(\lambda_1 + \lambda_2) \\ \tilde{D}_{21}/(\lambda_1 + \lambda_2) & \tilde{D}_{22}/(2\lambda_2) \end{pmatrix},
\]
with all remaining entries of order $\epsilon|\tilde{\Sigma}_{11}|$, which along with $\Sigma = T\tilde{\Sigma}T^{\tau}$ yields
\[
\Sigma = (T_1, T_2)\,\tilde{\Sigma}^{(1)}\,(T_1, T_2)^{\tau} + O(\epsilon|\tilde{\Sigma}_{11}|).
\]
Alternatively $\Sigma$ may be factorized as
\[
\Sigma = (\operatorname{Re}(T_1), \operatorname{Im}(T_1))\,\Sigma^{(1)}\,(\operatorname{Re}(T_1), \operatorname{Im}(T_1))^{\tau} + O(\epsilon|\tilde{\Sigma}_{11}|), \tag{12}
\]
where $\Sigma^{(1)}$ is a real $2 \times 2$ matrix that satisfies $\Sigma^{(1)} = T^{(1)}\tilde{\Sigma}^{(1)}(T^{(1)})^{\tau}$ and $T^{(1)}$ is a matrix whose columns are eigenvectors of the restriction of $F$ to the subspace spanned by $\operatorname{Re}(T_1)$ and $\operatorname{Im}(T_1)$. Specifically,
\[
T^{(1)} = \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}, \tag{13}
\]
so that $(T_1, T_2) = (\operatorname{Re}(T_1), \operatorname{Im}(T_1))\,T^{(1)}$.
The matrix $\Sigma^{(1)}$ represents the covariance in the two-dimensional subspace spanned by $\operatorname{Re}(T_1)$ and $\operatorname{Im}(T_1)$. Now let $H = \tilde{\Sigma}^{(1)}(\tilde{\Sigma}^{(1)})^{\dagger}$, the product of $\tilde{\Sigma}^{(1)}$ with its conjugate transpose. It is straightforward to verify that the eigenvalues of $H$ are
\[
h_{1,2} = \left(|\tilde{\Sigma}_{11}| \pm \tilde{\Sigma}_{12}\right)^2.
\]
If $|\tilde{D}_{11}| > 0$, the associated eigenvectors are the columns of
\[
G = \frac{1}{n}\begin{pmatrix} \tilde{\Sigma}_{11} & \tilde{\Sigma}_{11} \\ |\tilde{\Sigma}_{11}| & -|\tilde{\Sigma}_{11}| \end{pmatrix},
\]
where the normalizing constant $n$ ensures that the eigenvectors have unit norms. The Takagi factorization¹ of $\tilde{\Sigma}^{(1)}$ is then
\[
\tilde{\Sigma}^{(1)} = (GP)\operatorname{diag}(h_1^{1/2}, h_2^{1/2})(GP)^{\tau}, \tag{22}
\]
where $h_1$ and $h_2$ are positive and $P$ is a diagonal matrix of phase factors chosen so that (22) holds. If $|\tilde{D}_{11}| = 0$, then $\tilde{\Sigma}^{(1)}$ is a real symmetric matrix. It can be decomposed in the form of (22) by taking the columns of $G$ to be its eigenvectors and choosing $P$ to absorb the signs of its eigenvalues. It follows from (12), (13), and (22) that
\[
\Sigma = MM^{\tau} + O(\epsilon|\tilde{\Sigma}_{11}|), \qquad M = (\operatorname{Re}(T_1), \operatorname{Im}(T_1))\,T^{(1)} G P\operatorname{diag}(h_1^{1/4}, h_2^{1/4}).
\]
To obtain a simple equation for the eigenvalues of $\Sigma$, we next decompose $M$ into the product of a matrix $V$ with orthonormal columns and an upper triangular matrix $U$. This can be done by applying the Gram-Schmidt process. The resulting first column of $V$ is simply $M_1/|M_1|$, where $M_1$ is the first column of $M$.
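The Takagi factorization used in this proof can be computed numerically from a standard singular value decomposition. The routine below is a sketch that assumes distinct singular values; the matrix `A` is an arbitrary illustrative example with the same complex-symmetric structure as the block $\tilde{\Sigma}^{(1)}$:

```python
import numpy as np

def takagi(A):
    """Takagi factorization A = U diag(s) U^T of a complex symmetric A.
    A sketch via the SVD; assumes distinct singular values."""
    V, s, Wh = np.linalg.svd(A)
    W = Wh.conj().T
    d = (V.T @ W).diagonal()     # diagonal unitary linking the two SVD factors
    U = V * np.sqrt(np.conj(d))  # rotate each column of V by a square-root phase
    return U, s

# A 2x2 complex symmetric matrix with the structure of the block in the proof
A = np.array([[0.5 + 0.2j, 0.1],
              [0.1, 0.5 - 0.2j]])
U, s = takagi(A)
print(np.allclose(U @ np.diag(s) @ U.T, A))    # reconstructs A
print(np.allclose(U.conj().T @ U, np.eye(2)))  # U is unitary
```

The construction exploits the fact that, for a symmetric matrix with distinct singular values, the left and right singular vectors differ only by per-column phases, which can be split evenly between the two factors.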