Abstract
Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth’s magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear.
Introduction
Dynamical systems describe the changing world around us, modeling the interactions between quantities that coevolve in time^{1}. These dynamics often give rise to rich, complex behaviors that may be difficult to predict from uncertain measurements, a phenomenon commonly known as chaos. Chaotic dynamics are ubiquitous in the physical, biological, and engineering sciences, and they have captivated amateurs and experts for over a century. The motion of planets^{2}, weather and climate^{3}, population dynamics^{4,5,6}, epidemiology^{7}, financial markets, earthquakes, and turbulence^{8, 9} are all compelling examples of chaos. Despite the name, chaos is not random, but is instead highly organized, exhibiting coherent structure and patterns^{10, 11}.
The confluence of big data and machine learning is driving a paradigm shift in the analysis and understanding of dynamical systems in science and engineering. Data are abundant, while physical laws or governing equations remain elusive, as is true for problems in climate science, finance, and neuroscience. Even in classical fields such as turbulence, where governing equations do exist, researchers are increasingly turning toward data-driven analysis^{12,13,14,15,16}. Many critical data-driven problems, such as predicting climate change, understanding cognition from neural recordings, or controlling turbulence for energy-efficient power production and transportation, are primed to take advantage of progress in the data-driven discovery of dynamics^{17,18,19,20,21,22,23,24,25,26,27}.
An early success of data-driven dynamical systems is the celebrated Takens embedding theorem^{9}, which allows for the reconstruction of an attractor that is diffeomorphic to the original chaotic attractor from a time series of a single measurement. This remarkable result states that, under certain conditions, the full dynamics of a system as complicated as a turbulent fluid may be uncovered from a time series of a single point measurement. Delay embeddings have been widely used to analyze and characterize chaotic systems^{5,6,7, 28,29,30,31}. They have also been used for linear system identification with the eigensystem realization algorithm (ERA)^{32} and in climate science with singular spectrum analysis (SSA)^{33} and nonlinear Laplacian spectral analysis^{34}. ERA and SSA yield eigen-time-delay coordinates by applying principal component analysis to a Hankel matrix. However, these methods are not generally useful for identifying meaningful models of chaotic nonlinear systems, such as those considered here.
In this work, we develop a universal data-driven decomposition of chaos into a forced linear system. This decomposition relies on time-delay embedding, a cornerstone of dynamical systems, but takes a new perspective based on regression models^{19} and modern Koopman operator theory^{35,36,37}. The resulting method partitions phase space into coherent regions where the forcing is small and the dynamics are approximately linear, and regions where the forcing is large. The forcing may be measured from time series data and strongly correlates with attractor switching and bursting phenomena in real-world examples. Linear representations of strongly nonlinear dynamics, enabled by machine learning and Koopman theory, promise to transform our ability to estimate, predict, and control complex systems in many diverse fields. A video abstract is available for this work at: https://youtu.be/831Ell3QNck, and code is available at: http://faculty.washington.edu/sbrunton/HAVOK.zip.
Results
Linear representations of nonlinear dynamics
Consider a dynamical system^{1} of the form

$$\frac{d}{{dt}}{\bf{x}}(t) = {\bf{f}}({\bf{x}}(t)),\qquad(1)$$

where \({\bf{x}}(t) \in {{\Bbb R}^n}\) is the state of the system at time t and f represents the dynamic constraints that define the equations of motion. When working with data, we often sample (1) discretely in time:

$${{\bf{x}}_{k + 1}} = {\bf{F}}({{\bf{x}}_k}),\qquad(2)$$
where x _{k} = x(kΔt). The traditional geometric perspective of dynamical systems describes the topological organization of trajectories of (1) or (2), which are mediated by fixed points, periodic orbits, and attractors of the dynamics f. However, analyzing the evolution of measurements, y = g(x), of the state provides an alternative view. This perspective was introduced by Koopman in 1931^{38}, although it has gained traction recently with the pioneering work of Mezić and collaborators^{35, 36} in response to the growing abundance of measurement data and the lack of known governing equations for many systems of interest. Koopman analysis relies on the existence of a linear operator \({\cal K}\) for the dynamical system in (2), given by
$${\cal K}g \triangleq g \circ {\bf{F}}\quad \Rightarrow \quad {\cal K}g({{\bf{x}}_k}) = g({{\bf{x}}_{k + 1}}).\qquad(3)$$
The Koopman operator \({\cal K}\) induces a linear system on the space of all measurement functions g, trading finite-dimensional nonlinear dynamics in (2) for infinite-dimensional linear dynamics in (3).
Expressing nonlinear dynamics in a linear framework is appealing because of the wealth of optimal control techniques for linear systems and the ability to analytically predict the future. However, obtaining a finite-dimensional approximation of the Koopman operator is challenging in practice^{39}, relying on intrinsic measurements related to the eigenfunctions of the Koopman operator \({\cal K}\), which may be more difficult to obtain than the solution of the original system (2).
Hankel alternative view of Koopman (HAVOK) analysis
Obtaining linear representations for strongly nonlinear systems has the potential to revolutionize our ability to predict and control these systems. In fact, the linearization of dynamics near fixed points or periodic orbits has long been employed for local linear representation of the dynamics^{1}. The Koopman operator is appealing because it provides a global linear representation, valid far away from fixed points and periodic orbits, although previous attempts to obtain finite-dimensional approximations of the Koopman operator have had limited success. Dynamic mode decomposition (DMD)^{40,41,42,43} seeks to approximate the Koopman operator with a best-fit linear model advancing spatial measurements from one time to the next. However, DMD is based on linear measurements, which are not rich enough for many nonlinear systems. Augmenting DMD with nonlinear measurements may enrich the model^{44}, but there is no guarantee that the resulting models will be closed under the Koopman operator^{39}. Details about these related methods are provided in Supplementary Note 2.
Instead of advancing instantaneous measurements of the state of the system, we obtain intrinsic measurement coordinates based on the time history of the system. This perspective is data-driven, relying on the wealth of information from previous measurements to inform the future. Unlike a linear or weakly nonlinear system, where trajectories may get trapped at fixed points or on periodic orbits, chaotic dynamics are particularly well-suited to this analysis: trajectories evolve to densely fill an attractor, so more data provides more information.
This method is shown in Fig. 1 for the Lorenz system (details are provided in Supplementary Note 3). The conditions of the Takens embedding theorem are satisfied^{9}, so eigen-time-delay coordinates may be obtained from a time series of a single measurement x(t) by taking a singular value decomposition (SVD) of the following Hankel matrix H:
$${\bf{H}} = \left[ {\begin{array}{*{20}{c}} {x({t_1})}&{x({t_2})}& \cdots &{x({t_p})}\\ {x({t_2})}&{x({t_3})}& \cdots &{x({t_{p + 1}})}\\ \vdots & \vdots & \ddots & \vdots \\ {x({t_q})}&{x({t_{q + 1}})}& \cdots &{x({t_m})} \end{array}} \right] = {\bf{U}}{\bf{\Sigma }}{{\bf{V}}^*}.\qquad(4)$$
The columns of U and V from the SVD are arranged hierarchically by their ability to model the columns and rows of H, respectively. Often, H may admit a low-rank approximation by the first r columns of U and V. Note that the Hankel matrix in (4) is the basis of ERA^{32} in linear system identification and of SSA^{33} in climate time series analysis. Interestingly, a connection between the Koopman operator and the Takens embedding was explored as early as 2004^{45}.
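The delay-embedding step above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ released code; the function name `hankel_svd` and the toy sine signal are our own:

```python
import numpy as np

def hankel_svd(x, q):
    """Stack q delayed copies of the scalar series x into a Hankel matrix
    as in Eq. (4) and return its singular value decomposition."""
    m = len(x) - q + 1
    H = np.column_stack([x[j:j + m] for j in range(q)]).T  # q rows, m columns
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U, s, Vt.T  # columns of V are the eigen-time-delay time series

# toy usage: a pure sinusoid embeds as an (essentially) rank-2 Hankel matrix
t = np.linspace(0, 10, 1000)
U, s, V = hankel_svd(np.sin(2 * np.pi * t), q=100)
```

The rapid decay of the singular values `s` is what motivates the low-rank truncation to the first r columns of U and V.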
The low-rank approximation to (4) provides a data-driven measurement system that is approximately invariant to the Koopman operator for states on the attractor. By definition, the dynamics map the attractor onto itself, making it invariant to the flow. We may rewrite (4) with the Koopman operator \({\cal K}\):
$${\bf{H}} = \left[ {\begin{array}{*{20}{c}} {x({t_1})}&{{\cal K}x({t_1})}& \cdots &{{{\cal K}^{p - 1}}x({t_1})}\\ {{\cal K}x({t_1})}&{{{\cal K}^2}x({t_1})}& \cdots &{{{\cal K}^p}x({t_1})}\\ \vdots & \vdots & \ddots & \vdots \\ {{{\cal K}^{q - 1}}x({t_1})}&{{{\cal K}^q}x({t_1})}& \cdots &{{{\cal K}^{m - 1}}x({t_1})} \end{array}} \right].\qquad(5)$$
The columns of (4), and thus (5), are well-approximated by the first r columns of U, so these eigen-time-series provide a Koopman-invariant measurement system. The first r columns of V provide a time series of the magnitude of each of the columns of UΣ in the data. By plotting the first three columns of V, we obtain an embedded attractor for the Lorenz system, shown in Fig. 1e.
The connection between eigen-time-delay coordinates from (4) and the Koopman operator motivates a linear regression model on the variables in V. Even with an approximately Koopman-invariant measurement system, there remain challenges to identifying a linear model for a chaotic system. A linear model, however detailed, cannot capture multiple fixed points or the unpredictable behavior characteristic of chaos with a positive Lyapunov exponent^{39}. Instead of constructing a closed linear model for the first r variables in V, we build a linear model on the first r−1 variables and allow the last variable, v _{ r }, to act as a forcing term:
$$\frac{d}{{dt}}{\bf{v}}(t) = {\bf{A}}{\bf{v}}(t) + {\bf{B}}{v_r}(t).\qquad(6)$$
Here \({\bf{v}} = {\left[ {\begin{array}{*{20}{c}} {{v_1}}&{{v_2}}& \cdots &{{v_{r - 1}}} \end{array}} \right]^T}\) is a vector of the first r−1 eigen-time-delay coordinates. In all of the examples below, the linear model on the first r−1 terms is accurate, while no linear model represents v _{ r }. Instead, v _{ r } is an input forcing to the linear dynamics in (6), which approximate the nonlinear dynamics in (1). The statistics of v _{ r }(t) are non-Gaussian, as seen in Fig. 1h. The long tails correspond to rare-event forcing that drives lobe switching in the Lorenz system; this is related to rare-event forcing observed and modeled by others^{12, 13, 46}. However, the statistics of the forcing alone are insufficient to characterize the switching dynamics, as the timing is crucial. The long-tail forcing comes in high-frequency bursts, which are not captured in the statistics alone. In fact, forcing the system in (6) with other forcing signatures drawn from the same statistics, for example by randomly shuffling the forcing time series, does not result in the same dynamics. Thus, the timing of the forcing is as important as its distribution. In principle, it is also possible to split the variables into r−s high-energy modes for the linear model and s low-energy forcing modes, although this is not explored in the present work. The splitting of dynamics into deterministic linear and chaotic stochastic dynamics was proposed in ref. ^{35}. Here we extend this concept to fully chaotic systems, where the Koopman operator has a continuous spectrum, and develop a robust numerical algorithm for the splitting.
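A regression for the model in (6) can be sketched as follows, assuming the eigen-time-delay coordinates V have already been computed. The central-difference derivative and the helper name `havok_regression` are illustrative choices, not the authors’ exact procedure:

```python
import numpy as np

def havok_regression(V, r, dt):
    """Least-squares fit of (6), d/dt v = A v + B v_r, on the first r
    eigen-time-delay coordinates; derivatives via central differences."""
    v = V[:, :r]
    dv = (v[2:] - v[:-2]) / (2 * dt)                # d/dt of each coordinate
    Xi, *_ = np.linalg.lstsq(v[1:-1], dv, rcond=None)
    Xi = Xi.T                                        # row i: dynamics of v_i
    return Xi[:r - 1, :r - 1], Xi[:r - 1, r - 1:r]   # A, B

# sanity check on a known linear pair: d/dt cos(t) = 0*cos(t) - 1*sin(t)
t = np.arange(0, 2 * np.pi, 1e-3)
A, B = havok_regression(np.column_stack([np.cos(t), np.sin(t)]), r=2, dt=1e-3)
```

The recovered coefficients should be close to A = [0] and B = [−1] for this toy pair, illustrating how the last coordinate enters only as a forcing term.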
The forced linear system in (6) was discovered after applying the sparse identification of nonlinear dynamics (SINDy)^{19} algorithm to delay coordinates of the Lorenz system. Even when allowing for the possibility of nonlinear dynamics in v, the most parsimonious model is linear (shown in Fig. 2). This strongly suggests a connection with the Koopman operator, motivating the present work. The last variable, v _{ r }, is not accurately represented by either linear or polynomial nonlinear models^{19}, as shown in Supplementary Fig. 18.
The structure of the HAVOK model for the Lorenz system is shown in Fig. 2. There is a dominant skew-symmetric structure in the A matrix, and the entries are nearly integer valued. In Supplementary Note 4, we demonstrate that the dynamics of a nearby model with exact integer entries qualitatively match the dynamics of the Lorenz model, including the lobe switching events. This off-diagonal structure and near-integrability are the subject of current investigation by colleagues. It was argued in ref. ^{35} that on an example deterministic chaotic system, there is a random dynamical system representation that has the same spectrum and may be used for long-term prediction. The Lorenz system is mixing and does not have a simple spectrum^{47}, although it appears that there are functions in the pseudospectrum that are nearly eigenfunctions of the Koopman operator. Indeed, in the system in ref. ^{35}, the Koopman representation has a similar off-diagonal structure to the Lorenz example here.
HAVOK analysis and prediction in the Lorenz system
In the case of the Lorenz system, the long tails in the statistics of the forcing signal v _{ r }(t) correspond to bursting behavior that precedes lobe switching events. It is possible to directly test the power of the forcing signature v _{ r }(t) to predict lobe switching in the Lorenz system. First, a HAVOK model is trained using data from 200 time units of a trajectory; this results in the basis U and the model matrices A and B. Next, the prediction of lobe switching is tested on a new validation (test) trajectory consisting of the next 1,000 time units (i.e., time t = 200 to t = 1200). Figure 3 shows 20 time units of this test trajectory. Regions where the forcing term v _{ r } is active are isolated as those where the magnitude of v _{ r } exceeds a threshold value; in this case, we choose r = 11 and a threshold of 0.002. These regions are colored red in Fig. 3 for v _{1} and v _{ r }. The remaining portions of the trajectory, where the forcing is small, are colored dark gray. It is clear by eye that the activity of the forcing precedes lobe switching by nearly one period. During the 1,000 time units of test data there are 605 lobe switching events, of which the HAVOK model correctly identifies 604, for an accuracy of 99.83%. There are likewise 2,047 lobe orbits that do not precede lobe switching, and the HAVOK model identifies 54 false positives, a rate of 2.64%. Note that in this example, both v _{1}(t) and v _{ r }(t) are computed directly from the time series using U, and are not simulated using the dynamic model. Computing v _{ r } using U introduces a short delay of qΔt = 0.1 time units; however, forcing activity precedes lobe switching by considerably more than 0.1 time units, so it remains predictive.
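The thresholding step described here might look like the following sketch; the function `flag_forcing` and its optional `pad` argument are our own illustrative names, not part of the authors’ code:

```python
import numpy as np

def flag_forcing(vr, threshold=0.002, pad=0):
    """Mark samples where the HAVOK forcing is active, i.e., |v_r| exceeds
    a threshold; optionally dilate the mask to merge nearby bursts."""
    active = np.abs(vr) > threshold
    if pad:
        kernel = np.ones(2 * pad + 1)
        active = np.convolve(active.astype(float), kernel, mode="same") > 0
    return active
```

Applied to the computed v _{ r }(t), the resulting boolean mask corresponds to the red (active) and gray (quiescent) regions of Fig. 3.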
It is important to note that when the forcing term is small, corresponding to the gray portions of the trajectory, the dynamics are largely governed by the linear model. Thus, the forcing term in effect distills the essential nonlinearity of the system, indicating when the dynamics are about to switch lobes of the attractor. The same trajectories are plotted in three dimensions in Fig. 4a, where it can be seen that the nonlinear forcing is active precisely when the trajectory is on the outer portion of the attractor lobes. A single lobe switching event is shown in Fig. 4b, illustrating the geometry of the trajectories.
Figure 5 shows that the dynamic HAVOK model in (6) generalizes to predict behavior in test data that was not used to train the model. In this figure, a HAVOK model of order r = 15 is trained on data from t = 0 to t = 50, and then simulated on test data from t = 50 to t = 100. The model captures the main features and lobe transitions, although small errors gradually accumulate over long times. This model prediction must be run online, as it requires access to the forcing signature v _{ r }, which may be obtained by multiplying a sliding window of v(t) with the basis U.
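A crude way to run such a forced simulation, assuming A, B, and the measured forcing samples of v _{ r } are in hand, is a forward-Euler step. The integrator used for Fig. 5 is not specified here, so this is only a sketch:

```python
import numpy as np

def simulate_havok(A, B, vr, v0, dt):
    """Advance the forced linear system d/dt v = A v + B v_r with a
    forward-Euler step, feeding in the measured forcing samples vr."""
    v = np.empty((len(vr), len(v0)))
    v[0] = v0
    for k in range(len(vr) - 1):
        v[k + 1] = v[k] + dt * (A @ v[k] + B.ravel() * vr[k])
    return v
```

A higher-order scheme (or exact zero-order-hold discretization of A) would be preferable for long horizons, since Euler error compounds with the model error discussed above.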
Connection to almost-invariant sets and Perron–Frobenius
The Koopman operator is the dual, or left adjoint, of the Perron–Frobenius operator, which is also called the transfer operator on the space of probability densities. Thus, Koopman analysis is typically concerned with measurements from a single trajectory, while Perron–Frobenius analysis is concerned with an ensemble of trajectories. Because of the close relationship of the two operators, it is interesting to compare the HAVOK analysis with the almost-invariant sets from the Perron–Frobenius operator. Almost-invariant sets represent dynamically isolated phase space regions, in which the trajectory resides for a long time. These sets are almost invariant under the action of the dynamics and are related to dominant eigenvalues and eigenfunctions of the Perron–Frobenius operator. They can be numerically determined from its finite-rank approximation by discretizing the phase space into small boxes and computing a large, but sparse, transition probability matrix describing how initial conditions in the various boxes flow to other boxes in a fixed amount of time; for this analysis, we use the same q = 100 for the length of the U vectors as in the HAVOK analysis. Following the approach proposed in ref. ^{48}, almost-invariant sets can then be estimated by computing the associated reversible transition matrix and level-set thresholding its right eigenvectors.
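The box-discretization step can be illustrated for a one-dimensional series with a crude Ulam-type estimate. This is a toy sketch only: ref. ^{48}'s method operates on the full phase space and adds a reversibilization step not shown here, and the function name `ulam_matrix` is ours:

```python
import numpy as np

def ulam_matrix(traj, n_bins, tau):
    """Ulam-type finite-rank estimate of the transfer operator for a 1-D
    series: bin the range into boxes and count transitions after tau steps."""
    edges = np.linspace(traj.min(), traj.max() + 1e-12, n_bins + 1)
    idx = np.digitize(traj, edges) - 1        # box index of each sample
    P = np.zeros((n_bins, n_bins))
    for i, j in zip(idx[:-tau], idx[tau:]):   # count box-to-box transitions
        P[i, j] += 1
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)
```

Each visited row of the returned matrix is a probability distribution; almost-invariant sets then correspond to near-block structure revealed by the leading eigenvectors.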
The almost-invariant sets of the Perron–Frobenius operator are shown in Fig. 6 for the Lorenz system. There are two sets, each corresponding to the near basin of one attractor lobe as well as the outer basin of the opposing attractor lobe and the bundle of trajectories that connect them. These two almost-invariant sets dovetail to form the complete Lorenz attractor. Underneath the almost-invariant sets, the Lorenz attractor is colored by the thresholded magnitude of the nonlinear forcing term in the HAVOK model, which partitions the attractor into two sets corresponding to regions where the flow is approximately linear (inner black region) and where the flow is strongly nonlinear (outer red region). The boundaries of the almost-invariant sets of the Perron–Frobenius operator closely match the boundaries from the HAVOK analysis.
Demonstration on examples
The HAVOK analysis is applied to analytic and real-world systems in Fig. 7. More details about each of these systems are presented in Supplementary Note 6, and code for every example is publicly available. The examples span a wide range of systems, including canonical chaotic dynamical systems, such as the Lorenz and Rössler systems and the double pendulum, which are among the simplest systems that exhibit chaotic motion. As a more realistic example, we consider a stochastically driven simulation of the Earth’s magnetic field reversal^{49}, where complex magnetohydrodynamic equations are modeled as a dynamo driven by turbulent fluctuations. In this case, the exact form of the attractor is not captured by the linear model, although the attractor switching, corresponding to magnetic field reversal, is preserved. In the final three examples, we explore the method on data collected from an electrocardiogram (ECG), an electroencephalogram (EEG), and recorded measles cases in New York City over a 36-year time span from 1928 to 1964; sources for all data are provided in Supplementary Note 6.
In each example, the qualitative attractor dynamics are captured, and large transients and intermittent phenomena are highly correlated with the intermittent forcing in the model. These large transients and intermittent events correspond to coherent regions in phase space where the forcing is large (right column of Fig. 7, red). Regions where the forcing is small (black) are well-modeled by a Koopman linear system in delay coordinates. Large forcing often precedes intermittent events (lobe switching for the Lorenz system and magnetic field reversal, or bursting measles outbreaks), making this signal strongly correlated and potentially predictive. However, caution must be taken when using time-delay coordinates in streaming or real-time applications, as the HAVOK forcing signature will be delayed by qΔt. In the case of the Lorenz system, the HAVOK forcing predicts lobe switching by about 1 time unit, while qΔt = 0.1; thus, the prediction still precedes the lobe switching. It is important to note that every model identified and presented here is either neutrally or asymptotically stable. Although we are not aware of theoretical guarantees that data-driven methods like HAVOK will remain stable, it is intuitive that if we sample enough data from a chaotic attractor, the eigenvalues of the models should converge to the unit circle (in discrete time). In practice, it is certainly possible to obtain unstable models, although this is usually preventable by careful choice of the model order r, as discussed above. For example, if the choice of r is too large^{50}, the model overfits to noise and is thus prone to instability. In general, sparse regression can have a stabilizing effect by penalizing model terms that are not necessary, preventing overfitting that can lead to instability. In practice, it may also be helpful to add a small amount of numerical diffusion to stabilize models.
Discussion
In summary, we have presented a data-driven procedure, the HAVOK analysis, to identify an intermittently forced linear system representation of chaos. This procedure is based on machine learning regression, Takens’ embedding, and Koopman theory. In practice, HAVOK first applies DMD or sparse regression (SINDy) to delay coordinates, followed by a splitting of variables to handle strong nonlinearities as intermittent forcing; applying DMD to delay coordinates has already been explored in the context of rank-deficient data^{42, 43, 51}. The activity of the forcing signal in the Lorenz model is shown to predict lobe switching, and it partitions phase space into coherent linear and nonlinear regions. In the other examples, the forcing signal is correlated with intermittent transient events, such as switching and bursting, and may be predictive.
There are many interesting directions to investigate related to this work. Understanding the skew-symmetric structure of the HAVOK model and the near-integrability of chaotic systems is a topic of ongoing research. Moreover, a detailed mathematical understanding of chaotic systems with continuous spectra will also improve the interpretation of this work. Because the method is data-driven, there are open questions related to the required quantity and quality of data and the resulting model performance. There are also interesting relationships between the number of delays included in the Hankel matrix and the geometry of the resulting embedded attractor. Finally, the use of HAVOK analysis for real-time prediction, estimation, and control is the subject of ongoing work by the authors.
The search for intrinsic or natural measurement coordinates is of central importance in finding simple representations of complex systems, and this will only become more important with growing data. Specifically, intrinsic measurement coordinates can benefit other theoretical and applied work involving Koopman theory^{44, 52,53,54,55,56} and related topics^{57,58,59,60,61}. Simple, linear representations of complex systems are a long-sought goal, providing hope for a general theory of nonlinear estimation, prediction, and control. This analysis will hopefully motivate novel strategies to measure, understand, and control^{62} chaotic systems in a variety of scientific and engineering applications.
Methods
Choice of model parameters
In practice, there are a number of important considerations when applying HAVOK analysis. Heuristically, there are two main choices that are important in every example: first, choosing the time step and the number of rows, q, in the Hankel matrix to obtain a suitable delay embedding basis U, and second, choosing the truncation rank r, which determines the model order r−1. For the first choice, it has been observed that models are more accurate and predictive when the basis U resembles polynomials of increasing order, as shown in Fig. 5b or in Supplementary Fig. 11. Decreasing Δt can improve the basis U up to a point, beyond which further decreases have little effect. Similarly, there is a relatively broad range of q values that admit a polynomial basis for U, and such a value is chosen in every example. As seen in Supplementary Table 3, for the numerical examples where time is nondimensionalized, the product qΔt (i.e., the time window considered in the row direction) is equal to 0.1 time units. For the second choice, there are many important factors to consider when selecting the model order r. These factors are explored in detail for the Lorenz system in Supplementary Figs 16 and 17 in Supplementary Note 5, and they are summarized here: model accuracy on the training data and ideally on a hold-out data set not used for training; clear distillation of a forcing signature that is active during important intermittent events and quiescent otherwise; the signal-to-noise ratio of the data; prediction of intermittent events; and the desired amount of structure in the resulting linear model. For the Lorenz example, we choose r = 15 for Fig. 2, because this is the highest order attainable before numerical round-off corrupts the model. In this example, higher model order elucidates more structure in the sparse linear model shown in Fig. 2. However, the correlation of the forcing signature with intermittent events is relatively insensitive to model order, and we use a model with order r = 11 for prediction in Figs 3 and 4.
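One common starting point for choosing the truncation rank, in the spirit of ref. ^{50}, is the optimal hard threshold for singular values. The sketch below uses the fitted unknown-noise approximation ω(β) ≈ 0.56β³ − 0.95β² + 1.82β + 1.43 from that paper; the helper name `hard_threshold_rank` is ours, and this heuristic is a complement to, not a substitute for, the model-quality criteria listed above:

```python
import numpy as np

def hard_threshold_rank(s, beta):
    """Rank suggested by the Gavish-Donoho optimal hard threshold (ref. 50),
    using their fitted unknown-noise approximation omega(beta); beta is the
    matrix aspect ratio (rows/cols <= 1) and s its singular values."""
    omega = 0.56 * beta**3 - 0.95 * beta**2 + 1.82 * beta + 1.43
    return int(np.sum(s > omega * np.median(s)))
```

For a singular value spectrum with a clear noise floor, the returned rank separates the dominant eigen-time-delay coordinates from those dominated by noise.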
Data availability
All data supporting the findings are available within the article and its Supplementary Information, or are available from the authors upon request. In addition, all code used in this study is available at: http://faculty.washington.edu/sbrunton/HAVOK.zip.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Guckenheimer, J. & Holmes, P. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Vol. 42 of Applied Mathematical Sciences (Springer, 1983).
2. Poincaré, H. Sur le problème des trois corps et les équations de la dynamique. Acta Math. 13, A3–A270 (1890).
3. Lorenz, E. N. Deterministic nonperiodic flow. J. Atmos. Sci. 20, 130–141 (1963).
4. Bjørnstad, O. N. & Grenfell, B. T. Noisy clockwork: time series analysis of population fluctuations in animals. Science 293, 638–643 (2001).
5. Sugihara, G. et al. Detecting causality in complex ecosystems. Science 338, 496–500 (2012).
6. Ye, H. et al. Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling. Proc. Natl Acad. Sci. 112, E1569–E1576 (2015).
7. Sugihara, G. & May, R. M. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series. Nature 344, 734–741 (1990).
8. Kolmogorov, A. The local structure of turbulence in incompressible viscous fluid for very large Reynolds number. Dokl. Akad. Nauk SSSR 30, 9–13 (1941); translated and reprinted 1991 in Proc. R. Soc. A 434, 9–13.
9. Takens, F. Detecting strange attractors in turbulence. Lect. Notes Math. 898, 366–381 (1981).
10. Tsonis, A. A. & Elsner, J. B. Nonlinear prediction as a way of distinguishing chaos from random fractal sequences. Nature 358, 217–220 (1992).
11. Crutchfield, J. P. Between order and chaos. Nat. Phys. 8, 17–24 (2012).
12. Sapsis, T. P. & Majda, A. J. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems. Proc. Natl Acad. Sci. 110, 13705–13710 (2013).
13. Majda, A. J. & Lee, Y. Conceptual dynamical models for turbulence. Proc. Natl Acad. Sci. 111, 6548–6553 (2014).
14. Brunton, S. L. & Noack, B. R. Closed-loop turbulence control: progress and challenges. Appl. Mech. Rev. 67, 050801 (2015).
15. Parish, E. J. & Duraisamy, K. Non-local closure models for large eddy simulations using the Mori-Zwanzig formalism. Preprint at https://arxiv.org/abs/1611.03311 (2016).
16. Duriez, T., Brunton, S. L. & Noack, B. R. Machine Learning Control: Taming Nonlinear Dynamics and Turbulence (Springer, 2016).
17. Bongard, J. & Lipson, H. Automated reverse engineering of nonlinear dynamical systems. Proc. Natl Acad. Sci. 104, 9943–9948 (2007).
18. Schmidt, M. & Lipson, H. Distilling free-form natural laws from experimental data. Science 324, 81–85 (2009).
19. Brunton, S. L., Proctor, J. L. & Kutz, J. N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl Acad. Sci. 113, 3932–3937 (2016).
20. Mangan, N. M., Brunton, S. L., Proctor, J. L. & Kutz, J. N. Inferring biological networks by sparse identification of nonlinear dynamics. IEEE Trans. Mol. Biol. Multi-Scale Commun. 2, 52–63 (2016).
21. Tran, G. & Ward, R. Exact recovery of chaotic systems from highly corrupted data. Preprint at https://arxiv.org/abs/1607.01067 (2016).
22. Loiseau, J.-C. & Brunton, S. L. Constrained sparse Galerkin regression. Preprint at https://arxiv.org/abs/1611.03271 (2016).
23. Quade, M., Abel, M., Shafi, K., Niven, R. K. & Noack, B. R. Prediction of dynamical systems by symbolic regression. Phys. Rev. E 94, 012214 (2016).
24. Schaeffer, H. Learning partial differential equations via data discovery and sparse optimization. Proc. R. Soc. A 473, 20160446 (2017).
25. Rudy, S. H., Brunton, S. L., Proctor, J. L. & Kutz, J. N. Data-driven discovery of partial differential equations. Sci. Adv. 3, e1602614 (2017).
26. Raissi, M. & Karniadakis, G. E. Machine learning of linear differential equations using Gaussian processes. Preprint at https://arxiv.org/abs/1701.02440 (2017).
27. Mangan, N. M., Kutz, J. N., Brunton, S. L. & Proctor, J. L. Model selection for dynamical systems via sparse regression and information criteria. Preprint at https://arxiv.org/abs/1701.01773 (2017).
28. Farmer, J. D. & Sidorowich, J. J. Predicting chaotic time series. Phys. Rev. Lett. 59, 845 (1987).
29. Crutchfield, J. P. & McNamara, B. S. Equations of motion from a data series. Complex Syst. 1, 417–452 (1987).
30. Rowlands, G. & Sprott, J. C. Extraction of dynamical equations from chaotic data. Phys. D 58, 251–259 (1992).
31. Abarbanel, H. D. I., Brown, R., Sidorowich, J. J. & Tsimring, L. S. The analysis of observed chaotic data in physical systems. Rev. Mod. Phys. 65, 1331 (1993).
32. Juang, J. N. & Pappa, R. S. An eigensystem realization algorithm for modal parameter identification and model reduction. J. Guid. Control Dyn. 8, 620–627 (1985).
33. Broomhead, D. S. & Jones, R. Time-series analysis. Proc. R. Soc. A 423, 103–121 (1989).
34. Giannakis, D. & Majda, A. J. Nonlinear Laplacian spectral analysis for time series with intermittency and low-frequency variability. Proc. Natl Acad. Sci. 109, 2222–2227 (2012).
35. Mezić, I. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dyn. 41, 309–325 (2005).
36. Mezić, I. Analysis of fluid flows via spectral properties of the Koopman operator. Annu. Rev. Fluid Mech. 45, 357–378 (2013).
37. Giannakis, D. Data-driven spectral decomposition and forecasting of ergodic dynamical systems. Preprint at https://arxiv.org/abs/1507.02338 (2015).
38. Koopman, B. O. Hamiltonian systems and transformation in Hilbert space. Proc. Natl Acad. Sci. 17, 315–318 (1931).
39. Brunton, S. L., Brunton, B. W., Proctor, J. L. & Kutz, J. N. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLoS ONE 11, e0150171 (2016).
40. Schmid, P. J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5–28 (2010).
41. Rowley, C. W., Mezić, I., Bagheri, S., Schlatter, P. & Henningson, D. S. Spectral analysis of nonlinear flows. J. Fluid Mech. 641, 115–127 (2009).
42. Tu, J. H., Rowley, C. W., Luchtenburg, D. M., Brunton, S. L. & Kutz, J. N. On dynamic mode decomposition: theory and applications. J. Comput. Dyn. 1, 391–421 (2014).
43. Kutz, J. N., Brunton, S. L., Brunton, B. W. & Proctor, J. L. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems (SIAM, 2016).
44. Williams, M. O., Kevrekidis, I. G. & Rowley, C. W. A data-driven approximation of the Koopman operator: extending dynamic mode decomposition. J. Nonlinear Sci. 25, 1307–1346 (2015).
45. Mezić, I. & Banaszuk, A. Comparison of systems with complex behavior. Phys. D 197, 101–133 (2004).
46. Majda, A. J. & Harlim, J. Physics constrained nonlinear regression models for time series. Nonlinearity 26, 201 (2012).
47. Luzzatto, S., Melbourne, I. & Paccaut, F. The Lorenz attractor is mixing. Commun. Math. Phys. 260, 393–401 (2005).
48. Froyland, G. Statistically optimal almost-invariant sets. Phys. D 200, 205–219 (2005).
49. Pétrélis, F., Fauve, S., Dormy, E. & Valet, J.-P. Simple mechanism for reversals of Earth’s magnetic field. Phys. Rev. Lett. 102, 144503 (2009).
50. Gavish, M. & Donoho, D. L. The optimal hard threshold for singular values is 4/√3. IEEE Trans. Inf. Theory 60, 5040–5053 (2014).
51. Brunton, B. W., Johnson, L. A., Ojemann, J. G. & Kutz, J. N. Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. J. Neurosci. Methods 258, 1–15 (2016).
52. Budišić, M., Mohr, R. & Mezić, I. Applied Koopmanism. Chaos 22, 047510 (2012).
53. Lan, Y. & Mezić, I. Linearization in the large of nonlinear systems and Koopman operator spectrum. Phys. D 242, 42–53 (2013).
54. Bagheri, S. Koopman-mode decomposition of the cylinder wake. J. Fluid Mech. 726, 596–623 (2013).
55. Surana, A. Koopman operator based observer synthesis for control-affine nonlinear systems. In 2016 IEEE 55th Conference on Decision and Control (CDC) 6492–6499 (IEEE, 2016).
 56.
Surana, A. & Banaszuk, A. Linear observer synthesis for nonlinear systems using Koopman operator framework. IFACPapersOnLine 49, 716–723 (2016).
 57.
Dellnitz, M., Froyland, G. & Junge, O. in Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems (ed. Fielder, B.) 145–174 (Springer, 2001).
 58.
Froyland, G. & Padberg, K. Almostinvariant sets and invariant manifolds – connecting probabilistic and geometric descriptions of coherent structures in flows. Phys. D. 238, 1507–1523 (2009).
 59.
Froyland, G., Gottwald, G. A. & Hammerlindl, A. A computational method to extract macroscopic variables and their dynamics in multiscale systems. SIAM J. Appl. Dynam. Sys. 13, 1816–1846 (2014).
 60.
Kaiser, E. et al Clusterbased reducedorder modelling of a mixing layer. J. Fluid Mech. 754, 365–414 (2014).
 61.
Gouasmi, A., Parish, E. & Duraisamy, K. Characterizing memory effects in coarsegrained nonlinear systems using the morizwanzig formalism. Preprint at https://arxiv.org/abs/1611.06277 (2016).
 62.
Shinbrot, T., Grebogi, C., Ott, E. & Yorke, J. A. Using small perturbations to control chaos. Nature 363, 411–417 (1993).
Acknowledgements
We acknowledge fruitful discussions with Dimitris Giannakis and Igor Mezić. S.L.B. and J.N.K. acknowledge support from the Defense Advanced Research Projects Agency (DARPA HR0011-16-C-0016). B.W.B. acknowledges support from the Washington Research Foundation. E.K. acknowledges support from the Moore/Sloan and WRF Data Science Fellowship in the eScience Institute. J.L.P. would like to thank Bill and Melinda Gates for their active support of the Institute for Disease Modeling and their sponsorship through the Global Good Fund.
Author information
Affiliations
Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA: Steven L. Brunton & Eurika Kaiser
Department of Biology, University of Washington, Seattle, WA 98195, USA: Bingni W. Brunton
Institute for Disease Modeling, Bellevue, WA 98004, USA: Joshua L. Proctor
Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA: J. Nathan Kutz
Contributions
S.L.B. designed and performed research and analyzed results; all authors were involved in discussions to interpret results related to Koopman theory, machine learning, and prediction; B.W.B. helped with interpretation and analysis of sleep EEG data, and J.L.P. helped with interpretation and analysis of measles data; E.K. performed the Perron-Frobenius analysis to compute almost-invariant sets; S.L.B. and J.N.K. received funds to support this work; S.L.B. wrote the paper, and all authors helped review and edit.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Steven L. Brunton.
Electronic supplementary material
Supplementary Information
Supplementary Notes, Supplementary Figures, Supplementary Tables and Supplementary References
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.