Introduction

Nonlinearity is a hallmark feature of complex systems, giving rise to a rich diversity of observed dynamical behaviors across the physical, biological, and engineering sciences1,2. Although nonlinear systems are often computationally tractable to simulate, there exists no general mathematical framework for solving them analytically. Thus, representing nonlinear dynamics in a linear framework is particularly appealing because of powerful and comprehensive techniques for the analysis and control of linear systems3, which do not readily generalize to nonlinear systems. Koopman operator theory, developed in 19314,5, has recently emerged as a leading candidate for the systematic linear representation of nonlinear systems6,7. This renewed interest in Koopman analysis has been driven by a combination of theoretical advances6,7,8,9,10, improved numerical methods such as dynamic mode decomposition (DMD)11,12,13, and an increasing abundance of data. Eigenfunctions of the Koopman operator are now widely sought, as they provide intrinsic coordinates that globally linearize nonlinear dynamics. Despite the immense promise of Koopman embeddings, obtaining representations has proven difficult in all but the simplest systems, and representations are often intractably complex or are the output of uninterpretable black-box optimizations. In this work, we utilize the power of deep learning for flexible and general representations of the Koopman operator, while enforcing a network structure that promotes parsimony and interpretability of the resulting models.

Neural networks (NNs), which form the theoretical architecture of deep learning, were inspired by the primary visual cortex of cats where neurons are organized in hierarchical layers of cells to process visual stimuli14. The first mathematical model of a NN was the neocognitron15, which has many of the features of modern deep neural networks (DNNs), including a multi-layer structure, convolution, max pooling, and nonlinear dynamical nodes. Importantly, the universal approximation theorem16,17,18 guarantees that a NN with sufficiently many hidden units and a linear output layer is capable of representing any arbitrary function, including our desired Koopman eigenfunctions. Although NNs have a four-decade history, the analysis of the ImageNet data set19, containing over 15 million labeled images in 22,000 categories, provided a watershed moment20. Indeed, powered by the rise of big data and increased computational power, deep learning is resulting in transformative progress in many data-driven classification and identification tasks19,20,21. A strength of deep learning is that features of the data are built hierarchically, which enables the representation of complex functions. Thus, deep learning can accurately fit functions without hand-designed features or the user choosing a good basis21. However, a current challenge in deep learning research is the identification of parsimonious, interpretable, and transferable models22.

Deep learning has the potential to enable a scalable and data-driven architecture for the discovery and representation of Koopman eigenfunctions, providing intrinsic linear representations of strongly nonlinear systems. This approach alleviates two key challenges in modern dynamical systems: (1) equations are often unknown for systems of interest23,24,25, as in climate, neuroscience, epidemiology, and finance; and (2) low-dimensional dynamics are typically embedded in a high-dimensional state space, requiring scalable architectures that discover dynamics on latent variables. Although NNs have also been used to model dynamical systems26 and other physical processes27 for decades, great strides have been made recently in using DNNs to learn Koopman embeddings, resulting in several excellent papers28,29,30,31,32,33. For example, the VAMPnet architecture28,29 uses a time-lagged auto-encoder and a custom variational score to identify Koopman coordinates on an impressive protein folding example. In all of these recent studies, DNN representations have been shown to be more flexible and exhibit higher accuracy than other leading methods on challenging problems.

The focus of this work is on developing DNN representations of Koopman eigenfunctions that remain interpretable and parsimonious, even for high-dimensional and strongly nonlinear systems. Our approach (see Fig. 1) differs from previous studies, as we are focused specifically on obtaining parsimonious models that match the intrinsic low-rank dynamics while avoiding overfitting and remaining interpretable, thus merging the best of DNN architectures and Koopman theory. In particular, many dynamical systems exhibit a continuous eigenvalue spectrum, which confounds low-dimensional representation using existing DNN or Koopman representations. This work develops a generalized framework and enforces new constraints specifically designed to extract the fewest meaningful eigenfunctions in an interpretable manner. For systems with continuous spectra, we utilize an augmented network to parameterize the linear dynamics on the intrinsic coordinates, avoiding an infinite asymptotic expansion in harmonic eigenfunctions. Thus, the resulting networks remain parsimonious, and the few key eigenfunctions are interpretable. We demonstrate our deep learning approach to Koopman on several examples designed to illustrate the strength of the method, while remaining intuitive in terms of classic dynamical systems.

Fig. 1

Diagram of our deep learning schema to identify Koopman eigenfunctions φ(x). a Our network is based on a deep auto-encoder, which is able to identify intrinsic coordinates y = φ(x) and decode these coordinates to recover x = φ−1(y). b, c We add an additional loss function to identify a linear Koopman model K that advances the intrinsic variables y forward in time. In practice, we enforce agreement with the trajectory data for several iterations through the dynamics, i.e., K^m. In b, the loss function is evaluated on the state variable x and in c it is evaluated on y

Results

Data-driven dynamical systems

To give context to our deep learning approach to identify Koopman eigenfunctions, we first summarize highlights and challenges in the data-driven discovery of dynamics. Throughout this work, we will consider discrete-time dynamical systems

$${\mathbf{x}}_{k + 1} = {\mathbf{F}}({\mathbf{x}}_k),$$
(1)

where \({\mathbf{x}} \in {\Bbb R}^n\) is the state of the system and F represents the dynamics that map the state of the system forward in time. Discrete-time dynamics often describe a continuous-time system that is sampled discretely in time, so that xk = x(kΔt) with sampling time Δt. The dynamics in F are generally nonlinear, and the state x may be high dimensional, although we typically assume that the dynamics evolve on a low-dimensional attractor governed by persistent coherent structures in the state space2. Note that F is often unknown and only measurements of the dynamics are available.

The dominant geometric perspective of dynamical systems, in the tradition of Poincaré, concerns the organization of trajectories of Eq. (1), including fixed points, periodic orbits, and attractors. Formulating the dynamics as a system of differential equations in x often admits compact and efficient representations for many natural systems25; for example, Newton's second law is naturally expressed by Eq. (1). However, the solution to these dynamics may be arbitrarily complicated, and possibly even irrepresentable, except for special classes of systems. Linear dynamics, where the map F is a matrix that advances the state x, are among the few systems that admit a universal solution, in terms of the eigenvalues and eigenvectors of the matrix F, also known as the spectral expansion.
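To make the spectral expansion concrete, the short NumPy sketch below (with an arbitrary illustrative matrix, not taken from this work) advances a linear discrete-time system both by repeated matrix multiplication and through its eigendecomposition:

```python
# A minimal NumPy sketch (arbitrary illustrative matrix, not from this work) of the spectral
# expansion for linear discrete-time dynamics x_{k+1} = F x_k: with F = V diag(lambda) V^{-1},
# the state after k steps is x_k = V diag(lambda^k) V^{-1} x_0.
import numpy as np

F = np.array([[0.9, 0.2],
              [0.0, 0.5]])                    # a stable linear map (illustrative values)
x0 = np.array([1.0, 1.0])

evals, V = np.linalg.eig(F)                   # eigenvalues and eigenvectors of F
b = np.linalg.solve(V, x0)                    # coordinates of x0 in the eigenvector basis

k = 10
x_spectral = (V * evals**k) @ b               # V diag(lambda^k) V^{-1} x0
x_direct = np.linalg.matrix_power(F, k) @ x0  # repeated application of F

print(np.allclose(x_spectral, x_direct))      # True: both propagations agree
```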

Koopman operator theory

In 1931, B.O. Koopman provided an alternative description of dynamical systems in terms of the evolution of functions in the Hilbert space of possible measurements y = g(x) of the state4. The so-called Koopman operator, \({\cal K}\), that advances measurement functions is an infinite-dimensional linear operator:

$${\cal K}{\mathbf{g}} \buildrel \Delta \over = {\mathbf{g}} \circ {\mathbf{F}}\quad \Rightarrow \quad {\cal K}{\mathbf{g}}({\mathbf{x}}_k) = {\mathbf{g}}({\mathbf{x}}_{k + 1}).$$
(2)

Koopman analysis has gained significant attention recently with the pioneering work of Mezic et al.6,7,8,9,10, and in response to the growing wealth of measurement data and the lack of known equations for many systems13,25. Representing nonlinear dynamics in a linear framework, via the Koopman operator, has the potential to enable advanced nonlinear prediction, estimation, and control using the comprehensive theory developed for linear systems. However, obtaining finite-dimensional approximations of the infinite-dimensional Koopman operator has proven challenging in practical applications.

Finite-dimensional representations of the Koopman operator are often approximated using DMD12,13, introduced by Schmid11. By construction, DMD identifies spatio-temporal coherent structures from a high-dimensional dynamical system, although it does not generally capture nonlinear transients since it is based on linear measurements of the system, g(x) = x. Extended DMD (eDMD) enriches the model with nonlinear measurements33,37. It has recently been shown that eDMD is equivalent to the variational approach of conformation dynamics (VAC)34,35,36, first derived by Noé and Nüske in 2013 to simulate molecular dynamics with a broad separation of timescales. Further connections between eDMD and VAC and between DMD and time-lagged independent component analysis (TICA) are explored in a recent review38. A key contribution of VAC is a variational score enabling the objective assessment of Koopman models via cross-validation. Recently, eDMD has been demonstrated to improve model predictive control performance in nonlinear systems39.

Identifying regression models based on nonlinear measurements will generally result in closure issues, as there is no guarantee that these measurements form a Koopman invariant subspace40. The resulting models are of exceedingly high dimension, and when kernel methods are employed41, the models may become uninterpretable. Instead, many approaches seek to identify eigenfunctions of the Koopman operator directly, satisfying:

$$\varphi ({\mathbf{x}}_{k + 1}) = {\cal K}\varphi ({\mathbf{x}}_k) = \lambda \varphi ({\mathbf{x}}_k).$$
(3)

Eigenfunctions are guaranteed to span an invariant subspace, and the Koopman operator will yield a matrix when restricted to this subspace40,42. In practice, Koopman eigenfunctions may be more difficult to obtain than the solution of Eq. (1); however, this is a one-time up-front cost that yields a compact linear description. The challenge of identifying and representing Koopman eigenfunctions provides strong motivation for the use of powerful emerging deep learning methods28,29,30,31,32,33.
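As a concrete illustration of Eq. (3), the sketch below (an illustrative construction for a linear map, not the learned eigenfunctions of this work) verifies that a linear observable built from a left eigenvector of the dynamics matrix satisfies the eigenfunction relation exactly:

```python
# An illustrative sketch of Eq. (3) for a linear map x_{k+1} = A x_k (not the learned
# eigenfunctions of this work): phi(x) = w . x is a Koopman eigenfunction with eigenvalue
# lambda whenever w is a left eigenvector of A, i.e., w^T A = lambda w^T.
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.7]])                    # arbitrary linear dynamics
evals, W = np.linalg.eig(A.T)                 # left eigenvectors of A are eigenvectors of A^T
lam, w = evals[0], W[:, 0]

def phi(x):                                   # candidate Koopman eigenfunction
    return w @ x

x_k = np.array([1.0, -2.0])
x_k1 = A @ x_k
print(np.isclose(phi(x_k1), lam * phi(x_k)))  # True: phi(x_{k+1}) = lambda * phi(x_k)
```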

Koopman for systems with continuous spectra

The Koopman operator provides a global linearization of the dynamics. The concept of linearizing dynamics is not new, and locally linear representations are commonly obtained by linearizing around fixed points and periodic orbits1. Indeed, asymptotic and perturbation methods have been widely used since the time of Newton to approximate solutions of nonlinear problems by starting from the exact solution of a related, typically linear problem. The classic pendulum, for instance, satisfies the differential equation \(\ddot x = - {\mathrm{sin}}(\omega x)\) and has eluded an analytic solution since its mathematical inception. The linear problem associated with the pendulum involves the small angle approximation whereby \({\mathrm{sin}}(\omega x) = \omega x - (\omega x)^3/3! + \cdots\), and only the first term is retained in order to yield exact sinusoidal solutions. The next correction involving the cubic term gives the Duffing equation, which is one of the most commonly studied nonlinear oscillators in physics1. Importantly, the cubic contribution is known to shift the linear oscillation frequency of the pendulum, ω → ω + Δω, as well as generate harmonics such as exp(±3iωt)43,44. An exact representation of the solution can be derived in terms of Jacobi elliptic functions, which have a Fourier series representation in terms of an infinite sum of sinusoids with frequencies (2n − 1)ω, where n = 1,2,…,∞. Thus, the simple pendulum oscillates at the (linear) natural frequency ω for small deflections, and as the pendulum energy is increased, the frequency decreases continuously, resulting in a so-called continuous spectrum.
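This frequency shift can be computed explicitly. The sketch below (assuming the nondimensional pendulum \(\ddot x = -{\mathrm{sin}}(x)\), i.e., ω = 1, and using SciPy's complete elliptic integral) evaluates the exact period as a function of amplitude, showing the observed frequency decreasing continuously from the linear value toward zero:

```python
# A short sketch (assuming the nondimensional pendulum x'' = -sin(x), i.e., omega = 1) of the
# continuous spectrum: the exact period T = 4 K(sin(theta0/2)) grows with amplitude theta0,
# so the observed frequency decreases continuously from the linear value of 1 toward 0.
import numpy as np
from scipy.special import ellipk

theta0 = np.linspace(0.1, 3.1, 7)             # oscillation amplitudes up to nearly pi
m = np.sin(theta0 / 2.0)**2                   # scipy's ellipk takes the parameter m = k^2
period = 4.0 * ellipk(m)                      # exact pendulum period
frequency = 2.0 * np.pi / period              # observed frequency at each amplitude

for a, f in zip(theta0, frequency):
    print(f"amplitude {a:4.2f}   frequency {f:5.3f}")
```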

The importance of accounting for the continuous spectrum was discussed in 1932 in an extension by Koopman and von Neumann5. A continuous spectrum, as described for the simple pendulum, is characterized by a continuous range of observed frequencies, as opposed to the discrete spectrum consisting of isolated, fixed frequencies. This phenomenon is observed in a wide range of physical systems that exhibit broadband frequency content, such as turbulence and nonlinear optics. The continuous spectrum thus confounds simple Koopman descriptions, as there is not a straightforward finite approximation in terms of a small number of eigenfunctions10. Indeed, away from the linear regime, an infinite Fourier sum is required to approximate the shift in frequency and eigenfunctions. In fact, in some cases, eigenfunctions may not exist at all.

Recently, there have been several algorithmic advances to approximate systems with continuous spectra, including nonlinear Laplacian spectral analysis45 and the use of delay coordinates46,47. A critically enabling innovation of the present work is explicitly accounting for the parametric dependence of the Koopman operator K(λ) on the continuously varying λ, related to the classic perturbation results above. By constructing an auxiliary network (see Fig. 2) to first determine the parametric dependency of the Koopman operator on the frequency λ± = ±iω, an interpretable low-rank model of the intrinsic dynamics can then be constructed. In particular, a nonlinear oscillator with continuous spectrum may now be represented as a single pair of conjugate eigenfunctions, mapping trajectories into perfect sines and cosines, with a continuous eigenvalue parameterizing the frequency. If this explicit frequency dependence is unaccounted for, then a high-dimensional network is necessary to account for the shifting frequency and eigenvalues. We conjecture that previous Koopman models using high-dimensional DNNs represent the harmonic series expansion required to approximate the continuous spectrum for systems such as the Duffing oscillator.

Fig. 2

Schematic of modified schema with auxiliary network to identify (parametrize) the continuous eigenvalue spectrum λ. This facilitates an aggressive dimensionality reduction in the auto-encoder, avoiding the need for higher harmonics of the fundamental frequency that are generated by the nonlinearity43, 44. For purely oscillatory motion, as in the pendulum, we identify the continuous frequency λ± = ±iω

Deep learning to identify Koopman eigenfunctions

The overarching goal of this work is to leverage the power of deep learning to discover and represent eigenfunctions of the Koopman operator. Our perspective is driven by the need for parsimonious representations that are efficient, avoid overfitting, and provide minimal descriptions of the dynamics on interpretable intrinsic coordinates. Unlike previous deep learning approaches to Koopman28,29,30,31, our network architecture is designed specifically to handle a ubiquitous class of nonlinear systems characterized by a continuous frequency spectrum generated by the nonlinearity. A continuous spectrum presents unique challenges for compact and interpretable representation, and our approach is inspired by the classical asymptotic and perturbation approaches in dynamical systems.

Our core network architecture is shown in Fig. 1, and it is modified in Fig. 2 to handle the continuous spectrum. The objective of this network is to identify a few key intrinsic coordinates y = φ(x) spanned by a set of Koopman eigenfunctions \(\varphi : {\Bbb R}^{n} \to {\Bbb R}^{p}\), along with a dynamical system yk+1 = Kyk. There are three high-level requirements for the network, corresponding to three types of loss functions used in training:

  1. Intrinsic coordinates that are useful for reconstruction. We seek to identify a few intrinsic coordinates y = φ(x) where the dynamics evolve, along with an inverse x = φ−1(y) so that the state x may be recovered. This is achieved using an auto-encoder (see Fig. 1a), where φ is the encoder and φ−1 is the decoder. The dimension p of the auto-encoder subspace is a hyperparameter of the network, and this choice may be guided by knowledge of the system. Reconstruction accuracy of the auto-encoder is achieved using the following loss: \(\left\| {{\mathbf{x}} - \varphi ^{ - 1}(\varphi ({\mathbf{x}}))} \right\|\).

  2. Linear dynamics. To discover Koopman eigenfunctions, we learn the linear dynamics K on the intrinsic coordinates, i.e., yk+1 = Kyk. Linear dynamics are achieved using the following loss: \(\left\| {\varphi ({\mathbf{x}}_{k + 1}) - {\mathbf{K}}\varphi ({\mathbf{x}}_{k})} \right\|\). More generally, we enforce linear prediction over m time steps with the loss: \(\left\| {\varphi ({\mathbf{x}}_{k + m}) - {\mathbf{K}}^{m}\varphi ({\mathbf{x}}_{k})} \right\|\) (see Fig. 1c).

  3. Future state prediction. Finally, the intrinsic coordinates must enable future state prediction. Specifically, we identify linear dynamics in the matrix K. This corresponds to the loss \(\left\| {{\mathbf{x}}_{k + 1} - \varphi ^{ - 1}({\mathbf{K}}\varphi ({\mathbf{x}}_{k}))} \right\|\), and more generally \(\left\| {{\mathbf{x}}_{k + m} - \varphi ^{ - 1}({{\mathbf{K}}}^{m}\varphi ({\mathbf{x}}_{k}))} \right\|\) (see Fig. 1b).

Our norm \(\left\| \cdot \right\|\) is mean-squared error, averaging over dimension then number of examples, and we add \({\ell}_{2}\) regularization.

To address the continuous spectrum, we allow the eigenvalues of the matrix K to vary, parametrized by the function λ = Λ(y), which is learned by an auxiliary network (see Fig. 2). The eigenvalues λ± = μ ± iω are then used to parametrize block-diagonal K(μ,ω). For each pair of complex eigenvalues, the discrete-time K has a Jordan block of the form:

$$B(\mu ,\omega) = {\mathrm{exp}}(\mu {\mathrm{\Delta }}t)\left[ {\begin{array}{*{20}{c}} {{\mathrm{cos}}(\omega {\mathrm{\Delta }}t)} & {-{\mathrm{sin}}(\omega {\mathrm{\Delta }}t)} \\ {{\mathrm{sin}}(\omega {\mathrm{\Delta }}t)} & {{\mathrm{cos}}(\omega {\mathrm{\Delta }}t)} \end{array}} \right].$$
(4)

This network structure allows the eigenvalues to vary across phase space, facilitating a small number of eigenfunctions. To enforce circular symmetry in the eigenfunction coordinates, we often parameterize the eigenvalues by the radius \(\lambda \left({\left\| {\mathbf{y}} \right\|_2^2} \right)\). The prediction and linearity loss functions must also be modified for systems with continuous spectrum, as discussed in the Methods section.
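A minimal sketch of one block of Eq. (4) is given below; the values of μ, ω, and Δt are illustrative constants here, whereas in the network μ and ω are produced by the auxiliary network for each sample:

```python
# A minimal NumPy sketch of one block of Eq. (4); mu and omega are illustrative constants here,
# whereas in the network they are produced by the auxiliary network for each sample.
import numpy as np

def koopman_block(mu, omega, dt):
    """Discrete-time block exp(mu*dt) * [[cos, -sin], [sin, cos]](omega*dt) from Eq. (4)."""
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    return np.exp(mu * dt) * np.array([[c, -s],
                                       [s,  c]])

dt = 0.02
B = koopman_block(mu=-0.1, omega=1.0, dt=dt)

y = np.array([1.0, 0.0])                      # intrinsic coordinates y_k
y_next = B @ y                                # one linear step: y_{k+1} = B y_k
print(y_next)
```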

To train our network, we generate trajectories from random initial conditions, which are split into training, validation, and test sets. Models are trained on the training set and compared on the validation set, which is also used for early stopping to prevent overfitting. We report accuracy on the test set.

Demonstration on examples

We demonstrate our deep learning approach to identify Koopman eigenfunctions on several example systems, including a simple model with a discrete spectrum and two examples that exhibit a continuous spectrum: the nonlinear pendulum and the high-dimensional unsteady fluid flow past a cylinder. The training, validation, and test errors for all examples are reported in Table 1. Additional details for each example are provided in Supplementary Note 1.

Table 1 Errors for each problem

Example 1: Simple model with discrete spectrum

Before analyzing systems with the additional challenges of a continuous spectrum and high-dimensionality, we consider a simple nonlinear system with a single fixed point and a discrete eigenvalue spectrum:

$$\dot x_1 = \mu x_1$$
(5)
$$\dot x_2 = \lambda (x_2 - x_1^2).$$
(6)

This dynamical system has been well-studied in the literature40,48, and for stable eigenvalues λ < μ < 0, the system exhibits a slow manifold given by \(x_2 = x_1^2\); we use μ = −0.05 and λ = −1. As shown in Fig. 3, the Koopman embedding identifies nonlinear coordinates that flatten this inertial manifold, providing a globally linear representation of the dynamics; moreover, the correct Koopman eigenvalues are identified. Specific details about the network and training procedure are provided in the Methods.
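For reference, this example admits a known analytic Koopman-invariant subspace spanned by the observables (x1, x2, x1^2), on which the dynamics are exactly linear. The sketch below is a numerical check of that analytic result (independent of the learned network), comparing the nonlinear simulation with the linear evolution of these observables:

```python
# A numerical check (independent of the learned network) of the known analytic result that the
# observables y = (x1, x2, x1^2) evolve linearly for Eqs. (5)-(6): dy/dt = L y with the 3x3
# matrix L below. We compare direct nonlinear simulation with the linear evolution of y.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

mu, lam = -0.05, -1.0                          # parameter values used in the text

def rhs(t, x):
    return [mu * x[0], lam * (x[1] - x[0]**2)]

x0 = np.array([0.4, -0.3])                     # illustrative initial condition
t_final = 1.0
sol = solve_ivp(rhs, [0.0, t_final], x0, rtol=1e-10, atol=1e-12)
x1, x2 = sol.y[:, -1]

L = np.array([[mu,  0.0,  0.0],                # d/dt x1   = mu * x1
              [0.0, lam, -lam],                # d/dt x2   = lam * x2 - lam * x1^2
              [0.0, 0.0, 2*mu]])               # d/dt x1^2 = 2 * mu * x1^2
y0 = np.array([x0[0], x0[1], x0[0]**2])
y_final = expm(L * t_final) @ y0

print(np.allclose([x1, x2, x1**2], y_final, atol=1e-6))   # True
```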

Fig. 3

Demonstration of neural network embedding of Koopman eigenfunctions for simple system with a discrete eigenvalue spectrum

In this example, we include the auxiliary network even though it is not required for examples with discrete eigenvalues. As shown in Supplementary Fig. 3, although the eigenvalues have the freedom to vary, they stay in a narrow range around the correct values. This numerically demonstrates that it is possible to identify a discrete spectrum without a priori knowledge about whether the spectrum is continuous or discrete.

Example 2: Nonlinear pendulum with continuous spectrum

As a second example, we consider the nonlinear pendulum, which exhibits a continuous eigenvalue spectrum with increasing energy:

$$\ddot x = - {\mathrm{sin}}(x)\quad \Rightarrow \quad \left\{ {\begin{array}{*{20}{l}} {\dot x_1 = x_2} \hfill \\ {\dot x_2 = - {\mathrm{sin}}(x_1).} \hfill \end{array}} \right.$$
(7)

Although this is a simple mechanical system, it has eluded parsimonious representation in the Koopman framework. The deep Koopman embedding is shown in Fig. 4, where it is clear that the dynamics are linear in the eigenfunction coordinates, given by y = φ(x). As the Hamiltonian energy of the system increases, corresponding to an elongation of the oscillation period, the parameterized Koopman network accounts for this continuous frequency shift and provides a compact representation in terms of two conjugate eigenfunctions. Alternative network architectures that are not specifically designed to account for continuous spectra with an auxiliary network would be forced to approximate this frequency shift with the classical asymptotic expansion in terms of harmonics. The resulting network would be overly bulky and would limit interpretability.

Fig. 4

Illustration of deep Koopman eigenfunctions for the nonlinear pendulum. The pendulum, although a simple mechanical system, exhibits a continuous spectrum, making it difficult to obtain a compact representation in terms of Koopman eigenfunctions. However, by leveraging a generalized network, as in Fig. 2, it is possible to identify a parsimonious model in terms of a single complex conjugate pair of eigenfunctions, parameterized by the frequency ω. In eigenfunction coordinates, the dynamics become linear, and orbits are given by perfect circles. For the sake of visualization, we use 10 evenly spaced trajectories instead of the random trajectories in the testing set

Recall that we have three types of losses on the network: reconstruction, prediction, and linearity. Figure 4b shows that the network is able to function as an auto-encoder, accurately reconstructing the 10 example trajectories. Next, we show that the network is able to predict the evolution of the system. Figure 4c shows the prediction horizon for 10 initial conditions that are simulated forward with the network, stopping the prediction when the relative error reaches 10%. As expected, the prediction horizon deteriorates as the energy of the initial condition increases, although the prediction is still quite accurate. The nearly concentric circles in Fig. 4d demonstrate that the dynamics in the intrinsic coordinates y are truly linear.

In this example, both the eigenfunctions and the eigenvalues are spatially varying. When originally designing the Koopman network, we did not impose any constraints on how these eigenfunctions and eigenvalues vary in space, and the resulting network did not converge to a unique and interpretable solution. This led us to impose an important design constraint: a nonlinear oscillator, such as the pendulum, should map to coordinates with radial symmetry, so that the spatial variation of the eigenfunctions and eigenvalues depends on the radius of the intrinsic coordinates.

The eigenfunctions φ1(x) and φ2(x) are shown in Fig. 4e. It is possible to map these eigenfunctions into magnitude and phase coordinates, as shown in Fig. 5, where it can be seen that the magnitude essentially traces level sets of the Hamiltonian energy. This is consistent with previous theoretical derivations of Mezić49 that represent Koopman eigenfunctions in action–angle coordinates, and we thank him for communicating this connection to us.

Fig. 5

Magnitude and phase of the pendulum eigenfunctions

Example 3: High-dimensional nonlinear fluid flow

As our final example, we consider the nonlinear fluid flow past a circular cylinder at Reynolds number 100 based on diameter, which is characterized by vortex shedding. This model has been a benchmark in fluid dynamics for decades50, and has been extensively analyzed in the context of data-driven modeling25,51 and Koopman analysis52. In 2003, Noack et al.50 showed that the high-dimensional dynamics evolve on a low-dimensional attractor, given by a slow manifold in the following model:

$$\dot x_1 = \mu x_1 - \omega x_2 + Ax_1x_3$$
(8)
$$\dot x_2 = \omega x_1 + \mu x_2 + Ax_2x_3$$
(9)
$$\dot x_3 = - \lambda \left({x_3 - x_1^2 - x_2^2} \right).$$
(10)

This mean-field model exhibits a stable limit cycle corresponding to von Kármán vortex shedding, and an unstable equilibrium corresponding to a low-drag condition. Starting near this equilibrium, the flow unwinds up the slow manifold toward the limit cycle. In ref. 51, Loiseau and Brunton showed that this flow may be modeled by a nonlinear oscillator with state-dependent damping, making it amenable to the continuous spectrum analysis. We use trajectories from this model where μ = 0.1, ω = 1, A = −0.1, and λ = 10 to train a Koopman network. The resulting eigenfunctions are shown in Fig. 6.
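The sketch below integrates Eqs. (8)–(10) with these parameter values (SciPy in place of the MATLAB ode45 solver described in the Methods; the initial condition and integration time are illustrative); the trajectory relaxes quickly onto the slow manifold and then spirals slowly toward the limit cycle at \(x_1^2 + x_2^2 = 1\):

```python
# A sketch integrating Eqs. (8)-(10) with the stated parameters (SciPy in place of the MATLAB
# ode45 solver used for the datasets; initial condition and integration time are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

mu, omega, A, lam = 0.1, 1.0, -0.1, 10.0

def mean_field(t, x):
    x1, x2, x3 = x
    return [mu * x1 - omega * x2 + A * x1 * x3,
            omega * x1 + mu * x2 + A * x2 * x3,
            -lam * (x3 - x1**2 - x2**2)]

x0 = [0.1, 0.0, 0.5]                           # starts off the slow manifold, near the equilibrium
t_eval = np.arange(0.0, 60.0, 0.05)
sol = solve_ivp(mean_field, [0.0, 60.0], x0, t_eval=t_eval, rtol=1e-8)

radius = np.sqrt(sol.y[0]**2 + sol.y[1]**2)
print(radius[0], radius[-1])                   # the radius grows slowly toward the limit cycle at 1
```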

Fig. 6

Learned Koopman eigenfunctions for the mean-field model of fluid flow past a circular cylinder at Reynolds number 100. a Reconstruction of trajectory from linear Koopman model with two states; modes for each of the state space variables x are shown along the coordinate axes. b Koopman reconstruction in eigenfunction coordinates y, along with eigenfunctions \(y = \varphi (x)\). c Two examples of trajectories that begin off the attractor. The Koopman model is able to reconstruct both given only the initial condition

In this example, the damping rate μ and frequency ω are allowed to vary along level sets of the radius in eigenfunction coordinates, so that μ(R) and ω(R), where \(R^2 = y_1^2 + y_2^2\); this is accomplished with an auxiliary network as in Fig. 2. These continuously varying eigenvalues are shown in Supplementary Fig. 5, where it can be seen that the frequency ω is extremely close to the true constant −1, while the damping μ varies significantly, and in fact switches stability for trajectories outside the natural limit cycle. This is consistent with the data-driven model of Loiseau and Brunton51.

Although we only show the ability of the model to predict the future state in Fig. 6, corresponding to the second and third loss functions, the network also functions as an autoencoder. Figure 6c shows the prediction performance of the Koopman network for trajectories that start away from the attractor; in both cases, the dynamics are faithfully captured and the trajectories are attracted onto the limit cycle. Thus, it is possible to capture nonlinear transients, as long as these are sufficiently represented in the training data.

Discussion

In summary, we have employed powerful deep learning approaches to identify and represent coordinate transformations that recast strongly nonlinear dynamics into a globally linear framework. Our approach is designed to discover eigenfunctions of the Koopman operator, which provide an intrinsic coordinate system to linearize nonlinear systems, and have been notoriously difficult to identify and represent using alternative methods. Building on a deep auto-encoder framework, we enforce additional constraints and loss functions to identify Koopman eigenfunctions where the dynamics evolve linearly. Moreover, we generalize this framework to include a broad class of nonlinear systems that exhibit a continuous eigenvalue spectrum, where a continuous range of frequencies is observed. Continuous-spectrum systems are notoriously difficult to analyze, especially with Koopman theory, and naive learning approaches require asymptotic expansions in terms of higher order harmonics of the fundamental frequency, leading to unwieldy models. In contrast, we utilize an auxiliary network to parametrize and identify the continuous frequency, which then parameterizes a compact Koopman model on the auto-encoder coordinates. Thus, our deep neural network models remain both parsimonious and interpretable, merging the best of neural network representations and Koopman embeddings. In most deep learning applications, although the basic architecture is extremely general, considerable expert knowledge and intuition is typically used in the training process and in designing loss functions and constraints. Throughout this paper, we have also used physical insight and intuition from asymptotic theory and continuous spectrum dynamical systems to guide the design of parsimonious Koopman embeddings.

There are many ongoing challenges and promising directions that motivate future work. First, there are still several limitations associated with deep learning, including the need for vast and diverse data and extensive computation to train models53. This training may be considered a one-time upfront cost, and deep learning frameworks such as TensorFlow54 parallelize the training on and across GPUs; further, there is ongoing work to improve the scalability53. Even more concerning is the dubious generalizability and interpretability of the resulting models, as deep learning architectures may be viewed as sophisticated interpolation engines with limited ability to extrapolate beyond the training data55. This work attempts to promote interpretability by forcing the network to have physical meaning in the context of Koopman theory, although the issue with generalizability still requires sufficient volumes and diversity of training data. There are also more specific limitations to the current proposed architecture, foremost among them the choice of the dimension of the autoencoder coordinates y. Continued effort will be required to automatically detect the dimension of the intrinsic coordinates and to classify spectra (e.g., discrete and continuous, and real and complex eigenvalues). It will be important to extend these methods to higher-dimensional examples with more complex energy spectra, as the examples considered here are relatively low-dimensional. Fortunately, with sufficient data, deep learning architectures are able to learn incredibly complex representations, so the prospects for scaling these methods to larger systems are promising.

The use of deep learning in physics and engineering is increasing at an incredible rate, and this trend is only expected to accelerate. Nearly every field of science is revisiting challenging problems of central importance from the perspective of big data and deep learning. With this explosion of interest, it is imperative that we as a community seek machine learning models that favor interpretability and promote physical insight and intuition. In this challenge, there is a tremendous opportunity to gain new understanding and insight by applying increasingly powerful techniques to data. For example, discovering Koopman eigenfunctions will result in new symmetries and conservation laws, as conserved eigenfunctions are related to conservation laws via a generalized Noether's theorem. It will also be important to apply these techniques to increasingly challenging problems, such as turbulence, epidemiology, and neuroscience, where data is abundant and models are needed. The goal is to model these systems with a small number of coupled nonlinear oscillators using similar parameterized Koopman embeddings. Finally, the use of deep learning to discover Koopman eigenfunctions may enable transformative advances in the nonlinear control of complex systems. All of these future directions will be facilitated by more powerful network representations.

Methods

Creating the datasets

We create our datasets by solving the systems of differential equations in MATLAB using the ode45 solver.

For each dynamical system, we choose 5000 initial conditions for the test set, 5000 for the validation set, and 5000–20,000 for the training set (see Table 2). For each initial condition, we solve the differential equations for some time span. That time span is t = 0, 0.02, …, 1 for the discrete spectrum and pendulum datasets. Since the dynamics on the slow manifold for the fluid flow example are slower and more complicated, we increase the time span for that dataset to t = 0, 0.05, …, 6. However, when we include data off the slow manifold, we want to capture the fast dynamics as the trajectories are attracted to the slow manifold, so we change the time span to t = 0, 0.01, …, 1. Note that for the network to capture transient behavior as in the first and last example, it is important to include enough samples of transients in the training data.

Table 2 Dataset sizes

The discrete spectrum dataset is created from random initial conditions x where x1, x2 ∈ [−0.5, 0.5], since this portion of phase space is sufficient to capture the dynamics.

The pendulum dataset is created from random initial conditions x, where x1 ∈ [−3.1, 3.1] (just under [−π, π]), x2 ∈ [−2, 2], and the potential function is under 0.99. The potential function for the pendulum is \(\frac{1}{2}x_2^2 - {\mathrm{cos}}(x_1)\). These ranges are chosen to sample the pendulum in the full phase space where the pendulum approaches having an infinite period.
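As an illustration of this sampling procedure, the sketch below generates pendulum trajectories in Python with SciPy (the datasets in this work were generated with MATLAB's ode45, and the number of trajectories here is a small illustrative value rather than the sizes in Table 2):

```python
# A Python sketch of the pendulum dataset construction described above (the datasets in this
# work were generated in MATLAB with ode45; scipy's solve_ivp is used here, and n_train is a
# small illustrative value rather than the dataset sizes in Table 2).
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def pendulum(t, x):
    return [x[1], -np.sin(x[0])]

def sample_initial_condition():
    """Rejection sampling: keep x only if (1/2) x2^2 - cos(x1) < 0.99."""
    while True:
        x1 = rng.uniform(-3.1, 3.1)
        x2 = rng.uniform(-2.0, 2.0)
        if 0.5 * x2**2 - np.cos(x1) < 0.99:
            return np.array([x1, x2])

t_eval = np.arange(0.0, 1.0 + 1e-9, 0.02)      # t = 0, 0.02, ..., 1
n_train = 100                                   # illustrative only

trajectories = []
for _ in range(n_train):
    sol = solve_ivp(pendulum, [0.0, 1.0], sample_initial_condition(),
                    t_eval=t_eval, rtol=1e-8)
    trajectories.append(sol.y.T)                # shape (51, 2): 51 snapshots of (x1, x2)

data = np.stack(trajectories)                   # shape (n_train, 51, 2)
print(data.shape)
```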

The fluid flow problem limited to the slow manifold is created from random initial conditions x on the bowl where r ∈ [0, 1.1], θ ∈ [0, 2π], x1 = r cos(θ), x2 = r sin(θ), and \(x_3 = x_1^2 + x_2^2\). This captures all of the dynamics on the slow manifold, which consists of trajectories that spiral toward the limit cycle at r = 1.

The fluid flow problem beyond the slow manifold is created from random initial conditions x where x1 ∈ [−1.1, 1.1], x2 ∈ [−1.1, 1.1], and x3 ∈ [0, 2.42]. These limits are chosen to include the dynamics on the slow manifold covered by the previous dataset, as well as trajectories that begin off the slow manifold. Any trajectory that grows to x3 > 2.5 is eliminated so that the domain is reasonably compact and well-sampled.

Code

We use the Python API for the TensorFlow framework54 and the Adam optimizer56 for training. All of our code is available online at github.com/BethanyL/DeepKoopman.

Network architecture

Each hidden layer has the form of Wx + b followed by an activation with the rectified linear unit (ReLU): f(x) = max{0,x}. In our experiments, training was significantly faster with ReLU as the activation function than with sigmoid. See Table 3 for the number of hidden layers in the encoder, decoder, and auxiliary network, as well as their widths. The output layers of the encoder, decoder, and auxiliary network are linear (simply Wx + b).

Table 3 Network architecture

The input to the auxiliary network is y, and it outputs the parameters for the eigenvalues of K. For each complex conjugate pair of eigenvalues λ± = μ ± iω, the network defines a function Λ mapping \(y_j^2 + y_{j + 1}^2\) to μ and ω, where yj and yj+1 are the corresponding eigenfunctions. Similarly, for each real eigenvalue λ, the network defines a function mapping yj to λ. For example, for the fluid flow problem off the attractor, we have three eigenfunctions. The auxiliary network learns a map from \(y_1^2 + y_2^2\) to μ and ω and another map from y3 to λ. This could be implemented as one network defining a mapping \(\Lambda :{\Bbb R}^2 \to {\Bbb R}^3\) where the layers are not fully connected (\(y_1^2 + y_2^2\) should not influence λ and y3 should not influence μ and ω). However, for simplicity, we implement this as two separate auxiliary networks, one for the complex conjugate pair of eigenvalues and one for the real eigenvalue.
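A minimal tf.keras sketch of the three sub-networks is shown below for the pendulum case (two intrinsic coordinates and a single complex-conjugate eigenvalue pair); the hidden-layer widths and depths are illustrative placeholders rather than the values reported in Table 3:

```python
# A minimal tf.keras sketch (illustrative layer widths; see Table 3 for the values used in the
# paper) of the three sub-networks: encoder phi, decoder phi^{-1}, and the auxiliary network
# Lambda that maps y1^2 + y2^2 to the eigenvalue parameters (mu, omega).
import tensorflow as tf

n = 2       # state dimension (pendulum)
p = 2       # dimension of the intrinsic coordinates y
width = 80  # illustrative hidden-layer width

def mlp(input_dim, output_dim, hidden=(width, width), name=None):
    """ReLU hidden layers (W x + b followed by ReLU) with a linear output layer."""
    model = tf.keras.Sequential(name=name)
    model.add(tf.keras.Input(shape=(input_dim,)))
    for h in hidden:
        model.add(tf.keras.layers.Dense(h, activation="relu"))
    model.add(tf.keras.layers.Dense(output_dim))   # linear output: W x + b
    return model

encoder = mlp(n, p, name="encoder")        # y = phi(x)
decoder = mlp(p, n, name="decoder")        # x = phi^{-1}(y)
auxiliary = mlp(1, 2, name="auxiliary")    # maps y1^2 + y2^2 to (mu, omega)

x = tf.random.normal((16, n))              # a batch of states
y = encoder(x)
radius2 = tf.reduce_sum(y**2, axis=1, keepdims=True)
mu_omega = auxiliary(radius2)              # continuously varying eigenvalue parameters
x_recon = decoder(y)
print(x_recon.shape, mu_omega.shape)       # (16, 2) (16, 2)
```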

Explicit loss function

Our loss function has three weighted mean-squared error components: reconstruction accuracy \({\cal L}_{{\mathrm {recon}}}\), future state prediction \({\cal L}_{{\mathrm {pred}}}\), and linearity of dynamics \({\cal L}_{{\mathrm {lin}}}\). Since we know that there are no outliers in our data, we also use an \({\cal L}_\infty\) term to penalize the data point with the largest loss. Finally, we add \(\ell _2\) regularization on the weights W to avoid overfitting. More specifically:

$${\cal L} = \alpha _1({\cal L}_{\mathrm{recon}} + {\cal L}_{\mathrm{pred}}) + {\cal L}_{\mathrm{lin}} + \alpha _2{\cal L}_\infty + \alpha _3\left\| {\mathbf{W}} \right\|_2^2$$
(11)
$${\cal L}_{\mathrm{recon}} = \left\| {{\mathbf{x}}_1 - \varphi ^{ - 1}(\varphi ({\mathbf{x}}_1))} \right\|_{\mathrm{MSE}}$$
(12)
$${\cal L}_{\mathrm{pred}} = \frac{1}{{S_p}}\mathop {\sum}\limits_{m = 1}^{S_p} \left\| {{\mathbf{x}}_{m + 1} - \varphi ^{ - 1}(K^m\varphi ({\mathbf{x}}_1))} \right\|_{\mathrm{MSE}}$$
(13)
$${\cal L}_{\mathrm{lin}} = \frac{1}{{T - 1}}\mathop {\sum}\limits_{m = 1}^{T - 1} \left\| {\varphi ({\mathbf{x}}_{m + 1}) - K^m\varphi ({\mathbf{x}}_1)} \right\|_{\mathrm{MSE}}$$
(14)
$${\cal L}_\infty = \left\| {{\mathbf{x}}_1 - \varphi ^{ - 1}(\varphi ({\mathbf{x}}_1))} \right\|_\infty + \left\| {{\mathbf{x}}_2 - \varphi ^{ - 1}(K\varphi ({\mathbf{x}}_1))} \right\|_\infty ,$$
(15)

where MSE refers to mean squared error and T is the number of time steps in each trajectory. The weights α1, α2, and α3 are hyperparameters. The integer Sp is a hyperparameter for how many steps to check in the prediction loss. The hyperparameters α1, α2, α3, and Sp are listed in Table 4.

Table 4 Loss hyperparameters

The matrix K is parametrized by the function λ = Λ(y), which is learned by an auxiliary network. The eigenvalues can vary along a trajectory, so in \({\cal L}_{\mathrm{pred}}\) and \({\cal L}_{\mathrm{lin}}\), K^m = K(λ1) K(λ2)…K(λm). However, for Hamiltonian systems, such as the pendulum, the eigenvalues are constant along each trajectory. If a system is known to be Hamiltonian, the network training could be sped up by encoding the constraint that K^m = K(λ)^m. In order to demonstrate that this specialized knowledge is not necessary, we use the more general case for all of our datasets, including the pendulum.
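The sketch below implements the loss terms of Eqs. (12)–(15) in simplified form, using a fixed matrix K for readability; in the actual network, K is reassembled at every step from the auxiliary network output, so that K^m becomes the product K(λ1)K(λ2)…K(λm) as described above:

```python
# A simplified sketch of the loss terms in Eqs. (12)-(15), written with a fixed matrix K for
# readability; in the paper K is rebuilt at each step from the auxiliary network, so K^m is
# the product K(lambda_1) K(lambda_2) ... K(lambda_m).
import numpy as np
import tensorflow as tf

def koopman_losses(x_traj, encoder, decoder, K, S_p):
    """x_traj: tensor of shape (T, batch, n) holding x_1, ..., x_T; requires S_p <= T - 1."""
    mse = lambda a, b: tf.reduce_mean(tf.square(a - b))
    T = x_traj.shape[0]

    y1 = encoder(x_traj[0])
    L_recon = mse(x_traj[0], decoder(y1))                          # Eq. (12)

    L_pred, L_lin = 0.0, 0.0
    y = y1
    for m in range(1, T):
        y = tf.matmul(y, K, transpose_b=True)                      # y = K^m phi(x_1)
        if m <= S_p:
            L_pred += mse(x_traj[m], decoder(y))                   # Eq. (13)
        L_lin += mse(encoder(x_traj[m]), y)                        # Eq. (14)
    L_pred /= S_p
    L_lin /= (T - 1)

    L_inf = (tf.norm(x_traj[0] - decoder(y1), ord=np.inf)          # Eq. (15)
             + tf.norm(x_traj[1] - decoder(tf.matmul(y1, K, transpose_b=True)), ord=np.inf))
    return L_recon, L_pred, L_lin, L_inf

# Total loss per Eq. (11): L = alpha1*(L_recon + L_pred) + L_lin + alpha2*L_inf + alpha3*||W||_2^2,
# where the weights alpha_i and the l2 penalty on W are hyperparameters (Table 4).
```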

Training

We initialize each weight matrix W randomly from a uniform distribution in the range [−s, s] for \(s = 1/\sqrt a\), where a is the dimension of the input of the layer. This distribution was suggested in ref. 21. Each bias vector b is initialized to 0. The model for the discrete spectrum example is trained for 2 h on an NVIDIA K80 GPU. The other models are each trained for 6 h. The learning rate for the Adam optimizer is 0.001. On the pendulum and fluid flow datasets, we pretrain the network for 5 min as a simple autoencoder, using the autoencoder loss but not the linearity or prediction losses, as this speeds up the training. We also use early stopping; for each model, at the end of training, we restore the parameters from the training step with the lowest validation error.
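For concreteness, the weight initialization described above can be sketched as follows (the layer sizes are illustrative):

```python
# A small sketch of the weight initialization described above: each W is drawn uniformly from
# [-s, s] with s = 1/sqrt(a), where a is the input dimension of the layer; biases start at 0.
import numpy as np

rng = np.random.default_rng(0)

def init_layer(a, b):
    """Initialize a layer mapping R^a -> R^b."""
    s = 1.0 / np.sqrt(a)
    return rng.uniform(-s, s, size=(b, a)), np.zeros(b)

W, bias = init_layer(a=80, b=80)
print(W.min() >= -1 / np.sqrt(80), W.max() <= 1 / np.sqrt(80))   # True True
```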

Hyperparameter tuning

There are many design choices in deep learning, so we use hyperparameter tuning, as described in ref. 21. For each dynamical system, we train multiple models in a random search of hyperparameter space and choose the one with the lowest validation error. Each model is also initialized with different random weights. We find that α1, which sets the trade-off between the two objectives that involve the decoder (reconstruction and prediction) and the linearity objective, which does not, has a significant effect on the training speed.

Code availability

All code used in this study is available at github.com/BethanyL/DeepKoopman.