Variational multiscale reinforcement learning for discovering reduced order closure models of nonlinear spatiotemporal transport systems

A central challenge in the computational modeling and simulation of a multitude of science applications is achieving robust and accurate closures for their coarse-grained representations due to underlying highly nonlinear multiscale interactions. These closure models are common in many nonlinear spatiotemporal systems to account for losses due to reduced order representations, including many transport phenomena in fluids. Previous data-driven closure modeling efforts have mostly focused on supervised learning approaches using high fidelity simulation data. By contrast, reinforcement learning (RL) is a powerful yet relatively uncharted method for spatiotemporally extended systems. In this study, we put forth a modular dynamic closure modeling and discovery framework to stabilize Galerkin projection based reduced order models that may arise in many nonlinear spatiotemporal dynamical systems with quadratic nonlinearity. A key element in creating a robust RL agent is the introduction of a feasible reward function, which can be constructed from any difference metric between the RL model and high fidelity simulation data. First, we introduce a multi-modal RL approach to discover mode-dependent closure policies that utilize the high fidelity data in rewarding our RL agent. We then formulate a variational multiscale RL (VMRL) approach to discover closure models without requiring access to the high fidelity data in designing the reward function. Specifically, our chief innovation is to leverage variational multiscale formalism to quantify the difference between modal interactions in Galerkin systems. Our results in simulating the viscous Burgers equation indicate that the proposed VMRL method leads to robust and accurate closure parameterizations, and it may potentially be used to discover scale-aware closure models for complex dynamical systems.

A detailed discussion of these models can be found in a recent survey 4 . In principle, the problem of building a data-driven closure model from multimodal datasets can be posed as an optimization and ML task. Although supervised learning techniques have become commodity tools nowadays, reinforcement learning (RL) remains a relatively uncharted approach in computational science communities. In contrast to the many data-driven supervised learning based approaches introduced for closures 4 , RL can be formulated to tackle the closure problem in an automated fashion. RL provides a comprehensive iterative computational framework built on modular, goal-directed interactions of an agent with its environment. More recently, Novati et al. 15 demonstrated the power of RL in the discovery of turbulence closure models for large-eddy simulations of turbulent flows. The RL framework introduced in the turbulence modeling context is quite general and might apply equally well to other coarse-grained reduced order modeling approaches. In related works 16,17 , the feasibility of using RL to learn optimal ROM closures has been discussed. We also refer to a recent review 18 for the theory and application of RL approaches in fluid mechanics.
A fundamental question in RL is how to construct robust state, action, and reward definitions relevant to the underlying problem. In this study, we put forth and examine scale-aware RL mechanisms that automate the closure modeling of projection based ROMs. Specifically, we first introduce a multi-modal RL (MMRL) approach, which discovers mode-dependent policies to stabilize the evolution of the truncated Galerkin system. One of the main objectives of our study is therefore to demonstrate how physical insights play a key role in designing an effective RL environment that converges to a robust control policy for learning and parameterizing the closure terms.
In fact, a key element in forging a robust RL agent is to introduce an appropriate reward function, which can, in principle, be constructed from any difference metric between the RL model and high fidelity simulation data. In this study, instead, we explore how RL can be formulated using a variational multiscale approach to discover closure models without requiring access to the high fidelity data in designing the reward function. The general concept of the variational multiscale framework was first introduced in the finite element community [19][20][21][22][23] , and its underpinning idea was later adopted by ROM modellers [24][25][26][27][28][29] . In our study, we put forth a variational multiscale RL (VMRL) approach by leveraging this multiscale concept, which introduces a natural hierarchy to quantify the difference between modal interactions in Galerkin projection based ROM systems. Specifically, our work addresses the following questions:
• How can RL be used to discover reliable closure models in reduced order models of transport equations?
• Which parameterization processes lead to improved closure approaches that may reduce uncertainty in the evolution of the projection coefficients of Galerkin ROM systems?
• How does a mode-dependent closure formulation affect overall predictive performance?
• What are the design considerations for formulating a feasible reward function that does not require access to supervised training data?
Therefore, the main goal of this paper is to address these questions in the context of closure model discovery for complex nonlinear spatiotemporal systems.

Methods
Reduced order modeling. To illustrate the proposed approaches, we focus on the viscous Burgers equation, a generic partial differential equation that represents broad nonlinear transport phenomena in fluid dynamics, given as

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x², (1)

where u refers to velocity, and ν is the kinematic viscosity (i.e., ν = 1/Re in dimensionless form, where Re refers to the Reynolds number). From a model reduction perspective, we highlight that Eq. (1) possesses the hallmarks of general nonlinear multidimensional advection-diffusion problems 30 . Defining our spatiotemporal domain, x ∈ [0, 1] and t ∈ [0, 1], the viscous Burgers equation admits an analytical solution in the form of 30,31

u(x, t) = (x/(t + 1)) / (1 + √((t + 1)/t_0) exp(x²/(4ν(t + 1)))), (2)

where t_0 = exp(1/(8ν)). This closed form expression is used to generate snapshot data for our forthcoming model order reduction analysis. The database is constituted with Eq. (2) using N = 1024 spatial collocation points at each snapshot. We set ν = 0.001 (i.e., Re = 1000) to generate our training data, and our training database consists of 500 snapshots from t = 0 to t = 1. In considering such a spatiotemporal system, we often apply a modal decomposition to the field u(x, t),

u(x, t) = Σ_{k=1}^{R} α_k(t) ψ_k(x), (3)

where α_k(t) and ψ_k(x) refer to the kth modal coefficient and kth proper orthogonal decomposition (POD) basis function, respectively. Figure 1 shows the first eight most energetic POD basis functions utilized in this study. We note that, without losing generality, one can also use Fourier harmonics 9 or randomized orthogonal functions 32 to forge a set of basis functions. Here, we note that access to a set of previously recorded data snapshots is necessary in order to generate the POD basis functions. The proposed framework, however, is not necessarily dependent on the POD procedure and is easily adaptable to different basis functions.
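As a concrete illustration, the snapshot database and POD basis described above can be assembled in a few lines. This is a minimal sketch (the function names are our own, and the thin SVD is one standard way to extract POD modes from a snapshot matrix):

```python
import numpy as np

def burgers_exact(x, t, nu=0.001):
    """Analytical solution of the viscous Burgers equation, Eq. (2)."""
    t0 = np.exp(1.0 / (8.0 * nu))
    return (x / (t + 1.0)) / (
        1.0 + np.sqrt((t + 1.0) / t0) * np.exp(x**2 / (4.0 * nu * (t + 1.0))))

def pod_basis(snapshots, R):
    """Return the R most energetic POD modes of an (N x M) snapshot matrix."""
    # Left singular vectors of the thin SVD are the orthonormal POD modes,
    # ordered by decreasing singular value (modal energy).
    U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :R], S

# N = 1024 collocation points, 500 snapshots on t in [0, 1], nu = 0.001.
x = np.linspace(0.0, 1.0, 1024)
snapshots = np.stack(
    [burgers_exact(x, t) for t in np.linspace(0.0, 1.0, 500)], axis=1)
psi, singular_values = pod_basis(snapshots, R=8)
```

The returned modes are orthonormal by construction, which is the property the variational multiscale splitting later relies on.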
In other words, let's take into account the following layers when formulating our problem: (i) create basis functions, (ii) develop a projection-based reduced order model, (iii) establish an ansatz/model for closures, and (iv) use reinforcement learning to learn the parameters of the ansatz/model. We stress that the data has only been used in step (i) when we construct the bases, since we concentrate on the POD-based model reduction strategy. However, our approach can be used effectively with different bases, such as Fourier basis functions. Once a set of spatial orthonormal modes (i.e., for k = 1, 2, . . . , R) is obtained from snapshot data, we then apply the Galerkin projection (GP) to obtain a dynamical system that evolves in the latent space of α_k(t). The resulting ROM, denoted as GP in this study, becomes

dα_k/dt = Σ_{i=1}^{R} L_{ik} α_i + Σ_{i=1}^{R} Σ_{j=1}^{R} N_{ij}^{k} α_i α_j, (4)

where

L_{ik} = ν (∂²ψ_i/∂x², ψ_k), (5)

N_{ij}^{k} = −(ψ_i ∂ψ_j/∂x, ψ_k), (6)

and the notation (·, ·) represents the standard inner product. We note that these tensorial coefficients in the GP model only depend on the spatial modes, which are often precomputed from the available snapshot data when designing projection based ROMs.
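The precomputation of these tensorial coefficients and the evaluation of the GP right-hand side can be sketched as follows. For self-containment we use an orthonormal sine basis on [0, 1] in place of POD modes, and we assume the standard Burgers GP-ROM operator forms (viscous linear term, quadratic advection term):

```python
import numpy as np

# Orthonormal sine basis stands in for POD modes (illustrative choice).
N_x, R, nu = 1024, 4, 0.001
x = np.linspace(0.0, 1.0, N_x)
dx = x[1] - x[0]
psi = np.stack(
    [np.sqrt(2.0) * np.sin((k + 1) * np.pi * x) for k in range(R)], axis=1)

def gp_operators(psi, dx, nu):
    """Precompute L[i, k] = nu*(psi_i'', psi_k) and N[i, j, k] = -(psi_i psi_j', psi_k)."""
    dpsi = np.gradient(psi, dx, axis=0)
    d2psi = np.gradient(dpsi, dx, axis=0)
    # Discrete inner product (u, v) ~ sum(u * v) * dx on the uniform grid.
    L = nu * d2psi.T @ psi * dx
    N = -np.einsum('ni,nj,nk->ijk', psi, dpsi, psi) * dx
    return L, N

def gp_rhs(a, L, N):
    """Right-hand side of the Galerkin ROM, Eq. (4)."""
    return a @ L + np.einsum('ijk,i,j->k', N, a, a)

L, N = gp_operators(psi, dx, nu)
```

For the sine basis, the linear operator is nearly diagonal with entries −ν(kπ)², which provides a quick sanity check on the discretization.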
Closure modeling. We first illustrate the underlying closure modeling concept for a prototype demonstration as shown in Fig. 2. To formulate our ROM closure problem, we modify Eq. (4) by adding a functional form of distributed control. This control is often referred to as the eddy viscosity approach, which has strong roots in large eddy simulations to model or compensate for the residual effects of the truncated scales [33][34][35][36] . Therefore, the modified ROM becomes

dα_k/dt = Σ_{i=1}^{R} L_{ik} α_i + Σ_{i=1}^{R} Σ_{j=1}^{R} N_{ij}^{k} α_i α_j + C_k, (7)

where the proposed closure term can be parameterized by defining an eddy viscosity coefficient η as follows

C_k = (η/ν) Σ_{i=1}^{R} L_{ik} α_i. (8)

Several techniques have been introduced to improve the accuracy of closure parameterizations, including the definition of a nonlinear eddy viscosity model 37 or dynamic closure models 38,39 that allow the eddy viscosity to vary in time (i.e., η(t) ← η). In this paper, we first formulate an RL environment and design an agent to discover this eddy viscosity parameter η(t). We call this approach linear-mode RL (LMRL). Later studies 40 further enhanced the closure theory by emphasizing the modal eddy viscosity concept. The roots of such mode-dependent corrections go back to the work of Rempfer and Fasel 41 on improved closure models. These multi-modal closures can be specified as

C_k = (η_k/ν) Σ_{i=1}^{R} L_{ik} α_i, (9)

where η_k refers to the kth modal eddy viscosity coefficient. In our current work, we formulate an RL framework to learn η_k(t), and call this approach multi-modal RL (MMRL). Although the proposed closure problem can be formulated using more traditional adjoint based 37 or sensitivity based approaches 42 , our chief motivation in this study is to explore the feasibility of RL workflows for ROM closure problems. More specifically, in this paper we aim to introduce a variational multiscale RL (VMRL) approach by formulating a new procedure to forge a reward function that does not require access to the training data.
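A sketch of the closure-augmented right-hand side follows. Scaling the viscous term by η_k/ν amounts to replacing ν with ν + η_k mode by mode; treat this as one common eddy-viscosity parameterization assumed here for illustration, with stand-in operators in place of the precomputed GP tensors:

```python
import numpy as np

# Illustrative operators: a diagonal viscous term and random quadratic
# interactions stand in for the precomputed GP tensors of Eqs. (5)-(6).
rng = np.random.default_rng(0)
R, nu = 4, 0.001
L = -np.diag(nu * (np.arange(1, R + 1) * np.pi) ** 2)
N = 0.1 * rng.standard_normal((R, R, R))

def gp_rhs_closure(a, L, N, eta, nu):
    """GP ROM right-hand side with a modal eddy-viscosity closure: the
    (eta_k / nu) scaling of the viscous term is equivalent to replacing
    nu with nu + eta_k in mode k."""
    linear = a @ L
    quad = np.einsum('ijk,i,j->k', N, a, a)
    return linear + quad + (eta / nu) * linear  # eta is a length-R vector
```

Setting η_k = ν in every mode should exactly double the linear dissipation, which gives a simple consistency check on the implementation.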
Our approach therefore facilitates new RL workflows since RL enhanced computational frameworks might play an integral role in designing many end-to-end data-driven approaches for broader optimization and control problems.
Deep reinforcement learning. In our context, deep RL presents a modular computational framework to learn η_k(t) in Eq. (9). Here, we briefly describe the formulation of the RL problem and the proximal policy optimization (PPO) algorithm 43 . In RL, at each time step t, the agent observes some representation of the state of the system, s_t ∈ S, and based on this observation selects an action, a_t ∈ A. The agent receives a reward, r_t ∈ R, as a consequence of the action, and the environment enters a new state s_{t+1}. Therefore, the interaction of an agent with the environment gives rise to a trajectory as follows

τ = (s_0, a_0, r_0, s_1, a_1, r_1, . . . , s_T, a_T, r_T). (10)

The goal of RL is to find an optimal strategy for the agent that maximizes the expected discounted reward over the trajectory τ, which can be written mathematically as follows

J = E_τ [ Σ_{t=0}^{T} γ^t r_t ], (11)

where γ is a parameter called the discount rate that lies between [0, 1], and T is the horizon of the trajectory. The discount rate determines how much weight is assigned to long-term rewards compared to immediate rewards.
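For instance, the discounted return in Eq. (11) can be accumulated backward over a finite trajectory (a minimal sketch):

```python
def discounted_return(rewards, gamma=0.95):
    """Sum of gamma^t * r_t over a finite-horizon trajectory, cf. Eq. (11)."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g  # backward accumulation avoids computing powers
    return g
```

With γ = 0 only the immediate reward counts, while γ close to 1 weights the whole horizon.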
In RL, the agent's decision making strategy is characterized by a policy π(s, a) ∈ Π. The RL agent is trained to find a policy that optimizes the expected return when starting in the state s at time step t, which is called the state-value function. We can write the state-value function as follows

V^π(s) = E_π [ Σ_{k=0}^{∞} γ^k r_{t+k} | s_t = s ]. (12)

As illustrated in Fig. 2, when we truncate a model spanned in a higher dimensional state space (i.e., R = 3 in the figure, a three-dimensional model spanned in α_1, α_2, α_3) to a lower dimensional latent space (i.e., R = 2 in the figure, a reduced order two-dimensional model spanned only in α_1, α_2), a closure error will be introduced due to the underlying nonlinear interactions. The main goal in closure modeling is to find a parameterized model, which is only a function of the resolved modal coefficients (i.e., α_1, . . . , α_R), to minimize this closure error. Therefore, in this paper we formulate an RL problem to discover this closure model using the state variables (i.e., the resolved modal coefficients).

Similarly, the expected return starting in a state s, taking an action a, and thereafter following a policy π is called the action-value function and can be written as

Q^π(s, a) = E_π [ Σ_{k=0}^{∞} γ^k r_{t+k} | s_t = s, a_t = a ]. (13)

We also define an advantage function, which can be considered another version of the action-value function with lower variance, obtained by taking the state-value function as the baseline. The advantage function can be written as

A^π(s, a) = Q^π(s, a) − V^π(s). (14)

We use π_w(·) to denote that the policy is parameterized by w ∈ R^d (i.e., the weights and biases of the neural network in deep RL). The agent is trained with an objective function defined as 44

J(w) = V^{π_w}(s_0), (15)

where an episode starts in some particular state s_0, and V^{π_w} is the value function for the policy π_w. The policy parameters w are updated by estimating the gradient of the objective function and plugging it into a gradient ascent algorithm as follows

w ← w + β ∇_w J(w), (16)

where β is the learning rate of the optimization algorithm.
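A Monte-Carlo estimate of Eqs. (12)-(14) replaces the expectations with sampled returns; the helper below (our own naming, a sketch) computes per-step returns and advantages against a value baseline:

```python
import numpy as np

def returns_and_advantages(rewards, values, gamma=0.95):
    """Sampled returns G_t and advantages A_t = G_t - V(s_t) for one episode.

    rewards: list of per-step rewards; values: baseline V(s_t) estimates.
    """
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g          # G_t = r_t + gamma * G_{t+1}
        out.append(g)
    returns = np.array(out[::-1])  # restore chronological order
    return returns, returns - np.asarray(values)
```

Subtracting the baseline leaves the gradient direction unchanged in expectation while reducing its variance, which is exactly the role of Eq. (14).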
The gradient of the objective function can be computed using the policy gradient theorem 45 as follows

∇_w J(w) = E_π [ ∇_w log π_w(s, a) A^π(s, a) ]. (17)

The accurate calculation of the empirical expectation in Eq. (17) requires a large number of samples. Furthermore, the performance of policy gradient methods is highly sensitive to the learning rate, leading to difficulty in obtaining stable and steady improvement. The PPO algorithm introduces a clipped surrogate objective function 43 to avoid excessive updates in the policy parameters in a simplified way as follows

J^{CLIP}(w) = E_t [ min( r_t(w) A_t, clip(r_t(w), 1 − ε, 1 + ε) A_t ) ], (18)

where r_t(w) denotes the probability ratio between the new and old policies as given below

r_t(w) = π_{w+Δw}(s, a) / π_w(s, a). (19)

The ε in Eq. (18) is a hyperparameter that controls how much the new policy can deviate from the old. This is done through the function clip(r_t(w), 1 − ε, 1 + ε), which forces the ratio between the new and old policies, r_t(w), to stay within the limits [1 − ε, 1 + ε].
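The clipped surrogate of Eq. (18) is straightforward to express elementwise over a sampled batch (a sketch; full PPO implementations add entropy and value-loss terms that we omit here):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of Eq. (18), averaged over a batch.

    ratio: r_t(w) = pi_new / pi_old evaluated at the sampled (s, a) pairs.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # The elementwise minimum removes the incentive for overly large updates.
    return np.minimum(unclipped, clipped).mean()
```

When the ratio leaves the band [1 − ε, 1 + ε], the objective stops improving with further policy movement, which is what keeps the update step bounded.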

Variational multiscale approach.
Here we present a two-scale variational multiscale formulation as depicted in Fig. 3, which utilizes two orthogonal spaces, X_A and X_B. Since the POD basis is orthonormal by construction, we can build these two orthogonal spaces in a natural, straightforward way: X_A := span{ψ_1, ψ_2, . . . , ψ_R}, which represents the resolved ROM scales, and X_B := span{ψ_{R+1}, ψ_{R+2}, . . . , ψ_R̄}, which represents the test scales (i.e., unresolved ROM scales). Following Eq. (4), we next use the ROM approximation of u in this space,

u(x, t) = Σ_{k=1}^{R} α_k(t) ψ_k(x) + Σ_{k=R+1}^{R̄} α_k(t) ψ_k(x), (20)

where the first term on the right hand side of Eq. (20) represents the resolved ROM components of u, and the second term represents the unresolved test scales. Plugging the ROM approximation of u into Eq. (1), projecting it onto X_A and X_B, and using ROM basis orthogonality, we obtain

dα_k/dt = Σ_{i=1}^{R̄} L_{ik} α_i + Σ_{i=1}^{R̄} Σ_{j=1}^{R̄} N_{ij}^{k} α_i α_j, k = 1, 2, . . . , R̄. (21)

To describe the hierarchical structure of the ROM basis, we make use of the variational framework. Therefore, in the first stage, the scales are naturally divided into two groups by the ROM projection: (i) resolved scales and (ii) unresolved scales. The terms describing the interactions between the two categories of scales are explicitly identified in the second stage. The novelty of the proposed framework is demonstrated in the third stage, where we build our reinforcement learning based closure models for the interaction between the two types of scales. We also highlight that the choice of the cut-off scale R and the test scale R̄ might have an impact on the results of the approach. Moreover, in POD-based reduced order models, raising R typically results in an increase in the achievable accuracy.
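The orthogonal split of Eq. (20) into resolved (X_A) and test (X_B) components is a pair of projections; with an orthonormal sine basis standing in for the POD modes, a sketch reads:

```python
import numpy as np

# Orthonormal sine basis on [0, 1] stands in for POD modes: Rbar total modes,
# the first R span X_A (resolved) and the rest span X_B (test scales).
N_x, R, Rbar = 1024, 3, 6
x = np.linspace(0.0, 1.0, N_x)
dx = x[1] - x[0]
psi = np.stack(
    [np.sqrt(2.0) * np.sin((k + 1) * np.pi * x) for k in range(Rbar)], axis=1)

def split_scales(u, psi, R, dx):
    """Project u onto X_A and X_B, cf. Eq. (20)."""
    a = psi.T @ u * dx                       # modal coefficients, k = 1..Rbar
    return psi[:, :R] @ a[:R], psi[:, R:] @ a[R:]

u = 2.0 * psi[:, 0] + 0.5 * psi[:, 4]        # one resolved + one test-scale mode
u_A, u_B = split_scales(u, psi, R, dx)
```

Because the basis is orthonormal, the two components recover the original field up to discretization error, and each mode lands entirely in its own subspace.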
In summary, our RL environment consists of three model definitions for the evolution of the state variables α_1, α_2, . . . , α_R: (i) the base ROM given by Eq. (4), (ii) the ROM improved by the closure model given by Eq. (7), and (iii) the test model given by Eq. (21). Our key hypothesis relies on the fact that the proposed closure model is accurate and representative if the difference between the states obtained by Eqs. (7) and (21) is minimized. Therefore, the reward function in our RL framework can now be easily constructed by exploiting the difference between these resolved and test scale modal coefficients. More precisely, the binary reward in Eq. (22) assigns +10 when the discrepancy between the closure model states and the test model states remains within a tolerance controlled by a scaling factor σ > 1, which can be chosen between 1 and 2 in practice, and −10 otherwise. In our calculations, we set σ = 1.6. We note that this binary definition of the reward function eliminates the need for access to the snapshot data, as we will detail further in our results section. The selection of ±10 in our binary definition is arbitrary, since the RL workflows are designed to maximize the sum of the reward over each episodic experiment. Figure 4 displays the complete deep RL framework for the MMRL approach, where the agent observes the POD modal coefficients as the state of the system and takes the action of selecting the modal eddy viscosity coefficients. Due to the modal truncation, the effect of the unresolved scales on the resolved scales is not captured, and there is a discrepancy between the true modal coefficients and the ROM modal coefficients. The goal of the agent is to minimize this difference, and therefore the l_2 norm of the deviation between the true and ROM modal coefficients is used as the reward function. Since we are maximizing the reward, the negative of the l_2 norm is assigned as the reward at each time step,

r_t = −√( Σ_{k=1}^{R} ( (û, ψ_k) − α_k(t) )² ), (23)

where û := u(x, t) refers to the snapshot data at time step t.
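Since the precise comparison rule of Eq. (22) is not reproduced here, one plausible concrete realization (an assumption on our part, not the paper's exact criterion) of a data-free binary reward contrasting the three models is:

```python
import numpy as np

def vmrl_reward(s_closure, s_test, s_base, sigma=1.6):
    """Binary +/-10 reward sketch: +10 when the closure ROM tracks the
    test-model states at least sigma times more closely than the base GP
    ROM does. The comparison rule is an illustrative assumption, cf. Eq. (22);
    note that no high fidelity snapshot data enters the computation."""
    err_closure = np.linalg.norm(s_closure - s_test)
    err_base = np.linalg.norm(s_base - s_test)
    return 10.0 if sigma * err_closure <= err_base else -10.0
```

The key design point survives any such variant: all three inputs are modal coefficients produced by the ROMs themselves, so the reward requires no access to the true solution.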
The choice of the reward function can have a significant effect on the performance of the agent 15 and needs to be carefully designed depending upon the problem. The performance of the MMRL approach is compared against the linear-modal RL (LMRL) approach, where the agent selects only a scalar value of the eddy viscosity amplitude as an action, and a linear viscosity kernel 35 is utilized to assign the modal eddy viscosity coefficients. Specifically, in the LMRL approach we have η_k(t) = η_e(t)(k/R), where η_e is the eddy viscosity amplitude selected by the agent as an action.
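The linear viscosity kernel maps the single scalar action to all modal coefficients, which can be sketched in one line:

```python
import numpy as np

def lmrl_eta(eta_e, R):
    """Modal eddy viscosities eta_k = eta_e * (k / R), k = 1..R, from the
    scalar amplitude eta_e selected by the LMRL agent."""
    return eta_e * np.arange(1, R + 1) / R
```

The kernel concentrates the dissipation on the highest-index (least energetic) modes, with the last mode receiving the full amplitude η_e.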

Results
In Fig. 5, the trajectory of the mean reward is shown for training an RL agent with the LMRL and MMRL approaches. The agent is trained for Reynolds number Re = 1000. It can be seen that the maximum reward attained with the MMRL approach is almost twice the magnitude of the reward achieved with the LMRL approach. Figure 6 depicts the evolution of selected POD modal coefficients for Re = 1500. The predictions from both the LMRL and MMRL approaches are in better agreement with the true projection modal coefficients compared to GP, with the prediction from MMRL being more accurate than that from LMRL. However, we highlight that both the LMRL and MMRL approaches utilize the reward function given by Eq. (23), which requires access to the true snapshot data. We highlight that in our evaluation, both the mean and two standard deviations (i.e., the 95% confidence interval) of 10 different RL models are shown (e.g., see the red symbols in Fig. 5 for those models). On the other hand, Fig. 7 illustrates the trajectory of the mean reward for training an RL agent with the VMRL approach, which utilizes the binary reward function given by Eq. (22). Figure 8 shows a comparison between the MMRL and VMRL approaches at Re = 1500. We highlight that both approaches utilize a multi-modal action space (i.e., discovering η_k(t) for k = 1, 2, . . . , R). Figure 8 clearly demonstrates that the VMRL approach obtains an accurate policy without requiring access to the true labeled data in defining the reward function. This key aspect of the proposed VMRL approach paves the way for designing novel RL workflows exploiting the modal interaction between resolved and test scales.
The spatiotemporal velocity field obtained with the different ROMs is shown in Fig. 9. It should be noted that we can at most recover the true projection of the full order model (FOM) solution. This true reduced order representation (ROR) is also shown in Fig. 9. In our analogy given in Fig. 2, the blue curve represents the ROR, and the red curve represents the GP model. For quantitative analysis, the root mean squared error (RMSE) for the different ROM approaches at different Reynolds numbers (different from the training setting at Re = 1000) is reported in Table 1. The RMSE for the LMRL, MMRL, and VMRL approaches is significantly smaller compared to the GP model. We also observe that the VMRL approach provides a marginally more accurate solution than the LMRL and MMRL approaches at higher Reynolds numbers.

Discussion
This study introduces a scale-aware reinforcement learning (RL) framework to automate the discovery of closure models in projection based ROMs. We treat the closure as a control input in the latent space of the ROM and build the parameterized model with a dissipation term. The feasibility of the RL framework is first demonstrated with linear-modal RL (LMRL), where a linear eddy viscosity constraint is utilized for the parameterization, and with multi-modal RL (MMRL), which finds mode-dependent eddy viscosity model coefficients. The agent is incorporated in a reduced-order solver, observes the POD modal coefficients, and accordingly computes the closure term. Here, both RL approaches minimize the discrepancy between the true POD modal coefficients and the prediction from the closure ROM, and the obtained closure model generalizes to different Reynolds numbers. We then demonstrate how to formulate an RL framework without requiring access to the true data using the variational multiscale formalism. We find that this variational multiscale RL (VMRL) approach constitutes a robust closure discovery framework that utilizes a reward function based on the modal energy transfer effect. Building on the promising results presented in this study, our future work will concentrate on incorporating the uncertainties associated with observations into the selection of the agent's action. Although our report demonstrates the feasibility of the proposed RL framework for the Burgers equation, more complex examples need to be examined in order to conclude whether the proposed methods are indeed applicable to simulations of complex phenomena, a subject we intend to investigate in the future.
The key benefit of utilizing reinforcement learning in this reduced order modeling context is its modular nature, which allows for the identification and learning of new closure modeling approaches, despite the fact that it is likely one of the most computational time- and resource-intensive disciplines of machine learning.

Data availability
The data that supports the findings of this study is available within the article. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.