Abstract
We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches. Deep learning ROM (DL-ROM) using deep convolutional autoencoders (DC–AE) has been shown to capture nonlinear solution manifolds but fails to perform adequately when linear subspace approaches such as proper orthogonal decomposition (POD) would be optimal. Besides, most DL-ROM models rely on convolutional layers, which might limit their application to structured meshes only. The proposed framework relies on the combination of an autoencoder (AE) and Barlow Twins (BT) self-supervised learning, where BT maximizes the information content of the embedding with the latent space through a joint embedding architecture. Through a series of benchmark problems of natural convection in porous media, BT–AE performs better than the previous DL-ROM framework: it provides results comparable to POD-based approaches for problems whose solution lies within a linear subspace, as well as to DL-ROM autoencoder-based techniques for problems whose solution lies on a nonlinear manifold; consequently, it bridges the gap between linear and nonlinear reduced manifolds. We illustrate that a proficient construction of the latent space is key to achieving these results, enabling us to map these latent spaces using regression models. The proposed framework achieves a relative error of 2% on average and 12% in the worst-case scenario (i.e., when the training data is small but the parameter space is large). We also show that our framework provides a speed-up of \(7 \times 10^{6}\) times, in the best case, and \(7 \times 10^{3}\) times on average compared to a finite element solver. Furthermore, the BT–AE framework can operate on unstructured meshes, which provides flexibility in its application to standard numerical solvers, on-site measurements, experimental data, or a combination of these sources.
Introduction
A reduced order model (ROM) is devised to provide acceptable accuracy at a much lower computational cost compared to the full order model (FOM)^{1}. In recent years, the non-intrusive or data-driven ROM approach has gained attention because (1) it has a straightforward implementation (i.e., it does not require any modifications of the FOM), (2) it easily lends itself to different kinds of physical problems, and (3) it allows for more stable and much faster prediction than intrusive ROM for nonlinear problems^{2,3,4,5,6,7}. Traditionally, proper orthogonal decomposition (POD) is used as a data compression tool (i.e., a linear subspace approach), which is the optimal way to construct linear reduced manifolds. However, POD-based solutions on a linear subspace are often restrictive for highly nonlinear problems where reduced spaces lie on nonlinear manifolds. More recently, nonlinear compression using autoencoder-based deep learning (DL) architectures, or the nonlinear manifold approach^{5,6,8,9}, has been suggested to reconstruct these nonlinear manifolds, resulting in generic and more refined predictive capabilities than linear subspace approaches for nonlinear problems. Recent extensive comparisons, however, show a performance deficit for DL-ROM approaches in some cases^{6}.
Kadeethum et al.^{6} illustrate that there are two essential issues for DL-ROM. First, the nonlinear approach outperforms its linear counterpart in specific settings (e.g., boundary conditions and domain geometry), but the opposite can occur in other settings. This is because POD provides the optimal data compression in a linear subspace for problems with fast-decaying Kolmogorov n-width, which measures the degree of accuracy achievable by n-dimensional linear subspaces^{10,11,12,13}. Therefore, the DL-ROM approach cannot exceed the level of POD accuracy for problems that naturally lie within linear manifolds. However, for problems with slowly decaying Kolmogorov n-width, the nonlinear manifold approach outperforms the linear subspace one. Even though the authors hypothesize that a visual comparison between principal component analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE) could indicate which method will perform better before employing any specific compression strategy, there is no unified model that could be used across problem settings without an extensive case-based hyperparameter search. Second, although the nonlinear approach excels in very complex settings, it relies on convolutional operators, hindering its application to unstructured meshes and limiting DL-ROM approaches to less practical problems. Hence, these limitations in DL-ROM methods need to be resolved and tested with problems of varying degrees of complexity.
Convection in porous media is an important process in various applications in natural and engineered environments (e.g., biomedical engineering, multiphase flow in the subsurface, seawater intrusion, geothermal energy, and storage of nuclear and radioactive waste)^{14,15,16,17}. As the media temperature and composition (fluid concentration) are altered, the dynamics of fluid density and viscosity variations can drive the flow field through flow instabilities^{18,19}. The gravity-driven flow problem is usually characterized by the Rayleigh number (\(\textrm{Ra}\)): at low \(\textrm{Ra}\), the flow field is laminar, while at high \(\textrm{Ra}\), the flow turns into a turbulent regime. In cases where the driving force is strong enough (very high \(\textrm{Ra}\)), the flow might also exhibit fingering behavior^{20}.
Numerical simulation of gravity-driven flow in porous media has been a subject of extensive research. Notable examples of full order models (FOM) include: (1) the TOUGH software suite, which includes multidimensional numerical models for simulating the coupled thermo-hydro-mechanical-chemical (THMC) processes in porous and fractured media^{21,22}, (2) SIERRA Mechanics, which has simulation capabilities for coupling thermal, fluid, aerodynamics, solid mechanics, and structural dynamics^{23}, (3) PyLith, a finite-element code for modeling dynamic and quasi-static simulations of coupled multiphysics processes^{24}, (4) the OpenGeoSys project, developed mainly on the finite element method using object-oriented programming for THMC processes in porous media^{25}, (5) IC-FERST, a reservoir simulator based on control-volume finite element methods and dynamic unstructured mesh optimization^{26}, (6) DYNAFLOW™, a nonlinear transient finite element analysis platform^{27}, (7) DARSim, a multiscale multiphysics finite volume based simulator^{28}, (8) CSMP, an object-oriented application program interface for the simulation of complex geological processes and their interactions (e.g., THMC)^{29}, and (9) PorePy, an open-source modeling platform for multiphysics processes in fractured porous media^{30}. In this study, we utilize the FOM developed in previous works, a locally conservative mixed finite element framework for coupled hydro-mechanical-chemical processes in heterogeneous porous media^{31,32}, in which interior penalty enriched Galerkin and mixed finite elements are employed. This FOM, however, is computationally expensive for two reasons. The first is that the problem of interest is highly nonlinear; hence, it takes more nonlinear iterations to converge. The second is that, to satisfy the Courant–Friedrichs–Lewy (CFL) condition, the FOM needs to march through many intermediate timesteps to reach the timesteps of interest^{33,34,35}.
Kadeethum et al.^{6} propose a data-driven reduced order model (ROM) that can reduce computational cost while maintaining acceptable accuracy for natural convection problems in porous media. The model is applicable to parameterized problems^{1,13,36,37,38,39,40,41}, depending on a set of parameters (\({\varvec{\mu }}\)) that could correspond to physical properties, geometric characteristics, or boundary conditions. This model sequentially follows (1) the offline and (2) online stages^{1,42}. The offline stage begins with initializing a set of input parameters, which we call a training set. Then the FOM is solved for each member of the training set (in the following, we will refer to the corresponding solutions as snapshots). Either linear compression, relying on POD^{4,43}, or nonlinear compression, relying on a deep convolutional autoencoder (DC–AE or DL-ROM)^{5,6,8}, is then used to compress the FOM snapshots and produce basis functions that span reduced spaces of very low dimensionality, yet still guarantee accurate reproduction of the snapshots^{44,45}. The ROM can then be solved during the online stage for any new value of \({\varvec{\mu }}\) by seeking an approximated solution in the reduced space.
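For the linear compression branch of the offline stage, the POD basis can be obtained from a singular value decomposition (SVD) of the snapshot matrix. The NumPy sketch below (with illustrative array sizes, not those of the examples in this work) shows the compress/reconstruct round trip:

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, n_modes: int) -> np.ndarray:
    """Compute a rank-n POD basis from a snapshot matrix.

    snapshots: (n_dof, n_snapshots) array, one FOM solution per column.
    Returns the (n_dof, n_modes) matrix of POD basis vectors.
    """
    # Thin SVD of the snapshot matrix; the leading left singular vectors
    # span the optimal linear subspace in the least-squares sense.
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :n_modes]

# Toy snapshot matrix: 200 DOFs, 30 snapshots.
rng = np.random.default_rng(0)
snaps = rng.standard_normal((200, 30))
basis = pod_basis(snaps, n_modes=8)

# Compress (project) a snapshot and reconstruct it in the full space.
coeffs = basis.T @ snaps[:, 0]   # reduced coordinates, shape (8,)
recon = basis @ coeffs           # full-space approximation, shape (200,)
```

The same projection/reconstruction pattern applies during the online stage, where the reduced coordinates come from a regression model instead of a projection of a known snapshot.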
In this work, we propose a unified data-driven ROM using a combination of Barlow Twins (BT) self-supervised learning and an autoencoder (BT–AE) that bridges the performance gap between linear and nonlinear manifold approaches. In particular, we use BT self-supervised learning to maximize the information content of the embedding with the latent space through a joint embedding architecture^{46}. With four example cases spanning a range of complexity that covers both linear and nonlinear problems, a comparison of the proposed BT–AE framework with both linear (POD) and nonlinear (DC–AE) ROM approaches is conducted to demonstrate a unified data-driven ROM framework that (1) excels in all test cases (whether the solution can be captured in a linear or nonlinear manifold) and (2) operates on either structured or unstructured meshes. Importantly, this model is fully data-driven; it could be trained on data produced by a FOM, on-site measurements, experimental data, or a combination of them. This characteristic provides flexibility across the spectrum of more complex problems. Since it is not limited by the Courant–Friedrichs–Lewy condition of conventional FOMs, it can deliver quantities of interest at any given time, contrary to the FOM^{6}.
Results
Data generation
We present a summary of all geometries and boundary conditions in Fig. 1. In short, Examples 1, 2, and 3 represent cases where \({\varvec{\mu }}\) is a scalar quantity, namely \(\textrm{Ra}\), while Example 4 illustrates a case where \({\varvec{\mu }}\) is a four-dimensional vector composed of \(\mathrm {Ra_1}\), \(\mathrm {Ra_2}\), \(\mathrm {Ra_3}\), and \(\mathrm {Ra_4}\). The information for each example is presented in Table 1. We note that \(\textrm{M}_{\textrm{validation}}\) and \(\textrm{M}_{\textrm{test}}\) represent the sizes of the validation and testing sets with varying Rayleigh number (\(\textrm{Ra}\)), respectively (Table 1). Due to time dependence, the total number of training, validation, and test samples is the product of \(\textrm{M}\) and \(N^t\), with varying \(N^t\) ranges. Specifically, the number of validation samples, \(\textrm{M}_{\textrm{validation}} N^t\), is determined by \(\textrm{M}_{\textrm{validation}} N^t = 0.1\,\textrm{M}N^t\), i.e., by randomly sampling 10% of the pooled training/validation set (\(\textrm{M}N^t\)).
It is noted that the final total of training samples is \(0.9\,\textrm{M} N^t\) because we allocate 10% of the training samples to the validation set. The total number of testing samples is \(\mathrm {M_{test}} N^t\). We emphasize that \(N^t\) is not constant but is a function of \({\varvec{\mu }}\): a higher \(\textrm{Ra}\) value results in a higher \(N^t\) to satisfy the CFL condition.
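The 10% hold-out described above amounts to a simple random split over the pooled \(\textrm{M}N^t\) snapshots. A minimal sketch, using the Example 1 snapshot count of 16,802 for illustration (the function name and seed are our own):

```python
import numpy as np

def split_training_validation(n_samples: int, val_fraction: float = 0.1,
                              seed: int = 0):
    """Randomly hold out a fraction of the pooled snapshots for validation.

    Returns (training indices, validation indices) into the snapshot set.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)       # shuffle snapshot indices
    n_val = int(round(val_fraction * n_samples))
    return idx[n_val:], idx[:n_val]

# Example 1 pooled set: M * N^t = 16,802 snapshots, 10% held out.
train_idx, val_idx = split_training_validation(16802, val_fraction=0.1)
```

Note that the split is over snapshots (pairs of \(t\) and \({\varvec{\mu }}\)), not over parameter instances, consistent with the \(0.9\,\textrm{M}N^t\) / \(0.1\,\textrm{M}N^t\) counts above.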
The summary of each model, including the subspace dimension and compression method, is presented in Table 2. A detailed description of the POD, AE, and DC–AE models is provided in Kadeethum et al.^{6}, and our newly developed BT–AE models are described in the “Methodology” section. In short, the POD models use proper orthogonal decomposition as the compression tool, and the AE models use an autoencoder as the compression method. We employ a deep convolutional autoencoder to compress our training snapshots (\(\textrm{M} N^t\)) for the DC–AE models. The BT–AE models utilize a combination of an autoencoder and Barlow Twins self-supervised learning in their compression procedure. For the POD models (linear compression), the subspace dimension refers to the number of reduced basis functions, \(\textrm{N}\), as well as the number of intermediate reduced basis functions, \(\mathrm {N_{int}}\). We assume \(\textrm{N} = \mathrm {N_{int}}\) for all models for simplicity. For the nonlinear compression models (AE, DC–AE, and BT–AE), the subspace dimension is the dimension of the latent space (\(\textrm{Q}\)).
Comparisons of BT–AE with POD, AE, and DC–AE models in simple domains
We first compare the BT–AE model accuracy (for different values of \(\textrm{Q}\)) with the models developed by Kadeethum et al.^{6,43} (i.e., the POD, AE, and DC–AE models) in relatively simple model domains. Example 1 illustrates a case where a linear manifold is optimal, while Example 2 presents a case where a nonlinear manifold is optimal. The results of the POD, AE, and DC–AE models presented in Kadeethum et al.^{6} demonstrate that the POD-based and DL-ROM approaches are more suitable for linear and nonlinear manifold problems, respectively, and they are used in this manuscript to evaluate the performance of the BT–AE models.
Example 1: Heating from the left boundary
The geometry and boundary conditions are shown in Fig. 1a; we adopt this example from Zhang et al. and Kadeethum et al.^{6,47}. This example represents a case in which the fluid flow is driven by buoyancy as the fluid is heated on the left side of the domain. The fluid then flows upwards and rotates to the right side of the domain. We set \({\varvec{\mu }} = (\textrm{Ra})\), and its admissible range of variation is [40.0, 80.0], see Table 1. For the training set, we use \(\textrm{M} = 40\), which leads to, in total, \(\textrm{M} N^t = 16{,}802\) training data points.
We present the test case results of the BT–AE model (BT–AE 16 Q) as supplemental information (SIAnimationExample 1). The difference between the solutions produced by the FOM and the ROM (DIFF) is calculated by

$$\begin{aligned} \textrm{DIFF} \left( \cdot ; t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)} \right) = \varphi _h \left( \cdot ; t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)} \right) - \widehat{\varphi }_h \left( \cdot ; t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)} \right) , \end{aligned}$$
where \(\varphi _h\) is a finitedimensional approximation of the set of primary variables corresponding to velocity, pressure, and temperature fields. \(\widehat{\varphi }_h\) is an approximation of \(\varphi _h\) produced by the ROM. Thus, \(\varphi _h(\cdot ; t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)})\) and \(\widehat{\varphi }_h(\cdot ; t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)})\) represent \(\varphi _h\) and \(\widehat{\varphi }_h\) at all space coordinates (i.e., evaluations at each DOF) at time \(t^k\) with input parameter \({\varvec{\mu }}_{\textrm{test}}^{(i)}\), respectively. Note that we only present the results of the temperature field. Hence, \(\varphi _h\) and \(\widehat{\varphi }_h\) represent \(T_h\) and \(\widehat{T}_h\), respectively. From SIAnimationExample 1, we observe that BT–AE 16 Q provides a reasonable approximation of the temperature field.
The results of Example 1 are presented in Fig. 2. In Fig. 2a, the performance of the different models (Table 2) is evaluated with the mean squared error (\(\textrm{MSE}_\varphi (:, {\varvec{\mu }}_{\textrm{test}}^{(i)})\)) of the test cases, defined at each time step \(t^k\) as

$$\begin{aligned} \textrm{MSE}_\varphi \left( t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)} \right) = \frac{1}{\mathrm {N^{dof}}} \sum _{j=1}^{\mathrm {N^{dof}}} \left( \varphi _{h}^{(j)} \left( t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)} \right) - \widehat{\varphi }_{h}^{(j)} \left( t^k, {\varvec{\mu }}_{\textrm{test}}^{(i)} \right) \right) ^2, \end{aligned}$$

in which \(j\) runs over the \(\mathrm {N^{dof}}\) degrees of freedom,
where \(\textrm{MSE}_\varphi (:, {\varvec{\mu }}_{\textrm{test}}^{(i)})\) represents the MSE values over all \(t\) for each \({\varvec{\mu }}_{\textrm{test}}^{(i)}\). The \(\textrm{MSE}\) results show that the BT–AE models perform better than the AE and DC–AE models. Moreover, BT–AE 16 Q delivers \(\textrm{MSE}\) results similar to those of the POD models. In contrast to the findings presented in Kadeethum et al.^{6}, where linear compression (POD) outperforms nonlinear compression (AE and DC–AE), the BT–AE models in this study perform similarly to the POD models. To be precise, the BT–AE models still underperform slightly, but the errors are comparable.
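In NumPy terms, this metric is a mean over degrees of freedom at each time step, collected into a vector over time. A minimal sketch with toy array sizes (the function name and shapes are illustrative, not those of the actual examples):

```python
import numpy as np

def mse_over_time(phi_fom: np.ndarray, phi_rom: np.ndarray) -> np.ndarray:
    """MSE between FOM and ROM fields for one test parameter instance.

    phi_fom, phi_rom: (n_timesteps, n_dof) arrays of snapshots.
    Returns the length-n_timesteps vector MSE_phi(:, mu_test).
    """
    # Mean over the DOF axis at each time step.
    return np.mean((phi_fom - phi_rom) ** 2, axis=1)

# Toy check: a ROM off by 0.01 at every DOF has MSE 1e-4 at every t.
fom = np.zeros((5, 100))
rom = fom + 0.01
errors = mse_over_time(fom, rom)
```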
We then investigate how the performance of the BT–AE models compares to that of DC–AE. First, we examine the data compression loss of the validation set (see Eq. 18), presented in Fig. 2b. From this figure, the data compression losses of the BT–AE models are slightly better than those of the DC–AE models. Subsequently, we illustrate the ANN mapping loss of the validation set, see Eq. (19), in Fig. 2c. From Fig. 2c, we observe that the mapping losses of the BT–AE models are six orders of magnitude lower than those of the DC–AE models. This behavior shows that the BT–AE latent spaces are easier to map (i.e., the ANN validation loss for the BT–AE mapping is much lower than that of the DC–AE). This observation is illustrated in Fig. 2d,e using t-Distributed Stochastic Neighbor Embedding (t-SNE) plots. From Fig. 2d, one can see that all latent variables of DC–AE 16 Q blend together (i.e., one cannot differentiate among cases with different \(\textrm{Ra}\) values). The latent variables of the BT–AE 16 Q model, on the other hand, shown in Fig. 2e, are much better structured (i.e., we can differentiate among cases with different \(\textrm{Ra}\) values).
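A t-SNE projection like the ones in Fig. 2d,e can be produced with scikit-learn. The sketch below uses synthetic stand-in latent codes (the cluster means, counts, and \(\textrm{Ra}\) labels are illustrative assumptions; in the actual workflow the inputs would be the encoder's latent vectors for each snapshot):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in latent codes: 3 parameter values (Ra), 20 snapshots each,
# living in a 16-dimensional latent space (Q = 16).
rng = np.random.default_rng(0)
latents = np.concatenate([rng.normal(loc=c, scale=0.1, size=(20, 16))
                          for c in (0.0, 1.0, 2.0)])
ra_labels = np.repeat([350.0, 375.0, 400.0], 20)   # color key for plotting

# Project to 2-D for visual inspection; well-separated per-Ra clusters
# indicate a latent space that is easy to map from (t, mu).
embedding = TSNE(n_components=2, perplexity=10.0,
                 init="pca", random_state=0).fit_transform(latents)
```

Plotting `embedding` colored by `ra_labels` reproduces the kind of qualitative comparison shown in Fig. 2d,e.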
Example 2: Elder problem
The Elder problem^{48} is a significantly more complicated and ill-posed problem^{48,49}. The high \(\textrm{Ra}\) numbers considered in this case may cause flow instability in the form of fingering behavior. The domain and boundary conditions are presented in Fig. 1b^{6,47,50}. In short, the model domain is heated along half of the bottom boundary (Fig. 1b), and the flow is driven upwards by the buoyancy force. We set \({\varvec{\mu }} = (\textrm{Ra})\), and its admissible range as [350.0, 400.0] (Table 1). Compared to Example 1, this higher range of \(\textrm{Ra}\) values affects the minimum and maximum \(N^t\), whose range increases to [790, 1010].
The results of Example 2 are presented in Fig. 3. From Fig. 3a, we observe that all the models using nonlinear compression (AE, DC–AE, and BT–AE) perform better than the linear compression (POD). Furthermore, the BT–AE model accuracy is comparable to that of the DC–AE models. However, the BT–AE results seem insensitive to the value of \(\textrm{Q}\), while the DC–AE results are affected by it (i.e., DC–AE 16 Q and DC–AE 256 Q are more accurate than DC–AE 4 Q). We also present the results of the test cases for the BT–AE 16 Q model in the supplemental animation (SIAnimationExample 2). From these results, we observe that the BT–AE 16 Q model delivers a reasonable approximation of the solution \(T_h\) (i.e., \(\widehat{T}_h\)).
We present the data compression loss of the validation set (Eq. 18) in Fig. 3b. In contrast to Fig. 2b, the DC–AE models have a slightly lower loss than the BT–AE models. We then investigate the ANN mapping loss (see Eq. 19) of the validation set in Fig. 3c. Similar to Fig. 2c, the BT–AE models have much lower mapping losses than the DC–AE models. Among the BT–AE models, BT–AE 256 Q has the highest ANN mapping loss, which is expected since it has the highest output dimension (i.e., we are mapping \(t\) and \({\varvec{\mu }}\) to \({\varvec{z}}^{\textrm{Q}}\)). Again, we observe a much better-structured latent space for BT–AE 16 Q than for DC–AE 16 Q (see Fig. 3d,e). To elaborate, the latent variables of DC–AE 16 Q overlap, hindering differentiation among cases (different \(\textrm{Ra}\) values). The latent variables of the BT–AE, on the contrary, are structured such that one can clearly observe distinct regions representing different \(\textrm{Ra}\) values, as shown in Fig. 3e.
Model performance of BT–AE models on complex geometries
From Examples 1 and 2, we have observed that the BT–AE models provide good results while operating on unstructured data. In this section, more challenging geometries, which require an unstructured mesh for the FOM, are evaluated with the BT–AE models only, since the other methods are not suitable for unstructured mesh problems.
Example 3: Unit cell of micromodel
Example 3 uses a unit cell of a micromodel in which a honeycomb-shaped central part and four corners are impermeable to flow; still, heat can conduct through these five subdomains, as presented in Fig. 1c. Over the past decade, micromodels have been used to study multiple coupled processes, including flow, reactive transport, bioreaction, and flow instability^{19,20,51,52,53,54}. The flow is initiated from an influx at the bottom of the domain. This geometry is more complex than those utilized in Examples 1 and 2 (see Fig. 1a,b). The higher temperature at the bottom surface (shown in red) alters the fluid density at the bottom, and subsequently, a buoyancy force drives the flow upwards from the bottom to the top of the domain. The five subdomains have very low flow conductivity, but they can conduct heat. Again, we set \({\varvec{\mu }} = (\textrm{Ra})\) and its range as [350.0, 400.0] (Table 1); this range of \(\textrm{Ra}\) can also cause flow instability. We use \(\textrm{M} = 40\), \(\textrm{M}_{\textrm{validation}}N^t = 10\%\) of \(\textrm{M}N^{t}\), and \(\textrm{M}_{\textrm{test}} = 10\). We have, in total, \(\textrm{M} N^t = 44{,}354\) training data points.
The summary of the Example 3 results is shown in Fig. 4. For all test cases, the MSE values over time in Fig. 4a–c are in the range of \(\approx 1 \times 10^{-5}\). The MSE values tend to decrease over time until the temperature field reaches a steady state. Moreover, BT–AE models with different \(\textrm{Q}\) values provide approximately similar results (in line with our findings from Examples 1 and 2). This behavior implies that, utilizing only a small number of latent variables, the model can achieve the same level of accuracy as one with a large number of latent variables. This behavior is very beneficial because the mapping between the parameter space and the latent space becomes more manageable. We also present the results of the test cases for the BT–AE 16 Q model in the supplemental animation (SIAnimationExample 3). Overall, BT–AE 16 Q delivers a reasonable approximation of \(T_h\) (i.e., the DIFF results are low, and the relative error lies within 2%).
The data compression loss (Eq. 18) is in the range of \(\approx 1 \times 10^{-5}\) to \(1 \times 10^{-6}\) (Fig. 4d), which is similar to that of Example 1 (Fig. 2b) but slightly lower than that of Example 2 (Fig. 3b). The data compression loss seems to be invariant to the \(\textrm{Q}\) value. We also present the Barlow Twins loss (Eq. 14) in Fig. 4e. We observe that the Barlow Twins loss increases with increasing \(\textrm{Q}\), as in Zbontar et al.^{46}. This can be explained by the fact that as \(\textrm{Q}\) grows larger, the cross-correlation matrix \(\textbf{C}^{T}\left( t, {\varvec{\mu }}\right)\) becomes bigger, resulting in more terms in Eqs. (15) and (16). As stated by Zbontar et al.^{46}, the absolute value of Eqs. (15) and (16) is not as important as their trend. To elaborate, in Fig. 4e, all models (different \(\textrm{Q}\) values) reach their saturation points around 40 epochs, meaning that the minimization of Eqs. (15) and (16) is complete.
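As a concrete illustration of the Barlow Twins objective, the NumPy sketch below computes the loss from two batches of latent embeddings; the function name, batch shape, and trade-off weight `lam` are illustrative assumptions rather than the exact settings of this study, following the general form of Zbontar et al.^{46}. Note how the cross-correlation matrix is \(\textrm{Q} \times \textrm{Q}\), which is why larger \(\textrm{Q}\) values yield larger absolute loss values, as discussed above.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss on two batches of embeddings (after Zbontar et al.).

    z_a, z_b: (batch, Q) embeddings of two views of the same snapshots.
    Returns (total loss, invariance term, redundancy-reduction term).
    """
    # Standardize each latent dimension over the batch.
    z_a = (z_a - z_a.mean(axis=0)) / z_a.std(axis=0)
    z_b = (z_b - z_b.mean(axis=0)) / z_b.std(axis=0)
    n = z_a.shape[0]
    c = z_a.T @ z_b / n                         # (Q, Q) cross-correlation matrix
    invariance = np.sum((1.0 - np.diag(c)) ** 2)    # push diagonal toward 1
    off_diag = c - np.diag(np.diag(c))
    redundancy = np.sum(off_diag ** 2)              # push off-diagonal toward 0
    return invariance + lam * redundancy, invariance, redundancy

# Identical views give a correlation diagonal of ones,
# so the invariance term vanishes.
rng = np.random.default_rng(0)
z = rng.standard_normal((32, 16))
loss, inv, red = barlow_twins_loss(z, z)
```

In the BT–AE framework, minimizing this objective regularizes the autoencoder's latent space, which is what produces the well-separated structure seen in the t-SNE plots.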
The ANN mapping loss of the latent space (Eq. 19) is presented in Fig. 4f. Similar to Examples 1 and 2 (Figs. 2c, 3c), the mapping loss is in the range of \(\approx 1 \times 10^{-5}\) to \(1 \times 10^{-7}\). The higher the \(\textrm{Q}\) value, the larger the mapping loss, because there are more outputs to map. We present the latent space structure in Fig. 4g (only for BT–AE 16 Q). In line with the results shown in Figs. 2e and 3e, the BT–AE latent space is well structured, since we can differentiate among different \(\textrm{Ra}\) values. This behavior stems from the fact that the BT loss maximizes the information content of the embedding with the latent space through a joint embedding architecture.
Example 4: Modified hydrocoin with four subdomains
Example 4 uses the hydrocoin problem^{55,56} with the domain geometry shown in Fig. 1d. In this example, the domain is subdivided into four subdomains with different \(\textrm{Ra}\) values (i.e., \({\varvec{\mu }} = \left( \mathrm {Ra_1}, \mathrm {Ra_2}, \mathrm {Ra_3}, \mathrm {Ra_4}\right)\)). The range of each \(\textrm{Ra}\) value is [350.0, 450.0]. Similar to the previous examples, this \(\textrm{Ra}\) range causes fingering behavior, as shown in the supplemental animation (SIAnimationExample4). We use \(\textrm{M} = 81\), \(\textrm{M}_{\textrm{validation}}N^t = 10\%\) of \(\textrm{M}N^{t}\), and \(\textrm{M}_{\textrm{test}} = 10\). We have, in total, \(\textrm{M} N^t = 90{,}175\) training data points. We note that since we use \(\textrm{M} = 81 = 3^4\) equally spaced samples, each parameter \(\textrm{Ra}_i\), \(i = 1,2,3,4\), takes only three values; for example, for \(\mathrm {Ra_1}\) we only sample \(\mathrm {Ra_1} = \left( 350, 400, 450\right)\) in the training set, and the same holds for \(\mathrm {Ra_2}, \mathrm {Ra_3},\) and \(\mathrm {Ra_4}\). As a result, training with relatively sparse samples of each parameter \(\textrm{Ra}_i\) makes it very challenging to obtain an accurate data-driven framework in general^{1,4}.
Even though this setting is very challenging, we still observe that BT–AE 16 Q delivers a reasonable approximation of \(T_h\), as seen in the supplemental animation (SIAnimationExample4). The summary of the Example 4 results is shown in Fig. 5. We present the MSE values as a function of time in Fig. 5a–c. We observe that the MSE values for all test cases are in the range of \(\approx 1 \times 10^{-1}\) to \(1 \times 10^{-5}\), which is significantly higher than in Examples 1, 2, and 3. Moreover, the MSE values generally increase as we approach steady-state solutions, unlike the behavior shown in Example 3. Again, BT–AE models with different \(\textrm{Q}\) values provide approximately similar results (in line with our findings from Examples 1, 2, and 3).
The data compression loss (Eq. 18) is in the range of \(\approx 1 \times 10^{-2}\) to \(1 \times 10^{-4}\) (Fig. 5d), which is significantly higher than in Examples 1, 2, and 3. This behavior illustrates that this example is the most challenging case for the BT–AE models. The data compression loss is lowest for \(\textrm{Q} = 256\) and highest for \(\textrm{Q} = 4\), but the difference is not critical. As shown by the Barlow Twins loss (Eq. 14) in Fig. 5e, the higher the value of \(\textrm{Q}\), the larger the Barlow Twins loss (as discussed in the previous example).
The ANN mapping loss of the latent space (Eq. 19) is presented in Fig. 5f. The mapping loss is in the range of \(\approx 1 \times 10^{-4}\) to \(1 \times 10^{-5}\), which is significantly higher than in Examples 1, 2, and 3 (see Figs. 2c, 3c, 4f). This behavior also contributes to the higher MSE values of the BT–AE models. We present the latent space structure (only for BT–AE 16 Q) in Fig. 5g,h for \(\mathrm {Ra_1}\) and \(\mathrm {Ra_4}\), respectively. Since Example 4 has different \(\textrm{Ra}\) values in four subdomains, differentiating the latent space by an individual \(\textrm{Ra}\) does not yield a clear separation, as the latent representations of the subdomains may be interconnected.
Discussion
Recent developments in ML-based data-driven reduced order modeling (DL-ROM or DC–AE in this study)^{5,6} have shown promising results in capturing parameterized solutions of systems of nonlinear equations. These models, however, rely on convolutional operators, which hinders their applicability to complex geometries where an unstructured mesh is required for the FOM, as in Examples 3 and 4. Though one could utilize an autoencoder without convolutional layers, such a model could not achieve the same level of accuracy as DL-ROM^{6}. Kadeethum et al.^{6} also illustrate that in a specific setting (simple geometry and boundary conditions), a linear compression approach using POD can outperform the DL-ROM model (Example 1). We have demonstrated that the autoencoder model with Barlow Twins self-supervised learning (BT–AE) achieves the same accuracy as DL-ROM (Example 2, where the POD models perform much worse than DL-ROM) by regularizing the latent space, or nonlinear manifold. Moreover, it also yields near-optimal results in the case where the linear compression model outperforms DL-ROM (Example 1). This means that the BT–AE model excels in all the test cases (Examples 1 and 2) while it can still operate on an unstructured mesh. This behavior is a significant advantage in scientific computing, since most realistic problems require unstructured mesh representations. Besides, the BT–AE performance is insensitive to the latent space dimension, suggesting that with only a small number of latent variables, the model can achieve the same level of accuracy as one with a large number of latent variables. This behavior is very beneficial because the mapping between the parameter space and the latent space becomes more manageable.
The computational time used to develop our ROM can be broken down into three primary parts: (1) generation of training data through the FOM (the second step in Fig. 6), (2) training the BT–AE (the third step in Fig. 6), and (3) mapping \(t\) and \({\varvec{\mu }}\) to the reduced subspace (the fourth step in Fig. 6). Each FOM model (corresponding to each set of \({\varvec{\mu }}\), or \(\textrm{Ra}\) in this work) takes, on average, about two hours on an AMD Ryzen Threadripper 3970X (4 threads). We note that our FOM utilizes adaptive time-stepping; hence, each \({\varvec{\mu }}^{(i)}\) may require a substantially different computational time. To elaborate, cases with higher \(\textrm{Ra}\) usually have a smaller timestep (\(N^t\) becomes larger) and subsequently require more time to complete.
The wall time used to train the BT–AE is approximately 0.4 h using a single Quadro RTX 6000. This computational cost is much cheaper than that of the DC–AE model, which takes around four to six hours^{6}, because DC–AE relies on convolutional layers, dropout, and batch normalization, which require much higher computational resources; the BT–AE, on the other hand, utilizes only a plain autoencoder. The BT–AE model is also cheaper to train than the POD model. However, we note that this may not be a fair comparison, as we run POD and BT–AE on different machines (i.e., our POD framework only works on CPU, while our BT–AE is trained on GPU). Please refer to Kadeethum et al.^{6} for detailed wall time comparisons between the POD and DC–AE models. Mapping \(t\) and \({\varvec{\mu }}\) to the reduced subspace through artificial neural networks (ANN) takes around half an hour to one hour using a single Quadro RTX 6000. As mentioned in the “Methodology” section, we do not terminate the training of the BT–AE or of the ANN mapping early; rather, we use the model with the best validation loss over all epochs. For example, we train for 50 epochs, but the model that offers the best validation loss might be the model at epoch 20; the training time reported here, however, covers all 50 epochs. Thus, the training times provided here are conservative.
Even though the ROM training time is not trivial, the ROM provides fast predictions during the online phase. Using an AMD Ryzen Threadripper 3970X, the ROM takes approximately a few milliseconds per query of a pair of \(t^{k}\) and \({\varvec{\mu }}^{(i)}\). We also note that, as discussed previously, our ROM needs to be trained on a GPU for the problems at hand, but it can run on a CPU during the online stage, since no backpropagation or optimization is needed at prediction time. On the contrary, one FOM simulation (for each \({\varvec{\mu }}^{(i)}\) for all \(t \in 0=: t^{0}<t^{1}<\cdots <t^{N} := \tau\)) takes about two hours. So, assuming that we query all \(t\) similarly to the FOM, the ROM takes only a matter of seconds. In practice, however, we might not need to evaluate all timestamps in \(0=: t^{0}<t^{1}<\cdots <t^{N}:= \tau\), because the quantities of interest at a specific time may be more important. Since the ROM is not bound by the CFL condition and can predict the quantities of interest at any specific time without intermediate computation, we could simply perform one query, \(t^{N}\) and \({\varvec{\mu }}^{(i)}\), saving computational time significantly. Our ROM provides a speed-up of \(7 \times 10^{6}\) at any specific time step for Example 2, and a speed-up of \(7 \times 10^{3}\) to \(7 \times 10^{6}\) across all examples considered in this work.
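The quoted speed-up figures follow from simple arithmetic on the representative timings above (roughly 2 h per FOM run and a millisecond per ROM query); the sketch below reproduces them, where the per-query time and the 1000-timestep query count are illustrative round numbers, not measured values:

```python
# Rough speed-up arithmetic under assumed representative timings.
fom_seconds = 2 * 3600        # one FOM simulation over all timesteps (~2 h)
rom_query_seconds = 1e-3      # one ROM query of a (t, mu) pair (~1 ms)
n_queries_all_t = 1000        # illustrative N^t if every timestep is queried

# Single-query case: the ROM skips all intermediate timesteps.
single_query_speedup = fom_seconds / rom_query_seconds             # ~7e6

# Querying every timestep, as the FOM must.
all_timestep_speedup = fom_seconds / (n_queries_all_t * rom_query_seconds)  # ~7e3
```

These two bounds bracket the \(7 \times 10^{3}\) to \(7 \times 10^{6}\) range reported for the examples in this work.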
Our model is developed within the data-driven paradigm, which is applicable to any FOM. Moreover, it can be trained using data produced by a FOM, on-site measurements, experimental data, or any combination of these sources. This characteristic provides a flexibility that intrusive approaches cannot offer. Data-driven models, however, typically require many training samples. We have illustrated that as the dimensionality of the parameter space grows, the model requires more training samples; otherwise, its accuracy degrades significantly, as in Example 4 compared to the accurate predictions of Example 3. We speculate that adaptive sampling techniques^{57,58,59}, incorporating physical information^{60,61}, or including multimodal unsupervised training^{62} might resolve this issue in future work. Another gap in data-driven machine learning ROM is that a posteriori errors are exceptionally challenging to quantify. An error estimator developed by Xiao^{63} for linear manifolds could be adapted and extended to the nonlinear manifold paradigm. Additionally, epistemic uncertainty could be quantified by adopting the ensemble technique proposed by Jacquier et al.^{64}.
Methodology
A graphical summary of our procedure is presented in Fig. 6: the computations are divided into an offline phase for the ROM construction, which we present through four consecutive main steps, and a (single-step) online phase for the ROM evaluation.
The first step of the offline stage is the initialization of a training set (\({\varvec{\mu }}\)), validation set (\({\varvec{\mu }}_{\textrm{validation}}\)), and test set (\({\varvec{\mu }}_{\textrm{test}}\)) of parameters used to train, validate, and test the framework, of cardinality \(\textrm{M}\), \(\textrm{M}_{\textrm{validation}}\), and \(\textrm{M}_{\textrm{test}}\), respectively. For the rest of this section we discuss only \({\varvec{\mu }}\); the same reasoning applies to \({\varvec{\mu }}_{\textrm{validation}}\) and \({\varvec{\mu }}_{\textrm{test}}\). Let \(\mathbb {P} \subset \mathbb {R}^P\), \(P \in \mathbb {N}\), be a compact set representing the range of variation of the parameters \({\varvec{\mu }} \in \mathbb {P}\). For notational convenience we denote by \(\mu _p\), \(p = 1, \ldots , P\), the pth component of \({\varvec{\mu }}\). To explore the parametric dependence of the phenomena, we define a discrete training set of \(\textrm{M}\) parameter instances. Each parameter instance in the training set is indicated by \({\varvec{\mu }}^{(i)}\), for \(i = 1, \ldots , \textrm{M}\); thus, the pth component of the ith parameter instance is denoted by \(\mu _p^{(i)}\) in the following. The choice of the value of \(\textrm{M}\), as well as the sampling procedure over the range \(\mathbb {P}\), is typically user- and problem-dependent. In this work, we use an equispaced distribution for the training set, as done in^{6,43}.
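As a concrete illustration, an equispaced training set over a low-dimensional parameter box can be generated as follows; the two parameters and their bounds are hypothetical stand-ins, not the ranges used in the examples of this study:

```python
import numpy as np

def equispaced_set(bounds, n_per_dim):
    """Equispaced samples over a compact parameter box P.

    bounds: list of (low, high) per parameter; n_per_dim: points per parameter.
    Returns an (M, P) array with M = prod(n_per_dim) parameter instances.
    """
    axes = [np.linspace(lo, hi, n) for (lo, hi), n in zip(bounds, n_per_dim)]
    grid = np.meshgrid(*axes, indexing="ij")
    return np.stack([g.ravel() for g in grid], axis=-1)

# two hypothetical parameters, 5 x 4 equispaced instances -> M = 20
mu_train = equispaced_set([(50.0, 500.0), (0.1, 1.0)], [5, 4])
```

Each row of `mu_train` is one parameter instance \({\varvec{\mu }}^{(i)}\) to be passed to the FOM in the second step.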
In the second step, we query the FOM, based on the finite element solver proposed and made publicly available in Kadeethum et al.^{6,32}, for each parameter \({\varvec{\mu }}\) in the training set. In short, we are interested in gravity-driven flow in porous media, and here we briefly describe the governing equations used in this study: (1) the mass balance and (2) the heat advection–diffusion equations. Let \(\Omega \subset \mathbb {R}^d\) (\(d \in \{1,2,3\}\)) denote the computational domain and \(\partial \Omega\) its boundary. \({\varvec{X}}^{*}\) are spatial coordinates in \(\Omega\) (e.g., \({\varvec{X}}^{*}=[x^{*}, y^{*}]\) when \(d=2\), the case we focus on throughout this study). The time domain is denoted by \(\mathbb {T} = \left( 0,\tau \right]\) with \(\tau >0\) (i.e., \(\tau\) is the final time). The primary variables used in this paper are \({\varvec{u}}^* (\cdot , t^*) : \Omega \times \mathbb {T} \rightarrow \mathbb {R}^d\), the vector-valued Darcy velocity (m/s); \(p^* (\cdot , t^*) : \Omega \times \mathbb {T} \rightarrow \mathbb {R}\), the scalar-valued fluid pressure (Pa); and \(T^* (\cdot , t^*) : \Omega \times \mathbb {T} \rightarrow \mathbb {R}\), the scalar-valued fluid temperature (°C). Time is denoted by \(t^*\) (s).
Following Joseph^{65}, the Boussinesq approximation to the mass balance equations results in the density difference appearing only in the buoyancy term. The mass balance equation reads

$$\nabla \cdot {\varvec{u}}^{*} = 0 \quad \text {in } \Omega \times \mathbb {T} \quad \quad (3)$$

and the Darcy velocity with the buoyancy term is

$${\varvec{u}}^{*} = -{\varvec{\kappa }}\left( \nabla p^{*} - \left( \rho - \rho _{0}\right) g \, \textbf{y}\right) \quad \text {in } \Omega \times \mathbb {T}, \quad \quad (4)$$
where \({\varvec{\kappa }}={\varvec{k}}/\mu _f\) is the porous medium conductivity, \({\varvec{k}}\) is the matrix permeability tensor, \(\mu _f\) is the fluid viscosity, \(\textbf{y}\) is a unit vector in the direction of the gravitational force, g is the constant acceleration due to gravity, and \(\rho\) and \(\rho _0\) are the fluid densities at the current and initial states, respectively. We assume that \(\rho\) is a linear function of \(T^{*}\)^{47,66}

$$\rho = \rho _{0}\left( 1 - \alpha \left( T^{*} - T_{0}^{*}\right) \right) , \quad \quad (5)$$

where \(\alpha\) is the thermal expansion coefficient, and \(T_{0}^{*}\) is the reference fluid temperature. We note that Eq. (5) is the simplest approximation, and one may easily adapt the proposed method when employing a more complex relationship as provided in^{67}. The heat advection–diffusion equation is defined as

$$\gamma \frac{\partial T^{*}}{\partial t^{*}} + \nabla \cdot \left( {\varvec{u}}^{*} T^{*}\right) - \nabla \cdot \left( K \nabla T^{*}\right) = {f_c}^{*} \quad \text {in } \Omega \times \mathbb {T}. \quad \quad (6)$$
Here, \(\gamma\) is the ratio between the porous medium heat capacity and the fluid heat capacity, K is the effective thermal conductivity, and \({f_c}^*\) is a sink/source. We follow Nield and Bejan^{18} and define dimensionless variables as follows
where H is the dimensional layer depth, and \(\Delta T^{*}\) is the temperature difference between the two boundary layers. From these dimensionless variables, we can rewrite Eqs. (3) and (4) as
where \(\partial \Omega _{p}\) and \(\partial \Omega _{q}\) are the pressure and flux boundaries (i.e., Dirichlet and Neumann boundary conditions), respectively. Here, \(\textrm{Ra}\) is the Rayleigh number
We then write Eq. (6) in dimensionless form as follows
where \(\partial \Omega _{{T}}\) is the temperature boundary (Dirichlet boundary condition), and \(\partial \Omega _{\textrm{in}}\) and \(\partial \Omega _{\textrm{out}}\) denote the inflow and outflow boundaries, respectively, which are defined as
Details of the discretization can be found in Kadeethum et al.^{6,32}, and the FOM source codes are provided in Kadeethum et al.^{32}. After the second step, we have \(\textrm{M}\) snapshots of FOM results associated with the different parametric configurations in \({\varvec{\mu }}\). Since the problem formulation is time-dependent, the output of the FOM solver for each parameter instance \({\varvec{\mu }}^{(i)}\) is the time series representing the time evolution of the primary variables. Thus, each snapshot contains approximations of the primary variables (\({\varvec{u}}_{h}\), \(p_h\), and \(T_h\)) at each timestep of the partition of the time domain \(\mathbb {T}\). Therefore, based on the training set cardinality \(\textrm{M}\) and the number \(N^t\) of timesteps, we have a total of \(N^t \textrm{M}\) training data to be employed in the subsequent steps. We note that because our finite element solver utilizes adaptive timestepping^{6,32}, each snapshot may have a different number of timesteps \(N^t\), i.e., \(N^t = N^t({\varvec{\mu }})\).
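The snapshot bookkeeping under adaptive timestepping can be sketched as follows; the FOM call is a random stand-in (300 dof is an assumed mesh size), and only the ragged per-\({\varvec{\mu }}\) storage mirrors the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_fom(mu):
    """Stand-in for one FOM run: returns one nodal field per timestep;
    the adaptive stepping makes the count N^t depend on mu."""
    n_t = int(rng.integers(40, 60))
    return [rng.standard_normal(300) for _ in range(n_t)]  # 300 dof (assumed)

training_set = [0.2, 0.5, 0.8]                 # M = 3 parameter instances
snapshots = {i: run_fom(mu) for i, mu in enumerate(training_set)}

# total number of training fields is sum_i N^t(mu^(i)), i.e., roughly M * N^t
total = sum(len(fields) for fields in snapshots.values())
```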
The third step compresses the information contained in the training snapshots produced by the second step. Kadeethum et al.^{6} provide detailed derivations and comparisons between linear and nonlinear compression. In particular, convolutional layers in their classical form cannot handle unstructured data (i.e., unstructured meshes), which are very common in scientific computing and, more specifically, finite element analysis. Hence, our goal is to develop a nonlinear compression that (1) consistently outperforms (or at least matches the accuracy of) linear compression and (2) is compatible with unstructured data.
To achieve this goal, we propose a nonlinear compression utilizing feedforward layers in combination with Barlow Twins (BT) self-supervised learning (SSL) (Fig. 6). BT for redundancy reduction was proposed by Zbontar et al.^{46}. It operates on a joint embedding of distorted images, producing two distorted images from an original one through a series of random cropping, resizing, horizontal flipping, color jittering, conversion to grayscale, Gaussian blurring, and solarization. Since we operate not on structured data (images) but on unstructured data produced by a finite element solver, we employ only random-noise and Gaussian-blur operations to produce our noisy data set; see Fig. 6.
Let \({z}_1^{{\varvec{u}}}, \cdots , {z}_{\textrm{Q}}^{{\varvec{u}}}\), \({z}_1^p, \cdots , {z}_{\textrm{Q}}^p\), and \({z}_1^T, \cdots , {z}_{\textrm{Q}}^T\) be the nonlinear manifolds of \({\varvec{u}}_{h}\), \(p_h\), and \(T_h\), respectively. For compactness, we discuss only the primary variable \(T_h\); the same procedure holds for \({\varvec{u}}_{h}\) and \(p_h\). Our goal is to achieve \(\textrm{Q} \ll \textrm{M} N^t\), where \(\textrm{M} N^t\) is the total number of training data, which implies that our nonlinear manifolds can represent the training data in a much lower dimension. We employ a vanilla AE (using only feedforward layers), regularized by Barlow Twins SSL, to obtain \({\varvec{z}}^T = \left[ {z}_1^T, \cdots , {z}_{\textrm{Q}}^T \right]\). We do not use any batch normalization or dropout. A summary of the training process is presented in Algorithm 1, and the detailed implementation will be provided at https://github.com/sandialabs.
In short, during the training phase, our BT–AE model is composed of one encoder, one decoder, and one projector. The training entails two subtasks: the first is BT (encoder and projector), which takes place in the outer loop; the second is the training of the AE (encoder and decoder), which takes place in the inner loop. The reason for this procedure is twofold. First, Zbontar et al.^{46} state that BT works better with large batch sizes. Second, the AE generally requires a small batch size^{68,69}; our previous numerical experiments based on DC–AE^{6} also align with this statement. Consequently, we set the batch size of the outer loop to \(\textbf{B}_{\textrm{outer}} = 512\) and the batch size of the inner loop to \(\textbf{B}_{\textrm{inner}} = 32\).
Prior to the training, we distort our training set (i.e., create \(T_{h,A}\left( t, {\varvec{\mu }}\right)\) and \(T_{h,B}\left( t, {\varvec{\mu }}\right)\) from \(T_h\left( t, {\varvec{\mu }}\right)\)) through a series of two operations. First, random noise is added as follows

$$\widetilde{T_{h,A}}\left( t, {\varvec{\mu }}\right) = T_h\left( t, {\varvec{\mu }}\right) + \epsilon \, \sigma \left( T_h\right) \mathscr {G}\left( 0,1\right) , \quad \widetilde{T_{h,B}}\left( t, {\varvec{\mu }}\right) = T_h\left( t, {\varvec{\mu }}\right) + \epsilon \, \sigma \left( T_h\right) \mathscr {G}\left( 0,1\right) ,$$

where \(\widetilde{T_{h,A}}\left( t, {\varvec{\mu }}\right)\) and \(\widetilde{T_{h,B}}\left( t, {\varvec{\mu }}\right)\) are the intermediate distorted input data. The constant \(\epsilon\), which is set to 0.1, determines the noise level, as it is multiplied with the standard deviation \(\sigma \left( T_h\right)\) of the input field. \(\mathscr {G}\left( 0,1\right)\) is a random value sampled from the standard normal distribution, with mean zero and standard deviation one.
Subsequently, we pass \(\widetilde{T_{h,A}}\left( t, {\varvec{\mu }}\right)\) and \(\widetilde{T_{h,B}}\left( t, {\varvec{\mu }}\right)\) through a Gaussian blur operation to obtain \(T_{h,A}\left( t, {\varvec{\mu }}\right)\) and \(T_{h,B}\left( t, {\varvec{\mu }}\right)\).
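The two distortion operations can be sketched on a one-dimensional stand-in field as follows; applying the blur directly along the dof vector is an assumption, since the actual neighborhood structure on the unstructured mesh is not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def distort(T_h, eps=0.1, sigma=2.0):
    """One distorted view of a nodal field T_h (1D array of dof values):
    additive noise scaled by the field's standard deviation, followed by a
    Gaussian blur applied along the dof vector (ordering is an assumption)."""
    T_noisy = T_h + eps * T_h.std() * rng.standard_normal(T_h.shape)
    x = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-(x**2) / (2 * sigma**2))
    kernel /= kernel.sum()                      # normalized Gaussian kernel
    return np.convolve(T_noisy, kernel, mode="same")

T_h = np.sin(np.linspace(0.0, np.pi, 200))      # stand-in temperature snapshot
T_A, T_B = distort(T_h), distort(T_h)           # the two views fed to BT
```

The two views differ only in their noise draws, so both remain strongly correlated with the undistorted field.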
We train for 50 epochs; see Algorithm 1. The outer loop works as follows: training BT begins by passing \(T_{h,A}\left( t, {\varvec{\mu }}\right)\) and \(T_{h,B}\left( t, {\varvec{\mu }}\right)\) to the encoder (note that there is only one encoder), resulting in \({\varvec{z}}_A^T\left( t, {\varvec{\mu }}\right)\) and \({\varvec{z}}_B^T\left( t, {\varvec{\mu }}\right)\). We then use \({\varvec{z}}_A^T\left( t, {\varvec{\mu }}\right)\) and \({\varvec{z}}_B^T\left( t, {\varvec{\mu }}\right)\) as input to the projector, resulting in the cross-correlation matrix \(\textbf{C}^{T}\left( t, {\varvec{\mu }}\right)\). \(\textbf{C}^{T}\left( t, {\varvec{\mu }}\right)\) is a square matrix with the dimensionality of the projector’s output, and its values range between \(-1\) (perfect anticorrelation) and 1 (perfect correlation).
The Barlow Twins loss \(\mathscr {L}_{\textrm{BT}}^{T}\) (BT loss) is then calculated as

$$\mathscr {L}_{\textrm{BT}}^{T} = \mathscr {L}_{\textrm{I}}^{T} + \lambda \mathscr {L}_{\textrm{RR}}^{T},$$

where

$$\mathscr {L}_{\textrm{I}}^{T} = \sum _{i}\left( 1 - \textbf{C}_{i i}^T\left( t, {\varvec{\mu }}\right) \right) ^{2}$$

and

$$\mathscr {L}_{\textrm{RR}}^{T} = \sum _{i}\sum _{j \ne i}\left( \textbf{C}_{i j}^T\left( t, {\varvec{\mu }}\right) \right) ^{2},$$
where \(\textbf{C}_{i i}^T\left( t, {\varvec{\mu }}\right)\) denotes the ith diagonal entry of \(\textbf{C}^{T}\left( t, {\varvec{\mu }}\right)\), \(\lambda\) is a positive constant, set to \(5 \times 10^{-3}\) as recommended by Zbontar et al.^{46}, and \(\textbf{C}_{i j}^T\) are the off-diagonal entries of \(\textbf{C}^{T}\). In short, training the BT part drives the diagonal entries of \(\textbf{C}^{T}\) toward 1 and its off-diagonal entries toward 0, teaching the encoder and projector to remove the noise from the distorted data, \(T_{h,A}\left( t, {\varvec{\mu }}\right)\) and \(T_{h,B}\left( t, {\varvec{\mu }}\right)\), and to construct a representation that preserves as much information about \(T_h\left( t, {\varvec{\mu }}\right)\) as possible.
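A minimal sketch of the cross-correlation matrix and the BT loss, in the standard form of Zbontar et al.^{46} rather than the authors' exact implementation:

```python
import numpy as np

def barlow_twins_loss(zA, zB, lam=5e-3):
    """Cross-correlation matrix C of two batches of embeddings (batch, dim)
    and the BT loss: invariance term on the diagonal, redundancy-reduction
    term (weighted by lambda) on the off-diagonal."""
    zA = (zA - zA.mean(0)) / zA.std(0)          # standardize per dimension
    zB = (zB - zB.mean(0)) / zB.std(0)
    C = zA.T @ zB / zA.shape[0]                 # entries lie in [-1, 1]
    on_diag = ((1.0 - np.diag(C)) ** 2).sum()           # drives C_ii -> 1
    off_diag = (C**2).sum() - (np.diag(C) ** 2).sum()   # drives C_ij -> 0
    return on_diag + lam * off_diag, C

rng = np.random.default_rng(1)
z1 = rng.standard_normal((512, 16))             # outer-loop batch of embeddings
z2 = rng.standard_normal((512, 16))
loss_same, C_same = barlow_twins_loss(z1, z1)   # identical views: small loss
loss_diff, _ = barlow_twins_loss(z1, z2)        # unrelated views: large loss
```

For identical views the diagonal of `C_same` is exactly 1 and the loss is nearly zero, while for unrelated embeddings the invariance term dominates and the loss is large.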
Here, we follow the training procedures used by Kadeethum et al.^{6,70}. We use the ADAM algorithm^{71} to adjust the learnable parameters of the encoder (\(\textrm{W}\) and \(\textrm{b}\)) and projector (\(\textrm{W}\) and \(\textrm{b}\)). The learning rate (\(\eta\)) follows a cosine annealing schedule^{72}

$$\eta _{c} = \eta _{\min } + \frac{1}{2}\left( \eta _{\max } - \eta _{\min }\right) \left( 1 + \cos \left( \frac{\mathrm {step_c}}{\mathrm {step_f}}\pi \right) \right) , \quad \quad (17)$$

where \(\eta _{c}\) is the learning rate at step \(\mathrm {step_c}\), \(\eta _{\min }\) is the minimum learning rate, set to \(1 \times 10^{-16}\), \(\eta _{\max }\) is the maximum (initial) learning rate, set to \(1 \times 10^{-4}\), \(\mathrm {step_c}\) is the current step, and \(\mathrm {step_f}\) is the final step. We note that each step refers to each time we perform backpropagation, updating both the encoder’s and the projector’s parameters.
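A sketch of this schedule, assuming the SGDR-style cosine form of^{72}:

```python
import math

def cosine_lr(step_c, step_f, eta_min=1e-16, eta_max=1e-4):
    """Cosine annealing from eta_max at the first step down to eta_min at the
    final step step_f (assumed to match the schedule described above)."""
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1.0 + math.cos(math.pi * step_c / step_f)
    )

# monotone decay over the run: eta_max at step 0, eta_min at the final step
schedule = [cosine_lr(s, 1000) for s in range(0, 1001, 250)]
```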
The inner loop is as follows: training the AE starts by obtaining \({\varvec{z}}^T\left( t, {\varvec{\mu }}\right)\) from passing \(T_h\left( t, {\varvec{\mu }}\right)\) to the encoder. We then use \({\varvec{z}}^T\left( t, {\varvec{\mu }}\right)\) to reconstruct \(\widehat{T}_h\left( t, {\varvec{\mu }}\right)\) through the decoder. Subsequently, we calculate our data compression loss or AE loss (\(\mathscr {L}_{\textrm{AE}}^{T}\)), the mean squared reconstruction error over the inner-loop batch,

$$\mathscr {L}_{\textrm{AE}}^{T} = \frac{1}{\textbf{B}_{\textrm{inner}}}\sum _{b=1}^{\textbf{B}_{\textrm{inner}}}\left\| T_h\left( t, {\varvec{\mu }}\right) - \widehat{T}_h\left( t, {\varvec{\mu }}\right) \right\| _{2}^{2}. \quad \quad (18)$$
Similar to the training of BT, we use ADAM to adjust the learnable parameters of the encoder (\(\textrm{W}\) and \(\textrm{b}\)) and decoder (\(\textrm{W}\) and \(\textrm{b}\)) according to the gradient of Eq. (18). The \(\eta _{c}\) is adjusted by Eq. (17). In contrast to the training of BT, we use \(\eta _{\min } = 1 \times 10^{-16}\) and \(\eta _{\max } = 1 \times 10^{-5}\).
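The nested outer/inner structure of the training (Algorithm 1) can be sketched as follows; the commented lines are placeholders marking where the BT and AE backpropagation steps would occur, and only the batch bookkeeping reflects the text:

```python
import numpy as np

rng = np.random.default_rng(2)
snapshots = rng.standard_normal((2048, 64))   # stand-in for the M*N^t fields

B_OUTER, B_INNER, EPOCHS = 512, 32, 2         # batch sizes from the text
n_bt_steps = n_ae_steps = 0                   # counters, for illustration only

for epoch in range(EPOCHS):
    for i in range(0, len(snapshots), B_OUTER):         # outer loop: BT
        outer_batch = snapshots[i:i + B_OUTER]
        # ...distort outer_batch, ADAM step on encoder + projector (BT loss)...
        n_bt_steps += 1
        for j in range(0, len(outer_batch), B_INNER):   # inner loop: AE
            inner_batch = outer_batch[j:j + B_INNER]
            # ...ADAM step on encoder + decoder (AE loss of Eq. (18))...
            n_ae_steps += 1
```

Each outer BT update on a 512-sample batch is thus followed by sixteen AE updates on its 32-sample sub-batches.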
Following the training of the BT–AE, we establish the manifold \({\varvec{z}}^T\left( t, {\varvec{\mu }}\right) , \quad \forall t \in \mathbb {T} \, \text {and} \, \forall {\varvec{\mu }} \in \mathbb {P}\) during the fourth step shown in Fig. 6. The data available for this task are the pairs \((t, {\varvec{\mu }})\) and \({\varvec{z}}^T\left( t, {\varvec{\mu }}\right)\) in the training set. We achieve this through the training of artificial neural networks (ANN). Following Kadeethum et al.^{6,43}, our ANN has five hidden layers, each with seven neurons, and uses tanh as the activation function. Here, we use the mean squared error (\({\textrm{MSE}^{{\varvec{z}}^T}}\)) between the ANN prediction \(\widehat{{\varvec{z}}}^T\left( t, {\varvec{\mu }}\right)\) and the true latent variables as the network loss function, defined as

$${\textrm{MSE}^{{\varvec{z}}^T}} = \frac{1}{\textrm{M} N^t}\sum \left\| {\varvec{z}}^T\left( t, {\varvec{\mu }}\right) - \widehat{{\varvec{z}}}^T\left( t, {\varvec{\mu }}\right) \right\| _{2}^{2}. \quad \quad (19)$$
To minimize Eq. (19), we use the ADAM algorithm to adjust each neuron’s \(\textrm{W}\) and \(\textrm{b}\), with a batch size of 32, a learning rate of 0.001, and 10,000 epochs, and we normalize both our input (\(t, {\varvec{\mu }}\)) and output (\({\varvec{z}}^T\)) to [0, 1]. To prevent overfitting, we follow early stopping and generalized cross-validation criteria^{4,73,74}. Note that instead of literally stopping the training cycle, we only save the set of trained weights and biases to be used in the online phase whenever the current validation loss is lower than the lowest validation loss from all previous training cycles.
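The best-validation checkpointing rule can be sketched with a synthetic loss history:

```python
import math

# synthetic validation-loss history; the best model here occurs at epoch 2
val_losses = [0.9, 0.5, 0.3, 0.4, 0.35]

best_epoch, best_loss = -1, math.inf
for epoch, val in enumerate(val_losses):
    # ...one epoch of ADAM updates on the ANN weights would happen here...
    if val < best_loss:            # save weights/biases only on improvement
        best_epoch, best_loss = epoch, val
# training always runs the full epoch budget; the retained model is the one
# saved at best_epoch, not the final-epoch one
```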
During the online phase (the fifth step shown in Fig. 6), we utilize the trained ANN and the trained decoder to approximate \(\widehat{{T}}_{h}\left( \cdot ; t, {\varvec{\mu }}\right)\) for each query (i.e., a pair of \((t, {\varvec{\mu }})\)) through

$$\widehat{{\varvec{z}}}^T\left( t, {\varvec{\mu }}\right) = \textrm{ANN}\left( t, {\varvec{\mu }}\right)$$

and, subsequently,

$$\widehat{{T}}_{h}\left( \cdot ; t, {\varvec{\mu }}\right) = \textrm{decoder}\left( \widehat{{\varvec{z}}}^T\left( t, {\varvec{\mu }}\right) \right) .$$
We note that, in the prediction phase, our ROM can be evaluated at any timestamp, including ones that do not appear in the training phase (i.e., any t that lies within \([t^{0}, \tau ]\)), because our ROM treats the time domain \(\mathbb {T}\) continuously. Moreover, in contrast to the FOM, the ROM is not bound by the CFL condition and can predict the quantities of interest at any specific time without intermediate computation. Hence, our proposed framework can reduce computational time significantly.
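The data flow of one online query can be sketched as follows, with random linear maps standing in for the trained ANN and decoder; the latent dimension and mesh size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, N_DOF = 8, 500                        # latent dim and dof count (assumed)
W_ann = rng.standard_normal((Q, 2))      # stand-in for the trained ANN
W_dec = rng.standard_normal((N_DOF, Q))  # stand-in for the trained decoder

def query(t, mu):
    """One online query: (t, mu) -> latent vector -> full temperature field."""
    z_hat = W_ann @ np.array([t, mu])    # ANN maps the query to z^T(t, mu)
    return W_dec @ z_hat                 # decoder lifts z^T back to the mesh

T_hat = query(0.37, 250.0)               # any t in [t^0, tau]; no CFL limit
```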
Data availability
Our model scripts and all data generated or analyzed during this study will be made publicly available through the Sandia National Laboratories software portal, a hub for GitHub-hosted open source projects (https://github.com/sandialabs).
References
Hesthaven, J., Rozza, G. & Stamm, B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations (Springer, 2016).
Xiao, D., Fang, F., Pain, C. & Hu, G. Non-intrusive reduced-order modelling of the Navier–Stokes equations based on RBF interpolation. Int. J. Numer. Methods Fluids 79, 580–595 (2015).
Xiao, D. et al. Non-intrusive reduced order modelling of the Navier–Stokes equations. Comput. Methods Appl. Mech. Eng. 293, 522–541 (2015).
Hesthaven, J. & Ubbiali, S. Non-intrusive reduced order modeling of nonlinear problems using neural networks. J. Comput. Phys. 363, 55–78 (2018).
Fresca, S., Dede, L. & Manzoni, A. A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs. J. Sci. Comput. 87, 1–36 (2021).
Kadeethum, T. et al. Non-intrusive reduced order modeling of natural convection in porous media using convolutional autoencoders: Comparison with linear subspace techniques. Adv. Water Resour. 20, 104098 (2022).
Ahmed, S., San, O., Rasheed, A. & Iliescu, T. Nonlinear proper orthogonal decomposition for convection-dominated flows. Phys. Fluids 33, 121702 (2021).
Kim, Y., Choi, Y., Widemann, D. & Zohdi, T. A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. Comput. Phys. 20, 110841 (2021).
Kim, Y., Choi, Y., Widemann, D. & Zohdi, T. Efficient nonlinear manifold reduced order model (2020). arXiv:2011.07727 (arXiv preprint).
Chatterjee, A. An introduction to the proper orthogonal decomposition. Curr. Sci. 20, 808–817 (2000).
Willcox, K. & Peraire, J. Balanced model reduction via the proper orthogonal decomposition. AIAA J. 40, 2323–2330 (2002).
Choi, Y., Coombs, D. & Anderson, R. SNS: A solution-based nonlinear subspace method for time-dependent model order reduction. SIAM J. Sci. Comput. 42, A1116–A1146 (2020).
Kim, Y., Wang, K. & Choi, Y. Efficient space-time reduced order model for linear dynamical systems in python using less than 120 lines of code. Mathematics 9, 1690 (2021).
Taron, J. & Elsworth, D. Thermal–hydrologic–mechanical–chemical processes in the evolution of engineered geothermal reservoirs. Int. J. Rock Mech. Min. Sci. 46, 855–864 (2009).
Nick, H., Raoof, A., Centler, F., Thullner, M. & Regnier, P. Reactive dispersive contaminant transport in coastal aquifers: Numerical simulation of a reactive henry problem. J. Contam. Hydrol. 145, 90–104 (2013).
Zheng, C. & Bennett, G. Applied Contaminant Transport Modeling Vol. 2 (Wiley, 2002).
Rutqvist, J. et al. Effects of THM coupling in sparsely fractured rocks. A numerical study of THM effects on the near-field safety of a hypothetical nuclear waste repository: BMT1 of the DECOVALEX III project. Part 3. Int. J. Rock Mech. Min. Sci. 42, 745–755 (2005).
Nield, D. & Bejan, A. Convection in Porous Media Vol. 3 (Springer, 2006).
Park, S. W., Lee, J., Yoon, H. & Shin, S. Microfluidic investigation of salinityinduced oil recovery in porous media during chemical flooding. Energy Fuels 35, 4885–4892 (2021).
Davison, S. M., Yoon, H. & Martinez, M. J. Pore scale analysis of the impact of mixing-induced reaction dependent viscosity variations. Adv. Water Resour. 38, 70–80 (2012).
Pruess, K. TOUGH user’s guide (1987).
Rutqvist, J. An overview of TOUGHbased geomechanics models. Comput. Geosci. 108, 56–63 (2017).
Bean, J., Sanchez, M. & Arguello, J. Sierra mechanics, an emerging massively parallel hpc capability, for use in coupled THMC analyses of HLW repositories in clay/shale. In 5th International meeting Book of Abstracts (2012).
Aagaard, B., Williams, C. & Knepley, M. PyLith: A finiteelement code for modeling quasistatic and dynamic crustal deformation. Eos Trans. AGU 89, 25 (2008).
Kolditz, O. et al. OpenGeoSys: An open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Environ. Earth Sci. 67, 589–599 (2012).
Obeysekara, A. et al. Modelling stress-dependent single and multiphase flows in fractured porous media based on an immersed-body method with mesh adaptivity. Comput. Geotech. 103, 229–241 (2018).
Prévost, J. H. Dynaflow Vol. 8544 (Princeton University, 1983).
HosseiniMehr, M., Vuik, C. & Hajibeygi, H. Adaptive dynamic multilevel simulation of fractured geothermal reservoirs. J. Comput. Phys. X 5, 100061 (2020).
Matthai, S. et al. Numerical simulation of multiphase fluid flow in structurally complex reservoirs. Geol. Soc. Lond. Spec. Publ. 292, 405–429 (2007).
Keilegavlen, E. et al. PorePy: An open-source software for simulation of multiphysics processes in fractured porous media (2019). arXiv:1908.09869 (arXiv preprint).
Kadeethum, T., Lee, S. & Nick, H. Finite element solvers for Biot’s poroelasticity equations in porous media. Math. Geosci. 52, 977–1015 (2020).
Kadeethum, T., Lee, S., Ballarin, F., Choo, J. & Nick, H. A locally conservative mixed finite element framework for coupled hydro-mechanical-chemical processes in heterogeneous porous media. Comput. Geosci. 25, 104774 (2021).
Diersch, H. Finite element modelling of recirculating density-driven saltwater intrusion processes in groundwater. Adv. Water Resour. 11, 25–43 (1988).
Frolkovič, P. & De Schepper, H. Numerical modelling of convection dominated transport coupled with density driven flow in porous media. Adv. Water Resour. 24, 63–72 (2000).
Kolditz, O., Ratke, R., Diersch, H. & Zielke, W. Coupled groundwater flow and verification of variable density flow and transport: 1. Transport models. Adv. Water Resour. 21, 27–46 (1998).
Carlberg, K., Choi, Y. & Sargsyan, S. Conservative model reduction for finitevolume models. J. Comput. Phys. 371, 280–314 (2018).
Ballarin, F., D’Amario, A., Perotto, S. & Rozza, G. A POD-selective inverse distance weighting method for fast parametrized shape morphing. Int. J. Numer. Methods Eng. 117, 860–884 (2019).
Venturi, L., Ballarin, F. & Rozza, G. A weighted POD method for elliptic PDEs with random inputs. J. Sci. Comput. 81, 136–153 (2019).
Choi, Y. & Carlberg, K. Space-time least-squares Petrov–Galerkin projection for nonlinear model reduction. SIAM J. Sci. Comput. 41, A26–A58 (2019).
Copeland, D., Cheung, S., Huynh, K. & Choi, Y. Reduced order models for Lagrangian hydrodynamics. Comput. Methods Appl. Mech. Eng. 388, 114259 (2022).
Hoang, C., Choi, Y. & Carlberg, K. Domain-decomposition least-squares Petrov–Galerkin (DD-LSPG) nonlinear model reduction. Comput. Methods Appl. Mech. Eng. 384, 113997 (2021).
Choi, Y., Brown, P., Arrighi, W., Anderson, R. & Huynh, K. Space-time reduced order model for large-scale linear dynamical systems with application to Boltzmann transport problems. J. Comput. Phys. 424, 109845 (2021).
Kadeethum, T., Ballarin, F. & Bouklas, N. Data-driven reduced order modeling of poroelasticity of heterogeneous media based on a discontinuous Galerkin approximation. GEM Int. J. Geomath. 12, 1–45 (2021).
DeCaria, V., Iliescu, T., Layton, W., McLaughlin, M. & Schneier, M. An artificial compression reduced order model. SIAM J. Numer. Anal. 58, 565–589 (2020).
Cleary, J. & Witten, I. Data compression using adaptive coding and partial string matching. IEEE Trans. Commun. 32, 396–402 (1984).
Zbontar, J., Jing, L., Misra, I., LeCun, Y. & Deny, S. Barlow twins: Selfsupervised learning via redundancy reduction (2021). arXiv:2103.03230 (arXiv preprint).
Zhang, C., Zarrouk, S. & Archer, R. A mixed finite element solver for natural convection in porous media using automated solution techniques. Comput. Geosci. 96, 181–192 (2016).
Elder, J. Transient convection in a porous medium. J. Fluid Mech. 27, 609–623 (1967).
Simpson, M. & Clement, T. Theoretical analysis of the worthiness of Henry and Elder problems as benchmarks of density-dependent groundwater flow models. Adv. Water Resour. 26, 17–31 (2003).
Diersch, H. & Kolditz, O. Variable-density flow and transport in porous media: Approaches and challenges. Adv. Water Resour. 25, 899–944 (2002).
Yoon, H., Valocchi, A., Werth, C. & Dewers, T. Pore-scale simulation of mixing-induced calcium carbonate precipitation and dissolution in a microfluidic pore network. Water Resour. Res. 48, 5 (2012).
Yoon, H., Kang, Q. & Valocchi, A. Lattice Boltzmann-based approaches for pore-scale reactive transport. Rev. Mineral. Geochem. 80, 393–431 (2015).
Yoon, H., Chojnicki, K. & Martinez, M. Pore-scale analysis of calcium carbonate precipitation and dissolution kinetics in a microfluidic device. Environ. Sci. Technol. 53, 14233–14242 (2019).
Yoon, H. et al. Adaptation of Delftia acidovorans for degradation of 2,4-dichlorophenoxyacetate in a microfluidic porous medium. Biodegradation 25, 595–604 (2014).
Swedish Nuclear Power Inspectorate. The International Hydrocoin Project: Background and Results (Organization for Economic Cooperation and Development, 1987).
Flemisch, B. et al. Benchmarks for singlephase flow in fractured porous media. Adv. Water Resour. 111, 239–258 (2018).
Paul-Dubois-Taine, A. & Amsallem, D. An adaptive and efficient greedy procedure for the optimal training of parametric reduced-order models. Int. J. Numer. Methods Eng. 102, 1262–1292 (2015).
Vasile, M. et al. Adaptive sampling strategies for non-intrusive POD-based surrogates. Eng. Comput. 20, 20 (2013).
Choi, Y., Boncoraglio, G., Anderson, S., Amsallem, D. & Farhat, C. Gradient-based constrained optimization using a database of linear reduced-order models. J. Comput. Phys. 423, 109787 (2020).
Raissi, M., Perdikaris, P. & Karniadakis, G. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
Kadeethum, T., Jørgensen, T. M. & Nick, H. M. Physicsinformed neural networks for solving nonlinear diffusivity and Biot’s equations. PLoS One 15, e0232683 (2020).
Huang, X., Liu, M., Belongie, S. & Kautz, J. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), 172–189 (2018).
Xiao, D. Error estimation of the parametric non-intrusive reduced order model using machine learning. Comput. Methods Appl. Mech. Eng. 355, 513–534 (2019).
Jacquier, P., Abdedou, A., Delmas, V. & Soulaïmani, A. Non-intrusive reduced-order modeling using uncertainty-aware deep neural networks and proper orthogonal decomposition: Application to flood modeling. J. Comput. Phys. 424, 109854 (2021).
Joseph, D. Stability of Fluid Motions I Vol. 27 (Springer, 2013).
Chen, Z., Huan, G. & Ma, Y. Computational Methods for Multiphase Flows in Porous Media Vol. 2 (Siam, 2006).
Lake, L., Johns, R., Rossen, B. & Pope, G. Fundamentals of Enhanced Oil Recovery (Society of Petroleum Engineers, 2014).
Wang, H., Ren, K. & Song, J. A closer look at batch size in minibatch training of deep autoencoders. In 2017 3rd IEEE International Conference on Computer and Communications (ICCC), 2756–2761 (IEEE, 2017).
Karras, T., Aila, T., Laine, S. & Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation (2017). arXiv:1710.10196 (arXiv preprint).
Kadeethum, T. et al. A framework for data-driven solution and parameter estimation of PDEs using conditional generative adversarial networks. Nat. Comput. Sci. 1, 819–829. https://doi.org/10.1038/s43588-021-00171-3 (2021).
Kingma, D. & Ba, J. Adam: A method for stochastic optimization (2014). arXiv:1412.6980 (arXiv preprint).
Loshchilov, I. & Hutter, F. SGDR: Stochastic gradient descent with warm restarts (2016). arXiv:1608.03983 (arXiv preprint).
Prechelt, L. Early stopping - but when? In Neural Networks: Tricks of the Trade 55–69 (Springer, 1998).
Prechelt, L. Automatic early stopping using cross validation: Quantifying the criteria. Neural Netw. 11, 761–767 (1998).
Acknowledgements
TK and HY were supported by the Laboratory Directed Research and Development program (218328) at Sandia National Laboratories and the US Department of Energy Office of Fossil Energy and Carbon Management, Science-Informed Machine Learning to Accelerate Real Time Decisions in Subsurface Applications - Carbon Storage (SMART-CS) initiative. FB thanks the project “Numerical modeling of flows in porous media” funded by the Catholic University of the Sacred Heart, and the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Actions, Grant agreement 872442 (ARIA). DO acknowledges support from Los Alamos National Laboratory’s Laboratory Directed Research and Development Early Career Award (20200575ECR). YC acknowledges LDRD funds (21FS042) from Lawrence Livermore National Laboratory (LLNL-JRNL-831095). Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344. NB acknowledges startup support from the Sibley School of Mechanical and Aerospace Engineering, Cornell University. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Author information
Contributions
T.K.: conceptualization, formal analysis, software, validation, writing—original draft, writing—review and editing. F.B.: conceptualization, formal analysis, supervision, validation, writing—review and editing. D.O.: conceptualization, formal analysis, supervision, validation, writing—review and editing. Y.C.: conceptualization, formal analysis, supervision, validation, writing—review and editing. N.B.: conceptualization, formal analysis, funding acquisition, supervision, writing—review and editing. H.Y.: conceptualization, formal analysis, funding acquisition, supervision, writing—review and editing.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kadeethum, T., Ballarin, F., O’Malley, D. et al. Reduced order modeling for flow and transport problems with Barlow Twins self-supervised learning. Sci Rep 12, 20654 (2022). https://doi.org/10.1038/s41598-022-24545-3