VpROM: A novel Variational AutoEncoder-boosted Reduced Order Model for the treatment of parametric dependencies in nonlinear systems

Reduced Order Models (ROMs) are of considerable importance in many areas of engineering in which computational time presents difficulties. Established approaches employ projection-based reduction such as Proper Orthogonal Decomposition; however, such methods can become inefficient or fail in the case of parametric or strongly nonlinear models. Such limitations are usually tackled via a library of local reduction bases, each of which is valid for a given parameter vector. The success of such methods, however, is strongly reliant upon the method used to relate the parameter vectors to the local bases; this is typically achieved using clustering or interpolation methods. We propose the replacement of these methods with a Variational Autoencoder (VAE) to be used as a generative model which can infer the local basis corresponding to a given parameter vector in a probabilistic manner. The resulting VAE-boosted parametric ROM \emph{VpROM} still retains the physical insights of a projection-based method but also allows for better treatment of problems where model dependencies or excitation traits cause the dynamic behavior to span multiple response regimes. Moreover, the probabilistic treatment of the VAE representation allows for uncertainty quantification on the reduction bases, which may then be propagated to the ROM response. The performance of the proposed approach is validated on an open-source simulation benchmark featuring hysteresis and multi-parametric dependencies, and on a large-scale wind turbine tower characterised by nonlinear material behavior and model uncertainty.


Introduction
The use of Reduced Order Models (ROMs) in structural dynamics simulations forms a main ingredient of research revolving around the use of accelerated surrogates for the purpose of Structural Health Monitoring (23; 62), digital twinning (68; 24) and uncertainty quantification (70; 47). Reduced order modeling techniques are often categorised in terms of purely data-driven methods or physics-based methods. Purely data-driven methods employ input and output simulations, or even recorded data, of the system of interest to learn the underlying dynamics (79; 34). The advent of computing power, leading to increasingly deep machine learning architectures, has rendered such methods extremely capable of recreating even complex dynamics (82; 13). However, such methods always remain limited by the breadth and quality of the data used to train them (58; 28). Physics-based methods, on the other hand, allow for the creation of structured Reduced Order Model (ROM) representations, initiating from the equations of motion and projecting these onto a lower-dimensional space upon which they can be solved (14; 60; 51). Such a formulation maintains a stronger physics connotation and is, in this sense, often easier to interpret.
Reduction methods that rely on the principle of projection, and that are capable of addressing nonlinear and/or parametric systems, often exploit Proper Orthogonal Decomposition (POD) (9). These methods involve the execution of evaluations of the Full Order Model (FOM) and the use of the output response of these simulations, the so-called snapshots, for determining an appropriate reduction basis (15; 31). An alternative to POD methods is that of Proper Generalized Decomposition (PGD) (18). Whilst PGD is in some ways inspired by POD, it has the key difference of being an a priori method, requiring no simulation of the FOM in order to construct the ROM. PGD methods have also had significant success in various dynamical systems (52; 17).
A straightforward approach to POD-based Model Order Reduction (MOR) consists in constructing a global POD basis, in which data snapshots from simulations carried out at different points in the parameter/phase space are all stacked together. From this collection, a single (global) projection basis is then extracted. This method is widely used and has proven robust performance for a number of applications (1; 19; 29); however, in the case of nonlinear systems, the POD only provides an optimal approximate linear manifold (42). As such, with moderate to large nonlinearities, the respective POD-based reduction can become inefficient and require the retention of several modes (at the cost of reduction), or even fail entirely (83). Similarly, with parametric models, a global POD reduction basis can yield poor performance or become computationally inefficient (6).
To this end, alternative strategies can be employed as a remedy, relying on either enriched low-order subspaces or a pool of local POD bases. The first technique is exemplified in (81; 48), where the authors make use of enriched reduction bases, in which underlying linear modal or vibration modes-based subspaces are enriched with modal derivatives in order to capture moderate geometric nonlinearities. The second approach, on the other hand, relies on a library of pre-assembled, local ROMs, which are highly successful in approximating localised phenomena (61; 41). In this context, local ROMs can be defined with respect to time, implying the assembly of projection subspaces, and thus ROMs, which only capture the dynamics within a certain time window of the full behavior (5). Thus, each ROM of the corresponding library refers to a different time window of the response, establishing locality. Alternatively, the local nature of these bases may refer to certain regions or subdomains of the input parameter space (49), estimated through uniform (35; 64) or adaptive error estimators (54), which perform a model or basis selection operation between the local ROMs during model evaluations. Such local bases can better deal with more heavily nonlinear and parameter-dependent systems, as they enable an indirect form of clustering of the parameter space (including time if needed), thus providing an accurate subspace approximation for the governing equations of motion at any parametric sample (33).
Although this family of schemes can be considered an established pathway when deriving an actionable ROM that serves across a broad range of operational conditions, efficiency can be compromised and is highly sensitive to the technique used to select a basis from the assembled library of local ROMs. In this context, state-of-the-art techniques employ clustering (30) or interpolation (5) operations performed on the proper manifold to maximise precision. On the other hand, recent contributions suggest that machine-learning-inspired techniques can increase utility and improve performance of ROMs, whilst achieving an automated training process (27; 20). Inspired by the latter, in this work, we suggest substituting interpolation- or clustering-based schemes with an ML-based generative model, while retaining the projection-based reduction that guarantees domain-wide accuracy. Our approach aims to increase the efficiency and robustness of the ROM by approximating the generalised mapping between parametric inputs and local projection bases, and the resulting ROMs, via the use of generative modeling.
Generative models are a group of statistical models that can serve for generating outputs from observed/simulated systems, under unseen initial conditions, loads, or for properties outside those used in the original training set. This can be accomplished via conditioning on a parametric vector that reflects the characteristics of the system at hand. Formally, such a generative model learns the joint distribution $P(X, p)$ of the observed data $X$ and the parameter set $p$. As such, a generative model learns the distribution of the data itself, hence allowing new samples to be drawn for simulation of new (previously unobserved or not simulated) outputs. More relevantly for our case, a generative model learns the conditional distribution $P(X|p)$, which is the distribution of the data given a certain parameter vector. The utility of such a generative model is twofold: via learning the joint distribution of reduction bases and associated model parameters, we can generate the local basis corresponding to any parameter sample, whilst further capturing the uncertainty of this inference.
One popular branch of generative models is deep generative models, which make use of deep neural networks as powerful and flexible nonlinear approximators that are suitable for modelling complex dependencies. The two most common examples of modern deep generative models are the Generative Adversarial Network (GAN) (32) and the Variational AutoEncoder (VAE) (44). Both architectures have garnered significant interest in a wide range of fields, ranging from the traditional machine learning subjects of computer vision (32) and natural language processing (12; 53), to the domains of life sciences for novel molecule development (65; 40) and de-noising and analysis of electron microscopy images (59; 63). Recently, diffusion models have become the state of the art, beating the performance of GANs and VAEs in typical generative tasks such as image and video generation (21; 36). With regards to structural engineering, significant works utilising deep generative models include the application of GANs to nonlinear modal analysis (72) and the use of VAEs for wind turbine blade fatigue estimation (50).
In this work, we tackle the problem of generating local bases at unseen parameter/input values, making use of a VAE as a nonlinear generative model. The VAE model was chosen due to its proven ability to learn highly nonlinear manifolds and to work efficiently with high-dimensional data. VAEs have the additional advantage of estimating uncertainty on the predicted bases. The VAE model was used to replace the clustering or interpolation methods previously used for basis generation (30; 5), with the aim to improve accuracy, tackle high-dimensional dependencies, and allow for uncertainty quantification. The structure of this paper is organised as follows: Section 2 gives a background on parametric reduced order modelling and the current state of the art regarding projection methods for treating nonlinear parametric systems. Section 3 then describes the VAE model and how it is used in this work to replace the current state-of-the-art interpolation or clustering methods in the parametric ROM. Section 4 then demonstrates the use of the VAE-boosted parametric ROM on two example problems: firstly, on a three-dimensional shear frame, modeled with Bouc-Wen hysteretic nonlinearities in its joints, with multi-parametric behavior depending on system properties and excitation characteristics; secondly, on a large-scale Finite Element (FE) model of a wind turbine tower undergoing plastic deformation. In the latter case, the methodology is combined with a hyper-reduction scheme to demonstrate the full potential for reduction in computational time. In both cases, the VAE is shown to outperform the state-of-the-art methods whilst also allowing for uncertainty in the ROM prediction to be captured. Section 5 concludes the paper by summarising the work and results achieved, as well as the limitations of the method, and by offering perspectives on future developments.

Parametric Reduced Order Modelling
The context of our work is the physics-based reduction of parameterised dynamical systems to derive an equivalent low-order surrogate of a FOM, namely FE formulations adopted for nonlinear structural dynamics simulations. Such reduced representations provide accelerated system evaluations, which are useful for downstream tasks such as structural health monitoring diagnostics and prognostics, and decision support for operation and maintenance planning of engineered systems. In this context, projection-based reduction has been previously used for delivering response estimates (1), as well as for parameter estimation (71), or damage localisation and quantification tasks (2). This section first introduces the nonlinear equations of motion governing the problem at hand. Then, a projection-based reduction framework is described, along with the additional components needed for the treatment of parametric dependencies, largely following a methodology similar to the available state-of-the-art approaches (11; 5). The efficiency considerations when propagating the dynamics in the low-order formulation are discussed last.

Problem Statement
We assume a general nonlinear dynamical system, characterised by the parameter vector $p = [p_1, ..., p_k]^T \in \Omega \subset \mathbb{R}^k$, which captures all system- and excitation-relevant parameters. Each realisation of $p$ corresponds to a unique configuration of the system at hand. Thus, the dynamic behavior of such a system is given by the following set of nonlinear governing equations of motion:
$$M(p)\,\ddot{u}(t) + g\left(\dot{u}(t), u(t), p\right) = F(t, p) \quad (1)$$
where $u(t) \in \mathbb{R}^n$ represents the response of the system in terms of displacements, $M(p) \in \mathbb{R}^{n \times n}$ denotes the mass matrix, and $F(t, p) \in \mathbb{R}^n$ the external excitation. The order of the system is expressed by the variable $n$, termed the full-order dimension, which physically represents the size of the coordinate space and thus the number of degrees of freedom in our system. This variable indirectly functions as a measure of the computational resources required for the model evaluation at full-order dimensions. Finally, the nonlinear effects are injected in the restoring force term $g(\dot{u}(t), u(t), p) \in \mathbb{R}^n$. This term potentially encodes complex nonlinear phenomena of different nature, ranging from material nonlinearity to hysteresis or interface nonlinearities, which, in turn, depend on the parameter vector realisation and the response of the system.
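The structure of Equation (1) can be illustrated with a minimal numerical sketch. The system below is a hypothetical toy 2-DOF example (the mass matrix, Duffing-type restoring force, and sinusoidal excitation are illustrative stand-ins, not the systems studied in this paper); the residual function is zero when the equations of motion are satisfied.

```python
import numpy as np

# Sketch of the full-order equations of motion
#   M(p) u''(t) + g(u'(t), u(t), p) = F(t, p)
# for a hypothetical toy 2-DOF system.

def mass_matrix(p):
    # Hypothetical parameter-dependent mass matrix M(p)
    return np.diag([p[0], p[0]])

def restoring_force(du, u, p):
    # Hypothetical nonlinear restoring force g(u', u, p):
    # linear stiffness plus a cubic (Duffing-type) term scaled by p[1]
    K = np.array([[2.0, -1.0], [-1.0, 2.0]])
    return K @ u + p[1] * u**3

def residual(ddu, du, u, t, p):
    # Residual of the equations of motion; zero for an exact solution
    F = np.array([np.sin(t), 0.0])   # hypothetical excitation F(t, p)
    return mass_matrix(p) @ ddu + restoring_force(du, u, p) - F
```

A time integrator (e.g. Newmark) would drive this residual to zero at every step; the FOM cost grows with the dimension $n$ of $u$.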

Projection-based model order reduction
In our work, we employ a Galerkin projection-based scheme, as described in (75), to derive an efficient and accurate reduced order representation for the problem described in subsection 2.1. Several alternative methodologies can be found in (9). We opt for a projection-based approach due to its interpretability and its utility for applications such as higher-level Structural Health Monitoring (SHM) systems (71). Specifically, the derived ROM delivers a low-order, yet still physics-based, representation of the full physical space of the model. Thus, the ROM is not limited to capturing displacements, but additionally infers stresses, strains, and accelerations at once, rather than deriving a ROM for specific elements or only at a few nodes (45; 66). This ability of the parametric ROM allows for an estimation of the FOM response at any given physical field of interest.
The derivation of the parametric ROM is described in what follows in a step-wise manner. The approach assumes the availability of a high-fidelity FOM, in our case an FE model that spatially discretises the full-order representation of the system in Equation (1). Typically, a projection-based ROM relies on the premise that the dynamic response, in the present case the solution of Equation (1), lies in a low-order subspace of size $r$, where $r$ is orders of magnitude smaller than the FOM dimension, denoted by $n$ ($r \ll n$). Thus, the following approximation holds:
$$u(t) \approx V q(t) \quad (2)$$
where $V \in \mathbb{R}^{n \times r}$ represents the ROM basis that expresses the aforementioned subspace and $q \in \mathbb{R}^r$ is the respective low-order coordinate vector. By substituting $u$ into Equation (1) and multiplying with $V^T$, thus performing a Galerkin projection, the following equivalent system is derived:
$$\bar{M}(p)\,\ddot{q}(t) + \bar{g}\left(\dot{q}(t), q(t), p\right) = \bar{F}(t, p) \quad (3)$$
where $\bar{M} = V^T M V$, $\bar{g} = V^T g$ and $\bar{F} = V^T F$. Key to a reduction that achieves an accurate low-order representation is the assembly of the projection basis $V$. Following the suggestions in (5), we employ the POD technique to this end. This strategy evaluates Equation (1) for a training set of parameters and harvests response information to form the following matrix:
$$\hat{S} = \left[\hat{U}(p_1), \hat{U}(p_2), ..., \hat{U}(p_{N_s})\right] \quad (4)$$
where $\hat{S} \in \mathbb{R}^{n \times (N_t \times N_s)}$ is termed the snapshot matrix, and $\hat{U}(p_i) \in \mathbb{R}^{n \times N_t}$ contains the time history of the response for every degree of freedom (DOF) for a given parametric realisation, henceforth termed a snapshot. $N_t$ designates the number of simulation time steps, $p_i$ is the parametric input for snapshot $i$ and $N_s$ is the number of snapshots. In turn, the projection basis $V$ is assembled via Singular Value Decomposition (SVD) of $\hat{S}$:
$$\hat{S} = L \Sigma R^T \quad (5)$$
and after truncating $L$:
$$V = \left[L_1, L_2, ..., L_r\right] \quad (6)$$
where $L_i$ are columns of the matrix $L$, termed modes. The truncation is applied to obtain the first $r$ principal orthonormal components of the reduction basis $V$. The error measure utilised in (30) is herein employed for this purpose.
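The snapshot-collection, SVD-truncation and Galerkin-projection steps above can be sketched in a few lines of numpy. The dimensions and the random snapshots below are toy placeholders; the identity "mass matrix" merely illustrates the projection $\bar{M} = V^T M V$.

```python
import numpy as np

# Minimal POD sketch: stack snapshots, take an SVD, truncate to r modes,
# and Galerkin-project a (placeholder) full-order operator.
rng = np.random.default_rng(0)
N, Nt, Ns, r = 50, 40, 3, 4

# Snapshot matrix S_hat = [U(p_1), ..., U(p_Ns)], each U(p_i) of shape (N, Nt)
snapshots = [rng.standard_normal((N, Nt)) for _ in range(Ns)]
S_hat = np.hstack(snapshots)                 # shape (N, Nt * Ns)

# SVD and truncation to the first r left singular vectors (the POD modes)
L_modes, sigma, _ = np.linalg.svd(S_hat, full_matrices=False)
V = L_modes[:, :r]                           # projection basis, (N, r)

# Galerkin projection of a full-order operator, e.g. M_red = V^T M V
M = np.eye(N)                                # placeholder mass matrix
M_red = V.T @ M @ V                          # (r, r) reduced operator
```

Because the columns of $V$ are orthonormal, projecting the identity yields the $r \times r$ identity; for a real FE mass or stiffness matrix, $\bar{M}$ is a dense $r \times r$ matrix that is cheap to factor at every time step.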

MACpROM: Treatment of parametric dependencies via clustering
As indicated in the governing equations of motion in Equation (1), the dynamic behavior of the system depends on a set of parameters that express system properties or traits of the induced excitation. Thus, the resulting response of the system is strongly dependent on the parameter vector realisation and may be dominated by localised effects (in the parametric space) due to the corresponding activation of nonlinear terms. As a result, a large number of truncated modes in Equation (6) are required to capture the underlying behavior for the whole parametric domain if the use of a single projection basis is assumed (1). Such an approach would lead to a prohibitively large ROM dimension, rendering the reduction inefficient or even intractable (83). To this end, an alternative strategy can be employed as a remedy, relying on a pool of local POD bases. Obtaining a pool of FOM snapshots and the corresponding POD training bases enables the ROM to capture localised effects and utilise proper interpolation or clustering techniques to approximate the response at intermediate points (6).
In our previous work (76), we have successfully employed a MAC-guided clustering scheme, partially following the suggestions in (4) and exploiting a cosine similarity measure, also referred to as a Modal Assurance Criterion (MAC) in the SHM domain. In this case, the locality of the POD bases refers to forming clusters within the original parametric domain. Thus, for each parametric sample, the ROM utilises a dedicated POD basis, termed $V^i_{cluster}$, based on the assigned cluster, to accurately reproduce the underlying FOM behavior. The evaluation phase of this scheme, summarised in Table 1, proceeds as follows: (1) identify the cluster $i$ for a query sample $p_q$ using $k$-NN; (2) assume that $V_{p_q}$ is approximated using the respective cluster basis $V^i_{cluster}$; (3) evaluate the ROM of Equation (3) to approximate the quantity of interest.
However, since the dynamic behavior at each sample is dominated by localised nonlinear effects, which are captured by $V_i$, clustering is not performed directly on the parametric samples, nor does it employ the usual distance metrics. Instead, an adaptive sampling procedure is followed, exploiting clustering techniques that rely on the truncated modes in Equation (6), resulting from each FOM evaluation. Specifically, the MAC is utilised as a comparative measure between projection bases, evaluating their ability to capture similar nonlinear effects. The MAC, or vector cosine or cosine similarity, is defined as a scalar constant, expressing an indirect form of confidence when evaluating information originating from different sources (3). In our case, the MAC serves as a measure of similarity and coherence between truncated modes of neighbouring local bases. Assuming $w_r$ and $w_s$ correspond to the $i$-th truncated mode from Equation (6) for the projection bases $V_r$ and $V_s$ respectively, the mathematical expression for the MAC reads:
$$\text{MAC}(w_r, w_s) = \frac{\left|w_r^T w_s\right|^2}{\left(w_r^T w_r\right)\left(w_s^T w_s\right)} \quad (7)$$
In turn, this measure is utilised to evaluate the value of new information that each mode captures, thus orienting an adaptive sampling and a subsequent clustering formulation during the training phase.
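The MAC in Equation (7) is a one-liner in practice. The sketch below implements it for real-valued mode vectors; note that it equals 1 for parallel modes (regardless of scaling or sign) and 0 for orthogonal ones, which is exactly the property exploited when comparing truncated modes of neighbouring local bases.

```python
import numpy as np

# MAC (cosine-similarity) between two modes w_r, w_s:
#   MAC(w_r, w_s) = |w_r^T w_s|^2 / ((w_r^T w_r)(w_s^T w_s))

def mac(w_r, w_s):
    # Squared normalised inner product of the two mode vectors
    num = np.abs(w_r @ w_s) ** 2
    return num / ((w_r @ w_r) * (w_s @ w_s))
```

Scale invariance means the MAC compares mode *shapes*, not amplitudes, which is why it is suitable for grouping bases that capture similar nonlinear effects.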
This approach is termed here the Modal Assurance Criterion parametric ROM (MACpROM) for reference purposes, and its respective algorithmic framework is summarised in Table 1. The elements of this approach have been validated in previous works and have been shown capable of delivering an accurate and efficient reduced-order representation of nonlinear systems (76; 77). In this work, this approach serves as an established reference scenario for the validation of the proposed VAE-boosted ROM, termed VpROM.

CpROM: Treatment of parametric dependencies via local Basis Coefficients Interpolation
An alternative formulation for treating parametric dependencies in the context of nonlinear MOR has been proposed and verified with respect to state-of-the-art approaches in (75). Specifically, the authors proposed an interpolation approach on the local projection bases, relying on the established techniques in (5). To this end, a two-stage projection was introduced, thus allowing the dependence on the parameters $p$ to be formulated on a separate level from that of the snapshot procedure or the local subspaces. Thus, after constructing a pool of local bases, as described previously for the MACpROM, each local basis $V_i$ is projected onto the assembled global POD basis of the domain, $V_{\text{global}}$, through a coefficient matrix $X_i$ as follows:
$$V_i = V_{\text{global}}\, X_i \quad (8)$$
where the variable $r$ signifies the number of modes retained in the global basis $V_{\text{global}}$. In this manner, interpolation can be performed on the level of the coefficient matrices $X_i$, which are of reduced dimension ($r \ll n$), thus removing any dependency on the large FOM dimension $n$ and rendering the required operations more efficient. In turn, the respective matrices $X_i$ are projected and interpolated in an element-wise manner on the tangent space of the proper Grassmannian manifold, and projected back onto the original space to obtain the respective local ROM basis $V$ for any validation sample. This strategy is required for the local bases to retain certain orthogonality and positive-definiteness properties and is described in detail in (6; 83). A schematic visualisation of the approach, along with its algorithmic framework, can be found in (75). In our work, this framework is termed CpROM, adopting the same acronym as in the original work for reference purposes. This serves as an additional comparison ROM framework to validate the performance of the proposed VpROM.
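The two-stage projection of Equation (8) can be sketched as follows. Assuming (as is the case for POD bases) that $V_{\text{global}}$ has orthonormal columns, the least-squares coefficient matrix is simply $X_i = V_{\text{global}}^T V_i$; the sizes and random bases below are illustrative only.

```python
import numpy as np

# Sketch of the two-stage projection V_i ≈ V_global X_i. With an
# orthonormal global basis, X_i follows from a least-squares projection.
rng = np.random.default_rng(1)
N, r = 60, 5

V_global, _ = np.linalg.qr(rng.standard_normal((N, r)))  # orthonormal global basis
V_i, _ = np.linalg.qr(rng.standard_normal((N, r)))       # one local basis

X_i = V_global.T @ V_i          # (r, r) coefficients: small, independent of N
V_i_approx = V_global @ X_i     # reconstruction of the local basis
```

Interpolation (on the appropriate manifold, as discussed above) then acts on the small $r \times r$ matrices $X_i$ rather than on the $N \times r$ bases themselves, which is what removes the dependence on the FOM dimension.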

Hyper-reduction
Both the MACpROM and the CpROM frameworks, which were previously presented, are employed in conjunction with an additional operation, known as hyper-reduction, in order to achieve a substantial reduction in computational cost when dealing with nonlinear systems. Hyper-reduction refers to a second-tier approximation strategy that addresses the bottleneck of updating and reconstructing the ROM system matrices in an online manner due to the presence of the nonlinear terms (56). In essence, this technique relies on a weighted evaluation of the corresponding projections of the nonlinear terms at only a subset of the total elements in the spatial discretisation, thus providing substantially accelerated model evaluations. The detailed description of the method and the discussion of the existing alternative approaches are already covered in previous works and thus lie beyond the scope of this paper. The interested reader can refer to (26; 55; 57). The validation case studies in our work make use of the Energy Conserving Mesh Sampling and Weighting (ECSW) technique presented in (25; 33).

VpROM: Coupling of Generative Models with projection-based ROMs
Current state-of-the-art methods for the creation of low-order surrogates of parameterised nonlinear dynamical systems rely on the use of interpolation or clustering methods for the estimation of local bases for given parametric configurations. This work introduces a nonlinear generative model, exploiting a conditional Variational AutoEncoder (cVAE) formulation, in place of these methods, with the aim of improving the robustness and performance of the ROM by allowing nonlinearities in the parameter-basis relation to be captured and high-dimensional dependencies to be better dealt with. Furthermore, the derived VpROM allows for increased utility with regard to the capture of uncertainty in the predicted bases.

Variational Autoencoder (VAE)
The VAE, first described in (44), is a latent variable model, that is, a model in which it is assumed that the observations are driven by certain unobserved latent variables. Such latent variable models are popular in many areas of science and engineering and are often used to reduce the effective dimensionality of data, since the dimension of the latent variables is reduced compared to that of the observations (10). Indeed, such a concept is inherent to structural dynamics, as modal analysis, and its nonlinear extensions, all utilise lower-dimensional representations to simplify the required analysis. In a probabilistic sense, modes can be considered latent variables, which are unobserved and drive the observed dynamics of the system. The VAE architecture thus serves to infer relationships between the latent variables and the observed variables by means of deep neural network functions.
In the context of MOR, a VAE can be considered a Bayesian implementation of a deterministic autoencoder, a popular deterministic deep learning technique that has often been exploited for dimensionality reduction. Lee and Carlberg (46) make use of a convolutional autoencoder, in conjunction with the nonlinear Galerkin method, to construct ROMs for advection-dominated dynamics problems. Further work has combined an autoencoder with statistical regression methods to create fully data-based ROMs of nonlinear dynamical systems for structural dynamics (66; 67). A similar methodology, known as Learning Effective Dynamics, has also been shown to be effective in creating ROMs of some more classical nonlinear dynamical systems in various scientific disciplines (78; 80).
In a VAE model it is assumed that the data $X$ are characterised by a probabilistic distribution $p(X)$, which we would like to approximate by means of a parameterised, and possibly simplified, distribution $p_\phi(X)$, parameterised by the vector $\phi$. We assume that the complex distribution of the data is driven by a lower-dimensional and more simply distributed hidden variable set $Z$, with assumed prior distribution $p(Z)$. The concept here is that, given a sufficiently powerful and flexible approximator, it is possible to learn a function that maps the simply distributed latent variables $Z$ to the complexly distributed data $X$ by learning the distribution $p_\phi(X|Z)$ (22). This approximated distribution is found in the form of a deep neural network, namely the decoder network, which is parameterised by $\phi$, corresponding to its weights and biases. This results in the following expression for the generative model:
$$p_\phi(X) = \int p_\phi(X|Z)\, p(Z)\, dZ \quad (9)$$
The training of such a generative model necessitates the inference of those decoder parameters, $\phi$, that maximise the likelihood of the observations. It is here noted that the term observations in this case refers to synthetically generated data from FOM snapshots. This can be expressed as:
$$\phi^* = \arg\max_\phi \log \int p_\phi(X|Z)\, p(Z)\, dZ \quad (10)$$
The evaluation of this integral, however, presents a problem, as it is generally analytically intractable and computationally inefficient to approximate via sampling. For this reason, a second, encoder network is introduced in the typical VAE setup, which is additionally parameterised by $\theta$. This allows the intractable posterior $p(Z|X)$ to be approximated by the parameterised distribution $q_\theta(Z|X)$ and hence creates a mapping from the observation space to the latent space. A variational approximation of the distribution of the latent space variable is also made, whereby it is assumed that the latent variable takes on a certain known distribution $p(Z)$. This variational approximation of the true posterior results in the following lower bound on the log-likelihood:
$$\log p_\phi(X) \geq \mathbb{E}_{q_\theta(Z|X)}\left[\log p_\phi(X|Z)\right] - D_{KL}\left(q_\theta(Z|X)\,\|\,p(Z)\right) \quad (11)$$
where $D_{KL}$ denotes the Kullback-Leibler divergence, a measure of the discrepancy between two probability distributions.
It is then this lower bound, known as the evidence lower bound (ELBO), which is optimised with respect to the parameters of the two networks, $\theta$ and $\phi$. The maximisation of this function aims to i) improve the expected reconstruction loss of the decoder, i.e., the success in recovering observations from the latent variables, and ii) minimise the KL divergence between the approximate posterior $q_\theta(Z|X)$ and the prior $p(Z)$, thus regularising the latent space. Once the VAE is trained based on this process, it is then possible to sample the latent space, using the inferred variational distribution, $q_\theta(Z|X)$, and subsequently employ the decoder in order to recreate desired quantities of interest (outputs).
To optimise Equation (11), it is necessary to estimate the gradients of the ELBO. Kingma et al. (44) achieved this by using the re-parameterisation trick. By choosing the form of the approximate posterior to be a diagonal Gaussian parameterised by the encoder network, sampling from this distribution can be re-parameterised as follows:
$$Z = \mu_\theta(X) + \eta \odot \sigma_\theta(X), \quad \eta \sim \mathcal{N}(0, I) \quad (12)$$
in which $\eta$ represents a sample from the standard Gaussian distribution $\mathcal{N}(0, I)$ and $\mu_\theta(X)$, $\sigma_\theta(X)$ are the mean and standard deviation values of the latent space as output by the encoder network. A sample from the approximate posterior is thus reformulated as a deterministic function of the stochastic draw $\eta$ and of the mean and standard deviation values predicted by the encoder.
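The re-parameterisation of Equation (12) can be sketched directly. The encoder outputs $\mu_\theta(X)$ and $\sigma_\theta(X)$ below are illustrative fixed values standing in for a trained network; only the draw $\eta$ is stochastic, which is what makes the transformation differentiable with respect to $\mu$ and $\sigma$.

```python
import numpy as np

# Reparameterisation trick: Z = mu(X) + eta * sigma(X), eta ~ N(0, I).
# The randomness is isolated in eta, so gradients can flow through mu, sigma.
rng = np.random.default_rng(2)
J = 3                                  # latent dimension
mu = np.array([0.5, -1.0, 2.0])        # illustrative encoder mean mu_theta(X)
sigma = np.array([0.1, 0.2, 0.3])      # illustrative encoder std sigma_theta(X)

eta = rng.standard_normal(J)           # draw from N(0, I)
Z = mu + eta * sigma                   # one sample from q_theta(Z|X)
```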
This allows for evaluation of the expectation through sampling from a standard multivariate Gaussian, whilst the gradient can be assessed deterministically for each sample, allowing the use of backpropagation for training. Further, with the choice of a spherical unit Gaussian prior for $p(Z)$, the KL divergence term can be calculated analytically (44). This results in the following differentiable, per-sample cost function, in which the expectation is evaluated using $N_v$ samples from the latent space per data point:
$$\mathcal{L}(\theta, \phi; X_i) \simeq \frac{1}{2} \sum_{j=1}^{J} \left(1 + \log\left(\sigma_{\theta,j}(X_i)^2\right) - \mu_{\theta,j}(X_i)^2 - \sigma_{\theta,j}(X_i)^2\right) + \frac{1}{N_v} \sum_{l=1}^{N_v} \log p_\phi\left(X_i | Z_{i,l}\right) \quad (13)$$
where $J$ denotes the dimension of the latent space and $Z_{i,l} = \mu_\theta(X_i) + \eta_l \odot \sigma_\theta(X_i)$. The number of samples taken from the latent space to evaluate the expectation can even be 1, as in the original formulation of Kingma et al. (44).
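The analytic KL term appearing in the per-sample cost function above (the divergence between the diagonal Gaussian $\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$ and the unit Gaussian prior) can be sketched as follows:

```python
import numpy as np

# Analytic KL divergence between N(mu, diag(sigma^2)) and N(0, I):
#   KL = -1/2 * sum_j (1 + log(sigma_j^2) - mu_j^2 - sigma_j^2)
# This is the regularisation term of the per-sample VAE loss.

def kl_diag_gaussian(mu, sigma):
    return -0.5 * np.sum(1.0 + np.log(sigma**2) - mu**2 - sigma**2)
```

The KL vanishes exactly when the approximate posterior matches the prior ($\mu = 0$, $\sigma = 1$) and grows as the posterior drifts away, which is what regularises the latent space during training.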
As mentioned above, in the VAE model the encoder and decoder functions are approximated using deep neural networks (DNNs). DNNs are a very widely used class of models that exploit multiple neural network layers applied one after another in order to approximate very complex functions more efficiently than shallow networks (8). A thorough description of DNNs and their training can be found in (32); in the work herein, the DNNs utilised only made use of fully connected layers. In fully connected layers, the transform performed by each layer consists of the matrix multiplication of the input vector with a trainable weights matrix and the addition of a trainable bias vector. A nonlinear activation function, often a tanh or sigmoid, is then applied element-wise to the output of this operation. This results in a very flexible and powerful model for learning general nonlinear relations (32).
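The fully connected layer just described is, per layer, $y = \tanh(Wx + b)$. The sketch below stacks two such layers; the weights are randomly initialised stand-ins for trained parameters, and the layer widths are illustrative.

```python
import numpy as np

# A single fully connected layer: y = tanh(W x + b).
# Stacking such layers yields the encoder/decoder DNNs used in this work.

def dense_tanh(x, W, b):
    return np.tanh(W @ x + b)

rng = np.random.default_rng(3)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)   # layer 1: 4 -> 8
W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)   # layer 2: 8 -> 2

x = rng.standard_normal(4)
h = dense_tanh(x, W1, b1)   # hidden representation
y = dense_tanh(h, W2, b2)   # network output
```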

VpROM: A conditional Variational Autoencoder (cVAE)-boosted ROM
In our use case, we do not simply wish to sample possible, plausible bases for our system of interest; rather, we want to sample these bases conditioned on given system and load (excitation) parameters. We can achieve this relatively straightforwardly by concatenating the conditioning parameters $p$ with the inputs of the VAE $X$, and with the latent space variables $Z$, as demonstrated in Figure 2. Mathematically, the distribution approximated by the encoder now becomes $q_\theta(Z|X, p)$ and the distribution approximated by the decoder becomes $p_\phi(X|Z, p)$. To clarify the role of the cVAE in the derived VpROM, the input referred to in Figure 2 corresponds to the ROM basis coefficients $X$ in Equation (8). Thus, Figure 2 serves as a visualisation of the mapping process that the cVAE carries out to relate the parametric dependencies of the FOM to the ROM projection basis $V$. The model dependencies are expressed in Equation (1) and captured in the variable $p$, whereas the relation to $V$ from Equation (3) is expressed through the coefficients $X$ from Equation (8).
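The conditioning mechanism amounts to simple vector concatenation at both network inputs. In the sketch below, the "encoder" and "decoder" are placeholder linear maps (not trained networks) used only to make the shapes of the concatenated inputs concrete; all dimensions are illustrative.

```python
import numpy as np

# Conditioning by concatenation: p is appended to the encoder input X
# and to the latent variable Z before decoding.
rng = np.random.default_rng(4)
dim_x, dim_p, dim_z = 10, 3, 2

X = rng.standard_normal(dim_x)     # ROM basis coefficients (flattened)
p = rng.standard_normal(dim_p)     # conditioning parameter vector

enc_in = np.concatenate([X, p])    # encoder input, giving q_theta(Z | X, p)
W_enc = rng.standard_normal((dim_z, dim_x + dim_p))
Z = W_enc @ enc_in                 # placeholder "encoding"

dec_in = np.concatenate([Z, p])    # decoder input, giving p_phi(X | Z, p)
W_dec = rng.standard_normal((dim_x, dim_z + dim_p))
X_rec = W_dec @ dec_in             # placeholder "reconstruction"
```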

VpROM: Generating New Bases
In this work, we propose to train the cVAE to create a generative model, which can be sampled in order to produce the reduced basis coefficients $X_i$ from Equation (8) for a given parameter vector $p_i$, in which the parameters reflect either certain properties of the system or of the applied loading. Concretely, once a trained VAE is available, the encoder portion is no longer used and predictions are made purely using the decoder and the assumed variational distribution on the latent space. In this case, a diagonal Gaussian is used, as shown in Figure 3.
Hence, in order to sample from the decoder we simply take a draw from the chosen prior distribution p(Z) and concatenate this draw with the given parameters p. We then pass this latent vector through the decoder, which yields a sample from the observation distribution p_φ(X|Z, p). This sample is a single draw from the distribution of the predicted coefficient values for the given parameter vector. Multiple such samples can then be taken by repeating this process, allowing quantities such as the mean and standard deviation of the predicted coefficients to be estimated. This generative procedure emphasises the importance of minimising the KL divergence term in the loss function: if the KL loss is low, the approximate posterior distribution q_θ(Z|X, p) better approaches the prior p(Z).
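The generative sampling loop described above can be sketched as follows, again with a stand-in linear decoder and hypothetical dimensions; only the structure of the procedure (draw z from the prior, concatenate p, decode, repeat) reflects the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_p, n_z = 16, 6, 4
Wd = 0.1 * rng.standard_normal((n_x, n_z + n_p))  # stand-in decoder weights

def decode(z, p):
    # Stand-in for the trained decoder p_phi(X | Z, p).
    return Wd @ np.concatenate([z, p])

p = rng.uniform(-1.0, 1.0, n_p)  # a given (normalised) parameter vector

# Draw z ~ N(0, I) repeatedly, decode each draw conditioned on p.
samples = np.stack([decode(rng.standard_normal(n_z), p) for _ in range(200)])

coef_mean = samples.mean(axis=0)  # mean predicted coefficients
coef_std = samples.std(axis=0)    # their sampling spread (uncertainty)
```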
Figure 2: Architecture of a cVAE in which the conditioning variables are injected via concatenation with both the input vector X and the latent vector Z. The input refers to the ROM basis coefficients X in Equation (8).
Figure 3: Architecture of the cVAE when used for basis generation: sample latent vectors are taken from the prior distribution and concatenated with the conditioning vector p before these latent vectors are decoded to find the generated basis coefficients X from Equation (8).

Training the VpROM
As mentioned previously, we wish to train the cVAE to generate not the local bases themselves, but rather the coefficient matrices X_i introduced in Equation (8), which are then used to generate the local ROM subspaces from a global basis. To do this, we require training pairs of parameter vectors p_i and corresponding coefficient matrices X_i. These training pairs are created by first sampling the parameter vectors with Latin Hypercube Sampling (LHS). Each of these training vectors is then used as the model/input parameters for a FOM snapshot. The generated snapshots are then used to assemble the local bases V_i and the global basis V_global, and hence the coefficient matrices, using the procedure described in subsection 2.4. After initial efforts, it was decided that a separate cVAE would be trained for each column of the coefficient matrix X_i. This is reasonable, as each column of X_i relates to a different retained POD mode, and these modes ought to be mutually orthogonal. Further, the separate consideration of each column results in improved performance of the cVAE in generating new bases.
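The LHS step used to generate the training parameter vectors can be sketched with a minimal numpy implementation (one stratified draw per interval and dimension, strata shuffled independently per dimension); the parameter bounds below are purely illustrative:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    # Stratify [0, 1) into n_samples intervals per dimension, draw one point
    # per interval, then shuffle the strata independently in each dimension.
    d = len(bounds)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        rng.shuffle(u[:, j])
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)  # rescale to the physical parameter ranges

rng = np.random.default_rng(3)
# Hypothetical bounds for six parameters (not the benchmark's actual ranges).
bounds = [(0.8, 1.2), (0.1, 0.9), (1e5, 1e6), (0.0, 0.5), (0.5, 2.0), (1.0, 10.0)]
P_train = latin_hypercube(50, bounds, rng)  # fifty training parameter vectors
```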
To prepare the data for training, the individual columns X_c, c = 1, ..., n of the coefficient matrix X ∈ R^{r×n} are taken as vectors and paired with the parameter vectors p_i. Further, the parameter values are normalised between −1 and 1, and the coefficient vectors are normalised as shown in Equation (16). The coefficient vectors X_c are normalised via the application of a natural logarithm; this offers the advantage of rendering the amplitudes of the vector components more similar and hence preventing extreme values from dominating the cost function. The addition of the constant 2 is required so that all values are greater than zero, avoiding an invalid argument to the logarithm.
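The two normalisations described above can be sketched as follows; the value ranges are illustrative, and the log transform assumes, as in the text, that the raw coefficients are bounded below by −2:

```python
import numpy as np

rng = np.random.default_rng(4)

def normalise_params(p, lo, hi):
    # Linear scaling of each parameter to the interval [-1, 1].
    return 2.0 * (p - lo) / (hi - lo) - 1.0

def normalise_coeffs(Xc):
    # Natural-log transform compresses the spread of coefficient amplitudes;
    # the +2 offset keeps every argument of the logarithm positive
    # (assumes the raw coefficients never fall below -2).
    return np.log(Xc + 2.0)

p = rng.uniform(100.0, 300.0, 6)            # raw parameter vector
p_n = normalise_params(p, 100.0, 300.0)     # scaled to [-1, 1]
Xc = rng.uniform(-1.0, 5.0, 16)             # raw coefficient vector
Xc_n = normalise_coeffs(Xc)                 # log-normalised coefficients
```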
The models were all built and trained using TensorFlow and the Adam algorithm (43). In training the cVAE, the architecture of the network, the number and size of layers, and the activation functions used must also be chosen; these can, however, be treated as hyperparameters and optimised according to common methods such as grid search. The network architecture has a substantial effect on the expressive power and generalisation of the model. All trained models made use of only dense and dropout layers. Dense layers are traditional fully connected feedforward neural network layers. Between each pair of dense layers, a dropout layer was inserted. Dropout is a technique used for regularising deep neural networks, according to which, during each training update, a certain percentage (the dropout percentage) of the activation values of a given layer are set to zero. This technique has been shown to improve the generalisation performance of deep neural networks (69). As such, the architectural hyperparameters to be optimised for each model include the number of dense layers in the encoder and decoder, the number of neurons in these layers, the activation function used by these layers, and the amount of dropout included between each layer. The size of the bottleneck layer, or in other words, the number of latent variables driving the process, is also key for reduction. It is noteworthy, however, that in this case the application of dropout was found not to be beneficial to performance; as such, it was not used in any of the finally implemented models.
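Although dropout was ultimately not retained in the final models, the regularisation mechanism described above can be sketched in a few lines (inverted dropout, as commonly implemented; the rate and layer size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def dropout(h, rate, rng, training=True):
    # Inverted dropout: zero a fraction `rate` of the activations during
    # training and rescale the survivors by 1/(1 - rate) so that the
    # expected activation value is unchanged.
    if not training or rate == 0.0:
        return h
    mask = rng.random(h.shape) >= rate
    return h * mask / (1.0 - rate)

h = np.ones(1000)            # stand-in activations of a dense layer
h_drop = dropout(h, 0.2, rng)  # roughly 20% of the entries become zero
```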

Results
In this section, all aspects relevant to the performance of the proposed framework are validated on two case studies featuring parametric dependencies in system properties and excitation traits. The proposed cVAE-boosted ROM is first validated on a nonlinear benchmark simulator of a two-story shear frame featuring hysteretic joints (73), and then on a larger-scale simulation, featuring computational plasticity, which is based on the NREL reference 5-MW wind turbine tower (39).
As already mentioned in section 2, we offer a comparison across alternative parametric ROM configurations in order to provide a comprehensive discussion of the potential and performance limits of the suggested framework. The first parametric ROM configuration refers to the MACpROM, as presented in subsection 2.3. This employs a MAC-guided clustering approach on the local POD bases. Next, the CpROM presented in subsection 2.4 is evaluated, following the local basis coefficient interpolation approach in (75). These two ROMs are assembled employing existing state-of-the-art approaches and serve comparison purposes. The last two parametric ROMs are derived based on the cVAE framework proposed here to inject parametric variability into the local projection bases. We evaluate the performance of the proposed cVAE-boosted ROM, termed VpROM, both with and without the inclusion of hyper-reduction, which is described in subsection 2.5. The notation and configuration of these five schemes are summarised in Table 2.

VpROM
The cVAE-boosted ROM as presented in section 3. HP-*: The respective ROM additionally equipped with hyper-reduction. Regarding computational resources and timing, the validation simulations of the presented examples are implemented using an in-house built FE code, based on the suggestions by (7), and tested on a workstation equipped with an 11th Gen Intel(R) Core(TM) i7-1165G7 processor, running at 2.80 GHz, and 32 GB of memory. In addition, the reported computational time is averaged over all FOM or ROM evaluations of each respective set of configurations (training or testing). The performance of the various frameworks in terms of reproducing the time history responses of the respective dynamic validation case studies is reported via the error measure of Equation (17), in which Ñ_DOF represents a set of DOFs selected for response comparison, Ñ_t a set of selected time steps, q_i is the FOM quantity of interest at DOF i, and q̂_i is the respective value inferred using the ROM approximation.
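Equation (17) itself is not reproduced here, but given the quantities it is defined over, a plausible form is a relative root-mean-square discrepancy over the selected DOFs and time steps, expressed as a percentage. The sketch below implements such a measure under that assumption; it is not necessarily the paper's exact normalisation:

```python
import numpy as np

def rom_error(q_fom, q_rom):
    # Assumed form of the error measure of Equation (17): relative RMS
    # discrepancy over the selected DOFs (rows) and time steps (columns),
    # in percent. The paper's exact normalisation may differ.
    num = np.sqrt(np.sum((q_fom - q_rom) ** 2))
    den = np.sqrt(np.sum(q_fom ** 2))
    return 100.0 * num / den

t = np.linspace(0.0, 1.0, 200)
q_fom = np.sin(2 * np.pi * 3 * t)[None, :]  # one monitored DOF (FOM response)
q_rom = 0.95 * q_fom                        # a uniform 5% under-prediction
err = rom_error(q_fom, q_rom)               # -> 5% relative error
```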

Two story shear frame with hysteretic links
As an initial example, we consider an FE model of a three-dimensional two-story shear frame with nonlinear nodal couplings, each exhibiting a Bouc-Wen hysteretic nonlinearity (38). This example is chosen as a demonstrative case study due to the inherent ability of the simulator to model multiple simultaneously activated instances of nonlinearity, thus challenging the precision of any derived ROM. Because this is a low-dimensional example, we use it mainly to assess the accuracy of the respective ROMs from Table 2, as the model is too trivial to demonstrate any substantial computational savings. The capacity of the various parametric ROMs in terms of accelerating model evaluations is documented in the next case study, featuring a large-scale wind turbine tower.
A graphical illustration of the setup of the shear frame is provided in Figure 4a; the hysteretic links are assumed to have zero length, although the virtual nodes are depicted at a distance from the reference node in Figure 4a for demonstration purposes. The respective model files that allow for results reproduction can be found in (73), as this example has been published as a benchmark multi-degree-of-freedom nonlinear response simulator. Regarding material properties, the case study follows the template configuration (74). Specifically, steel HEA cross-sections have been used for all beam elements, whereas the structure is assembled using two frames along axis x, each of l = 7.5 m length, and one of w = 6 m along the width. In addition, each story has a height of h = 3.2 m. A Bouc-Wen formulation has been utilised to model the behavior of the nonlinear joints: this reflects a smooth hysteretic model, often adopted for modeling material nonlinearity (37). Therefore, based on the benchmark description in (73), a Bouc-Wen model is introduced at every DOF of every nodal coupling to simulate the total restoring force R of each joint. An example illustration of the nonlinear mechanism in the longitudinal x-DOF is provided in Figure 4b. The Bouc-Wen link models R as a superposition of a linear and a nonlinear term, represented by the two springs in Figure 4b. The linear and nonlinear terms, or springs, depend on the instantaneous nodal response δu and on the hysteretic, and thus history-dependent, component of the response z, respectively. In turn, the respective vectorized mathematical formulation for all DOFs of the link is given in Equation (18), where δu represents the nodal displacement, and α, k are traits characterizing the Bouc-Wen model on each link. Regarding their physical interpretation, α represents the characteristic post-yield to elastic stiffness ratio for each link, whereas k is the corresponding stiffness coefficient. The variable z stands for the hysteretic portion of the elongation, or
displacement in general, and controls the hysteretic forcing. Its evolution is governed by Equation (19), where the shape, smoothness, and overall amplitude of the hysteretic curve that characterises the dynamic behavior of each joint are determined by the Bouc-Wen parameters β, γ, w, and A respectively. The terms ν(t) and η(t) are introduced to capture strength deterioration and stiffness degradation effects via the corresponding coefficients δ_ν and δ_η. In turn, their evolution in time depends on the absorbed hysteretic energy. This representation allows for a structural dynamics simulator, which is parametrised with respect to system properties and traits of the joints' behavior. For a more detailed elaboration on the physical connotations of the Bouc-Wen model parameters in terms of yielding, softening, and hysteretic behavior effects, the reader is referred to (16; 73).
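A minimal single-DOF sketch of the Bouc-Wen link can illustrate the interplay of Equations (18) and (19). The degradation terms ν(t), η(t) are omitted for brevity, the parameter values are illustrative rather than those of the benchmark, and a simple explicit Euler step is used for the evolution of z:

```python
import numpy as np

# Illustrative Bouc-Wen parameters (not the benchmark's values).
alpha, k = 0.3, 1.0            # post-yield stiffness ratio and stiffness
A, beta, gamma, w = 1.0, 0.5, 0.5, 1.0

dt = 1e-3
t = np.arange(0.0, 4.0, dt)
u = np.sin(2 * np.pi * t)      # imposed link displacement history
du = np.gradient(u, dt)        # displacement rate

z = np.zeros_like(u)
for i in range(len(t) - 1):
    # z_dot = A*du - beta*|du|*|z|^(w-1)*z - gamma*du*|z|^w  (no degradation)
    zdot = (A * du[i]
            - beta * np.abs(du[i]) * np.abs(z[i]) ** (w - 1.0) * z[i]
            - gamma * du[i] * np.abs(z[i]) ** w)
    z[i + 1] = z[i] + dt * zdot

# Restoring force: linear spring plus hysteretic spring (Equation (18) form).
R = alpha * k * u + (1.0 - alpha) * k * z
```

Plotting R against u would trace the familiar hysteresis loop; the steady-state amplitude of z is bounded by (A/(beta+gamma))^(1/w) = 1 here.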
This parameterised shear frame simulator is selected due to its ability to model a variety of nonlinear dynamic effects that dominate the response and depend on the parametric configuration of the model. In the presented case studies, the parameters defining the structure itself and those defining the acting loads can significantly affect the response. The parameter set includes the forcing signal's temporal and spectral characteristics, the frame's material properties, and the traits that dictate the hysteretic effects on the joints. The six parameters employed in this example are summarised in Table 3.
First, uncertainty is introduced in the material properties of the system by treating the Young's modulus of elasticity E as a parameter of the model. Its range is summarized in Table 3. In addition, three traits of the nonlinear joints of the shear frame are treated parametrically to model and simulate various qualities and shapes of the corresponding hysteretic behavioral curves. Specifically, parameters α and k in Equation (18) and parameter δ_ν in Equation (19) are injected as dependencies in the derived ROM. The numerical range for each parameter is also provided in Table 3. Forcing is applied to the frame system as a base excitation scenario representing an earthquake. The force is applied at an angle of θ = π/4 with respect to the x-axis, as depicted in Figure 4a. To produce a parameterised version of the excitation, a white noise template accelerogram is used as a reference. This template signal is then passed through a second-order Butterworth low-pass filter and multiplied by an amplitude factor to produce the actual accelerogram of the motion imposed on the system. The amplitude coefficient Amp and the cut-off frequency of the filter f_but are treated as dependencies. Thus, due to the dependencies of the model on both system parameters and excitation traits, the dynamic system under consideration exhibits substantially different behavior depending on the chosen parametric configuration.
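The parameterised excitation described above can be sketched as follows. To keep the example dependency-free, the second-order Butterworth low-pass is written as a biquad difference equation rather than via scipy.signal; the sampling rate, cut-off, and amplitude values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

def butter2_lowpass(x, f_cut, fs):
    # Second-order Butterworth low-pass realised as a biquad
    # (bilinear-transform coefficients with quality factor Q = 1/sqrt(2)).
    Q = 1.0 / np.sqrt(2.0)
    w0 = 2.0 * np.pi * f_cut / fs
    alpha = np.sin(w0) / (2.0 * Q)
    cosw = np.cos(w0)
    b = np.array([(1.0 - cosw) / 2.0, 1.0 - cosw, (1.0 - cosw) / 2.0])
    a = np.array([1.0 + alpha, -2.0 * cosw, 1.0 - alpha])
    b, a = b / a[0], a / a[0]
    y = np.zeros_like(x)
    for n in range(len(x)):       # direct-form difference equation
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

fs, f_cut, amp = 100.0, 5.0, 0.3          # hypothetical values
white = rng.standard_normal(1000)          # white-noise template accelerogram
acc = amp * butter2_lowpass(white, f_cut, fs)  # parameterised base excitation
```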
The numerical study has been designed in such a way as to validate the need to inject dependencies into the derived surrogate, while making use of the ability of the simulator to output a variety of nonlinear behaviors, thus challenging the accuracy limit of the ROMs.
All ROMs referenced in Table 2 are implemented here, employing the same training scheme of fifty samples drawn using LHS. The corresponding performance measures for each ROM are evaluated on a validation set of five hundred samples, drawn using an LHS with a different seed. Regarding the low-order dimension, r = 16 modes are retained for each local basis V and 200 modes for V_global in Equation (8). A detailed evaluation of the accuracy of the implemented ROMs of Table 2 is presented in Figure 5. The precision of the respective surrogates is evaluated with respect to two measures, namely err_u and err_ü of Equation (17), which correspond to the error in capturing the displacement and acceleration time histories respectively. The boxplots visualise the ability of each ROM to capture the FOM response both in terms of displacements and accelerations, and the respective values are also reported in Table 4. Although the overall precision is relatively low, this example merely serves to offer a comparison that deliberately employs a relatively wide domain of parameters, in order to excite substantially different dynamic behavior. Nevertheless, in Figure 5 and Table 4 the implemented MACpROM exhibits accuracy similar to the reference CpROM, in terms of approximating both the displacement and the acceleration time histories. The respective median error and boxplot quartiles almost coincide, whilst both approaches deliver a similar distribution of outliers.
The proposed VpROM, on the other hand, achieves substantially improved performance. The outliers are fewer, the respective discrepancy for the outlier samples is substantially lower, and the visualised distribution has a lower median and maximum error.
A further comparative visualisation of the accuracy of the implemented ROMs is provided in Figure 6. An example projection plane has been chosen for demonstration purposes, and the validation measure is depicted on the vertical axis and via the color scale. Similar to Figure 5a, err_u is visualised in Figure 6 as a representative measure of the ROMs' ability to reproduce the displacement time histories of the FOM. In addition, all samples with errors greater than 20% are depicted at the 20% color level for better scaling and a clearer comparison. Since the established CpROM and the MACpROM deliver similar precision with respect to displacements in Figure 5a, the VpROM suggested in this study is compared only with the CpROM in Figure 6 for the sake of a clearer demonstration.
As already highlighted, the VpROM captures the dynamics across the domain of parametric inputs with an overall superior precision and fewer accuracy outliers than the CpROM or MACpROM. This is visualised in Figure 6 through the fewer circles located in the dark red region for the VpROM and the substantially fewer evaluations colored outside the blue-to-green spectrum. Thus, despite a few outliers, the overall accuracy of the framework remains superior to the compared established alternatives for physics-based MOR.
In Figure 7, a more detailed evaluation of the approximation quality achieved by the VpROM is illustrated. Specifically, the time history estimation is depicted for different levels of precision to visualise and validate the overall performance of the VpROM. One sample from each family of sample points, as captured by the color scale in Figure 6, is presented. The VpROM approximation is visualised for various response patterns to highlight the ability of the surrogate model to infer dynamic behaviors dominated by different effects, as modeled via the variety of shapes of the hysteretic curves on the links. The VpROM is shown to deliver a robust approximation in a complex example with rich dynamic behavior, represented by several different shapes and amplitudes of the hysteretic curves characterizing the behavior of the nonlinear joints.
Figure 7: Visualisation of the different levels of the approximation quality achieved using the VpROM. The VpROM estimation is reported for various response patterns the system exhibits depending on its parametric features.
Beyond offering a robust reduction framework that is shown to generalise across parametric configurations, the derived VpROM offers the potential of quantifying the uncertainty in the respective estimations. This is due to the latent space of the cVAE component being trained to approximate a given variational distribution; thus, by sampling this variational distribution, the uncertainty on the predicted local basis coefficients may be captured in addition to simple mean estimates. Naturally, by using only the mean predicted values of the coefficients, significant information is lost regarding the uncertainty of these values. Hence, it is worth utilising this technique for uncertainty estimation on the response of the ROM for each sample. In order to propagate this uncertainty, multiple parallel evaluations of the VpROM are performed, employing different coefficient values generated from different samples from the latent space of the VAE. In turn, the distribution of the responses is inspected to evaluate and quantify the uncertainty in the respective inference. The resulting approximation is equipped with confidence bounds that can provide increased utility for many problems in structural dynamics.
A visualisation of the respective output is provided in Figure 8, where the average performance of the VpROM for both displacements and accelerations is depicted. The shaded area represents the confidence bounds of the inference scheme, evaluated by sampling the predicted distributions of the latent space 40 times and propagating the response using the respective local bases assembled by the decoder for each of these 40 sampled vectors. The shaded area encompasses the maximum and minimum values at each time point across the 40 simulations carried out.
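The construction of such an envelope from repeated ROM evaluations can be sketched as follows; the response histories below are noisy sinusoids standing in for the 40 ROM simulations, each of which would in practice use a basis decoded from a different latent-space sample:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for 40 ROM response histories, one per latent-space sample.
t = np.linspace(0.0, 1.0, 300)
responses = np.stack([np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
                      for _ in range(40)])

mean_resp = responses.mean(axis=0)  # average response estimate
lower = responses.min(axis=0)       # lower edge of the shaded band
upper = responses.max(axis=0)       # upper edge of the shaded band
```

Shading the region between `lower` and `upper` around `mean_resp` reproduces the kind of confidence band shown in Figure 8.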
The respective average quality of the VpROM approximation in Figure 8 indicates a high-precision physics-based surrogate, with the inherent ability to provide a quantification of the uncertainty of the respective estimations. To further evaluate the suitability of the proposed framework and demonstrate its utility in reducing the computational toll, a large-scale example is discussed next.

Wind Turbine Tower with plasticity
This section evaluates the performance of the suggested VpROM on a large-scale example based on the simulated dynamic response of the NREL 5-MW reference wind turbine tower (39). Regarding the configuration of this case study, the interested reader is referred to (75). In brief, the three-dimensional FE model of the monopile is visualized in Figure 9a and features a circular cross-section, which is linearly tapered from the base to the top. The respective diameter and wall thickness are equal to 6 m and 0.027 m at the base and 3.87 m and 0.019 m at the top of the monopile. However, for simplification purposes, a constant thickness is assumed throughout the tower, and 8170 shell elements are used. The wind turbine is assembled at the top of the monopile assuming a lumped mass scheme and regular beam elements, attached through multi-point constraints to the tower. Regarding the material properties, steel is assumed, with E_steel = 210 GPa, a density of ρ = 7850 kg/m^3, and a nonlinear constitutive law characterised by isotropic von Mises plasticity. Although the employed stress-strain relation might seem relatively simple, this problem features an extensive yielding domain (≈ 30% of the height), additional model uncertainties, and a stochastic excitation, which increase the complexity and pose certain requirements when deriving a high-precision ROM. In addition, this large-scale case study is used to demonstrate the efficiency of hyper-reduced ROMs, which allow for a substantial reduction of the overall computational toll. For this reason, the hyper-reduced variants of all ROMs, denoted with the prefix HP- in Table 2 (e.g., HP-VpROM), are validated herein. Regarding the parametric dependencies, the Kobe earthquake accelerogram is utilised as a ground motion scenario, parameterised with respect to its amplitude A.
The yield stress σ_VM and the Young's modulus of elasticity E are also varied. The range of these parametric dependencies is summarised in Table 5. The training and validation domain is designed using LHS sampling, similar to subsection 4.1. In this case study, a low-order dimension of r = 4 is chosen, while 32 global modes are retained for V_global in Equation (8), and the τ parameter for the ECSW hyper-reduction technique discussed in subsection 2.5 is set to τ = 0.01. The ability of the proposed HP-VpROM to accurately infer response fields that are relevant for dynamic structural systems, while providing accelerated model evaluations, is exhibited herein.
In Figure 10, a visualisation of the HP-VpROM approximation of the internal stress field in the yielding domain is provided for two validation examples, which feature different dynamic behavior. The respective high-fidelity field is also visualised via the FOM for reference purposes. Stresses and strains are important metrics to be monitored in many structural applications; thus, the ability to capture their distribution accurately is often of critical importance. Despite the minor discrepancies observed, the overall quality of the HP-VpROM approximation illustrated in Figure 10 indicates an effective low-order representation, able to deliver high-precision estimates of stress state distributions. This exemplifies the potential utility of the proposed HP-VpROM in condition monitoring, fatigue, or damage localisation. A more comprehensive summary of all aspects of the ROMs' performance is provided in Table 6. Specifically, the average and maximum error measures of the respective approximations of displacement and acceleration time histories are summarised for all hyper-reduced surrogates of Table 2. In addition, the precision is reported for two example validation samples in the extreme regions of the input domain for reference.
Similar to what was observed in the previous case study in subsection 4.1, the hyper-reduced variant of the VpROM, namely the HP-VpROM, delivers a superior surrogate in terms of accuracy. The respective low maximum err_u measure on capturing the displacements indicates a robust approximation, whereas the other two ROMs experience performance outliers where accuracy deteriorates significantly. At the same time, the HP-VpROM achieves an average discrepancy lower than 1%, implying a high-precision representation. Similar conclusions can be drawn by observing the respective measures on the two validation samples offered as additional examples. Regarding the inference of the acceleration response, the VpROM maintains its superior accuracy, although it is only marginally better than the two alternative ROMs implemented.
The utility of the proposed HP-VpROM for applications in which (near) real-time model evaluations are required is also documented in Table 6. The hyper-reduction technique, along with the ability of the ROM to propagate the dynamics in a proper low-order subspace, achieves a substantial reduction of the computational toll and accelerated computations. The respective average speed-up factor t_FOM/t_ROM reported in Table 6 implies significant savings in computational resources during model evaluations.
The reported performance measures highlight the fact that the proposed ROM framework remains robust and precise when coupled with hyper-reduction. The injected generative model guarantees the ability of the HP-VpROM to capture different dynamic trends in the response and avoid accuracy outliers. This is indicatively visualised in Figure 11, where the acceleration response of the full-order model and the respective HP-VpROM approximation are illustrated for validation samples located near the edges of the input domain.
Figure 11: Visualisation of the different levels of the approximation quality achieved using the HP-VpROM. The estimation is reported for various response patterns the system exhibits depending on its parametric features.
The different patterns in the system's behavior are demonstrated clearly, along with the ability of the assembled HP-VpROM to capture the different trends sufficiently accurately. Despite the minor discrepancies observed, especially when high-frequency components are present in the response, as in the bottom right example, the HP-VpROM maintains a robust performance with a high-quality approximation across the input domain. The coupling of the ROM with a cVAE-based generative model offers an additional feature, namely the quantification of the uncertainty in the estimations. Relying on the probabilistic nature of the latent space of the assembled cVAE, the proposed HP-VpROM comes with confidence bounds on its predictions. This is exemplified in Figure 12, where the approximation of the acceleration is visualised for a representative validation sample. The shaded region in Figure 12 represents the uncertainty of the respective prediction and may be used as a confidence measure when employing the ROM predictions; this improves the utility of the scheme compared to deterministic methods.

Limitations and concluding remarks
This work demonstrates the use of a cVAE-based ROM, termed VpROM, as an extension to state-of-the-art methods for generating local reduction bases for nonlinear parametric ROMs.
The following conclusions are drawn:
• cVAE neural networks can successfully be used to generate local bases for nonlinear parametric ROMs with high-dimensional parameterisation and strongly nonlinear behaviour.
• The verification of the proposed scheme on a large-scale system results in significantly accelerated model evaluations, almost 40 times faster compared to the FOM.
• The cVAE can outperform current state-of-the-art methods, such as interpolation and clustering algorithms, in terms of precision of the ROM.
• The VpROM formulation offers the additional benefit of encoding the uncertainty in the predicted local bases and the ability to propagate this to the predicted response.
The newly developed method is demonstrated on two simulated nonlinear systems, which are parameterised in terms of both system and loading traits. The first example demonstrates the viability of the method for a system of high-dimensional parametric dependency exhibiting strongly varying nonlinear behaviour. The second example verifies the proposed scheme on a large-scale system, with the inclusion of hyper-reduction, in order to demonstrate the utility of the method for hyper-accelerated model evaluations.
In both examples, the potential of the method for quantifying uncertainty on its estimates is also demonstrated.
The main limitation of the method in comparison to current methodologies is the relative complexity of the training process of the cVAE models. Owing to their very flexible nature, neural network methods such as VAEs comprise a relatively high number of hyperparameters that must be tuned for optimal performance. When training the VpROM, it is necessary to select such hyperparameters, which include, for instance, the number of layers in the network, the number of neurons in each of these layers, the activation functions used in the network, and the learning rate of the optimisation algorithm. There exist a number of heuristics for choosing such parameters, such as grid search, yet it remains worth highlighting that this process requires more effort than other state-of-the-art methods exploiting clustering or interpolation.

Figure 4 :
Figure 4: Graphic of the frame setup and illustration of the nonlinear mechanism in its links. The green arrow indicates the direction of ground motion, whereas the colored arrows indicate the orientation of the beam elements.
Precision with respect to accelerations.

Figure 5 :
Figure 5: Box plots reporting the accuracy of capturing the displacement and acceleration time histories. The distributions of the respective error measures err_u, err_ü from Equation (17) are visualised along with the respective median (red line) and outliers (red crosses).
Precision of the established CpROM.
Precision of the suggested VpROM.

Figure 6 :
Figure 6: Visualisation over the parameter space of the error distribution of Equation (17) for displacement time histories (err_u). The proposed VpROM is compared with the CpROM, as the MACpROM delivers a slightly worse performance, as indicated in Figure 5. All samples with errors greater than 20% are depicted at the 20% color level for a clearer comparison.
Approximation for a sample in the dark red region in Figure 6. Approximation for a sample in the red region in Figure 6. Approximation for a sample in the yellow region in Figure 6. Approximation for a sample in the cyan region in Figure 6.
Approximation on displacement response.
Approximation on acceleration response.

Figure 8 :
Figure 8: Average quality of the VpROM approximation. Evaluation is performed on the degree of freedom with the maximum absolute response. The shaded area quantifies the uncertainty of the response inference.
(a) FE mesh in an example deformed state. (b) ECSW elements highlighted in red for one of the clusters.

Figure 9 :
Figure 9: Wind turbine tower: FE model and example of ECSW mesh. For the ECSW elements, a horizontal cut is depicted for visualization purposes.

Figure 10 :
Figure 10: Visualisation of the approximation of internal stresses achieved by the proposed VpROM using nodal averaging. Only the yielding domain is visualised, which extends to one-third of the total height.

Figure 12 :
Figure 12: Quality and confidence bounds of the HP-VpROM approximation. Evaluation is performed on the degree of freedom with the maximum absolute response. The shaded area quantifies the uncertainty of the response inference.

Table 1 :
The algorithmic MACpROM framework. The maximum number of clusters can be used instead of a user-defined tolerance during Step 2.
Step 2b: Evaluate the MAC between local bases of training realisations.
Step 2c: Assign each training basis to the cluster whose basis minimises the respective MAC.
Step 2d: Identify the maximum obtained MAC and check whether it exceeds the pre-defined tolerance.
Step 2e: If so, define a new cluster center at the point of maximum MAC.
Step 2f: Refine the sampling domain by adding training states between the cluster center(s) and maximum MAC point(s) and repeat from Step 1 (if needed).

Table 2 :
Reference table for compared ROMs.
In what follows, we present different case studies comparing the performance of these different schemes.

Table 3 :
Two-story frame: Range of the parameter values of the ROM. All parameters follow a uniform distribution.

Table 4 :
Performance of the ROMs from Table 2. The median and maximum error are reported with respect to displacements and accelerations. Efficiency is also reported, although hyper-reduction has not been implemented.

Table 5 :
Range of the parametric values of the implemented ROMs. E_ref refers to the typical modulus assigned to each material of the model.
Parameters: Excitation amplitude A; Yield stress σ_VM: [375, 450] MPa; Young's modulus E: [0.80, 1.20] × E_ref (GPa)

Table 6 :
Performance measures for the hyper-reduced ROMs from Table 2. Two validation samples are presented as an example, along with the respective median and max error measures from Equation (17). Efficiency is also reported.