Abstract
The phase-field method is a powerful and versatile computational approach for modeling the evolution of microstructures and associated properties for a wide variety of physical, chemical, and biological systems. However, existing high-fidelity phase-field models are inherently computationally expensive, requiring high-performance computing resources and sophisticated numerical integration schemes to achieve a useful degree of accuracy. In this paper, we present a computationally inexpensive, accurate, data-driven surrogate model that directly learns the microstructural evolution of targeted systems by combining phase-field and history-dependent machine-learning techniques. We integrate a statistically representative, low-dimensional description of the microstructure, obtained directly from phase-field simulations, with either a time-series multivariate adaptive regression splines autoregressive algorithm or a long short-term memory neural network. The neural-network-trained surrogate model shows the best performance and accurately predicts the nonlinear microstructure evolution of a two-phase mixture during spinodal decomposition in seconds, without the need for “on-the-fly” solutions of the phase-field equations of motion. We also show that the predictions from our machine-learned surrogate model can be fed directly as an input into a classical high-fidelity phase-field model in order to accelerate the high-fidelity phase-field simulations by leaping in time. Such a machine-learned phase-field framework opens a promising path forward to use accelerated phase-field simulations for discovering, understanding, and predicting processing–microstructure–performance relationships.
Introduction
The phase-field method is a popular mesoscale computational method used to study the spatiotemporal evolution of a microstructure and its physical properties. It has been extensively used to describe a variety of important evolutionary mesoscale phenomena, including grain growth and coarsening^{1,2,3}, solidification^{4,5,6}, thin-film deposition^{7,8}, dislocation dynamics^{9,10,11}, vesicle formation in biological membranes^{12,13}, and crack propagation^{14,15}. Existing high-fidelity phase-field models are inherently computationally expensive because they solve a system of coupled partial differential equations for a set of continuous field variables that describe these processes. At present, efforts to minimize computational costs have focused primarily on leveraging high-performance computing architectures^{16,17,18,19,20,21} and advanced numerical schemes^{22,23,24}, or on integrating machine-learning algorithms with microstructure-based simulations^{25,26,27,28,29,30,31}. For example, leading studies have constructed surrogate models capable of rapidly predicting microstructure evolution from phase-field simulations using a variety of methods, including Green’s function solutions^{25}, Bayesian optimization^{26,28}, or a combination of dimensionality reduction and autoregressive Gaussian processes^{29}. Yet, even for these successful solutions, the key challenge has been to balance accuracy with computational efficiency. For instance, the computationally efficient Green’s function solution cannot guarantee accurate solutions for complex, multivariable phase-field models. In contrast, Bayesian optimization techniques can solve complex, coupled phase-field equations, but at a higher computational cost (although the number of simulations to be performed is kept to a minimum, since each subsequent simulation’s parameter set is informed by the Bayesian optimization protocol).
Autoregressive models are only capable of predicting microstructural evolution for the values on which they were trained, limiting the ability of this class of models to forecast future values beyond the training set. For all three classes of models, computational cost-effectiveness decreases as the complexity of the microstructure evolution process increases.
In this work, we create a cost-minimal surrogate model capable of solving microstructural evolution problems in fractions of a second by combining a statistically representative, low-dimensional description of the microstructure evolution obtained directly from phase-field simulations with a history-dependent machine-learning approach (see Fig. 1). We illustrate this protocol by simulating the spinodal decomposition of a two-phase mixture. The results produced by our surrogate model were achieved in fractions of a second (lowering the computational cost by four orders of magnitude) and showed only a 5% loss in accuracy compared to the high-fidelity phase-field model. To arrive at this improvement, our surrogate model reframes the phase-field simulations as a multivariate time-series problem, forecasting the microstructure evolution in a low-dimensional representation. As illustrated in Fig. 1, we accomplish our accelerated phase-field framework in three steps. We first perform high-fidelity phase-field simulations to generate a large and diverse set of microstructure evolutionary paths as a function of the phase fraction, c_{A}, and phase mobilities, M_{A} and M_{B} (Fig. 1a). We then capture the most salient features of the microstructures by calculating the microstructures’ autocorrelations, and we subsequently perform principal component analysis (PCA) on these functions in order to obtain a faithful low-dimensional representation of the microstructure evolution (Fig. 1b). Lastly, we utilize a history-dependent machine-learning approach (Fig. 1c) to learn the time-dependent evolutionary phenomena embedded in this low-dimensional representation and thereby accurately and efficiently predict the microstructure evolution without solving computationally expensive phase-field-based evolution equations.
We compare two different machine-learning techniques, namely time-series multivariate adaptive regression splines (TSMARS)^{32} and the long short-term memory (LSTM) neural network^{33}, to gauge their efficacy in developing surrogate models for phase-field predictions. These methods were chosen for their nonparametric nature (i.e. they do not have a fixed model form) and their demonstrated success in predicting complex, time-dependent, nonlinear behavior^{32,34,35,36}. Based on the comparison of results, we chose the LSTM neural network as the primary machine-learning architecture to accelerate phase-field predictions (Fig. 1c), because the LSTM-trained surrogate model yielded better accuracy and long-term predictability, even though it is more demanding and finicky to train than the TSMARS approach. Besides being computationally efficient and accurate, we also show that the predictions from our machine-learned surrogate model can be used as an input for a classical high-fidelity phase-field model via a phase-recovery algorithm^{37,38} in order to accelerate the high-fidelity predictions (Fig. 1d).
Hence, the present study consists of three major parts: (i) constructing surrogate models trained via machine-learning methods on a large phase-field simulation data set; (ii) executing these models to produce accurate and rapid predictions of the microstructure evolution in a low-dimensional representation; and (iii) performing accelerated high-fidelity phase-field simulations using the predictions from this machine-learned surrogate model.
Results and discussion
Low-dimensional representation of phase-field results
We base the formulation of our history-dependent surrogate model on a low-dimensional representation of the microstructure evolution. To this end, we first generated a large training (5000 simulations) and a moderate testing (500 simulations) phase-field data set for the spinodal decomposition of an initially random microstructure by independently sampling the phase fraction, c_{A}, and phase mobilities, M_{A} and M_{B}, and using our in-house multiphysics phase-field modeling code MEMPHIS (mesoscale multiphysics phase-field simulator)^{8,39}. The results of these simulations gave a wide variety of microstructural evolutionary paths. Details of our phase-field model and numerical solution are provided in “Methods” and in Supplementary Note 1. Examples of microstructure evolutions as a function of time for different sets of model parameters (c_{A}, M_{A}, M_{B}) are reported in Supplementary Note 2.
We then calculated the autocorrelation \({{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({\bf{r}},{t}_{i}\right)\) of the spatially dependent composition field c(x, t_{i}) at equally spaced time intervals t_{i} for each spinodal decomposition phase-field simulation in our training set. Additional information on the calculation of the autocorrelation is provided in “Methods”. For a given microstructure, the autocorrelation function can be interpreted as the conditional probability that two points at positions x_{1} and x_{2} within the microstructure, or equivalently for a random vector r = x_{2} − x_{1}, are found to be in phase A. Because the microstructures of interest comprise two phases, the microstructure’s autocorrelation and its associated radial average, \(\overline{S}(r,{t}_{i})\), contain the same information about the microstructure as the high-fidelity phase-field simulations. For example, the volume fraction of phase A, c_{A}, is the value of the autocorrelation at the center point, while the average feature size of the microstructure corresponds to the first minimum of \(\overline{S}(r,{t}_{i})\) (i.e. \({\mathrm d}\overline{S}(r,{t}_{i})/{\mathrm d}r=0\)). Collectively, this set of autocorrelations provides us with a statistically representative quantification of the microstructure evolution as a function of the model inputs (c_{A}, M_{A}, M_{B})^{40,41,42}. Figure 2a illustrates the time evolution of the microstructure, its autocorrelation, and the radial average of the autocorrelation for phase A for one of our simulations at three distinct time frames. For all the simulations in our training and testing data sets, we observe similar trends for the microstructure evolution, regardless of the phase fraction and phase mobilities selected. We first notice that, at the initial frame t_{0}, the microstructure has no distinguishable feature, since the compositional field is randomly distributed spatially.
We then observe the rapid formation of subdomains between frame t_{0} and frame t_{10}, followed by a smooth and slow coalescence and evolution of the microstructure from frame t_{10} until the end of the simulation at frame t_{100}. Based on this observation, we trained our machine-learned surrogate model starting at frame t_{10}, once the microstructure reached a slow and steady evolution regime.
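As a concrete illustration of this step, the two-point autocorrelation of a binary phase indicator field, and its radial average, can be computed with FFTs via the Wiener–Khinchin theorem. The sketch below assumes a 2D domain with periodic boundary conditions; the function names are illustrative and are not taken from MEMPHIS:

```python
import numpy as np

def autocorrelation(phase_map):
    """Two-point autocorrelation S2^(A,A) of a binary phase-A indicator field.

    Computed with FFTs under periodic boundary conditions: the probability
    that two points separated by a vector r both lie in phase A.
    """
    f = np.fft.fftn(phase_map)
    # Wiener-Khinchin: inverse FFT of the power spectrum, normalized by volume
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase_map.size
    return np.fft.fftshift(s2)  # place the zero-separation vector at the center

def radial_average(s2):
    """Radially averaged autocorrelation S-bar(r) (2D fields assumed)."""
    center = np.array(s2.shape) // 2
    idx = np.indices(s2.shape)
    r = np.sqrt(((idx - center[:, None, None]) ** 2).sum(axis=0)).astype(int)
    sums = np.bincount(r.ravel(), weights=s2.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts
```

Consistent with the interpretation above, for a binary indicator of phase A the value of the autocorrelation at zero separation (the center point after the shift) recovers the phase fraction c_{A}.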
We simplified the statistical, high-dimensional microstructural representation given by the microstructures’ autocorrelations via PCA^{25,43,44}. This operation enables us to construct a low-dimensional representation of the time evolution of the microstructure spatial statistics, while still faithfully capturing the most salient features of the microstructure and its evolution. Details on PCA are provided in “Methods”. Figure 2b shows the 5500 microstructure evolutionary paths from our training and testing data sets for the first three principal components. For the 5000 microstructure evolutionary paths in our training data set, the principal components are fitted to the phase-field data; for the 500 microstructure evolutionary paths in our testing data set, the principal components are projected. In the reduced space, we can make the same observations regarding the evolution of the microstructure: a rapid microstructure evolution followed by a steady, slow evolution. In Fig. 2c, we show that only the first 10 principal components are needed to capture over 98% of the variance in the data set. Thus, we use the time evolution of these 10 principal components to construct our low-dimensional representation of the microstructure evolution. Therefore, the dimensionality of the microstructure evolution problem was reduced from a (512 × 512) × 100 to a 10 × 100 spatiotemporal space.
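The fit/project distinction above can be sketched as a plain SVD-based PCA on a matrix of flattened, mean-centered autocorrelations (one row per frame). This is a generic implementation, not the exact pipeline used in the paper: training-set autocorrelations fit the components, and testing-set autocorrelations are projected onto them.

```python
import numpy as np

def pca_fit(X, n_components=10):
    """Fit PCA by SVD on mean-centered data.

    X: (n_samples, n_features) matrix of flattened autocorrelations.
    Returns the mean, principal directions, low-dimensional scores,
    and the fraction of variance explained per retained component.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]        # principal directions
    scores = Xc @ components.T            # low-dimensional representation
    explained = (S ** 2) / (S ** 2).sum() # variance fraction per component
    return mean, components, scores, explained[:n_components]

def pca_project(X_new, mean, components):
    """Project new (testing-set) autocorrelations onto the fitted components."""
    return (X_new - mean) @ components.T
```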
LSTM neural network parameters and architecture
The previous step captured the time history of the microstructure evolution in a statistical manner. We combine the PCA-based representation of the microstructure with a history-dependent machine-learning technique to construct our microstructure evolution surrogate model. Based on performance, we employed an LSTM neural network, which uses the model inputs (c_{A}, M_{A}, M_{B}) and the previously known time history of the microstructure evolution (via a sequence of previous principal component scores) to predict future time steps (results using TSMARS, which uses the “m” most recent known and predicted time frames of the microstructure history to predict future time steps, are discussed in Supplementary Note 3).
In order to develop a successful LSTM neural network, we first needed to determine its optimal architecture (i.e. the number of LSTM cells defining the neural network; see Supplementary Note 4 for additional details) as well as the optimal number of frames on which the LSTM needs to be trained. We determined the optimal number of LSTM cells by training six different LSTM architectures (comprising 2, 4, 14, 30, 40, and 50 LSTM cells) for 1000 epochs. For all these architectures, we added a fully connected layer after the last LSTM cell in order to produce the desired output sequence of principal component scores. We trained each of these architectures on the sequence of principal component scores from frame t_{10} to frame t_{70} for each of the 5000 spinodal decomposition phase-field simulations in our training data set. As a result, each LSTM architecture was trained on a total of 300,000 time observations (i.e. 5000 sequences comprising 60 frames each). To prevent overfitting, we kept the number of training weights constant among all the architectures at approximately one half of the total time observations (i.e. ~150,000) by modifying the hidden layer size of each architecture accordingly. The training of each LSTM architecture required 96 hours on a single node with 2.1 GHz Intel Broadwell® E5-2695 v4 processors (36 cores and 128 GB RAM per node). Details of the LSTM architecture are provided in Supplementary Note 4.
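To make the stacked architecture concrete, the forward pass of a two-cell LSTM followed by a fully connected layer can be sketched in plain NumPy. The weights below are random and untrained, and the dimensions (10 principal component scores plus the three model inputs per frame, a hidden size of 32) are illustrative assumptions rather than the paper's exact configuration; the point is only to show how the cell state carries the microstructure history through the sequence.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(x_seq, W, U, b, hidden):
    """Forward pass of one LSTM cell over a sequence (inference only).

    x_seq: (T, d_in); W: (4*hidden, d_in); U: (4*hidden, hidden); b: (4*hidden,).
    Gate order in the stacked weights: input, forget, cell, output.
    Returns the (T, hidden) sequence of hidden states.
    """
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    out = []
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[:hidden])            # input gate
        f = sigmoid(z[hidden:2 * hidden])  # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell state
        o = sigmoid(z[3 * hidden:])        # output gate
        c = f * c + i * g                  # cell state carries the time history
        h = o * np.tanh(c)
        out.append(h)
    return np.array(out)

rng = np.random.default_rng(0)
d_in, hidden, n_pc, T = 13, 32, 10, 60  # 10 PC scores + (c_A, M_A, M_B); frames t10-t70

def init(shape):
    return rng.normal(scale=0.1, size=shape)

# Two stacked LSTM cells followed by a fully connected output layer
params1 = (init((4 * hidden, d_in)), init((4 * hidden, hidden)), np.zeros(4 * hidden))
params2 = (init((4 * hidden, hidden)), init((4 * hidden, hidden)), np.zeros(4 * hidden))
W_fc, b_fc = init((n_pc, hidden)), np.zeros(n_pc)

seq = rng.normal(size=(T, d_in))   # one (synthetic) training sequence
h1 = lstm_layer(seq, *params1, hidden)
h2 = lstm_layer(h1, *params2, hidden)
pred = h2 @ W_fc.T + b_fc          # predicted PC scores, one vector per frame
```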
In Fig. 3a, we report our training and validation loss results for the six different LSTM architectures tested for the first principal component. Our results show that the architectures with two and four cells significantly outperformed the architectures with a higher number of cells. This result is not a matter of overfitting with more cells, since the sparser (in number of cells) networks also train better. Rather, just as in traditional neural networks, the deeper the LSTM architecture, the more observations the network needs in order to learn; the architectures with fewer cells outperform the deeper ones because of the “limited” data set on which we are training the LSTM networks. For the same reason, the two-cell LSTM architecture converged faster than the four-cell LSTM architecture. As a result, the best-performing architecture, and the one we chose for the rest of this work, is the two-cell LSTM network with one fully connected layer.
Regarding the optimal number of frames, we assessed the accuracy of the six different LSTM architectures using two error metrics for each realization k in our training and testing data sets and for each frame t_{i}. The first error metric, the absolute relative error \(AR{E}^{(k)}\left({t}_{i}\right)\), quantifies the accuracy of the model in predicting the average microstructural feature size. The second, \({D}^{(k)}\left({t}_{i}\right)\), uses the Euclidean distance between the predicted and true autocorrelations normalized by the Euclidean norm of the true autocorrelation; this metric provides insight into the local accuracy of the predicted autocorrelation on a per-voxel basis. Upon convergence of these two metrics, the optimal number of training frames guarantees that the predicted autocorrelation is accurate both at a local level and in terms of the average feature size. Descriptions of the error metrics are provided in “Methods”. We trained the different neural networks starting from frame t_{10} onwards and evaluated the following numbers of training frames: 1, 2, 5, 10, 20, 40, 60, and 80. Recall that the number of frames controls the number of time observations; therefore, just as before, in order to prevent overfitting, we ensured that the number of weights trained was roughly half the number of time observations.
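The two error metrics can be sketched as follows. The exact definitions are given in “Methods”, so the implementations below, and in particular the first-local-minimum search used as a proxy for the average feature size, should be read as illustrative assumptions:

```python
import numpy as np

def feature_size(radial_s2):
    """Average-feature-size proxy: index of the first local minimum of S-bar(r)."""
    d = np.diff(radial_s2)
    for r in range(1, len(d)):
        if d[r - 1] < 0 and d[r] >= 0:  # slope turns from negative to non-negative
            return r
    return len(radial_s2) - 1

def are_metric(pred_radial, true_radial):
    """Absolute relative error of the predicted average feature size."""
    t = feature_size(true_radial)
    p = feature_size(pred_radial)
    return abs(p - t) / t

def d_metric(pred_s2, true_s2):
    """Euclidean distance between autocorrelations, normalized by the true norm."""
    return np.linalg.norm(pred_s2 - true_s2) / np.linalg.norm(true_s2)
```

An ARE of zero means the predicted and true first minima coincide; a D of, say, 0.05 means the predicted autocorrelation deviates from the truth by 5% in a voxel-wise Euclidean sense.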
In Fig. 3b, c, we provide the results for both \({\mathrm {ARE}}^{(k)}\left({t}_{100}\right)\) and \({D}^{(k)}\left({t}_{100}\right)\) with respect to the number of frames on which the LSTM was trained. The mean value of each distribution is indicated with a thick black line, and the dashed green line indicates the 5% accuracy difference target. Our convergence study shows that we achieved good overall accuracy for the predicted autocorrelation when the LSTM neural network was trained on 80 frames. It is interesting to note that fewer frames were necessary to achieve convergence for the normalized distance (Fig. 3c) than for the average feature size (Fig. 3b).
Surrogate model prediction and validation
We then evaluated the quality and accuracy of the best-performing LSTM surrogate model (i.e. the one with the two-cell architecture and one fully connected layer, trained for 80 frames) for predicting the microstructure evolution for frames ranging from t_{91} to t_{100} and for each set of parameters in both our testing and training sets. We report these validation results in Fig. 4.
For our error metrics \({\mathrm {ARE}}^{(k)}\left({t}_{i}\right)\) and \({D}^{(k)}\left({t}_{i}\right)\), our results show an approximate average 5% loss in accuracy compared to the high-fidelity phase-field results, as seen in Fig. 4a, b. The mean loss of accuracy for \({\mathrm {ARE}}^{(k)}\left({t}_{i}\right)\) is 5.3% for the training set and 5.4% for the testing set; for \({D}^{(k)}\left({t}_{i}\right)\), it is 6.8% for the training set and 6.9% for the testing set. Additionally, the loss of accuracy of our machine-learned surrogate model remains constant as we predict the microstructure evolution further in time, beyond the number of training frames. This is not surprising, since the LSTM neural network utilizes the entire previous history of the microstructure evolution to forecast future frames.
In Fig. 4c–e, we further illustrate the good accuracy of our machine-learned surrogate model by analyzing in detail our predictions for a randomly selected microstructure (i.e. for a randomly selected set of model inputs c_{A}, M_{A}, and M_{B}) in our testing data set at frame t_{100}. In Fig. 4c, we show the pointwise error between the predicted and true autocorrelations for that microstructure, and the corresponding cumulative probability distribution. Overall, we notice a good agreement between the two microstructure autocorrelations, with the greatest error incurred for the long-range microstructural feature correlations. This error is easily understood, given the relatively small number of principal components retained in our low-dimensional microstructural representation; an even better agreement could have been achieved if additional principal components had been included. As seen in Fig. 4e, the predictions for the characteristic feature sizes in the microstructure given by our surrogate model are in good agreement with those obtained directly from the high-fidelity phase-field model. These results show that, despite some local errors, the microstructures simulated by the high-fidelity phase-field model and those predicted by our machine-learned surrogate model are statistically similar. Finally, we note that both our training and testing data sets cover a range of phase-field input parameters that corresponds to a majority of problems of interest, avoiding issues with extrapolating outside of that range.
Computational efficiency
The results above not only illustrate the good accuracy of our surrogate model relative to the high-fidelity phase-field model over a broad range of model parameters (c_{A}, M_{A}, M_{B}), but they are also computationally efficient. The two main computational costs in our accelerated phase-field protocol were one-time costs incurred during (i) the execution of \({N}_{{\rm{sim}}}=5000\) high-fidelity phase-field simulations to generate a data set of different microstructure evolutions as a function of the model parameters and (ii) the training of the LSTM neural network. Our machine-learned surrogate model predicted the time-shifted principal component score sequence of 10 frames (i.e. a total of 5,000,000 time steps) in 0.01 s, with an additional 0.05 s to reconstruct the microstructure from the autocorrelation, on a single node with 36 processors. In contrast, the high-fidelity phase-field simulations required approximately 12 minutes on 8 nodes with 16 processors per node on our high-performance computing resources for the same calculation of 10 frames. To obtain the computational gain factor, we first divided the total time of the LSTM-trained surrogate model by 3.55 (since the LSTM-trained model uses approximately four times fewer computational resources), and then divided the 12 minutes required by the high-fidelity phase-field model to compute 10 frames by this normalized time. On this basis, the LSTM model yields results 42,666 times faster than the full-scale phase-field method. Although the set of model inputs can introduce some variability in computing time, once trained, the computing time of our surrogate model was independent of the selection of input parameters to the surrogate model.
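The arithmetic behind the reported gain factor can be laid out explicitly using the timings quoted above (the small difference from the quoted 42,666 presumably reflects rounding of the measured times):

```python
# Reproducing the computational-gain estimate from the quoted timings.
# The 3.55 factor normalizes for the surrogate using ~4x fewer resources.
surrogate_time_s = 0.01 + 0.05          # LSTM forecast + microstructure reconstruction
normalized_surrogate_s = surrogate_time_s / 3.55
phase_field_time_s = 12 * 60            # high-fidelity cost for the same 10 frames
gain = phase_field_time_s / normalized_surrogate_s
print(round(gain))                      # ~42,600, i.e. four orders of magnitude
```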
Acceleration of phase-field predictions
We have demonstrated a robust, fast, and accurate way to predict microstructure evolution by considering a statistically representative, low-dimensional description of the microstructure evolution integrated with a history-dependent machine-learning approach, without the need for “on-the-fly” solutions of the phase-field equations of motion. This computationally efficient and accurate framework opens a promising path forward to accelerate phase-field predictions. Indeed, as illustrated in Fig. 5, we showed that the predictions from our machine-learned surrogate model can be fed directly as an input to a classical high-fidelity phase-field model in order to accelerate the high-fidelity phase-field simulations by leaping in time. We used a phase-recovery algorithm^{30,37,38} to reconstruct the microstructure (Fig. 5a) from the microstructure autocorrelation predicted by our LSTM-trained surrogate model at frame t_{95} (details of the phase-recovery algorithm are provided in Supplementary Note 5). We then used this reconstructed microstructure as the initial microstructure in a regular high-fidelity phase-field simulation and let the microstructure evolve further to frame t_{100} (Fig. 5b). Our results in Fig. 5c–e show that the microstructure predicted solely from a high-fidelity phase-field simulation and that obtained from our accelerated phase-field framework are statistically similar. Even though our reconstructed microstructure has some noise due to deficiencies associated with the phase-recovery algorithm^{30}, the phase-field method rapidly regularized and smoothed out the microstructure as it evolved further. Hence, besides drastically reducing the computational time required to predict the last five frames (i.e. 2,500,000 time steps), our accelerated phase-field framework enables us to “time jump” to any desired point in the simulation with minimal loss of accuracy.
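The essence of such a phase-recovery step can be sketched as a Fienup-style error-reduction iteration: by the Wiener–Khinchin theorem, the Fourier transform of the autocorrelation gives the power spectrum of the microstructure, so the Fourier magnitude is known and only the Fourier phase must be recovered by alternating Fourier-space and real-space constraints. This is a generic sketch under that assumption, not the specific algorithm of refs. ^{30,37,38}:

```python
import numpy as np

def recover_phase(s2, n_iter=200, seed=0):
    """Error-reduction phase retrieval: build a real field in [0, 1] whose
    autocorrelation approximates s2 (center-shifted, as plotted in the paper).
    """
    rng = np.random.default_rng(seed)
    # FFT of the autocorrelation is the power spectrum |FFT(m)|^2
    power = np.fft.fftn(np.fft.ifftshift(s2)).real * s2.size
    magnitude = np.sqrt(np.clip(power, 0.0, None))
    m = rng.random(s2.shape)                       # random initial guess
    for _ in range(n_iter):
        F = np.fft.fftn(m)
        phase = np.exp(1j * np.angle(F))
        m = np.fft.ifftn(magnitude * phase).real   # impose the known Fourier magnitude
        m = np.clip(m, 0.0, 1.0)                   # real-space constraint: c in [0, 1]
    return m
```

Error-reduction iterations can stagnate in local minima, which is one source of the reconstruction noise mentioned above; in practice the subsequent phase-field evolution smooths such artifacts out.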
This maneuverability is advantageous since we can make use of this accelerated phase-field framework to rapidly explore a vast phase-field input space for problems where evolutionary mesoscale phenomena are important. The intent of the present framework is not to embed physics per se; rather, our machine-learned surrogate model learns the behavior of a time-dependent functional relationship (which is a function of many input variables) to represent the microstructure evolution problem. However, even though we have trained our machine-learned surrogate model over a broad range of input parameter values and over a range of initial conditions, these may not necessarily be representative of the generality of phase-field methods, which can exhibit many types of nonlinearities and nonconvexities in the free energy. We discuss this point further in the section “Beyond spinodal decomposition”.
Comparison with other machinelearning approaches
The comparison of the TSMARS- and LSTM-trained surrogate models highlights both the advantages and drawbacks of using the LSTM neural network as the primary machine-learning architecture to accelerate phase-field predictions (see Supplementary Note 3 for TSMARS results). The TSMARS-trained model, which is an autoregressive time-series forecasting technique, proved less accurate at extrapolating the evolution of the microstructure than the LSTM-trained model, and demonstrated a dramatic loss of accuracy as the number of predicted time frames increases, with predictions acceptable for only a couple of time frames beyond the number of training frames. The TSMARS model proved unsuitable for establishing our accelerated phase-field framework because it uses predictions from previous time frames to predict subsequent time frames, thus compounding minor errors as the number of time frames increases. The LSTM architecture does not have this problem, since it only uses the microstructure history from previous time steps, and not its own predictions, to forecast a time-shifted sequence of future microstructure evolution. However, the LSTM model is computationally more expensive to train than the TSMARS model. Our LSTM architecture required 96 hours of training on a single node with 2.1 GHz Intel Broadwell® E5-2695 v4 processors (36 cores and 128 GB RAM per node), whereas the TSMARS model required only 214 seconds on a single node of the same high-performance computer. Therefore, given its accuracy in predicting the next frame and its inexpensive nature, the TSMARS-trained model may prove useful for data augmentation in cases where the desired prediction of the microstructure evolution is not far ahead in time.
Beyond spinodal decomposition
There are several extensions of the present framework that could be implemented in order to improve its accuracy and acceleration performance. These improvements are related to (i) the dimensionality reduction of the microstructure evolution problem, (ii) the history-dependent machine-learning approach used as an “engine” to accelerate predictions, and (iii) the extension to multiphase, multi-field microstructure evolution problems. The first topic concerns improving the accuracy of the low-dimensional representation of the microstructure evolution in order to better capture nonlinearities and nonconvexities of the free energy representative of the system. The second and third topics concern replacing the LSTM “engine” with another approach that can either improve accuracy, reduce the required amount of training data, or enable extrapolation over a greater number of frames. As we move forward, we anticipate that these extensions will enable better predictions and capture more complex microstructure evolution phenomena beyond the case study presented here.
Regarding the dimensionality reduction, several improvements can be made to the second step of the protocol presented in Fig. 1b. First, we can further improve the efficiency of our machine-learned surrogate model by incorporating higher-order spatial correlations (e.g., three-point spatial correlations and two-point cluster-correlation functions)^{45,46} in our low-dimensional representation of the microstructure evolution in order to better capture high- and low-order spatial complexity in these simulations. Second, algorithms such as PCA, or similarly independent component analysis and non-negative matrix factorization, can be viewed as matrix factorization methods. These algorithms implicitly assume that the data of interest lie on an embedded linear manifold within the higher-dimensional space describing the microstructure evolution. In the case of the spinodal decomposition exemplar problem studied here, this assumption is for the most part valid, given the linear regime seen in all the low-dimensional microstructure evolution trajectories presented in Fig. 2b. However, for microstructure evolution problems where these trajectories are no longer linear and/or convex, a more flexible and accurate low-dimensional representation of the (nonlinear) microstructure evolution can be obtained by using unsupervised algorithms that learn the nonlinear embedding. Numerous algorithms have been developed for nonlinear dimensionality reduction, including kernel PCA^{47}, Laplacian eigenmaps^{48}, ISOMAP^{49}, locally linear embedding^{50}, autoencoders^{51}, and Gaussian process latent variable models^{52} (for a more comprehensive survey of nonlinear dimensionality-reduction algorithms, see Lee and Verleysen^{53}). In this case, a researcher would simply substitute one of these (nonlinear) manifold-learning algorithms for PCA in the second step of our protocol illustrated in Fig. 1b.
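As one example of such a substitution, kernel PCA can serve as a drop-in nonlinear replacement for the linear PCA step. The sketch below uses an RBF kernel with an arbitrary bandwidth gamma; both the kernel choice and its hyperparameter are illustrative assumptions, and any of the cited manifold-learning methods could be substituted instead.

```python
import numpy as np

def kernel_pca(X, n_components=10, gamma=1e-3):
    """Kernel PCA with an RBF kernel, returning the low-dimensional scores.

    X: (n_samples, n_features) matrix of flattened autocorrelations.
    The kernel matrix is centered in feature space before eigendecomposition.
    """
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-gamma * sq)                              # RBF kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one           # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]               # sort eigenpairs descending
    keep = np.clip(vals[:n_components], 0.0, None)
    return vecs[:, :n_components] * np.sqrt(keep)        # scores, one row per sample
```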
The comparison between the TSMARS- and LSTM-trained surrogate models in the previous subsection demonstrated the ability of the LSTM neural network to successfully learn the time history of the microstructure evolution. At the root of this performance is the ability of the LSTM network to carry out sequence learning and store traces of past events from the microstructure evolutionary path. LSTMs are a subclass of the recurrent neural network (RNN) architecture in which the memory of past events is maintained through recurrent connections within the network. Alternative RNN options, such as the gated recurrent unit^{54} or the independently recurrent neural network (IndRNN)^{55}, may prove more efficient at training our surrogate model. Other methods for handling temporal information are also available, including memory networks^{56} and temporal convolutions^{57}. Instead of RNN architectures, a promising avenue may be to use self-modifying/plastic neural networks^{58}, which harness evolutionary algorithms to actively modulate the time-dependent learning process. Recurrent plastic networks have demonstrated a higher potential to be successfully trained to memorize and reconstruct sets of new, high-dimensional, time-dependent data compared to traditional (non-plastic) recurrent networks^{58,59}. Such networks may offer more efficient “engine” solutions to accelerate phase-field predictions for complex microstructure evolutionary paths, especially when dealing with very large computational domains and multi-field phase-field models, or with nonlinear, nonconvex microstructural evolutionary paths. Ultimately, the best solution will depend on both the accuracy of the low-dimensional representation and the complexity of the phase-field problem at hand.
The machine-learning framework presented here is also not limited to the spinodal decomposition of a two-phase mixture; it can be applied more generally to other multiphase and multi-field models, although this extension is nontrivial. In the case of a multiphase model, there are numerous ways by which the free-energy functional can be extended to multiple phases/components, and this is a well-studied topic in the phase-field community^{60,61}. As it relates to this work, it is certainly possible to build surrogate models for multicomponent systems based on some reasonable model output metrics (e.g., the microstructure phase distribution in the current work), although the choice of this metric may not be trivial or straightforward. For example, in a purely interfacial-energy-driven grain-growth model or a model of grain growth via Ostwald ripening, one may build a surrogate model by tracking each individual order parameter for every grain and the composition in the system, which may become prohibitive for many grains. However, one could reduce the number of grains to a single metric using the condition that ∑(ϕ_{i}) = 1 at every grid point and be left with a single order parameter (along with the composition parameter) defining grain size, distribution, and time evolution as a function of the input variables (e.g., mobilities). The construction of surrogate models based on these metrics with two-point statistics and PCA then becomes straightforward. Another possibility would be to calculate and concatenate all n-point spatial statistics deemed necessary to quantify each multiphase microstructure, and then perform PCA on the concatenated autocorrelation vector. Note that in the present case study, we only needed one autocorrelation to fully characterize the two-phase mixture; more autocorrelations would be needed as the number of phases increases.
In the case of a multifield phase-field model, in which multiple coupled field variables (or order parameters) describe different evolutionary phenomena^{8}, one would essentially need to track the progression of each order parameter separately, along with the associated cross-correlation terms. However, the actual details of each step of the protocol are somewhat more involved than those presented here, since they depend on (i) the accuracy of the low-dimensional representation and (ii) the complexity of the phase-field problem considered. We envision that for the low-dimensional representation step illustrated in Fig. 1b, the dimensionality-reduction technique to be used would depend on the type of field variable considered. Similarly, depending on the complexity (e.g., linear vs. nonlinear) of the low-dimensional trajectories of the different fields considered, we may be forced to use a different history-dependent machine-learning approach for each field separately in the step presented in Fig. 1c. An interesting alternative^{31} might be to use neural network techniques such as convolutional neural networks to learn and predict the homogenized, macroscopic free energy and phase fields arising in a multicomponent system.
To summarize, we developed and used a machine-learning framework to efficiently and rapidly predict complex microstructural evolution problems. By employing LSTM neural networks to learn long-term patterns and solve history-dependent problems, we reformulate microstructural evolution problems as multivariate time-series problems. In this formulation, the neural network learns to predict the microstructure evolution via the time evolution of the low-dimensional representation of the microstructure. Our results show that our machine-learned surrogate model can predict the spinodal evolution of a two-phase mixture in a fraction of a second with only a 5% loss in accuracy compared to high-fidelity phase-field simulations. We also showed that surrogate-model trajectories can be used to accelerate phase-field simulations when fed as an input to a classical high-fidelity phase-field model. Our framework opens a promising path forward to use accelerated phase-field simulations for discovering, understanding, and predicting processing–microstructure–performance relationships in problems where evolutionary mesoscale phenomena are critical, such as in materials design.
Methods
Phase-field model
The microstructure evolution for spinodal decomposition of a two-phase mixture^{62} is described by a single compositional order parameter, \(c\left({\bf{x}},t\right)\), representing the atomic fraction of solute. The evolution of c is given by the Cahn–Hilliard equation^{62} and is derived from an Onsager force–flux relationship^{63} such that

\(\frac{\partial c}{\partial t}=\nabla \cdot \left[{M}_{c}\nabla \frac{\delta F}{\delta c}\right],\qquad F={\int }_{V}\left[\frac{{\omega }_{c}}{4}{\left({c}^{2}-1\right)}^{2}+\frac{{\kappa }_{c}}{2}{\left|\nabla c\right|}^{2}\right]{\rm{d}}V,\)   (1)
where ω_{c} is the energy barrier height between the equilibrium phases and κ_{c} is the gradient energy coefficient. The concentration-dependent Cahn–Hilliard mobility is taken to be M_{c} = s(c)M_{A} + (1 − s(c))M_{B}, where M_{A} and M_{B} are the mobilities of each phase, and \(s(c)=\frac{1}{4}(2-c){(1+c)}^{2}\) is a smooth interpolation function between the mobilities. The free energy of the system in Eq. (1) is expressed as a symmetric double-well potential with minima at c = ±1. For simplicity, both the mobility and the interfacial energy are isotropic. This model was implemented, verified, and validated for use in Sandia’s in-house multiphysics phase-field modeling capability MEMPHIS^{8,39}.
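For concreteness, a minimal semi-implicit Fourier-spectral update of this evolution equation can be sketched as follows. This is an illustrative sketch only, not the MEMPHIS implementation: for simplicity it assumes a constant mobility M rather than the concentration-dependent M_{c}, and the function name and time-step size are ours:

```python
import numpy as np

def cahn_hilliard_step(c, dt=1e-2, M=1.0, omega=1.0, kappa=1.0, dx=1.0):
    """One semi-implicit Fourier-spectral step of
    dc/dt = M * lap( omega*(c^3 - c) - kappa*lap(c) ),
    i.e., the Cahn-Hilliard equation with a constant mobility M."""
    n = c.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    c_hat = np.fft.fft2(c)
    bulk_hat = np.fft.fft2(omega * (c**3 - c))   # explicit double-well term
    # stiff kappa*k^4 term treated implicitly for stability
    c_hat = (c_hat - dt * M * k2 * bulk_hat) / (1.0 + dt * M * kappa * k2**2)
    return np.real(np.fft.ifft2(c_hat))

# usage: evolve a small noisy field; the mean composition is conserved
rng = np.random.default_rng(0)
c = 0.1 * rng.standard_normal((64, 64))
for _ in range(100):
    c = cahn_hilliard_step(c)
```

Treating the fourth-order gradient term implicitly (the denominator) is a common way to relax the severe explicit time-step restriction of the Cahn–Hilliard equation; note that the zero-wavenumber mode is untouched, so the mean composition is conserved exactly.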
The values of the energy barrier height between the equilibrium phases and the gradient energy coefficient were assumed to be constant, with ω_{c} = κ_{c} = 1. In order to generate a large and diverse set of phase-field simulations exhibiting a rich variety of microstructural features, we varied the phase concentrations and phase mobilities. For the phase concentration parameter, we focused on cases where the concentration of each phase satisfies c_{i} ≥ 0.15, i = A or B. Note that we only need to specify one phase concentration, since c_{B} = 1 − c_{A}. For the phase mobility parameters, we independently varied the mobility values over four orders of magnitude such that M_{i} ∈ [0.01, 100], i = A or B. We used Latin hypercube sampling (LHS) to generate 5000 sets of parameters (c_{A}, M_{A}, M_{B}) for training, and an additional 500 sets for validation.
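A sampling plan of this kind can be sketched with SciPy's Latin hypercube sampler. One assumption to flag: the paper states only that the mobilities span four orders of magnitude, so the log-uniform spacing below is our illustrative choice, and the function name is ours:

```python
import numpy as np
from scipy.stats import qmc

def sample_parameters(n, seed=0):
    """Latin hypercube samples of (c_A, M_A, M_B).
    Assumption: mobilities are spread log-uniformly over [1e-2, 1e2]."""
    sampler = qmc.LatinHypercube(d=3, seed=seed)
    u = sampler.random(n)                    # n x 3 points in [0, 1)
    c_A = 0.15 + 0.70 * u[:, 0]              # enforces c_A >= 0.15 and c_B = 1 - c_A >= 0.15
    M_A = 10.0 ** (-2.0 + 4.0 * u[:, 1])     # 1e-2 ... 1e2
    M_B = 10.0 ** (-2.0 + 4.0 * u[:, 2])
    return np.column_stack([c_A, M_A, M_B])

params = sample_parameters(5000)             # training set of parameter triplets
```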
All simulations were performed on a 2D square grid with a uniform mesh of 512 × 512 grid points and dimensionless discretization parameters Δx = Δy = 1 and Δt = 1 × 10^{−4}. The composition field within the simulation domain was initially randomly populated by sampling a truncated Gaussian distribution between −1 and 1 with a standard deviation of 0.35 and a mean chosen to generate the desired nominal phase fraction. Each simulation was run for 50,000,000 time steps with periodic boundary conditions applied to all sides of the domain. The microstructure was saved every 500,000 time steps in order to capture the evolution of the microstructure over 100 frames. Each simulation required approximately 120 minutes on 128 processors on our high-performance computing cluster. Illustrations of the variety of microstructure evolutions obtained when sampling various combinations of c_{A}, M_{A}, and M_{B} are provided in Supplementary Note 2.
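The truncated-Gaussian initialization described above can be sketched with `scipy.stats.truncnorm`; the function name and the smaller demonstration grid are ours:

```python
import numpy as np
from scipy.stats import truncnorm

def initial_field(mean, n=512, sigma=0.35, seed=0):
    """Initial composition field: a Gaussian with the given mean and
    standard deviation 0.35, truncated to [-1, 1]."""
    a = (-1.0 - mean) / sigma   # truncation bounds in standard-deviation units
    b = (1.0 - mean) / sigma
    return truncnorm.rvs(a, b, loc=mean, scale=sigma, size=(n, n), random_state=seed)

c0 = initial_field(mean=0.2, n=128)   # smaller grid than the 512 x 512 used in the paper
```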
Statistical representation of microstructures
We use the autocorrelation of the spatially dependent concentration field, \(c\left({\bf{x}},{t}_{i}\right)\), to statistically characterize the evolving microstructure. For a given microstructure, we use a compositional indicator function, \({I}^{{\rm{A}}}\left({\bf{x}},{t}_{i}\right)\), to identify the dominant phase A at a location x within the microstructure and tessellate the spatial domain at each time step, such that \({I}^{{\rm{A}}}\left({\bf{x}},{t}_{i}\right)=1\) if \(c\left({\bf{x}},{t}_{i}\right)>0\) and \({I}^{{\rm{A}}}\left({\bf{x}},{t}_{i}\right)=0\) otherwise.
Note that, in our case, the range of the field variable c is −1 ≤ c ≤ 1, thus motivating our use of 0 as the cutoff to “binarize” the microstructure data. The autocorrelation \({{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({\bf{r}},{t}_{i}\right)\) is defined as the expectation of the product \({I}^{{\rm{A}}}\left({{\bf{x}}}_{1},{t}_{i}\right){I}^{{\rm{A}}}\left({{\bf{x}}}_{2},{t}_{i}\right)\) for points separated by \({\bf{r}}={{\bf{x}}}_{2}-{{\bf{x}}}_{1}\), i.e.

\({{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({\bf{r}},{t}_{i}\right)=\left\langle {I}^{{\rm{A}}}\left({\bf{x}},{t}_{i}\right)\,{I}^{{\rm{A}}}\left({\bf{x}}+{\bf{r}},{t}_{i}\right)\right\rangle .\)
In this form, the microstructure’s autocorrelation resembles a convolution operator and can be efficiently computed using fast Fourier transforms^{38} applied to the finite-difference discretization.
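Under the periodic boundary conditions used here, this FFT-based computation can be sketched as follows (an illustrative sketch; the function name is ours). The value at the zero shift vector equals the volume fraction of phase A, which is also the maximum of the autocorrelation:

```python
import numpy as np

def autocorrelation(c):
    """Two-point autocorrelation of the dominant-phase indicator I^A = 1{c > 0},
    computed over periodic boundaries with FFTs."""
    I = (c > 0.0).astype(float)            # "binarize" the microstructure at cutoff 0
    F = np.fft.fft2(I)
    S = np.real(np.fft.ifft2(F * np.conj(F))) / I.size
    return np.fft.fftshift(S)              # zero shift vector moved to the array center

rng = np.random.default_rng(0)
S2 = autocorrelation(rng.standard_normal((64, 64)))
```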
Principal component analysis
The autocorrelations describing the microstructure evolution cannot be readily used in our accelerated framework since they have the same dimension as the high-fidelity phase-field simulations. Instead, we describe the microstructure evolutionary paths via a reduced-dimensional representation of the microstructure spatial autocorrelation obtained using PCA. PCA is a dimensionality-reduction method that rotationally transforms the data into a new, truncated set of orthonormal axes that captures the variance in the data set with the fewest number of dimensions^{64}. The basis vectors of this space, φ_{j}, are called principal components (PCs), and the weights, α_{j}, are called PC scores. The principal components are ordered by variance. The PCA representation \({{\boldsymbol{S}}}_{{\rm{pca}}}^{(k)}\) of the autocorrelation of phase A for a given microstructure is given by

\({{\boldsymbol{S}}}_{{\rm{pca}}}^{(k)}=\mathop{\sum }\limits_{j=1}^{Q}{\alpha }_{j}^{(k)}{{\boldsymbol{\varphi }}}_{j}+\overline{{\boldsymbol{S}}},\)
where Q is the number of PC directions retained, and \(\overline{{\boldsymbol{S}}}\) represents the sample mean of the autocorrelations, \({{\boldsymbol{S}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}\), for \(k=1\ldots {N}_{{\rm{sim}}}\), with \({N}_{{\rm{sim}}}\) being the number of simulations in our training data set. In the construction of our model, PCA is fitted only to the training data; the testing data are projected into the fitted PCA space.
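This fit-on-train, project-test workflow can be sketched with scikit-learn's PCA; the array shapes below are synthetic stand-ins for flattened autocorrelations, not the paper's data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-ins for flattened autocorrelations (one row per microstructure);
# the real rows would be the S_2 arrays from the phase-field training set.
rng = np.random.default_rng(0)
S_train = rng.standard_normal((200, 1024))
S_test = rng.standard_normal((20, 1024))

pca = PCA(n_components=10)                 # retain Q = 10 PC directions
alpha_train = pca.fit_transform(S_train)   # PC scores of the training data
alpha_test = pca.transform(S_test)         # test data projected into the fitted space

# low-dimensional reconstruction: S ~ mean + sum_j alpha_j * phi_j
S_rec = pca.inverse_transform(alpha_train)
```

The `inverse_transform` call realizes exactly the truncated expansion above: PC scores times principal components plus the sample mean.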
History-dependent machine-learning approaches
Our machine-learning approach establishes a functional relationship \({\mathcal{F}}\) between the low-dimensional descriptors of the microstructure (i.e., the principal component scores) at the current time and their prior lagged values (at t_{i−1}, …, t_{i−n}), together with other simulation parameters affecting the microstructure evolution process, such that each principal component score, \({\alpha }_{j}^{(k)}\), can be approximated as

\({\alpha }_{j}^{(k)}\left({t}_{i}\right)\approx {\mathcal{F}}\left({\alpha }_{j}^{(k)}\left({t}_{i-1}\right),\ldots ,{\alpha }_{j}^{(k)}\left({t}_{i-n}\right),{{\bf{p}}}^{(k)}\right),\)

where \({{\bf{p}}}^{(k)}\) collects the simulation parameters of realization k (here, c_{A}, M_{A}, and M_{B}).
This functional relationship can rapidly (in a fraction of a second, as opposed to hours with our high-fidelity phase-field model in MEMPHIS) predict a broad class of microstructures as a function of simulation parameters with good accuracy. There are many different ways to establish the desired functional relationship \({\mathcal{F}}\). In the present study, we compared two different history-dependent machine-learning techniques, namely TSMARS and the LSTM neural network, and chose the LSTM based on its superior performance.
LSTM networks are RNN architectures wherein nodes are looped, allowing information to persist between consecutive time steps by tracking an internal (memory) state. Since the internal state is a function of all past inputs, the prediction from the LSTM-trained surrogate model depends on the entire history of the microstructure. In contrast, TSMARS is an autoregressive model that predicts the microstructure evolution using only the m most recent inputs of the microstructure history. Details of both algorithms are provided in Supplementary Notes 3 and 4.
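The algorithmic details live in the Supplementary Notes; as an illustration of the input structure both models share, the lagged PC-score inputs can be assembled as below. This is a sketch under our assumptions about array shapes, and the helper name is hypothetical:

```python
import numpy as np

def make_lagged_dataset(scores, params, n_lags):
    """Build (input, target) pairs for a history-dependent model:
    each input concatenates the n_lags previous PC-score vectors with the
    fixed simulation parameters; the target is the PC-score vector at the
    current frame.

    scores: (n_frames, n_pc) trajectory of PC scores for one simulation
    params: (n_params,) simulation parameters, e.g., (c_A, M_A, M_B)
    """
    X, y = [], []
    for i in range(n_lags, scores.shape[0]):
        lags = scores[i - n_lags:i].ravel()   # alpha(t_{i-n}) ... alpha(t_{i-1})
        X.append(np.concatenate([lags, params]))
        y.append(scores[i])
    return np.array(X), np.array(y)

scores = np.arange(100 * 5, dtype=float).reshape(100, 5)   # toy trajectory
X, y = make_lagged_dataset(scores, np.array([0.3, 1.0, 10.0]), n_lags=4)
```

At prediction time, such a model is rolled forward autoregressively: each predicted score vector is appended to the history and becomes part of the next input window.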
Error metrics
The loss used to train our neural network is the mean squared error (MSE) in terms of the principal component scores, \({\mathrm {MSE}}_{{\alpha }_{j}}\), which is defined as

\({{\rm{MSE}}}_{{\alpha }_{j}}=\frac{1}{NK}\mathop{\sum }\limits_{k=1}^{K}\mathop{\sum }\limits_{i=1}^{N}{\left({\hat{\alpha }}_{j}^{(k)}\left({t}_{i}\right)-{\tilde{\alpha }}_{j}^{(k)}\left({t}_{i}\right)\right)}^{2},\)
where N denotes the number of time frames for which the error is calculated, K denotes the total number of microstructure evolution realizations for which the error is being calculated (i.e., the number of microstructures in the training data set), and \({\alpha }_{j}^{(k)}\) is the jth principal component score of microstructure realization k at time t_{i}. The hat, \(\hat{\alpha }\), and tilde, \(\tilde{\alpha }\), notations indicate the true and predicted values of the principal component score, respectively. The MSE scalar error metric for each principal component does not convey information about the accuracy of our surrogate model as a function of the frame being predicted. For this purpose, we calculated the ARE between the true (\(\hat{\ell }\)) and predicted (\(\tilde{\ell }\)) average feature size at each time frame t_{i} and for each microstructure evolution realization k in our data set, such that

\({{\rm{ARE}}}^{(k)}\left({t}_{i}\right)=\frac{\left|{\hat{\ell }}^{(k)}\left({t}_{i}\right)-{\tilde{\ell }}^{(k)}\left({t}_{i}\right)\right|}{{\hat{\ell }}^{(k)}\left({t}_{i}\right)}.\)
The average feature size corresponds to the first minimum of the radial average of the autocorrelation. For each microstructure realization k and each time frame t_{i}, we also calculated the Euclidean distance D^{(k)} between the true and predicted autocorrelations, normalized by the Euclidean norm of the true autocorrelation, such that

\({D}^{(k)}\left({t}_{i}\right)=\frac{\sqrt{{\sum }_{{\bf{r}}}{\left({\hat{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}\left({\bf{r}},{t}_{i}\right)-{\tilde{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}\left({\bf{r}},{t}_{i}\right)\right)}^{2}}}{\sqrt{{\sum }_{{\bf{r}}}{\left({\hat{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}\left({\bf{r}},{t}_{i}\right)\right)}^{2}}},\)
where \({\hat{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})\) and \({\tilde{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})\) denote the true (\(\hat{\,}\)) and predicted (\(\tilde{\,}\)) autocorrelations, respectively, at time frame t_{i}. Note that by summing over all r vectors for which the autocorrelations are defined, this metric corresponds to the normalized Euclidean distance between the predicted and true autocorrelations.
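Assuming NumPy arrays for the scores and autocorrelations, these three metrics reduce to a few lines each (illustrative helper names):

```python
import numpy as np

def mse_scores(alpha_true, alpha_pred):
    """MSE of one PC score, averaged over all frames and realizations."""
    return np.mean((alpha_true - alpha_pred) ** 2)

def absolute_relative_error(l_true, l_pred):
    """ARE between true and predicted average feature size at one frame."""
    return abs(l_true - l_pred) / abs(l_true)

def normalized_distance(S_true, S_pred):
    """Euclidean distance between autocorrelations, normalized by ||S_true||."""
    return np.linalg.norm(S_true - S_pred) / np.linalg.norm(S_true)

# usage on toy values
mse = mse_scores(np.array([1.0, 2.0]), np.array([1.0, 4.0]))
```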
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The codes used to calculate the results of this study are available from the corresponding author upon reasonable request.
References
Krill, C. E. III. & Chen, L.Q. Computer simulation of 3D grain growth using a phasefield model. Acta Mater. 50, 3059–3075 (2002).
Chang, K., Chen, L.Q., Krill, C. E. III. & Moelans, N. Effect of strong nonuniformity in grain boundary energy on 3D grain growth behavior: a phasefield simulation study. Comput. Mater. Sci. 127, 67–77 (2017).
Miyoshi, E. et al. Largescale phasefield simulation of threedimensional isotropic grain growth in polycrystalline thin films. Model. Simul. Mater. Sci. Eng. 27, 054003 (2019).
Kim, S. G., Kim, W. T., Suzuki, T. & Ode, M. Phasefield modeling of eutectic solidification. J. Cryst. Growth 261, 135–158 (2004).
Hötzer, J. et al. Large scale phasefield simulations of directional ternary eutectic solidification. Acta Mater. 93, 194–204 (2015).
Zhao, Y., Zhang, B., Hou, H., Chen, W. & Wang, M. Phasefield simulation for the evolution of solid/liquid interface front in directional solidification process. J. Mater. Sci. Technol. 35, 1044–1052 (2019).
Stewart, J. A. & Spearot, D. E. Phasefield simulations of microstructure evolution during physical vapor deposition of singlephase thin films. Comput. Mater. Sci. 131, 170–177 (2017).
Stewart, J. & Dingreville, R. Microstructure morphology and concentration modulation of nanocomposite thinfilms during simulated physical vapor deposition. Acta Mater. 188, 181–191 (2020).
Hu, S. Y. & Chen, L.Q. Solute segregation and coherent nucleation and growth near a dislocation—a phasefield model integrating defect and phase microstructures. Acta Mater. 49, 463–472 (2001).
Chan, P. Y., Tsekenis, G., Dantzig, J., Dahmen, K. A. & Goldenfeld, N. Plasticity and dislocation dynamics in a phase field crystal model. Phys. Rev. Lett. 105, 015502 (2010).
Beyerlein, I. J. & Hunter, A. Understanding dislocation mechanics at the mesoscale using phase field dislocation dynamics. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 374, 20150166 (2016).
Campelo, F. & HernándezMachado, A. Shape instabilities in vesicles: a phasefield model. Eur. Phys. J. Spec. Top. 143, 101–108 (2007).
Elliott, C. M. & Stinner, B. A surface phase field model for twophase biological membranes. SIAM J. Appl. Math. 70, 2904–2928 (2010).
Aranson, I. S., Kalatsky, V. A. & Vinokur, V. M. Continuum field description of crack propagation. Phys. Rev. Lett. 85, 118–121 (2000).
Karma, A., Kessler, D. A. & Levine, H. Phasefield model of mode III dynamic fracture. Phys. Rev. Lett. 87, 045501 (2001).
Shimokawabe, T. et al. Petascale phasefield simulation for dendritic solidification on the TSUBAME 2.0 supercomputer. In Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis 111 (ACM, New York, NY, USA, 2011).
Hunter, A., Saied, F., Le, C. & Koslowski, M. Largescale 3D phase field dislocation dynamics simulations on highperformance architectures. Int. J. High. Perform. Comput. Appl. 25, 223–235 (2011).
Vondrous, A., Selzer, M., Hötzer, J. & Nestler, B. Parallel computing for phasefield models. Int. J. High. Perform. Comput. Appl. 28, 61–72 (2014).
Yan, H., Wang, K. G. & Jones, J. E. Largescale threedimensional phasefield simulations for phase coarsening at ultrahigh volume fraction on highperformance architectures. Model. Simul. Mater. Sci. Eng. 24, 055016 (2016).
Miyoshi, E. et al. Ultralargescale phasefield simulation study of ideal grain growth. npj Comput. Mater. 3, 25 (2017).
Shi, X., Huang, H., Cao, G. & Ma, X. Accelerating largescale phasefield simulations with GPU. AIP Adv. 7, 105216 (2017).
Seol, D. et al. Computer simulation of spinodal decomposition in constrained films. Acta Mater. 51, 5173–5185 (2003).
Muranushi, T. Paraiso: an automated tuning framework for explicit solvers of partial differential equations. Comput. Sci. Discov. 5, 015003 (2012).
Du, Q. & Feng, X. The phase field method for geometric moving interfaces and their numerical approximations. In Bonito, A. & Nochetto, R. H. (eds), Handbook of Numerical Analysis, vol. 21, pp. 425–508 (Elsevier, 2020).
Brough, D. B., Kannan, A., Haaland, B., Bucknall, D. G. & Kalidindi, S. R. Extraction of processstructure evolution linkages from xray scattering measurements using dimensionality reduction and time series analysis. Integr. Mater. Manuf. Innov. 6, 147–159 (2017).
Pfeifer, S., Wodo, O. & Ganapathysubramanian, B. An optimization approach to identify processing pathways for achieving tailored thin film morphologies. Comput. Mater. Sci. 143, 486–496 (2018).
Latypov, M. I. et al. BisQue for 3D materials science in the cloud: microstructure–property linkages. Integr. Mater. Manuf. Innov. 8, 52–65 (2019).
Teichert, G. H. & Garikipati, K. Machine learning materials physics: surrogate optimization and multifidelity algorithms predict precipitate morphology in an alternative to phase field dynamics. Comput. Methods Appl. Mech. Eng. 344, 666–693 (2019).
Yabansu, Y. C., Iskakov, A., Kapustina, A., Rajagopalan, S. & Kalidindi, S. R. Application of gaussian process regression models for capturing the evolution of microstructure statistics in aging of nickelbased superalloys. Acta Mater. 178, 45–58 (2019).
Herman, E., Stewart, J. A. & Dingreville, R. A datadriven surrogate model to rapidly predict microstructure morphology during physical vapor deposition. Appl. Math. Model. 88, 589–603 (2020).
Zhan, X. & Garikipati, K. Machine learning materials physics: multiresolution neural networks learn the free energy and nonlinear elastic response of evolving microstructures. Comput. Methods Appl. Mech. Eng. 372, 113362 (2020).
Lewis, P. A. & Ray, B. K. Modeling longrange dependence, nonlinearity, and periodic phenomena in sea surface temperatures using TSMARS. J. Am. Stat. Assoc. 92, 881–893 (1997).
Hochreiter, S. & Schmidhuber, J. Long shortterm memory. Neural Comput. 9, 1735–1780 (1997).
Zaytar, M. A. & El Amrani, C. Sequence to sequence weather forecasting with long shortterm memory recurrent neural networks. Int. J. Comput. Appl. 143, 7–11 (2016).
Zhao, Z., Chen, W., Wu, X., Chen, P. C. & Liu, J. LSTM network: a deep learning approach for shortterm traffic forecast. IET Intell. Transp. Syst. 11, 68–75 (2017).
Vlachas, P. R., Byeon, W., Wan, Z. Y., Sapsis, T. P. & Koumoutsakos, P. Datadriven forecasting of highdimensional chaotic systems with long shortterm memory networks. Proc. R. Soc. A Math. Phys. Eng. Sci. 474, 20170844 (2018).
Yang, G., Dong, B., Gu, B., Zhuang, J. & Ersoy, O. Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison. Appl. Opt. 33, 209–218 (1994).
Fullwood, D. T., Niezgoda, S. R. & Kalidindi, S. R. Microstructure reconstructions from 2point statistics using phaserecovery algorithms. Acta Mater. 56, 942–948 (2008).
Dingreville, R., Stewart, J. A. & Chen, E. Y. Benchmark Problems for the Mesoscale Multiphysics Phase Field Simulator (Memphis). Tech. Rep., Albuquerque, NM (United States) (2020).
Torquato, S. Random Heterogeneous Materials: Microstructure and Macroscopic Properties (SpringerVerlag, New York, 2002).
Fullwood, D. T., Niezgoda, S. R., Adams, B. L. & Kalidindi, S. R. Microstructure sensitive design for performance optimization. Prog. Mater. Sci. 55, 477–562 (2010).
Kalidindi, S. R. Hierarchical Materials Informatics: Novel Analytics for Materials Data (Elsevier, 2015).
Niezgoda, S. R., Kanjarla, A. K. & Kalidindi, S. Novel microstructure quantification framework for databasing, visualization, and analysis of microstructure data. Integr. Mater. 2, 54–80 (2013).
Gupta, A., Cecen, A., Goyal, S., Singh, A. K. & Kalidindi, S. R. Structure–property linkages using a data science approach: application to a nonmetallic inclusion/steel composite system. Acta Mater. 91, 239–254 (2015).
Jiao, Y., Stillinger, F. & Torquato, S. Modeling heterogeneous materials via twopoint correlation functions: basic principles. Phys. Rev. E 76, 031110 (2007).
Jiao, Y., Stillinger, F. & Torquato, S. Modeling heterogeneous materials via twopoint correlation functions. II. Algorithmic details and applications. Phys. Rev. E 77, 031135 (2008).
Schölkopf, B., Smola, A. & Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10, 1299–1319 (1998).
Belkin, M. & Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, 585–591 (Vancouver, BC, Canada, 2002).
Tenenbaum, J. B., De Silva, V. & Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000).
Roweis, S. T. & Saul, L. K. Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000).
Hinton, G. E. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006).
Lawrence, N. Probabilistic nonlinear principal component analysis with gaussian process latent variable models. J. Mach. Learn. Res. 6, 1783–1816 (2005).
Lee, J. A. & Verleysen, M. Nonlinear Dimensionality Reduction (Springer Science & Business Media, 2007).
Cho, K. et al. Learning phrase representations using RNN encoderdecoder for statistical machine translation. Preprint at https://arxiv.org/abs/1406.1078 (2014).
Li, S., Li, W., Cook, C., Zhu, C. & Gao, Y. Independently recurrent neural network (IndRNN): building a longer and deeper RNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5457–5466 (Salt Lake City, UT, USA, 2018).
Sukhbaatar, S., Weston, J., Fergus, R. et al. Endtoend memory networks. In Advances in Neural Information Processing Systems 2440–2448 (Montreal, QC, Canada, 2015).
Varol, G., Laptev, I. & Schmid, C. Longterm temporal convolutions for action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 40, 1510–1517 (2017).
Stanley, K. O., Clune, J., Lehman, J. & Miikkulainen, R. Designing neural networks through neuroevolution. Nat. Mach. Intell. 1, 24–35 (2019).
Soltoggio, A., Stanley, K. O. & Risi, S. Born to learn: the inspiration, progress, and future of evolved plastic artificial neural networks. Neural Netw. 108, 48–67 (2018).
Nestler, B. & Wheeler, A. A. A multiphasefield model of eutectic and peritectic alloys: numerical simulation of growth structures. Phys. D 138, 114–133 (2000).
Zhang, L. & Steinbach, I. Phasefield model with finite interface dissipation: extension to multicomponent multiphase alloys. Acta Mater. 60, 2702–2710 (2012).
Chen, L.Q. Phasefield models for microstructure evolution. Annu. Rev. Mater. Res. 32, 113–140 (2002).
Balluffi, R. W., Allen, S. M. & Carter, W. C. Kinetics of Materials (Wiley, 2005).
Suh, C., Rajagopalan, A., Li, X. & Rajan, K. The application of principal component analysis to materials science data. Data Sci. J. 51, 19–26 (2002).
Acknowledgements
This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy. This work was also supported by a Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under Contract No. DENA0003525. The views expressed in this article do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Author information
Contributions
R.D., J.A.S., and D.M.d.O.Z. conceived the idea; J.A.S. performed the phase-field simulations; D.M.d.O.Z. trained the LSTM model; R.D. supervised the work. All authors contributed to the discussion and writing of the paper.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Montes de Oca Zapiain, D., Stewart, J.A. & Dingreville, R. Accelerating phasefieldbased microstructure evolution predictions via surrogate models trained by machine learning methods. npj Comput Mater 7, 3 (2021). https://doi.org/10.1038/s41524020004718