Introduction

Advances in computational power and electronic structure methods have enabled large materials databases.1,2,3,4 Using high-throughput approaches,5 these databases have proven a useful tool to predict the properties of materials. However, given the combinatorial nature of materials space,6,7 it is infeasible to compute properties for more than a tiny fraction of all possible materials using electronic structure methods such as density functional theory (DFT).8,9 A potential answer to this challenge lies in a new paradigm: surrogate machine-learning models for accurate materials predictions.10,11,12

The key idea is to use machine learning to rapidly and accurately interpolate between reference simulations, effectively mapping the problem of numerically solving for the electronic structure of a material onto a statistical regression problem.13 Such fast surrogate models could be used to filter the most suitable candidates from a large pool of possible materials, with the selected subset then validated by electronic structure calculations. Such an “accelerated high-throughput” (AHT) approach (Fig. 1) could potentially increase the number of investigated materials by several orders of magnitude.

Fig. 1: The accelerated high-throughput approach. Candidate structures and properties are generated by surrogate machine-learning models based on reference electronic structure calculations in a materials repository. Selected structures are validated by electronic structure calculations, preventing false-positive errors

Traditionally, empirical interatomic potentials have been used to reproduce macroscopic properties of materials faster than DFT. Well-known empirical interatomic potentials for periodic solids include Lennard–Jones potentials, the Stillinger–Weber potential, and the embedded-atom method (EAM) for alloys. A drawback of empirical interatomic potentials is their fixed functional form, which cannot be systematically improved. In contrast, surrogate models, that is, interatomic models based on machine learning, improve systematically with additional data. This potential advantage over traditional potentials has led to the proposal of many machine-learned surrogate models for materials prediction.

We demonstrate the feasibility of machine-learned surrogate models for predicting enthalpies of formation of materials across composition, lattice types, and atomic configurations. Our study was motivated by the question of whether different surrogate models proposed in the literature are consistent in their predictions of formation enthalpy, rather than by a comparison of their performance. We find that five combinations of state-of-the-art representations and regression methods (Table 1) all yield consistent predictions, with errors of ~10 meV/atom or less depending on the system. We also find that when the data from all 10 systems are combined to build a single model, the combined model is essentially as good as the 10 individual models.

Table 1 State-of-the-art surrogate machine-learning models investigated in this work

A surrogate machine-learning model replaces ab initio simulations by mapping a crystal structure to properties such as formation enthalpy, elastic constants, or band gaps. Its utility lies in the fact that once the model is trained, properties of new materials can be predicted very quickly. The prediction time is either constant or scales linearly with the number of atoms in the system, with a low pre-factor; typical predictions take milliseconds.

The two major parts of a surrogate machine-learning model are the numerical representation of the input data11,14 and the learning algorithm. We use the term “representation” for a set of features (as opposed to a collection of unrelated or only loosely related descriptors) that satisfies certain physical requirements,12,13,15,16 such as invariance to translation, rotation, and permutation of atoms; uniqueness (the representation must change under any transformation that changes the property, since systems with identical representations but different property values would introduce errors17); differentiability; and computational efficiency. The role of the representation is akin to that of a basis set in that the predicted property is expanded in terms of a set of reference structures.
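As a minimal illustration of these invariance requirements (not one of the representations used in this work), the following sketch builds a toy descriptor from sorted interatomic distances of a non-periodic cluster and checks that it is unchanged under permutation and rigid translation of the atoms:

```python
# Toy illustration of the invariance requirements discussed above. This
# sorted-distance descriptor is NOT one of the representations used in
# this work (MBTR, SOAP, MTP); it only shows that permuting or rigidly
# translating the atoms leaves a well-constructed feature vector unchanged.
import numpy as np

def toy_descriptor(positions):
    """Sorted pairwise distances of a (non-periodic) atomic cluster."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(positions), k=1)
    return np.sort(dists[iu])

pos = np.array([[0.0, 0.0, 0.0],
                [1.8, 0.0, 0.0],
                [0.0, 2.1, 0.0]])

permuted = pos[[2, 0, 1]]                    # relabel (permute) the atoms
shifted = pos + np.array([5.0, -3.0, 1.0])   # rigid translation

assert np.allclose(toy_descriptor(pos), toy_descriptor(permuted))
assert np.allclose(toy_descriptor(pos), toy_descriptor(shifted))
```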

To model materials, it is desirable that a representation enables accurate predictions and can handle multiple elements simultaneously. The materials community has proposed several representations10,11,12,14,15,18,19,20,21 for crystal structures. Some do not fulfill the above requirements exactly or are restricted, in practice, to materials with a single element. Surrogate models based on such representations are consequently limited in their accuracy whenever one of the physical requirements is violated (e.g., continuity for the sorted variant of the Coulomb matrix and uniqueness for its eigenspectrum variant16,17).

We explore three state-of-the-art representations that fulfill the above requirements for the construction of general surrogate models: the many-body tensor representation12 (MBTR), smooth overlap of atomic positions10,15 (SOAP), and moment tensor potentials11 (MTP). Each representation is employed as proposed and implemented by its authors, including the regression method: kernel ridge regression13 (KRR) for MBTR, Gaussian process regression22 (GPR) for SOAP, and polynomial regression11 for MTP. Since the predictions (but not necessarily other properties) of the kernel-based KRR and GPR are identical, we use the two terms interchangeably here. We also employed cluster expansion23,24,25,26 (CE) and deep neural network27,28 (DNN) models. Our purpose is not to compare the performance of these different surrogate models. Consequently, the models were not optimized to minimize the error; rather, they were generated to maintain a typical speed/accuracy balance.
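For readers unfamiliar with kernel methods, a minimal sketch of kernel ridge regression with a Gaussian (RBF) kernel on precomputed representation vectors could look as follows; the scikit-learn interface, the random placeholder data, and the hyperparameter grid are illustrative and do not correspond to the settings used for the models in this work:

```python
# Minimal kernel ridge regression sketch with a Gaussian (RBF) kernel.
# X_* stand for precomputed representation vectors (e.g. MBTR) and y_train
# for DFT formation enthalpies; random placeholder data and an illustrative
# hyperparameter grid are used here instead of the settings of this work.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 50)), rng.normal(size=1000)
X_test = rng.normal(size=(595, 50))

# Cross-validated search over kernel width (gamma) and regularization (alpha).
model = GridSearchCV(
    KernelRidge(kernel="rbf"),
    param_grid={"alpha": [1e-8, 1e-6, 1e-4], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=5,
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)   # fast predictions once the model is trained
```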

CE models have been used for three decades to efficiently model ground-state energies of metal alloys, but they require that the atomic structure can be mapped to site occupancies on a fixed lattice. They are therefore less suited to modeling structurally diverse materials. In this work, we use them as a baseline and build a separate CE model for each alloy. Our intention is not to compare the performance of CE against the other models, but to see how consistent these different models are in predicting the formation enthalpy of materials.

DNNs are essentially recursively stacked layers of functions, a large number of layers being a major difference between DNNs and conventional neural networks. They have been used to predict energies29,30,31,32,33 and to learn representations.34,35 While DNNs can learn representations (“end-to-end learning”, here from nuclear charges, atom positions, and unit cell basis vectors to enthalpy of formation), this requires substantially more data than starting with a representation as input.18,19,20 We therefore provide the DNN with MBTR as input. MBTR is a manually designed representation and works well with the Gaussian kernel. The idea of combining MBTR with a DNN is to explore whether a representation-learning technique can improve upon a manually designed representation used in conjunction with the standard Gaussian kernel (MBTR + KRR).
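A rough sketch of this model type, a small feed-forward network acting on a fixed-length MBTR-like input vector, is given below (PyTorch); the input dimension, layer widths, and depth are placeholders and do not reflect the architecture used in this work:

```python
# Sketch of a feed-forward deep neural network acting on a fixed-length
# representation vector (e.g. MBTR). The input dimension, layer widths and
# depth are placeholders, not the architecture used in this work.
import torch
import torch.nn as nn

class EnthalpyDNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 256, n_layers: int = 4):
        super().__init__()
        layers, width = [], n_features
        for _ in range(n_layers):
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers.append(nn.Linear(width, 1))   # scalar enthalpy of formation
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = EnthalpyDNN(n_features=500)
mbtr_batch = torch.randn(32, 500)   # placeholder batch of MBTR vectors
predictions = model(mbtr_batch)     # predicted enthalpies, shape (32,)
```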

Results and discussion

Energy predictions for single alloys

Prediction errors for enthalpies of formation of each of the five surrogate models on each binary alloy subset of the data are presented in Fig. 2a. Prediction errors of all surrogate models agree qualitatively on all subsets of the data. We interpret this consistency to be indicative of the validity of the machine-learning approach to surrogate models of formation enthalpy of materials, independently of the parametrization details of the models.

Fig. 2: Consistency in prediction errors of formation enthalpy of five machine-learning surrogate models on the DFT-10B dataset. a Root mean squared error (RMSE) of predicted enthalpies of formation of each surrogate model on each binary alloy subset in meV/atom (colored bars). RMSE for MTP results is computed using pure atom total energies obtained from DFT. The consistency of errors across models indicates the validity of machine-learning surrogate models for predicting the formation enthalpy of materials: prediction errors are similar, independent of the details of model parametrization. b RMSE of predicted enthalpies of formation of each surrogate model on each binary alloy subset as a percentage of the energy range. Note that relative errors are below 2.5% for all systems

For four binary systems (AgCu, AlMg, CoNi, CuNi), prediction errors are below 3 meV/atom. The prediction errors of all surrogate models on the remaining six systems (AlFe, AlNi, AlTi, CuFe, FeV, NbNi) are consistent, and it is not obvious why these systems are harder to learn. When generating the data, the same methodology and parameters were used for all alloys, and similar fitting procedures were employed for each surrogate model.

We point out that the surrogate models’ predictions are more accurate whenever the elements constituting a binary alloy belong to the same column of the periodic table or are close to each other in atomic number, and less accurate otherwise. Indeed, these two quantities together explain 80% of the variance in prediction errors (see supplementary material).

A complementary observation is that while absolute errors vary from alloy to alloy, relative errors (δRMSE), expressed as a percentage of the range of energies of an alloy’s subset of the data, remain below 2.5% for all systems (Fig. 2b).
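As a point of reference, the two error measures shown in Fig. 2 can be computed as follows; the sketch assumes arrays of reference and predicted formation enthalpies for one alloy subset:

```python
# Error measures used in Fig. 2: absolute RMSE (in meV/atom if the inputs
# are in meV/atom) and the relative RMSE as a percentage of the spread of
# reference energies within one alloy's subset of the data.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def relative_rmse_percent(y_true, y_pred):
    energy_range = np.max(y_true) - np.min(y_true)
    return 100.0 * rmse(y_true, y_pred) / energy_range
```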

General models trained on all alloys

We trained four of the five investigated surrogate models simultaneously on all 10 alloy systems and compared the mean absolute error (MAE) of these combined models with the average MAE when trained on each alloy system separately (Table 2; note that RMSE would differ from MAE due to its non-linear nature). The quantitative agreement indicates that the deviation of the prediction errors is <1 meV/atom when training on multiple systems. For the cluster expansion, these results suggest that there is a single set of parameters for generating a prior probability distribution over effective cluster interaction (ECI) values (provided in the supplementary material) that works well across a variety of chemistries and lattice types.
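The parenthetical remark on MAE versus RMSE can be made concrete with a small numerical check using synthetic error vectors: for equally sized subsets, the MAE of the pooled predictions equals the mean of the per-subset MAEs, whereas the pooled RMSE does not equal the mean of the per-subset RMSEs:

```python
# Illustration of why the comparison uses MAE: for equally sized subsets,
# the MAE over the pooled predictions equals the mean of per-subset MAEs,
# while the pooled RMSE differs from the mean of per-subset RMSEs.
# Synthetic error vectors are used purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
errors_per_alloy = [rng.normal(scale=s, size=1000) for s in (2.0, 5.0, 12.0)]
pooled = np.concatenate(errors_per_alloy)

mae_per_alloy = [np.mean(np.abs(e)) for e in errors_per_alloy]
rmse_per_alloy = [np.sqrt(np.mean(e ** 2)) for e in errors_per_alloy]

print(np.mean(np.abs(pooled)), np.mean(mae_per_alloy))          # identical
print(np.sqrt(np.mean(pooled ** 2)), np.mean(rmse_per_alloy))   # differ
```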

Table 2 Performance of general models

For CE, the representation is naturally tied to a particular lattice (e.g., fcc, bcc), making it difficult to train on multiple alloy systems with different lattices at the same time. Here we train a cluster expansion on all alloys by constraining all 30 alloy–lattice systems (10 alloys × 3 lattice types) to use a single set of hyperparameters for regularization (i.e., all use the same prior probability distribution of ECI values). The machine-learning surrogate models based on MBTR, SOAP, and MTP do not suffer from the problem of the representation being tied to a particular lattice. They express energy as a continuous function of atomic positions and can be trained on multiple materials simultaneously.

We investigate simultaneous training on multiple alloys in more detail for the MBTR + KRR model. Figure 3 presents deviations of the MAE of a single model trained on k alloy systems from the average MAE when the model is trained on each alloy system separately. In all of the possible \(\sum_{k=1}^{10} \binom{10}{k} = 1023\) cases, the deviation is below 1 meV/atom. These deviations are of the order one would expect from minor differences in hyperparameter values. We conclude that prediction errors remain consistently unaffected when increasing the number of simultaneously modeled alloys.
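Purely as a consistency check on this count, the subsets of alloy systems entering Fig. 3 can be enumerated directly:

```python
# Consistency check on the number of alloy-system combinations in Fig. 3:
# sum over k = 1..10 of (10 choose k) equals 2**10 - 1 = 1023.
from itertools import combinations
from math import comb

ALLOYS = ["AgCu", "AlFe", "AlMg", "AlNi", "AlTi",
          "CoNi", "CuFe", "CuNi", "FeV", "NbNi"]

subsets = [s for k in range(1, len(ALLOYS) + 1)
           for s in combinations(ALLOYS, k)]

assert len(subsets) == sum(comb(10, k) for k in range(1, 11)) == 1023
```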

Fig. 3: Performance of the MBTR + KRR model for multiple alloy systems. Shown are deviations of the mean absolute error (MAE, vertical axis) of an MBTR + KRR surrogate model trained on k (horizontal axis) alloy systems simultaneously from the average MAE of k models trained on each alloy subsystem separately. Whiskers, boxes, horizontal lines and numbers inside the plot show the range of values, quartiles, medians and sample sizes, respectively. The difference in error between individual and combined models is always <1 meV/atom

In the case of the MBTR + DNN model, we observe an improvement in prediction errors for the combined model compared with the average of the separate models (Table 2; see also Fig. 2 in the supplementary material). This suggests that it might be possible to learn similarities between chemical element species using a DNN to improve learning rates further.36

Caveat emptor

Are reported errors reliable estimates of future performance in applications? It depends. We discuss the role of training and validation set composition as an example of the intricacies of statistical validation of machine-learning models.

In the limit of infinite independent and identically distributed data, one would simply sample a large enough validation set and measure prediction errors, with the law of large numbers ensuring the reliability of the estimates. Here, however, data are limited due to the cost of generating them via ab initio simulations, and they are neither independent nor identically distributed. In such a setting, part of the available data is used for validation, either in the form of a hold-out set (as in this work) or via cross-validation, which is suited for even smaller datasets.

Prediction errors in machine-learning models improve with data (otherwise it would not be machine learning). This implies that if only a few training samples exist for a “subclass” of structures, prediction errors for similar structures will be high. For example, consider the number of atoms per unit cell in the 10-alloy dataset (DFT-10B) used in this work: there are only 11 structures for each alloy that have 1–2 atoms in the unit cell. Consequently, prediction errors are high for those structures (see Fig. 3 in the supplementary material).

In addition to being sparse, smaller unit cells also have a different information content than larger ones: each structure is a point in representation space, and the small unit cells typically lie far from the large unit cells and from each other. Interpolating between structures that are far apart is more prone to error than interpolating in regions where the data are tightly clustered (see Fig. 4 in the supplementary material). Ideally, the data that the model is trained on would be uniformly distributed in representation space. Because small unit cells are few in number and carry different information content, it is best to include them in the training set.

For combinatorial reasons, the number of possible structures increases strongly with the number of atoms in the unit cell (Table 3). This biases error statistics in two ways: As discussed, prediction errors will be lower for classes with more samples. At the same time, because these classes have more samples, they will contribute more to the measured errors, dominating averages such as the RMSE.

Table 3 Size distribution in the DFT-10B dataset

Figure 4 presents MBTR + KRR prediction errors (RMSE in meV/atom) for different but same-size splits of the data into training and validation sets. On the left, all structures with \(|k|\) or fewer atoms in the unit cell are excluded from the training set (and therefore included in the validation set). This results in many high-error structures in the validation set, with the effect decreasing for smaller \(|k|\). For k = 0, size does not influence the split. On the right, structures with ≤k atoms are always included in the training set, resulting in fewer high-error structures in the validation set. The dashed line marks the value of k = 2 recommended in this work (see supplementary material).
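A sketch of how the two families of splits in Fig. 4 can be constructed is given below; `structures` is assumed to be a list of objects exposing a num_atoms attribute, and n_train is the fixed training-set size, so only the composition of the split changes with k:

```python
# Sketch of the two split families in Fig. 4. `structures` is assumed to be
# a list of objects with a num_atoms attribute; n_train is the fixed size of
# the training set, so only the composition of the split changes with k.
import random

def split_exclude_small(structures, k, n_train, seed=0):
    """Structures with <= k atoms per unit cell are forced into validation."""
    small = [s for s in structures if s.num_atoms <= k]
    rest = [s for s in structures if s.num_atoms > k]
    random.Random(seed).shuffle(rest)
    return rest[:n_train], small + rest[n_train:]

def split_include_small(structures, k, n_train, seed=0):
    """Structures with <= k atoms per unit cell are forced into training
    (assumes n_train is at least the number of such structures)."""
    small = [s for s in structures if s.num_atoms <= k]
    rest = [s for s in structures if s.num_atoms > k]
    random.Random(seed).shuffle(rest)
    n_fill = n_train - len(small)
    return small + rest[:n_fill], rest[n_fill:]
```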

Fig. 4: Influence of biased training and validation sets. Shown are root mean squared errors (meV/atom) as a function of training and validation set composition, obtained using the MBTR + KRR model. See main text for discussion

Retrospective errors reported in the literature should therefore be critically assessed. Such studies should report errors on “representative” validation sets rather than on sets tweaked to yield the lowest possible errors. For combinatorial datasets, the smallest structures (those that can be considered outliers) should be included in the training set.37

We showed that it is possible to use machine learning to build a combined surrogate model that simultaneously predicts the enthalpy of formation of crystal structures across 10 different binary alloy systems, for three lattice types (fcc, bcc, hcp), and for structures not in their ground state. In doing so, we find the concept of using machine learning to predict the formation enthalpy of materials to be independent of the details of the surrogate models used, as the predictions of several state-of-the-art materials representations and learning algorithms were in qualitative agreement. This observation also seems to be congruent with recent efforts toward a unifying mathematical framework for some of the representations used.38

The ability to use a single surrogate model for multiple systems simultaneously has the potential to simplify the use of surrogate models for exploration of materials spaces by avoiding the need to identify “homogeneous” subspaces and then building separate models for each of them. This also avoids problems such as discontinuities at the boundaries of separate models.

Is it possible to do better? Recent results suggest that it might be possible to exploit similarities between chemical element species to improve learning rates further.36 This requires either explicitly accounting for element similarities in the representations or learning them from the data, for example with a DNN. While such alchemical learning is outside the scope of this work, we do observe an improvement in prediction errors for the general MBTR + DNN model (Table 2; see also Fig. 2 in the supplementary material).

Methods

Data

We created a dataset (DFT-10B) containing structures of the 10 binary alloys AgCu, AlFe, AlMg, AlNi, AlTi, CoNi, CuFe, CuNi, FeV, and NbNi. Each alloy system includes all possible unit cells with 1–8 atoms for the face-centered cubic (fcc) and body-centered cubic (bcc) crystal types, and all possible unit cells with 2–8 atoms for the hexagonal close-packed (hcp) crystal type. This results in 631 fcc, 631 bcc, and 333 hcp structures per alloy, yielding 1595 × 10 = 15,950 structures in total. For the sake of efficiency, the cell shape, volume, and atomic positions were not optimized; all calculations are for unrelaxed structures. The crystal structures were generated using the enumeration algorithm by Hart and Forcade.39
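For orientation, the sketch below shows how an enumeration of this kind could be set up with the ASE and icet packages (icet provides a Hart–Forcade-style structure enumeration routine); the lattice constant, the fcc-only example, and the call signature are illustrative, this is not the tooling used to generate DFT-10B, and the API should be checked against the icet documentation:

```python
# Hedged sketch of a Hart-Forcade-style structure enumeration using ASE and
# icet; the lattice constant, the fcc-only example, and the exact call
# signature are illustrative and are not the tooling used to build DFT-10B.
from ase.build import bulk
from icet.tools import enumerate_structures

primitive_fcc = bulk("Ag", crystalstructure="fcc", a=4.09)

# All symmetry-inequivalent fcc cells with 1-8 atoms, each site decorated
# with either Ag or Cu.
fcc_structures = list(
    enumerate_structures(primitive_fcc,
                         sizes=range(1, 9),
                         chemical_symbols=["Ag", "Cu"])
)
print(len(fcc_structures))
```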

Lattice parameters for each crystal structure were set according to Vegard’s law.40,41 Total energies were computed using DFT with projector-augmented wave (PAW) potentials42,43,44 within the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof45 (PBE) as implemented in the Vienna Ab Initio Simulation Package46,47 (VASP). The k-point meshes for sampling the Brillouin zone were constructed using generalized regular grids.48,49 The k-point densities for all 10 alloys are given in Table 1 of the supplementary material.
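Vegard’s law amounts to a concentration-weighted linear interpolation of the pure-element lattice parameters; a minimal sketch (with illustrative lattice constants, not the exact values used for DFT-10B) is:

```python
# Vegard's law: the lattice parameter of a mixed cell is the concentration-
# weighted average of the pure-element lattice parameters. Numerical values
# below are illustrative, not the exact ones used to build DFT-10B.
def vegard_lattice_parameter(x_a, a_pure_a, a_pure_b):
    """a(x) = x * a_A + (1 - x) * a_B for concentration x of element A."""
    return x_a * a_pure_a + (1.0 - x_a) * a_pure_b

# Example: a cell with 25% Ag and 75% Cu, using illustrative fcc lattice
# constants for the pure elements (in Angstrom).
a_mix = vegard_lattice_parameter(0.25, a_pure_a=4.09, a_pure_b=3.61)
```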

Models

All single-alloy surrogate models were trained using the same set of 1000 randomly selected crystal structures, including optimization of hyperparameters, and the prediction errors are reported on a hold-out test set of 595 different structures, never seen during training. The same set of decorations is used for the training and test sets of all binaries. Models trained on multiple alloys use the union of the individual alloys’ splits. Parametrization details of all surrogate models used in this work can be found in the supplementary material.
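A minimal sketch of such a fixed random split (the seed and the use of NumPy are illustrative; only the 1000/595 bookkeeping reflects the text) is:

```python
# Sketch of the fixed random split: 1000 training structures and a
# 595-structure hold-out set out of the 1595 structures per alloy, with the
# same decoration indices reused across all ten binaries. The seed and the
# use of NumPy are illustrative.
import numpy as np

n_total, n_train = 1595, 1000
indices = np.random.default_rng(42).permutation(n_total)
train_idx, test_idx = indices[:n_train], indices[n_train:]
# train_idx / test_idx are then applied to every alloy's structure list.
```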