Accelerating Materials-Space Exploration for Thermal Insulators by Mapping Materials Properties via Artificial Intelligence

Reliable artificial-intelligence models have the potential to accelerate the discovery of materials with optimal properties for various applications, including superconductivity, catalysis, and thermoelectricity. Advances in this field are often hindered by the scarcity and quality of available data and by the significant effort required to acquire new data. For such applications, reliable surrogate models that help guide materials-space exploration using easily accessible materials properties are urgently needed. Here, we present a general, data-driven framework that, via a combination of symbolic regression and sensitivity analysis, provides quantitative predictions as well as qualitative rules for steering data creation. We demonstrate the power of the framework by generating an accurate analytic model for the lattice thermal conductivity using only 75 experimentally measured values. By extracting the most influential material properties from this model, we are then able to hierarchically screen 732 materials and find 80 ultra-insulating materials.


I. INTRODUCTION
Artificial-intelligence (AI) techniques have the potential to significantly accelerate the search for novel, functional materials, especially for applications where different physical mechanisms compete with each other nonlinearly, e.g., quantum materials [1], and where the cost of characterizing the materials makes a large-scale search intractable, e.g., thermoelectrics [2]. Due to this inherent complexity, only limited amounts of data are currently available for such applications, which in turn severely limits the applicability and reliability of AI techniques [3]. Using thermal transport as an example, we propose a route to overcome this hurdle by presenting an AI framework that is applicable to scarce datasets and that provides heuristics able to steer further data creation into materials-space regions of interest.
Heat transport, as measured by the temperature-dependent thermal conductivity, κ_L, is a ubiquitous property of materials and plays a vital role in numerous scientific and industrial applications, including energy conversion [4], catalysis [5], thermal management [6], and combustion [7]. Finding new crystalline materials with either an exceptionally low or high thermal conductivity is a prerequisite for improving these and other technologies, or for making them commercially viable at all. Accordingly, finding new thermal insulators and understanding where in materials space to search for such compounds is an important open challenge in this field. From a theory perspective, thermal transport depends on a complex interplay of different mechanisms, especially in thermal insulators, for which strongly anharmonic, higher-order effects can be at play [8]. Despite significant progress in the computational assessment of κ_L in solids [9,10], these ab initio approaches are too costly for a large-scale exploration of materials space. For this reason, computational high-throughput approaches have so far covered only a small subset of materials [11-13]. Experimentally, an even smaller number of materials have had their thermal conductivities measured, and fewer than 150 thermal insulators have been identified [14,15].
Recently, increased research efforts have been devoted to leveraging AI frameworks to extend our knowledge in this field. In particular, various regression techniques have proven to successfully interpolate between the existing data and approximate κ_L using only simpler properties [11,14,16,17]; however, using these techniques to extrapolate into new areas of materials space is a known challenge. More importantly, the explainability of these models is limited by their inherent complexity. Physically motivated, semi-empirical models, e.g., the Slack model [18], perform slightly better in this regard because they encapsulate information about the actuating mechanisms. Recent efforts have used AI to extend the capabilities of these models [2,16,19,20] and increase their accuracy in estimating κ_L. However, the applicability of such models is still limited by the physical assumptions entering the original expressions [2,19]. A general model that removes these assumptions and achieves the quantitative accuracy of AI approaches, while retaining the qualitative interpretability of analytical models, is, however, still lacking.
In this work, we tackle this challenge by using a symbolic-regression technique to quantitatively learn κ_L using easily calculated materials properties. While symbolic-regression methods are typically more expensive to train than kernel-based methods, such as kernel ridge regression (KRR) and Gaussian process regression (GPR), their prediction errors are typically equivalent to those of other methods, and their natural feature reduction and resulting analytical expressions make them a useful tool for explainable AI, as further illustrated below [21]. Furthermore, the added cost of training does not affect the evaluation time of the resulting models, meaning the extra time only has to be spent once, at the beginning. The inherent uncertainty estimate in methods like GPR allows for a prediction of where the resulting models are expected to perform worse; however, we also propose a method to obtain an ensemble uncertainty estimate for symbolic regression that can be applied more generally to these types of models. We further exploit the feature reduction of SISSO and expand upon its interpretability by using a global sensitivity-analysis method to distill out the key material properties that are most important for modeling κ_L and to find the conditions necessary for obtaining an ultra-low thermal conductivity. From here, we use this analysis to learn the conditions needed to screen materials in each step of a hierarchical, high-throughput workflow to discover new thermal insulators. Using this workflow, we can then establish qualitative design principles that lend themselves to general application across materials space and use them to find 80 materials with an ultra-low κ_L.

A. Symbolic Regression Models for Thermal Conductivity
For this study, we use the sure-independence screening and sparsifying operator (SISSO) method as implemented in the SISSO++ code [22]. This method has been used successfully for multiple applications, including the stability of materials [23], catalysis [24], and glass transition temperatures [25]. To find the best low-dimensional models for a specific target property, in our case the room-temperature lattice thermal conductivity, κ_L(300 K), SISSO first builds an exhaustive set of analytical, non-linear functions, i.e., trillions of candidate descriptors, from a set of mathematical operators and primary features, the set of user-provided properties that will be used to model the target property. Here we focus on room-temperature data only, because these data are the most abundant in the literature and the most relevant for potential applications; however, some temperature dependence is inherently included via the temperature dependence of our anharmonicity factor σ^A. For this application, the primary features are the structural and dynamical properties of seventy-five materials with experimentally measured κ_L(300 K) [17,26-43] (see Section IV D and VIII Supplementary Note 1 for more details). By using the experimentally measured values for κ_L, we avoid the issues related to the inconsistent reliability of different approaches to calculating κ_L for different material classes [44,45], and aim to create a universal model. For many of the materials of interest here, the standard Boltzmann transport approach will be unreliable [44,45], but the fully anharmonic ab initio Green-Kubo approach is unnecessarily expensive to use for all materials [45]. Combining theoretical and experimental data in this way allows one to avoid both the cost or unreliability of calculating κ_L and the challenges of experimentally synthesizing and characterizing candidate materials. As long as all samples are consistent across each feature, AI- and ML-based models will adapt the computational features to the experimental target.

FIG. 1. The proposed hierarchical workflow, which screens out materials before the final calculations.
Figure 1b illustrates the main goal of the work: to learn which primary features are important for modeling κ_L and which thresholds on those features indicate where thermal insulators are present. As a result, the figure also represents the workflow used to calculate κ_L and generate the primary features for the model. All of the data generated in this workflow are calculated using ab initio methods, with each step representing an increasing computational cost, as shown in Figure 1a. The total cost of calculating these primary features is several orders of magnitude smaller than that of explicitly calculating κ_L, either with the Boltzmann transport equation or with aiGK. While using only compositional and structural features would further reduce the cost of generating them, it would come at the expense of the reliability and explainability of the models. A goal of this work is to learn the screening conditions needed to remove materials at each step of the workflow in Figure 1b and only perform the intensive κ_L calculations on the most promising materials.
Because of this, we consider the features generated from this workflow to be the most logical set to use. Importantly, as described in Section IV D, we use a consistent and accurate formalism for calculating all features in this workflow, and therefore expect quantitative agreement between these features and their experimental counterparts. Even if this framework were restricted to exploring only high-symmetry materials, the overall cost of the supercell calculations would be reduced by a factor of one hundred, as shown by the non-green bars in Figure 1a. In the more general case, we would be able to screen roughly 1000 times more materials using this procedure than with brute-force workflows that calculate κ_L for all materials. With the learned conditions, one could then create a prescreening procedure by learning models for each of the relevant structural or harmonic properties using only compositional inputs, and use those to estimate κ_L [46]; however, that is outside the scope of this work.
In practice, we model log(κ_L(300 K)) instead of κ_L(300 K) itself to better handle the wide range of possible thermal conductivities. The parity plot in Figure 2(a) illustrates the performance of the identified SISSO model when the entire dataset is used (see Section IV A for more details). The resulting two-dimensional expression, Eq. (1), combines the descriptors d_1 and d_2 with the constants a_0 = 6.327, a_1 = -8.219 × 10^4, and a_2 = -1.704, found by least-squares regression; all variables are defined in Table I. We find that this model has a training root-mean-square error (RMSE) of 0.14 and an R^2 of 0.98 for log(κ_SISSO(300 K)). To better understand how these error terms translate to κ_L(300 K), we also use the average factor difference (AFD),

AFD = 10^x̄,  (2a)
x̄ = (1/n) Σ_i |log10 κ_i,SISSO(300 K) - log10 κ_i,expt(300 K)|,  (2b)

where n is the number of training samples. Here, we find an AFD of 1.30, on par with, if not smaller than, models previously found by other methods (e.g., 1.36 ± 0.03 for a Gaussian process regression model [17] and 1.48 for a semi-empirical Debye-Callaway model [2]). However, differences in the training sets and cross-validation schemes prevent a fair comparison of the prediction errors of these studies. For a complete representation of the training error for all models, refer to VIII Supplementary Note 2.
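The AFD of Eq. (2) can be computed directly from predicted and measured conductivities. The following minimal sketch illustrates the metric; the κ values used here are hypothetical, not taken from the paper's dataset:

```python
import numpy as np

def average_factor_difference(kappa_pred, kappa_meas):
    """AFD = 10 ** mean(|log10(pred) - log10(measured)|).

    An AFD of 1.30 means predictions are off by a factor of 1.30 on average."""
    x_bar = np.mean(np.abs(np.log10(kappa_pred) - np.log10(kappa_meas)))
    return 10.0 ** x_bar

# Hypothetical predicted/measured kappa_L values in W m^-1 K^-1:
pred = np.array([1.2, 10.0, 100.0])
meas = np.array([1.0, 10.0, 50.0])
afd = average_factor_difference(pred, meas)
```

Because the metric is symmetric in over- and under-prediction on the log scale, an AFD of 1.0 corresponds to a perfect model.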
To get a better estimate of the prediction error, we use a nested cross-validation scheme, further defined in Section IV E. As expected, the prediction error is slightly higher than the training error, with an RMSE of 0.22 ± 0.02 and an AFD of 1.45 ± 0.03. As shown in Fig. 2(b), these errors are comparable to those of KRR and GPR models trained on the same data, following the procedures listed in Sections IV B and IV C, respectively. We chose to retrain the models using the same dataset and cross-validation splits in order to single out the effect of the methodology itself, not changes in the dataset and splits. These results show that the performance of SISSO and of more traditional regression methods is similar, but the advantage of the symbolic-regression models is that only seven of the primary features are selected. Another advantage of the nested cross-validation scheme is that it creates an ensemble of independent models, which can also be used to approximate the uncertainty of the predictions. These results substantiate that our symbolic-regression approach performs as well as interpolative methods and outperforms the Slack model, which was originally developed for elemental cubic solids [18]. Interestingly, offering the features of the Slack model to SISSO does not improve the results, and some primary features previously thought to be decisive, e.g., the Grüneisen parameter, γ, are not even selected by SISSO (see VIII Supplementary Note 5).
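The ensemble uncertainty estimate described above can be sketched as follows. This is a simplified stand-in, assuming toy data and plain least-squares models in place of the actual SISSO fits: one model is trained per cross-validation split, and the spread of the per-split predictions serves as the uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 75 samples, 3 features -> log10(kappa)-like target.
X = rng.normal(size=(75, 3))
y = X @ np.array([0.5, -1.0, 0.3]) + 0.1 * rng.normal(size=75)

def cv_ensemble(X, y, n_splits=5):
    """Fit one least-squares model per outer CV split and return all
    coefficient vectors; their prediction spread estimates uncertainty."""
    idx = np.arange(len(y))
    rng.shuffle(idx)
    folds = np.array_split(idx, n_splits)
    coefs = []
    for k in range(n_splits):
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        coefs.append(w)
    return np.array(coefs)

coefs = cv_ensemble(X, y)
preds = X @ coefs.T        # (n_samples, n_models)
mean = preds.mean(axis=1)  # ensemble prediction
std = preds.std(axis=1)    # ensemble uncertainty estimate
```

In the paper's setting, each "model" would instead be a SISSO expression refit on one training split, but the mean/spread logic is unchanged.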
A key advantage of symbolic-regression techniques over interpolative methods such as KRR and GPR is that the resulting models not only yield reliable quantitative predictions, but also allow for a qualitative inspection of the underlying mechanisms. To get a better understanding of how the thermal conductivity changes across materials space, we map the model in Figure 2c. From this map we can see that the thermal conductivity of a material is mostly controlled by d_2, with d_1 providing only a minor correction. While these observed trends are already helpful, the complex non-linearities in both d_1 and d_2 impede the generation of qualitative design rules. Furthermore, some primary features, such as V_m and σ^A, enter both d_1 and d_2 with contrasting trends, e.g., σ^A lowers d_1 but increases d_2. To accelerate the exploration of materials space, one must first disentangle the contradicting contributions of the involved primary features.

B. Extracting Physical Understanding by Identifying the Most Physically Relevant Features via Sensitivity Analysis
The difficulties in interpreting the "plain" SISSO descriptors described above can be overcome by performing a sensitivity analysis or a feature-importance study to identify the most relevant primary features that build d_1 and d_2. For this purpose, we employ both the Sobol indices, i.e., the main-effect index S_i and the total-effect index S_Ti [47], and the Shapley additive explanations (SHAP) [48] metric for the model predictions. To calculate the Sobol indices, we use an algorithm that incorporates the correlations between the input features.

FIG. 2. Error evaluation for the presented models. a) Comparison of the predicted κ_SISSO(300 K) against the measured κ_L(300 K) for the model trained against all data. The gray shaded region corresponds to the 95% confidence interval. b) Violin plots of the mean prediction error of all samples for the SISSO, KRR, and GPR models using all features (red, left) and a reduced set including only σ^A, Θ_D,∞, and V_m (blue, right), and for the Slack model. Gray lines are the median, white circles are the mean of the distributions, the boxes represent the quartiles, and the whiskers are the minimum and the 95% absolute error. For all calculations, the parameterization depth and dimension are determined by cross-validation on each training set. The red stars and blue hexagons are the outliers for the box plots. c) A map of the two-dimensional SISSO model, where the x- and y-axes correspond to the two features selected by SISSO. The labeled points represent the convex hull of the scatter plot and related points.
The main advantage of this approach is its ability to include correlative effects between the inputs, which, if ignored, can strongly bias or even falsify the sensitivity-analysis results [52]. Qualitatively, S_i quantifies how much the variance of log(κ_L(300 K)) correlates with the variance of a primary feature, x_i, while S_Ti quantifies how much the variance of log(κ_L(300 K)) correlates with x_i including all interactions between x_i and the other primary features. For example, a Sobol index of 0.0 indicates that log(κ_L(300 K)) is fully independent of x_i, whereas a value of 1.0 indicates that log(κ_L(300 K)) can be completely represented by changes in x_i [51]. Moreover, S_Ti < S_i implies that correlative effects are significant, with S_Ti = 0 indicating that a primary feature is perfectly correlated to the other inputs [51].
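As an illustration of what the Sobol indices measure, the following sketch estimates S_i and S_Ti for a toy additive function with independent U(0,1) inputs, using Saltelli's Monte Carlo scheme. Note that this simple example assumes independent inputs, whereas the analysis in this work additionally accounts for correlations between them:

```python
import numpy as np

def sobol_indices(f, d, n=100_000, seed=0):
    """Monte Carlo estimates of first-order (S_i) and total-effect (S_Ti)
    Sobol indices for a model f with d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # A with column i taken from B
        fAB = f(AB)
        S[i] = np.mean(fB * (fAB - fA)) / var        # first-order index
        ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / var  # total-effect index
    return S, ST

# Additive toy model: x1 dominates the variance, x2 is inert.
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
S, ST = sobol_indices(f, d=3)
```

For this additive model the analytic values are S = S_T = (0.2, 0.8, 0.0), so the inert third input is correctly assigned an index of zero.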
The SHAP values constitute a local measure of how each feature influences a given prediction in the dataset. This metric is based on the Shapley values used in game theory for assigning payouts to players based on their contribution towards the total reward [48]. In the context of machine-learning models, the inputs to the model represent the players, and the difference between an individual prediction and the global mean prediction of the dataset represents the payout [53]. The SHAP values then exactly distribute the difference from the mean prediction among the features for each sample, with a negative value indicating that a feature reduces the prediction relative to the mean and a positive value indicating that it increases it [53]. A similar metric is the local interpretable model-agnostic explanations (LIME) values [54]. LIME first defines a local neighborhood for each data point, and then uses an algorithm similar to SHAP to compare each prediction against its corresponding local area. Because the computational complexity of calculating SHAP values makes their exact calculation intractable for a large number of features, these values can be approximated by the Kernel SHAP method [48]. Originally, the Kernel SHAP method assumed feature independence [48], but it was recently extended to include feature dependence via sampling over a multivariate distribution represented by a set of marginal distributions and a Gaussian copula [53]. However, there are some cases, for small datasets with highly correlated features, where the approximated SHAP values are qualitatively different from the true Shapley values [55].
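For models with few features, the Shapley attribution can be computed exactly by enumerating all feature coalitions, as in this sketch. The model here is a hypothetical linear one, and "absent" features are replaced by a background mean, i.e., the feature-independence approximation discussed above:

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, background):
    """Exact Shapley values; absent features are set to the background point."""
    d = len(x)
    base = predict(background)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(d - k - 1) / factorial(d)
                z_without = background.copy()
                for j in S:            # coalition S present
                    z_without[j] = x[j]
                z_with = z_without.copy()
                z_with[i] = x[i]       # add feature i to the coalition
                phi[i] += weight * (predict(z_with) - predict(z_without))
    return phi, base

# Toy linear model; with independent features, phi_i = w_i * (x_i - background_i).
w = np.array([1.0, -2.0, 0.5])
predict = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 4.0])
bg = np.zeros(3)
phi, base = exact_shapley(predict, x, bg)
```

The values sum exactly to the gap between the prediction and the baseline, which is the "perfect distribution" property mentioned above.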
Figure 3 compares the different sensitivity metrics including and excluding feature dependence. To get global values of the SHAP and LIME indices, we take the mean absolute value for each feature across all 75 materials; other metrics have been proposed in the literature, and it is not clear which one is best [56-58]. However, the local information contained in metrics such as SHAP and LIME is an advantage over global metrics such as the Sobol indices, as it allows for the identification of regions in materials space that do not follow the global trends. Comparing the plots in Figures 3a and b illustrates the importance of not treating the input primary features as independent, as all four sensitivity-analysis metrics are qualitatively wrong under that assumption. This is likely a result of sampling over physically unreachable parts of the feature space, e.g., areas with a high density, low mass, and high molar volume, and suggests that caution should be used when applying these techniques to highly correlated datasets. The impact of this is demonstrated in Supplementary Figure 3, where we explicitly simplify the model to remove some of the dependencies. All three indices that include correlative effects show that σ^A, V_m, Θ_D,∞, and ω_Γ,max predominantly control the variance of κ_SISSO(300 K). The main difference between S_i and the kernel SHAP metrics is the relative importance of Θ_D,∞ and ω_Γ,max when compared against V_m and σ^A. The difference between these results could come from the Sobol indices globally sampling the region of Θ_D,∞ > 1300 K instead of relying on the two materials in that regime, or from S_i over-estimating its importance because of the higher correlation between Θ_D,∞ and the other inputs. In fact, the low values of S_Ti also imply that there are significant correlative effects between these inputs, and no single feature can be singled out as primarily responsible for changes in κ_SISSO(300 K). For instance, because ω_Γ,max and Θ_D,∞ are strongly correlated with each other, only one of them needs to be considered, which explains the similarity of their importances (see Supplementary Figure 2). The importance of these features is further substantiated in Figure 2b, where we compare the performance of the models calculated using the full dataset with one that includes only σ^A, V_m, and Θ_D,∞. For all tested models, we see only a slight deterioration in performance, with a predictive AFD of 1.87, 1.77, and 1.77 for the SISSO, KRR, and GPR models, respectively, compared to 1.45 for the models trained with all features. This result highlights that the trends and the underlying mechanisms describing the dependence of κ_L(300 K) in materials space are largely captured by those three features alone.
Even more importantly, our model captures the interplay between these features across materials, as demonstrated in the maps in Figure 4. These maps showcase the strong correlation between κ_SISSO(300 K) and σ^A, V_m, and Θ_D,∞, and show that materials with high anharmonicity, low-energy vibrational modes, and a large molar volume will be good thermal insulators. Figure 4 shows the expected value of κ_SISSO(300 K), E_X[κ_SISSO(300 K) | X], for different sets of input features, X, shown on the axes of each plot. We then overlay the maps with the actual values of each input for all materials in the training set to evaluate the trends across different groups of materials. Figure 4c confirms that σ^A is already a good indicator for finding thermal insulators, with most of the materials having κ_L(300 K) within one standard deviation of the expected value. For the more harmonic materials with σ^A < 0.2, the vanishing degree of anharmonicity alone is not always sufficient for quantitative predictions. In this limit, a combination of σ^A and V_m can produce correct predictions for the otherwise underestimated white triangles with σ^A < 0.2, as seen in Figure 4a. To fully describe the low thermal conductivity of the remaining highlighted materials, both Θ_D,∞ and V_m are needed, as can be seen in Figures 4a, b, d, and e.

FIG. 4. The expected value of κ_SISSO(300 K) relative to select primary features. E_X[κ_SISSO(300 K) | X] is calculated by sampling over the multivariate distributions used for the sensitivity analysis and binning the input data until there are at least 10 000 samples in each bin. The red line in c-e corresponds to E_X[κ_SISSO(300 K) | X], and the pink shaded region is one standard deviation on either side of the line. The gray shaded regions represent where a thermal conductivity of 10 W m^-1 K^-1 or lower is within one standard deviation of the expected value. All materials in the training set are displayed on all maps: the green circles correspond to rock-salts, the blue diamonds are zincblendes, the light blue pentagons are wurtzites, and the black triangles are all other materials. All points with a κ_L(300 K) less than one standard deviation below the expected value based on σ^A are highlighted in white. The points in c-e correspond to the actual values of κ_L(300 K) for each material. Additionally, four materials outside of the training set (yellow stars) are included, whose thermal conductivities we calculate using ab initio molecular dynamics.

Generally, this reflects that the three properties σ^A, Θ_D,∞, and V_m are the target properties to optimize to obtain ultra-low thermal conductivities. These results can also be rationalized within our current understanding of thermal transport and showcase which physical mechanisms determine κ_L in materials space. Qualitatively, it is well known that good thermal conductors typically exhibit a high degree of symmetry with a small number of atoms per cell, e.g., diamond and silicon, whereas thermal insulators, e.g., glass-like materials, are often characterized by an absence of crystal symmetries and larger primitive cells. In our case, this trend is quantitatively captured via V_m, which reflects that larger unit cells have smaller thermal conductivities. Furthermore, it is well known that phonon group velocities determine how fast energy is transported through the crystal in the harmonic picture [59], and that this transport is limited by scattering events arising from anharmonicity. In our model, these processes are captured by Θ_D,∞, which describes the degree of dispersion in the phonon band structure, and by the anharmonicity measure, σ^A, respectively. In this context, it is important to note that, although these qualitative mechanisms have long been known, there had hitherto been no agreement on which material property would quantitatively capture them best across materials space. For instance, both γ and the lattice thermal expansion coefficient, and now σ^A, have been used to describe the anharmonicity of a material. However, when both γ and σ^A are included as primary features, only σ^A is chosen (see VIII Supplementary Note 5 for more details). This result indicates that the σ^A measure is the more sensitive choice for modeling the strength of anharmonic effects. While γ and the thermal expansion also depend on anharmonic effects, they are additionally influenced by the bulk modulus, the density, and the specific heat of a material.
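The construction of the E_X[κ_SISSO(300 K) | X] curves by sampling and binning can be sketched in one dimension as follows. The samples here are synthetic stand-ins for the model's multivariate sampling, with a made-up linear trend between σ^A and log κ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic samples of one input (sigma_A-like) and model outputs log10(kappa).
sigma_A = rng.uniform(0.0, 0.6, size=200_000)
log_kappa = 1.5 - 3.0 * sigma_A + 0.2 * rng.normal(size=sigma_A.size)

def expected_curve(x, y, n_bins=20):
    """Bin x and compute the conditional mean/std of y per bin: a 1-D
    analogue of the expected-value maps."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    which = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    mean = np.array([y[which == b].mean() for b in range(n_bins)])
    std = np.array([y[which == b].std() for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, mean, std

centers, mean, std = expected_curve(sigma_A, log_kappa)
```

The per-bin standard deviation plays the role of the shaded one-standard-deviation band in Figure 4c-e.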

C. Validating the Predictions with ab initio Green-Kubo Calculations

To confirm that the discovered models produce physically meaningful predictions, we validate the estimated thermal conductivity of four materials using the ab initio Green-Kubo (aiGK) method [10,45]. This approach has recently been demonstrated to be highly accurate when compared to experiments [45], using DFT settings similar to those employed in this work. In particular, aiGK is highly accurate in the low-thermal-conductivity regime that we are studying here. For details of how we calculate κ_L, see the methodology in Section IV J. For this purpose, we chose ClBaBr, LiScS_2, CaF_2, and GaLiO_2, since these materials represent a broad region of the relevant feature space and also test the boundary regions of the heuristics found by the sensitivity analysis and mapping, as demonstrated by the yellow stars in Figure 4. Figure 5 shows the convergence of the thermal conductivity of the selected materials, as calculated from three aiMD trajectories. All of the calculated thermal conductivities fall within the 95% confidence interval of the model, with the predictions for both CaF_2 and ClBaBr being especially accurate. The better performance of the model for these materials is expected, as they are more similar to the training data than the hexagonal Caswellsilverite-like materials. Additionally, quantum nuclear effects play a more important role in LiScS_2 and GaLiO_2 than in CaF_2 and ClBaBr, which can also explain why those predictions are worse. Overall, these results demonstrate the predictive power of the discussed model.
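Schematically, a Green-Kubo estimate integrates the heat-flux autocorrelation function over time, κ(t) = V/(k_B T^2) ∫_0^t ⟨J(0)J(t')⟩ dt'. The sketch below shows this running integral for a scalar flux time series; the actual aiGK workflow computes the flux from ab initio MD and handles the full tensor, noise reduction, and convergence analysis, none of which is reproduced here:

```python
import numpy as np

def green_kubo_kappa(J, dt, V, T, kB=1.380649e-23):
    """Running Green-Kubo estimate kappa(t) for a scalar heat-flux series J:
    cumulative time integral of the flux autocorrelation, scaled by V/(kB*T^2)."""
    n = len(J)
    # Flux autocorrelation <J(0)J(t)>, averaged over time origins.
    acf = np.array([np.mean(J[: n - t] * J[t:]) for t in range(n // 2)])
    return V / (kB * T ** 2) * np.cumsum(acf) * dt

# Hypothetical, exponentially correlated flux signal standing in for aiMD data.
rng = np.random.default_rng(3)
J = np.empty(4000)
J[0] = rng.normal()
for t in range(1, len(J)):
    J[t] = 0.9 * J[t - 1] + rng.normal()
kappa_running = green_kubo_kappa(J, dt=1.0, V=1.0, T=300.0)
```

In practice, the running integral is monitored until it plateaus, which is the convergence behavior shown in Figure 5.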

D. Discovering Improved Thermal Insulators
Using the information gained from the sensitivity analysis and the statistical maps of the model, we are now able to design a hierarchical and efficient high-throughput screening protocol split into three stages: structure optimization, harmonic-model generation, and anharmonicity quantification. We demonstrate this procedure by identifying possible thermal insulators within a set of 732 materials, taken from the compounds available in the Materials Project [60] that feature the same crystallographic prototypes [61,62] as those used for training. Once the geometry is optimized, we remove all materials with V_m < 35.5 Å^3 (60 materials) and all (almost) metallic materials (band gap < 0.2 eV), and are left with 302 candidate compounds. We then generate the converged harmonic model for the remaining materials and screen out all materials with Θ_D,∞ > 547 K or with an unreliable harmonic model, e.g., materials with imaginary harmonic modes, leaving 148 candidates. Finally, we evaluate the anharmonicity, σ^A, for the remaining materials (see Section IV D) and exclude all materials with σ^A < 0.206, obtaining 110 candidate thermal insulators. To avoid unnecessary calculations, we first estimate σ^A via σ^A_OS and then refine it via aiMD when σ^A_OS > 0.4 [8]. For these candidate materials, we evaluate κ_SISSO(300 K) using Eq. 1. Of the 110 materials that passed all checks, 96 are predicted to have a κ_SISSO(300 K) below 10 W m^-1 K^-1, illustrating the success of this method.
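In code, the three-stage screen reduces to a cascade of threshold tests using the values quoted above (V_m ≥ 35.5 Å^3, band gap ≥ 0.2 eV, Θ_D,∞ ≤ 547 K, σ^A ≥ 0.206). The example materials and their property values below are hypothetical, chosen only to exercise each stage:

```python
def passes_screen(mat):
    """Apply the three screening stages in order of increasing cost."""
    if mat["V_m"] < 35.5:           # stage 1: molar volume (geometry only)
        return False
    if mat["gap"] < 0.2:            # stage 1: discard (near-)metals
        return False
    if mat["Theta_D_inf"] > 547.0:  # stage 2: harmonic model
        return False
    if mat["sigma_A"] < 0.206:      # stage 3: anharmonicity
        return False
    return True

candidates = [
    {"name": "A", "V_m": 50.0, "gap": 1.5, "Theta_D_inf": 300.0, "sigma_A": 0.35},
    {"name": "B", "V_m": 20.0, "gap": 1.5, "Theta_D_inf": 300.0, "sigma_A": 0.35},
    {"name": "C", "V_m": 50.0, "gap": 1.5, "Theta_D_inf": 800.0, "sigma_A": 0.35},
]
survivors = [m["name"] for m in candidates if passes_screen(m)]  # -> ["A"]
```

Ordering the tests by computational cost is the point of the hierarchy: a material failing the cheap volume test never requires a harmonic model or aiMD run.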
Finally, let us emphasize that the proposed strategy is not limited to the discovery of thermal insulators, but can equally be used to find, e.g., good thermal conductors. This is demonstrated in Figure 6, in which we predict the thermal conductivity of all non-metallic and stable materials using the SISSO and KRR models. Generally, the SISSO and KRR models agree with each other, with only 28 of the 227 materials having a disagreement larger than a factor of two and only one (LiHF_2) having a disagreement larger than a factor of five, further illustrating the reliability of these predictions. We expect that the large deviation for LiHF_2 is a result of the large σ^A value for that material (0.54), which is significantly larger than the maximum in the training data. We can see from the inset histograms of both models that the hierarchical procedure successfully finds the good thermal insulators, with only 26 of the 122 materials with a κ_L(300 K) ≤ 10 W m^-1 K^-1 and 10 of the 80 materials with a κ_L(300 K) ≤ 5 W m^-1 K^-1 not passing all tests. Of these, only the thermal-insulating behavior of CuLiF_2 and Sr_2HN cannot be explained by the values of the two tests that they did pass. Conversely, materials that do not pass the tests show high conductivities. When one of the tests fails, the average estimated value of log(κ_L(300 K)) increases to 1.38 ± 0.49 (24.0 W m^-1 K^-1), with a range of 0.95 W m^-1 K^-1 to 741.3 W m^-1 K^-1. In particular, screening materials by their molar volume alone is a good marker for finding strong thermal conductors, as all 15 materials with κ_L(300 K) ≥ 100 W m^-1 K^-1 have a V_m ≤ 45 Å^3.

III. DISCUSSION
We have developed an AI framework to facilitate and accelerate materials-space exploration, and we have demonstrated its capabilities for the urgent problem of finding thermal insulators. By combining symbolic regression and sensitivity analysis, we are able to obtain accurate predictions for a given property using relatively easy-to-calculate materials properties, while retaining strong physical interpretability. Most importantly, this analysis enables us to create hierarchical, high-throughput frameworks, which we used to screen a set of more than 700 materials and find a group of ~100 possible thermal insulators. Notably, almost all of the good thermal conductors in the set of candidate materials are discarded within the first iteration of the screening, in which we only discriminate by molar volume, i.e., at a computational cost that is absolutely negligible compared to full calculations of κ_L. Accordingly, we expect this approach to be extremely useful in a wide range of materials problems beyond thermal transport, especially whenever (i) few reliable data are available, (ii) additional data are hard to produce, and/or (iii) multiple physical mechanisms compete nontrivially, limiting the reliability of simplified models.
Although the proposed approach is already reliable for small dataset sizes, it obviously becomes more so when applied to larger ones. Here, the identified heuristics can substantially help steer data creation towards more interesting parts of materials space. Along these lines, it is possible to iteratively refine both the SISSO model and the rules from the sensitivity analysis during materials-space exploration while the dataset grows. Furthermore, one can also apply the proposed procedure to the most influential primary features in a recursive fashion, learning new expressions for the computationally expensive features, e.g., σ^A, using simpler properties. In turn, this will further accelerate materials discovery, but also allow for gaining further physical insights. Most importantly, this method is not limited to the thermal conductivity of a material and can be applied to any target property. Further extending this framework to include information about where the underlying electronic-structure calculations are expected to fail also provides a means of accelerating materials discovery more generally [63].

A. SISSO
We use SISSO to discover analytical expressions for κ_L(300 K) [64]. SISSO finds low-dimensional, analytic expressions for a target property, P, by first generating an exhaustive set of candidate features, Φ, from a given set of primary features, Φ_0, and operators Ĥ_m, and then performing an ℓ_0-regularization over a subset of those features to find the n-dimensional subset of features whose linear combination results in the most descriptive model. Φ is built recursively in rungs, F_r, from Φ_0 and Ĥ_m, by applying all elements, ĥ_m, of Ĥ_m to all elements f_i and f_j of F_{r−1}:

F_r ≡ { ĥ_m(f_i, f_j) }, ∀ ĥ_m ∈ Ĥ_m and ∀ f_i, f_j ∈ F_{r−1}.
Φ_r is then the union of Φ_{r−1} and F_r. Once Φ is generated, the n_SIS features most correlated with P are stored in Ŝ_1, and the best one-dimensional models are trivially extracted from the top elements of Ŝ_1. Then the n_SIS features most correlated with any of the residuals, Δ_1^i, of the n_res best one-dimensional descriptors are stored in Ŝ_2. We define this projection score as

s = max(s_0, s_1, ..., s_i, ..., s_{n_res}),    (3)

where s_i = R²(φ, Δ_1^i) for a candidate feature φ ∈ Φ, and R² is the Pearson correlation function. We call this the multiple-residual approach, which was first introduced by the authors [65] and later fully described in Ref. [66]. From here, the best two-dimensional models are found by performing an ℓ_0-regularized optimization over Ŝ_1 ∪ Ŝ_2 [67]. This process is repeated iteratively until the best n-dimensional descriptor is found [64].
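The feature-selection logic above can be illustrated with a small numpy sketch. This is not SISSO++ itself: the primary features, the target, and the tiny candidate space are invented for illustration, and the final ℓ_0 step over Ŝ_1 ∪ Ŝ_2 is omitted; only the SIS ranking and the multiple-residual projection are shown.

```python
import numpy as np

def r2(f, y):
    """Squared Pearson correlation between a candidate feature and a target."""
    f = (f - f.mean()) / f.std()
    y = (y - y.mean()) / y.std()
    return np.mean(f * y) ** 2

rng = np.random.default_rng(0)
n = 75
x1 = rng.uniform(1.0, 10.0, n)     # stand-in primary feature (e.g. a volume)
x2 = rng.uniform(0.1, 1.0, n)      # stand-in primary feature (e.g. sigma_A)
y = 2.0 * np.log(x1) - 3.0 * x2 + rng.normal(0.0, 0.05, n)

# rung-1 candidate features built by applying the operators to Phi_0
phi = {"x1": x1, "x2": x2, "ln(x1)": np.log(x1), "exp(-x2)": np.exp(-x2)}

# SIS step 1: keep the n_sis candidates most correlated with the target
n_sis = 2
S1 = sorted(phi, key=lambda k: r2(phi[k], y), reverse=True)[:n_sis]

# residuals Delta_1 of the best one-dimensional linear models
residuals = []
for k in S1:
    A = np.column_stack([phi[k], np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals.append(y - A @ coef)

# SIS step 2 ("multiple residual" approach): score each remaining candidate
# by its maximum projection s = max_i R^2(phi, Delta_1^i) over all residuals
score = {k: max(r2(phi[k], d) for d in residuals)
         for k in phi if k not in S1}
best_s2 = max(score, key=score.get)
```

Since the toy target depends on ln(x1), that feature tops the first SIS ranking, while the second SIS step picks up the x2-dependence left in the residuals.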
For this application, Ĥ_m contains: A, exp(A), exp(−A), and ln(A). Additionally, to ensure that the units of the primary features do not affect the final results, we include the parameterized operators exp(αA), exp(−αA), and ln(αA + β), where α and β are scaling and bias constants used to adjust the input data on the fly. We find the optimal α and β for each of these operators using non-linear optimization [22,66,68]. To ensure that the parameterization does not result in mathematically invalid expressions for data points outside of the training set, the range of each candidate feature is derived from the ranges of the primary features, and the upper and lower bounds for the features are set appropriately. When new expressions are generated, these ranges are used as the domain of the operations, and any expression that would lead to invalid results is excluded [66]. The ranges of the primary features are set to be physically relevant for the systems we are studying and are listed in Table I. Hereafter, we call the use of these operators parametric SISSO. For more information, please refer to Ref. [66].
All hyperparameters were set following the cross-validation procedures described in Section IV E.

B. Kernel-Ridge Regression
To generate the kernel-ridge regression (KRR) models, we used the utilities provided by scikit-learn [69], with a radial-basis-function kernel and an optimized regularization term and kernel length scale. The hyperparameters were selected using a 141 by 141 point logarithmic grid search with parameters ranging from 10^−7 to 10^0. Before performing the analysis, each input feature x_i is standardized as

x_i^stand = (x_i − μ_i) / σ_i,

where x_i^stand is the standardized input feature, μ_i is the mean of the input feature over the training data, and σ_i is its standard deviation over the training data.
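A minimal scikit-learn sketch of this setup, on synthetic data, with a reduced 15 × 15 grid in place of the full 141 × 141 one to keep the example fast; sklearn's `gamma` parameter plays the role of the (inverse squared) kernel length scale:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (75, 4))                  # 75 samples, 4 features
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0.0, 0.01, 75)

# standardize each feature, then fit KRR with an RBF kernel; the
# regularization (alpha) and inverse length scale (gamma) are tuned on a
# logarithmic grid spanning 1e-7 .. 1e0, as in the paper
grid = {"kernelridge__alpha": np.logspace(-7, 0, 15),
        "kernelridge__gamma": np.logspace(-7, 0, 15)}
model = GridSearchCV(make_pipeline(StandardScaler(),
                                   KernelRidge(kernel="rbf")),
                     grid, cv=5)
model.fit(X, y)
```

Wrapping the scaler into the pipeline ensures the standardization statistics are recomputed on each cross-validation training fold, avoiding information leakage into the validation folds.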

C. Gaussian Process Regression
To generate the Gaussian process regression (GPR) models, we used the utilities provided by scikit-learn [69], with a radial-basis-function kernel and an optimized regularization term and kernel length scale. The hyperparameters were selected using a 141 by 141 point logarithmic grid search with parameters ranging from 10^−7 to 10^0. Before performing the analysis, each input feature x_i is standardized as x_i^stand = (x_i − μ_i)/σ_i, where x_i^stand is the standardized input feature, μ_i is the mean of the input feature over the training data, and σ_i is its standard deviation over the training data. All uncertainty values were taken from the results of the GPR predictions, and in the case of the nested cross-validation the uncertainty was propagated by combining the per-model predictions, κ^pred_{GPR,i}, and uncertainties, σ^pred_{GPR,i}, of the i-th GPR model for a given data point into a mean prediction and a mean uncertainty for that data point.
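A minimal sketch of a single GPR model in scikit-learn, together with one way to combine several per-fold predictions and uncertainties. The data are synthetic, and the combination rule shown (the law of total variance) is our assumed form of the propagation described above, not code from the paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, (60, 1))
y = np.sin(3.0 * X[:, 0]) + rng.normal(0.0, 0.05, 60)

# RBF kernel with an optimized length scale; the WhiteKernel plays the
# role of the tuned regularization (noise) term
gpr = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-2),
                               normalize_y=True, random_state=0)
gpr.fit(X, y)
mu, sigma = gpr.predict(np.array([[1.0]]), return_std=True)

# combining the predictions of several GPR models (e.g. one per repetition
# of the nested CV) via the law of total variance -- an assumed form of
# the propagation; the per-model numbers below are hypothetical
mus = np.array([0.80, 0.90, 1.00])      # per-model predictions
sigmas = np.array([0.10, 0.20, 0.10])   # per-model uncertainties
mu_bar = mus.mean()
sigma_bar = np.sqrt(np.mean(sigmas ** 2 + (mus - mu_bar) ** 2))
```

With this combination rule the aggregated uncertainty grows when the individual models disagree, which is the behavior one wants from a fold-averaged error bar.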

D. Creating the Dataset
In this study we focus only on room-temperature data for κ_L, since values for other temperatures are even scarcer. However, we note that an explicit temperature dependence can be straightforwardly included using multi-task SISSO [70], and it is at least partially included via the anharmonicity factor, σ_A [8] (see below for more details). For κ_L(300 K), we have compiled a list of seventy-five materials from the literature (see Supplementary Table 1 for the complete list with references) whose thermal conductivity has been experimentally measured. This list was curated from an initial set of over 100 materials, from which we removed all samples that are either thermodynamically unstable or electrical conductors. The resulting list of materials covers a diverse set of fourteen different binary and ternary crystal-structure prototypes [61,62,71].
With respect to the primary features, Φ_0, compound-specific properties are provided for each material. All primary features can be roughly categorized into two classes: structural parameters that describe the equilibrium structure and dynamical parameters that characterize the nuclear motion. For the latter, both harmonic and anharmonic properties have been taken into account. As shown in Supplementary Note 5, additional features, such as the parameters entering the Slack model, i.e., γ, Θ_a, and V_a, can be included. However, these features do not benefit the model, and when they are included only V_a, and not γ or Θ_a, is selected. For a complete list of all primary features and their definitions, refer to Table I.
The structural parameters relate to either the masses of the atoms (μ, m_min, m_max, m_avg), the lattice parameters of the primitive cell (V_m, L_min, L_max, L_avg), the density of the material (ρ), or the number of atoms in the primitive cell (n_at). For all systems, a generalization of the reduced mass, μ, is used so that it extends to non-binary systems,

1/μ = Σ_{i=1}^{n_emp} 1/m_i,

where n_emp is the number of atoms in the empirical formula and m_i is the mass of atom i. Similarly, the molar volume is calculated as

V_m = V_prim / Z,

where V_prim is the volume of the primitive cell and Z = n_at / n_emp. Finally, ρ is calculated by dividing the total mass of the empirical cell by V_m.

All of the harmonic properties used in these models are calculated from a converged harmonic model generated using phonopy [72]. For each material, the phonon densities of states of successively larger supercells are compared using a Tanimoto similarity measure,

S = ⟨g_{p,L}, g_{p,S}⟩ / ( ⟨g_{p,L}, g_{p,L}⟩ + ⟨g_{p,S}, g_{p,S}⟩ − ⟨g_{p,L}, g_{p,S}⟩ ),  with  ⟨f, g⟩ = ∫ f(ω) g(ω) dω,

where S is the similarity score, g_{p,L}(ω) is the phonon density of states of the larger supercell, and g_{p,S}(ω) is that of the smaller supercell. If S > 0.80, the harmonic model is considered converged. From here, C_V is calculated with phonopy as a weighted sum over the mode-dependent heat capacities. Both approximations to the Debye temperature are calculated from the moments of the phonon density of states, g_p(ε), at energy ε [73]. Finally, v_s is approximated from the Debye frequency, ω_D [20], which is obtained from the coefficient a of the low-frequency fit g_p(ω) ≈ a ω² in the range [0, ω_{Γ,max}/8]. To measure the anharmonicity of the materials, we use σ_A as defined in Ref. [8],

σ_A(T) = √( ⟨ Σ_{I,α} ( F_{I,α} − F^ha_{I,α} )² ⟩(T) / ⟨ Σ_{I,α} F²_{I,α} ⟩(T) ),

in which ⟨•⟩(T) denotes the thermodynamic average at a temperature T, F_{I,α} is the α component of the force calculated from density-functional theory (DFT) acting on atom I, and F^ha_{I,α} is the same force approximated by the harmonic model [8]. First, we calculate σ_A^OS, which uses an approximation to the thermodynamic ensemble average
using the one-shot method proposed by Zacharias and Giustino [74]. In the one-shot approach, the atomic positions are offset from their equilibrium positions by a vector ΔR with components

ΔR_{I,α} = (1/√m_I) Σ_s ζ_s ⟨A_s⟩ e_{s,Iα},

where I is the atom index, α is the Cartesian component, m_I is the mass of atom I, e_s are the harmonic eigenvectors, ⟨A_s⟩ = √(2 k_B T)/ω_s is the mean mode amplitude in the classical limit [75], and ζ_s = (−1)^{s−1} [74]. These displacements correspond to the turning points of the oscillations estimated from the harmonic force constants, and they provide a good approximation to σ_A in the harmonic limit. Because of this, if σ_A^OS < 0.2, we accept that value as the true σ_A. Otherwise, we calculate σ_A using ab initio molecular dynamics (aiMD) in the canonical ensemble at 300 K for 10 ps, using the Langevin thermostat. When performing the high-throughput screening, the threshold for when to use aiMD is increased to 0.4, because that is the point at which σ_A^OS becomes qualitatively unreliable [8]. All electronic-structure calculations are done using FHI-aims [76].
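The supercell-convergence check based on the Tanimoto similarity can be sketched in a few lines of numpy; the two mock Gaussian densities of states below stand in for real phonopy output:

```python
import numpy as np

def tanimoto(g_l, g_s, omega):
    """Tanimoto similarity between two phonon densities of states sampled
    on a common, equally spaced frequency grid omega."""
    dw = omega[1] - omega[0]
    ll = np.sum(g_l * g_l) * dw     # <g_L, g_L>
    ss = np.sum(g_s * g_s) * dw     # <g_S, g_S>
    ls = np.sum(g_l * g_s) * dw     # <g_L, g_S>
    return ls / (ll + ss - ls)

# mock densities of states standing in for successive supercell sizes
omega = np.linspace(0.0, 10.0, 500)
g_small = np.exp(-((omega - 5.0) ** 2))   # smaller supercell
g_large = np.exp(-((omega - 5.1) ** 2))   # larger supercell

S = tanimoto(g_large, g_small, omega)
converged = S > 0.80   # convergence threshold used in the paper
```

The measure equals 1 for identical spectra and decays towards 0 as the two densities of states stop overlapping, which makes the fixed 0.80 threshold easy to interpret.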
All geometries are optimized with symmetry-preserving, parametric constraints until all forces are converged to a numerical precision better than 10^−3 eV/Å [77]. The constraints are generated using the AFLOW XtalFinder tool [71]. All calculations use the PBEsol functional for the exchange-correlation energy and SCF convergence criteria of 10^−6 eV/Å and 5×10^−4 eV/Å for the density and forces, respectively. Relativistic effects are included in terms of the scalar atomic ZORA approach, and all other settings are taken to be the default values in FHI-aims. For all calculations we use the light basis sets and numerical settings of FHI-aims. These settings were shown to ensure convergence of the lattice constants to within ±0.1 Å and a relative accuracy of 3% in the phonon frequencies [8].
All primary features are calculated using the workflows defined in FHI-vibes [78].

E. Error Evaluation
To estimate the prediction error of all models, we perform a nested cross-validation in which the data are initially separated into different training and test sets using a ten-fold split. Two hyperparameters (maximum dimension and parameterization depth) are then optimized using a five-fold cross-validation on each of the training sets, and the overall performance of the model is evaluated on the corresponding test set. The size of the SIS subspace, the number of residuals, and the rung were set to 2000, 10, and 3, respectively, because they did not have a large impact on the final results. We then repeat the procedure three times and average over the iterations to obtain a reliable estimate of the prediction error for each sample [79].
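A compact scikit-learn sketch of this nested scheme, using a KRR model with one tuned hyperparameter as a stand-in for the SISSO hyperparameter search (the data and the grid are invented for illustration):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (75, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, 75)

preds = []
for seed in range(3):                       # three repetitions
    outer = KFold(n_splits=10, shuffle=True, random_state=seed)
    inner = GridSearchCV(KernelRidge(kernel="rbf"),
                         {"alpha": np.logspace(-5, 0, 6)}, cv=5)
    # each point is predicted by a model whose hyperparameters were tuned
    # only on that outer fold's training portion (nested CV)
    preds.append(cross_val_predict(inner, X, y, cv=outer))

pred = np.mean(preds, axis=0)               # average over the repetitions
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because the inner `GridSearchCV` is refit inside every outer training fold, the resulting RMSE is an estimate of true out-of-sample error rather than of the tuned model's training error.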

F. Calculating the inputs to the Slack model
The individual components of the Slack model are the same as those used for the main models, with the exception of γ, V_a, and Θ_a. For Θ_a, we first calculate the Debye temperature,

Θ_D = ℏ ω_D / k_B,

where ω_D is the same Debye frequency used for calculating v_s (see Section IV D), k_B is the Boltzmann constant, and ℏ is the reduced Planck constant. From here, we calculate Θ_a as

Θ_a = Θ_D n_at^{−1/3}.

We use the phonopy definition of Θ_D instead of Θ_{D,∞} because it is better aligned with the original definition of Θ_a. However, Θ_D is not used in the SISSO training, because the initial fitting procedure used to find ω_D does not produce a unique value for Θ_D, and it is already partially included via v_s. To calculate the thermodynamic Grüneisen parameter, we use the utilities provided by phonopy [72].
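Numerically, these relations amount to a couple of lines. The sketch below assumes the common Slack form Θ_a = Θ_D · n_at^(−1/3) with Θ_D = ℏω_D/k_B, and the example Debye frequency is an invented value:

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J s
KB = 1.380649e-23        # Boltzmann constant, J / K

def theta_a(omega_d, n_at):
    """Acoustic Debye temperature of the Slack model (assumed form):
    Theta_D = hbar * omega_D / k_B, reduced by n_at**(-1/3)."""
    theta_d = HBAR * omega_d / KB
    return theta_d * n_at ** (-1.0 / 3.0)

# hypothetical Debye frequency of 5e13 rad/s and a 2-atom primitive cell
theta = theta_a(5.0e13, 2)
```

The n_at^(−1/3) reduction accounts for the fact that only the three acoustic branches of the 3 n_at phonon branches carry most of the heat in the Slack picture.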
The atomic volume, V_a, was calculated by dividing the volume of the primitive cell by the total number of atoms.

G. Calculating the Sobol Indices
Formally, the Sobol indices are defined as

S_i = Var_{x_i}( E_{X_i}( Y | x_i ) ) / Var(Y),    S_{T_i} = E_{X_i}( Var_{x_i}( Y | X_i ) ) / Var(Y),

where Y = log κ_SISSO(300 K) is the model output, x_i ∈ X is one of the inputs to the model, Var_a(B) is the variance of B with respect to a, E_a(B) is the mean of B after sampling over a, and X_i is the set of all variables excluding x_i. Normally, it is assumed that all elements of X are independent of each other, and this assumption is preserved when calculating S_i and S_{T_i} in Figure 3b. Under this assumption, the variance of log κ_SISSO(300 K) and the required expectation values would be calculated by sampling over an n_v-dimensional hypercube covering the full input range, ignoring the correlation between the input variables. However, in order to properly model the correlative effects between elements of X, Kucherenko et al. modified this sampling approach [49,51]. The first step of the updated algorithm is to fit the input data to a set of marginal univariate distributions coupled together via a copula [49,51]. The algorithm then samples over an n_v-dimensional unit hypercube and transforms these samples into the correct variable space using a transform defined by the fitted distributions and copulas (see Supplementary Note 3 for more details). It was later demonstrated that, when the approach proposed by Kucherenko and coworkers is used to calculate the Sobol indices, S_i includes effects from the dependence of x_i on the variables in X_i, while S_{T_i} is independent of these effects [80]. We use this updated algorithm to calculate S_i and S_{T_i} in Figure 3a. In both cases we use the implementation in UQLab [50] to calculate S_i and S_{T_i}.
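For the independent-input case (as used for Figure 3b), S_i and S_{T_i} can be estimated with the standard Saltelli/Jansen pick-freeze estimators; the dependent-input Kucherenko variant used for Figure 3a requires the copula machinery of UQLab and is not reproduced here. A numpy sketch on a toy additive model with a known answer:

```python
import numpy as np

def sobol_indices(f, d, n=2 ** 14, seed=0):
    """First-order (S_i) and total (S_Ti) Sobol indices for *independent*
    inputs on [0, 1]^d, via the Saltelli / Jansen pick-freeze estimators."""
    rng = np.random.default_rng(seed)
    A, B = rng.uniform(size=(2, n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                            # resample only input i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var         # Saltelli (2010)
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen (1999)
    return S, ST

# additive toy model: x0 dominates, x2 is inert
# (analytically S = [16/17, 1/17, 0] and S_T = S)
f = lambda X: 4.0 * X[:, 0] + X[:, 1]
S, ST = sobol_indices(f, 3)
```

For an additive model the first-order and total indices coincide; interaction terms in a real model show up as S_{T_i} > S_i.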

H. Calculating the SHAP Indices
The SHAP values are calculated by treating the features as independent variables using the original method proposed by Lundberg and Lee [48], as implemented in the python package shap, and as dependent variables using shapr by Aas et al. [53]. The SHAP values are an extension of the Shapley values from cooperative game theory, which distribute the contribution, v(S), of each player or subset of players, S ⊆ M = {1, ..., M}, where M is the set of all players [48,53]. The Shapley value, ϕ_j(v) = ϕ_j, can then be calculated by taking a weighted mean over the contribution-function differences for all S not containing player j,

ϕ_j = Σ_{S ⊆ M\{j}} [ |S|! (M − |S| − 1)! / M! ] ( v(S ∪ {j}) − v(S) ),

where |S| is the number of members in S [53].
For a machine-learning problem with a training set {y_i, x_i}_{i=1,...,n_train}, where y_i and x_i are the target property value and input feature values of the i-th data point in the training set with n_train data points [48,53], we can explain the prediction of the model, f(x*), for a particular point, x*, with

f(x*) = ϕ_0 + Σ_{j=1}^{M} ϕ*_j,

where ϕ_0 is the mean prediction and ϕ*_j is the Shapley value of the j-th feature for the prediction at x = x*. Essentially, the Shapley values of the model describe the difference between a prediction, y* = f(x*), and the mean of all predictions [48,53]. The contribution function is then defined as

v(S) = E[ f(x) | x_S = x*_S ],

which is the expectation value of the model conditional on x_S = x*_S [48,53]. This expectation value can be calculated as

v(S) = ∫ f(x_S̄, x*_S) p(x_S̄ | x_S = x*_S) dx_S̄,

where x_S̄ is the subset of all features not included in S and p(x_S̄ | x_S = x*_S) is the conditional probability distribution of x_S̄ given x_S = x*_S [48,53]. In the case where the features are treated as independent, p(x_S̄ | x_S = x*_S) is replaced by p(x_S̄), and v(S) can be approximated by Monte Carlo integration,

v(S) ≈ (1/K) Σ_{k=1}^{K} f(x^k_S̄, x*_S),

where the x^k_S̄ are samples from the training data and K is the number of samples taken [48,53]. To include feature dependence, the marginal distributions of the training data are converted into a Gaussian copula, which is then used to generate the samples for the Monte Carlo integration [53].
Because the number of subsets that need to be explored grows as 2^M with the number of features, calculating the exact Shapley values becomes intractable for a large number of inputs. To remove this constraint, the problem can be approximated as the optimal solution of a weighted least-squares problem, an approach known as Kernel SHAP [48,53].
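For a handful of features, the exact Shapley sum is still tractable and makes the definitions above concrete. The sketch below evaluates v(S) under the independence assumption by Monte Carlo over the training set, using an invented linear toy model (for which ϕ_j reduces to w_j(x*_j − x̄_j)):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(model, x_star, X_train):
    """Exact Shapley values, treating features as independent:
    v(S) = E[f(x) | x_S = x*_S], estimated by fixing the features in S to
    x* and averaging the remaining features over the training data."""
    M = len(x_star)

    def v(S):
        Xs = X_train.copy()
        Xs[:, list(S)] = x_star[list(S)]   # condition on x_S = x*_S
        return model(Xs).mean()            # Monte Carlo over training rows

    phi = np.zeros(M)
    for j in range(M):
        others = [k for k in range(M) if k != j]
        for r in range(M):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                phi[j] += w * (v(S + (j,)) - v(S))
    return phi

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))
model = lambda X: 2.0 * X[:, 0] + X[:, 1]   # linear toy model; w = (2, 1, 0)
x_star = np.array([1.0, -1.0, 0.5])
phi = shapley_values(model, x_star, X_train)
```

The values satisfy the efficiency property: they sum exactly to f(x*) minus the mean prediction over the training set, which is what the ϕ_0 + Σ ϕ*_j decomposition expresses.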

I. Calculating the LIME Indices
For the LIME values, we use the lime package in python [54]. The values were calculated using the standard tabular explainer with all features in the model, and the mean absolute value of each prediction for each feature was used to assess the global feature importance. The methodology assumes the features are independent; for algorithmic details see Ref. [54].
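The essence of LIME for tabular regression, independent of the lime package's API, is a locally weighted linear surrogate. A minimal numpy rendition (the toy model, the perturbation scale, and the kernel width are our choices for illustration):

```python
import numpy as np

def lime_explain(predict, x_star, n_samples=4000, scale=0.3, width=0.75,
                 seed=0):
    """A minimal LIME-style local explanation (features assumed independent):
    perturb x*, weight the perturbations by an RBF proximity kernel, and fit
    a weighted linear surrogate; its coefficients are the local attributions."""
    rng = np.random.default_rng(seed)
    d = len(x_star)
    Z = x_star + rng.normal(scale=scale, size=(n_samples, d))   # perturbations
    w = np.exp(-np.sum((Z - x_star) ** 2, axis=1) / width ** 2) # proximity
    A = np.column_stack([Z, np.ones(n_samples)])                # + intercept
    y = predict(Z)
    AtW = A.T * w
    coef = np.linalg.solve(AtW @ A, AtW @ y)    # weighted least squares
    return coef[:d]

# toy regression model: feature 2 is locally irrelevant
predict = lambda X: 3.0 * X[:, 0] - 1.0 * X[:, 1]
attr = lime_explain(predict, np.array([0.5, 0.5, 0.5]))
```

For a globally linear model the surrogate recovers the true coefficients; for a nonlinear model the attributions instead approximate the local gradient around x*.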

J. Calculating the Thermal Conductivity
To calculate κ_L, we use the ab initio Green-Kubo (aiGK) method [10,81]. The aiGK method calculates the αβ component of the thermal-conductivity tensor, κ^{αβ}, of a material for a given volume V, pressure p, and temperature T as

κ^{αβ} = V / (k_B T²) ∫_0^∞ G^{αβ}[J](t) dt,

where k_B is Boltzmann's constant, ⟨•⟩_{(T,p)} denotes an ensemble average, J(t) is the heat flux, and G[J] is the time-(auto)correlation function

G^{αβ}[J](t) = ⟨ J^α(t_0) J^β(t_0 + t) ⟩_{(T,p)}.

The heat flux of each material is calculated from aiMD trajectories using the definition

J(t) = (1/V) Σ_I σ_I Ṙ_I,

where R_I is the position of the I-th atom and σ_I is the contribution of the I-th atom to the stress tensor, σ = Σ_I σ_I [10]. From here, κ_L is calculated as the isotropic average κ_L = (1/3) Σ_α κ^{αα}. All calculations were done using both FHI-vibes [78] and FHI-aims, with the same settings as the previous calculations [8] (see Section IV D for more details). The molecular-dynamics calculations were done using a 5 fs time step in the NVE ensemble, with the initial structures taken from a 10 ps NVT trajectory. Three MD runs were done for each material, and κ_L was taken to be the average of all three.
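The core numerics of this step, integrating the heat-flux autocorrelation function and taking the isotropic average, can be sketched with numpy. The flux series below is uncorrelated mock data, and the function is a schematic sketch rather than the full aiGK workflow of Refs. [10, 81]:

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J / K

def green_kubo_kappa(J, dt, volume, temperature):
    """Running Green-Kubo integral: kappa(t) = V / (kB T^2) * cumulative time
    integral of the heat-flux autocorrelation function, averaged over the
    three Cartesian components ((1/3) Tr of the tensor)."""
    n = len(J)
    J = J - J.mean(axis=0)                    # remove any drift of the flux
    acf = np.array([np.correlate(J[:, a], J[:, a], mode="full")[n - 1:] / n
                    for a in range(3)])       # per-component autocorrelation
    running = np.cumsum(acf, axis=1) * dt     # cumulative integral over t
    return volume / (KB * temperature ** 2) * running.mean(axis=0)

rng = np.random.default_rng(0)
J = rng.normal(size=(4000, 3))                # mock, uncorrelated flux series
kappa_t = green_kubo_kappa(J, dt=5.0e-15, volume=1.0e-27, temperature=300.0)
```

In practice one monitors the running integral kappa_t for a plateau; for a real correlated flux the plateau value, not the first sample, is reported as κ_L.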

VI. CODE AVAILABILITY
SISSO++ [22] and FHI-vibes [78] were used to generate all data and analysis in the paper and are freely available online in the cited publications. All electronic-structure calculations were done using FHI-aims [76], which is freely available for academic use (with a voluntary donation) (https://fhi-aims.org/get-the-code-menu/get-the-code).

Supplementary Figure 5. The expected value of κ_SISSO^Slack(300 K), E_X[κ_SISSO^Slack(300 K) | X], where X is defined by the variables on the x- and y-axes. All one-dimensional maps are shown along the diagonal, where the y-axis represents the expected value of the models (the red axis on the right). E_X[κ_SISSO^Slack(300 K) | X] is calculated by sampling over the multivariate distributions used for the sensitivity analysis and binning the input data until there are at least 10,000 samples in each bin. The red line in the diagonal plots corresponds to E_X[κ_SISSO^Slack(300 K) | X], and the pink shaded region is one standard deviation on either side of the line. The gray shaded regions represent where a thermal conductivity of 10 W m^−1 K^−1 or lower is within one standard deviation of the expected value. All materials in the training set are displayed on all maps. The green circles correspond to rock salts, the blue diamonds to zincblendes, the light-blue pentagons to wurtzites, and the black triangles to all other materials. All points with a κ_L(300 K) less than one standard deviation below the expected value based on σ_A are highlighted in white. The points in the diagonal plots correspond to the actual values of κ_L(300 K) for each material.

FIG. 1. The motivation for this work is reducing the number of calculations needed to approximate the thermal conductivity of a material. a) The number of force evaluations needed to complete each step of a κ_L calculation for four representative materials: 1) geometry relaxation (green, first bar), 2) harmonic-model generation with Phonopy (yellow, second bar), 3) evaluation of κ_L via Phono3py (lavender, third bar) or MD (purple, fourth bar). The relaxation step typically acts on the primitive cell (∼10 atoms), while all other steps are done on supercells with ∼200 or more atoms. The number of force evaluations for Phono3py assumes all displacements are needed to calculate the third-order force constants for version 2.5.1. b) The proposed hierarchical workflow, which can screen out materials before the final calculations.

FIG. 3. Feature-importance metrics for the models: S_i (first bar, dark blue), S_{T_i} (second bar, light blue), mean absolute SHAP index (third bar, brown), and LIME index (fourth bar, yellow) for each feature in the model, treating the inputs as a) dependent features and b) independent features. The Sobol indices are plotted on the left y-axis, and the SHAP and LIME indices are plotted on the right y-axis.

FIG. 5. Validation of the predictions of the model: the convergence of the calculated thermal conductivity of a) CaF2, b) ClBaBr, c) GaLiO2, and d) LiScS2. All aiGK calculations were done using the average of three 75 ps (ClBaBr and GaLiO2) or 100 ps (CaF2 and LiScS2) molecular-dynamics trajectories. The dashed lines are the values of the thermal conductivities predicted by Equation 1, and the shaded region is the 95% confidence interval of the prediction based on the RMSE obtained in Figure 2b.

FIG. 6. A scatter plot of the predictions of both the SISSO- and KRR-generated models for an additional 227 materials from the same classes as the training set. σ_A is estimated via σ_A^OS for all materials with σ_A^OS ≤ 0.4 in this screening. The dataset is split into four subsets based on whether the V_m test failed (top, green), the Θ_{D,∞} test failed (second from top, yellow), the σ_A test failed (third from top, blue), or none of the tests failed (bottom, purple). The insets show histograms of all predictions using the same breakdown. The darker shaded region represents where both predictions are within a factor of 2 of each other, and the lighter shaded region where both predictions are within a factor of 5 of each other.