Abstract
Structural defects are abundant in solids and vital to macroscopic materials properties. However, establishing a defect-property linkage typically requires significant effort from experiments or simulations, and the result often contains limited information due to the breadth of the nanoscopic design space. Here we report a graph neural network (GNN)-based approach to achieve direct translation between mesoscale crystalline structures and atom-level properties, emphasizing the effects of structural defects. Our end-to-end method offers great performance and generality in predicting both atomic stress and potential energy of multiple systems with different defects. Furthermore, the approach also precisely captures derivative properties which strictly obey physical laws and reproduces the evolution of properties with varying boundary conditions. By incorporating a genetic algorithm, we then design de novo atomic structures with optimal global properties and target local patterns. The method significantly enhances the efficiency of evaluating atomic behaviors given structural imperfections and accelerates the design process at the mesoscale.
Introduction
Structural defects are inevitable during synthesis and essential to the performance of a great variety of materials. On one hand, the deviation from perfection can deteriorate the performance of a pristine crystal, for example by lowering the elastic moduli, opening the zero band gap, or ruining the high thermal/electric conductivity^{1,2,3}. On the other hand, defects can also be useful for tailoring local properties and developing promising functionalities of materials. For instance, a nanoporous graphene membrane is able to serve as an effective water filter^{4}, and polycrystalline graphene can release more energy than pristine graphene under fracture^{5}. Furthermore, not only the defects themselves but also their atomic-level distribution can affect the properties of crystalline solids both locally and globally. The structural gradient of nanotwin boundaries in metals offers one such example, realizing superior strengthening with a tunable distribution of structural defects^{6}.
To experimentally eliminate or control the formation of defects during synthesis, inexhaustible efforts have been devoted to the development and improvement of methodologies such as single-crystal silicon growth^{7} and the spatial control of defect creation in graphene synthesis^{8}. In addition, multiscale modeling approaches from the quantum level up to the continuum level have been developed to calculate the effects of structural defects and reveal the mechanisms behind experimental observations. Typical methods include density functional theory (DFT)^{9,10}, molecular dynamics (MD) simulation^{5,11,12}, and the finite element method (FEM)^{13,14}. However, due to the heterogeneity introduced by defects, the design space of defective solids usually contains an enormous number of possible structures, which prevents a brute-force search for optimal geometries. In addition, both experiments and simulations can be expensive and time-consuming, especially when the system size surges. Given these concerns, artificial intelligence (AI) steps in as a savior thanks to its capacity for making fast predictions based on knowledge learned during training.
The advent of machine learning (ML) methods, especially deep learning (DL), has revolutionized materials design, physical modeling, and property measurement^{15,16,17,18,19,20}. DL approaches and architectures originally developed to achieve human-level capacities have been merged into almost all disciplines^{21,22,23,24}. Within the field of crystalline solids, the applications of ML models range from intrinsic property calculations of small periodic crystal unit cells^{25} to dynamic crack path predictions in large solids^{26}. In addition, AI-based approaches have also been applied to search for optimal structures in terms of both the mechanical^{27,28} and thermal properties^{29} of defective crystals such as porous 2D materials. In the cases mentioned, multiscale modeling is widely leveraged to generate training data or validate the predictions of ML models. By incorporating field-based (DFT), particle-based (MD), or continuum-based (FEM) modeling, ML has also impacted a great variety of aspects of materials science, including quantum interaction calculations^{30,31,32}, molecular force-field development^{33,34,35}, and continuum mechanics^{36,37,38}. Among DL architectures, GNN models have been developed to deal with graph structures, which model a set of objects (nodes) and the relationships between them (edges). Compared to convolutional neural networks (CNNs), which have been successful on Euclidean data such as images, GNNs extend the power of DL to non-Euclidean data such as social networks, molecules, and, in this work, crystals^{39}.
However, most of these ML-based methods are limited by the minimal amount of information included in their predictions and by obstacles to generalization, given that the models either focus on small crystal structures or predict only a single property. In this work, to overcome these difficulties, we introduce a general method to translate a crystalline structure, represented by a graph with spatial information, directly into atom-wise properties such as the atomic stress field or the potential energy distribution. We demonstrate the performance of the method on multiple large crystalline systems, including 2D graphene and 3D aluminum systems with different types of structural defects and target atomic properties. The proposed approach achieves high accuracy and captures the physical information extracted from atomic predictions in all the datasets we investigate, serving as a potential alternative to expensive molecular simulations. The model is further combined with optimization algorithms to screen designs with low stress concentration and specific local stress patterns, demonstrating its utility for design purposes.
Results
Workflow and DL model
We develop a deep learning model to translate crystalline structures, especially within the context of defects, to atomic properties using a graph-based neural network on different datasets (Fig. 1 and Methods: Dataset generation and Methods: Graph neural network)^{40}. To start with, defective crystalline solids containing either grain boundaries (GBs) or vacancies with random structures are generated using well-studied algorithms. Two classes of crystalline solids are mainly investigated in this work: the 2D graphene sheet and the 3D aluminum bulk. More specifically, three separate datasets are generated to test the approach, each containing thousands of crystals with one type of defect in one class of system. With the randomly generated structures, fully atomistic MD simulations are carried out to perform different tests such as relaxation, tension, and heating of the crystal. The outcomes of the MD simulations are regarded as the ground truth, i.e., the labels used to train the DL model. MD simulations, grounded in experimental observations, have been widely studied and utilized for all three systems we investigate. For instance, atomistic simulations have revealed not only the statistics of strength and toughness of polycrystalline graphene^{5,41}, but also the effects of grain size, strain rate, and temperature on its mechanical behaviors^{42}. For nanoporous graphene, MD simulations show a linear relation between the defect concentration and the tensile modulus in multiple studies^{43,44,45}. As for polycrystalline aluminum, processes involving dislocations such as melting and solidification have been examined using MD simulations for more than two decades^{46,47}. The DL model here treats each individual crystal as a graph, with each node representing an atom and each edge a chemical bond.
Since we emphasize structural effects and deal with single-element crystals, the node features are the spatial coordinates of each atom, and the edges encode only connectivity within a certain cutoff distance (Methods: Dataset generation). The model is trained to predict atomic stress or potential energy, which serves as the label of each node.
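As a minimal sketch of this graph construction (our own illustration, not the authors' implementation; periodic boundary images are ignored for brevity), nodes carry atomic coordinates and edges connect any pair of atoms closer than the cutoff:

```python
import numpy as np

def build_graph(coords, cutoff):
    """Build a crystal graph: node features are atomic coordinates,
    edges connect atom pairs within `cutoff`. Periodic images are
    ignored in this toy sketch."""
    coords = np.asarray(coords, dtype=float)
    # pairwise distance matrix (N x N)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # keep pairs closer than cutoff, excluding self-pairs
    i, j = np.where((dist < cutoff) & (dist > 0.0))
    edges = list(zip(i.tolist(), j.tolist()))  # directed both ways
    return coords, edges

# toy example: three collinear atoms spaced 1.5 Å apart, cutoff 2.0 Å
feats, edges = build_graph([[0, 0, 0], [1.5, 0, 0], [3.0, 0, 0]], 2.0)
```

In the toy example only the adjacent pairs (0, 1) and (1, 2) are bonded, in both directions; atoms 0 and 2 are 3.0 Å apart and remain unconnected.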
Once the ML model is trained, atomic properties in a new structure can be predicted, bypassing conventional atomistic simulations. The atom-level information can be used not only to extract collective properties such as the modulus or total potential energy, but also to investigate local phenomena, including stress concentrations or energy barriers. Furthermore, the approach can be utilized to design atomic structures, such as holey graphene membranes with low stress concentration, which is made feasible by the acceleration of property calculations.
Von Mises stress field predictions in polycrystalline graphene sheets
Grain boundaries can strongly and widely affect graphene’s behaviors, including mechanical strength, toughness, and thermal and electronic transport^{48}. Therefore, we first test our approach on a polycrystalline graphene dataset which contains 2000 randomly generated structures, produced using an algorithm that yields well-annealed GBs^{41}. Each polycrystal is created within a periodic unit cell with a size of 128 Å × 128 Å × 10 Å in the x, y, and z directions. We pick four different grain numbers (4, 8, 12, and 16) and generate 500 structures for each. The distributions of grain sizes for the different grain numbers are shown in Fig. 2a, along with an example of grain seeds and orientations in a generated crystal shown by the Voronoi diagram. MD simulations are carried out to calculate the von Mises stress field (\(\sigma _{{{{\mathrm{von}}}}}\)) of each polycrystalline graphene sheet in the NVT ensemble (T = 300 K). We first equilibrate the polycrystal and then sample the ensemble averages (Fig. 2b) of the coordinates and stress value of each atom, which are used as the input node features and output node label, respectively, in our GNN model (Fig. 2c). Details of the MD simulations, including pre-sampling equilibration and sampling convergence, are discussed in Methods: Dataset generation.
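For reference, the per-atom von Mises stress follows from the six Cartesian stress components via the standard invariant formula; a small sketch (the Voigt component ordering here is our assumption, matching a typical MD per-atom stress dump):

```python
import numpy as np

def von_mises(s):
    """Per-atom von Mises stress from the six Voigt components
    (sxx, syy, szz, sxy, syz, szx) of each atom's stress tensor."""
    sxx, syy, szz, sxy, syz, szx = np.asarray(s, dtype=float).T
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2
                          + (szz - sxx) ** 2
                          + 6.0 * (sxy ** 2 + syz ** 2 + szx ** 2)))

# sanity check: pure uniaxial stress sxx = 1 gives von Mises stress 1
print(von_mises([[1, 0, 0, 0, 0, 0]]))  # → [1.]
```

A purely hydrostatic state (equal diagonal components, no shear) gives zero von Mises stress, consistent with it measuring deviatoric load only.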
With the generated data, we first build graphs based on the polycrystal structures (Methods: Graph representation) and then train our model on the train set to learn the linkage between defective structures and atomic properties. After training, the model predicts the von Mises stress field accurately, as shown by two random examples comparing the predictions with the ground truth (Fig. 2d). The atomic structures are colored based on the local lattice type calculated using polyhedral template matching in OVITO’s python interface^{49,50}. As the figure displays, not only the stress concentrations at the GBs but also the concentration patterns are precisely captured by the model. To quantify the model performance, the normalized relative errors, which are average relative errors of the atomic properties after normalization (Methods: Model training and evaluation), are calculated for the 400 new crystals in the test set. The mean error is approximately 5.5%, while the highest error is below 7%, indicating both the high accuracy and the robustness of the approach (Fig. 2e). From the predicted atomic properties, collective mechanical properties of the crystals, such as the mean von Mises stress, can be derived. The mean von Mises stress, which represents the overall residual stress, can reflect mechanical behaviors such as the likelihood of fracture. The ML results for the mean von Mises stress align well with the MD results (R^{2} = 0.99), even though the property is secondary information extracted from atomic predictions (Fig. 2f). We also cluster the data points based on the grain number in Fig. 2f. The mean von Mises stress generally increases with the grain number, as there are more GBs inside the crystal, causing higher residual stress and more weak regions. This physical relation between the grain number and the mean von Mises stress is also precisely reproduced by our model, further validating its performance.
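One plausible implementation of such a normalized relative error (a sketch for illustration; the paper's exact normalization is defined in its Methods section) divides the mean absolute atomic error by the range of the ground-truth field:

```python
import numpy as np

def normalized_relative_error(pred, true):
    """Mean absolute error between predicted and ground-truth atomic
    properties, normalized by the range of the ground truth so that
    fields with different magnitudes are comparable. Illustrative
    reading of the metric, not the paper's exact definition."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    span = true.max() - true.min()  # normalization constant
    return np.mean(np.abs(pred - true)) / span
```

A perfect prediction yields 0, and the metric grows with the average per-atom deviation relative to the spread of the field.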
To emphasize, although no loading is applied in this case, the translation from polycrystalline structures to the stress field is not an on-the-fly stress calculation based on the force field, which would not be computationally expensive in MD simulations. Instead, the linkage is between the ensemble averages of the spatial coordinates and the stress values. Supplementary Fig. 1a shows that the predictions of the GNN model are non-trivial, as the errors of on-the-fly stress computation are much higher than those of the GNN model.
Tensile stress field predictions in porous graphene membranes
Apart from GBs, the vacancy is another common and important class of structural defect in graphene materials. To prove the universality of our approach across different types of structural defects, we further validate the graph model on a porous graphene dataset which again contains 2000 membranes. Each porous graphene membrane is generated in a periodic unit cell with a size of 127.9 Å × 127.8 Å in the x and y directions by randomly removing pairs of carbon and hydrogen atoms from a perfect graphene sheet. The vacancy concentration (C_{v}) of the different porous membranes varies uniformly from 0.0 to 0.1 (Fig. 3a). As C_{v} increases, there are more and more multi-vacancies, such as double or triple vacancies, inside the membrane. Therefore, the curve showing the relation between the single vacancy concentration (C_{sv}) and C_{v} deviates more and more from the diagonal line as C_{v} rises (Fig. 3a).
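This deviation from the diagonal has a simple statistical reading. Under the simplifying assumption (ours, not the paper's counting procedure) that vacancies are placed independently at concentration C_{v}, a vacancy remains "single" only if all of its lattice neighbors are intact:

```python
def single_vacancy_concentration(cv, z=3):
    """Expected single-vacancy concentration C_sv for independent
    random removal at concentration cv on a lattice with coordination
    number z (z = 3 for graphene): a vacancy counts as single only if
    all z neighbors are intact. A simplified model for illustration."""
    return cv * (1.0 - cv) ** z

# at cv = 0.1 on a z = 3 lattice, C_sv ≈ 0.073, i.e. roughly a quarter
# of the removed sites belong to multi-vacancies
csv = single_vacancy_concentration(0.1)
```

The factor (1 - C_{v})^{z} is why C_{sv} bends increasingly below the diagonal as C_{v} grows.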
After obtaining the porous graphene structures, non-equilibrium MD (NEMD) simulations are performed after the equilibration process to stretch each graphene membrane along the x direction to the same tensile strain of 5% (Fig. 3b). The tensile stress fields (\(\sigma _{xx}\)) are sampled after the tension test and used as the node labels for training (Fig. 3c). The input node features of the ML model are the original 2D coordinates of the porous graphene membrane before the tensile test (Fig. 3c). Details of the NEMD simulations, including pre-sampling equilibration and sampling convergence, are also included in Methods: Dataset generation.
With the generated dataset, the crystals are translated into graphs (Methods: Graph representation) and then used to train the GNN. Examples comparing ML predictions with MD results are shown in Fig. 3d. The carbon atoms near the vacancies are colored yellow, whereas those with perfect coordination numbers (=3) are blue. As before, we calculate the normalized relative error for the 400 new graphene membranes in the test set. The mean error is 4.9%, with the highest error at ~10% (Fig. 3e). Regarding the relation between C_{v} and Young’s modulus (E), multiple previous studies have shown that the E of porous graphene decreases linearly with increasing C_{v}^{44,45,51}. Our ML model also reproduces this linear relation and aligns well with the MD results (Fig. 3f). By fitting the data points from the ML predictions, we obtain the mathematical expression of the linear relation: \(E = 823.7 - 5635.6 \times C_{{{\mathrm{v}}}}\). The intercept of the linear curve corresponds to the Young’s modulus of pristine graphene (E_{0}), which is 823.7 GPa in our case (T = 10 K). The value is of the same order of magnitude as the values reported in both experiments^{52} and simulations^{44,45,51}. All the results above demonstrate the high accuracy and physical understanding obtained by our ML model.
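The linear fit itself is an ordinary least-squares regression of E against C_{v}; a sketch (the data points below are synthetic, placed exactly on the reported line for illustration):

```python
import numpy as np

def fit_modulus(cv, E):
    """Least-squares fit of E = E0 + k * C_v; returns (E0, k).
    With the paper's predicted data this yields roughly
    E0 ≈ 823.7 GPa and k ≈ -5635.6 GPa."""
    slope, intercept = np.polyfit(cv, E, 1)
    return intercept, slope

# synthetic points lying exactly on the reported line E = 823.7 - 5635.6 Cv
cv = np.array([0.0, 0.025, 0.05, 0.075, 0.1])
E = 823.7 - 5635.6 * cv
E0, k = fit_modulus(cv, E)
```

The intercept E0 recovers the pristine-graphene modulus and the negative slope k quantifies the softening per unit vacancy concentration.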
Potential energy distribution predictions in polycrystalline aluminum bulks
After implementing our ML model to capture mechanical responses in 2D graphene systems, we shift our attention to the energy profile in 3D bulk systems such as metals. Aluminum, among the most widely and well-studied metals^{53}, is considered here. The GBs inside aluminum crystals affect the energetics of the system, which plays an important role in diverse phenomena such as solidification, crystallization, and melting^{54,55}. The atomic potential energy distribution can not only reflect the positions of dislocations, but also reveal possible entanglements and interactions among them. For instance, dislocations can pile up at the GBs, leading to strengthening mechanisms in polycrystalline metals that can be associated with locally high atomic potential energy. Therefore, it is possible to investigate relevant mechanisms related to dislocation interactions, such as size dependence, if we can train a GNN model to predict atomic potential energies. We here build a dataset which contains thousands of aluminum nanocrystals using the open-source program Atomsk^{56}. The size of the periodic nanocrystal is 50 Å × 50 Å × 50 Å. Four different grain numbers (4, 8, 12, and 16) are used, with the grain size distributions, along with an example of a 16-grain polycrystal, shown in Fig. 4a. We again utilize NEMD simulations to calculate the atomic potential energy (E_{p}) of the polycrystals after heating from 50 to 100 K (Fig. 4b). The temperatures are set at relatively low values to avoid possible grain growth during the heating process, since we focus on the effects of existing structural defects rather than dynamic structural evolution in this work. As a result, the input node features are the coordinates of the aluminum atoms at 50 K, and the output node labels are the corresponding potential energies at 100 K (Fig. 4c). The sampling is performed by averaging the outcomes of multiple separate simulations.
Details of the NEMD simulations, including equilibration, sampling method, and convergence, are discussed in Methods: Dataset generation.
The ML model is trained on graphs constructed from the generated polycrystalline structures (Methods: Graph representation). The predictions of the ML model with respect to the MD ground truth are shown in Fig. 4d. The GBs are visualized based on polyhedral template matching, in which the lattice type of the GBs is labeled as “Other” compared to the majority “FCC” lattice structure in the polycrystalline aluminum. The variation of potential energy correlates strongly with the GBs, as captured by our ML model. In terms of model performance, the mean normalized relative error over all 400 test graphs is about 11.0%, with the highest error at 13.0% (Fig. 4e). The error is somewhat higher than in the two previous cases because aluminum atoms in the lattice have higher coordination numbers (12 in a perfect crystal) than atoms in graphene (3 in a perfect crystal), which increases the complexity of the graph. In addition, the polycrystalline aluminum solids have more dynamic structures which can evolve during heating, as the interatomic interaction is weaker. Since the GBs are less stable than a perfect FCC lattice, the average potential energy is higher when the grain number is larger (Fig. 4f). The ML model’s predictions fit well with this physical intuition and show a high correlation (R^{2} = 0.99) with the ground truth from the MD results (Fig. 4f).
Stress evolution predictions with input loading conditions
The three examples above show how general and powerful the proposed approach is in building the linkage between defective structures and atomic properties. However, in those cases the input node features contain only spatial information about the crystals. In this section, besides the atomic coordinates, we also include the boundary conditions of the whole crystal in the input node features. More specifically, we randomly collect 400 of the 2000 crystals generated for the porous graphene dataset and run NEMD simulations to perform the tensile test at ten different strains varying uniformly from 0.5 to 5%. The tensile strain is appended to the 2D coordinates in the features of each node. Consequently, there are 4000 graphs in total, which are further split into train, test, and validation sets. More details are included in Methods: Dataset generation.
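Concretely, the feature construction amounts to broadcasting the global strain onto every node as one extra scalar column; a minimal sketch (our illustration, assuming 2D coordinates):

```python
import numpy as np

def add_strain_feature(coords, strain):
    """Append the global tensile strain as an extra scalar feature on
    every node, so that one trained model can condition its prediction
    on the boundary condition. Sketch of the feature construction."""
    coords = np.asarray(coords, dtype=float)
    col = np.full((coords.shape[0], 1), float(strain))
    return np.hstack([coords, col])

# two carbon atoms at 3% tensile strain: each node becomes (x, y, strain)
feats = add_strain_feature([[0.0, 0.0], [1.42, 0.0]], 0.03)
```

The same idea extends to other environmental variables such as temperature or pressure, each adding one more column to the node features.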
Once training is complete, we can use the model to predict the stress evolution of porous graphene crystals during the tensile test (Fig. 5a). The sequence of images in Fig. 5a reveals how the tensile stress gradually builds up during the tension and the consistency between predictions and ground truth. Furthermore, the stress–strain curves are also precisely reproduced by our ML model (Fig. 5b), showing the model’s capacity to make predictions based on given boundary conditions. Notably, the tensile stress of the full crystal does not necessarily increase linearly with the tensile strain (“Data 1” of Fig. 5b). Therefore, the model is not performing simple multiplications using the input strain values; instead, it is learning the nonlinearity associated with different loading conditions. In order to plot the complete stress–strain curve, the data in the train set are also used for predictions. The fact that the model achieves high accuracy on both the known data in the train set and the new data in the test and validation sets indicates no signs of overfitting. The overall strong performance of our model on three different crystals with low to high C_{v} further demonstrates the robustness of the approach.
Apart from tensile strain, environmental variables such as temperature or pressure can also be included in the node features that would enable the GNN model to make predictions based on varying conditions. To extrapolate the predictions to unseen conditions, transfer learning is a useful technique that only requires a small amount of data under these new conditions to adapt the model.
Design of holey graphene membranes with lowstress concentration
The discussion up to this point has focused on using our model to accelerate the calculation of atomic properties. With this acceleration, we can also leverage the model for design problems. For instance, porous graphene with designed holes has been widely studied, as the structure can serve multiple purposes such as selective filtering of liquids^{4} or gases^{57} and DNA sensing^{58}. At the same time, mechanical behaviors such as stress concentration, which can lead to catastrophic fracture, are essential to the functional integrity of holey graphene. Therefore, in this part, we first exhibit an example of combining our ML model with an optimization algorithm to design holey graphene that lowers the stress concentration when the sheet is stretched (Fig. 6a).
First, the design space is defined so that there are 30 available hole sites in a 128 Å × 128 Å graphene sheet (Fig. 6a). The 30 hole sites follow a hexagonal pattern which has been used for designing porous graphene with tunable thermal conductivity^{29}. We select 13 holes out of the 30 sites to fix the density of vacancies at 0.05. The allowed structures are constrained to be symmetric about the y axis, as symmetric designs are more likely to achieve high mechanical performance^{36,59}. Given this setup, there are 12,177 possible combinations in the design space; more details are included in Methods: Holey graphene design. To train the model, we randomly pick 800 of the 12,177 combinations and perform NEMD simulations to collect the labeled data, as we do for the porous graphene dataset. With the trained model (Methods: Model training and evaluation), we can predict the stress concentration of each design by calculating the mean of the highest 300 stress values in the holey graphene sheet (Fig. 6a).
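The screening objective reduces to a top-k mean over the predicted per-atom stress field; a short sketch of that reduction:

```python
import numpy as np

def stress_concentration(atomic_stress, k=300):
    """Design objective: the mean of the k highest per-atom stress
    values in a predicted field (k = 300 for the holey-graphene
    design in this work); lower is better."""
    s = np.sort(np.asarray(atomic_stress, dtype=float))
    return s[-k:].mean()

# toy field of four atoms, k = 2: average of the two largest values
print(stress_concentration([1.0, 5.0, 3.0, 2.0], k=2))  # → 4.0
```

Using a top-k mean rather than the single maximum makes the objective less sensitive to one noisy atom while still penalizing concentrated stress.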
We first use the trained model to brute-force search for the top designs across the whole design space. The top two designs are exhibited in Fig. 6b, together with the bottom two designs for comparison. As the figure displays, the top designs share arrow-shaped patterns which mainly extend in the x direction. By contrast, the bottom designs contain holes which span the y direction, forming two straight lines. MD simulations are performed to validate the designs. The stress fields from the MD simulations clearly reveal much higher stress concentration in the bottom designs than in the top designs (Fig. 6b). From a theoretical perspective, the holey graphene sheet can be regarded as a composite in which the holes are treated as a soft material while the intact parts of the sheet are a brittle material^{59}. In this context, the design problem can be analyzed using the composite theory known as the “rule of mixtures”^{60}. The bottom designs correspond to the isostress situation, which leads to the lower bound of E, while the top designs are similar to the isostrain situation, in which the upper bound is realized. In all, both the simulation results and the theory validate the designs found by our ML model.
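The two bounds invoked here are the classical Voigt (isostrain) and Reuss (isostress) limits of the rule of mixtures; a short sketch (the moduli below are illustrative placeholders, not fitted values):

```python
def rule_of_mixtures(E1, E2, f1):
    """Voigt (isostrain) and Reuss (isostress) bounds on the effective
    modulus of a two-phase composite; f1 is the volume fraction of
    phase 1. For holey graphene the 'soft' hole phase has a modulus
    near zero, so the two bounds are far apart."""
    f2 = 1.0 - f1
    voigt = f1 * E1 + f2 * E2          # upper bound (isostrain)
    reuss = 1.0 / (f1 / E1 + f2 / E2)  # lower bound (isostress)
    return voigt, reuss

# illustrative: 95% stiff phase (823.7 GPa) with a 5% soft phase (10 GPa)
v, r = rule_of_mixtures(823.7, 10.0, 0.95)
```

Holes stacked perpendicular to the load (the bottom designs) approach the Reuss limit, while holes aligned with the load (the top designs) approach the Voigt limit, which is why the two hole layouts behave so differently.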
Furthermore, we utilize the GNN model as a predictor and combine it with an optimizer, a genetic algorithm (GA; details of the implementation are included in Methods: Holey graphene design), to further improve the efficiency of searching for optimal designs (Fig. 6a). The atomic structures of the holey graphene sheets are represented by binary arrays in which “1” indicates a hole at the corresponding site and “0” an intact part. The optimization algorithm takes information derived from the predicted stress fields as the objective function and optimizes the positions of the holes. With 20 candidates in the population, the GA converges quickly, at ~40 generations (Fig. 6c). Images of the optimal structures corresponding to each plateau during the optimization process are provided along the optimization curve (Fig. 6c). Compared to a brute-force search, which involves more than 10,000 calculations of the stress concentration, the GA uses fewer than 800 calculations (including repeated calculations for identical chromosomes), which further accelerates the design process by an order of magnitude. This example reveals how our GNN model, combined with optimization algorithms, can vastly improve the efficiency of designing atomic structures with target global properties.
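The GA loop can be sketched in a few lines (our minimal illustration under stated assumptions: selection plus hole-swapping mutation only, no crossover, and a toy surrogate objective standing in for the GNN-predicted stress concentration; the paper's implementation is described in its Methods):

```python
import random

def genetic_search(objective, n_sites=30, n_holes=13, pop=20,
                   generations=40, seed=0):
    """Minimal GA over hole layouts: each chromosome is a binary tuple
    with exactly n_holes ones, scored by `objective` (lower is better).
    Elitist selection keeps the best half; mutation moves one hole."""
    rng = random.Random(seed)

    def random_chrom():
        sites = rng.sample(range(n_sites), n_holes)
        return tuple(1 if i in sites else 0 for i in range(n_sites))

    def mutate(chrom):
        ones = [i for i, b in enumerate(chrom) if b]
        zeros = [i for i, b in enumerate(chrom) if not b]
        i, j = rng.choice(ones), rng.choice(zeros)
        c = list(chrom)
        c[i], c[j] = 0, 1  # relocate one hole to an empty site
        return tuple(c)

    population = [random_chrom() for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=objective)          # rank by predicted score
        survivors = population[: pop // 2]      # elitist selection
        population = survivors + [mutate(rng.choice(survivors))
                                  for _ in range(pop - len(survivors))]
    return min(population, key=objective)

# toy surrogate objective: prefer holes placed at low-index sites
best = genetic_search(lambda c: sum(i for i, b in enumerate(c) if b))
```

In the real pipeline `objective` would call the trained GNN on the structure encoded by the chromosome and return its stress concentration value.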
Design of holey graphene membranes with local stress patterns
Besides global properties, our model can also be utilized to design local stress patterns from atomic property predictions. In Supplementary Fig. 2, we show examples of maximizing the local stress within certain regions. High stress concentration is generally unwanted in mechanical design problems. However, containing high stress within certain regions can be useful for protecting other important parts, as in the stress-shielding effect. In addition, a designed high-stress region can be utilized to manipulate cracking patterns or to invoke other responses for which high stress is needed. In this context, we consider maximizing the stress in both vertical and horizontal diamond-shaped strips (geometric details are included in Methods: Holey graphene design) of the holey graphene sheets shown in Supplementary Fig. 2a, b. We utilize the model trained for global property optimization to examine all the structures that satisfy the same symmetry as the strip regions. Consequently, for the vertical diamond strips, the optimal design with the highest stress in the target region contains holes at both ends of each strip (Supplementary Fig. 2a). For the horizontal case, however, the highest stress is obtained when the target region is intact (Supplementary Fig. 2b), with all the holes distributed outside it instead. Interestingly, the design strategies for vertical and horizontal strips are completely opposite: vertical strips tend to be holey while horizontal strips tend to be intact to gain high stress.
The difference can also be explained by the rule of mixtures. When the target region lies more along the horizontal direction (the angle between the strip and the horizontal direction is around 30°), the holes, which can be treated as a soft material, lead to lower stress in the intact part, which can be regarded as a brittle material, because the two materials are in the isostress situation. By contrast, when the target strips lie more along the vertical direction (the angle between the strip and the horizontal direction is around 60°), the holes generally lead to higher stress in the intact part, because the two materials are in the isostrain situation.
To further validate the optimization results, we compare the optimal designs with the opposite designs based on the ground truth from the NEMD results. The opposite designs have complementary positions of holes relative to the optimal designs (the intact parts become holey while the holey parts become intact). Given the symmetry, we only plot the stress along the strips in one of the diamonds (Supplementary Fig. 2c, d). As the stress distributions along the strips suggest, when the strips are more vertical, the holey design achieves higher stress than the intact design except in near-hole regions; on the contrary, when the strips are more horizontal, the intact design achieves higher stress than the holey design. The complete, quantitative stress distributions further support the design choices of our model. This example of designing local stress patterns, along with the earlier example of searching for designs with globally lower stress concentration, shows the versatility and power of our ML model in realizing specific design purposes.
Discussion
We here propose a DL-based approach to provide a direct linkage between defective crystalline structures and atomic properties, achieved using a GNN model. The model is trained with a small amount of data (~thousands of crystals) but achieves high accuracy in predicting different atomic properties in different crystalline systems with different types of structural defects. Derived from the atomic properties, global properties such as Young’s modulus and the total potential energy are precisely captured by our model, obeying well-known physical laws. Furthermore, we also show the model’s applications in predicting the evolution of atomic properties and designing defective structures with target mechanical performance. Predictions using our trained model take only seconds, a dramatic acceleration compared to conventional atomistic simulations, which can last hours, days, or even months. As shown in Supplementary Fig. 7a, the ML model accelerates the prediction by hundreds to thousands of times with fewer computational resources than MD simulations. More specifically, in our case, the calculations of atomic properties take seconds using the ML model on a single CPU, while the MD simulations run for thousands of seconds on 24 CPUs in parallel. Although this computational cost does not include the time for generating the dataset and training the model, the ML models still save a great amount of time once trained, given the large structural space of the systems we investigate. For example, the porous graphene membranes admit more than 10^{600} possible structures with different distributions of vacancies, and there are an infinite number of crystals for both the polycrystalline graphene and aluminum cases. By contrast, the GNN model is trained with only 1200 random data points yet generalizes to this intractable number of structures.
The gain in speed with little loss of accuracy enables our approach to serve as an alternative to expensive MD simulations for investigating structural effects on properties. The systems we investigate in this work are mostly well studied, as we attempt to benchmark our model and prove the concept; therefore, no new physics is discovered here. However, we show that the approach can address new design problems, such as designing porous graphene membranes with low stress concentration, which is intractable using brute-force MD simulations. Given the atomic property predictions, local stress patterns can be further tuned. The approach proposed here can be extended to crystalline solids that are not well understood in order to discover new phenomena by quickly searching across the structural space, for example examining the effects of GB arrangements and establishing the relation between performance and C_{v} for different vacancy distributions.
Predicting the full distribution of atomic properties, rather than a single global property, brings substantial benefits. Above all, the full distribution contains local information, such as high-potential-energy regions and stress concentration patterns, that global properties cannot reveal. For instance, high-potential-energy regions are less stable and are essential to the dynamics of the whole structure, while stress concentration can lead to catastrophic events such as fracture, causing dramatic degradation of material performance. We have shown one example of designing local stress patterns in this work. Beyond that example, other local properties related to field patterns, such as the gradient and variation of stress, can also be investigated with our model, addressing design problems such as engineering strain gradients or uniform stress fields as in the Eshelby inclusion problem. The main contribution is an accelerated alternative to expensive atomistic simulations for building efficient linkages between structural defects and atomic properties. Based on the accelerated predictions, the GNN-based approach combined with suitable optimization algorithms enables us to explore huge design spaces that are intractable for MD simulations. The design candidates proposed by the combined framework can exhibit interesting new mechanisms, sometimes referred to as "new physics", born out of the predictive power of atomistic simulations. One example shown in this work is that, with the GNN-based approach, atom-level porous graphene membranes with low stress concentration can be designed following a composite design principle known from the continuum scale.
Moreover, multiple global properties can be derived from the full distribution in one shot, as the field contains diverse information about a structure. For instance, the average stress reflects the modulus of the structure while the highest stress indicates the cracking tendency. Therefore, the model can be utilized to design multifunctional structures or to resolve trade-offs in materials performance. Finally, with a model that can predict the full evolution of atomic properties, we can also predict multiple history- or path-dependent properties at the same time, such as residual stress and elastic hysteresis over different cycles.
However, several limitations can be addressed in future work. We propose three possible directions: (1) One of the most important extensions of the proposed GNN-based approach is to predict dramatic structural evolutions or dynamical phenomena of crystalline solids. The current framework focuses on static loading conditions and aims at linking the spatial distribution of existing structural defects to atomic properties. However, the dynamic behaviors of defects are equally important and are essential to liquid and gas systems or high-temperature situations. To account for large structural evolutions, sequential DL models, including diverse recurrent neural networks (RNNs), can be combined with our GNN model to encode temporal information. (2) The proposed model is mainly tested on crystalline systems with a single type of element and bond. A next step would be to validate the model and evaluate its accuracy on multi-element systems. In terms of data representation for multi-element crystals, the element type can be included as an additional dimension of the node feature vectors, and different bonds can be discriminated using edge attributes in the graph representation. (3) For certain types of structural defects such as dislocations, the current graph representation with fully atomistic descriptors might not be the most efficient way to represent the crystals. Instead, we can treat each dislocation as a node in the graph, with node features being dislocation properties such as the Burgers vector, and determine the connectivity based on the slip planes. With this new representation, the graph is simplified without including every atom in the crystal, lowering the computational cost of investigating large crystals.
In conclusion, the approach we propose shows high accuracy, generality, and diversity in translating between structures and properties on the atomic scale. Compared to the image-based approach to continuum-level geometry-to-field translation^{38}, the graph-based approach better handles non-Euclidean representations, which cover diverse structures from the microscale, such as crystals and molecules, to the macroscale, including social networks and structural buildings. The idea presented here can also be applied to other problems in science and engineering, such as magnetic fields of spin systems, electron densities in molecules, and mechanical states of architected structures.
Methods
Dataset generation
All datasets with MD labels are generated using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)^{61} and visualized using OVITO's python interface^{49}. The AIREBO empirical interatomic potential^{62,63} is used for all calculations involving graphene sheets, and an embedded atom method (EAM) potential^{64} is used for bulk aluminum. Below we discuss in detail how each dataset is generated.
For the polycrystalline graphene dataset, 2000 random polycrystals of 2D graphene, each in a fixed unit cell (128 Å × 128 Å × 10 Å, periodic along the x and y directions), are generated using an algorithm developed for investigating the mechanical behaviors of well-annealed polycrystalline graphene^{41}. Four grain numbers (4, 8, 12, and 16) are investigated, with 500 samples each. The unit cell is periodic in all three dimensions, but the graphene sheet connects across the periodic boundaries only in the x and y directions, while the dimension in the z direction is set large enough to avoid interactions of atoms across the boundary. The distribution of grain sizes in the polycrystals with different grain numbers is shown by the violin plot in Fig. 2a. During the MD simulations, energy minimization is first performed using a conjugate gradient algorithm to remove unreasonably high-energy configurations. The system is then equilibrated in an NVT ensemble (T = 300 K) using the Langevin thermostat for 20 ps. After equilibration, the ensemble averages of atomic coordinates and stresses are sampled over 40 ps (2000 frames). The crystals are well equilibrated during sampling, as shown in Supplementary Fig. 3a, where four random crystals with different grain numbers are examined. We further show the distribution of the target atomic properties, the von Mises stresses, after collecting the labels from the MD simulations (Supplementary Fig. 4a).
The second dataset is the porous graphene dataset, which contains 2000 graphene sheets with C_{v} varying uniformly from 0.0 to 0.1 (Fig. 3a). The vacancies are generated by randomly deleting carbon atoms from a perfect 2D graphene membrane according to C_{v}. Multi-vacancies are vacancy clusters with multiple missing atoms next to each other. We count the number of single vacancies by checking the environment of each missing atom: if no other atoms are missing among its nearest neighbors, the vacancy is a single vacancy. The size of the periodic simulation cell is 127.9 Å × 127.8 Å (the length of the unit cell in the z direction does not matter here, as there is no wrinkling out of plane). The porous graphene sheets have zigzag chirality along the x direction and armchair chirality along the y direction (Fig. 3a). Before the tensile test, energy minimization is carried out using a conjugate gradient algorithm. The porous graphene membranes are then equilibrated in the NVT ensemble (Langevin thermostat), followed by two runs in NPT ensembles (Berendsen barostat) with different thermostats: the first run uses the Langevin thermostat and the second the Berendsen thermostat. The relaxation in each ensemble lasts 10 ps, and the temperature is set to 10 K in all ensembles. The first NPT run quickly brings the pressure of the system to zero, and the second better equilibrates the temperature. After equilibration, NEMD simulations stretch the graphene membranes along the x direction by changing the volume of the simulation box. The strain rate of the tensile test is 0.2 ns^{−1} and the magnitude of the tensile strain is 5%. Once the tensile strain reaches 5%, we hold the strain and sample the atomic stresses over 200 ps (2000 frames) in the NVT ensemble using the Langevin thermostat.
The evidence of sampling convergence is given in Supplementary Fig. 3b with four randomly picked data. The distribution of target tensile stress in all crystals obtained from sampling is manifested in Supplementary Fig. 4b.
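The single-vacancy check described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: a missing site counts as a single vacancy when none of its nearest lattice neighbors is also missing; the site ids and neighbor lists are assumed inputs.

```python
# Hedged sketch: classify vacancies as single or multi. A vacancy is "single"
# if none of its nearest lattice neighbors is also missing; otherwise it
# belongs to a multi-vacancy cluster.

def count_single_vacancies(missing_sites, neighbors):
    """missing_sites: set of removed site ids.
    neighbors: dict mapping site id -> iterable of nearest-neighbor ids."""
    singles = 0
    for site in missing_sites:
        if not any(n in missing_sites for n in neighbors[site]):
            singles += 1
    return singles

# toy example: a 1D chain 0-1-2-3-4; removing sites 1 and 2 forms one
# multi-vacancy, while removing site 4 alone leaves a single vacancy
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(count_single_vacancies({1, 2, 4}, nbrs))  # 1
```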
Finally, we develop the polycrystalline aluminum dataset, which contains 500 FCC polycrystals of bulk aluminum for each grain number (4, 8, 12, and 16), i.e., 2000 samples in total. The random structures in a periodic 50 Å × 50 Å × 50 Å simulation box are generated using Atomsk^{56}, which constructs polycrystals by Voronoi tessellation. The grain size distribution is displayed in Fig. 4a. The generated polycrystals are first relaxed by energy minimization with a conjugate gradient algorithm and then equilibrated at 50 K in the NVT ensemble (Nosé-Hoover thermostat) for 50 ps. After relaxation and equilibration, the simulation box is heated from 50 to 100 K in 50 ps, at a heating rate of 1 K/ps. The potential energy distribution is collected from the frame right after heating. We do not sample as in the porous graphene dataset, and instead perform NEMD at low temperatures, to avoid any possibility of grain growth, which can cause dramatic structural variations beyond our interest: grain growth merges different grains into one consisting of a perfect FCC lattice, making the prediction task trivial as all atoms become identical. In addition, at lower temperatures the thermal fluctuations are negligible, so fewer independent simulations are required to obtain converged data. To obtain a converged potential energy distribution for each polycrystal, we run 40 separate simulations with different velocity initializations and average the atomic potential energies. The sampling is converged, as shown in Supplementary Fig. 3c. The distribution of atomic potential energies contains two peaks (Supplementary Fig. 4c), corresponding to FCC lattices (the sharp peak at lower potential energy) and GBs (the broad peak at higher potential energy), respectively.
The loading condition dataset and the holey graphene dataset are built on the porous graphene dataset. The 400 porous graphene membranes in the loading condition dataset are randomly picked from the porous graphene dataset. These 400 crystals are put under tensile tests at ten tensile strains varying uniformly from 0.05 to 0.5, yielding 4000 data in total. The holey graphene dataset investigates the collective behavior of vacancies grouped as holes distributed over available sites. The detailed geometry of an individual hole and the design space are explained in Methods: Holey graphene design. For both datasets, the setup of the NEMD simulations performed to obtain the stress fields is the same as for the porous graphene dataset (except for the different tensile strains in the loading condition dataset).
Graph representation
The results from MD simulations, describing the crystal structures and atomic properties, are represented by graphs used as inputs and labels to train the GNN. Each atom is a node in the graph, and the connectivity between nodes is determined by the distance between atoms. In all graphene-related datasets, two nodes are connected by an edge if the distance between the corresponding carbon atoms is smaller than a cutoff distance set to 1.92 Å^{41}. For the polycrystalline aluminum dataset, the cutoff distance is 2.86 Å, the nearest-neighbor distance in a perfect FCC aluminum crystal. Edges encode connectivity only, without edge attributes, as all the systems we investigate contain a single element. The node features contain the spatial information of the atoms, and the node labels are the target atomic properties. For all datasets, the numbers of nodes and edges vary from crystal to crystal. The graphs representing polycrystalline graphene sheets contain ~6600 nodes and ~9900 edges. The graph representations of porous graphene membranes have around 5900 nodes and 8850 edges. For the polycrystalline aluminum dataset, the node and edge numbers are about 7500 and 45,000, respectively.
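The cutoff-based graph construction can be sketched as follows. This is a hedged illustration, not the authors' pipeline: it uses a brute-force O(N^2) pairwise search for clarity (a cell list or k-d tree would be used for the ~6000-atom crystals), and periodic boundaries are ignored for brevity.

```python
# Hedged sketch of the graph representation: atoms become nodes and an edge
# links two nodes whose distance falls below a cutoff (1.92 A for graphene,
# 2.86 A for FCC aluminum). Periodic images are not handled here.
import math

def build_graph(positions, cutoff):
    """positions: list of (x, y, z); returns undirected edges (i, j), i < j."""
    edges = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < cutoff:
                edges.append((i, j))
    return edges

# three carbon atoms: two bonded (1.42 A apart), one far away
pos = [(0.0, 0.0, 0.0), (1.42, 0.0, 0.0), (5.0, 0.0, 0.0)]
print(build_graph(pos, cutoff=1.92))  # [(0, 1)]
```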
Graph neural network
The GNN model we use in this work is based on an architecture known as principal neighborhood aggregation (PNA)^{40}, which achieves strong performance on graph regression and classification tasks. The PNA model combines multiple aggregators, which determine how messages between nodes are passed, with degree scalers, which generalize the sum aggregator. There are four aggregators in the architecture, namely "mean", "maximum", "minimum", and "standard deviation". The aggregators collect the messages from neighboring nodes and apply the mathematical operation their name indicates; each node is then updated based on the received messages. The degree scalers allow the network to amplify or attenuate signals based on the degree of each node. We use three types of degree scalers in our model, "identity", "amplification", and "attenuation", which function as their names indicate. Combining the two strategies, the PNA approach improves the performance of GNNs. To perform supervised node regression, in which labels for each node rather than a label for the whole graph are predicted, we remove the final pooling layer of the original model and train with a loss function summing the mean squared error over all nodes. All ML calculations are performed using PyTorch^{65} and PyTorch Geometric^{66}.
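The combination of aggregators and degree scalers can be made concrete with a toy, scalar-message version of one PNA aggregation step. This is a hedged sketch, not the paper's implementation (which uses PyTorch Geometric's PNAConv on feature vectors): the scaler form (log(d+1)/delta)^alpha with alpha = 0, 1, -1 follows the PNA paper, and delta is the average of log(d+1) over the training graphs.

```python
# Hedged sketch of one PNA-style aggregation step on scalar messages:
# four aggregators (mean, max, min, std) condense the neighbor messages,
# and three degree scalers (identity, amplification, attenuation) rescale
# them by (log(d+1)/delta)^alpha with alpha = 0, 1, -1.
import math

def pna_aggregate(messages, delta):
    """messages: scalar messages from a node's d neighbors.
    delta: average of log(d+1) over the training graphs (normalization)."""
    d = len(messages)
    mean = sum(messages) / d
    std = math.sqrt(sum((m - mean) ** 2 for m in messages) / d)
    aggregated = [mean, max(messages), min(messages), std]
    out = []
    for alpha in (0, 1, -1):  # identity, amplification, attenuation
        scale = (math.log(d + 1) / delta) ** alpha
        out.extend(scale * a for a in aggregated)
    return out  # 4 aggregators x 3 scalers = 12 values per node

feats = pna_aggregate([1.0, 2.0, 3.0], delta=math.log(4))
print(len(feats))  # 12
```

In the full model these 12 combinations are concatenated per feature channel and passed through a learned transformation before the node update.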
The model architecture is shown in Supplementary Fig. 5. First, the input graph is sent to the input block, which uses a combination of a PNA convolutional (PNAConv)^{66} layer, a Gated Recurrent Unit cell (GRUCell)^{65}, and a Batch Normalization layer (BatchNorm)^{66} to upscale the dimension of the node features (the output dimension is called the hidden dimension). The graph is then passed to the message passing block, which contains N repetitions of the combined layers (N determines the complexity of the model). Within this block, nodes communicate by passing messages given the node features and connectivity, and update their own features from the received messages. Finally, a readout block consisting of a single PNAConv layer downscales the hidden node features to the target node labels. In terms of hyperparameters, we fix the number of "towers"^{66} to 1 for the PNAConv layers in the input and readout blocks and to 5 for the message passing block across all datasets. The numbers of transformation layers before and after the aggregation are set to 1 in all models. Other hyperparameters of the model architectures, which vary from case to case, are listed in Supplementary Table 1. The hyperparameters are adjusted based on the loss on the validation set. The hidden dimension is either 25 or 50, and N varies from 5 to 13 in steps of 2.
Model training and evaluation
All datasets are split into a training set (70% of the data), test set (20%), and validation set (10%). We train the models for 500 epochs on one or two NVIDIA Tesla V100 GPUs, each with 32 GB of memory.
The learning curves for the three datasets are shown in Supplementary Fig. 6, indicating the convergence of training in all three cases. The loss function of our supervised node regression tasks is defined using the mean squared error:

\(L_{\mathrm{MSE}} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i^{\mathrm{ML}} - y_i^{\mathrm{MD}}\right)^2\)
where \(L_{\mathrm{MSE}}\) is the mean squared error loss, i is the index of each node (atom), \(y_i^{\mathrm{ML}}\) is the node label predicted by the ML model, and \(y_i^{\mathrm{MD}}\) is the node label from the MD simulations. The same loss is used in training on the holey graphene dataset, in which all structures have the same C_{v}. To evaluate model performance, we calculate the normalized relative error over the graphs in the test set. The term "normalized" refers to scaling all node label values to between 0 and 1. We normalize because, in cases such as the polycrystalline aluminum dataset, the potential energies vary within a small range compared to their absolute values, which leads to an intrinsically low relative error. The normalized error therefore better reflects the model's performance and sets a common standard across datasets. The mathematical expression of the normalized error for one graph is written as follows:
where \(RE_{\mathrm{norm}}\) is the normalized relative error and N is the number of nodes in the graph. \(y_{\mathrm{norm},i}^{\mathrm{ML}}\) and \(y_{\mathrm{norm},i}^{\mathrm{MD}}\) are the normalized values of \(y_i^{\mathrm{ML}}\) and \(y_i^{\mathrm{MD}}\), respectively.
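The normalization and per-graph averaging can be sketched as below. This is a hedged illustration: the min-max scaling to [0, 1] follows the text, but the mean absolute deviation used as the per-node error is an illustrative assumption, standing in for the exact relative-error expression defined in the equation above.

```python
# Hedged sketch of the evaluation metric: node labels are min-max scaled to
# [0, 1] per graph, then a per-graph error is averaged over the nodes. The
# mean absolute deviation here is an illustrative stand-in for the paper's
# exact relative-error formula.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def normalized_error(y_ml, y_md):
    """Average per-node deviation between normalized ML and MD labels."""
    yn_ml = min_max_normalize(y_ml)
    yn_md = min_max_normalize(y_md)
    return sum(abs(a - b) for a, b in zip(yn_ml, yn_md)) / len(y_md)

# toy labels: predictions differ from ground truth only at the middle atom
err = normalized_error([0.0, 6.0, 10.0], [0.0, 5.0, 10.0])
print(round(err, 4))  # 0.0333
```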
Model comparison
To validate the selection of the PNA architecture, we compare its performance with other commonly used GNN architectures, including the graph convolutional network (GCN)^{67}, graph attention network (GAT)^{68}, and message passing neural network (MPNN)^{69}. We implement two different MPNNs whose aggregators are "sum" and "max", respectively^{40} (denoted "MPNN (sum)" and "MPNN (max)"). The comparison is performed on the polycrystalline graphene dataset by collecting the distribution of mean squared errors (MSE) on the test set. The same model architecture as shown in Supplementary Fig. 5 (only the convolutional layer differs) is used for a fair comparison. To avoid overfitting, additional Dropout layers^{66} are attached in the GAT and MPNN models. In terms of hyperparameters, we keep the layer depth ("N" in Supplementary Fig. 5) constant at 5 in all models and set the hidden dimension to 50. Although the same choice of hyperparameters leads to different numbers of weights in different models, we verify that the number of trainable weights is not a major factor in model performance by varying the hidden dimensions. All models are trained for 500 epochs to reach convergence. The distributions of MSE for all five models are visualized as box plots in Supplementary Fig. 7. As the figure indicates, the PNA model reaches the highest accuracy, showing the lowest mean and variation of MSE compared to the GCN, GAT, MPNN (sum), and MPNN (max) models.
Significance illustration
To demonstrate the significance of this work, we have shown that, for the polycrystalline graphene dataset, the GNN model links the ensemble average of atomic structures to the stress field, which cannot be realized by the simple on-the-fly stress calculation of MD simulations. In addition, in the porous graphene and polycrystalline aluminum datasets, the atomic properties are calculated under certain boundary conditions, such as tensile loading and temperature change, and the GNN model predicts the final atomic property distributions from the initial equilibrium structures. To examine whether the final atomic properties can be simply derived from the initial property distribution, we perform a linear fitting of the labels given the initial atomic properties, where MD simulations are used to calculate the initial atomic properties from the initial atomic structures. The linear fitting essentially adds a constant value to the property of each atom; for instance, for porous graphene, the increase of tensile stress for each atom is the product of Young's modulus and the tensile strain. As Supplementary Fig. 1b, c show, for both the porous graphene and polycrystalline aluminum cases, the errors of the linear fitting are much higher than those of the GNN predictions. The relations between initial atomic structures and final property distributions are therefore nonlinear, and the prediction tasks are nontrivial even with small structural evolutions.
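The linear-fitting baseline amounts to a constant shift of every atom's property. A minimal sketch, with illustrative values of Young's modulus and strain (not taken from the paper):

```python
# Hedged sketch of the linear baseline: the final per-atom stress is
# approximated by shifting the initial stress by a constant E * strain.
# The modulus and strain values below are illustrative only.
def linear_baseline(initial_stresses, youngs_modulus, strain):
    shift = youngs_modulus * strain
    return [s + shift for s in initial_stresses]

approx = linear_baseline([0.1, 0.2], youngs_modulus=1000.0, strain=0.05)
print(approx)
```

Because this baseline ignores how stress redistributes around defects, its error relative to the MD labels measures the nonlinearity the GNN must capture.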
The major goal of the proposed approach is to accelerate predictions of atomic properties that are influenced by the distributions and structures of defects. The accuracy of the predictions is bounded by the training data calculated with MD simulations, with no new physics beyond. However, the accelerated toolbox presented here makes it feasible to search for promising designs among a massive set of candidates, thereby proposing interesting designs and associated mechanisms.
Holey graphene design
In the holey graphene design, each hole contains 24 vacancies. The geometry of the hole follows a previous work on designing holey graphene with tunable thermal conductivity using ML^{29}. The holes are distributed in a hexagonal pattern in which the distance between holes along the y direction is 21.3 Å, i.e., 15 times the bond length. The hole sites are periodic along the y direction and symmetric about the y axis, as designs of interest with low stress concentration are more likely to be symmetric given that the loading is along the x direction. As a consequence, there are 5 × 6 holes (x × y) in the sheet, giving 30 available sites in total. For a pair of designs that are symmetric about the x axis, the mechanical response is the same, so only one of them is considered. For minimizing the overall stress concentration, we require exactly 13 of the 30 available hole sites to be selected, to avoid the effect of density variation; the C_{v} of all possible designs is thus fixed at 0.05. Given all the constraints above, the number of possible combinations in the design space is 12,177. For designing local stress patterns, we only consider structures that follow the same geometry as the diamond-shaped strips and allow C_{v} to vary from 0.0154 (four holes) to 0.0769 (20 holes), resulting in 838 possible combinations in the design space. In the design problems, all the ground-truth stress fields used for validation are calculated with NEMD simulations.
We implement the GA using the python package Pymoo^{70}, a framework offering state-of-the-art single- and multi-objective optimization algorithms. The GA is a search process that imitates natural selection during evolution, involving operations such as sampling, selection, crossover, and mutation. In our case, we utilize the random sampling, simulated binary crossover, and polynomial mutation provided by the package^{70}. The variables in the design space are either 0 or 1, and the population size is set to 20. To fix C_{v} in the design, we add a large penalty to the objective function if the number of holes differs from 13. To quantify the uncertainty of the optimization process, we initialize the GA with 1000 different random seeds and collect the optimal objective after each optimization as well as the number of calculations needed for convergence. Supplementary Fig. 8 shows the distributions of the optimal objectives and the numbers of calculations over the different initializations. As Supplementary Fig. 8a indicates, in most cases the GA locates the global minimum given our GA parameters. For all 1000 initializations, the optimal objectives are much lower than the mean stress concentration over all possible geometries. Furthermore, the number of calculations (the number of geometries whose property is evaluated before convergence) is generally lower than the total number of possible geometries by more than an order of magnitude, which confirms the low computational cost of the GA.
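The GA loop with the hole-count penalty can be sketched as follows. This is a hedged, minimal illustration, not the Pymoo setup used in the paper (which uses simulated binary crossover and polynomial mutation): here a toy crowding score stands in for the GNN-predicted stress concentration, and the operators are the simplest binary variants.

```python
# Hedged toy GA over 0/1 hole-site vectors: a large penalty enforces exactly
# 13 of 30 holes, mirroring the constraint in the text. The objective is a
# stand-in surrogate, NOT the GNN-predicted stress concentration.
import random

N_SITES, N_HOLES, POP, GENS = 30, 13, 20, 50

def objective(x):
    penalty = 1e6 if sum(x) != N_HOLES else 0.0
    # surrogate fitness: penalize adjacent occupied sites (crude crowding)
    crowding = sum(x[i] * x[i + 1] for i in range(N_SITES - 1))
    return crowding + penalty

def random_design():
    x = [0] * N_SITES
    for i in random.sample(range(N_SITES), N_HOLES):
        x[i] = 1
    return x

random.seed(0)
pop = [random_design() for _ in range(POP)]
for _ in range(GENS):
    p1 = min(random.sample(pop, 2), key=objective)  # tournament selection
    p2 = min(random.sample(pop, 2), key=objective)
    cut = random.randrange(1, N_SITES)
    child = p1[:cut] + p2[cut:]                     # one-point crossover
    i = random.randrange(N_SITES)
    child[i] ^= 1                                   # bit-flip mutation
    worst = max(range(POP), key=lambda k: objective(pop[k]))
    if objective(child) <= objective(pop[worst]):   # steady-state replacement
        pop[worst] = child

best = min(pop, key=objective)
print(sum(best), objective(best) < 1e6)  # feasible best design
```

The penalty keeps infeasible children from displacing feasible designs, so the final population respects the fixed C_{v} constraint.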
For designing local stress patterns, we select tilted strips with a width equal to the hole diameter (~7.52 Å). The target regions exclude the hole sites so that different structures can be compared on an equal footing when discussing the stress distributions. Given the symmetry, we extract the stress distribution in one of the four diamonds, and the stresses are plotted clockwise across the four strips, starting from and ending at the lowest position in the y direction. The target regions of the vertical and horizontal cases are visualized in Supplementary Fig. 2a, b. The objective function of a design is calculated by averaging \(\sigma_{xx}\) over the atoms in the region and is maximized to obtain high-stress strip patterns.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The codes of this work are available at https://github.com/lammmit/atomic2field.
References
Fang, T.-T. Elements of Structures and Defects of Crystalline Materials (Elsevier, 2018).
Tilley, R. J. D. Defects in Solids (John Wiley & Sons, Ltd, 2008).
Xu, T. & Sun, L. Structural defects in graphene. In Defects in Advanced Electronic Materials and Novel Low Dimensional Structures 137–160 (2018).
Cohen-Tanugi, D. & Grossman, J. C. Water desalination across nanoporous graphene. Nano Lett. 12, 3602–3608 (2012).
Jung, G., Qin, Z. & Buehler, M. J. Molecular mechanics of polycrystalline graphene with enhanced fracture toughness. Extrem. Mech. Lett. 2, 52–59 (2015).
Cheng, Z., Zhou, H., Lu, Q., Gao, H. & Lu, L. Extra strengthening and work hardening in gradient nanotwinned metals. Science 362, eaau1925 (2018).
Shimura, F. Springer Handbook of Electronic and Photonic Materials (Springer International Publishing, 2017).
Robertson, A. W. et al. Spatial control of defect creation in graphene at the nanoscale. Nat. Commun. 3, 1144 (2012).
Cui, Y. et al. Metallic bondenabled wetting behavior at the liquid Ga/CuGa2 interfaces. ACS Appl. Mater. Interfaces 10, 9203–9210 (2018).
Yeo, J. et al. Multiscale design of graphyne-based materials for high-performance separation membranes. Adv. Mater. 31, 1–24 (2019).
Wang, S. et al. Atomically sharp crack tips in monolayer MoS2 and their enhanced toughness by vacancy defects. ACS Nano 10, 9831–9839 (2016).
Qin, Z., Jung, G. S., Kang, M. J. & Buehler, M. J. The mechanics and design of a lightweight three-dimensional graphene assembly. Sci. Adv. 3, 1–9 (2017).
Xu, W. et al. Self-folding hybrid graphene skin for 3D biosensing. Nano Lett. 19, 1409–1417 (2019).
Ma, A., Roters, F. & Raabe, D. A dislocation density based constitutive model for crystal plasticity FEM including geometrically necessary dislocations. Acta Mater. 54, 2169–2179 (2006).
Butler, K. T., Davies, D. W., Cartwright, H., Isayev, O. & Walsh, A. Machine learning for molecular and materials science. Nature 559, 547–555 (2018).
Lecun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Hughes, T. W., Williamson, I. A. D., Minkov, M. & Fan, S. Wave physics as an analog recurrent neural network. Sci. Adv. 5, eaay6946 (2019).
Qin, Z., Yu, Q. & Buehler, M. J. Machine learning model for fast prediction of the natural frequencies of protein molecules. RSC Adv. 10, 16607–16615 (2020).
Jensen, Z. et al. A machine learning approach to zeolite synthesis enabled by automatic literature data extraction. ACS Cent. Sci. 5, 892–899 (2019).
Aykol, M. et al. Network analysis of synthesizable materials discovery. Nat. Commun. 10, 2018 (2019).
Karniadakis, G. E. et al. Physics-informed machine learning. Nat. Rev. Phys. 3, 422–440 (2021).
Liu, Y. et al. Materials discovery and design using machine learning. J. Mater. 3, 159–177 (2017).
Libbrecht, M. W. & Noble, W. S. Machine learning applications in genetics and genomics. Nat. Rev. Genet. 16, 321–332 (2015).
Guo, K., Yang, Z., Yu, C. H. & Buehler, M. J. Artificial intelligence and machine learning in design of mechanical materials. Mater. Horiz. 8, 1153–1172 (2021).
Xie, T. & Grossman, J. C. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett. 120, 145301 (2018).
Hsu, Y. C., Yu, C. H. & Buehler, M. J. Using deep learning to predict fracture patterns in crystalline solids. Matter 3, 197–211 (2020).
Rajak, P. et al. Autonomous reinforcement learning agent for stretchable kirigami design of 2D materials. npj Comput. Mater. 7, 102 (2021).
Hanakata, P. Z., Cubuk, E. D., Campbell, D. K. & Park, H. S. Accelerated search and design of stretchable graphene kirigami using machine learning. Phys. Rev. Lett. 121, 255304 (2018).
Wan, J., Jiang, J.-W. & Park, H. S. Machine learning-based design of porous graphene with low thermal conductivity. Carbon 157, 262–269 (2020).
Schütt, K. T., Arbabzadah, F., Chmiela, S., Müller, K. R. & Tkatchenko, A. Quantum-chemical insights from deep tensor neural networks. Nat. Commun. 8, 6–13 (2017).
Schleder, G. R., Padilha, A. C. M., Acosta, C. M., Costa, M. & Fazzio, A. From DFT to machine learning: recent approaches to materials science–a review. J. Phys. Mater. 2, 032001 (2019).
Schütt, K. T. et al. SchNet: a continuous-filter convolutional neural network for modeling quantum interactions. Adv. Neural Inf. Process. Syst. 2017, 992–1002 (2017).
Wang, J. et al. Machine learning of coarsegrained molecular dynamics force fields. ACS Cent. Sci. 5, 755–767 (2019).
Behler, J. & Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 98, 1–4 (2007).
Noé, F., Tkatchenko, A., Müller, K.-R. & Clementi, C. Machine learning for molecular simulation. Annu. Rev. Phys. Chem. 71, 361–390 (2020).
Gu, G. X., Chen, C.-T. & Buehler, M. J. De novo composite design based on machine learning algorithm. Extrem. Mech. Lett. 18, 19–28 (2018).
Yang, Z., Yu, C.-H., Guo, K. & Buehler, M. J. End-to-end deep learning method to predict complete strain and stress tensors for complex hierarchical composite microstructures. J. Mech. Phys. Solids 154, 104506 (2021).
Yang, Z., Yu, C.-H. & Buehler, M. J. Deep learning model to predict complex stress and strain fields in hierarchical composites. Sci. Adv. 7, eabd7416 (2021).
Zhou, J. et al. Graph neural networks: a review of methods and applications. AI Open 1, 57–81 (2020).
Corso, G., Cavalleri, L., Beaini, D., Liò, P. & Veličković, P. Principal Neighbourhood Aggregation for Graph Nets. In Advances in Neural Information Processing Systems (eds. Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F. & Lin, H.) 33, 13260–13271 (Curran Associates, Inc., 2020).
Shekhawat, A. & Ritchie, R. O. Toughness and strength of nanocrystalline graphene. Nat. Commun. 7, 1–8 (2016).
Chen, M. Q. et al. Effects of grain size, temperature and strain rate on the mechanical properties of polycrystalline graphene – A molecular dynamics study. Carbon 85, 135–146 (2015).
Hao, F., Fang, D. & Xu, Z. Mechanical and thermal transport properties of graphene with defects. Appl. Phys. Lett. 99, 2009–2012 (2011).
Jing, N. et al. Effect of defects on Young’s modulus of graphene sheets: a molecular dynamics simulation. RSC Adv. 2, 9124–9129 (2012).
Mortazavi, B. & Ahzi, S. Thermal conductivity and tensile response of defective graphene: a molecular dynamics study. Carbon 63, 460–470 (2013).
Yamakov, V., Wolf, D., Phillpot, S. R., Mukherjee, A. K. & Gleiter, H. Dislocation processes in the deformation of nanocrystalline aluminium by molecular-dynamics simulation. Nat. Mater. 1, 45–48 (2002).
Alavi, S. & Thompson, D. L. Molecular dynamics simulations of the melting of aluminum nanoparticles. J. Phys. Chem. A 110, 1518–1523 (2006).
Yang, G., Li, L., Lee, W. B. & Ng, M. C. Structure of graphene and its disorders: a review. Sci. Technol. Adv. Mater. 19, 613–648 (2018).
Stukowski, A. Visualization and analysis of atomistic simulation data with OVITO–the open visualization tool. Model. Simul. Mater. Sci. Eng. 18, 15012 (2009).
Larsen, P. M., Schmidt, S. Ø. & Schiøtz, J. Robust structural identification via polyhedral template matching. Model. Simul. Mater. Sci. Eng. 24, 55007 (2016).
Hao, F., Fang, D. & Xu, Z. Mechanical and thermal transport properties of graphene with defects. Appl. Phys. Lett. 99, 041901 (2011).
Lee, C., Wei, X., Kysar, J. W. & Hone, J. Measurement of the elastic properties and intrinsic strength of monolayer graphene. Science 321, 385–388 (2008).
Davis, J. R. Aluminum and Aluminum Alloys (ASM International, 1993).
Noori, Z., Panjepour, M. & Ahmadian, M. Study of the effect of grain size on melting temperature of Al nanocrystals by molecular dynamics simulation. J. Mater. Res. 30, 1648–1660 (2015).
Papanikolaou, M., Salonitis, K., Jolly, M. & Frank, M. Large-scale molecular dynamics simulations of homogeneous nucleation of pure aluminium. Metals 9, 1–17 (2019).
Hirel, P. Atomsk: a tool for manipulating and converting atomic data files. Comput. Phys. Commun. 197, 212–219 (2015).
Jiang, D. E., Cooper, V. R. & Dai, S. Porous graphene as the ultimate membrane for gas separation. Nano Lett. 9, 4019–4024 (2009).
Garaj, S. et al. Graphene as a subnanometre trans-electrode membrane. Nature 467, 190–193 (2010).
Yu, C.-H., Qin, Z. & Buehler, M. J. Artificial intelligence design algorithm for nanocomposites optimized for shear crack resistance. Nano Futures 3, 035001 (2019).
Chawla, K. K. Composite Materials: Science and Engineering (Springer Science & Business Media, 2012).
Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995).
Brenner, D. W. et al. A second-generation reactive empirical bond order (REBO) potential energy expression for hydrocarbons. J. Phys. Condens. Matter 14, 783–802 (2002).
Stuart, S. J., Tutein, A. B. & Harrison, J. A. A reactive potential for hydrocarbons with intermolecular interactions. J. Chem. Phys. 112, 6472–6486 (2000).
Mendelev, M. I., Kramer, M. J., Becker, C. A. & Asta, M. Analysis of semi-empirical interatomic potentials appropriate for simulation of crystalline and liquid Al and Cu. Philos. Mag. 88, 1723–1750 (2008).
Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (eds. Wallach, H. et al.) 8024–8035 (Curran Associates, Inc., 2019).
Fey, M. & Lenssen, J. E. Fast graph representation learning with PyTorch geometric. Preprint at https://arxiv.org/abs/1903.02428 (2019).
Kipf, T. N. & Welling, M. Semi-supervised classification with graph convolutional networks. Preprint at https://arxiv.org/abs/1609.02907 (2016).
Veličković, P. et al. Graph attention networks. Preprint at https://arxiv.org/abs/1710.10903 (2017).
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry. In Proc. 34th International Conference on Machine Learning (eds. Precup, D. & Teh, Y. W.) 1263–1272 (PMLR, 2017).
Blank, J. & Deb, K. pymoo: multi-objective optimization in Python. IEEE Access 8, 89497–89509 (2020).
Acknowledgements
We acknowledge support from the Army Research Office (W911NF-19-2-0098) and AFOSR-MURI (FA9550-15-1-0514). The authors acknowledge support from the Google Cloud platform and MIT Quest for providing computational resources and other support.
Author information
Contributions
M.J.B. and Z.Y. conceived the idea. Z.Y. and M.J.B. developed the model and carried out the simulations. Z.Y. curated the training and testing data. M.J.B. supervised the project, analyzed the results, and interpreted it with Z.Y. Z.Y. and M.J.B. wrote the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yang, Z. & Buehler, M. J. Linking atomic structural defects to mesoscale properties in crystalline solids using graph neural networks. npj Comput. Mater. 8, 198 (2022). https://doi.org/10.1038/s41524-022-00879-4