## Abstract

Identifying the magnetic state of materials is of great interest in a wide range of applications, but direct identification is not always straightforward due to limitations in neutron scattering experiments. In this work, we present a machine-learning approach using decision-tree algorithms to identify magnetism from the spin-integrated excitation spectrum, such as the density of states. The dataset was generated by Hartree–Fock mean-field calculations of candidate antiferromagnetic orders on a Wannier Hamiltonian, extracted from first-principles calculations targeting BaOsO\(_3\). Our machine learning model was trained using various types of spectral data, including the local density of states, the momentum-resolved density of states at high-symmetry points, and the lowest excitation energies from the Fermi level. Although the density of states performs well as an input for machine learning, the broadening method has a significant impact on the model’s performance. We improved the model’s performance by designing the excitation energy as a feature, achieving excellent classification of antiferromagnetic order even for test samples generated by methods different from those used to produce the training samples.

## Introduction

Magnetism plays a crucial role in many physical and technological phenomena, ranging from magnetic storage devices to superconductivity. Determining the presence of long-range magnetic ordering in materials is therefore essential for designing new functional materials with tailored magnetic properties. Neutron scattering is a powerful tool for directly determining magnetic order and works across a wide range of temperatures and pressures. However, neutron scattering experiments typically require access to specialized facilities, such as nuclear reactors or spallation sources, which can be costly. They also demand relatively large, high-quality samples, and elements with high neutron absorption cross-sections can obscure clear scattering signals.

Despite the availability of direct measurement methods, the limitations mentioned above make it challenging to identify magnetic order. It would therefore be beneficial to have a method for determining magnetic order that is more accessible and less expensive, even if it is not as direct as neutron scattering. For instance, identifying magnetic order from the density of states (DOS), which is accessible through various experimental methods, is a compelling alternative. In principle, magnetic order is closely connected with the particle–hole excitation spectrum, and the DOS displays distinct features of the corresponding order. The challenge is how to extract and quantify this correlation effectively.

The recent advancement of machine learning has had a significant impact on uncovering hidden correlations in condensed matter physics^{1,2,3,4,5,6,7,8,9}. This technology has also been applied to the study of magnetism, enabling the prediction of physical quantities without direct measurement or calculation^{10,11,12,13,14,15,16,17,18,19,20,21,22,23} and the probing of orders from data^{24,25,26,27,28,29,30}.

Motivated by the capability of machine learning to uncover complex relationships within numerical data, we explore the use of decision tree algorithms for identifying magnetic order from the density of states. We also examine an alternative probe through momentum-resolved spectra, as the integration over the momentum space in the local density of states may mask crucial differences between various forms of magnetic order.

For the classification of magnetic order, a dataset comprising inputs and their respective magnetic orders is necessary. We selected BaOsO\(_3\) as the target system for our machine learning study. Polycrystalline BaOsO\(_3\) samples show metallic behavior^{31,32}, but the material’s high symmetry allows us to induce various distinct magnetic orders by lowering the symmetry. We employ Hartree–Fock (HF) mean-field theory to generate datasets with multiple candidate magnetic orders for machine learning. The system is driven into magnetic order by the local Coulomb interaction, depending on the antiferromagnetic order parameters we impose. The resulting DOS, momentum-resolved spectra, and antiferromagnetic orders are used to construct the datasets.

This paper is organized as follows. In “Model Hamiltonian”, we describe the model Hamiltonian and the Hartree–Fock approximation we employed. The data preparation for machine learning and the performance of the trained model are discussed in “Machine learning”. “Conclusion” presents our conclusions and outlook.

## Model Hamiltonian

Figure 1 shows the unit cell and electronic structure of BaOsO\(_3\). The first-principles calculations were performed using density functional theory with projector augmented-wave potentials and the PBE exchange–correlation functional, as implemented in the Vienna Ab initio Simulation Package^{33,34}. The crystal field lifts the degeneracy of the *d*-orbitals of the Os atoms, placing the Fermi level in the \(t_{2g}\) levels, which are well separated from other bands according to the first-principles calculations. We used Wannier90^{35} to construct maximally-localized Wannier function (MLWF) based tight-binding Hamiltonians for the \(t_{2g}\) bands.

The model Hamiltonian is represented as \(H = H_\mathrm{0} + H_\mathrm{int}\), where the bilinear part is given by

$$H_{0} = \sum_{ij}\sum_{ll^{\prime}\sigma} t_{ij}^{ll^{\prime}}\, c_{il\sigma}^{\dagger} c_{jl^{\prime}\sigma}, \tag{1}$$

where \(c_{il\sigma }^{\dagger }\) (\(c_{il\sigma }\)) creates (annihilates) an electron with spin \(\sigma\) in orbital *l* at site *i*. The hopping amplitude \(t_{ij}^{ll^{'}}\) is adapted from the Wannier Hamiltonian. The hopping parameters have ideal cubic symmetry, allowing us to induce various magnetic orders for this machine learning study. In a \(2\times 2\times 2\) supercell, three types of antiferromagnetic (AF) orders were considered: A-, C-, and G-type, as illustrated in Fig. 2. In Fig. 2, we display the three magnetic structures and their associated noninteracting band energies in the first Brillouin zone of the original lattice, modified to reflect the periodicity when the respective antiferromagnetic order is stabilized.

The two-body interaction part is represented as

$$H_\mathrm{int} = U\sum_{il} n_{il\uparrow}\, n_{il\downarrow} + U^{\prime}\sum_{i,\, l<l^{\prime}}\sum_{\sigma} n_{il\sigma}\, n_{il^{\prime}\bar{\sigma}} + U^{\prime\prime}\sum_{i,\, l<l^{\prime}}\sum_{\sigma} n_{il\sigma}\, n_{il^{\prime}\sigma}, \tag{2}$$

where \(n_{il\sigma }\equiv c_{il\sigma }^{\dagger }c_{il\sigma }\) is a number operator, *U* is the on-site intra-orbital Coulomb interaction, and *J* is the Hund’s coupling. We assume rotational invariance, setting the inter-orbital interaction terms to \(U^\prime =U-2J\) for two different spins and \(U^{\prime \prime }=U-3J\) for the same spins. In order to stabilize an AF order, we introduce the Hartree–Fock approximation, which allows us to handle the many-body problem in \(H_\mathrm{int}\) using the mean-field ansatz

$$\langle c_{il\sigma}^{\dagger}\, c_{jl^{\prime}\sigma^{\prime}} \rangle = \delta_{ij}\, \delta_{ll^{\prime}}\, \delta_{\sigma\sigma^{\prime}}\, \frac{1}{2}\left( n_{l} + \sigma\, m_{l}\, e^{i\textbf{q}_{\alpha}\cdot \textbf{r}_{j}} \right), \tag{3}$$

where \(n_l\) (\(m_l\)) is the electron occupancy (staggered magnetization) of orbital *l*, \(\textbf{r}_{j}\) is the position vector of site *j*, and \(\textbf{q}_\alpha\) is the wave vector corresponding to AF type \(\alpha\) = A, C, and G. To be precise, \(\textbf{q}^{}_\mathrm{A}=(\pi , 0, 0)\), \(\textbf{q}^{}_\mathrm{C}=(\pi , \pi , 0)\) and \(\textbf{q}^{}_\mathrm{G}=(\pi , \pi , \pi )\). Note that the Kronecker deltas force any nonlocal, interorbital, and interspin terms to be zero in this calculation, but introducing additional off-diagonal order parameters does not alter the applicability of this machine learning study.

We perform the self-consistent calculation with the HF ansatz Eq. (3) for the three AF types and obtain the phase diagrams shown in Fig. 3. In general, stronger interaction leads to larger staggered magnetization, but the metal-insulator transition exhibits different behavior depending on the number of electrons per site *N* and the AF ordering. The G-type order is metallic in a broad range of parameter space, except in the half-filled (\(N=3\)) case. This is due to the symmetry constraint of the AF order (\(\textbf{q}_\mathrm{G}=(\pi ,\pi ,\pi )\)), which requires the occupancies of the three t\(_{2g}\) orbitals to be equal. To open a gap, the number of electrons per unit cell must be an integer, and this symmetry constraint requires it to be a multiple of three. Therefore, the only possible insulating case is half-filling, as confirmed by the HF calculations.
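The structure of such a self-consistency loop can be illustrated with a deliberately simplified sketch: a half-filled single-band chain with staggered magnetization \(m\) and mean-field gap \(\Delta = Um/2\), rather than the three-orbital \(t_{2g}\) model used in the paper. All conventions below are standard textbook mean-field choices, not the authors’ implementation.

```python
import numpy as np

def hf_magnetization(U, t=1.0, nk=512, n_iter=300, mix=0.5):
    """Fixed-point iteration for the staggered magnetization of a half-filled
    single-band chain in the antiferromagnetic mean field (illustrative sketch,
    not the three-orbital calculation of the paper)."""
    # midpoint grid over the reduced Brillouin zone (-pi/2, pi/2)
    k = (np.arange(nk) + 0.5) * np.pi / nk - np.pi / 2
    eps = -2.0 * t * np.cos(k)                 # bare dispersion
    m = 0.5                                    # initial guess for the order parameter
    for _ in range(n_iter):
        delta = 0.5 * U * m                    # mean-field gap
        e = np.sqrt(eps**2 + delta**2)         # quasiparticle energy
        m_new = np.mean(delta / e)             # self-consistency condition
        m = (1 - mix) * m + mix * m_new        # linear mixing for stability
    return m

print(hf_magnetization(2.0), hf_magnetization(6.0))  # m grows with U
```

As in the full calculation, stronger interaction yields a larger converged magnetization; the three-orbital version simply iterates a vector of occupancies and magnetizations instead of a single scalar.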

The A- and C-type antiferromagnetic orders have fewer restrictions compared to the G-type, requiring only two out of three orbital occupancies to be equal. The different combinations of the \(2+1\) occupancy splitting can result in different final solutions for a Hartree–Fock calculation. To account for the possibility of converging to metastable states instead of the ground state, multiple independent Hartree–Fock iterations are performed with various initial conditions and the solution with the lowest energy is selected to construct the phase diagrams.

To increase diversity in the machine learning samples, we have included data with nonzero values of *J*/*U*. The Hund’s coupling has two opposing effects on the metal-insulator transition in transition metal oxides^{36}. At half-filling, it reduces the critical value of *U*, whereas for electron fillings other than half-filling, it increases the critical *U*. This is because the maximum total spin induced by the Hund’s coupling is much larger at half-filling, making gap formation easier. This unique behavior makes the half-filled case special, indicating that it may be challenging to obtain accurate predictions for this case unless a sufficient number of samples for the half-filling are included in the training set.

## Machine learning

### Data preparation

We created the dataset for our machine learning study from the Hartree–Fock results discussed in the previous section. We obtained three phase diagrams for each of the three antiferromagnetic types, with *J*/*U* values of 0.0, 0.1, and 0.2. For each phase diagram we collected \(9\times 59\) data points, where *U* is varied from 0 to 8 in increments of 1 and *N* ranges from 0.1 to 5.9 with a step of 0.1. This resulted in a total of \(3 \times 3\times 9\times 59 = 4779\) samples. The data generation took approximately 3 h on a single CPU (Intel Xeon Platinum 8360Y at 2.40 GHz), and this time could be reduced to a fraction by running the calculations in parallel on multiple CPUs without sacrificing performance.
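The sample count follows directly from the Cartesian product of the parameter grids. A minimal sketch of the enumeration (variable names are illustrative, not taken from the authors’ code):

```python
from itertools import product

AF_TYPES = ["A", "C", "G"]                            # three antiferromagnetic orders
J_OVER_U = [0.0, 0.1, 0.2]                            # three Hund's coupling ratios
U_VALUES = list(range(0, 9))                          # U = 0, 1, ..., 8  (9 values)
N_VALUES = [round(0.1 * i, 1) for i in range(1, 60)]  # N = 0.1, ..., 5.9 (59 values)

grid = list(product(AF_TYPES, J_OVER_U, U_VALUES, N_VALUES))
print(len(grid))  # 3 * 3 * 9 * 59 = 4779 samples
```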

The antiferromagnetic order in HF calculations is determined by the selected HF ansatz which assumes that symmetry breaking occurs and is quantified by non-zero staggered magnetizations (\(m_l\)). When \(U = 0\), the \(m_l\) does not contribute to the Hamiltonian and the self-consistent solution would result in \(m_l\) = 0, which is equivalent to the original Hamiltonian without magnetic order. Labeling such cases as antiferromagnetic data could negatively impact the training process. Additionally, even if \(m_l\) is not zero, extremely small values of \(m_l\) only result in minimal changes to the Hamiltonian, which can confuse the machine learning model. To improve the efficiency of the model, we included only those samples in the dataset where \(m_l\) is greater than 0.1.
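The selection rule above amounts to a simple mask on the converged order parameters. A sketch with hypothetical magnetization values (the arrays below are illustrative, not actual HF output):

```python
import numpy as np

# Samples whose converged staggered magnetization is at most 0.1 are dropped,
# since they are barely distinguishable from the unordered solution.
m_staggered = np.array([0.00, 0.05, 0.12, 0.48, 0.91])  # max |m_l| per HF solution
labels      = np.array(["A", "A", "C", "G", "G"])       # assigned AF ansatz

keep = m_staggered > 0.1
filtered_labels = labels[keep]
print(filtered_labels)  # only samples with a sizeable order parameter remain
```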

As this study aims to identify magnetic orders based on spin-insensitive measurements, the input data for the machine learning should not reflect the modified periodicity in the magnetic orders, even though the spin-integrated data contains crucial information to distinguish the AF orders. We restore the original periodicity by applying band unfolding transformations^{37} as illustrated in Fig. 4a.

Hartree–Fock is a single-particle approach, so the spectrum consists of delta functions at the band energies, and broadening is required to obtain a non-divergent density of states. Despite the broadening, each peak retains the same width, as the weights of the delta functions are unity, unless two band energies lie in close proximity. This is no longer the case after band unfolding. The local density of states (LDOS) in Fig. 4b remains unaffected by the unfolding because the weights are integrated over the Brillouin zone. However, the momentum-resolved density of states (\(\rho _\textbf{k}\)) in Fig. 4c shows a noticeable variation from its folded counterpart. The energy-dependent variations in the peak weights are related to the antiferromagnetic order.

The selection of optimal features is crucial because redundant features can introduce noise or bias during the learning process, potentially leading to poor generalization or overfitting. We test three different sets of features in this work, as described below. We first evaluate two of them: the LDOS and \(\rho _\textbf{k}\) at the high-symmetry points (\(\textbf{k}=\Gamma\), X, M, and R). The LDOS is calculated by integrating \(\rho _\textbf{k}\) over the entire Brillouin zone:

$$\rho_{\textbf{k}}(\omega) = \sum_{n} \frac{1}{\pi}\, \frac{\eta}{\left(\omega - \varepsilon_{n}(\textbf{k})\right)^{2} + \eta^{2}}, \qquad \rho(\omega) = \frac{1}{N_{\textbf{k}}} \sum_{\textbf{k}} \rho_{\textbf{k}}(\omega), \tag{4}$$

where \(\eta\) is a Lorentzian broadening factor and *n* is the band index. The calculated DOS is a continuous function of energy. However, to utilize it as input features for machine learning, the DOS must be expressed as a set of numerical values. We discretize the DOS into \(N_\mathrm{bin}\) points,

$$\rho_{I} = \frac{1}{\delta\omega} \int_{\omega_{I}-\delta\omega}^{\omega_{I}} \rho(\omega)\, d\omega, \tag{5}$$

where \(I=1,2,\ldots ,N_\mathrm{bin}\) are frequency indices, \(\omega _I = -8 + I \delta \omega\), and \(\delta \omega = 16/N_\mathrm{bin}\). The original data are generated using 1024 grid points over the energy range \([-8, 8]\), and the features for a given \(N_\mathrm{bin}\) are extracted by integrating the cubic-interpolated LDOS. The \(\rho _\textbf{k}\) at each high-symmetry point is extracted in a similar manner, but with the number of bins reduced to one quarter to ensure a fair comparison between the two feature sets, LDOS and \(\rho _\textbf{k}\).
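A minimal sketch of turning a set of band energies into \(N_\mathrm{bin}\) bin-averaged features, assuming the Lorentzian broadening and the \([-8, 8]\) window described above. For brevity, the cubic-interpolation step is replaced by direct integration of the analytic Lorentzian, whose antiderivative is an arctangent:

```python
import numpy as np

def binned_dos(band_energies, n_bin=64, eta=0.1, w_min=-8.0, w_max=8.0):
    """Average a Lorentzian-broadened DOS over n_bin equal energy bins.

    The integral of a unit-weight Lorentzian over a bin has the closed form
    (1/pi) * arctan((w - e)/eta) evaluated at the bin edges, so no numerical
    quadrature is needed in this sketch."""
    edges = np.linspace(w_min, w_max, n_bin + 1)
    dw = (w_max - w_min) / n_bin
    features = np.zeros(n_bin)
    for e in band_energies:
        cdf = np.arctan((edges - e) / eta) / np.pi  # cumulative Lorentzian weight
        features += np.diff(cdf) / dw               # bin-averaged contribution
    return features

feats = binned_dos([-1.2, 0.0, 0.7, 2.5], n_bin=64)
print(feats.shape)  # one 64-component feature vector per sample
```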

We also design a third set of features based on the peak structure of \(\rho _\textbf{k}\), which will be introduced later. An additional advantage of the third feature set is its reduced dependence on the broadening scheme. Broadening can arise from various sources, including instrumental resolution, thermal fluctuations, and finite excitation lifetimes. Since not all sources of broadening can be controlled in experiments, broadening effects must be incorporated accurately in theoretical approaches. The third feature set is motivated by the test evaluations in “Results of decision tree algorithms”. The features used for the machine learning and testing procedures, performed in “Results of decision tree algorithms”, are summarized in Fig. 5.

### Results of decision tree algorithms

We used decision tree ensemble algorithms, including Random Forest, a bagging method available in scikit-learn^{38}, as well as the boosting methods XGBoost^{39}, LightGBM^{40}, and CatBoost^{41}. We divided the samples into a training and a test set in a 7:3 ratio and trained models using the Random Forest, XGBoost, LightGBM, and CatBoost algorithms. Figure 6 shows the confusion matrices and the precision–recall curves. An element of the confusion matrix is defined as \(C_{\alpha \beta }\) = (number of samples predicted as \(\beta\) order while the true label is \(\alpha\) in the test set). The diagonal elements of the matrices represent cases in which the trained model correctly predicts the AF labels for the test set. The off-diagonal elements indicate the number of incorrect answers, where the column indicates which label was incorrectly assigned.
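The split-train-evaluate pattern follows standard scikit-learn usage; in the sketch below, synthetic three-class data stands in for the actual spectral features, and all parameter choices are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the spectral features; three classes mimic A/C/G order.
X, y = make_classification(n_samples=1500, n_features=64, n_informative=16,
                           n_classes=3, random_state=0)

# 7:3 train/test split, as in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
C = confusion_matrix(y_te, model.predict(X_te))  # C[alpha, beta]: true alpha, predicted beta
print(C)
```

The boosting models (XGBoost, LightGBM, CatBoost) plug into the same fit/predict interface, so only the estimator line changes.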

The precision–recall curve illustrates how the balance between precision and recall changes with varying thresholds. Precision (*P*) represents the ratio of true positives (\(T_p\)) to the total number of cases predicted as positive; \(P = \frac{T_p}{T_p + F_p}\), and recall (*R*) denotes the proportion of \(T_p\) to the total number of actual positive samples; \(R = \frac{T_p}{T_p + F_n}\). \(F_p\) and \(F_n\) stand for false positives and false negatives, respectively. \(F_1\) score is the harmonic mean of precision and recall; \(F_1 = 2\frac{P\times R}{P+R}\). High precision and recall scores indicate good classification results, and the curve tends to be closer to the upper-right corner.
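The three metrics follow directly from the counts. A worked example with hypothetical counts (8 true positives, 2 false positives, 4 false negatives):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from raw counts (definitions as in the text)."""
    p = tp / (tp + fp)            # P = T_p / (T_p + F_p)
    r = tp / (tp + fn)            # R = T_p / (T_p + F_n)
    f1 = 2 * p * r / (p + r)      # harmonic mean of P and R
    return p, r, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f1)  # 0.8, 0.666..., 0.727... (= 8/11)
```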

The performance of the models is generally good; however, errors are more frequent when the filling is half or the value of \(m_l\) is small. The half-filled case behaves differently from the other fillings in the presence of non-zero Hund’s coupling, making it difficult to predict accurately. Small values of \(m_l\) result in small mean-field corrections, which do not provide sufficient information to distinguish different AF orders. Because this method captures patterns in the features, detecting weak magnetism is a fundamental challenge for it. Adding more half-filled samples to the training set, however, can effectively mitigate errors for the gapped cases.

The accuracy of the LDOS model is relatively lower compared to the \(\rho _\textbf{k}\) model as expected, because the integration process during the calculation of the LDOS results in the loss of momentum-resolved information. Despite both models having the same number of features, the performance difference suggests that feature selection is key to the success of this machine learning problem.

The method has an advantage in that it enables feature design optimization for specific applications. For example, angle-resolved photoemission spectroscopy (ARPES) measures hole excitation spectra, so a model that focuses on energy ranges with \(E < E_F\) is necessary. Testing the model trained within the restricted energy range would provide valuable information for analyzing the spectra. Additionally, actual experiments can be influenced by various environmental noises, such as thermal broadening, which cannot be replicated precisely in theoretical calculations. Thus, validating the model with different types of noise is crucial for practical applications.

Figure 7 illustrates two spectra produced from the same HF solution but with different broadening methods. A constant broadening is applied in Fig. 7a, whereas the broadening in Fig. 7b increases as the energy decreases below the Fermi level, as \(\eta (|E-E_{F}|) = 0.1 + 0.4 |E-E_{F}| / 8\). For validation purposes, the training set consists of DOS generated using constant broadening, while the test set is constructed using the broadening that increases linearly as the energy drops below the Fermi level. This presents a more difficult situation for the machine learning model, as the test set is broadened with a different pattern than the training set. Note that we only use the \(\rho _\textbf{k}\) at high-symmetry points for machine learning, even though the spectra are visualized along a path connecting high-symmetry points.
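The energy-dependent broadening quoted above is a simple linear ramp away from the Fermi level, which can be written as:

```python
import numpy as np

def eta_linear(E, E_F=0.0):
    """Broadening growing linearly away from the Fermi level, matching the
    form quoted in the text: eta = 0.1 + 0.4 * |E - E_F| / 8."""
    return 0.1 + 0.4 * np.abs(E - E_F) / 8.0

E = np.linspace(-8.0, 8.0, 5)
print(eta_linear(E))  # 0.1 at E_F, rising to 0.5 at the edges of the window
```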

We present the resulting confusion matrices in Fig. 8. Figures 6e–h and 8a–d are similar in terms of accuracy, indicating that limiting the energy range does not negatively impact the performance of the machine learning models. However, we observe a substantial drop in accuracy in Fig. 8e–h. This decline suggests that the broadening strength significantly affects the decision trees’ performance, as the trees make decisions based on numerical values that become smaller when the broadening strength \(\eta\) increases.

Suppose a trained decision tree checks whether a given sample has an LDOS value above a certain threshold in a specific energy range at the root node. If the LDOS is larger than the threshold, it returns *True*; otherwise, it returns *False*. When the test set is broadened by a larger broadening factor, it reduces the height of the peak in LDOS. Consequently, the root node sends the sample to the *False* branch, and the decision tree misclassifies the sample.

To investigate the impact of applying different broadenings to the training and test sets, we perform a systematic cross-test of \(\eta _\mathrm{train}\) and \(\eta _\mathrm{test}\). We divide the HF solutions into training and test sets and apply \(\eta _\mathrm{train}\) and \(\eta _\mathrm{test}\) to each set, respectively. The resulting accuracy is presented in Fig. 9a,c,e. As expected, the best performance is achieved when \(\eta _\mathrm{train} = \eta _\mathrm{test}\). When \(\eta _\mathrm{train}\) and \(\eta _\mathrm{test}\) are not equal, incorrect predictions increase, especially when \(\eta _\mathrm{train}\) is smaller than \(\eta _\mathrm{test}\). This is because the decision trees choose a branch to follow at every node, based on the comparison between a certain feature and a threshold value. A larger \(\eta _\mathrm{train}\) reduces these threshold values, making samples with smaller \(\eta _\mathrm{test}\) more likely to be classified accurately than vice versa.
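The cross-\(\eta\) test amounts to a double loop over training and test broadenings. A toy two-class sketch (two peak positions standing in for the three AF orders, and Lorentzian spectra standing in for \(\rho _\textbf{k}\); all names and parameters are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
omega = np.linspace(-4, 4, 64)  # energy grid for the toy spectra

def sample(eta, n=300):
    """Toy spectra: class 0 peaks near -1, class 1 near +1, broadened by eta."""
    centers = rng.normal(loc=rng.choice([-1.0, 1.0], size=n), scale=0.1)
    y = (centers > 0).astype(int)
    X = eta / np.pi / ((omega[None, :] - centers[:, None]) ** 2 + eta**2)
    return X, y

accuracy = {}
for eta_train in (0.1, 0.5):
    X_tr, y_tr = sample(eta_train)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    for eta_test in (0.1, 0.5):
        X_te, y_te = sample(eta_test)
        accuracy[(eta_train, eta_test)] = clf.score(X_te, y_te)
print(accuracy)  # matched broadenings correspond to the diagonal entries
```

In the paper the same loop runs over a range of \(\eta\) values with the actual \(\rho _\textbf{k}\) features, producing the accuracy maps of Fig. 9.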

Based on the results of the cross-broadening tests, which suggest that the key aspect of the model is whether a feature (i.e. an averaged DOS within a bin) exceeds a threshold value, we devised a compact set of features. Despite the variation of the height and width of a peak on the energy axis as \(\eta\) changes with Lorentzian broadening in Eq. (4), the energy at which the weight reaches the local maximum remains unchanged. This energy, originating from the corresponding band energy (\(\varepsilon _n(\textbf{k})\) in Eq. (4)), represents the lowest excitation from the ground state and is measured by the distance from the Fermi level to the peak position.

Figure 10 shows the extraction of features from the original \(\rho _\textbf{k}\) features. For each high-symmetry point \(\textbf{k}\), we identify the peak energies closest to the Fermi level for both electron and hole excitations. We use the energy differences as features, where a positive value represents an electron excitation and a negative value a hole excitation. If a peak of \(\rho _\textbf{k}\) is located at the Fermi level, the corresponding features are set to zero. This feature selection reduces the number of features from 256 to 8, a 32-fold reduction, enabling more efficient training.
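The extraction rule can be sketched as follows, assuming peak positions have already been located for each of the four high-symmetry points (the input format and the zero fallback for a missing peak are illustrative choices, not the authors’ exact convention):

```python
import numpy as np

def lowest_excitations(peak_energies_by_k, E_F=0.0):
    """For each high-symmetry point, return the peak energy closest to the
    Fermi level above (electron) and below (hole) E_F, as signed distances.
    A peak exactly at the Fermi level sets both features to zero."""
    features = []
    for peaks in peak_energies_by_k:
        d = np.asarray(peaks) - E_F
        if np.any(d == 0):
            features += [0.0, 0.0]           # peak sits on the Fermi level
            continue
        above = d[d > 0]
        below = d[d < 0]
        features.append(above.min() if above.size else 0.0)  # electron excitation
        features.append(below.max() if below.size else 0.0)  # hole excitation
    return features

# Hypothetical peak positions at the four points (Gamma, X, M, R).
peaks = [[-2.0, -0.4, 0.9], [-1.1, 0.3, 2.2], [-0.7, 0.0, 1.5], [-3.0, 1.8]]
print(lowest_excitations(peaks))  # 8 features: [0.9, -0.4, 0.3, -1.1, 0.0, 0.0, 1.8, -3.0]
```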

The utilization of the lowest excitation energy as a feature contributes to not only efficient training but also accurate results, as demonstrated in Fig. 9b,d,f. The problematic \(\eta\)-dependence of the cross-\(\eta\) test vanishes, highlighting the significance of feature selection for classification based on spectral information. The test set encompasses both metallic and insulating solutions, with the model tending to produce more incorrect predictions for metallic samples. However, tests performed solely on insulating cases show over 95% accuracy, encouraging potential application for classifying antiferromagnetic insulators by this method.

## Conclusion

In this study, we examined the application of a machine learning model for classifying antiferromagnetic (AF) orders in a model Hamiltonian targeting BaOsO\(_3\). The dataset was created using Hartree–Fock calculations with selected AF orders in a \(2\times 2\times 2\) supercell. Converged solutions were used to generate the local density of states (LDOS) and the momentum-resolved density of states (\(\rho _\textbf{k}\)). We trained the model using various features, including LDOS, \(\rho _\textbf{k}\), and the lowest excitations, and evaluated its ability to identify AF orders in test samples. While both LDOS and \(\rho _\textbf{k}\) were designed to have the same number of features, the latter demonstrated superior performance, highlighting the importance of feature selection. The \(\rho _\textbf{k}\) features performed well when test samples had comparable broadening levels, but different broadening methods weakened the model’s performance. In contrast, the lowest excitations, with only 8 features per sample, surpassed these limitations and exhibited excellent performance across most samples.

We considered only three types of orders in this paper, but in principle, the approach can be extended to include more diverse types of orders. The Hartree–Fock calculation is computationally very cheap and enables a flexible design of candidate orders followed by prompt testing on desired samples. This method will be useful for materials where conventional methods for identifying magnetic orders are not applicable, such as two-dimensional materials or systems with non-collinear orders.

Training based on mean-field-level calculations may encounter challenges when applied to real materials, in cases where correlation effects drive the electronic structure beyond the mean-field description. Therefore, to ensure a successful application, it would be beneficial to validate the performance on test samples generated by methods that incorporate beyond-mean-field fluctuations. For instance, identifying the AF order in dynamical mean-field calculations would be a reasonable test as a bridge toward applications to real materials.

## Change history

### 21 August 2023

A Correction to this paper has been published: https://doi.org/10.1038/s41598-023-40525-7

## References

1. Rosenbrock, C. W., Homer, E. R., Csányi, G. & Hart, G. L. W. Discovering the building blocks of atomic systems using machine learning: Application to grain boundaries. *NPJ Comput. Mater.* **3**, 29 (2017).
2. Carrasquilla, J. & Melko, R. G. Machine learning phases of matter. *Nat. Phys.* **13**, 431–434 (2017).
3. Ch’ng, K., Carrasquilla, J., Melko, R. G. & Khatami, E. Machine learning phases of strongly correlated fermions. *Phys. Rev. X* **7**, 031038 (2017).
4. Zhang, Y. & Kim, E.-A. Quantum loop topography for machine learning. *Phys. Rev. Lett.* **118**, 216401 (2017).
5. Stanev, V. *et al.* Machine learning modeling of superconducting critical temperature. *NPJ Comput. Mater.* **4**, 29 (2018).
6. Carleo, G. *et al.* Machine learning and the physical sciences. *Rev. Mod. Phys.* **91**, 045002 (2019).
7. Ghosh, A., Ronning, F., Nakhmanson, S. M. & Zhu, J.-X. Machine learning study of magnetism in uranium-based compounds. *Phys. Rev. Mater.* **4**, 064414 (2020).
8. Lee, D., You, D., Lee, D., Li, X. & Kim, S. Machine-learning-guided prediction models of critical temperature of cuprates. *J. Phys. Chem. Lett.* **12**, 6211–6217 (2021).
9. Tsai, Y.-H. *et al.* Deep learning of topological phase transitions from entanglement aspects: An unsupervised way. *Phys. Rev. B* **104**, 165108 (2021).
10. Landrum, G. A. & Genin, H. Application of machine-learning methods to solid-state chemistry: Ferromagnetism in transition metal alloys. *J. Solid State Chem.* **176**, 587–593 (2003).
11. Kusne, A. G. *et al.* On-the-fly machine-learning for high-throughput experiments: Search for rare-earth-free permanent magnets. *Sci. Rep.* **4**, 6367 (2014).
12. Tamura, R. & Hukushima, K. Method for estimating spin-spin interactions from magnetization curves. *Phys. Rev. B* **95**, 064407 (2017).
13. Miyazato, I., Tanaka, Y. & Takahashi, K. Accelerating the discovery of hidden two-dimensional magnets using machine learning and first principle calculations. *J. Phys. Condens. Matter* **30**, 06LT01 (2018).
14. Nelson, J. & Sanvito, S. Predicting the Curie temperature of ferromagnets using machine learning. *Phys. Rev. Mater.* **3**, 104405 (2019).
15. Rhone, T. D. *et al.* Data-driven studies of magnetic two-dimensional materials. *Sci. Rep.* **10**, 15795 (2020).
16. Samarakoon, A. M. *et al.* Machine-learning-assisted insight into spin ice Dy\(_2\)Ti\(_2\)O\(_7\). *Nat. Commun.* **11**, 892 (2020).
17. Katsikas, G., Sarafidis, C. & Kioseoglou, J. Machine learning in magnetic materials. *Phys. Status Solidi B* **258**, 2000600 (2021).
18. Xie, Y., Tritsaris, G. A., Grånäs, O. & Rhone, T. D. Data-driven studies of the magnetic anisotropy of two-dimensional magnetic materials. *J. Phys. Chem. Lett.* **12**, 12048–12054 (2021).
19. Acosta, C. M., Ogoshi, E., Souza, J. A. & Dalpian, G. M. Machine learning study of the magnetic ordering in 2D materials. *ACS Appl. Mater. Interfaces* **14**, 9418–9432 (2022).
20. Chapman, J. B. J. & Ma, P.-W. A machine-learned spin-lattice potential for dynamic simulations of defective magnetic iron. *Sci. Rep.* **12**, 22451 (2022).
21. Alidoust, M., Rothmund, E. & Akola, J. Machine-learned model Hamiltonian and strength of spin-orbit interaction in strained Mg\(_2\)X (X = Si, Ge, Sn, Pb). *J. Phys. Condens. Matter* **34**, 365701 (2022).
22. Domina, M., Cobelli, M. & Sanvito, S. Spectral neighbor representation for vector fields: Machine learning potentials including spin. *Phys. Rev. B* **105**, 214439 (2022).
23. Kucukbas, M. E., McCann, S. & Power, S. R. Predicting magnetic edge behavior in graphene using neural networks. *Phys. Rev. B* **106**, L081411 (2022).
24. Greitemann, J., Liu, K. & Pollet, L. Probing hidden spin order with interpretable machine learning. *Phys. Rev. B* **99**, 060404 (2019).
25. Zhang, Y. *et al.* Machine learning in electronic-quantum-matter imaging experiments. *Nature* **570**, 484–490 (2019).
26. Shiina, K., Mori, H., Okabe, Y. & Lee, H. K. Machine-learning studies on spin models. *Sci. Rep.* **10**, 2177 (2020).
27. Liu, K., Sadoune, N., Rao, N., Greitemann, J. & Pollet, L. Revealing the phase diagram of Kitaev materials by machine learning: Cooperation and competition between spin liquids. *Phys. Rev. Res.* **3**, 023016 (2021).
28. Rao, N., Liu, K., Machaczek, M. & Pollet, L. Machine-learned phase diagrams of generalized Kitaev honeycomb magnets. *Phys. Rev. Res.* **3**, 033223 (2021).
29. Yu, H. *et al.* Complex spin Hamiltonian represented by an artificial neural network. *Phys. Rev. B* **105**, 174422 (2022).
30. Tibaldi, S., Magnifico, G., Vodola, D. & Ercolessi, E. Unsupervised and supervised learning of interacting topological phases from single-particle correlation functions. *SciPost Phys.* **14**, 005 (2023).
31. Shi, Y. *et al.* High-pressure synthesis of 5d cubic perovskite BaOsO\(_3\) at 17 GPa: Ferromagnetic evolution over 3d to 5d series. *J. Am. Chem. Soc.* **135**, 16507–16516 (2013).
32. Jung, M.-C. & Lee, K.-W. Electronic structures, magnetism, and phonon spectra in the metallic cubic perovskite BaOsO\(_3\). *Phys. Rev. B* **90**, 045120 (2014).
33. Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. *Phys. Rev. B* **54**, 11169–11186 (1996).
34. Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. *Phys. Rev. B* **59**, 1758–1775 (1999).
35. Mostofi, A. A. *et al.* An updated version of wannier90: A tool for obtaining maximally-localised Wannier functions. *Comput. Phys. Commun.* **185**, 2309–2310 (2014).
36. Lee, H. J., Kim, C. H. & Go, A. Hund’s metallicity enhanced by a van Hove singularity in cubic perovskite systems. *Phys. Rev. B* **104**, 165138 (2021).
37. Boykin, T. B. & Klimeck, G. Practical application of zone-folding concepts in tight-binding calculations. *Phys. Rev. B* **71**, 115215 (2005).
38. Pedregosa, F. *et al.* Scikit-learn: Machine learning in Python. *J. Mach. Learn. Res.* **12**, 2825–2830 (2011).
39. Chen, T. & Guestrin, C. XGBoost: A scalable tree boosting system. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, KDD ’16, 785–794 (Association for Computing Machinery, 2016).
40. Ke, G. *et al.* LightGBM: A highly efficient gradient boosting decision tree. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS’17, 3149–3157 (Curran Associates Inc., 2017).
41. Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V. & Gulin, A. CatBoost: Unbiased boosting with categorical features. In *Proceedings of the 32nd International Conference on Neural Information Processing Systems*, NIPS’18, 6639–6649 (Curran Associates Inc., 2018).

## Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) under Grant No. NRF2021R1C1C1010429 (Y. Jang and A. Go) and Institute for Basic Science under Grants No. IBS-R009-D1 (C. H. Kim). We thank the Center for Theoretical Physics of Complex Systems (IBS-PCS) Advanced Study Group program for their support during this collaboration.

## Author information


### Contributions

All authors contributed significantly to this work.


## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this Article was revised: The original version of this Article contained an error in Figure 4, where a single layer was distorted.

## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Jang, Y., Kim, C.H. & Go, A. Classification of magnetic order from electronic structure by using machine learning.
*Sci Rep* **13**, 12445 (2023). https://doi.org/10.1038/s41598-023-38863-7

