Introduction

Artificial Intelligence holds the promise of a profound and far-reaching impact on experimental science by integrating theory and experiment in new ways1. Neutron scattering on quantum materials is an area where much progress can be expected1,2,3, with impact on the co-design of theory and experiment as well as on materials discovery and optimization. Achieving this requires the integration of simulations, data treatment and analysis, and theoretical interpretation3. Data science, and in particular machine learning (ML), has been proposed as a way to integrate scattering experiments with demanding state-of-the-art simulations2,4; however, effective schemes to do this remain to be demonstrated. Here, we deploy ML across the experimental pipeline, closely integrating theory and experiment in a way that could be used more widely for materials research. We apply it to a highly frustrated magnet that presents challenges representative of current state-of-the-art materials.

Traditionally, experiment planning, data treatment, and analysis have taken major efforts involving months of detailed work2,3,5. They have often relied on crude analytic approximations because of the difficulty of matching time-consuming and highly specialized simulations with experiment. Recently we have shown that machine learning, and in particular the application of Non-Linear Autoencoders (NLAEs), can be used to create automated capabilities for Hamiltonian extraction from diffuse neutron scattering data4,5. Further, this approach was demonstrated to provide robust parameter optimization, automated denoising and data treatment, and phase-diagram mapping and categorization.

Here we show how a complete integration could be achieved and present a scheme that integrates the experiments with theory and modelling on experimental timescales. The approach in ref. 4 is augmented with generative models that provide fast surrogates for expensive materials simulations. This allows analysis to be conducted during experiments, with the results feeding back into choices made by the experimenter. We demonstrate the key elements of ML that enable neutron experiments on the highly frustrated magnet Dy2Ti2O7 (refs. 4,5). This material shows complex physical behavior that requires sophisticated simulations to understand and thus stands as an ideal test case. While our approach could be integrated more closely into experiments by connecting to the data collection and experimental steering, the capabilities to do this are not yet in place at the Spallation Neutron Source, Oak Ridge National Laboratory, and a number of steps are still carried out manually. However, the study here provides a proof-of-principle for deeper integration of machine learning into neutron scattering pipelines, as well as allowing us to understand a complex condensed matter system on a rapid timescale.

Results

Neutron scattering experiments

The experimental neutron scattering pipeline can be abstracted into four aspects2, shown schematically in Fig. 1a. The first one, (I), consists of the modelling of the material under study and the design of the scattering experiment and its optimisation. In this stage the underlying hypothesis and possible theoretical models of the material to be studied are considered, and the experiments are planned accordingly. This determines the specific type of experiment that is going to be performed (elastic/inelastic scattering, small-angle scattering, etc.), the environment (temperature, pressure, external magnetic field, etc.), and guides the choice of specific instrument and sample environment (cryostats, pressure cells, magnets, etc.). The experiment is conducted in this stage and the output, the scattering data, is transferred to the next stage together with the specific instrument parameters and the set of possibly relevant model parameters. The second stage of this pipeline, (II), is the parameter space exploration and treatment of information. Here, one part involves identifying the relevant information contained in the experimental scattering data, eliminating experimental artefacts (such as extraneous scattering signal from the environment), de-noising the signal, and removing signals not relevant for the specific study (e.g. nuclear Bragg scattering in a diffuse scattering experiment). During this process, the model, if available, is also explored at length, identifying relevant sectors in parameter space. The pipeline then branches into two parallel aspects: one outcome of the experiment, (III), is the determination and prediction of structure and properties of the system under study; the other, (IV), involves refining a theoretical description of the system that allows for wide parameter space predictions and the construction of phase diagrams, maps that distinguish regions with distinctive properties.

Fig. 1: Schematic overview of machine-learning integration into the direct and inverse scattering problem.

a The ML workflow used here to drive the scattering experiment with automated data analysis, feeding back vital information. The workflow is split into four main sections: (I) scattering experiment design and optimization; (II) parameter space exploration and information compression; (III) structure or property predictions; and (IV) parameter space predictions. Section II links to both III and IV via the latent space, \(\mathcal{LS}\), a compressed version of the large pixel space. Dashed lines with a silhouette indicate parts of the flow that currently still require some human intervention. The latent space representations, S(L), are used in surrogates that bypass expensive calculations. b Schematic design of the surrogate model used to predict S(L) and S(Q) for a model with a given set of parameters, \(\mathcal{H}(p)\). It comprises a radial basis network, mapping parameter space to latent space, and a decoder to reconstruct S(Q) from latent space representations. The surrogate is trained on a set of S(L) obtained from a set of models at different parameters, \(\{\mathcal{H}(p)\}\), using Monte Carlo simulations and NLAE encoding. These surrogates are used for exhaustive searches of parameter space, identifying phases and phase transitions, and predicting optimal regions for experimental study. More simulations are done iteratively in the areas of interest, and the surrogates are retrained accordingly to improve their prediction accuracy.

Machine learning can play a significant role in all these stages, closely integrating data handling and analysis with modelling and theoretical phase space exploration, allowing for high-level on-site feedback during the experiment. The scheme presented here introduces a new element into this pipeline, the latent space, \(\mathcal{LS}\), which forms the backbone of the ML operation. This is a space of reduced dimensionality into which experimental data, simulations and predictions feed, and from which structure, property and model parameters are predicted (see Fig. 1a). The choice of the characteristics of this space is crucial: its reduced dimensionality drives data compression, a central concept of this design, suppressing experimental noise and removing artefacts. The experimental data and simulations are encoded into \(\mathcal{LS}\); its representations are then either decoded for human-readable comparison between experiments and modelling, or used directly by ML modules to construct a phase diagram of the system or to determine the model parameters that best fit experimental conditions. Processed results and improved modelling and predictions are fed back into instrument and experimental parameters. In the present implementation this feedback still requires some human intervention (indicated in the figure with human silhouettes), but the introduction of ML already enables real-time high-level prediction that could not be achieved otherwise. Parameter space exploration can be a formidable task, as the models required to understand technologically relevant materials increase in complexity. In the example presented here a conventional simulation for a single set of parameters can take about 330 CPU hours. In the ML implementation the generative network model (Fig. 1b) takes a set of parameters and directly returns an element of \(\mathcal{LS}\). In the current example this reduced the time to at most 0.1 CPU seconds per point. This enables real-time exhaustive phase space exploration with the computing power usually available to users at a neutron scattering instrument: a normal laptop or desktop computer.

In our scheme, the characteristics of the latent space are determined as an outcome of the training of the NLAE modules. Autoencoders were originally developed as effective tools for image compression and denoising in the context of computer vision. The aim of the NLAE here is to return a version of the original input in which noise is reduced and some experimental artefacts are removed. This is achieved by putting the information through a "double funnel": first the data is encoded into a space of reduced dimensionality, with the consequent loss of information; subsequently, it is decoded back into the original representation. The dimension of the intermediate space is minimised such that no relevant information is lost; the training ensures that the lost information corresponds to the noise and artefacts. The intermediate representation thus obtained defines the latent space \(\mathcal{LS}\), and the NLAE encoder and decoder networks become the input/output interfaces with it.

As it stands, the scheme with the integrated ML modules presented here, summarised in Table 1, allows for a qualitative change in the efficiency of neutron experiments. Fast calculations and parameter determination open up the possibility of wide parameter space exploration that feeds back in real time into the experimental parameters. The on-site categorization of theoretical phase diagrams and identification of experimental phases provides unprecedented information during the experiment that guides experimental choices of instrument parameters and sample environment. It is possible to envisage full automation for some preliminary investigations, where some of the feedback from stages III and IV to I is done autonomously. In its current implementation it is ideally suited to the investigation of complex materials, where direct scientific input is still required, delivering a qualitative change in the efficiency and breadth of the neutron scattering experiment.

Table 1 Summary of the machine learning modules in the workflow.

We will showcase the ML-aided approach by studying the behaviour under hydrostatic pressure of Dy2Ti2O7, a notable example of a magnetically frustrated material.

In geometrically frustrated materials, the dominant pairwise interactions cannot be simultaneously minimized due to constraints dictated by the arrangement of spins on the lattice. Intricate correlations, the result of these mutually competing ordering tendencies, become manifest in the ground states. Real materials are rich systems, with multiple magnetic interactions covering a broad range of energies and length scales. The neutralisation of the dominant forces leaves the ground open for minor players to determine the outcome. Frustration can thus be an avenue towards subtler, more exotic types of order at low temperatures, such as exponentially degenerate ground states6, fractionalized magnetic excitations7, a gigantic anomalous Hall effect8, and spin-glass and spin-liquid phases9.

Spin-ices can be described as classical ferromagnetic Ising spins on the cubic pyrochlore lattice (see Fig. 2a). The ground state of the system is an exponentially degenerate disordered state: a three dimensional spin-liquid with an emergent gauge field and fractionalised excitations10. In real materials, such as Dy2Ti2O7 (DTO) and Ho2Ti2O7 (HTO), the situation is more complicated and several magnetic interactions are necessary to account for experimental observations11,12.

Fig. 2: Structure of the magnetic system, phase map and generative model.

a In the frustrated material considered in this work, Dy2Ti2O7, the magnetic moments located on the Dy ions are constrained by crystal field interactions to point in or out of the tetrahedra. They form a corner-sharing pyrochlore lattice. Nearest-neighbor (1), next-nearest-neighbor (2) and two inequivalent next-next-nearest-neighbor (3 and 3') interactions are shown as thick colored lines. b Schematic diagram of the experimental parameters that can be used to tune the properties of frustrated materials and their associated effects. c A predicted map of magnetic orderings for varying J3 and \(J_{3'}\) with the remaining Hamiltonian parameters J1, J2, and D fixed to 3.41 K, 0 K and 1.3224 K, respectively. The coordinates of a three-dimensional latent space predicted by a Generator network have been converted to an RGB color code. A region with uniform color is expected to be structurally the same, and continuous color changes correspond to either crossovers or continuous transitions. d Comparison between high-symmetry-plane slices of simulated and surrogate-predicted S(Q) data at multiple points in parameter space (labelled from I to V) as indexed on panel (c).

The material of our choice, Dy2Ti2O7, is perhaps the cleanest spin-ice material, and as such it has been heavily studied. The system is under a delicate balance of interactions. Fits to experimental data result in a complex empirical Hamiltonian, \(\mathcal{H}(\mathbf{p})\), where the parameters, \(\mathbf{p}=(J_1,J_2,J_3,J_{3'},\mathcal{D})\), are: nearest-neighbour, next-nearest-neighbour, and two inequivalent third-nearest-neighbour exchange interactions (see Fig. 2a), and a magnetic dipolar interaction term13,14,

$$\mathcal{H}(\mathbf{p})=\sum_{\alpha=1,2,3,3'}J_\alpha\sum_{\langle i,j\rangle_\alpha}\mathbf{S}_i\cdot\mathbf{S}_j+\mathcal{D}\,r_1^3\sum_{i,j}\left[\frac{\mathbf{S}_i\cdot\mathbf{S}_j}{|\mathbf{r}_{ij}|^3}-\frac{3(\mathbf{S}_i\cdot\mathbf{r}_{ij})(\mathbf{S}_j\cdot\mathbf{r}_{ij})}{|\mathbf{r}_{ij}|^5}\right].$$
(1)
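
To make the model concrete, the sketch below evaluates the energy of eq. (1) for a single classical spin configuration. It is a minimal illustration, not the production solver: the neighbour lists, array layouts, and parameter names are assumptions, and a real implementation would handle the conditionally convergent dipolar sum with Ewald summation rather than the simple real-space cutoff used here.

```python
# Minimal sketch: energy of eq. (1) for one spin configuration (NumPy).
# Neighbour lists and names are illustrative; production codes use Ewald
# summation for the dipolar term instead of the hard cutoff r_cut.
import numpy as np

def energy(spins, positions, shells, p, r1, r_cut=10.0):
    """spins: (N, 3) unit vectors; positions: (N, 3), same length units as r1.
    shells: dict mapping 'J1', 'J2', 'J3', 'J3p' to (M, 2) index-pair arrays.
    p: dict of couplings in kelvin, keys 'J1', 'J2', 'J3', 'J3p', 'D'."""
    E = 0.0
    # Exchange part: sum_alpha J_alpha sum_{<i,j>_alpha} S_i . S_j
    for name in ('J1', 'J2', 'J3', 'J3p'):
        i, j = shells[name].T
        E += p[name] * np.einsum('kx,kx->', spins[i], spins[j])
    # Dipolar part: D r1^3 sum_{i<j} [S_i.S_j/r^3 - 3(S_i.r)(S_j.r)/r^5]
    i, j = np.triu_indices(len(spins), k=1)
    r = positions[i] - positions[j]
    d = np.linalg.norm(r, axis=1)
    keep = d < r_cut
    i, j, r, d = i[keep], j[keep], r[keep], d[keep]
    sisj = np.einsum('kx,kx->k', spins[i], spins[j])
    sir = np.einsum('kx,kx->k', spins[i], r)
    sjr = np.einsum('kx,kx->k', spins[j], r)
    E += p['D'] * r1**3 * np.sum(sisj / d**3 - 3.0 * sir * sjr / d**5)
    return E
```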

As expected for a frustrated system, even within a restricted region of parameter space there is an abundance of competing phases. Experimentally, the control of external parameters such as pressure, both uniaxial15,16 and hydrostatic17,18, doping19,20, and magnetic field can be used as a powerful tool to search for new ordered states in frustrated systems (see Fig. 2b). This opens a vast multi-dimensional space to be explored, and one where ML-enabled neutron scattering experiments can be groundbreaking.

Machine learning integration scheme

We now analyse in detail how machine learning can be integrated in the different stages of a neutron scattering experiment as represented in Fig. 1a.

Scattering experiment design and optimization

Effective design of an experiment involves setting up the instrument and measurement parameters to collect meaningful data to test underlying hypotheses or refine models. The initial hypothesis or model is used to determine the initial instrument parameters and experimental conditions. As measurements evolve there is a double feedback process: on one side, processed results at different stages are fed back (stage III to I) to change the instrument parameters (such as temperature, pressure or magnetic field); on the other, improved modeling and predictions feed back into the initial hypothesis, and subsequently into instrument and experiment parameters (stage IV to I). The latter requires a detailed analysis of the results and is usually unachievable in real time during experiments.

Combining hierarchical clustering in the latent space of the NLAE with the use of a generative model allows the rapid construction of hypothetical phase diagrams for the behavior of the material. Analysis of the data processed through the autoencoder, and its comparison with the measurement, allows differences with the model to be detected. Meanwhile, the variance of the values in latent space determines the degree to which distinguishing features are detected, giving a criterion for the sufficiency of counting statistics and measurement. Data sets at other conditions such as field, pressure, and temperature then provide validation of the model and the parameters determined. A pre-trained NLAE and generative model speed up the process to the point where feedback from fully processed results can also be used live in the experiment.

In the case of DTO, the pre-trained NLAE and GN [see Methods] allowed a finite-temperature phase diagram to be predicted (Fig. 2c), which shows finely balanced structures controlled by further-neighbor exchanges. In combination with previously considered phase effects4, trends anticipated from physical variables such as uniaxial and hydrostatic pressure, applied magnetic field, and doping can be projected out from such phase diagrams in terms of their control over phase stability, Fig. 2b. Hypotheses can then be constructed and targeted.

Here we hypothesize that the morphology of the recently proposed structural glass state21 and its related anomalous noise22 in spin ice should be tunable. This would provide a testbed for out-of-equilibrium behavior in a model magnet: an experimental scenario that would facilitate systematic study of long-standing questions regarding the breakdown of ergodicity. As indicated in Fig. 2b, applied field, doping, and uniaxial pressure are not expected to be effective, whereas hydrostatic pressure is expected to couple appropriately into the frustration between interactions. On this basis hydrostatic pressure was selected to tune DTO between phases, combined with temperature to map the development of irreversibility, and the results were analysed in real time to vary experimental parameters.

Information compression and parameter space exploration

The structure factor determined by a neutron experiment, \(S^{\exp}(\mathbf{Q})\), is the observed diffracted intensity at a given scattering vector Q and contains detailed information about the system, including correlations and the possible existence of long- and short-range-ordered structures. Traditionally, direct inspection of \(S^{\exp}(\mathbf{Q})\) and comparison with simulated structure factors, \(S^{\mathrm{sim}}(\mathbf{Q})\), were the tools used for extracting experimental information.

With machine learning, both of these processes can be greatly optimised. First, \(S^{\exp}(\mathbf{Q})\) and \(S^{\mathrm{sim}}(\mathbf{Q})\) can be considered volumetric images to be analysed and compared, to which the usual processes of compression and comparison in latent space apply. Second, the expensive process of calculating structure factors \(S^{\mathrm{sim}}(\mathbf{Q})\) from model parameters, usually performed using Monte Carlo simulations [see Methods], can be tackled more efficiently by means of a surrogate that directly generates compressed representations of the model information.

The first part requires an encoder. Large volumes of S(Q), of about \(10^6\)–\(10^8\) pixels covering several Brillouin zones, have to be considered. While similar structure factors are expected to indicate similar phase information in model and data, the vast Q-space dimensionality renders any direct analysis impractical. To address this, dimensionality reduction techniques are needed, e.g. Non-Linear Autoencoders (NLAE) or Principal Component Analysis (PCA), that compress the information into a latent space of much lower dimension (dL), of the order of \(10^0\)–\(10^2\), while preserving a one-to-one correspondence between the compressed S(L) and the original S(Q). PCA is a commonly used linear technique for dimensional reduction that involves computing the principal components of the input distribution and linearly transforming from the original basis. While a linear autoencoder essentially performs principal component analysis, the flexibility of a NLAE allows for further compression and more efficient representations of the same problem (see ref. 23 for a detailed comparison of these two approaches).
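
As a point of reference, a linear PCA compression of flattened S(Q) maps takes only a few lines; the array shapes below are illustrative stand-ins for a set of simulated structure factors.

```python
# Illustrative PCA baseline: compress flattened S(Q) maps to d_L components.
import numpy as np
from sklearn.decomposition import PCA

n_samples, n_pixels, d_L = 1000, 35721, 30
SQ = np.random.rand(n_samples, n_pixels)   # stand-in for simulated S(Q) maps
pca = PCA(n_components=d_L).fit(SQ)
SL = pca.transform(SQ)                     # compressed S(L), shape (1000, 30)
SQ_rec = pca.inverse_transform(SL)         # linear reconstruction of S(Q)
```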

For this work we trained a NLAE architecture comprising an Encoder and a Decoder [see Methods]. The Encoder takes a linearized version of S(Q) and compresses it into the lower-dimensional representation, S(L). The Decoder outputs a predicted structure factor, SAE(Q), for any S(L). The full NLAE architecture is used for the training of the autoencoder itself, minimizing the deviation between the input, a series of simulated structure factors \(S^{\mathrm{sim}}(\mathbf{Q})\), and the filtered output. The dimension of the latent space should strike a balance between overfitting and underfitting. Keeping dL relatively small prevents the autoencoder from fitting irrelevant noise in the training data. On the other hand, dL should be large enough to give the autoencoder the flexibility to capture the physically relevant characteristics of S(Q). Based on the dependence of the error on the validation set we determined dL = 30 for this work. As an initial approach we used a linearized version of S(Q), where all rows of the matrix are sequentially concatenated. This allows for a much simpler network architecture. Since neutron scattering data is a Fourier transform of a correlation function, local information in real space is not stored in neighbouring pixels of S(Q). The results with the linearized version are sufficiently good that more elaborate network architectures are not necessary for this kind of analysis.
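
A minimal Keras sketch of such an encoder–decoder pair is shown below. The hidden-layer width and the choice of optimizer are illustrative only (the Methods describe training with scaled conjugate gradient in MATLAB and Keras); the flattened S(Q) serves as both input and training target.

```python
# Minimal NLAE sketch in Keras: flattened S(Q) -> 30-dim S(L) -> S_AE(Q).
from tensorflow import keras
from tensorflow.keras import layers

n_pixels, d_L = 35721, 30
inp = keras.Input(shape=(n_pixels,))                  # linearized S(Q)
h = layers.Dense(512, activation='relu')(inp)         # encoder funnel
SL = layers.Dense(d_L, activation='sigmoid', name='latent')(h)  # S(L)
h = layers.Dense(512, activation='relu')(SL)          # decoder funnel
out = layers.Dense(n_pixels, activation='linear')(h)  # S_AE(Q)

autoencoder = keras.Model(inp, out)
encoder = keras.Model(inp, SL)        # Q-space -> latent space interface
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(SQ_train, SQ_train, validation_data=(SQ_val, SQ_val), ...)
```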

The second part consists of building a generative network model (GN), a surrogate to bypass computationally expensive direct solvers. The GN maps the model parameter space, \(\{\mathcal{H}(\mathbf{p})\}\), directly into S(L). This makes exhaustive searches possible and enables live experiment planning and parameter space mapping. These predictions depend on the degree of training of the network, on the topography of the phase space, and on the sparsity of the sampling. They do not fully replace simulations and should not be used to draw conclusions when detailed information is needed. These surrogates can also be used as the low-cost estimator in the iterative mapping algorithm workflow, as an alternative to the Gaussian Process Regression of ref. 4.

Figure 1b shows the design of the surrogate implemented in this work. A Radial Basis Network (RBN), labeled as Generator, is trained using Monte Carlo simulated data \(S^{\mathrm{sim}}(\mathbf{Q})\) [see Methods for details]. A quantitative measure of the predictive power of the GN is given in Supplementary Fig. 2, where a comparison is shown between the predictions for three vectors in \(\mathcal{LS}\) given by the GN (solid lines) and those calculated from the direct solver (symbols), as one of the model parameters is varied. The corresponding \(S^{\mathrm{sim}}(\mathbf{Q})\) were not included in the training of the GN or the NLAE. In the case exemplified, each MC simulated point takes of the order of 330 CPU hours to run, while the surrogate model takes less than a second.
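
The kind of exhaustive scan this enables is sketched below. Both 'generator' and 'decoder' stand for hypothetical trained models of the kind described above, and the parameter ordering and grid are illustrative, chosen to match the \(J_3\)–\(J_{3'}\) plane of Fig. 2c; each point costs milliseconds through the surrogate rather than ~330 CPU hours.

```python
# Sketch: exhaustive J3-J3' scan through the surrogate. 'generator'
# (p -> S(L)) and 'decoder' (S(L) -> S(Q)) are assumed trained models;
# parameter ordering (J1, J2, J3, J3') is illustrative.
import itertools
import numpy as np

J3_grid = np.linspace(-0.3, 0.3, 61)
J3p_grid = np.linspace(-0.3, 0.0, 31)
points = np.array([(3.41, 0.0, a, b) for a, b in
                   itertools.product(J3_grid, J3p_grid)])
SL_pred = generator.predict(points)   # latent predictions S(L) for all points
SQ_pred = decoder.predict(SL_pred)    # decoded S(Q), for human inspection
```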

The information from experiment and model (from both simulations and the GN), compressed into \(\mathcal{LS}\), is then transferred to processes III and IV.

Structure or property predictions

In the integration of ML into the scattering pipeline, the information encoded into \(\mathcal{LS}\) has to be decoded in order to provide some direct feedback into experimental planning (I), and to allow for human-readable comparison between experiments and modeling.

The decoder section of the previously trained NLAE serves this purpose. An example of human-readable data comparison is given in Fig. 2d. Here the MC calculated scattering patterns \(S^{\mathrm{sim}}(\mathbf{Q})\) (labelled Simulation) for five different points in parameter space are compared with the corresponding predictions from the GN model (labelled Surrogate). The five parameter points are scattered along the \(J_3\)–\(J_{3'}\) plane as indicated by the circled numbers shown in Fig. 2c. These points in parameter space are not part of the training set of the GN.

The comparison shows that the predictions are fairly good in most of the regions considered. They are expected to be worse in regions where parameter space is sparsely sampled and S(Q) is rapidly changing, for example where phase transitions or rapid crossovers occur, as is the case for the point labelled a. Here, more samples are needed in order to increase the prediction accuracy of the surrogate. This is a dynamical process, and the surrogate can be retrained on demand. Even though the prediction accuracy may be weak over some regions, the surrogates are still useful to locate regions with certain correlations that can later be verified with calculations using more time-demanding simulations.

Parameter space predictions

The last section of the ML pipeline also takes the latent space \(\mathcal{LS}\) as input. There are two main tasks within this section: Latent Space Optimization, i.e. solution of the inverse scattering problem, which provides feedback into the Model/Hypothesis of section I; and data auto-classification, to generate a phase diagram of the system.

Latent Space Optimization. The inverse scattering problem is usually ill-posed, and ML optimisation can make an important contribution to it. The determination of the model parameters p that best fit experiments involves a minimization in a multi-dimensional parameter space (d = 4 in our example). A ML-assisted scheme was recently introduced4 that can greatly improve optimization over traditional methods by working in the compressed \(\mathcal{LS}\) and introducing a new error measure, \(\chi^2_{S_L}\), defined as the sum of the squared distances between latent space vectors of experimental and simulated data. The problem was then treated within the framework of an Efficient Global Optimization algorithm. In this work we use an improved variant, an iterative mapping algorithm (IMA), that uses the GN module, with a different training, rather than a Gaussian process regression [see Methods for details].

In this approach, one iteratively constructs a dataset of carefully sampled Hamiltonians \(\mathcal{H}(\mathbf{p})\). For each of these, one calculates the simulated structure factor \(S^{\mathrm{sim}}(\mathbf{Q})\). The data is then converted to latent space by means of the NLAE encoder, and the deviation \(\chi^2_{S_L}\) from the data to be fitted is calculated. With all such data, one builds a low-cost regression model \(\hat{\chi}^2_{S_L}\) that predicts \(\chi^2_{S_L}\) for Hamiltonians not yet sampled. The low-cost model \(\hat{\chi}^2_{S_L}\) can then be rapidly scanned over the space of Hamiltonians. In this approach, \(\hat{\chi}^2_{S_L}\) is calculated with the RBN module used for the GN, now trained with the p and the corresponding \(\chi^2_{S_L}\) obtained from the simulations as inputs and target, respectively. This surrogate also acts as a denoiser, effectively "averaging out" uncorrelated stochastic errors in the \(\chi^2_{S_L}\) data.

The IMA process collects samples subject to the condition that \(\hat{\chi}^2_{S_L}\) is below an error tolerance threshold, CL. This threshold is reduced gradually from CL = 1 to \(C_L^{\mathrm{final}}=0.05\) over 10 steps. The NLAE and the RBN are then retrained to better fit the region towards which the process is focusing. As more data is collected, the prediction accuracy of the RBN towards the minimum of \(\chi^2_{S_L}\) becomes higher. Figure 3b shows 3D cuts of the DTO 4D parameter space. The coloured areas correspond to \(\chi^2_{S_L} < C_L^{\mathrm{final}}\) for different conditions (0 GPa, 1.3 GPa, etc.).
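
A schematic version of this loop is given below; every function name is a placeholder for machinery described elsewhere in the paper (the Monte Carlo solver, the NLAE encoder, and the RBN predictor), and the sampling strategy is simplified.

```python
# Schematic IMA loop (all callables hypothetical): shrink the tolerance
# C_L from 1 to 0.05 over 10 steps, simulate only candidates whose
# predicted chi^2 is below threshold, then retrain the RBN predictor.
import numpy as np

def chi2_latent(SL_exp, SL_sim):
    return np.mean((SL_exp - SL_sim) ** 2)   # (1/d_L) * sum_i (diff_i)^2

samples, targets = [], []
for C_L in np.geomspace(1.0, 0.05, 10):
    candidates = sample_parameter_space()             # propose new p vectors
    keep = candidates[rbn_predict(candidates) < C_L]  # cheap pre-screening
    for p in keep:
        SL_sim = encoder.predict(run_monte_carlo(p))  # expensive step
        samples.append(p)
        targets.append(chi2_latent(SL_exp, SL_sim))
    rbn_retrain(np.array(samples), np.array(targets)) # refine the predictor
```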

Fig. 3: Comparison of magnetic structure factor from both experiment and theory at different pressure and parameter fit.

a Three perpendicular slices of the 3D volumes of \(S^{\exp}(\mathbf{Q})\) are shown for 0 GPa (top) and for 1.3 GPa (bottom). Each panel is a side-by-side comparison of experiment (left) and simulation (right). All data was collected at T = 0.68 K. Notice that the simulations exclude nuclear Bragg peaks, which are not relevant for the magnetic structure investigated here. b Three 3D slices of the 4D solution manifolds with J3 = 0 K, J2 = 0 K and J1 = 3.41 K, respectively. The coloured contours denote the region in parameter space that best fits \(S^{\exp}(\mathbf{Q})\) at 0 GPa (blue) and 1.3 GPa (red). Black contours denote the combined uncertainty for the fit to \(S^{\exp}(\mathbf{Q})\) and the heat capacity (Cv) at 0 GPa. The light blue plane indicates the usually accepted value of J1 = 3.41 K. Both J1 and 3J2 − J3 are ill-constrained by \(S^{\exp}(\mathbf{Q})\), yet the relation J2 + 3J3 = −0.0394 K holds for both datasets (0 GPa and 1.3 GPa). Combining \(S^{\exp}(\mathbf{Q})\) with Cv further reduces the uncertainties of the solution. Unfortunately, it is extremely hard to measure the heat capacity of a material under pressure. However, the uncertainty along \(J_{3}'\) is low enough to clearly resolve a difference of ≈3 mK in \(J_{3}'\) as a hydrostatic pressure of 1.3 GPa is applied.

Autoclassification and Phase Diagram Generation. One of the aims of a scattering experiment is to determine a phase diagram for the system, a map of the different types of order the system displays under different circumstances. The correlations of the system are encoded in S(Q) (and consequently in S(L)), and parameter sets corresponding to the same structure cluster together in either Q- or L-space. Thus, archetypal hierarchical clustering can easily be used to classify the main phases. However, such clustering analysis fails when the system undergoes continuous transitions or crossovers rather than abrupt first-order-like changes. In this case it is still possible to construct a graphical phase diagram easily. \(\mathcal{LS}\) can be further reduced by an additional NLAE with an output into a three-dimensional latent space, \(\mathcal{LS}'\), and the resulting vectors treated as the RGB color components of a phase map. Alternatively, the Q-space can be directly reduced to an \(\mathbf{L}'\)-space with dL = 3 (see ref. 5).

In the case studied here, the initial \(\mathcal{LS}\), with dL = 30, was further reduced by a second NLAE into \(\mathcal{LS}'\) with \(d_L'=3\). Also, a separate GN was trained with p as the input and \(S(\mathbf{L}')\) as the target. Figure 2c shows a map of the magnetic orderings (phase map) generated in this manner by varying J3 and \(J_{3'}\).
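
A sketch of this colour-coding step is given below, reusing the parameter grid from the earlier scan sketch; 'generator_3d' stands for the hypothetical GN trained on the three-dimensional latent space, and the normalisation is an illustrative choice.

```python
# Sketch: render 3-dimensional latent vectors S(L') on a J3-J3' grid as
# an RGB phase map ('generator_3d' and normalisation are illustrative).
import numpy as np
import matplotlib.pyplot as plt

SLp = generator_3d.predict(points)                        # shape (n_points, 3)
rgb = (SLp - SLp.min(0)) / (np.ptp(SLp, axis=0) + 1e-12)  # channels -> [0, 1]
img = rgb.reshape(len(J3_grid), len(J3p_grid), 3)
plt.imshow(img, origin='lower', extent=[0.0, -0.3, -0.3, 0.3], aspect='auto')
plt.xlabel("J3' (K)")
plt.ylabel("J3 (K)")
plt.show()
```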

Neutron scattering of Dy2Ti2O7 under pressure

For the case study neutron experiment, an isotopically enriched single crystal sample of Dy2Ti2O7 was prepared and used to perform neutron diffuse scattering experiments at the Elastic Diffuse Scattering Spectrometer CORELLI at the Spallation Neutron Source, Oak Ridge National Laboratory, under zero- and finite-pressure conditions (see Methods and Supplementary Note 1 for details).

Figure 3a shows the magnetic structure factor for three perpendicular slices in reciprocal space at 0 GPa (upper row) and 1.3 GPa (lower row), all taken at a temperature T = 680 mK. The temperature was chosen to be low enough for correlations to be well developed but sufficiently high to reach equilibrium over a short time scale (see e.g. ref. 22). Each panel shows a comparison between the experimental data (left) and the patterns simulated using the ML-optimised parameters (right). In all cases there is very good agreement between experiment and simulation. Notice that the simulations intentionally exclude nuclear Bragg peaks, which are not relevant for the magnetic structure investigated here. A comparison between the 0 GPa and 1.3 GPa data shows minor changes, corresponding to a slight sharpening with pressure of features already present at 0 GPa, but the structures are qualitatively identical. Pressure enhances short-range correlations, but does not induce long-range order.

The ML processing allows for a quick and effective determination of the variation in the Hamiltonian parameters (see Supplementary Notes 2 to 6 for details). The variation of the lattice parameter with pressure is negligible up to 1.3 GPa, as determined from the nuclear Bragg peaks. This means that no variation in the dipolar interaction parameter D needs to be considered, and the parameter space becomes effectively four-dimensional. Figure 3 shows different three-dimensional cuts of this four-dimensional parameter space. The light blue and red volumes correspond to the regions in parameter space where \(\chi^2_{S_L}\) is minimised for 0 GPa and 1.3 GPa, respectively, as determined from S(Q) using the IMA. In the case of ambient pressure, this volume can be further reduced by also considering specific heat data (dark blue region). While S(Q) leaves both J1 and the combination 3J2 − J3 ill-constrained, it is clear that there is no overlap between the optimal \(\chi^2_{S_L}\) volumes, and that the effect of pressure is to induce a shift in the value of \(J_{3}'\) of ≈3 mK. The optimal parameters for 1.3 GPa are marked as white circles in the \(J_3\)–\(J_{3}'\) phase diagram of Fig. 2c. Pressure moves the system deeper into the blue region, where only short-range correlations arising from subsets of ice states are present.

The parameters determined for 1.3 GPa can be validated by looking at the temperature dependence of S(Q). Figure 4 shows a 2D slice of S(Q) in the [l, l, l] − [k, k, −2k] plane for six temperatures (between 300 mK and 1.5 K). Each panel is a side-by-side comparison of the experiment, on the left-hand side, and the simulation, on the right-hand side, using the parameters determined with the ML optimisation at T = 680 mK. The agreement between model and experiment at all temperatures is very satisfactory.

Fig. 4: Validation over temperature dependence and prediction of scattering data.

Two-dimensional slices of [l, l, l] − [k, k, −2k] at six temperatures: (a) 300 mK, (b) 400 mK, (c) 500 mK, (d) 680 mK, (e) 900 mK and (f) 1.5 K. Each panel is a side-by-side comparison of experiment (left) and simulation (right). All the experimental data shown here were collected at 1.3 GPa. The parameters for the simulations are J1 = 3.3(3) K, J2 = −0.079(8) K, J3 = 0.010(4) K, \(J_{3}'\) = 0.075(5) K and D = 1.3224(1) K. g The evolution of the calculated \(S^{\mathrm{sim}}(\mathbf{Q})\) as a function of \(J_{3}'\) with the other exchange parameters fixed at J1 = 3.33 K, J2 = −0.05 K, J3 = 0 K and D = 1.3224 K.

The effective Hamiltonian obtained using ML has predictive power and can be used to explore the consequences of a further increase in \(J_{3}'\). Figure 4g shows the evolution of a cut in [l, l, l] − [k, k, −2k] of S(Q) as \(J_{3}'\) is gradually increased from 0 towards 0.3 K with the other parameters fixed. The features sharpen, but no proper long-range order (LRO) is established.

Discussion

The results here show that the application of machine learning in the planning, collection, analysis, and interpretation of neutron experiments and data provides a powerful capability. It is easy to imagine the extension of this approach to inelastic data and other codes.

Instrumental effects

The effects of instrumental resolution are straightforward in the case of diffuse scattering, where the signal is already broad and so is not strongly affected. In other applications resolution will be more challenging. Surrogates based on Monte Carlo ray-tracing simulations of instruments should be able to provide fast capabilities for the computation of resolution effects. Codes are available, e.g. McSTAS24 and McVINE25, with which simulations over ranges of cases can be made available for training.

Data compression

Non-linear autoencoders are successful at efficiently compressing input from diffuse scattering. Other architectures could provide even more effective training. For example, Euclidean Neural Networks (ENNs) can exploit the symmetries of crystalline samples, which may make them trainable with far fewer simulations26. Certainly, the relatively small dimensionality of the latent space here implies a high degree of compression, which a network that encodes some physics may learn very effectively for complex cases. These also open the way for more generic training over a wide range of Hamiltonians as well as over experimental data itself. Such an ENN could form the basis of a more model-agnostic compression of experimental data. Combined with Bayesian analysis, this would provide a basis for assessing experimental data for more autonomous steering of experiments.

Data processing

A well-trained NLAE, as utilized here, successfully filters experimental background and artifacts. Alternative approaches, suitable for cases where less precise models are available, involve identifying background and artifacts more directly. Generally these will produce signals that do not correspond to the underlying symmetries and physical constraints, so physics-informed networks such as ENNs should provide good discrimination. Another approach is to use both measured background data sets and simulations to train machine learning to identify and remove these signals. Generally, instrumental backgrounds correspond to a limited number of processes, such as scattering from sample environments or phonons from sample mountings.

Integrated workflows

The simulations and training of the machine learning components, as well as the data processing, require coordination over data and computing resources. Current instrument control systems at the Spallation Neutron Source and High Flux Isotope Reactor at Oak Ridge National Laboratory are able to integrate feedback from analysis. However, the scheduling and management of the simulations and of the machine learning data processing require an infrastructure that can combine edge and high-performance computing resources. This means working over a delocalized network, where specialized codes will be required to run at other institutions. These developments will be needed if machine learning is to transform experiments beyond the simpler cases of diffraction and small-angle scattering, where large databases and standard codes are available.

Modeling

A large number of approaches to the simulation of magnetic structure and dynamics are available. Our work for elastic and inelastic scattering has utilized Monte Carlo and Landau–Lifshitz approaches, which cover a wide range of cases5. Spin-wave theory provides fast computation for simple magnon dynamics, and codes are available27. More interestingly, machine learning holds the promise of being able to interface sophisticated simulations28 that include quantum effects, including correlations, which are important to quantum materials. Examples are density matrix renormalization group29, quantum Monte Carlo30, dynamical mean field theory31 and dynamic cluster approximation approaches, to name but a few. Efficient training is of paramount importance as these are computationally expensive methods to run. However, bringing such state-of-the-art theoretical methods closer to experiment would undoubtedly have a significant impact on our understanding of quantum materials. Further, artificial intelligence is being used to accelerate these methods and can be expected to facilitate their integration with experiment in the foreseeable future.

Applications

The general approach of integrating machine learning into neutron scattering is potentially applicable to a wide range of science cases. A wide array of diffuse scattering problems could be approached this way, with large-scale atomic simulations being an obvious starting point. Inelastic data are also well suited. Crystal field measurements should be considered and may well be open to a significant degree of automation of measurements and analysis. Powder inelastic scattering is also well suited, and could have a significant impact on throughput and the time needed to understand materials. Finally, single crystal inelastic scattering from quantum magnets, itinerant and superconducting materials, and anharmonic phonons are obvious targets.

Pressure control of a magnetic glass

In the case study experiment undertaken here, the results show that the further-neighbor couplings of Dy2Ti2O7 are successfully tuned by hydrostatic pressure. The applied pressure of ≈1.3 GPa perturbs the system, modifying the nanoscale magnetic order relative to ambient pressure. The morphology of the resulting glass21 restricts monopole pathways and is a key property reflecting the phase-ordering kinetics. Recently, we have proposed spin ices as model systems to explore glass formation systematically21,22. The results here are promising; diffuse neutron studies reaching ~7 GPa are feasible. Over such a range, significant variation would be attainable and the role of subtle interactions in the development of non-equilibrium phases rigorously testable. Anomalous dynamics, such as colored noise spectra, are signatures of memory effects22 and, when combined with diffuse scattering, give a comprehensive characterization of the state. The systematics could then provide a powerful connection between theory and experiment for the long-standing and difficult problems involving the fundamentals of glass and correlated liquid behavior; a connection made possible by ML-based approaches.

In this paper we have proposed a scheme for the application of machine learning to neutron scattering that enables high-level real-time feedback. The approach uses non-linear autoencoders to compress, into a latent space, training data from computationally expensive simulations of neutron scattering data. A generative model provides fast calculations that allow identification of areas of interest and experiment planning. Hierarchical clustering provides categorization of theoretical phase diagrams and identification of experimental phases from measurements. The NLAE also provides capabilities for accurate parameter determination and data treatment/handling. We explore these capabilities on the highly frustrated magnet Dy2Ti2O7 under pressure. This material has complex physical behavior which can be extracted rapidly from the combined measurements and simulation-based data analytics. Our analysis shows that hydrostatic pressures of up to 1.3 GPa are able to modify the magnetic interactions of the material, leading to the prediction that substantially higher pressures may cause a magnetic phase transition. This provides a route to a pressure-tunable structural glass.

Methods

Characteristics and training of the Autoencoder

For the construction and training of the NLAE we follow the approach introduced in ref. 4. We use a non-linear autoencoder (NLAE) composed of two networks: an Encoder and a Decoder32. The Encoder takes a linearized version of S(Q) (either simulated or experimental) and compresses it into a lower-dimensional representation, S(L). This latter space is referred to as the latent space, \(\mathcal{LS}\), and its dimensionality is determined during the hyper-parameter tuning step of the training4. The Decoder network returns a structure factor, SAE(Q), for any provided S(L).

If the encoder and decoder are used in tandem, the resulting structure factor SAE(Q) captures the essence of the input S(Q), removing irrelevant information, including noise and artefacts if present. The dimension of the latent space needs to be tuned so that the NLAE acts as a 'poor identity' through which not all information can be carried. The aim of the training is to ensure that the lost information corresponds to noise and artefacts. This is obtained in part by using simulated data, with low noise levels and no experimental artefacts, for the training.

Training. Previous training on elastic neutron scattering signals determined the optimal dimension of the latent space to be dL = 30 (see ref. 4). A NLAE with a 30-dimensional latent space was trained using simulated structure factors, \(S^{\mathrm{sim}}(\mathbf{Q})\), obtained by means of the direct solver (see the relevant Methods section). For the training, the dataset has to be sufficiently broad to cover all potentially important characteristic scattering features.

For DTO our training data consisted of 1000 model Hamiltonians of the form of eq. (1), labelled by their coupling parameters p. We used the direct solver (DS) to calculate an equilibrated three-dimensional \(S^{\mathrm{sim}}(\mathbf{Q})\) for each \(\mathcal{H}(\mathbf{p})\) and split these into two groups: 90% were used as the training dataset and the remaining 10% as test data. The model parameter space was sampled iteratively through the IMA within the parameter ranges J1 = 3 K to 3.8 K, J2 = −0.5 K to 0.5 K, J3 = −0.3 K to 0.3 K and \(J_{3}'\) = 0 to −0.3 K. The autoencoder minimizes the deviation between its input S(Q) and its output SAE(Q), summed over all random models in the dataset. The loss function used in this minimisation is

$$\mathcal{L}=\frac{1}{N}\sum_{\{p\},\mathbf{Q}}m(\mathbf{Q})\,\left(S(\mathbf{Q})-S_{\mathrm{AE}}(\mathbf{Q})\right)^2+\frac{\lambda}{2}\sum_{\{p\},\mathbf{Q}}W^2+\beta\sum_{D}\mathrm{KL}(\rho\,\|\,\hat{\rho}_D).$$
(2)

Here m(Q) = 0, 1 can be used to mask some experimental artefacts. The second and third terms are two types of regularisation: the first on the weight matrix, W, and the second a Kullback–Leibler regularisation on latent space sparsity. \(\hat{\rho}_D\) is the average activation value of the hidden-layer neurons and ρ the desired average activation value, set to 0.05 (see ref. 4 for details). To find the network parameters that minimize \(\mathcal{L}\), we have used the scaled conjugate gradient descent algorithm, both in MATLAB and in Keras33, a deep learning API written in Python. Although more complex architectures, such as multilayer convolutional neural networks (CNNs) or variational autoencoders, could have been used, the simple version used here works well for the S(Q).
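
For illustration, a TensorFlow version of eq. (2) might look as follows; λ and β are hyperparameters whose values are not given in the text and are set here arbitrarily.

```python
# Sketch of the loss in eq. (2) (TensorFlow). lam and beta are illustrative;
# rho = 0.05 as stated in the text.
import tensorflow as tf

def nlae_loss(SQ, SQ_ae, mask, weights, latent, lam=1e-4, beta=1.0, rho=0.05):
    n = tf.cast(tf.shape(SQ)[0], tf.float32)
    recon = tf.reduce_sum(mask * tf.square(SQ - SQ_ae)) / n      # masked MSE
    l2 = 0.5 * lam * tf.add_n([tf.reduce_sum(tf.square(w)) for w in weights])
    rho_hat = tf.reduce_mean(latent, axis=0)       # mean hidden activations
    kl = tf.reduce_sum(rho * tf.math.log(rho / rho_hat)
                       + (1 - rho) * tf.math.log((1 - rho) / (1 - rho_hat)))
    return recon + l2 + beta * kl                  # eq. (2), term by term
```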

Building and training of the Surrogate

Building a surrogate to bypass computationally expensive direct solvers is essential for drawing exhaustive information from existing results. The surrogate maps between the model parameter space, \(\{\mathcal{H}(\mathbf{p})\}\), and the corresponding structure factors, S(Q). It is composed of two parts: a Generator and a NLAE Decoder (see Fig. 1b).

The Generator is a Radial Basis Network (RBN) trained to predict latent space representations, Ssur(L), for an input \(\mathcal{H}(\mathbf{p})\) with a given set of parameters, p. These are then fed through the Decoder network of the trained NLAE to calculate \(S_{\mathrm{AE}}^{\mathrm{sur}}(\mathbf{Q})\).

The RBN of the Generator is composed of two layers: an input layer of radial basis (RB) neurons followed by an output layer of logistic neurons. The latent space predictions, Ssur(L), for a given set of parameters p are defined as:

$$S^{\mathrm{sur}}(L_i)=f_2\left(\sum_j w_{ij}^{(2)}h_j(\mathbf{p})+b_i^{(2)}\right)$$
(3)

where f2 is the logistic activation function \(f_2(x)=1/(1+e^{-x})\), similar to the one used in the output layer of the NLAE encoder, W(2) is the weight matrix, and b(2) is the bias vector of the output layer. Here,

$$h_i(\mathbf{p})=\exp\left[-\frac{\sum_j (p_j-c_{ij})^2}{\sigma^2}\right].$$
(4)

The values of the weight matrix elements \({w}_{ij}^{(2)}\), the bias vector b(2) and the clustering centers cij of the RB layer are determined in the training process. The loss function in this case is defined as,

$$\mathcal{L}=\sum_{\{p\}}\left[\frac{1}{N_\nu}\sum_{\nu}\left(S^{\mathrm{sim}}(L)-S^{\mathrm{sur}}(L)\right)^2\right]$$
(5)

and is minimized using the Adam optimization algorithm for a preset number of radial basis neurons. The spread of the RB functions, σ, is preset to 0.05. The network is trained using the outputs of the direct solver, \(S^{\mathrm{sim}}(\mathbf{L})\), as the target and the corresponding p as the input (see Fig. 1b). Thus the input and output dimensionalities are set by the dimensionality of \(\{\mathcal{H}(\mathbf{p})\}\) and of the latent space, respectively. The number of neurons in the RB layer is determined during the training process. The training starts with no neurons in the hidden layer and iteratively adds neurons to minimize the error between output and target, terminating when the prediction error falls below a preset threshold. In our runs this number is between 20 and 30 neurons. This step was initially implemented in MATLAB using the newrb function of the Deep Learning Toolbox. We have also implemented it in Python using Keras.
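
In NumPy, the forward pass of eqs. (3) and (4) reduces to a few lines; the array shapes below are assumptions consistent with the text (σ = 0.05, 20–30 RB neurons).

```python
# NumPy sketch of the Generator forward pass, eqs. (3)-(4).
import numpy as np

def rbn_forward(p, centers, sigma, W2, b2):
    """p: (d_p,) parameters; centers: (n_rb, d_p) cluster centres c_ij;
    W2: (d_L, n_rb) output weights; b2: (d_L,) biases. Returns S^sur(L)."""
    h = np.exp(-np.sum((p - centers) ** 2, axis=1) / sigma ** 2)  # eq. (4)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))                   # eq. (3)
```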

The complete surrogate to predict S(Q) for a given parameter set, p, is built by joining the RBN to the decoder part of the NLAE trained in a prior step.

Monte Carlo based Direct Solver for scattering \({S}^{{{{{{{{\rm{sim}}}}}}}}}({{{{{{{\bf{Q}}}}}}}})\)

We use a standard Monte Carlo based direct solver for the neutron scattering (see e.g. ref. 4). In DTO the individual vector spins \(\mathbf{S}_i=[S_i^x,S_i^y,S_i^z]\) are at positions Ri which are located on the pyrochlore lattice and behave as classical Ising spins that can point in or out of the tetrahedra. The energy due to interactions is given by the spin Hamiltonian, \(\mathcal{H}(\mathbf{p})\), of equation (1). The set of interactions spans \(\mathbf{p}=(J_1,J_2,J_3,J_{3'},\mathcal{D})\). For our simulations we fix \(\mathcal{D}=1.3224\) K.

Realistic spin configurations can be prepared based on the Metropolis algorithm, a Markov Chain Monte Carlo method34. The Metropolis algorithm anneals the configuration of spins to be representative of the system in thermal equilibrium at a chosen temperature T. From these configurations a full range of physical properties can be calculated.
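
A schematic single-spin-flip Metropolis sweep for the Ising pseudo-spins is sketched below; the energy-difference callback is a placeholder for the Hamiltonian of eq. (1), and production spin-ice codes also use loop (ring-exchange) updates to remain ergodic at low temperature.

```python
# Schematic Metropolis sweep for Ising pseudo-spins sigma_i = +/-1
# (S_i = sigma_i times the local <111> axis). 'energy_change' is a
# placeholder returning the cost dE of flipping spin i under eq. (1).
import numpy as np

def metropolis_sweep(sigma, energy_change, T, rng=None):
    rng = rng or np.random.default_rng()
    N = len(sigma)
    for i in rng.integers(0, N, size=N):      # N attempted flips per sweep
        dE = energy_change(sigma, i)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            sigma[i] = -sigma[i]              # accept the flip
    return sigma
```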

The diffuse scattering from the magnetic system is (approximately) proportional to the cross section35:

$$\frac{d\sigma}{d\Omega}=r_m^2\sum_{\alpha,\beta}\frac{g_\alpha g_\beta}{4}\left(\delta_{\alpha\beta}-\frac{q_\alpha q_\beta}{q^2}\right)|F(\mathbf{Q})|^2\,S^{\alpha\beta}(\mathbf{Q})$$
(6)

where Q is the wavevector transfer in the scattering process, rm is a scattering factor, α, β = x, y, z are cartesian coordinates indicating the initial and final spin polarization of the neutron, F(Q) is the magnetic form factor and \(S^{\alpha\beta}(\mathbf{Q})\) is the scattering factor correlation function:

$$S^{\alpha\beta}(\mathbf{Q})=\frac{1}{2\pi N}\left|S_Q^{\alpha}S_{-Q}^{\beta}\right|$$
(7)

with

$$S_Q^{\alpha}=\sum_i S_i^{\alpha}(t_n)\,e^{i\mathbf{Q}\cdot\mathbf{R}_i}.$$
(8)

The actual measured cross sections \(S^{\exp}(\mathbf{Q})\) depend on experimental conditions, including resolution. The direct solver then undertakes the transformation \(\mathcal{H}(\mathbf{p})\to S^{\mathrm{sim}}(\mathbf{Q})\) to calculate the expected scattering signal for the given model and parameters.
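
Eqs. (7) and (8) translate directly into array operations; the sketch below computes them for one spin configuration on a list of Q points (thermal averaging over many equilibrated configurations, and the polarization and form-factor weights of eq. (6), are applied separately and omitted here).

```python
# Sketch of eqs. (7)-(8): spin Fourier transform and correlation for one
# configuration. Q: (n_Q, 3) wavevectors in inverse length units.
import numpy as np

def s_alpha_beta(spins, positions, Q):
    """spins: (N, 3); positions: (N, 3). Returns S^{ab}(Q), shape (n_Q, 3, 3)."""
    N = len(spins)
    phase = np.exp(1j * Q @ positions.T)    # e^{i Q . R_i}, shape (n_Q, N)
    SQ = phase @ spins                      # eq. (8): S_Q^a, shape (n_Q, 3)
    return np.abs(np.einsum('qa,qb->qab', SQ, SQ.conj())) / (2 * np.pi * N)  # eq. (7)
```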

Iterative mapping algorithm (IMA)

To find the optimal values of the parameters p that best describe a given S(Q) we use an iterative mapping algorithm (IMA), a variant of the Efficient Global Optimization algorithm. A dataset of carefully sampled Hamiltonians \(\mathcal{H}(\mathbf{p})\) is constructed iteratively. For each of these, \(S^{\mathrm{sim}}(\mathbf{Q})\) is calculated over a three-dimensional volume of reciprocal space (21 × 81 × 21 = 35721 pixels) using a Monte Carlo algorithm (see Methods: Monte Carlo based direct solver). The data is then converted to latent space by means of the NLAE encoder and the deviation from the data to be fitted is calculated: \(\chi^2_{S_L}=\frac{1}{d_L}\sum_i (S^{\exp}(L_i)-S^{\mathrm{sim}}(L_i))^2\). With this data a low-cost regression model \(\hat{\chi}^2_{S_L}\) is calculated that predicts \(\chi^2_{S_L}\) for Hamiltonians not yet sampled at a much smaller computational cost (from ~330 CPU hours to 0.1 CPU seconds per point). This low-cost predictor uses the same RBN architecture as the GN used to predict S(Q). The training in this case uses the \(\chi^2_{S_L}\) values calculated from a set of simulated Hamiltonians as the target, and their parameters, p, as the input. The low-cost predictor is defined as \(\hat{\chi}^2_{S_L}=\frac{1}{d_L}\sum_i (S^{\exp}(L_i)-S^{\mathrm{sur}}(L_i))^2\), where Ssur(Li) is the output of the GN. If a Gaussian Process Regression is used to predict \(\chi^2_{S_L}\) instead, the computational expense increases exponentially with the number of model parameters and becomes impractical beyond six.

The IMA process collects samples subject to the condition that \(\hat{\chi}^2_{S_L}\) is below an error tolerance threshold, CL. The NLAE and the RBN are iteratively retrained to better fit the region towards which the process focuses. The IMA was run to fit the DTO data measured at 680 mK, zero field and ambient pressure before the pressure experiment. The pre-trained RBN was then used in real time to analyze the data at 680 mK and 1.3 GPa (data in Fig. 3) and to predict model parameters with uncertainty. The ellipsoidal approximation to the uncertainty corresponding to \(\chi^2_{S_L}\)(1.3 GPa) < CL (red) is shown in Fig. 3. The \(S^{\mathrm{sim}}(\mathbf{Q})\) for the predicted parameters of each data set is also shown in Fig. 3, compared to experimental data. As shown in Fig. 4, the predicted parameters for the 1.3 GPa data were then validated against the S(Q) datasets collected at multiple temperatures.

Experiment details

Crystal growth

For this work we used an isotopically enriched single crystal sample of Dy2Ti2O7, grown using a floating-zone mirror furnace (see ref. 4). The single crystal was cut and polished into a cylinder. The diameter, height, and mass are 1.8 mm, 5 mm, and 76.6 mg, respectively. The cylindrical axis of the sample is aligned with the [h,−h,0] crystallographic direction. The polished crystal was kept inside a Teflon tube filled with Fluorinert FC-770 as a pressure transmission medium, and a copper-beryllium cell was used to apply hydrostatic pressure. A load of 0.9 tonnes was applied using a hydraulic press, and the resulting pressure at the sample is estimated as ~1.3 GPa, according to Fig. 4 of ref. 36.

Diffuse neutron scattering experiment

The Elastic Diffuse Scattering Spectrometer CORELLI at the Spallation Neutron Source, Oak Ridge National Laboratory, was used to perform experiments under zero- and finite-pressure conditions37. The ambient pressure and 1.3 GPa experiments were performed in two different beam times with different experimental setups. A dilution refrigerator insert was used in both cases to enable measurements down to 100 mK. A cryomagnet was used in the ambient pressure experiment, and two datasets, at temperatures of 100 mK and 680 mK, were collected in zero field4. In the pressure experiment, the loaded pressure cell was rotated through 360 degrees in steps of 3 degrees horizontally in the [H,H,L] plane of reflection at a fixed temperature. Measurements were repeated at seven temperatures: 300 mK, 400 mK, 500 mK, 680 mK, 900 mK, 1.5 K, and 19 K. The data was reduced using Mantid [24] and Python scripts available at CORELLI.