Active learning-assisted neutron spectroscopy with log-Gaussian processes

Neutron scattering experiments at three-axes spectrometers (TAS) investigate magnetic and lattice excitations by measuring intensity distributions to understand the origins of materials properties. The high demand for and limited availability of beam time for TAS experiments, however, raise the natural question of whether we can improve their efficiency and make better use of the experimenter's time. In fact, there are a number of scientific problems that require searching for signals, which may be time-consuming and inefficient if done manually due to measurements in uninformative regions. Here, we describe a probabilistic active learning approach that not only runs autonomously, i.e., without human interference, but also directly provides locations for informative measurements in a mathematically sound and methodologically robust way by exploiting log-Gaussian processes. We demonstrate the resulting benefits in a real TAS experiment and in a benchmark including numerous different excitations.

Neutron three-axes spectrometers (TAS)1 enable understanding the origins of materials properties through detailed studies of magnetic and lattice excitations in a sample. Developed in the middle of the last century, the technique remains one of the most significant in fundamental materials research and was therefore awarded the Nobel Prize in 19942. It has been used to investigate the most interesting and exciting phenomena of its time: the cause of different crystal structures in iron3, unconventional superconductors4, quantum spin glasses5, quantum spin liquids6, and non-trivial magnetic structures7.
TAS are operated worldwide at neutron sources of large-scale research facilities and measure scattering intensity distributions in the material's four-dimensional Q-E space, i.e., in its momentum space (Q) for different energy transfers (E)8, by counting scattered neutrons on a single detector. However, high demand and limited availability make beam time at TAS a valuable resource for experimenters. Since, furthermore, the intensity distributions of the aforementioned excitations have an information density that strongly varies over Q-E space, due to the direction of the underlying interactions and the symmetry of the crystal, and TAS measure sequentially at single locations in Q-E space, it is natural to ask if and how we can improve the efficiency of TAS experiments and make better use of the experimenter's time.
In experimental workflows at TAS, there are scenarios where the intensity distribution to be measured is not known in advance and therefore a rapid overview of it in a particular region of Q-E space is required. So far, experimenters then decide manually how to organize these measurements in detail. In this mode, however, there is a possibility, depending on the specific scenario, that measurements do not provide any further information and thus waste beam time because they are placed in the so-called background, i.e., regions with either no or parasitic signal. If there were computational approaches that autonomously, i.e., without human interference, placed measurements mainly in regions of signal instead of background in order to acquire more information on the intensity distribution in less time, not only would the use of beam time be optimized, but experimenters could also focus on other relevant tasks in the meantime.
The potential of autonomous approaches for data acquisition has been recognized throughout the scattering community in recent years. gpCAM, for example, is an approach that is also based on Gaussian process regression (GPR) and applicable to any scenario in which users can specify a reasonable acquisition function to determine the locations of the next measurements9,10.
It was originally demonstrated on small-angle X-ray scattering (SAXS) and grazing-incidence small-angle X-ray scattering (GISAXS) applications. However, it was also applied to a TAS setting recently11. In reflectometry, experiments can be optimized by placing measurements at locations of maximum information gain using Fisher information12,13. Furthermore, the maximum a posteriori (MAP) estimator of a quantity of interest in a Bayesian setting14,15 can be used to accelerate small-angle neutron scattering (SANS) experiments16. Moreover, neutron diffraction experiments have been shown to benefit from active learning by incorporating prior scientific knowledge of neutron scattering and magnetic physics into GPR for on-the-fly interpolation and extrapolation17. Materials synthesis and materials discovery have also been considered as potential fields of application18,19. Interestingly, log-Gaussian (Cox) processes20, the technique that we use as a basis for our approach, are applied in various domains such as epidemiology21, insurance22, geostatistics23, or forestry24,25. What all of these applications have in common is that their corresponding quantity of interest can be modeled or interpreted as an intensity.
From the field of artificial intelligence and machine learning, active learning26,27,28 (also called optimal experimental design in the statistics literature) provides a general framework for the task of autonomous data acquisition in TAS experiments. In our context, an active learning approach sequentially collects intensity data while deciding autonomously where to place the next measurement. In other words, it updates the very source of information on which its further decisions are based.
The approach that we describe in this work regards an intensity distribution as a non-negative function over the domain of investigation and approximates it probabilistically by the mean function of a stochastic process. The primary objective for our choice of a stochastic process is that its posterior approximation uncertainties, i.e., its standard deviations after incorporating collected data, are largest in regions of signal, as this enables us to identify them directly by maximizing the uncertainty as an acquisition function. More concretely, we fit a log-Gaussian process to the intensity observations, i.e., we apply Gaussian process regression (GPR)29 to logarithmic intensities and exponentiate the resulting posterior process. The fact that GPR is a Bayesian technique that uses pointwise normal distributions to fit noisy data from an underlying inaccessible function of interest makes it an interesting candidate for many applications and use cases where it is important to quantify some sort of uncertainty. However, we will see that, in our case, it is the logarithmic transformation of observed intensities that leads to large uncertainties in regions of signal and is thus the central element of our approach. Nevertheless, this approach is able to detect not only regions with strong signals, but also those with weak signals. A threshold parameter for intensity values and a background level, both estimated from statistics of an initial set of intensity observations on a particular grid, can control which signals are to be detected or neglected. Furthermore, costs for changing the measurement location caused by moving the axes of the instrument are respected as well.
Since gpCAM is the only one of the mentioned approaches that has already been applied to TAS11, we briefly contrast it with ours. For the TAS setting, we see two major differences. First, gpCAM approximates the original intensity function (instead of its logarithm), which violates a formal assumption of GPR. In fact, GPR assumes that the function of interest is a realization of a Gaussian process. Since normal distributions have support on the entire real line, their realizations can, with positive probability, take negative values, which is not possible for non-negative intensity functions. This issue does not occur in our methodology because we approximate the logarithm of the intensity function with GPR. Secondly, for identifying regions of signal, gpCAM requires users to specify an acquisition function based on a GPR approximation, which can be problematic. Indeed, especially at the beginning of an experiment, when no or little information about the intensity distribution is available, it can be difficult to find a reasonable acquisition function that leads to an efficient experiment. Moreover, even if users succeed in finding an acquisition function that works for a particular experiment, there is no guarantee that it is also applicable to another experiment with a different setting. Additionally, the acquisition function used in the only TAS experiment with gpCAM so far [ref. 11, Eq. (6)] risks losing some of its interpretability30,31 in the TAS setting, because physical units do not match and it is not obvious from a physics point of view why this function in particular is suitable for the goal of placing measurement points in regions of signal.
In contrast, we are able to choose a particular canonical acquisition function that remains the same for each experiment, since we identify regions of signal through the methodology itself or, more concretely, through the logarithmic transformation of a Gaussian process, and not primarily through an acquisition function as the critical component. Furthermore, the mentioned parameters of our approach, the intensity threshold and the background level, are both scalar values with a physical meaning, rather than functions without one. They are thus directly interpretable and can be estimated using initial measurements in a provably robust manner, or set manually.
In this work, we demonstrate the applicability and benefits of our approach in two different ways. First, we present the outcomes of a real neutron experiment performed at the thermal TAS EIGER32 at the continuous spallation source SINQ of the Paul Scherrer Institute (PSI) in Villigen, Switzerland. In particular, we compare with a grid approach and investigate the robustness of the results w.r.t. changes in the estimated background level and intensity threshold. It can be seen that our approach robustly identifies regions of signal, even those of small shape, and hence is able to improve the efficiency of this experiment. Moreover, we challenge our approach with a difficult initial setting and demonstrate that its behaviour remains reasonable. Secondly, we apply a benchmark with several synthetic intensity functions and make fair comparisons with two competing approaches: an approach that places measurements uniformly at random, and again a grid-based approach. In this setting, efficiency is measured by the time needed to reduce a relative weighted error in approximating the target intensity functions. The results show that our approach significantly improves efficiency for most intensity functions compared to the random and grid approaches and is at least as efficient for the remainder. In addition, we show that the results of our approach are robust w.r.t. changes in the intensity threshold parameter. Finally, we comment on a comparison with gpCAM in this benchmark setting and give a corresponding reference to the Supplementary Information.

Problem formulation
From a methodological perspective, we aim to discover an intensity function i : X → [0, ∞) on a rectangular set X ⊆ R^n, n ∈ N, with coordinates on a certain hyperplane in four-dimensional Q-E space8. As an example, Fig. 1a displays an intensity function defined on X = [2.3, 3.3] × [2.5, 5.5] ⊆ R^2, where the two-dimensional hyperplane is spanned by the vector (0, 0, 1) in Q space with offset (1, 1, 0) and energy transfer (E). For directions in Q space, we use relative lattice units (r.l.u.), while energy transfer E is measured in milli-electron volts (meV). Note that, due to restrictions of an instrument, the intensity function might only be defined on a subset X* ⊆ X consisting of all reachable measurement locations. For more details on the TAS setting, we refer to Supplementary Note 1.
The intensity function i is accessed by counting scattered neutrons on a detector device at a finite number of measurement locations x ∈ X*, yielding noisy observations Î(x) ∼ Pois(λ = i(x)). Note that detector counts are usually normalized by neutron counts on a monitor device, i.e., the corresponding unit of measurement is detector counts/(M monitor counts), M ∈ N. Since a Poisson distribution Pois(λ) with a sufficiently large parameter λ > 0 can be approximated by a normal distribution N(λ, λ), we assume that

Î(x) = i(x) + √i(x) · η,  (1)

where η ∼ N(0, 1).
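As a quick numerical illustration, this Poisson-to-normal noise model can be simulated as follows (a sketch only; the monitor normalization and physical units are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def observe(intensity):
    """Simulate a noisy TAS intensity observation.

    The detector count at x follows Pois(lambda = i(x)); for large lambda
    this is close to N(lambda, lambda), i.e. the observation is
    i(x) + sqrt(i(x)) * eta with eta ~ N(0, 1).
    """
    lam = float(intensity)
    return lam + np.sqrt(lam) * rng.standard_normal()

# Observations fluctuate around the true intensity with std sqrt(i(x)):
samples = np.array([observe(400.0) for _ in range(10_000)])
```

For a true intensity of 400, the sample mean is close to 400 and the sample standard deviation close to sqrt(400) = 20, as the noise model predicts.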
In the following, we repeatedly refer to "experiments" which are defined as a sequential collection of intensity observations.

Definition 1 (Experiment). An experiment A is an N-tuple of location-intensity pairs

A = ((x_1, î_1), …, (x_N, î_N)),

where |A| := N ∈ N denotes the number of measurement points, x_j ∈ X* are measurement locations, and î_j are corresponding noisy observations of an intensity function i at x_j.

Since demand for beam time at TAS is high but availability is limited, the goal of an approach is to perform the most informative experiments at the lowest possible cost. Beam time use can, for example, be optimized when excessive counting in uninformative regions like the background (Fig. 1b) is avoided and counting is focused on regions of signal (Fig. 1c).
We quantify the benefit of an experiment A by a benefit measure μ = μ(A) ∈ R and its cost by a cost measure c = c(A) ∈ R. For now, it suffices to mention that cost is measured by experimental time (the time used for an experiment) and benefit is defined by the reduction of a relative weighted error between the target intensity function and a corresponding approximation constructed from the collected intensity observations. Note that, quantifying benefits this way, their computation is only possible in a synthetic setting with known intensity functions (as in our benchmark setting). Real neutron experiments do not meet this requirement and thus must be evaluated in a more qualitative way.
In our setting, an approach attempts to conduct an experiment A with the highest possible benefit using a given cost budget C ≥ 0, i.e., it aims to maximize μ(A) while ensuring that c(A) ≤ C. The steps of a corresponding general experiment are given in Box 1. Line 4 is the most important and crucial for both cost and benefit of the experiment since it decides where to observe intensities, i.e., count neutrons, next.
From an algorithmic perspective, if we denote the current step of an experiment by J ∈ N, our approach implements the decision for the next measurement location x_{J+1} ∈ X* by maximizing an objective function φ_J : X* → R. It balances an acquisition function acq_J : X* → R and a cost function c_J : X* → R that both depend on the current step J and hence can change from step to step. The acquisition function acq_J indicates the value of any x ∈ X* for improving the benefit of the experiment, whereas the cost function c_J quantifies the cost of moving the instrument axes from the current location x_J ∈ X* to x. The metric d : X* × X* → [0, ∞) used for our particular cost function is formally specified in Supplementary Note 1 (Eq. (11)).
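The balance between acquisition value and movement cost can be sketched in a one-dimensional toy setting. The concrete objective used here (acquisition minus a weighted travel term) and the simple distance metric are illustrative assumptions, not the paper's actual formulas:

```python
import numpy as np

def next_location(candidates, acq_values, current_x, weight=1.0):
    """Pick the next measurement location x_{J+1} by maximizing an objective
    phi_J that balances acquisition value against axis-movement cost.

    The balance (acquisition minus a weighted travel metric) is a toy choice;
    the paper's metric d is specified in its Supplementary Note 1.
    """
    travel = np.abs(np.asarray(candidates) - current_x)  # toy metric d(x_J, x)
    objective = np.asarray(acq_values) - weight * travel
    return candidates[int(np.argmax(objective))]

# A nearby location can win over a slightly more informative distant one:
x_next = next_location([0.0, 1.0, 2.0], [1.0, 1.1, 1.5], current_x=0.0)
```

With `weight=0.0`, the movement cost is ignored and the location with the largest acquisition value is always selected.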

BOX 1
General experiment algorithm

Our methodology concentrates on developing a useful acquisition function as the crucial component of our approach, making it straightforward to find regions of signal. A schematic representation with general and specific components of our approach in the context of the general experiment algorithm (Box 1) can be found in Fig. 2. Specific components necessary to understand the experimental results are informally introduced below. Details of these specific components as well as our final algorithm (Box 2), which concretizes the general experiment algorithm, along with all its parameters and the particular values used, are specified in the Methods section.

Log-Gaussian processes for TAS
We briefly describe here why log-Gaussian processes, our central methodological component, are suitable to identify regions of signal in a TAS experiment and thus used to specify a reasonable acquisition function acq J . Methodological details can be found in the Methods section.
Although the intensity function is not directly observable due to measurement noise (Eq. (1)), we aim to approximate it by the mean function of a log-Gaussian process

I = exp(F),

where F is a Gaussian process. That is, after J steps, we fit logarithmic intensity observations with F, yielding its posterior mean and variance functions denoted by m_J and σ²_J, respectively. The acquisition function is then defined as the uncertainty of I given by its posterior standard deviation, i.e.,

acq_J(x) = ( (exp(σ²_J(x)) − 1) · exp(2 m_J(x) + σ²_J(x)) )^{1/2}.  (5)

Observe the crucial detail that m_J appears exponentially in this function, which is the main reason why our approach is based on log-Gaussian processes. A posterior log-Gaussian process is thus able to find regions of signal simply by maximizing its uncertainty. As an illustration, consider Fig. 3, which displays the posterior of the Gaussian process F together with logarithmic intensity observations (Fig. 3a) and the corresponding posterior of the log-Gaussian process I (Fig. 3b).
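A minimal one-dimensional sketch of this construction, i.e., the posterior standard deviation of I = exp(F) as the log-normal standard deviation, follows below. The squared-exponential kernel and all hyperparameter values are illustrative assumptions; the paper's actual kernel and hyperparameters are specified in its Methods section:

```python
import numpy as np

def gp_posterior(x_train, y_train, x_query, ell=0.5, sigma_f=1.0, noise=1e-4):
    """Posterior mean m_J and variance sigma_J^2 of a Gaussian process F
    fitted to logarithmic intensity observations y = log(I_obs)."""
    def k(a, b):  # squared-exponential (Gaussian) kernel
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = k(x_train, x_query)
    m = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.maximum(sigma_f**2 - np.sum(v**2, axis=0), 0.0)
    return m, var

def acq(m, var):
    """Posterior std of the log-Gaussian process I = exp(F): the log-normal
    standard deviation, in which the posterior mean m enters exponentially."""
    return np.sqrt((np.exp(var) - 1.0) * np.exp(2.0 * m + var))

# Two logarithmic observations: low intensity at x = 0, high at x = 1.
x_tr, y_tr = np.array([0.0, 1.0]), np.array([0.0, 3.0])
x_q = np.array([0.1, 0.9])
m, var = gp_posterior(x_tr, y_tr, x_q)
# The uncertainty of I is largest near the high-intensity region,
# even though the posterior variance of F itself is symmetric here.
```

Maximizing `acq` therefore drives the next measurements toward regions of signal, which is exactly the behaviour described above.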

Intensity threshold and background level
Maximizing the acquisition function from Eq. (5) prioritizes regions with high intensities over regions with low intensities. This poses a problem when there are multiple signal regions with intensities of different magnitudes (Supplementary Fig. 1a). Indeed, measurement points are mainly placed in regions with higher intensities, whereas regions with less signal are neglected (Supplementary Fig. 1b). In TAS, we are interested in each region of signal regardless of its intensity magnitude. We compensate for this potential problem by introducing an intensity threshold τ > 0 for observed intensities. That is, we truncate the observed intensities to a maximum of τ before fitting (Supplementary Fig. 1c). Consequently, measurement points are distributed more evenly among all signal regions (Supplementary Fig. 1d) since their placement is not biased by large differences in their intensity values. Another problem is that intensity observations in neutron experiments contain background, which is not part of the actual signal, i.e., even if there is no actual signal at a certain location, we might nonetheless observe a positive intensity there. If our approach did not compensate for regions of background, it might not recognize them as parasitic and hence consider them regions of weak signal, potentially leading to uninformative measurement points being placed there. Therefore, we subtract a background level γ ∈ [0, τ) from the already threshold-adjusted intensity observations while ensuring a non-negative value.
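The described truncation and background subtraction can be sketched as follows (function and parameter names are ours; how zero values are then treated before taking logarithms for fitting is not addressed in this sketch):

```python
import numpy as np

def preprocess(intensities, tau, gamma):
    """Truncate observed intensities at the threshold tau, then subtract
    the background level gamma, clamped to keep values non-negative."""
    truncated = np.minimum(intensities, tau)       # cap strong signals at tau
    return np.maximum(truncated - gamma, 0.0)      # remove background level

# Example with the values estimated in scenario 1 of the neutron experiment,
# tau = 94.5 and gamma = 58: background-level observations map to zero, and
# all observations above tau map to the same adjusted value.
obs = np.array([20.0, 58.0, 120.0, 500.0])
adjusted = preprocess(obs, tau=94.5, gamma=58.0)
```

Both weak and strong signals then stand out from the (zeroed) background, without strong signals dominating the fit.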
In an actual experiment, both the intensity threshold and the background level are estimated from statistics of initial measurement points, as described in more detail in the Methods section. The estimation of the intensity threshold, however, depends on a parameter β ∈ (0, 1] controlling the distinction between regions of strong and weak signals (Eq. (47)) that needs to be set before starting an experiment. This parameter is already mentioned here as we examine its impact on our results in the benchmark setting described below.

Neutron experiment
At SINQ (PSI), we investigated a sample of SnTe (tin telluride) in a real neutron experiment performed at the thermal TAS EIGER32. Our general aim is to reproduce known results from [ref. 33, Fig. 1b] using our approach. Furthermore, we assess the robustness of the experimental result w.r.t. changes in the parameters for the background level and the intensity threshold (scenario 1), and challenge our approach using a coarser initialization grid on a modified domain X with no initial measurement locations lying directly in a region of signal (scenario 2).
The software implementation of our approach communicates with the instrument control system NICOS (nicos-controls.org) which was configured on site. For the experimental setting at the instrument, we refer to Supplementary Note 2.
As mentioned, the benefit measure used for benchmarking, involving a known target intensity function, is not computable in this setting of a neutron experiment due to experimental artefacts like background and noise. We, therefore, evaluate the results of our approach in a more qualitative way for this experiment. Also, although the costs for moving the instrument axes contribute to the total experimental time, we do not consider them here for optimizing the objective function since they are approximately constant across the domain X .
For scenario 1, we adopt the setting from the original results [ref. 33, Fig. 1b], which, in our context, means investigating intensities on X = [0, 2] × [2, 12.4] along the vector (1, 1, 0) in Q space with offset (0, 0, 3) and energy transfer. Initially, we performed measurements in a conventional mode for reference, i.e., we mapped X with a grid of 11 columns containing 27 measurement points each (bottom row in Fig. 4). These measurements were arranged in columns from bottom to top, i.e., from low to high energies, and from left to right, i.e., from small to large values of the coordinate along the vector in Q space.
A direct comparison with our approach would put this mapping mode at a disadvantage since its total experimental time could have been spent more efficiently. For this reason, the approach that we eventually take for comparison, the grid approach (top row in Fig. 4), takes the measurement points from the mapping mode but changes their order into four stages I-IV (Supplementary Fig. 2a-d). The first stage runs from bottom to top and from left to right but consists of only every other grid point on both axes. The second stage again runs from bottom to top but from right to left, and also consists of only every other grid point on both axes. The third and fourth stages are analogous but consist of the remaining grid points that were skipped in the first two stages. With this order, the grid approach can observe intensities in each region of X* already after the first stage and hence acquires more information faster than with the conventional order.
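One plausible reading of this four-stage ordering can be sketched in code. The exact parity offsets and directions per stage are assumptions; only the coarse-to-fine idea of covering a sparse sub-grid first is taken from the text:

```python
def four_stage_order(n_cols, n_rows):
    """Visit a full n_cols x n_rows grid in four stages so that a coarse
    sub-grid (every other point on both axes) covers the domain early.

    Stage parities per (column, row) are an assumed assignment; the text only
    states that stages I-II take every other grid point on both axes and
    stages III-IV the skipped ones.
    """
    stage_parities = [(0, 0), (1, 1), (0, 1), (1, 0)]  # (col % 2, row % 2)
    order = []
    for s, (pc, pr) in enumerate(stage_parities):
        cols = [c for c in range(n_cols) if c % 2 == pc]
        if s % 2 == 1:            # stages II and IV run right to left
            cols = cols[::-1]
        for c in cols:            # each column bottom to top (low to high E)
            for r in range(n_rows):
                if r % 2 == pr:
                    order.append((c, r))
    return order

# On a 4 x 4 grid, stage I alone already covers a coarse 2 x 2 sub-grid:
order = four_stage_order(4, 4)
```

Every grid point is visited exactly once, but informative coverage of the whole domain is reached after roughly a quarter of the experimental time.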
We ran our approach in the default setting (Table 1), i.e., with automatically computed background level and intensity threshold, as well as with two alternative pairs of corresponding parameter values set manually in order to study the robustness of the results. In the default setting (row 2 in Fig. 4), the background level and intensity threshold were computed as γ_0 = 58 and τ_0 = 94.5, respectively. To study the robustness of these results w.r.t. changes in the intensity threshold, the parameters for the first alternative (row 3 in Fig. 4) were set to γ_1 = γ_0 and τ_1 = 130. The second alternative (row 4 in Fig. 4), for studying the robustness w.r.t. changes in the background level, was configured with γ_2 = 30 and τ_2 = τ_0. Eventually, the cost budget C for each instance of our approach was determined by the total experimental time of the grid approach, which was ~11.5 hours. Each respective initialization with the same grid of 61 measurement points took 2.9 h of experimental time. The results displayed in Fig. 4 show that, after initialization, our approach places measurement points mainly in regions of signal in all three settings. The grid approach, in contrast, observed intensities at a considerable number of uninformative measurement locations. Note that the evolution of the experiments related to the three different settings of our approach can be seen in Supplementary Movies 1-3.
In scenario 2, we challenged our approach with a more difficult initial setting. The initialization grid in scenario 1 (triangles in Fig. 4) beneficially lies on an axis of symmetry of the intensity function. Also, partly as a consequence, some initial measurement points are already located in regions of signal. From a methodological perspective, it is interesting to investigate how our approach behaves after an unfavorable initialization, i.e., when almost all initial intensity observations lie in the background and hence almost no useful initial information is available. Hence, we reduced the number of initial measurement points from 61 to 25 and modified the domain to X = [0.2, 2] × [2, 16] to break the axis of symmetry and measure larger energy transfers (Fig. 5a). Estimating the background level and intensity threshold is not reasonable in this setting since the amount of initial information is too small. Therefore, we manually set γ = 50 and τ = 80, respectively. The cost budget C for our approach was not set explicitly here. In fact, we stopped the experiment manually after ~7.3 h of total experimental time, with ~1.1 h spent on initialization with the modified grid. The result is depicted in Fig. 5b. It shows that, even in this challenging situation, our approach keeps making reasonable decisions on measurement locations. The evolution of the experiment related to this scenario can be seen in Supplementary Movie 4.

Benchmark
This section shows the results of a benchmark on several two-dimensional (i.e., n = 2) test case intensity functions (Supplementary Fig. 3) as a quantitative proof of concept. The benchmarking procedure quantifies the performance of an approach by how much benefit it is able to acquire for certain cost budgets in the context of predefined test cases. We briefly describe its setting in the following paragraph. For more details, we refer to the original work34.
A test case mainly includes an intensity function defined on a certain set X and a synthetic TAS used for moving on X* with certain velocities and observing intensities. As mentioned, the cost measure is chosen to be the experimental time used, i.e., the sum of cumulative counting time and cumulative time for moving instrument axes. The benefit measure is defined by a relative weighted L² approximation error between the target intensity function and a linear interpolation î = î(A) using observed intensities from an experiment A. It encodes the fact that a TAS experimenter is more interested in regions of signal than in the background, which suggests using i itself as a weighting function. However, an important constraint is that separate regions of signal with different magnitudes of intensity might be equally interesting; it is controlled by a benchmark intensity threshold τ* > 0 truncating the weights to

i_{τ*}(x) := min{i(x), τ*},  (6)

as similarly described for the intensity threshold of our approach. Formally, the relative weighted error is

‖i − î(A)‖ / ‖i‖,

where ‖·‖ = ‖·‖_{L²(X*, ρ_{i,τ*})} for the weighting function ρ_{i,τ*} := i_{τ*}; the benefit of an experiment is the reduction of this error. For each test case, benefits are measured for several ascending cost budgets, called "milestone values", to demonstrate the evolution of performance over time. We note that this synthetic setting includes neither background nor measurement noise, as both are artefacts of real neutron experiments. We run the benchmarking procedure with three approaches for comparison: 1. the random approach, 2. the grid approach, 3. our approach with different values for the intensity threshold parameter β.
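On a discretized domain, such a relative weighted error can be computed roughly as follows (a sketch; the discretization and the normalization of the weighting function are our assumptions, and the normalization in fact cancels in the ratio):

```python
import numpy as np

def relative_weighted_error(i_true, i_approx, tau_star, cell_area=1.0):
    """Relative weighted L2 error between a target intensity i and an
    approximation i_hat on a discretized domain. Weights are the truncated
    intensities min(i(x), tau*), here normalized to integrate to one."""
    w = np.minimum(i_true, tau_star)
    w = w / (w.sum() * cell_area)                       # weighting function rho
    num = np.sqrt(np.sum(w * (i_true - i_approx) ** 2) * cell_area)
    den = np.sqrt(np.sum(w * i_true ** 2) * cell_area)
    return num / den

# A perfect approximation has zero error; approximating everything by zero
# gives a relative error of one.
i_true = np.array([1.0, 5.0, 100.0])
err_perfect = relative_weighted_error(i_true, i_true, tau_star=10.0)
err_zero = relative_weighted_error(i_true, np.zeros(3), tau_star=10.0)
```

The truncation at `tau_star` keeps a strong signal region (here the value 100) from dominating the weights, so errors in weaker signal regions still count.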
The random approach places intensity observations uniformly at random in X * . The grid approach is adopted from the section on the neutron experiment but using a square grid of dimension P × P, P ∈ N.
For our approach, we set a zero background level, i.e., γ = 0, manually since background is not included in this synthetic setting. Our approach is run with four variations of the intensity threshold parameter β (Eq. (47)) in order to study corresponding sensitivities of the benchmark results.
The specific benchmarking procedure involves four milestone values according to the four stages (I-IV) of the grid approach, i.e., the m-th milestone value represents the experimental time needed to complete stage m ∈ {I, II, III, IV}. Observe that they depend on the particular test case. The specific milestone values used are indicated in Supplementary Note 3. The number of columns/rows P for the grid approach is chosen to be the maximum number such that the corresponding experimental time for performing an experiment in the described order does not exceed 9 hours.
Since both the random approach and ours contain stochastic elements, we perform 100 repeated runs with different random seeds for each test case in order to assess the variability of their results.
The results for each test case, along with its corresponding intensity function, are shown in Fig. 6. Our approach has, in terms of the median, performed significantly better than the random and grid approaches in most test cases. Furthermore, its mostly thin and congruent shapes of variability (light color areas) demonstrate its reproducibility and its robustness w.r.t. changes in the intensity threshold parameter β. Examples of particular experiments performed by our approach are additionally provided in Supplementary Fig. 4 for each test case.
A well-founded and sustainable comparison of our approach with gpCAM in this benchmark setting is currently difficult to implement because, to the best of our knowledge, gpCAM does not specify how to choose its main parameter, the acquisition function, in the TAS setting. In this situation, where relevant information is missing, it would be inappropriate to compare the two approaches at this point. We have therefore decided to provide the comparison that is currently possible, i.e., based on all the information available to us about gpCAM, in Supplementary Note 4. The acquisition function used there is chosen to be the same as that used for the neutron experiment at the TAS ThALES (ILL) [ref. 11, Eq. (6)].

Discussion
The results of the neutron experiment demonstrate the benefits of our approach. Indeed, in scenario 1 (Fig. 4), our approach identifies regions of strong as well as weak signal in each setting and even repeatedly finds isolated relevant signals of small shape at the edge. Therefore, for this experimental setting, the results are shown to be robust w.r.t. changes in the estimated background level and intensity threshold, which we view as an important outcome of this experiment. The choice of these parameters is, however, directly reflected in the placement of measurement points, which indicates a certain aspect of interpretability and explainability30,31 of our approach. A higher intensity threshold, namely, leads to measurement points being placed on a thinner branch of signal (row 3 in Fig. 4), whereas a lower background level yields more exploratory behaviour, with the risk of measuring in regions of background from time to time (row 4 in Fig. 4). Additionally, note that a smaller intensity threshold results in measurement points also being placed in regions of high intensity gradient. Furthermore, in each setting, our approach (rows 2-4 in Fig. 4) places significantly fewer measurement points in the background compared to the grid approach (top row in Fig. 4), as expected, and thus uses experimental time more efficiently. The grid approach additionally does not cover the small signal regions at the edge. A simulated intensity between the two regions of signal on the right [ref. 33, Fig. 1c] is not seen by our approach, which is, however, in agreement with the original experiment [ref. 33, Fig. 1b]. In situations like this, a human experimenter can focus on such details and place additional measurement points manually, if necessary.
When applying GPR, we use a common Gaussian kernel as detailed in the Methods section. The (logarithm of the) intensity function from Fig. 4, however, violates the stationarity of this kernel. Indeed, using a stationary kernel assumes homogeneous statistical behaviour of the intensity function across the entire domain X, which is not the case for our particular scenario. The length scale along the E axis, for instance, differs for different values on the Q axis. In the middle of the Q axis, the length scale is certainly larger than near its edges. Note that the length scale along the Q axis is also non-stationary. These discrepancies are one of the main reasons for our choice of the material SnTe and the setting mentioned above, since it provides an opportunity to demonstrate that a stationary kernel, which is computationally substantially cheaper than non-stationary kernels, is sufficient for identifying regions of signal and hence for performing efficient experiments.
In scenario 2 (Fig. 5), we challenged our approach with a difficult setting. Although the initialization grid and the domain were arranged such that no initial measurement point is located in a region of signal, and hence the approach is initialized with little useful information, its behaviour stays reasonable. Namely, it keeps placing measurements in regions of signal and can still identify the small region with strong signal on the bottom right. However, the signal region in the middle of the domain is not fully identified, which can be explained by the relatively short experimental time (7.3 hours) as well as by the stationarity of the kernel, since the sparse data suggest a short length scale along the E axis, leading to the assumption of lower function values in an actual region of signal.
Note that the reduced number of initial measurement points in scenario 2 was applied only for the purpose of challenging our approach and demonstrating its robustness. In productive operation, however, we retain the larger initialization grid from scenario 1, since placing some measurements in regions of background in order to properly determine it is a valuable step during an experiment, providing relevant information to the experimenter. Fortunately, this mostly leads to a sufficiently good initialization of our approach, despite using a common stationary kernel.
For further results of neutron experiments in the setting from scenario 1 as well as from an additional scenario (scenario 3) that investigated another phonon of SnTe in the default setting of our approach, we refer to Supplementary Note 5.
The benchmark results (Fig. 6), as a proof of concept, confirm the outcomes of the neutron experiment quantitatively in a synthetic setting. Our approach shows a better performance, measured by a relative weighted approximation error (Eq. (7)) between a target intensity function and a linear interpolation of collected intensity observations, compared to the random and grid approach for all test cases. The improvements are especially substantial for target intensity functions with smaller regions of signal (Fig. 6a-q). For intensity functions that cover a substantial part of the domain X (Fig. 6r-t), the competing approaches are also able to place intensity observations in regions of signal early during the experiment and hence it is difficult for any approach to demonstrate its benefit in these scenarios.
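The error metric can be sketched as follows. This is only a hedged illustration in the spirit of the benchmark metric: the collected observations are linearly interpolated and compared to the target intensity function; the exact weighting of Eq. (7) is defined in the paper, and the weights `w` and helper name below are placeholders.

```python
import numpy as np
from scipy.interpolate import griddata

def relative_weighted_error(x_obs, i_obs, x_grid, i_true, w):
    """Sketch of a relative weighted approximation error: linearly
    interpolate the collected intensity observations on a grid of
    evaluation points and compare to the target intensity function."""
    i_interp = griddata(x_obs, i_obs, x_grid, method="linear", fill_value=0.0)
    return np.linalg.norm(w * (i_interp - i_true)) / np.linalg.norm(w * i_true)

# Toy check: observations of a linear intensity at the domain corners are
# reproduced exactly by linear interpolation, so the error vanishes.
x_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
i_obs = x_obs.sum(axis=1)
x_grid = np.array([[0.25, 0.25], [0.5, 0.7]])
err = relative_weighted_error(x_obs, i_obs, x_grid, x_grid.sum(axis=1), np.ones(2))
```

An approach that concentrates observations in regions of signal reduces this error faster per unit of experimental time, which is the quantity compared across approaches.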
The variability in the results of our approach, which contains stochastic elements, quantified by the median of benefit values and the range between their minimum and maximum, is shown to be small for most test cases and acceptable for the remainder. Our approach can thus be considered reliable, with reproducible results, despite starting with a different sequence of pseudo-random numbers for each run.
Moreover, the benchmark results indicate that our approach is robust w.r.t. changes in the intensity threshold. The four variations of the corresponding controlling parameter β (Eq. (47)) yield similar results for most test cases. It should be noted, however, that the use of a reasonable value for β in real neutron experiments is nevertheless important. Indeed, it not only eliminates the effect of outliers in initial intensities but also, as noted, allows us to control the width of signal branches on which the measurement points are placed, and thus the extent to which regions of high gradient are preferred over those of peak intensity.
In addition to the results of the neutron experiment, the particular experiments performed by our approach in the benchmark setting (Supplementary Fig. 4) confirm that, after initialization, it autonomously places a large part of its measurement points in regions of signal for a variety of different intensity distributions.
In conclusion, the previous sections demonstrated that our approach indeed improves the efficiency of TAS experiments and allows the experimenter's time to be used more effectively. It maximizes an acquisition function given by the approximation uncertainties of a log-Gaussian process in order to find informative measurement points and not waste experimental time in regions such as the background. Our robust and reproducible results suggest that it is in fact capable of autonomously obtaining a rapid overview of intensity distributions in various settings.
In a real neutron experiment at the thermal TAS EIGER, our approach repeatedly demonstrated its ability to identify regions of signal successfully, leading to a more efficient use of beam time at a neutron source of a large-scale research facility. It was additionally shown that the approach keeps making reasonable decisions even when initialized with little information. Furthermore, substantial performance improvements over two competing approaches, and their robustness, were quantified in a synthetic benchmark setting for numerous test cases.
Nevertheless, we feel that the automated estimation of algorithmic parameters (background level and intensity threshold) from statistics of initial measurements, despite its good performance in each of the settings discussed, needs to be confirmed in future experiments.
Our approach, so far, solves the problem of where to measure. An interesting topic of future research, however, is the major question of how to measure at a certain location. Counting times were assumed to be constant in this work and thus offer another opportunity to save experimental time if determined autonomously in an advantageous way. Indeed, long counting times in regions of high intensity and background are actually not needed, since the necessary information can also be collected in less time. Experimenters, in contrast, are often more interested in weaker signals and their comprehensive measurement, in order to reduce the corresponding error bars.
Moreover, although using a common stationary kernel for GPR has proven sufficient for identifying regions of signal, we regard the application of non-stationary kernels with input-dependent hyperparameters 35–38, e.g., length scales, also modelled by log-Gaussian processes, as another interesting option for further investigations.
Finally, a certainly more involved topic for future research is the parameter estimation for physical models such as Hamiltonians, if available, in a Bayesian framework using data collected by an autonomous approach. Once this estimation is possible, a natural follow-up question is how our approach, which is completely model-agnostic, i.e., it does not take into account any information from a physical model, compares to a model-aware approach in terms of reducing uncertainties of parameters to be estimated while minimizing experimental time.

Methods
Our approach is methodologically based on Gaussian process regression (GPR) 29, a Bayesian technique for estimating and approximating functions from pointwise data that quantifies uncertainties on the approximation itself in the form of normal distributions. We fit a Gaussian process to logarithmic intensity observations and exponentiate the resulting posterior process, yielding a log-Gaussian process. As mentioned, this transformation causes approximation uncertainties to be higher in regions of signal, which in turn can be exploited for the definition of a useful acquisition function in the TAS setting.
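This idea can be sketched in a few lines with the scikit-learn package mentioned in the Code availability section. The data, kernel hyperparameters, and noise level below are illustrative, not those of the actual algorithm: a Gaussian process is fitted to logarithmic intensities, the posterior is exponentiated, and the resulting log-Gaussian variance, which is largest in regions of signal, serves as an acquisition value.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical 1-D intensity data: counts with a peak near x = 1.
x_obs = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
i_obs = np.array([10.0, 12.0, 250.0, 30.0, 11.0])

# Fit a Gaussian process to *logarithmic* intensities.  Hyperparameters are
# fixed here for simplicity (the actual approach optimizes them).
gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.5),
    alpha=1e-2,        # illustrative noise level on the log scale
    optimizer=None,    # keep the kernel fixed
)
gp.fit(x_obs, np.log(i_obs))

# Exponentiating the posterior yields a log-Gaussian process whose variance,
# (exp(s^2) - 1) * exp(2m + s^2), is largest in regions of signal.
x_grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
m, s = gp.predict(x_grid, return_std=True)
log_gp_var = (np.exp(s**2) - 1.0) * np.exp(2.0 * m + s**2)

# The maximizer of this variance is a candidate for the next measurement.
x_next = x_grid[np.argmax(log_gp_var)]
```

Note how the factor exp(2m) steers the acquisition toward the peak near x = 1, whereas the variance of the underlying Gaussian process alone would be largest far from any observation.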
The kernel function κ_θ is an important component in GPR since it describes the correlation structure between the random variables F(x). Hence, it enables us to include assumptions on realizations of F and determines function properties like regularity, periodicity, symmetry, etc. In practice, it is crucial to choose a kernel function that matches the properties of the function of interest f.
We acquire knowledge on f through noisy observations f̃_j at locations x_j ∈ X* (Eq. (2)) in our context. Therefore, for j = 1, …, J, we set

$$\tilde F(x_j) := F(x_j) + e(x_j)\,\eta_j, \qquad (13)$$

where η_j ∼ N(0, 1) are i.i.d. random variables independent of F(x_j) and e(x) > 0 denotes the noise standard deviation at x ∈ X. Note that F̃(x_j) is a normally distributed random variable. For clear notation, we define X_J := (x_1, …, x_J) and f̃_J := (f̃_1, …, f̃_J)^⊤, and let h(X_J) := (h(x_1), …, h(x_J))^⊤ for any function h : X → R.
After observations have been made, we are interested in the posterior, i.e., conditional, Gaussian process. It holds that

$$F(x) \,\big|\, \tilde F(X_J) = \tilde f_J \;\sim\; \mathcal N\big(m_J(x),\, \sigma_J^2(x)\big)$$

with the posterior mean function

$$m_J(x) = m(x) + \kappa_J(x, X_J)\,\big[\kappa_J(X_J, X_J) + \operatorname{diag}(e(X_J))^2\big]^{-1}\,\big(\tilde f_J - m(X_J)\big) \qquad (17)$$

and the posterior variance function

$$\sigma_J^2(x) = \kappa_J(x, x) - \kappa_J(x, X_J)\,\big[\kappa_J(X_J, X_J) + \operatorname{diag}(e(X_J))^2\big]^{-1}\,\kappa_J(X_J, x). \qquad (18)$$

If necessary, the hyperparameters θ_J of the kernel function κ_J := κ_{θ_J} can be optimized using the data f̃_J. For this, we compute θ_J such that the logarithm of the so-called marginal likelihood, i.e.,

$$\log p(\tilde f_J \,|\, X_J, \theta) = -\tfrac{1}{2}\,\big(\tilde f_J - m(X_J)\big)^\top K_\theta^{-1}\,\big(\tilde f_J - m(X_J)\big) - \tfrac{1}{2}\log\det K_\theta - \tfrac{J}{2}\log(2\pi), \quad K_\theta := \kappa_\theta(X_J, X_J) + \operatorname{diag}(e(X_J))^2, \qquad (19)$$

is maximized. A suitable optimizer is specified below. Note that the analytical expression for the integral in Eq. (19) is only feasible due to the normal distributions involved. However, the computational cost of GPR is often hidden in this kernel optimization step since it requires solving linear systems and computing determinants [ref. 29, Sec. 2.3]. An appropriate criterion for stopping kernel optimizations that detects stagnant hyperparameters during an experiment is therefore provided below. Furthermore, observe that, for a fixed non-optimized kernel, σ_J² does not depend on the observations f̃_J but only on the locations X_J at which they have been made.
The posterior mean function m_J : X → R, incorporating knowledge on J noisy observations of the function of interest f, can now be used as an approximation to f, whereas the posterior variance function σ_J² : X → [0, ∞) quantifies the corresponding uncertainties (Supplementary Fig. 5). Note that σ_J²(x) < σ²(x) for each x ∈ X since [κ_J(X_J, X_J) + diag(e(X_J))²]^{-1} is positive-definite. Since we have m ≡ 0 later, we can further simplify Eq. (17) to

$$m_J(x) = \kappa_J(x, X_J)\,\big[\kappa_J(X_J, X_J) + \operatorname{diag}(e(X_J))^2\big]^{-1}\,\tilde f_J.$$

Log-Gaussian processes

The Gaussian process, which is fitted to logarithmic intensity observations in our approach, is exponentiated to the original linear scale in this section to become log-Gaussian. Before describing details of its application in the TAS setting, we first give the relevant definitions and mention a technical detail that will be important below.
Definition 3. (Log-normal distribution 39). Let η ∼ N(0, 1). Then, for parameters μ ∈ R and σ > 0, the random variable

$$Z := \exp(\mu + \sigma\,\eta) \qquad (21)$$

is said to follow a log-normal distribution, denoted by Z ∼ log-N(μ, σ²). The mean and variance of a log-normally distributed random variable Z are given by

$$\mathbb E[Z] = \exp\big(\mu + \tfrac{\sigma^2}{2}\big) \qquad (22)$$

and

$$\operatorname{Var}(Z) = \big(\exp(\sigma^2) - 1\big)\,\exp\big(2\mu + \sigma^2\big). \qquad (23)$$

Below, we look at the noise distribution of log-Gaussian processes in order to satisfy our assumption on intensity observations to contain normally distributed noise (Eq. (1)). The following mathematical result is fundamental for this as it establishes a link between normal and log-normal distributions. It is proved in Supplementary Note 6.
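The closed-form mean and variance above can be checked against samples; the following minimal sketch uses arbitrary illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.3  # arbitrary illustrative parameters

# Z = exp(mu + sigma * eta) with eta ~ N(0, 1) is log-normally distributed.
z = np.exp(mu + sigma * rng.standard_normal(1_000_000))

# Closed-form mean and variance from the definition above.
mean_cf = np.exp(mu + sigma**2 / 2.0)
var_cf = (np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2)
```

The sample mean and variance of `z` agree with `mean_cf` and `var_cf` up to Monte Carlo error.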
Proposition 1. (Small variance limit for normalized log-normally distributed random variables). Let Z be a log-normally distributed random variable as in Eq. (21) and define the corresponding normalized random variable

$$\bar Z := \frac{Z - \mathbb E[Z]}{\sqrt{\operatorname{Var}(Z)}}.$$

Then, the random variable Z̄ converges pointwise to η as σ → 0+, i.e.,

$$\bar Z(\omega) \to \eta(\omega) \quad \text{as } \sigma \to 0^+ \quad \text{for all } \omega \in \Omega, \qquad (24)$$

where Ω denotes the sample space.

As mentioned, the exponential of a Gaussian process is, by definition, log-Gaussian.
Definition 4. (Log-Gaussian process). Let F be a Gaussian process. Then the stochastic process

$$I(x) := \exp(F(x)) \qquad (25)$$

is called a log-Gaussian process. Using Eqs. (17) and (18), it immediately follows for the posterior log-Gaussian process that

$$I(x) \,\big|\, \tilde F(X_J) = \tilde f_J \;\sim\; \log\text{-}\mathcal N\big(m_J(x),\, \sigma_J^2(x)\big). \qquad (26)$$

In particular, its posterior mean function is

$$\mathbb E\big(I(x) \,|\, \tilde F(X_J) = \tilde f_J\big) = \exp\big(m_J(x) + \tfrac{1}{2}\sigma_J^2(x)\big) \qquad (27)$$

and its posterior variance function becomes

$$\operatorname{Var}\big(I(x) \,|\, \tilde F(X_J) = \tilde f_J\big) = \big(\exp(\sigma_J^2(x)) - 1\big) \cdot \exp\big(2 m_J(x) + \sigma_J^2(x)\big). \qquad (28)$$

Application to TAS

Log-Gaussian processes for TAS. In the context of our methodology from the previous sections, we choose the function of interest to be the logarithm of the intensity function from the TAS setting (Supplementary Note 1), i.e.,

$$f := \log i.$$

If we define Ĩ relating to Eq. (25), i.e., F = log I, Eq. (13) gives

$$\tilde I(x_j) := \exp(\tilde F(x_j)) = I(x_j) \cdot \exp\big(e(x_j)\,\eta_j\big)$$

for measurement locations x_j ∈ X*. Note that, in contrast to the process F̃ containing additive normally distributed noise (Eq. (13)), the noise of Ĩ is multiplicative and log-normal, i.e.,

$$\tilde I(x_j) \,\big|\, \big(I(x_j) = i(x_j)\big) = i(x_j) \cdot \exp\big(e(x_j)\,\eta_j\big). \qquad (31)$$

However, referring to Eq. (1), the physical assumption on the noise of intensity observations is that it is additive and normally distributed, i.e.,

$$\tilde i_j = i(x_j) + e^+(x_j)\,\eta_j^+,$$

where η_j^+ ∼ N(0, 1) and e^+(x_j) = √(i(x_j)). Fortunately, the actual deviation between the two different noise distributions is negligible for sufficiently large intensities i(x_j), as the following explanations demonstrate.
As a first step, let us determine e(x_j) from Eq. (31) such that the variances of both noise distributions are equal, i.e.,

$$\operatorname{Var}\big(\tilde I(x_j) \,|\, I(x_j) = i(x_j)\big) = i(x_j)^2\,\big(\exp(e(x_j)^2) - 1\big)\,\exp(e(x_j)^2) \overset{!}{=} i(x_j),$$

which yields

$$e(x_j) = \left(\log \frac{1 + \sqrt{1 + 4/i(x_j)}}{2}\right)^{1/2}. \qquad (35)$$

Note that the intensity term i(x_j) in Eq. (35) is not known in practice but can be replaced by the corresponding observation ĩ_j ≈ i(x_j).
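A small numerical sketch of this variance matching (the closed-form expression below is our reconstruction of Eq. (35)):

```python
import numpy as np

def log_noise_std(i):
    """Noise standard deviation e(x_j) on the log scale such that the
    variance of the multiplicative log-normal noise equals the Poisson-like
    variance i(x_j) of the counts (our reconstruction of Eq. (35))."""
    return np.sqrt(np.log((1.0 + np.sqrt(1.0 + 4.0 / i)) / 2.0))

# The log-scale noise level shrinks as the intensity grows.
e_small, e_large = log_noise_std(10.0), log_noise_std(1e6)
```

Substituting `log_noise_std(i)` back into the log-normal variance formula reproduces the count variance i exactly, and the decay of e with i is precisely what makes the small variance limit applicable below.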
Since we aim to apply the small variance limit for log-normally distributed random variables from above (Eq. (24)), we set

$$\mu = \log i(x_j) \quad \text{and} \quad \sigma^2 = \log \frac{1 + \sqrt{1 + 4/i(x_j)}}{2}.$$

Observe that σ → 0+ as i(x_j) → ∞. Furthermore, note that the term μ also depends on the limit i(x_j) → ∞ and hence on σ → 0+, in contrast to the setting above, but disappears in the calculations due to cancellations (see proof in Supplementary Note 6). Using the small variance limit, we immediately get the following result (Supplementary Fig. 6).
Proposition 2. The normalization of the noise random variable Ĩ(x_j) | (I(x_j) = i(x_j)) converges pointwise to η as i(x_j) → ∞. In particular, it converges in distribution to a standard normal distribution.

If we set f̃_j := log ĩ_j, the acquisition function acq_J of our approach can now be defined as

$$\mathrm{acq}_J(x) := \operatorname{Var}\big(I(x) \,|\, \tilde F(X_J) = \tilde f_J\big), \qquad (40)$$

which eventually gives Eq. (5) through Eq. (28).
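Proposition 2 can be illustrated numerically. The sketch below uses the variance-matched noise level reconstructed in the preceding paragraphs and sample skewness as an informal measure of non-Gaussianity; both helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = rng.standard_normal(500_000)

def normalized_noise(i, eta):
    """Multiplicative log-normal observation i * exp(e * eta), with the
    variance-matched noise level e, normalized to zero mean / unit variance."""
    e = np.sqrt(np.log((1.0 + np.sqrt(1.0 + 4.0 / i)) / 2.0))
    z = i * np.exp(e * eta)
    return (z - z.mean()) / z.std()

def skew(x):
    # Sample skewness as an informal measure of deviation from normality.
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

# The skewness shrinks as the intensity grows: the normalized noise
# approaches a standard normal distribution, as stated in Proposition 2.
s_small = skew(normalized_noise(5.0, eta))
s_large = skew(normalized_noise(1e5, eta))
```

For small counts the normalized noise is visibly right-skewed, while for large counts it is practically indistinguishable from a standard normal distribution.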
Intensity threshold and background level. The intensity threshold τ and background level γ have already been introduced informally in the Results section. Taking both into account as explained, the log-Gaussian process actually aims to approximate the modified intensity function

$$i_{\gamma,\tau}(x) := \max\{\min\{i(x), \tau\} - \gamma,\ 0\}. \qquad (41)$$

Observe that, by regarding intensity observations adjusted according to Eq. (41), the assumption of their noise being normally distributed is violated in general, since their noise distribution becomes asymmetric. In particular, if i(x_j) substantially exceeds the intensity threshold τ, the distribution gets rather concentrated at τ and thus small in variance. We, however, do assume noise on adjusted intensities as if they were observed without adjusting. This does not change the expected behaviour of getting a useful acquisition function but even ensures numerical stability, since noise regularizes the computational problem of solving linear systems in GPR.

It remains to explain how we compute suitable values for the background level and intensity threshold without knowing the intensity function. We estimate γ = γ(A_0) and τ = τ(A_0) by statistics of the J_0 ∈ N initial measurement points A_0 (Eq. (42)) collected to initialize our approach. The arrangement of initial measurement locations is specified below. For computing the background level γ, we divide the initial intensity observations, sorted in ascending order, into 10 buckets with boundaries given by the deciles D_l, l = 1, …, 9, of the initial intensity observations, where D_0 := −∞ and D_10 := +∞. We then consider the relative and absolute differences of the bucket medians m_1, …, m_10 for parameters Δ^rel_max > 0, Δ^abs_min > 0, and l_max ∈ {1, …, 9}, and set the function γ(A_0) for computing the background level accordingly. The intensity threshold τ is then selected as a value between the background level γ and the maximum bucket median m_10 on a linear scale via the parameter β introduced in the Results section.
Therefore, we define

$$\tau(A_0) := \gamma(A_0) + \beta \cdot \big(m_{10} - \gamma(A_0)\big). \qquad (47)$$

Note that this definition is, by the meaning of m_10, robust to outliers in A_0.
Initial measurement locations. The initial measurement locations x_1, …, x_{J_0} ∈ X* are deterministically arranged as a certain grid. It is chosen to be a variant of a regular grid in which every other row (or column) of points is shifted to the center of surrounding points (Supplementary Fig. 7). The intensity observations start in the bottom left corner of X* and then continue row by row. Initial locations not reachable by the instrument, i.e., outside of X*, are skipped. If all initial locations are reachable, we use a total number of

$$J_0 = \left(\frac{N_{\mathrm{row}} + 1}{2}\right)^2 + \left(\frac{N_{\mathrm{row}} - 1}{2}\right)^2 \qquad (48)$$

initial measurement points, where N_row ∈ N is the odd number of rows in the grid.
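Such a staggered grid can be generated as follows. This is a reconstruction; the helper `init_grid` and the unit domain are illustrative, not part of the original implementation.

```python
import numpy as np

def init_grid(n_row, lo=(0.0, 0.0), hi=(1.0, 1.0)):
    """Staggered initialization grid: every other row is shifted to the
    centers of the surrounding points.  For an odd number of rows 2k - 1,
    full rows hold k points and shifted rows k - 1 points, so the total
    number of points is k^2 + (k - 1)^2."""
    k = (n_row + 1) // 2
    xs_full = np.linspace(lo[0], hi[0], k)
    xs_half = (xs_full[:-1] + xs_full[1:]) / 2.0
    ys = np.linspace(lo[1], hi[1], n_row)
    pts = []
    for r, y in enumerate(ys):  # bottom row first, then row by row
        xs = xs_full if r % 2 == 0 else xs_half
        pts.extend((x, y) for x in xs)
    return np.array(pts)
```

With N_row = 11 this gives 6² + 5² = 61 points, and with N_row = 7 it gives 4² + 3² = 25 points, matching the numbers reported in the algorithmic setting below.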
Consumed areas. As placing measurement points too close to each other has limited benefit for an efficient experiment, we mark areas around each measurement location x_j ∈ X* as "consumed" and ignore them as potential locations for new measurement points. Since the resolution function of a TAS yields ellipsoids in Q-E space, it is natural to consider consumed areas in X* as ellipses. An ellipse with center point x_j ∈ X* is defined as

$$\mathcal E(x_j) := \big\{x \in \mathcal X^* : \|E(x_j)\,(x - x_j)\| \le 1\big\}$$

for a matrix-valued function

$$E(x_j) := R(x_j)^{-1}\,U(x_j)^\top, \qquad (50)$$

where U(x_j) ∈ R^{n×n} is a rotation matrix and R(x_j) = diag(r_1(x_j), …, r_n(x_j)) ∈ R^{n×n} with radii r_k > 0. Then, the union of all ellipses at step J is denoted by

$$\mathcal E_J := \bigcup_{j=1}^{J} \mathcal E(x_j).$$

It is included in the objective function ϕ_J which is part of the final algorithm.
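A minimal membership test for consumed areas might look as follows. The matrix form R⁻¹Uᵀ used here is an assumption chosen so that it maps each ellipse to the unit ball; it is consistent with the fixed circles described in the algorithmic setting below.

```python
import numpy as np

def in_consumed_area(x, centers, U, R):
    """Check whether a candidate location lies in the union of consumed
    ellipses.  The mapping R^{-1} U^T sends each ellipse to the unit ball;
    this particular matrix form is an illustrative assumption."""
    M = np.linalg.inv(R) @ U.T
    return any(np.linalg.norm(M @ (np.asarray(x) - c)) <= 1.0 for c in centers)

# Fixed circles of radius r on a normalized domain (U = I), as used for
# the experiments in the text (r = 0.02 for the neutron experiment).
r = 0.02
U, R = np.eye(2), np.diag([r, r])
centers = [np.array([0.5, 0.5])]
```

A candidate within distance r of a previous measurement is rejected, while one outside remains eligible for the next observation.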

Final algorithm
Incorporating all of the discussed methodological components, the steps for the final algorithm are listed in Box 2. Required components that have not been mentioned above are described in the next paragraphs. The algorithmic setting, i.e., particular values for parameters of the algorithm, is specified below.
Objective function. Recall that the objective function ϕ_J, which is supposed to indicate the next measurement location, is composed of the acquisition function acq_J from Eq. (40) and the cost function c_J from Eq. (3). In order to avoid distorting the objective function with physical units of time, we use the normalized cost function

$$\bar c_J(x) := \frac{c_J(x)}{c_0},$$

where c_0 > 0 is a normalizing cost value, set to the maximum distance between two initial measurement locations w.r.t. the metric d, i.e.,

$$c_0 := \max_{1 \le j, k \le J_0} d(x_j, x_k).$$

The objective function is then defined as

$$\phi_J(x) := \begin{cases} 0 & \text{if } x \in \mathcal E_J, \\ \dfrac{\mathrm{acq}_J(x)}{1 + \bar c_J(x)} & \text{otherwise}. \end{cases}$$

Observe that ϕ_J excludes consumed areas in E_J as potential locations for new observations. Outside E_J, it reflects the fact that the objective function should increase if the cost function decreases and vice versa. Also, if there were no costs present, i.e., c_J ≡ 0, then ϕ_J = acq_J outside E_J.
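Since the exact functional form of ϕ_J is not fully recoverable here, the following is only one plausible sketch satisfying the stated properties: zero on consumed areas, equal to acq_J for vanishing cost, and decreasing in cost. The specific damping 1/(1 + cost/c0) is an assumption.

```python
import numpy as np

def objective(acq, cost, consumed, c0):
    """Zero on consumed areas; outside them, the acquisition value damped
    by the normalized cost.  Any form that decreases in cost and reduces
    to acq for vanishing cost matches the description in the text."""
    phi = acq / (1.0 + cost / c0)
    return np.where(consumed, 0.0, phi)

# Three candidate locations: free and cheap, free but costly, consumed.
acq = np.array([1.0, 2.0, 3.0])
cost = np.array([0.0, 0.5, 2.0])
consumed = np.array([False, False, True])
phi = objective(acq, cost, consumed, c0=1.0)
```

The next measurement location is then the maximizer of `phi` over the candidate set.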
Kernel and optimizer for hyperparameters. The parameterized kernel κ_θ is chosen to be the Gaussian (or radial basis function, RBF) kernel, i.e.,

$$\kappa_\theta(x, x') := \sigma^2 \exp\Big(-\tfrac{1}{2}\,\big\|\Lambda^{-1}(x - x')\big\|^2\Big), \qquad (55)$$

where σ² > 0 and Λ = diag(λ_1, …, λ_n) ∈ R^{n×n} for length scales λ_k ≥ 0. Hence, the vector of hyperparameters is

$$\theta = (\sigma^2, \lambda_1, \ldots, \lambda_n)^\top \in \mathbb R^{n+1}. \qquad (56)$$

Recall that the kernel is needed for GPR to fit a Gaussian process to the logarithm of intensity observations. As mentioned above, we compute optimal hyperparameters by maximizing the logarithm of the marginal likelihood (Eq. (19)). Since this optimization problem is non-convex in general, it might have several local maxima. Instead of a global optimizer, we run local optimizations starting with N_H ∈ N different initial hyperparameter values distributed uniformly at random in a hypercube H ⊆ R^{n+1} and choose the result with the largest local maximum. Note that this introduces stochasticity into our approach.
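In terms of the scikit-learn implementation mentioned in the Code availability section, such random restarts of the local optimizer can be obtained via `n_restarts_optimizer`. The data and bounds below are illustrative, and scikit-learn draws its restart points uniformly in the log-scaled hyperparameter bounds rather than in a user-defined hypercube H.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.sin(6.0 * X[:, 0]) + 0.1 * rng.standard_normal(30)  # toy log-intensities

# Anisotropic Gaussian (RBF) kernel: a variance plus one length scale per
# dimension, i.e., theta = (sigma^2, lambda_1, ..., lambda_n).
kernel = ConstantKernel(1.0, (1e-3, 1e3)) * RBF(
    length_scale=[1.0, 1.0], length_scale_bounds=(1e-2, 1e2)
)

# Restart the local optimizer from random points and keep the best local
# maximum of the log marginal likelihood -- the stochastic element noted above.
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2,
                              n_restarts_optimizer=5, random_state=0)
gp.fit(X, y)
lml = gp.log_marginal_likelihood(gp.kernel_.theta)
```

Fixing `random_state` makes a single run reproducible, while different seeds correspond to the different pseudo-random sequences mentioned in the Discussion.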
Stopping criterion for kernel optimizations. As kernel optimizations are computationally the most expensive part of our methodology, it is reasonable to stop them once a certain criterion is met. The stopping criterion P_KO = P_KO(J) is formalized as a predicate, i.e., a boolean-valued function, depending on step J, with parameters N^max_KO, N^min_KO, k_KO ∈ N and ε_KO > 0, where N_KO(J) ∈ N denotes the number of kernel optimizations performed up until step J. Informally, this predicate indicates that kernel optimizations should be stopped as soon as N_KO(J) exceeds the given maximum number N^max_KO or if, provided that N_KO(J) exceeds the given minimum number N^min_KO, the average relative difference of the last k_KO hyperparameters falls below the threshold value ε_KO, i.e., the hyperparameters stagnate and no longer change substantially. Note that, in Eq. (57), the expression log(θ) for the vector of kernel hyperparameters θ = (σ², λ_1, …, λ_n)^⊤ from Eq. (56) is meant componentwise, i.e.,

$$\log(\theta) = \big(\log(\sigma^2), \log(\lambda_1), \ldots, \log(\lambda_n)\big)^\top \in \mathbb R^{n+1}.$$

Cost measure. Finally, the cost measure c is chosen to represent experimental time, i.e., the total time needed to carry out an experiment A. Experimental time consists of the cumulative counting time and the cumulative time for moving the instrument axes. The cumulative counting time is measured by

$$c_{\mathrm{count}}(A) := \sum_{j=1}^{|A|} T_{\mathrm{count},j},$$

where T_count,j ≥ 0 denotes the single counting time, i.e., the time spent for a single intensity observation, at x_j. The cost measure for the cumulative time spent to move the instrument axes is defined as

$$c_{\mathrm{axes}}(A) := \sum_{j=1}^{|A|-1} d(x_j, x_{j+1}),$$

where d is a metric representing the cost for moving from x_j to x_{j+1}. For details, we refer to Supplementary Note 1 (Eq. (11)). Eventually, we set

$$c(A) := c_{\mathrm{count}}(A) + c_{\mathrm{axes}}(A). \qquad (62)$$

For simplicity, the single counting times are assumed to be constant on the entire domain X*, i.e., T_count,j = T_count ≥ 0, yielding

$$c_{\mathrm{count}}(A) = |A| \cdot T_{\mathrm{count}}. \qquad (63)$$
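The cost measure under the constant-counting-time simplification can be sketched as follows, with the Euclidean distance standing in for the instrument metric d of Supplementary Note 1.

```python
import numpy as np

def experimental_time(locations, t_count):
    """Cost measure c(A): cumulative counting time plus cumulative time for
    moving the instrument axes.  The Euclidean distance stands in for the
    instrument metric d, and the single counting time is constant."""
    locations = np.asarray(locations, dtype=float)
    c_count = len(locations) * t_count                     # |A| * T_count
    moves = np.diff(locations, axis=0)
    c_axes = float(np.sum(np.linalg.norm(moves, axis=1)))  # sum of d(x_j, x_{j+1})
    return c_count + c_axes

# Three measurements along an L-shaped path with T_count = 2 time units.
path = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
total = experimental_time(path, t_count=2.0)
```

Here the total is 3 · 2 time units of counting plus 2 units of axis movement.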

Degenerate cases
An intensity function that is nearly constant along a certain coordinate x_k in X, i.e., an intrinsically lower-dimensional function, might cause problems for the Gaussian kernel from Eq. (55) as the corresponding optimal length scale hyperparameter would be λ_k = ∞. Also, the initial observations A_0 from Eq. (42) might not resolve the main characteristics of the intensity function sufficiently well and hence make it appear lower-dimensional. Most degenerate cases can be identified by kernel optimizations resulting in one or more length scales that are quite small or large relative to the dimensions of X. In general, we assess a length scale parameter λ_k as degenerate if it violates

$$\delta_-\,\big(x_k^+ - x_k^-\big) \;\le\; \lambda_k \;\le\; \delta_+\,\big(x_k^+ - x_k^-\big)$$

for two parameters 0 < δ_− < δ_+ < ∞, where x_k^− and x_k^+ denote the limits of the rectangle X in dimension k. If, after kernel optimization at a certain step, a length scale parameter is recognized to be degenerate, we regard the intensity function on a coordinate system rotated by 45° in order to avoid the mentioned problems. Of course, the rotation is performed only internally and does not affect the original setting.
In n = 2 dimensions, a crystal field excitation, for example, might induce a lower-dimensional intensity function (Supplementary Fig. 8a). After rotating the coordinate system, the intensity function becomes full-dimensional ( Supplementary Fig. 8b) allowing non-degenerate kernel optimizations.
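The degeneracy check and the internal 45° rotation can be sketched as follows; the δ bounds and helper names are illustrative, not those of the original algorithm.

```python
import numpy as np

def degenerate_dims(length_scales, lo, hi, delta_minus=1e-2, delta_plus=1e2):
    """Flag length scales that are very small or very large relative to the
    extent of the rectangle X in each dimension.  The delta bounds are
    illustrative values, not those of the original algorithm."""
    extent = np.asarray(hi) - np.asarray(lo)
    ratio = np.asarray(length_scales) / extent
    return (ratio < delta_minus) | (ratio > delta_plus)

def rotate_45(points):
    """Rotate coordinates by 45 degrees: an intensity function that is
    constant along one axis becomes full-dimensional in the new frame."""
    c = np.cos(np.pi / 4.0)
    return np.asarray(points) @ np.array([[c, -c], [c, c]]).T

# A huge length scale along the second axis signals a degenerate case.
flags = degenerate_dims([0.3, 1e5], lo=[0.0, 0.0], hi=[1.0, 1.0])
```

After rotation, kernel optimization is run again in the new coordinates while the measurement locations reported to the instrument remain in the original frame.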

Algorithmic setting
All experiments described in this article are performed in n = 2 dimensions, i.e., X ⊆ R². If not specified otherwise, we use N_row = 11 rows, corresponding to 61 measurements in the initialization grid (Eq. (48)). For scenario 2 of the neutron experiment, we use N_row = 7 rows, corresponding to 25 initial measurements. The default parameter values used for the final algorithm (Box 2) in both experimental settings, i.e., the neutron experiment and the benchmark, are specified in Table 1.
Although NICOS is able to take an instrument's resolution function into account for the computation of the matrices E(x_j) defining ellipses as consumed areas, we decided to use ellipses fixed over X* for both the neutron experiment and the benchmark. Thus, the function E (Eq. (50)) is chosen to give circles with a fixed radius r > 0 on a normalized domain, i.e., U(x_j) = I and R(x_j) = diag(r, …, r). We set r = 0.02 for the neutron experiment and r = 0.025 for the benchmark.

Data availability
Source data for figures and movies related to the neutron experiment or the benchmark are available in the repository at jugit.fz-juelich.de/ainx/ariane-paper-data.

Code availability
The software implementation of our approach is based on the GaussianProcessRegressor class from the Python package scikit-learn 40. All results can be reproduced using our code from the repository jugit.fz-juelich.de/ainx/ariane (commit SHA: c1c31c96). The benchmark results can be reproduced using code from the repository jugit.fz-juelich.de/ainx/base-fork-ariane (commit SHA: 3715a772). It is a fork, adjusted to our approach, of the benchmark API from jugit.fz-juelich.de/ainx/base, which is part of the mentioned original work on the benchmarking procedure 34.