Autonomous materials discovery driven by Gaussian process regression with inhomogeneous measurement noise and anisotropic kernels

A majority of experimental disciplines face the challenge of exploring large and high-dimensional parameter spaces in search of new scientific discoveries. Materials science is no exception; the wide variety of synthesis, processing, and environmental conditions that influence material properties gives rise to particularly vast parameter spaces. Recent advances have increased the efficiency of materials discovery by progressively automating the exploration process. Methods for autonomous experimentation have become more sophisticated recently, allowing multi-dimensional parameter spaces to be explored efficiently and with minimal human intervention, thereby freeing scientists to focus on interpretation and big-picture decisions. Gaussian process regression (GPR) techniques have emerged as the method of choice for steering many classes of experiments. We have recently demonstrated the positive impact of GPR-driven decision-making algorithms on autonomously-steered experiments at a synchrotron beamline. However, due to the complexity of the experiments, GPR often cannot be used in its most basic form, but rather has to be tuned to account for the special requirements of the experiments. Two requirements seem to be of particular importance, namely inhomogeneous measurement noise (input-dependent or non-i.i.d.) and anisotropic kernel functions, which are the two concepts that we tackle in this paper. Our synthetic and experimental tests demonstrate the importance of both concepts for experiments in materials science and the benefits that result from including them in the autonomous decision-making process.


Introduction
Artificial intelligence and machine learning are transforming many areas of experimental science. While most techniques focus on analyzing "big data" sets, which are comprised of redundant information, collecting smaller but information-rich data sets has become equally important. Brute-force data collection leads to tremendous inefficiencies in the utilization of experimental facilities and instruments, and in data analysis and data storage; large experimental facilities around the globe are running at 10 to 20 percent utilization and are still spending millions of dollars each year to keep up with the increase in the amount of data storage needed [16,14,1,35]. In addition, conventional experiments require scientists to prepare samples and directly control experiments, which leads to highly-trained researchers spending significant effort on micromanaging experimental tasks rather than thinking about scientific meaning. To avoid this problem, autonomously steered experiments are emerging in many disciplines. These techniques place measurements only where they can contribute optimally to the overall knowledge gain. If, for instance, one parameter of an experiment is a length in mm ∈ [0, 1] while another is a temperature in °C ∈ [5, 500], we should find different length scales for different directions of the parameter space. Also, there might be different differentiability characteristics in different directions. It is therefore vitally important to give the model the flexibility to account for those varying features. This can either be done by using an altered Euclidean norm or by employing different norms that provide more flexibility of distance measures in different directions. The general idea, including the concepts proposed in this paper, is visualized in Figure 1.
This paper is organized as follows: First, we introduce the traditional theory of Gaussian process regression with i.i.d. noise and standard isotropic kernel functions. Second, we make formal changes to the theory to include non-i.i.d. noise and anisotropy. Third, we demonstrate the impact of the two concepts on synthetic experiments. Fourth, we present a synchrotron beamline experiment that exploited both concepts in autonomous control.
Gaussian Process Regression with non-i.i.d. Noise and Anisotropic Kernels

Prerequisite
We define the parameter space X ⊂ R^n, which serves as the index set or input space in the scope of Gaussian process regression, and elements x ∈ X. We define four functions over X. First, the latent function f = f(x) can be interpreted as the inaccessible ground truth. Second, the often noisy measurements are described by y = y(x) : X → R^d. To simplify the derivation, we assume d = 1; allowing for d > 1 is a straightforward extension. Third, the surrogate model function is then defined as ρ = ρ(x) : X → R. Fourth, the posterior mean function m(x) is often assumed to equal the surrogate model, i.e., m(x) = ρ(x), but this is not necessarily the case. We also define a second space, a Hilbert space H ⊂ R^N × R^N × R^J, with elements [f, y, f_0]^T, where N is the number of data points, J is the number of points at which we want to predict the model-function value, y are the measurement values, f is the vector of unknown latent-function evaluations, and f_0 is the vector of predicted function values at a set of positions. Note that scalar functions over X, e.g., f(x), correspond to vectors (bold typeface) in the Hilbert space H, e.g., f. We also define a function p over our Hilbert space, which is simply the function value of the Gaussian probability density functions involved. For more explanation of the distinction between the two spaces and the functions involved, see Figure 2.

Gaussian Process Regression with Isotropic Kernels and i.i.d. Observation Noise
Defining a GP regression model from data D = {(x_1, y_1), ..., (x_N, y_N)}, where y_i = f(x_i) + ε(x_i), is accomplished in a GP regression framework by defining a Gaussian probability density function, called the prior,

p(f) = N(f; µ, K),  (1)

and a likelihood

p(y|f) = N(y; f, σ² I),  (2)

where µ = [µ(x_1), ..., µ(x_N)]^T is the mean of the prior Gaussian probability density function (not to be confused with the posterior mean function m(x)). The prior mean can be understood as the position of the Gaussian.
The matrix K, with entries K_ij = k(φ, x_i, x_j) for x_i, x_j ∈ X, is the covariance of the Gaussian process; its covariance function, often referred to as the kernel, is k(φ, x_i, x_j), where φ are the hyper parameters, and σ² is the variance of the i.i.d. observation noise. The problem here is that, in practice, the i.i.d. noise restriction rarely holds in the experimental sciences, which is one of the issues to be addressed in this paper. The kernel k is a symmetric and positive semi-definite function, such that k : X × X → R. As a reminder, X is our parameter space, often referred to as the index set or input space in the literature.
A well-known choice [37] is the Matérn kernel class, defined by

k(x_i, x_j) = σ_s² (2^(1−ν) / Γ(ν)) (√(2ν) r / l)^ν B_ν(√(2ν) r / l),

where B_ν is the modified Bessel function of the second kind, Γ is the gamma function, σ_s² is the signal variance, l is the length scale, r = ||x_i − x_j||_2 is the Euclidean distance between input points, and ν is a parameter that controls the differentiability characteristics of the kernel and therefore of the final model function. The well-known exponential and squared exponential kernels are special cases of the Matérn class (ν = 1/2 and ν → ∞, respectively).

(Figure 2 caption: (a) A function over X. This can be the surrogate model ρ(x), the latent function f(x) to be approximated through an experiment, the function describing the measurements y(x), or the predictive mean function m(x); x_1 and x_2 are two experimentally controlled parameters (e.g., synthesis, processing, or environmental conditions) that the measurement outcomes potentially depend on. (b) The Gaussian probability density function over H which gives GPR its name. For noise-free measurements, y = f at measurement points, meaning that we can directly observe the model function; generally this is not the case and the observations y are corrupted by input-dependent, non-i.i.d. noise.)

(Figure caption: The ellipses visualize the found anisotropy. The take-home message for the practitioner is that the method finds the most likely model function given all collected data and their variances; the model function will not pass directly through the points but takes the most likely shape given all available information.)

The signal variance σ_s² and the length scale l are hyper parameters (φ) that are found by maximizing the log-likelihood, i.e., by solving

φ* = argmax_φ log L(φ),

where

log L(φ) = −(1/2) (y − µ)^T (K(φ) + σ² I)^(−1) (y − µ) − (1/2) log |K(φ) + σ² I| − (N/2) log(2π),  (15)

and where I is the identity matrix. In the isotropic case, we only have to optimize for one signal variance and one length scale (per kernel function). The mean function µ(x) is often assumed to be constant and therefore does not have to be part of the optimization. The mean function assigns the location of the prior in H to any x ∈ X. The mean function can therefore be
used to communicate prior knowledge (for instance, physics knowledge) to the Gaussian process. Provided some hyper parameters, the joint prior is given as

p(y, f_0) = N( [µ, µ(x_0)]^T, [[K + σ² I, κ], [κ^T, k(φ, x_0, x_0)]] ),  (6)

where κ_i = k(φ, x_0, x_i) and, as a reminder, K_ij = k(φ, x_i, x_j). Intuitively speaking, Σ, K, κ, and k are all measures of similarity between measurement results y(x) across the input space. While K stores this similarity between all data points, Σ (of which κ is the single-point special case) stores the similarity between all data points and all unknown points of interest, and k(φ, x_0, x_0) contains the similarity among the unknown points of interest; k itself contains the instruction for how to calculate this similarity. The reader might wonder: "how do we find the similarity between unknown points of interest?" The answer lies in the formulation of the kernels, which calculate the similarity just by knowing the locations x ∈ X and not the function evaluations y(x). x_0 is the point where we want to estimate the mean and the variance. Note here that, with only slight adaptation of the equations, we are able to compute the mean and variance for several points of interest at once. The predictive distribution is defined as

p(f_0 | y) = N(m(x_0), σ²(x_0)),  (8)

and the predictive mean and the predictive variance are therefore respectively defined as

m(x_0) = µ(x_0) + κ^T (K + σ² I)^(−1) (y − µ)  (9)

and

σ²(x_0) = k(φ, x_0, x_0) − κ^T (K + σ² I)^(−1) κ,  (10)

which are the posterior mean and variance at x_0, respectively. N(·, ·) stands for the normal (Gaussian) distribution with a given mean and covariance.
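As a concrete illustration, the posterior mean and variance above can be computed in a few lines of numpy. The sketch below uses the squared-exponential special case of the Matérn class and a constant prior mean; the function names are ours, not part of any particular GPR library.

```python
import numpy as np

def sq_exp_kernel(X1, X2, sigma_s=1.0, l=1.0):
    # Squared-exponential kernel: the Matern kernel in the limit nu -> infinity.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_s**2 * np.exp(-0.5 * d2 / l**2)

def gp_posterior(X, y, X0, sigma_n=0.1, sigma_s=1.0, l=1.0, mu=0.0):
    """Posterior mean and variance at the points X0, given data (X, y)
    with i.i.d. observation-noise variance sigma_n**2."""
    K = sq_exp_kernel(X, X, sigma_s, l) + sigma_n**2 * np.eye(len(X))
    kappa = sq_exp_kernel(X0, X, sigma_s, l)   # cross-covariances kappa_i
    mean = mu + kappa @ np.linalg.solve(K, y - mu)
    var = sigma_s**2 - np.einsum('ij,ji->i', kappa, np.linalg.solve(K, kappa.T))
    return mean, var
```

With a small noise variance the posterior mean passes close to the data, and the posterior variance grows with distance from the measured points.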

Gaussian Processes with non-i.i.d. Observation Noise
To incorporate non-i.i.d. observation noise, one can redefine the likelihood (2) as

p(y|f) = N(y; f, V),  (11)

where V is a diagonal matrix containing the respective measurement variances. The matrix V can also have non-diagonal entries if the measurement noise happens to be correlated; we will only discuss uncorrelated measurement noise.
From equations (6) and (11), we can calculate equation (8), i.e., the predictive probability distribution for a measurement outcome at x_0, given the data set. The mean and variance of this distribution are

m(x_0) = µ(x_0) + κ^T (K + V)^(−1) (y − µ)

and

σ²(x_0) = k(φ, x_0, x_0) − κ^T (K + V)^(−1) κ,

respectively. Note here that the matrix of the measurement errors V replaces the matrix σ²I in equations (9) and (10). However, this does not follow from a simple substitution, but from a significantly different derivation. The log-likelihood (15) changes accordingly, yielding

log L(φ) = −(1/2) (y − µ)^T (K(φ) + V)^(−1) (y − µ) − (1/2) log |K(φ) + V| − (N/2) log(2π).

This concludes the derivation of GPR with non-i.i.d. observation noise. Figure 3 illustrates the effect of different kinds of noise on a one-dimensional model function. As we can see, while some details of the derivation change when we account for inhomogeneous (also known as input-dependent or non-i.i.d.) noise, the resulting equations are very similar and the computation incurs no extra cost.
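In code, the substitution of σ²I by V is a one-line change. The sketch below (our own minimal implementation, squared-exponential kernel, constant prior mean) accepts one noise variance per data point:

```python
import numpy as np

def gp_posterior_noniid(X, y, v, X0, sigma_s=1.0, l=1.0, mu=0.0):
    """Posterior mean/variance when measurement i carries its own
    noise variance v[i]; V = diag(v) replaces sigma**2 * I."""
    d2 = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    k = lambda A, B: sigma_s**2 * np.exp(-0.5 * d2(A, B) / l**2)
    K = k(X, X) + np.diag(v)          # K + V instead of K + sigma^2 I
    kappa = k(X0, X)
    mean = mu + kappa @ np.linalg.solve(K, y - mu)
    var = sigma_s**2 - np.einsum('ij,ji->i', kappa, np.linalg.solve(K, kappa.T))
    return mean, var
```

A precisely measured point pins the model down locally, while a noisy one leaves the posterior variance high, which is exactly the information the steering algorithm needs.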

Gaussian Processes with Anisotropy
For parameter spaces X that are anisotropic, i.e., where different directions have different characteristic correlation lengths, we can redefine the kernel function to incorporate different length scales in different directions. One way of doing this for axial anisotropy is by choosing the l1 norm as the distance measure and redefining the kernel function as a sum of one-dimensional kernels,

k(x^m, x^n) = Σ_{i=1}^{d} k_i(φ_i, |x_i^m − x_i^n|),

where the superscripts m, n are point labels, the subscript i denotes the different directions in X, and d = dim(X). Defining a kernel per direction gives us the flexibility to enforce different orders of differentiability in different directions of X. The main benefit, however, is the possibility to define different length scales in different directions of X (see Figure 4). Unfortunately, the choice of the l1 norm can lead to a very recognizable checkerboard pattern in the surrogate model, but the predictive power of the associated variance function is significantly improved compared to the isotropic case.
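A minimal sketch of such an additive, per-direction kernel, assuming d = 2 with the same kernel choices as in our synthetic test (exponential along x_1, squared exponential along x_2); the function name and default length scales are illustrative:

```python
import numpy as np

def additive_kernel(X1, X2, l1=0.5, l2=0.5, sigma_s=1.0):
    """Sum of one-dimensional kernels, one per direction of X:
    exponential in x1 (non-differentiable model function) and
    squared exponential in x2 (infinitely differentiable model function)."""
    d1 = np.abs(X1[:, None, 0] - X2[None, :, 0])   # |x1^m - x1^n|
    d2 = np.abs(X1[:, None, 1] - X2[None, :, 1])   # |x2^m - x2^n|
    return sigma_s**2 * (np.exp(-d1 / l1) + np.exp(-0.5 * (d2 / l2)**2))
```

Since a sum of positive semi-definite kernels is again positive semi-definite, the resulting covariance matrix remains a valid GP covariance.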
A second way, which avoids the checkerboard pattern in the model but does not allow different kernels in different directions, is to redefine the distances in X as

r(x_i, x_j) = sqrt( (x_i − x_j)^T M (x_i − x_j) ),

where M is any symmetric positive semi-definite matrix playing the role of a metric tensor [36]. This is just the Euclidean distance in a transformed metric space. In the actual kernel functions, any r/l can then be replaced by the new equation for the metric. We will here only consider axis-aligned anisotropy, which means the matrix M is a diagonal matrix with the inverse squared length scales on its diagonal. The extension to general forms of anisotropy is straightforward but needs a more costly likelihood optimization, since more hyper parameters have to be found. The rest of the theoretical treatment, however, remains unchanged. The mean function µ(x), the hyper parameters φ_i and the signal variance σ_s² are again found by maximizing the marginal log-likelihood (15). The associated optimization tries to find the maximum of a function defined over R^(d+1), if we ignore the mean function, as is commonly done. We therefore have to find d + 1 parameters, which adds a significant computational cost. If M is not diagonal, we have to maximize the log-likelihood over R^(d(d+1)/2+1). However, the optimization can be performed in parallel to computing the posterior variance, which can hide the computational effort. It is important to note that accounting for anisotropy can make the training of the algorithm, i.e., the optimization of the log-likelihood, significantly more costly; the extent of this depends on the kind of anisotropy considered. As we shall see, taking anisotropy into account leads to more efficient steering and a higher-quality final result, and is thus generally worth the additional computational cost.
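For the axis-aligned case, the transformed distance and the associated marginal log-likelihood can be sketched as follows (our own illustrative implementation; the diagonal M with entries 1/l_i² is absorbed directly into the squared distance):

```python
import numpy as np

def anisotropic_sq_exp(X1, X2, lengths, sigma_s=1.0):
    """Squared-exponential kernel with one length scale per direction,
    i.e. r^2 = dx^T M dx with M = diag(1 / lengths**2)."""
    dx = X1[:, None, :] - X2[None, :, :]
    r2 = (dx**2 / np.asarray(lengths)**2).sum(-1)
    return sigma_s**2 * np.exp(-0.5 * r2)

def neg_log_likelihood(params, X, y, v):
    """Negative marginal log-likelihood over the d + 1 hyper parameters
    params = [sigma_s, l_1, ..., l_d]; v holds the noise variances."""
    sigma_s, lengths = params[0], params[1:]
    K = anisotropic_sq_exp(X, X, lengths, sigma_s) + np.diag(v)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * y @ np.linalg.solve(K, y) + 0.5 * logdet \
        + 0.5 * len(y) * np.log(2.0 * np.pi)
```

Minimizing this function (e.g., with scipy.optimize.minimize) recovers short length scales along directions in which the data vary quickly and long ones where they vary slowly.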

Synthetic Tests
Our synthetic tests are carefully chosen to demonstrate the benefits of the two concepts under discussion, namely non-i.i.d. observation noise and anisotropic kernels. To demonstrate the importance of including non-i.i.d. observation noise in the analysis, we consider a synthetic test based on actual physics, which we used in previous work to showcase the functionality of past algorithms [27]. We choose an example given in closed form because it provides a noise-free "ground truth" that we can compare to, whereas experimental data would inevitably include unknown errors. To showcase the importance of anisotropic kernels as part of the analysis, we provide a high-dimensional example based on a simulation of a material that is subject to a varying thermal history. In the x_1 direction, we have assumed that the model function is not differentiable; we therefore used the exponential kernel. In the x_2 direction, the model can be differentiated an infinite number of times; we therefore chose the squared exponential kernel. For other orders of differentiability, other kernels can be used. Fixing the order of differentiability also gives the user the ability to incorporate domain knowledge into the experiment.
The shown synthetic tests explore spaces of very different dimensionality. There is no theoretical limitation on the dimensionality of the parameter space. Indeed, the autonomous methods described herein are most advantageous when operating in high-dimensional spaces, since this is where simpler methods, and human intuition, typically fail to yield meaningful searches.

Non-i.i.d. Observation Noise
For this test, we define a physical "ground truth" model f(x), whose correct function value at x is inaccessible due to non-i.i.d. measurement noise, but which can be probed by our simulated experiment through y(x). In this case, we assume that the measurements are subject to Gaussian noise with a standard deviation of 2% of the function value at x. The ground-truth model function is defined to be the diffusion coefficient D = D(r, T, C_m) for the Brownian motion of nanoparticles in a viscous liquid consisting of a binary mixture of water and glycerol, given by the Stokes-Einstein relation

D = k_B T / (6 π µ r),

where k_B is Boltzmann's constant, r ∈ [1, 100] nm is the nanoparticle radius, T ∈ [0, 100] °C is the temperature (entering the formula as an absolute temperature) and µ = µ(T, C_m) is the viscosity as given by [8], where C_m ∈ [0.0, 100.0] % is the glycerol mass fraction. This model was used in [27] to show the functionality of Kriging-based autonomous experiments. The experimental device has no direct access to the ground-truth model, but adds an unavoidable noise level, i.e.,

y(x) = f(x) + ε(x),  ε(x) ~ N(0, (0.02 f(x))²).

To demonstrate the importance of the noise model, we first ignore noise, then approximate it assuming i.i.d. noise, and finally model it allowing for non-i.i.d. noise. Figure 5 shows the results after 500 measurements, and a comparison to the (inaccessible) ground truth. Figure 6 compares, for the three different treatments of noise, the decrease in the error (in the form of the Euclidean distance between the models and the ground truth) with increasing number of measurements N.
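A sketch of this synthetic experiment is below. The viscosity formula of [8] is lengthy, so we substitute a crude placeholder µ(T, C_m) (log-linear mixing with a simple temperature factor); it is illustrative only, and all function names are ours.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def viscosity(T_celsius, C_m):
    """Placeholder for the water/glycerol viscosity of [8], in Pa*s.
    Crude log-linear mixing in mass fraction with an exponential
    temperature factor; for illustration only."""
    mu_water, mu_glycerol = 1.0e-3, 1.0       # rough values near 20 C
    w = C_m / 100.0
    mu20 = mu_water**(1.0 - w) * mu_glycerol**w
    return mu20 * np.exp(-0.02 * (T_celsius - 20.0))

def diffusion_coefficient(r_nm, T_celsius, C_m):
    """Stokes-Einstein ground truth D = k_B T / (6 pi mu r), in m^2/s."""
    T = T_celsius + 273.15                    # absolute temperature
    r = r_nm * 1e-9                           # radius in meters
    return K_B * T / (6.0 * np.pi * viscosity(T_celsius, C_m) * r)

def measure(r_nm, T_celsius, C_m, rng):
    """Simulated noisy measurement: 2% relative Gaussian noise.
    Returns the measured value and its noise variance."""
    d = diffusion_coefficient(r_nm, T_celsius, C_m)
    return d + rng.normal(0.0, 0.02 * d), (0.02 * d) ** 2
```

The returned per-measurement variance is exactly what the non-i.i.d. GPR of the previous section consumes as the diagonal of V.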
The results show that treating noise as i.i.d. or even non-existent can lead to artifacts in the surrogate model. Additionally, the discrepancy between the ground truth and the surrogate model is reduced far more efficiently if non-i.i.d. noise is accounted for.

Anisotropy
Allowing anisotropy can increase the efficiency of autonomous experiments significantly for any dimensionality of the underlying parameter space. However, as the dimensionality of the parameter space increases, the importance of anisotropy increases substantially, purely due to the number of directions in which anisotropy can occur. To demonstrate this link, we simulated an experiment where a material is subjected to a varying thermal history. That is, the experiment consists of repeatedly changing the temperature, and taking measurements along this time-series of different temperatures. The temperature at each time step can be thought of as one of the dimensions of the parameter space. The full set of possible applied thermal histories thus becomes a set of points in the high-dimensional parameter space of temperatures. In particular, we consider the ordering of a block copolymer, which is a self-assembling material that spontaneously organizes into a well-defined morphology when thermally annealed [10]. The material organizes into a defined unit cell locally, with ordered grains subsequently growing in size as defects annihilate [23]. We use a simple model to describe this grain-coarsening process, where the grain size ξ increases with time according to a power law,

ξ(t) = k t^α,

where α is a scaling exponent (set to 0.2 for our simulations) and the prefactor k captures the temperature-dependent kinetics through an Arrhenius form,

k = A exp(−E_a / (R T)).

Here, E_a is an activation energy for coarsening (we select a typical value of E_a = 100 kJ/mol), R is the gas constant, T is the absolute temperature, and the prefactor A sets the overall scale of the kinetics (set to 3 × 10^11 nm/s^α). From these equations we construct an instantaneous growth rate of the form

dξ/dt = α k^(1/α) ξ^((α−1)/α).

Block copolymers are known to have an order-disorder transition temperature (T_ODT) above which thermal energy overcomes the material's segregation strength, and thus the nanoscale morphology disappears in favor of a homogeneous disordered phase. Heating beyond T_ODT thus implies driving ξ to zero. We describe this 'grain dissolution' process
using an ad hoc form,

dξ/dt = −k_diss (T − T_ODT) for T > T_ODT,

where we set k_diss = 1.0 nm s⁻¹ K⁻¹ and T_ODT = 350 °C. We also apply an ad hoc suppression of kinetics near T_ODT and when grain sizes are very large, to account for experimentally observed effects. Overall, this simple model describes a system wherein grains coarsen with time and temperature, but shrink in size if the temperature is raised too high. The parameter space defined by a sequence of temperatures will thus exhibit regions of high or low grain size depending on the thermal history described by that point; moreover, there is non-trivial coupling between these parameters, since the grain size obtained for a given step of the annealing (i.e., a given direction in the parameter space) sets the starting point for coarsening in the next step (i.e., the next direction of the parameter space). We select thermal histories consisting of 11 temperature selections (the temperature is updated every 6 s), which thus defines an 11-dimensional parameter space for exploration. Each temperature history defines a point (x ∈ X) within the 11-dimensional input space. As can be seen in Figure 7(a), the majority of thermal histories one might select terminate in a relatively small grain size (blue lines in the figure). This can be easily understood, since a randomly selected annealing protocol will use temperatures that are either too low (slow coarsening) or too high (T > T_ODT drives the system into the disordered state). Only a subset of possible histories terminate with a large grain size (dark, less transparent lines in Figure 7), corresponding to a judicious choice of annealing history that uses high temperatures without crossing the ODT. While this conclusion is obvious in retrospect, in the exploration of a new material system (e.g., one for which the values of material properties like T_ODT are not known), identifying such trends is non-trivial. Representative slices through the 11-dimensional parameter space (Fig. 7(b) and (c)) further emphasize the complexity of the search problem, especially the anisotropy of the problem. That is, different steps in the annealing protocol have different effects on coarsening; correspondingly, the different directions in the parameter space have different characteristic length scales that must be correctly modeled (even though every direction is conceptually similar in that it describes a 6 s thermal annealing step).
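A compact version of this coarsening model can be sketched as follows. We omit the ad hoc suppression terms near T_ODT and at large grain sizes, assume a linear dissolution rate above T_ODT consistent with the units of k_diss, and advance each 6 s interval analytically via ξ = k t^α; the constant names mirror the values quoted above.

```python
import numpy as np

R_GAS = 8.314      # gas constant, J/(mol K)
ALPHA = 0.2        # coarsening exponent
E_A = 100e3        # activation energy, J/mol
A_PRE = 3e11       # kinetic prefactor, nm/s^alpha
T_ODT = 350.0      # order-disorder transition temperature, deg C
K_DISS = 1.0       # dissolution rate, nm/(s K)

def grain_size(temps_c, dt=6.0, xi0=0.1):
    """Final grain size (nm) after a thermal history: one temperature
    (deg C) per dt-second interval of the annealing protocol."""
    xi = xi0
    for T_c in temps_c:
        if T_c < T_ODT:
            T = T_c + 273.15
            k = A_PRE * np.exp(-E_A / (R_GAS * T))   # Arrhenius kinetics
            t_eff = (xi / k) ** (1.0 / ALPHA)        # time that would yield xi at this T
            xi = k * (t_eff + dt) ** ALPHA           # exact update of xi = k t^alpha
        else:
            # above T_ODT: linear grain dissolution, clipped at zero
            xi = max(xi - dt * K_DISS * (T_c - T_ODT), 0.0)
    return xi
```

An 11-entry temperature list is then exactly one point x ∈ X of the 11-dimensional parameter space explored in this section.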
Autonomous exploration of this parameter space enables the construction of a model for this coarsening process.Moreover, the inclusion of anisotropy markedly improves the search efficiency, reducing the model error more rapidly than when using a simpler isotropic kernel (Fig. 7(d)).As the dimensionality of the problem and the complexity of the physical model increase, the utility of including an anisotropic kernel increases further still.

Autonomous SAXS Exploration of Nanoscale Ordering in a Flow-Coated Polymer-Grafted Nanorod Film
The proposed GP-driven decision-making algorithm that takes into account non-i.i.d. observation noise and anisotropy has been used successfully in autonomous synchrotron experiments. Here we present, as an illustrative example, the results of an autonomous x-ray scattering experiment on a polymer-grafted gold nanorod thin film, where a combinatorial sample library was used to explore the effects of film-fabrication parameters on self-assembled nanoscale structure. Unlike traditional short-ligand-coated particles, polymer-grafted nanoparticles (PGNs) are stabilized by high-molecular-weight polymers at relatively low grafting densities. As a result, PGNs behave as soft colloids, possessing the favorable processing behavior of polymer systems while still retaining the ability to pack into ordered assemblies [7]. Although this makes PGNs well suited to traditional approaches for thin-film fabrication, the nanoscale assembly of these materials is inherently complex, depending on a number of variables including, but not limited to, particle-particle interactions, particle-substrate interactions, and process methodology.
The combinatorial PGN film sample was fabricated at the Air Force Research Laboratory. A flow-coating method [7] was used to deposit a thin PGN film on a surface-treated substrate, where gradients in coating velocity and substrate surface energy were imposed along two orthogonal directions over the film surface. A 250 nM toluene solution of 53 kDa polystyrene-grafted gold nanorods (94% polystyrene by volume), with nanorod dimensions of 70 ± 6 nm in length and 11.0 ± 0.9 nm in diameter (based on TEM analysis), was cast onto a functionalized glass coverslip using a motorized coating blade. The resulting film covered a rectangular area of dimensions 50 mm × 60 mm. The surface-energy gradient on the glass coverslip was generated through the vapor deposition of phenylsilane [13]. The substrate surface energy varied linearly along the x direction from 30.5 mN/m (hydrophobic) at one edge of the film (x = 0) to 70.2 mN/m (hydrophilic) at the other edge (x = 50 mm). Along the y direction, the film-casting speed increased from 0 mm/s (at y = 0) to 0.5 mm/s (y = 60 mm) at a constant acceleration of 0.002 mm/s². The film-casting condition corresponds to the evaporative regime, where solvent evaporation occurs on timescales similar to that of solid-film formation [4]. In this regime, solvent evaporation at the meniscus induces a convective flow, driving the PGNs to concentrate and assemble at the contact line. The film thickness decreased with increasing coating speed, resulting in transitions from multilayers through a monolayer to a sub-monolayer with increasing y. This was verified by optical-microscopy observations of the boundaries between multilayer, bilayer, monolayer and sub-monolayer regions, the last of which were identified by the presence of holes in the film, typically 1 µm or greater as seen in the optical images.
The autonomous small-angle x-ray scattering (SAXS) experiment was performed at the Complex Materials Scattering (11-BM CMS) beamline at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. As described previously [27,28], experimental control was coordinated by combining three Python software processes: bluesky [22] for automated sample translations and data collection, SciAnalysis [21] for real-time analysis of newly collected SAXS images, and the above GPR-based optimization algorithms for decision-making. The incident x-ray beam was set to a wavelength of 0.918 Å (13.5 keV x-ray energy) and a size of 0.2 mm × 0.2 mm. The PGN-film-coated substrate was mounted normal to the incident x-ray beam, on a set of motorized xy translation stages. Transmission SAXS patterns were collected on an area detector (DECTRIS Pilatus 2M) located at a distance of 5.1 m downstream of the sample, with an exposure time of 10 s per image. The SAXS results indicate that the polymer-grafted nanorods tend to form ordered domains in which the nanorods lie flat and parallel to the surface and align with their neighbors. The fitting of SAXS intensity profiles via real-time analysis allowed for the extraction of quantities such as: the scattering-vector position q of the diffraction peak corresponding to the in-plane inter-nanorod spacing d = 2π/q; the degree of anisotropy η ∈ [0, 1] of the in-plane inter-nanorod alignment, where η = 0 for random orientations and η = 1 for perfect alignment [30]; the azimuthal angle χ, or the factor cos(2χ), for the in-plane orientation of the inter-nanorod alignment; and the grain size ξ of the nanoscale ordered domains, which is inversely proportional to the diffraction-peak width and provides a measure of the extent of in-plane positional correlations between aligned nanorods. The analysis-derived best-fit values and associated variances for these parameters were passed to the GPR decision algorithms.
Three analysis-derived quantities, ξ, η, and cos(2χ), were used as signals to steer the SAXS measurements as a function of the surface coordinates (x, y). For the initial part of the experiment, N < 464 (first 4 h), where N is the number of measurements completed up to a given point in the experiment, the autonomous steering utilized the exploration mode based on model-uncertainty maxima [28] for ξ, η, and cos(2χ). For the latter part of the experiment (464 ≤ N ≤ 1520, or the next 11 h), the feature-maximization mode [28] was used for η, while keeping ξ and cos(2χ) in the exploration mode. We found that the nanorods in the ordered domains tended to orient such that their long axes were aligned along the x direction [cos(2χ) ≈ 1], i.e., perpendicular to the coating direction, and that ξ and η were strongly coupled. Figure 8A (top panels) shows the N-dependent evolution of the model for the grain-size distribution ξ over the film surface. It should be noted that the entire experiment took 15 h, and that the GPR-based autonomous algorithms identified the highly ordered regions in the band 5 < y < 15 mm (between red lines in Fig. 8A), corresponding to the uniform monolayer region, within the first few hours. By contrast, grid-based scanning-probe transmission SAXS measurements would not be able to identify large regions of interest at these resolutions in such a short amount of time.
The collected data are corrupted by non-i.i.d. measurement noise. While all signals are corrupted by noise, we draw attention to the peak position q because it shows the most obvious correlation between non-i.i.d. measurement noise and model certainty. The green circles in Figure 8B (middle panel) and C (right panel) highlight the areas where the measurement noise affects the Gaussian-process predictive variance significantly. Note that we have not used q for steering in this case, but the general principle we want to show remains unchanged across all experimental results. Figure 8A shows the time evolution of the exploration of the model and the impact of non-i.i.d. noise on the model, but also on the uncertainty. If q had been used for steering without taking non-i.i.d. noise into account in the analysis, the autonomous experiment would have been misled, because predictive uncertainty due to high noise levels would not have been taken into account. Figure 8 shows that the next suggested measurement strongly depends on the noise. We remind the reader at this point that the next optimal measurement happens at the maximum of the GP predictive variance. The locations of the optima (Figure 8C) are clearly different when non-i.i.d. noise is taken into account. The objective function without measurement noise (Fig. 8C, left panel) shows no preference for regions of high noise (green circles in Fig. 8B, middle panel), where preference means higher function values of the GP predictive variance. In contrast, the variance function that takes measurement noise into account (Fig. 8C, right panel) gives preference to regions (green circles) where the measurement noise of the data is high. This is a significant advantage and can only be accomplished by taking non-i.i.d. measurement noise into account. In conclusion, the model that assumes no noise looks better resolved, which communicates a wrong level of confidence and misguides the steering. The model that takes non-i.i.d. noise into account finds the correct most likely model and the corresponding uncertainty. The algorithm also took advantage of anisotropy by learning a slightly longer length scale in the x direction, which increased the overall model certainty. Note that the algorithm used an objective-function formulation that put emphasis on high-amplitude regions of the parameter space; this led to a higher resolution in areas of interest.
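The steering effect described above can be reproduced in a small synthetic sketch: two hypothetical measurements, one precise and one noisy, and the GP predictive variance as the acquisition function. Ignoring the noise yields a symmetric acquisition; including V shifts the next suggested measurement toward the poorly measured region. All numbers below are illustrative.

```python
import numpy as np

def predictive_variance(X, v, Xgrid, sigma_s=1.0, l=1.0):
    # GP predictive variance on a grid; v holds per-point noise variances.
    d2 = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    k = lambda A, B: sigma_s**2 * np.exp(-0.5 * d2(A, B) / l**2)
    K = k(X, X) + np.diag(v)
    kappa = k(Xgrid, X)
    return sigma_s**2 - np.einsum('ij,ji->i', kappa, np.linalg.solve(K, kappa.T))

X = np.array([[0.0], [4.0]])                 # one precise, one noisy measurement
grid = np.linspace(-1.0, 5.0, 601)[:, None]
var_no_noise = predictive_variance(X, np.full(2, 1e-9), grid)
var_noniid = predictive_variance(X, np.array([1e-9, 0.5]), grid)
next_no_noise = grid[np.argmax(var_no_noise), 0]   # symmetric: the midpoint
next_noniid = grid[np.argmax(var_noniid), 0]       # shifted toward the noisy point
```

The noise-aware acquisition thus directs the instrument back toward poorly measured regions, exactly the behavior seen in Fig. 8C.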
The above autonomous SAXS experiment also revealed interesting features from the material-fabrication perspective. First, a somewhat surprising result is that the grain size is not observed to change significantly with surface energy (Figure 8A). Previous work on the assembly of polystyrene-grafted spherical gold nanoparticles [7] demonstrated a significant decrease in nanoparticle ordering when fabricating films on lower-surface-energy substrates (greater polymer-substrate interactions). Although the surface energies used in this study are similar, a different silane was used to modify the glass surface (phenylsilane vs. octyltrichlorosilane), which may differ in its interaction with polystyrene. We also note that PGN-substrate interactions will be sensitive to the molecular orientation of the functional groups, which is known to be highly dependent on the functionalization procedure [13]. Second, an unexpected well-ordered band was identified at 20 < x < 35 mm and y > 15 mm (between blue lines in Figure 8A), corresponding to the sub-monolayer region within an intermediate surface-energy range. We believe that this effect arises from instabilities associated with the solution meniscus near the middle of the coating blade (x ∼ 25 mm). Rapid solvent evaporation often leads to undesirable effects, including the generation of surface-tension gradients, Marangoni flows, and subsequent contact-line instabilities. This can result in the formation of non-uniform morphologies, as demonstrated by the irregular region of larger grain size centered in the middle of the film and spanning the entire velocity range. Further investigations into these issues are currently in progress.

Discussion and Conclusion
In this paper, we have demonstrated the importance of including inhomogeneous (i.e., non-i.i.d.) observation noise and anisotropy in Gaussian-process-driven autonomous materials-discovery experiments.
It is very common in the scientific community to rely on Gaussian processes that ignore measurement noise or only include homogeneous noise, i.e., noise that is constant across all measurements. In the experimental sciences, and especially in experimental materials science, strong inhomogeneity in measurement noise can be present; accounting only for homogeneous (i.i.d.) measurement noise is therefore insufficient and leads to inaccurate models and, in the worst case, wrong interpretations and missed scientific discoveries. We have shown that it is straightforward to include non-i.i.d. noise in the steering and modeling process. Figure 5 clearly shows the benefit of including non-i.i.d. measurement noise in the Gaussian process analysis, and Figure 6 supports this conclusion quantitatively by showing a faster error decline.
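The mechanics of including per-measurement noise are indeed simple: the vector of measurement variances is added to the diagonal of the covariance matrix before solving for the posterior. The sketch below is a minimal, self-contained illustration of this idea (not the code used in our experiments); it assumes an isotropic squared-exponential kernel, and all function names are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, amp=1.0):
    """Isotropic squared-exponential kernel (illustrative choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return amp * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, noise_var, X_star, length_scale=1.0, amp=1.0):
    """GP posterior mean/variance with per-point (non-i.i.d.) noise.

    noise_var is a vector holding one measurement variance per
    observation; it is added to the diagonal of the covariance matrix,
    which is the only change relative to the noise-free case.
    """
    K = rbf_kernel(X, X, length_scale, amp) + np.diag(noise_var)
    K_s = rbf_kernel(X_star, X, length_scale, amp)
    K_ss = rbf_kernel(X_star, X_star, length_scale, amp)
    alpha = np.linalg.solve(K, y)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)
```

Because the noise enters only through the diagonal, the computational cost is identical to the homogeneous-noise case; the posterior simply stops being forced through high-variance observations.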
The case for allowing anisotropy in the input space can be made whenever there is reason to believe that the data varies much more strongly in certain directions than in others. This is often the case when the directions have fundamentally different physical meanings; for instance, one direction can represent a temperature while another defines a physical distance. In such cases, accounting for anisotropy can be vastly beneficial, since the Gaussian process will learn the different length scales and use them to lower the overall uncertainty. Figure 7 shows how common anisotropy is, even in cases where it would normally not be expected, and how including it decreases the approximated error of the Gaussian process posterior mean. In our example, all axes carry the unit of temperature; even so, anisotropy is present and accounting for it has a significant impact on the approximation error.
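As a minimal illustration of what "learning different length scales" means in practice, the sketch below (a simplified, hypothetical kernel, not our production code) implements an anisotropic squared-exponential kernel with one length scale per input dimension:

```python
import numpy as np

def ard_kernel(X1, X2, length_scales, amp=1.0):
    """Anisotropic squared-exponential kernel with one length scale
    per input dimension (automatic relevance determination)."""
    ls = np.asarray(length_scales, dtype=float)
    diff = (X1[:, None, :] - X2[None, :, :]) / ls  # per-dimension scaling
    return amp * np.exp(-0.5 * (diff ** 2).sum(-1))

# A unit step along a slowly varying axis (long length scale) keeps two
# points correlated, while the same step along a rapidly varying axis
# (short length scale) decorrelates them.
origin = np.array([[0.0, 0.0]])
k_slow = ard_kernel(origin, np.array([[1.0, 0.0]]), length_scales=[10.0, 0.1])
k_fast = ard_kernel(origin, np.array([[0.0, 1.0]]), length_scales=[10.0, 0.1])
```

Replacing the single length scale by a vector of per-dimension scales is the only change needed; the scales themselves are then found by maximizing the log-likelihood, as in our experiments.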
In our autonomous synchrotron x-ray experiment we have seen how misleading the no-measurement-noise assumption can be. While the Gaussian process posterior mean that assumes no noise is much more detailed in Figure 8, it is not supported by the data, which is subject to non-i.i.d. noise. In addition, we have seen that the steering actually accounts for the measurement noise if it is included, which leads to a much smarter decision algorithm that knows where data is of poor quality and has to be substantiated. We showed that, without accounting for non-i.i.d. noise, this behavior would not arise; measurements would therefore be placed sub-optimally, wasting device access, staff time, and other resources.
It is important to discuss the computational costs that come with accounting for non-i.i.d. noise and anisotropy. While non-i.i.d. noise can be included at no additional computational cost, anisotropy potentially comes at a price. The more complex the anisotropy, the more hyperparameters have to be found, and the number of hyperparameters translates directly into the dimensionality of the space over which the likelihood is defined. The training process to find the hyperparameters therefore takes longer the more hyperparameters there are; however, the cost per function evaluation does not change significantly. Instead of avoiding valuable anisotropy, we should therefore make use of modern, efficient optimization methods.
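To illustrate this cost structure, the sketch below (illustrative names, synthetic data) evaluates the negative log marginal likelihood for an anisotropic squared-exponential kernel and searches a small grid of per-dimension length scales. Each added hyperparameter adds one dimension to this search, while each likelihood evaluation remains a single O(n³) factorization:

```python
import numpy as np

def neg_log_marginal_likelihood(length_scales, X, y, noise_var):
    """-log p(y | X) for an anisotropic squared-exponential kernel
    with per-point noise variances on the diagonal."""
    ls = np.asarray(length_scales, dtype=float)
    diff = (X[:, None, :] - X[None, :, :]) / ls
    K = np.exp(-0.5 * (diff ** 2).sum(-1)) + np.diag(noise_var)
    L = np.linalg.cholesky(K)                      # one O(n^3) factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * len(y) * np.log(2.0 * np.pi))

# Crude 2-D grid search; a real experiment would use a gradient-based
# or global optimizer over the (log) length scales instead.
rng = np.random.default_rng(1)
X = rng.uniform(size=(20, 2))
y = np.sin(8.0 * X[:, 0]) + 0.1 * X[:, 1]          # fast in x0, slow in x1
noise = np.full(20, 1e-4)
grid = np.logspace(-1, 1, 9)
best = min((neg_log_marginal_likelihood([l0, l1], X, y, noise), l0, l1)
           for l0 in grid for l1 in grid)
```

On this synthetic data the likelihood favors a short length scale along the rapidly varying axis and a long one along the nearly flat axis, which is exactly the anisotropy the training process is meant to discover.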
While our results have shown that accounting for non-i.i.d. noise and anisotropy is highly valuable for the efficiency of an autonomously steered experiment, we have only scratched the surface of possibilities. Both proposed improvements can be seen as part of a larger theme commonly referred to as kernel design. The possibilities for improving and tailoring Gaussian-process-driven steering of experiments are vast. Well-designed kernels have the power to extract sub-spaces of the Hilbert space of functions, which means we can put constraints on the functions we want to consider as our model. We will look into the impact of advanced kernel designs on autonomous data acquisition in the near future.

Figure 8: (middle row, B, from the left) An exact Gaussian-process interpolation of the complete measured data set for the peak position q. The data is corrupted by measurement errors, which corrupt the model if standard, exact interpolation techniques (including GPR) are used. The green circles mark the regions of the largest variances in the model and the corresponding high errors (measurement variances) that were recorded during the experiment. On the right is the Gaussian process model of q, taking into account the non-i.i.d. measurement variances; this model does not show any of the artifacts that are visible in the exact GPR interpolation. (bottom row, C) The final objective functions for no noise and non-i.i.d. noise in q, which has to be maximized to determine the next optimal measurement. If the experiment had been steered using the posterior variances in q without accounting for non-i.i.d. observation noise, the autonomous experiment would have been misled significantly.

Figure 1 :
Figure 1: Schematic of an autonomous experiment. The data acquisition device in this example is a beamline at a synchrotron light source. The measurement result depends on parameters x. The raw data is then sent through an automated data processing and analysis pipeline. From the analyzed data, the autonomous-experiment algorithm creates a surrogate model and an uncertainty function whose maxima represent points of high-value measurements; they are found by employing function optimization tools. The new measurement parameters x are then communicated to the data acquisition device and the loop starts over. The main contribution of the present work is that the model computation and uncertainty quantification account for the anisotropic nature of the model function and the input-dependent (non-i.i.d.) measurement noise. The surrogate model (bottom) shows how the model function evolves as the experiment is steered and more data (N) is collected. The red dots indicate the positions of the measurements, and their size represents the varying associated measurement variances. The numbers l_x and l_y indicate the anisotropic correlation lengths that the algorithm finds by maximizing a log-likelihood function. The ellipses visualize the found anisotropy. The take-home message for the practitioner is that the method will find the most likely model function given all collected data with their variances; the model function will not pass directly through the points but will find the most likely shape given all available information.
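The loop sketched in Figure 1 can be condensed to a few lines. The toy sketch below (hypothetical names, a synthetic stand-in for the instrument, and an isotropic kernel for brevity) measures, updates the GP with per-point noise variances, and selects the next point by maximizing the posterior variance over a candidate grid:

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Isotropic squared-exponential kernel (illustrative choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def next_measurement(X, y, noise_var, candidates, ls=0.3):
    """Pick the candidate with the largest posterior variance --
    a minimal uncertainty-driven acquisition step."""
    K = rbf(X, X, ls) + np.diag(noise_var)        # non-i.i.d. noise on diagonal
    Ks = rbf(candidates, X, ls)
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return candidates[np.argmax(var)]

# Autonomous loop: measure, update the model, choose the next point.
X = np.array([[0.2], [0.8]])
y = np.array([0.1, 0.5])
noise = np.array([1e-4, 1e-4])
grid = np.linspace(0.0, 1.0, 101)[:, None]
for _ in range(3):
    x_next = next_measurement(X, y, noise, grid)
    y_next = np.sin(3.0 * x_next[0])              # stand-in for the instrument
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)
    noise = np.append(noise, 1e-4)
```

In a real deployment, x_next would be sent to the beamline and y_next with its measured variance would come back from the analysis pipeline; the structure of the loop is otherwise unchanged.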

Figure 2 :
Figure 2: Figure emphasizing the distinction between the spaces and functions involved in the derivation. (a) A function over X. This can be the surrogate model ρ(x), the latent function f(x) to be approximated through an experiment, the function describing the measurements y(x), or the predictive mean function m(x). x_1 and x_2 are two experimentally controlled parameters (e.g., synthesis, processing, or environmental conditions) that the measurement outcomes potentially depend on. (b) The Gaussian probability density function over H which gives GPR its name. For noise-free measurements, y = f at measurement points, meaning that we can directly observe the model function. Generally this is not the case, and the observations y are corrupted by input-dependent (non-i.i.d.) noise.

Figure 3 :
Figure 3: Three one-dimensional examples with (a) no noise, (b) i.i.d. noise, and (c) non-i.i.d. noise, respectively. For the no-noise case, the model has to explain the data exactly. In the i.i.d.-noise case, the algorithm is free to choose a model that does not explain the data exactly but allows for a constant measurement variance. In the non-i.i.d.-noise case, the algorithm finds the most likely model given varying variances across the data set. Note the vertical axis labels: y(x) are the measurement outcomes, m(x) is the mean function, i.e., the most likely model, ρ(x) is the surrogate model, often assumed to equal the mean function, and f(x) is the "ground truth" latent function.

Figure 4 :
Figure 4: Model function with different length scales and different orders of differentiability in different directions. In the x_1 direction we have assumed that the model function is not differentiable and therefore used the exponential kernel. In the x_2 direction, the model can be differentiated an infinite number of times; we therefore chose the squared exponential kernel. For other orders of differentiability, other kernels can be used. Fixing the order of differentiability also gives the user the ability to incorporate domain knowledge into the experiment.
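A product kernel of this kind can be written down directly. The sketch below (illustrative, assuming unit amplitude and generic length scales) combines an exponential (Matérn-1/2, non-differentiable sample paths) factor along x_1 with a squared-exponential (infinitely differentiable) factor along x_2:

```python
import numpy as np

def mixed_kernel(X1, X2, l1=1.0, l2=1.0, amp=1.0):
    """Product kernel with direction-dependent differentiability:
    exponential along x1, squared exponential along x2."""
    d1 = np.abs(X1[:, None, 0] - X2[None, :, 0])        # |x1 - x1'|
    d2sq = (X1[:, None, 1] - X2[None, :, 1]) ** 2       # (x2 - x2')^2
    return amp * np.exp(-d1 / l1) * np.exp(-0.5 * d2sq / l2**2)
```

Since a product of valid covariance functions is again a valid covariance function, such constructions let the user encode different smoothness assumptions along each axis, as in Figure 4.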

Figure 5 :
Figure 5: The result of the diffusion-coefficient example on a three-dimensional input space. The figure shows the result of the GP approximation after 500 measurements for three different nanoparticle radii. While the measurement results are always subject to differing noise, the model can take noise into account in different ways. Most commonly, noise is ignored (left column). If noise is included, it is common to approximate it by i.i.d. noise (middle column). The proposed method models the noise as what it is: non-i.i.d. noise (right column). The iso-lines of the approximation are shown in white, while the iso-lines of the ground truth are shown in red. Observe how the no-noise and the i.i.d.-noise approximations create localized artifacts. The non-i.i.d. approximation does a far better job of creating a smooth model that explains all data, including noise.

Figure 6 :
Figure 6: The approximation errors of the surrogate model during the diffusion-coefficient example (Figure 5), for three different noise models noted in the legend.The bands around each line represent the standard deviation of this error metric computed by running repeated synthetic experiments.

Figure 7 :
Figure 7: Visualization of the grain size as a function of temperature history for a simple model of block copolymer grain-size coarsening. The figure demonstrates that when describing physical systems in high-dimensional spaces, strong anisotropy is frequently observed; only by taking this into account when estimating errors will experimental guidance be optimal. (a) 10,000 simulated temperature histories and their corresponding grain size represented by color. The majority of histories terminate in a small grain size (blue lines). A small select set of histories yield large grain sizes (dark red lines). (b) Example two-dimensional slice through the 11-dimensional parameter space; the anisotropy is clearly visible. (c) A different two-dimensional slice with no significant anisotropy present. (d) The estimated maximum standard deviation across the 11-dimensional domain as a function of the number of measurements during a synthetic autonomous experiment.