Introduction

Designing metamaterials or composite materials, in which computational tools select composable components to re-create desired properties that are not present in the constituent materials, is a crucial task for a variety of areas of engineering (acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics)1. For example, in metalenses, the components are subwavelength scatterers on a surface, but the device diameter is often >10³ wavelengths2. Applications of such optical structures include ultra-compact sensors, imaging, and spectroscopy devices used in cell phone cameras and in medical applications2. As metamaterials become larger in scale and as manufacturing capabilities improve, there is a pressing need for scalable computational design tools.

In this work, surrogate models are used to rapidly evaluate the effect of each metamaterial component during device design3, and machine learning is an attractive technique for building such models4,5,6,7. However, in order to exploit improvements in nano-manufacturing capabilities, components have an increasing number of design parameters, and training the surrogate models (using brute-force numerical simulations) becomes increasingly expensive. The question then becomes: How can we obtain an accurate model from minimal training data? We present an active-learning approach—in which training points are selected based on an error measure—that can reduce the number of training points by more than an order of magnitude for a neural-network (NN) surrogate model of partial differential equations (PDEs). Further, we show how such a surrogate can be exploited to speed up large-scale engineering optimization by >100×. In particular, we apply our approach to the design of optical metasurfaces: large (10²–10⁶ wavelengths λ) aperiodic nanopatterned (∼λ) structures that perform functions such as compact lensing8.

Metasurface design can be performed by breaking the surface into unit cells with a few parameters each (Fig. 1) via domain-decomposition approximations3,9, learning a surrogate model that predicts the transmitted optical field through each unit cell as a function of an individual cell’s parameters, and optimizing the total field (e.g. the focal intensity) as a function of the parameters of every unit cell3 (see “Results”). This makes metasurfaces an attractive application for machine learning because the surrogate unit-cell model is re-used millions of times during the design process, amortizing the cost of training the model on expensive exact Maxwell solves sampling many unit-cell parameters. For modeling the effect of 1–4 unit-cell parameters, Chebyshev polynomial interpolation can be very effective3, but it encounters an exponential curse of dimensionality with more parameters10,11. In this paper, we find that an NN can be trained with orders of magnitude fewer Maxwell solves for the same accuracy with 10 parameters, even for the most challenging case of multi-layer unit cells many wavelengths (>10λ) thick. In contrast, we show that subwavelength-diameter design regions (considered by several other authors4,5,6,7,12,13) require orders of magnitude fewer training points for the same number of parameters (Fig. 2), corresponding to the physical intuition that wave propagation through subwavelength regions is effectively determined by a few effective-medium parameters14, making those problems effectively low-dimensional. Unlike typical machine-learning applications, constructing surrogate models for physical models such as Maxwell’s equations corresponds to interpolating smooth functions with no noise, which requires the approaches to training and active learning described in the “Results” section. We believe that these methods greatly extend the reach of surrogate models for metamaterial optimization and other applications requiring moderate-accuracy high-dimensional smooth interpolation.

Fig. 1: Unit cells for metasurface design.
figure 1

3D unit cells: a fin unit cell with two parameters, b H-shape unit cell with four parameters. Unit cells (with independent sets of parameters) are juxtaposed to form a metasurface c in 3D and d in 2D, which is optimized to scatter light in a prescribed way. e 2D unit cell: multi-layer unit cell with holes with ten parameters. Each of the unit-cell parameters is illustrated by a red arrow. The transmitted field of the unit cell is computed with periodic boundary conditions. When the period is subwavelength, the transmitted field can be summarized by a single complex number—the complex transmission. Using the local periodic approximation and the unit cell simulations, we can efficiently compute the approximate source equivalent to the metasurface and generate the field anywhere in the far field.

Fig. 2: Test error with varying design-region diameters.
figure 2

Comparison of baseline training as we shrink the unit cell. a For the same number of training points, the fractional errors (defined in “Methods”) on the test set of the small unit cell and the smallest unit cell are, respectively, one and two orders of magnitude better than the error of the main unit cell when using 1000 training points or more, which indicates that the parameters are more independent when the design-region diameter is large (≳λ), and training the surrogate model becomes harder. b Pictures of the unit cells to scale. Each color corresponds to the line color in the plot. For clarity, an inset shows the smallest unit cell enlarged 10 times.

Recent work has demonstrated a wide variety of optical-metasurface design problems and algorithms. Different applications15 such as holograms16, polarization-17,18, wavelength-19, depth-of-field-20, or incident angle-dependent functionality21 are useful for imaging or spectroscopy22,23. Pestourie et al.3 introduced an optimization approach to metasurface design using the Chebyshev polynomial surrogate model, which was subsequently extended to topology optimization (~10³ parameters per cell) with online Maxwell solvers24. Metasurface modeling can also be composed with signal/image-processing stages for optimized end-to-end design25,26. Previous work demonstrated NN surrogate models in optics for a few parameters27,28,29, or with more parameters in deeply subwavelength-design regions4,12. As shown in Fig. 2, deeply subwavelength regions pose a vastly easier problem for NN training than parameters spread over larger diameters. Another approach involves generative design, again typically for subwavelength6,7 or wavelength-scale unit cells30, in some cases in conjunction with larger-scale models5,12,13. A generative model is essentially the inverse of a surrogate function: instead of going from geometric parameters to performance, it takes the desired performance as an input and produces the geometric structure, but the mathematical challenge appears to be closely related to that of surrogates.

Active learning is connected with the field of uncertainty quantification (UQ), because active learning consists of iteratively adding the most uncertain points to the training set (Figs. 3 and 4), and hence it requires a measure of uncertainty. Our approach to UQ is based on the NN-ensemble idea of ref. 31 due to its simplicity. There are many other approaches for UQ (ref. 32, Sec. 5), but ref. 31 demonstrated performance and scalability advantages of the NN-ensemble approach. This approach is an instance of Bayesian deep learning32. In contrast, Bayesian optimization relies on Gaussian processes that scale poorly (~N³, where N is the number of training samples)33,34. The work presented here achieves training-time efficiency (we show an order-of-magnitude reduction in sample complexity), design-time efficiency (the actively learned surrogate model is at least two orders of magnitude faster than solving Maxwell’s equations), and realistic large-scale designs (due to our optimization framework3), all in one package.

Fig. 3: Active-learning algorithm and the surrogate model.
figure 3

Diagram of the surrogate model (blue background) and the active-learning algorithm (orange background). The circle arrow signifies that the algorithm iterates T times. The fast evaluation of the surrogate is used both to create predictions of the surrogate model and to compute the error measure that selects the points to add to the training set.

Fig. 4: Algorithm for active learning of the surrogate model.
figure 4

The algorithm takes the input hyperparameters ninit, T, M, K, and returns the actively learned surrogate model (which outputs an estimate of the complex transmission coefficient and an error measure). The algorithm adds the training points iteratively, by filtering the randomly sampled points with the highest error measures.

Results

Metasurfaces and surrogate models

In this section, we present the NN surrogate model used in this paper, for which we adopt the metasurface design formulation from ref. 3. The first step of this approach is to divide the metasurface into unit cells with a few geometric parameters p each. For example, Fig. 1 shows several possible unit cells: (Fig. 1a) a rectangular pillar (fin) etched into a 3D dielectric slab35 (two parameters); (Fig. 1b) an H-shaped unit cell (four parameters) in a dielectric slab4; or (Fig. 1e) a multi-layered 2D unit cell with ten holes of varying widths considered in this paper. As depicted in Fig. 1c, d, a metasurface consists of an array of these unit cells. The second step is to solve for the transmitted field (from an incident plane wave) independently for each unit cell using approximate boundary conditions3,24,35,36, in our case a locally periodic approximation (LPA) based on the observation that optimal structures often have parameters that mostly vary slowly from one unit cell to the next (ref. 3 has a detailed section and figure about this approximation, Sec. 2.1 and Fig. 2 therein; other approximate boundary conditions are also possible9). For a subwavelength period, the LPA transmitted far field is entirely described by a single number—the complex transmission coefficient t(p). One can then compute the field anywhere above the metasurface by convolving these approximate transmitted fields with a known Green’s function—a near-to-far-field transformation37. Finally, any desired function of the transmitted field, such as the focal-point intensity, can be optimized as a function of the geometric parameters of each unit cell3.
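For concreteness, the LPA pipeline can be outlined in a few lines of Python. This is an illustrative sketch only—the function and variable names are ours, not the paper’s code—assuming a scalar 2D Green’s function and one complex transmission coefficient per unit cell.

```python
# Illustrative sketch of the LPA pipeline: each unit cell contributes an
# equivalent source weighted by its complex transmission t(p), and the field
# above the metasurface is obtained by summing Green's-function contributions
# (a discretized near-to-far-field transformation). Names are hypothetical.
import numpy as np

def far_field(ts, xs, r_obs, k):
    """ts: complex transmission of each unit cell (from the surrogate).
    xs: x-coordinates of the unit-cell centers (surface at y = 0).
    r_obs: (N, 2) observation points above the surface.
    k: free-space wavenumber 2*pi/wavelength."""
    E = np.zeros(len(r_obs), dtype=complex)
    for t, x in zip(ts, xs):
        d = np.hypot(r_obs[:, 0] - x, r_obs[:, 1])
        # 2D scalar Green's function ~ exp(i k d)/sqrt(d) up to constants;
        # a real implementation would use the exact transformation of ref. 37.
        E += t * np.exp(1j * k * d) / np.sqrt(d)
    return E

# The focal intensity |far_field(surrogate(params), xs, focal_spot, k)|**2 is
# then maximized over the unit-cell parameters.
```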

In this way, optimizing an optical metasurface is built on top of evaluating the function t(p) (transmission through a single unit cell as a function of its geometric parameters) thousands or even millions of times—once for every unit cell, for every step of the optimization process. Although it is possible to solve Maxwell’s equations online during the optimization process, allowing one to use thousands of parameters p per unit cell requires substantial parallel computing clusters24. Alternatively, one can solve Maxwell’s equations offline (before metasurface optimization) in order to fit t(p) to a surrogate model:

$$\tilde{t}({\bf{p}})\approx t({\bf{p}}),$$
(1)

which can subsequently be evaluated rapidly during metasurface optimization (perhaps for many different devices). For similar reasons, surrogate (or reduced-order) models are attractive for any design problem involving a composite of many components that can be modeled separately6,7,38. The key challenge of the surrogate approach is to increase the number of design parameters, especially in non-subwavelength regions as discussed in Fig. 2.

In this paper, the surrogate model for each of the real and imaginary parts of the complex transmission is an ensemble of J = 5 independent NNs trained on the same data but with different random batches39 in each training step. Each NN i is trained to output a prediction μi(p) and an error estimate σi(p) for every set of parameters p. To obtain these μi and σi from training data y(p) (from brute-force offline Maxwell solves) we minimize31:

$$-{\sum }_{{\bf{p}}}\mathrm{log}\,{p}_{{\Theta }_{i}}(y| {\bf{p}})\propto {\sum }_{{\bf{p}}}\left[\mathrm{log}\,{\sigma }_{i}({\bf{p}})+\frac{{(y({\bf{p}})-{\mu }_{i}({\bf{p}}))}^{2}}{2{\sigma }_{i}{({\bf{p}})}^{2}}\right]$$
(2)

over the parameters Θi of NN i. Equation (2) is motivated by problems in which y was sampled from a Gaussian distribution for each p, in which case μi and \({\sigma }_{i}^{2}\) could be interpreted as the mean and heteroskedastic variance, respectively31. Although our exact function t(p) is smooth and noise free, we find that Eq. (2) still works well to estimate the fitting error, as demonstrated in Fig. 5. Each NN is composed of an input layer with 13 nodes (10 nodes for the geometry parameterization—p ∈ [0, 1]¹⁰—and 3 nodes for the one-hot encoding39 of the three frequencies of interest), three fully-connected hidden layers with 256 rectified linear units (ReLU39), and one last layer containing one unit with a scaled hyperbolic-tangent activation function39 (for μi) and one unit with a softplus activation function39 (for σi). Given this ensemble of J NNs, the final prediction μ* (for the real or imaginary part of t(p)) and its associated error estimate σ* are combined as31

$${\mu }_{* }({\bf{p}})=\frac{1}{J}\mathop{\sum }\limits_{i = 1}^{J}{\mu }_{i}({\bf{p}}),$$
(3)
$${\sigma }_{* }^{2}({\bf{p}})=\frac{1}{J}\mathop{\sum }\limits_{i = 1}^{J}\left({\sigma }_{i}^{2}({\bf{p}})+\left({\mu }_{i}^{2}({\bf{p}})-{\mu }_{* }^{2}({\bf{p}})\right)\right).$$
(4)
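A minimal PyTorch sketch of one ensemble member and of Eqs. (2)–(4) is given below; the layer sizes follow the text, while details such as the tanh scale and the small numerical floor on σ are our own assumptions.

```python
# Sketch of one ensemble member and of the Gaussian negative log-likelihood of
# Eq. (2), following ref. 31. Layer sizes match the text; the tanh scale and
# the small floor added to sigma are assumptions for numerical stability.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateNN(nn.Module):
    def __init__(self, n_in=13, width=256, out_scale=1.0):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.mu_head = nn.Linear(width, 1)     # prediction mu_i(p)
        self.sigma_head = nn.Linear(width, 1)  # error estimate sigma_i(p)
        self.out_scale = out_scale

    def forward(self, p):
        h = self.body(p)
        mu = self.out_scale * torch.tanh(self.mu_head(h))  # scaled tanh output
        sigma = F.softplus(self.sigma_head(h)) + 1e-6      # positive sigma
        return mu, sigma

def nll_loss(mu, sigma, y):
    # Eq. (2): sum of log(sigma) + (y - mu)^2 / (2 sigma^2) over the batch
    return (torch.log(sigma) + (y - mu) ** 2 / (2 * sigma ** 2)).sum()

def ensemble_predict(models, p):
    # Eqs. (3)-(4): pooled mean and variance over the J ensemble members
    mus, sigmas = zip(*(m(p) for m in models))
    mu_star = sum(mus) / len(models)
    var_star = sum(s**2 + m**2 for m, s in zip(mus, sigmas)) / len(models) - mu_star**2
    return mu_star, var_star
```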
Fig. 5: Test error for actively learned surrogate, baseline, and Chebyshev interpolation.
figure 5

The lower the desired fractional error, the greater the reduction in training cost compared to the baseline algorithm; the slope of the active-learning fractional error (−0.2) is about 30% steeper than that of the baseline (−0.15). The active-learning algorithm achieves a reasonable fractional error of 0.07 with 12 times fewer points than the baseline, which corresponds to more than one order of magnitude saving in training data. Chebyshev interpolation (surrogate for the blue frequency only) is not competitive at this number of training points. Inset: Unit cell corresponding to the surrogate model.

Subwavelength is easier: effect of diameter

Before performing active learning, we first identify the regime where active learning can be most useful: unit-cell design volumes that are not small compared to the wavelength λ. Previous work on surrogate models4,5,6,7,12,13 demonstrated NN surrogates (trained with uniform random samples) for unit cells with ~10² parameters. However, these NN models were limited to a regime where the unit-cell degrees of freedom lay within a subwavelength-diameter volume of the unit cell. To illustrate the effect of shrinking the design volume on NN training, we trained our surrogate model for three unit cells (Fig. 2b): the main unit cell of this study is 12.5λ deep, the small unit cell is a vertically scaled-down version of the main unit cell only 1.5λ deep, and the smallest unit cell is a version of the small unit cell further scaled down (both vertically and horizontally) by 10×. Figure 2a shows that, for the same number of training points, the fractional errors (defined in “Methods”) on the test set of the small unit cell and the smallest unit cell are, respectively, one and two orders of magnitude better than the error of the main unit cell when using 1000 training points or more. (The surrogate output is the complex transmission \(\tilde{t}\).) That is, Fig. 2a shows that in the subwavelength-design regime, training the surrogate model is far easier than for larger design regions (>λ).

Physically, for extremely subwavelength volumes wave propagation is accurately approximated by an averaged effective medium14, so there are effectively only a few independent design parameters regardless of the number of geometric degrees of freedom. (Effective-medium theory, also called homogenization theory, arises from the fact that extremely subwavelength features affect waves only in an averaged sense, in the same way that light propagating through glass can be described using a refractive index rather than by explicitly modeling scattering from individual atoms.) Quantitatively, we find that the Hessian (second-derivative matrix) of the trained surrogate model in the smallest unit-cell case is dominated by only two singular values—consistent with a function that effectively has only two free parameters—with the other singular values being more than 100× smaller in magnitude; for the other two cases, many more training points would be required to accurately resolve the smallest Hessian singular values. A unit cell with a large design-volume diameter (≳λ) is much harder to train because the dimensionality of the design parameters is effectively much larger.
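This dimensionality check can be reproduced with automatic differentiation; a sketch, assuming the trained surrogate mean is available as a scalar-valued differentiable PyTorch function (here called surrogate_mu, a placeholder name), is:

```python
# Sketch of the Hessian analysis described above: compute the second-derivative
# matrix of the trained surrogate at a point p0 and inspect its singular values.
# `surrogate_mu` is a placeholder for a scalar-valued differentiable function of
# the 10 normalized parameters (e.g. the real part of the ensemble mean).
import torch

def hessian_singular_values(surrogate_mu, p0):
    H = torch.autograd.functional.hessian(surrogate_mu, p0)  # (10, 10) matrix
    return torch.linalg.svdvals(H)

# Example: svals = hessian_singular_values(surrogate_mu, torch.full((10,), 0.5))
# For the smallest unit cell, only two singular values dominate (the rest are
# >100x smaller), consistent with an effectively two-parameter function.
```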

Active-learning algorithm

Here, we present an online algorithm to choose training points that is significantly better at reducing the error than choosing points from a random uniform distribution. As described below, we select the training points where the estimated model error is largest, given the estimated error σ*.

The online algorithm used to train each of the real and imaginary parts is outlined in Figs. 3 and 4. Initially we choose ninit uniformly distributed random points \({{\bf{p}}}_{1},{{\bf{p}}}_{2},...,{{\bf{p}}}_{{n}_{{\rm{init}}}}\) to train a first iteration \({\tilde{t}}^{0}({\bf{p}})\) over 50 epochs39. Then, given the model at iteration i, we evaluate \({\tilde{t}}^{i}({\bf{p}})\) (which is orders of magnitude faster than the Maxwell solver) at M × K points sampled uniformly at random and choose the K points with the largest \({\sigma }_{* }^{2}\). We perform the expensive Maxwell solves only for these K points, and add the newly labeled data to the training set. We train \({\tilde{t}}^{i+1}({\bf{p}})\) with the newly expanded training set, using \({\tilde{t}}^{i}\) as a warm start. We repeat this process T times.

Essentially, the method works because the error estimate σ* is updated every time the model is retrained with an expanded dataset. In this way, the model tells us where it does poorly: in order to minimize Eq. (2), it assigns a large σ* to parameters p where its prediction would otherwise be inaccurate.
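The loop of Figs. 3 and 4 can be summarized in pseudocode; in this sketch, train, maxwell_solve, and ensemble_sigma2 are placeholders for the retraining routine, the FDFD solver, and the ensemble error measure of Eq. (4).

```python
# Pseudocode sketch of the active-learning loop of Figs. 3 and 4. `train`,
# `maxwell_solve`, and `ensemble_sigma2` are placeholders for the routines
# described in the text; hyperparameter handling is simplified.
import numpy as np

def active_learning(n_init, T, M, K, dim=10, rng=np.random.default_rng(0)):
    P = rng.uniform(size=(n_init, dim))          # initial uniform samples
    Y = maxwell_solve(P)                         # expensive labels (FDFD solves)
    model = train(None, P, Y)                    # first surrogate, 50 epochs
    for _ in range(T):
        cand = rng.uniform(size=(M * K, dim))    # cheap candidate pool
        err = ensemble_sigma2(model, cand)       # error measure sigma_*^2
        worst = cand[np.argsort(err)[-K:]]       # K most uncertain candidates
        P = np.vstack([P, worst])
        Y = np.concatenate([Y, maxwell_solve(worst)])
        model = train(model, P, Y)               # retrain with warm start
    return model
```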

Active-learning results

Our algorithm achieves an order-of-magnitude reduction in training data.

We compared the fractional errors of an NN surrogate model trained using uniform random samples with an identical NN trained using an active-learning approach, in both cases modeling the complex transmission of a multi-layer unit cell with ten independent parameters (Fig. 5, inset). With the notation of our algorithm in Fig. 4, the baseline corresponds to T = 0 and ninit equal to the total number of training points. This corresponds to no active learning at all, because the ninit points are chosen from a random uniform distribution. In the case of active learning, ninit = 2000, M = 4, and we computed results for K = 500, 1000, 2000, 4000, 8000, 16,000, 32,000, 64,000, and 128,000. Although three orders of magnitude on the log-log plot is too small a range to determine whether the apparent linearity indicates a power law, Fig. 5 shows that the lower the desired fractional error, the greater the reduction in training cost compared to the baseline algorithm; the slope of the active-learning fractional error (−0.2) is about 30% steeper than that of the baseline (−0.15). The active-learning algorithm achieves a reasonable fractional error of 0.07 with 12 times fewer points than the baseline, which corresponds to more than one order of magnitude saving in training data (far fewer expensive Maxwell solves). This advantage would presumably increase for a lower error tolerance, though computational costs prohibited us from collecting orders of magnitude more training data to explore this in detail. For comparison and completeness, Fig. 5 shows fractional errors using Chebyshev interpolation (for the blue frequency only). Chebyshev interpolation has a much worse fractional error for a similar number of training points. Chebyshev interpolation suffers from the curse of dimensionality—the number of training points grows exponentially with the number of variables. The two fractional errors shown are for three and four interpolation points in each of the dimensions (3¹⁰ = 59,049 and 4¹⁰ = 1,048,576 training points), respectively. In contrast, NNs are known to mitigate the curse of dimensionality40.

Application to metamaterial design: we used both surrogate models to design a multiplexer—an optical device that focuses different wavelengths at different points in space. The actively learned surrogate model results in a design that much more closely matches a numerical validation than the baseline surrogate (Fig. 6). As explained in the “Results” section, we replace a Maxwell’s equations solver with a surrogate model to rapidly compute the optical transmission through each unit cell; a similar surrogate approach could be used for optimizing many other complex physical systems. In the case of our two-dimensional unit cell, the surrogate model is two orders of magnitude faster than solving Maxwell’s equations with a finite-difference frequency-domain (FDFD) solver41. The speed advantage of a surrogate model becomes drastically greater in three dimensions, where PDE solvers are much more costly while the surrogate model remains the same.

Fig. 6: Application to metamaterial design.
figure 6

a, b We used a the actively learned and b the baseline surrogate models to design a multiplexer—an optical device that focuses different wavelengths at different points in space. The actively learned surrogate model results in a design that much more closely matches a numerical validation than the baseline surrogate. This shows that the active-learning surrogate is better at driving the optimization away from regions of inaccuracy. c Resulting metastructure for the active-learning surrogate with 100 unit cells of 10 independent parameters each (one parameter per layer).

The surrogate model is evaluated millions of times during a metastructure optimization. We used the actively learned surrogate model and the baseline surrogate model (uniform random training samples), in both cases with 514,000 training points, and we optimized a ten-layer metastructure with 100 unit cells of period 400 nm for a multiplexer application—where three wavelengths (blue: 405 nm, green: 540 nm, and red: 810 nm) are focused on three different focal spots (−10 μm, 60 μm), (0, 60 μm), and (+10 μm, 60 μm), respectively. The diameter is 40 μm and the focal length is 60 μm, which corresponds to a numerical aperture of 0.3. Our optimization scheme tends to yield results robust to manufacturing errors3 for two reasons: first, we optimize for the worst case of the three focal-spot intensities, using an epigraph formulation3; second, we compute the average intensity from an ensemble of surrogate models that can be thought of as a Gaussian distribution \(\tilde{t}({\bf{p}})={\mu }_{* }({\bf{p}})+{\sigma }_{* }({\bf{p}})\epsilon\), where \(\epsilon \sim {\mathcal{N}}(0,1)\) and μ* and σ* are defined in Eqs. (3) and (4), respectively, giving

$${\mathbb{E}}{\left|E({\bf{r}})\right|}^{2}={\left|\int\!\!G\text{}{\mu }_{* }\right|}^{2}+{\left|\int\!\!G\text{}{\sigma }_{* }\right|}^{2},$$
(5)

where G is a Green’s function that generates the far field from the sources of the metastructure3. The resulting optimized structure for the active-learning surrogate is shown in Fig. 6c.
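Given the ensemble outputs, Eq. (5) amounts to two weighted sums per focal point. A minimal sketch follows, with G assumed to be a precomputed vector of complex Green’s-function weights (one per unit cell, discretizing the integral for a fixed observation point).

```python
# Sketch of the robust objective of Eq. (5): expected focal intensity from the
# ensemble mean mu_* and error estimate sigma_* of each unit cell. G holds the
# (assumed precomputed) complex Green's-function weight of each unit cell for a
# fixed observation point.
import numpy as np

def expected_focal_intensity(G, mu_star, sigma_star):
    # E|E(r)|^2 = |sum_j G_j mu*_j|^2 + |sum_j G_j sigma*_j|^2
    return np.abs(G @ mu_star) ** 2 + np.abs(G @ sigma_star) ** 2
```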

In order to compare the surrogate models, we validate the designs by computing the optimal unit-cell fields directly using a Maxwell solver instead of the surrogate model. This is computationally easy because it only needs to be done once for each of the 100 unit cells instead of millions of times during the optimization. The focal lines—the field intensity along a line parallel to the two-dimensional metastructure and passing through the focal spots—resulting from the validation are exact solutions to Maxwell’s equations assuming the LPA (see “Results” section). Figure 6a, b shows the resulting focal lines for the active-learning and baseline surrogate models. A multiplexer application requires similar peak intensity for each of the focal spots, which is achieved using worst-case optimization3. Figure 6a, b shows that the actively learned surrogate has ≈3× smaller error in the focal intensity compared to the baseline surrogate model. This result shows that not only is the active-learning surrogate more accurate than the baseline surrogate for 514,000 training points, but the results are also more robust: the optimization does not drive the parameters towards regions of high inaccuracy of the surrogate model. Note that we limited the design to a small overall diameter (100 unit cells) mainly to ease visualization (Fig. 6c), and we find that this design can already yield good focusing performance despite the small diameter. In earlier work, we have already demonstrated that our optimization framework is scalable to designs that are orders of magnitude larger42. In principle, a manufacturing uncertainty measure could also be incorporated into the metasurface design process via robust optimization algorithms43, but in practice metasurface designs are already typically robust enough to manufacture, especially since multi-wavelength optimization is already a form of robustness3—robustness to any kind of error, including that of the machine-learning surrogate.

Previous work, such as ref. 44—a different approach to active learning that does not quantify uncertainty—suggested iteratively adding the optimum design points to the training set (re-optimizing before each new set of training points is added). However, we did not find this approach to be beneficial in our case. In particular, we tried adding the data generated from LPA validations of the optimal design parameters, in addition to the points selected by our active-learning algorithm, at each training iteration, but we found that this actually destabilized the learning and resulted in designs qualitatively worse than the baseline. By exploiting validation points in this way, the active learning of the surrogate seems to explore less of the landscape of the complex-transmission function, and hence leads to poorer designs. Such exploitation–exploration trade-offs are known in the active-learning literature45.

Discussion

In this paper, we present an active-learning algorithm for composite materials which reduces the training time of the surrogate model for a physical response by at least one order of magnitude. The simulation time is reduced by at least two orders of magnitude using the surrogate model compared to solving the PDEs numerically. While the domain-decomposition method used here is the LPA and the PDEs are the Maxwell equations, the proposed approach is directly applicable to other domain-decomposition methods (e.g. overlapping domain approximation9) and other PDEs or ordinary differential equations46.

We used an ensemble of NNs for interpolation in a regime that is seldom considered in the machine-learning literature—machine-learning models are mostly trained from noisy measurements, whereas here the data are obtained from smooth functions. In this regime, it would be instructive to have a deeper understanding of the relationship between NNs and traditional approximation theory (e.g. with polynomials and rational functions10,11). For example, the likelihood maximization of our method forces σ* to go to zero when \(\tilde{t}({\bf{p}})=t({\bf{p}})\). Although this allows us to simultaneously obtain a prediction μ* and an error estimate σ*, there is a drawback. In the interpolation regime (when the surrogate is fully determined), σ* would become identically zero even if the surrogate does not match the exact model away from the training points. In contrast, interpolation methods such as Chebyshev polynomials yield a meaningful measure of the interpolation error even for exact interpolation of the training data10,11. In the future, we plan to separate the estimation model and the model for the error measure using a meta-learner architecture47, with the expectation that the meta-learner will produce a more accurate error measure and further improve training time. We will also explore other ensembling methods that could improve the accuracy of our model48,49. We believe that the method presented in this paper will greatly extend the reach of surrogate-model-based optimization of composite materials and other applications requiring moderate-accuracy high-dimensional interpolation.

Methods

Training-data computation

The complex transmission coefficients were computed in parallel using an open-source FDFD solver for the Helmholtz equation50 on a 3.5 GHz 6-Core Intel Xeon E5 processor. The material properties of the multi-layered unit cells are silica (refractive index of 1.45) in the substrate, and air (refractive index of 1) in the holes and in the background. In the main unit cell, the period of the cell is 400 nm, the height of the ten holes is fixed to 304 nm, their widths vary between 60 and 340 nm, and each hole is separated from the next by 140 nm of substrate. In the small unit cell, the period of the cell is 400 nm, the height of the ten holes is 61 nm, their widths vary between 60 and 340 nm, and there is no separation between the holes. The smallest unit cell is the same as the small unit cell shrunk ten times (period of 40 nm, ten holes of height 6.1 nm, and widths varying between 6 and 34 nm).
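As an illustration, the ten normalized parameters p ∈ [0, 1]¹⁰ can be mapped to physical hole widths as follows; the linear mapping is our assumption, but the width bounds and fixed layer geometry are those listed above.

```python
# Hypothetical mapping from normalized parameters p in [0, 1]^10 to the ten hole
# widths of the main unit cell; the linear form is an assumption, the bounds and
# fixed dimensions follow the text.
import numpy as np

PERIOD = 400.0               # nm, unit-cell period
HOLE_HEIGHT = 304.0          # nm, fixed hole height (main unit cell)
SPACER = 140.0               # nm of substrate between consecutive holes
W_MIN, W_MAX = 60.0, 340.0   # nm, allowed hole widths

def hole_widths(p):
    """Return the ten hole widths in nm for normalized parameters p."""
    return W_MIN + np.asarray(p) * (W_MAX - W_MIN)
```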

Metalens design problem

The complex transmission data are used to compute the scattered field off a multi-layered metastructure with 100 unit cells as in ref. 3. The metastructure was designed to focus three wavelengths (blue: 405 nm, green: 540 nm, and red: 810 nm) on three different focal spots (−10 μm, 60 μm), (0, 60 μm), and (+10 μm, 60 μm), respectively. The epigraph formulation of the worst-case optimization and the derivation of the adjoint method to get the gradient are detailed in ref. 3. Any gradient-based optimization algorithm would work, but we used an algorithm based on conservative convex separable approximations51. The average intensity is derived from the distribution of the surrogate model \(\tilde{t}({\bf{p}})={\mu }_{* }({\bf{p}})+{\sigma }_{* }({\bf{p}})\epsilon\) with \(\epsilon \sim {\mathcal{N}}(0,1)\) and the computation of the intensity based on the local field as in ref. 3,

$$| E({\bf{r}}){| }^{2}={\left|{\int_{\Sigma }}G({\bf{r}},{{\bf{r}}}^{\prime})\left(-\tilde{t}({\bf{p}}({{\bf{r}}}^{\prime}))\right){\mathrm d}{{\bf{r}}}^{\prime}\right|}^{2},$$
(6)
$$={\int_{\Sigma }}\bar{G}({\bar{\mu }}_{* }({\bf{p}})+{\bar{\sigma }}_{* }({\bf{p}})\epsilon ){\mathrm d}{{\bf{r}}}^{\prime}{\int_{\Sigma }}G({\mu }_{* }({\bf{p}})+{\sigma }_{* }({\bf{p}})\epsilon ){\mathrm d}{{\bf{r}}}^{\prime},$$
(7)
$$={\int\!\!\bar{G}{\bar{\mu }}_{* }}{\int\!\! G\,{\mu }_{* }}+{\epsilon }^{2}{\int\!\bar{G}{\bar{\sigma }}_{* }}{\int\!\! G\,{\sigma }_{* }}+2\epsilon {\rm{Re}}\left({\int\!\!\bar{G}{\bar{\mu }}_{* }}{\int\!\! G\,{\sigma }_{* }}\right),$$
(8)
$$={\left|\int\!\!G\text{}{\mu }_{* }\right|}^{2}+{\epsilon }^{2}{\left|\int\!\!G\text{}{\sigma }_{* }\right|}^{2}+2\epsilon {\rm{Re}}\left(\int\!\!\bar{G}{\bar{\mu }}_{* }\int\!\!G\text{}\,{\sigma }_{* }\right),$$
(9)

where the \(\bar{(\cdot )}\) notation denotes the complex conjugate, the notations \({\int}_{\Sigma }(\cdot ){\mathrm d}{{\bf{r}}}^{\prime}\) and \(G({\bf{r}},{{\bf{r}}}^{\prime})\) are simplified to ∫ and G, and the notation \({\bf{p}}({{\bf{r}}}^{\prime})\) is dropped for concision. From the linearity of expectation,

$${\mathbb{E}}{\left|E({\bf{r}})\right|}^{2}={\left|\int\!\!G\text{}{\mu }_{* }\right|}^{2}+{\mathbb{E}}({\epsilon }^{2}){\left|\int\!\!G\text{}{\sigma }_{* }\right|}^{2}+2{\mathbb{E}}(\epsilon ){\rm{Re}}\left(\int\!\!\bar{G}{\bar{\mu }}_{* }\int\!\!G\text{}\,{\sigma }_{* }\right),$$
(10)
$${\mathbb{E}}{\left|E({\bf{r}})\right|}^{2}={\left|\int\!\!G\text{}{\mu }_{* }\right|}^{2}+{\left|\int\!\!G\text{}{\sigma }_{* }\right|}^{2},$$
(11)

where we used that \({\mathbb{E}}(\epsilon )=0\) and \({\mathbb{E}}({\epsilon }^{2})=1\).
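The worst-case (epigraph) optimization of ref. 3 can be set up with any gradient-based constrained optimizer; the sketch below uses the CCSAQ implementation of the CCSA algorithm51 in the NLopt library, with intensity_and_grad standing in as a placeholder for the surrogate-based expected intensity of Eq. (11) and its adjoint gradient.

```python
# Sketch of the epigraph (worst-case) formulation: maximize an auxiliary
# variable t subject to t <= I_k(p) for every wavelength k. `intensity_and_grad`
# is a placeholder returning the expected focal intensity of Eq. (11) and its
# gradient with respect to the design parameters p.
import numpy as np
import nlopt

def design_multiplexer(n_params, wavelengths, intensity_and_grad, p0):
    n = n_params + 1                      # design parameters plus epigraph variable t
    opt = nlopt.opt(nlopt.LD_CCSAQ, n)    # CCSA (ref. 51) as implemented in NLopt
    opt.set_lower_bounds([0.0] * n_params + [-float("inf")])
    opt.set_upper_bounds([1.0] * n_params + [float("inf")])

    def objective(x, grad):               # maximize the worst-case proxy t = x[-1]
        if grad.size > 0:
            grad[:] = 0.0
            grad[-1] = 1.0
        return x[-1]
    opt.set_max_objective(objective)

    for wl in wavelengths:                # enforce t - I_k(p) <= 0 for each wavelength
        def constraint(x, grad, wl=wl):
            I, dI = intensity_and_grad(x[:-1], wl)
            if grad.size > 0:
                grad[:-1] = -dI
                grad[-1] = 1.0
            return x[-1] - I
        opt.add_inequality_constraint(constraint, 1e-8)

    opt.set_maxeval(500)
    x_opt = opt.optimize(np.concatenate([p0, [0.0]]))
    return x_opt[:-1]                     # drop the auxiliary variable
```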

Active-learning architecture and training

The ensemble of NNs was implemented using PyTorch52 on a 3.5 GHz 6-Core Intel Xeon E5 processor. We trained an ensemble of 5 NNs for each surrogate model. Each NN is composed of an input layer with 13 nodes (10 nodes for the geometry parameterization and 3 nodes for the one-hot encoding39 of the three frequencies of interest), three fully-connected hidden layers with 256 rectified linear units (ReLU39), and one last layer containing one unit with a scaled hyperbolic-tangent activation function39 (for μi) and one unit with a softplus activation function39 (for σi). The cost function is the negative log-likelihood of a Gaussian as in Eq. (2). The mean and the variance of the ensemble are the pooled mean and variance from Eqs. (3) and (4). The optimizer is Adam53. The parameters are initialized using PyTorch’s default settings, i.e., sampled uniformly on a support inversely proportional to the square root of the number of input parameters. The starting learning rate is 0.001. After the tenth epoch, the learning rate is decayed by a factor of 0.99. Each iteration of the active-learning algorithm, as well as the baseline, was trained for 50 epochs. The choice of training points is detailed in the algorithm of Fig. 4. The quantitative evaluations were computed using the fractional error on a test set containing 2000 points chosen from a random uniform distribution. The fractional error FE between two vectors of complex values \({{\bf{u}}}_{{\rm{estimate}}}\) and \({{\bf{v}}}_{{\rm{true}}}\) is

$${\mathrm {FE}}=\frac{| {{\bf{u}}}_{{\rm{estimate}}}-{{\bf{v}}}_{{\rm{true}}}| }{| {{\bf{v}}}_{{\rm{true}}}| },$$
(12)

where ∣ ⋅ ∣ denotes the L2-norm for complex vectors.
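Eq. (12) translates directly to code:

```python
# Fractional error of Eq. (12); np.linalg.norm computes the L2-norm of
# complex-valued vectors.
import numpy as np

def fractional_error(u_estimate, v_true):
    return np.linalg.norm(u_estimate - v_true) / np.linalg.norm(v_true)
```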

For 128k training points and the surrogate NN architecture described in this section, the computation time of active learning breaks down into 5.4k seconds for the NN ensemble (training and evaluation) and 27.8k seconds for the Maxwell simulations, on a 3.5 GHz 6-Core Intel Xeon E5 processor. The Maxwell simulations are the most expensive part of the active-learning process and account for 85% of the total computation time.
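For completeness, the training configuration described above can be sketched as follows; the model and dataset below are toy placeholders, and the per-epoch application of the 0.99 decay after the tenth epoch is our reading of the schedule.

```python
# Sketch of the training loop and learning-rate schedule described in
# "Active-learning architecture and training". The model and dataset are toy
# placeholders; the loss is the Gaussian negative log-likelihood of Eq. (2).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader

model = nn.Sequential(nn.Linear(13, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 2))           # placeholder: [mu, pre-sigma]
data = TensorDataset(torch.rand(2000, 13), torch.rand(2000, 1))
loader = DataLoader(data, batch_size=64, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer, lr_lambda=lambda epoch: 0.99 if epoch >= 10 else 1.0)

for epoch in range(50):                            # 50 epochs per (re)training
    for p_batch, y_batch in loader:
        optimizer.zero_grad()
        out = model(p_batch)
        mu, sigma = out[:, :1], F.softplus(out[:, 1:]) + 1e-6
        loss = (torch.log(sigma) + (y_batch - mu) ** 2 / (2 * sigma ** 2)).mean()
        loss.backward()
        optimizer.step()
    scheduler.step()                               # apply the learning-rate decay
```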