Introduction

Numerical solution of Maxwell’s equations is required in a wide range of design problems, from integrated photonic devices to RF antennas and filters. Recent advances in large-scale computational capabilities have opened the door to algorithmic design of electromagnetic devices for a number of applications1,2,3. These design algorithms typically explore very high dimensional parameter spaces using gradient-based optimization methods. A single device optimization requires ~500–1000 electromagnetic simulations, making these simulations the primary computational bottleneck. However, during such a design process, the electromagnetic simulations are performed on correlated permittivity distributions (e.g. permittivity distributions generated at different steps of a gradient-based design algorithm). The availability of such data opens up the possibility of using data-driven approaches for accelerating electromagnetic simulations.

Accelerating simulations in a gradient-based optimization algorithm has been previously investigated in structural engineering design via Krylov subspace recycling4,5 — the key idea in such approaches is to use the subspace spanned by simulations in an optimization trajectory to augment future simulations performed in the same trajectory. However, these approaches become computationally infeasible if an attempt is made to exploit the full extent of the available simulation data (e.g. simulations performed over a large number of correlated optimization trajectories), since the augmenting subspace becomes increasingly high-dimensional. Approaches that intelligently select a suitable low-dimensional subspace using machine learning models, like the one explored in this paper, are thus expected to be more efficient at exploiting the available simulation data in practical design settings. Data-driven methods for solving partial differential equations have only recently been investigated, with demonstrations of learning an ‘optimal’ finite-difference stencil6 for time-domain simulations and of using neural networks to solve partial differential equations7.

In this paper, we investigate the possibility of accelerating finite difference frequency domain (FDFD) simulations of Maxwell’s equations using data-driven models. Performing an FDFD simulation is equivalent to solving a large sparse system of linear equations, which is typically done using an iterative solver such as the Generalized Minimal Residual (GMRES) algorithm8. Here we develop an accelerated solver (data-driven GMRES) by interfacing a machine learning model with GMRES. The machine learning model is trained to predict a subspace that approximates the simulation result, and this subspace is used to augment the GMRES iterations. Since the simulation is still performed with an iterative solver, the result is guaranteed to be accurate — the performance of the machine learning model only affects how fast the solution is obtained. This is a major advantage of this approach over other data-driven attempts at solving Maxwell’s equations9,10 presented so far, in which a misprediction by the model results in an inaccurate simulation. Using wavelength-splitting gratings as an example, we show an order of magnitude reduction in the number of GMRES iterations required for solving frequency-domain Maxwell’s equations.

This paper is organized as follows: Section 1 outlines the data-driven GMRES algorithm, and Section 2 presents results of applying data-driven GMRES, using two machine learning models (principal component analysis and a convolutional neural network), to simulate wavelength-splitting gratings. We show that data-driven GMRES not only achieves an order of magnitude speedup over GMRES, but also outperforms a number of commonly used data-free preconditioning techniques.

Data-driven GMRES

In the frequency domain, Maxwell’s equations can be reduced to a partial differential equation relating the electric field E(x) to its source J(x):

$$[\nabla \times \nabla \times -\,\frac{{\omega }^{2}}{{c}^{2}}\varepsilon ({\bf{x}})]{\bf{E}}({\bf{x}})=-\,{\rm{i}}\omega {\mu }_{0}{\bf{J}}({\bf{x}})$$
(1)

where \(\omega \) is the frequency of the simulation, and \(\varepsilon ({\bf{x}})\) is the permittivity distribution as a function of space. The finite difference frequency domain method11 is a popular approach for numerically solving this partial differential equation – it discretizes the equation on the Yee grid, with perfectly matched layers and periodic boundary conditions used to terminate the simulation domain, to obtain a system of linear equations, \(Af=b\), where A is a sparse matrix describing the operator \(\nabla \times \nabla \times -\,{\omega }^{2}\varepsilon ({\bf{x}})/{c}^{2}\) and b is a vector describing the source term \(-\,i\omega {\mu }_{0}{\bf{J}}({\bf{x}})\) on the Yee grid.
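To make the structure of this linear system concrete, the sketch below assembles the simpler 1D analogue of Eq. 1 (a scalar Helmholtz equation on a uniform grid, without PML) using SciPy sparse matrices; the 2D Yee-grid discretization with PML used in this paper follows the same pattern with a more involved operator. The function and parameter names are illustrative and are not the implementation used for the results below.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

C0 = 299792458.0  # speed of light (m/s)

def assemble_1d_helmholtz(eps, omega, dx):
    """Assemble the matrix A of A f = b for a 1D scalar analogue of Eq. (1).

    A approximates -d^2/dx^2 - (omega/c)^2 * eps(x) with a three-point
    finite-difference stencil (Dirichlet ends, no PML -- illustration only).
    """
    n = eps.size
    main = 2.0 / dx**2 - (omega / C0) ** 2 * eps
    off = -np.ones(n - 1) / dx**2
    return sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr").astype(np.complex128)

# Example: point source in a uniform dielectric at a 1.4 um wavelength, 20 nm grid.
eps = np.full(400, 2.25)
A = assemble_1d_helmholtz(eps, omega=2 * np.pi * C0 / 1.4e-6, dx=20e-9)
b = np.zeros(400, dtype=np.complex128)
b[200] = 1.0
f = spla.spsolve(A.tocsc(), b)  # a direct solve is fine at this tiny size
```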

For large-scale problems, this system of equations is typically solved via a Krylov subspace-based iterative method8. Krylov subspace methods have the advantage that they only access the matrix A via matrix-vector products, which can be performed very efficiently since A is a sparse matrix. The iterative algorithm that we focus on in this paper is GMRES8. In the ith iteration of standard GMRES, the solution to \(Af=b\) is approximated by fi, where:

$${f}_{i}=\mathop{{\rm{argmin}}}\limits_{f\in {{\mathscr{K}}}_{i}(A,b)}\,\parallel Af-b{\parallel }^{2}$$
(2)

where \({{\mathscr{K}}}_{i}(A,b)={\rm{span}}(b,Ab,{A}^{2}b\ldots {A}^{i-1}b)\) is the Krylov subspace of dimension i generated by \((A,b)\). The Krylov subspace can be generated iteratively, i.e. if an orthonormal basis for \({{\mathscr{K}}}_{i}(A,b)\) has been computed, then an orthonormal basis for \({{\mathscr{K}}}_{i+1}(A,b)\) can be efficiently computed with one additional matrix-vector product. In practical simulation settings, the GMRES iteration is performed until the Krylov subspace is large enough for the residual \({r}_{i}=\parallel A{f}_{i}-b\parallel /\parallel b\parallel \) to be smaller than a user-defined threshold. We also note that GMRES is a completely data-free algorithm — it only requires knowledge of the source vector b and the ability to multiply the matrix A with an arbitrary vector.
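A minimal sketch of this residual-threshold stopping criterion with SciPy's GMRES (reusing the toy A and b from the previous snippet; the threshold value is illustrative) is:

```python
import scipy.sparse.linalg as spla

residuals = []  # SciPy's legacy callback passes the current (preconditioned) residual norm
f_approx, info = spla.gmres(
    A, b,
    rtol=1e-2,     # user-defined residual threshold (this keyword was named `tol` in older SciPy)
    restart=100,   # no restarting within the first 100 iterations, i.e. full GMRES here
    maxiter=100,
    callback=lambda rk: residuals.append(rk),
)
print("converged" if info == 0 else f"not converged after {len(residuals)} iterations")
```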

The number of GMRES iterations can be significantly reduced if an estimate of the solution \({A}^{-1}b\) is known. To this end, we train a data-driven model on simulations of correlated structures to predict a (low-dimensional) subspace \({\mathscr{V}}\) within which \({A}^{-1}b\) is expected to lie. More specifically, the model predicts \(N\) vectors, \({v}_{1},{v}_{2}\ldots {v}_{N}\), such that \({\mathscr{V}}={\rm{span}}({v}_{1},{v}_{2}\ldots {v}_{N})\). These vectors can then be used to augment the GMRES iterations (we refer to the augmented version of GMRES as data-driven GMRES throughout the paper) — the ith iteration of data-driven GMRES can be formulated as:

$${f}_{i}=\mathop{{\rm{\arg }}\,{\rm{\min }}}\limits_{f\in {\mathscr{V}}\oplus {{\mathscr{K}}}_{i}(\tilde{A},\tilde{b})}\,\parallel Af-b{\parallel }^{2}$$
(3)

where \(\tilde{A}\), and \(\tilde{b}\) are given by:

$$\tilde{A}={P}_{\perp }(A{v}_{1},A{v}_{2}\ldots A{v}_{N})A$$
(4)
$$\tilde{b}={P}_{\perp }(A{v}_{1},A{v}_{2}\ldots A{v}_{N})b$$
(5)

where \({P}_{\perp }(A{v}_{1},A{v}_{2}\ldots A{v}_{N})\) is the operator projecting a vector onto the orthogonal complement of the space spanned by \(A{v}_{1},A{v}_{2}\ldots A{v}_{N}\). Note that while in GMRES (Eq. 2) the generated Krylov subspace is responsible for estimating the entire solution \({A}^{-1}b\), in data-driven GMRES (Eq. 3) the generated Krylov subspace is only responsible for estimating the projection of \({A}^{-1}b\) perpendicular to the subspace \({\mathscr{V}}\). Therefore, a large speedup in the solution of \(Af=b\) can be expected if \({\mathscr{V}}\) is a good estimate of a subspace within which \({A}^{-1}b\) lies. Moreover, as is shown in the supplement, an efficient update algorithm for data-driven GMRES can be formulated in a manner identical to that used for the Generalized Conjugate Residual with inner Orthogonalization and outer Truncation (GCROT) method5,12.
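The minimization in Eq. 3 can be realized on top of an off-the-shelf GMRES routine: orthonormalize the images \(A{v}_{1}\ldots A{v}_{N}\), run GMRES on the projected system of Eqs 4 and 5, and then add back the component of the solution lying in \({\mathscr{V}}\) through a small least-squares solve. The NumPy/SciPy sketch below follows this recipe (dense QR for clarity) rather than the GCROT-style update algorithm of the supplement, and is meant only to illustrate the idea.

```python
import numpy as np
import scipy.sparse.linalg as spla

def data_driven_gmres(A, b, V, rtol=1e-2, maxiter=100):
    """Solve A f = b with GMRES augmented by the predicted subspace span(V), as in Eq. (3).

    A : (n, n) sparse matrix, b : (n,) source, V : (n, N) predicted vectors v_1 ... v_N.
    """
    AV = A @ V                                         # images A v_1 ... A v_N
    Q, R = np.linalg.qr(AV)                            # orthonormal basis of span(A v_j)
    project_out = lambda y: y - Q @ (Q.conj().T @ y)   # P_perp of Eqs (4) and (5)

    A_tilde = spla.LinearOperator(A.shape, matvec=lambda y: project_out(A @ y),
                                  dtype=np.complex128)
    b_tilde = project_out(b)

    # Krylov part: only resolves the component of A^{-1} b outside span(V).
    y, _ = spla.gmres(A_tilde, b_tilde, rtol=rtol, maxiter=maxiter)

    # In-V part: coefficients c minimizing ||A (y + V c) - b|| over c.
    c = np.linalg.solve(R, Q.conj().T @ (b - A @ y))
    return y + V @ c
```

With this decomposition, the residual of the returned solution equals the residual of the inner GMRES solve on \((\tilde{A},\tilde{b})\), so the speedup is governed entirely by how well \({\mathscr{V}}\) captures \({A}^{-1}b\).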

Results

We investigate two data-driven models to predict the vectors \({v}_{1},{v}_{2}\ldots {v}_{N}\): principal component analysis and a convolutional neural network. The dataset that we use for training and evaluating these models consists of a collection of 2D grating splitters [Fig. 1(a)] which reflect an incident waveguide mode at \(\lambda =1.4\) μm and transmit an incident waveguide mode at \(\lambda =1.55\) μm. Throughout this paper, we focus on accelerating simulations of the grating splitters at \(\lambda =1.4\) μm — to train our data-driven models, the dataset includes the fully simulated electric fields in each grating splitter at 1.4 μm [Fig. 1(b)]. Additionally, we intuitively expect a well designed data-driven model to perform better if supplied with an approximation to the simulated field as input for predicting the subspace \({\mathscr{V}}\). To this end, the dataset also includes effective index simulations13 of the electric fields in the grating splitters [Fig. 1(b)]. The effective index simulations are very cheap to perform since they are equivalent to solving Maxwell’s equations in 1D, making them an attractive approximation to the simulated field that the data-driven model can exploit.

Figure 1
figure 1

(a) Schematic of the grating splitter device that comprises the dataset. All the gratings in the dataset are 3 μm long and are designed for a 220 nm silicon-on-insulator (SOI) platform with oxide cladding. We use a uniform spatial discretization of 20 nm while representing Eq. 1 as a system of linear equations. The resulting system of linear equations has 229 × 90 = 20,610 unknown complex numbers. (b) Visualizing samples from the dataset — shown are permittivity distribution, simulated electric fields and effective index fields for 4 randomly chosen samples. All fields are shown at a wavelength of 1.4 μm.

Principal component analysis

The first data-driven model that we consider for accelerating FDFD simulations uses the principal components14 computed from the simulated fields in the training dataset as \({v}_{1},{v}_{2}\ldots {v}_{N}\). The first 5 principal components of the training dataset are shown in Fig. 2(a) — we computed these principal components by performing an incomplete singular value decomposition of a matrix formed with the electric field vectors of the training dataset as its columns. The first two principal components appear like fields “reflected” from the grating region, whereas the higher order principal components capture fields that are either transmitted or scattered away from the grating devices. Note that the principal components are not necessarily solutions to Maxwell’s equations for a grating structure, but provide an estimate of a basis on which the solutions of Maxwell’s equations for the grating structures can be accurately represented.
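Concretely, these principal components can be obtained from a truncated SVD of the matrix whose columns are the training-set field vectors; a minimal sketch (assuming a complex array `training_fields` of shape (n_pixels, n_samples) holding the ~200 randomly chosen training fields) is:

```python
import numpy as np

def field_principal_components(training_fields, N):
    """Return the first N principal components (left singular vectors) of the fields.

    training_fields : complex array of shape (n_pixels, n_samples), one simulated
    E-field per column (flattened over the Yee grid), as described in the text.
    """
    U, s, _ = np.linalg.svd(training_fields, full_matrices=False)
    return U[:, :N]  # columns are v_1 ... v_N

# e.g. V = field_principal_components(training_fields, N=10)
#      f = data_driven_gmres(A, b, V)   # from the earlier sketch
```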

Figure 2
figure 2

(a) First five principal components of the electric fields in the grating splitter dataset. (b) Performance of data-driven GMRES on the evaluation dataset when supplied with different numbers of principal components (~200 samples from the training set were used for computing the principal components) — the dotted line shows the mean residual, and the solid colored background indicates the region within one standard deviation of the mean residual. (c) Histogram of the residual after 100 data-driven GMRES iterations for different N computed over 100 randomly chosen samples from the evaluation dataset. The black vertical dashed line indicates the mean residual after 100 iterations of GMRES over the evaluation dataset.

Using principal components as \({v}_{1},{v}_{2}\ldots {v}_{N}\) in data-driven GMRES, Fig. 2(b) shows the residual \({r}_{i}=\parallel A{f}_{i}-b\parallel /\parallel b\parallel \) as a function of the number of iterations i and Fig. 2(c) shows the histogram of the residual after 100 iterations over the evaluation dataset. We clearly see an order of magnitude speedup in convergence rate for GMRES when supplemented with ≥5 principal components. Note that a typical trajectory of data-driven GMRES shows a significant reduction in the residual in the first iteration. This corresponds to GMRES finding the vector minimizing the residual within the space spanned by b and the supplied principal components. Moreover, the residuals in data-driven GMRES decrease more rapidly than in GMRES. This acceleration can be attributed to the fact that the Krylov subspace generated corresponds to the matrix \(\tilde{A}\) defined in Eq. 4 instead of A. Since A corresponds to a second-derivative operator, its application is equivalent to convolving the electric field on the grid with a 3 × 3 filter. Therefore, A extends the Krylov subspace almost pixel by pixel15. On the other hand, application of \(\tilde{A}\) to a vector is equivalent to application of A (which is a 3 × 3 filter) followed by the projection operator \({P}_{\perp }(A{v}_{1},A{v}_{2}\ldots A{v}_{N})\) (which is a fully dense operator). \(\tilde{A}\) can thus generate a Krylov subspace that spans the entire simulation region within the first few iterations, leading to a larger decrease in the residual ri in data-driven GMRES as compared to GMRES.

We also reemphasize that the computation of principal components needs to be performed only once for a given training dataset. Additionally, we observed that we did not need to use the entire dataset (which had ~22,500 electric field vectors) for computing the principal components — using ~200 randomly chosen data samples already provided a good estimate of the dominant principal components. Consequently, the computational cost of data-driven GMRES is still dominated by the iterative solve, so comparing the residual of data-driven GMRES with that of GMRES at a given iteration is a good measure of the obtained speedup.

Convolutional neural network

While using principal components achieves an order of magnitude speedup over GMRES, this approach predicts the same subspace \({\mathscr{V}}\) irrespective of the permittivity distribution being simulated. Intuitively, it might be expected that a data-driven model that specializes \({\mathscr{V}}\) to the permittivity distribution being simulated would unlock an even greater speedup over GMRES. To this end, we train a convolutional neural network that takes as input the permittivity distribution of the grating device as well as the effective index electric field and predicts the vectors \({v}_{1},{v}_{2}\ldots {v}_{N}\) [Fig. 3(a)] that can be used in data-driven GMRES to simulate the permittivity distribution under consideration. To train the convolutional neural network, we consider two different loss functions:

  1. (a)

    Projection loss function: For the kth training example, the projection loss \({l}_{{\rm{proj}}}^{(k)}\) is defined as the squared length of the component of the simulated field \({f}^{(k)}\) perpendicular to the space spanned by the vectors \({v}_{1},{v}_{2}\ldots {v}_{N}\), relative to the squared length of \({f}^{(k)}\):

    $${l}_{{\rm{proj}}}^{(k)}=\mathop{{\rm{\min }}}\limits_{f\in {\mathscr{V}}}\,\frac{\parallel f-{f}^{(k)}{\parallel }^{2}}{\parallel {f}^{(k)}{\parallel }^{2}}$$
    (6)

    Note that \(0\le {l}_{{\rm{proj}}}^{(k)}\le 1\), with \({l}_{{\rm{proj}}}^{(k)}=0\) indicating that \({f}^{(k)}\) lies in the subspace \({\mathscr{V}}\) and \({l}_{{\rm{proj}}}^{(k)}=1\) indicating that \({f}^{(k)}\) is orthogonal to \({\mathscr{V}}\). Computing this loss, however, requires the simulated field \({f}^{(k)}\) for every training example, i.e. it is a supervised loss.

  2. (b)

    Residual loss function: The residual loss function remedies this limitation of the projection loss. For the kth training example, the residual loss \({l}_{{\rm{res}}}^{(k)}\) is defined by:

$${l}_{{\rm{res}}}^{(k)}=\mathop{{\rm{\min }}}\limits_{f\in {\mathscr{V}}}\,\frac{\parallel {A}^{(k)}f-b{\parallel }^{2}}{\parallel b{\parallel }^{2}}$$
(7)

where \({A}^{(k)}\) is the sparse matrix corresponding to the operator \(\nabla \times \nabla \times -\,{\omega }^{2}{\varepsilon }^{(k)}/{c}^{2}\) for the kth training example. Similar to the projection loss, \(0\le {l}_{{\rm{res}}}^{(k)}\le 1\), with \({l}_{{\rm{res}}}^{(k)}=0\Rightarrow {f}^{(k)}={[{A}^{(k)}]}^{-1}b\) lies within the subspace \({\mathscr{V}}\) and \({l}_{{\rm{res}}}^{(k)}=1\Rightarrow {f}^{(k)}={[{A}^{(k)}]}^{-1}b\) is orthogonal to the subspace \({\mathscr{V}}\) with respect to the positive definite matrix \({W}^{(k)}={[{A}^{(k)}]}^{\dagger }{A}^{(k)}\) (i.e. \({\langle v,{f}^{(k)}\rangle }_{{W}^{(k)}}={v}^{\dagger }{W}^{(k)}{f}^{(k)}=0\,\forall \,v\in {\mathscr{V}}\)). However, unlike the projection loss function, the residual loss function is an unsupervised loss function, i.e. the simulated electric fields for the structure are not required for its computation. A short code sketch making both loss definitions concrete follows this list.
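Both losses reduce to projections that can be computed from a QR factorization. The NumPy sketch below is only meant to make the definitions of Eqs 6 and 7 concrete; in the actual training these quantities are implemented as differentiable TensorFlow operations.

```python
import numpy as np

def projection_loss(V, f_k):
    """Eq. (6): fraction of ||f_k||^2 lying outside span(V).  V: (n, N), f_k: (n,)."""
    Q, _ = np.linalg.qr(V)
    perp = f_k - Q @ (Q.conj().T @ f_k)     # component of f_k orthogonal to span(V)
    return np.linalg.norm(perp) ** 2 / np.linalg.norm(f_k) ** 2

def residual_loss(V, A_k, b):
    """Eq. (7): best achievable squared relative residual over f in span(V)."""
    Q, _ = np.linalg.qr(A_k @ V)            # orthonormal basis of span(A_k v_j)
    perp = b - Q @ (Q.conj().T @ b)         # part of b unreachable from A_k applied to span(V)
    return np.linalg.norm(perp) ** 2 / np.linalg.norm(b) ** 2
```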

Figure 3
figure 3

(a) Schematic of the CNN based data-driven GMRES — a convolutional neural network takes as input the permittivity and effective index field and produces as an output the vectors \({v}_{1},{v}_{2}\ldots {v}_{N}\). These vectors are then supplied to the data-driven GMRES algorithm, which produces the full simulated field. (b) Histogram of the residual after 1 and 100 data-driven GMRES iterations evaluated over the evaluation dataset. We consider neural networks trained with both the projection loss function \({l}_{{\rm{proj}}}\) and the residual loss function \({l}_{{\rm{res}}}\). The vertical dashed lines indicate the mean residual after 1 and 100 iterations of GMRES over the evaluation dataset. (c) Performance of data-driven GMRES on the evaluation dataset when supplied with the vectors at the output of the convolutional neural networks trained with the projection loss function \({l}_{{\rm{proj}}}\) and the residual loss function \({l}_{{\rm{res}}}\). The dotted line shows the mean residual, and the solid colored background indicates the region within one standard deviation of the mean residual.

Since the residual loss function trains the neural network to minimize the residual directly, it accelerates data-driven GMRES significantly in the first few iterations as compared to the projection loss function. This can be clearly seen in Fig. 3(b), which shows histograms of the residual after 1 iteration and 100 iterations over the evaluation dataset. However, as is seen from the residual vs iteration number plots in Fig. 3(c), data-driven GMRES with the convolutional neural network trained using the residual loss function slows down after the first few iterations. It is not clear why this slowdown happens; understanding it is part of ongoing research.

We also note that the use of fields obtained with an effective index simulation as an input to the convolutional neural network significantly improves its performance. Empirically, we observed that if the convolutional neural network was trained with the projection loss function but without the effective index fields as an input, the training loss would saturate at ~0.2, indicating that, on average, approximately 20% of the energy of the simulated field \({f}^{(k)}\) lies outside the subspace \({\mathscr{V}}\). With a convolutional neural network that takes the effective index fields as an input in addition to the permittivity distribution, the training loss saturates at a significantly lower value of ~0.05, indicating that approximately 95% of the field energy lies in the subspace \({\mathscr{V}}\).

Finally, we point out that the convolutional neural network needs to be trained only once before using it to accelerate GMRES over all the simulations in the evaluation dataset. The results presented in this paper were obtained using a convolutional neural network that required ~10 hours to train, with the training distributed over ~8 GPUs. However, once the neural network was trained, computing the vectors \({v}_{1},{v}_{2}\ldots {v}_{N}\) for a given structure (i.e. performing the feedforward computation for the neural network shown in Fig. 3(a)) is extremely fast, and the time taken for this computation is negligible when compared to the iterative solve. For full three-dimensional simulations, we expect the iterative solve to be even more expensive relative to the feedforward computation of most convolutional neural networks, and consequently believe that data-driven approaches can significantly accelerate electromagnetic simulations in practical settings.

Benchmarks against data-free preconditioning techniques

Accelerating the solution of linear systems of equations is a problem that has been studied by the scientific computing community via data-free approaches for several decades. For iterative solvers in particular, using various preconditioners16 to improve the spectral properties of the system of linear equations has emerged as a common strategy. In order to gauge the performance of the data-driven approaches outlined in this paper, we benchmark them against three stationary preconditioning techniques (the Jacobi, Gauss-Seidel, and successive over-relaxation (SOR) preconditioners), an incomplete LU preconditioner, and a preconditioner designed specifically for finite-difference frequency domain simulations of Maxwell’s equations17 (see the supplement for residual vs. iteration plots of GMRES with different preconditioners on the evaluation dataset). Table 1 shows the results of these benchmarks:

  1. 1.

    Training time: For data-driven GMRES augmented with principal components, the training time is the time taken to compute the PCA vectors from the training data. For data-driven GMRES augmented with the output of the CNN, it is the time taken to train the convolutional neural network by minimizing the projection or residual loss function over the training data. Note that training is done only once, and the same PCA vectors or the same CNN are used for augmenting data-driven GMRES over all the structures in the evaluation dataset.

  2. 2.

    Setup time: The setup time refers to the amount of time taken to compute the augmenting vectors or the preconditioner for a given evaluation data sample. Note that there is no setup time while using principal components since they are computed once from the training data. The setup time for the convolutional neural network refers to the amount of time taken to perform the feedforward computation. The setup time for the preconditioners refers to the time taken for computing the preconditioners.

  3. 3.

    GMRES time: This refers to the amount of time taken to run GMRES iterations on the evaluation data sample to reduce the residual below a threshold rth. The results in Table 1 are for \({r}_{{\rm{th}}}=0.04\) which is chosen to be equal to the mean residual achieved by unpreconditioned GMRES after 100 iterations.

  4. 4.

    Total solve time: This is the amount of time taken for performing one simulation. It is calculated as a sum of the setup time (i.e. time required for calculating the augmenting vectors or the preconditioner) and the GMRES time. We do not include the training time in the solve time since the training is done exactly once for the entire evaluation dataset.

  5. 5.

    Number of iterations: This refers to the number of iterations for which GMRES needs to be performed to reduce the residual to below a threshold residual \({r}_{{\rm{th}}}=0.04\).

Table 1 Benchmarking data-driven GMRES against data-free preconditioning techniques.

The numbers presented for setup time, GMRES time and number of iterations are average values obtained by simulating 50 randomly chosen evaluation data samples. We see from Table 1 that data-driven GMRES outperforms every preconditioning technique other than the incomplete LU preconditioner by at least an order of magnitude in terms of total solve time as well as the number of GMRES iterations performed. The best case performance of these preconditioners is only slightly better than unpreconditioned GMRES. Preconditioning partial differential equations describing wave propagation is known to be a difficult problem due to the solution of the wave equations being delocalized over the entire simulation domain even when the source has compact spatial support15. We also note that while the incomplete LU preconditioner does provide a notable speedup over unpreconditioned GMRES, the PCA-based data-driven GMRES outperforms it in terms of the total solve time and CNN-based data-driven GMRES performs comparably to it. Clearly, these benchmarks indicate that the data-driven approaches introduced in this paper have the potential to provide a scalable alternative to data-free preconditioning techniques for accelerating electromagnetic simulations.
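For reference, an incomplete LU baseline of the kind benchmarked in Table 1 can be set up in outline with SciPy as follows (a sketch, assuming the sparse FDFD matrix A and source b are available as in the earlier sketches; the drop tolerance and fill factor shown are illustrative and not the settings used for Table 1):

```python
import numpy as np
import scipy.sparse.linalg as spla

# Incomplete LU factorization of the FDFD matrix A (the "setup time" in Table 1's terminology).
ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=np.complex128)

# Preconditioned GMRES run until the residual threshold r_th is reached (the "GMRES time").
f_ilu, info = spla.gmres(A, b, M=M, rtol=0.04, maxiter=100)
```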

Conclusion

In conclusion, we present a framework for accelerating finite difference frequency domain (FDFD) simulations of Maxwell’s equations using data-driven models that can exploit simulations of correlated permittivity distributions. We analyze two data-driven models, based on principal component analysis and a convolutional neural network, to accelerate these simulations, and show that these models can unlock an order of magnitude acceleration over data-free solvers. Such data-driven methods are likely to be important in scenarios where a large number of simulations with similar permittivity distributions are performed, e.g. during the gradient-based optimization of a photonic device.

Methods

Dataset

The grating splitters are designed to reflect an incident waveguide mode at λ = 1.4 μm and transmit an incident waveguide mode at λ = 1.55 μm using a gradient-based design technique similar to that used for grating couplers3. Different devices in the dataset are generated by seeding the optimization with a different initial structure. Note that all the devices generated at different stages of the optimization are part of the dataset. Consequently, the dataset not only has grating splitters with a discrete permittivity distribution (i.e. only two materials – silicon and silicon oxide), but also grating splitters with a continuous distribution (i.e. the permittivity of the grating splitter can assume any value between that of silicon oxide and silicon). Moreover, the dataset has poorly performing grating splitters (i.e. grating splitters generated at the initial steps of the optimization procedure) as well as well-performing grating splitters (i.e. grating splitters generated towards the end of the optimization procedure). Our dataset has a total of ~30,000 examples, which we split into a training dataset (75%) and an evaluation dataset (25%).

Implementation of data-driven models

Both data-driven models (PCA and CNN) were implemented using the Python library TensorFlow18. Any complex inputs (e.g. effective index fields) to the CNN were fed as an image of depth 2 comprising the real and imaginary parts of the complex input. We use the ADAM optimizer19 with a batch size of 30 for training the convolutional neural network — it required ~10,000 steps to train the network.
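A minimal sketch of this real/imaginary stacking (NumPy; the array name and grid shape are illustrative):

```python
import numpy as np

def to_two_channel(field_2d):
    """Stack a complex field on the grid into a real image of depth 2 (real, imaginary)."""
    return np.stack([field_2d.real, field_2d.imag], axis=-1)

# e.g. a complex effective-index field of shape (90, 229) becomes a (90, 229, 2) real input
```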