Abstract
Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to subclasses of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multilayer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength demultiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light–matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Introduction
Deep learning has been redefining the state-of-the-art results in various fields, such as image recognition^{1,2}, natural language processing^{3} and semantic segmentation^{4,5}. The photonics community has also benefited from deep-learning methods in various applications, such as microscopic imaging^{6,7,8,9,10} and holography^{11,12,13}, among many others^{14,15,16,17}. Aside from optical imaging, deep learning and related optimization tools have recently been utilized to solve inverse problems in optics related to, e.g., nanophotonic designs and nanoplasmonics^{18,19,20,21,22}. These successful demonstrations and many others have also inspired a resurgence in the design of optical neural networks and other optical computing techniques motivated by their advantages in terms of parallelization, scalability, power efficiency and computation speed^{23,24,25,26,27,28,29}. A recent addition to this family of optical computing methods is Diffractive Deep Neural Networks (D^{2}NNs)^{27,30,31}, which are based on light–matter interaction engineered by successive diffractive layers designed in a computer by deep-learning methods such as error backpropagation and stochastic gradient descent. Once the training phase is finalized, a diffractive optical network that is composed of transmissive and/or reflective layers is physically fabricated using, e.g., 3D printing or lithography. Each diffractive layer consists of elements (termed neurons) that modulate the phase and/or amplitude of the incident beam at their corresponding location in space, connecting one diffractive layer to successive ones through spherical waves based on the Huygens–Fresnel principle^{27}. Using spatially and temporally coherent illumination, these neurons at different layers collectively compute the spatial light distribution at the desired output plane based on a given task that is learned.
Diffractive optical neural networks designed using this framework and fabricated by 3D printing were experimentally demonstrated to achieve all-optical inference and data generalization for object classification^{27}, a fundamental application in machine learning. Overall, multilayer diffractive neural networks have been shown to achieve improved blind testing accuracy, diffraction efficiency and signal contrast with additional diffractive layers, exhibiting a depth advantage even when using linear optical materials^{27,30,31}. In all these previous studies on diffractive optical networks, the input light was both spatially and temporally coherent, i.e., a monochromatic plane wave was utilized at the input.
In general, diffractive optical networks with multiple layers enable generalization and perform all-optical blind inference on new input data (never seen by the network before), beyond the deterministic capabilities of the previous diffractive surfaces^{32,33,34,35,36,37,38,39,40,41,42} that were designed using different optimization methods to provide wavefront transformations without any data generalization capability. These previous demonstrations cover a variety of applications over different regions of the electromagnetic spectrum, providing unique capabilities compared to conventional optical components that are designed by analytical methods. While these earlier studies revealed the potential of single-layer designs using diffractive surfaces under temporally coherent radiation^{33,34}, the extension of these methods to broadband designs operating with a continuum of wavelengths has been a challenging task. Operating at a few discrete wavelengths, different design strategies have been reported using a single-layer phase element based on, e.g., composite materials^{35} and thick layouts covering multiple 2π modulation cycles^{36,37,38,39,40}. In a recent work, a low-numerical-aperture (NA ~ 0.01) broadband diffractive cylindrical lens design was also demonstrated^{43}. In addition to these diffractive surfaces, metasurfaces also offer engineered optical responses, devised through densely packed subwavelength resonator arrays that control their dispersion behaviour^{44,45,46,47,48}. Recent advances in metasurfaces have enabled several broadband, achromatic lens designs for, e.g., imaging applications^{49,50,51}. On the other hand, the design space of broadband optical components that process a continuum of wavelengths relying on these elegant techniques has been restricted to single-layer architectures, mostly with an intuitive analytical formulation of the desired surface function^{52}.
Here, we demonstrate a broadband diffractive optical network that unifies deep-learning methods with the angular spectrum formulation of broadband light propagation and the material dispersion properties to design task-specific optical systems through 3D engineering of the light–matter interaction. Designed in a computer, a broadband diffractive optical network, after its fabrication, can process a continuum of input wavelengths all in parallel and perform a learned task at its output plane, resulting from the diffraction of broadband light through multiple layers. The success of broadband diffractive optical networks is demonstrated experimentally by designing, fabricating and testing different types of optical components using a broadband THz pulse as the input source (see Fig. 1). First, a series of single-passband and dual-passband spectral filters are demonstrated, where each design uses three diffractive layers fabricated by 3D printing, experimentally tested using the setup shown in Fig. 1. A classical trade-off between the Q-factor and the power efficiency is observed, and we demonstrate that our diffractive neural network framework can control and balance these design parameters on demand, i.e., based on the selection of the diffractive network training loss function. Combining the spectral filtering operation with spatial multiplexing, we also demonstrate spatially controlled wavelength demultiplexing using three diffractive layers that are trained to demultiplex a broadband input source onto four output apertures located at the output plane of the diffractive network, where each aperture has a unique target passband. Our experimental results obtained with these seven different diffractive optical networks that were 3D printed provided very good fits to our trained diffractive models.
We believe that broadband diffractive optical neural networks provide a powerful framework for merging the dispersion properties of various material systems with deep-learning methods to engineer light–matter interactions in 3D and help us create task-specific optical components that can perform deterministic tasks as well as statistical inference and data generalization. In the future, we also envision the presented framework to be empowered by various metamaterial designs as part of the layers of a diffractive optical network and to bring additional degrees of freedom by engineering and encoding the dispersion of the fabrication materials to further improve the performance of broadband diffractive networks.
Results
Design of broadband diffractive optical networks
Designing broadband, task-specific, small-footprint components that can perform arbitrary optical transformations is highly sought in all parts of the electromagnetic spectrum for various applications, including, e.g., telecommunications^{53}, biomedical imaging^{54} and chemical identification^{55}, among others. We approach this general broadband inverse optical design problem from the perspective of diffractive optical neural network training and demonstrate its success with various optical tasks. Unlike the training process of the previously reported monochromatic diffractive neural networks^{27,30,31}, in this work, the optical forward model is based on the angular spectrum formulation of broadband light propagation within the diffractive network, precisely taking into account the dispersion of the fabrication material to determine the light distribution at the output plane of the network (see the Methods section). Based on a network training loss function, a desired optical task can be learned through error backpropagation within the diffractive layers of the optical network, converging to an optimized spectral and/or spatial distribution of light at the output plane.
In its general form, our broadband diffractive network design assumes an input spectral frequency band between f_{min} and f_{max}. Uniformly covering this range, M discrete frequencies are selected for use in the training phase. In each update step of the training, an input beam carrying a random subset of B frequencies out of these M discrete frequencies is propagated through the diffractive layers, and a loss function is calculated at the output plane, tailored according to the desired task; without loss of generality, B/M has been selected in our designs to be less than 0.5% (refer to the Methods section). At the final step of each iteration, the resultant error is backpropagated to update the physical parameters of the diffractive layers controlling the optical modulation within the optical network. The training cycle continues until either a predetermined design criterion at the network output plane is satisfied or the maximum number of epochs (where each epoch involves M/B successive iterations, going through all the frequencies between f_{min} and f_{max}) is reached. In our broadband diffractive network designs, the physical parameter to be optimized was selected as the thickness of each neuron within the diffractive layers, enabling the control of the phase modulation profile of each diffractive layer in the network. In addition, the material dispersion, including the real and imaginary parts of the refractive index of the network material as a function of the wavelength, was also taken into account to correctly represent the forward model of the broadband light propagation within the optical neural network. As a result of this, for each wavelength within the input light spectrum, we have a unique complex (i.e., phase and amplitude) modulation, corresponding to the transmission coefficient of each neuron, determined by its physical thickness, which is a trainable parameter for all the layers of the diffractive optical network.
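As an illustrative sketch of this part of the forward model, the snippet below maps a neuron's thickness to its wavelength-dependent complex transmission coefficient; the refractive index and extinction coefficient used here (n = 1.72, κ = 0.03) are assumed placeholder values, not the measured dispersion data of the printing material.

```python
import numpy as np

def transmission_coefficient(h, wavelength, n, kappa, n_air=1.0):
    """Complex transmission of a neuron of thickness h (metres) at one wavelength.

    n, kappa: real and imaginary parts of the material refractive index at this
    wavelength (placeholder values here; the trained models use the measured
    dispersion of the 3D-printing polymer).
    """
    phase = 2 * np.pi * (n - n_air) * h / wavelength         # phase delay relative to air
    amplitude = np.exp(-2 * np.pi * kappa * h / wavelength)  # field attenuation
    return amplitude * np.exp(1j * phase)

# Example: a 0.5 mm-thick neuron evaluated at 350 GHz
c = 299792458.0
lam = c / 350e9
t = transmission_coefficient(0.5e-3, lam, 1.72, 0.03)
```

During training, `h` is the trainable parameter per neuron, and this mapping is evaluated at every frequency in the current batch so that the same thickness profile yields a different phase and amplitude modulation at each wavelength.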
Upon completion of this digital training phase in a computer, which typically takes ~5 h (see the Methods section for details), the designed diffractive layers were fabricated using a 3D printer, and the resulting optical networks were experimentally tested using the THz time-domain spectroscopy (TDS) system illustrated in Fig. 1, which has a noise-equivalent power bandwidth of 0.1–5 THz^{56}.
Single-passband spectral filter design and testing
Our diffractive single-passband spectral filter designs are composed of three diffractive layers, with a layer-to-layer separation of 3 cm and an output aperture positioned 5 cm away from the last diffractive layer, serving as a spatial filter, as shown in Fig. 1. For our spectral filter designs, the parameters M, f_{min} and f_{max} were taken as 7500, 0.25 THz and 1 THz, respectively. Using this broadband diffractive network framework employing three successive layers, we designed four different spectral bandpass filters with centre frequencies of 300 GHz, 350 GHz, 400 GHz and 420 GHz, as shown in Fig. 2a–d, respectively. For each design, the target spectral profile was set to have a flat-top bandpass over a narrow band (±2.5 GHz) around the corresponding centre frequency. During the training of these designs, we used a loss function that solely focused on increasing the power efficiency of the target band, without a specific penalty on the Q-factor of the filter (see the Methods section). As a result of this design choice during the training phase, our numerical models converged to bandpass filters centred around each target frequency, as shown in Fig. 2a–d. These trained diffractive models reveal the peak frequencies (and the Q-factors) of the corresponding designs to be 300.1 GHz (6.21), 350.4 GHz (5.34), 399.7 GHz (4.98) and 420.0 GHz (4.56), respectively. After the fabrication of each of these trained models using a 3D printer, we also experimentally tested these four different diffractive networks (Fig. 1) to find a very good match between our numerical testing results and the physical diffractive network results. Based on the blue-dashed lines depicted in Fig. 2a–d, the experimental counterparts of the peak frequencies (and the Q-factors) of the corresponding designs were calculated as 300.4 GHz (4.88), 351.8 GHz (7.61), 393.8 GHz (4.77) and 418.6 GHz (4.22).
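The Q-factors quoted above can be estimated from a sampled power spectrum as the peak frequency divided by the full width at half maximum. The helper below is a simple half-power-crossing estimator using linear interpolation; the exact procedure behind the reported values is not spelled out here, so this is only an illustrative implementation.

```python
import numpy as np

def q_factor(freqs, power):
    """Estimate Q = f_peak / FWHM from a sampled power spectrum."""
    i_peak = int(np.argmax(power))
    half = power[i_peak] / 2.0
    # walk left from the peak to the half-power crossing, then interpolate
    i = i_peak
    while i > 0 and power[i] > half:
        i -= 1
    f_lo = np.interp(half, [power[i], power[i + 1]], [freqs[i], freqs[i + 1]])
    # walk right from the peak to the half-power crossing
    i = i_peak
    while i < len(power) - 1 and power[i] > half:
        i += 1
    f_hi = np.interp(half, [power[i], power[i - 1]], [freqs[i], freqs[i - 1]])
    return freqs[i_peak] / (f_hi - f_lo)
```

For a Gaussian passband centred at 350 GHz with a 35 GHz FWHM, this estimator returns Q ≈ 10, matching the f_peak/FWHM definition.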
Furthermore, the power efficiencies of these four different bandpass filter designs, calculated at the corresponding peak wavelength, were determined to be 23.13, 20.93, 21.76 and 18.53%, respectively. To shed more light on these efficiency values of our diffractive THz systems and estimate the specific contribution due to the material absorption, we analysed the expected power efficiency at 350 GHz by modelling each diffractive layer as a uniform slab (see the Methods section for details). Based on the extinction coefficient of the 3D-printing polymer at 350 GHz (Supplementary Figure S1), three successive flat layers, each with a 1 mm thickness, provide 27.52% power efficiency when the material absorption is assumed to be the only source of loss. This comparison reveals that the main source of power loss in our spectral filter models is in fact the material absorption, which can be circumvented by selecting different types of fabrication materials with lower absorption compared to our 3D printer material (VeroBlackPlus RGD875).
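This back-of-envelope slab estimate can be reproduced with the standard absorption-only power transmission T = exp(−4πκd/λ). The extinction coefficient below (κ ≈ 0.0293) is an illustrative value chosen to be consistent with the quoted 27.5% figure, not the measured datum from Supplementary Figure S1.

```python
import numpy as np

c = 299792458.0

def slab_power_transmission(f_hz, kappa, thickness_m):
    """Power transmitted through an absorbing slab, neglecting reflections:
    T = exp(-4 * pi * kappa * d / lambda)."""
    lam = c / f_hz
    return np.exp(-4 * np.pi * kappa * thickness_m / lam)

# Three successive 1 mm layers at 350 GHz; kappa ~ 0.0293 is an assumed,
# illustrative value consistent with the ~27.5% efficiency quoted in the text.
T = slab_power_transmission(350e9, 0.0293, 3 * 1e-3)
```

With these numbers T ≈ 0.275, i.e., roughly three quarters of the input power is lost to material absorption alone before any diffraction losses are counted.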
To further exemplify the different degrees of freedom in our diffractive network-based design framework, Fig. 2e illustrates another bandpass filter design centred at 350 GHz, same as in Fig. 2b; however, different from Fig. 2b, this particular case represents a design criterion where the desired spectral filter profile was set as a Gaussian with a Q-factor of 10. Furthermore, the training loss function was designed to favour a high Q-factor rather than better power efficiency by penalizing Q-factor deviations from the target value more severely compared to poor power efficiency (see the Methods section for details). To provide a fair comparison between Figs. 2b and 2e, all the other design parameters, e.g., the number of diffractive layers, the size of the output aperture and the relative distances, are kept identical. Based on this new design (Fig. 2e), the numerical (experimental) values of the peak frequency and the Q-factor of the final model can be calculated as 348.2 GHz (352.9 GHz) and 10.68 (12.7), once again providing a very good match between our numerical testing and experimental results, following the 3D printing of the designed network model. Compared to the results reported in Fig. 2b, this improvement in the Q-factor also comes at the expense of a power efficiency drop to 12.76%, which is expected by design, i.e., the choice of the training loss function.
Another important difference between the designs depicted in Figs. 2b, e lies in the structures of their diffractive layers. A comparison of the 3rd layers shown in Figs. 2b, e reveals that, while the former design demonstrates a pattern at its 3rd layer that is intuitively similar to a diffractive lens, the thickness profile of the latter design (Fig. 2e) does not evoke any physically intuitive explanation of its immediate function within the diffractive network; the same conclusion is also evident if one examines the 1st diffractive layers reported in Fig. 2e as well as in Figs. 3 and 4. Convergence to physically non-intuitive designs such as these, in the absence of a tailored initial condition or prior design, shows the power of our diffractive computational framework in the context of broadband, task-specific optical system design.
Dual-passband spectral filter design and testing
Having presented the design and experimental validation of five different bandpass filters using broadband diffractive neural networks, we next used the same design framework for a more challenging task: a dual-passband spectral filter that directs two separate frequency bands onto the same output aperture while rejecting the remaining spectral content of the broadband input light. The physical layout of the diffractive network design is the same as before, being composed of three diffractive layers and an output aperture plane. The goal of this diffractive optical network is to produce a power spectrum at the same aperture that is the superposition of two flat-top passband filters around the centre frequencies of 250 and 450 GHz (see Fig. 3). Following the deep-learning-based design and 3D fabrication of the resulting diffractive network model, our experimental measurement results (dashed blue line in Fig. 3a) provide very good agreement with the numerical results (red line in Fig. 3a); the numerical diffractive model has peak frequencies at 249.4 and 446.4 GHz, which closely agree with our experimentally observed peak frequencies, i.e., 253.6 and 443.8 GHz, for the two target bands.
Despite the fact that we did not impose any restrictions or loss terms related to the Q-factor during our training phase, the power efficiencies at the two peak frequencies were calculated as 11.91 and 10.51%. These numbers indicate a power efficiency drop compared to the single-passband diffractive designs reported earlier (Fig. 2); however, we should note that the total power transmitted from the input plane to the output aperture (which has the same size as before) is maintained at approximately 20% in both the single-passband and the double-passband filter designs.
A projection of the intensity distributions produced by our 3-layer design on the xz plane (at y = 0) is also illustrated in Fig. 3b, which exemplifies the operation principles of our diffractive network regarding the rejection of the spectral components residing between the two targeted passbands. For example, one of the undesired frequency components at 350 GHz is focused onto a location between the 3rd layer and the output aperture, with a higher numerical aperture (NA) compared to the waves in the target bands. As a result, this frequency quickly diverges as it propagates until reaching the output plane; hence, its contribution to the transmitted power beyond the aperture is significantly decreased, as desired. In general, the diffractive layers of a broadband neural network define a tuneable 3D space that can be optimized to approximate different sets of wavelength-dependent grating-like structures that couple the input broadband light into different modes of radiation that are engineered depending on the target function in space and/or spectrum (see, e.g., Supplementary Figure S3).
From the spectrum reported in Fig. 3a, it can also be observed that there is a difference between the Q-factors of the two passbands. The main factor causing this variation in the Q-factor is the increasing material loss at higher frequencies (Supplementary Figure S1), which is a limitation due to our 3D printing material. If one selects the power efficiency as the main design priority in a broadband diffractive network, the optimization of a larger Q-factor optical filter function is relatively more cumbersome for higher frequencies due to the higher material absorption that we experience in the physically fabricated, 3D-printed system. As a general rule, maintaining both the power efficiencies and the Q-factors over K bands in a multiband filter design requires optimizing the relative contributions of the training loss function sub-terms associated with each design criterion (refer to the Methods section for details); this balance among the sub-constituents of the loss function should be carefully engineered during the training phase of a broadband diffractive network depending on the specific task of interest.
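The weighting of loss sub-terms described above can be sketched, for a single passband, as a weighted sum of a power-efficiency term and a Q-factor penalty. The functional form and the weights α, β below are hypothetical illustrations; the actual loss terms used during training are defined in the Methods section.

```python
def filter_loss(band_power, q_est, q_target, alpha=1.0, beta=1.0):
    """Composite per-band training loss (illustrative form only).

    band_power: fraction of output power inside the target band, in [0, 1]
    q_est, q_target: estimated and desired Q-factors
    alpha, beta: hypothetical weights trading off efficiency vs. Q-factor
    """
    efficiency_term = 1.0 - band_power                    # reward in-band power
    q_term = ((q_est - q_target) / q_target) ** 2         # penalize Q deviation
    return alpha * efficiency_term + beta * q_term
```

Raising β relative to α pushes the optimizer toward the target Q-factor at the expense of power efficiency, mirroring the trade-off observed between the designs of Figs. 2b and 2e.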
Spatially controlled wavelength demultiplexing
Next, we focused on the simultaneous control of the spatial and spectral content of the diffracted light at the output plane of a broadband diffractive optical network and demonstrated its utility for spatially controlled wavelength demultiplexing by training three diffractive layers (Fig. 4b) that channel the broadband input light onto four separate output apertures on the same plane, corresponding to four passbands centred at 300, 350, 400 and 450 GHz (Fig. 4a). The numerically designed spectral profiles based on our diffractive optical network model (red) and its experimental validation (dashed blue), following the 3D printing of the trained model, are reported in Fig. 4c for each subband, providing once again a very good match between our numerical model and the experimental results. Based on Fig. 4c, the numerically estimated and experimentally measured peak frequency locations are (297.5, 348.0, 398.5, 450.0) GHz and (303.5, 350.1, 405.1, 454.8) GHz, respectively. The corresponding Q-factors calculated based on our simulations (11.90, 10.88, 9.84 and 8.04) are also in accordance with their experimental counterparts (11.0, 12.7, 9.19 and 8.68), despite various sources of experimental errors, as detailed in our Discussion section. Similar to our earlier observations in the dual-passband filter results, higher bands exhibit a relatively lower Q-factor that is related to the increased material losses at higher frequencies (Supplementary Figure S1). Since this task represents a more challenging optimization problem involving four different detector locations corresponding to four different passbands, the power efficiency values also exhibit a relative compromise compared to earlier designs, yielding 6.99, 7.43, 5.14 and 5.30% for the corresponding peak wavelengths of each passband.
To further highlight the challenging nature of spatially controlled wavelength demultiplexing, Supplementary Figure S4 reports that the same task cannot be successfully achieved using only two learnable diffractive layers, which demonstrates the advantage of additional layers in a diffractive optical network to perform more sophisticated tasks through deep-learning-based optimization.
In addition to the material absorption losses, there are two other factors that need to be considered for wavelength multiplexing or demultiplexing-related applications using diffractive neural networks. First, the lateral resolution of the fabrication method that is selected to manufacture a broadband diffractive network might be a limiting factor at higher frequencies; for example, the lateral resolution of our 3D printer dictates a feature size of ~λ/2 at 300 GHz that restricts the diffraction cone of the propagating waves at higher frequencies. Second, the limited axial resolution of a 3D fabrication method might impose a limitation on the thickness levels of the neurons of a diffractive layer design; for example, using our 3D printer, the associated modulation functions of individual neurons are quantized with a step size of 0.0625 mm, which provides 4 bits (within a range of 1 mm) in terms of the dynamic range, which is sufficient over a wide range of frequencies. With increasing frequencies, however, the same axial step size will limit the resolution of the phase modulation steps available per diffractive layer, partially hindering the associated performance and the generalization capability of the diffractive optical network. Nevertheless, with dispersion engineering methods (using, e.g., metamaterials) and/or higher-resolution 3D fabrication technologies, including, e.g., optical lithography or two-photon polymerization-based 3D printing, multilayer wavelength multiplexing/demultiplexing systems operating at various parts of the electromagnetic spectrum can be designed and tested using broadband diffractive optical neural networks.
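The axial quantization argument above can be made concrete: a 0.0625 mm step within a 1 mm range gives 16 thickness levels (4 bits), and the phase increment per level grows linearly with frequency. The refractive index n = 1.72 below is an assumed placeholder, not the measured polymer value.

```python
import numpy as np

c = 299792458.0

def quantize_thickness(h_m, step=0.0625e-3, h_max=1.0e-3):
    """Round a neuron thickness to the printer's axial step (16 levels in 1 mm)."""
    return np.clip(np.round(h_m / step) * step, 0.0, h_max)

def phase_step_rad(f_hz, step=0.0625e-3, n=1.72):
    """Phase increment per axial quantization step (n = 1.72 is assumed)."""
    lam = c / f_hz
    return 2 * np.pi * (n - 1.0) * step / lam
```

At 300 GHz one step corresponds to roughly 0.28 rad of phase, while at 1 THz the same physical step already spans about 0.94 rad, illustrating why a fixed axial resolution coarsens the available phase modulation at higher frequencies.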
Discussion
There are several factors that might have contributed to the relatively minor discrepancies observed between our numerical simulations and the experimental results. First, any mechanical misalignment (lateral and/or axial) between the diffractive layers due to, e.g., our 3D printer’s resolution can cause some deviation from the expected output. In addition, the THz pulse incident on the input plane is assumed to be spatially uniform and propagating parallel to the optical axis, which might introduce additional experimental errors due to the imperfect beam profile and alignment with respect to the optical axis. Moreover, the wavelength-dependent properties of our THz detector, such as the acceptance angle and the coupling efficiency, are not modelled as part of our forward model, which might also introduce error. Finally, potential inaccuracies in the characterization of the dispersion of the 3D-printing materials used in our experiments could also contribute some error in our measurements compared to the numerical results of our trained models.
For all the designs presented in this manuscript, the width of each output aperture is selected as 2 mm, which is approximately 1.67 times the largest wavelength (λ ≈ 1.2 mm, corresponding to f_{min} = 0.25 THz) targeted in our designs. The reason behind this specific design choice is to mitigate some of the unknown effects of the Si lens attached in front of our THz detector, since a theoretical wave-optics model of this lens is not available. Consequently, for some of our single-passband spectral filter designs (Fig. 2a–d), the last layer before the output aperture intuitively resembles a diffractive lens. However, unlike a standard diffractive lens, our diffractive neural network, which is composed of multiple layers, can provide a targeted Q-factor even for a large range of output apertures, as illustrated in Supplementary Figure S5.
It is interesting to note that our diffractive single-passband filter designs reported in Fig. 2 can be tuned by changing the distance between the diffractive neural network and the detector/output plane (see Fig. 1c), establishing a simple passband tunability method for a given fabricated diffractive network. Figure 5a reports our simulations and experimental results at five different axial distances using our 350 GHz diffractive network design, where Δ_{Z} denotes the axial displacement around the ideal, designed location of the output plane. As the aperture gets closer to the final diffractive layer, the passband experiences a redshift (centre frequency decreases), and any change in the opposite direction causes a blueshift (centre frequency increases). However, deviations from the ideal position of the output aperture also decrease the resulting Q-factor (see Fig. 5b); this is expected since these distances with different Δ_{Z} values were not considered as part of the optical system design during the network training phase. Interestingly, a given diffractive spectral filter model can be used as the initial condition of a new diffractive network design and be further trained with multiple loss terms around the corresponding frequency bands at different propagation distances from the last diffractive layer to yield a better-engineered tuneable frequency response that is improved from that of the original diffractive design. To demonstrate the efficacy of this approach, Figs. 5c, d report the output power spectra of this new model (centred at 350 GHz) and the associated Q-factors, respectively. As desired, the resulting Q-factors are now enhanced and more uniform across the targeted Δ_{Z} range due to the additional training with a band tunability constraint, which can be regarded as the counterpart of the transfer learning technique (frequently used in machine learning) within the context of optical system design using diffractive neural network models.
Supplementary Figure S6 also reports the differences in the thickness distributions of the diffractive layers of these two designs, i.e., before and after the transfer learning, corresponding to Fig. 5a–d, respectively.
In conclusion, the presented results of this manuscript indicate that the D^{2}NN framework can be generalized to broadband sources and process optical waves over a continuous, wide range of frequencies. Furthermore, the computational capacity of diffractive deep neural networks performing machine learning tasks, e.g., object recognition or classification^{27,30,31}, can potentially be increased significantly through multi-wavelength operation enabled by the broadband diffractive network framework presented in this manuscript, under the assumption that the available fabrication technology can provide adequate resolution, especially for the shorter wavelengths of the desired band of operation. The design framework described in this manuscript is not limited to THz wavelengths and can be applied to other parts of the electromagnetic spectrum, including the visible band, and therefore, it represents vital progress towards expanding the application space of diffractive optical neural networks for scenarios where broadband operation is more attractive and essential. Finally, we anticipate that the presented framework can be further strengthened using metasurfaces^{49,50,57,58,59,60} that engineer and encode the dispersion of the fabrication materials in unique ways.
Materials and methods
Terahertz TDS system
A Ti:sapphire laser (Coherent MIRA-HP) is used in mode-locked operation to generate femtosecond optical pulses at a wavelength of 780 nm. Each optical pulse is split into two beams. One part of the beam illuminates the THz emitter, a high-power plasmonic photoconductive nanoantenna array^{61}. The THz pulse generated by the THz emitter is collimated and guided to a THz detector through an off-axis parabolic mirror; the detector is another plasmonic nanoantenna array that offers high-sensitivity and broadband operation^{56}. The other part of the optical beam passes through an optical delay line and illuminates the THz detector. The generated signal as a function of the delay-line position and the incident THz/optical fields is amplified with a current pre-amplifier (Femto DHPCA-100) and detected with a lock-in amplifier (Zurich Instruments MFLI). For each measurement, traces are collected for 5 s, and 10 pulses are averaged to obtain the time-domain signal. Overall, the system offers signal-to-noise ratio levels over 90 dB and observable bandwidths up to 5 THz. Each time-domain signal is acquired within a time window of 400 ps.
Each diffractive neural network model, after its 3D printing, was positioned between the emitter and the detector, coaxial with the THz beam, as shown in Fig. 1d, e. With a limited input beam size, the first layer of each diffractive network was designed with a 1 × 1 cm input aperture (as shown in, e.g., Fig. 1b). After their training, all the diffractive neural networks were fabricated using a commercial 3D printer (Objet30 Pro, Stratasys Ltd.). The apertures at the input and output planes were also 3D-printed and coated with aluminium (Figs. 1a and 4a).
Without loss of generality, a flat input spectrum was assumed during the training of our diffractive networks. Since the power spectrum of the incident THz pulse at the input plane is not flat, we measured its spectrum with only the input aperture present in the optical path (i.e., without any diffractive layers or output apertures). All the experimentally measured spectra generated by our 3D-printed network models were normalized by this reference spectrum of the input pulse; accordingly, Figs. 2–5 report the input-normalized power spectra produced by the corresponding 3D-printed network models.
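This normalization step amounts to an element-wise division of each measured output spectrum by the reference spectrum of the input pulse; a minimal sketch, with function and array names that are ours rather than from the authors' code:

```python
import numpy as np

def input_normalize(measured, reference, eps=1e-12):
    """Divide a spectrum measured behind a 3D-printed network by the
    reference spectrum of the input THz pulse (measured with only the
    input aperture in the beam path); eps guards against division by zero."""
    return measured / (reference + eps)

# Toy check: a filter that passes half of the input power at every frequency
reference = np.array([2.0, 4.0, 8.0])
measured = 0.5 * reference
normalized = input_normalize(measured, reference)
```

The normalized spectrum is then independent of the (non-flat) power distribution of the source.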
Forward propagation model
The broadband diffractive optical neural network framework performs optical computation through diffractive layers connected by free-space propagation in air. We model the diffractive layers as thin modulation elements, where each pixel on the lth layer at a spatial location (x_{i}, y_{i}, z_{i}) provides a wavelength (λ) dependent modulation, t,

\(t^l\left( {x_i,y_i,z_i,\lambda } \right) = a^l\left( {x_i,y_i,z_i,\lambda } \right)\exp \left( {j\phi ^l\left( {x_i,y_i,z_i,\lambda } \right)} \right)\) (1)

where a and ϕ denote the amplitude and phase, respectively.
Between the layers, free-space light propagation is calculated following the Rayleigh–Sommerfeld equation^{27,30}. The ith pixel on the lth layer at location (x_{i}, y_{i}, z_{i}) can be viewed as the source of a secondary wave \(w_i^l\left( {x,y,z,\lambda } \right)\), which is given by

\(w_i^l\left( {x,y,z,\lambda } \right) = \frac{{z - z_i}}{{r^2}}\left( {\frac{1}{{2\pi r}} + \frac{1}{{j\lambda }}} \right)\exp \left( {\frac{{j2\pi r}}{\lambda }} \right)\) (2)

where \(r = \sqrt {\left( {x - x_i} \right)^2 + \left( {y - y_i} \right)^2 + \left( {z - z_i} \right)^2}\) and \(j = \sqrt { - 1}\). Treating the incident field as the 0th layer, the modulated optical field u^{l} by the lth layer at location (x_{i}, y_{i}, z_{i}) is given by

\(u^l\left( {x_i,y_i,z_i,\lambda } \right) = t^l\left( {x_i,y_i,z_i,\lambda } \right)\mathop {\sum}\limits_{k \in I} {u^{l - 1}\left( {x_k,y_k,z_k,\lambda } \right)w_k^{l - 1}\left( {x_i,y_i,z_i,\lambda } \right)}\) (3)

where I denotes all pixels on the previous layer.
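As an illustration, the Rayleigh–Sommerfeld secondary-wave summation between two parallel planes can be written as a direct double loop over source pixels. This is a deliberately naive O(N^4) sketch for small grids; practical implementations vectorize the sum or use FFT-based convolution, and the function and argument names here are ours, not from the authors' code:

```python
import numpy as np

def rs_propagate(field, dx, z, wavelength):
    """Propagate a complex field sampled with pitch dx by a distance z,
    summing a Rayleigh-Sommerfeld secondary wave from every source pixel
    to every observation point on the next plane (all quantities in the
    same length unit)."""
    ny, nx = field.shape
    ys, xs = np.meshgrid(np.arange(ny) * dx, np.arange(nx) * dx, indexing="ij")
    out = np.zeros_like(field, dtype=complex)
    for i in range(ny):
        for k in range(nx):
            # Distance from source pixel (i, k) to every point on the next plane
            r = np.sqrt((xs - xs[i, k]) ** 2 + (ys - ys[i, k]) ** 2 + z ** 2)
            w = (z / r ** 2) * (1.0 / (2 * np.pi * r) + 1.0 / (1j * wavelength)) \
                * np.exp(1j * 2 * np.pi * r / wavelength)
            out += field[i, k] * w * dx ** 2
    return out
```

Because the kernel depends only on the wavelength and the layer spacing, a broadband design repeats this propagation once per sampled frequency.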
Digital implementation
Without loss of generality, a flat input spectrum was used during the training phase, i.e., for each distinct λ value, a plane wave with unit intensity and a uniform phase profile was assumed. The assumed frequency range at the input plane was taken as 0.25–1 THz for all the designs, and this range was uniformly partitioned into M = 7500 discrete frequencies. A square input aperture with a width of 1 cm was chosen to match the beam width of the incident THz pulse.
Restricted by our fabrication method, a pixel size of 0.5 mm was used as the smallest printable feature size. To accurately model the wave propagation over a wide range of frequencies based on the Rayleigh–Sommerfeld diffraction integral, the simulation window was oversampled four times with respect to the smallest feature size, i.e., the space was sampled with 0.125 mm steps. Accordingly, each feature of the diffractive layers of a given network design was represented on a 4 × 4 grid, with all 16 elements sharing the same physical thickness. The printed thickness value, h, is the superposition of two parts, h_{m} and h_{base}, as depicted in Eq. (4b). h_{m} denotes the part where the wave modulation takes place and is confined between h_{min} = 0 and h_{max} = 1 mm. The second term, h_{base} = 0.5 mm, is a constant, non-trainable thickness value that ensures robust 3D printing, helping with the stiffness of the diffractive layers. To enforce the constraint applied to h_{m}, we defined the thickness of each diffractive feature over an associated latent (trainable) variable, h_{p}, using the following analytical form:

\(h_m = q\left( {h_{max} \cdot \frac{{\sin \left( {h_p} \right) + 1}}{2}} \right)\) (4a)

\(h = h_m + h_{base}\) (4b)

where q(.) denotes a 16-level uniform quantization (0.0625 mm for each level, with h_{max} = 1 mm).
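A minimal sketch of this thickness parameterization and the 4× oversampling follows. The bounded sinusoidal mapping used here is one way to confine h_m to [0, h_max]; the constants come from the text, while the function names are ours:

```python
import numpy as np

H_MAX, H_BASE, LEVELS = 1.0, 0.5, 16  # mm; values from the text

def quantize(h):
    """16-level uniform quantization with a 0.0625 mm step."""
    step = H_MAX / LEVELS
    return np.round(np.asarray(h) / step) * step

def thickness(h_p):
    """Map an unconstrained latent variable h_p to a printable thickness:
    a sinusoid confines the trainable part h_m to [0, h_max], after which
    the constant, non-trainable base thickness is added."""
    h_m = quantize((np.sin(h_p) + 1.0) * H_MAX / 2.0)
    return h_m + H_BASE

def upsample_4x(layer):
    """Represent each 0.5 mm feature on a 4 x 4 grid of 0.125 mm samples,
    with all 16 sub-pixels sharing the same physical thickness."""
    return np.kron(np.asarray(layer), np.ones((4, 4)))
```

Since the latent variable h_p is unconstrained, the gradient-based optimizer never has to project thicknesses back into the printable range.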
The amplitude and phase components of the ith neuron on layer l, i.e., a^{l}(x_{i}, y_{i}, z_{i}, λ) and ϕ^{l}(x_{i}, y_{i}, z_{i}, λ) in Eq. (1), can be defined as a function of the thickness of each individual neuron, h_{i}, and the incident wavelength as follows:

\(a^l\left( {x_i,y_i,z_i,\lambda } \right) = \exp \left( { - \frac{{2\pi \kappa \left( \lambda \right)h_i}}{\lambda }} \right)\) (5)

\(\phi ^l\left( {x_i,y_i,z_i,\lambda } \right) = \left( {n\left( \lambda \right) - n_{air}} \right)\frac{{2\pi h_i}}{\lambda }\) (6)

The wavelength-dependent parameters, the refractive index n(λ) and the extinction coefficient κ(λ), are defined over the real and imaginary parts of the complex refractive index, \(\tilde n\left( \lambda \right) = n\left( \lambda \right) + j\kappa \left( \lambda \right)\), characterized by the dispersion analysis performed over a broad range of frequencies (Supplementary Figure S1).
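One standard way to express these thickness-dependent amplitude and phase terms (Beer–Lambert-type absorption plus the optical path difference against air) is sketched below; the functional form is our assumption stated for illustration, and the names are ours:

```python
import numpy as np

def feature_transmittance(h, wavelength, n, kappa, n_air=1.0):
    """Complex transmittance t = a * exp(j * phi) of a diffractive feature
    of thickness h: the amplitude decays with the extinction coefficient
    kappa, and the phase accumulates from the refractive-index contrast
    against air. h and wavelength must share the same length unit."""
    a = np.exp(-2.0 * np.pi * kappa * h / wavelength)
    phi = (n - n_air) * 2.0 * np.pi * h / wavelength
    return a * np.exp(1j * phi)
```

Evaluating this transmittance per feature and per sampled wavelength, with n(λ) and κ(λ) taken from the measured dispersion, yields the modulation t of Eq. (1).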
Loss function and trainingrelated details
After light propagation through the layers of a diffractive network, a 2 mm wide output aperture was used at the output plane, right before the integrated detector lens, which is made of Si and has the shape of a hemisphere with a radius of 0.5 cm. In our simulations, we modelled the detector lens as an achromatic flat Si slab with a refractive index of 3.4 and a thickness of 0.5 cm. After propagating through this Si slab, the light intensity residing within a designated detector active area was integrated and denoted by I_{out}. The power efficiency was defined by
\(\eta = \frac{{I_{out}}}{{I_{in}}}\) (7)

where I_{in} denotes the power of the incident light within the input aperture of the diffractive network. For each diffractive network model, the reported power efficiency reflects the result of Eq. (7) at the peak wavelength of a given passband.
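Numerically, Eq. (7) is just the output intensity integrated (summed) over the detector active area, divided by the power entering the input aperture; a toy sketch with illustrative names:

```python
import numpy as np

def power_efficiency(intensity, detector_mask, input_power):
    """Eq. (7): power landing within the detector active area over the
    power entering the input aperture of the diffractive network."""
    return np.sum(intensity * detector_mask) / input_power

# Toy example: uniform unit intensity, detector covering 4 of 16 pixels
intensity = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
eta = power_efficiency(intensity, mask, input_power=16.0)
```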
The loss term, L, used for the single-passband filter designs was devised to achieve a balance between the power efficiency and the Q-factor and is defined as

\(L = \alpha L_P + \beta L_Q\) (8)

where L_{P} denotes the power-loss term and L_{Q} denotes the Q-factor loss term; α and β are the relative weighting factors for these two loss terms, which were calculated using the following equations:
with B, ω_{0} and ∆ω_{P} denoting the number of frequencies used in a training batch, the centre frequency of the target passband and the associated bandwidth around the centre frequency, respectively. The rect(ω) function is defined as \({\mathrm{rect}}\left( \omega \right) = 1\) for \(\left| \omega \right| \le \frac{1}{2}\) and 0 otherwise.
Assuming a power spectrum profile with a Gaussian distribution N(ω_{0}, σ^{2}) with a full-width-at-half-maximum (FWHM) bandwidth of ∆ω, the standard deviation and the associated ∆ω_{Q} were defined as
The Q-factor was defined as

\(Q = \frac{{\omega _0}}{{{\Delta}\omega }}\)
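The Gaussian target spectrum implied by a chosen Q-factor can be sketched as follows, assuming the standard FWHM-to-standard-deviation conversion for a Gaussian; the names are illustrative:

```python
import numpy as np

def gaussian_target(omega, omega0, q_factor):
    """Target power spectrum for the Q-factor loss: a Gaussian centred at
    omega0 whose FWHM bandwidth is delta_omega = omega0 / Q, using the
    standard conversion sigma = FWHM / (2 * sqrt(2 * ln 2))."""
    delta_omega = omega0 / q_factor
    sigma = delta_omega / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-((omega - omega0) ** 2) / (2.0 * sigma ** 2))
```

By construction, the target drops to half of its peak value exactly half a bandwidth away from the centre frequency.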
For the single-passband diffractive spectral filter designs reported in Fig. 2a–d and the dual-passband spectral filter reported in Fig. 3, ∆ω_{P} for each band was taken as 5 GHz. For these five diffractive designs, β in Eq. (8) was set to 0 so that the network model maximizes the power efficiency without any restriction or penalty on the Q-factor. For the diffractive spectral filter design illustrated in Fig. 2e, on the other hand, the \(\frac{\alpha }{\beta }\) ratio (balancing the power efficiency and the Q-factor) was set to 0.1 in Eq. (8).
In the design phase of the spatially controlled wavelength demultiplexing system (Fig. 4), following the strategy used in the filter design depicted in Fig. 2e, the target spectral profile around each centre frequency was taken as a Gaussian with a Q-factor of 10. For simplicity, the \(\frac{\alpha }{\beta }\) ratio in Eq. (8) was set to 0.1 for each band and detector location, i.e., \(\frac{{\alpha _1}}{{\beta _1}} = \frac{{\alpha _2}}{{\beta _2}} = \frac{{\alpha _3}}{{\beta _3}} = \frac{{\alpha _4}}{{\beta _4}} = \frac{1}{{10}}\), where the indices refer to the four different apertures at the detector/output plane. Although not implemented in this work, the \(\frac{\alpha }{\beta }\) ratios of different bands/channels can also be tuned separately to better compensate for the material losses as a function of the wavelength. In general, to design an optical component that maintains the photon efficiency and Q-factor over K different bands based on our broadband diffractive optical network framework, a set of 2K coefficients, i.e., (α_{1}, α_{2}, …, α_{K}, β_{1}, β_{2}, …, β_{K}), must be tuned according to the material dispersion properties for all the subcomponents of the loss function.
In our training phase, the M = 7500 frequencies were randomly sampled in batches of B = 20, a batch size limited mainly by our GPU memory. The trainable variables, h_{p} in Eq. (4), were updated following the standard error backpropagation method using the Adam optimizer^{62} with a learning rate of 1 × 10^{−3}. The initial conditions of all the trainable parameters were set to 0. For the diffractive network models with more than one detector location reported in this manuscript, the loss values were individually calculated for each detector in a random order, and the design parameters were updated thereafter. In other words, for a d-detector optical system, loss calculations and parameter updates were performed d times, once with respect to each detector, in random order.
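The sampling and update schedule described above can be sketched framework-agnostically. Here, update_step stands in for one Adam-based backpropagation step, applying the d per-detector updates within every batch is one plausible reading of the schedule, and all names are illustrative:

```python
import random

def train(epochs, frequencies, detectors, batch_size, update_step):
    """Each epoch shuffles all M frequencies into batches of B; for designs
    with several detectors, the loss is evaluated and the parameters are
    updated once per detector, in a random order, for every batch."""
    for _ in range(epochs):
        freqs = list(frequencies)
        random.shuffle(freqs)
        for start in range(0, len(freqs), batch_size):
            batch = freqs[start:start + batch_size]
            order = list(detectors)
            random.shuffle(order)
            for det in order:  # d updates per batch for a d-detector system
                update_step(batch, det)

# Toy run: count how many update steps the schedule produces
calls = []
train(epochs=2, frequencies=range(10), detectors=["d1", "d2", "d3"],
      batch_size=5, update_step=lambda batch, det: calls.append((len(batch), det)))
```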
Our models were simulated using Python (v3.7.3) and TensorFlow (v1.13.0, Google Inc.). All the models were trained for 200 epochs (the network saw all 7500 frequencies in each epoch) using a GeForce GTX 1080 Ti graphics processing unit (GPU, Nvidia Inc.), an Intel® Core™ i9-7900X central processing unit (CPU, Intel Inc.) and 64 GB of RAM, running the Windows 10 operating system (Microsoft). Training of a typical diffractive network model takes ~5 h to complete for 200 epochs. The thickness profile of each diffractive layer was then converted into the .stl file format using MATLAB.
Code availability
The deep-learning models reported in this work used standard libraries and scripts that are publicly available in TensorFlow.
Data availability
All the data and methods needed to evaluate the conclusions of this work are present in the main text and the Supplementary Materials. Additional data can be requested from the corresponding author.
References
 1.
Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).
 2.
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
 3.
Collobert, R. & Weston, J. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proc. 25th International Conference on Machine Learning (eds McCallum, A. & Roweis, S.) 160–167 (Helsinki, Finland: ACM, 2008). https://doi.org/10.1145/1390156.1390177.
 4.
Chen, L. C. et al. DeepLab: semantic image segmentation with deep convolutional nets, Atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848 (2018).
 5.
Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. In Proc. 2015 IEEE Conference on Computer Vision and Pattern Recognition 3431–3440 (Boston, MA, USA: IEEE, 2015).
 6.
Rivenson, Y. et al. Deep learning enhanced mobile-phone microscopy. ACS Photonics 5, 2354–2364, https://doi.org/10.1021/acsphotonics.8b00146 (2018).
 7.
Rivenson, Y. et al. Deep learning microscopy. Optica 4, 1437–1443 (2017).
 8.
Nehme, E. et al. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458–464 (2018).
 9.
Kim, T., Moon, S. & Xu, K. Information-rich localization microscopy through machine learning. Nat. Commun. 10, 1996 (2019).
 10.
Ouyang, W. et al. Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460–468 (2018).
 11.
Rivenson, Y. et al. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 7, 17141 (2018).
 12.
Rivenson, Y. et al. PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning. Light Sci. Appl. 8, 23 (2019).
 13.
Sinha, A. et al. Lensless computational imaging through deep learning. Optica 4, 1117–1125 (2017).
 14.
Barbastathis, G., Ozcan, A. & Situ, G. On the use of deep learning for computational imaging. Optica 6, 921–943 (2019).
 15.
Li, Y. Z., Xue, Y. J. & Tian, L. Deep speckle correlation: a deep learning approach toward scalable imaging through scattering media. Optica 5, 1181–1190 (2018).
 16.
Rahmani, B. et al. Multimode optical fiber transmission with a deep learning network. Light Sci. Appl. 7, 69 (2018).
 17.
Rivenson, Y. et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat. Biomed. Eng. 3, 466–477 (2019).
 18.
Malkiel, I. et al. Plasmonic nanostructure design and characterization via deep learning. Light Sci. Appl. 7, 60 (2018).
 19.
Liu, D. J. et al. Training deep neural networks for the inverse design of nanophotonic structures. ACS Photonics 5, 1365–1369 (2018).
 20.
Peurifoy, J. et al. Nanophotonic particle simulation and inverse design using artificial neural networks. Sci. Adv. 4, eaar4206 (2018).
 21.
Ma, W., Cheng, F. & Liu, Y. Deep-learning-enabled on-demand design of chiral metamaterials. ACS Nano 12, 6326–6334 (2018).
 22.
Piggott, A. Y. et al. Inverse design and demonstration of a compact and broadband on-chip wavelength demultiplexer. Nat. Photonics 9, 374–377 (2015).
 23.
Psaltis, D. et al. Holography in artificial neural networks. Nature 343, 325–330 (1990).
 24.
Krishnamoorthy, A. V., Yayla, G. & Esener, S. C. Design of a scalable optoelectronic neural system using free-space optical interconnects. In Proc. IJCNN-91-Seattle International Joint Conference on Neural Networks 527–534 (Seattle, WA, USA: IEEE, 1991).
 25.
Shen, Y. C. et al. Deep learning with coherent nanophotonic circuits. Nat. Photonics 11, 441–446 (2017).
 26.
Shastri, B. J. et al. in Unconventional Computing: A Volume in the Encyclopedia of Complexity and Systems Science 2nd edn (ed Adamatzky, A.) 83–118 (Springer, New York, NY, 2018). https://doi.org/10.1007/978-1-4939-6883-1_702.
 27.
Lin, X. et al. Alloptical machine learning using diffractive deep neural networks. Science 361, 1004–1008 (2018).
 28.
Chang, J. L. et al. Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. Sci. Rep. 8, 12324 (2018).
 29.
Estakhri, N. M., Edwards, B. & Engheta, N. Inverse-designed metastructures that solve equations. Science 363, 1333–1338 (2019).
 30.
Mengu, D. et al. Analysis of diffractive optical neural networks and their integration with electronic neural networks. IEEE J. Sel. Top. Quantum Electron. 26, 1–14 (2020).
 31.
Li, J. X. et al. Class-specific differential detection in diffractive optical neural networks improves inference accuracy. Adv. Photonics 1, 046001 (2019).
 32.
O’Shea, D. C. et al. Diffractive Optics: Design, Fabrication, and Test. (SPIE Optical Engineering Press, Bellingham, WA, 2004).
 33.
Piestun, R. & Shamir, J. Control of wavefront propagation with diffractive elements. Opt. Lett. 19, 771–773 (1994).
 34.
Abrahamsson, S. et al. Multifocus microscopy with precise color multiphase diffractive optics applied in functional neuronal imaging. Biomed. Opt. Express 7, 855–869 (2016).
 35.
Arieli, Y. et al. Design of diffractive optical elements for multiple wavelengths. Appl. Opt. 37, 6174–6177 (1998).
 36.
Sweeney, D. W. & Sommargren, G. E. Harmonic diffractive lenses. Appl. Opt. 34, 2469–2475 (1995).
 37.
Faklis, D. & Morris, G. M. Spectral properties of multiorder diffractive lenses. Appl. Opt. 34, 2462–2468 (1995).
 38.
Sales, T. R. M. & Raguin, D. H. Multiwavelength operation with thin diffractive elements. Appl. Opt. 38, 3012–3018 (1999).
 39.
Kim, G., DomínguezCaballero, J. A. & Menon, R. Design and analysis of multiwavelength diffractive optics. Opt. Express 20, 2814–2823 (2012).
 40.
Banerji, S. & Sensale-Rodriguez, B. A computational design framework for efficient, fabrication error-tolerant, planar THz diffractive optical elements. Sci. Rep. 9, 5801 (2019).
 41.
Salo, J. et al. Holograms for shaping radiowave fields. J. Opt. A: Pure Appl. Opt. 4, S161–S167 (2002).
 42.
Jacob, Z., Alekseyev, L. V. & Narimanov, E. Optical hyperlens: far-field imaging beyond the diffraction limit. Opt. Express 14, 8247–8256 (2006).
 43.
Wang, P., Mohammad, N. & Menon, R. Chromatic-aberration-corrected diffractive lenses for ultra-broadband focusing. Sci. Rep. 6, 21545 (2016).
 44.
Aieta, F. et al. Multiwavelength achromatic metasurfaces by dispersive phase compensation. Science 347, 1342–1345 (2015).
 45.
Arbabi, E. et al. Controlling the sign of chromatic dispersion in diffractive optics with dielectric metasurfaces. Optica 4, 625–632 (2017).
 46.
Wang, Q. et al. A broadband metasurface-based terahertz flat-lens array. Adv. Opt. Mater. 3, 779–785 (2015).
 47.
Avayu, O. et al. Composite functional metasurfaces for multispectral achromatic optics. Nat. Commun. 8, 14992 (2017).
 48.
Lin, Z. et al. Topology-optimized multilayered meta-optics. Phys. Rev. Appl. 9, 044030 (2018).
 49.
Wang, S. M. et al. Broadband achromatic optical metasurface devices. Nat. Commun. 8, 187 (2017).
 50.
Chen, W. T. et al. A broadband achromatic metalens for focusing and imaging in the visible. Nat. Nanotechnol. 13, 220–226 (2018).
 51.
Wang, S. M. et al. A broadband achromatic metalens in the visible. Nat. Nanotechnol. 13, 227–232 (2018).
 52.
Campbell, S. D. et al. Review of numerical optimization techniques for metadevice design [Invited]. Opt. Mater. Express 9, 1842–1863 (2019).
 53.
Karl, N. J. et al. Frequency-division multiplexing in the terahertz range using a leaky-wave antenna. Nat. Photonics 9, 717–720 (2015).
 54.
Hu, B. B. & Nuss, M. C. Imaging with terahertz waves. Opt. Lett. 20, 1716–1718 (1995).
 55.
Shen, Y. C. et al. Detection and identification of explosives using terahertz pulsed spectroscopic imaging. Appl. Phys. Lett. 86, 241116 (2005).
 56.
Yardimci, N. T. & Jarrahi, M. High sensitivity terahertz detection through large-area plasmonic nanoantenna arrays. Sci. Rep. 7, 42667 (2017).
 57.
Li, Y. & Engheta, N. Capacitor-inspired metamaterial inductors. Phys. Rev. Appl. 10, 054021 (2018).
 58.
Liberal, I., Li, Y. & Engheta, N. Reconfigurable epsilon-near-zero metasurfaces via photonic doping. Nanophotonics 7, 1117–1127 (2018).
 59.
Chaudhary, K. et al. Engineering phonon polaritons in van der Waals heterostructures to enhance in-plane optical anisotropy. Sci. Adv. 5, eaau7171 (2019).
 60.
Yu, N. F. & Capasso, F. Flat optics with designer metasurfaces. Nat. Mater. 13, 139–150 (2014).
 61.
Yardimci, N. T. et al. High-power terahertz generation using large-area plasmonic photoconductive emitters. IEEE Trans. Terahertz Sci. Technol. 5, 223–229 (2015).
 62.
Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).
Acknowledgements
The Ozcan Research Group at UCLA acknowledges the support of Fujikura (Japan).
Author information
Affiliations
Contributions
Y.L. performed the design and fabrication of the diffractive systems, and N.T.Y. performed the experimental testing. D.M. provided assistance with the design and experimental testing of the diffractive models. M.V. provided assistance with the fabrication. All the authors participated in the analysis and discussion of the results. Y.L., D.M., Y.R., M.J. and A.O. wrote the manuscript with assistance from all the authors. A.O. initiated and supervised the project.
Corresponding author
Ethics declarations
Conflict of interest
A.O., Y.L., D.M. and Y.R. are coinventors of a patent application on Broadband Diffractive Neural Networks.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Luo, Y., Mengu, D., Yardimci, N.T. et al. Design of taskspecific optical systems using broadband diffractive neural networks. Light Sci Appl 8, 112 (2019). https://doi.org/10.1038/s4137701902231