Angle-based wavefront sensing enabled by the near fields of flat optics

There is a long history of using angle sensors to measure wavefronts; the best-known example is the Shack-Hartmann sensor. Compared with other methods of wavefront sensing, the angle-based approach is more broadly used in industrial applications and scientific research. Its wide adoption is attributed to its fully integrated setup, robustness, and fast speed. However, it suffers from a long-standing issue: low spatial resolution, which is limited by the size of the angle sensor. Here we report an angle-based wavefront sensor that overcomes this challenge. It uses ultra-compact angle sensors built from flat optics, integrated directly on a focal plane array. This wavefront sensor inherits all the benefits of the angle-based method while improving the spatial sampling density by over two orders of magnitude. The drastically improved resolution allows angle-based sensors to be used for quantitative phase imaging, enabling capabilities such as video-frame recording of high-resolution surface topography.

Fig. S1 Schematic of single slit aperture for calculating field energy using Rayleigh-Sommerfeld diffraction theory.

Supplementary Note 2: Energy distribution calculated by full-wave simulation
We calculate the energy ratio (A-B)/(A+B) between two symmetric regions A and B beneath an aperture (see Fig. S1) as a function of incident angle θ (in degrees). The calculation is done for four aperture sizes ranging from 0.5 μm to 2.0 μm and for five wavelengths (450 nm, 500 nm, 550 nm, 600 nm, and 650 nm) using 2D full-wave simulation. For a fixed aperture, the wavelength dependence is weak, particularly for large apertures. The angular response of the energy ratio persists even for a large aperture, albeit with reduced angular sensitivity. The schematic of an actual device is shown in Fig. S3a. A CMOS image sensor with a 5.2 μm pixel is used. We assume a 1 μm thick SiO2 passivation layer and a 70 nm thick Si3N4 anti-reflective coating on top of the Si layer, which is a more realistic structure of a CMOS image sensor. The thickness of the Al mask on top of the structure is 100 nm. The energy ratio of the 3D structure was calculated using the Tidy3D FDTD simulation tool [1]. As seen in Fig. S3b, the energy ratio depends on the incident angle.

Supplementary Note 3: Calibration of the wavefront sensor
The fabricated wavefront sensor was calibrated under a collimated LED light source. Two rotation stages and one linear stage were connected together as shown in Fig. S4 to rotate the sensor in the θ and φ directions. The sensor was attached to the top rotation stage. The top rotation stage moves the camera in the θ direction and the bottom rotation stage moves the camera in the φ direction. A linear stage is used to compensate for the off-axis movement of the wavefront sensor when it rotates in the θ direction.
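The angular dependence of the near-field energy ratio can be illustrated with a simple scalar-diffraction sketch (not the full-wave FDTD simulation used in this note): a tilted plane wave truncated by a slit is propagated a short distance with the angular-spectrum method, and the transmitted intensity is integrated over the two halves of the slit footprint. The function name and all parameter values below are illustrative.

```python
import numpy as np

def energy_ratio(theta_deg, slit_width=1.0e-6, wavelength=550e-9,
                 depth=1.0e-6, window=20e-6, n_pts=4096):
    """Scalar angular-spectrum estimate of the (A-B)/(A+B) energy ratio
    beneath a single slit illuminated at incidence angle theta_deg.
    A and B are the two halves of the slit footprint at distance `depth`
    below the mask. A sketch only: no material stack, no vector effects."""
    x = np.linspace(-window/2, window/2, n_pts)
    dx = x[1] - x[0]
    k = 2*np.pi/wavelength
    theta = np.radians(theta_deg)
    # tilted plane wave truncated by the slit aperture
    field = np.exp(1j*k*np.sin(theta)*x) * (np.abs(x) <= slit_width/2)
    # angular-spectrum propagation to the sensor plane
    fx = np.fft.fftfreq(n_pts, dx)
    kz = np.emath.sqrt(k**2 - (2*np.pi*fx)**2)  # imaginary for evanescent waves
    field_z = np.fft.ifft(np.fft.fft(field)*np.exp(1j*kz*depth))
    inten = np.abs(field_z)**2
    a = inten[(x >= -slit_width/2) & (x < 0)].sum()   # region A
    b = inten[(x >= 0) & (x <= slit_width/2)].sum()   # region B
    return (a - b)/(a + b)
```

At normal incidence the ratio vanishes by symmetry and grows with tilt, which is the monotonic angular response the angle sensor relies on.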
While the stage rotates, it changes the angles, θ and φ, between the collimated beam and the sensor. Thirty raw images were recorded and averaged at each angle, where 0° ≤ θ ≤ 30° and 0° ≤ φ ≤ 360° with a step of ∆θ = ∆φ = 1°. This process is very efficient with a motorized stage, as all 62,500 angle sensors on the fabricated chip are calibrated at the same time. We present selected raw images captured during the calibration process to show the working principle of the angle-sensitive pixels. Each panel in Fig. S5 depicts a cropped region of the raw image captured at a different incident angle. The camera uses an 8-bit ADC (analogue-to-digital converter), meaning that the minimum intensity is 0 (black pixel) and the maximum is 255 (white pixel). Here, we can observe the angle dependency between neighboring pixels. The yellow square represents a super-pixel consisting of 2×2 pixels.
As θ increases while φ = 90°, the image contrast increases in the vertical direction while remaining unchanged in the horizontal direction, because the projected LED location changes in the vertical direction. In the actual measurement, a slight contrast change exists in the horizontal direction due to misalignment between the fabricated Al layer and the pixel layer. However, this misalignment is embedded in the raw image and thus has no effect on the sensor performance. The same can be observed when φ = 0° and θ increases: in this case, the image contrast appears in the horizontal direction while the contrast is minimal in the vertical direction. Once the calibration is done for the entire range of θ and φ, the acquired raw images are used as a lookup table for individual angle-sensing pixels to determine the incident angle of light.
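The calibrate-then-look-up procedure can be sketched as follows. For simplicity this illustrative version matches a whole raw frame against the calibration set; the actual sensor performs the lookup per angle-sensing pixel. All names are hypothetical.

```python
import numpy as np

def build_lookup(calib_frames):
    """calib_frames: dict mapping (theta_deg, phi_deg) -> stack of raw frames
    with shape (n_repeats, H, W). Averages the repeats recorded at each angle
    and returns the angle list plus the averaged-response array."""
    angles, responses = [], []
    for (theta, phi), frames in calib_frames.items():
        angles.append((theta, phi))
        responses.append(frames.mean(axis=0).ravel())  # average the raw images
    return np.array(angles), np.stack(responses)

def measure_angle(raw_frame, angles, responses):
    """Nearest-neighbour lookup: pick the calibration angle whose averaged
    response is closest (L2 distance) to the measured frame."""
    d = np.linalg.norm(responses - raw_frame.ravel(), axis=1)
    return tuple(angles[np.argmin(d)])
```

Because any fixed mask-to-pixel misalignment is present in both the calibration frames and the measurement frames, it cancels out in this comparison, which is why it does not degrade sensor performance.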

Supplementary Note 4: Spherical wavefront measurement
This section discusses the evaluation of the wavefront sensor using a spherical wave. A lens with a 25 mm focal length was used to generate a diverging beam. The wavefront of the diverging beam was measured at three different locations A, B, and C. As the wavefront sensor is placed further away from the focal point, the measured radius of curvature (ROC) of the spherical wavefront increases. Measured wavefronts at positions A, B, and C are shown in Fig. S6. It is clear that the wavefront measured further away from the focal point has less curvature. The ROC was calculated for all three wavefronts and compared with the theoretical ROC in Table S1. The measured and theoretical ROC are in good agreement.

Position   Theoretical ROC   Measured ROC
A          3 mm              3.14 mm
B          5 mm              5.08 mm
C          7 mm              7.08 mm
Table S1. Comparison between theoretical and measured ROC at three different positions. Both are in good agreement.
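One common way to extract an ROC value like those in Table S1 is to fit a paraxial sphere (plus tilt and piston) to the reconstructed wavefront in a least-squares sense. A minimal sketch with illustrative names, not the authors' actual code:

```python
import numpy as np

def fit_roc(x, y, w):
    """Least-squares fit of a paraxial spherical wavefront
    w ~ (x^2 + y^2)/(2R) + tilt + piston.
    Returns the radius of curvature R (same length units as x, y, w)."""
    # design matrix: curvature term, x-tilt, y-tilt, piston
    A = np.column_stack([x**2 + y**2, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, w, rcond=None)
    return 1.0/(2.0*coef[0])
```

Fitting tilt and piston alongside the curvature keeps residual alignment errors from biasing the recovered R.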

Supplementary Note 5: Conversion of angle measurement to wavefront
The measured wavefront can be used to calculate the surface height of the PMMA. The phase delay φ(x, y) at each unit pixel is related to the height h(x, y) at each unit cell as [2]

φ(x, y) = (2π/λ) Δn h(x, y),

where λ is the wavelength of the incident light and Δn is the refractive index difference between the sample and the surrounding medium. We use zonal estimation [3] to reconstruct the wavefront from φ(x, y) calculated at each unit pixel. The surface profile of a non-transparent sample can be measured in a reflection-mode setup as shown in Fig. S7a. This setup simply adds a beam splitter to the existing transmission-mode setup.
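A minimal sketch of the two steps in this note, assuming the standard phase-height relation φ(x, y) = (2π/λ) Δn h(x, y): inverting the phase for height, and a crude forward-difference path integration standing in for the zonal estimator (a real implementation would use a least-squares solver such as Southwell's). Names are illustrative.

```python
import numpy as np

def height_from_phase(phase, wavelength, delta_n):
    """Invert phi = (2*pi/lambda) * dn * h for the sample height h."""
    return phase * wavelength / (2*np.pi*delta_n)

def zonal_wavefront(sx, sy, pitch):
    """Toy zonal reconstruction: sx, sy are the slope maps dW/dx, dW/dy
    sampled on a grid of spacing `pitch`. Integrate the first column with
    the y-slopes, then each row with the x-slopes (forward differences)."""
    H, W = sx.shape
    w = np.zeros((H, W))
    w[1:, 0] = np.cumsum(sy[1:, 0])*pitch                       # first column
    w[:, 1:] = w[:, [0]] + np.cumsum(sx[:, 1:], axis=1)*pitch   # each row
    return w - w.mean()                                         # remove piston
```

For a uniform tilt (constant sx, sy) this recovers a plane with the correct slopes, which is the basic sanity check for any zonal integrator.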

Supplementary Note 6: Reflection Mode
For sample preparation, a PMMA polymer sample was coated with 10 nm of Al to make its surface reflective. Figure S7b is a microscope image taken in differential interference contrast (DIC) mode. While the image does not deliver any quantitative information about the surface profile, it does provide qualitative information about the surface profile gradient. Figure S7c shows the surface profile measured with the wavefront sensor using the reflection-mode setup. It not only shows the surface profile but also delivers the actual height information, which a DIC microscope does not. The height difference between the local maximum and local minimum within the captured frame is around 27 μm. Figure S7d shows the same region of the sample measured with a white-light interferometer (WLI). The WLI is not able to capture the surface roughness details that are clearly visible in the DIC image and the wavefront sensor measurement.

Supplementary Note 7: Minimum and maximum detectable angle of a wavefront tilt
The minimum detectable angle δθ of the wavefront sensor can be expressed as

δθ = ΔR / (dR/dθ),    (Eq. S3)

where R is the pixel intensity ratio between two neighboring pixels. If we assume R has a linear response up to θ = θ_L as shown in Fig. S8a, dR/dθ can be expressed as

dR/dθ = (R_max − 1) / θ_L,    (Eq. S4)

where θ_L is the maximum angle at which the wavefront sensor has a linear response and R_max is the maximum pixel intensity ratio, reached at θ_L degrees.
Here, we show how the minimum detectable angle around normal incidence can be calculated. Since R = I_1/I_2, where I_1 and I_2 are the pixel intensities of two neighboring pixels, ΔR can be expressed as:

ΔR = R √((ΔI_1/I_1)² + (ΔI_2/I_2)²).    (Eq. S5)

Since I_1 ≈ I_2 when light is normally incident, Eq. S5 can be written as

ΔR ≈ √2 ΔI/I = √2 / SNR.    (Eq. S6)

Thus, substituting Eq. S4 and S6 into Eq. S3, δθ can be expressed as

δθ = √2 θ_L / (SNR (R_max − 1)).    (Eq. S7)

If we assume SNR = 45 dB and use R_max = 1.1 and θ_L = 5°, based on our experimental results in Fig. S8b, δθ can be calculated.
Fig. S8 a) Pixel intensity ratio (R) of two neighboring pixels as a function of incident angle (θ). b) Measured pixel intensity ratio of two neighboring pixels. The neighboring pixels were randomly selected within the fabricated wavefront sensor.
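The linear-response estimate δθ = √2 θ_L / (SNR (R_max − 1)) can be evaluated numerically as sketched below. Note two assumptions that are ours, not the note's: the closed form itself, and the conversion of the 45 dB figure to a linear ratio as 10^(SNR_dB/20).

```python
import numpy as np

def min_detectable_angle(snr_db=45.0, r_max=1.1, theta_l_deg=5.0):
    """Estimate delta-theta (degrees) from the linear ratio response.
    Assumes SNR given in dB converts to a linear ratio as 10**(dB/20)."""
    snr_lin = 10**(snr_db/20.0)
    return np.sqrt(2)*theta_l_deg/(snr_lin*(r_max - 1.0))
```

Under these assumptions the stated values (45 dB, R_max = 1.1, θ_L = 5°) evaluate to a few tenths of a degree; a different dB convention would change the result by orders of magnitude, so the convention matters.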
Next, we discuss the maximum detectable angle, θ_max. For a lens-based system such as the Shack-Hartmann wavefront sensor, θ_max depends on the focal length and the diameter of the lens. Figure S9 illustrates θ_max for a lens-based system. θ_max corresponds to the maximum displacement of the focal spot with respect to the normal-incidence case. As shown in Fig. S9b, the maximum focal spot shift with respect to the normal-incidence case (Fig. S9a) is around d/2, where d is the lens diameter. Once the focal spot shift exceeds d/2, there is an ambiguity in identifying the lens that generated the focal spot of interest. For a typical Shack-Hartmann wavefront sensor (Thorlabs WFS20-5C), θ_max ≈ 1° with a spatial resolution of 150 μm, equivalent to the lenslet pitch. θ_max can be made larger than 1° at the cost of spatial resolution, since it requires a larger lenslet.
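The d/2 ambiguity limit translates into a simple geometric expression for θ_max of a lenslet array. The focal length used below is an assumed, illustrative value (this note does not state the WFS20-5C focal length); the function name is hypothetical.

```python
import numpy as np

def lenslet_theta_max(pitch, focal_length):
    """Maximum detectable tilt (degrees) for a lenslet-array sensor:
    the focal spot may shift at most half the lenslet pitch before it
    becomes ambiguous which lenslet produced it."""
    return np.degrees(np.arctan((pitch/2)/focal_length))

# Illustrative: a 150-um pitch with an assumed ~4.3 mm focal length
# gives a maximum tilt of about one degree, matching the ~1 deg figure.
theta_max = lenslet_theta_max(150e-6, 4.3e-3)
```

The pitch appears in both the numerator of θ_max and the spatial resolution, which is exactly the trade-off described above: enlarging the lenslet buys angular range at the cost of sampling density.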
Unlike a lens-based system, our wavefront sensor is based on flat optics, which offers much more flexibility in θ_max. As shown in Fig. S8b, θ_max of our wavefront sensor can reach up to 30° with a spatial resolution of only 2 pixels (10.4 μm).
θ_max is 30 times higher and the spatial resolution is almost 15 times higher compared with a commercial Shack-Hartmann wavefront sensor.
Fig. S9 a) When a lens focuses normally incident light, the focal spot does not shift; it lies below the center of the lens. b) For obliquely incident light, the focal spot shifts, and the amount of shift depends on the incident angle. The maximum detectable angle (θ_max) for a lens-based system is limited by the lens diameter.

Supplementary Note 8: Regions where WLI fails to measure
The region where WLI fails to measure can be understood by looking at the surface slope. Fig. S10a shows the surface profile of a PMMA sample measured with our wavefront sensor, and Fig. S10b shows the same region measured with WLI (Zygo NewView 9000). The region where WLI tends to fail is the side-wall region, where the surface slope is high and thus the interference pattern is not captured by the image sensor integrated with the WLI. Fig. S10c shows the magnitude of the gradient calculated from the measurement result in Fig. S10a. Blue indicates flat regions and yellow indicates regions with a steep slope of the surface profile. By comparing Fig. S10b and S10c, we can see that the region WLI fails to measure is where the surface profile gradient is high relative to the flat surface.
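A gradient-magnitude map like that of Fig. S10c can be computed directly from a measured height map; a minimal sketch with hypothetical names:

```python
import numpy as np

def slope_magnitude(height, pitch):
    """Magnitude of the surface gradient |grad h| from a height map sampled
    on a grid of spacing `pitch`; large values flag steep side walls where
    a WLI loses the interference signal."""
    gy, gx = np.gradient(height, pitch)   # finite-difference slopes
    return np.hypot(gx, gy)
```

Thresholding this map gives a simple mask of the steep regions where the WLI measurement is expected to drop out.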

Supplementary Note 9: Image comparison with WLI and DIC microscope
Here, we compare the surface profile taken with three different tools. Figure S11a is taken with WLI. While it measures the overall height accurately, it does not reveal the detailed surface roughness. Using our wavefront sensor, the details can be revealed, as in Fig. S11b, which is consistent with the image taken using a differential interference contrast (DIC) microscope, as in Fig. S11c. Note that DIC only delivers a qualitative image, meaning that the image contrast does not accurately represent the surface profile; it provides an artificial 3D-like appearance. Figures S11d and S11e show the angular components of Fig. S11b in the x and y directions, respectively. These two images convey similar information to a DIC microscope image, since they only show the angular gradient in one specific direction, which can correspond to the polarization angle of the DIC microscope.
Fig. S12. a) Schematic of our wavefront sensor with an extremely small mask and a short distance between the mask and the sensor layer. b) Schematic of a conventional Hartmann sensor with an extremely large aperture and mask-to-sensor distance. c) Example of raw measurement data captured with our wavefront sensor. d) Raw measurement data of a conventional Hartmann sensor, cited from [4], showing only 15×15 spatial resolution.
Apertures have been used to measure wavefronts in the EUV [5] and X-ray [4] ranges, owing to the lack of lenses at these wavelengths. However, whether built from micro-lenses in the visible range or apertures in the EUV/X-ray range, traditional Hartmann sensors follow the same operating principle, and it differs from our sensor in one important aspect: length scale.
The difference in length scale leads to different regimes of wave physics, and the resulting performance metrics are also drastically different. The apertures in EUV/X-ray systems are 1,000 to 10,000 times the wavelength, and the distance between the aperture and the sensor plane is even larger (Fig. S12b). In contrast, all length scales in our system are around the wavelength (Fig. S12a). This difference dictates that the two systems exploit different physics: the former primarily relies on ray optics with far-field diffraction corrections, whereas the latter, i.e. our sensor, requires full-wave electrodynamics and exploits the near-field energy distribution. Because we explore a very different length scale and primary physical mechanism, the new system can realize performance that is orders of magnitude better in both spatial resolution and angular dynamic range. Note the spatial resolution difference between our sensor (Fig. S12c) and the conventional Hartmann sensor (Fig. S12d).