Introduction

Throughout evolution, perception of the environment has proven highly advantageous for the survival of species, in particular through the eye, a key organ for continuous observation of the surroundings. For this reason, the perception system of some species has evolved to a high level of complexity. In arthropods such as the dragonfly, evolution has produced two spherical faceted eyes that observe the environment over a wide Field of View (FoV)1,2. Each facet of a dragonfly's eye is an optical subsystem, named an ommatidium, composed of a facet lens, a crystalline cone, a rhabdom (waveguide) and photoreceptor cells1,2. The curvature of the eye has specifically evolved so that each microlens forming this compound imaging system captures light originating from a very specific direction only1,2 (cf. Fig. 1a, c).

Fig. 1: Representation of the 3D directional LiDAR working principle.

a Photograph of a dragonfly. b 3D artistic representation of the metalens array (MLA) integrated onto a microchip device. c Schematic representation of the architecture of the dragonfly compound eye, with each ommatidium collecting light, with a numerical aperture NA, from a unique direction (θi, φi). The spherical shape of the compound lenses of a dragonfly eye captures light impinging from a large field of view. d Schematic representation of our MLA detector mimicking the dragonfly compound eye. The overall field of view is 120° × 120°. e Zoom on the phase profile of seven metalenses (MLs) arranged in a hexagonal pattern, which constitute the center of the detection array. f Scanning electron microscopy (SEM) image of these same seven MLs. g Schematic representation of the LiDAR scanning device, where a laser coupled to a deflector illuminates a 3D scene point by point. The light backscattered by the environment is collected by the MLA and then focused on an array of pixelated detectors. Only the lens designed to collect light coming from the (θ, φ) direction focuses exactly on the ML axis, where we select the detection area of the associated pixel. The panel on the detection part also shows that the spots projected on the detector plane by all the other MLs are distorted by off-axis aberration and shifted from the ML axis (cf. detection part inset).

Over the last twenty years, the development of optical cameras has been broadly inspired by biological systems, notably by this specific working principle of the compound eyes of arthropods3,4,5,6,7,8,9,10. To imitate the directionality of these eyes, planar assemblies of identical microlenses5,8, comparable to the plenoptic camera model, have been considered11,12,13,14. This method, named light-field imaging, allows observing the environment as if the observer were positioned at different points of view, accessing 3D information related to both the measured light intensity and the direction of incidence of the incoming rays14. A major shortcoming of this method is the large amount of information generally needed to reconstruct the environment, relying on powerful computers for fast information storage and processing15. Another procedure consists in fabricating microlenses on a thin flat layer of polymer, which is then shaped in 3D into complex curved geometries by applying stress, with the objective of achieving curvatures mimicking the arthropod eye3,4,6,10. Such man-made devices present very large FoVs, up to 160°, free of off-axis aberrations6. Moreover, the smaller the focal length of the microlenses, the greater the depth of field; as a result, an image of an object will appear sharp regardless of its distance from the eye6,16. Nevertheless, two main constraints arise from such devices. First, the curvature of the polymer containing the lenses is a very sensitive parameter that must be controlled perfectly so that the solid angles of the lenses are regularly distributed on the k-vector sphere. Second, curving the overall light-collecting area inevitably brings a mismatch between the planar surface of the detector and the curved image plane of the lens array3; as a result, the images generated by each microlens cannot all be focused simultaneously in the plane of the detector3 (cf. Supplementary Fig. S1). To compensate for the latter limitation and correct these imaging aberrations, some devices use multiple layers of lenses9, bringing additional complexity in terms of fabrication, especially from a miniaturization perspective. Efforts in using different refractive geometries, for example lenses designed with a logarithmic phase profile, could help extend the focusing range and reduce the geometric aberrations caused by the curvature of the lens array3. This idea of using atypical phase profiles can be exploited and improved using metasurfaces, a class of planar optical components that allow arbitrary modification of the wavefront of light17,18,19,20,21,22.

Besides being able to observe the environment with a wide FoV, living species perceive their surroundings in 3D, and as such they must correctly estimate the distances of objects around them. Nature has equipped most living species with two identical imaging sensors, each acquiring a different “picture”, for the brain to process depth perception. Simultaneously achieving a high FoV and full 3D imaging is a great evolutionary challenge, and mimicking these exceptional performances with artificial devices requires advanced imaging techniques.

LiDAR, for Light Detection and Ranging, is a 3D imaging technology that uses a time-triggered pulsed laser source to illuminate the environment and a photodetector to record the return times of pulses backscattered by the objects of a scene. By measuring the round-trip time of the scattered light, also called the Time-of-Flight (ToF), it is possible to determine the distance of objects from the transmitter/receiver device23,24. The ToF depth-recovery technique can be coupled with a light scanning engine providing sequential emission of the laser pulses into different angular positions, enabling point-by-point measurements to recover full 3D information. Most commercial imaging LiDARs operate using this rather simple architecture to reach high performance in terms of depth of field, FoV and angular resolution. However, the frame rate of such an architecture is inherently limited by the speed of the scanning device, which can additionally suffer from mechanical wear. It is worth mentioning that the mitigation of these drawbacks has been the subject of recent developments, notably with optical phased arrays25 enabling solid-state high-speed beam steering, but limited in terms of insertion efficiency and FoV, or with a metasurface approach26 tackling simultaneously the speed, FoV and mechanical-wear issues with a fully integrated architecture.
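
As an illustrative numerical example (values ours, not from the experiments described below): an object at distance d returns a pulse after a delay

$${\mathrm{ToF}}=\frac{2d}{c},$$

so a measured round-trip delay of 10 ns corresponds to a distance d = c · ToF/2 ≈ 1.5 m.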

Here, we propose integrating a directional metalens array (MLA) on a time-gated sensor to perform 3D imaging. The design of the MLA is planar but inspired by the curved eyes of arthropods. It is composed of an assembly of micro-scaled metalenses (MLs), each specifically designed with a unique phase profile, so that the overall MLA mimics the optical function of a curved dragonfly eye. The realization of a planar directional optical interface facilitates vertical integration on top of planar detection modules, enabling directional imaging measurements without the limitations inherent to conventional curved optical systems. In the first part of this paper, we discuss the design and numerical characterization of this MLA. We then present the experimental measurements developed to confirm the simulation results. Finally, we present 2D directional imaging and 3D LiDAR scanning experiments with a wide FoV.

Results

Design of the directional metalens array

The structure of the apposition compound eyes of dragonflies (see Fig. 1a) relies on the fact that each ommatidium (see Fig. 1c) collects light from a very small solid angle. All ommatidia have the same structure and thus the same numerical aperture (NA). The eye curvature causes two neighboring ommatidia to point in two slightly different directions (θ1, φ1) and (θ2, φ2), separated by an inter-ommatidial angle (Δθ, Δφ)1 (cf. Fig. 1c). Tiling many ommatidia into a compact curved surface fragments the FoV, as represented in Fig. 1c.

The objective here is to mimic the characteristics of the dragonfly eye by subdividing a 120° × 120° FoV, reducing as much as possible the size of the shadowed areas. To this end, we consider 17 × 19 = 323 juxtaposed hexagonal MLs with 11.5 μm period, whose optimized observation directions are regularly distributed between −60° and 60° with angular steps Δθ = 6.66° and Δφ = 7.5°, respectively (cf. Fig. 1d, e).
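
As an illustration, the following minimal sketch (our own code; the exact ordering of the directions on the hexagonal grid is an assumption) generates this grid of observation directions:

```python
# Minimal sketch: grid of optimized viewing directions of the 17 x 19 MLA.
# 19 theta values give the 120/18 ~ 6.66 deg pitch along Oy; 17 phi values
# give the 120/16 = 7.5 deg pitch along Ox, matching the values in the text.
import numpy as np

thetas = np.linspace(-60.0, 60.0, 19)   # deg, pitch ~6.66 deg (along Oy)
phis = np.linspace(-60.0, 60.0, 17)     # deg, pitch 7.5 deg (along Ox)
directions = [(theta, phi) for theta in thetas for phi in phis]

print(len(directions))                           # 323 MLs
print(thetas[1] - thetas[0], phis[1] - phis[0])  # ~6.67, 7.5
```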

This array has a filling factor of 100% on a hexagonal grid of period 23 μm. The ith ML of the MLA has a unique spatial phase profile Φi(x, y) that is specifically defined such that it observes only a partial angular space along an optimized direction (θi, φi)27,28,29

$$\Phi_{i}(x,y)=-\frac{2\pi n_{t}}{\lambda}\left(\sqrt{r^{2}+f^{2}}-f\right)-\frac{2\pi n_{i}}{\lambda}\,\hat{\mathbf{k}}_{i}\cdot\mathbf{r},$$
(1)

where nt and ni are the refractive indices of the transmitted and incident media, respectively, λ is the wavelength of the light, and f is the focal length of the metalens. r = (x, y, 0) is the position vector in the metalens plane and \(\hat{\mathbf{k}}_{i}=(\sin\varphi_{\mathrm{opt},i}\cos\theta_{\mathrm{opt},i},\,\sin\theta_{\mathrm{opt},i},\,\cos\varphi_{\mathrm{opt},i}\cos\theta_{\mathrm{opt},i})\) is the normalized incident wave vector. The angle θ (respectively φ) is the angle between the normal to the metalens and the vector \(\hat{\mathbf{k}}_{i}\) in the plane yOz (respectively xOz). To construct the MLA, the phase profile of each ML is determined using Eq. (1), considering its axis centered on the origin (x = 0, y = 0). All the phase profiles are then distributed along the hexagonal grid of period 23 μm.
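
As an illustration, the phase profile of Eq. (1) can be evaluated numerically as in the minimal sketch below (our own code, not the design scripts used for fabrication; the sampling choices and the sapphire index value are assumptions):

```python
# Minimal sketch: phase profile of the i-th metalens following Eq. (1),
# with its axis centered on (x, y) = (0, 0).
import numpy as np

wavelength = 633e-9   # design wavelength (m)
f = 10e-6             # ML focal length (m)
n_t = 1.0             # transmitted medium (air)
n_i = 1.77            # incident medium (sapphire substrate, approximate index)

def ml_phase(x, y, theta_opt, phi_opt):
    """Phase (rad) of a ML optimized for the direction (theta_opt, phi_opt)."""
    r = np.sqrt(x**2 + y**2)
    # hyperbolic focusing term of Eq. (1)
    focus = -2 * np.pi * n_t / wavelength * (np.sqrt(r**2 + f**2) - f)
    # direction-compensation term: k_hat . r with r = (x, y, 0),
    # so only the x and y components of k_hat contribute
    kx = np.sin(phi_opt) * np.cos(theta_opt)
    ky = np.sin(theta_opt)
    tilt = -2 * np.pi * n_i / wavelength * (kx * x + ky * y)
    return np.mod(focus + tilt, 2 * np.pi)

# sample one ML on the 250 nm meta-atom grid over a 23 um cell
pitch = 250e-9
coords = np.arange(-11.5e-6, 11.5e-6, pitch)
X, Y = np.meshgrid(coords, coords)
phase_map = ml_phase(X, Y, np.deg2rad(20.0), np.deg2rad(0.0))
```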

To perform directional detection, the MLA is coupled to a matrix of detectors (a single-photon avalanche diode (SPAD) array or a fast CCD camera) (cf. Fig. 1b). We align the MLA such that the center of each ML perfectly faces a separate active region of the detector (a single SPAD or a designated pixel of a CCD camera)30. We chose a hexagonal grid of 23 μm anticipating integration on the SPAD23 sensor31, but obviously both the pitch and the focal plane of the MLA can be adjusted according to the camera utilized for the experiment. This specific association of a directional lens with a localized detection area is illustrated in Fig. 1g. The MLA phase profile is fragmented such that the ith ML collects information associated with its own (θopt,i, φopt,i) direction (cf. fragmented FoV in Fig. 1g). Thus, the operating principle of the ith ML is to rectify the direction of the incident light coming from the ith direction and to focus the light exactly along the ith ML optical axis, that is, to activate only the detector placed at the center of the ith ML. Light impinging on all the other lenses is also focused, but away from their axes, so their focal spots simply do not overlap with the active regions of the other detectors of the matrix (see detection inset in Fig. 1g). The design of the MLA is carefully chosen to make sure that light coming from a different direction j ≠ i will be focused along the optical axis of the jth detector, thus mimicking the spherical directional observation properties of the dragonfly eye on a planar sensor.

We designed the ML array following the classic passive metasurface approach, i.e., defining a lookup table of meta-atoms providing phase shifts ranging from 0 to 2π17,18. The MLs are designed to operate in transmission at a working wavelength of 633 nm. This choice of wavelength was motivated by the equipment available in our laboratory and also facilitates the alignment of the optical apparatus. However, most LiDAR systems use a wavelength in the infrared (λ = 905 nm or λ = 1550 nm) for reasons of eye safety (cf. Supplementary Materials S11). Operating in the infrared range, it would be possible to use higher-intensity lasers while maintaining eye-safe imaging capabilities. The meta-atoms are made of GaN nanopillars on a sapphire (Al2O3) substrate, arranged on a square grid of period 250 nm. The nanopillars are designed with a large aspect ratio, with a height of about 1 μm and radii varying between 54 nm and 102 nm, to achieve 2π phase modulation with high transmission efficiency (>90%)18,32. The selection of the meta-atoms was performed after a parametric search using a commercial Finite-Difference Time-Domain (FDTD) simulation software (Lumerical, Ansys). More details are available in Supplementary Materials S2.
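
The lookup-table principle can be sketched as follows (illustrative code; the radius-to-phase mapping below is a placeholder, whereas in the actual design it comes from the FDTD parametric search):

```python
# Minimal sketch: assigning a nanopillar radius to each target phase value.
import numpy as np

radii = np.linspace(54e-9, 102e-9, 16)                    # candidate radii (m)
phases = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)  # placeholder table

def select_meta_atom(target_phase):
    """Return the pillar radius whose tabulated phase is closest (mod 2*pi)."""
    diff = np.angle(np.exp(1j * (phases - target_phase)))  # wrapped difference
    return radii[np.argmin(np.abs(diff))]
```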

Theoretical design and numerically characterized performance of the directional lenses

The optical response of the MLA under light excitation is not trivial for two reasons: first, the unusual hexagonal shape of the MLs may imply an anisotropic response as a function of the incident direction of the illumination, and second, the MLs must be designed to focus light coming from specific oblique incidences. In our experiment, we consider MLs fabricated by etching a thin film of GaN epitaxially grown on a sapphire substrate. Because the light impinges from the sapphire substrate through the ML, we need to account for the substrate-air refraction and calculate the angles (θopt,i, φopt,i) of the incident light arriving on the ML. We choose this orientation as it helps reduce the MLA FoV from 120° × 120° in air to roughly 60° × 60° in sapphire (by Snell's law with n ≈ 1.77 for sapphire at 633 nm, sin θsapphire = sin θair/n, so ±60° in air maps to about ±29° in sapphire; cf. Supplementary Materials S7). This refractive-index-related reduction in the FoV is accompanied by a reduction in the angular spacing between two beams, i.e. of the angular resolution determined by the size of the detectors. While the detector size can be reduced to counteract this phenomenon, it is also remarkable that, as shown in Fig. 2, this reduction in FoV allows relatively high transmission for all MLs, which in fact improves the image quality. After defining the phase profile of each ML in the array, we performed numerical calculations to quantify the effectiveness of each ML under non-normal illumination. We simulated the optical response of individual metalenses, of focal length f = 10 μm, for different angles of incidence using a commercial full-wave electromagnetic solver (Lumerical FDTD), and we used the simulated results to calculate the spot efficiency (σeff) and the transmission (Tr) of each ML. Tr is defined as the ratio of the surface integral of the Poynting vector just after the metasurface to the surface integral of the Poynting vector of the light source, given by

$$Tr=\frac{\frac{1}{2}\int_{\hexagon}\left[\mathbf{E}\times\mathbf{H}^{*}\right]\,\mathrm{dS}}{\frac{1}{2}\int_{\hexagon}\left[\mathbf{E}_{s}\times\mathbf{H}_{s}^{*}\right]\,\mathrm{dS}}$$
(2)

where E and H (respectively Es and Hs) are the electric and magnetic fields just after the ML (respectively at the source). The two-dimensional integrals are performed over a hexagonal surface corresponding to the area of the ML. σeff is also defined as a ratio of two integrals, but this time the Poynting vector integrated over the detector is divided by the integral of the Poynting vector just after the ML:

$$\sigma_{\mathrm{eff}}=\frac{\frac{1}{2}\int_{\circ}\left[\mathbf{E}_{d}\times\mathbf{H}_{d}^{*}\right]\,\mathrm{dS}}{\frac{1}{2}\int_{\hexagon}\left[\mathbf{E}\times\mathbf{H}^{*}\right]\,\mathrm{dS}}$$
(3)

where E and H (respectively Ed and Hd) are the electric and magnetic fields just after the ML (respectively at the detector position, z = 10 μm). The integrals are performed on the hexagonal ML area and on a circular area corresponding to the active area of the detector, respectively.
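
For illustration, the two figures of merit can be computed from exported field data as in the sketch below (our own post-processing, not the Lumerical scripts; field arrays on a uniform grid are an assumption):

```python
# Minimal sketch: transmission Tr (Eq. 2) and spot efficiency sigma_eff
# (Eq. 3) from sampled complex fields E, H of shape (Ny, Nx, 3).
import numpy as np

def poynting_flux(E, H, mask, dx):
    """Time-averaged Poynting flux (z-component) through the masked area."""
    Sz = 0.5 * np.real(E[..., 0] * np.conj(H[..., 1])
                       - E[..., 1] * np.conj(H[..., 0]))
    return np.sum(Sz * mask) * dx**2

def transmission(E_out, H_out, E_src, H_src, hex_mask, dx):
    # Eq. (2): flux just after the ML over the flux of the source
    return (poynting_flux(E_out, H_out, hex_mask, dx)
            / poynting_flux(E_src, H_src, hex_mask, dx))

def spot_efficiency(E_det, H_det, E_out, H_out, disc_mask, hex_mask, dx):
    # Eq. (3): flux on the detector disc over the flux just after the ML
    return (poynting_flux(E_det, H_det, disc_mask, dx)
            / poynting_flux(E_out, H_out, hex_mask, dx))
```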

Fig. 2: Theoretical response of individual MLs designed with increasing directional observation angle.

a Evolution of the phase profile of MLs for various optimized oblique angles, increasing from left to right: θopt,i = 0°, 20°, 40° and 60°. The gray cone represents the numerical aperture of each ML. b Polar representation of the evolution of the spot efficiency as a function of the illumination angle for the different MLs. The darkest (respectively most orange) curve corresponds to the lens optimized for normal (respectively most oblique) incidence. The dashed lines correspond to the optimum angle of the respective lens. c Polar representation of the transmission as a function of the illumination angle; the color code is the same as in (b). d Electric-field intensity maps, in transverse view (yOz plane), representing the evolution of the focal spot produced by MLs optimized for the angle pairs (θopt,i = 0°, φopt,i = 0°) and (θopt,i = 60°, φopt,i = 0°). The green boxes surrounding the panel θopt,i = 0° (respectively θopt,i = 60°) in the first (respectively second) row highlight the optimal experimental condition. In both design cases, we can observe that when the light incident angle does not correspond to the designed angle of the ML, the light is focused off-axis (with some aberration), resulting in a walk-off of the focused beam. All intensity maps are normalized by the maximum value of the top-left map (normal-incidence lighting). The dashed black line at the bottom (respectively top) is the ML plane (respectively the focal plane at f = 10 μm). The gray box, with a radius of 2.5 μm, corresponds to the active region of the detector. All the maps have a size of 26 × 13 μm².

Theoretical characterization of the directional MLA

We performed systematic optical characterization of all MLs in the array with phase profiles given by Eq. (1). Following the dragonfly eye design, each ML should have a unique phase profile, specifically designed to collect light impinging from a preferential direction. Figure 2a shows the phase profiles of four off-axis lenses designed to activate their on-axis detectors for incoming light at θopt,i = 0°, 20°, 40°, and 60°, respectively. These images show that, to activate a detector with light coming from an increasingly oblique direction, the center of the ML phase profile must be increasingly shifted and distorted to compensate for off-axis aberrations. We then performed angle-dependent focusing characterization and found that, although the hexagonal lenses have lower symmetry, their spot efficiency is practically invariant under rotation around the z-axis (cf. Supplementary Fig. S3), thus featuring similar performance to a circularly shaped ML (see Supplementary Fig. S4). For symmetry reasons, we discuss here only the MLs with varying θopt,i (along Oy), fixing φopt,i = 0° and assuming design invariance by rotation around the Oz axis. We chose to perform simulations on θ because our inter-ommatidial angle is smaller along Oy (Δθ = 6.66° against Δφ = 7.5° along Ox), which defines the angular sensitivity of our MLA.

All the findings from the simulations for θ are also valid for the angle φ. By varying the angle of incidence θ around the designed angle θopt,i of each ith ML in the array, we recorded the spot efficiency in a 2.5 μm radius section located along the optical axis at a distance of 10 μm from the ML plane (corresponding to the focal length of all MLs). Multiplying this efficiency by the quantum efficiency of a detector of similar footprint provides the overall efficiency of a single-pixel metalens-detector compound detection system.

In Fig. 2b, c, we plot polar graphs of the spot efficiency and transmission, as a function of the illumination angle, for MLs optimized from normal incidence (black) up to an oblique incidence of 60° (orange). Each dashed line corresponds to the optimum designed angle of the considered ML. The spot efficiency and transmission curves for all the MLs of the MLA are shown in Supplementary Fig. S5. As expected, the spot efficiency peaks at its highest value of around 80% whenever the incident angle matches the design angle, regardless of the directional lens considered, and it decreases as the incident angle deviates from the optimal configuration. In Fig. 2d, we plot the intensity distribution of light in the yOz-plane for two representative MLs designed to operate at θ = 0° and θ = 60°, respectively. We plot the light focusing profiles of these two designs for three incidence angles in the vicinity of their designed angles, i.e. (θ = 0°, θ = 6° and θ = 12°) for the first ML and (θ = 51°, θ = 60° and θ = 73°) for the second. These plots illustrate the evolution of the focusing efficiencies as a function of the incident angle: in both cases, as soon as the incident angle does not match the ML design, the focal spot walks off from the detection area. This off-axis focusing effect, which we exploit to achieve sensitive directional detection, is responsible for the significant drop in the spot efficiency. On these theoretical maps, the upper (respectively lower) black dotted line symbolizes the focal plane (respectively the ML plane). The gray box (5 μm wide) represents the detector associated with the ML. Note also that while the spot efficiency decreases considerably as a function of the incident angle, the transmission remains high over a relatively large angular range, with around 40% at θ ≈ 60° (Fig. 2c), which indicates efficient detection over a large field of view.

Angular sensitivity and crosstalk

Supplementary Fig. S3a quantifies the angular sensitivity of our device, indicating the amount of crosstalk, denoted XT (Supplementary Eq. (S1)), of the spot efficiency as a function of the incident angle (cf. Supplementary Materials S5). The amount of XT is governed mainly by three parameters: the NA of the ML, the angular pitch Δθ of the design, and the size of the detecting area. To experimentally demonstrate our concept, we decided to fix an NA of 0.75. Increasing the angular pitch would reduce the FoV angular resolution, so to improve the selectivity while keeping the high angular resolution, we rely on the size of the detecting area. If the detection is performed with a CCD camera, the detection area can be artificially reduced by adjusting the number of pixels. Note that to improve the angular selectivity using an assembly of physical detectors, such as SPADs for example, it is possible to reduce the detection zone using an extra apertured gold mask smaller than the detector. Recent works on the integration of apertured gold masks on detector arrays to physically reduce the size of the detector active area33,34,35 validate the feasibility of the proposed approach. We have provided a numerical calculation in Supplementary Fig. S7a, b, showing the evolution of the spot efficiency for the ML (θopt,i, φopt,i) with (red marks) and without (blue marks) a gold mask, indicating that with proper masking, the spot efficiency (as well as the XT) can be drastically reduced (for θ = 6.66°, φ = 0°) to reach only a few percent (red star), as opposed to 70% without a mask (blue star). The XT reduction as a function of the detection area has been demonstrated experimentally (data acquired with a CCD camera) during the experimental characterization of our MLA (see next section and Supplementary Fig. S7c-e).
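
A back-of-the-envelope estimate (our assumption, not a calculation from the paper: it ignores substrate refraction and aberrations) illustrates why the detector size controls the crosstalk. When the illumination is one angular pitch away from a ML's design angle, its focal spot walks off laterally by roughly f·tan(Δθ):

```python
# Crude walk-off estimate: is the neighbor's focal spot still inside the
# detection disc? Smaller detection areas (e.g., an apertured gold mask)
# reject the walked-off spot and hence reduce XT.
import numpy as np

f = 10e-6                        # ML focal length (m)
dtheta = np.deg2rad(6.66)        # angular pitch along Oy
walk_off = f * np.tan(dtheta)    # ~1.2 um lateral shift of the focal spot

for r_det in (2.5e-6, 1.5e-6, 0.5e-6):   # detector radii, incl. a masked case
    print(f"r_det = {r_det * 1e6:.1f} um: "
          f"neighbor spot inside detector: {walk_off < r_det}")
```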

Experimental characterization of the MLA

We fabricated a 250 × 250 μm² metasurface on a 1 μm GaN film grown on a double-side polished c-sapphire substrate (the details of the fabrication process and the design of the meta-atoms are given in Supplementary Materials S2 and S4). Figure 1f and Supplementary Fig. S6 show SEM images of the MLA after fabrication. The nanopillar distribution follows the theoretical phase profiles expressed in Eq. (1) (cf. Fig. 1e). To characterize the MLA, we set up an optical bench consisting of a microscope objective (100×, NA = 0.9) mounted on a translation stage and a tube lens to re-image the MLA focal plane on a CCD camera. We perform a z-scan from the metasurface plane zms to the focal plane zf with a resolution step of 0.2 μm. The arm supporting the laser source (λ = 633 nm) is mounted to rotate around the out-of-plane axis. The illumination angle φ varies between −22.5° and 22.5° with a step of 0.5°, while θ remains equal to 0°, as indicated in Fig. 3a. In Fig. 3b, we represent the logarithm of the intensity distribution of the electric field in the focal plane under normal-incidence illumination (θ = 0°, φ = 0°). On these experimental measurements, we superimpose the MLA (hexagons) and the detection areas (represented by the white circles of radius 1.5 μm). The MLs surrounded by a red line correspond to those designed for angles φopt,i ranging between −22.5° and 22.5°, in steps of 7.5° (with θopt,i = 0°). The central ML (in green) corresponds to the ML designed to focus normally incident light only on its axis. Among all MLs, including the seven highlighted in red in particular, we can observe that only the ML outlined in green produces a light spot free from coma aberration (see Fig. 3c). This behavior agrees with the simulations presented in Fig. 2d. Furthermore, the experiment confirms that the spot intensity decreases as the designed ML angle deviates from the (θ = 0°, φ = 0°) illumination condition, featuring a maximum spot efficiency of about 45% for the central ML (cf. Fig. 3d).

Fig. 3: Characterization measurements of the MLA.

a Photograph of the optical set-up used to adjust the illumination along the x-axis (corresponding to the angle φ, with θ = 0°). The visible laser operates at λ = 633 nm. b Intensity distribution of the MLA at the focal plane for illumination at normal incidence (logarithmic scale). See Supplementary Video 1 for all incident angles. c Zoom on the field distribution generated by the 7 MLs at the center of the MLA, with a schematic representation of the coma aberrations. d Spot efficiencies, for all MLs, integrated from the intensity field maps. The red hexagons correspond to MLs with φopt,i between −22.5° and 22.5° and θopt,i = 0°. The ML highlighted in green has the highest spot efficiency for normal-incidence illumination. e Photographs of the illumination patterns on the large screen. f Measured intensity distribution of the MLA at the focal plane associated with each illumination pattern. We observe in the inset that each ML creates a displaced replica of the projected image. g k-vector sphere with the theoretical patterns drawn in blue lines. The 29, 23 and 52 dots correspond to the MLs with the highest signals for the square, triangle and square+triangle patterns, respectively, and the color bar corresponds to their relative intensity.

The differences between the theoretical values (cf. Fig. 2b) and the measured values may be due to fabrication errors36. We repeated the same measurement for all MLs and for various incident angles, and consolidated all the results in Supplementary Video 1. The video clearly shows the evolution of both the intensity distribution in the focal plane of the MLA and the spot efficiency as a function of the illumination angle. We observe, as expected from the numerical designs, that the focal spots move horizontally as a function of the incident angle, successively intercepting the central (detecting) region of each ML. As a result, the ML with the highest spot efficiency (highlighted in green) moves along the axis of the red MLs as a function of the illumination incident angle. The small detuning at wide angles is mainly due to aberrations in the optical system after the MLA and to vibrations induced by the rotation stage. These effects have been carefully studied and documented in Supplementary Materials S6.

To demonstrate the angular imaging capability of our compound imaging system, we set up an experiment to image a scene composed of defined illumination shapes. We utilize a beam scanning system able to illuminate the surrounding scene with structured light intensity profiles. Light reflected from the scene is then scattered back toward the detecting module, mimicking a scene composed of light rays originating from different angular positions. To do this, we implemented a light scanning engine that can deflect light to any desired position over a relatively large angular range (up to 150° in both θ and φ; see the detailed discussion of the scanning device in the LiDAR imaging section). This scanning device is set up to illuminate a projector screen, located 1.1 m from the MLA receiving module, to produce arbitrary user-defined patterns, including the simple squares and triangles shown in Fig. 3e. Upon scattering on the screen, the back-reflected light returns in the direction of the detector array, as if it were coming from an infinite number of point sources, each associated with a given illumination direction (θ, φ). With this type of illumination, each ML produces a sub-image of the pattern in the focal plane (cf. Fig. 3f). This image array is similar to the sub-aperture images obtained with plenoptic cameras37, but with a major difference: because of the skewed phase profile of each ML, the images are shifted off-axis by an amount that exactly corresponds to the detection direction. The gradually varying shift of the images thus provides additional information that we exploit to readily access the direction of the back-reflected signal. In other words, the shifting phase profile of a given ith ML, optimized to detect light originating from (θopt,i, φopt,i), creates a sub-image that appears shifted with respect to its optical axis, so as to center exactly along its optical axis the section of the image that corresponds to the intended ith direction of detection. By placing a small detection area in the focal region of each ML, perfectly aligned with its optical axis, we achieve angular selectivity of the detected light. On the polar representations in Fig. 3g, we plot in blue the theoretical illumination patterns of each of the scene illuminations presented in Fig. 3e. The color-coded points on the hemispheres correspond to the activated detectors, i.e. those presenting a relatively high collection efficiency, with colors from orange-red (highest signal) to yellow-white (lowest). As displayed, the positions of most efficient detection correspond to the positions associated with the projected patterns. A detailed evaluation of the signal at each detector is presented in Supplementary Fig. S15. The top and middle graphs show that the square and triangle patterns are perfectly reproduced by the activation of 29 and 23 detectors, respectively. The bottom graph, constructed experimentally by projecting both patterns simultaneously, is a perfect combination of the first two activation patterns (29 + 23 detectors). These maps demonstrate that our detection module can accurately recover the angular origin of the incident light. The small deviations from the theoretical patterns could be corrected by integrating the MLA directly onto a microchip array of detectors; this would eliminate the optical aberrations introduced by the microscope objective and the tube lens situated after the MLA in our optical bench, aberrations that distort the position of the sub-aperture images relative to the detectors.
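
A minimal readout sketch of this per-ML integration follows (illustrative code; the image format, ML center coordinates and pixel radius are assumptions):

```python
# Minimal sketch: recover the direction of the incoming light from a
# focal-plane image by integrating the pixels inside a small disc centered
# on the optical axis of each ML, then picking the strongest ML.
import numpy as np

def directional_readout(image, ml_centers, ml_angles, r_det_px):
    """image: 2D focal-plane intensity; ml_centers: (N, 2) pixel coordinates;
    ml_angles: (N, 2) design angles (deg); r_det_px: detector radius (px)."""
    yy, xx = np.indices(image.shape)
    signals = np.empty(len(ml_centers))
    for i, (cx, cy) in enumerate(ml_centers):
        disc = (xx - cx)**2 + (yy - cy)**2 <= r_det_px**2
        signals[i] = image[disc].sum()     # per-direction detector signal
    return ml_angles[int(np.argmax(signals))], signals
```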

Looking at Fig. 3f, some information is also available outside the detector regions. Note, however, that this off-axis signal presents strong coma aberration (see Fig. 3c); exploiting it would increase both the complexity and the computation time of the reconstruction.

Integration of the detection module in a LiDAR experiment

Using pulsed illumination instead of continuous waves, and exploiting the directional detection capabilities of our MLA, we can achieve both angular and depth measurements of various objects in the scene. This implementation of a LiDAR imaging system is shown to recover different illumination directions and different depths with a non-scanning detection module. Our experiment consists in recovering the 3D information of a scene composed of three reflective objects (cf. Fig. 4a) placed at different angular and depth positions, as shown in Supplementary Materials S8. We place at the center of the scene (θ = 0°, φ = 0°, position 4 in Fig. 4a) a highly reflective mirror, which is used as a reference position to calibrate simultaneously the center and the orientation of the scene. We then rely on the scanning pulsed illumination engine, shown in Fig. 4b, to illuminate the scene at six different positions highlighted by the six black targets in Fig. 4a. The illumination system consists of a λ = 633 nm laser source, pulsed at 500 kHz with a pulse width of 20 ns, which is angularly deflected by a 2D acousto-optic deflector (AOD) (with a 2° × 2° FoV). The small-angle deflected light impinges on a gradually varying phase-gradient metasurface that increases the scanning FoV from 2° × 2° up to 150° × 150°. Details of this system are described elsewhere26. The pulsed laser is required to measure the ToF of light going from the scanning module to the scene and back to the detection module. The laser pulses therefore have to be synchronized with the detector array. To simplify the experiment, we used an intensified CCD camera, an Andor Solis iStar 334T, which is triggered by the laser pulse emission at a rate of 500 kHz.

Fig. 4: LiDAR measurements.

a Photograph of the scene with all illumination positions indicated by black targets and numbers. The horizontal (respectively vertical) blue line refers to the axis where θ = 0° (respectively φ = 0°). The small inset shows the mirror at the center of the scene. The theoretical angles mentioned in the black box correspond to the object directions, determined by trigonometry. b Illumination system with a laser source (λ = 633 nm) illuminating through an AOD of FoV 2° × 2° cascaded with a metasurface that increases the FoV to 150° × 150°. c Detection part with the MLA cascaded with the same optical system (objective + tube lens) used in Fig. 3. d Temporal evolution of the signal collected by different directional detectors activated by returning light pulses, indicating that different ToFs are recovered for different directions, in agreement with the positions and depths of the objects in a. The color code is the same as in a. Vertical lines indicate the time at which the pulse reaches the camera. e Signal measured by the camera for illuminations 1, 2 and 4. f Integration of the signal measured on the detector surface for all detectors in the array. The color difference between the dots represents the difference in intensity (high signal in red and low signal in blue).

We used the gate mode of the CCD camera to scan an integration time gate of 2 ns width with temporal displacement steps of 0.5 ns, which corresponds to ~10 cm depth resolution. By shifting the temporal integration gate, we could observe the intensity distribution in the MLA focal plane for different ToFs and different illumination directions. When the time gate corresponds to the exact ToF of the returning pulse, a spatially distributed intensity pattern is observed on the camera. We observe that the recorded intensity pattern changes as a function of both the illumination and the position of the time gate. Figure 4e shows the signal associated with targets 1, 2 and 4 in Fig. 4a, observed at different time windows (or ToFs), indicating the actual z-position of each object in the scene. The measurements for all directions of illumination (1-6) are summarized in Supplementary Materials S9. Due to the low intensity of our laser, we averaged the signal at each acquisition (for each illumination angle and time delay) by accumulating signal for about a minute (explaining some of the blurring effects in the images). This issue can be resolved using high-power illumination lasers such as those used in conventional LiDAR experiments, and additionally by vertical integration of the MLA with more sensitive detectors such as SPAD arrays. The latter comes with the additional technological burden of correctly aligning the MLA with respect to the detector array, which was not achievable with our current laboratory equipment but can easily be solved by SPAD manufacturers. Indeed, SPAD arrays are advantageous as they offer high time resolution, which is of great importance in LiDAR, and, when fabricated in CMOS, they can be integrated in large arrays38 with small pitch and/or complex functionality on or near pixels39,40.

To perform directional detection, we integrated the signal over several pixels located on the axis of each ML, as discussed previously, to mimic an array of detectors with radius r = 1.5 μm and obtain high angular sensitivity. Figure 4f shows the integrated signal for the maps in Fig. 4e. The color associated with each detector represents the relative integrated signal (red for the maximum value and blue for the minimum value). The detector with the largest signal corresponds to the direction of the back-reflected light coming from the associated object. Looking at the three integrated maps, we observe that the position of the maximum intensity changes according to the direction of illumination. For all six illumination directions, the pairs of optimization angles of the activated pixels (white arrows in Fig. 4d, f and Supplementary Fig. S14) agree with the theoretical illumination angles (see the black box in Fig. 4a).

As for all LiDAR-based systems, highly specular objects that do not backscatter light can introduce false image detections. These problems are well known in the field of LiDAR imaging and can be solved using various approaches41,42,43,44.

The depth information is recovered by calculating the ToF, i.e. by integrating the signal of each detecting area as a function of the time delay of the integration gate. The post-processing could be made more accurate by integrating the MLA on a SPAD array; in our case, the aberrations of the optical components used to re-image the MLA focal plane modify the position of the sub-aperture images, which affects the angular information provided by the MLA. Figure 4d shows, for the six illumination points, that the time evolution of the integrated signals of six detectors corresponds to the expected depths and detection angles of the light returning from each of the six reflective objects. Depending on the illumination direction, the rising edge of the pulse, which physically corresponds to the return time of the pulse and thus to the beginning of the pixel activation, does not occur at the same time for the different detectors. The reflective ball (0.75 m) is measured at a closer distance than the rounded reflector (1.38 m) and the long curved reflector (2.35 m). These values are in perfect agreement with the actual distances between the objects and the MLA (see Supplementary Fig. S10). The last reflector was manufactured so that its entire surface is equidistant from the MLA, as displayed in the measurements for illuminations 3-6.
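
For illustration, this gated depth recovery can be sketched as follows (our own code, not the acquisition software; the threshold-based edge detection is an assumption):

```python
# Minimal sketch: distance from a gated ToF scan. For one directional
# detector, locate the rising edge of the integrated signal versus gate
# delay and convert the delay to a distance via d = c * t / 2.
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def depth_from_gate_scan(signal_vs_delay, gate_delays_ns, threshold=0.5):
    """Return the estimated object distance (m) for one detector."""
    s = signal_vs_delay / signal_vs_delay.max()
    rise = int(np.argmax(s >= threshold))  # first gate where the pulse shows up
    return C * gate_delays_ns[rise] * 1e-9 / 2.0

delays = np.arange(0.0, 40.0, 0.5)        # 0.5 ns gate steps, as in the text
toy = (delays >= 5.0).astype(float)       # toy pulse returning after 5 ns
print(depth_from_gate_scan(toy, delays))  # ~0.75 m, cf. the reflective ball
```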

Discussion

We fabricated an MLA for directional detection and integrated this detection module in a ToF LiDAR experiment. As a proof of concept, the MLA, designed to focus directional light in a plane, is positioned in front of a gated CCD, synchronously triggered by the pulse emission of the LiDAR illumination laser source (500 kHz repetition rate). The activation of the detectors by the MLA is in good agreement with the designed system, achieving both spatial (direction) and temporal (depth) detection of the illumination. Some imprecision at large angles, partly due to aberrations in the optical system used to image the focal plane of the MLA on the CCD camera, can be resolved by the more challenging but technologically relevant solution of vertically integrating the MLA on a SPAD-array detection module.

Both the distance and the rotation between the two systems must be controlled precisely to exactly match the focal length of the MLs (10 μm in our example). To facilitate the integration, MLAs with different focal lengths, increased sizes or an increased number of MLs could be investigated to improve the angular resolution and the light throughput and detectivity.

As this MLA can simultaneously detect light coming from different illumination directions, it might be a suitable solution for directional detection in flash LiDAR experiments, in which the MLA profile would be jointly designed to perfectly match a given complex illumination pattern, aiming at increasing the limited working distances of flash LiDAR45. An MLA cascaded with a spectral filter could furthermore be a viable solution for improving the performance of a scanning LiDAR's detection module while keeping an ultra-compact form factor, achieving high-FoV detection with a good SNR. One can also think of improving the tiling of the k-vector sphere by spatially redistributing the solid angles of all the MLs, to potentially reduce crosstalk and/or improve triangulation. Such a device could also be used to further improve the security of data transmission in applications related to wireless optical communication, by implementing crosstalk-free, angularly protected detection channels. The use of time-stamping detectors and detector arrays could potentially improve the estimation of ToF, making the whole estimation process more efficient and faster. The requirement would be to use cameras based on gated SPAD arrays, or cameras capable of detecting photon time-of-arrival directly by means of external or embedded time-to-digital or time-to-amplitude converters. Such cameras are available today with large pixel counts, picosecond single-photon time resolution, and fast interfaces enabling real-time and/or event-based data transfer.

Methods

Optical bench for the single axis scanning experiment

A collimated beam (λ = 633 nm) is generated by a high-power supercontinuum fiber laser (NKT SuperK EXTREME) and is sent to the MLA located 25 cm from the fiber source. A Nikon microscope objective (100×, NA = 0.9, WD = 0.1), mounted on a Thorlabs MTS50-Z8 translation stage (driven by a Thorlabs KDC101 K-Cube brushed DC servo motor controller), and a tube lens (of focal length f = 6.28 cm) are placed just after the MLA to re-image the MLA image plane on a Thorlabs CCD camera (DCC1545M). To find the focal plane zf of the MLA as accurately as possible, we chose a translation step of 0.2 μm. Rotation to adjust the lighting direction is carried out in 0.5° steps using a 1D rotation stage.

Optical bench for the 2D directional imaging and the LiDAR experiments

A second optical bench is used for the 2D directional imaging (cf. Fig. 3e-g) and the LiDAR experiments (cf. Fig. 4). A collimated beam (λ = 633 nm), produced by a TOPTICA Photonics iBeam-smart source, is sent through an AOD (AA Opto-electronic DTSXY-400-633) to deflect light at small arbitrary angles (within 2°). The AOD is driven by a voltage-controlled RF generator (AA Opto-electronic DRFA10Y2X-D-34-90.210). The deflected signal is directed to a scanning lens (Thorlabs LSM03-VIS), which focuses the light at different transverse positions on the metasurface. The passive metasurface converts the small 2° × 2° FoV into an enhanced 150° × 150° FoV. The focal plane of the MLA is re-imaged using the same microscope objective and tube lens as described in the previous subsection. In both experiments, we used an Andor Solis iStar 334T CCD camera featuring a Digital Delay Generator (DDG) gate mode.

For the 2D directional imaging experiment, a large 180 × 240 cm² scattering screen is placed 1.1 m from the MLA. The signal is acquired by the camera with an exposure time of 1.25 s and accumulated over 150 acquisitions to increase the signal-to-noise ratio.

For the LiDAR experiment, the laser source and the camera are triggered at a frequency of 500 kHz. The laser produces pulses of 20 ns duration. The camera's digital gate generator step feature is used with an integration time gate of 2 ns and a temporal displacement step of 0.5 ns. The camera exposure time is fixed at 2 s and the signal is accumulated over 25 acquisitions, which means that the signal on the camera is averaged over 25 × 10⁶ pulses (2 s × 500 kHz × 25 acquisitions).

Metasurface fabrication

The different metasurfaces were fabricated using GaN grown by Molecular Beam Epitaxy (MBE) on a sapphire substrate. The samples were nanostructured at the PLANETE nanofabrication platform (CT-PACA, member of the French Renatech+ network). Details are available in Supplementary Materials S4.