Single shot multispectral multidimensional imaging using chaotic waves

Multispectral imaging technology is a valuable scientific tool for applications in astronomy, remote sensing, molecular fingerprinting, and fluorescence imaging. In this study, we demonstrate a single-camera-shot, lensless, interferenceless, motionless, non-scanning, space-, spectrum-, and time-resolved five-dimensional incoherent imaging technique using tailored chaotic waves with quasi-random intensity and phase distributions. Chaotic waves can distinctly encode the spatial and spectral information of an object in a single self-interference intensity distribution. Here, a tailored chaotic wave with a nearly pure phase function and the lowest correlation noise is generated using a quasi-random array of pinholes. A unique sequence of signal processing techniques is applied to extract all possible spatial and spectral channels with the least entropy. The depth-wavelength reciprocity is exploited to see colour from depth and depth from colour, and the physics of beam propagation is exploited to see at one depth by calibrating at another.


Section S1. Theoretical Analysis
The optical configuration is given in Fig. S1. The theoretical analysis proceeds from the object plane (o plane) to the sensor plane (s plane) [1]. Light from an incoherent point source reaches the plane of the mask with a complex amplitude, where the transverse magnification is MT = (v/u) and Λ is a delta-like function with a maximum at the origin and negligible values elsewhere. The above analysis shows that it is possible to reconstruct the object information by a cross-correlation between the object intensity pattern and the point spread function. For an object consisting of multiple planes and illuminated by different wavelengths, the object intensity pattern is the summation of the object intensity patterns at the different planes and wavelengths, and it can be reconstructed at every wavelength and depth using the corresponding point spread functions, where p and q are the numbers of wavelengths and depths, i = 1, 2, 3, …, p and j = 1, 2, 3, …, q.
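As a minimal numerical sketch of this reconstruction principle (a uniform random array stands in for the chaotic intensity distribution; the grid size and point locations are purely illustrative), the recorded pattern formed as a sum of shifted point spread functions is recovered by cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# Quasi-random intensity pattern standing in for the chaotic-wave IPSF
ipsf = rng.random((N, N))

# Object: two incoherent points; the recorded pattern is the sum of
# correspondingly shifted copies of the IPSF (a circular convolution here)
obj = np.zeros((N, N))
obj[20, 20] = 1.0
obj[40, 30] = 1.0
I_obj = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(ipsf)))

# Reconstruction: cross-correlate the recorded pattern with the IPSF
rec = np.real(np.fft.ifft2(np.fft.fft2(I_obj) * np.conj(np.fft.fft2(ipsf))))
peak = np.unravel_index(np.argmax(rec), rec.shape)
print(peak in [(20, 20), (40, 30)])  # brightest pixel falls on an object point
```

The correlation of the recorded pattern with the point spread function places the autocorrelation peak at every object point, which is exactly the reconstruction mechanism described above; the positive background of the correlation, discussed in Section S6, is also visible in `rec`.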
Section S2. Lateral resolution of an imaging system using chaotic waves
A schematic of a basic multispectral sequential direct imaging system is shown in Fig. S2. In order to have a reliable comparison, we consider direct imaging with a single lens in place of the pinhole array but with a wavelength-tunable light source, an object, and a monochrome sensor. This method is equivalent to illuminating the object with a white light source and using chromatic filters with different central wavelengths at the image sensor. A tunable light source is used to illuminate a colour object. The light diffracted by the object is imaged by a monochrome image sensor. By changing the wavelength of the light source in increments, the images of the object corresponding to different wavelengths can be recorded sequentially in time.
The lateral resolution of the RAP is calculated by simulation of the random distribution for u = 10 cm, v = 10 cm, D = 2 mm, λ = 600 nm and n = 100, where n is the number of pinholes, for pinhole diameters varying from 10 μm to 50 μm. The lateral resolution of a direct imaging system with the above configuration, with a lens of focal length f = 5 cm instead of the pinhole array, is given as Δl = 1.22λu/D ~ 37 µm. The size of the smallest speckle that can be formed with the RAP cannot be smaller than the lateral resolution limit of direct imaging, as the mean diameter of a speckle is 1.22λu/D [2,3]. From this relation, the speckle size appears to depend only on the numerical aperture (NA) and the wavelength. However, the above expression is only a limiting value, and the speckle size also depends upon the degree of scattering of the RAP. In the indirect imaging method with the RAP, the lateral resolution is given by the autocorrelation of the random intensity distribution. Therefore, the smallest spot that could be obtained with the RAP is ~2.44λu/D. However, with different filters, the smallest speckle size could be tuned to match 1.22λu/D [4]. The images of the RAP for R = 50 μm to 10 μm in steps of 10 µm are shown in Figs. S3a-S3e, respectively. The images of the IPSFs for R = 50 μm to 10 μm are shown in Figs. S3f-S3j, respectively. From the figures, it is seen that the speckle size decreases and the visibility increases with a decrease in the diameter of the pinholes, reaching the limiting value Δl = 1.22λu/D ~ 37 µm for this configuration. The plot of the autocorrelation function with a phase-only filter for R = 10 μm to 50 μm in steps of 5 µm is shown in Fig. S4 [5]. From the plot, it is seen that the resolving power of the system improves with a decrease in the size of the pinholes. The offset seen in indirect imaging with the RAP arises because the correlation between two positive functions results in a background.
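The limiting numbers quoted above follow directly from the configuration parameters; a quick check (values from this section):

```python
# Lateral-resolution limits for u = 10 cm, D = 2 mm, lambda = 600 nm (Section S2)
lam, u, D = 600e-9, 0.10, 2e-3

direct_limit = 1.22 * lam * u / D   # smallest speckle / direct-imaging limit
rap_spot = 2.44 * lam * u / D       # autocorrelation spot without spectral filtering

print(round(direct_limit * 1e6, 1))  # 36.6 um, i.e. the ~37 um quoted above
print(round(rap_spot * 1e6, 1))      # 73.2 um
```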
The FWHM of the autocorrelation function is plotted as a function of the diameter of the pinholes in Fig. S5, and the data is analysed to obtain a semi-empirical relationship between the diameter of the pinholes and the diameter of the autocorrelation function, including the factors λ, u, and D. The FWHM of the Airy disk for direct imaging is ~0.52λu/D = 16 µm. The variation of the FWHM is linear with respect to the variation in the diameter of the pinholes. The following approximate semi-empirical relationship has been established between the radius of the pinholes and the FWHM: Δl-FWHM(R) ~ 0.52λu/D + K×(R − 5) for R ≥ 5 µm, where the proportionality factor K = 3/4 is the slope of the variation of the FWHM of the autocorrelation function with respect to the radius of the pinholes.

Section S3. Imaging with a wide field of view using chaotic waves
The field of view (FOV) of an imaging system is limited by the magnification of the system and the size of the sensor. From Fig. S6, the FOV of the imaging system is given as ~(su/v). In a direct imaging system, the points of the object which lie beyond the FOV are not recorded, as indicated by the red points in both the object and the image in Fig. S6(a). Imaging using a RAP generates an intensity pattern which is the superposition of the random object images; this original perspective is shown in Fig. S6(b). This view alone does not give any additional information about the FOV of the imaging system. However, an alternative perspective of the same system is presented in Fig. S6(c). In this view, any off-axis object point generates a shifted spatio-spectral signature. Therefore, every object point is converted not into an image point, as in the direct imaging system, but into a spatio-spectral signature which is much larger in area than the sensor.
Therefore, even when the object point lies beyond the FOV, i.e., the centre of the spatio-spectral signature is beyond the area of the sensor, there is always some partial spatio-spectral signature incident on the image sensor. If the library of spatio-spectral signatures has been acquired synthetically over a larger area only once, then any object imaged using the RAP can be imaged with a wider FOV than in the direct imaging system. Even without such a library, it is clear from the figures that in order to move an object point beyond the FOV of the direct imaging system, the object point must be shifted by ±su/2v from the centre. On the other hand, a shift of at least ±su/v from the centre is needed in the case of imaging using the RAP in order for no partial spatio-spectral signature to be present on the image sensor. For u = 10 cm, v = 10 cm and s = 4 mm, the limits are ±2 mm and ±4 mm for direct and indirect imaging, respectively.
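These shift limits follow directly from the geometry (u, v and the sensor size s given above):

```python
# FOV edge shifts for direct vs RAP-based imaging (u = v = 10 cm, s = 4 mm)
u, v, s = 0.10, 0.10, 4e-3
direct_edge = s * u / (2 * v)   # shift that moves a point out of the direct FOV
rap_edge = s * u / v            # shift that loses the whole spatio-spectral signature

print(round(direct_edge * 1e3, 3), round(rap_edge * 1e3, 3))  # 2.0 4.0 (mm)
```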

Section S4. Photon budget
The photon budget is compared between direct imaging and the proposed indirect imaging with chaotic waves using Fig. S7. A circular disc of diameter d1, critically illuminated by an incoherent light source, is considered in the object plane. The disc has a uniform intensity distribution and, with Io the intensity at any point within the disc, the optical power at plane-A is Po = Ioπd1²/4. The light diffracted from the disc has a Bessel intensity distribution with a central maximum of diameter 2.44λu/d1, within which about 84% of the optical power is contained. The power reaching the lens or the RAP within an area with a diameter D at plane-B is kABPo, where kAB is the power ratio between plane-B and plane-A. The power reaching plane-B is identical for both imaging systems. In the case of direct imaging using a lens, there is a loss of optical power due to Fresnel reflection Rf at the air-glass and glass-air interfaces. The fraction of light entering the lens is kABPo(1 − Rf). Therefore, the light exiting the lens at plane-C has an optical power given by kABPo(1 − Rf)². The image of the disc is formed at plane-D on the sensor of area S×S. The diameter of the image of the disc is given by the root squared sum of the diameter of the Airy disk and the diameter of the geometrical image of the disc; the root squared sum approach is used here for a better approximation. The intensity then follows from dividing the optical power by this image area. For the indirect imaging system using the RAP, the analysis begins from plane-B, where the optical power is kABPo. The RAP consists of n pinholes, each with a diameter of d3. Hence, the ratio of the total area of the pinholes to the overlap area of the RAP mask with the diffracted light is given as kBC = n(d3/D)². It is assumed that d1 >> d3; therefore, every pinhole in the RAP generates an image of the disc, and the summation of these shifted images is obtained at the sensor.
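The area ratio kBC used in the case study below follows from the pinhole geometry alone; a minimal check (n, d3 and D are the case-study values of this section):

```python
# k_BC: ratio of total pinhole area, n*pi*d3^2/4, to the illuminated
# mask area, pi*D^2/4, which simplifies to n*(d3/D)^2
n, d3, D = 100, 0.1e-3, 2e-3
k_BC = n * (d3 / D) ** 2
print(round(k_BC, 3))  # 0.25, the value used in the case study
```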
The maximum diameter of the random intensity distribution is given as ~D{1 + (v/u)} + d1(v/u) + (2.44λv/d3). It is assumed that the random intensity distribution follows a uniform random variable (the 2D locations of the pinholes) with a uniform probability distribution function, and so the intensity distribution can be considered uniform. However, there are localization effects due to the self-interference phenomenon, resulting in a redistribution of light from out-of-phase wavelets to in-phase wavelets. A case study is considered, and the number of photons is compared between the direct imaging system and the indirect imaging system with a RAP, for u = 10 cm, v = 10 cm, d1 = 1 mm, d3 = 0.1 mm, λ = 600 nm, n = 100, D = 2 mm, Rf = 0.04 (n1 = 1.5 and n2 = 1), Po = 100 mW and Δ = 4 µm. The diameter of the Airy disk at plane-B is ~146 µm, which is much smaller than D, so kAB ~ 1, and kBC = 0.25. The number of photons per second incident on a pixel of the camera is ~6×10¹² for the direct imaging system and ~5×10¹⁰ for the indirect imaging system. The ratio of the number of photons per pixel per second for direct imaging to indirect imaging is therefore 120. For a point object such as the pinhole (20 µm) used in the experiment, the ratio is ~20000. This ratio is not surprising, as it matches the ratio between typical holography systems and direct imaging methods.

Section S5. Design of QRAP
The random locations of the pinholes were synthesized such that the mask consisting of the pinholes at the new locations has an improved SNR. The SNR is defined as Signal/(Average background noise), which reduces to 1/(Average background noise) upon normalizing the reconstructed intensity pattern. The two-step optimization procedure is shown in Fig. S8. The random pinhole locations are iterated over N times, and the set corresponding to the maximum SNR is selected and given as input to the second-stage optimization procedure.
In the next stage, the location of each pinhole is shifted along the X and Y directions within the limits −L ≤ Δx ≤ L and −L ≤ Δy ≤ L, both in steps of Δ, and the SNR is calculated at every step; the quasi-random locations corresponding to the maximum SNR are determined. The second optimization therefore runs over 2L/Δ iterations for every location along the X or Y direction. Therefore, the total number of iterations in the second optimization is 4nL/Δ.
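A simplified sketch of this two-step procedure (a toy grid size, pinhole count and shift range, not the fabricated design; the SNR metric is the reciprocal of the mean autocorrelation background, as defined above):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_pinholes, trials, L = 64, 20, 50, 2   # toy parameters for illustration

def snr(mask):
    # SNR = 1 / (average background of the normalized autocorrelation)
    F = np.fft.fft2(mask)
    ac = np.abs(np.fft.ifft2(F * np.conj(F)))
    ac /= ac.max()
    bg = ac.copy()
    bg[0, 0] = 0.0                 # exclude the autocorrelation peak itself
    return (bg.size - 1) / bg.sum()

def mask_from(positions):
    m = np.zeros((N, N))
    for (i, j) in positions:
        m[i % N, j % N] = 1.0
    return m

# Step 1: keep the best of `trials` fully random pinhole arrays
candidates = [rng.integers(0, N, size=(n_pinholes, 2)) for _ in range(trials)]
pos = list(map(tuple, max(candidates, key=lambda c: snr(mask_from(c)))))
snr_step1 = snr(mask_from(pos))

# Step 2: shift every pinhole within +/-L pixels, keeping only improving moves
for k in range(n_pinholes):
    i0, j0 = pos[k]
    for di in range(-L, L + 1):
        for dj in range(-L, L + 1):
            trial = pos.copy()
            trial[k] = (i0 + di, j0 + dj)
            if snr(mask_from(trial)) > snr(mask_from(pos)):
                pos = trial
snr_step2 = snr(mask_from(pos))
print(snr_step2 >= snr_step1)  # True: the refinement never degrades the SNR
```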
A specific design condition of object distance (u = 10 cm), image distance (v = 10 cm), wavelength (λ = 617 nm), pinhole diameter (d = 80 μm), mask diameter (D = 8 mm) and number of pinholes (n = 2000) is considered. During every iteration, the light from a point object is propagated by 10 cm, modulated by the RAP mask, and the modulated light is propagated by another 10 cm. The intensity distribution at 10 cm from the quasi-RAP (QRAP) mask is autocorrelated using a phase-only filter [5], in which only the phase of the Fourier transform of the intensity distribution is retained, and the SNR was calculated as 1/(Average background noise) after normalizing the maximum intensity value of the autocorrelation. The plot of the SNR against the number of iterations is shown in Fig. S9. The images of the RAP masks and the reconstructed images with the phase-only filter for an object "NANO LAB", using the masks with the minimum SNR, the maximum SNR after step 1 and the maximum SNR after step 2, are shown in Fig. S10. The improvement in the SNR is clearly visible, amounting to 64%. The final mask design was transferred to a chromium-coated mask plate using an Intelligent micropatterning SF100 XPRESS. The size of the QRAP was 8 mm and the diameter of the pinholes was 80 μm after fabrication.

Fig. S8
Optimization procedure for designing the QRAP. A two-step optimization procedure is used to improve the SNR of the reconstruction. In the first step, two random number generators are used to synthesize RAP masks 1000 times, and the RAP mask with the highest SNR is given as input to the next step. In this step, the location of every pinhole is shifted, and the optimal locations are determined iteratively.

Fig. S9
Plot of the SNR as a function of the number of iterations. The blue shaded region indicates the iterations of optimization step 1 and the orange shaded region indicates the iterations of optimization step 2. In optimization step 1, the random array generator was iterated 1000 times and the RAP profiles with the lowest and highest SNR were identified. The RAP profile with the highest SNR was selected for the second optimization, where the location of every pinhole was shifted by ±5 pixels and the profile with the highest SNR was identified. An overall SNR enhancement of 64% was achieved in comparison to the lowest SNR of optimization step 1.

Section S6. Non-linear correlation and adaptability tests
A direct cross-correlation between an object intensity distribution and a point spread intensity distribution results in substantial background noise. This background noise is partially suppressed by optimizing the locations of the pinholes. However, the noise arising from the unipolar nature of the recorded intensity distributions is still present and degrades the image reconstruction. In other words, a cross-correlation between two positive functions produces an offset or background noise. A direct method to solve this problem is to create a bipolar intensity pattern by recording two intensity patterns from two statistically different chaotic waves from the object and subtracting one from the other [1]. This procedure demands two camera recordings for every object and requires either two QRAPs or rotating the QRAP to record a second intensity pattern, which decreases the temporal resolution of imaging. In non-linear correlation, the magnitudes of the spectra of the two correlated functions are modified with respect to one another to create an effect equivalent to that of correlating two bipolar intensity distributions.
In the non-linear correlation, the object reconstruction is expressed with the spectral magnitudes of the object and point-spread intensity distributions raised to the powers α and β, respectively, while their phases are retained. An LED (Thorlabs M617L3, λc = 617 nm, FWHM = 18 nm) critically illuminated an object: a United States Air Force (USAF) resolution target (Group 2, Element 6, 7.13 lp/mm). A pinhole with a diameter of 100 μm was used for sampling the system. The point-object and object intensity distributions were recorded by an image sensor (Thorlabs DCU223M, 1024 × 768 pixels, pixel size = 4.65 μm), as shown in Figs. S11(A) and S11(B). The distance between the object and the QRAP was 10 cm, and the distance between the QRAP and the image sensor was 10 cm. The values of α and β were varied between −1 and 1 in steps of 0.2, and the entropy was calculated for every value of α and β. Since this is a test run, it was carried out only for Figs. S11(A) and S11(B). The reconstructed images for all the values of α and β are shown in Fig. S12. The reconstructed images for the matched filter (α = 1 and β = 1), phase-only filter (α = 0 and β = 1) and inverse filter (α = −1 and β = 1) are indicated using red, green and blue squares, respectively. The optimal filter (α = 0 and β = 0.6), with an entropy value of 1, is indicated using a yellow square. The optimal filter is used for all further reconstructions.

Fig. S12
Reconstruction results of the non-linear filter. α and β were varied between −1 and 1 in steps of 0.2. The optimal filter, for an entropy value of 1, was found to occur at (α = 0 and β = 0.6). The results of the matched filter, phase-only filter and inverse filter are indicated by red, green and blue squares, respectively.
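A compact sketch of such a non-linear correlator and the entropy-based search over (α, β). The filter form below is one common choice for this kind of correlation (spectral magnitudes raised to α and β, phases kept), and the test patterns are synthetic stand-ins for the recorded distributions, not the experimental data:

```python
import numpy as np

def nonlinear_correlation(I_obj, I_psf, alpha, beta):
    # Raise the spectral magnitudes to alpha and beta while keeping the
    # phases, emulating a correlation of bipolar intensity distributions
    O, P = np.fft.fft2(I_obj), np.fft.fft2(I_psf)
    spec = (np.abs(O) ** alpha * np.exp(1j * np.angle(O)) *
            np.abs(P) ** beta * np.exp(-1j * np.angle(P)))
    return np.abs(np.fft.ifft2(spec))

def entropy(rec):
    p = rec / rec.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
I_psf = rng.random((64, 64))                   # stand-in point spread pattern
I_obj = np.roll(I_psf, (5, 9), axis=(0, 1))    # pattern of one shifted point

# Sweep alpha and beta from -1 to 1 in steps of 0.2, keep the lowest entropy
grid = np.round(np.arange(-1.0, 1.01, 0.2), 1)
best = min(((a, b) for a in grid for b in grid),
           key=lambda ab: entropy(nonlinear_correlation(I_obj, I_psf, *ab)))
print(best)  # the (alpha, beta) pair giving the sharpest reconstruction
```

A lower-entropy reconstruction concentrates the correlation energy into fewer pixels, which is why the entropy minimum serves as the selection criterion for the optimal filter.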
The adaptability of the imaging technique with the non-linear correlator is evaluated next. Imaging using the QRAP was repeated under a few extreme conditions and the performance was studied. In the first case, a part of a curved refractive element (a broken beaker) was introduced between the object and the RAP, as shown in Fig. S13(A). The images of the point spread intensity and object intensity patterns are shown in Figs. S13(B) and S13(C), respectively. The reconstructed image is shown in Fig. S13(D). In the second case, a RAP was created by punching multiple holes in black insulation tape. The insulation tape was then attached to a curved surface (the same broken beaker) and used for imaging, as shown in Fig. S13(E). The images of the point spread intensity and object intensity distributions and the reconstructed image are shown in Figs. S13(F), S13(G) and S13(H), respectively. From Figs. S13(F) and S13(G), the distortions due to the curved surface of the beaker are visible. Nevertheless, the image of the object was recovered with some additional noise. The above two cases were selected to demonstrate the level of flexibility available with the proposed technique.

Section S7. Axial resolution of an imaging system using chaotic waves
The axial resolution of the RAP was simulated for u = 10 cm, v = 10 cm, D = 2 mm, λ = 0.6 µm and n = 100, where n is the number of pinholes, for pinhole radii R = 10 μm, 20 μm, 30 μm, 40 μm and 50 μm. The axial resolution of a direct imaging system is given by ~8λ(u/D)² = 12 mm, and the width of the central maximum is 16λ(u/D)² = 24 mm. The FWHM of the axial intensity variation is ~6.7λ(u/D)² = 10 mm. For direct imaging, the intensity at the origin is observed during blurring of the image in the sensor plane as the object is moved from u = 5 cm to 15 cm. The plots of the variation of the value at the origin for the above pinhole diameters for indirect imaging, and for direct imaging, are shown in Fig. S14.
The plots indicate an improvement in the axial resolving power with a decrease in the diameter of the pinholes, approaching the axial resolution limit of direct imaging. The FWHM of the plots was calculated for R = 10 μm, 20 μm, 30 μm, 40 μm and 50 μm and plotted as shown in Fig. S15. An approximate semi-empirical relationship has been established between the radius of the pinholes and the FWHM of the axial intensity variation: Δa-FWHM(R) ~ 6.7λ(u/D)² + K1×(R − 10) for R ≥ 10 µm, where the proportionality factor K1 = 33×10⁻⁵ is the slope of the variation of the FWHM of the axial intensity with respect to the radius of the pinholes.

Section S8. Spectral resolution of an imaging system using chaotic waves
The spectral resolution of a sequential imaging system depends upon the resolution of the spectral scanning system installed either at the source or at the detector. Three cases are considered for the same optical configuration: a refractive lens, a diffractive lens and a RAP. The following design parameters are used for the comparison: u = 10 cm, v = 10 cm, D = 2 mm, λ = 0.4-0.8 µm (0.6 µm as the design wavelength), f = 5 cm and n = 100. A refractive lens made of silica is considered. The dispersion of silica is given by the Sellmeier equation n²(λ) = 1 + B1λ²/(λ² − C1) + B2λ²/(λ² − C2) + B3λ²/(λ² − C3) [6]. The plot of the refractive index for different wavelengths is shown in Fig. S16. The refractive index of silica for λ = 0.6 µm is 1.458. The radius of curvature of a biconvex lens from the lens maker's formula is R = 2f(µ − 1) = 4.58 cm. The focal length is plotted with respect to the wavelength λ = 0.4-0.8 µm in Fig. S16. The radius of the zones of the diffractive lens is given as rm ≅ √(2mλf), and so f ≅ rm²/(2mλ). The radius of the first zone for λ = 0.6 µm and f = 5 cm is r1 = 245 µm. The focal length f(λ) = r1²/(2λ) is plotted as a function of wavelength in Fig. S16. From the plots, it is seen that the diffractive lens has a higher chromatic aberration than a refractive lens.
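The dispersion values quoted above can be reproduced with the standard three-term Sellmeier coefficients for fused silica (Malitson's coefficients are assumed here, since the supplementary text does not list them):

```python
import math

# Three-term Sellmeier dispersion of fused silica (assumed Malitson coefficients)
B = (0.6961663, 0.4079426, 0.8974794)
C = (0.0684043**2, 0.1162414**2, 9.896161**2)   # um^2

def n_silica(lam_um):
    lam2 = lam_um ** 2
    return math.sqrt(1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C)))

n = n_silica(0.6)
print(round(n, 3))   # 1.458, the value quoted for lambda = 0.6 um

# Lens maker's formula for a symmetric biconvex lens with f = 5 cm
f = 5.0              # cm
R = 2 * f * (n - 1)
print(round(R, 2))   # 4.58 cm
```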
A delta-like function is imaged using the three elements, namely the refractive lens, the diffractive lens and the RAP, while the wavelength is varied from 0.4-0.8 µm. The variation of the central intensity value as a function of wavelength is plotted in Fig. S17. The dependency of the spectral resolution on the diameter of the pinholes is studied next. IPSF(λ = 600 nm) is calculated and cross-correlated with IPSF(λ) as the wavelength is varied from λ = 401 nm to 800 nm in steps of 1 nm. The plots of IR(x = y = 0) for different wavelengths and for pinhole radii R = 10 μm to 50 μm in steps of 10 μm are shown in Fig. S17.
From the plots, it is seen that the spectral resolution improves when the diameter of the pinholes decreases, and it matches that of a diffractive lens when R ≤ 10 μm. A refractive lens made of silica has the lowest spectral resolution. Therefore, for a direct imaging system with a refractive lens, it is almost impossible to resolve images based on wavelength without a spectral scanning system. For the same reason, the direct imaging system does not suffer from chromatic aberrations as a diffractive imaging system does. This capacity for chromatic aberration forms the basis for wavelength discrimination and multidimensional imaging without a colour camera.
Fig. S17 Spectral sensitivity of direct imaging with refractive lens and diffractive lens and RAP. Plot of the IR(x=y=0) for λ = 401 nm to 800 nm for R = 10 μm to 50 μm in steps of 10 μm for RAP and direct imaging using a refractive and diffractive lens.

Section S9. Depth-Wavelength relationships
Quasi-achromatic lenses with the capability to change focal distance by changing wavelength have been investigated [7]. Consider Eq. (S.1), in which the distance and wavelength factors affecting IPSF are λu and λv in the quadratic and linear phase factors. Therefore, any change in the wavelength can be compensated by an equal and opposite change in the distances. For instance, if the wavelength is increased by a factor of k and the distances u and v are scaled by a factor of 1/k, the function IPSF remains constant. Consequently, the IPSF for a wavelength λ1 can be synthesized from a different wavelength λ2 by varying u and v by a factor of λ1/λ2. Likewise, by varying λ and v, it is possible to obtain the IPSF for a different u. This is an extraordinary relationship, as it makes it possible to discriminate wavelength without having all the wavelengths sampled and to discriminate depth without having all the depths sampled using the point object. A computer simulation was carried out to validate this idea using the following parameters: u = 10 cm, v = 10 cm and λ1 = 617 nm. The wavelength was varied to λ2 = 530 nm.
The values of u and v were iterated between 5 cm and 15 cm in steps of 20 μm to find the case u = u1 and v = v1 for which the cross-correlation between IPSF(u = 10 cm, v = 10 cm, λ1 = 617 nm) and IPSF′(u1, v1, λ2 = 530 nm) equals the autocorrelation of IPSF(u = 10 cm, v = 10 cm, λ1 = 617 nm). The simulation result of IR(x = 0, y = 0) for different values of u and v is plotted in Fig. S18. As expected, the cross-correlation value matched the autocorrelation for u1 = v1 = 11.642 cm at λ2 = 530 nm. The factor of increase of u1 and v1 is the same as the factor of decrease from λ1 to λ2. This finding reduces the spectral and spatial sampling requirements.
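The optimum found by the simulation agrees with the simple reciprocity scaling u1 = u·(λ1/λ2):

```python
# Depth-wavelength reciprocity check for the Section S9 parameters
u, v = 10.0, 10.0              # cm
lam1, lam2 = 617.0, 530.0      # nm
u1 = u * lam1 / lam2           # the scaled distance that keeps IPSF unchanged
print(round(u1, 3))            # 11.642 cm, matching the simulated optimum u1 = v1
```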

Section S10. Computational modules for automation
The computational module was developed using open-source components, namely Debian GNU/Linux (Ubuntu 16.04) as the operating system and GNU Octave (4.3.0) for video processing. For easy implementation, this module has been designed to be used with any available web camera. The schematic of the computational module is shown in Fig. S19. The spatio-spectral signatures are recorded by sampling along different wavelengths and depths, and are catalogued and stored in a library. The software grabs one frame at a time, cross-correlates the frame with the spatio-spectral signatures using the optimal non-linear filter, and reconstructs the frame simultaneously in the different spatio-spectral dimensions. This process is repeated for successive frames, and the corresponding reconstructions are updated on the computer display. The Octave code for building the dependencies and for the automation is given in the Database file.
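The per-frame processing described above can be sketched as follows. The original module is written in GNU Octave; this is a NumPy sketch with hypothetical (wavelength, depth) library keys and synthetic data, using a common non-linear filter form with the optimal (α = 0, β = 0.6) of Section S6:

```python
import numpy as np

def reconstruct(frame, signature, alpha=0.0, beta=0.6):
    # Non-linear cross-correlation of one grabbed frame with one stored
    # spatio-spectral signature (alpha, beta: optimal filter of Section S6)
    F, S = np.fft.fft2(frame), np.fft.fft2(signature)
    spec = (np.abs(F) ** alpha * np.exp(1j * np.angle(F)) *
            np.abs(S) ** beta * np.exp(-1j * np.angle(S)))
    return np.abs(np.fft.ifft2(spec))

# Hypothetical signature library keyed by (wavelength, depth)
rng = np.random.default_rng(3)
library = {("617 nm", "10 cm"): rng.random((64, 64)),
           ("530 nm", "10 cm"): rng.random((64, 64))}

frame = rng.random((64, 64))   # stand-in for one frame grabbed from a web camera
recons = {key: reconstruct(frame, sig) for key, sig in library.items()}
print(len(recons))             # one reconstruction per spatio-spectral channel
```

In the actual module this loop would run once per grabbed video frame, with the reconstructions pushed to the display as described above.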