Imaging through scattering medium by adaptive non-linear digital processing

Scattering media have always posed an obstacle to imaging through them. In this study, we propose a single-exposure, spatially incoherent and interferenceless method capable of imaging multi-plane objects through scattering media using only a single lens and a digital camera. A point object and a resolution chart are precisely placed at the same axial location, and the light scattered from them is focused onto an image sensor by a spherical lens. In both cases, the intensity patterns are recorded under identical conditions, each with only a single camera shot. The final image is obtained by an adaptive non-linear cross-correlation between the intensity responses of the point object and of the resolution chart. The clear and sharp reconstructed image demonstrates the validity of the method.

requires multiple camera shots from different positions of the sensor, and 3D imaging is not demonstrated. A high-speed, full-color imaging technique 35 through a standard scattering medium, using broadband white light as the illumination source, was shown recently; this technique can reconstruct objects hidden behind turbid media using a reconstruction algorithm. More recently, it was shown that a broadband image of an object can be reconstructed from its speckle pattern, where the scattering medium plays the role of an imaging lens 36 .
In this study, we present a new method of imaging through a scattering medium. The method is based on characterization of the scatterer with a guide-star. However, the linear cross-correlation with the response to the guide-star 37 is replaced by an adaptive non-linear reconstruction process. Thus, instead of using two camera shots with two independent scatterers 37 , the non-linear digital process makes it possible to reconstruct the hidden multi-plane object from a single camera shot and without using an interferometer. Unlike in our previous work 37 , the optimal parameters of the non-linear reconstruction process are chosen by optimizing a blind figure-of-merit, without the need for any prior knowledge about the covered object. Although characterizing the scatterer with a guide-star limits the method to certain applications, the use of a guide-star makes 3D imaging possible 37 .

Methodology
The optical setup for the proposed technique is shown in Fig. 1. The light diffracted by the point object is modulated by a scatterer located at a distance $z_s$ from the point object. A refractive lens L 1 , placed in close proximity to the scatterer, is used to focus the modulated light onto the image sensor. Lens L 1 has a focal length $f = (1/z_s + 1/z_h)^{-1}$, where $z_h$ is the separation between the lens L 1 and the image sensor. Without the scatterer, a focused image of the point object is obtained on the image sensor. It is well-known 38 that when a positive lens is illuminated by a quasi-monochromatic point source, a 2D Fourier transform of a transparency (multiplied by some quadratic phase function) is obtained on the image plane of the source, when the transparency is placed anywhere between the source and the image point. The center of the Fourier transform corresponds to the image of the point source. Hence, if the source is at the point $\bar{r}_s = (x_s, y_s)$, the intensity at the sensor plane will be centered at $\bar{r}_o = \bar{r}_s z_h / z_s$. The intensity at the sensor plane, known as the point spread function 37 (PSF), is given by

$$I_{PSF}(\bar{r}_o; \bar{r}_s) = I_{PSF}\!\left(\bar{r}_o - \frac{z_h}{z_s}\bar{r}_s;\, 0\right). \quad (1)$$

A 2D object can be represented by a set of independent points, and is mathematically expressed as

$$O(\bar{r}_s) = \sum_j c_j\, \delta(\bar{r}_s - \bar{r}_{s,j}), \quad (2)$$

where each $c_j$ is a positive real constant. The object is placed at the same axial location as the point object. The light emitted from the object passes through the same scattering sheet before reaching the image sensor. Since the optical system is linear and space-invariant, the intensity profile captured at the image sensor is given by

$$I_{Obj}(\bar{r}_o) = \sum_j c_j\, I_{PSF}\!\left(\bar{r}_o - \frac{z_h}{z_s}\bar{r}_{s,j};\, 0\right). \quad (3)$$

One can state that the intensity response at the sensor plane is the 2D convolution between the object $O(\bar{r}_s)$ and the PSF. The goal here is to reconstruct the object $O$ from the camera intensity $I_{Obj}$. To successfully retrieve the image of the object, let us formulate the intensity response of Eq. (3) as a problem of optical pattern recognition [38][39][40]. The distribution given by Eq. (3) can be considered as the observed scene in which the patterns of interest $I_{PSF}$ are distributed according to the shape of the input object. The reconstruction process is done by correlating $I_{Obj}$ with a reconstructing function calculated based on $I_{PSF}$, where the goal is to obtain the sharpest delta-like function at each and every position of $I_{PSF}$ over the entire response $I_{Obj}$. Next, for a single point object at some $\bar{r}_s$, the intensity on the camera plane is

$$I(\bar{r}_o) = I_{PSF}\!\left(\bar{r}_o - \frac{z_h}{z_s}\bar{r}_s;\, 0\right). \quad (4)$$

The reconstructing function $I_{Rec}$ should be chosen such that its cross-correlation with the distribution of Eq. (4) is as close as possible to a delta function located at $z_h\bar{r}_s/z_s$. Correlating $I_{Obj}$ with $I_{PSF}$ itself is apparently not the optimal choice, because correlation between two positive functions leads to a high level of background noise, and the correlation peak that is achieved is not the sharpest. This sub-optimal correlation is equivalent to the use of a matched filter in pattern recognition 39 , and it is demonstrated in the following experiments with a relatively high level of background noise. To choose the optimal reconstructing process we consider the spatial spectral domain, where the Fourier transform of the cross-correlation is the product of the Fourier transforms of $I_{Obj}$ and of the reconstructing function $I_{Rec}$, as follows:

$$\tilde{C} = \tilde{I}_{Obj}\, \tilde{I}_{Rec}^{\,\ast}. \quad (5)$$

If $\tilde{I}_{Rec} = \tilde{I}_{PSF}$ is chosen, the spatial filter becomes again the matched filter, with relatively high background noise and a wide correlation peak. In order to effectively cross-correlate the two functions, with relatively low background noise, the magnitudes $|\tilde{I}_{Obj}|$ and $|\tilde{I}_{Rec}|$ are raised to powers of α and β, respectively. Substituting $\tilde{I}_{Rec} = \tilde{I}_{PSF}$, the Fourier transform of the cross-correlation becomes

$$\tilde{C} = |\tilde{I}_{Obj}|^{\alpha}\exp[i\,\phi_{Obj}]\; |\tilde{I}_{PSF}|^{\beta}\exp[-i\,\phi_{PSF}], \quad (6)$$

where $\phi_{Obj}$ and $\phi_{PSF}$ are the phase distributions of $\tilde{I}_{Obj}$ and $\tilde{I}_{PSF}$, respectively.
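The image-formation model of Eq. (3) can be illustrated numerically. The following minimal sketch (with an assumed random matrix standing in for the recorded speckle PSF, and two hypothetical object points; all sizes and values here are illustrative) forms the camera intensity as a 2D convolution of the object with the PSF:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64
I_psf = rng.random((N, N))       # stand-in for the recorded PSF speckle pattern
obj = np.zeros((N, N))
obj[20, 20] = 1.0                # two object points with positive weights c_j
obj[40, 30] = 0.5

# Eq. (3): the camera intensity is the 2D convolution of the object with the
# PSF; a circular convolution via FFT is used here for simplicity.
I_obj = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(I_psf)))
```

Each object point contributes a shifted copy of the PSF, weighted by its constant $c_j$, which is exactly the superposition expressed by Eq. (3).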
Note that using a power α ≠ 1 makes the entire reconstruction process non-linear for a multi-point object 40 . However, we can argue that since α does not modify the phase of $\tilde{I}_{Obj}$, but only its magnitude, and since the location of each object point is embedded in the phase distribution, the influence of this non-linearity is mainly an improvement of the SNR of the reconstructed image, as the experimental results show. Recalling that the goal is to obtain a sharp delta-like function at $z_h\bar{r}_s/z_s$, the natural choice for α and β is the pair of values that satisfies α + β = 0, a condition that guarantees $|\tilde{C}| = 1$. However, in a practical noisy setup the reconstruction results under this condition are far from optimal (see the following figures). Therefore, the pair of parameters α and β should be sought in the range between the inverse filter (α or β = −1) and the matched filter (α = β = 1). The search should be based on an optimization of some blind figure-of-merit, since the object at this stage has not yet been reconstructed and, in principle, is unknown to the system user.
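The α, β spectral filtering described above can be sketched as follows; this is a minimal illustration of the principle (the function name is ours), not the authors' implementation:

```python
import numpy as np

def nonlinear_reconstruct(I_obj, I_psf, alpha, beta):
    """Cross-correlate I_obj with the PSF response, raising the spectral
    magnitudes to the powers alpha and beta while keeping the phases."""
    O = np.fft.fft2(I_obj)
    P = np.fft.fft2(I_psf)
    C = (np.abs(O) ** alpha * np.exp(1j * np.angle(O))
         * np.abs(P) ** beta * np.exp(-1j * np.angle(P)))
    return np.abs(np.fft.ifft2(C))
```

With α = β = 1 the function reduces to an ordinary cross-correlation (the matched filter), while α = 1, β = −1 gives the inverse filter and α = 1, β = 0 gives phase-only filtering.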
For clustered objects on a dark background, an appropriate blind figure-of-merit is the entropy 41 . The entropy is maximal when all the energy in the reconstructed image is spread over the entire image matrix, and it is minimal when this same energy is concentrated in the smallest region, i.e., in a single pixel. Therefore, we suggest computing the entropy of the reconstructed image for −1 ≤ α, β ≤ 1 in some chosen step size, and choosing the pair of parameters with the minimum entropy. The entropy corresponding to the energy-normalized distribution function φ is given as 41

$$S(\alpha,\beta) = -\sum_{m=1}^{M}\sum_{n=1}^{N} \phi(m,n)\log[\phi(m,n)], \quad (7)$$

where M and N are the numbers of rows and columns of the image matrix, and the reconstructed image for a multi-point object is

$$I_{rec}(\bar{r}) = \left|\mathcal{F}^{-1}\{\tilde{C}\}\right| = \left|\mathcal{F}^{-1}\!\left\{|\tilde{I}_{Obj}|^{\alpha}\exp[i\,\phi_{Obj}]\, |\tilde{I}_{PSF}|^{\beta}\exp[-i\,\phi_{PSF}]\right\}\right|, \quad (8)$$

where $\phi_{Obj}$ and $\phi_{PSF}$ are the phases of the object and PSF spectra, respectively. Since this search for parameters is done for each object and for each imaging experiment, the system is adaptive to each image and to the noise conditions of each experiment. Our experience with the process shows that the two parameters α and β differ from one object or scene to another. It should be emphasized that the search for the two parameters is done digitally with the same single PSF and with the same single object response. Hence, after the training stage of the system, in which the PSF is stored in the computer, the system captures the object response with only a single camera shot.
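A minimal sketch of this entropy-guided search, using a synthetic random speckle as a stand-in for the recorded PSF and a single hypothetical object point (all names, sizes and seeds here are illustrative assumptions):

```python
import numpy as np

def reconstruct(I_obj, I_psf, alpha, beta):
    # Non-linear cross-correlation: spectral magnitudes raised to alpha, beta.
    O, P = np.fft.fft2(I_obj), np.fft.fft2(I_psf)
    C = (np.abs(O) ** alpha * np.exp(1j * np.angle(O))
         * np.abs(P) ** beta * np.exp(-1j * np.angle(P)))
    return np.abs(np.fft.ifft2(C))

def entropy(img):
    # Entropy of the energy-normalized distribution; 0*log(0) is treated as 0.
    phi = img / img.sum()
    phi = phi[phi > 0]
    return -np.sum(phi * np.log(phi))

rng = np.random.default_rng(1)
I_psf = rng.random((32, 32))                 # stand-in speckle PSF
obj = np.zeros((32, 32))
obj[10, 10] = 1.0                            # single object point
I_obj = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(I_psf)))

# Scan alpha, beta over [-1, 1] in steps of 0.2 (11 x 11 = 121 reconstructions)
# and keep the pair that minimizes the entropy.
steps = np.round(np.linspace(-1.0, 1.0, 11), 1)
best_entropy, best_a, best_b = min(
    (entropy(reconstruct(I_obj, I_psf, a, b)), a, b)
    for a in steps for b in steps)
```

The winning (α, β) pair is, by construction, at least as good (in entropy) as any fixed filter on the grid, including the matched filter α = β = 1; this is the sense in which the process is adaptive.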

Experiments
The experimental setup is shown in Fig. 2. It consists of three light channels, to facilitate multi-plane imaging, with three LEDs (Thorlabs LED635L, 170 mW, λ = 635 nm, Δλ = 15 nm) serving as incoherent light sources. The channels are adjusted such that the two objects and the point source can be critically illuminated at the same time. Three refractive lenses L 0 , L 0 ′ and L 0 ′′ were used to illuminate the objects and the pinhole. The experiment was completed in two stages: first, the non-linear processing was tested using only a single object, and the results showed a significant improvement over a linear (α = 1) reconstruction process 37 , which encouraged us to proceed to the second part of the experiment, i.e., multi-plane imaging with adaptive tuning. The digit '5' from element 5 of group 2 of the United States Air Force (USAF) resolution chart was used as the object for the first part of the experiment, and a pinhole with an approximate diameter of 100 μm was used as the point object. The object and the pinhole were kept at the same axial location, at a distance of 11.7 cm from the scattering sheet (shown as an inset in Fig. 2). A simple polycarbonate sheet (its statistical properties were measured and are described in ref. 37 ) was used as the scattering layer and was placed adjacent to the lens L 1 (focal length 5 cm). The image sensor (GigE vision GT Prosilica, 2750 × 2200 pixels, 4.54 μm pixel pitch) was placed at a distance of 9 cm from the lens L 1 , as dictated by the imaging equation. For the multi-plane imaging case, the same pinhole with a different pair of objects was used. Initially, the PSFs were recorded at the two different transverse planes. Next, the object holograms of the multi-plane object were recorded by placing the two different objects in the two channels, separated by an axial distance of ΔZ = 3 mm.
The grating of element 6 of group 2 in the USAF resolution test chart, with 7.13 lp/mm and a line spacing of 70.15 μm, was considered as object 1, and the numeric digit '6' adjacent to element 6 of group 2 was considered as object 2. All the objects were aligned in the absence of the scatterer, and only after it was ensured that the objects were at the desired axial locations was the scatterer introduced into the setup. An object hologram of a multi-plane object can be reconstructed plane by plane, using the different pre-recorded PSFs, to obtain the final image of the original multi-plane object.
The entropies of the reconstructed images were calculated for values of α varying from −1 to +1 in steps of 0.2, and for each value of α, the filter coefficient β was tuned from the inverse-filter regime (−1) to the matched-filter regime (+1), through phase-only filtering (0), with the same step size of 0.2. The image processing was done off-line, after the intensity-pattern acquisition stage, and the step sizes of α and β can be varied according to the user's requirements. The non-linear reconstruction with a step size of 0.2 requires only 121 iterations, whereas reducing the step size to 0.1 increases the number of iterations to 441.
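The iteration counts quoted above follow directly from the size of the search grid; a small sanity check (the helper name is ours):

```python
def n_iterations(step, lo=-1.0, hi=1.0):
    """Number of (alpha, beta) pairs scanned over [lo, hi] with a given step."""
    samples_per_axis = int(round((hi - lo) / step)) + 1   # e.g. 11 for step 0.2
    return samples_per_axis ** 2

print(n_iterations(0.2), n_iterations(0.1))  # 121 441
```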

Experimental Results
The first part of the experiment was carried out by placing the scatterer between the object and the lens L 1 and recording the intensity responses of the point object, $I_{PSF}$, and of the object, $I_{Obj}$. The intensity patterns of the object with and without the scatterer are shown in Fig. 3(a,b), respectively. Figure 3(a) demonstrates that the object cannot be seen through the scatterer directly, without a digital recovery process. The magnitudes $|\tilde{I}_{Obj}|$ and $|\tilde{I}_{PSF}|$ were raised to the powers of α and β, respectively, and the best reconstruction result, with the minimal normalized entropy value of 1, was obtained for α = 0.6 and β = −0.2. As explained in the Methodology section, α and β are the parameters that modify the magnitudes of the object spectrum and of the filter, respectively, in order to obtain the sharpest correlation peak, with minimum sidelobes, for every reconstructed image point. α and β are found by a search procedure, rather than analytically, because of the presence of noise (which is different in each experiment) in the spatial spectral domain. The final reconstructed image with the optimal values of α and β is shown in Fig. 3(c). The reconstructed images for different values of α and β, with their entropy values, are shown in Fig. 4, where the entropy value of each sub-image has been normalized with respect to the minimum entropy value. A few interesting cases are marked with colored outlines. The images reconstructed by filtering with a matched filter, a phase-only filter and an inverse filter in a linear (α = 1) correlator are designated by purple, red, and green frames, respectively. The yellow frames represent the reconstructed images that satisfy α + β = 0, whereas the optimal case, with the lowest entropy value, is outlined by a blue frame. The same color scheme is followed throughout the article.
In the next part of the experiment, the multi-plane imaging capability of the system was studied. The two objects mentioned earlier were critically illuminated in two different channels, and the pinhole was illuminated in the third channel. Initially, object 1 and object 2 were placed at the same axial location along with the pinhole. All three objects were kept at a distance of 11.7 cm from the scattering sheet, and the intensities of the objects and the pinhole were captured by the image sensor placed at a distance of 9 cm from the lens L 1 . The spectral magnitudes of the object and point-object responses were tuned as before, and the entropies were calculated. Based on Fig. 5, the best reconstruction result was obtained for α = −0.2 and β = 0.8. Next, object 1 and object 2 were axially separated by a distance of ΔZ = 3 mm, and the pinhole was kept at the same axial location as object 2 (Z 2 ). The same procedure of tuning α and β was repeated, and the entropies were recorded for the different cases. Here, the best reconstruction result was obtained for α = −0.4 and β = 1, as shown in Fig. 6. Similarly, when the pinhole and object 1 were in the same axial plane, while object 2 was separated by a distance of ΔZ = 3 mm, the best reconstructed image was obtained for α = −0.2 and β = 0.8, as shown in Fig. 7. Note that the optimal values of α and β are different for each state of the 3D scene, even when the objects are the same; hence the non-linear correlation is adaptive in the sense of adapting different optimal α and β parameters to different observed scenes.

Summary and Conclusions
In conclusion, we have presented a simple, incoherent, interferenceless, single-shot imaging technique capable of imaging through scattering layers. In this method we have implemented adaptive non-linear processing and demonstrated both single-plane and multi-plane imaging. The scattering layer has to be characterized first by using a guide-star, which currently makes the method effectively invasive. However, it can be made non-invasive by using a fluorescent dye to mark the point object and the object; if such fluorescent markers can be excited from outside the scattering layers, the proposed method might become non-invasive. A pinhole of 100 μm was selected solely to maintain the intensity above the detectable threshold; a smaller pinhole would provide better image resolution, but might not provide the light intensity required for the process to work. Thus, while the technique has several advantages, it also has some disadvantages, and additional research is required to overcome them.