Introduction

Lensfree imaging is an emerging technique that requires no imaging lens or its equivalent between the specimen and the image sensor planes1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30. In its specific ‘on-chip’ implementation, by placing the sample close (e.g., <1–2 mm) to the active area of an image sensor chip, this technique brings not only extreme compactness to the entire optical system, but also the unique feature of unit fringe magnification, where the object field-of-view (FOV) of the lensfree on-chip imaging platform is equal to the active area of the sensor chip12,13,14,15,16,17,18,19,20,21,22,23,24. Therefore, the FOV of a lensfree on-chip microscope can easily reach e.g., ~20–30 mm² or ~10–20 cm² using a CMOS (Complementary Metal-Oxide-Semiconductor) or a CCD (Charge-Coupled-Device) imager, respectively24,25,26,27. Unlike conventional lens-based microscopy approaches, an increase in FOV does not necessarily sacrifice spatial resolution, and new image sensor chips with larger active areas and smaller pixel sizes immediately translate into a larger FOV as well as a better spatial resolution, without a change in the optical design of the lensfree on-chip microscope19,28,29.

The setup of a lensfree microscope is simple and compact (see e.g., Fig. 1.a); a partially coherent and quasi-monochromatic light source (center wavelength, λ) illuminates a specimen that is positioned onto an optoelectronic image sensor-array24,25,30. The scattered light transmitted through the specimen interferes with the unperturbed background light and creates an in-line hologram that is sampled and digitized by the image sensor-array (see inset Fig. 1.a). Since this on-chip microscope design has unit magnification, when capturing a raw lensfree hologram, the spatial sampling period and the sampling function are determined by the sensor's pixel pitch and its two-dimensional (2D) pixel responsivity map within each pixel (which we refer to as the pixel function). Stated differently, it is the pixel function of an opto-electronic sensor-array that fundamentally affects the spatial resolution and image distortions/aberrations in a lensfree holographic on-chip microscope. Different sensor chips have different pixel functions (with various pixel widths/heights and 2D functional forms) and therefore the nature of the spatial under-sampling and convolution operations that occur at the sensor plane is highly dependent on the sensor choice29.
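To make this sampling model concrete, the short sketch below (illustrative function and parameter names, not the code used in this work) convolves an ideal in-line hologram intensity, defined on a fine numerical grid, with a 2D pixel function and then under-samples it at the pixel pitch, as happens at the sensor plane under unit magnification.

```python
import numpy as np
from scipy.signal import fftconvolve

def sample_hologram(ideal_intensity, pixel_function, pitch_in_grid_units):
    """Simplified sensor model: the ideal in-line hologram intensity (on a fine grid)
    is convolved with the 2D pixel responsivity map (pixel function) and then
    sampled once per pixel pitch; with unit magnification there is no lens in between."""
    blurred = fftconvolve(ideal_intensity, pixel_function, mode='same')
    return blurred[::pitch_in_grid_units, ::pitch_in_grid_units]  # integer under-sampling
```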

Figure 1

Lensfree on-chip microscopy setup.

(a) Shows a schematic of the lensfree holographic microscopy setup. The close-up of (a) shows that the scattered wave from the object interferes with the unperturbed reference wave and forms an in-line hologram, which is then sampled by the image sensor chip. The pixel structures exhibit large variability in terms of pixel pitch and morphology as can be seen in (b) and (c). (b) Shows an optical microscope image (20× objective, NA = 0.5) of a 6.8 μm monochrome CCD image sensor chip. (c) Shows an optical microscope image (100× water immersion objective, NA = 1) of a 1.12 μm color CMOS image sensor chip, where the Bayer pattern can be readily seen.

In this manuscript we demonstrate that by incorporating the 2D pixel function of an image sensor chip into the lensfree holographic image reconstruction and pixel super-resolution steps, one can improve the numerical aperture (NA) of the reconstructed images by a factor of ~3 compared to a raw lensfree image. Note that our use of the term ‘pixel super-resolution’28,29,31,32,33 refers to improving the effective NA of an imaging system and should not be confused with other microscopy techniques that aim to surpass the diffraction limit of light. This numerical aperture improvement is achieved using computational techniques (e.g., pixel super-resolution and hologram deconvolution) and is found to be, by and large, independent of the sensor chip design. Toward this end, we worked with both a monochrome CCD and a color CMOS image sensor chip, with physical pixel sizes of 6.8 μm (Fig. 1.b) and 1.12 μm (Fig. 1.c), respectively. We used experimental and numerical techniques to estimate the 2D pixel function of each sensor-array, which in general would also be applicable for characterization of other opto-electronic sensors. Based on this 2D pixel function, we experimentally found that, using a CCD image sensor chip with a physical pixel size of 6.8 μm, an NA of ~0.14 can be achieved in our reconstructed super-resolved images across an ultra-large field-of-view (FOV) of ~18 cm², yielding a super-resolved effective pixel size of λ/0.56, where λ is the illumination wavelength. Under the same lensfree on-chip imaging geometry, using a CMOS image sensor chip that has a physical pixel size of 1.12 μm, we achieved an NA of ~0.83 across a FOV of ~20 mm², yielding a super-resolved effective pixel size of λ/3.32. Compared to the pixel count (i.e., megapixel value) of each native sensor chip, these pixel super-resolved lensfree images (under unit magnification) demonstrate a pixel density increase of (3.81 μm/λ)² and (3.72 μm/λ)², for the CCD and CMOS imagers respectively, which empirically points to roughly the same space-bandwidth improvement factor regardless of the sensor chip architecture used in our lensfree on-chip imaging set-up. With these results, we achieved an effective pixel count of 2.52 billion with the 6.8 μm-pitch CCD image sensor, and an effective pixel count of 1.64 billion with the 1.12 μm-pitch CMOS image sensor.
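The quoted numbers are mutually consistent if the super-resolved effective pixel size is taken as the Nyquist sampling of the NA-limited half-pitch, i.e., λ/(4·NA). The short script below is our own consistency check of the values stated above (the assumption about the Nyquist relation and the rounded inputs are ours), reproducing the effective pixel sizes, density gains and giga-pixel counts.

```python
# Consistency check of the numbers quoted above, assuming the effective pixel
# size corresponds to Nyquist sampling of the NA-limited half-pitch, i.e. wavelength / (4*NA).
sensors = [
    # name,  NA,   physical pixel (um), wavelength (um), native pixel count
    ("CCD",  0.14, 6.8,  0.480, 40.0e6),
    ("CMOS", 0.83, 1.12, 0.372, 16.4e6),
]
for name, na, pixel_um, wl_um, native in sensors:
    eff_px = wl_um / (4 * na)                 # super-resolved effective pixel size (um)
    density_gain = (pixel_um / eff_px) ** 2   # pixel-density increase vs. the native grid
    print(f"{name}: effective pixel {eff_px:.3f} um, "
          f"density gain ~{density_gain:.0f}x, "
          f"effective count ~{native * density_gain / 1e9:.2f} Gpixels")
# -> CCD: ~0.857 um, ~63x, ~2.52 Gpixels ; CMOS: ~0.112 um, ~100x, ~1.64 Gpixels
```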

Finally, we also demonstrate that by utilizing a light emitting diode (LED) with a short illumination wavelength (λ = 372 nm), this pixel super-resolution based lensfree on-chip microscope can resolve periodic grating lines with a line-width of 225 nm. To better illustrate the capabilities and the potential applications of this wide-FOV, high-resolution lensfree microscopy platform, we also imaged helical multi-walled carbon nanotubes (MWCNTs) with a diameter of ~160 nm.

Results

The resolution improvement of lensfree on-chip imaging is achieved by incorporating the estimated pixel function of a sensor array into the computational steps that are used in lensfree imaging (see Fig. 2.a). In the next sub-sections, we report the estimation of the pixel functions of the CCD (pixel size: 6.8 μm) and CMOS (pixel size: 1.12 μm) image sensors, using an experimental and a numerical approach, respectively (see Fig. 2.b). These pixel functions are then used to deconvolve the high-resolution lensfree holograms to undo distortions and enhance high spatial frequency components that were suppressed during lensfree hologram recording. Following this deconvolution step, each lensfree hologram is reconstructed to retrieve both the phase and the amplitude images of the object (see the Methods Section). We present lensfree imaging results of a resolution test chart (1951 USAF), periodic grating lines fabricated by focused ion beam (FIB) milling and helical MWCNTs to demonstrate the resolution improvement on both the CCD and CMOS image sensors.

Figure 2

Image processing block diagram of lensfree on-chip microscopy.

(a) Illustrates the block diagram of the computational methods that are used in creating a high-resolution image. In the hologram deconvolution step, either an experimental or a computational approach can be used to estimate the pixel function of the image sensor. (b) Shows two different pixel functions obtained with different methods: the left pixel function is obtained with an experimental method for the 6.8 μm monochrome CCD image sensor; and the right pixel function is obtained with a computational method for the 1.12 μm color CMOS image sensor.

Pixel function estimation of 6.8 μm CCD image sensor

To measure the pixel function of our monochrome CCD image sensor, a scanning microscopy system was assembled from a bright field microscope, an LED (λ = 470 nm) and an X-Y-Z piezo stage (see the Methods Section). The scanning microscope illumination spot had a full width at half maximum (FWHM) of ~1.4 μm in both axes (Fig. 3.a), which is much narrower than the pixel size of the image sensor-array (6.8 μm). This illumination scheme allowed the measurement of the pixel function by probing different positions within the area of a single pixel and recording the pixel response at each position. Fig. 3.b shows a microscope image (20× objective lens, NA = 0.5) of a single pixel and the illumination spot (i.e., the bright spot in the upper right corner). Using the pixel output recorded from 54 measured sub-pixel locations (shown in Fig. 3.b), an initial estimate of the pixel function of the CCD image sensor was obtained (Fig. 3.c, see the Methods Section for details). Since the illumination spot size (~1.4 μm) cannot be treated as a spatial delta function, this initial pixel function could be further refined. Toward this end, we deconvolved the lensfree hologram of a known test object (e.g., a 1951 USAF resolution test chart) using a blind deconvolution algorithm (built-in MATLAB routine: deconvblind), which provides a maximum likelihood estimate for both the pixel function and the unblurred image34,35,36,37. After 35 iterations of this blind deconvolution algorithm, a refined pixel function was obtained for our CCD image sensor, as illustrated in Fig. 3.d.

Figure 3

Experimental approach for pixel function estimation using a scanning microscope.

(a) Shows a cross section of the illumination spot (see inset) of the scanning microscope, which is used to probe the pixel function of the 6.8 μm CCD image sensor. (b) Shows the illumination spot (the bright spot on the upper right corner) over the CCD image sensor. To estimate the pixel function of the CCD chip, 54 different locations were probed, as marked by yellow hollow rectangles. (c) Shows the measured pixel function after spatial interpolation. (d) Shows the resulting pixel function after blind deconvolution.

This asymmetrical CCD pixel function reported in Fig. 3.d is also in agreement with the literature that reports the architecture of this sensor-array38,39. The KAF 39MP CCD image sensor has two gate electrodes for each pixel; one is built using Indium Tin Oxide (ITO), while the other gate electrode is built from doped polysilicon. ITO is more transparent and therefore the light collection within the ITO region should be more efficient than in the doped polysilicon gate electrode39. This is also confirmed by the optical microscope image of the pixel (Fig. 3.b), which clearly shows its asymmetrical structure: the dark rectangle is the ITO gate electrode and the bright rectangle is the doped polysilicon. Furthermore, the KAF 39MP pixel architecture includes a lateral overflow drain (LOD), which allows off-chip draining of the excess signal. Accordingly, this LOD region does not collect light; and we believe that its position, which is not visible in the microscope image shown in Fig. 3.b, corresponds to the area of the pixel function that is not sensitive to light.

Pixel function estimation of 1.12 μm CMOS image sensor

It is experimentally challenging to directly measure the pixel function when the pixel size of the image sensor approaches the micrometer scale. Therefore, we instead adopted a computational approach to estimate the pixel function of the 1.12 μm CMOS image sensor (see the Methods Section for details). This computational method generates various pixel functions, and for each pixel function the holograms of known test objects are deconvolved and reconstructed. By evaluating these reconstructed images, one can quantify the effect of the estimated pixel function, and the pixel function with the best performance can be treated as an approximation to the real pixel function. Based on the reported architecture of the image sensor chip in the literature40,41 and the morphology of the lens-array installed on this CMOS imager (see Fig. 1.c), two assumptions were made on the structure of the pixel function: (1) similar to the morphology of the microlenses, the pixel function possesses circular symmetry; and (2) the crosstalk between neighbouring pixels is negligible, and therefore the size of the pixel function equals the pixel pitch. Accordingly, we approximated the pixel function of our CMOS sensor-array with a two-dimensional Gaussian distribution within a 1.12 μm square area. Four parameters of the Gaussian pixel function were optimized: the X-Y coordinates of its center position and the FWHM of the Gaussian in both the vertical and the horizontal directions. We digitally scanned the values of these parameters to generate various responsivity distributions within the pixel area and fed each 2D distribution into the hologram deconvolution step (Fig. 4.a). As shown in Fig. 4.b, the objects reconstructed from the deconvolved holograms are evaluated by either measuring the modulation depths (grating lines) or the width of averaged cross-section profiles (helical MWCNTs). We combined all these evaluation results from various objects and used this combination as the ‘cost function’ for pixel function optimization. By minimizing this cost function, our estimate of the pixel function converged to a 2D Gaussian distribution with both a vertical and a horizontal FWHM of ~550 nm, as illustrated in Fig. 2.b.
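As an illustration, such a parameterized 2D Gaussian pixel function, with the four scanned parameters (two center offsets and two FWHM values), can be generated as in the sketch below; the grid size, normalization and function name are our own choices and not part of the original implementation.

```python
import numpy as np

def gaussian_pixel_function(fwhm_x_um, fwhm_y_um, x0_um=0.0, y0_um=0.0,
                            pitch_um=1.12, n=32):
    """2D Gaussian responsivity model confined to one pixel area (pitch_um x pitch_um),
    sampled on an n x n grid and normalized to unit sum."""
    coords = (np.arange(n) - (n - 1) / 2.0) * pitch_um / n
    X, Y = np.meshgrid(coords, coords)
    sigma_x = fwhm_x_um / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sigma_y = fwhm_y_um / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-(X - x0_um) ** 2 / (2 * sigma_x ** 2)
               - (Y - y0_um) ** 2 / (2 * sigma_y ** 2))
    return g / g.sum()

# Example: the converged estimate with ~550 nm FWHM in both directions
pf = gaussian_pixel_function(0.55, 0.55)
```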

Figure 4

Computational approach for pixel function estimation.

(a) Shows the block diagram of our pixel function estimation steps using a computational method. (b) Shows the reconstructed images when an optimized pixel function is used in the hologram deconvolution step. Three representative objects are illustrated: horizontally and vertically oriented grating lines (top and middle) and a helical multi-walled carbon nanotube (bottom). The insets show the estimated pixel function of the 1.12 μm CMOS sensor chip.

Lensfree on-chip imaging results obtained with 6.8 μm CCD image sensor

Using the CCD sensor-array, we imaged a 1951 USAF resolution test chart to quantify the resolution improvement of our holographic microscope when the pixel super-resolution and hologram deconvolution steps are utilized. In these experiments, the illumination wavelength was 480 nm (illumination bandwidth ~4 nm) and the objects were located ~390 μm away from the CCD image sensor active area. Fig. 5.a shows the amplitude image of a reconstructed hologram obtained from only one lensfree hologram measurement, i.e., without pixel super-resolution. The thinnest resolved grating lines are within group 6 element 4, which corresponds to a half-pitch resolution of ~5.52 μm and an NA of ~0.04. After applying only pixel super-resolution, the amplitude image of a reconstructed hologram exhibits a major improvement in resolution (see Fig. 5.b). The entire group 7 can now be resolved, which translates to a half-pitch resolution of ~2.2 μm and an NA of ~0.11. In group 8, the horizontal lines of elements 1 and 2 are also resolved, while the vertical lines cannot be resolved, as indicated by the yellow cross sections in the same figure. Fig. 5.c shows the amplitude image of a reconstructed lensfree hologram after applying hologram deconvolution. To deconvolve the image we used 35 iterations of the MATLAB built-in routine deconvblind, using the measured pixel function described earlier as the initial guess34,36. The horizontal and vertical lines in group 8 elements 1 and 2 are now resolved, as indicated by the cross sections in the image, which translates to a half-pitch resolution of ~1.74 μm and an NA of ~0.14. Overall, after applying pixel super-resolution and hologram deconvolution with the 2D pixel function, the NA of the lensfree holographic microscope improves by a factor of ~3 compared to a single lensfree hologram. Therefore, the effective pixel size is also reduced from λ/0.16 to λ/0.56, yielding an increase in the pixel count by a factor of ~12. Moreover, this resolution improvement does not compromise the FOV, and therefore with our 6.8 μm, 40 Mega-pixel CCD image sensor the effective pixel count over a FOV of 18 cm² reaches ~2.52 Giga-pixels when e.g., a 480 nm illumination wavelength is used.
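For reference, the half-pitch values quoted above follow directly from the 1951 USAF group/element definition, and the NA values follow from NA ≈ λ/(2 × half-pitch). The short script below is a reader-side check of this arithmetic (with 480 nm illumination), not part of the original analysis.

```python
# Half-pitch of a 1951 USAF target element and the NA implied by resolving it,
# using NA ~ wavelength / (2 x half-pitch), with 480 nm illumination.
def usaf_half_pitch_um(group, element):
    line_pairs_per_mm = 2 ** (group + (element - 1) / 6.0)
    return 1000.0 / (2.0 * line_pairs_per_mm)   # half of one line pair, in um

wavelength_um = 0.480
for group, element in [(6, 4), (7, 6), (8, 2)]:
    half_pitch = usaf_half_pitch_um(group, element)
    print(f"group {group} element {element}: "
          f"{half_pitch:.2f} um half-pitch, NA ~ {wavelength_um / (2 * half_pitch):.2f}")
# -> ~5.52 um (NA ~0.04), ~2.19 um (NA ~0.11), ~1.74 um (NA ~0.14)
```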

Figure 5

Lensfree on-chip imaging results obtained with a 6.8 μm CCD image sensor demonstrating an NA of ~0.14 over a field-of-view of ~18 cm².

(a) Shows a lensfree amplitude image, which was reconstructed from a single lensfree hologram without using pixel super-resolution. (b) Shows a lensfree amplitude image, which was reconstructed from a pixel super-resolved lensfree hologram without the deconvolution step. The horizontal lines of group 8 elements 1 and 2 were resolved, while the vertical lines were not resolved, as indicated by the yellow cross sections in the image. (c) Shows a lensfree amplitude image, which was reconstructed from a pixel super-resolved hologram with the deconvolution step using the estimated pixel function (see Fig. 3.d) before the final reconstruction step. The vertical lines in group 8 elements 1 and 2 are now resolved, as indicated by the cross sections in the image, which corresponds to a half-pitch resolution of ~1.74 μm and an NA of ~0.14.

Lensfree on-chip imaging results obtained with 1.12 μm CMOS image sensor

Using the 1.12 μm CMOS image sensor, we imaged 225 nm grating lines (fabricated using FIB) and helical MWCNTs at an illumination wavelength of 372 nm. With hologram deconvolution based on the estimated pixel function (Fig. 2.b), both the grating lines and the helical MWCNTs can be clearly resolved, as illustrated in Fig. 6. At an illumination wavelength of 372 nm, resolving a grating with a 225 nm line-width corresponds to an NA of ~0.83, which once again confirms an improvement factor of ~3 compared to a single lensfree holographic image. Stated differently, using the pixel super-resolution and hologram deconvolution steps on the 1.12 μm CMOS sensor chip, the effective pixel size can be reduced from λ/1.08 to λ/3.32. Such a pixel size reduction yields an increase in the effective pixel density by a factor of ~9.4. Therefore, with our 16.4 Mega-pixel CMOS image sensor we achieve an effective pixel count of 1.64 billion over a FOV of ~20 mm² when e.g., a 372 nm illumination wavelength is used.

Figure 6

Lensfree on-chip imaging results obtained with a 1.12 μm CMOS image sensor demonstrating an NA of ~0.83 over a field-of-view of ~20 mm².

(a) Shows lensfree images reconstructed from super-resolved holograms without deconvolution. (b) Shows lensfree images reconstructed from super-resolved and deconvolved holograms using the optimized pixel function shown in the inset of Fig. 4.b. (c) Top: a conventional optical microscope image (60× water immersion objective, NA = 1) of a grating with 225 nm line-width. Bottom: an SEM image of a helical carbon nanotube that is 160 nm in diameter. Note that in the SEM image, the carbon nanotube is coated with a 20 nm metal layer (deposited after lensfree imaging) and therefore the observed carbon nanotube diameter appears larger in the SEM.

Discussion

The image sensor properties play a critical role in lensfree imaging performance, especially for implementing pixel super-resolution. In this work, we shed more light onto this effect and reported that by using an estimated 2D pixel function of an image sensor-array as an input to the lensfree holographic image reconstruction steps, pixel super-resolution can improve the NA of the reconstructed images by a factor of ~3 compared to a raw lensfree image. We confirmed this improvement factor using two different image sensors that significantly vary in their designs, i.e., a monochrome CCD and a color CMOS image sensor. Using the CCD image sensor-array (pixel size of 6.8 μm), we achieved an NA of ~0.14 across an ultra-large field-of-view (FOV) of ~18 cm², yielding a super-resolved effective pixel size of λ/0.56; whereas using the CMOS image sensor-array (pixel size of 1.12 μm), we achieved an NA of ~0.83 across a FOV of ~20 mm², yielding a super-resolved effective pixel size of λ/3.32. Furthermore, by adopting a short illumination wavelength (λ = 372 nm), a record high spatial resolution for lensfree on-chip imaging is obtained with the same CMOS sensor: a grating with a line-width of 225 nm is resolved and a helical MWCNT with a diameter of ~160 nm is successfully imaged.

An interesting observation in these results is a sensor-chip independent NA improvement factor of ~3, which is achieved by utilizing pixel super-resolution and hologram deconvolution. Furthermore, compared to the original pixel count of each sensor-chip, our pixel super-resolved lensfree images demonstrate a pixel count increase of (3.81 μm/λ)² and (3.72 μm/λ)², for our CCD and CMOS imagers respectively, which empirically point to roughly the same space-bandwidth improvement factor. We believe that a similar level of space-bandwidth improvement can in general be maintained in lensfree on-chip imaging even if the image sensors differ in their technologies (CMOS vs. CCD), pixel pitches, detection architectures (e.g., back illuminated vs. front illuminated) and imaging applications (color vs. monochrome).

Finally, we would like to emphasize that in our hologram deconvolution process, higher spatial frequencies that are normally undersampled and suppressed are boosted; as a direct consequence, the noise is also amplified. Different deconvolution algorithms might better handle this noise amplification problem, and therefore future research on the optimization of hologram deconvolution steps could improve our results, since most of the existing deconvolution codes are optimized for photography applications and not for holography37.

Methods

Experimental setup for measuring the pixel function

The scanning microscopy system used for the CCD pixel function measurement was composed of a bright field microscope in reflection mode (Olympus, BX51), an X-Y-Z piezo stage (PI, 611.3S) and an LED (λ = 470 nm, Mightex, FCS-0470-000) that was butt-coupled to a single mode fiber (ThorLabs, P1-630A-FC2). To create the illumination spot, the eyepiece of the microscope was removed and the fiber end was mounted in its place, while allowing movement of the fiber along only one axis (toward and away from the microscope). In this configuration the image of the fiber end is demagnified and projected onto the object plane. The demagnification factor used in our setup was 20×, as determined by the objective lens in use. To independently verify the illumination spot size and to focus the spot on the KAF 39 MP image sensor active plane, a calibration step was performed. In this step, the focal plane of the projected image of the fiber end was calibrated to coincide with the focal plane of the bright field microscope in reflection mode. The calibration was done by placing a reflective metal surface on the microscope stage and focusing the bright field microscope on this reflecting surface. Then, the microscope lamp was turned off while the LED was turned on, thus creating a spot on the reflective surface. By moving the fiber in the eyepiece toward and away from the microscope, the minimum spot size was found. The fiber was then fixed at the position that corresponds to the minimum illumination spot size, in order to ensure that the illumination focal plane would coincide with the microscope focal plane. The FWHM of the illumination spot after this calibration step was ~1.4 μm, as shown in Fig. 3.a.

Next, the CCD image sensor chip was placed on top of the X-Y-Z stage, which was itself placed onto the microscope stage. By turning the LED on and observing the image sensor using the reflection microscope, the spot position within the pixel area could be determined. As an example, Fig. 3.b shows the reflection microscope image and the illumination spot (the bright spot in the upper right corner). To probe a specific location within a single pixel area, the X-Y-Z piezo stage was used to change the relative position of the illumination spot and to correct for possible focus drifts. A non-uniform scanning pattern was utilized to better (i.e., with more measurements) sample the spatial regions that showed rapid transitions in responsivity. After a specific location was selected, the bright field microscope illumination was turned off while the LED (positioned in the eyepiece) was kept on. In this configuration, the narrow spot illuminated only a single pixel while the KAF 39 MP image sensor acquired an image. To reduce noise, multiple measurements/frames (~10) were averaged for the same spot location. It should also be noted that, to reduce the intensity of the illumination spot and to avoid saturation while capturing a CCD image, a neutral density filter placed in the filter cube of the microscope was brought into the illumination path of our set-up.

From raw CCD measurements to pixel function estimation

After probing the area of a single CCD pixel at 54 locations (see Fig. 3.b), the pixel function was estimated using the following steps. First, the relative position of each measurement was determined by finding the correlation peak between a Gaussian spot and the blue channel of the microscope image. The blue channel was selected since it provided higher contrast between the illumination spot and the pixel structure. Second, the image was shifted by three pixels to compensate for a systematic bias caused by the positioning of the neutral density filter in the illumination path. Third, the measurements were interpolated in 2D to obtain the resulting pixel function shown in Fig. 3.c.
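A minimal sketch of this interpolation step is given below, assuming the 54 probe positions are available as (x, y) coordinates in micrometers together with the averaged pixel responses; the interpolation method, grid size and normalization are our own choices rather than the exact procedure used in this work.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_pixel_function(probe_xy_um, responses, pitch_um=6.8, n=68):
    """Interpolate the sparsely probed sub-pixel responses onto a regular grid
    covering one 6.8 um pixel; points outside the convex hull are set to zero."""
    axis = np.linspace(0.0, pitch_um, n)
    X, Y = np.meshgrid(axis, axis)
    pf = griddata(probe_xy_um, responses, (X, Y), method='cubic', fill_value=0.0)
    pf = np.clip(pf, 0.0, None)
    return pf / pf.sum()
```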

To deconvolve the lensfree holograms of the test objects with the estimated pixel function, we used the built-in MATLAB routine deconvblind. This routine implements maximum likelihood estimation of both the blur kernel (the pixel function in our case) and the unblurred image using the expectation-maximization algorithm34,35. The measured pixel function (Fig. 3.c) serves as an initial guess for the algorithm and, after 35 iterations, the unblurred hologram and the refined pixel function (see Fig. 3.d) are provided as outputs.
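The same alternating maximum-likelihood idea can be sketched with a blind Richardson-Lucy loop, as below. This is a simplified open-source stand-in for MATLAB's deconvblind, not the routine itself; the padding scheme, normalization and regularization constant are our own choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def _rl_update(estimate, kernel, observed):
    """One Richardson-Lucy multiplicative update for the model observed ~ estimate * kernel (2D convolution)."""
    blurred = fftconvolve(estimate, kernel, mode='same')
    ratio = observed / np.maximum(blurred, 1e-12)
    return estimate * fftconvolve(ratio, kernel[::-1, ::-1], mode='same')

def blind_deconvolve(hologram, pixel_function_init, iterations=35):
    """Alternating blind Richardson-Lucy deconvolution: the pixel function and the
    deblurred hologram are refined in turn, starting from the measured pixel function."""
    psf = np.zeros_like(hologram, dtype=float)
    r0 = (hologram.shape[0] - pixel_function_init.shape[0]) // 2
    c0 = (hologram.shape[1] - pixel_function_init.shape[1]) // 2
    psf[r0:r0 + pixel_function_init.shape[0],
        c0:c0 + pixel_function_init.shape[1]] = pixel_function_init
    psf /= psf.sum()
    image = hologram.astype(float)
    for _ in range(iterations):
        psf = _rl_update(psf, image, hologram)    # refine the pixel function
        psf /= psf.sum()
        image = _rl_update(image, psf, hologram)  # refine the deblurred hologram
    return image, psf
```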

Computational method for 1.12 μm CMOS sensor pixel function estimation

Based on the assumption that a 2D Gaussian distribution can be used to approximate our CMOS pixel function, we optimized the vertical and horizontal FWHM values of this distribution and its center position within the pixel area. During this optimization process, both the FWHM values and the center position were numerically scanned and their corresponding 2D Gaussian distributions were used in the hologram deconvolution step, which employs a Wiener deconvolution algorithm42. The deconvolved holograms were then back-propagated using an angular spectrum approach to reconstruct the objects19,43. By evaluating all the reconstructed images, one can find an optimized Gaussian distribution, which can be considered an approximation to the actual CMOS pixel function.
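A minimal frequency-domain Wiener deconvolution with a candidate pixel function could look like the sketch below; the noise-to-signal ratio used as the regularizer is an assumed value, not a parameter reported in this work.

```python
import numpy as np

def wiener_deconvolve(hologram, pixel_function, nsr=1e-2):
    """Wiener deconvolution of a super-resolved hologram with a candidate pixel
    function; nsr is the assumed noise-to-signal power ratio (regularizer)."""
    psf = np.zeros_like(hologram, dtype=float)
    r0 = (hologram.shape[0] - pixel_function.shape[0]) // 2
    c0 = (hologram.shape[1] - pixel_function.shape[1]) // 2
    psf[r0:r0 + pixel_function.shape[0],
        c0:c0 + pixel_function.shape[1]] = pixel_function / pixel_function.sum()
    H = np.fft.fft2(np.fft.ifftshift(psf))    # transfer function of the pixel function
    G = np.fft.fft2(hologram)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```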

To evaluate the reconstruction results, two types of known objects were chosen: (1) periodic grating lines fabricated onto a glass substrate using focused ion beam (FIB) milling; and (2) helical MWCNTs (CheapTubes Inc.), which were smeared onto a thin glass substrate (~50 μm). During the lensfree imaging process, the vertical distance between the objects and the image sensor surface was on the order of 50–150 μm. This gap between the substrate and image sensor planes was filled with a refractive index matching oil to minimize reflection losses and increase the effective NA.

Grating lines exhibit a strong signal-to-noise ratio (SNR) at specific spatial frequencies due to their periodic structure. As expected, the reconstruction results of grating lines show a strong orientation dependency: when vertically oriented grating lines are imaged, the parameters of the horizontal pixel distribution drastically affect the reconstructed image, while the parameters of the vertical pixel distribution do not exhibit such a strong effect on the reconstruction results. Considering this orientation sensitivity, in our lensfree imaging experiments each set of grating lines was imaged in both horizontal and vertical orientations in order to find the optimized pixel parameters in both directions. The reconstructed images are then evaluated by measuring the corresponding modulation depth at the period of the grating lines.

We should also emphasize that the grating lines alone are not sufficient for searching the globally optimized pixel parameter space, since gratings are inherently limited in terms of their spatial frequency content, which might lead to locally optimized pixel functions. To avoid such a bias, besides grating lines with various periods, we also used the helical MWCNTs for the pixel function parameter scan. These helical MWCNTs vary in their widths (e.g., ~100–200 nm) and morphologies and are therefore quite rich in spatial frequency content. To verify our results, the same MWCNTs imaged with our lensfree microscope were also imaged using scanning electron microscopy (SEM) to confirm their widths and morphologies. After reconstruction of a lensfree amplitude image, cross-section profiles are taken across the entire imaged MWCNT and the average width of these cross-sections is used for evaluation of the success (i.e., the cost function) of our reconstruction.

During the digital search for the optimal pixel function, each individual object might yield its own ‘locally optimized’ pixel distribution. Since the pixel function should be independent of the objects, we combined all the evaluation results for the different objects within our cost function and searched for a ‘globally optimized’ pixel distribution. The pixel distribution which gave the maximum overall modulation depth in the grating samples and the minimum overall line-width in the MWCNT samples was considered our converged pixel function.
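One way to express such a combined cost function is sketched below. The specific modulation-depth and line-width estimators, and the equal weighting of the two terms, are our own simplifying assumptions rather than the exact criteria used in this work.

```python
import numpy as np

def modulation_depth(profile):
    """Simple contrast measure of a reconstructed grating cross-section."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min() + 1e-12)

def line_width(profile, pixel_um):
    """FWHM of an (absorbing) nanotube cross-section, in micrometers."""
    peak = profile.max() - profile          # invert so the dark line becomes a peak
    above = np.where(peak >= 0.5 * peak.max())[0]
    return (above[-1] - above[0]) * pixel_um

def combined_cost(grating_profiles, nanotube_profiles, pixel_um):
    """Lower is better: reward deep grating modulation, penalize wide nanotubes.
    The equal weighting of the two terms is an assumption."""
    depth = np.mean([modulation_depth(p) for p in grating_profiles])
    width = np.mean([line_width(p, pixel_um) for p in nanotube_profiles])
    return width - depth
```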

Implementation of pixel super-resolution in lensfree on-chip holography

Pixel super-resolution is a computational method to overcome undersampling of an image due to, for example, the physical pixel size of the image sensor-chip28,31,44,45. It aims to generate a high-resolution image from a stack of lower resolution images. Each image in the lower resolution stack should be of the same object; however, each image should also be translated with respect to the other images in the stack, thus containing new undersampled information about the object of interest. In our experimental setup we used lateral movements of the light source, which was mounted on an X-Y stage (Newport, SMC100PP), in order to achieve sub-pixel shifts of the lensfree holograms on the sensor-chip. The high-resolution lensfree hologram is then synthesized by first digitally estimating the shifts between the acquired lensfree holograms in the stack using an iterative gradient method29. After these shifts are calculated, we use a non-iterative method to synthesize the high-resolution lensfree hologram (with a much smaller effective pixel size), while preserving the optimality of the reconstruction in the maximum-likelihood sense45. Pixel super-resolution performs very well with either monochrome or color image sensors (see e.g., Figs. 5 and 6); however, for color image sensors minor modifications are required, as detailed in refs 46,47.
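A simplified shift-and-add version of this synthesis step is sketched below for illustration; it is not the maximum-likelihood implementation of ref. 45, and rounding each estimated sub-pixel shift to the nearest high-resolution grid point is our own simplification.

```python
import numpy as np

def shift_and_add(low_res_frames, shifts, factor):
    """Simplified pixel super-resolution: each low-resolution frame is placed onto a
    'factor'-times finer grid according to its estimated sub-pixel shift (given in
    low-resolution pixel units) and overlapping contributions are averaged."""
    h, w = low_res_frames[0].shape
    high_res = np.zeros((h * factor, w * factor))
    weights = np.zeros_like(high_res)
    for frame, (dy, dx) in zip(low_res_frames, shifts):
        oy = int(round(dy * factor)) % factor   # nearest high-res grid offset (rows)
        ox = int(round(dx * factor)) % factor   # nearest high-res grid offset (cols)
        high_res[oy::factor, ox::factor] += frame
        weights[oy::factor, ox::factor] += 1.0
    filled = weights > 0
    high_res[filled] /= weights[filled]
    return high_res
```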

Hologram reconstruction and phase recovery

To reconstruct pixel super-resolved lensfree holograms, the holograms are first multiplied by a reference wave, which can be approximated as a plane wave in our experimental setup19. Then, the holograms are back-propagated to the object plane by using the angular spectrum approach43. The resulting back-propagated field is complex and contains both the phase and amplitude information of the imaged objects. It also contains a noise term commonly referred to as the twin image noise, which is unavoidable in an in-line holography geometry. This twin image noise can be mitigated by using an object-support based phase-recovery approach: an iterative process that alternates between the object and the hologram planes, enforcing a unique constraint in each of these planes19,29. For example, in the hologram plane the enforced constraint is the measured intensity, while in the object plane the enforced constraint suppresses the field to a constant value outside the object support, keeping the field unchanged within the object support19,29. The object support can be estimated by a simple threshold in the object domain, and this phase recovery process typically converges after ~10–15 iterations. Recently, a multi-height based lensfree imaging technique has also been demonstrated for on-chip microscopy to entirely eliminate this object support step, which is especially advantageous for imaging dense and connected specimens48,49.
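A compact sketch of the angular spectrum back-propagation and the object-support phase-recovery loop described above is given below; the function names, the handling of evanescent components and the fixed iteration count are our own choices, under the assumption of a plane-wave reference.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, z, dx):
    """Propagate a complex field over a distance z (same units as wavelength and dx)
    using the angular spectrum method; evanescent components are discarded."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def phase_recovery(hologram_amplitude, support, wavelength, z, dx, iterations=15):
    """Object-support based twin-image suppression: iterate between the sensor and
    object planes, keeping the measured amplitude at the sensor plane and forcing
    the field outside the object support toward a constant background value."""
    field = hologram_amplitude.astype(complex)
    for _ in range(iterations):
        obj = angular_spectrum_propagate(field, wavelength, -z, dx)   # to object plane
        obj[~support] = obj[~support].mean()                          # constant outside support
        field = angular_spectrum_propagate(obj, wavelength, z, dx)    # back to sensor plane
        field = hologram_amplitude * np.exp(1j * np.angle(field))     # enforce measured amplitude
    return angular_spectrum_propagate(field, wavelength, -z, dx)
```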