Introduction

Quantitative phase imaging (QPI) is an invaluable tool for biomedical research thanks to its unique capabilities for long-term in vivo imaging1,2,3 without the need for specific labeling4,5. QPI has been used successfully in optical microscopy for several decades, often with modified microscope systems, such as digital holography (DH)6,7,8,9, phase-shifting interferometry10,11, the transport-of-intensity equation (TIE)1,12,13, and differential phase contrast (DPC)14,15. However, the space-bandwidth product (SBP) of conventional optical microscopes is fundamentally limited by their optical configurations, leading to a trade-off between image resolution and field-of-view (FOV) that fails to meet the growing demands of high-throughput biomedical applications16,17. To address this issue, high-precision mechanical scanning stages and stitching algorithms are used to extend the narrow FOV of high-magnification objectives, which not only complicates the imaging procedure but also significantly increases the overall cost of these systems18,19.

Several computational microscopy techniques have been developed in recent years to address these issues, such as synthetic aperture interference microscopy20,21,22,23,24,25, Fourier ptychographic microscopy (FPM)26,27,28,29,30, and lens-free on-chip microscopy (LFOCM)31,32,33,34,35,36, which can produce high-resolution, large-FOV images without the need for any mechanical scanning platform or stitching algorithm. Among these approaches, LFOCM offers the unique advantage of eliminating potential optical and chromatic aberrations by removing all lenses and microscope objectives from its compact optical configuration.

LFOCM is a typical computational optical imaging system whose resolution is determined by both the sampling frequency and the coherent diffraction limit. Although a large effective numerical aperture (NA ~ 1)37 across the native FOV of the imaging sensor (tens of mm2) can be achieved using the so-called unit-magnification configuration, the attained resolution still falls short of the ideal coherent diffraction limit (NA ~ 1). According to the Nyquist-Shannon sampling theorem38, the resolution of the holographic reconstruction is fundamentally limited by the sampling resolution of the CMOS sensor (pixel size). Nevertheless, the pixel size of available sensors still fails to meet the rapidly growing demands of LFOCM owing to the constraints of semiconductor manufacturing technology. Another way to overcome the sampling-frequency limitation and thus increase the resolution is pixel super-resolution, which involves sub-pixel shifting39,40, active parallel plate scanning41, multi-height measurements42,43, and multi-wavelength scanning32. The original LFOCM system employed a highly coherent laser source to generate holograms for the reconstruction of phase and amplitude information. However, because the incoherent diffraction limit is smaller than the coherent diffraction limit, partially coherent illumination has the potential to achieve higher resolution44,45. Furthermore, the coherence-gating effect under low-coherence illumination helps minimize crosstalk and speckle noise46.

However, as shown in our previous research on the image formation model under partially coherent illumination47, when extended incoherent illumination with low spatial coherence is employed, the superposition of the diffuse spots generated by different object points leads to a blurred diffraction pattern. On the other hand, when polychromatic illumination with low temporal coherence is used, the intensity recorded at the defocused plane can be regarded as the superposition of many diffraction patterns at different wavelengths. In short, the intensity recorded at the image plane under partially coherent illumination can be interpreted as an incoherent superposition of coherent partial images arising from all points of the incoherent source. This phenomenon can be modeled as a convolution of the ideal in-line hologram (arising from an ideal on-axis point source with strict monochromaticity) with a properly resized source intensity distribution48,49,50. Deconvolution algorithms have been proposed to mitigate the high-frequency blur resulting from low coherence51,52. However, in typical LFOCM systems, the pixel-averaging effect within the finite detection area cannot be disregarded. This effect degrades the imaging resolution in conjunction with partial coherence, and a method that addresses this combined limitation to enhance resolution has not yet been presented.

In this paper, we present pixel-super-resolved lens-free quantitative phase microscopy (PSR-LFQPM) with partially coherent illumination. We provide a comprehensive analysis of the impact of spatio-temporal coherence and the pixel-smoothing effect in the LFOCM system. To address these factors, we integrate the spatially coherent transfer function (SCTF) into the iterative process and combine it with a pixel binning model. The resulting technique, termed PSR-LFQPM, achieves pixel-super-resolved [1.41-fold resolution improvement (780 nm)] quantitative phase results across a wide FOV (19.53 mm2) by integrating the image formation model under partially coherent illumination with the multi-wavelength method, effectively suppressing the loss of resolution caused by high-frequency blur in the holograms. Time-lapse imaging of living HeLa cells in vitro is then presented to highlight its ability to image subcellular dynamics over long periods. With its compact design, cost-effectiveness, and high-throughput label-free QPI capability, PSR-LFQPM will be a competitive and promising tool for biomedical applications.

Results

Simulation of quantitative PRT

To ascertain the resolution loss resulting from the high-frequency blur caused by low spatial coherence, Fig. 1 shows numerical simulation results. The parameters are consistent with our setup (pixel size: 0.9 μm, Z2: 1000 μm, wavelengths: 466 nm, 521 nm, 588 nm, 607 nm, and 632 nm). For this configuration, the theoretical phase resolution target (PRT), based on the 1951 USAF resolution target, to be reconstructed is shown in Fig. 1a; it is defined on a grid of 160 × 160 pixels with a pixel size of 0.9 μm × 0.9 μm. The profile of group 8 is chosen to verify the resolution limit. The pre-calibrated intensity distribution of the LED is properly resized to accurately model the point spread function (PSF) of the partially coherent illumination. Figure 1b1–b3 show the phase components obtained by direct back-propagation of holograms recorded at different source-sample distances (Z1 = 3, 6, 9 cm), while Fig. 1c1–c3 show the phase components obtained by direct back-propagation of holograms recorded with illumination of different emitting sizes (Δs = 47, 94, 141 μm). These simulations demonstrate that, as the source-to-sample distance decreases or the emitting area of the source expands, the spatial coherence diminishes, leading to more severe high-frequency blur and reduced resolution. In other words, the partially coherent LED source approaches a quasi-monochromatic point source as the distance between the light source and the sample increases toward infinity. However, an increased distance reduces the illumination power reaching the sensor, requiring a longer exposure time for hologram collection, which introduces additional noise and degrades hologram quality.
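As a quick back-of-the-envelope check of these trends, the extent of the source-induced blur at the sensor can be estimated from the geometric scaling (Z2/Z1)·Δs derived in the Methods; the short Python sketch below applies it to the simulation parameters (the fixed Δs in Fig. 1b and the fixed Z1 in Fig. 1c are not stated above, so the values used here are assumptions).

```python
# Back-of-the-envelope estimate of the source-induced blur at the sensor plane,
# using the geometric scaling Z2/Z1 * delta_s from the Methods section.
z2 = 1000e-6            # sample-to-sensor distance (m)
pixel = 0.9e-6          # sensor pixel size (m)

# Fig. 1b: varying source-to-sample distance Z1 (emitter size assumed here)
delta_s = 141e-6
for z1 in (0.03, 0.06, 0.09):                    # 3, 6, 9 cm
    blur = z2 / z1 * delta_s                     # blur diameter at the sensor
    print(f"Z1 = {z1*100:.0f} cm -> blur ≈ {blur*1e6:.2f} um "
          f"({blur/pixel:.1f} pixels)")

# Fig. 1c: varying emitter size (Z1 assumed equal to the ~9 cm of the setup)
z1 = 0.09
for delta_s in (47e-6, 94e-6, 141e-6):
    blur = z2 / z1 * delta_s
    print(f"Δs = {delta_s*1e6:.0f} um -> blur ≈ {blur*1e6:.2f} um "
          f"({blur/pixel:.1f} pixels)")
```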

Fig. 1: Simulation of quantitative PRT.

a The ground truth of the phase resolution target (PRT) with 160 × 160 pixels and a pixel size of 0.9 × 0.9 μm. b1–b3 The quantitative reconstruction results for different distances between the illumination source and the sample, Z1 = 3, 6, 9 cm. c1–c3 The quantitative reconstructed phase for different diameters of the illumination source, Δs = 47, 94, 141 μm. d The ground truth of the PRT defined on a grid of 128 × 128 pixels with a pixel size of 0.45 × 0.45 μm. d1 Profile curve of d. e The blurred hologram after the convolution and down-sampling process. f The phase retrieved using PSR-LFQPM. f1 Profile curve of f. g The RMSE curve versus the iteration number.

Simulated reconstruction of the PRT validates the pixel-super-resolution QPI capability of PSR-LFQPM. In Fig. 1d, the ground truth of the PRT is defined on a grid of 128 × 128 pixels with a pixel size of 0.45 × 0.45 μm, corresponding to one-half of the sensor pixel size [Element 2 of Group 10 (435 nm)]. The simulated PSF of the illumination fits well to a Gaussian function, as shown in the inset of Fig. 1d. Based on the theoretical analysis of the image formation model under partially coherent illumination, Fig. 1e displays the diffraction pattern recorded by the CMOS sensor after the convolution and down-sampling process. By considering the partially coherent illumination model and introducing the SCTF into the iterative process, the resolution can be accurately reconstructed to match the ground truth [Element 2 of Group 10], as shown in Fig. 1f. In addition, the root-mean-square error (RMSE) curve versus the iteration number is shown in Fig. 1g; the simulation converged after approximately 34 iterations.
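In essence, the simulated measurement in Fig. 1e is generated by blurring the ideal hologram with the Gaussian-fitted source PSF and then binning the result down to the sensor pixel size. The following minimal sketch illustrates this forward step; the Gaussian width sigma_px, the kernel size, and the array names are placeholders rather than the exact simulation parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma_px, size=33):
    """Gaussian-fitted illumination PSF sampled on the 0.45 um high-resolution grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_px**2))
    return psf / psf.sum()

def simulate_measurement(I_ideal, sigma_px, w=2):
    """Blur the ideal in-line hologram with the source PSF, then average w x w
    high-resolution pixels into one 0.9 um sensor pixel (pixel-averaging effect)."""
    blurred = fftconvolve(I_ideal, gaussian_psf(sigma_px), mode="same")
    m, n = blurred.shape
    return blurred[:m - m % w, :n - n % w].reshape(m // w, w, n // w, w).mean(axis=(1, 3))

# e.g. I_meas = simulate_measurement(I_ideal_128x128, sigma_px=2.0, w=2)  # sigma is a placeholder
```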

Comparison results between conventional methods and PSR-LFQPM

Furthermore, based on the same raw measurements under LED illumination, we conducted experiments on the PRT and on HeLa cells (Fig. 2) using a multi-wavelength method53,54 (which does not consider the effect of partial coherence) and PSR-LFQPM, respectively. Before the iterative process, a direct deconvolution step was performed on the up-sampled hologram [Fig. 2a] to obtain the high-contrast initial hologram shown in Fig. 2b. The enlarged areas (white boxes) in Fig. 2a1, b1 demonstrate that the direct deconvolution step yields higher fringe visibility in the holograms. We first performed quantitative phase reconstruction of the PRT [Fig. 2c] using a total of 5 raw images. The enlarged areas in Fig. 2c1, c2 show that, once the effect of partial coherence is taken into account, the resolution is improved by a factor of 1.41, reaching Element 3 of Group 9. The comparison results for HeLa cells in Fig. 2d1, d2 then demonstrate that smaller features of the subcellular structure can be resolved after introducing the SCTF into the iterative process. To confirm the robustness of our method, Visualization 1 shows comparative results for living HeLa cells over an extended period of time, with and without considering the effect of coherence. Compared with the multi-wavelength method, PSR-LFQPM achieves higher-resolution results and reveals more subcellular organelle structures in the cells over several hours.

Fig. 2: Comparison results between conventional methods and PSR-LFQPM.

a The hologram recorded under partially coherent LED illumination. b The hologram after the direct deconvolution process. a1, b1 The enlarged areas in (a) and (b). c Reconstructed results of the PRT. c1, c2 The enlarged area in (c) obtained using the multi-wavelength method and PSR-LFQPM, respectively. d1–e2 The comparison results for HeLa cells.

PSR-LFQPM on HeLa cell cultures over an extended period of time

The proposed method was verified for wide-field, pixel-super-resolution QPI of live HeLa cells in culture over an extended period of time. HeLa cells were cultured in 20-mm glass-bottom dishes with 10% fetal bovine serum. Our compact system allowed in situ observation, as it could be placed directly in the incubator. Visualization 2 presents a real-time video of the HeLa cell reconstruction across the full FOV, demonstrating intercellular interactions and the activity of organelles in live cells over several hours.

The reconstructed full-FOV phase and hologram images at 00:00:01 in Visualization 2 are shown in Fig. 3a. The multi-modal results for the batch of cells in Area 1 are shown in Fig. 3b1–b6. The enlarged phase result of Area 1 is shown in Fig. 3b1. As shown in Fig. 3b2, b3, phase-contrast (PC) and differential interference contrast (DIC) images, respectively, were computed from the retrieved quantitative phase without additional hardware. Figure 3b4 demonstrates the pseudo-three-dimensional (3D) morphology (refractive index accumulated over the cell thickness). Compared with the phase image, the PC and DIC images provide higher contrast and clearer views of organelle movement, as shown in Visualization 2 from 17 s to 21 s. To verify the application potential of our proposed method, Fig. 3b5, b6 illustrate the cell segmentation and cell counting results for the corresponding areas, providing assistance for subsequent cellular analysis and tracking. The formation and disappearance of tunneling nanotubes (TNTs) between different cells over 130 min are demonstrated in Fig. 3c1–c6. High-resolution phase images reveal the cellular morphology at different stages of TNT formation, such as filopodia [Fig. 3c1, c6], single filopodia bridges [SFB, Fig. 3c2, c5], double filopodia bridges [DFB, Fig. 3c3], and stable TNTs [Fig. 3c4]. Figure 3c7, c8 display the mass change curves and motion trajectories of the three cells marked in Fig. 3c1. The experimental results in Fig. 3 show that the proposed PSR-LFQPM can stably provide continuous analysis of any cell in the full FOV during long-term imaging.

Fig. 3: Results of long-term real-time dynamic QPI of HeLa cells in a culture.

a The full-FOV reconstructed phase and hologram. b1 The enlarged phase image of Area 1; b2 the phase-contrast image; b3 the differential interference contrast (DIC) image; b4 the pseudo-3D rendering (refractive index accumulated over cell thickness); b5, b6 the cell segmentation and counting results for the corresponding cell clusters. c1–c6 The formation and disappearance of intercellular tunneling nanotubes (TNTs) in a cell cluster in Area 2 of Fig. 3a. c7, c8 The mass change curves and movement trajectories of the three cells in (c1).

Discussion

In this paper, we have presented a compact and cost-effective LFOCM system with partially coherent illumination, termed pixel-super-resolved lens-free quantitative phase microscopy (PSR-LFQPM). Our proposed method enables pixel-super-resolved, long-term dynamic QPI over a large FOV. Through simulations and experiments, we have confirmed that the accuracy of the reconstructed phase decreases significantly when a source with lower coherence is used, unless the effect of partial coherence is taken into account. PSR-LFQPM effectively mitigates the resolution loss caused by high-frequency blur by incorporating the SCTF, pre-calibrated from the intensity distribution of the LED, into the iterative process. Compared with iterative algorithms that neglect the effect of partially coherent illumination32,54, our method achieves pixel-super-resolved QPI results with a half-pitch resolution of 775 nm across the native FOV of the sensor (19.53 mm2), corresponding to a 1.41-fold resolution improvement. Furthermore, the estimated intensity at the image plane closely resembles the raw measurement. To further illustrate the performance of PSR-LFQPM on biomedical samples, we applied the method to HeLa cell cultures for long-term, wide-field imaging. Our demonstration indicates that the PSR-LFQPM approach offers a high-throughput, compact, and cost-effective tool for biomedical and point-of-care testing (POCT) applications.

However, in LFOCM with partially coherent illumination, solutions that jointly consider both temporal and spatial partial coherence have not been reported so far47,55,56,57. In addition, we implemented a scale conversion of the phase during the iterative process under different wavelength illuminations, which is based on the assumption that the sample exhibits wavelength-independent absorption (i.e., unstained samples)58,59. Moreover, for highly confluent live cells or tissue samples, the superposition of diffraction fringes from different samples reduces the fringe contrast, leading to a loss of resolution. Therefore, further investigation is necessary to explore the potential of PSR-LFQPM for high-quality imaging of highly confluent objects with wavelength-dependent absorption under low spatio-temporal coherence illumination.

Methods

Setup

The developed PSR-LFQPM system with partially coherent illumination (shown in Fig. 4) does not contain any objective lens and can be positioned within an incubator for in situ live-cell observation. The LFOCM system has dimensions of 75 × 110 × 155 mm, and its three fundamental components are a CMOS sensor (5664 × 4256 pixels, pixel size: 0.9 μm, ~24 megapixels, Jiangsu Team one Intelligent Technology Co., Ltd.), a color LED matrix, and narrow-band filters matched to the illumination wavelengths (GCC-2010, bandwidth: ~15 nm, Daheng Optics Co., Ltd.). The color LED matrix consists of five quasi-monochromatic SMD LEDs emitting at different wavelengths (466 nm, 521 nm, 588 nm, 607 nm, 623 nm; ~20–50 nm bandwidth; ~150–240 μm emitting size). The refractive index of our samples is almost constant over the 466–632 nm range (Δn < 0.35%)52, so we disregard the effect of dispersion between the different LED illumination wavelengths. The partially coherent beam passes through a narrow-band filter and travels roughly Z1 (~90 mm) before interacting with the sample, generating a diffraction pattern. The diffraction pattern is recorded by the CMOS sensor, which is placed close to the sample (Z2 ~ 1000 μm) and thus satisfies near-field diffraction theory. According to Zhang et al.'s55 formulas for the temporal-coherence-limited resolution (Eqs. 10 and 12 therein), substituting the specific filter parameters shows that the resolution limit imposed by low temporal coherence improves from qt ≤ 1381 nm to 799 nm, which is finer than the sensor pixel-size limit (~900 nm) (see Supplementary, Fig. S1). Therefore, the resolution is determined solely by the spatial coherence of the illumination and the pixel size of the sensor, as detailed below.

Fig. 4: Pixel-super-resolved lens-free quantitative phase microscopy with partially coherent illumination.

a The optical configuration of the PSR-LFQPM. b Schematic diagram illustrating sequential illumination with LEDs of different wavelengths.

Forward model

Based on our previous work47,55, the image formation model under partially coherent illumination is represented by the following equation

$$I\left({{{\bf{x}}}}\right)=\int\,S\left({{{\bf{u}}}}\right){\left| \int\,T\left({{{{\bf{x}}}}}^{{\prime} }\right)h\left({{{\bf{x}}}}-{{{{\bf{x}}}}}^{{\prime} }\right)\exp (i2\pi {{{{\bf{ux}}}}}^{{\prime} })d{{{{\bf{x}}}}}^{{\prime} }\right| }^{2}d{{{\bf{u}}}}\equiv \int\,S\left({{{\bf{u}}}}\right){I}_{{{{\bf{u}}}}}\left({{{\bf{x}}}}\right)d{{{\bf{u}}}}.$$
(1)

Equation (1) suggests that the intensity captured at the image plane can be interpreted as an incoherent superposition of the coherent partial images \({I}_{{{{\bf{u}}}}}\left({{{\bf{x}}}}\right)\) arising from all points of the incoherent source. Thus, in our LFOCM system, the ultimate diffraction pattern captured by the CMOS sensor can be expressed as

$$\begin{array}{l}{I}_{raw}^{HR}(x,y)=\iint \,I\left(x-\frac{{Z}_{2}}{{Z}_{1}}{x}_{1},y-\frac{{Z}_{2}}{{Z}_{1}}{y}_{1}\right){S}_{i}\left(\frac{{Z}_{1}}{{Z}_{2}}{x}_{1},\frac{{Z}_{1}}{{Z}_{2}}{y}_{1}\right)d{x}_{1}d{y}_{1}\\\qquad\qquad\;\, =I(x,y)\left[{\left(\frac{{Z}_{1}}{{Z}_{2}}\right)}^{2}{S}_{i}\left(\frac{{Z}_{1}}{{Z}_{2}}\cdot x,\frac{{Z}_{1}}{{Z}_{2}}\cdot y\right)\right],\end{array}$$
(2)

where \(I\left(x,y\right)\) is the hologram obtained under direct incidence from the center of the LED source and \({S}_{i}\left({x}_{1},{y}_{1}\right)\) refers to the intensity distribution of the ith LED. Here, (x, y) and (x1, y1) are the coordinates in the sensor plane and the LED plane, respectively. Equation (2) suggests that the recorded intensity \({I}_{raw}^{HR}(x,y)\), influenced by the finite spatial coherence, can be modeled as a convolution of the ideal in-line hologram \(I\left(x,y\right)\) with a point spread function (PSF)

$${I}_{raw}^{HR}(x,y)=I(x,y)\otimes PS{F}_{i}(x,y),$$
(3)

where PSFi(x, y) represents the PSF properly resized from the pre-calibrated intensity distribution \({S}_{i}\left({x}_{1},{y}_{1}\right)\) of the ith LED [Fig. 5b], which can be expressed as

$$PS{F}_{i}(x,y)={\left(\frac{{Z}_{1}}{{Z}_{2}}\right)}^{2}{S}_{i}\left(\frac{{Z}_{1}}{{Z}_{2}}x,\frac{{Z}_{1}}{{Z}_{2}}y\right).$$
(4)

Assuming that the illumination source is extended and has a finite size, such as a uniform disk with a radius of Δs, the hologram generated by a point source at the edge of the illumination is shifted horizontally by \(\frac{{Z}_{2}}{{Z}_{1}}\cdot \Delta s\) relative to that generated by the central point source. The magnitude of this offset is inversely proportional to the distance Z1 between the illumination source and the sample. The combination of these shifted holograms results in a blurred diffraction pattern, decreasing the visibility of high-frequency details. Moreover, in actual systems, the achieved resolution still falls short of the ideal coherent diffraction limit (NA ~ 1) because of the pixel-averaging effect within the finite detection area. The pixel PSF of the sensor is often modeled as a spatial averaging operator Pa. We assume that the finest feature to be captured corresponds to the half-pitch resolution Δp/w, where w ≥ 1, and that this image contains M × N pixels; the actual pixel size of the sensor is then Δp with \(\frac{M}{w}\times \frac{N}{w}\) pixels. Ideal pixel aliasing can be understood as first pixelating the ideal image and then subsampling it. Specifically, it can be modeled as a convolution with the kernel Pa

$${I}_{raw}^{LR}(x,y)={I}_{raw}^{HR}(x,y)\otimes {P}_{a}(w).$$
(5)
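For concreteness, the forward model of Eqs. (3)-(5) can be sketched as follows; the resampling of the calibrated source image via scipy.ndimage.zoom and all function names are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import fftconvolve

def source_to_psf(S_i, z1, z2, px_source, px_hologram):
    """Eq. (4): demagnify the calibrated LED intensity S_i by Z2/Z1 and resample
    it onto the (high-resolution) hologram grid to obtain PSF_i(x, y)."""
    scale = (z2 / z1) * (px_source / px_hologram)   # source pixels -> hologram pixels
    psf = zoom(S_i, scale, order=1)
    return psf / psf.sum()                          # normalized so the blur conserves energy

def pixel_kernel(w):
    """Spatial averaging operator P_a(w): one sensor pixel spans w x w HR pixels."""
    return np.full((w, w), 1.0 / w**2)

def forward_model(I_ideal, psf_i, w):
    """Eq. (3): blur by the finite spatial coherence; Eq. (5): pixel averaging
    followed by subsampling down to the physical sensor grid."""
    I_hr = fftconvolve(I_ideal, psf_i, mode="same")
    I_lr = fftconvolve(I_hr, pixel_kernel(w), mode="same")
    return I_lr[::w, ::w]
```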
Fig. 5: Flowchart of the reconstruction algorithm of the proposed wavelength-scanning pixel super-resolution (PSR) approach.

a Initial guess process. b Up-sampling and direct deconvolution. c Iterative deconvolution phase retrieval process.

Initial guess

A stack of holograms (each with a pixel dimension of m × n) is captured under illumination at different wavelengths. After capturing the raw images, each hologram is up-sampled with bicubic interpolation, which is consistent with the imaging model of the sensor. The pixel dimension of the up-sampled images is M × N with the interpolation factor w (M × N = wm × wn)

$${I}_{raw\_hr}^{i}(x,y)=upsample[{I}_{raw\_lr}^{i}(x,y),w].$$
(6)

The PSFi(x, y) is then used to perform a direct deconvolution step on the up-sampled hologram \({I}_{raw\_hr}^{i}(x,y)\) to eliminate, to some extent, the high-frequency aliasing in the hologram

$${I}_{raw}^{i}(x,y)={{{{\mathcal{F}}}}}^{-1}\left\{{{{\mathcal{F}}}}\left[{I}_{raw\_hr}^{i}(x,y)\right]/\left\{{{{\mathcal{F}}}}\left[PS{F}_{i}(x,y)\right]+\beta \right\}\right\},$$
(7)

where β is a regularization factor. Subsequently, the deconvolved hologram is back-propagated to the object plane using the angular spectrum method

$${U}_{{{{\rm{o}}}}}^{i}={{{{\mathcal{F}}}}}^{-1}\left[{{{\mathcal{F}}}}\left(\sqrt{{I}_{raw}^{i}}\right){H}_{-{Z}_{2}}(\lambda )\right],$$
(8)

where fx and fy are the corresponding spatial frequencies, and \({H}_{-{Z}_{2}}(\lambda )\) is the angular spectrum transfer function

$${H}_{-{Z}_{2}}(\lambda )=\exp \left(-j2\pi {Z}_{2}\sqrt{1/{\lambda }^{2}-{f}_{x}^{2}-{f}_{y}^{2}}\right).$$
(9)
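A minimal sketch of this pre-processing chain (Eqs. (6)-(9)) is given below; the bicubic interpolation via scipy.ndimage.zoom, the value of β, and the function names are placeholder choices rather than the exact implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_bicubic(I_lr, w):
    """Eq. (6): bicubic up-sampling of a raw hologram by the integer factor w."""
    return zoom(I_lr, w, order=3)

def psf_to_otf(psf, shape):
    """Pad the resized source PSF to the hologram size and centre it at pixel (0, 0)."""
    padded = np.zeros(shape)
    py, px = psf.shape
    padded[:py, :px] = psf
    return np.fft.fft2(np.roll(padded, (-(py // 2), -(px // 2)), axis=(0, 1)))

def direct_deconvolution(I_hr, psf, beta=1e-2):
    """Eq. (7): regularized inverse filtering with the resized source PSF;
    beta is the regularization factor (placeholder value)."""
    otf = psf_to_otf(psf, I_hr.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(I_hr) / (otf + beta)))

def angular_spectrum_propagate(field, wavelength, dist, dx):
    """Eqs. (8)-(9): propagate a complex field over `dist` (a negative distance
    back-propagates to the object plane); dx is the high-resolution pixel pitch."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * dist * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# example of the initial back-propagation, Eq. (8):
# U_o = angular_spectrum_propagate(np.sqrt(I_deconv), 466e-9, -1000e-6, 0.45e-6)
```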

Next, the ith intensity \({\left\vert {U}_{{{{\rm{o}}}}}^{i}\right\vert }^{2}\) is registered to the first intensity \({\left\vert {U}_{{{{\rm{o}}}}}^{1}\right\vert }^{2}\). The positional error \(({x}_{shift}^{i},{y}_{shift}^{i})\) is obtained by calculating the cross-correlation between the image to be registered and the reference image using a fast Fourier transform (FFT) and locating its peak60. The ith intensity \({\left\vert {U}_{{{{\rm{o}}}}}^{i}\right\vert }^{2}\) is then shifted by \((-{x}_{shift}^{i},-{y}_{shift}^{i})\) to remove the displacement caused by the illumination offset. Finally, the registered field \({U}_{{{{\rm{o}}}}}^{i}\) is forth-propagated to the image plane

$${I}_{ini}^{i}={{{{\mathcal{F}}}}}^{-1}\left\{{{{\mathcal{F}}}}\left[{U}_{o}^{i}\left(x-{x}_{{{{\rm{shift}}}}}^{i},y-{y}_{{{{\rm{shift}}}}}^{i}\right)\right]{H}_{{Z}_{2}}(\lambda )\right\}.$$
(10)

Finally, all the holograms are superimposed to significantly suppress the twin-image noise, aliasing signals, and up-sampling-related artifacts61,62: \({I}_{ini}=\mathop{\sum }\nolimits_{i}^{m}{I}_{ini}^{i}\) (m is the number of raw measurements), which serves as the high-resolution input of the iterative process. The total processing time for the initial guess is about 25 s.
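The registration and superposition steps can be sketched compactly as follows (integer-pixel FFT cross-correlation only, reusing the angular_spectrum_propagate helper from the sketch above; Eq. (10) is read here as accumulating the propagated hologram intensities):

```python
import numpy as np

def register_by_cross_correlation(moving, reference):
    """Estimate the displacement of `moving` relative to `reference` from the peak
    of their FFT-based cross-correlation (integer-pixel precision in this sketch)."""
    xcorr = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(reference)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape), float)
    dims = np.array(xcorr.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed displacements
    return peak                                       # (y_shift, x_shift)

def initial_guess(U_o_fields, shifts, wavelengths, z2, dx):
    """Shift each back-propagated field by minus its registered offset,
    forth-propagate it to the image plane (Eq. (10)), and sum the resulting
    hologram intensities into the high-resolution initial estimate I_ini."""
    I_ini = 0.0
    for U_o, (dy, dxs), lam in zip(U_o_fields, shifts, wavelengths):
        U_reg = np.roll(U_o, (-int(round(dy)), -int(round(dxs))), axis=(0, 1))
        U_img = angular_spectrum_propagate(U_reg, lam, z2, dx)
        I_ini = I_ini + np.abs(U_img) ** 2
    return I_ini
```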

Iterative deconvolution phase retrieval

The method used in this paper is an improved version of the Richardson-Lucy deconvolution algorithm63 (Fig. 5c, iterative deconvolution phase retrieval), which is better suited to the multi-wavelength illumination system. The high-resolution hologram obtained from the ‘Initial guess’ process acts as the initial input of our iterative algorithm [\({I}_{s}^{1}(x,y)={I}_{ini}\), when i = 1].

Intensity constraint

The ith estimated hologram \({I}_{s}^{i}(x,y)\) is convolved with the measured PSFi(x, y) to obtain the intensity distribution at the image plane

$${I}_{s\_new}^{i}(x,y)={I}_{s}^{i}(x,y)\otimes PS{F}_{i}(x,y).$$
(11)

Then we update the intensity distribution with the corresponding up-sampled hologram, which has been shifted by \((-{x}_{shift}^{i},-{y}_{shift}^{i})\) to compensate for the positional error

$${I}_{s}^{i+1}(x,y)={I}_{s\_new}^{i}(x,y)/{I}_{raw\_\,hr}^{i}(x-{x}_{shift}^{i},y-{y}_{shift}^{i}).$$
(12)

The complex amplitude at the image plane is updated with an adaptive factor α (~0.5) as follows

$${U}_{{{{\rm{s}}}}}^{i+1}=\left[\alpha \sqrt{{I}_{s}^{i+1}}+(1-\alpha )\sqrt{{I}_{{{{\rm{s}}}}}^{i}}\right]\exp \left(j{\varphi }_{{{{\rm{s}}}}}^{i}\right),$$
(13)

which will be back-propagated to the object plane to get the ith estimated object field

$${U}_{{{{\rm{o}}}}}^{i}={{{{\mathcal{F}}}}}^{-1}\left\{{{{\mathcal{F}}}}\left({U}_{{{{\rm{s}}}}}^{i+1}\right){H}_{-{Z}_{2}}(\lambda )\right\}={A}_{{{{\rm{o}}}}}^{i}\exp \left(j{\varphi }_{{{{\rm{o}}}}}^{i}\right).$$
(14)
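Putting Eqs. (11)-(14) together, one intensity-constraint update can be sketched as below; the correction against the registered measurement is written here as a Richardson-Lucy-style ratio update, the helper angular_spectrum_propagate comes from the earlier sketch, and all names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def intensity_constraint_step(I_s, phi_s, I_meas_reg, psf_i, lam, z2, dx, alpha=0.5):
    """One intensity-constraint update of the iterative loop (cf. Eqs. (11)-(14)).
    I_s, phi_s  : current intensity and phase estimate at the image plane
    I_meas_reg  : up-sampled measured hologram, registered by (-x_shift, -y_shift)
    psf_i       : resized source PSF of the i-th LED."""
    I_blur = fftconvolve(I_s, psf_i, mode="same")                 # Eq. (11)
    # correction against the measurement, written as a Richardson-Lucy-style ratio
    I_next = I_s * I_meas_reg / np.maximum(I_blur, 1e-12)         # cf. Eq. (12)
    amp = alpha * np.sqrt(np.maximum(I_next, 0)) + (1 - alpha) * np.sqrt(I_s)
    U_s = amp * np.exp(1j * phi_s)                                # Eq. (13), alpha ~ 0.5
    U_o = angular_spectrum_propagate(U_s, lam, -z2, dx)           # Eq. (14)
    return U_o, I_next
```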

Wavelength conversion

\({U}_{{{{\rm{o}}}}}^{i}\) should then be converted to \({U}_{{{{\rm{o}}}}}^{i+1}\), corresponding to the next wavelength. Assuming that the sample is a weakly scattering object, its absorption is expected to be independent of the illumination wavelength, so the phase component is scaled proportionally (\({\varphi }_{o}^{i+1}=\frac{{\lambda }_{i}}{{\lambda }_{i+1}}{\varphi }_{o}^{i}\)) when the wavelength changes. Specifically, we perform two-dimensional phase unwrapping64 before updating the phase information and obtain the estimated object field for the next iteration, \({U}_{o}^{i+1}=\left\vert {U}_{o}^{i}\right\vert \exp \left(j{\varphi }_{o}^{i+1}\right)\). Finally, \({U}_{{{{\rm{o}}}}}^{i+1}\) is forth-propagated to the image plane to obtain the \({\left(i+1\right)}^{th}\) estimated exit wave

$${U}_{s}^{i+1}={{{{\mathcal{F}}}}}^{-1}\left[{{{\mathcal{F}}}}\left({U}_{{{{\rm{o}}}}}^{i+1}\right){H}_{{Z}_{2}}(\lambda )\right]=\sqrt{{I}_{s}^{i+1}}\exp \left(j{\varphi }_{s}^{i+1}\right).$$
(15)
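A minimal sketch of this wavelength-conversion step, assuming scikit-image's unwrap_phase for the two-dimensional phase unwrapping (the actual unwrapping algorithm of ref. 64 may differ):

```python
import numpy as np
from skimage.restoration import unwrap_phase

def wavelength_conversion(U_o, lam_i, lam_next):
    """Scale the unwrapped object phase by lambda_i / lambda_{i+1} while keeping
    the amplitude unchanged (wavelength-independent absorption assumed)."""
    phi = unwrap_phase(np.angle(U_o))           # two-dimensional phase unwrapping
    phi_next = (lam_i / lam_next) * phi         # proportional phase scaling
    return np.abs(U_o) * np.exp(1j * phi_next)

# the converted field is then forth-propagated to the image plane (Eq. (15)), e.g.
# U_s_next = angular_spectrum_propagate(wavelength_conversion(U_o, 466e-9, 521e-9),
#                                       521e-9, +1000e-6, dx)
```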

The iterative process is repeated until all the holograms have been used; one pass through all the raw measurements is considered one iteration cycle. The complete flowchart of the iterative deconvolution algorithm is shown in Supplementary Chapter 4. We calculate the root-mean-square error (RMSE) between the estimated intensity \({I}_{s\_new}^{i}(x,y)\) at the image plane and the corresponding up-sampled hologram \({I}_{raw\_hr}^{i}(x,y)\) to monitor the convergence of the algorithm as well as to estimate the accuracy of the reconstructed object wavefield (see Supplementary, Fig. S2). The RMSE is given by

$$\varepsilon =\mathop{\sum }\limits_{q=1}^{Q}{\left| \sqrt{{I}_{s\_new}^{i}}-\sqrt{{I}_{raw\_hr}^{i}}\right| }^{2}/\mathop{\sum }\limits_{q=1}^{Q}{I}_{raw\_\,hr}^{i},$$
(16)

where q is the pixel index and Q = M × N is the number of pixels in the image. In the experiments described in this work, the algorithm reached convergence (the RMSE variation dropped below 1%) in about 154 s; this time was further reduced by implementing GPU acceleration instead of running solely in MATLAB on the CPU.
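In code, the convergence metric of Eq. (16) and the stopping rule reduce to a few lines (errors is a placeholder list holding one metric value per iteration cycle):

```python
import numpy as np

def rmse_metric(I_est, I_meas):
    """Eq. (16): squared amplitude residual between the estimated and measured
    holograms, normalized by the total measured intensity."""
    return np.sum((np.sqrt(I_est) - np.sqrt(I_meas)) ** 2) / np.sum(I_meas)

# convergence check after each iteration cycle:
# converged = len(errors) > 1 and abs(errors[-1] - errors[-2]) / errors[-2] < 0.01
```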