Abstract
High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6–8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimens. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm^{2} using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths, to increase the throughput and speed of coherent imaging.
Introduction
Lens-free digital in-line holographic microscopy^{1,2} is a rapidly emerging computational imaging technique, which allows highly compact and high-throughput microscope designs. It is enabled by leveraging constant advances and improvements in microscopy and image reconstruction techniques as well as image sensor technology and computational power, mostly driven by the consumer electronics industry. Its current implementations can achieve gigapixel-level space-bandwidth products by employing cost-effective and field-portable imaging hardware^{1,2,3,4}. In order to keep the imaging setup as compact as possible, the on-chip holographic image acquisition platform employs an in-line holography geometry^{5}, where the scattered object field and the unscattered reference beam co-propagate in the same direction, and the intensity of the interference pattern between these two beams is recorded by a digital image sensor array. Because the recorded hologram only contains the intensity information of the complex optical field, direct back-propagation of this in-line hologram to the object plane will generate a spatial artifact called the twin image on top of the object's original image. Unlike an off-axis holographic imaging geometry^{5}, where the twin image artifact can be robustly removed by angled wave propagation, in-line holography is more susceptible to this twin image related artifact term^{5}. The negative impact of this artifact on image quality is further amplified owing to the small sample-to-sensor distances that are used in on-chip implementations of digital in-line holographic microscopy, where the sample field-of-view is equal to the sensor active area^{2,3,6,7,8,9,10}.
The twin image artifact in digital in-line holography can also be computationally eliminated by imposing physical constraints that the twin image does not satisfy. Based on such constraints, a twin-image-free object can be retrieved through e.g., an iterative error-reduction algorithm^{7,8}. One of the earliest explored constraints for this purpose is the object support, where a threshold defines a 2D object mask, and the back-propagated field on the object plane outside the mask is considered as noise and iteratively removed^{1,8}. Although this simple approach requires a single hologram measurement, this constraint works better for relatively isolated objects, and its implementation is challenging for dense and connected samples, such as pathology slides, which are of significant importance in biomedical diagnosis.
To address this phase retrieval problem^{5,7,8} of in-line holography, it is common to apply measurement diversity, which can include, e.g., multiple sample-to-sensor distances^{9,10,11,12}, illumination angles^{13,14} and wavelengths^{14,15}. However, previous efforts have shown that for imaging of spatially connected and dense biological objects such as pathology slides and tissue samples, the measurements always need to be oversampled in one domain, with several additional images acquired with different physical parameters. For instance, in imaging pathology slides using multi-height measurements, usually 6–8 holograms at different sample-to-sensor distances are required to get high-quality and clinically relevant microscopic reconstructions (amplitude and phase images) of the object^{10}, i.e., the number of measurements is 3–4 times the number of variables, including the amplitude and phase pixels that need to be retrieved in a complex image of the sample. This increase in the number of measurements also increases the data acquisition and processing times, limiting the throughput of the imaging system.
Inspired by the fact that the images of most natural objects, such as biological specimens, can be sparsely represented in some wavelet domain^{16}, here we introduce the use of a sparsity constraint in the wavelet domain, improving multi-height based phase retrieval, to significantly reduce the required number of measurements while maintaining the quality of the reconstructed phase and amplitude images of the objects. We experimentally demonstrate that for densely connected biological samples, such as Papanicolaou smears and breast cancer tissue slides, 2 in-line holograms with different sample-to-sensor distances are sufficient for image reconstruction, when sparsity constraints are applied during the iterative reconstruction process. The resulting reconstructed object image quality is comparable to the ones that can be reconstructed from >4–8 different measurements using conventional multi-height phase recovery methods^{2,9,10}. This means, by making use of a sparsity constraint in our reconstruction, we achieved at least a 2-fold decrease in the number of holograms that need to be acquired. Furthermore, if we consider the number of unknown variables as 2N (i.e., the amplitude and phase images of the object, each with N pixels), the number of measurements is also 2N in this sparsity-based multi-height phase recovery approach, which means the physical measurement space is no longer oversampled, unlike the previous multi-height phase and image recovery approaches. In fact, the additional sparsity constraint in the wavelet domain enables us to go below the 3N − 2 measurement limit that has been previously shown to be the smallest number of measurements needed for robust phase retrieval^{17,18}.
Note that the sparsity constraint in our approach is quite different from the previously reported compressive holography efforts^{19,20,21,22}. These earlier reports imaged isolated objects and have shown that free-space propagation in itself is an extremely efficient encoding mechanism for compressive sensing, allowing the inference of higher dimensional data from traditionally undersampled projections^{23,24,25}. Here, we demonstrate the ability of wavelet domain sparsity encoding for multi-height-based phase recovery and demonstrate its success for clinically relevant dense samples, including highly connected pathology slides of breast tissue and Papanicolaou (Pap) smears that are imaged over a large field-of-view of ~20 mm^{2}, equal to the active area of our image sensor chip. In fact, as a result of this significant difference in the density and connectivity of the object to be imaged, two holograms, rather than a single one, are needed to avoid a loss of image quality.
This sparsity-based phase recovery approach can also be extended to other coherent microscopy schemes, involving e.g., multi-angle^{13} or multi-wavelength-based^{26} phase retrieval. Furthermore, this technique can be potentially combined with a recently introduced phasor approach for high-resolution and wide field-of-view imaging^{27} and/or multiplexed color imaging^{28} to further reduce the number of measurements in these holographic microscopy approaches. Enabled by novel algorithmic processing, this sparsity-based holographic image reconstruction technique can be regarded as another step forward in making lens-free on-chip holography more efficient, higher throughput and more appealing in microscopy-related applications.
Methods
Sparse object representation
Image recovery based on sparsity constraints is a paradigm which has been applied to many different imaging-related tasks such as denoising, inpainting, deblurring, compression and compressed sensing/sampling^{29}. In this framework, we wish to recover the discrete approximation of a complex sample, o, where 2N is the total number of pixels required to represent this complex-valued sample/object with independent phase and amplitude channels, each having N pixels^{5}. The main assumption of sparse image recovery is that the sought signal can be written as a linear combination of a small number of basis elements, such that for o = Ψθ, where Ψ is the sparsifying basis, the number of significant coefficients of θ = (θ_{1}, …, θ_{N}), which are required for accurate signal representation, S, is much smaller than N, i.e., S ≪ N. For dense biological samples including tissue sections and smears that we used in this manuscript, we found that the reconstructed coherent imaging results acquired by using 8 different sample-to-sensor distances (which we consider as our clinically relevant reference standard, as confirmed in an earlier study^{10}) can be accurately represented using a very small S, i.e., ρ = S/N ≈ 0.07–0.15, through a mathematical transformation such as the CDF 9/7 wavelet transform^{30,31}, which is one of the leading non-adaptive image compression techniques, also applied in the JPEG2000 image compression standard. This observation is used as a loose constraint for the number of sparse coefficients to be utilized during our iterative object reconstruction process, which will be detailed below.
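To illustrate how such a sparsity ratio can be estimated, the following Python sketch measures the fraction of wavelet coefficients needed to capture most of an image's energy. It uses a single-level Haar transform as a simple stand-in for the CDF 9/7 transform (which requires a dedicated wavelet library), and the energy threshold is an illustrative choice rather than the paper's exact criterion:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar wavelet transform (a simple stand-in for the
    CDF 9/7 transform used in the text)."""
    a = (img[0::2] + img[1::2]) / 2.0          # vertical averages
    d = (img[0::2] - img[1::2]) / 2.0          # vertical details
    return np.block([[(a[:, 0::2] + a[:, 1::2]) / 2.0,   # LL band
                      (a[:, 0::2] - a[:, 1::2]) / 2.0],  # LH band
                     [(d[:, 0::2] + d[:, 1::2]) / 2.0,   # HL band
                      (d[:, 0::2] - d[:, 1::2]) / 2.0]]) # HH band

def sparsity_ratio(img, keep_energy=0.999):
    """Fraction rho = S/N of wavelet coefficients needed to retain the
    requested share of the signal energy (keep_energy is illustrative)."""
    coeffs = np.sort(np.abs(haar2d(img)).ravel())[::-1]
    energy = np.cumsum(coeffs**2) / np.sum(coeffs**2)
    S = int(np.searchsorted(energy, keep_energy)) + 1
    return S / coeffs.size

# A smooth synthetic "object" compresses well; white noise does not.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
smooth = np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)
noise = np.random.default_rng(0).standard_normal((64, 64))
print(sparsity_ratio(smooth) < sparsity_ratio(noise))  # True
```

In practice a multi-level CDF 9/7 decomposition would compress natural images considerably further than this one-level Haar illustration.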
Another dimension of sparsity-based image reconstruction involves nonlinear operators which can be applied to the signal. One of the common sparsity-promoting operators used in imaging-related applications is known as the total variation norm^{32}, i.e., TV(f), which quantifies the magnitude of the gradient of the signal:

TV(f) = Σ_{k,l} [ |f_{k+1,l} − f_{k,l}|^{2} + |f_{k,l+1} − f_{k,l}|^{2} ]^{1/2}, (1)

where k and l are pixel indexes in the reconstructed image. This operator has been shown to be extremely useful in image processing tasks such as denoising, deblurring and compressed sensing, specifically with holographically acquired data^{19,20,33,34}. A total variation norm based constraint aims to preserve the sharp boundaries of the object, with smooth spatial textures confined between them. In this work, we apply the total variation operator in the wavelet domain in order to suppress noise within our iterative reconstruction process, which will be detailed below.
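The total variation norm TV(f) described in the text can be sketched in a few lines of Python (a numpy-only illustration using forward differences; the trimming of the last row/column is an implementation choice to make the two difference arrays align):

```python
import numpy as np

def tv_norm(f):
    """Isotropic total-variation norm of a 2D (possibly complex) image:
    the sum over pixels of the local gradient magnitude, computed with
    forward differences."""
    dx = np.diff(f, axis=1)[:-1, :]   # horizontal forward differences
    dy = np.diff(f, axis=0)[:, :-1]   # vertical forward differences
    return float(np.sum(np.sqrt(np.abs(dx)**2 + np.abs(dy)**2)))

# A piecewise-constant image has low TV; added noise raises it sharply.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
noisy = img + 0.1 * np.random.default_rng(0).standard_normal(img.shape)
```

This behavior is what makes TV useful as a denoising penalty: minimizing it removes small oscillations while keeping the sharp edge of the square.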
Lens-free on-chip imaging setup
A schematic figure of our lens-free holographic on-chip microscope is shown in Fig. 1. A broadband illumination source (WhiteLase micro, Fianium Ltd.) is filtered by an acousto-optic tunable filter to output partially coherent light within the visible spectrum with a spectral bandwidth of ~2–3 nm. The light is coupled into a single-mode optical fiber, and the emitted light from the fiber tip propagates a distance of ~5–15 cm before impinging on the sample plane, which is mounted on a 3D-printed sample holder. The sample is placed ~300–600 μm above the active area of a CMOS image sensor chip (IMX081, Sony, 1.12 μm pixel size, 16.4 megapixels). In this on-chip imaging configuration, the sample field-of-view is equal to the sensor chip active area, i.e., ~20 mm^{2}. The image sensor is attached to a positioning stage (MAX606, Thorlabs, Inc.), which is used for alignment, image sensor translation (to perform pixel super-resolution) and acquisition of several holograms with different sample-to-sensor distances, z_{i} (i = 1, …, N_{z}). Acquisition of several images with different sample-to-sensor distances generates a series of measurement constraints which are used for multi-height-based phase recovery, detailed in the next subsections. A custom-developed LabVIEW program coordinates the different components of this setup during the entire image acquisition stage.
Hologram acquisition and preprocessing
In our experiments, a series of wide field-of-view and low-resolution (1.12 μm pixel size, before the pixel super-resolution step) holograms were acquired at each sensor-to-sample distance. For a given illumination wavelength, λ, refractive index, n, and sample-to-sensor distance, z_{i}, the hologram formation at the sensor plane can be written as:

I_{z_i}(x, y) = |A + T_{z_i}{o(x, y)}|^{2}, (2)

where o is the complex-valued object function, A is the amplitude of the reference (plane) wave and the operator T_{z_i}{·} is the angular spectrum based free-space propagation of the illuminated object, which can be calculated by taking the spatial Fourier transform of the input signal and then multiplying it by the following filter (defined over the spatial frequency variables υ_{x}, υ_{y}):

H_{z_i}(υ_{x}, υ_{y}) = exp[ j(2πn·z_{i}/λ)·(1 − (λυ_{x}/n)^{2} − (λυ_{y}/n)^{2})^{1/2} ] for (λυ_{x}/n)^{2} + (λυ_{y}/n)^{2} ≤ 1, and 0 otherwise, (3)

which is then followed by an inverse 2D Fourier transform. The hologram intensity, I_{z_i}, is sampled by the imaging sensor chip with a sampling interval which corresponds to the pixel pitch. In order to generate higher resolution holograms, the stage that holds the sensor chip was programmed to move the sensor laterally on a 6 × 6 grid, and at each grid point a lower resolution hologram was acquired. Applying a conjugate-gradient-based pixel super-resolution method^{35,36} on these 6 × 6 holograms results in a new hologram with an effective pixel size of ~0.37 μm. Next, the process is repeated for different sensor-to-sample distances, z_{i} (i = 1, …, N_{z}), in order to create the measurement diversity required for standard phase recovery. Following the hologram acquisition, these super-resolved holograms are digitally aligned with respect to each other and the estimated sample-to-sensor distance is refined using an autofocusing algorithm^{37}.
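The angular spectrum propagation step described above can be sketched in Python as follows (a numpy-only illustration; the wavelength, pixel pitch and propagation distance below are example values taken from the text, and evanescent frequencies are simply zeroed out):

```python
import numpy as np

def angular_spectrum(field, wavelength, z, dx, n=1.0):
    """Free-space propagation of a sampled complex field over a distance z:
    FFT the field, multiply by the angular-spectrum transfer function
    described in the text, inverse FFT. dx is the sampling interval
    (pixel pitch); a negative z propagates backward."""
    ny, nx = field.shape
    VX, VY = np.meshgrid(np.fft.fftfreq(nx, d=dx), np.fft.fftfreq(ny, d=dx))
    arg = (n / wavelength)**2 - VX**2 - VY**2
    # Phase factor for propagating waves; evanescent components are suppressed.
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Forward-then-backward propagation returns the original field, because at
# this pixel pitch (1.12 um) and wavelength all sampled frequencies propagate.
rng = np.random.default_rng(0)
f0 = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
f1 = angular_spectrum(f0, 532e-9, 300e-6, 1.12e-6)
f2 = angular_spectrum(f1, 532e-9, -300e-6, 1.12e-6)
```

The unitarity of this operator (within the propagating band) is what allows the iterative algorithms below to move freely between the object and hologram planes.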
Initial phase estimation using the Transport-of-Intensity Equation (TIE)
Previous reports^{10} have demonstrated that the convergence of the image reconstruction process using lens-free multi-height holograms can be substantially accelerated by solving the TIE^{38,39,40} to obtain an initial phase guess. The TIE is a deterministic phase retrieval method that generates its solution from a set of two or more diffraction patterns or holograms, which are acquired at different sample-to-sensor distances. Unfortunately, this analytical solution is a lower-resolution method and cannot in itself generate a high-resolution object reconstruction in lens-free on-chip microscopy, which is why it is followed by an iterative phase refinement method, as detailed below.
Iterative multi-height phase recovery
The multi-height-based phase recovery approach^{9,10,41} is an iterative error-reduction algorithm which uses the acquired holograms at various sample-to-sensor distances as a set of physical constraints to correct the estimated phase in each iteration. In this algorithm, the lower-resolution phase result obtained by the TIE method is used as the initial phase term to accompany the field amplitude that is acquired at the plane closest to the sensor chip. This newly formed complex field is then numerically propagated to the plane of the next sample-to-sensor distance, where its amplitude is averaged with the square root of the second hologram, and the phase is retained for the next step of the iteration. The same procedure is repeated for all the other acquired holograms at different sample-to-sensor distances, and then the process is repeated in a reverse fashion, i.e., from larger sample-to-sensor distances toward smaller ones, all the way to the first plane, which is the closest to the sensor chip. This iterative algorithm is terminated after typically ~10–30 iterations or once a convergence criterion has been achieved. Following the termination of the iterative process, the refined complex field at the plane closest to the sensor plane is numerically back-propagated to the object plane. This complex-valued result is used as the initial object guess and fed to the sparsity-constrained reconstruction algorithm, which will be discussed in the following subsection.
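The plane-to-plane sweep described above can be sketched as a short Python simulation. This is a simplified illustration, not the paper's implementation: it starts from a zero-phase guess instead of the TIE estimate, and it fully replaces the amplitude at each plane rather than averaging it with the measurement; the object, grid size and distances are hypothetical example values:

```python
import numpy as np

def propagate(field, wavelength, z, dx):
    """Angular-spectrum free-space propagation (negative z goes backward)."""
    ny, nx = field.shape
    VX, VY = np.meshgrid(np.fft.fftfreq(nx, dx), np.fft.fftfreq(ny, dx))
    arg = 1.0 / wavelength**2 - VX**2 - VY**2
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multiheight_recover(holograms, zs, wavelength, dx, n_iter=30):
    """Iterative multi-height error reduction: sweep forward and backward
    through the hologram planes, replacing the field amplitude with the
    measured one and keeping the computed phase at each plane."""
    field = np.sqrt(holograms[0]).astype(complex)   # zero-phase initial guess
    plane = 0
    order = list(range(1, len(zs))) + list(range(len(zs) - 2, -1, -1))
    for _ in range(n_iter):
        for i in order:
            field = propagate(field, wavelength, zs[i] - zs[plane], dx)
            field = np.sqrt(holograms[i]) * np.exp(1j * np.angle(field))
            plane = i
    return propagate(field, wavelength, -zs[plane], dx)  # back to object plane

# Hypothetical demo: simulate a weakly absorbing object at distances in the
# range used in the text, then recover it from intensity-only measurements.
wl, dx = 532e-9, 1.12e-6
obj = np.ones((64, 64), dtype=complex)
obj[24:40, 24:40] = 0.6                              # absorbing square
zs = [300e-6, 400e-6, 500e-6, 600e-6]
holograms = [np.abs(propagate(obj, wl, z, dx))**2 for z in zs]
rec = multiheight_recover(holograms, zs, wl, dx)
```

With enough planes this error-reduction loop drives the residual between the propagated field amplitudes and the measured hologram amplitudes down, which is exactly the behavior the sparsity constraint of the next subsection aims to preserve with fewer planes.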
Sparsity-based multi-height phase recovery
The sparsity-based multi-height phase recovery algorithm, as summarized in the right panel of Fig. 2, can be described as follows:
Step 1 – Perform numerical forward propagation of the current object guess to the i-th hologram plane, starting with i = 1.
Step 2 – Update the magnitude of the propagated field using the measured amplitude (i.e., the square root of the i-th hologram) through a weighted average, and keep the phase of the propagated field. The typical range of values for our update parameter is ~0.5–0.9.
Step 3 – Perform backward angular spectrum propagation of the updated complex field to the object plane.
Step 4 – Project the object function onto the sparsifying (wavelet) domain, which results in a coefficient set θ = Ψ^{T}o, where (·)^{T} refers to the transpose operation.
Step 5 – Apply the object sparsity constraint to the result of Step 4:
(a) Update the sparse support area for the magnitude by keeping the largest S coefficients and updating the remaining (N − S) coefficients. To achieve this, we first define the sparsity support region (Λ), which contains the most significant S coefficients within θ.
(b) Keep all the elements within Λ unchanged.
(c) Reduce the error outside of the sparse support region in the wavelet domain, which is defined by Λ^{C}, by performing the following iterative update: multiply the coefficients in Λ^{C} by a relaxation coefficient β, e.g., β ≈ 0.7–0.9.
(d) Perform total variation denoising in the wavelet domain to achieve two aims: (i) smooth out the regions where the low frequency components of the twin image are more dominant; and (ii) preserve the edges (details) of the objects in the higher frequencies of the image, helping to reduce the measurement noise as well as self-interference related terms. The total variation denoising algorithm can be implemented by using either the original formulation of Rudin–Osher–Fatemi^{32} or Chambolle's algorithm^{42}. We used:

θ̂ = argmin_{θ} { λ·TV(θ) + (1/2)·||θ − θ^{(c)}||_{2}^{2} }, (4)

where θ^{(c)} is the coefficient set resulting from Step 5(c), θ is the variable of the denoising algorithm, TV(θ) is the total-variation norm, which is defined in Equation (1), and ||·||_{2} is the l2-norm, which serves as a fidelity term. The parameter λ is a tuning parameter, which can be adaptively refined^{32} or selected a priori, and it controls the tradeoff between data fidelity and denoising. Generally, we perform ~1–2 iterations in order not to introduce spatial blurring into the reconstruction. Also, since large values of λ favor blurring, λ should be carefully chosen^{43}; we typically set it to ~0.002–0.01 for intensity-normalized biological samples.
Step 6 – Update the object estimate by applying an inverse wavelet transform to the solution of Equation (4), returning to the object space for the next iteration.
Following Step 6, Step 1 is repeated for the next of the N_{z} acquired holograms at different sample-to-sensor distances, i.e., i is incremented. Once the solution has been updated using the magnitude corresponding to the hologram measured at the N_{z}-th plane, we proceed to the next iteration. This algorithm is repeated for ~100 iterations, or until a convergence criterion is met, for example when an update smaller than a predefined tolerance between two consecutive iterations is achieved in either the object or hologram planes.
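The wavelet-domain sparsity projection of Steps 4–6 can be sketched in Python as follows. This is a simplified illustration: a single-level Haar transform stands in for the CDF 9/7 transform, the TV denoising sub-step 5(d) is omitted, and rho and beta are example values in the ranges quoted in the text:

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar analysis (stand-in for the CDF 9/7 transform)."""
    a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    return np.block([[(a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2],
                     [(d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2]])

def ihaar2d(c):
    """Exact inverse of haar2d."""
    h, w = c.shape[0] // 2, c.shape[1] // 2
    ll, lh, hl, hh = c[:h, :w], c[:h, w:], c[h:, :w], c[h:, w:]
    a = np.empty((h, 2 * w), dtype=c.dtype)
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w), dtype=c.dtype)
    x[0::2], x[1::2] = a + d, a - d
    return x

def sparsity_project(obj, rho=0.1, beta=0.8):
    """Steps 4-6 in one shot: transform to the wavelet domain, keep the
    S = rho*N largest-magnitude coefficients (the support Lambda)
    unchanged, shrink the coefficients outside Lambda by beta, and
    transform back to the object space."""
    theta = haar2d(obj)
    S = max(1, int(rho * theta.size))
    thresh = np.sort(np.abs(theta).ravel())[::-1][S - 1]
    support = np.abs(theta) >= thresh            # sparse support region Lambda
    return ihaar2d(np.where(support, theta, beta * theta))
```

Because the shrinkage only attenuates coefficients outside the support, repeated application pushes the estimate toward a sparse wavelet representation without erasing the dominant image structure.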
The entire reconstruction algorithm is implemented using Matlab on a computer with an Intel Xeon E5-2667 v3 3.2 GHz CPU (central processing unit) and 256 GB of RAM (random access memory), running the Windows Server 2012 R2 operating system. For a field-of-view of 1.1 × 1.1 mm^{2}, the entire reconstruction process took ~28 minutes for N_{z} = 2 and 6 × 6 = 36 raw holograms for pixel super-resolution at each height. Implementation of the presented algorithm using a dedicated parallel computing platform and programming environment on a GPU (graphics processing unit) should yield a significant speed improvement^{44}.
Evaluation of the image reconstruction quality
Our image reconstruction quality assessment is based on (i) visual inspection of the results in comparison to our clinically relevant reference coherent images^{10} and (ii) quantitative application of the structural similarity index (SSIM)^{45} to the reconstructed object image. The SSIM has been proven to be more consistent with the human visual system when compared to peak signal-to-noise ratio (PSNR) and mean squared error (MSE)^{45} based image evaluation criteria. The SSIM quantifies the changes in structural information by inspecting the relationship among the image contrast, luminance, and structure components. The contrast is evaluated as the standard deviation of an image:

σ_{p} = [ (1/(N − 1)) Σ_{k} (U_{p}(k) − μ_{p})^{2} ]^{1/2}, (5)

where μ_{p} is the luminance (mean) of the p-th image, U_{p}. The structural measurement is estimated using the cross-covariance between two images that are compared to each other:

σ_{pq} = (1/(N − 1)) Σ_{k} (U_{p}(k) − μ_{p})(U_{q}(k) − μ_{q}). (6)

Based on these definitions, the SSIM between two images is given by:

SSIM(U_{p}, U_{q}) = [ (2μ_{p}μ_{q} + C_{1})(2σ_{pq} + C_{2}) ] / [ (μ_{p}^{2} + μ_{q}^{2} + C_{1})(σ_{p}^{2} + σ_{q}^{2} + C_{2}) ], (7)

where C_{1}, C_{2} are stabilization constants, which prevent division by a small denominator. These coefficients^{45} are selected as C_{1} = (K_{1}L)^{2} and C_{2} = (K_{2}L)^{2}, with K_{1}, K_{2} ≪ 1, and L is the dynamic range of the image, e.g., 255 for an 8-bit grayscale image.
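A whole-image version of this SSIM definition can be sketched in Python as follows (the original SSIM^{45} averages the index over local sliding windows; this single-window form using global image statistics is a simplification, with the commonly used values K1 = 0.01 and K2 = 0.03):

```python
import numpy as np

def ssim_global(p, q, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM from whole-image statistics, combining the
    luminance, contrast and structure terms with stabilization constants
    C1 = (K1*L)^2 and C2 = (K2*L)^2."""
    C1, C2 = (K1 * L)**2, (K2 * L)**2
    mp, mq = p.mean(), q.mean()                        # luminance terms
    sp2, sq2 = p.var(ddof=1), q.var(ddof=1)            # contrast terms
    spq = ((p - mp) * (q - mq)).sum() / (p.size - 1)   # cross-covariance
    return ((2 * mp * mq + C1) * (2 * spq + C2)) / \
           ((mp**2 + mq**2 + C1) * (sp2 + sq2 + C2))

img = np.random.default_rng(2).uniform(0, 255, (64, 64))
print(round(ssim_global(img, img), 6))   # 1.0 for an image compared to itself
```

Any degradation (noise, blur, twin-image artifacts) lowers the cross-covariance term relative to the two variances, pulling the index below 1.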
Sample Preparation
A de-identified Pap smear slide was provided by the UCLA Department of Pathology (Institutional Review Board no. 11-003335) using the ThinPrep® preparation. De-identified Hematoxylin and Eosin (H&E) stained human breast cancer tissue slides were acquired from the UCLA Translational Pathology Core Laboratory. We used existing and anonymous specimens, where no subject related information is linked or can be retrieved.
Results and Discussion
In order to experimentally test the proposed sparsity-based image reconstruction algorithm, we acquired a set of 8 super-resolved holograms at different sample-to-sensor distances (~300–600 μm) corresponding to stained Papanicolaou (Pap) smears as well as H&E stained breast cancer tissue slides. First, we applied the multi-height based iterative phase recovery algorithm to all of these N_{z} = 8 pixel super-resolved holograms in order to obtain clinically relevant^{10} baseline reference images, which are shown in Figs 3(a,d) and 4(a,d). When we attempt to reconstruct the images of the same samples using N_{z} = 2 holograms acquired at different sample-to-sensor distances with the same iterative multi-height phase retrieval algorithm, spatial artifacts appear, as illustrated in Figs 3(b,e) and 4(b,e). However, using the same 2 holograms with the proposed sparsity-constrained multi-height phase retrieval algorithm, the reconstruction results, shown in Figs 3(c,f) and 4(c,f), significantly improve and become comparable in image quality to the reference images. In order to quantify our reconstruction quality, we also calculated the SSIM values for these images, with the results summarized in Table 1. For the Pap smear sample, the standard multi-height reconstruction of N_{z} = 2 pixel super-resolved holograms gave an SSIM value of 0.66, while the sparsity-based reconstruction using the same measurements gave an improved SSIM value of 0.89. Similarly, for the H&E stained breast cancer pathology slide, the SSIM value for the multi-height reconstruction (N_{z} = 2) was 0.73, while for the sparsity-based reconstruction the SSIM value increased to 0.83. Similar improvements in SSIM values using sparsity-based multi-height phase recovery were also observed for N_{z} = 4, as shown in Table 1.
These results illustrate that we gain at least a 2-fold imaging speed improvement through a reduced number of measurements compared to the standard multi-height phase recovery approach, without compromising the spatial resolution or the field-of-view of our on-chip microscope. This, in turn, reduces the data bandwidth and storage related requirements, which is especially important for field-portable implementations of lens-free microscopy tools.
The presented sparsity-based multi-height phase retrieval method could also work without pixel super-resolution. Nevertheless, the use of and the need for pixel super-resolution depend on the targeted resolution in the reconstructed image. In this work, we used a CMOS image sensor that has a pixel size of 1.12 μm, and to achieve a resolution comparable to a conventional benchtop microscope with e.g., a 40X objective lens, we used the pixel super-resolution framework to digitally create effectively smaller pixels. As an alternative to lateral shifts between the hologram and the sensor array planes (which can be achieved by e.g., source shifting, multi-aperture illumination, sample shifting or sensor shifting), wavelength scanning^{26} over a narrow bandwidth (e.g., 10–30 nm) can also be used for a rapid implementation of pixel super-resolution, which also has the advantage of creating a uniform resolution improvement across all directions on the sample plane.
While the presented approach has been demonstrated for multi-height holographic imaging and phase recovery, other types of physical measurement diversity can also be utilized in the same sparse signal recovery framework, such as multi-angle illumination and wavelength scanning^{26,27}, which might benefit various applications in quantitative imaging of live biological samples, such as growing colonies of bacteria, fungi or other types of cells.
It is also important to note that, in addition to the CDF 9/7 wavelet transform that we used in this work, other wavelet transforms can also be used in the same image reconstruction method. As described in the Methods section, an effective way of doing this is to apply several wavelet transforms to a database of pre-acquired images, thus finding the best representation and obtaining a good approximation of the number of coefficients required for sparse signal recovery. Adaptive methods, such as dictionary learning^{46} and optimal basis generation^{47}, which may yield overcomplete linear signal representations, could also be considered within the same framework^{48}.
Since one of the main goals of this work is to use fewer raw measurements while preserving image quality, it is important to choose our measurements in a way that helps us converge to the correct result. Of great importance for a successful reconstruction is a careful initialization of the algorithm. Specifically, it has been shown that when the measurement operator is given by free-space propagation, special emphasis needs to be put on the low spatial frequencies^{20}, which contain most of the information about the sample, and therefore the low frequency wavelet bands cannot be considered sparse. Since the low frequency phase curvature changes slowly as a function of the sample-to-sensor distance^{39}, the distance between the first two measurements should be large enough to capture changes in these low frequencies. However, the distance should not be too large, since the signal-to-noise ratio (SNR) decreases with increasing sample-to-sensor distance. Practically, we found that an axial separation of ~100–150 μm between the two holographic measurements gives the best result for our initialization. On the other hand, to better resolve high spatial frequencies, we should choose a small sample-to-sensor distance, typically ~300 μm for our setup. The closer the acquired hologram, the more suitable it is for sensing sparse frequency components^{24}, while as we take our measurements further away from the object, the reconstruction favors sparse objects, such as point sources and scatterers.
Conclusions
We developed a sparsity-based phase reconstruction algorithm for digital in-line holographic imaging of densely connected samples. This algorithm is capable of reconstructing the amplitude and phase images of biological samples using only 2 holograms acquired at different sample-to-sensor distances, which is at least 2-fold fewer than the number of holograms utilized in previous multi-height phase retrieval approaches. Stated differently, using this sparsity-based holographic phase retrieval method, we demonstrated that the number of reconstructed pixels (i.e., 2N, including the phase and amplitude channels of the sample) can be made equal to the number of measured intensity-only pixels. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm^{2}. This sparsity-based phase retrieval method is also applicable to other high-resolution holographic imaging techniques involving e.g., multiple illumination angles or wavelengths, both of which can be used to enhance the space-bandwidth product of a coherent holographic microscope.
Additional Information
How to cite this article: Rivenson, Y. et al. Sparsity-based multi-height phase recovery in holographic microscopy. Sci. Rep. 6, 37862; doi: 10.1038/srep37862 (2016).
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Mudanyali, O. et al. Compact, light-weight and cost-effective microscope based on lensless incoherent holography for telemedicine applications. Lab Chip 10, 1417–1428 (2010).
2. Greenbaum, A. et al. Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy. Nat. Methods 9, 889–895 (2012).
3. Isikman, S. O., Greenbaum, A., Luo, W., Coskun, A. F. & Ozcan, A. Giga-pixel lensfree holographic microscopy and tomography using color image sensors. PLOS ONE 7, e45044 (2012).
4. McLeod, E., Luo, W., Mudanyali, O., Greenbaum, A. & Ozcan, A. Toward gigapixel nanoscopy on a chip: a computational wide-field look at the nanoscale without the use of lenses. Lab Chip 13, 2028–2035 (2013).
5. Goodman, J. W. Introduction to Fourier Optics, 3rd Edition (Roberts and Company Publishers, 2005).
6. Stern, A. & Javidi, B. Theoretical analysis of three-dimensional imaging and recognition of micro-organisms with a single-exposure on-line holographic microscope. J. Opt. Soc. Am. A 24, 163 (2007).
7. Gerchberg, R. W. & Saxton, W. O. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237 (1972).
8. Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982).
9. Greenbaum, A. & Ozcan, A. Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy. Opt. Express 20, 3129–3143 (2012).
10. Greenbaum, A. et al. Wide-field computational imaging of pathology slides using lens-free on-chip microscopy. Sci. Transl. Med. 6, 267ra175 (2014).
11. Allen, L. J. & Oxley, M. P. Phase retrieval from series of images obtained by defocus variation. Opt. Commun. 199, 65–75 (2001).
12. Almoro, P., Pedrini, G. & Osten, W. Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field. Appl. Opt. 45, 8596 (2006).
13. Luo, W., Greenbaum, A., Zhang, Y. & Ozcan, A. Synthetic aperture-based on-chip microscopy. Light Sci. Appl. 4 (2015).
14. Bao, P., Situ, G., Pedrini, G. & Osten, W. Lensless phase microscopy using phase retrieval with multiple illumination wavelengths. Appl. Opt. 51, 5486–5494 (2012).
15. Min, J. et al. Phase retrieval without unwrapping by single-shot dual-wavelength digital holography. J. Opt. 16, 125409 (2014).
16. Gonzalez, R. C. & Woods, R. E. Digital Image Processing, 3rd Edition, 461–520 (Pearson Education, 2008).
17. Candès, E. J., Strohmer, T. & Voroninski, V. PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 66, 1241–1274 (2013).
18. Finkelstein, J. Pure-state informationally complete and "really" complete measurements. Phys. Rev. A 70, 052107 (2004).
19. Brady, D. J., Choi, K., Marks, D. L., Horisaki, R. & Lim, S. Compressive holography. Opt. Express 17, 13040–13049 (2009).
20. Rivenson, Y., Stern, A. & Javidi, B. Compressive Fresnel holography. J. Disp. Technol. 6, 506–509 (2010).
21. Rivenson, Y., Stern, A. & Javidi, B. Overview of compressive sensing techniques applied in holography [Invited]. Appl. Opt. 52, A423–A432 (2013).
22. Latychevskaia, T. & Fink, H.-W. Solution to the twin image problem in holography. Phys. Rev. Lett. 98, 233901 (2007).
23. Rivenson, Y., Rot, A., Balber, S., Stern, A. & Rosen, J. Recovery of partially occluded objects by applying compressive Fresnel holography. Opt. Lett. 37, 1757 (2012).
24. Rivenson, Y., Stern, A. & Rosen, J. Reconstruction guarantees for compressive tomographic holography. Opt. Lett. 38, 2509 (2013).
25. Rivenson, Y., Aviv (Shalev), M., Weiss, A., Panet, H. & Zalevsky, Z. Digital resampling diversity sparsity constrained-wavefield reconstruction using single-magnitude image. Opt. Lett. 40, 1842 (2015).
26. Luo, W., Zhang, Y., Feizi, A., Göröcs, Z. & Ozcan, A. Pixel super-resolution using wavelength scanning. Light Sci. Appl. 5, e16060 (2016).
27. Luo, W., Zhang, Y., Göröcs, Z., Feizi, A. & Ozcan, A. Propagation phasor approach for holographic image reconstruction. Sci. Rep. 6, 22738 (2016).
28. Wu, Y., Zhang, Y., Luo, W. & Ozcan, A. Demosaiced pixel super-resolution for multiplexed holographic color imaging. Sci. Rep. 6, 28601 (2016).
29. Elad, M., Figueiredo, M. A. T. & Ma, Y. On the role of sparse and redundant representations in image processing. Proc. IEEE 98, 972–982 (2010).
30. Cohen, A., Daubechies, I. & Feauveau, J.-C. Biorthogonal bases of compactly supported wavelets. Commun. Pure Appl. Math. 45, 485–560 (1992).
31. Daubechies, I. Ten Lectures on Wavelets (Society for Industrial and Applied Mathematics, 1992).
32. Rudin, L. I., Osher, S. & Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D: Nonlinear Phenom. 60, 259–268 (1992).
33. Rivenson, Y., Stern, A. & Rosen, J. Compressive multiple view projection incoherent holography. Opt. Express 19, 6109 (2011).
34. Rivenson, Y., Shalev, M. A. & Zalevsky, Z. Compressive Fresnel holography approach for high-resolution viewpoint inference. Opt. Lett. 40, 5606 (2015).
 35.
Barnard, K. J. Highresolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system. Opt. Eng. 37, 247 (1998).
 36.
Bishara, W., Su, T.W., Coskun, A. F. & Ozcan, A. Lensfree onchip microscopy over a wide fieldofview using pixel superresolution. Opt. Express 18, 11181 (2010).
 37.
Memmolo, P., Paturzo, M., Javidi, B., Netti, P. A. & Ferraro, P. Refocusing criterion via sparsity measurements in digital holography. Opt. Lett. 39, 4719 (2014).
 38.
Teague, M. R. Deterministic phase retrieval: a Green’s function solution. J. Opt. Soc. Am. 73, 1434 (1983).
 39.
Paganin, D., Barty, A., McMahon, P. J. & Nugent, K. A. Quantitative phaseamplitude microscopy. III. The effects of noise. J. Microsc. 214, 51–61 (2004).
 40.
Waller, L., Tian, L. & Barbastathis, G. Transport of Intensity imaging with higher order derivatives. Opt. Express 18, 12552 (2010).
 41.
Greenbaum, A., Sikora, U. & Ozcan, A. Fieldportable widefield microscopy of dense samples using multiheight pixel superresolution based lensfree imaging. Lab. Chip 12, 1242–1245 (2012).
 42.
Chambolle, A. An Algorithm for Total Variation Minimization and Applications. J. Math. Imaging Vis. 20, 89–97 (2004).
 43.
Liao, H., Li, F. & Ng, M. K. Selection of regularization parameter in total variation image restoration. J. Opt. Soc. Am. A 26, 2311 (2009).
 44.
Isikman, S. O. et al. Lensfree optical tomographic microscope with a large imaging volume on a chip. Proc. Natl. Acad. Sci. 108, 7296–7301 (2011).
 45.
Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 13, 600–612 (2004).
 46.
KreutzDelgado, K. et al. Dictionary Learning Algorithms for Sparse Representation. Neural Comput. 15, 349–396 (2003).
 47.
Peyre, G. Best Basis Compressed Sensing. IEEE Trans. Signal Process. 58, 2613–2622 (2010).
 48.
Aharon, M., Elad, M. & Bruckstein, A. KSVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. 54, 4311–4322 (2006).
Acknowledgements
The Ozcan Research Group at UCLA gratefully acknowledges the support of the Presidential Early Career Award for Scientists and Engineers (PECASE), the Army Research Office (ARO; W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF EAGER Award, NSF INSPIRE Award, NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, Office of Naval Research (ONR), the National Institutes of Health (NIH), the Howard Hughes Medical Institute (HHMI), Vodafone Americas Foundation, the Mary Kay Foundation, Steven & Alexandra Cohen Foundation, and KAUST. This work is based upon research performed in a laboratory renovated by the National Science Foundation under Grant No. 0963183, which is an award funded under the American Recovery and Reinvestment Act of 2009 (ARRA). Furthermore, Yair Rivenson is supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. H2020-MSCA-IF-2014-659595 (MCMQCT).
Author information
Author notes
Yair Rivenson and Yichen Wu: These authors contributed equally to this work.
Affiliations
Electrical Engineering Department, University of California, Los Angeles, CA, 90095, USA
Yair Rivenson, Yichen Wu, Hongda Wang, Yibo Zhang, Alborz Feizi & Aydogan Ozcan
Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
Yair Rivenson, Yichen Wu, Hongda Wang, Yibo Zhang, Alborz Feizi & Aydogan Ozcan
California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
Yair Rivenson, Yichen Wu, Hongda Wang, Yibo Zhang, Alborz Feizi & Aydogan Ozcan
Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
Aydogan Ozcan
Contributions
Y.R., Y.W. and A.O. discussed and conceptualized the original idea. Y.Z. and H.W. conducted experiments. Y.R., Y.W., Y.Z. and H.W. processed the data. A.F. contributed to experiments and methods. Y.R., Y.W. and A.O. wrote this manuscript. A.O. supervised the research.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Aydogan Ozcan.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Further reading
1. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light: Science & Applications (2018).