Abstract
High-resolution, wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and the physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and suboptimal solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm^{2}) and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist–Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential for applications in biological imaging.
Introduction
High-resolution wide-field optical imaging is an essential tool in various biomedical applications^{1,2}, including cell cycle assay, digital pathology, and high-throughput biological screening. The growing need to digitalize biological slides has facilitated the development of whole slide imaging (WSI) systems. Nevertheless, these systems are built around a conventional microscope and therefore suffer from the inherent trade-off between field-of-view (FOV) and imaging resolution. To obtain an image with both high resolution and large FOV, mechanical scanning and stitching are required to expand the limited FOV of a conventional high-magnification objective^{3}, which not only complicates the imaging procedure but also significantly increases the overall cost of these systems. Recently developed computational microscopy techniques, such as synthetic aperture microscopy^{4,5,6,7}, Fourier ptychographic microscopy (FPM)^{8,9,10}, and lensfree super-resolution holography^{11,12,13}, provide new opportunities to create high-resolution wide-field images without any scanning and stitching. Among these approaches, lensfree super-resolution holography has the unique advantage of achieving a large effective numerical aperture (NA) approaching unity across the native FOV of the imaging sensor, without requiring lenses or other intermediate optical components. This significantly simplifies the imaging setup and, at the same time, effectively circumvents the optical aberrations and chromaticity^{12} inherent in conventional lens-based imaging systems. Besides, the whole system can be built in a miniaturized and cost-effective format, providing a potential solution for reducing health care costs in point-of-care diagnostics in resource-limited environments.
In recent years, numerous lensfree imaging systems have been proposed, and there is a clear trend towards adopting a so-called unit-magnification configuration, in which the sample is placed as close as possible to the imaging sensor. Compared to conventional in-line holographic setups^{14,15,16,17,18}, the unit-magnification configuration not only reduces the demands on the coherence of the illumination source but also offers a significantly larger FOV, equal to the active area of the sensor chip. However, these lensfree holographic microscopes generally suffer from low imaging resolution, which falls far short of the demands of recent biomedical research, particularly with respect to visualizing cellular or subcellular details of biological structures and processes. According to the Nyquist–Shannon sampling theorem^{19}, the resolution of the holographic reconstruction is fundamentally limited by the sampling resolution of the imaging device. In other words, the physical pixel size is the main limiting factor of these lensfree imaging systems. Because of spatial aliasing/undersampling, the imaging sensor fails to record the holographic oscillations corresponding to high-spatial-frequency information of the specimen. Using a sensor with a smaller pixel size can directly alleviate the aliasing problem, unless the pixel design exhibits severe angular distortions that create aberrations for oblique rays^{20}. Nevertheless, physically reducing the pixel size sacrifices signal-to-noise ratio (SNR) due to the reduced external quantum efficiency of a smaller light-sensing area^{21}. Moreover, although smaller pixels are a major development trend in commercial sensor chips, the pixel sizes of available sensors still cannot satisfy the rapidly growing demands of lensfree in-line holographic microscopy due to the obstacles of semiconductor manufacturing technology.
Pixel super-resolution is another way to address this problem, in which a smaller effective pixel size is synthesized from a series of subpixel-shifted low-resolution images through specific computational algorithms^{13,22}. To achieve such subpixel image shifts, either the illumination source or the sample needs to be precisely displaced, which in turn requires extra controllable mechanical devices with very high precision and repeatability^{11,23,24}. Recently, wavelength scanning has been proposed to avoid mechanical subpixel displacement altogether; it requires significantly fewer measurements without sacrificing performance^{25}. However, it needs extra wavelength calibration and dispersion compensation, and the wavelength-tunable light source increases the cost of the whole system. These pixel super-resolution methods for lensfree in-line holographic systems are usually combined with phase retrieval methods to reconstruct the object on the focal plane, such as object-support-based single intensity measurement^{26,27}, the Gerchberg–Saxton algorithm^{23}, the synthetic aperture method^{22,25}, and the transport-of-intensity equation (TIE)^{28,29,30}. Moreover, these methods perform pixel super-resolution and phase retrieval sequentially, so considerable quantities of data need to be collected.
Recently, a new computational method termed the propagation phasor approach has been proposed by Luo et al., which combines phase retrieval and pixel super-resolution into a unified mathematical framework^{31}. It has been found that, besides enabling phase recovery, the diversity of sample-to-sensor distances also provides additional information to overcome the spatial aliasing problem^{31,32}. This propagation phasor framework can deliver super-resolved reconstructions with a significantly reduced number of raw measurements. However, it still requires the theoretical imaging model to match the actual imaging process perfectly, which is difficult to achieve in practice. Moreover, lateral drift of the specimen with respect to the imaging sensor over the course of the axial scanning can severely deteriorate the reconstruction quality. As this kind of drift can barely be avoided experimentally, it is desirable to determine the true positions of the sample from the raw data computationally. Although the translation positions can be estimated through registration before reconstruction, such one-time calibration has only limited success due to registration errors arising from noise and the inherent twin image. On the other hand, the stability and convergence of the reconstruction process may be significantly affected by non-negligible noise in the images. Once the captured intensity images become mutually inconsistent due to noise and model mismatch, the successive iterative reconstruction process may become oscillatory and frequently fails to converge to a reasonable solution.
In this paper, we propose a method called adaptive pixel-super-resolved lensfree imaging (APLI), in which adaptive relaxed iterative phase retrieval is used to achieve super-resolution reconstruction and overcome the above-mentioned limitations simultaneously. Furthermore, unlike traditional reconstruction methods based on multi-height intensity measurements^{23,33}, the presented method takes the pixel binning process into account, and pixel super-resolution reconstruction is achievable from a stack of out-of-focus images alone during the iterative process in the spatial domain. To find an optimal solution to the phase retrieval problem and effectively reduce the impact of unpredictable disturbances, APLI introduces an adaptive relaxation factor strategy and automatic lateral positional error correction into the reconstruction process. This improves the stability and robustness of the reconstruction against noise while retaining a rapid convergence speed. We demonstrate the success of our approach by reconstructing a USAF resolution target and a stained biological paraffin section of a typical dicot root across an FOV of ~29.85 mm^{2} with only ten intensity images. Based on APLI, we achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist–Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. We believe the proposed method offers a way to exploit the full resolution potential of lensfree microscopy, and the reduced number of raw holograms makes it a very attractive and promising technique for various biomedical applications.
Materials and Methods
Experimental Setup
Figure 1(a) depicts the configuration of the lensfree imaging setup. Coherent or partially coherent light irradiates the specimen; the scattered and transmitted light then co-propagate in the same direction, finally forming interference fringes on the imaging device. In order for the light impinging on the object plane to be considered a plane wave, the distance between the source and the sample should be far larger than that between the sensor and the sample. Furthermore, the distance between the sample and the imaging device is typically on the order of sub-millimeters^{34}. With this geometry, the FOV equals the whole active area of the imaging sensor, and the resolution loss of the reconstructed images remains acceptable as the magnification (F) approaches unity [see Fig. 1(a), Z_{2} >> Z_{1} and F = (Z_{1} + Z_{2})/Z_{2} ≈ 1]. Consequently, the scale of the FOV is directly restricted by the number of pixels and the pixel size of the imaging device, and the latter becomes the main limiting factor for improving the spatial resolution. Unfortunately, during the Z-scanning implemented to achieve super-resolution, motorized or manual adjustment of the sample-to-sensor distance inevitably introduces tiny lateral positional errors, and the co-propagating light beam carrying these errors is sampled by the imaging device. As shown in Fig. 1(b), these tiny errors result in subpixel shifts among the intensity images acquired at the different planes.
As depicted in Fig. 1(a), our lensfree in-line digital holographic imaging system mainly contains three parts: a single-mode fiber-coupled light source (LP660-SF20, Thorlabs, USA), a monochrome imaging device (DMM 27UJ003-ML, The Imaging Source, Germany), and a thin specimen placed above the imaging device. In our experimental system, the optical fiber is positioned ~20 cm above the sample. Meanwhile, the imaging sensor is placed ~400–900 μm away from the sample, which is attached to a piezo-driven positioning stage (MAX301, Thorlabs, USA). The stage holds the specimen with a self-designed 3D-printed support and can move vertically to change the distance between the sample and the image sensor. The camera has a 1.67 μm pixel size and 10.7 megapixels, and it is used to acquire ten holograms at different sample-to-sensor distances Z_{1}. Theoretically, the FOV can reach ~29.85 mm^{2} while the resolution reaches up to 2.5-fold the native camera resolution. In practice, the spatial resolution of the experimental results is enhanced to 2.17 times the theoretical Nyquist–Shannon sampling resolution, and the space-bandwidth product is increased from 10.7 to 50.34 megapixels.
Sample preparation
A standard 2′′ × 2′′ positive 1951 USAF resolution test target (Edmund Scientific Corporation, Barrington, New Jersey, USA) is used to quantitatively demonstrate the resolution improvement. In addition, a typical dicot root (Carolina Biological Supply Company, Burlington, North Carolina, USA), stained with fast green and counterstained with safranin, serves as a representative sample for studying the internal structure of plants and is used to demonstrate the generality of the proposed method.
Adaptive pixelsuperresolved lensfree imaging (APLI)
Our method recovers a super-resolution image of the complex object field from a series of out-of-focus low-resolution holograms in the spatial domain. An overview flowchart is shown in Fig. 2; the method mainly consists of the following three stages.
Stage 1: Generation of an initial guess
A stack of holograms (e.g., each with pixel dimensions m × n) is captured at different sample-to-sensor planes, with the first plane as close as possible to the sensor. After capturing the raw images, all holograms are upsampled with nearest-neighbor interpolation, which is consistent with the imaging model of the camera. All the upsampled images [e.g., with pixel dimensions M × N for an interpolation factor k (M × N = km × kn)] are then back-propagated to the object plane using an autofocusing algorithm^{35,36}. All the back-propagated intensity images are superimposed to acquire a good initial guess, which is used as the input of Stage 2.
Although a single back-propagated upsampled hologram could serve as the initial guess, simply summing up all back-propagated upsampled holograms significantly suppresses twin-image noise, aliasing signals, and upsampling-related artifacts^{31,32}. Furthermore, with the same set of raw data, this initialization improves the resolution of the initial guess compared to the previous initialization method^{14} and thus yields a faster convergence rate in Stage 2.
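As a concrete illustration, Stage 1 can be sketched in Python/NumPy as follows. This is a minimal sketch under our own naming assumptions (`angular_spectrum_propagate`, `initial_guess` are illustrative, not the authors' implementation); back-propagation uses the angular spectrum method, and the in-focus distances are assumed already recovered by the autofocusing step.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over distance z with the angular spectrum method."""
    M, N = field.shape
    fx = np.fft.fftfreq(N, d=dx)
    fy = np.fft.fftfreq(M, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)       # evanescent components cut off
    return np.fft.ifft2(np.fft.fft2(field) * H)

def initial_guess(holograms, z_list, wavelength, dx, k):
    """Upsample each hologram (nearest neighbour), back-propagate it to the
    object plane, and average the intensities to form the initial estimate."""
    acc = None
    for I, z in zip(holograms, z_list):
        up = np.kron(I, np.ones((k, k)))      # nearest-neighbour upsampling
        field = angular_spectrum_propagate(np.sqrt(up), wavelength, dx / k, -z)
        acc = np.abs(field) ** 2 if acc is None else acc + np.abs(field) ** 2
    return np.sqrt(acc / len(holograms))      # amplitude of the initial guess
```

The averaging over all planes is what suppresses the twin image and upsampling artifacts relative to using a single back-propagated hologram.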
Stage 2: Iterative multiheight images reconstruction
The whole iteration process is a phase retrieval procedure; it is essentially a process of solving an inverse problem, which is very common in computational imaging. To solve this problem, we need to build a precise forward model and reconstruct the super-resolution intensity image and phase map from the captured discretized intensity images. The above-mentioned initialization is fed into the forward model to obtain estimated captured images. If the estimated captured images do not match the corresponding measured images, the iterative loop continues. The loop terminates when the estimated captured images accord with the actual captured images, at which point the current estimates are regarded as the actual super-resolution intensity and phase images.
Two key elements must be taken seriously to make the model consistent with the actual physical process. Firstly, to obtain a precise imaging model, the limited sensor pixel size and the unpredictable disturbances during image acquisition should be taken into account. Pixel binning, the process by which the image is recorded, is a downsampling procedure that can be regarded as spatial averaging. In addition, the longitudinal shift of the sample needed to acquire diffraction patterns at distinct planes inevitably leads to accidental lateral displacement; hence, the lateral positional error must be absorbed into the model (more details are given in the Section Automatic lateral positional error correction). Secondly, the stability and robustness of the solution to the phase retrieval problem must be improved, and the algorithm should converge to a desired optimal solution, which can be viewed as an optimization of phase recovery based on multi-height measurements. Solving this optimization problem amounts to making the current estimate closely fit the input captured images as a whole, and the fit is quantified by the real-space error described in the following equation
$$\varepsilon =\sum _{i}{\Vert \sqrt{{{\bf{I}}}_{i}}-{{\bf{g}}}_{i}\Vert }^{2}$$

where \(\Vert \cdot \Vert \) is the Euclidean norm, \(\sqrt{{{\bf{I}}}_{i}}\) is the amplitude of the i_{th} captured image, and \({{\bf{g}}}_{i}\) is the current downsampled estimated amplitude corresponding to the i_{th} measurement, i.e., the output after the system uncertainties have been taken into account. The solution process is an incremental gradient optimization, which unfortunately provides a relatively accurate solution in early iterations but then overshoots. This problem is often attributed to the non-convex nature of phase retrieval, but based on the analysis of a similar problem in previous work^{37}, we find that the cause is more closely related to the choice of the relaxation factor: the relaxation factor needs to diminish gradually for convergence, even in the convex case. Thus, an adaptive relaxation factor should be introduced into the model to improve the stability and robustness of the reconstruction against noise (details can be found in the Section Adaptive relaxation factor).
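The real-space error is straightforward to evaluate; this minimal NumPy sketch (the function name is ours) assumes the downsampled estimated amplitudes g_i are already available:

```python
import numpy as np

def real_space_error(I_captured_list, g_list):
    """Sum over all heights of the squared Euclidean distance between the
    measured amplitudes sqrt(I_i) and the estimated amplitudes g_i."""
    return sum(np.linalg.norm(np.sqrt(I) - g) ** 2
               for I, g in zip(I_captured_list, g_list))
```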
The amplitude update step is crucial and depends on a correction coefficient matrix. This matrix is determined by the product of the adaptive relaxation factor α and the ratio matrix between the upsampled captured images and the prior estimated intensity images. The specific process is shown in Fig. 3 and described in the following three steps.
Step 1, the (i−1)_{th} estimated complex amplitude \({O}_{i-1}^{j}\) is forward-propagated to the next height, yielding \({O}_{i}^{j}\), and the i_{th} captured image is upsampled (j denotes the current iteration cycle and i the index of the out-of-focus plane). Next, the estimated intensity image \({|{O}_{i}^{j}|}^{2}\) is registered against the upsampled captured image I_{upsample}, and the positional error is denoted (x_{shift}, y_{shift}). We then shift \({|{O}_{i}^{j}|}^{2}\) in place by (−x_{shift}, −y_{shift}), and the original estimated intensity image is replaced by the refined intensity \({|{O}_{i\_ref}^{j}|}^{2}\) (whose amplitude is denoted A).
Step 2, we downsample the refined estimated super-resolution intensity image \({|{O}_{i\_ref}^{j}|}^{2}\) with the point spread function (PSF) of the low-resolution sensor, as shown in the upper portion of Fig. 3. The PSF is usually modeled as a spatial averaging operator \(LR_{pixel}=\frac{\sum {a}_{h}}{{k}^{2}}\;(h=0,\,1,\ldots ,{k}^{2}-1)\)^{38}, where a_{h} is the gray value of the super-resolution intensity image and k is the downsampling factor.
Step 3, after downsampling, the estimated low-resolution intensity image has the same dimensions as the original image captured at the corresponding i_{th} sensor-to-sample plane. We then upsample the estimated low-resolution intensity image and the corresponding captured image (whose amplitudes are denoted by the matrices B and C, respectively) with nearest-neighbor interpolation. To obtain the correction coefficient matrix, we multiply the adaptive relaxation factor α by the element-wise ratio of C to B, and the expression \((1-\alpha )\times {\bf{A}}+\alpha \times \frac{{\bf{C}}}{{\bf{B}}}\times {\bf{A}}\) is taken as the updated i_{th} estimated amplitude. The relaxation factor α in the above expression is a diminishing value, unlike the traditional fixed value of ~0.5, and a guided filter is applied to further eliminate the influence of noise. Finally, the complex amplitude consisting of the newly updated estimated amplitude and the previous, unchanged phase is forward-propagated to the next height using the angular spectrum method^{39}.
Steps 1–3 are repeated until all the sample-to-sensor distances have been traversed; that is, each raw measurement is used once, which constitutes one iteration cycle.
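The amplitude update at one height (Steps 2–3) can be sketched as below. This is a simplified illustration under our own naming assumptions: registration (Step 1), the guided filter, and the propagation between heights are omitted, and the sensor PSF is modeled as a k × k block average.

```python
import numpy as np

def block_mean(img, k):
    """Pixel binning: average each k x k block (the sensor's spatial-averaging PSF)."""
    m, n = img.shape[0] // k, img.shape[1] // k
    return img[:m * k, :n * k].reshape(m, k, n, k).mean(axis=(1, 3))

def update_amplitude(O_est, I_captured, alpha, k, eps=1e-12):
    """One amplitude update: bin the estimated intensity to the sensor grid,
    form the ratio C/B against the captured intensity (both upsampled by
    nearest neighbour), and relax the correction by the factor alpha."""
    A = np.abs(O_est)                                  # current estimated amplitude
    B = np.kron(np.sqrt(block_mean(A ** 2, k)), np.ones((k, k)))
    C = np.kron(np.sqrt(I_captured), np.ones((k, k)))  # captured amplitude, upsampled
    A_new = (1 - alpha) * A + alpha * (C / (B + eps)) * A
    return A_new * np.exp(1j * np.angle(O_est))        # keep the previous phase
```

Note that when the binned estimated intensity already matches the measurement, C/B ≈ 1 and the estimate is left unchanged, as the update rule requires.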
Stage 3: Reconstruction on the object plane
After a number of iteration cycles, we obtain the complex amplitude at the plane closest to the imaging device and then back-propagate it to the object plane, as shown in Fig. 2.
Physical modeling of the pixel binning
Blurring may be caused both by the optical system (inherent camera noise, the diffraction limit, etc.) and by the PSF of the imaging device; the former can be modeled as linear space-invariant while the latter is considered linear space-variant^{38}. It is difficult to obtain exact information about the linear space-invariant component, so it is usually compensated by specific algorithms or avoided as much as possible. In the process of image reconstruction, the PSF of the imaging device (which can also be regarded as the finiteness of the physical pixel size) is an important source of blur and should be incorporated into the reconstruction procedure. As a complementary interpretation, there is a natural loss of spatial resolution caused by the insufficient sensor density and by noise that occurs within the sensor or during transmission. As shown in Fig. 4(b), the spectral loss becomes more serious as the decimation factor increases. Figure 4(b) again indicates that the pixel size is the main limiting factor of these systems, determining whether they can directly record the high-frequency fringes corresponding to super-resolved features of the sample.
In traditional multi-height reconstruction methods^{23,33}, many efforts are made to implement subpixel shifts to achieve super-resolution, but pixel binning is not taken into account, which does not accord with the actual physical process. Thus, incorporating the recording process of digital images into the reconstruction procedure has drawn attention, and the resulting enhancement in resolution has been validated^{31}. The recording of a digital image is a downsampling process, usually modeled as a spatial averaging operator [\(LR_{pixel}=\frac{\sum {a}_{h}}{{k}^{2}}\;(h=0,\,1,\ldots ,{k}^{2}-1)\), where a_{h} is the gray value of the super-resolution intensity image and k is the decimation factor], as shown in Fig. 4(a). In the iterative reconstruction process, we convolve the estimated intensity of the field \({|{O}_{i}^{j}|}^{2}\) with the PSF of the image sensor^{38}, after which it has the same dimensions as the raw measurement.
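To make the aliasing effect of pixel binning concrete, the following minimal example (our own illustration, with an assumed helper name `bin_pixels`) shows that a fringe pattern at the Nyquist limit of the fine grid is completely averaged out by 2 × 2 binning:

```python
import numpy as np

def bin_pixels(img, k):
    """Model the finite pixel size as a k x k spatial average."""
    m, n = img.shape[0] // k, img.shape[1] // k
    return img[:m * k, :n * k].reshape(m, k, n, k).mean(axis=(1, 3))

# Vertical fringes with a period of two fine pixels: values alternate 1, 0, 1, 0...
x = np.arange(64)
fringes = 0.5 + 0.5 * np.cos(np.pi * x)
pattern = np.tile(fringes, (64, 1))

# Each 2 x 2 block averages one bright and one dark column: the fringe
# contrast vanishes entirely, i.e. this spatial frequency is lost.
binned = bin_pixels(pattern, 2)
```

This is exactly the spectral loss illustrated in Fig. 4(b): frequencies near and beyond the sensor's Nyquist limit are attenuated or destroyed by the averaging PSF.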
Automatic lateral positional error correction
In many lensfree systems, the different sample-to-sensor distances are obtained by mechanical movement, so mechanical lateral errors are generated, as shown in Fig. 1(b). To eliminate this unavoidable subpixel-scale error, the traditional approach is to register the images before reconstruction^{23,24}, here called beforehand lateral positional error correction (BLPEC). In the actual imaging process, however, the light illuminates a specimen that has already undergone a tiny lateral movement, because the positioning stage introduces lateral mechanical error as it moves longitudinally; the light then propagates to the imaging plane carrying the information of the displaced object. Thus, the lateral positional error arises before camera sampling, and the captured images carry the error signals. Consequently, BLPEC can correct the lateral positional error only coarsely, and the accuracy of the correction decreases due to the presence of artifacts and aliasing. Additionally, this method cannot rectify registration errors in the later stages of processing, which affects the final reconstruction quality.
To solve the problems of traditional BLPEC, we introduce automatic lateral positional error correction (ALPEC) into our method. The crux of the general subpixel image registration problem is computing the cross-correlation between the image to be registered and a reference image by means of a fast Fourier transform (FFT), and locating its peak^{40}. The cross-correlation of the captured image f(x, y) and its corresponding estimated image g(x, y) is defined by:
$${r}_{fg}({x}_{shift},{y}_{shift})=\sum _{u,v}F(u,v)\,{G}^{\ast }(u,v)\,\exp \,[i2\pi (\frac{u{x}_{shift}}{M}+\frac{v{y}_{shift}}{N})]$$

where M and N are the image dimensions, (*) denotes complex conjugation, and F and G denote the discrete Fourier transforms of f and g, respectively. The expression for F(u, v) is \(F(u,\,v)=\sum _{x,y}\frac{f(x,y)}{\sqrt{MN}}\,\exp \,[-i2\pi (\frac{ux}{M}+\frac{vy}{N})]\), with a similar expression for G(u, v). It is important to locate the peak of the cross-correlation function r_{fg}(x_{shift}, y_{shift}) accurately while relaxing the limitations on computational speed and memory imposed by the FFT. Thus, the refined initial estimate method^{40,41} is used: aided by the existence of analytic expressions for the derivatives of r_{fg}(x_{shift}, y_{shift}) with respect to x_{shift} and y_{shift}, the algorithm iteratively searches for the image displacement (x_{shift}, y_{shift}) that maximizes r_{fg}(x_{shift}, y_{shift}). It can thereby achieve registration precision to within an arbitrary fraction of a pixel at a fast rate.
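A minimal FFT-based registration sketch is given below. It recovers only the integer-pixel shift from the cross-correlation peak; the subpixel refinement of refs 40, 41 is omitted, and the function name is our own.

```python
import numpy as np

def register_translation(f, g):
    """Estimate the integer-pixel shift of g relative to f by locating the
    peak of their FFT-based cross-correlation."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    r = np.fft.ifft2(np.conj(F) * G)         # cross-correlation via FFT
    peak = np.array(np.unravel_index(np.argmax(np.abs(r)), r.shape), dtype=float)
    dims = np.array(f.shape, dtype=float)
    wrap = peak > dims / 2                   # map peaks past M/2 to negative shifts
    peak[wrap] -= dims[wrap]
    return peak
```

In APLI this registration is re-run inside every iteration cycle against the current estimate, which is what distinguishes ALPEC from the one-time BLPEC calibration.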
Adaptive relaxation factor
In most cases, the propagation phasor approach^{31} is an effective solution to the pixelation problem, providing a unified mathematical framework that combines phase retrieval and pixel super-resolution. Nevertheless, in practical operation, the stability and reconstruction quality of the method may be significantly degraded by non-negligible noise introduced during sampling. This problem is often attributed to the non-convex nature of phase retrieval and the ill-conditioned nature of super-resolution reconstruction. Although numerous super-resolution algorithms have been proposed in the literature^{11,23,25,42,43,44}, super-resolution image reconstruction remains extremely ill-posed^{43,44}. Moreover, the noise effect accumulates as the iterations proceed.
Choosing an appropriate relaxation factor can suppress noise to a certain extent; traditional phase retrieval methods typically fix the relaxation factor at α = 0.5. The relaxation factor is used to update the amplitude, so the corrected super-resolution intensity image replaces the earlier estimate only partially. In other words, the corrected super-resolution intensity image, which is closely related to the noisy captured intensity images, occupies only part of the newly updated estimate (as shown in Fig. 3). With a fixed relaxation factor, the noisy captured intensity images impose ill-conditioned constraints, so the newly updated estimated intensity images continue to suffer from noise.
The stability and reconstruction quality may be significantly degraded when non-negligible noise is present in the captured images; the same problem is encountered in FPM. There, the phenomenon stems not only from the non-convex nature of phase retrieval but, more closely, from the choice of step size, and an adaptive step-size strategy was introduced to solve it successfully^{37}. Considering that the problems of the iterative method in lensfree imaging are likewise attributable to the non-convex nature of phase retrieval, we replace the traditional fixed relaxation factor with an adaptive relaxation factor that diminishes towards an infinitesimally small value to improve the performance of the incremental solutions. The critical practical issue is then how to determine a suitable relaxation factor sequence α^{iter} that approaches a solution within few iterations. The sequence α^{iter} must satisfy two conditions: the relaxation factor must shrink towards zero, and it must not diminish too quickly. If the relaxation factor shrinks too quickly, the estimated object field may converge to a point that is not a minimum, especially when the initial point is far from the optimum; keeping the shrinkage slow ensures the algorithm can still travel arbitrarily far. Thus, in this paper, we reduce the relaxation factor whenever the global errors ε(O^{iter−1}) and ε(O^{iter}) obtained in consecutive cycles satisfy the following criterion:
$$\frac{|\varepsilon ({{\bf{O}}}^{iter-1})-\varepsilon ({{\bf{O}}}^{iter})|}{\varepsilon ({{\bf{O}}}^{iter-1})} < \eta $$

where ‘iter’ is the index of the iteration cycle and ‘η’ is a small constant much less than 1. The global error ε(O^{iter}) is determined by \(\varepsilon ({{\bf{O}}}^{iter})={\sum }_{i}{\Vert \sqrt{{{\bf{I}}}_{i}}-{{\bf{g}}}_{i}\Vert }^{2}\), where \(\Vert \cdot \Vert \) is the Euclidean norm. The captured and estimated downsampled images are raster-scanned into vectors \({{\bf{I}}}_{i}={\{{I}_{i}\}}^{m\times n}\) and g_{i} = {g_{i}}^{m×n} (with m × n pixels). The estimate \({{\bf{g}}}_{i}={{\bf{O}}}_{i}^{iter}\otimes {\rm{PSF}}\) is obtained by the spatial averaging operation, where \({{\bf{O}}}_{i}^{iter}\) is an M × N matrix (the estimated intensity of the field) and the PSF is determined by the intrinsic properties of the camera. Note that g_{i} accounts for system uncertainties such as the lateral positional error. Finally, the algorithm converges to a stationary point when the relaxation factor reaches a pre-specified minimum.
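The criterion translates into a simple schedule; in the sketch below, the shrink ratio, the threshold η, and the floor value are illustrative assumptions rather than the paper's exact settings:

```python
def update_relaxation(alpha, err_prev, err_curr, eta=0.01, shrink=0.5, alpha_min=1e-4):
    """Shrink the relaxation factor when the relative decrease of the global
    error between consecutive cycles falls below eta; stop shrinking at the
    pre-specified minimum alpha_min."""
    if err_prev > 0 and abs(err_prev - err_curr) / err_prev < eta:
        alpha = max(alpha * shrink, alpha_min)
    return alpha
```

Called once per iteration cycle with the errors of the last two cycles, this keeps α large while the error is still falling quickly and lets it diminish only once progress stalls, which is what prevents the late-stage overshooting discussed below.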
Discussion and Results
The comparison between the adaptive and fixed factor
Figure 5 shows the influence of noise on the system and emphasizes the important role played by the relaxation factor in the iterative process. The theoretical super-resolution image to be reconstructed is shown in Fig. 5(a), and Fig. 5(b) shows the low-resolution image that would be captured by the camera at the object plane. Figures 5(c) and (d) show sets of simulated images captured by the camera at distinct sample-to-sensor distances without and with Gaussian noise, respectively. Figures 5(e) and (f) depict the super-resolution images reconstructed using the fixed relaxation factor ~0.5 under noise-free and noisy conditions, respectively. The two yellow curves in the sub-graphs of the red region show that, if we update the earlier estimated super-resolution intensity images using a fixed relaxation factor (α = 0.5) as described in the Section APLI, the reconstructed result under the noisy condition [Fig. 5(f)] is much worse in terms of resolution and background after the same number of iterations than the noise-free result [Fig. 5(e)]. To demonstrate that an adaptive relaxation factor can effectively solve this problem, which is closely related to the over-amplification of noise, we test our method under noise-free and noisy conditions separately. The results resolve the densest line group and give a relatively clean background under both conditions, as shown in Figs. 5(g) and (h), respectively.
Furthermore, Fig. 6 shows a quantitative comparison of reconstruction accuracy versus intensity noise for the adaptive and fixed relaxation factors (α = 0.5, 0.01), as well as the rate of convergence. Figure 6(c) depicts the reconstructed results corresponding to the iterations labelled in Fig. 6(b) for the adaptive and fixed relaxation factors, respectively. Figure 6(a) shows the curves of intensity error versus iteration number for the different relaxation factors, and Fig. 6(b) shows an enlargement of the shaded region of Fig. 6(a). Among the curves, the purple one shows that, even with a perfect initial guess and a very small fixed relaxation factor of ~0.01, the intensity error still accumulates as the iterations increase. This explains the following phenomenon: with fixed relaxation factors, the reconstruction error initially tends to converge but then worsens after reaching its minimum, as is clearly visible in the green curve of Fig. 6(a). The same holds for the small relaxation factor corresponding to the red curve, but its turning point is hard to observe (the red curve reaches its minimum at about 700 iterations and then overshoots) because the convergence is extremely slow and the curve rises at a glacial pace after reaching the minimum. Because of the later overshooting, the iteration would have to be suspended when the curves reach their respective minima. The cause of the overshooting is that, even if the reconstruction converges to the true value, the captured images still provide the same ill-conditioned intensity constraints as before. Non-convergence is a disadvantage for iterative methods, and suspending the iteration at the minimum results either in loss of image detail or in long computation times.
Comparing the orange curve with the green or red one, we find that the adaptive relaxation factor yields a converged reconstruction and effectively prevents the overshooting. Meanwhile, introducing the adaptive relaxation factor retains a relatively fast initial convergence: the orange curve decreases more rapidly than the fixed-factor method (α = 0.01) and converges within the first 20 iterations.
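The adaptive strategy can be sketched as follows. This is an illustrative version, not the paper's exact update rule: in the spirit of adaptive step-size methods^{37}, the relaxation factor α is shrunk whenever the intensity error stops decreasing, so early iterations keep the fast convergence of a large α while late iterations stop amplifying noise.

```python
import numpy as np

def adaptive_relaxation_update(est_amp, meas_amp, alpha, err_history,
                               shrink=0.5, tol=1e-3):
    """One amplitude-substitution step with an adaptive relaxation factor.

    Illustrative sketch (assumed rule): alpha is halved whenever the
    intensity error fails to decrease, which preserves the fast initial
    convergence while suppressing late-stage noise over-amplification.
    """
    err = np.mean((est_amp**2 - meas_amp**2) ** 2)   # intensity mismatch
    if err_history and err > (1.0 - tol) * err_history[-1]:
        alpha *= shrink                              # error plateaued: relax more gently
    err_history.append(err)
    # relaxed amplitude substitution toward the measured amplitude
    new_amp = est_amp + alpha * (meas_amp - est_amp)
    return new_amp, alpha
```

In a full multi-height loop this update would be applied at each plane, with `err_history` shared across iterations so that α decays monotonically as the reconstruction stabilizes.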
Although the adaptive relaxation factor provides a certain degree of noise immunity, noise still prevents the estimated intensity images from converging toward consistency from one plane to the next, and causes the reconstructed image to deviate from the theoretical values. To avoid over-amplification of noise, we incorporate a nonlinear denoising algorithm termed the guided filter^{45}. This is essentially equivalent to adding the prior knowledge that objects are piecewise smooth. The guided filter further suppresses the noise, driving the smooth regions of the reconstruction toward their ideal values. However, the reconstruction results are slightly flawed at edges, because the guided filter cannot distinguish noise from genuine jump edges of the object, which it preserves during the reconstruction process. The combination of the adaptive relaxation factor and the guided filter effectively suppresses the noise and achieves the better reconstruction results shown by the blue curve. From these results, we can safely conclude that the adaptive relaxation-factor method outperforms the fixed-factor methods, simultaneously achieving a faster convergence rate and a lower misadjustment error.
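The guided filter^{45} can be implemented in a few lines. The sketch below is the standard grayscale formulation with the reconstruction used as its own guide (self-guided denoising); the window radius `r` and regularizer `eps` are illustrative choices, not the paper's parameters.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1) x (2r+1) window, edge-padded,
    computed in O(1) per pixel via a summed-area table."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    s = padded.cumsum(0).cumsum(1)
    s = np.pad(s, ((1, 0), (1, 0)))            # zero row/column for window sums
    win = s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]
    return win / (k * k)

def guided_filter(guide, src, r=2, eps=1e-2):
    """Grayscale guided filter: locally fits src as a linear function of
    guide, which smooths noise while preserving jump edges of the guide."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(src, r)
    corr_Ip = box_filter(guide * src, r)
    var_I = box_filter(guide * guide, r) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # local slope
    b = mean_p - a * mean_I                            # local offset
    return box_filter(a, r) * guide + box_filter(b, r)
```

With the reconstruction passed as both `guide` and `src`, flat regions (small local variance) are averaged toward their mean, while strong edges (large local variance relative to `eps`) pass through almost unchanged, which matches the edge-related caveat noted above.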
Comparison between BLPEC and ALPEC
Figure 7 shows that mechanical lateral positional error strongly affects the reconstruction, and that lateral positional error correction significantly improves performance. The theoretical super-resolution image to be reconstructed is again that of Fig. 5(a). All of Figs. 7(a)–(h) contain lateral positional error; Figs. 7(a)–(d) and Figs. 7(e)–(h) correspond to noise-free and noisy environments, respectively. To show that ALPEC applies equally with either a fixed or an adaptive relaxation factor, the simulation analysis of Fig. 7 covers both cases. Comparing Figs. 7(a) and (b) with Figs. 7(c) and (d), we can deduce that, even in the absence of noise, residual tiny lateral positional errors cause serious deviations in the reconstructed results, and that ALPEC is extremely effective at correcting them under either relaxation strategy. For a more intuitive comparison with and without ALPEC under noisy conditions, the reconstructed results are shown in Figs. 7(e)–(h). From these comparative simulations, we find that remanent lateral positional error brings disastrous distortions to the reconstruction, whereas with APLI (which includes ALPEC) the effects of both lateral positional error and noise are effectively removed without any prior knowledge.
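Positional error correction of this kind rests on estimating the residual lateral shift between holograms recorded at different heights. As one plausible minimal building block (the subpixel refinement would follow the registration algorithm of ref. 40, which is not reproduced here), integer-pixel registration by cross-correlation via the FFT can be sketched as:

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer-pixel translation of img relative to ref by
    locating the peak of their circular cross-correlation (computed in
    the Fourier domain). Returns s such that img ~= roll(ref, s)."""
    cross = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(cross)), cross.shape))
    size = np.array(cross.shape)
    # unwrap peaks beyond half the array size to negative lags
    peak = np.where(peak > size // 2, peak - size, peak)
    return -peak
```

In an ALPEC-style loop, such an estimate (refined to subpixel precision) would be used to re-align each captured hologram with the field back-propagated to its plane before applying the intensity constraint.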
As shown in the preceding figures (Figs. 5–7), the simulation studies were conducted under the following conditions:

(1)
Downsampling by a factor of four is implemented, which can be regarded as the spatial averaging operation of the sensor: the average of sixteen pixels in the super-resolution image equals the value of the corresponding pixel in the low-resolution image. The underlying physical process is that the super-resolution field propagates in free space, after which the two-dimensional continuous intensity distribution of the hologram (diffraction pattern) is discretized into a matrix through the two-dimensional convolution of the hologram with a pixel unit of the imaging array, yielding the low-resolution image.

(2)
In each group of simulations, a variable-controlling approach is used (for instance, the number of iterations remains unchanged) to make the conclusions more convincing.

(3)
In every simulation, at least 32 raw low-resolution images (m × n pixels each) are used to reconstruct the super-resolution image [M × N pixels, with M = k × m, N = k × n, and k = 4]. The captured diffraction patterns are enforced as object constraints, gradually converging to the missing two-dimensional phase information^{46} and the corresponding super-resolution amplitude. For a complex object function, the recovery problem is underdetermined by a factor of 2, since 2 × M × N pixels define the object function (M × N for the real part and M × N for the imaginary part), whereas each measurement provides only m × n pixels^{47,48}. To solve this underdetermined problem, more information about the object must be acquired and incorporated as constraints on the solution space, so at least 2 × k × k raw low-resolution images are needed in theory^{47}.
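The forward model described in conditions (1)–(3) can be sketched as follows: a super-resolution complex field is propagated by the angular spectrum method, and its intensity is then averaged over k × k blocks to emulate the sensor pitch. The wavelength and grid spacing below are illustrative values taken from the experimental parameters quoted later (λ = 0.632 μm, 1.67-μm pixels, k = 4); note that 2 × k × k = 32, the minimum image count stated in condition (3).

```python
import numpy as np

def angular_spectrum(field, dz, wavelength=0.632e-6, dx=1.67e-6 / 4):
    """Free-space propagation of a complex field over distance dz (metres)
    using the angular spectrum method; dx is the super-resolution grid
    spacing (sensor pitch divided by k = 4)."""
    M, N = field.shape
    fx = np.fft.fftfreq(N, dx)
    fy = np.fft.fftfreq(M, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kernel = np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def bin_down(intensity, k=4):
    """Sensor sampling of condition (1): each low-resolution pixel is the
    mean of a k x k block of super-resolution pixels."""
    M, N = intensity.shape
    return intensity.reshape(M // k, k, N // k, k).mean(axis=(1, 3))
```

A simulated hologram at distance dz is then `bin_down(np.abs(angular_spectrum(obj, dz))**2)`, and the iterative reconstruction inverts this chain under the recorded intensity constraints.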
Experimental results on the USAF resolution test target
A standard 1951 USAF resolution test target is used as the experimental sample to verify the universality and stability of our method in actual measurements. We acquire 10 raw holograms of the target at different sample-to-sensor distances (~547–577 μm), each digitized by the imaging device with a 1.67-μm pixel size. Figure 8(a) shows a full-FOV (~29.85 mm^{2}) low-resolution hologram captured directly by the camera. The inset shows a local enlargement of the dashed rectangular area in Fig. 8(a), which corresponds to the full FOV of a 10X objective lens. Because the relatively large pixel size causes undersampling, our method is applied to reduce the effective pixel size and thereby improve the resolution. During reconstruction (the image-processing steps are depicted in the Section APLI), the raw holograms serve as intensity constraints, and the recovered super-resolution intensity image is shown in Figs. 8(b) and (c). From Fig. 8(c), the smallest resolved half-pitch reaches 0.77 μm, more than doubling the resolution of the result [see Fig. 8(d)] obtained with the conventional multi-height reconstruction method^{23,33} from the same raw images. It is important to emphasize that the results in Figs. 8(b) and (c) use only ten raw images, obtained without wavelength scanning, sub-pixel lateral displacement, or illumination-angle scanning; in other words, we only move the sample along the Z-axis. For comparison, the sample imaged with a 10X objective lens (NA = 0.4) is shown in Fig. 8(e); the theoretical imaging resolution of this objective is λ/NA = 1.58 μm (λ = 0.632 μm, NA = 0.4).
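The headline figures can be cross-checked with simple arithmetic: under Nyquist–Shannon sampling, the sensor alone cannot resolve half-pitches below its own pixel pitch, and the coherent resolution of the objective is λ/NA.

```python
pixel_pitch = 1.67    # sensor pixel size, micrometres
half_pitch = 0.77     # smallest resolved half-pitch, micrometres

# gain over the sensor's Nyquist-limited half-pitch (= pixel pitch)
assert round(pixel_pitch / half_pitch, 2) == 2.17

# coherent resolution of the 10X objective (lambda = 0.632 um, NA = 0.4)
assert round(0.632 / 0.4, 2) == 1.58
```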
Although the imaging results with a 10X objective under Köhler illumination (condenser aperture wide open) still outperform the proposed method, the space-bandwidth product of the reported method is nearly 100-fold larger. Furthermore, the whole system requires no lenses, opening the way to miniaturization and low cost.
In Supplementary Video 1, we show a zooming video of the full-FOV reconstructed images of the USAF target obtained with our method and with the traditional method^{23,33} (with BLPEC), respectively. To process the data in parallel, the large-format raw image (3872 × 2764 raw pixels) is divided into 35 portions (700 × 700 raw pixels each) for computation. The blocks at the end of each row or column do not contain the same pixels as the others, and adjacent portions overlap by 200 raw pixels. From each reconstructed portion (2800 × 2800 pixels), we trim 200 pixels at the edges, so adjacent portions introduce a certain redundancy (400 pixels) into the stitching. As a result, no observable boundary is present in the stitched image, and the blending comes at the small computational cost of processing the overlapping regions twice.
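One tiling scheme consistent with the counts stated above (35 portions, 700-pixel tiles, 200-pixel overlap, larger end-of-row/column blocks) can be sketched as follows; the exact scheme used by the authors is not specified, so this is an inferred reconstruction.

```python
def tiles_1d(length, tile=700, overlap=200):
    """(start, stop) spans of tiles along one axis: regular tiles of
    `tile` pixels advancing by `tile - overlap`, with the final tile
    absorbing the remainder so it ends exactly at the image border
    (hence the end blocks are larger than the others)."""
    step = tile - overlap
    n = (length - tile) // step + 1          # total number of tiles
    spans = [(i * step, i * step + tile) for i in range(n - 1)]
    spans.append(((n - 1) * step, length))   # last tile takes the remainder
    return spans

cols = tiles_1d(3872)   # tiles across the 3872-pixel width
rows = tiles_1d(2764)   # tiles down the 2764-pixel height
```

With these parameters the sketch yields 7 × 5 = 35 portions, matching the text, with the last column block spanning 872 raw pixels instead of 700.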
Figure 9 presents additional experiments that underline the significance of ALPEC, and intuitively compares reconstructions based on the adaptive and fixed relaxation factors. Figure 9(a) presents the FOV of the USAF resolution target recorded directly by the camera. Figures 9(a1) and (a2) show local enlargements of the rectangular areas in Fig. 9(a), whose respective reconstructions are shown in Figs. 9(b1)–(c6). To illustrate the strong effect of ALPEC on experimental results, we carried out two groups of experiments, with the adaptive and fixed relaxation factors respectively. Figures 9(b1) and (c1) show reconstructions of the two enlarged segments of Fig. 9(a) based on a single raw image; as can be seen, these results are blurry. Figures 9(b2) and (c2) show the reconstructions without positional error correction. From Fig. 9(b2) we can deduce that tiny lateral positional errors have little effect on the low-frequency content of the reconstruction, but exert a tremendous influence on the high-frequency content responsible for super-resolution [Fig. 9(c2)]. The same data set is then used to recover the super-resolution image of each segment using our adaptive method with BLPEC and with ALPEC, respectively. The same number of iterations is used in both reconstructions; the only difference is that the latter involves ALPEC while the former applies positional error correction in advance. Figures 9(b3) and (c3) present the recovered super-resolution intensity images with BLPEC for the same segments as Figs. 9(b1) and (c1). The silhouettes of the lines in Fig. 9(b3) are unsharp but recognizable, because the images corrected by BLPEC have had the relatively large lateral positional errors removed but still retain tiny positional misalignments.
Worse still, because of the remnant lateral positional error, the higher frequencies of the object still cannot be recovered, as shown in Fig. 9(c3), although the resolution is improved compared with Fig. 9(c2). With the help of ALPEC, high-quality intensity distributions are recovered, as shown in Figs. 9(b4) and (c4): the blur of Figs. 9(b3) and (c3) is eliminated completely, the clarity is increased, and sharper outlines are obtained. Similarly, with the fixed relaxation factor (α = 0.5), the results reconstructed using BLPEC and ALPEC are shown in Figs. 9(b5) and (c5) and Figs. 9(b6) and (c6), respectively. We reach the same conclusion: ALPEC improves the resolution with either the adaptive or the fixed relaxation factor.
To experimentally illustrate that the adaptive relaxation factor improves the stability and robustness of the reconstruction against noise, Fig. 9 also compares reconstructions based on the adaptive and fixed factors. Figures 9(b6) and (c6) show results with the fixed relaxation factor (α = 0.5) and ALPEC, while Figs. 9(b4) and (c4) show results with our adaptive relaxation-factor method and ALPEC. It is obvious that the adaptive method suppresses the over-amplification of noise without any extra auxiliary information.
Imaging of a typical dicot root
Another experiment demonstrates that our method can also be applied to dense samples such as plant sections, as seen in Fig. 10. Figure 10(a) shows the full FOV of a typical dicot root (~29.85 mm^{2}); the whole sample is captured at once, which is challenging for a traditional high-magnification microscope. From the upper-left enlargement of the orange dashed box in the raw full-FOV low-resolution hologram [see the inset of Fig. 10(a)], the details of the dicot root are hard to observe because they are submerged in the diffraction fringes. Figures 10(b) and (c) show the intensity images reconstructed with the traditional method^{23,33} (α = 0.5, BLPEC) and with our method, respectively. The selected area [Fig. 10(c)] occupies only 1% of the FOV of Fig. 10(a) and corresponds to the whole FOV of the 10X objective shown in Fig. 10(d). From Fig. 10(c), the endodermis, pericycle, primary phloem, primary xylem, and parenchyma cells are easy to distinguish, which is extremely important for botanical studies. To observe the details inside the amyloplasts, we further select a small area in Fig. 10(c) corresponding to the full FOV of a 60X objective [Fig. 10(d1)], which confirms how greatly the unit-magnification lensfree system expands the FOV. Figures 10(b1) and (c1) are local enlargements of the rectangular areas in Figs. 10(b) and (c), respectively. As shown in the enlargement of Fig. 10(c1), the grains in the amyloplasts are distinguishable, and a sharp improvement in image contrast is seen compared with Fig. 10(b1). Figures 10(b2)–(d2) show line profiles along the respective arrows; two particles in the middle cortex are easy to distinguish, which is impossible with the traditional method. However, many horizontal and vertical lines appear in the enlargement [Fig. 10(c1)], and blur is evident in Fig. 10(c2).
The main reasons are that the guided filter is tuned to smooth backgrounds, while this sample is not piecewise smooth, so the filter introduces aberrations into the reconstruction. Furthermore, the test object has a certain thickness, and the diffraction patterns of out-of-focus structures above and below the focal plane also influence the reconstructed image. In Supplementary Video 2, we show a zooming video of the full-FOV reconstruction of the dicot root obtained with our method and with the traditional method^{23,33} (α = 0.5, BLPEC), respectively.
Conclusion
In this work, a method termed APLI is proposed to mitigate artifacts and simultaneously obtain super-resolution images with Z-scanning only. With this method, more than twice the pixel-limited resolution of the camera is achieved, without any extra embedding medium (such as refractive-index-matching oil) between the object and the sensor. We emphasize that this super-resolution technique requires no lateral displacement, wavelength changing, or illumination-angle scanning. Throughout the experiments, an imaging sensor with a pixel count of 10.7 million and a pixel size of 1.67 μm provides a large FOV (~29.85 mm^{2}), and the sample is moved vertically to generate ten out-of-focus, undersampled intensity images. Instead of a traditional fixed step, an adaptive relaxation-factor strategy is introduced for the first time to suppress the over-amplification of noise while retaining the convergence speed under noisy conditions. Furthermore, we introduce an ALPEC method that matches the actual physical process, effectively avoiding misalignment and improving the reconstruction stability. APLI exploits the resolution potential of lensfree microscopy, achieving a smallest resolved half-pitch of 770 nm, 2.17 times beyond the theoretical Nyquist–Shannon sampling resolution limit.
We believe that our method will broadly benefit lensfree imaging microscopy by achieving higher resolution with the same amount of data compared with traditional reconstruction methods. In addition, our method not only removes the adverse impact of variations in multiple systematic parameters on the reconstruction, but also reduces the complexity of the actual operation. The results on the resolution target and botanical samples demonstrate that the proposed reconstruction method can make the lensfree microscope a competitive and promising tool for medical care in remote areas in the future.
However, some issues still deserve further consideration. Although increasing the number of captured images influences the reconstruction, the artifacts cannot be removed completely, owing to the trade-off between resolution and artifacts. Specifically, weakening the artifacts requires increasing the sample-to-sensor distance, but a long distance causes the loss of high-frequency patterns: many fringes become denser and suffer more severe pixel aliasing, or extend beyond the sensor area. On the other hand, in practice the resolution cannot be improved further simply by increasing the number of raw images. We attribute this to the fact that the longitudinal positioning error has not yet been addressed; in future work, we will make an effort to correct the tiny longitudinal error automatically.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
 1.
Maricq, H. R. & Carwile LeRoy, E. Patterns of finger capillary abnormalities in connective tissue disease by “widefield” microscopy. Arthritis & Rheumatology 16, 619–628 (1973).
 2.
Huisman, A., Looijen, A., van den Brink, S. M. & van Diest, P. J. Creation of a fully digital pathology slide archive by highvolume tissue slide scanning. Human Pathology 41, 751–757 (2010).
 3.
Ma, B. et al. Use of autostitch for automatic stitching of microscope images. Micron 38, 492–499 (2007).
 4.
Mico, V., Zalevsky, Z., García-Martínez, P. & García, J. Synthetic aperture superresolution with multiple off-axis holograms. JOSA A 23, 3162–3170 (2006).
 5.
Hillman, T. R., Gutzler, T., Alexandrov, S. A. & Sampson, D. D. High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy. Optics Express 17, 7873–7892 (2009).
 6.
Feng, P., Wen, X. & Lu, R. Long-working-distance synthetic aperture Fresnel off-axis digital holography. Optics Express 17, 5473–5480 (2009).
 7.
Tippie, A. E., Kumar, A. & Fienup, J. R. High-resolution synthetic-aperture digital holography with digital phase and pupil correction. Optics Express 19, 12027–12038 (2011).
 8.
Zheng, G., Horstmeyer, R. & Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nature Photonics 7, 739–745 (2013).
 9.
Tian, L., Li, X., Ramchandran, K. & Waller, L. Multiplexed coded illumination for Fourier ptychography with an LED array microscope. Biomedical Optics Express 5, 2376–2389 (2014).
 10.
Sun, J., Zuo, C., Zhang, L. & Chen, Q. Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations. Scientific Reports 7, 1187 (2017).
 11.
Bishara, W., Su, T.-W., Coskun, A. F. & Ozcan, A. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution. Optics Express 18, 11181–11191 (2010).
 12.
Zheng, G., Lee, S. A., Antebi, Y., Elowitz, M. B. & Yang, C. The ePetri dish, an on-chip cell imaging platform based on subpixel perspective sweeping microscopy (SPSM). Proceedings of the National Academy of Sciences 108, 16889–16894 (2011).
 13.
Greenbaum, A. et al. Wide-field computational imaging of pathology slides using lensfree on-chip microscopy. Science Translational Medicine 6, 267ra175 (2014).
 14.
Haddad, W. S. et al. Fourier-transform holographic microscope. Applied Optics 31, 4973–4978 (1992).
 15.
Xu, W., Jericho, M., Meinertzhagen, I. & Kreuzer, H. Digital in-line holography for biological applications. Proceedings of the National Academy of Sciences 98, 11301–11305 (2001).
 16.
Pedrini, G. & Tiziani, H. J. Short-coherence digital microscopy by use of a lensless holographic imaging system. Applied Optics 41, 4489–4496 (2002).
 17.
Repetto, L., Piano, E. & Pontiggia, C. Lensless digital holographic microscope with light-emitting diode illumination. Optics Letters 29, 1132–1134 (2004).
 18.
Garcia-Sucerquia, J., Xu, W., Jericho, M. & Kreuzer, H. J. Immersion digital in-line holographic microscopy. Optics Letters 31, 1211–1213 (2006).
 19.
Shannon, C. E. Communication in the presence of noise. Proceedings of the IRE 37, 10–21 (1949).
 20.
Greenbaum, A. et al. Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy. Nature Methods 9, 889–895 (2012).
 21.
Chen, T., Catrysse, P. B., El Gamal, A. & Wandell, B. A. How small should pixel size be? In Electronic Imaging, 451–459 (International Society for Optics and Photonics, 2000).
 22.
Luo, W., Greenbaum, A., Zhang, Y. & Ozcan, A. Synthetic aperture-based on-chip microscopy. Light: Science & Applications 4, e261 (2015).
 23.
Greenbaum, A. & Ozcan, A. Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy. Optics Express 20, 3129–3143 (2012).
 24.
Sobieranski, A. C. et al. Portable lensless wide-field microscopy imaging platform based on digital in-line holography and multi-frame pixel super-resolution. Light: Science & Applications 4, e346 (2015).
 25.
Luo, W., Zhang, Y., Feizi, A., Göröcs, Z. & Ozcan, A. Pixel super-resolution using wavelength scanning. Light: Science & Applications 5, e16060 (2016).
 26.
Marie, K., Bennett, J. & Anderson, A. Digital processing technique for suppressing the interfering outputs in the image from an in-line hologram. Electronics Letters 15, 241–243 (1979).
 27.
Koren, G., Polack, F. & Joyeux, D. Iterative algorithms for twin-image elimination in in-line holography using finite-support constraints. JOSA A 10, 423–433 (1993).
 28.
Zuo, C., Sun, J., Zhang, J., Hu, Y. & Chen, Q. Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using an LED matrix. Optics Express 23, 14314–14328 (2015).
 29.
Spence, J. C., Howells, M., Marks, L. & Miao, J. Lensless imaging: a workshop on “new approaches to the phase problem for nonperiodic objects”. Ultramicroscopy 90, 1–6 (2001).
 30.
Zuo, C. et al. High-resolution transport-of-intensity quantitative phase microscopy with annular illumination. Scientific Reports 7, 7654 (2017).
 31.
Luo, W., Zhang, Y., Göröcs, Z., Feizi, A. & Ozcan, A. Propagation phasor approach for holographic image reconstruction. Scientific Reports 6 (2016).
 32.
Wang, H. et al. Computational out-of-focus imaging increases the space–bandwidth product in lens-based coherent microscopy. Optica 3, 1422–1429 (2016).
 33.
Gerchberg, R. W. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237 (1972).
 34.
Oh, C., Isikman, S. O., Khademhosseinieh, B. & Ozcan, A. On-chip differential interference contrast microscopy using lensless digital holography. Optics Express 18, 4717–4726 (2010).
 35.
Pech-Pacheco, J. L., Cristóbal, G., Chamorro-Martinez, J. & Fernández-Valdivia, J. Diatom autofocusing in brightfield microscopy: a comparative study. In Pattern Recognition, 2000. Proceedings. 15th International Conference on, vol. 3, 314–317 (IEEE, 2000).
 36.
Mudanyali, O., Oztoprak, C., Tseng, D., Erlinger, A. & Ozcan, A. Detection of waterborne parasites using field-portable and cost-effective lensfree microscopy. Lab on a Chip 10, 2419–2423 (2010).
 37.
Zuo, C., Sun, J. & Chen, Q. Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy. Optics Express 24, 20724–20744 (2016).
 38.
Park, S. C., Park, M. K. & Kang, M. G. Super-resolution image reconstruction: a technical overview. IEEE Signal Processing Magazine 20, 21–36 (2003).
 39.
Goodman, J. W. Introduction to Fourier optics (Roberts and Company Publishers, 2005).
 40.
Guizar-Sicairos, M., Thurman, S. T. & Fienup, J. R. Efficient subpixel image registration algorithms. Optics Letters 33, 156–158 (2008).
 41.
Fienup, J. R. Invariant error metrics for image reconstruction. Applied Optics 36, 8352–8357 (1997).
 42.
Huang, T. Multiframe image restoration and registration. Advances in Computer Vision and Image Processing 1, 317–339 (1984).
 43.
Baker, S. & Kanade, T. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 1167–1183 (2002).
 44.
Yang, J., Wright, J., Huang, T. & Ma, Y. Image super-resolution as sparse representation of raw image patches. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, 1–8 (IEEE, 2008).
 45.
He, K., Sun, J. & Tang, X. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 1397–1409 (2013).
 46.
Fienup, J. & Wackerman, C. Phase-retrieval stagnation problems and solutions. JOSA A 3, 1897–1907 (1986).
 47.
Miao, J., Sayre, D. & Chapman, H. Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects. JOSA A 15, 1662–1669 (1998).
 48.
Miao, J., Kirz, J. & Sayre, D. The oversampling phasing method. Acta Crystallographica Section D: Biological Crystallography 56, 1312–1315 (2000).
Acknowledgements
This work was supported by the National Natural Science Fund of China (61722506, 61505081, 111574152), Final Assembly “13^{th} FiveYear Plan” Advanced Research Project of China (30102070102), National Defense Science and Technology Foundation of China (0106173), National Key Technologies R&D Program of China (2017YFF0106400, 2017YFF0106403), Outstanding Youth Foundation of Jiangsu Province of China (BK20170034), “Six Talent Peaks” project of Jiangsu Province, China (2015DZXX009), “333 Engineering” Research Project of Jiangsu Province, China (BRA2016407), Fundamental Research Funds for the Central Universities (30917011204, 30916011322), Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091601410414).
Author information
Author notes
Jialin Zhang and Jiasong Sun contributed equally to this work.
Affiliations
School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province, 210094, China
 Jialin Zhang
 , Jiasong Sun
 , Qian Chen
 , Jiaji Li
 & Chao Zuo
Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province, 210094, China
 Jialin Zhang
 , Jiasong Sun
 , Qian Chen
 , Jiaji Li
 & Chao Zuo
Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province, 210094, China
 Jialin Zhang
 , Jiasong Sun
 , Jiaji Li
 & Chao Zuo
Contributions
C.Z. proposed the idea. J.Z. and C.Z. conceived and designed the experiments. J.Z. performed the experiments and processed the resulting data. J.Z., J.S., C.Z. and J.L. wrote the manuscript. C.Z. and Q.C. supervised the research. J.Z. and J.S. contributed equally to this work.
Competing Interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.