Abstract
Fourier ptychographic microscopy (FPM) is a novel computational microscopy technique that provides intensity images with both a wide field of view and high resolution. By combining ideas from synthetic aperture and phase retrieval, FPM iteratively stitches together a number of variably illuminated, low-resolution intensity images in Fourier space to reconstruct a high-resolution complex sample image. Although FPM is able to bypass the space-bandwidth product (SBP) limit of the optical system, it is vulnerable to various capture noises, and the reconstruction is prone to becoming trapped in local optima. To efficiently suppress noise and improve the quality of the reconstructed high-resolution image, an FPM with sparse representation is proposed in this paper. The cost function of the reconstruction is formulated as a regularized optimization problem, where the data-fidelity term is constructed based on maximum likelihood theory, and the regularization term expresses both the amplitude and phase of the reconstructed image as a small number of nonzero elements over an appropriate basis. The Nash equilibrium is employed to obtain an approximate solution. We validate the proposed method with both simulated and real experimental data. The results show that the proposed method achieves state-of-the-art performance in comparison with other approaches.
Introduction
Fourier ptychographic microscopy^{1} (FPM) is a novel computational imaging method capable of providing a scalable space-bandwidth product (SBP) for most existing microscopes. In this method, illumination angles are scanned sequentially with a programmable LED array source, and a low-resolution (LR) image is captured at each angle. Because illuminating a thin sample with an oblique plane wave is equivalent to shifting the center of the sample's spectrum in the Fourier domain, each off-axis LED shifts a different region of high-spatial-frequency information, diffracted from the sample, into the acceptance angle of the objective lens^{2}. By capturing a stack of LR images that cover a wide region of the Fourier domain and stitching them together coherently, one can achieve spatial resolution beyond the objective's diffraction limit, corresponding to the sum of the illumination and objective numerical apertures (NA)^{3}.
In practice, however, FPM reconstruction is sensitive to input noise, and multiple algorithms have recently been proposed to address this. Generally, these algorithms utilize maximum likelihood theory, which provides a flexible framework for formulating the FPM optimization problem with various noise models. If the measured images suffer only from white Gaussian noise, the negative log-likelihood function reduces to a least-squares formulation^{3}. For the case of Poisson shot noise, Bian et al.^{4} proposed an FPM reconstruction method termed truncated Poisson Wirtinger Fourier ptychographic (TPWFP) reconstruction, which incorporates a Poisson maximum-likelihood objective function and a truncated Wirtinger gradient^{5} into a gradient-descent optimization framework. Based on the cost functions of existing FPM algorithms, Yeh et al.^{3} tested amplitude-based and intensity-based algorithms; their results demonstrated that the amplitude-based Newton's method gives a better reconstruction but requires much more running time. Recently, Zhang et al.^{6} proposed a generalized Anscombe transform based approximation model (GATFP) for FPM reconstruction, which can efficiently reduce noise. However, all of the above methods neglect the sparse prior of the reconstructed image, and their performance is not as good as expected.
Based on GATFP, we propose an FPM reconstruction method termed Fourier ptychography with sparse representation (FPSR), which brings together sparse representation^{7, 8} and the Nash equilibrium^{9} to improve the performance of FPM reconstruction. Sparse representation is based on the approximation that an image can be expressed as a combination of a few atomic functions taken from a certain dictionary. To employ sparse representation successfully, however, one must also address the problem of choosing the dictionary correctly. Clearly, a proper dictionary should be an overcomplete system with a number of elements substantially larger than the dimensionality of the approximated images. Numerous pioneering works have addressed overcomplete dictionaries. Christensen^{10} presented the general theory of frames and Riesz bases, where frames generalize the concept of a basis to the case in which the atomic functions are linearly dependent and form an overcomplete system. Recently, Danielyan et al.^{11} constructed analysis and synthesis frames formalizing Block Matching 3D (BM3D)^{12} image modeling, and used these frames to develop novel iterative deblurring algorithms. The other important technique incorporated in the proposed FPSR is the Nash equilibrium^{9, 13} introduced by Nash. It was originally developed within game theory, aiming at solving multiple-objective optimization problems. In Nash's work^{13} on non-cooperative games, a mixed-strategy Nash equilibrium was defined for any game with a finite set of actions, and it was proved that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The Nash equilibrium is a solution concept for a game involving two or more players, in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only his own strategy unilaterally^{14, 15}.
If each player has chosen a strategy and no player can benefit by changing his or her strategy while the other players keep theirs unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium^{15}. This concept closely matches the nature of the proposed FPSR model, in which multiple objective functions must be considered.
In this paper, we assume that the intensity-capturing process is subject to various signal-dependent errors, and that the detected photons follow mixed Poisson-Gaussian noise. We also introduce a sparsity hypothesis^{16, 17} for both the amplitude and the phase as a constraint in the reconstruction to improve its accuracy. To illustrate the effectiveness of the proposed method, we compare it with three state-of-the-art algorithms (the Newton method, TPWFP and GATFP) on both simulated and real data. In addition, we compare against a variant that incorporates sparse representation into the Newton method, termed NSR for short; in NSR, the Newton method does not include the background-subtraction step. Our results show that the proposed method reconstructs more accurate results than the other state-of-the-art algorithms.
Methods
As described in detail in ref. 1, a typical FPM setup consists of an LED array, a light microscope with a low-NA objective lens, and a monochromatic CCD camera. By sequentially turning on the LED elements of the array, the sample is illuminated from different angles, each of which corresponds to a shift in Fourier space proportional to the illumination angle. Consequently, the estimated intensity I_n at the detector imaging plane can be written as:
where \({{\mathcal F}}^{-1}\) is the 2D inverse Fourier transform, k_n is the spatial frequency corresponding to the n-th LED, P(·) represents the pupil function of the system, and S(·) represents the Fourier spectrum of the object. For multivariate problems such as Eq. (1), it is convenient to reformulate the problem using linear algebra. Similar to ref. 18, the relation between the high-resolution (HR) reconstruction and the LR observations corresponds to two sequential linear operations:

downsampling in the Fourier domain caused by the objective aperture

an inverse Fourier transform applied to the LR spectrum bands by the microscope imaging system
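The two operations above (selecting a shifted sub-region of the object spectrum, applying the pupil, inverse-transforming, and keeping only the intensity) can be sketched in NumPy as follows. The function and variable names are illustrative, and the centered-spectrum indexing convention is an assumption, not a detail specified in the paper:

```python
import numpy as np

def fpm_forward(obj_spectrum, pupil, kx, ky, lr_shape):
    """Simulate one low-resolution intensity image for the n-th LED.

    obj_spectrum : centered 2-D Fourier spectrum S of the HR object
    pupil        : pupil mask P (same shape as the LR patch)
    kx, ky       : pixel offsets of this LED's sub-aperture centre
    lr_shape     : (rows, cols) of the low-resolution patch
    """
    m, n = lr_shape
    cy = obj_spectrum.shape[0] // 2 + ky
    cx = obj_spectrum.shape[1] // 2 + kx
    # operation 1: crop the shifted sub-region of the object spectrum
    sub = obj_spectrum[cy - m // 2: cy + m - m // 2,
                       cx - n // 2: cx + n - n // 2]
    # operation 2: apply the pupil and inverse-transform; keep intensity only
    lr_field = np.fft.ifft2(np.fft.ifftshift(sub * pupil))
    return np.abs(lr_field) ** 2
```

Scanning `kx, ky` over the LED grid and stacking the outputs yields the LR image stack that the reconstruction consumes.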
We treat these two operations as a whole and use A to represent this combinational sampling process. The image formation model (1) can be finally vectorized as:
Many methods have been developed to obtain the optimal S(·) utilizing the following optimization function:
where r = (x, y) is the real-space coordinate vector. Most of these methods neglect the sparse prior of the reconstructed images. Motivated by this, an FPM with sparse representation is proposed in this paper. We formulate the reconstruction as the following minimization problem:
where \({\mathscr{D}}(S)\) is the data-fidelity term, R_a and R_φ are the regularization terms, s_a represents the amplitude of s, s_φ represents the phase angle of s, and \({\tau }_{a} > 0,\,{\tau }_{\phi } > 0\) are the respective regularization parameters. To efficiently suppress noise and obtain better performance, we derive a data-fidelity term based on noisy-observation modelling and a regularization term based on sparse representation, as described in the following.
Data fidelity based on noisy observation modelling
Among the existing noise models for FPM reconstruction, almost all methods assume Gaussian or Poisson noise. However, the capture process of many imaging devices is subject to various signal-dependent errors, and a standard way to model these errors is as Poisson-Gaussian noise^{19}. Recently, a mixed Poisson-Gaussian model^{6} was proposed for FPM reconstruction and obtained satisfactory performance under different types of noise. This paper extends the mixed model to properly handle different types of noise and improve the robustness of the reconstruction. We now give a brief overview of the mixed Poisson-Gaussian model and introduce the data-fidelity term. Consider a noisy image y_n of the form
where the subscript n denotes the n-th image, I_n is the true image, \({\tilde{{\bf{I}}}}_{n}\) is the Poisson-distributed image related to I_n, \({\tilde{{\bf{I}}}}_{n}\sim Poisson({{\bf{I}}}_{n})\), and w_n is an independent, identically distributed zero-mean Gaussian noise component with variance σ^2, \({{\bf{w}}}_{n}\sim {\mathscr{N}}(0,{\sigma }^{2})\). Following ref. 20, the likelihood function is:
where, for every i ∈ 1, …, m^2, [I_n]_i and [y_n]_i denote the i-th pixels of I_n and y_n. However, it is not appropriate to use this function directly as the noise model, since the infinite Gaussian mixture distribution in Eq. (6) makes it difficult to obtain the gradient of the function, and the complicated gradient would inevitably increase the processing time. Fortunately, inspired by variance-stabilizing transforms, the Generalized Anscombe Transform (GAT)^{21, 22} can be used to approximate the exact likelihood. Using the GAT approximation, the likelihood of \({\tilde{{\bf{y}}}}_{n}\) with components \({\tilde{{\bf{y}}}}_{n}=2\sqrt{{{\bf{y}}}_{n}+\frac{3}{8}+{\sigma }^{2}}\) is approximately given by refs 6 and 23:
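The GAT itself is a simple pointwise mapping; a minimal sketch, implementing exactly the component formula given above:

```python
import numpy as np

def gat(y, sigma):
    """Generalized Anscombe Transform for Poisson-Gaussian data.

    Maps y ~ Poisson(I) + N(0, sigma^2) to data whose noise has
    approximately unit variance: y_tilde = 2*sqrt(y + 3/8 + sigma^2).
    The clamp guards against negative arguments from strong read noise.
    """
    return 2.0 * np.sqrt(np.maximum(y + 3.0 / 8.0 + sigma ** 2, 0.0))
```

After this transform, the stabilized data can be treated as approximately Gaussian with unit variance, which is what makes the simplified likelihood tractable.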
The negative loglikelihood function, assuming that the measurements are independent from each other, is a function of the parameters S, given the observed data y _{ n }:
The first two terms are constant offsets that can be either ignored or computed prior to the reconstruction. Thus we can obtain the datafidelity for the proposed reconstruction as^{6}:
Regularization term based on sparse representation
To efficiently express the sparse coding for complex value^{24}, we employ the separate sparse realvalued representations for amplitude and absolute phase image. The sparse representation is formulated as the following matrix operation:
where Θ_a and Θ_φ are vectors of the amplitude and phase spectra. The modulus and angle operations applied to vectors in Eqs (10) and (11) are element-wise; thus s_a and s_φ are the vectors of amplitude and phase values. In Eq. (10), the amplitude s_a and phase s_φ are synthesized from the amplitude and phase spectra Θ_a and Θ_φ. Conversely, the analysis in Eq. (11) gives the spectra for the amplitude and phase of the sample s. In Eqs (10) and (11) the synthesis and analysis matrices are denoted Ψ_a, Ψ_φ and Φ_a, Φ_φ, respectively. Following the sparsity rationale, we assume that the amplitude and phase spectra Θ_a and Θ_φ are sparse, i.e., most of their elements are zero. To quantify the level of sparsity of Θ_a and Θ_φ, i.e., their number of nonzero (active) elements, we use the pseudo ℓ_0-norm ‖·‖_0, defined as the number of nonzero elements of the vector argument. Therefore, we design estimation criteria promoting low values of ‖Θ_a‖_0 and ‖Θ_φ‖_0^{17}.
Two principally different variational formulations, classified as the analysis and synthesis approaches, can be considered for sparse modelling. In the synthesis approach, the relation between the signal and spectrum variables is given by the synthesis in Eq. (10), while in the analysis approach it is given by the analysis in Eq. (11)^{17}.
Based on Eq. (4), the variational setup in the synthesis approach is of form
The first summand in Eq. (12) is the negative log-likelihood function corresponding to the approximated version of the mixed Poisson-Gaussian distribution in Eq. (7). The pseudo ℓ_0-norms with coefficients τ_a and τ_φ are included to enforce the sparsity of the amplitude and phase^{17}. Clearly, the synthesis setup leads to a rather complex optimization problem; the variational setup of the analysis approach has a similar form and suffers from the same problem. Similar to ref. 17, a Nash equilibrium approach is used to tackle this problem: the constrained optimization with a single criterion function, as in Eq. (12), is replaced by a search for the Nash equilibrium balancing two criteria. A demonstration of this approach for synthesis-analysis sparse inverse imaging can be found in ref. 11, where it is devised for linear real-valued observation modeling. Accordingly, we introduce two criteria for the formalization of the algorithm design:
where \(S= {\mathcal F} (s)\) and ‖·‖_2 stands for the Euclidean norm. Criterion (13) is identical to (9); the sparsity is enforced by criterion (14).
Minimization of \({ {\mathcal L} }_{1}(S)\) requires the calculation of its gradient. Inspired by previous works^{4, 5, 25}, the gradient of \({ {\mathcal L} }_{1}(S)\) is:
where [A _{ n }]^{H} denotes the conjugate transpose of [A _{ n }].
In addition, the hard and soft thresholding operators^{26} are employed, and the solutions of \({ {\mathcal L} }_{2}({{\rm{\Theta }}}_{a},{{\rm{\Theta }}}_{\phi },{s}_{a},{s}_{\phi })\) with respect to Θ_{ a } and Θ_{ φ } are expressed as
where \(\sqrt{2{\tau }_{a}}\) and \(\sqrt{2{\tau }_{\phi }}\) are the thresholds for the amplitude and phase, respectively; in this paper, \(\sqrt{2{\tau }_{a}}=\sqrt{2{\tau }_{\phi }}=1.4\). Here, 1[u] is an element-wise vector function with 1[u] = 1 if u ≥ 0 and 1[u] = 0 if u < 0. Elements of Θ_a and Θ_φ whose magnitudes are smaller than the corresponding thresholds are set to zero.
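The hard-thresholding rule described above, which zeroes every coefficient whose magnitude falls below the threshold \(\sqrt{2\tau}\), can be sketched as:

```python
import numpy as np

def hard_threshold(theta, tau):
    """Hard-threshold a coefficient vector: keep entries whose magnitude
    reaches sqrt(2*tau), set the rest to zero."""
    t = np.sqrt(2.0 * tau)
    return theta * (np.abs(theta) >= t)
```

For the paper's setting sqrt(2*tau) = 1.4, this corresponds to tau = 0.98; the same operator is applied independently to the amplitude and phase spectra.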
Following the idea of a Nash equilibrium balancing multiple penalty functions, the proposed algorithm is composed of alternating optimization steps performed for \({ {\mathcal L} }_{1}\) and \({ {\mathcal L} }_{2}\)^{17, 27}. The alternating optimization steps are:
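The alternating iteration can be sketched as the following skeleton. The names `grad_L1`, `filter_amp` and `filter_phase` are illustrative placeholders (in the paper these are the GAT data-fidelity gradient and the sparsity-promoting BM3D thresholding filters, respectively), and the step size `mu` and iteration count are assumptions:

```python
import numpy as np

def fpsr_sketch(S0, grad_L1, filter_amp, filter_phase, mu=0.1, iters=50):
    """Skeleton of the alternating Nash-equilibrium iteration.

    Alternates a gradient step on the data-fidelity criterion L1 with
    sparsity-promoting filtering of the amplitude and phase (criterion L2).
    """
    S = S0.copy()
    for _ in range(iters):
        # step on L1: descend the data-fidelity gradient in Fourier space
        S = S - mu * grad_L1(S)
        # step on L2: filter amplitude and phase separately (e.g. BM3D)
        s = np.fft.ifft2(S)
        s = filter_amp(np.abs(s)) * np.exp(1j * filter_phase(np.angle(s)))
        S = np.fft.fft2(s)
    return S
```

With identity filters and a quadratic fidelity term, the loop reduces to plain gradient descent, which makes the role of each alternating step easy to see.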
To enable sparse approximations, the dictionary should be rich enough to capture the full variety of images. In our algorithm, we use BM3D frames for the analysis and synthesis operations. A frame is a generalization of the concept of a basis to the case when the dictionary forms an overcomplete system. The BM3D filter can be split into three steps^{11}.

Analysis. Similar image blocks are collected in groups in order to obtain highly correlated data. Blocks in each group are stacked together to form a 3D data array, which is decorrelated using an invertible 3D transform.

Processing. The obtained 3D groups are filtered by hard-thresholding.

Synthesis. The filtered spectra are inverted, providing estimates for each block in the group. These blockwise estimates are returned to their original position, and the final image reconstruction is calculated as a weighted average of all of the obtained blockwise estimates.
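As a rough, self-contained illustration of the three steps (not the optimized BM3D of ref. 12, which uses specialized block matching and DCT/Haar transforms), the following NumPy sketch groups similar blocks, hard-thresholds a 3D spectrum, and aggregates the filtered blocks; all parameters and the FFT stand-in transform are illustrative assumptions:

```python
import numpy as np

def bm3d_threshold_sketch(img, patch=8, stride=8, group_size=4, thr=0.5):
    """Toy illustration of the BM3D grouping / thresholding / aggregation
    pipeline, using a plain 3-D FFT as the decorrelating transform."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    # candidate block positions on a coarse grid
    pos = [(i, j) for i in range(0, h - patch + 1, stride)
                  for j in range(0, w - patch + 1, stride)]
    patches = np.array([img[i:i + patch, j:j + patch] for i, j in pos])
    for ref in range(len(pos)):
        # Analysis: group the most similar blocks and stack them into 3-D
        d = np.sum((patches - patches[ref]) ** 2, axis=(1, 2))
        idx = np.argsort(d)[:group_size]
        group = patches[idx]
        # Processing: hard-threshold the 3-D spectrum of the group
        spec = np.fft.fftn(group)
        spec = spec * (np.abs(spec) >= thr * np.abs(spec).max())
        filtered = np.real(np.fft.ifftn(spec))
        # Synthesis: return blocks to their positions and average
        for g, k in enumerate(idx):
            i, j = pos[k]
            out[i:i + patch, j:j + patch] += filtered[g]
            weight[i:i + patch, j:j + patch] += 1.0
    return out / np.maximum(weight, 1.0)
```

Real BM3D additionally weights the aggregation by each group's sparsity and restricts matching to a local search window; those refinements are omitted here for clarity.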
It follows from Eq. (20) that steps (19) and (20), together with the Analysis step defining the analysis Φ and synthesis Ψ frames, can be combined into a single algorithm. In the remainder of the manuscript, we use the notation BM3D for this algorithm. Note that the standard BM3D algorithm^{12} is composed of two successive steps, thresholding and Wiener filtering; in this paper, however, BM3D consists of only the first thresholding step. Using the BM3D algorithm to implement Eqs (19) and (20), we obtain:
Combining the solutions of Eq. (18) and Eqs (22) and (23), we obtain the FPSR algorithm shown in Algorithm 1, where y_c represents the image captured with the central LED.
Results
In this section, we conduct a series of experiments on both simulated and real captured data.
Quantitative metric and parameter settings
In addition to the visual results, we introduce the relative error (RE)^{4,5,6} to quantify the recovery performance of the different methods, defined as:
where S _{ t } is the true sample spectrum, and S denotes the reconstructed spectrum.
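Assuming the RE follows the convention of refs 4-6 (a squared-norm ratio with the global phase ambiguity inherent to phase retrieval removed; this convention is an assumption where the text leaves the formula implicit), a minimal implementation is:

```python
import numpy as np

def relative_error(S, S_true):
    """Relative error between reconstructed and true spectra, after
    removing the global phase ambiguity of phase retrieval."""
    # closed-form best global phase aligning S to S_true
    phi = np.angle(np.vdot(S, S_true))
    return (np.linalg.norm(S * np.exp(1j * phi) - S_true) ** 2
            / np.linalg.norm(S_true) ** 2)
```

An RE of 0 indicates a perfect reconstruction up to a global phase factor; an all-zero estimate gives RE = 1.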
In the simulation experiments, we model a realistic microscope setup with the following parameters: the wavelength of the incident light is 630 nm; the CCD pixel size is 1.845 μm; and the NA of the objective lens is 0.1. We use a 15 × 15 LED matrix as the light source to provide angle-varied illumination. The distance between adjacent LED elements is 4 mm, and the distance between the sample and the LED matrix is 90.88 mm. We use the 'cameraman' image from the 'standard' test images^{28} as the HR amplitude, and the 'Aerial' image (512 × 512 pixels) from the USC-SIPI image database^{29} as the phase image. Using the Fourier ptychographic imaging formation, we simulated the ideal data with three sequential operations: (1) select the sub-region of the HR Fourier spectrum corresponding to each incident angle, (2) apply an inverse Fourier transform to the sub-region to obtain the complex-valued LR image, and (3) retain only the intensity of the complex LR image to obtain the ideal data. In addition, we explicitly consider three different types of noise:

Gaussian noise: we added Gaussian white noise to the ideal data.

Poisson noise: the ideal data is corrupted by Poissondistributed noise at each pixel.

Mixed Poisson-Gaussian noise: first, we simulate the data with Poisson noise; then we add Gaussian white noise to the Poisson-corrupted data.
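The three corruption types above can be simulated along the following lines. The Gaussian std is expressed relative to the data maximum, matching the convention used in the experiments; the photon-count scale of the Poisson component is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def add_mixed_noise(ideal, sigma_rel, photons=1e4, rng=None):
    """Corrupt ideal intensity data with mixed Poisson-Gaussian noise.

    sigma_rel : Gaussian std relative to the data maximum
    photons   : assumed expected photon count at the data maximum,
                controlling the strength of the Poisson component
    """
    rng = np.random.default_rng() if rng is None else rng
    peak = ideal.max()
    # Poisson shot noise: scale to expected photon counts, sample, rescale
    noisy = rng.poisson(ideal / peak * photons) * peak / photons
    # additive zero-mean Gaussian read noise
    noisy = noisy + rng.normal(0.0, sigma_rel * peak, size=ideal.shape)
    return noisy
```

Setting `sigma_rel = 0` yields pure Poisson noise; skipping the Poisson step and adding only the Gaussian term yields the pure Gaussian case.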
For Gaussian noise and the Gaussian component of the mixed Poisson-Gaussian noise, the standard deviation is given as the ratio between the actual standard deviation and the maximum of the ideal data. We compared FPSR with three state-of-the-art methods, i.e., the Newton method, TPWFP and GATFP. We also provide the result generated by NSR, to show whether the benefit of the proposed method comes from the noisy-observation modelling or from the sparse representation. The code of the Newton method is adapted from the code samples at http://sites.bu.edu/tianlab/opensource/. The code of TPWFP can be obtained at http://www.sites.google.com/site/lihengbian. In TPWFP, α^h is set to 25, since this works well for FPM reconstruction as stated in ref. 4. Another important parameter for all the algorithms is the number of iterations: 300 iterations are sufficient for TPWFP, GATFP, NSR and FPSR, while we use 500 iterations for the Newton method.
Simulation results
First, we compare FPSR with the three state-of-the-art methods mentioned above and NSR to show their pros and cons. We apply each algorithm to the simulated data with Poisson noise, Gaussian noise and mixed Poisson-Gaussian noise, respectively. Poisson noise describes the statistics of the incoming photons at each pixel, which follow a discrete probability distribution^{3}. Gaussian noise is mostly introduced in the capture chain, e.g., as thermal noise, which is associated with the rapid and random motion of electrons within a conductor due to thermal agitation. Because the number of electrons in a conductor is very large, and their random motions are statistically independent, the central limit theorem indicates that thermal noise is Gaussian distributed with zero mean. A more realistic way to model the noise is a mixed Poisson-Gaussian distribution. Note that the standard deviation (std) for the Poisson-Gaussian noise denotes the level of the Gaussian component. The std of the Gaussian noise (and of the Gaussian component of the mixed Poisson-Gaussian noise) is set to 2e−3, and the ground-truth noise variance is used in the synthetic experiments.
From the results (Fig. 1), we can see that TPWFP performs well under Poisson noise, which benefits from its accurate Poisson signal model. In contrast, the Newton method and NSR minimize the square of the difference between the actual and estimated measurements, which is equivalent to a Gaussian likelihood function when the constant term is ignored^{3}. Although NSR also incorporates sparse representation, its performance is not as good as expected. GATFP and FPSR achieve a successful reconstruction without degrading the reconstruction quality. For Gaussian noise, GATFP and FPSR outperform the other three methods. This is because the Gaussian noise of the measurement is considered explicitly, as in Eq. (5), in the noisy-observation model. FPSR has an advantage over GATFP, since the sparse representation can effectively reduce noise by discarding measurements below a certain threshold. TPWFP can recognize most of the Gaussian noise and remove it using the truncated Wirtinger gradient; however, for situations with a high noise level, TPWFP may not be an effective reconstruction approach. The Newton method instead estimates the background of each image and subtracts it to produce a corrected intensity image^{30}. NSR clearly outperforms TPWFP and the Newton method when the standard deviation is large, but the advantage is less obvious for smaller standard deviations. For mixed Poisson-Gaussian noise, NSR achieves better performance than TPWFP and the Newton method when the standard deviation is large, but it suffers a significant degree of crosstalk in the phase image and generates white spots in the amplitude image. GATFP obtains a successful reconstruction, which is largely attributable to its Poisson-Gaussian assumption. FPSR performs better than all the competing methods, which is mainly attributable to the combination of sparse representation and the mixed Poisson-Gaussian assumption.
In conclusion, the type of noise strongly influences the quality of the reconstructed image, while FPSR is more robust under different types of noise. This behavior is well explained by the fact that our model can be treated as a generalization of these noise types.
All five algorithms are implemented in MATLAB R2014a on a computer with an Intel(R) Core(TM) i5-3470 processor @ 3.20 GHz and a 64-bit Windows 10 system. The complexity of the algorithms is characterized by the running time of 100 iterations, reported in Table 1. Since both NSR and FPSR employ sparse representation, which involves BM3D frame operations for both the amplitude and phase images, their running times are much higher than those of the other three methods. Although its computational complexity is higher, the proposed FPSR achieves much better quality than the competing methods (Newton method, TPWFP, GATFP and NSR) for almost all reconstructed results under the different types of noise. With the emergence of ever more powerful computers, the running time of FPSR can be reduced significantly; further studies on reducing the computational complexity of the proposed FPSR are also needed.
Experimental results
In this subsection, we demonstrate the performance of our method on experimental data. The parameters of the real FPM imaging system are the same as those in the simulation. We obtain an estimated standard deviation of the noise using the Median of Absolute Deviation (MAD) technique. However, the standard deviation of the Gaussian component of the mixed Poisson-Gaussian noise might not be exact, since the noise also contains a Poisson component; we therefore adjust the standard deviation starting from the value obtained by MAD. In the experiment, we employ a blood smear and a USAF target as samples, and capture a sequence of 225 images for each sample.
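A minimal sketch of the MAD-based noise estimate, using horizontal pixel differences as a stand-in for fine-scale wavelet detail coefficients (an assumption; the text does not specify how the detail signal is extracted):

```python
import numpy as np

def mad_sigma(img):
    """Estimate the Gaussian noise std via the Median of Absolute
    Deviation (MAD) of horizontal pixel differences."""
    # differencing adjacent pixels doubles the noise variance, hence /sqrt(2)
    d = np.diff(img, axis=1).ravel() / np.sqrt(2.0)
    # 1.4826 = 1/Phi^{-1}(3/4) makes MAD consistent for a Gaussian std
    return 1.4826 * np.median(np.abs(d - np.median(d)))
```

Because the median is insensitive to the sparse image structure that survives differencing, this estimate is robust, but it overestimates the Gaussian component when Poisson noise is also present, which is why the value is subsequently adjusted.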
The reconstruction results for the USAF target are shown in Fig. 2. In the result of the Newton method, although most of the noise is removed by the background-subtraction step, many image details are subtracted as well (see group 8, element 6). In the results generated by TPWFP, we can easily observe that TPWFP suffers from noise corruption. The results of NSR suffer a significant degree of blurring (see group 9, element 1). GATFP achieves a good reconstruction with more details. Compared with GATFP, the reconstruction results of FPSR suffer from less noise in both amplitude and phase (see the zoomed-in region). The reconstruction results for the blood smear are shown in Fig. 3. TPWFP and the Newton method produce fluctuations in the object phase over the blood smear, whereas GATFP, NSR and FPSR are able to remove these fluctuations and achieve superior performance. In addition, the reconstruction of the proposed FPSR has stronger contrast and contains less noise than those of GATFP and NSR, which is consistent with the simulation experiments. In all, FPSR offers a novel way for FPM to reconstruct highly accurate results from noise-deteriorated inputs.
Discussion
In this paper, we develop and test a novel reconstruction method for FPM, termed FPSR. By fully exploiting the sparse prior of the captured images, the proposed FPSR is formulated as a regularized optimization problem, which is solved with a Nash equilibrium algorithm. In FPSR, the data fidelity is constructed as a maximum-likelihood problem, and the regularization term expresses both the amplitude and phase images as a small number of nonzero elements over an appropriate basis. We compare the reconstruction quality of the proposed method and the competing methods under different types of noise. Both simulation and experimental results demonstrate the validity of our method.
One extension of FPSR is to handle much more complex noise by modeling the data noise as a mixture of Gaussians, which is a universal approximator of distributions and can fit a wide range of noises. In addition, a more accurate approach to estimating the standard deviation could improve the convergence speed and effectiveness of the algorithm.
A limitation of our method is that the model is non-convex and might converge to incorrect local minima. In addition, although the GAT is able to stabilize the noise variance, the tails of the distribution of variance-stabilized coefficients are empirically longer than normal, as evidenced in ref. 31. Moreover, when the noise is small, FPSR may remove some useful information. Therefore, incorporating a convex program into FPSR to obtain a solution with minimum cost will be a research emphasis in the near future.
References
Zheng, G., Horstmeyer, R. & Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 7, 739–745 (2013).
Ou, X., Horstmeyer, R., Zheng, G. & Yang, C. High numerical aperture Fourier ptychography: principle, implementation and characterization. Opt. Express 23, 3472–3491 (2015).
Yeh, L.-H. et al. Experimental robustness of Fourier ptychography phase retrieval algorithms. Opt. Express 23, 33214–33240 (2015).
Bian, L. et al. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient. Scientific Reports 6 (2016).
Chen, Y. & Candes, E. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Advances in Neural Information Processing Systems 739–747 (2015).
Zhang, Y., Song, P. & Dai, Q. Fourier ptychographic microscopy using a generalized Anscombe transform approximation of the mixed PoissonGaussian likelihood. Opt. Express 25, 168 (2017).
Yang, J., Wright, J., Huang, T. S. & Ma, Y. Image super-resolution via sparse representation. IEEE Transactions on Image Processing 19(11), 2861–2873 (2010).
Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer Publishing Company, Incorporated (2010).
Nash, J. F. Equilibrium Points in nPerson Games. Proceedings of the National Academy of Sciences of the United States of America 36(1), 48 (1950).
Christensen, O. An Introduction to Frames and Riesz Bases. Applied & Numerical Harmonic Analysis 59–76 (2003).
Danielyan, A., Katkovnik, V. & Egiazarian, K. BM3D Frames and Variational Image Deblurring. IEEE Transactions on Image Processing 21, 1715 (2011).
Dabov, K., Foi, A., Katkovnik, V. & Egiazarian, K. Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Transactions on Image Processing 16, 2080–2095 (2007).
Nash, J. F. Noncooperative games. Annals of Mathematics 54, 286–295 (1951).
Osborne, M. J. & Rubinstein, A. A Course in Game Theory. Economica 63, 249 (1994).
Tang, Z. & Zhang, L. Nash equilibrium and multi criterion aerodynamic optimization. Journal of Computational Physics 314, 107–126 (2016).
Shechtman, Y., Beck, A. & Eldar, Y. C. GESPAR: Efficient Phase Retrieval of Sparse Signals. IEEE Transactions on Signal Processing 62, 928–938 (2012).
Katkovnik, V. Phase retieval from noisy data based on sparse approximation of object phase and amplitude.
Bian, L. et al. Fourier ptychographic reconstruction using Wirtinger flow optimization. Opt. Express 23, 4856–4866 (2015).
Makitalo, M. & Foi, A. Optimal inversion of the generalized Anscombe transformation for Poisson-Gaussian noise. IEEE Transactions on Image Processing 22, 91–103 (2013).
Chouzenoux, E., Jezierska, A., Pesquet, J.-C. & Talbot, H. A convex approach for image restoration with exact Poisson-Gaussian likelihood. SIAM Journal on Imaging Sciences 8, 2662–2682 (2015).
Starck, J.L., Murtagh, F. D. & Bijaoui, A. Image processing and data analysis: the multiscale approach (Cambridge University Press, 1998).
Zhang, B., Fadili, M. J., Starck, J.-L. & Olivo-Marin, J.-C. Multiscale variance-stabilizing transform for mixed Poisson-Gaussian processes and its applications in bioimaging. IEEE 6, VI233–VI236 (2013).
Marnissi, Y., Zheng, Y. & Pesquet, J.-C. Fast variational Bayesian signal recovery in the presence of Poisson-Gaussian noise. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 3964–3968 (2016).
Katkovnik, V., Egiazarian, K. & Bioucas-Dias, J. Phase imaging via sparse coding in the complex domain based on high-order SVD and nonlocal BM3D techniques. 4587–4591 (2015).
Candes, E. J., Li, X. & Soltanolkotabi, M. Phase retrieval via Wirtinger flow: theory and algorithms. IEEE Transactions on Information Theory 61, 1985–2007 (2015).
Katkovnik, V. & Astola, J. Phase retrieval via spatial light modulator phase modulation in 4f optical setup: numerical inverse imaging with sparse regularization for phase and amplitude. Journal of the Optical Society of America A 29, 105–116 (2012).
Facchinei, F. & Kanzow, C. Generalized Nash Equilibrium Problems. Annals of Operations Research 175, 177–211 (2010).
Image Databases, http://www.imageprocessingplace.com.
University of Southern California, “The USCSIPI Image Database,” http://sipi.usc.edu./database.
Tian, L., Li, X., Ramchandran, K. & Waller, L. Multiplexed coded illumination for Fourier ptychography with an LED array microscope. Biomed. Opt. Express 5, 2376 (2014).
Zhang, J., Hirakawa, K. & Jin, X. Quantile analysis of image sensor noise distribution. IEEE International Conference on Acoustics, Speech and Signal Processing 1598–1602 (2015).
Acknowledgements
The authors acknowledge funding support from the National Natural Science Foundation of China (NSFC) U1301257, 61571254, Guangdong Special Support plan 2015TQ01X161, and the Shenzhen Peacock Plan.
Author information
Contributions
Y.Z. and P.S. proposed the idea and conducted the experiments. J.Z. and Q.D. analyzed the results. All the authors contributed to writing and revising the manuscript, and convolved in discussions during the project.
Ethics declarations
Competing Interests
The authors declare that they have no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zhang, Y., Song, P., Zhang, J. et al. Fourier ptychographic microscopy with sparse representation. Sci Rep 7, 8664 (2017). https://doi.org/10.1038/s41598017090908