Abstract
This paper proposes a new deconvolution method for 3D fluorescence widefield microscopy. Most previous methods are insufficient for restoring a 3D cell structure, since the point spread function (PSF) is simply assumed to be depth-invariant, whereas the PSF of a microscope changes significantly along the optical axis. A few methods that consider a depth-variant PSF have been proposed; however, they are impractical, since they are non-blind approaches that use a PSF measured in advance, whereas the imaging condition of a target image differs from that of the pre-measurement. To solve these problems, this paper proposes a blind approach that estimates a depth-variant, specimen-dependent PSF and restores the 3D cell structure. Experiments show that the proposed method outperforms previous ones in terms of suppressing axial blur. The proposed method is composed of the following three steps. First, a non-parametric averaged PSF is estimated by the Richardson–Lucy algorithm, whose initial parameter is given by predicting the central depth from an intensity analysis. Second, the estimated PSF is fitted to Gibson's parametric PSF model via optimization, and depth-variant PSFs are generated. Third, the 3D cell structure is restored by using a depth-variant version of a generalized expectation-maximization algorithm.
Introduction
3D widefield fluorescence microscopy (WFM) is an essential tool in many disciplines, particularly the biological and medical sciences. WFM provides molecular specificity by visualizing, against a dark background, only the biomolecules to which fluorescent dyes selectively respond. This property makes it possible to obtain micrographs with high contrast. Applying 3D WFM to observe 3D cellular structures is referred to as optical sectioning, that is, generating a series of discrete 2D image planes (x–y planes)^{1}.
3D WFM, however, has several issues, such as out-of-focus blur obscuring in-focus detail and thereby reducing the contrast of the in-focus object. Two major approaches to overcome these problems have been devised^{1}. The first approach is to apply new microscopy optics. Confocal microscopy, the most widely used, suppresses out-of-focus blur by means of a pinhole. On the other hand, it suffers from slow image acquisition and photobleaching. The second approach is to apply image restoration by a deconvolution algorithm, which enhances the resolution and contrast of blurred WFM images without the limitations of the first approach. This study focuses on the second approach and proposes a method for deconvolution of 3D WFM images.
To implement the proposed image deconvolution algorithm, it is most important to obtain an accurate point spread function (PSF) of the 3D WFM imaging system. One of the main characteristics of a 3D WFM PSF is depth variance along the optical axis (z axis), whereas general camera models ignore this variance^{2}. The depth variance arises because WFM aberration is caused by the mismatch between the refractive indices of the immersion medium and the specimen. As the optical system focuses deeper into the specimen, the aberration increases. This aberration phenomenon is the mechanism behind the depth-variant characteristics of 3D WFM.
Aiming to improve the resolution and contrast of 3D WFM through image deconvolution, numerous studies have been carried out^{3}. Most of them have conducted depth-invariant image restoration owing to the simplicity of PSF modelling^{4,5,6,7}. If the specimen is thin enough, the depth variance of the PSF can be ignored and these methods suppress the blur effectively, thereby increasing the resolution of 3D WFM up to that of confocal microscopy^{8}. However, for a specimen of typical size (10–20 μm), axial blur along the z axis still remains^{9}. For instance, the diameter of the blurred image of a 2500 nm bead was measured as 4760 nm (axial) and 2867 nm (transverse); after deconvolution under the assumption that the specimen is thin enough, these values were 4000 nm and 2664 nm, respectively^{10}. These deconvolved values indicate that the restored image is lengthened along the optical axis. This phenomenon, called elongation, occurs when the image is restored by using a depth-invariant PSF, which is only suitable for a specific plane^{2}. Considering that the typical size of an animal cell is 10–20 μm, namely much thicker than the 2500 nm bead, the depth variance of the PSF cannot be neglected. To handle the elongation problem, several studies have considered a depth-variant PSF. However, their experiments either required pre-measurement of the PSF^{2,11} or were conducted only in simulation^{9,12}.
As for the pre-measurement of the PSF performed in previous studies, it was assumed that the actual PSF can be approximated by the captured 3D image of a point-like microbead. However, when the microbead is replaced with a cell specimen, the actual imaging condition, including the optical path, changes. This change introduces a discrepancy between the actual PSF and the result of the pre-measurement. In addition, a point-like microbead is merely a 'point-like' sphere and cannot be a perfect point source. For these reasons, the PSF should be estimated directly from captured images without any pre-measurement. This study focused not on simulation but on a more practical method that can be applied to a raw image and does not involve any pre-experiments for obtaining PSF information.
In order to evaluate and compare quantitative performance on an actual WFM image, this paper uses WFM observation data of a hollow fluorescent microbead with a precisely known diameter of 2500 nm. Its diameter and shape serve as ground truth, and the diameter and the relative contrast between the shell and the interior of the sphere serve as quantitative performance indicators^{10,7}. Also, since the data are publicly available, performance can be compared on the same data. Previous non-blind approaches evaluated their quantitative performance using simulation experiments^{2,11,9,12}. Because the ground truth was known, they could use performance indicators common in the general image restoration field, such as mean square error and correlation coefficients. Some blind approaches evaluated their quantitative performance with simulation images generated by an arbitrary PSF, owing to the lack of a ground truth image^{13,14}. However, since such a simulation image is generated under assumptions such as a depth-invariant or symmetric PSF that differ from the real one, the deconvolution result is advantageous and cannot be connected to quantitative performance on an actual image. Moreover, each study implements experiments using different images and initial PSFs, which makes it difficult to compare performance. Therefore, we used the open data of an actual WFM image and could compare the quantitative performance of our algorithm with existing software and algorithms.
Our algorithm estimates a depth-variant, specimen-dependent PSF under the assumption of specimen homogeneity (x–y-invariant PSF). It first roughly finds the PSF for the centre of the object by intensity analysis of the observed image. The PSF is then refined through the Richardson–Lucy algorithm so as to maximize the conditional probability of the observed image given the PSF. To generate depth-variant PSFs, the refined PSF is parameterized by maximum-likelihood fitting. Finally, depth-variant PSFs for every depth in the observed image are generated by adjusting the depth parameter. Using the generated depth-variant PSFs, the true object is estimated by a penalized RL algorithm.
The major contributions of this study are as follows. First, a new practical WFM image deconvolution algorithm that reflects the depth variance of the PSF and actual imaging conditions is proposed. Second, it achieved remarkable experimental results compared with existing studies, showing that the proposed algorithm solves the elongation problem and improves axial resolution. Third, our system is superior to other methods with regard to computational time.
Results
Datasets of C. elegans embryo cell and fluorescent microbead images were used in two experiments. The first experiment, on cells, aimed to show the applicability and qualitative performance of the proposed algorithm for biological images. The second experiment, on beads, applied the proposed algorithm to images of a fluorescent microbead whose size and shape were known. It was thus possible to evaluate the performance of the proposed algorithm quantitatively by comparing it with three deconvolution software packages (Huygens Pro, AutoDeblur, DeconvolutionLab), as reported by Griffa^{10}, and another depth-invariant method by Soulez^{7}. The datasets can be downloaded from the website of the Biomedical Imaging Group at EPFL (http://bigwww.epfl.ch/deconvolution).
C. elegans embryo cell
The dataset is an observation image of a C. elegans embryo cell taken with a 100×, 1.4-NA UPlanSApo oil-immersion objective. Enough image planes should be acquired for the overall shape of the specimen to be observed. Unfortunately, the dataset did not satisfy this condition, which introduces artefacts at the boundaries of the restored image. To avoid this boundary effect, a dataset pre-processed by a minimum filter is used (see Methods section). The data cube used was composed of 672 × 712 × 216 voxels of size 64.5 nm × 64.5 nm × 200 nm. The PSF size (x × y × z) was set to 151 × 151 × 57 voxels of size 64.5 nm × 64.5 nm × 200 nm. After deconvolution, the restored image was cropped to the original volume of 672 × 712 × 104 voxels. The dataset was composed of three stacks of images corresponding to three wavelengths. CY3 (red, 634 nm), FITC (green, 531 nm) and DAPI (blue, 447 nm) staining represented the point-wise spots of protein, the microtubule filaments and the chromosomes in the nuclei, respectively. Each wavelength image was processed separately.
To compare the performance of the proposed algorithm with those of existing algorithms, the results of deconvolution by the software package DeconvolutionLab as well as those obtained by the proposed algorithm are depicted in Figure 1. All experiments, with ours and with DeconvolutionLab, were run with the same number of iterations (150). Table 1 summarizes the experimental conditions. The x–y, y–z and x–z profiles shown in Figure 1 are taken at z = 63, x = 260 and y = 450 pixels, respectively. The performance of each algorithm was examined in terms of qualitative visibility and computational cost.
In the raw data, the image detail is represented in a narrow intensity range. The acquired images corresponding to the CY3, FITC and DAPI channels have intensity ranges of (215–2842), (209–2929) and (206–2687), respectively. After each image was deconvolved, the ranges widened to (0–45898), (0–24773) and (0–16292), respectively.
An observed image of a C. elegans embryo cell is shown in Figure 1(a). Since the image is blurry and unsharp, it is difficult to identify its cellular components. A set of images restored by DeconvolutionLab with a PSF downloaded from the Biomedical Imaging Group at EPFL (http://bigwww.epfl.ch/deconvolution/?p=bio), generated without consideration of the actual aberration, is shown in Figure 1(b). As shown in the figure, only components with strong intensity remained, and even those are blurry. The result of image restoration using the depth-invariant PSF estimated in step 2 of the proposed algorithm (see Methods section) is shown in Figure 1(c). The result is still blurry, but it is improved from the viewpoint of observing specific components. It can be inferred from this result that the downloaded PSF did not reflect the actual imaging condition. The result obtained with the proposed accelerated generalized expectation-maximization (GEM) algorithm with the depth-invariant PSF is shown in Figure 1(d). The proposed algorithm gave clearer visibility than DeconvolutionLab after the same number of iterations, since the image restoration was designed to guarantee convergence and to converge quickly by means of vector extrapolation. The result of deconvolution by the proposed algorithm with depth-variant PSFs is depicted in Figure 1(e). While the restored image in Figure 1(d) is almost the same as that in Figure 1(e), the restored image in Figure 1(f) shows that the elongation phenomenon was suppressed by our depth-variant GEM algorithm. Moreover, the depth-variant GEM algorithm appears to remove blur more effectively than the depth-invariant one, as represented in the pink elliptical area in Figure 1(f). When the observed C. elegans embryo cell image in Figure 1(a) is compared with the restored image in Figure 1(e), it becomes clear that the proposed algorithm improves the visibility of the cellular structure. In addition, blue chromosomes, green filaments and red spots can be distinguished.
The processing time for DeconvolutionLab was five hours. The depth-invariant version of the proposed algorithm took only 113 minutes, much faster than DeconvolutionLab. While the proposed depth-variant algorithm achieved better qualitative visibility than the depth-invariant one, it took more computational time (27.5 hours). In other words, there is a trade-off between performance and computational time.
Fluorescent microbead
Observations of an InSpeck green fluorescent hollow bead with a diameter of 2500 nm were used as the fluorescent microbead dataset. The observations were taken with an Olympus Cell R microscope with a 63×, 1.4-NA oil-immersion objective. The data cube was composed of 256 × 256 × 128 voxels of size 64.5 nm × 64.5 nm × 160 nm. The PSF size (x × y × z) was set to 151 × 151 × 57 voxels of size 64.5 nm × 64.5 nm × 160 nm.
The diameter of the restored image was measured as the full width at half maximum (FWHM). The closer the FWHM value to the real diameter (2500 nm), the better the method was regarded to be. The relative contrast between the border and the centre of the sphere was used as a performance indicator because the fluorescent bead was known to be empty inside. The higher the relative contrast, the more clearly the boundary between the shell of the bead and the hollow interior can be distinguished.
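As a concrete illustration of this measurement, the FWHM of a 1-D intensity profile can be computed by locating the half-maximum crossings with linear interpolation. The following is a minimal sketch; the function name and the baseline handling are our own choices, not from the paper:

```python
import numpy as np

def fwhm(profile, spacing):
    """Full width at half maximum of a 1-D intensity profile.

    `profile` is sampled at regular intervals of `spacing` (e.g. nm).
    The two half-maximum crossings are located by linear interpolation
    between the neighbouring samples.
    """
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                  # simple baseline correction
    half = p.max() / 2.0
    idx = np.where(p >= half)[0]     # samples at or above half maximum
    left, right = idx[0], idx[-1]

    def cross(i0, i1):
        # sub-sample position where the profile crosses `half`
        # on the segment from sample i0 to sample i1
        return i0 + (half - p[i0]) / (p[i1] - p[i0]) * (i1 - i0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * spacing
```

For a Gaussian profile the returned width is close to the analytic value 2√(2 ln 2) σ ≈ 2.355 σ, which is a convenient sanity check before applying the function to measured bead profiles.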
Observed images and images restored by the proposed algorithm are shown in Figure 2. The images were normalized by dividing by the maximum intensity. Images observed along the transverse axis and the optical axis are shown in Figures 2(a) and (d), respectively. Images with a clear spherical shape, restored from the ambiguous images in Figures 2(a) and (d), are shown in Figures 2(b) and (e), respectively. Intensity profiles along the centre line (dotted line) in Figures 2(a) and (b) are plotted in Figure 2(c), in which the horizontal axis represents the position along the transverse axis. Blue and red lines depict the intensity of the observed and restored images, respectively. As can be seen from Figure 2(c), the border between the shell of the bead and the centre of the hollow sphere is clearly distinguishable. The relative contrast was calculated from the transverse intensity profiles in Figure 2(c). The axial intensity profiles shown in Figure 2(f) show the same tendency as those in Figure 2(c).
It is apparent from Figures 2(a) and (d) that the observed image is especially blurred along the optical axis in comparison with the transverse axis. As shown by the restored image and the intensity profile in Figures 2(e) and (f), respectively, the proposed algorithm clearly removed the blur along the optical axis. This result demonstrates that the elongation phenomenon was effectively suppressed. (The supplemental video shows the restoration process.)
For quantitative comparison, the bead diameter error and relative contrast obtained after applying previous deconvolution methods to the images are listed in Table 2. As previously mentioned, the bead diameter was calculated as the FWHM.
Parameter values of the observed image are presented in the 'Raw data' column. From the FWHM error values of the raw data, it is clear that the blur was far more severe along the optical axis than along the transverse axis. The closer the FWHM error of a deconvolution result is to zero, the better the performance. As shown in Table 2, the axial FWHM error given by the proposed algorithm was superior to those given by the other algorithms, being closest to zero. This is because all of the others assumed depth-invariant PSFs; this result thus indicates the importance of applying depth-variant PSFs. The error in the axial FWHM value given by the proposed algorithm is 151 nm, which is less than the voxel size along the optical axis (160 nm). Although the error in the transverse FWHM value given by the proposed algorithm is 155 nm, equivalent to 2.34 pixels on the transverse axis, the proposed algorithm still gives the best transverse FWHM value. Moreover, the relative contrast given by the proposed algorithm is also superior to those given by the other algorithms: the relative contrast of the proposed algorithm is 97%, whereas those of the other algorithms do not surpass 90%.
Discussion
This study was undertaken to design a deconvolution algorithm for 3D WFM. The proposed method removes axial blur effectively and solves the elongation problem via accurate PSF estimation and depth-variant image restoration. The proposed algorithm estimates a parameterized PSF reflecting the actual imaging conditions from the observed image and generates depth-variant PSFs by controlling the depth parameter. A depth-variant image restoration algorithm, accelerated by vector extrapolation, was implemented. Results of the C. elegans embryo cell and fluorescent bead experiments show that the proposed algorithm removes axial blur that could not be removed by algorithms developed in previous studies. Moreover, to compare the quantitative performance of our algorithm with existing software and algorithms, we used the open dataset of a 2500 nm hollow fluorescent bead. The quantitative performance values (diameter error and relative contrast) given by the proposed algorithm are superior to those given by the software packages used in this study. These findings suggest that 3D WFM images should be restored by depth-variant deconvolution, and they imply that a PSF estimated from an observation is more accurate than a pre-measured PSF.
The bead used in this experiment is relatively thin, about the size of a bacterium. For very deep specimens, generating datasets of objects thicker than 10 μm would be worthwhile for 3D deconvolution of WFM. Other possible directions for future work include a fast algorithm for depth-variant image restoration, x–y–z-variant asymmetric PSF modelling, and extending the proposed algorithm to other applications. The execution time of the proposed algorithm is discussed in the Results section, yet the algorithm does not operate in minutes. In our algorithm, the simplex method for PSF parameter fitting and the depth-variant convolution operator account for most of the processing time. A faster parameter-fitting method and more efficient computation, such as distributed processing for the depth-variant convolution operator, would produce a faster deconvolution method. According to the results of the experiment with fluorescent beads, the restored image has the shape of an asymmetric sphere. In this study, however, the PSF was assumed to be x–y symmetric; an x–y–z-asymmetric PSF would be a next task to address the distorted result. It is also expected that not only z-variant but also x–y-variant deconvolution would express the inhomogeneity of the specimen and improve the accuracy of the deconvolution result. Furthermore, the proposed algorithm is also applicable to other image models with a space-variant PSF, such as astronomical images.
Methods
The acquired 3D image g can be modelled by a 3D convolution between a 3D depth-variant PSF h and the true object f under a noise model n:
where p_{o} = {x_{o},y_{o},z_{o}} and r_{o} = {x_{o},y_{o}} denote 3D and 2D object positions in the object domain O, respectively, and p = {x,y,z} and r = {x,y} are 3D and 2D image positions in the image domain I. In this paper, the true object function describes the object in air and does not depict the image of the object in the specimen layer; the PSF includes the object elongation effect due to the refractive-index mismatch. Since WFM images are taken in a dark room, the noise model of the observed images follows a Poisson distribution^{15}. As seen in eq. (1), the PSF h varies according to the positions of the object p_{o} and the image p. Since this paper ignores insignificant inhomogeneities in the specimen layer^{16}, the PSF variance along the x and y axes (within one depth) is ignored. The aim of this study is to estimate the true object f from the acquired image g.
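Equation (1) itself did not survive extraction. Using the symbols defined above, the depth-variant Poisson imaging model it describes can be sketched in the standard form (the exact notation of the original equation is assumed):

```latex
g(\mathbf{p}) \sim \mathrm{Poisson}\!\left( \sum_{\mathbf{p}_o \in O} h\!\left(\mathbf{r}-\mathbf{r}_o,\, z;\, z_o\right) f(\mathbf{p}_o) \right), \qquad \mathbf{p} \in I,
```

that is, each voxel of the acquired image is a Poisson realization of the depth-variant convolution of the true object with the PSF.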
The proposed algorithm is composed of the following three steps: (1) estimation of a depth-invariant PSF, (2) generation of depth-variant PSFs and (3) depth-variant image restoration.
In step 1, a single non-parameterized PSF h_{step1}(p) for the overall region is estimated. For constructing depth-variant PSFs, the non-parametric h_{step1}(p) is converted to the parametric PSF model h_{step2}(r−r_{o},z;z_{o}). Controlling the depth parameter z_{o} makes it possible to obtain depth-variant PSFs. Depth-variant image restoration, accelerated by vector extrapolation, is then implemented^{18}.
Step 1. Estimation of depth-invariant PSF
In step 1, an initial PSF is estimated first. Before the procedure for estimating the PSF is explained, the method for generating the initial PSF and its specific settings are explained. The accuracy of the estimated PSF depends on the initial PSF. To generate the initial PSF, the Gibson and Lanni PSF model, which is based on Kirchhoff's integral formula and is one of the most widely used PSF models for WFM, was applied^{16}. This model generates a 3D WFM PSF from optical parameters, namely refractive indices and optical distances, which are determined by analyzing the intensity profile and the objective lens information. The Gibson and Lanni PSF model is given as
where k_{0} denotes the vacuum wave number, NA is the numerical aperture, and the phase term represents the optical path difference (OPD) between the design and actual conditions. J_{0} denotes the zeroth-order Bessel function of the first kind.
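The Gibson and Lanni equation of eq. (2) is likewise missing from the extracted text. In its commonly published form it reads (a sketch only; Λ denoting the OPD term and C a normalization constant are assumed notation):

```latex
h(r, z; z_o) = \left| C \int_{0}^{1} J_0\!\left(k_0\, \mathrm{NA}\, r\, \rho\right) \exp\!\left(i\, k_0\, \Lambda(\rho, z; z_o)\right) \rho \, \mathrm{d}\rho \right|^{2},
```

an integral over the normalized aperture coordinate ρ, whose squared modulus gives the diffraction intensity pattern.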
A schematic of the optical path in a WFM is shown in Figure 3. The OPD causes spherical aberration, which is modelled as^{19}
where n_{s} and n_{i} represent the refractive indices of the specimen and the immersion layer, respectively. Since the refractive indices of internal cellular components (1.33–1.37) are usually similar to that of water, the initial n_{s} is set to the refractive index of water^{17}. If other types of samples, such as a glycerol solution, are used, the refractive index for the initial PSF should be changed accordingly. Meanwhile, n_{i} depends on the composition of the immersion layer; when an oil-immersion objective is used, n_{i} is taken as the refractive index of oil.
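Equation (3) is also missing from the extracted text. Keeping only the defocus and specimen-depth terms of the Gibson–Lanni optical path difference, a commonly used form consistent with the n_{s} and n_{i} defined above is (a sketch of ours; coverslip terms omitted):

```latex
\Lambda(\rho, z; z_o) = z\, n_i \sqrt{1 - \left(\frac{\mathrm{NA}\,\rho}{n_i}\right)^{2}} + z_o \left( n_s \sqrt{1 - \left(\frac{\mathrm{NA}\,\rho}{n_s}\right)^{2}} - n_i \sqrt{1 - \left(\frac{\mathrm{NA}\,\rho}{n_i}\right)^{2}} \right),
```

in which the second term, proportional to the object depth z_{o}, vanishes when n_{s} = n_{i} and grows with the index mismatch, reproducing the depth-dependent spherical aberration described in the text.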
The unknown parameter z_{o} denotes the position of the object on the z axis. The initial z_{o} is calculated from the intensity profile of the captured image. For ease of understanding, the setting of parameter z_{o} is depicted in Figure 3. The object part is defined as the set of normalized intensities greater than (min(g(z))+max(g(z)))/2 at the origin of the x and y axes. z_{o} is then set to the central position z_{c} of the object part, with the lowest position of the object part taken as z_{o} = 0. The initial PSF, h(r−r_{o},z;z_{o} = z_{c},n_{s}), is then generated by using Equations (2) and (3).
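This initialization can be sketched as follows, assuming (z, y, x) array indexing and interpreting "the origin of the x and y axes" as the lateral centre of the stack (both are assumptions of ours, and the function name is illustrative):

```python
import numpy as np

def initial_depth(g, dz):
    """Rough central depth z_c of the object from the axial intensity
    profile, following the thresholding rule in the text.

    g  : 3-D image array indexed as (z, y, x)
    dz : axial voxel size (result is in the same length unit)
    """
    # axial intensity profile through the lateral centre of the stack
    profile = g[:, g.shape[1] // 2, g.shape[2] // 2].astype(float)
    threshold = (profile.min() + profile.max()) / 2.0
    object_part = np.where(profile >= threshold)[0]
    z_low, z_high = object_part[0], object_part[-1]
    # z_o = 0 at the lowest object plane; z_c is the centre of the part
    return (z_high - z_low) / 2.0 * dz
```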
After the initial PSF is generated, a single PSF for the overall region is estimated. In this step, a non-parameterized, image-based PSF model is used, while the initial PSF is derived from the parameterized equation. This is because non-parameterized PSF estimation is quicker than parameterized PSF estimation.
The equation for finding the true object and the PSF that maximize the conditional probability of the observed image is given as
Since a WFM image follows a Poisson distribution, an objective function can be expressed as
Owing to the difficulty of differentiating eq. (5), the problem is changed from maximizing eq. (5) to minimizing the following negative log-likelihood function:^{20}
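The negative log-likelihood of eq. (6) is absent from the extracted text; for Poisson-distributed data it takes the standard form (up to terms independent of f and h; the symbol J is our own):

```latex
J(f, h) = \sum_{\mathbf{p} \in I} \left[ (h \ast f)(\mathbf{p}) - g(\mathbf{p}) \ln (h \ast f)(\mathbf{p}) \right],
```

where ∗ denotes the (depth-variant) convolution of eq. (1). Minimizing J over f and h is equivalent to maximizing the Poisson likelihood of the observed image.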
After eq. (6) is differentiated with respect to f and h, the derivatives are set to zero to yield the following equations^{21}:
where k indicates the iteration number, and h_{mirror}(p) = h(−p) and f_{mirror}(p) = f(−p) represent the mirrored PSF and true object, respectively. This equation is called the blind Richardson–Lucy (RL) algorithm, which is often used for deconvolution of data with a Poisson distribution. The blind RL algorithm iteratively estimates the true object f and the non-parameterized PSF h simultaneously from the acquired image g and the initial PSF, h(r−r_{o},z;z_{o} = z_{c},n_{s}). The initial f is the acquired image g. In this step, however, the blind algorithm is utilized only for estimating the PSF. The estimated PSF, h_{step1}(p), is considered to be the actual PSF corresponding to the centre of the object.
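A minimal sketch of one such alternating update is given below, assuming for simplicity that the image, object and PSF arrays share the same shape and using FFT-based convolution (the helper name, the small epsilon and the renormalization of h are our own choices):

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_rl_step(g, f, h):
    """One iteration of the blind Richardson-Lucy update: the object
    and the PSF estimates are refined alternately by multiplicative
    corrections derived from the ratio of the data to the current
    model prediction (correlation implemented as convolution with
    the mirrored array)."""
    eps = 1e-12  # guards against division by zero

    # update the object estimate with the current PSF
    ratio = g / (fftconvolve(f, h, mode="same") + eps)
    f = f * fftconvolve(ratio, h[::-1, ::-1, ::-1], mode="same")

    # update the PSF estimate with the refreshed object
    ratio = g / (fftconvolve(f, h, mode="same") + eps)
    h = h * fftconvolve(ratio, f[::-1, ::-1, ::-1], mode="same")
    h = np.clip(h, 0, None)
    h /= h.sum()             # a PSF integrates to one
    return f, h
```

In the paper's pipeline only the PSF estimate produced by these iterations is kept for step 2; the object estimate is discarded and restored properly in step 3.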
Step 2. Generation of depth-variant PSF
To construct depth-variant PSFs from a non-parameterized model, it would be necessary to estimate a PSF for each depth. That estimation, however, is difficult and computationally expensive. If the PSF is converted to a parameterized model, depth-variant PSFs can be generated efficiently by controlling the parameter z_{o}.
To do so, it is necessary to estimate the z_{o} and n_{s} of Equation (2) that minimize the negative log-likelihood of the given h_{step1}(p) under a Poisson distribution.
Equation (8) is minimized by the simplex method, a simple and fast mathematical optimization^{22}, and is iterated until convergence. A parameterized PSF that reflects the position and the refractive index of the specimen can then be obtained.
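This fitting step can be sketched with SciPy's Nelder–Mead implementation of the simplex method. The `model` callback stands in for the Gibson and Lanni evaluation of eqs. (2)–(3), which is not reproduced here; the function name and tolerances are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fit_psf_parameters(h_step1, model, x0):
    """Fit parametric PSF parameters (e.g. z_o and n_s) to the
    non-parametric estimate from step 1 by minimising the Poisson
    negative log-likelihood with the Nelder-Mead simplex method.

    `model(params)` must return a PSF array with the same shape as
    `h_step1` (in the paper this would evaluate eqs. (2)-(3)).
    """
    eps = 1e-12

    def neg_log_likelihood(params):
        h_model = model(params) + eps
        # Poisson NLL up to terms independent of the parameters
        return float(np.sum(h_model - h_step1 * np.log(h_model)))

    result = minimize(neg_log_likelihood, x0, method="Nelder-Mead",
                      options={"xatol": 1e-4, "fatol": 1e-6})
    return result.x
```

Nelder–Mead is derivative-free, which suits this objective: the parametric PSF is defined through an integral whose gradient with respect to z_{o} and n_{s} is awkward to obtain analytically.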
Specific parameter settings and parameter curves for the above-described experiments with the fluorescent bead and the C. elegans embryo cell are described in the following. Since the datasets were taken with an oil-immersion lens, the refractive index of the immersion layer is set to n_{i} = 1.518. Curves of parameter z_{o} during PSF fitting are shown in Figure 4. In our bead and C. elegans embryo cell experiments, the refractive index parameter n_{s} showed no variation. It can be seen from the figure that the parameter-fitting procedure needs only a few iterations and that the parameter curves all converge. In our experiments, the iteration was stopped if the parameter did not change three times in a row.
The PSF equation, namely Equations (2) and (3), into which the fitted parameter values are substituted, becomes the actual parameterized PSF for the central depth of the object. Depth-variant PSFs are then generated by shifting the parameter z_{o} in accordance with the axial resolution of the acquired image.
Step 3. Depth-variant image restoration
In this step, a penalized depth-variant RL algorithm is used to restore the depth-variant image. An image following a Poisson distribution is more vulnerable to noise than one following a Gaussian distribution; thus, the penalized version of the RL algorithm^{23} was chosen. The penalized RL algorithm restores the image by maximizing the penalized likelihood function, defined as follows:
where the weighting coefficient is the regularization parameter. The total variation (TV) regularization constraint, which preserves edges owing to its linear penalty on differences between adjacent pixels, was set as follows:
The regularization parameters were set as and 0.1 × 10^{−3} for the cell and bead experiments, respectively.
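The isotropic total variation of a 3-D image can be computed as below; the forward-difference scheme and the replicated far boundary are our own choices, not necessarily those of the paper:

```python
import numpy as np

def total_variation(f):
    """Isotropic total variation of a 3-D image: the sum over voxels
    of the local gradient magnitude, using forward differences.
    Because the penalty is linear in the difference between adjacent
    voxels, a sharp step costs no more than a smeared one, which is
    why TV regularization smooths noise while preserving edges.
    """
    # forward differences; replicating the last slice gives a zero
    # gradient at the far boundary and keeps the output shape equal
    dz = np.diff(f, axis=0, append=f[-1:, :, :])
    dy = np.diff(f, axis=1, append=f[:, -1:, :])
    dx = np.diff(f, axis=2, append=f[:, :, -1:])
    return float(np.sum(np.sqrt(dx**2 + dy**2 + dz**2)))
```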
The total variation penalty couples each pixel in the restoration with its adjacent neighbours in such a way that a direct derivative for maximizing the penalized likelihood function is not possible^{24}. To solve this problem, most previous studies approximated the difference between adjacent pixels as the difference between the value of the current pixel and the values of the neighbouring pixels from the previous iteration^{23,7}. However, an image restored with this approximation is not accurate, since the method does not converge monotonically to the solution. The generalized expectation-maximization (GEM) algorithm was thus chosen to evaluate the derivatives indirectly by using quadratic surrogates of the regularization term^{25}. The final form of the depth-variant GEM algorithm is given as follows^{11}:
where m represents the sub-iteration index and the remaining term represents the curvature. a( f ^{(k,m)}) and b( f ^{(k,m)}) are defined as
This iterative technique, however, converges slowly toward the final result. To increase the speed of convergence, vector extrapolation is applied^{18}. The acceleration method predicts where each pixel of the image is heading from the corrections obtained in each iteration. A new point c^{k} is predicted, and the GEM algorithm is applied to it to generate the next estimate f ^{k+1} and gradient d^{k} as follows:
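A sketch of this acceleration in the style of Biggs and Andrews' vector extrapolation is given below, with `update` standing in for one un-accelerated GEM iteration; the clipping of the acceleration factor and of negative intensities are our own safeguards, not details from the paper:

```python
import numpy as np

def accelerated_iterations(update, f0, n_iter):
    """Vector-extrapolated fixed-point iteration: each new point is
    predicted from the direction of the two most recent corrections
    before the ordinary update (e.g. one GEM step) is applied.

    `update(f)` performs one un-accelerated iteration.
    """
    f = update(f0)
    d_prev = f - f0                       # previous correction d^k
    for _ in range(n_iter - 1):
        f_new = update(f)
        d = f_new - f                     # current correction
        # acceleration factor from the two most recent corrections
        alpha = np.sum(d * d_prev) / (np.sum(d_prev * d_prev) + 1e-12)
        alpha = np.clip(alpha, 0.0, 1.0)  # keep the step conservative
        # predicted point c^k: extrapolate along the latest correction
        c = f_new + alpha * (f_new - f)
        f, d_prev = np.clip(c, 0, None), d
    return f
```

On a simple contraction mapping the extrapolated sequence still converges to the same fixed point while taking larger effective steps in the early iterations, which is the behaviour exploited to speed up the GEM updates.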
Changes in the objective function, FWHM and relative contrast during the iterations of image restoration of the fluorescent bead image are shown in Figures 5(a), (b) and (c), respectively. The objective function is the negative log-likelihood calculated from Equation (9); the smaller it is, the more accurately the true object is estimated. Figure 5(a) shows that the objective function converges well. The iteration is stopped when the relative contrast and FWHM values no longer change; accordingly, the results were obtained after 87 iterations. In Figure 5(b), the axial FWHM value changes rapidly over the first ten iterations, whereas the transverse FWHM curve changes smoothly. In Figure 5(c), the relative contrast increases rapidly in the early stage, showing a similar tendency to the axial FWHM curve.
Preparation of data set
To compare the quantitative performance of the proposed algorithm with that of other algorithms, it was tested on the same data sets as those used in previous studies. The data sets consist of stacks of images of a 2.5 μm diameter fluorescent microbead and a C. elegans embryo. In the case of the C. elegans embryo cell data set, not enough z planes were taken to visualize the whole shape of the object. To prevent artefacts at the boundary, the data size was extended and a minimum filter was applied to the extended area as follows.
First, a 672 × 712 × 216 matrix was generated and the raw data of the C. elegans embryo image (672 × 712 × 104) was placed at its centre (planes 57–160 along the z axis). Intensity values for the unfilled areas were then determined by a minimum filter: for each unfilled pixel directly above or below already-determined pixels, the minimum value within the 3 × 3 neighbourhood of those determined pixels was assigned. In this way, the whole matrix was filled and could be used for the experiments. Through this procedure, enough z planes were obtained that most of the intensity along the z axis had decayed, thereby reducing artefacts at the boundary. After image restoration, the restored image was cropped back to the same size as the raw image.
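The padding procedure above can be sketched as follows, assuming (z, y, x) array indexing; growing the padding one plane at a time with a 3 × 3 minimum filter reproduces the rule described in the text (the function name is our own):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def pad_with_minimum_filter(raw, z_total):
    """Extend a (z, y, x) stack along z to `z_total` planes, filling
    each new plane from the 3x3 neighbourhood minima of the adjacent
    already-determined plane, so intensities decay smoothly toward
    the boundary instead of being cut off abruptly.
    """
    nz, ny, nx = raw.shape
    out = np.zeros((z_total, ny, nx), dtype=raw.dtype)
    z0 = (z_total - nz) // 2
    out[z0:z0 + nz] = raw                     # raw data in the centre
    # grow outward, plane by plane, below and above the raw block
    for z in range(z0 - 1, -1, -1):
        out[z] = minimum_filter(out[z + 1], size=3)
    for z in range(z0 + nz, z_total):
        out[z] = minimum_filter(out[z - 1], size=3)
    return out
```

After restoration, the result would simply be cropped back to `raw.shape`, as described in the text.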
Computational features
All procedures were carried out in MATLAB 2014a on parallel Intel Xeon E5-2680 processors (2.8 GHz) with 448 GB RAM, running Windows. The total computational time for the fluorescent bead experiment was about 265 minutes: step 1 took 5 minutes; the parameter estimation for PSF fitting and the depth-variant PSF generation in step 2 took 70 minutes and 120 minutes, respectively; and step 3, the image restoration, took 70 minutes.
References
Conchello, J. A., & Lichtman, J. W. Optical sectioning microscopy. Nature Methods. 2, 920–931 (2005).
Shaevitz, J. W., & Fletcher, D. A. Enhanced three-dimensional deconvolution microscopy using a measured depth-varying point-spread function. JOSA A. 24, 2622–2627 (2007).
Sarder, P., & Nehorai, A. Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Processing Magazine. 23, 32–45 (2006).
Joshi, S., & Miller, M. I. Maximum a posteriori estimation with Good's roughness for three-dimensional optical-sectioning microscopy. JOSA A. 10, 1078–1085 (1993).
Markham, J., & Conchello, J. A. Parametric blind deconvolution of microscopic images: further results. BiOS'98 International Biomedical Optics Symposium, International Society for Optics and Photonics, 38–49 (1998).
Markham, J., & Conchello, J. A. Fast maximum-likelihood image-restoration algorithms for three-dimensional fluorescence microscopy. JOSA A. 18, 1062–1071 (2001).
Soulez, F., Denis, L., Tourneur, Y., & Thiébaut, É. Blind deconvolution of 3D data in wide field fluorescence microscopy. 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), 1735–1738 (2012).
Schermelleh, L., Heintzmann, R., & Leonhardt, H. A guide to super-resolution fluorescence microscopy. The Journal of Cell Biology. 190, 165–175 (2010).
Preza, C., & Conchello, J. A. Depth-variant maximum-likelihood restoration for three-dimensional fluorescence microscopy. JOSA A. 21, 1593–1601 (2004).
Griffa, A., Garin, N., & Sage, D. Comparison of deconvolution software: a user point of view – part 2. GIT Imaging & Microscopy. 12, 41–43 (2010).
Kim, J., An, S., Ahn, S., & Kim, B. Depth-variant deconvolution of 3D widefield fluorescence microscopy using the penalized maximum likelihood estimation method. Optics Express. 21, 27668–27681 (2013).
Maalouf, E. Contribution to fluorescence microscopy, 3D thick samples deconvolution and depth-variant PSF (Doctoral dissertation, Université de Haute Alsace-Mulhouse, 2010).
Markham, J., & Conchello, J. A. Parametric blind deconvolution: a robust method for the simultaneous estimation of image and blur. JOSA A. 16, 2377–2391 (1999).
Kenig, T., Kam, Z., & Feuer, A. Blind image deconvolution using machine learning for three-dimensional microscopy. IEEE Transactions on Pattern Analysis and Machine Intelligence. 32, 2191–2204 (2010).
van Kempen, G. M., van der Voort, H. T., Bauman, J. G., & Strasters, K. C. Comparing maximum likelihood estimation and constrained Tikhonov-Miller restoration. IEEE Engineering in Medicine and Biology Magazine. 15, 76–83 (1996).
Frisken Gibson, S., & Lanni, F. Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy. JOSA A. 8, 1601–1613 (1991).
Axelrod, D. I., & Davidson, M. W. Introduction and Theoretical Aspects. Olympus Microscopy Resource Center (2010). Available at: http://www.olympusmicro.com/primer/techniques/fluorescence/tirf/tirfintro.html (Accessed: 14th January 2015).
Biggs, D. S., & Andrews, M. Acceleration of iterative image restoration algorithms. Applied Optics. 36, 1766–1775 (1997).
Aguet, F., Van De Ville, D., & Unser, M. An accurate PSF model with few parameters for axially shift-variant deconvolution. 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI), 157–160 (2008).
Snyder, D., & Miller, M. I. Random Point Processes in Time and Space (Springer-Verlag, New York, NY, 1991).
Richardson, W. H. Bayesian-based iterative method of image restoration. JOSA. 62, 55–59 (1972).
Lagarias, J. C., Reeds, J. A., Wright, M. H., & Wright, P. E. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM Journal on optimization, 9, 112–147 (1998).
Conchello, J. A., & McNally, J. G. Fast regularization technique for expectation maximization algorithm for optical sectioning microscopy. Electronic Imaging: Science & Technology. International Society for Optics and Photonics, 199–208 (1996, April).
Huber, P. Robust Statistics (Wiley, 1974).
Author information
Contributions
B.K. designed and executed the experiments and wrote the manuscript. T.N. supervised the work and edited the manuscript. All authors reviewed and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing financial interests.
Electronic supplementary material
Supplementary Information
Supplemental Video S1
Supplemental Video S2
Supplemental Video S3
Supplemental Video S4
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
About this article
Cite this article
Kim, B., Naemura, T. Blind Depth-variant Deconvolution of 3D Data in Wide-field Fluorescence Microscopy. Sci Rep 5, 9894 (2015). https://doi.org/10.1038/srep09894
Further reading
Tutorial: avoiding and correcting sample-induced spherical aberration artifacts in 3D fluorescence microscopy. Nature Protocols (2020).
Deconvolution of light sheet microscopy recordings. Scientific Reports (2019).
A convex 3D deconvolution algorithm for low photon count fluorescence imaging. Scientific Reports (2018).
Snapshot Hyperspectral Volumetric Microscopy. Scientific Reports (2016).
Separation of ballistic and diffusive fluorescence photons in confocal Light-Sheet Microscopy of Arabidopsis roots. Scientific Reports (2016).