Abstract
We developed deconvolution software for light sheet microscopy that uses a theoretical point spread function derived from a model of image formation in a light sheet microscope. We show that this approach provides excellent blur reduction and enhancement of fine image details for image stacks recorded with low magnification objectives of relatively high NA and large field numbers, such as 2x NA 0.14 FN 22 or 4x NA 0.28 FN 22. For these objectives, which are widely used in light sheet microscopy, point spread functions that are sufficiently resolved for deconvolution are difficult to measure, and the results obtained with common deconvolution software developed for confocal microscopy are usually poor. We demonstrate that the deconvolutions computed using our point spread function model are equivalent to those obtained using a measured point spread function for a 10x objective with NA 0.3 and for a 20x objective with NA 0.45.
Introduction
In recent years, light sheet microscopy has become increasingly common and novel variations of this technique have been established^{1,2}. In all these modifications, a thin sheet of laser light illuminates the specimen perpendicularly to the observation pathway of the microscope, thereby restricting fluorescence excitation to a thin layer within the specimen. This separation of the illumination and observation pathways results in a pronounced optical sectioning effect^{3,4,5} (Fig. 1a). A major advantage of a light sheet microscope is its excellent axial resolution which, unlike in a confocal microscope, can also be obtained with very low magnification objectives of relatively high NA (e.g. 2x NA 0.14) and large fields of view ranging up to more than a centimeter^{3,6}.
In parallel, the ongoing increase in memory and computing speed of personal computers has made 3D deconvolution a well-established lab tool for deblurring microscopy data. Presently, a few commercial deconvolution software packages (e.g. Huygens, Scientific Volume Imaging, Netherlands; AutoQuant, Media Cybernetics, USA) and public domain tools (e.g. DeconvolutionLab2^{7} or CLARITY^{8}) are available, making 3D deconvolution accessible also to the non-specialist in computational image processing.
Standard software using a theoretical PSF derived for widefield or confocal microscopy provides poor results when used for deconvolving light sheet microscopy recordings obtained with low magnification objectives. We tried out DeconvolutionLab2^{7} with two different confocal PSFs modeled with the PSFgenerator^{9} plugin for ImageJ^{10} for deconvolving a light sheet microscopy stack comprising 667 optical slices obtained with a 2.5x objective (NA 0.12). We found that the results were poor, or even worse than the original image stack (Fig. S1).
Nevertheless, the amount of software available for deconvolving light sheet microscopy data is still small. Preibisch et al.^{11} and Wu et al.^{12} developed software that can be used for deconvolving light sheet microscopy data obtained from different directions of view (multi-view combining). For the commercial products Huygens (SVI, Netherlands) and AutoQuant (Media Cybernetics, USA), add-on modules for deconvolving SPIM^{5} data exist. However, these modules rely on proprietary source code and must be purchased as optional extensions costing several thousand euros, making them unaffordable for many labs. To our knowledge, presently no non-commercial deconvolution tool exists that works well for light sheet microscopy stacks obtained with low magnification objectives (e.g. 2x NA 0.14 or 4x NA 0.28). Here, we present an approach that allows deconvolving such data sets in excellent quality without the need for PSF measurements. We present a free software tool that can process light sheet microscopy stacks of virtually unlimited size by splitting large data sets into blocks before deconvolution and automatically stitching them afterwards. Because it can use multiple CPU cores and, optionally, GPU acceleration, the program is fast, processing a stack of 1392 × 1040 × 699 pixels in less than a quarter of an hour on an NVidia P6000 graphics board (NVidia, Germany).
Light sheet microscopy combined with objectives of low magnification but relatively high NA and large field numbers (e.g. XL Fluor 2x NA 0.14 FN 22, or XL Fluor 4x NA 0.28 FN 22, Olympus, Germany) can provide detailed images of samples of more than 1 cm size with resolutions of a few micrometers^{3}. However, for these high-end objectives, which bridge the gap between microscopy and macrophotography, accurate point spread function (PSF) measurements that are sufficiently resolved to be applicable for deconvolution are difficult to obtain by recording fluorescent beads. This is mainly due to the resolution limits of microscope cameras: considering Rayleigh's resolution criterion d = 0.61·λ/NA, and further considering that according to the Nyquist theorem the camera sampling frequency should be at least twice the highest spatial frequency resolved in the image, the camera resolution (in megapixels) required to make full use of an objective can be estimated as:
\({n}_{x,y}=\frac{2\,NA\cdot {s}_{x,y}}{0.61\,\lambda \,V},\,\,\,P=\frac{{n}_{x}\cdot {n}_{y}}{{10}^{6}}\)   (1)
(P: required number of megapixels, NA: numerical aperture of the objective, λ: emission wavelength, V: total magnification of the microscope, n_{x,y}: number of pixels in x or y direction, s_{x,y}: camera chip size in x or y direction).
According to Eq. (1), a camera resolution of at least 50 megapixels would be required to make full use of the resolving power of e.g. a 4x objective with NA 0.28 (assuming a typical camera chip size of 15 × 15 mm^{2}, λ = 488 nm, no post-magnification). Unfortunately, this is much more than present microscopy cameras provide. As a consequence, recordings of subresolution fluorescent beads are very poorly resolved, reducing the lateral resolution of the measured PSF to a very limited number of pixels, which is not sufficient for deconvolution. In principle, undersampling could be avoided by applying high optical post-magnifications (e.g. 4x or more) during bead recording. However, this approach would severely reduce the light efficiency of the microscope, leading to very long illumination times and poor signal-to-noise ratios (SNR). Furthermore, the effective PSF of the microscope would be altered by the post-magnification itself. Light scattering introduced by the imaging medium (usually agar or gelatin) and the refractive index mismatch between the fluorescent beads and the embedding medium are further factors that limit the accuracy of experimental PSF measurements in practice^{13}. While the former drawback can be limited by keeping the concentration of the embedding medium (usually 1–1.5% for agar or about 4% for gelatin) and the concentration of the fluorescent particles as low as possible, the latter is more difficult to prevent due to the limited set of commercially available fluorescent beads and mounting media. Cole et al.^{14} recommend using the resin-based embedding media ProLong Gold or Cytoseal (both available from ThermoFisher Scientific, Germany) instead of agar, which have refractive indices of 1.46 and 1.48, respectively, and therefore better match the refractive index of available fluorescent microspheres. A theoretical approach describing the PSF in a complex light scattering medium has been developed by Boniface et al.^{13}.
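The estimate above can be reproduced with a few lines of code. The function below is an illustrative sketch of the reasoning behind Eq. (1) (Rayleigh resolution, Nyquist sampling, chip size); the function name and parameters are ours, not part of the software described here.

```python
# Estimate of the camera resolution (in megapixels) needed to sample the
# Rayleigh resolution limit d = 0.61*lambda/NA at the Nyquist rate.
# Illustrative sketch; numbers below are the example from the text
# (4x objective, NA 0.28, 15 x 15 mm chip, 488 nm, no post-magnification).

def required_megapixels(na, wavelength, magnification, chip_x, chip_y):
    d = 0.61 * wavelength / na            # Rayleigh resolution in the sample
    pixel_pitch = magnification * d / 2   # Nyquist: 2 pixels per resolved distance
    n_x = chip_x / pixel_pitch            # pixels needed along x
    n_y = chip_y / pixel_pitch            # pixels needed along y
    return n_x * n_y / 1e6

p = required_megapixels(na=0.28, wavelength=488e-9, magnification=4,
                        chip_x=15e-3, chip_y=15e-3)
print(round(p))  # prints 50
```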
Our approach avoids the difficulties of PSF measurements by utilizing a computed PSF derived from an optical model of image formation in a light sheet microscope. This makes it possible to tabulate the PSF at any required resolution. We show that this approach provides excellent deconvolutions for light sheet microscopy stacks recorded within a magnification range from 1x (e.g. 2x NA 0.14 with 0.5x post-magnification) up to 20x (e.g. 20x NA 0.45, no post-magnification), without time-consuming and error-prone PSF measurements.
Results
Software
The deconvolution software for light sheet microscopy that accompanies this paper was written in MATLAB (The MathWorks, Germany). The MATLAB script, as well as a compiled standalone version that does not require a MATLAB installation (64-bit Windows only, at least 8 GB RAM required) and includes a simple graphical user interface, is available from the Supplemental Materials.
Validation of the PSF model
We tested our PSF model by comparing modelled PSFs with PSFs measured using a 10x objective (NA 0.3, UPlanFLN, Olympus, Austria) and a 20x objective (NA 0.45, LUCPLFLN, Olympus, Austria). Both objectives were custom corrected for a refractive index of 1.45. Details of the PSF measurements are described in the Supplemental Materials. For both the calculated and the measured PSFs, intensity profiles were determined and plotted into a common diagram for visual comparison. The close agreement between theory and experiment is obvious for both objectives in the lateral as well as in the axial direction (Fig. 2).
Comparison of deconvolutions obtained with a modelled versus a measured PSF
By visual inspection we could not find any differences in the quality of the deconvolutions obtained with a computed PSF and with a measured PSF, either for the 10x objective (Fig. 3a) or for the 20x objective (Fig. 3b). We did not perform such comparisons for the 2x (NA 0.14) and 4x (NA 0.28) objectives, owing to the difficulties, addressed above, of obtaining good-quality PSF measurements with these objectives.
We quantitatively compared the raw image stacks, the image stacks obtained after deconvolving with a measured PSF, and the image stacks obtained after deconvolving with a calculated PSF by means of their mean squared deviation (MSD):
\(MSD=\frac{1}{N}\sum _{i=1}^{N}{({x}_{1,i}-{x}_{2,i})}^{2}\)   (2)
where x_{1} represents the voxels in the first data set, x_{2} represents the voxels in the second data set, and N is the number of voxels.
As depicted in Fig. 3a4,b4, the MSD calculated between the stacks deblurred with the measured and with the computed PSF is much lower than the deviations from the original stacks. This strongly suggests that the results obtained using a measured PSF and those obtained using a calculated PSF are highly similar relative to the original stack, as already evident by visual inspection. The MSD values plotted in Fig. 3a4,b4 are normalized to the MSD calculated between the original data set and the respective data set deconvolved using a measured PSF.
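The MSD comparison can be sketched in a few lines; a minimal Python illustration with hypothetical function names, assuming the stacks are flattened into equal-length sequences of voxel intensities.

```python
# Mean squared deviation (MSD) between two image stacks, as used for the
# quantitative comparison; stacks are assumed flattened to equal length.

def msd(x1, x2):
    n = len(x1)
    return sum((a - b) ** 2 for a, b in zip(x1, x2)) / n

# Normalization as in Fig. 3a4,b4: values are expressed relative to the MSD
# between the original stack and the stack deconvolved with a measured PSF.
def normalized_msd(x1, x2, reference_msd):
    return msd(x1, x2) / reference_msd
```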
Deconvolution performance in the presence of noise
To improve deconvolution of image stacks with low SNR, we integrated flux-preserving regularization^{15} as an option in the deconvolution algorithm. To analyze its effect, we generated a noisy input image stack by adding computer-generated Gaussian noise (\(\overline{x}\) = 0, σ^{2} = 0.01) to the data set underlying Figs. 4d and 5a. After deconvolution without regularization (damping) we observed, as expected, a strong amplification of the background noise masking any fine details of the image (Fig. 5b). However, repeating the deconvolution with 5% damping suppressed the noise amplification to a tolerable degree. While the level of detail in the image deconvolved without regularization is even lower than in the original image, a distinct quality improvement is obvious in the image that was deconvolved using regularization (Fig. 5c).
Processing speed
The required deconvolution times strongly depend on the available hardware. The FFT calculations make use of multiple processor cores. On an older PC equipped with two quad-core processors (Intel Xeon E5520) running at 2.3 GHz and 48 GB RAM, deconvolution of the data stack (1392 × 1024 × 667 voxels) belonging to the mouse embryo shown in Fig. 4a required 66 minutes. Because of the limited amount of RAM, the stack was automatically split into 2 × 2 × 3 equally sized blocks. On a high-end machine equipped with an 18-core processor (Intel Xeon Gold 6140) at 3.4 GHz and 256 GB RAM, the same calculation took 22 minutes; splitting the data into blocks was not required.
We further implemented an option to perform the convolution operations on the GPU (requiring an NVidia graphics board supporting at least CUDA compute capability 3.0). With GPU processing, the deconvolution time for the mouse embryo (1392 × 1040 × 669 voxels) was reduced to about 10 min on an NVidia P6000 graphics board with 24 GB RAM (NVidia, Germany).
Example deconvolutions
We carefully tested our deconvolution program with light sheet microscopy stacks obtained from different chemically cleared biological samples, recorded with magnifications ranging from 1x (2x objective NA 0.14, 0.5x post-magnification) up to 20x (20x objective NA 0.45, no post-magnification). Details of sample preparation are provided in the Supplemental Materials. For all our test samples, we observed a distinct improvement in sharpness, as well as in the level of detail (Fig. 4). Especially for the lowest magnifications (e.g. 2x NA 0.14 with 0.5x post-magnification, Fig. 4a), we were not able to obtain comparably detailed deconvolutions with any other deconvolution software tested.
Discussion
In recent years, light sheet microscopy has evolved into a valuable tool for life science imaging^{2}. Major strengths of this fluorescence microscopy technique are its high imaging speed, excellent signal-to-noise ratio, and low levels of photobleaching and phototoxic effects^{16}. In contrast to confocal microscopy, light sheet microscopy provides good optical sectioning even with objectives of very low magnification at the threshold between microscopy and macrophotography. This novel possibility of performing optical sectioning microscopy in the very low magnification range makes light sheet microscopy a method of choice for numerous neurobiological and developmental studies on large samples that can be chemically cleared, such as whole mouse brains, embryos, or even whole transparent mice^{17}. Our deconvolution software provides an easily applicable way to further enhance the quality of 3D reconstructions obtained from such samples.
A light sheet microscope uses the fluorescence light emitted by a specimen an order of magnitude more efficiently than a confocal microscope, since it does not need a pinhole that blocks most of the photons before they reach the camera. This allows the use of standard CCD or CMOS cameras instead of the photomultipliers common in confocal microscopy. Therefore, the SNR of light sheet microscopy recordings is generally more than an order of magnitude higher than in confocal microscopy^{16}. This makes light sheet microscopy data sets ideally suited for post-processing by deconvolution, since the major drawback of deconvolution, amplification of background noise, is a much less severe problem than in confocal microscopy. This corresponds well to our finding that for the deconvolutions presented in this paper no regularization of the Richardson-Lucy algorithm was necessary to achieve the best possible results.
The PSF of a light sheet microscope has previously been described as the element-wise product of an illumination PSF and a detection PSF^{5,6,18}. Wu et al.^{12} used a related approach for image fusion of multi-view images captured by widefield or light sheet microscopy. However, differently from us, they multiplied the detection PSF with the Gaussian illumination light sheet profile instead of the illumination PSF^{12}. We tried out this approach and found that the modeled PSFs were less similar to our measured PSFs for the 10x and the 20x objective, and that the obtained deconvolutions were of lower quality (Fig. S2).
Attempts to perform deconvolution with a space-variant PSF have been made by several research groups^{19,20,21}. We also tried to account for the spatial variability of the light sheet, and thus of the PSF, along the propagation axis in our PSF model. Surprisingly, aside from markedly reducing the deconvolution speed and adding model parameters that cannot straightforwardly be determined experimentally, the observed improvement turned out to be very limited. We therefore decided to treat the light sheet as approximately uniform for better speed and operability of our deconvolution tool in practice. However, a quantification of the MSD between original and deconvolved image in six equally sized, stripe-shaped regions along the light sheet propagation axis suggests a distinct dependency of the deconvolution quality on the lateral position (Fig. S3). Further progress in light sheet generation should reduce this drawback by providing more homogeneous light sheets with much longer Rayleigh ranges than are possible with the standard light sheet microscope used here, which comprises a slit aperture in combination with a single cylindrical lens^{22}.
Rolling ball background subtraction^{23} and contrast-limited adaptive histogram equalization (CLAHE)^{24} are alternative methods for enhancing microscopy images. Advantages of both approaches are their significantly lower demands on computing power and memory compared to deconvolution, and the fact that no information about the PSF or the microscope is required. We compared both approaches with our deconvolution method (Fig. S4). We found that deconvolution provides better sharpening and image contrast than both techniques. However, rolling ball background subtraction may be useful for pre-processing image stacks with high background intensities, and contrast-limited adaptive histogram equalization may be useful for lightly post-processing deconvolved image stacks.
We developed our deconvolution program for deconvolving light sheet microscopy recordings obtained with low magnification objectives, ranging from 1x up to about 20x, without the need for time-consuming and error-prone PSF measurements. We showed that for a 10x (NA 0.3) and for a 20x objective (NA 0.45), the deconvolutions obtained with our modelled PSF are as good as those obtained using a measured PSF (Fig. 3). For a standard light sheet microscopy setup utilizing a slit aperture of up to 16 mm width and a cylindrical lens with 80 mm focal length, the minimal theoretical beam waist is about 3.7 µm (FWHM) at 488 nm wavelength. Since the axial resolution of a 20x objective (NA 0.45) at 520 nm emission wavelength is about 8–9 µm, a 20x magnification is close to the upper limit for which a reasonable optical sectioning effect can be expected for this type of setup. For higher magnification objectives (e.g. 40x or higher) with numerical apertures above 0.5, used in combination with a more focused light sheet (e.g. generated by a cylindrical lens with a smaller focal length), an accurate measurement of the PSF by recording subresolution fluorescent beads may be preferable. Since for these objectives the camera sampling rate is much better matched to the field of view, such PSF measurements can be performed in good quality with moderate effort. Taken together, our deconvolution tool provides a striking enhancement of detail for light sheet microscopy recordings performed with low magnification objectives.
Methods
PSF modeling
In a correctly adjusted light sheet microscope, a thin light sheet passes through the focal plane P_{0} of an objective that collects the light emitted by excited fluorochromes. During the imaging process, the positions of the light sheet and of the objective remain unchanged, while the specimen is shifted stepwise through the light sheet. If the objective (or a protection cap mounted on its tip) is immersed in a liquid-filled container holding the sample, all optical path lengths remain constant during the recording procedure (Fig. 1a).
An appropriate description of the PSF of this optical arrangement has to consider the far-field limit as well as near-field diffraction. If the input field in the input plane is E(x_{0}, y_{0}, z = 0), a 2D aperture transmitting a cylindrically symmetric field is described by E(x_{0}, y_{0}, z = 0) = E(ρ_{0}, z = 0), where \({\rho }_{0}=\sqrt{{x}_{0}^{2}+{y}_{0}^{2}}\), x_{0} = ρ_{0}Cos(φ_{0}) and y_{0} = ρ_{0}Sin(φ_{0}). The diffracted field is then described by the Huygens-Fresnel diffraction integral:
\(E(x,y,z)=\frac{{e}^{ikz}}{i\lambda z}\iint E({x}_{0},{y}_{0},0)\,{e}^{\frac{ik}{2z}({(x-{x}_{0})}^{2}+{(y-{y}_{0})}^{2})}\,d{x}_{0}\,d{y}_{0}\)   (3)
Using cylindrical coordinates Eq. (3) can be rewritten as:
Further considering that ρρ_{0}(Cos(φ)Cos(φ_{0}) + Sin(φ)Sin(φ_{0})) = ρρ_{0}Cos(φ_{0} − φ), Eq. (4) becomes:
Substituting \({\int }_{0}^{2\pi }{e}^{\frac{ik\rho {\rho }_{0}Cos({\phi }_{0}\phi )}{z}}d{\phi }_{0}\) by \(2\pi {J}_{0}(\frac{k\rho {\rho }_{0}}{z})\) gives:
which resembles the Fresnel approximation of the Kirchhoff-Fresnel diffraction. Alternatively, the field E(ρ, z) can be described in terms of the numerical aperture NA = n · Sin(θ), the refractive index n, and the aperture width a = z · Sin(θ):
By replacing the aperture size with unity, we get:
The field intensity H then is:
where \({\Delta {\rm{\rho }}}_{0}=\frac{({\rho }_{0}^{2})z}{2}{(\frac{NA}{n})}^{2}\), \(\rho =\sqrt{{x}^{2}+{y}^{2}}\) and wave number \(k=\frac{2\pi }{\lambda }\).
After normalizing the intensity to unity, we get:
Equation (10) describes the detection PSF, i.e. the light intensity distribution close to a fluorescence light emitting point source that is located in the focus of an objective with the numerical aperture NA_{Obj}.
The illumination PSF H_{Ls} of the light sheet generator system, which in its simplest form consists of a single cylindrical lens and a slit aperture mounted directly in front of it, can be modelled similarly. For this, the x and z coordinates have to be swapped in Eq. (10), since the illumination pathway is turned by 90° relative to the imaging pathway (Fig. 1b). The light intensity distribution along the y-axis is assumed to be approximately constant (y = 0). This leads to an expression for the intensity distribution of the excitation light around the focal line of the light sheet generator with numerical aperture NA_{Ls}:
In a standard light sheet microscope, where the light sheet is formed by a single cylindrical lens and an upstream slit aperture^{3}, NA_{Ls} is:
\(N{A}_{Ls}=n\cdot Sin(arcTan(\frac{w}{2f}))\)   (12)
f denotes the focal length of the cylinder lens and w is the full width of the slit aperture.
The effective PSF of the light sheet microscope can be described as the element-wise product of the detection PSF H_{det} (Eq. (10)) and the illumination PSF H_{Ls} (Eq. (11)) of the system^{5,6,18} (Fig. 1b):
\(H(x,y,z)={H}_{det}(x,y,z)\cdot {H}_{Ls}(x,z)\)   (13)
Combining Eqs (10–13) provides an expression describing the PSF of a light sheet microscope:
After rewriting using trigonometric terms, which may be preferable for implementation in a programming language that cannot handle complex exponents, Eq. (14) becomes:
x, y, z are coordinate points, J_{0} is the Bessel function of the first kind and order zero, NA_{Obj} is the numerical aperture of the objective, λ_{ex} is the excitation wavelength, λ_{em} is the emission wavelength, f is the focal length of the cylindrical lens, and w is the full width of the slit aperture located in front of the cylindrical lens.
Since according to Eq. (15) the PSFs of the light sheet and of the objective are multiplied element-wise, the shape of the light sheet predominantly determines the PSF of a light sheet microscope, as long as its thickness is small compared to the axial extent of the detection PSF^{16}. Figure 6 depicts 2D sections (y = 0) of PSFs modelled according to Eq. (15), illustrating this effect.
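The dominance of the thinner factor in the element-wise product can be illustrated numerically. The sketch below uses Gaussian profiles as stand-ins for the diffraction-integral expressions of Eq. (15), purely to show the multiplication of detection and illumination PSFs; the function names and the Gaussian approximation are ours, not the authors' model.

```python
import math

# Effective light sheet PSF as the element-wise product of a detection PSF
# and an illumination PSF. Gaussian stand-ins replace the Bessel-based
# expressions of Eq. (15) for illustration only.

def gaussian(r, sigma):
    return math.exp(-r * r / (2 * sigma * sigma))

def effective_psf(x, y, z, sigma_det_xy, sigma_det_z, sigma_ls_z):
    # detection PSF: lateral and axial Gaussian profiles
    h_det = gaussian(math.hypot(x, y), sigma_det_xy) * gaussian(z, sigma_det_z)
    # illumination PSF: light sheet assumed uniform along x and y
    h_il = gaussian(z, sigma_ls_z)
    return h_det * h_il
```

With a light sheet much thinner than the axial extent of the detection PSF, the axial profile of the product is dominated by the light sheet, which is the effect shown in Fig. 6.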
Light sheet microscopy recordings obtained with low magnification objectives of relatively high NA and large fields of view are usually severely undersampled in the xy-direction in terms of the Nyquist rate. Hence, a PSF tabulated on a 3D grid whose lateral (xy) resolution matches the back-projected camera pixel size d_{xy} would also be undersampled. However, according to Eq. (10) the detection PSF H_{det} does not depend on the objective magnification and, as pointed out above, the shape of the PSF is predominantly determined by the NA of the light sheet generator rather than by the objective, as long as the NA of the objective is not too high. Therefore, we can circumvent the undersampling problem by calculating the PSF on a virtual grid satisfying the Nyquist criterion, while virtually downscaling the image accordingly. Our deconvolution software utilizes a 3D grid with a maximal lateral spacing of \({\Delta }_{xy}=\frac{0.61\lambda }{3NA}\) for PSF calculation, which is 33% above the Nyquist rate as suggested^{14}. The lateral size of the image stack is corrected accordingly. The axial spacing Δ_{z} of the calculation grid always equals the step width d_{z} used for stepping the sample through the light sheet (Fig. 1a). The number of sample points along each axis (n_{xy} and n_{z}, respectively) is adjusted such that the calculated PSF covers the range from −FWHM to +FWHM along each axis. The full widths at half maximum FWHM_{xy} and FWHM_{z} are obtained from Eq. (15) via a bisection algorithm^{25}.
n_{xy} and n_{z}: number of PSF sample points in the xy and z directions, respectively. Δ_{xy} and Δ_{z}: distances between sample points in the xy and z directions. FWHM_{xy}: full width at half maximum in the xy-direction.
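The grid sizing described above can be sketched as follows. The rounding to an odd, center-aligned sample count is our assumption, since the explicit sizing formula is not reproduced here.

```python
import math

# Sketch of the PSF sampling-grid setup: lateral spacing is fixed at
# 0.61*lambda/(3*NA), axial spacing equals the stage step width, and the
# grid is sized to cover -FWHM..+FWHM along each axis.

def psf_grid(fwhm_xy, fwhm_z, wavelength, na, step_z):
    delta_xy = 0.61 * wavelength / (3 * na)        # lateral grid spacing
    delta_z = step_z                               # axial spacing = stage step
    n_xy = 2 * math.ceil(fwhm_xy / delta_xy) + 1   # odd count, centered on peak
    n_z = 2 * math.ceil(fwhm_z / delta_z) + 1
    return delta_xy, n_xy, n_z
```

For example, with FWHM_xy = 2 µm, FWHM_z = 8 µm, λ = 520 nm, NA_Ls = 0.28, and a 2 µm step, the lateral spacing is about 0.38 µm and the grid covers 13 × 13 × 9 samples.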
Richardson-Lucy deconvolution
Deconvolution tries to undo the blurring that is introduced by convolving the original image with the PSF of the microscope. In addition, some noise, predominantly originating from the camera, adds to the image^{15}:
\(D=O\otimes H+N\)   (17)
D is the recorded image stack, O is the restored image stack, H is the point spread function, and N is some additive noise. ⊗ symbolizes the convolution operator.
Equation (17) can be written in the frequency domain as
\(\Im (D)=\Im (O)\cdot \Im (H)+\Im (N)\)   (18)
where ℑ denotes the discrete 3DFourier transform.
In the absence of noise, the convolution could easily be reverted by inverse filtering:
\(O={\Im }^{-1}(\frac{\Im (D)}{\Im (H)})\)   (19)
where ℑ^{−1} denotes the discrete inverse Fourier transform.
However, in practice the straightforward division by ℑ(H) would extremely amplify the additive noise present in the image, boosting high-frequency components towards infinity wherever ℑ(H) contains values close to zero. One of the various approaches developed to deal with this problem^{26} is the Richardson-Lucy (RL) deconvolution algorithm^{27,28}. The RL algorithm assumes that the recorded image stack is a combination of the desired unblurred image stack, convolved with the PSF by the microscope optics, and some additional Poisson-distributed noise (Eq. (17)). This property makes RL deconvolution especially adequate for images recorded with CCD cameras or photomultipliers: since these devices count photons, they generally exhibit Poisson-distributed noise^{29}.
The RL algorithm tries to find an improved image stack which, when blurred with the known PSF, matches the recorded image stack as closely as possible. As a first estimate, the algorithm starts with the recorded image stack. During each further iteration step n, a correction factor is computed for each voxel of the current estimate. This 3D matrix of correction factors is then multiplied element-wise with the current estimate to obtain the next estimate n + 1. To determine the correction factors, a copy of the current estimate is blurred by convolution with the PSF. The original stack is divided element-wise by this blurred version, yielding a new stack, which is then blurred a second time using the same PSF (if the PSF is asymmetric, it has to be flipped around its origin first). This yields the correction factors used to obtain the estimate for the next iteration. Equation (20) describes the basic RL algorithm:
\({O}^{(n+1)}={O}^{(n)}\cdot (\hat{H}\otimes \frac{D}{H\otimes {O}^{(n)}})\)   (20)
D is the recorded image stack, O^{(n)} is the restored image stack at iteration step n, H is the point spread function, and \(\hat{H}\) is the PSF flipped around its center point. ⊗ symbolizes the convolution operator.
Using the Fourier transform, Eq. (20) can be written as:
\({O}^{(n+1)}={O}^{(n)}\cdot {\Im }^{-1}(\Im (\frac{D}{{\Im }^{-1}(\Im ({O}^{(n)})\cdot \Im (H))})\cdot \Im {(H)}^{\ast })\)   (21)
where ℑ^{−1} denotes the inverse Fourier transform, and ℑ(H)^{*} the complex conjugate of ℑ(H).
As long as the iteration converges, the difference between the current and the previous estimate becomes smaller, finally approaching zero (i.e. the correction factors converge to one). In practice, either a constant number of iterations is performed (e.g. 30), or, preferably, a quality criterion D_{rel} is defined that quantifies the difference between the current and the previous image stack. We defined the normalized average squared difference D_{rel} between the output stacks obtained from two adjacent iterations as a quality criterion:
\(D(k)=\frac{1}{N}\sum _{i}{({O}_{i}^{(k)}-{O}_{i}^{(k-1)})}^{2},\,\,\,{D}_{rel}(k)=\frac{D(k)}{D(1)}\cdot 100 \% \)   (22)
k is the iteration number, D(k) is the quality criterion at iteration step k, N is the number of voxels in the stack, and O is the deconvolved image stack.
Since D_{rel} is normalized by D_{rel}(1) obtained from the first iteration step, D_{rel} starts at 100% and approaches 0% during the further iteration steps. In practice, the iterations are stopped when D_{rel}(k) becomes smaller than a predefined stop criterion ε.
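The iteration scheme and stop criterion above can be sketched in one dimension. This is an illustrative Python re-implementation using direct spatial convolution for clarity; the actual software operates on 3D stacks via FFTs, and all function names here are ours.

```python
# Minimal 1-D Richardson-Lucy sketch with the relative-change stop criterion
# D_rel described in the text.

def convolve(signal, kernel):
    # 'same'-size convolution with zero padding; kernel length assumed odd
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i - (j - half)
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(recorded, psf, stop=1e-3, max_iter=100):
    psf_flipped = psf[::-1]           # mirrored PSF for the correction step
    estimate = list(recorded)         # first estimate: the recorded data
    d_first = None
    for _ in range(max_iter):
        blurred = convolve(estimate, psf)
        # element-wise ratio of recorded data to the blurred estimate
        ratio = [d / b if b > 1e-12 else 0.0 for d, b in zip(recorded, blurred)]
        correction = convolve(ratio, psf_flipped)
        new = [e * c for e, c in zip(estimate, correction)]
        # average squared difference between successive estimates (D(k))
        d = sum((a - b) ** 2 for a, b in zip(new, estimate)) / len(new)
        estimate = new
        if d_first is None:
            d_first = d if d > 0 else 1.0
        elif d / d_first < stop:      # stop when D_rel falls below epsilon
            break
    return estimate
```

Deconvolving a noiseless blurred point source with this sketch progressively re-concentrates the intensity at the source position.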
A known limitation of the RL algorithm is its slow convergence, as well as oscillations and noise amplification in the presence of significant noise levels^{26}. Tikhonov-Miller regularization^{30} and total variation regularization are two improvements of the original RL algorithm that were developed to deal with these problems^{31}. We implemented a straightforward regularization approach termed flux-preserving regularization, which relies on simple spatial filtering of the intermediate data obtained in each iteration step^{15}. For flux-preserving regularization, a smoothed copy O*^{(n)} is obtained from the image stack O^{(n)} by convolving it with an averaging filter. When calculating the next iteration step n + 1, a weighted fraction γ of O*^{(n)} is added to obtain the next intermediate result O^{(n+1)}.
γ is a weighting factor ranging between 0 (no regularization) and 1. ℜ is the 3 × 3 × 3 filter kernel we used for average filtering.
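The smoothing step can be illustrated in one dimension; a three-point average stands in for the 3 × 3 × 3 kernel ℜ, and the convex blend below is our reading of the weighting scheme, not necessarily the authors' exact update formula.

```python
# Sketch of flux-preserving regularization: after each RL iteration, a
# weighted fraction gamma of a locally averaged copy is blended into the
# estimate. 1-D three-point averaging stands in for the 3x3x3 kernel.

def regularize(estimate, gamma):
    smoothed = []
    n = len(estimate)
    for i in range(n):
        # local average over the available neighbors (shrinks at borders)
        window = [estimate[j] for j in (i - 1, i, i + 1) if 0 <= j < n]
        smoothed.append(sum(window) / len(window))
    # blend: gamma = 0 leaves the estimate unchanged, gamma = 1 fully smooths
    return [(1 - gamma) * e + gamma * s for e, s in zip(estimate, smoothed)]
```

For interior voxels the averaging kernel is normalized, so the blend preserves the total flux of the estimate, which is the property the method is named for.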
Data availability
The data supporting the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The source code of the program used for obtaining the deconvolutions presented in this paper is available in the supplementary software. For Windows (64-bit, at least 8 GB RAM), a compiled version that does not require MATLAB is also available.
References
 1.
Chatterjee, K., Pratiwi, F. W., Wu, F. C. M., Chen, P. & Chen, B. C. Recent Progress in Light Sheet Microscopy for Biological Applications. Appl. Spectrosc. 72, 1137–1169 (2018).
 2.
Girkin, J. M. & Carvalho, M. T. The light-sheet microscopy revolution. J. Opt. 20, 053002 (2018).
 3.
Dodt, H.-U. et al. Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain. Nat. Methods 4, 331–336 (2007).
 4.
Keller, P. J. & Dodt, H. U. Light sheet microscopy of living or cleared specimens. Curr.Opin.Neurobiol. 22, 138–143 (2012).
 5.
Huisken, J., Swoger, J., Lindek, S. & Stelzer, E. H. K. Selective Plane Illumination Microscopy. In Handbook of Biological Confocal Microscopy (ed. Pawley, J. B.) 672–679 (Springer, 2006).
 6.
Engelbrecht, C. J. & Stelzer, E. H. Resolution enhancement in a lightsheetbased microscope (SPIM). Opt. Lett. 31, 1477 (2006).
 7.
Sage, D. et al. DeconvolutionLab2: An opensource software for deconvolution microscopy. Methods 115, 28–41 (2017).
 8.
Quammen, C. Clarity – a C++ opensource deconvolution software library. Available at: http://cismm.cs.unc.edu/downloads/. (2007).
 9.
Kirshner, H., Sage, D. & Unser, M. 3D PSF models for fluorescence microscopy in ImageJ. In Proc. Int. Conf. on Methods and Applications of Fluorescence (MAF'11) (2011).
 10.
Abràmoff, M. D., Magalhães, P. J. & Ram, S. J. Image Processing with ImageJ. Biophotonics Int. 11, 36–42 (2004).
 11.
Preibisch, S. et al. Efficient bayesianbased multiview deconvolution. Nat. Methods 11, 645–648 (2014).
 12.
Wu, Y. et al. Simultaneous multiview capture and fusion improves spatial resolution in widefield and lightsheet microscopy. Optica 3, 897 (2016).
 13.
Boniface, A., Mounaix, M., Blochet, B., Piestun, R. & Gigan, S. Pointspreadfunction engineering through a complex medium. Opt. InfoBase Conf. Pap. Part F82C, (2017).
 14.
Cole, R. W., Jinadasa, T. & Brown, C. M. Measuring and interpreting point spread functions to determine confocal microscope resolution and ensure quality control. Nat. Protoc. 6, 1929–1941 (2011).
 15.
Bratsolis, E. & Sigelle, M. A spatial regularization method preserving local photometry for RichardsonLucy restoration. Astron. Astrophys. 375, 1120–1128 (2001).
 16.
Reynaud, E. G., Kržič, U., Greger, K. & Stelzer, E. H. K. Light sheetbased fluorescence microscopy: More dimensions, more photons, and less photodamage. HFSP J. 2, 266–275 (2008).
 17.
Susaki, E. A. et al. WholeBrain Imaging with SingleCell Resolution Using Chemical Cocktails and Computational Analysis. Cell 157, 726–739 (2014).
 18.
Olarte, O. E., Andilla, J., Gualda, E. J. & LozaAlvarez, P. Lightsheet microscopy: a tutorial. Adv. Opt. Photonics 10, 111 (2018).
 19.
Ogier, A., Dorval, T. & Genovesio, A. Inhomogeneous deconvolution in a biological images context. 2008 5th IEEE Int. Symp. Biomed. Imaging From Nano to Macro, Proceedings, ISBI 744–747, https://doi.org/10.1109/ISBI.2008.4541103 (2008).
 20.
Kim, B. & Naemura, T. Blind Depthvariant Deconvolution of 3D Data in Widefield Fluorescence Microscopy. Sci. Rep. 5, 9894 (2015).
 21.
Chen, Y. et al. Measure and model a 3D spacevariant PSF for fluorescence microscopy image deblurring. Opt. Express 26, 14375 (2018).
 22.
Saghafi, S., Becker, K., Hahn, C. & Dodt, H. U. 3Dultramicroscopy utilizing aspheric optics. J. Biophotonics 7, 117–125 (2014).
 23.
Sternberg, S. R. Biomedical Image Processing. Computer (Long. Beach. Calif). 16, 22–34 (1983).
 24.
Pizer, S. M. et al. Adaptive Histogram Equalization and Its Variations. Computer Vision Graphics and Image Processing 39, 355–368 (1987).
 25.
Powell, M. J. D., Authority, U. K. A. E. & H.M.S.O. A fortran subroutine for solving systems of nonlinear algebraic equations. (H.M. Stationery Office, 1968).
 26.
Sibarita, J. B. Deconvolution microscopy. Adv. Biochem. Eng. Biotechnol. 95, 201–243 (2005).
 27.
Lucy, L. B. An iterative technique for the rectification of observed distributions. Astron. J. 79, 745 (1974).
 28.
Richardson, W. H. BayesianBased Iterative Method of Image Restoration. J. Opt. Soc. Am. 62, 55 (1972).
 29.
Laasmaa, M., Vendelin, M. & Peterson, P. Application of regularized RichardsonLucy algorithm for deconvolution of confocal microscopy images. J. Microsc. 243, 124–140 (2011).
 30.
Tikhonov, A. N. & Arsenin, V. I. A. Solutions of illposed problems. (Winston, 1977).
 31.
Dey, N. et al. Richardson – Lucy Algorithm With Total Variation Regularization for 3D Confocal Microscope Deconvolution. 266, 260–266 (2006).
 32.
Becker, K., Jährling, N., Kramer, E. R., Schnorrer, F. & Dodt, H. U. Ultramicroscopy: 3D reconstruction of large microscopical specimens. Journal of Biophotonics 1, 36–42 (2008).
 33.
Ertürk, A. et al. Threedimensional imaging of solventcleared organs using 3DISCO. Nat. Protoc. 7, 1983–1995 (2012).
 34.
Perrin, D. et al. WholeBody Imaging with SingleCell Resolution by Tissue Decolorization. Cell 159, 911–924 (2014).
 35.
Hahn, C. et al. High‐resolution imaging of fluorescent whole mouse brains using stabilised organic media (sDISCO). J. Biophotonics e201800368, https://doi.org/10.1002/jbio.201800368 (2019).
Acknowledgements
The study was funded by the Austrian Science Fund (FWF), Project P 23102-N22.
Author information
Contributions
K.B. developed the deconvolution software and wrote the manuscript. S.S. derived the equations describing the PSF of a light sheet microscope. M.P., I.S., C.H., N.J. contributed the sample data sets. M.F. processed most of the sample images. H.U.D. contributed to the scientific discussion and supervised this work.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Becker, K., Saghafi, S., Pende, M. et al. Deconvolution of light sheet microscopy recordings. Sci Rep 9, 17625 (2019). https://doi.org/10.1038/s41598-019-53875-y