Abstract
Interrogating subcellular biological dynamics in a living cell often requires noninvasive imaging of the fragile cell with high spatiotemporal resolution across all three dimensions. This poses significant challenges to modern fluorescence microscopy, because the limited photon budget of a live-cell imaging task forces conventional approaches to compromise among spatial resolution, volumetric imaging speed, and phototoxicity. Here, we incorporate a two-stage view-channel-depth (VCD) deep-learning reconstruction strategy with a Fourier light-field microscope based on a diffractive optical element to realize fast 3D super-resolution reconstruction of intracellular dynamics from single diffraction-limited 2D light-field measurements. This VCD-enabled Fourier light-field imaging approach (F-VCD) achieves video-rate (50 volumes per second) 3D imaging of intracellular dynamics at a high spatiotemporal resolution of ~180 nm × 180 nm × 400 nm with strong noise resistance, such that light-field images with a signal-to-noise ratio (SNR) as low as -1.62 dB can still be well reconstructed. With this approach, we successfully demonstrate 4D imaging of intracellular organelle dynamics, e.g., mitochondrial fission and fusion, across ~5000 time points of observation.
Introduction
Unraveling the spatiotemporal regulation and dynamics inside living cells is crucial for advancing fundamental biological research1,2,3,4. Many fast biological processes occur in three dimensions at a submicron scale and evolve continuously over long timescales, posing a considerable challenge for current fluorescence microscopy techniques. While scanning-based light sheet microscopy5,6, confocal microscopy, and structured illumination microscopy7 can provide high spatial resolution in three-dimensional space, they all suffer from compromised volumetric imaging speed and strong photodamage owing to repetitive light exposure of the samples.
The emerging light-field microscopy (LFM) delivers superior temporal resolution and photon efficiency through encoding both spatial and angular information of 3D signals into a single 2D camera snapshot, thus yielding 3D reconstruction without scanning8. However, the poor and nonuniform spatial resolution, as well as the presence of reconstruction artifacts8,9, greatly limit its application in subcellular imaging beyond the diffraction limit10.
Unlike standard LFM, which modulates the light at the native image plane, its structural variant, Fourier light field microscopy (FLFM), records the spatial and angular information in the Fourier domain, permitting spatially invariant sampling and leading to a significant reduction of sampling-induced artifacts11. In addition, FLFM overcomes the spatial sampling limitation of standard LFM and leverages the optical aperture to achieve improved spatial resolution11,12,13.
Despite the abovementioned advances, the photon budget constraint still exists in FLFM. Spatial resolution is still sacrificed to obtain the angular information, making subcellular structures difficult to distinguish. When the signal weakens after long-term excitation and fluorescence emission, undesirable ringing artifacts also arise14,15 in the reconstructed results, further diminishing the applicability of FLFM in live-cell imaging. The super-resolution radial fluctuations (SRRF) algorithm has been applied to light-field views to obtain high-resolution 3D reconstruction of FLFM beyond the diffraction limit16. However, the super-resolution images generated by SRRF usually suffer from severe artifacts, especially under suboptimal imaging conditions with low signal-to-noise ratio (SNR) or insufficient sampling17,18. Also, the fabrication of current Fourier microlens arrays is challenging, owing to the required large pitch sizes and small surface curvatures, leading to high cost.
Deep neural networks (DNNs) have emerged as a powerful tool to overcome the limitations imposed by photon budgets, thanks to their strong fitting ability and their capacity to incorporate abundant prior knowledge19,20,21,22,23,24,25,26. Our previous development of the view-channel-depth (VCD) deep-learning strategy successfully exceeded the photon budget limitation of conventional LFM and achieved 3D reconstructions from single 2D LFM snapshots27,28,29,30. Given the severely under-sampled spatial-angular information coupled with varying noise in the light-field imaging of live cells, super-resolution 3D reconstruction from a single 2D raw light-field image (LF) remains highly challenging for standard VCD-based light-field imaging.
Here, we report an FLFM system based on a diffractive optical element (DOE). Besides its low cost and ultracompact structure, the customized DOE Fourier lens maximizes the modulation of incident light and overcomes the obstacles in fabricating conventional Fourier microlenses. Moreover, inspired by progressive image enhancement19, we present a two-stage VCD reconstruction strategy, F-VCD, which enables fast, long-term 3D imaging of live cells at sub-200 nm resolution. F-VCD decomposes the complex FLFM reconstruction problem into two procedures, view-correlated denoising and limited-view 3D reconstruction, which narrows the gap between corrupted 2D LFs and high-quality 3D stacks. Unlike the conventional VCD model, it contains an additional view-attention network to denoise the light-field (LF) views with various levels of noise. To reconstruct 3D information (over 40 slices) from only 3 FLFM views, triple dilated convolution layers with different kernels were introduced into F-VCD to extract and aggregate multi-channel features from FLFM views, substantially augmenting the input information for the F-VCD model. With F-VCD, we successfully demonstrate 3D super-resolution imaging of live cells at ~180 nm × 180 nm × 400 nm resolution across an ~81 μm × 81 μm × 7 μm volume with a volume rate up to 50 Hz, while maintaining high reconstruction quality even when the SNR of the LF images becomes very low. We also capture the fast locomotion and morphology changes of mitochondria over 4 minutes, containing about 5000 time points of observation. Diverse mitochondrial fusion, fission, and mitochondrial dynamic tubulation (MDT) phenomena have been identified and quantitatively analyzed based on the F-VCD live imaging results.
Results
Principle of F-VCD strategy
We built a DOE-based Fourier light field microscope to capture fast 3D dynamics, as shown in Fig. 1a. The transmittance function of the DOE was designed to be similar to a previously proposed conventional microlens array12, but at a price dozens of times lower because the DOE is much easier to fabricate. To solve the complex task of denoising and super-resolution 3D reconstruction in such light-field imaging of live cells, we developed a two-stage network, F-VCD, to sequentially tackle the two outstanding problems of denoising and 2D-to-3D transformation. The principle of F-VCD is shown in Fig. 1b, and its detailed architecture is shown in Supplementary Fig. 1. Before model training, dual-stage ground truths (GTs) (HR Stacks and Clean LFs) and synthetic light-field images (Corrupted LFs) are generated according to the following physics-modeled processes: (i) The “HR Stacks”, which are obtained from a commercial 3D microscope (ZEISS LSM 980 with Airyscan 2), serve as the GTs for the reconstruction module and are resampled to match the sampling rate of the FLFM setup; (ii) The “Clean LFs” are designed to guide the denoising task and are generated by convolving the “HR Stacks” with the Fourier light field point spread function based on wave-optics theory; (iii) The “Corrupted LFs” are generated by adding Gaussian noise, Poisson noise, and a specific light-field background to the “Clean LFs”, based on the properties of diverse measured LFs.
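The three-step data-generation recipe above can be sketched numerically. The snippet below forms a clean LF by depth-wise convolution of a toy stack with a per-depth PSF (a simple FFT stand-in for the wave-optics model), then corrupts it with a constant background, Poisson shot noise, and Gaussian read noise, as in step (iii). All parameter values and the toy Gaussian PSFs are illustrative assumptions, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_lf(stack, psf):
    # Depth-wise FFT convolution of a 3D stack (z, h, w) with a per-depth
    # PSF (z, h, w), summed over z -- a circular-convolution stand-in for
    # the wave-optics forward projection.
    lf = np.zeros(stack.shape[1:])
    for z in range(stack.shape[0]):
        lf += np.real(np.fft.ifft2(
            np.fft.fft2(stack[z]) * np.fft.fft2(np.fft.ifftshift(psf[z]))))
    return lf

def corrupt(clean_lf, background=10.0, read_sigma=2.0, photons=50.0):
    # Step (iii): constant background + Poisson shot noise + Gaussian
    # read noise, then rescale back to the clean-LF intensity units.
    expected = clean_lf * photons + background
    noisy = rng.poisson(expected).astype(float)
    noisy += rng.normal(0.0, read_sigma, noisy.shape)
    return noisy / photons

# Toy example: 3-slice stack with a point source in the middle slice,
# blurred by Gaussian PSFs whose width grows with defocus.
h = w = 32
yy, xx = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
psf = np.stack([np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in (1.0, 2.0, 3.0)])
psf /= psf.sum(axis=(1, 2), keepdims=True)
stack = np.zeros((3, h, w))
stack[1, 16, 16] = 1.0

clean = project_lf(stack, psf)
noisy = corrupt(clean)
```

Because each PSF slice is normalized, the clean projection conserves the total signal of the stack, which is a useful check that the forward model is energy-preserving before noise is added.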
We specifically designed two network modules to progressively denoise the corrupted Fourier LFs and reconstruct the denoised LFs. The first module is a view-correlated denoising module, termed the “F-denoise” module, which converts the noisy views (Fig. 1b-iv) into clean ones (Fig. 1b-v). The presence of parallax and variations in intensity distribution among the three views results in differences in SNR and spatial resolution, highlighting the importance of assigning different weights to each of the views for optimal denoising results14,31. To mitigate these discrepancies, we introduced a view-attention branch into a conventional RCAN network, enabling a balanced influence on network inference results. The second module of F-VCD, F-Reconstruction, functioning as the ‘Fourier light-field reconstruction’, is a “limited-view to 3D depth” transformation module that reconstructs 3D volumes (41 slices, Fig. 1b-v) from a few FLFM views (three views in our case, Fig. 1b-iv). Unlike conventional LFM encoded with hundreds of views, 3D reconstruction from extremely few views in FLFM is challenging for the previous VCD network27. Therefore, we added dilated convolutional layers to extract multi-scale features before VCD, increasing the number of output channels extracted from the three views. Furthermore, the losses of these two modules are weighted to preserve the continuity of the data distribution when coupling the two pseudo-inverse learning stages. More details about the F-VCD implementation are described in Supplementary Fig. 1 and the “Methods” section.
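As a minimal illustration of the dilated-convolution idea used to enlarge the receptive field over the few views, the sketch below applies the same 3 × 3 kernel at several dilation rates and stacks the results as channels. It is a plain numpy stand-in (technically a cross-correlation, as in deep-learning “convolution”), not the trained F-VCD layer, and the averaging kernel is an illustrative placeholder.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation):
    # 'Same'-padded 2D cross-correlation with integer dilation: sampling
    # the kernel taps `dilation` pixels apart enlarges the receptive
    # field without adding parameters.
    kh, kw = kernel.shape
    pad_h = ((kh - 1) * dilation + 1) // 2
    pad_w = ((kw - 1) * dilation + 1) // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * padded[di:di + img.shape[0],
                                         dj:dj + img.shape[1]]
    return out

view = np.random.default_rng(1).random((16, 16))
k = np.full((3, 3), 1 / 9.0)  # simple averaging kernel as a placeholder
# Three parallel branches with growing receptive fields, stacked as channels.
features = np.stack([dilated_conv2d(view, k, d) for d in (1, 2, 3)])
```

Stacking the three branches multiplies the channel count fed into the subsequent view-to-depth transformation, which is the role the dilated blocks play in F-VCD.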
By iteratively minimizing the weighted loss under the guidance of the dual-step GT, F-VCD can be well trained to instantly provide noise-free, high-resolution 3D reconstructions from noisy 2D LF images captured by a lab-built FLFM system (Fig. 1c). Compared with the slow, traditional deconvolution-based reconstruction method, F-VCD achieves reconstruction hundreds of times faster (77 ms vs. 4–13 s, Supplementary Fig. 2).
F-VCD enables high-fidelity reconstruction under various SNRs
We first tested the reconstruction resolution of F-VCD and its fidelity using simulation data. As shown in Fig. 2a, the outer mitochondrial membrane could not be resolved at all by the deconvolution algorithm. With the one-frame SRRF strategy16, although the resolution seemed to be improved, it merely made the mitochondria appear thinner but failed to distinguish the outer membrane. In addition, there were notable artifacts shown in the axial image. Therefore, its structural similarity index (SSIM) was even lower than that of the deconvolution algorithm. In contrast, F-VCD remarkably enhanced the reconstruction quality of FLFM, with which the membrane structure of the mitochondria was clearly reconstructed with higher contrast in both the x–y and x–z planes. Meanwhile, the F-VCD result showed a very high SSIM of 0.941, as compared to the values lower than 0.8 in SRRF and deconvolution results. The normalized root mean square error (NRMSE) values also indicated that F-VCD had the best reconstruction accuracy among the three reconstruction approaches.
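For reference, the NRMSE metric used above to rank the three approaches can be computed as below; normalizing the RMSE by the ground-truth dynamic range is one common convention, assumed here since the paper does not state which normalization it uses.

```python
import numpy as np

def nrmse(pred, gt):
    # RMSE normalized by the ground-truth dynamic range.
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return rmse / (gt.max() - gt.min())

gt = np.array([[0.0, 1.0], [2.0, 3.0]])
pred = gt + 0.3        # a uniform 0.3 offset against a range of 3.0
err = nrmse(pred, gt)  # 0.3 / 3.0 = 0.1
```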
Then we tested the robustness of F-VCD under different SNRs on the simulation data to evaluate its capability of monitoring the living cells in the long term. As shown in Fig. 2b, F-denoise could recover the image even with an ultra-low SNR down to −1.62 dB. By using the whole F-VCD, 3D super-resolution reconstruction was realized with a high structure fidelity of 0.91 SSIM.
High-resolution reconstruction of fixed cells using F-VCD
We assessed the performance of F-VCD on fixed U2OS cells expressing Tomm20-EGFP-labeled outer mitochondrial membrane signals. F-VCD realized 3D reconstruction with the hollow membrane structure clearly discernible (Fig. 3a). Statistical analysis revealed that F-denoise could reduce the axial artifacts and slightly improve the resolution from ~400 nm to ~320 nm relative to conventional FLFM deconvolution, while the full F-VCD further provided 2× lateral and 1.5× axial resolution improvement with minimal artifacts (Fig. 3b, c). Compared to U-net and RCAN, F-denoise showed superior performance in denoising FLFM views and extending the equivalent resolution and frequency spectrum (Supplementary Fig. 3). Together with the limited-view reconstruction module (F-Reconstruction), this progressive optimization strategy achieved higher 3D resolution, higher fidelity, and fewer artifacts than a one-stage reconstruction network (Supplementary Fig. 4). Besides, compared with the original VCD, F-Reconstruction shows higher spatial resolution and contrast owing to the extra multi-scale feature extraction module (Supplementary Fig. 5). We further evaluated the reconstruction ability of F-VCD using U2OS cells expressing EGFP-Sec61β-labeled endoplasmic reticulum (ER) signals. Despite the high-level background and densely labeled signals, F-VCD accurately predicted the 3D distribution of the ER (Fig. 4). In contrast, deconvolution reconstruction suffered from poor resolution, excessive axial artifacts, and noticeable signal loss. It should also be noted that F-VCD successfully reconstructed an axial range of ~7 μm, surpassing the depth-of-field (DoF) limitation imposed by the optical hardware and enabling complete visualization of an entire cell.
4D visualization and quantitative analysis of mitochondrial dynamics in a living cell
With F-VCD-enhanced FLFM, we demonstrated long-term, video-rate 4D observation of mitochondrial dynamics in a living cell. A low excitation power down to 0.22 mW (at the focal plane of the objective) was set to minimize phototoxicity, which unsurprisingly led to limited SNR and reduced effective image bandwidth in the captured views. Despite these low-quality inputs, the F-VCD network successfully denoised the views and provided high-resolution 3D reconstructions, as shown in Fig. 5a. Notably, during consecutive imaging, the signals were also bleached gradually. This phenomenon is inevitable for live-cell imaging and is evidenced by the decreased cut-off frequency in the raw captured LFs. The quality of the deconvolution results degraded quickly as a consequence of such signal bleaching (Fig. 5b, Supplementary Fig. 6). In sharp contrast, F-VCD maintained the resolution of the reconstructed images across ~5000 3D observations (Fig. 5b, Supplementary Video 1), indicating the better robustness of F-VCD under a low photon budget compared with scanning-based imaging modalities (less than 50 volumes, Supplementary Fig. 7). With robust performance on various FLFM inputs, F-VCD achieved fast and sustained 3D observation of mitochondrial dynamics, thereby allowing quantitative analysis of mitochondrial fusion, fission, and MDT at millisecond temporal resolution and 180 nm spatial resolution (Fig. 5c, e, Supplementary Fig. 8, Supplementary Video 2). We performed 3D tracking of the mitochondria during the abovementioned events (Fig. 5d) to analyze their velocity dynamics (Fig. 5f, g). Our results revealed a distinct velocity jump during mitochondrial fusion and fission, in contrast to the relatively constant velocity during MDT. Mitochondrial velocity tended to be significantly higher at the end of fusion and at the beginning of fission.
For the quantitative validation of F-VCD’s performance, we conducted an in situ comparison between traditional imaging methods (confocal and wide-field imaging, Supplementary Fig. 9) and F-VCD reconstruction both on fixed and living cells, and Squirrel analysis32 was performed. The results showed high structural similarity and few reconstruction errors of F-VCD, indicating that the proposed imaging strategy could be used to perform biological downstream analysis.
Conclusion and discussion
High spatial resolution, high volumetric imaging speed, and low light exposure have to be carefully balanced in the current fluorescence imaging of live cells. F-VCD surpasses this limitation by incorporating an improved VCD model with DOE-based Fourier light field microscopy, achieving video-rate, super-resolution 3D reconstructions of intracellular dynamics from single diffraction-limited 2D light-field measurements.
In terms of light-field encoding, the use of a DOE in the self-built FLFM dramatically reduces the cost of the system and allows readily accessible light-field modulation. Since the DOE is capable of flexible phase modulation, combining it with multi-focus imaging33,34 or enhanced DoF modulation35 might further push the limits of FLFM. In terms of light-field reconstruction, our F-VCD shows ~2.2-times improved resolution accompanied by ~30% increased reconstruction accuracy, compared to the conventional deconvolution-based FLFM approach. Also, unlike regular VCD, which directly predicts depth information from corrupted views, F-VCD follows a divide-and-conquer strategy to fulfill the coupled denoising and view-to-depth transformation tasks, thereby showing notably improved performance when dealing with live-cell imaging under various conditions. Besides, with the fast inference ability of the network, F-VCD achieves volumetric reconstruction hundreds of times faster than deconvolution (77 ms vs. 4–13 s). This computation time could be further reduced with multi-GPU parallel computing or model acceleration36 in future work. It is worth mentioning that, as a view-wise attention-denoising strategy independent of the arrangement mode of the input views, F-Denoise could also be used to improve the quality of each view and the reconstruction of conventional LFM (Supplementary Fig. 10).
F-VCD demonstrates a strong capability for imaging the fast morphological changes of mitochondria, including MDT, fission, and fusion, and for the follow-up imaging-based quantitative analysis. We believe our proposed imaging strategy will be a valuable and readily accessible tool for structural studies in cell biology. Here, we only demonstrated the capability of F-VCD for live-cell imaging under a high-magnification hardware setting, which led to a limited DoF (~7 μm). We anticipate expanding the utility of F-VCD to other biological research that requires fast and clear observations, such as in vivo imaging, by designing suitable hardware settings.
Methods
DOE-based FLFM optical setup
The FLFM system is based on a commercial inverted microscope (Olympus IX73) (Fig. 1a). A 100×/1.4 NA objective (Olympus UPlanSApo 100×/1.4) is used to collect fluorescence signals and is controlled by an objective scanner (P73.Z200s, Coremorrow) for focal-plane shifting. An iris (SM1D12, Thorlabs) is positioned at the native image plane (NIP) to adjust the imaging field of view and avoid view overlapping. A Fourier lens (FL, ACT508-300, Thorlabs) performs the optical Fourier transform. To maximize the utilization of the full Fourier aperture and strike the optimum balance between resolution and FOV, we customized a low-cost silica DOE (ZheYan Technology) to function as a Fourier MLA (pitch d = 3.25 mm, f-number = 37, fML = 120 mm), as illustrated in the inset of Fig. 1a. The DOE is placed at the back focal plane of the FL, which is conjugated to the back pupil of the objective, to segment the Fourier pupil into three views. The light-field signals are then captured by an sCMOS camera (Prime BSI Express, Teledyne Photometrics) located at the back focal plane.
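As a quick sanity check, the quoted Fourier-MLA parameters are mutually consistent: the f-number of a microlens is its focal length divided by its pitch.

```python
# f-number of a microlens = focal length / pitch
f_ml_mm = 120.0   # quoted microlens focal length f_ML
pitch_mm = 3.25   # quoted microlens pitch d
f_number = f_ml_mm / pitch_mm   # ~36.9, i.e. the quoted f/37
```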
Architecture of F-VCD neural network
F-VCD is composed of two modules: “F-Denoise” and “F-Reconstruction” (see Supplementary Fig. 1). In the F-Denoise module, we adopt the well-known RCAN architecture for feature extraction but add another “view-attention” branch to account for the different resolution and SNR of the three views. Due to the dense connections among the residual channel attention blocks of RCAN, low-spatial-frequency information can easily bypass the network, which is beneficial to the prediction of high-spatial-frequency information. In addition to the original channel attention branch, we rearrange the data dimensions from (view, h, w, c) to (c, h, w, view) so that the attention block can extract view-wise coefficients to reweight the extracted features. In the F-Reconstruction module, based on our previous VCD network27, we add three dilated convolution blocks to increase the number of input channels (3 for the designed triple-view LF) and upscale the lateral size of the extracted features. We also modify the original encoding block of the U-Net in VCD by replacing the plain convolution operation with a residual block for better network convergence. The normalization layer and activation function are also changed to Instance Normalization and LeakyReLU to avoid the truncation of weak signals during the optimization of the deep network.
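The view-wise attention can be pictured as a transpose-reweight-transpose operation on the feature tensor. In this numpy sketch the “learned” view coefficients are replaced by a softmax over per-view mean activation, purely for illustration; the real branch learns its weights during training.

```python
import numpy as np

def view_attention(features, weights=None):
    # Rearrange (view, h, w, c) -> (c, h, w, view) so the attention block
    # acts on the view axis, scale each view by its coefficient, and
    # rearrange back.  Stand-in weights: softmax over per-view mean
    # activation (illustrative only).
    t = np.transpose(features, (3, 1, 2, 0))      # (c, h, w, view)
    if weights is None:
        pooled = t.mean(axis=(0, 1, 2))           # one scalar per view
        e = np.exp(pooled - pooled.max())
        weights = e / e.sum()
    t = t * weights                               # broadcast over view axis
    return np.transpose(t, (3, 1, 2, 0))          # back to (view, h, w, c)

feats = np.random.default_rng(2).random((3, 8, 8, 4))  # three FLFM views
out = view_attention(feats)
```

The dimension shuffle is the key point: by making "view" the trailing axis, the same attention machinery that normally weights channels instead weights whole views.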
Data preprocessing
In the imaging process of Fourier light field microscopy (FLFM), the captured LF, termed y, can be formulated as:

\(y = Hx + n + b\)
where x is the original 3D signal of the sample and H is the point spread function of FLFM. Due to the noise (n) and background (b) from the camera’s dark current, deconvolution-based algorithms produce ringing artifacts and miscalculated signals. To tackle the inverse problem of reconstructing the true signal (x) from its degraded observation (y), F-VCD employs a two-step approach that addresses each subproblem in turn: denoising and 3D reconstruction. Accordingly, the training dataset contains clean LFs, noisy LFs, and the corresponding high-resolution 3D stacks. We use a scalar diffraction model based on the Fresnel diffraction formula to calculate the PSF of FLFM. Synthetic LF projections were generated by convolving the PSF with the 3D stacks. According to measurements of the experimental data, such as SNR and signal-to-background ratio (SBR), corresponding noise (Poisson and Gaussian) and a constant background value were added to the synthetic LFs. Besides, owing to the input format required by the VCD network, triple views need to be extracted from the LF projections according to the location and pitch of each microlens. Finally, for data augmentation and to respect memory limits, random image flips, rotations, and crops were applied to establish the training dataset.
Training details of F-VCD
In the training stage, the F-VCD network is updated by optimizing the two sub-nets simultaneously. For the denoising net (F-Denoise), we use a weighted L1-L2 loss, in consideration of the denoising ability of the L1 loss and the convexity of the L2 loss. For the reconstruction net (F-Reconstruction), an L2 loss and a gradient loss are both used to guarantee pixel-wise regression and the recovery of high-frequency details. We chose an Adam optimizer with a decaying learning rate and a weighted ‘denoise-reconstruction’ loss function (e.g., 0.2 and 0.8, respectively). In addition, pretraining the denoising net and the reconstruction net separately helps the joint optimization converge quickly, and we recommend enlarging the lateral size of the pretraining data for better performance: for example, 160 × 160 × 3 for pretraining the denoising net and 80 × 80 × 3 for the joint optimization.
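The two loss terms and their coupling can be written compactly. The 0.5/0.5 split inside the L1-L2 term is an illustrative assumption (the paper states only that the two terms are weighted), while the 0.2/0.8 stage weights follow the example given above.

```python
import numpy as np

def l1_l2_loss(pred, gt, alpha=0.5):
    # Weighted sum of mean absolute error (L1) and mean squared error (L2);
    # alpha=0.5 is an illustrative split, not the paper's value.
    diff = pred - gt
    return alpha * np.mean(np.abs(diff)) + (1 - alpha) * np.mean(diff ** 2)

def joint_loss(denoise_loss, recon_loss, w_denoise=0.2, w_recon=0.8):
    # Couples the two stages with the example weights quoted in the text.
    return w_denoise * denoise_loss + w_recon * recon_loss
```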
After the convergence of F-VCD, the model automatically saves the best parameters for inference. Before 3D reconstruction by inference, the experimental light-field data were realigned with reference to the synthetic projections. A convenient way is to capture an LF image filled with a constant value or signal to obtain the rotation angle and microlens positions. Once realigned, view stacks can be obtained by cropping the triple views from the input experimental light fields.
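The final view-extraction step amounts to cropping one sub-image per microlens at the calibrated centers. A minimal sketch, with illustrative center coordinates and crop size (the real values come from the realignment calibration):

```python
import numpy as np

def extract_views(lf, centers, size):
    # Crop one square sub-aperture view per microlens, given the
    # calibrated (row, col) center of each microlens image.
    half = size // 2
    return np.stack([lf[r - half:r + half, c - half:c + half]
                     for r, c in centers])

lf = np.arange(64 * 64, dtype=float).reshape(64, 64)
# Illustrative calibration: three microlens centers on a 64x64 frame.
views = extract_views(lf, centers=[(16, 16), (16, 48), (48, 32)], size=16)
```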
Sample preparation
U2OS cells were grown in culture medium containing McCoy’s 5 A medium (Thermo Fisher Scientific) supplemented with 1% antibiotic-antimycotic (Thermo Fisher Scientific) and 10% fetal bovine serum (Thermo Fisher Scientific) at 37 °C with 5% CO2 in a humidified incubator. For labeling mitochondria or ER in fixed and living cells, U2OS cells were first transfected with Tomm20-EGFP for mitochondria and Sec61β-EGFP for ER using Lipofectamine 2000 according to the standard protocol and cultured at 37 °C with 5% CO2 for an additional 24 h. For fixed cell imaging, the cells were fixed with 2% glutaraldehyde for 20 min.
Quantification of SNR
The SNR was calculated by:

\(\mathrm{SNR} = 10\log_{10}\left(\sum_{x,y} f(x,y)^2 \Big/ \sum_{x,y}\big(f(x,y)-\hat{f}(x,y)\big)^2\right)\)

where \(f(x,y)\) is the signal distribution and \(\hat{f}(x,y)\) is the noisy image.
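A direct numpy implementation of this definition (signal power over noise power, in dB), where the noise image is the difference between the noisy observation and the clean signal:

```python
import numpy as np

def snr_db(signal, noisy):
    # Ratio of signal power to noise power, in dB.
    noise = noisy - signal
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

sig = np.ones((4, 4))
val = snr_db(sig, sig + 0.1)  # noise power is 1% of signal power -> 20 dB
```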
Statistics and reproducibility
We randomly split all datasets into training, validation, and test data to check the model performance across the training phase. The networks were trained and tested multiple times for 3D reconstruction of mitochondria and ER data to find the optimal set of hyperparameters. The amount of training data was chosen according to the quality of the prediction results. Statistical analysis and plotting were completed in GraphPad Prism. Replicates were defined as images obtained from different fields of view. For fixed- and live-cell imaging, data were collected in separate sessions to ensure the reproducibility of our model.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
All source data for all graphs and charts in the main and supplementary figures can be found in Supplementary Data.
Code availability
F-VCD source code and example data have been uploaded to Github.
References
Spinelli, J. B. & Haigis, M. C. The multifaceted contributions of mitochondria to cellular metabolism. Nat. Cell Biol. 20, 745–754 (2018).
Vyas, S., Zaganjor, E. & Haigis, M. C. Mitochondria and Cancer. Cell 166, 555–566 (2016).
Rowland, A. A. & Voeltz, G. K. Endoplasmic reticulum–mitochondria contacts: function of the junction. Nat. Rev. Mol. Cell Biol. 13, 607–615 (2012).
Wong, Y. C., Kim, S., Peng, W. & Krainc, D. Regulation and function of mitochondria–lysosome membrane contact sites in cellular homeostasis. Trends Cell Biol. 29, 500–513 (2019).
Planchon, T. A. et al. Rapid three-dimensional isotropic imaging of living cells using Bessel beam plane illumination. Nat. Methods 8, 417–423 (2011).
Chen, B.-C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).
Shao, L., Kner, P., Rego, E. H. & Gustafsson, M. G. L. Super-resolution 3D microscopy of live whole cells using structured illumination. Nat. Methods 8, 1044–1046 (2011).
Prevedel, R. et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727–730 (2014).
Levoy, M., Ng, R., Adams, A., Footer, M. & Horowitz, M. Light field microscopy. ACM Trans. Graph. 25, 924–934 (2006).
Li, H. et al. Fast, volumetric live-cell imaging using high-resolution light-field microscopy. Biomed. Opt. Express 10, 29–49 (2019).
Guo, C., Liu, W., Hua, X., Li, H. & Jia, S. Fourier light-field microscopy. Opt. Express 27, 25573–25594 (2019).
Hua, X., Liu, W. & Jia, S. High-resolution Fourier light-field microscopy for volumetric multi-color live-cell imaging. Optica 8, 614–620 (2021).
Cong, L. et al. Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio). eLife 6, e28158 (2017).
Zhu, T. et al. Noise-robust phase-space deconvolution for light-field microscopy. J. Biomed. Opt. 27, 076501 (2022).
Yoon, Y.-G. et al. Sparse decomposition light-field microscopy for high speed imaging of neuronal activity. Optica 7, 1457–1468 (2020).
Han, K. et al. 3D super-resolution live-cell imaging with radial symmetry and Fourier light-field microscopy. Biomed. Opt. Express 13, 5574–5584 (2022).
Yi, X. & Weiss, S. Cusp-artifacts in high order superresolution optical fluctuation imaging. Biomed. Opt. Express 11, 554–570 (2020).
Hernández, I. C., Mohan, S., Minderler, S. & Jowett, N. Super-resolved fluorescence imaging of peripheral nerve. Sci. Rep. 12, 12450 (2022).
Chen, J. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat. Methods 18, 678–687 (2021).
Chaudhary, S., Moon, S. & Lu, H. Fast, efficient, and accurate neuro-imaging denoising via supervised deep learning. Nat. Commun. 13, 5165 (2022).
Li, X. et al. Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising. Nat. Methods 18, 1395–1400 (2021).
Qiao, C. et al. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 18, 194–202 (2021).
Zhang, H. et al. High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network. Biomed. Opt. Express 10, 1044–1063 (2019).
Zhao, Y. et al. Isotropic super-resolution light-sheet microscopy of dynamic intracellular structures at subsecond timescales. Nat. Methods 19, 359–369 (2022).
Lu, Z. et al. Virtual-scanning light-field microscopy for robust snapshot high-resolution volumetric imaging. Nat. Methods 20, 735–746 (2023).
Vizcaíno, J. P. et al. Learning to reconstruct confocal microscopy stacks from single light field images. IEEE Trans. Comput. Imaging 7, 775–788 (2021).
Wang, Z. et al. Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning. Nat. Methods 18, 551–556 (2021).
Wagner, N. et al. Deep learning-enhanced light-field imaging with continuous validation. Nat. Methods 18, 557–563 (2021).
Zhu, L., Yi, C. & Fei, P. A practical guide to deep-learning light-field microscopy for 3D imaging of biological dynamics. STAR Protoc. 4, 102078 (2023).
Yi, C., Zhu, L., Li, D. & Fei, P. Light field microscopy in biological imaging. J. Innov. Opt. Health Sci. 16, 2230017 (2023).
Mo, Y., Wang, Y., Xiao, C., Yang, J. & An, W. Dense dual-attention network for light field image super-resolution. IEEE Trans. Circuits Syst. Video Technol. 32, 4431–4443 (2021).
Culley, S. et al. Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266 (2018).
He, K. et al. Snapshot multifocal light field microscopy. Opt. Express 28, 12108–12120 (2020).
Wang, X., Yi, H., Gdor, I., Hereld, M. & Scherer, N. F. Nanoscale resolution 3D snapshot particle tracking by multifocal microscopy. Nano Lett. 19, 6781–6787 (2019).
Ren, J. & Han, K. Y. 2.5D microscopy: fast, high-throughput imaging via volumetric projection for quantitative subcellular analysis. ACS Photonics 8, 933–942 (2021).
Li, X. et al. Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit. Nat. Biotechnol. 41, 282–292 (2023).
Acknowledgements
We thank the funding support by the National Key Research and Development Program of China (2022YFC3401102 and 2017YFA0700501), the National Natural Science Foundation of China (T2225014, 21874052, 61860206009, and 62375096) and Hubei Provincial Natural Science Foundation of China (2023AFB861).
Author information
Authors and Affiliations
Contributions
C.Y. was involved in the experiment setup, investigation, statistical analysis, and writing. L.Z. was involved in the experiment setup, investigation, writing, and editing. J.S. and Z.W. were involved in image processing and statistical analysis. M.Z. was involved in the sample preparation. F.Z., L.Y., J.T., L.H., and Y.H.Z. were involved in writing and editing. D.L. was involved in conceptualization, writing, and editing. P.F. was involved in the conceptualization, writing, editing, and project management.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Communications Biology thanks Qionghai Dai, Liwei Liu, and the other anonymous reviewer(s) for their contribution to the peer review of this work. Primary Handling Editors: Shan Raza and Christina Karlsson-Rosenthal. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yi, C., Zhu, L., Sun, J. et al. Video-rate 3D imaging of living cells using Fourier view-channel-depth light field microscopy. Commun Biol 6, 1259 (2023). https://doi.org/10.1038/s42003-023-05636-x