We demonstrate residual channel attention networks (RCAN) for the restoration and enhancement of volumetric time-lapse (four-dimensional) fluorescence microscopy data. First, we modify RCAN to handle image volumes, showing that our network enables denoising competitive with three other state-of-the-art neural networks. We use RCAN to restore noisy four-dimensional super-resolution data, enabling the capture of tens of thousands of images (thousands of volumes) without apparent photobleaching. Second, using simulations we show that RCAN enables resolution enhancement equivalent to, or better than, other networks. Third, we exploit RCAN for denoising and resolution improvement in confocal microscopy, enabling ~2.5-fold lateral resolution enhancement using stimulated emission depletion microscopy ground truth. Fourth, we develop methods to improve spatial resolution in structured illumination microscopy using expansion microscopy data as ground truth, achieving improvements of ~1.9-fold laterally and ~3.6-fold axially. Finally, we characterize the limits of denoising and resolution enhancement, suggesting practical benchmarks for evaluation and further enhancement of network performance.
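The core mechanism extended to volumes here is channel attention: each feature channel is rescaled by a gate computed from its own global statistics ("squeeze-and-excitation"), inside a residual block. A minimal NumPy sketch of that step for a 3D feature volume follows; the shapes, reduction factor and random weights are illustrative assumptions, not the authors' 3D-RCAN implementation or settings.

```python
# Minimal sketch of 3D channel attention (squeeze-and-excitation) with a
# residual connection, as used in RCAN-style networks. All shapes and
# weights below are illustrative assumptions.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def channel_attention_3d(feat, w1, w2):
    """feat: (Z, Y, X, C) feature volume; w1, w2: excitation MLP weights."""
    # Squeeze: average each channel over the entire volume.
    stat = feat.mean(axis=(0, 1, 2))               # (C,)
    # Excitation: bottleneck MLP -> per-channel gate in (0, 1).
    gate = sigmoid(np.maximum(stat @ w1, 0) @ w2)  # (C,)
    # Rescale channels by their gates, then add the residual (skip) path.
    return feat + feat * gate                      # broadcasts over Z, Y, X

rng = np.random.default_rng(0)
C, r = 8, 4                                        # channels, reduction factor
feat = rng.standard_normal((16, 16, 16, C))
w1 = rng.standard_normal((C, C // r))
w2 = rng.standard_normal((C // r, C))
out = channel_attention_3d(feat, w1, w2)
print(out.shape)  # (16, 16, 16, 8)
```

In the full network this rescaling sits between 3D convolutions inside each residual block, letting the network emphasize informative feature channels, which is the property the paper exploits for denoising and resolution enhancement of image volumes.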
The code and sample data used in this study are available at https://github.com/AiviaCommunity/3D-RCAN. An installation guide, data and instructions for use are also available from the same webpage.
J.C., Y.W. and H. Shroff thank the Marine Biological Laboratory at Woods Hole for providing the Deep Learning for Microscopy Image Analysis Course, as well as F. Jug, J. Folkesson and P. La Riviere for providing superb instruction. We thank W. Bement for the gift of the EMTB-3XEGFP plasmid (also available as Addgene plasmid no. 26741), J. Taraska for the LAMP1-GFP plasmid, G. Patterson for the GalT-GFP plasmid and P. Chandris for the pShooter pEF-Myc-mito-GFP plasmid. We also thank P. La Riviere and H. Eden for their careful reading of, and comments on, this work. This research was supported by the intramural research programs of the National Institute of Biomedical Imaging and Bioengineering and the National Heart, Lung, and Blood Institute within the National Institutes of Health, and by SBIR cooperative agreements with the National Institute of General Medical Sciences (nos. 1U44GM136091-01 and 6U44GM136091-02). A.U. and I.R.-S. acknowledge grant support from the NIH (no. R01 GM131054) and NSF (no. PHY 1607645). We thank the Office of Data Science Strategy, NIH, for providing a seed grant that enabled us to train deep learning models using cloud-based computational resources. The NIH, its staff and officers do not recommend or endorse any company, product or service.
H. Sasaki, H.L., Y.S., H.C., C.C.H., S.-J.J.L. and L.A.G.L. are employees of SVision LLC and Leica Microsystems, a machine vision company, and have developed Aivia, a commercial software platform that offers the 3D RCAN developed here.
Peer review information Nature Methods thanks Wei Ouyang, Lachlan Whitehead and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Rita Strack was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Figs. 1–21, Tables 1–8 and Note 1.
iSIM imaging at high SNR rapidly bleaches the sample.
RCAN enables super-resolution imaging over thousands of volumes.
Two-color volumetric image restoration with RCAN.
Axial views of 2× blurred input, and network predictions.
Axial views of 3× blurred input, and network predictions.
Axial views of 4× blurred input, and network predictions.
RCAN denoises and improves resonant confocal recordings of dividing cells.
RCAN denoises and improves resonant confocal recordings on additional nuclei.
Synthetic input, two-step RCAN prediction and ground truth for mitochondrial images based on ExM training.
Live-cell mitochondrial two-step RCAN prediction based on ExM training.
High-magnification axial view comparing deconvolved iSIM input and two-step RCAN prediction.
Live-cell microtubule two-step RCAN prediction based on ExM, axial views.
Live-cell microtubule two-step RCAN prediction based on ExM, lateral views.
Additional example of microtubule dynamics in another Jurkat T cell.
Cite this article
Chen, J., Sasaki, H., Lai, H. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat Methods 18, 678–687 (2021). https://doi.org/10.1038/s41592-021-01155-x