
Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes

Abstract

We demonstrate residual channel attention networks (RCAN) for the restoration and enhancement of volumetric time-lapse (four-dimensional) fluorescence microscopy data. First, we modify RCAN to handle image volumes, showing that our network enables denoising competitive with three other state-of-the-art neural networks. We use RCAN to restore noisy four-dimensional super-resolution data, enabling image capture over tens of thousands of images (thousands of volumes) without apparent photobleaching. Second, using simulations we show that RCAN enables resolution enhancement equivalent to, or better than, other networks. Third, we exploit RCAN for denoising and resolution improvement in confocal microscopy, enabling ~2.5-fold lateral resolution enhancement using stimulated emission depletion microscopy ground truth. Fourth, we develop methods to improve spatial resolution in structured illumination microscopy using expansion microscopy data as ground truth, achieving improvements of ~1.9-fold laterally and ~3.6-fold axially. Finally, we characterize the limits of denoising and resolution enhancement, suggesting practical benchmarks for evaluation and further enhancement of network performance.
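The core unit of this approach is the residual channel attention block, generalized here from 2D to 3D convolutions so that whole image volumes can be processed. The following Python sketch illustrates that idea in Keras/TensorFlow; it is not the authors' implementation (see Code availability), and the block count, filter width, reduction ratio and the omission of residual groups and upsampling stages are simplifying assumptions.

# Minimal sketch of a 3D residual channel attention network in Keras/TensorFlow.
# Illustration only, not the authors' 3D-RCAN implementation: the number of
# blocks, filter width and reduction ratio are assumptions, and the residual
# groups of the full RCAN architecture are omitted.
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention_3d(x, reduction=8):
    """Squeeze-and-excitation style attention over channels of a (B, Z, Y, X, C) tensor."""
    channels = x.shape[-1]
    w = layers.GlobalAveragePooling3D()(x)                  # squeeze: average over Z, Y, X
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)     # per-channel weights in [0, 1]
    w = layers.Reshape((1, 1, 1, channels))(w)
    return layers.Multiply()([x, w])                        # rescale feature maps channel-wise

def residual_channel_attention_block_3d(x, filters=32):
    """Conv -> ReLU -> Conv -> channel attention, wrapped in a short residual connection."""
    y = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv3D(filters, 3, padding="same")(y)
    y = channel_attention_3d(y)
    return layers.Add()([x, y])

def build_rcan_3d(num_blocks=5, filters=32):
    """Map a noisy or blurred single-channel volume to a restored volume of the same size."""
    inputs = tf.keras.Input(shape=(None, None, None, 1))
    x = layers.Conv3D(filters, 3, padding="same")(inputs)   # shallow feature extraction
    skip = x
    for _ in range(num_blocks):
        x = residual_channel_attention_block_3d(x, filters)
    x = layers.Conv3D(filters, 3, padding="same")(x)
    x = layers.Add()([x, skip])                             # long residual connection
    outputs = layers.Conv3D(1, 3, padding="same")(x)        # reconstruct the restored volume
    return tf.keras.Model(inputs, outputs)

model = build_rcan_3d()
model.compile(optimizer="adam", loss="mae")                 # L1-type loss; other losses are possible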


Fig. 1: Residual channel attention networks denoise super-resolution data.
Fig. 2: RCAN resolution enhancement assayed with simulated spherical phantoms.
Fig. 3: Confocal-to-STED microscopy restoration with RCAN.
Fig. 4: Using ExM to improve spatial resolution in fixed and live iSIM.


Data availability

Training and test datasets for organelle denoising, synthetic phantoms, confocal-to-STED and iSIM-to-ExM predictions are publicly accessible at the Zenodo repository (https://zenodo.org/record/4624364#.YF3jsa9Kibg). Source data are provided with this paper.

Code availability

The code and sample data used in this study are available at https://github.com/AiviaCommunity/3D-RCAN. An installation guide, data and instructions for use are also available from the same webpage.
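The paper benchmarks restored volumes against ground truth using structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) (see the source data for Figs. 1 and 2). A minimal sketch of such a comparison follows; it is not part of the 3D-RCAN repository, and the scikit-image/tifffile stack, file names and percentile normalization are assumptions.

# Illustrative evaluation sketch, not from the 3D-RCAN repository: comparing a
# restored volume against ground truth with SSIM and PSNR. The file names and
# percentile normalization here are assumptions.
import numpy as np
import tifffile
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def normalize(volume, low=0.1, high=99.9):
    """Percentile normalization to [0, 1] so metrics are comparable across stacks."""
    lo, hi = np.percentile(volume, [low, high])
    return np.clip((volume - lo) / (hi - lo + 1e-8), 0.0, 1.0)

# Hypothetical file names; substitute your own prediction and ground-truth TIFF stacks.
pred = normalize(tifffile.imread("rcan_prediction.tif").astype(np.float32))
gt = normalize(tifffile.imread("ground_truth.tif").astype(np.float32))

ssim = structural_similarity(gt, pred, data_range=1.0)      # N-dimensional SSIM over the volume
psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
print(f"SSIM: {ssim:.3f}  PSNR: {psnr:.2f} dB")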


Acknowledgements

J.C., Y.W. and H. Shroff thank the Marine Biological Laboratory at Woods Hole for providing the Deep Learning for Microscopy Image Analysis Course, as well as F. Jug, J. Folkesson and P. La Riviere for providing superb instruction. We thank W. Bement for the gift of the EMTB-3XEGFP plasmid (also available as Addgene plasmid no. 26741), J. Taraska for the LAMP1-GFP plasmid, G. Patterson for the GalT-GFP plasmid and P. Chandris for the pShooter pEF-Myc-mito-GFP plasmid. We also thank P. La Riviere and H. Eden for their careful reading of, and comments on, this work. This research was supported by the intramural research programs of the National Institute of Biomedical Imaging and Bioengineering and the National Heart, Lung, and Blood Institute within the National Institutes of Health, and by SBIR cooperative agreements of the National Institute of General Medical Sciences (nos. 1U44GM136091-01 and 6U44GM136091-02). A.U. and I.R.-S. acknowledge grant support from NIH (no. R01 GM131054) and NSF (no. PHY 1607645). We thank the Office of Data Science Strategy, NIH, for providing a seed grant enabling us to train deep learning models using cloud-based computational resources. The NIH, its staff and officers do not recommend or endorse any company, product or service.

Author information

Authors and Affiliations

Authors

Contributions

S.-J.J.L., L.A.G.L. and H. Shroff conceived the project. J.C., Y.S., A.Z., C.A.C., I.R.-S., A.U. and H. Shroff designed the experiments. J.C., Y.S., A.Z., C.A.C. and I.R.-S. performed the experiments. H. Sasaki, H.L., H.C., C.C.H., S.-J.J.L. and L.A.G.L. developed and tested 3D RCAN. J.C. and J.L. adapted and tested CARE, SRResNet and ESRGAN. X.L. wrote the software for hardware control. J.C., H. Sasaki, H.L., Y.S., J.L., Y.W., X.L., M.G. and H. Shroff conceived, developed and tested the ExM pipeline. J.C., J.L., Y.W., M.G., S.N. and H.S. performed and analyzed simulations. All authors analyzed data. H. Shroff wrote the paper with input from all authors. A.U., S.-J.J.L., L.A.G.L. and H. Shroff supervised research.

Corresponding authors

Correspondence to Jiji Chen or Hideki Sasaki.

Ethics declarations

Competing interests

H. Sasaki, H.L., Y.S., H.C., C.C.H., S.-J.J.L. and L.A.G.L. are employees of SVision LLC and Leica Microsystems, a machine vision company, and have developed Aivia, a commercial software platform that offers the 3D RCAN described here.

Additional information

Peer review information Nature Methods thanks Wei Ouyang, Lachlan Whitehead and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Rita Strack was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–21, Tables 1–8 and Note 1.

Reporting Summary

Supplementary Video 1

iSIM imaging at high SNR rapidly bleaches the sample.

Supplementary Video 2

RCAN enables super-resolution imaging over thousands of volumes.

Supplementary Video 3

Two-color volumetric image restoration with RCAN.

Supplementary Video 4

Axial views of 2× blurred input, and network predictions.

Supplementary Video 5

Axial views of 3× blurred input, and network predictions.

Supplementary Video 6

Axial views of 4× blurred input, and network predictions.

Supplementary Video 7

RCAN denoises and improves resonant confocal recordings of dividing cells.

Supplementary Video 8

RCAN denoises and improves resonant confocal recordings on additional nuclei.

Supplementary Video 9

Synthetic input, two-step RCAN prediction and ground truth for mitochondrial images based on ExM training.

Supplementary Video 10

Live-cell mitochondrial two-step RCAN prediction based on ExM training.

Supplementary Video 11

High-magnification axial view comparing deconvolved iSIM input and two-step RCAN prediction.

Supplementary Video 12

Live-cell microtubule two-step RCAN prediction based on ExM, axial views.

Supplementary Video 13

Live-cell microtubule two-step RCAN prediction based on ExM, lateral views.

Supplementary Video 14

Additional example of microtubule dynamics in another Jurkat T cell.

Source data

Source Data Fig. 1

Statistical source data for SSIM, PSNR, resolution and intensity plots (b,d,e).

Source Data Fig. 2

Statistical source data for SSIM and PSNR plots (d).

Source Data Fig. 3

Statistical source data for resolution plot (d).

Source Data Fig. 4

Statistical source data for resolution plot (c).


About this article


Cite this article

Chen, J., Sasaki, H., Lai, H. et al. Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes. Nat Methods 18, 678–687 (2021). https://doi.org/10.1038/s41592-021-01155-x

