Deep learning enables cross-modality super-resolution in fluorescence microscopy

Abstract

We present deep-learning-enabled super-resolution across different fluorescence microscopy modalities. This data-driven approach does not require numerical modeling of the imaging process or estimation of a point spread function, and is based on training a generative adversarial network (GAN) to transform diffraction-limited input images into super-resolved ones. Using this framework, we improve the resolution of wide-field images acquired with low-numerical-aperture objectives, matching the resolution of images acquired with high-numerical-aperture objectives. We also demonstrate cross-modality super-resolution, transforming confocal microscopy images to match the resolution acquired with a stimulated emission depletion (STED) microscope. We further demonstrate that total internal reflection fluorescence (TIRF) microscopy images of subcellular structures within cells and tissues can be transformed to match the results obtained with a TIRF-based structured illumination microscope. The deep network rapidly outputs these super-resolved images, without any iterations or parameter search, and could serve to democratize super-resolution imaging.
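The GAN training described above pits a generator (which maps a diffraction-limited image to a super-resolved one) against a discriminator (which scores whether an image looks like a true high-resolution acquisition). The sketch below illustrates the typical loss structure of such a setup in plain NumPy; the pixel-wise MSE term, the binary cross-entropy adversarial term, and the weighting `adv_weight` are generic GAN conventions used for illustration, not the paper's exact loss formulation.

```python
import numpy as np

def pixel_loss(generated, target):
    """Mean-squared error between network output and the
    ground-truth high-resolution image (data-fidelity term)."""
    return float(np.mean((generated - target) ** 2))

def bce(prob, label):
    """Binary cross-entropy for a discriminator score in (0, 1)."""
    eps = 1e-12
    return float(-(label * np.log(prob + eps)
                   + (1 - label) * np.log(1 - prob + eps)))

def generator_loss(generated, target, d_score_fake, adv_weight=0.01):
    """Total generator loss: data fidelity plus an adversarial term
    that rewards fooling the discriminator (target label = 1)."""
    return pixel_loss(generated, target) + adv_weight * bce(d_score_fake, 1.0)

def discriminator_loss(d_score_real, d_score_fake):
    """Discriminator loss: real images pushed toward 1, generated toward 0."""
    return bce(d_score_real, 1.0) + bce(d_score_fake, 0.0)

# Toy 4x4 arrays standing in for image patches.
rng = np.random.default_rng(0)
target = rng.random((4, 4))
generated = target + 0.1  # an imperfect generator output

g_loss = generator_loss(generated, target, d_score_fake=0.3)
d_loss = discriminator_loss(d_score_real=0.9, d_score_fake=0.3)
print(round(g_loss, 4), round(d_loss, 4))  # prints: 0.022 0.462
```

In practice both terms are minimized jointly by alternating gradient updates on the generator and discriminator; the small adversarial weight keeps the output anchored to the measured data while the discriminator sharpens fine structure.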


Fig. 1: Deep-learning-based super-resolved images of bovine pulmonary artery endothelial cells (BPAECs).
Fig. 2: Comparison of deep learning results against Lucy–Richardson (LR) and non-negative least square (NNLS) image deconvolution algorithms.
Fig. 3: Image resolution improvement beyond the diffraction limit: from confocal microscopy to STED.
Fig. 4: PSF characterization, before and after the network, and its comparison to STED.
Fig. 5: Deep-learning-enabled cross-modality image transformation from confocal to STED.
Fig. 6: Deep-learning-enabled cross-modality image transformation from TIRF to TIRF-SIM.

Data availability

All data supporting the findings of this work are available within the manuscript and Supplementary Information files. Raw images can be requested from the corresponding author. The deep learning models reported in this work were built with standard, publicly available TensorFlow libraries and scripts. The instruction manual for our Fiji/ImageJ plugin and trained models (available online as Supplementary Software 1–7) is provided as a Supplementary Protocol.
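A trained model has a fixed input size, while microscopy fields of view vary; a common way to apply such a network to arbitrarily sized images (shown here as an illustrative assumption, not the authors' documented pipeline) is to tile the input into patches, run the model on each, and reassemble the outputs. A minimal non-overlapping sketch, with a placeholder standing in for the trained network:

```python
import numpy as np

def tile(image, patch):
    """Split a 2-D image into non-overlapping patch x patch tiles,
    in row-major order. Assumes dimensions are multiples of `patch`."""
    h, w = image.shape
    return [image[r:r + patch, c:c + patch]
            for r in range(0, h, patch)
            for c in range(0, w, patch)]

def stitch(tiles, shape, patch):
    """Reassemble row-major tiles into an image of the given shape."""
    out = np.zeros(shape, dtype=tiles[0].dtype)
    h, w = shape
    idx = 0
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            out[r:r + patch, c:c + patch] = tiles[idx]
            idx += 1
    return out

def identity_model(patch_img):
    """Hypothetical stand-in for the trained network's inference call."""
    return patch_img

img = np.arange(64, dtype=float).reshape(8, 8)
tiles = [identity_model(t) for t in tile(img, 4)]
restored = stitch(tiles, img.shape, 4)
assert np.array_equal(restored, img)
```

Real pipelines typically use overlapping tiles with blending at the seams to suppress patch-boundary artifacts; the non-overlapping version above only conveys the tiling idea.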

Acknowledgements

The Ozcan Research Group at UCLA acknowledges the support of NSF Engineering Research Center (ERC, PATHS-UP), the Army Research Office (ARO; W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF INSPIRE Award, NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the National Institutes of Health (NIH, R21EB023115), the Howard Hughes Medical Institute (HHMI), Vodafone Americas Foundation, the Mary Kay Foundation, and Steven & Alexandra Cohen Foundation. Yair Rivenson is partially supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No H2020-MSCA-IF-2014-659595 (MCMQCT). Confocal and STED laser scanning microscopy was performed at the California NanoSystems Institute (CNSI) Advanced Light Microscopy/Spectroscopy Shared Resource Facility at UCLA. We also thank the Advanced Imaging Center (AIC) at Janelia Research Campus for access to their TIRF-SIM microscope. The AIC is jointly supported by the Howard Hughes Medical Institute and the Gordon and Betty Moore Foundation. Finally, we thank H. Chang (Purdue University, West Lafayette, IN, USA) for sharing the CLC-mEmerald fly strain.

Author information

Contributions

H.W., Y.R., and A.O. conceived the research. H.W., Y.R., L.B., and C.K. contributed to the experiments. H.W., Y.J., Z.W., and H.G. processed the data. H.W. and Y.J. prepared the figures. H.W., Y.R., and A.O. prepared the manuscript, and all the authors contributed to the manuscript. H.W., Y.R., and R.G. developed the Fiji/ImageJ plugin. A.O. supervised the research.

Corresponding author

Correspondence to Aydogan Ozcan.

Ethics declarations

Competing interests

A.O., Y.R., and H.W. have a pending patent application on the contents of the presented results.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary text and figures

Supplementary Notes 1–10, Supplementary Figures 1–14 and Supplementary Table 1

Reporting Summary

Supplementary Protocol

SISR-Fluorescent Fiji/ImageJ plugin: User Manual

Supplementary Video 1

Deep-learning-enabled cross-modality image transformation from TIRF to TIRF-SIM. The video shows a gene-edited SUM159 cell expressing AP2-eGFP, revealing the temporal dynamics of endocytic protein structures within the cell. The highlighted frames correspond to subpanels of Fig. 6 (main text). Experiments were repeated with >1,000 images/frames, achieving similar results.

Supplementary Software 1

SISR-Fluorescent Fiji/ImageJ plugin

Supplementary Software 2

Pre-trained model: wide-field DAPI

Supplementary Software 3

Pre-trained model: wide-field FITC

Supplementary Software 4

Pre-trained model: wide-field TxRed

Supplementary Software 5

Pre-trained model: confocal-to-STED (nanobeads)

Supplementary Software 6

Pre-trained model: confocal-to-STED (nuclei)

Supplementary Software 7

Pre-trained model: TIRF-SIM

About this article

Cite this article

Wang, H., Rivenson, Y., Jin, Y. et al. Deep learning enables cross-modality super-resolution in fluorescence microscopy. Nat Methods 16, 103–110 (2019). https://doi.org/10.1038/s41592-018-0239-0
