Abstract

Fluorescence microscopy is a key driver of discoveries in the life sciences, with observable phenomena being limited by the optics of the microscope, the chemistry of the fluorophores, and the maximum photon exposure tolerated by the sample. These limits necessitate trade-offs between imaging speed, spatial resolution, light exposure, and imaging depth. In this work we show how content-aware image restoration based on deep learning extends the range of biological phenomena observable by microscopy. We demonstrate on eight concrete examples how microscopy images can be restored even if 60-fold fewer photons are used during acquisition, how near isotropic resolution can be achieved with up to tenfold under-sampling along the axial direction, and how tubular and granular structures smaller than the diffraction limit can be resolved at 20-times-higher frame rates compared to state-of-the-art methods. All developed image restoration methods are freely available as open source software in Python, FIJI, and KNIME.


Data availability

Training and test data for all experiments presented can be found at https://publications.mpi-cbg.de/publications-sites/7207. The code for network training and prediction (in Python/TensorFlow) is publicly available at https://github.com/CSBDeep/CSBDeep. Furthermore, to make our restoration models readily available, we developed user-friendly FIJI plugins and KNIME workflows (Supplementary Figs. 29 and 30).
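The restoration networks are trained on pairs of registered low- and high-SNR acquisitions of the same sample, from which matching patches are cut for pixel-wise supervised learning. The sketch below illustrates this paired-patch preparation in plain NumPy; it is not the CSBDeep API itself, and the patch size, patch count, and Poisson noise model are illustrative assumptions:

```python
import numpy as np

def extract_paired_patches(low_snr, high_snr, patch_size=64, n_patches=16, seed=0):
    """Cut matching patches from a registered low/high-SNR image pair.

    Both images must have identical shape; each patch pair shares the
    same spatial location, so a network trained on (X, Y) learns a
    pixel-wise mapping from noisy to clean intensities.
    """
    assert low_snr.shape == high_snr.shape
    rng = np.random.default_rng(seed)
    h, w = low_snr.shape
    xs, ys = [], []
    for _ in range(n_patches):
        i = rng.integers(0, h - patch_size + 1)
        j = rng.integers(0, w - patch_size + 1)
        xs.append(low_snr[i:i + patch_size, j:j + patch_size])
        ys.append(high_snr[i:i + patch_size, j:j + patch_size])
    # Stack into (n_patches, patch_size, patch_size, 1) training tensors.
    X = np.stack(xs)[..., None].astype(np.float32)
    Y = np.stack(ys)[..., None].astype(np.float32)
    return X, Y

# Simulate a registered pair: a clean image and a photon-limited version.
rng = np.random.default_rng(1)
clean = rng.uniform(50, 500, size=(256, 256))          # high-SNR ground truth
noisy = rng.poisson(clean / 60.0).astype(np.float32)   # ~60-fold fewer photons
X, Y = extract_paired_patches(noisy, clean)
print(X.shape, Y.shape)  # (16, 64, 64, 1) (16, 64, 64, 1)
```

The actual patch generation and training code in the CSBDeep repository follows this same paired-data principle, with additional normalization and augmentation.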

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Huisken, J. et al. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007–1009 (2004).
  2. Tomer, R. et al. Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy. Nat. Methods 9, 755–763 (2012).
  3. Chen, B.-C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).
  4. Gustafsson, M. G. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 198, 82–87 (2000).
  5. Heintzmann, R. & Gustafsson, M. G. Subdiffraction resolution in continuous samples. Nat. Photon. 3, 362–364 (2009).
  6. Betzig, E. et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645 (2006).
  7. Rust, M. J., Bates, M. & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–795 (2006).
  8. Mortensen, K. I. et al. Optimized localization analysis for single-molecule tracking and super-resolution microscopy. Nat. Methods 7, 377–381 (2010).
  9. Icha, J. et al. Phototoxicity in live fluorescence microscopy, and how to avoid it. Bioessays 39, 1700003 (2017).
  10. Laissue, P. P. et al. Assessing phototoxicity in live fluorescence imaging. Nat. Methods 14, 657–661 (2017).
  11. Pawley, J. B. Fundamental limits in confocal microscopy. In Handbook of Biological Confocal Microscopy (ed. Pawley, J. B.) 20–42 (Springer, Boston, MA, 2006).
  12. Scherf, N. & Huisken, J. The smart and gentle microscope. Nat. Biotechnol. 33, 815–818 (2015).
  13. Müller, M. et al. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ. Nat. Commun. 7, 10980 (2016).
  14. Gustafsson, N. et al. Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations. Nat. Commun. 7, 12471 (2016).
  15. Dertinger, T. et al. Superresolution optical fluctuation imaging (SOFI). In Nano-Biotechnology for Biomedical and Diagnostic Research (eds Zahavy, E. et al.) 17–21 (Springer, Dordrecht, the Netherlands, 2012).
  16. Agarwal, K. & Macháň, R. Multiple signal classification algorithm for super-resolution fluorescence microscopy. Nat. Commun. 7, 13752 (2016).
  17. Richardson, W. H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55–69 (1972).
  18. Arigovindan, M. et al. High-resolution restoration of 3D structures from widefield images with extreme low signal-to-noise-ratio. Proc. Natl. Acad. Sci. USA 110, 17344–17349 (2013).
  19. Preibisch, S. et al. Efficient Bayesian-based multiview deconvolution. Nat. Methods 11, 645–648 (2014).
  20. Blasse, C. et al. PreMosa: extracting 2D surfaces from 3D microscopy mosaics. Bioinformatics 33, 2563–2569 (2017).
  21. Shihavuddin, A. et al. Smooth 2D manifold extraction from 3D image stack. Nat. Commun. 8, 15554 (2017).
  22. Buades, A., Coll, B. & Morel, J.-M. A non-local algorithm for image denoising. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (eds Schmid, C., Soatto, S. & Tomasi, C.) 60–65 (IEEE, New York, 2005).
  23. Dabov, K., Foi, A., Katkovnik, V. & Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16, 2080–2095 (2007).
  24. Morales-Navarrete, H. et al. A versatile pipeline for the multi-scale digital reconstruction and quantitative analysis of 3D tissue architecture. eLife 4, e11214 (2015).
  25. LeCun, Y. et al. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).
  26. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
  27. Beier, T. et al. Multicut brings automated neurite segmentation closer to human performance. Nat. Methods 14, 101–102 (2017).
  28. Caicedo, J. C. et al. Data-analysis strategies for image-based cell profiling. Nat. Methods 14, 849–863 (2017).
  29. Ounkomol, C. et al. Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat. Methods 15, 917–920 (2018).
  30. Christiansen, E. M. et al. In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792–803 (2018).
  31. Rivenson, Y. et al. Deep learning microscopy. Optica 4, 1437–1443 (2017).
  32. Nehme, E. et al. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458–464 (2018).
  33. Ouyang, W. et al. Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460–468 (2018).
  34. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) (eds Navab, N. et al.) 234–241 (Springer, Cham, 2015).
  35. Shettigar, N. et al. Hierarchies in light sensing and dynamic interactions between ocular and extraocular sensory networks in a flatworm. Sci. Adv. 3, e1603025 (2017).
  36. Mao, X.-J., Shen, C. & Yang, Y.-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems (NIPS) Vol. 29 (eds Lee, D. D. et al.) 2802–2810 (Curran Associates, Red Hook, NY, 2016).
  37. Wang, Z. et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).
  38. Ulman, V. et al. An objective comparison of cell-tracking algorithms. Nat. Methods 14, 1141–1152 (2017).
  39. Aigouy, B. et al. Cell flow reorients the axis of planar polarity in the wing epithelium of Drosophila. Cell 142, 773–786 (2010).
  40. Etournay, R. et al. Interplay of cell dynamics and epithelial tension during morphogenesis of the Drosophila pupal wing. eLife 4, e07090 (2015).
  41. Etournay, R. et al. TissueMiner: a multiscale analysis toolkit to quantify how cellular processes create tissue dynamics. eLife 5, e14334 (2016).
  42. Chhetri, R. K. et al. Whole-animal functional and developmental imaging with isotropic spatial resolution. Nat. Methods 12, 1171–1178 (2015).
  43. Weigert, M., Royer, L., Jug, F. & Myers, G. Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2017 (eds Descoteaux, M. et al.) 126–134 (Springer, Cham, 2017).
  44. Heinrich, L., Bogovic, J. A. & Saalfeld, S. Deep learning for isotropic super-resolution from non-isotropic 3D electron microscopy. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2017 (eds Descoteaux, M. et al.) 135–143 (Springer, Cham, 2017).
  45. Royer, L. A. et al. Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nat. Biotechnol. 34, 1267–1278 (2016).
  46. Icha, J. et al. Independent modes of ganglion cell translocation ensure correct lamination of the zebrafish retina. J. Cell Biol. 215, 259–275 (2016).
  47. Sommer, C. et al. Ilastik: interactive learning and segmentation toolkit. In IEEE International Symposium on Biomedical Imaging: From Nano to Macro 230–233 (IEEE, New York, 2011).
  48. Culley, S. et al. Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266 (2018).
  49. Sui, L. et al. Differential lateral and basal tension drives epithelial folding through two distinct mechanisms. Nat. Commun. 9, 4620 (2018).
  50. Chollet, F. et al. Keras https://keras.io (2015).
  51. Abadi, M. et al. TensorFlow: a system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (eds Keeton, K. & Roscoe, T.) 265–283 (2016).
  52. Boothe, T. et al. A tunable refractive index matching medium for live imaging cells, tissues and model organisms. eLife 6, e27240 (2017).
  53. Tomasi, C. & Manduchi, R. Bilateral filtering for gray and color images. In Sixth International Conference on Computer Vision 839–846 (IEEE, New York, 1998).
  54. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004).
  55. Maggioni, M. et al. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 22, 119–133 (2013).
  56. Sarrazin, A. F., Peel, A. D. & Averof, M. A segmentation clock with two-segment periodicity in insects. Science 336, 338–341 (2012).
  57. Brown, S. J. et al. The red flour beetle, Tribolium castaneum (Coleoptera): a model for studies of development and pest biology. Cold Spring Harb. Protoc. https://doi.org/10.1101/pdb.emo126 (2009).
  58. Jones, E. et al. SciPy: Open Source Scientific Tools for Python http://www.scipy.org (2001).
  59. Maška, M. et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics 30, 1609–1617 (2014).
  60. Classen, A.-K., Aigouy, B., Giangrande, A. & Eaton, S. Imaging Drosophila pupal wing morphogenesis. Methods Mol. Biol. 420, 265–275 (2008).
  61. Li, K. et al. Optimal surface segmentation in volumetric images—a graph-theoretic approach. IEEE Trans. Pattern Anal. Mach. Intell. 28, 119–134 (2006).
  62. Wu, X. & Chen, D. Z. Optimal net surface problems with applications. In International Colloquium on Automata, Languages, and Programming (Springer, 2002).
  63. Arganda-Carreras, I. et al. Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424–2426 (2017).
  64. Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).
  65. Aigouy, B., Umetsu, D. & Eaton, S. Segmentation and quantitative analysis of epithelial tissues. In Drosophila: Methods and Protocols (ed. Dahmann, C.) 227–239 (Humana Press, New York, 2016).
  66. Ke, M.-T., Fujimoto, S. & Imai, T. SeeDB: a simple and morphology-preserving optical clearing agent for neuronal circuit reconstruction. Nat. Neurosci. 16, 1154–1161 (2013).
  67. Ivanova, A. et al. Age-dependent labeling and imaging of insulin secretory granules. Diabetes 62, 3687–3696 (2013).
  68. Mchedlishvili, N. et al. Kinetochores accelerate centrosome separation to ensure faithful chromosome segregation. J. Cell Sci. 125, 906–918 (2012).
  69. Lakshminarayanan, B., Pritzel, A. & Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 6402–6413 (Curran Associates, Red Hook, NY, 2017).
  70. Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On calibration of modern neural networks. In Proc. 34th International Conference on Machine Learning (ICML) (eds Precup, D. & Teh, Y. W.) 1321–1330 (PMLR, Cambridge, MA, 2017).


Acknowledgements

The authors thank P. Keller (Janelia) who provided Drosophila data. We thank S. Eaton (MPI-CBG), F. Gruber and R. Piscitello for sharing their expertise in fly imaging and providing fly lines. We thank A. Sönmetz for cell culture work. We thank M. Matejcic (MPI-CBG) for generating and sharing the LAP2b transgenic line Tg(bactin:eGFP-LAP2b). We thank B. Lombardot from the Scientific Computing Facility (MPI-CBG) for technical support. We thank the following Services and Facilities of the MPI-CBG for their support: Computer Department, Light Microscopy Facility and Fish Facility. This work was supported by the German Federal Ministry of Research and Education (BMBF) under the codes 031L0102 (de.NBI) and 031L0044 (Sysbio II) and the Deutsche Forschungsgemeinschaft (DFG) under the code JU 3110/1-1. M.S. was supported by the German Center for Diabetes Research (DZD e.V.). T.B. was supported by an ELBE postdoctoral fellowship and an Add-on Fellowship for Interdisciplinary Life Sciences awarded by the Joachim Herz Stiftung. R.H. and S.C. were supported by the following grants: UK BBSRC (grant nos. BB/M022374/1, BB/P027431/1, and BB/R000697/1), UK MRC (grant no. MR/K015826/1) and Wellcome Trust (grant no. 203276/Z/16/Z).

Author information

Affiliations

  1. Center for Systems Biology Dresden, Dresden, Germany

    • Martin Weigert
    • , Uwe Schmidt
    • , Tobias Boothe
    • , Alexandr Dibrov
    • , Benjamin Wilhelm
    • , Deborah Schmidt
    • , Coleman Broaddus
    • , Mauricio Rocha-Martins
    • , Loic Royer
    • , Florian Jug
    •  & Eugene W. Myers
  2. Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany

    • Martin Weigert
    • , Uwe Schmidt
    • , Tobias Boothe
    • , Alexandr Dibrov
    • , Akanksha Jain
    • , Coleman Broaddus
    • , Mauricio Rocha-Martins
    • , Fabián Segovia-Miranda
    • , Caren Norden
    • , Marino Zerial
    • , Michele Solimena
    • , Jochen Rink
    • , Pavel Tomancak
    • , Loic Royer
    • , Florian Jug
    •  & Eugene W. Myers
  3. Molecular Diabetology, University Hospital and Faculty of Medicine Carl Gustav Carus, TU Dresden, Dresden, Germany

    • Andreas Müller
    •  & Michele Solimena
  4. Paul Langerhans Institute Dresden of the Helmholtz Center Munich at the University Hospital Carl Gustav Carus and Faculty of Medicine of the TU Dresden, Dresden, Germany

    • Andreas Müller
    •  & Michele Solimena
  5. German Center for Diabetes Research (DZD e.V.), Neuherberg, Germany

    • Andreas Müller
    •  & Michele Solimena
  6. University of Konstanz, Konstanz, Germany

    • Benjamin Wilhelm
  7. MRC Laboratory for Molecular Cell Biology, University College London, London, UK

    • Siân Culley
    •  & Ricardo Henriques
  8. The Francis Crick Institute, London, UK

    • Siân Culley
    •  & Ricardo Henriques
  9. CZ Biohub, San Francisco, CA, USA

    • Loic Royer
  10. Department of Computer Science, Technical University Dresden, Dresden, Germany

    • Eugene W. Myers


Contributions

F.J. and E.W.M. shared last authorship. M.W. and L.R. initiated the research. M.W. and U.S. designed and implemented the training and validation methods. U.S., M.W., and F.J. designed and implemented the uncertainty readouts. T.B., A.M., A.D., S.C., F.S.M., R.H., M.R.M., and A.J. collected experimental data. A.D., C.B., and F.J. performed cell segmentation analysis. T.B. performed analysis on flatworm data. U.S. and M.W. designed and developed the Python package. F.J., B.W., and D.S. designed and developed the FIJI and KNIME integration. E.W.M. supervised the project. F.J., M.W., P.T., L.R., U.S., and E.W.M. wrote the manuscript, with input from all authors.

Competing interests

The authors declare no competing interests.

Corresponding authors

Correspondence to Martin Weigert or Loic Royer or Florian Jug.

Supplementary information

  1. Supplementary Text and Figures

    Supplementary Figures 1–30, Supplementary Tables 1–3 and Supplementary Notes 1–5

  2. Reporting Summary

  3. Supplementary Video 1

Challenges in time-lapse imaging of the flatworm Schmidtea mediterranea. Image stacks of RedDot1-labeled, anesthetized specimens were acquired every 2 min with a spinning disk confocal microscope (NA = 1.05) at high- and low-SNR illumination conditions (10% laser, 30-ms exposure per plane vs. 0.5% laser, 10 ms per plane). Whereas in the high-SNR case the specimen shows illumination-induced twitching, the image quality in the low-SNR case is insufficient for further analysis. Network restoration enabled us to recover high-SNR images from images acquired at low-SNR conditions without twitching of the specimen, thus providing a practical framework for live-cell imaging of Schmidtea mediterranea.

  4. Supplementary Video 2

    Restoration results of low-SNR acquisitions of Schmidtea mediterranea and comparison to ground truth. Shown are a 3D rendering of the results on a multi-tiled acquisition (8,192 × 3,072 × 100 pixels) and the comparison with ground truth.

  5. Supplementary Video 3

Restoration of low-SNR volumetric time lapses of developing Tribolium castaneum embryos (EFA::nGFP-labeled nuclei). Acquisition was performed on a Zeiss LSM 710 NLO multiphoton laser-scanning microscope with a time step of 8 min and a stack size of 760 × 760 × 100 pixels per time point. Shown are maximum-intensity projections and single slices of the raw stacks, the network prediction and the high-SNR ground truth.

  6. Supplementary Video 4

    Joint surface projection and denoising of developing Drosophila melanogaster wing epithelia. 3D image stacks of the developing wing of a membrane-labeled (Ecad::GFP) fly pupa were acquired with a spinning disk confocal (63×, NA = 1.3) microscope. We show the projected epithelial surface obtained by a conventional method (PreMosa, ref. 1), the restoration network, and the projected ground truth. We applied a random-forest-based cell segmentation pipeline and show segmentation/tracking results, demonstrating vastly improved accuracy for the restoration when compared to the conventionally processed raw stacks.

  7. Supplementary Video 5

Isotropic restoration of anisotropic time-lapse acquisitions of hisGFP-tagged developing Drosophila melanogaster embryos. We used the original, preprocessed data set of ref. 2, acquired with a light-sheet microscope (NA = 1.1) with fivefold undersampling along the axial direction (lateral/axial pixel size: 0.39 μm/1.96 μm). The video shows different axial (xz) regions of a single time point from both the original (input) stacks and the isotropic restoration.

  8. Supplementary Video 6

Isotropic restoration of anisotropic dual-color acquisitions of the developing Danio rerio retina. The data were acquired with a spinning disk confocal microscope (Olympus 60×, NA = 1.1) and exhibit a tenfold axial anisotropy (lateral/axial pixel size: 0.2 μm/2.0 μm); labeled structures are nuclei (DRAQ5, magenta) and the nuclear envelope (eGFP-LAP2b, green). The video shows a rendering of the dual-color input stack and its isotropic reconstruction.

  9. Supplementary Video 7

    Enhancement of diffraction-limited structures in widefield images of rat INS-1 (beta) cells. The video shows time lapses of several INS-1 cells, acquired with the widefield mode of a DeltaVision OMX microscope (63×, NA = 1.43). Labeled are secretory granules (pEG-hIns-SNAP, magenta) and microtubules (SiR-tubulin, green). Next to the time lapse of the widefield images we show the output of the reconstruction networks.

  10. Supplementary Video 8

Enhancement of diffraction-limited widefield images of GFP-labeled microtubules in HeLa cells and comparison with SRRF (super-resolution radial fluctuations [3]). Images were acquired with a Zeiss Elyra PS.1 microscope in TIRF mode (100×, NA = 1.46). The video shows the widefield input sequence, the network restoration and the corresponding SRRF images. Note that the time resolution of the SRRF sequence is 20 times lower than that of the network restoration, because 20 times more raw frames must be processed to achieve comparable restoration quality.

  11. Supplementary Video 9

    Visualization of network predictions by sampling from the predicted distribution. The video shows for two examples (surface projection of fly wing tissue, and microtubule structure restoration in INS-1 cells) that drawing samples from the per-pixel predicted distribution is beneficial for identifying challenging image regions. For each example, we show successively the raw input, the per-pixel mean of the predicted distribution and random samples from the per-pixel distributions.
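The per-pixel predicted distributions described above can be sampled directly to visualize restoration uncertainty: regions where samples disagree are regions the network is unsure about. A minimal sketch of this idea, assuming the network outputs per-pixel mean and scale maps (a Laplace parameterization is assumed here purely for illustration; the maps and shapes below are synthetic):

```python
import numpy as np

def sample_restorations(mean, scale, n_samples=5, seed=0):
    """Draw plausible restorations from a per-pixel predicted distribution.

    `mean` and `scale` stand in for the per-pixel parameter maps produced
    by a probabilistic restoration network; a Laplace distribution is
    assumed for illustration.
    """
    rng = np.random.default_rng(seed)
    samples = rng.laplace(loc=mean, scale=scale, size=(n_samples,) + mean.shape)
    # Per-pixel spread across samples highlights uncertain image regions.
    spread = samples.std(axis=0)
    return samples, spread

# Synthetic parameter maps: right half has tenfold larger predicted scale.
mean = np.full((32, 32), 100.0)
scale = np.ones((32, 32))
scale[:, 16:] = 10.0
samples, spread = sample_restorations(mean, scale)
print(samples.shape)  # (5, 32, 32)
# The sample spread localizes the uncertain (right) half of the image:
print(spread[:, :16].mean() < spread[:, 16:].mean())  # True
```

Averaging such samples recovers the point estimate shown in the video, while their variability flags challenging image regions for closer inspection.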

About this article

DOI

https://doi.org/10.1038/s41592-018-0216-7