
Content-aware image restoration: pushing the limits of fluorescence microscopy


Fluorescence microscopy is a key driver of discoveries in the life sciences, with observable phenomena being limited by the optics of the microscope, the chemistry of the fluorophores, and the maximum photon exposure tolerated by the sample. These limits necessitate trade-offs between imaging speed, spatial resolution, light exposure, and imaging depth. In this work we show how content-aware image restoration based on deep learning extends the range of biological phenomena observable by microscopy. We demonstrate on eight concrete examples how microscopy images can be restored even if 60-fold fewer photons are used during acquisition, how near isotropic resolution can be achieved with up to tenfold under-sampling along the axial direction, and how tubular and granular structures smaller than the diffraction limit can be resolved at 20-times-higher frame rates compared to state-of-the-art methods. All developed image restoration methods are freely available as open source software in Python, FIJI, and KNIME.
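The training-data side of this approach can be illustrated with a toy sketch: matched low/high-SNR pairs can be simulated by imposing shot noise at a reduced photon budget on a high-SNR reference. The photon scale and synthetic image below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_low_snr(high_snr, photon_scale=1 / 60):
    """Simulate a low-exposure acquisition from a high-SNR reference by
    applying Poisson (shot) noise at a ~60-fold reduced photon budget.
    `photon_scale` is an illustrative assumption, not a measured value."""
    expected_photons = high_snr * photon_scale
    noisy = rng.poisson(expected_photons).astype(np.float64)
    return noisy / photon_scale  # rescale back to the reference intensity range

# Hypothetical high-SNR "ground truth": a smooth synthetic spot
y, x = np.mgrid[0:64, 0:64]
gt = 1000.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)

low = simulate_low_snr(gt)  # one (input, target) training pair: (low, gt)
```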

Fig. 1: CARE.
Fig. 2: Joint surface projection and denoising.
Fig. 3: Isotropic restoration of 3D volumes.
Fig. 4: Resolving sub-diffraction structures at high frame rates.
Fig. 5: Reliability readouts for CARE.

Data availability

Training and test data for all presented experiments are publicly available, as is the code for network training and prediction (in Python/TensorFlow). Furthermore, to make our restoration models readily available, we developed user-friendly FIJI plugins and KNIME workflows (Supplementary Figs. 29 and 30).
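Before prediction, restoration pipelines of this kind typically normalize raw intensities into a fixed range. A minimal percentile-based sketch follows; the percentile choices are illustrative assumptions, not prescribed values.

```python
import numpy as np

def normalize_percentile(img, p_low=2.0, p_high=99.8, eps=1e-10):
    """Map intensities so the p_low/p_high percentiles land near 0/1.
    Robust to hot pixels, unlike min/max normalization."""
    lo, hi = np.percentile(img, [p_low, p_high])
    return (img - lo) / (hi - lo + eps)

img = np.linspace(0.0, 1000.0, 4096).reshape(64, 64)
norm = normalize_percentile(img)
```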


  1. Huisken, J. et al. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007–1009 (2004).

  2. Tomer, R. et al. Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy. Nat. Methods 9, 755–763 (2012).

  3. Chen, B.-C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).

  4. Gustafsson, M. G. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 198, 82–87 (2000).

  5. Heintzmann, R. & Gustafsson, M. G. Subdiffraction resolution in continuous samples. Nat. Photon. 3, 362–364 (2009).

  6. Betzig, E. et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science 313, 1642–1645 (2006).

  7. Rust, M. J., Bates, M. & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nat. Methods 3, 793–795 (2006).

  8. Mortensen, K. I. et al. Optimized localization analysis for single-molecule tracking and super-resolution microscopy. Nat. Methods 7, 377–381 (2010).

  9. Icha, J. et al. Phototoxicity in live fluorescence microscopy, and how to avoid it. Bioessays 39, 1700003 (2017).

  10. Laissue, P. P. et al. Assessing phototoxicity in live fluorescence imaging. Nat. Methods 14, 657–661 (2017).

  11. Pawley, J. B. Fundamental limits in confocal microscopy. In Handbook of Biological Confocal Microscopy (ed Pawley, J. B.) 20–42 (Springer, Boston, MA, 2006).

  12. Scherf, N. & Huisken, J. The smart and gentle microscope. Nat. Biotechnol. 33, 815–818 (2015).

  13. Müller, M. et al. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ. Nat. Commun. 7, 10980 (2016).

  14. Gustafsson, N. et al. Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations. Nat. Commun. 7, 12471 (2016).

  15. Dertinger, T. et al. Superresolution optical fluctuation imaging (SOFI). In Nano-Biotechnology for Biomedical and Diagnostic Research (eds Zahavy, E. et al.) 17–21 (Springer, Dordrecht, the Netherlands, 2012).

  16. Agarwal, K. & Macháň, R. Multiple signal classification algorithm for super-resolution fluorescence microscopy. Nat. Commun. 7, 13752 (2016).

  17. Richardson, W. H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55–69 (1972).

  18. Arigovindan, M. et al. High-resolution restoration of 3D structures from widefield images with extreme low signal-to-noise-ratio. Proc. Natl. Acad. Sci. USA 110, 17344–17349 (2013).

  19. Preibisch, S. et al. Efficient Bayesian-based multiview deconvolution. Nat. Methods 11, 645–648 (2014).

  20. Blasse, C. et al. PreMosa: extracting 2D surfaces from 3D microscopy mosaics. Bioinformatics 33, 2563–2569 (2017).

  21. Shihavuddin, A. et al. Smooth 2D manifold extraction from 3D image stack. Nat. Commun. 8, 15554 (2017).

  22. Buades, A., Coll, B. & Morel, J.-M. A non-local algorithm for image denoising. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (eds Schmid, C., Soatto, S. & Tomasi, C.) 60–65 (IEEE, New York, 2005).

  23. Dabov, K., Foi, A., Katkovnik, V. & Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16, 2080–2095 (2007).

  24. Morales-Navarrete, H. et al. A versatile pipeline for the multi-scale digital reconstruction and quantitative analysis of 3D tissue architecture. eLife 4, e11214 (2015).

  25. LeCun, Y. et al. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).

  26. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

  27. Beier, T. et al. Multicut brings automated neurite segmentation closer to human performance. Nat. Methods 14, 101–102 (2017).

  28. Caicedo, J. C. et al. Data-analysis strategies for image-based cell profiling. Nat. Methods 14, 849–863 (2017).

  29. Ounkomol, C. et al. Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat. Methods 15, 917–920 (2018).

  30. Christiansen, E. M. et al. In silico labeling: predicting fluorescent labels in unlabeled images. Cell 173, 792–803 (2018).

  31. Rivenson, Y. et al. Deep learning microscopy. Optica 4, 1437–1443 (2017).

  32. Nehme, E. et al. Deep-STORM: super-resolution single-molecule microscopy by deep learning. Optica 5, 458–464 (2018).

  33. Ouyang, W. et al. Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460–468 (2018).

  34. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) (eds Navab, N. et al.) 234–241 (Springer, Cham, 2015).

  35. Shettigar, N. et al. Hierarchies in light sensing and dynamic interactions between ocular and extraocular sensory networks in a flatworm. Sci. Adv. 3, e1603025 (2017).

  36. Mao, X.-J., Shen, C. & Yang, Y.-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems (NIPS) Vol. 29 (eds Lee, D.D. et al.) 2802–2810 (Curran Associates, Red Hook, NY, 2016).

  37. Wang, Z. et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).

  38. Ulman, V. et al. An objective comparison of cell-tracking algorithms. Nat. Methods 14, 1141–1152 (2017).

  39. Aigouy, B. et al. Cell flow reorients the axis of planar polarity in the wing epithelium of Drosophila. Cell 142, 773–786 (2010).

  40. Etournay, R. et al. Interplay of cell dynamics and epithelial tension during morphogenesis of the Drosophila pupal wing. eLife 4, e07090 (2015).

  41. Etournay, R. et al. TissueMiner: a multiscale analysis toolkit to quantify how cellular processes create tissue dynamics. eLife 5, e14334 (2016).

  42. Chhetri, R. K. et al. Whole-animal functional and developmental imaging with isotropic spatial resolution. Nat. Methods 12, 1171–1178 (2015).

  43. Weigert, M., Royer, L., Jug, F. & Myers, G. Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2017 (eds Descoteaux, M. et al.) 126–134 (Springer, Cham, 2017).

  44. Heinrich, L., Bogovic, J. A. & Saalfeld, S. Deep learning for isotropic super-resolution from non-isotropic 3D electron microscopy. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2017 (eds Descoteaux, M. et al.) 135–143 (Springer, Cham, 2017).

  45. Royer, L. A. et al. Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nat. Biotechnol. 34, 1267–1278 (2016).

  46. Icha, J. et al. Independent modes of ganglion cell translocation ensure correct lamination of the zebrafish retina. J. Cell Biol. 215, 259–275 (2016).

  47. Sommer, C. et al. Ilastik: interactive learning and segmentation toolkit. In IEEE International Symposium on Biomedical Imaging: From Nano to Macro 230–233 (IEEE, New York, 2011).

  48. Culley, S. et al. Quantitative mapping and minimization of super-resolution optical imaging artifacts. Nat. Methods 15, 263–266 (2018).

  49. Sui, L. et al. Differential lateral and basal tension drives epithelial folding through two distinct mechanisms. Nat. Commun. 9, 4620 (2018).

  50. Chollet, F. et al. Keras (2015).

  51. Abadi, M. et al. TensorFlow: a system for large-scale machine learning. In Proc. 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI) (eds Keeton, K. & Roscoe, T.) 265–283 (2016).

  52. Boothe, T. et al. A tunable refractive index matching medium for live imaging cells, tissues and model organisms. eLife 6, e27240 (2017).

  53. Tomasi, C. & Manduchi, R. Bilateral filtering for gray and color images. In Sixth International Conference on Computer Vision 839–846 (IEEE, New York, 1998).

  54. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004).

  55. Maggioni, M. et al. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 22, 119–133 (2013).

  56. Sarrazin, A. F., Peel, A. D. & Averof, M. A segmentation clock with two-segment periodicity in insects. Science 336, 338–341 (2012).

  57. Brown, S. J. et al. The red flour beetle, Tribolium castaneum (Coleoptera): a model for studies of development and pest biology. Cold Spring Harb. Protoc. (2009).

  58. Jones, E. et al. SciPy: Open Source Scientific Tools for Python (2001).

  59. Maška, M. et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics 30, 1609–1617 (2014).

  60. Classen, A.-K., Aigouy, B., Giangrande, A. & Eaton, S. Imaging Drosophila pupal wing morphogenesis. Methods Mol. Biol. 420, 265–275 (2008).

  61. Li, K. et al. Optimal surface segmentation in volumetric images—a graph-theoretic approach. IEEE Trans. Pattern Anal. Mach. Intell. 28, 119–134 (2006).

  62. Wu, X. & Chen, D. Z. Optimal net surface problems with applications. In International Colloquium on Automata, Languages, and Programming (Springer, 2002).

  63. Arganda-Carreras, I. et al. Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424–2426 (2017).

  64. Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).

  65. Aigouy, B., Umetsu, D. & Eaton, S. Segmentation and quantitative analysis of epithelial tissues. In Drosophila: Methods and Protocols (ed Dahmann, C.) 227–239 (Humana Press, New York, 2016).

  66. Ke, M.-T., Fujimoto, S. & Imai, T. SeeDB: a simple and morphology-preserving optical clearing agent for neuronal circuit reconstruction. Nat. Neurosci. 16, 1154–1161 (2013).

  67. Ivanova, A. et al. Age-dependent labeling and imaging of insulin secretory granules. Diabetes 62, 3687–3696 (2013).

  68. Mchedlishvili, N. et al. Kinetochores accelerate centrosome separation to ensure faithful chromosome segregation. J. Cell Sci. 125, 906–918 (2012).

  69. Lakshminarayanan, B., Pritzel, A. & Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30 (eds Guyon, I. et al.) 6402–6413 (Curran Associates, Red Hook, NY, 2017).

  70. Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On calibration of modern neural networks. In Proc. 34th International Conference on Machine Learning (ICML) (eds Precup, D. & Teh, Y. W.) 1321–1330 (PMLR, Cambridge, MA, 2017).

Acknowledgements

The authors thank P. Keller (Janelia) for providing Drosophila data. We thank S. Eaton (MPI-CBG), F. Gruber and R. Piscitello for sharing their expertise in fly imaging and providing fly lines. We thank A. Sönmetz for cell culture work. We thank M. Matejcic (MPI-CBG) for generating and sharing the LAP2b transgenic line Tg(bactin:eGFP-LAP2b). We thank B. Lombardot from the Scientific Computing Facility (MPI-CBG) for technical support. We thank the following Services and Facilities of the MPI-CBG for their support: Computer Department, Light Microscopy Facility and Fish Facility. This work was supported by the German Federal Ministry of Research and Education (BMBF) under the codes 031L0102 (de.NBI) and 031L0044 (Sysbio II) and the Deutsche Forschungsgemeinschaft (DFG) under the code JU 3110/1-1. M.S. was supported by the German Center for Diabetes Research (DZD e.V.). T.B. was supported by an ELBE postdoctoral fellowship and an Add-on Fellowship for Interdisciplinary Life Sciences awarded by the Joachim Herz Stiftung. R.H. and S.C. were supported by the following grants: UK BBSRC (grant nos. BB/M022374/1, BB/P027431/1, and BB/R000697/1), UK MRC (grant no. MR/K015826/1) and Wellcome Trust (grant no. 203276/Z/16/Z).

Author information

Authors and Affiliations



Contributions

F.J. and E.W.M. shared last authorship. M.W. and L.R. initiated the research. M.W. and U.S. designed and implemented the training and validation methods. U.S., M.W., and F.J. designed and implemented the uncertainty readouts. T.B., A.M., A.D., S.C., F.S.M., R.H., M.R.M., and A.J. collected experimental data. A.D., C.B., and F.J. performed cell segmentation analysis. T.B. performed analysis on flatworm data. U.S. and M.W. designed and developed the Python package. F.J., B.W., and D.S. designed and developed the FIJI and KNIME integration. E.W.M. supervised the project. F.J., M.W., P.T., L.R., U.S., and E.W.M. wrote the manuscript, with input from all authors.

Corresponding authors

Correspondence to Martin Weigert, Loic Royer or Florian Jug.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–30, Supplementary Tables 1–3 and Supplementary Notes 1–5

Reporting Summary

Supplementary Video 1

Challenges in time-lapse imaging of the flatworm Schmidtea mediterranea. Image stacks of a RedDot1-labeled, anesthetized specimen were acquired every 2 min with a spinning disk confocal microscope (NA = 1.05) under high- and low-SNR illumination conditions (10% laser, 30-ms exposure per plane vs. 0.5% laser, 10 ms per plane). Whereas the specimen shows illumination-induced twitching in the high-SNR case, the image quality in the low-SNR case is insufficient for further analysis. Network restoration enabled us to recover high-SNR images from data acquired under low-SNR conditions, without twitching of the specimen, thus providing a practical framework for live imaging of Schmidtea mediterranea.

Supplementary Video 2

Restoration results of low-SNR acquisitions of Schmidtea mediterranea and comparison to ground truth. Shown are a 3D rendering of the results on a multi-tiled acquisition (8,192 × 3,072 × 100 pixels) and the comparison with ground truth.
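A stack of this size cannot be pushed through a network in one pass; a common remedy is overlapping tiled prediction. The sketch below works along a single axis with a stand-in model function; tile and overlap sizes are illustrative assumptions.

```python
import numpy as np

def predict_tiled(img, model, tile=64, overlap=8):
    """Apply `model` (any array-to-array function) tile by tile along the
    last axis, padding each tile with `overlap` context pixels and keeping
    only the interior to avoid tile-border artifacts. Single-axis sketch."""
    out = np.empty_like(img)
    for start in range(0, img.shape[-1], tile):
        lo = max(start - overlap, 0)
        hi = min(start + tile + overlap, img.shape[-1])
        restored = model(img[..., lo:hi])
        # keep only the interior of the padded tile
        out[..., start:start + tile] = restored[..., start - lo:start - lo + tile]
    return out

big = np.random.default_rng(0).random((32, 200))
same = predict_tiled(big, model=lambda a: a)  # identity model as placeholder
```

With the identity model, tiled prediction must reproduce the input exactly, which makes the stitching logic easy to verify.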

Supplementary Video 3

Restoration of low-SNR volumetric time lapses of developing Tribolium castaneum embryos (EFA::nGFP-labeled nuclei). Acquisition was done on a Zeiss LSM 710 NLO multiphoton laser-scanning microscope with a time step of 8 min and a stack size of 760 × 760 × 100 pixels per time point. Shown are maximum-intensity projections and single slices of the raw stacks, the network prediction, and the high-SNR ground truth.

Supplementary Video 4

Joint surface projection and denoising of developing Drosophila melanogaster wing epithelia. 3D image stacks of the developing wing of a membrane-labeled (Ecad::GFP) fly pupa were acquired with a spinning disk confocal microscope (63×, NA = 1.3). We show the projected epithelial surface obtained by a conventional method (PreMosa, ref. 1), by the restoration network, and the projected ground truth. We applied a random-forest-based cell segmentation pipeline and show segmentation/tracking results, demonstrating markedly improved accuracy on the network restorations compared to the conventionally processed raw stacks.
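The classical baseline that such joint approaches improve on can be sketched as a per-pixel peak projection (toy data; real tools like PreMosa additionally regularize the height map, and the network here learns projection and denoising jointly):

```python
import numpy as np

stack = np.random.default_rng(1).random((20, 32, 32))  # toy (z, y, x) stack

# Per (y, x) pixel, pick the z-plane of maximum intensity and read out its
# value — the naive surface projection underlying conventional pipelines.
zmap = stack.argmax(axis=0)                                    # height map
proj = np.take_along_axis(stack, zmap[None, :, :], axis=0)[0]  # 2D projection
```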

Supplementary Video 5

Isotropic restoration of anisotropic time-lapse acquisitions of hisGFP-tagged developing Drosophila melanogaster embryos. We used the original, preprocessed data set of ref. 2, acquired with a light-sheet microscope (NA = 1.1) with fivefold axial undersampling (lateral/axial pixel size: 0.39 μm/1.96 μm). The video shows different axial (xz) regions of a single time point from both the original (input) stacks and the isotropic restoration.
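The general trick behind training for this task can be sketched with plain subsampling: degrade a well-sampled lateral axis to mimic the coarse axial sampling, and use the original plane as the target. This is a toy illustration only; a faithful degradation model would also account for the axial point-spread function.

```python
import numpy as np

subsample = 5  # mimics the fivefold axial undersampling of the acquisition
iso = np.random.default_rng(2).random((60, 64))  # well-sampled (y, x) plane

inp = iso[::subsample, :]                    # synthetically "axial" input
inp_up = np.repeat(inp, subsample, axis=0)   # nearest-neighbor upsampling
pair = (inp_up, iso)                         # (network input, isotropic target)
```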

Supplementary Video 6

Isotropic restoration of anisotropic dual-color acquisitions of the developing Danio rerio retina. The data were acquired with a spinning disk confocal microscope (Olympus 60×, NA = 1.1) and exhibit a tenfold axial anisotropy (lateral/axial pixel size: 0.2 μm/2.0 μm); labeled structures are nuclei (DRAQ5, magenta) and the nuclear envelope (GFP-LAP2b, green). The video shows a rendering of the dual-color input stack and its isotropic reconstruction.

Supplementary Video 7

Enhancement of diffraction-limited structures in widefield images of rat INS-1 (beta) cells. The video shows time lapses of several INS-1 cells, acquired with the widefield mode of a DeltaVision OMX microscope (63×, NA = 1.43). Labeled structures are secretory granules (pEG-hIns-SNAP, magenta) and microtubules (SiR-tubulin, green). Alongside the widefield time lapse, we show the output of the restoration networks.

Supplementary Video 8

Enhancement of diffraction-limited widefield images of GFP-labeled microtubules in HeLa cells and comparison with SRRF (super-resolution radial fluctuations [3]). Images were acquired with a Zeiss Elyra PS.1 microscope in TIRF mode (100×, NA = 1.46). The video shows the widefield input sequence, the network restoration, and the corresponding SRRF images. Note that the time resolution of the SRRF sequence is 20 times lower than that of the network restoration, as SRRF must process 20 times more raw frames to reach comparable restoration quality.

Supplementary Video 9

Visualization of network predictions by sampling from the predicted distribution. The video shows for two examples (surface projection of fly wing tissue, and microtubule structure restoration in INS-1 cells) that drawing samples from the per-pixel predicted distribution is beneficial for identifying challenging image regions. For each example, we show successively the raw input, the per-pixel mean of the predicted distribution and random samples from the per-pixel distributions.
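Assuming, for illustration, that the network outputs a per-pixel location and scale of a Laplace distribution (the toy values below are made up), drawing such samples and flagging uncertain regions is straightforward:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-pixel network outputs for a tiny 4x4 image:
mu = rng.uniform(0.2, 0.8, size=(4, 4))   # per-pixel location (mean-like)
b = rng.uniform(0.01, 0.1, size=(4, 4))   # per-pixel scale (uncertainty)

# Draw several plausible restorations by sampling each pixel independently.
samples = rng.laplace(loc=mu, scale=b, size=(5, 4, 4))

# High spread across samples flags regions the network is unsure about.
spread = samples.std(axis=0)
```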

About this article

Cite this article

Weigert, M., Schmidt, U., Boothe, T. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).
