
Perspective

Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction

Abstract

Deep learning is becoming an increasingly important tool for image reconstruction in fluorescence microscopy. We review state-of-the-art applications such as image restoration and super-resolution imaging, and discuss how the latest deep learning research could be applied to other image reconstruction tasks. Despite its successes, deep learning also poses substantial challenges and has limits. We discuss key questions, including how to obtain training data, whether discovery of unknown structures is possible, and the danger of inferring unsubstantiated image details.
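To make the supervised image-restoration setting concrete, the sketch below is a hypothetical illustration (not code from this Perspective): it trains a small convolutional network to map synthetic low-SNR images to their high-SNR counterparts, assuming PyTorch, toy Gaussian-blob ground truth, and a mixed Poisson–Gaussian noise model.

# Illustrative sketch only (hypothetical, not the authors' code): supervised
# denoising of fluorescence-like images with a small convolutional network.
import torch
import torch.nn as nn

def make_clean_batch(n=16, size=64):
    """Synthetic 'ground truth': Gaussian blobs standing in for fluorescent spots."""
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    imgs = []
    for _ in range(n):
        cy, cx = torch.randint(8, size - 8, (2,)).tolist()
        sigma = 3.0 + 5.0 * torch.rand(1).item()
        imgs.append(torch.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))))
    return torch.stack(imgs).unsqueeze(1).float()      # shape (n, 1, H, W)

def add_noise(clean, photons=20.0, read_sigma=0.05):
    """Low-SNR version: Poisson shot noise at a given photon budget plus Gaussian read noise."""
    noisy = torch.poisson(clean * photons) / photons
    return noisy + read_sigma * torch.randn_like(clean)

# Three plain convolutional layers; real applications typically use a U-Net.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    clean = make_clean_batch()
    noisy = add_noise(clean)
    loss = loss_fn(model(noisy), clean)   # pixel-wise loss against the high-SNR target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice, content-aware restoration methods use deeper U-Net-style architectures and experimentally acquired (or carefully simulated) training pairs; the point of the sketch is only the learned mapping from noisy input to clean target obtained by minimizing a pixel-wise loss.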


Fig. 1: Review of current applications of deep learning to fluorescence microscopy.
Fig. 2: Potential applications of deep learning in fluorescence microscopy and key concepts.

Data availability

Source code for the experiment described in Box 5 can be found at http://github.com/royerlab/DLDiscovery.


Acknowledgements

We thank our colleagues at the Chan Zuckerberg Biohub: A. Krishnan, B. Chhun, M. Leonetti, S. Mehta, J. Batson, and R. Gomez Sjoberg for insightful discussions, feedback, and review of the manuscript. We thank J. Zou for advice; M. Weigert for reviewing the manuscript and for innumerable discussions on the topics of deep learning and microscopy; and W. Ouyang, R. Prevedel, F. Jug, and others for feedback on the first version of the preprint. We thank the Chan Zuckerberg Biohub and its donors for funding this work.

Author information

Contributions

L.A.R. conceived the piece. C.B. and L.A.R. wrote the manuscript and designed the figures.

Corresponding author

Correspondence to Loic A. Royer.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Rita Strack was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Belthangady, C., Royer, L.A. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nat Methods 16, 1215–1225 (2019). https://doi.org/10.1038/s41592-019-0458-z
