Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning

Abstract

The histological analysis of tissue samples, widely used for disease diagnosis, involves lengthy and laborious tissue preparation. Here, we show that a convolutional neural network trained using a generative adversarial-network model can transform wide-field autofluorescence images of unlabelled tissue sections into images that are equivalent to the bright-field images of histologically stained versions of the same samples. A blind comparison, by board-certified pathologists, of this virtual staining method and standard histological staining using microscopic images of human tissue sections of the salivary gland, thyroid, kidney, liver and lung, and involving different types of stain, showed no major discordances. The virtual-staining method bypasses the typically labour-intensive and costly histological staining procedures, and could be used as a blueprint for the virtual staining of tissue images acquired with other label-free imaging modalities.
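The adversarial training summarized above pairs a generator, which maps an autofluorescence image to a virtually stained image, with a discriminator that scores how well the output resembles a genuine bright-field stain. As a rough, hypothetical illustration of such an objective (not the paper's actual architecture or loss weights; see Fig. 2 for those), the generator can be trained against a pixel-fidelity term plus a weighted adversarial term:

```python
import numpy as np

# Toy sketch of a conditional-GAN generator objective for virtual staining.
# `pred` stands in for G(x) (the virtually stained output), `target` for the
# bright-field image of the histologically stained tissue, and
# `disc_score_on_fake` for the discriminator's scores on G(x).
# All shapes, names and the weight `lam` are illustrative assumptions.

def l1_loss(pred, target):
    """Pixel-wise fidelity: mean absolute error against the stained ground truth."""
    return float(np.mean(np.abs(pred - target)))

def adversarial_loss(disc_score_on_fake):
    """Non-saturating generator term: -log D(G(x)), averaged over patches."""
    eps = 1e-12  # avoid log(0)
    return float(-np.mean(np.log(disc_score_on_fake + eps)))

def generator_objective(pred, target, disc_score_on_fake, lam=0.01):
    """Combined objective: fidelity plus lam times the adversarial realism term."""
    return l1_loss(pred, target) + lam * adversarial_loss(disc_score_on_fake)

rng = np.random.default_rng(0)
fake = rng.random((16, 16, 3))    # stand-in for G(x)
real = rng.random((16, 16, 3))    # stand-in for the stained ground truth
d_scores = np.full((4, 4), 0.5)   # undecided discriminator patch scores

loss = generator_objective(fake, real, d_scores)
print(loss)
```

The pixel term anchors the output to the ground-truth stain, while the adversarial term pushes the output distribution toward that of real stained images; balancing the two is what lets the network avoid the over-smoothed results a pure pixel loss tends to produce.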

Fig. 1: Deep-learning-based virtual histology staining using autofluorescence of unstained tissue.
Fig. 2: Virtual staining GAN architecture.
Fig. 3: Virtual staining results match the H&E- and Jones-stained images.
Fig. 4: Virtual staining results match the Masson’s trichrome stain for liver and lung tissue sections.
Fig. 5: Virtual staining reduces staining variability.
Fig. 6: Accelerated convergence is achieved using transfer learning.
Fig. 7: Melanin inference using multiple autofluorescence channels.

Code availability

The deep-learning models used in this work employ standard libraries and scripts that are publicly available in TensorFlow. The trained network models for the Masson's trichrome stain (liver) and the Jones stain (kidney), alongside sample test-image data, are available through a Fiji-based plugin at https://github.com/whd0121/ImageJ-VirtualStain (Fiji can be downloaded at https://imagej.net/Fiji/Downloads). The Fiji Grid/Collection stitching plugin was used to stitch the fields of view, and the inference (testing) software has been adapted to Fiji. MATLAB was used for the shading correction and for the registration steps (coarse matching, global registration and local registration). Python with the TensorFlow library was used to implement both the initial CNN used for image registration and the CNN used to produce the final virtually stained images. Our custom training code is proprietary (managed by the UCLA Office of Intellectual Property) and is not publicly available.
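The shading correction mentioned above (flattening each field of view before registration) amounts to dividing the raw image by a smooth estimate of the illumination profile. The paper performs this step in MATLAB; as a small, hypothetical stand-in (the box-blur kernel size and the synthetic vignette are illustrative choices, not the paper's parameters), the idea can be sketched as:

```python
import numpy as np

# Flat-field ("shading") correction sketch: estimate the slowly varying
# illumination with a heavy low-pass filter, normalize it, and divide it out.

def estimate_shading(fov, kernel=9):
    """Crude illumination estimate via a box blur (stand-in for a low-pass filter)."""
    pad = kernel // 2
    padded = np.pad(fov, pad, mode="edge")
    out = np.empty_like(fov, dtype=float)
    h, w = fov.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out

def shading_correct(fov):
    """Divide by the normalized illumination estimate to flatten the field."""
    shade = estimate_shading(fov.astype(float))
    shade /= shade.mean()            # keep the overall intensity scale
    return fov / np.maximum(shade, 1e-6)

# Synthetic example: a uniform 32x32 field darkened by a linear vignette.
flat = np.full((32, 32), 100.0)
vignette = 1.0 - 0.5 * np.linspace(0, 1, 32)[None, :]
corrected = shading_correct(flat * vignette)
```

After correction the synthetic field is nearly uniform again, which is what makes the subsequent coarse-to-fine registration against the stained bright-field images tractable.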

Data availability

The authors declare that all data supporting the results in this study are available within the paper and the Supplementary Information.

Acknowledgements

The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (PATHS-UP), the Army Research Office, the NSF CBET Division Biophotonics Program, the National Institutes of Health (NIH, R21EB023115), HHMI, Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation. The authors also acknowledge the Translational Pathology Core Laboratory and the Histology Laboratory at UCLA for their assistance with the sample preparation and staining. The authors acknowledge the time and effort of S. French, B.D. Cone, A. Nobori and C.M. Lee of the UCLA Department of Pathology and Laboratory Medicine for their evaluations; the assistance of R. Gao in preparing the ImageJ plugin and of R. Suh at the UCLA Department of Radiology for his help with Fig. 1.

Author information

Contributions

Y.R. and A.O. conceived the research, H.W. and Y.R. conducted the experiments, and Y.R., Z.W., K.d.H., H.G., Y.Z. and H.W. processed the data. W.D.W. directed the clinical aspects of the research. J.E.Z., T.C., A.E.S. and L.M.W. performed diagnosis and stain efficacy assessment on the virtual and histologically stained slides. Y.R., H.W., Z.W., K.d.H., Y.Z., W.D.W. and A.O. prepared the manuscript and all authors contributed to the manuscript. A.O. supervised the research.

Corresponding authors

Correspondence to Yair Rivenson or Aydogan Ozcan.

Ethics declarations

Competing interests

A.O., Y.R., H.W. and Z.W. have applied for a patent (US application number: 62651005) related to the work reported in this manuscript.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary figures and tables.

Reporting Summary

About this article

Cite this article

Rivenson, Y., Wang, H., Wei, Z. et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng 3, 466–477 (2019). https://doi.org/10.1038/s41551-019-0362-y
