Abstract
The histological analysis of tissue samples, widely used for disease diagnosis, involves lengthy and laborious tissue preparation. Here, we show that a convolutional neural network trained using a generative adversarial-network model can transform wide-field autofluorescence images of unlabelled tissue sections into images that are equivalent to the bright-field images of histologically stained versions of the same samples. A blind comparison, by board-certified pathologists, of this virtual staining method and standard histological staining using microscopic images of human tissue sections of the salivary gland, thyroid, kidney, liver and lung, and involving different types of stain, showed no major discordances. The virtual-staining method bypasses the typically labour-intensive and costly histological staining procedures, and could be used as a blueprint for the virtual staining of tissue images acquired with other label-free imaging modalities.
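The mapping described above — a convolutional network trained adversarially to translate autofluorescence images into bright-field-equivalent stained images — follows the conditional-GAN recipe. As an illustration only, here is a minimal numpy sketch of such a generator objective, assuming a pix2pix-style combination of an adversarial term and an L1 fidelity term; the function name and the weight `lam=100.0` are hypothetical defaults, not values reported by the paper:

```python
import numpy as np

def pix2pix_generator_loss(disc_fake_logits, generated, target, lam=100.0):
    """Sketch of a conditional-GAN generator objective (assumed, not the
    authors' exact loss): a non-saturating adversarial term computed from
    the discriminator's logits on the generated image, plus a
    lambda-weighted L1 penalty keeping the virtual stain close to the
    bright-field image of the histologically stained section."""
    # Adversarial term: -log(sigmoid(D(G(x)))), written stably as
    # log(1 + exp(-logits)) and averaged over the discriminator's patch grid.
    adversarial = np.mean(np.log1p(np.exp(-disc_fake_logits)))
    # Fidelity term: mean absolute pixel error against the stained target.
    l1 = np.mean(np.abs(generated - target))
    return adversarial + lam * l1
```

During training, this loss would be minimized over the generator's parameters while a discriminator is trained in alternation to distinguish generated virtual stains from real bright-field images of the stained samples.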
Code availability
The deep-learning models used in this work employ standard libraries and scripts that are publicly available in TensorFlow. The trained network models for the Masson's trichrome stain (liver) and the Jones stain (kidney), together with sample test-image data, are available through a Fiji-based plugin at https://github.com/whd0121/ImageJ-VirtualStain (Fiji can be downloaded at https://imagej.net/Fiji/Downloads). The Fiji Grid/Collection stitching plugin was used to stitch the fields of view, and the inference (testing) software has been adapted to Fiji. MATLAB was used for shading correction and for the registration steps (coarse matching, global registration and local registration). Python with the TensorFlow library was used to implement both the CNN used for image registration and the CNN used to produce the final virtually stained images. Our custom training code is proprietary (managed by the UCLA Office of Intellectual Property) and is not publicly available.
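The MATLAB registration workflow mentioned above (coarse matching, then global and local registration) is not part of the released code. Purely as an illustration of the coarse-matching step, a translational offset between two overlapping fields of view could be estimated with FFT phase correlation; the function below is a hypothetical sketch, not the authors' implementation:

```python
import numpy as np

def estimate_translation(ref, moving):
    """Estimate the integer (row, col) shift that, applied to `moving`
    via np.roll, best aligns it with `ref` -- a common coarse-matching
    step before finer global/local registration."""
    # Normalized cross-power spectrum of the two fields of view.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross /= np.abs(cross) + 1e-12
    # The inverse transform peaks at the relative translation.
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Phase correlation recovers only a rigid translation; the global and local registration steps described above would still be needed to correct rotation, scale and local tissue deformation before the registered image pairs can be used for training.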
Data availability
The authors declare that all data supporting the results in this study are available within the paper and the Supplementary Information.
Acknowledgements
The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (PATHS-UP), the Army Research Office, the NSF CBET Division Biophotonics Program, the National Institutes of Health (NIH, R21EB023115), HHMI, Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation. The authors also acknowledge the Translational Pathology Core Laboratory and the Histology Laboratory at UCLA for their assistance with the sample preparation and staining. The authors acknowledge the time and effort of S. French, B.D. Cone, A. Nobori and C.M. Lee of the UCLA Department of Pathology and Laboratory Medicine for their evaluations; the assistance of R. Gao in preparing the ImageJ plugin and of R. Suh at the UCLA Department of Radiology for his help with Fig. 1.
Author information
Contributions
Y.R. and A.O. conceived the research; H.W. and Y.R. conducted the experiments; and Y.R., Z.W., K.d.H., H.G., Y.Z. and H.W. processed the data. W.D.W. directed the clinical aspects of the research. J.E.Z., T.C., A.E.S. and L.M.W. performed the diagnoses and the stain-efficacy assessments on the virtually and histologically stained slides. Y.R., H.W., Z.W., K.d.H., Y.Z., W.D.W. and A.O. prepared the manuscript, and all authors contributed to it. A.O. supervised the research.
Ethics declarations
Competing interests
A.O., Y.R., H.W. and Z.W. have applied for a patent (US application number: 62651005) related to the work reported in this manuscript.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary figures and tables.
About this article
Cite this article
Rivenson, Y., Wang, H., Wei, Z. et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng 3, 466–477 (2019). https://doi.org/10.1038/s41551-019-0362-y