
Pathologist-level interpretable whole-slide cancer diagnosis with deep learning

A Publisher Correction to this article was published on 17 July 2019

A Publisher Correction to this article was published on 17 May 2019

This article has been updated


Diagnostic pathology is the foundation and gold standard for identifying carcinomas. However, high inter-observer variability substantially affects productivity in routine pathology and is especially common in medical centres that lack experienced diagnosticians. Despite rapid growth in computer-aided diagnosis (CAD), its application to whole-slide pathology diagnosis remains impractical. Here, we present a novel pathology whole-slide diagnosis method, powered by artificial intelligence, to address the lack of interpretable diagnosis. The proposed method automates the human-like diagnostic reasoning process and translates gigapixel whole-slide images directly into a series of interpretable predictions, providing second opinions and thereby encouraging consensus in clinics. Moreover, using 913 whole-slide images collected from patients with bladder cancer, we show that our method matches the performance of 17 pathologists in the diagnosis of urothelial carcinoma. We believe that our method provides an innovative and reliable means for making diagnostic suggestions and can be deployed at low cost as next-generation, artificial intelligence-enhanced CAD technology for use in diagnostic pathology.
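The abstract describes a pipeline that tiles a gigapixel slide into patches and pools patch-level evidence into a slide-level, interpretable prediction. The authors' implementation is not reproduced here; the following is only a minimal illustrative sketch of that general idea (all function names, array shapes and the attention pooling step are assumptions for illustration, not the paper's method):

```python
import numpy as np

def extract_patches(slide, size):
    """Tile a 2-D slide array into non-overlapping size x size patches."""
    h, w = slide.shape
    patches = [slide[r:r + size, c:c + size]
               for r in range(0, h - size + 1, size)
               for c in range(0, w - size + 1, size)]
    return np.stack(patches)

def attention_pool(features, scores):
    """Softmax-normalize per-patch attention scores and pool features."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ features, weights

rng = np.random.default_rng(0)
slide = rng.random((512, 512))           # stand-in for a gigapixel image
patches = extract_patches(slide, 128)    # 16 patches of 128 x 128
# placeholder "features": in practice a CNN would embed each patch
features = patches.reshape(len(patches), -1).mean(axis=1, keepdims=True)
scores = rng.random(len(patches))        # placeholder attention scores
pooled, weights = attention_pool(features, scores)
```

The attention weights are what make such a model inspectable: they indicate which patches drove the slide-level prediction and can be rendered as a heatmap over the slide.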


Fig. 1: Method framework.
Fig. 2: Data preparation, organized in four data sets.
Fig. 3: Results for the whole-slide diagnosis.
Fig. 4: Visualization of interpretable predictions of the method.
Fig. 5: Visualization of more interpretable predictions of the method.
Fig. 6: Evaluation of the network components.
Fig. 7: Text-to-image retrieval results.

Data availability

The data that support the findings of this study are available from Figshare:

Code availability

Source code is available from the GitHub repository:





Acknowledgements

The authors thank the Department of Pathology, University of Florida (UF), and UF Health Shands Hospital for support with data collection. The authors also thank members of the Moffitt Cancer Center and the Department of Pathology, the First Affiliated Hospital of Xi’an Jiaotong University, for their participation in this research, and thank all participating pathologists for their valuable suggestions and active involvement. Thanks also go to Y. Cai for assistance with figure production. The research reported in this publication was supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health under award no. 5R01AR065479-05. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information

Contributions

Z.Z. led the development and evaluation. Z.Z., C.W. and L.Y. designed the research. Z.Z. implemented the algorithm. Z.Z., P.C., M.M. and M.S. collected and cleaned the data and developed the annotation software. L.Y. and M.B. recruited pathologists for annotation and machine–human comparison. L.C. and P.C. managed the machine–human competition. J.D., N.A., F.K.K. and S.I.D. participated in the competition. Z.Z. wrote the manuscript. M.M., F.X., Y.X., X.S., F.L., H.S. and J.C. provided valuable comments on the algorithm design and the manuscript.

Corresponding author

Correspondence to Lin Yang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information


About this article


Cite this article

Zhang, Z., Chen, P., McGough, M. et al. Pathologist-level interpretable whole-slide cancer diagnosis with deep learning. Nat Mach Intell 1, 236–245 (2019).


