Dermatologist-level classification of skin cancer with deep neural networks


Skin cancer, the most common human malignancy1,2,3, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs)4,5 show potential for general and highly variable tasks across many fine-grained object categories6,7,8,9,10,11. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end directly from images, using only pixels and disease labels as inputs. We train the CNN on a dataset of 129,450 clinical images—two orders of magnitude larger than previous datasets12—covering 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses, and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers; the second, the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to that of dermatologists. Outfitted with deep neural networks, mobile devices could extend the reach of dermatologists beyond the clinic: with 6.3 billion smartphone subscriptions projected to exist by 2021 (ref. 13), such devices could provide low-cost, universal access to vital diagnostic care.
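The two binary use cases described above are evaluated by comparing each dermatologist's sensitivity–specificity operating point against the curve the classifier traces as its malignancy-probability threshold varies. A minimal sketch of that evaluation arithmetic in plain Python (toy data; the function names are illustrative, not taken from the paper's code):

```python
def sensitivity_specificity(probs, labels, threshold):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    of a binary malignancy classifier at a given probability threshold.
    Assumes both classes are present in `labels` (1 = malignant, 0 = benign)."""
    tp = sum(p >= threshold and y == 1 for p, y in zip(probs, labels))
    fn = sum(p < threshold and y == 1 for p, y in zip(probs, labels))
    tn = sum(p < threshold and y == 0 for p, y in zip(probs, labels))
    fp = sum(p >= threshold and y == 0 for p, y in zip(probs, labels))
    return tp / (tp + fn), tn / (tn + fp)

def roc_points(probs, labels):
    """Sweep the decision threshold over the observed scores to trace
    the classifier's ROC operating points as (false-positive rate, sensitivity)."""
    points = []
    for t in sorted(set(probs)):
        sens, spec = sensitivity_specificity(probs, labels, t)
        points.append((1 - spec, sens))
    return points

# Toy example: two malignant and two benign lesions, well separated.
probs = [0.9, 0.8, 0.3, 0.1]
labels = [1, 1, 0, 0]
print(sensitivity_specificity(probs, labels, 0.5))  # → (1.0, 1.0)
```

A dermatologist whose (false-positive rate, sensitivity) point falls below the curve returned by `roc_points` is outperformed by the classifier at some threshold; this is the sense in which the abstract's "on par with all tested experts" comparison is made.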



  1. American Cancer Society. Cancer Facts & Figures 2016 (American Cancer Society, 2016).
  2. Rogers, H. W. et al. Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the US population, 2012. JAMA Dermatol. 151, 1081–1086 (2015).
  3. Stern, R. S. Prevalence of a history of skin cancer in 2007: results of an incidence-based model. Arch. Dermatol. 146, 279–282 (2010).
  4. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
  5. LeCun, Y. & Bengio, Y. in The Handbook of Brain Theory and Neural Networks (ed. Arbib, M. A.) (MIT Press, 1995).
  6. Russakovsky, O. et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015).
  7. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012).
  8. Ioffe, S. & Szegedy, C. Batch normalization: accelerating deep network training by reducing internal covariate shift. Proc. 32nd Int. Conf. on Machine Learning 448–456 (2015).
  9. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. Preprint (2015).
  10. Szegedy, C. et al. Going deeper with convolutions. Proc. IEEE Conf. on Computer Vision and Pattern Recognition 1–9 (2015).
  11. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. Preprint (2015).
  12. Masood, A. & Al-Jumaily, A. A. Computer aided diagnostic support system for skin cancer: a review of techniques and algorithms. Int. J. Biomed. Imaging 2013, 323268 (2013).
  13. Ericsson. Ericsson Mobility Report (Ericsson, 2016).
  14. Accuracy of computer diagnosis of melanoma: a quantitative meta-analysis. Arch. Dermatol. 139, 361–367, discussion 366 (2003).
  15. Melanoma computer-aided diagnosis: reliability and feasibility study. Clin. Cancer Res. 10, 1881–1886 (2004).
  16. Kittler, H., Pehamberger, H., Wolff, K. & Binder, M. Diagnostic accuracy of dermoscopy. Lancet Oncol. 3, 159–165 (2002).
  17. In Machine Learning in Medical Imaging 118–126 (Springer, 2015).
  18. Gutman, D. et al. Skin lesion analysis toward melanoma detection. Int. Symp. on Biomedical Imaging (ISBI) (International Skin Imaging Collaboration, 2016).
  19. Binder, M. et al. Epiluminescence microscopy-based classification of pigmented skin lesions using computerized image analysis and an artificial neural network. Melanoma Res. 8, 261–266 (1998).
  20. In Skin Cancer and UV Radiation 1064–1070 (Springer, 1997).
  21. Clark, W. H. Jr et al. Model predicting survival in stage I melanoma based on tumor progression. J. Natl Cancer Inst. 81, 1893–1904 (1989).
  22. Classification of melanocytic lesions with color and texture analysis using digital image processing. Anal. Quant. Cytol. Histol. 15, 1–11 (1993).
  23. A mobile automated skin lesion classification system. Proc. 23rd IEEE Int. Conf. on Tools with Artificial Intelligence (ICTAI) 138–141 (2011).
  24. In Color Medical Image Analysis 63–86 (Springer, 2013).
  25. Deng, J. et al. ImageNet: a large-scale hierarchical image database. IEEE Conf. on Computer Vision and Pattern Recognition 248–255 (2009).
  26. Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015).
  27. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).
  28. Pan, S. J. & Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345–1359 (2010).
  29. van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
  30. Abadi, M. et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. Preprint (2016).



Acknowledgements

We thank the Thrun laboratory for their support and ideas. We thank members of the dermatology departments at Stanford University, University of Pennsylvania, Massachusetts General Hospital and University of Iowa for completing our tests. This study was supported by funding from the Baxter Foundation to H.M.B. In addition, this work was supported by a National Institutes of Health (NIH) National Center for Advancing Translational Science Clinical and Translational Science Award (UL1 TR001085). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Author information

Author notes

    • Andre Esteva
    •  & Brett Kuprel

    These authors contributed equally to this work.


  1. Department of Electrical Engineering, Stanford University, Stanford, California, USA

    • Andre Esteva
    •  & Brett Kuprel
  2. Department of Dermatology, Stanford University, Stanford, California, USA

    • Roberto A. Novoa
    • , Justin Ko
    •  & Susan M. Swetter
  3. Department of Pathology, Stanford University, Stanford, California, USA

    • Roberto A. Novoa
  4. Dermatology Service, Veterans Affairs Palo Alto Health Care System, Palo Alto, California, USA

    • Susan M. Swetter
  5. Baxter Laboratory for Stem Cell Biology, Department of Microbiology and Immunology, Institute for Stem Cell Biology and Regenerative Medicine, Stanford University, Stanford, California, USA

    • Helen M. Blau
  6. Department of Computer Science, Stanford University, Stanford, California, USA

    • Sebastian Thrun



Contributions

A.E. and B.K. conceptualized and trained the algorithms and collected data. R.A.N., J.K. and S.M.S. developed the taxonomy, oversaw the medical tasks and recruited dermatologists. H.M.B. and S.T. supervised the project.

Competing interests

The authors declare no competing financial interests.

Corresponding authors

Correspondence to Andre Esteva or Brett Kuprel or Roberto A. Novoa or Sebastian Thrun.

Reviewer Information

Nature thanks A. Halpern, G. Merlino and M. Welling for their contribution to the peer review of this work.
