Owing to improvements in image recognition via deep learning, machine-learning algorithms could eventually be applied to automated medical diagnoses that can guide clinical decision-making. However, these algorithms remain a ‘black box’ in terms of how they generate the predictions from the input data. Also, high-performance deep learning requires large, high-quality training datasets. Here, we report the development of an understandable deep-learning system that detects acute intracranial haemorrhage (ICH) and classifies five ICH subtypes from unenhanced head computed-tomography scans. By using a dataset of only 904 cases for algorithm training, the system achieved a performance similar to that of expert radiologists in two independent test datasets containing 200 cases (sensitivity of 98% and specificity of 95%) and 196 cases (sensitivity of 92% and specificity of 95%). The system includes an attention map and a prediction basis retrieved from training data to enhance explainability, and an iterative process that mimics the workflow of radiologists. Our approach to algorithm development can facilitate the development of deep-learning systems for a variety of clinical applications and accelerate their adoption into clinical practice.
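The reported operating points can be sanity-checked with the standard definitions of sensitivity and specificity. The sketch below is illustrative only: the counts assume a hypothetical 200-case test set split evenly into 100 ICH-positive and 100 ICH-negative cases, a class balance that the abstract does not actually state.

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of ICH-positive cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of ICH-negative cases correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts (assumed 100 positive / 100 negative cases)
# chosen so the rates match the reported 98% sensitivity and
# 95% specificity on the first 200-case test set.
tp, fn = 98, 2
tn, fp = 95, 5

print(sensitivity(tp, fn))  # 0.98
print(specificity(tn, fp))  # 0.95
```

Note that sensitivity and specificity are computed over disjoint subsets of cases (actual positives and actual negatives, respectively), which is why the denominators differ from the overall case count.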


Data availability

The training, validation and test datasets generated for this study are protected patient information. Some data may be available for research purposes from the corresponding author upon reasonable request.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.




The authors thank NVIDIA for the use of a DevBox and for feedback and support, which made this work possible. R.G.G. is funded in part by NIH grant 5U01EB025153.

Author information

Author notes

  1. These authors contributed equally: Hyunkwang Lee, Sehyo Yune.


  1. Department of Radiology, Massachusetts General Hospital, Boston, MA, USA

    Hyunkwang Lee, Sehyo Yune, Mohammad Mansouri, Myeongchan Kim, Shahein H. Tajmir, Claude E. Guerrier, Sarah A. Ebert, Stuart R. Pomerantz, Javier M. Romero, Shahmir Kamalian, Ramon G. Gonzalez, Michael H. Lev & Synho Do
  2. John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA

    • Hyunkwang Lee




H.L., S.Y., M.M., R.G.G., M.H.L. and S.D. initiated and designed the research. H.L., S.Y. and M.K. executed the research. M.M., S.H.T., C.E.G., S.A.E., S.R.P., J.M.R., S.K., R.G.G. and M.H.L. acquired and/or interpreted the data. R.G.G. and M.H.L. supervised the data collection. H.L., S.Y., M.H.L. and S.D. analysed and interpreted the data. H.L. and M.K. developed the algorithms and software tools necessary for the experiments. H.L., S.Y., S.H.T. and M.H.L. wrote the manuscript.

Competing interests

M.H.L. is a consultant for GE Healthcare and Takeda Pharmaceutical Company and receives institutional research support from Siemens Healthcare. S.R.P. is a consultant for GE Healthcare. S.D. is a consultant for Nulogix and Doai and receives research support from ZCAI, Tplus and MediBloc. The remaining authors declare no competing interests.

Corresponding author

Correspondence to Synho Do.

Supplementary information

  1. Supplementary Information

    Supplementary Tables 1–2 and Supplementary Figures 1–7

  2. Reporting Summary
