
  • Letter

Deep learning models for electrocardiograms are susceptible to adversarial attack


Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities1. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye2,3. Adversarial examples have also been created for medical-related tasks4,5. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG6 is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.
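The smoothing idea described above can be illustrated with a minimal numpy sketch (the kernel width, sigma, and function names here are assumptions for illustration, not the authors' code): convolving a raw perturbation with a Gaussian kernel turns abrupt square-wave steps into gradual, more physiologically plausible ramps.

```python
import numpy as np

def gaussian_kernel(width=31, sigma=5.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(width) - width // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_perturbation(delta, width=31, sigma=5.0):
    """Convolve a raw adversarial perturbation with a Gaussian kernel
    so abrupt square-wave steps become gradual ramps."""
    return np.convolve(delta, gaussian_kernel(width, sigma), mode="same")

# A square-wave-like perturbation (abrupt step), as unsmoothed attacks produce:
delta = np.zeros(200)
delta[80:120] = 0.1
smoothed = smooth_perturbation(delta)
```

The smoothed perturbation keeps roughly the same total energy but removes the sample-to-sample jumps that a human reader would flag as non-physiological.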


Fig. 1: Demonstration of disruptive adversarial examples.
Fig. 2: Accuracy of the network in classifying adversarial examples and clinician success rate in distinguishing authentic ECGs from adversarial examples.
Fig. 3: Perturbing a known adversarial example to generate multiple new ones.


Data availability

The dataset can be accessed from

Code availability

The code is available from


  1. Hannun, A. Y. et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat. Med. 25, 65–69 (2019).


  2. Szegedy, C. et al. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR, 2014).

  3. Biggio, B. et al. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases Vol. 8190 (eds Blockeel, H., Kersting, K., Nijssen, S. & Železný, F.) 387–402 (Springer, 2013).

  4. Finlayson, S. G. et al. Adversarial attacks on medical machine learning. Science 363, 1287–1289 (2019).


  5. Paschali, M., Conjeti, S., Navarro, F. & Navab, N. Generalizability vs. robustness: investigating medical imaging networks using adversarial examples. In International Conference on Medical Image Computing and Computer-Assisted Intervention 493–501 (Springer, 2018).

  6. Clifford, G. D. et al. AF classification from a short single lead ECG recording: the PhysioNet/Computing in Cardiology Challenge 2017. In 2017 Computing in Cardiology (CinC) 1–4 (IEEE, 2017).

  7. Kelly, B. B. & Fuster, V. Promoting Cardiovascular Health in the Developing World: a Critical Challenge to Achieve Global Health (National Academies Press, 2010).

  8. Kennedy, H. L. The evolution of ambulatory ECG monitoring. Prog. Cardiovasc. Dis. 56, 127–132 (2013).


  9. Hong, S. et al. ENCASE: an ENsemble ClASsifiEr for ECG classification using expert features and deep neural networks. In 2017 Computing in Cardiology (CinC) 1–4 (IEEE, 2017).

  10. Chen, H., Huang, C., Huang, Q. & Zhang, Q. ECGadv: generating adversarial electrocardiogram to misguide arrhythmia classification system. Preprint at arXiv (2019).

  11. Goodfellow, S. D. et al. Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. In Proceedings of the 3rd Machine Learning for Healthcare Conference 83–101 (PMLR, 2018).

  12. Madry, A., Makelov, A., Schmidt, L., Tsipras, D. & Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR, 2018).

  13. Kurakin, A., Goodfellow, I. & Bengio, S. Adversarial machine learning at scale. In International Conference on Learning Representations (ICLR, 2017).

  14. Kwon, J.-M. et al. Development and validation of deep-learning algorithm for electrocardiography-based heart failure identification. Korean Circ. J. 49, 629–639 (2019).


  15. Galloway, C. D. et al. Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram. JAMA Cardiol. 4, 428–436 (2019).


  16. Singh, G., Gehr, T., Mirman, M., Püschel, M. & Vechev, M. Fast and effective robustness certification. In Advances in Neural Information Processing Systems 10802–10813 (NIPS, 2018).

  17. Cohen, J., Rosenfeld, E. & Kolter, J. Z. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning 1310–1320 (ICML, 2019).

  18. Julian, K. D., Kochenderfer, M. J. & Owen, M. P. Deep neural network compression for aircraft collision avoidance systems. J. Guid. Control Dyn. 42, 598–608 (2018).


  19. Nguyen, M. T., Van Nguyen, B. & Kim, K. Deep feature learning for sudden cardiac arrest detection in automated external defibrillators. Sci. Rep. 8, 17196 (2018).


  20. Lyle, J. & Martin, A. Trusted Computing and Provenance: Better Together (Usenix, 2010).

  21. Uesato, J., O’Donoghue, B., Kohli, P. & Oord, A. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning 5025–5034 (ICML, 2018).

  22. Saria, S. & Subbaswamy, A. Tutorial: safe and reliable machine learning. Preprint at (2019).

  23. Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR, 2015).

  24. McSharry, P. E., Clifford, G. D., Tarassenko, L. & Smith, L. A. A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 50, 289–294 (2003).


  25. LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).


  26. Krizhevsky, A. & Hinton, G. Learning Multiple Layers of Features from Tiny Images (Citeseer, 2009).

  27. Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K. & Madry, A. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems 5014–5026 (NIPS, 2018).



X.H. and R.R. were supported in part by the NIH under award no. R01HL148248. We thank W. Lee, S. Mohan, M. Goldstein, A. Li, A.M. Puli, H. Singh, M. Sudarshan, and W. Whitney.

Author information

Authors and Affiliations



X.H. and R.R. designed the problem and performed all the experiments. All authors wrote the manuscript.

Corresponding authors

Correspondence to Xintian Han or Rajesh Ranganath.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Michael Basson was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 An adversarial example created by Projected Gradient Descent (PGD) method.

This adversarial example contains square waves that are physiologically implausible.
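The square-wave artefacts arise because each PGD step moves every sample by exactly ±α, the sign of the gradient, before projecting back into the ε-ball. A minimal sketch with a toy linear score in place of the deep network (the linear model and all constants here are hypothetical, chosen only to make the mechanism visible) shows why the resulting perturbation is piecewise constant:

```python
import numpy as np

# Toy differentiable "classifier": score = w . x. The PGD update
# x <- clip(x + alpha * sign(grad)) shifts every sample by exactly
# +/- alpha per step, which is what produces square-wave artefacts
# when the perturbation is added to a smooth ECG tracing.
rng = np.random.default_rng(0)
w = rng.standard_normal(200)   # stand-in for the network's gradient
x = np.zeros(200)              # clean signal (toy)
eps, alpha = 0.1, 0.02

x_adv = x.copy()
for _ in range(10):
    grad = w                                   # d(score)/dx for a linear score
    x_adv = x_adv + alpha * np.sign(grad)      # signed, fixed-size step
    x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
```

After enough steps every sample saturates at ±ε, so the perturbation is a sequence of flat ±ε segments rather than a physiologically plausible waveform.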

Extended Data Fig. 2 Demonstration of three procedures used to show the existence of multiple adversarial examples.

a, We add a small amount of Gaussian noise to the adversarial example and smooth it to create a new signal. b, For two signals that intersect, we concatenate the left half of one with the right half of the other to create a new signal. c, We sample uniformly from the band between two signals and smooth to create a new signal.
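The three procedures above can be sketched roughly as follows (a hypothetical numpy implementation; the kernel parameters, noise scale, and function names are assumptions, not the authors' released code):

```python
import numpy as np

def smooth(x, width=31, sigma=5.0):
    """Gaussian smoothing used by all three procedures."""
    t = np.arange(width) - width // 2
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(x, k / k.sum(), mode="same")

def jitter_and_smooth(x_adv, scale=0.01, rng=None):
    """(a) Add a small amount of Gaussian noise to a known
    adversarial example, then smooth the result."""
    if rng is None:
        rng = np.random.default_rng()
    return smooth(x_adv + rng.normal(0.0, scale, x_adv.shape))

def splice(x1, x2):
    """(b) For two intersecting signals, concatenate the left half
    of one with the right half of the other."""
    n = len(x1) // 2
    return np.concatenate([x1[:n], x2[n:]])

def sample_band(lo, hi, rng=None):
    """(c) Sample uniformly from the band between two signals,
    then smooth to obtain a new signal."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(np.minimum(lo, hi), np.maximum(lo, hi))
    return smooth(u)
```

Each procedure maps one or two known adversarial examples to a new nearby signal, which is the sense in which a single discovered example can be collated and perturbed into many.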

Supplementary information


About this article


Cite this article

Han, X., Hu, Y., Foschini, L. et al. Deep learning models for electrocardiograms are susceptible to adversarial attack. Nat Med 26, 360–363 (2020).
