Abstract
Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities1. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye2,3. Adversarial examples have also been created for medical-related tasks4,5. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG6 is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.
Data availability
The dataset can be accessed from https://physionet.org/challenge/2017/.
Code availability
The code is available from https://github.com/XintianHan/ADV_ECG.
References
Hannun, A. Y. et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat. Med. 25, 65–69 (2019).
Szegedy, C. et al. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR, 2014).
Biggio, B. et al. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases Vol. 8190 (eds Blockeel, H., Kersting, K., Nijssen, S. & Železný, F.) 387–402 (Springer, 2013).
Finlayson, S. G. et al. Adversarial attacks on medical machine learning. Science 363, 1287–1289 (2019).
Paschali, M., Conjeti, S., Navarro, F. & Navab, N. Generalizability vs. robustness: investigating medical imaging networks using adversarial examples. In International Conference on Medical Image Computing and Computer-Assisted Intervention 493–501 (Springer, 2018).
Clifford, G. D. et al. AF classification from a short single lead ECG recording: the PhysioNet/Computing in Cardiology Challenge 2017. In 2017 Computing in Cardiology (CinC) 1–4 (IEEE, 2017).
Kelly, B. B. & Fuster, V. Promoting Cardiovascular Health in the Developing World: a Critical Challenge to Achieve Global Health (National Academies Press, 2010).
Kennedy, H. L. The evolution of ambulatory ECG monitoring. Prog. Cardiovasc. Dis. 56, 127–132 (2013).
Hong, S. et al. ENCASE: an ENsemble ClASsifiEr for ECG classification using expert features and deep neural networks. In 2017 Computing in Cardiology (CinC) 1–4 (IEEE, 2017).
Chen, H., Huang, C., Huang, Q. & Zhang, Q. ECGadv: generating adversarial electrocardiogram to misguide arrhythmia classification system. Preprint at https://arxiv.org/pdf/1901.03808.pdf (2019).
Goodfellow, S. D. et al. Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. In Proceedings of the 3rd Machine Learning for Healthcare Conference 83–101 (PMLR, 2018).
Madry, A., Makelov, A., Schmidt, L., Tsipras, D. & Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR, 2018).
Kurakin, A., Goodfellow, I. & Bengio, S. Adversarial machine learning at scale. In International Conference on Learning Representations (ICLR, 2017).
Kwon, J.-M. et al. Development and validation of deep-learning algorithm for electrocardiography-based heart failure identification. Korean Circ. J. 49, 629–639 (2019).
Galloway, C. D. et al. Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram. JAMA Cardiol. 4, 428–436 (2019).
Singh, G., Gehr, T., Mirman, M., Püschel, M. & Vechev, M. Fast and effective robustness certification. In Advances in Neural Information Processing Systems 10802–10813 (NIPS, 2018).
Cohen, J., Rosenfeld, E. & Kolter, Z. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning 1310–1320 (ICML, 2019).
Julian, K. D., Kochenderfer, M. J. & Owen, M. P. Deep neural network compression for aircraft collision avoidance systems. J. Guid. Control Dyn. 42, 598–608 (2018).
Nguyen, M. T., Van Nguyen, B. & Kim, K. Deep feature learning for sudden cardiac arrest detection in automated external defibrillators. Sci. Rep. 8, 17196 (2018).
Lyle, J. & Martin, A. Trusted Computing and Provenance: Better Together (Usenix, 2010).
Uesato, J., O’Donoghue, B., Kohli, P. & Oord, A. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning 5025–5034 (ICML, 2018).
Saria, S. & Subbaswamy, A. Tutorial: safe and reliable machine learning. Preprint at https://arxiv.org/pdf/1904.07204.pdf (2019).
Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR, 2015).
McSharry, P. E., Clifford, G. D., Tarassenko, L. & Smith, L. A. A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 50, 289–294 (2003).
LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).
Krizhevsky, A. & Hinton, G. Learning Multiple Layers of Features from Tiny Images (Citeseer, 2009).
Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K. & Madry, A. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems 5014–5026 (NIPS, 2018).
Acknowledgements
X.H. and R.R. were supported in part by the NIH under award no. R01HL148248. We thank W. Lee, S. Mohan, M. Goldstein, A. Li, A.M. Puli, H. Singh, M. Sudarshan, and W. Whitney.
Author information
Contributions
X.H. and R.R. designed the problem and performed all the experiments. All authors wrote the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Peer review information Michael Basson was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 An adversarial example created by Projected Gradient Descent (PGD) method.
This adversarial example contains square waves that are physiologically implausible.
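The square-wave artefact arises because each PGD step moves every sample by a fixed amount in the direction of the sign of the loss gradient, producing a piecewise-constant perturbation. A minimal NumPy sketch of this effect, using a toy synthetic trace and a stand-in gradient rather than the paper's network, illustrates how convolving the raw perturbation with a Gaussian kernel (one plausible smoothing choice; the exact filter here is an assumption) removes the jaggedness:

```python
import numpy as np

def pgd_step(x, grad, eps):
    """One FGSM/PGD-style step: move each sample by eps in the direction
    of the gradient's sign. sign() is piecewise constant, so the
    resulting perturbation looks like a square wave."""
    return x + eps * np.sign(grad)

def gaussian_kernel(width, sigma):
    t = np.arange(width) - width // 2
    k = np.exp(-t**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_perturbation(delta, width=31, sigma=5.0):
    """Convolve the raw square-wave perturbation with a Gaussian kernel
    so the perturbed trace stays physiologically plausible."""
    return np.convolve(delta, gaussian_kernel(width, sigma), mode="same")

# Toy demo on a synthetic single-lead trace with a made-up gradient.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 2000))   # stand-in for an ECG trace
grad = rng.standard_normal(x.size)             # stand-in for dLoss/dx
delta_raw = pgd_step(x, grad, eps=0.05) - x    # square-wave artefact
delta_smooth = smooth_perturbation(delta_raw)

# Total variation measures jaggedness; it drops sharply after smoothing.
tv = lambda s: np.abs(np.diff(s)).sum()
```

Here `tv(delta_raw)` is far larger than `tv(delta_smooth)`, which is the sense in which the smoothed attack avoids the implausible square waves shown in the figure.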
Extended Data Fig. 2 Demonstration of three procedures to show the existence of the adversarial examples.
a, We add a small amount of Gaussian noise to the adversarial example and smooth it to create a new signal. b, For intersected signals, we concatenate the left half of one signal with the right half of the other to create a new one. c, We sample uniformly from the band and smooth to create a new signal.
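The three procedures above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the caption only: the moving-average smoother, noise scale, and synthetic traces are assumptions, not the paper's exact implementation.

```python
import numpy as np

def smooth(sig, width=15):
    """Simple moving-average smoother (stand-in for the smoothing step)."""
    k = np.ones(width) / width
    return np.convolve(sig, k, mode="same")

def perturb_and_smooth(adv, noise_std=0.01, rng=None):
    """(a) Add a small amount of Gaussian noise to a known adversarial
    example, then smooth, to create a new candidate signal."""
    rng = np.random.default_rng() if rng is None else rng
    return smooth(adv + rng.normal(0.0, noise_std, adv.size))

def splice(adv_a, adv_b):
    """(b) For intersected signals, concatenate the left half of one
    with the right half of the other."""
    mid = adv_a.size // 2
    return np.concatenate([adv_a[:mid], adv_b[mid:]])

def sample_band(adv_a, adv_b, rng=None):
    """(c) Sample uniformly, point by point, from the band between two
    adversarial examples, then smooth the result."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.minimum(adv_a, adv_b), np.maximum(adv_a, adv_b)
    return smooth(rng.uniform(lo, hi))

# Demo with two synthetic "adversarial" traces of the same length.
rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 1000)
adv1 = np.sin(t) + 0.02 * rng.standard_normal(t.size)
adv2 = np.sin(t) + 0.02 * rng.standard_normal(t.size)

new_a = perturb_and_smooth(adv1, rng=rng)
new_b = splice(adv1, adv2)
new_c = sample_band(adv1, adv2, rng=rng)
```

Each call yields a new signal of the same length as its inputs, which is the sense in which known adversarial examples can be collated and perturbed to mass-produce new ones.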
About this article
Cite this article
Han, X., Hu, Y., Foschini, L. et al. Deep learning models for electrocardiograms are susceptible to adversarial attack. Nat Med 26, 360–363 (2020). https://doi.org/10.1038/s41591-020-0791-x