Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities [1]. However, deep learning classifiers are susceptible to adversarial examples, which are crafted from raw data to fool the classifier into assigning the example to the wrong class, yet are undetectable to the human eye [2,3]. Adversarial examples have also been created for medical-related tasks [4,5]. However, traditional attack methods for creating adversarial examples do not extend directly to ECG signals, as they introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation, and show that a deep learning model for arrhythmia detection from single-lead ECG [6] is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist.
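The smoothing idea can be illustrated with a minimal sketch (the function names and parameters below are illustrative assumptions, not the authors' released code): convolving a raw adversarial perturbation with a Gaussian kernel suppresses the abrupt square-wave steps that standard sign-based attacks such as PGD introduce, which are not physiologically plausible in an ECG tracing.

```python
import numpy as np

def gaussian_kernel(width=25, sigma=5.0):
    """Discrete Gaussian kernel, normalized to sum to 1."""
    x = np.arange(width) - width // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_perturbation(delta, width=25, sigma=5.0):
    """Convolve a raw adversarial perturbation with a Gaussian kernel,
    removing the high-frequency square-wave steps that sign-based
    attacks introduce into ECG signals."""
    return np.convolve(delta, gaussian_kernel(width, sigma), mode="same")

# A square-wave-like perturbation, as a PGD sign update would produce:
rng = np.random.default_rng(0)
delta = np.sign(rng.standard_normal(1000)) * 0.1
smoothed = smooth_perturbation(delta)
# The smoothed perturbation has much smaller sample-to-sample jumps,
# so it no longer shows visible square-wave artefacts.
```

In the paper's setting, a constraint of this form would be folded into the attack's optimization loop rather than applied once after the fact; this sketch only shows why smoothing removes the implausible artefacts.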
The dataset can be accessed from https://physionet.org/challenge/2017/.
The code is available from https://github.com/XintianHan/ADV_ECG.
Hannun, A. Y. et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat. Med. 25, 65–69 (2019).
Szegedy, C. et al. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR, 2014).
Biggio, B. et al. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases Vol. 8190 (eds Blockeel, H., Kersting, K., Nijssen, S. & Železný, F.) 387–402 (Springer, 2013).
Finlayson, S. G. et al. Adversarial attacks on medical machine learning. Science 363, 1287–1289 (2019).
Paschali, M., Conjeti, S., Navarro, F. & Navab, N. Generalizability vs. robustness: investigating medical imaging networks using adversarial examples. In International Conference on Medical Image Computing and Computer-Assisted Intervention 493–501 (Springer, 2018).
Clifford, G. D. et al. AF classification from a short single lead ECG recording: the PhysioNet/Computing in Cardiology Challenge 2017. In 2017 Computing in Cardiology (CinC) 1–4 (IEEE, 2017).
Kelly, B. B. & Fuster, V. Promoting Cardiovascular Health in the Developing World: a Critical Challenge to Achieve Global Health (National Academies Press, 2010).
Kennedy, H. L. The evolution of ambulatory ECG monitoring. Prog. Cardiovasc. Dis. 56, 127–132 (2013).
Hong, S. et al. ENCASE: an ENsemble ClASsifiEr for ECG classification using expert features and deep neural networks. In 2017 Computing in Cardiology (CinC) 1–4 (IEEE, 2017).
Chen, H., Huang, C., Huang, Q. & Zhang, Q. ECGadv: generating adversarial electrocardiogram to misguide arrhythmia classification system. Preprint at arXiv https://arxiv.org/pdf/1901.03808.pdf (2019).
Goodfellow, S. D. et al. Towards understanding ECG rhythm classification using convolutional neural networks and attention mappings. In Proceedings of the 3rd Machine Learning for Healthcare Conference 83–101 (PMLR, 2018).
Madry, A., Makelov, A., Schmidt, L., Tsipras, D. & Vladu, A. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR, 2018).
Kurakin, A., Goodfellow, I. & Bengio, S. Adversarial machine learning at scale. In International Conference on Learning Representations (ICLR, 2017).
Kwon, J.-M. et al. Development and validation of deep-learning algorithm for electrocardiography-based heart failure identification. Korean Circ. J. 49, 629–639 (2019).
Galloway, C. D. et al. Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram. JAMA Cardiol. 4, 428–436 (2019).
Singh, G., Gehr, T., Mirman, M., Püschel, M. & Vechev, M. Fast and effective robustness certification. In Advances in Neural Information Processing Systems 10802–10813 (NIPS, 2018).
Cohen, J., Rosenfeld, E. & Kolter, Z. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning 1310–1320 (ICML, 2019).
Julian, K. D., Kochenderfer, M. J. & Owen, M. P. Deep neural network compression for aircraft collision avoidance systems. J. Guid. Control Dyn. 42, 598–608 (2018).
Nguyen, M. T., Van Nguyen, B. & Kim, K. Deep feature learning for sudden cardiac arrest detection in automated external defibrillators. Sci. Rep. 8, 17196 (2018).
Lyle, J. & Martin, A. Trusted Computing and Provenance: Better Together (Usenix, 2010).
Uesato, J., O’Donoghue, B., Kohli, P. & Oord, A. Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning 5025–5034 (ICML, 2018).
Saria, S. & Subbaswamy, A. Tutorial: safe and reliable machine learning. Preprint at https://arxiv.org/pdf/1904.07204.pdf (2019).
Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR, 2015).
McSharry, P. E., Clifford, G. D., Tarassenko, L. & Smith, L. A. A dynamical model for generating synthetic electrocardiogram signals. IEEE Trans. Biomed. Eng. 50, 289–294 (2003).
LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).
Krizhevsky, A. & Hinton, G. Learning Multiple Layers of Features from Tiny Images (Citeseer, 2009).
Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K. & Madry, A. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems 5014–5026 (NIPS, 2018).
X.H. and R.R. were supported in part by the NIH under award no. R01HL148248. We thank W. Lee, S. Mohan, M. Goldstein, A. Li, A.M. Puli, H. Singh, M. Sudarshan, and W. Whitney.
The authors declare no competing interests.
Peer review information Michael Basson was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This adversarial example contains square waves that are physiologically implausible.
Extended Data Fig. 2 Demonstration of three procedures for generating new adversarial examples from known ones.
a, We add a small amount of Gaussian noise to the adversarial example and smooth it to create a new signal. b, For intersected signals, we concatenate the left half of one signal with the right half of the other to create a new one. c, We sample uniformly from the band and smooth to create a new signal.
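The three procedures in the caption can be sketched as follows (illustrative NumPy code; the `smooth` helper and all parameter values are assumptions standing in for the paper's smoothing step, not the released implementation):

```python
import numpy as np

def smooth(x, width=25, sigma=5.0):
    """Gaussian smoothing, standing in for the paper's smoothing step."""
    t = np.arange(width) - width // 2
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(x, k / k.sum(), mode="same")

rng = np.random.default_rng(0)

def perturb_and_smooth(adv, noise_std=0.01):
    """(a) Add a small amount of Gaussian noise to a known adversarial
    example, then smooth, to create a new signal."""
    return smooth(adv + rng.normal(0.0, noise_std, adv.shape))

def concatenate_halves(adv1, adv2):
    """(b) For intersected signals, join the left half of one with the
    right half of the other to create a new signal."""
    mid = len(adv1) // 2
    return np.concatenate([adv1[:mid], adv2[mid:]])

def sample_band(lower, upper):
    """(c) Sample uniformly inside the band [lower, upper] at each time
    step, then smooth, to create a new signal."""
    return smooth(rng.uniform(lower, upper))

# Two stand-in adversarial perturbations on a 1,000-sample tracing:
adv1 = smooth(rng.standard_normal(1000))
adv2 = smooth(rng.standard_normal(1000))
new_a = perturb_and_smooth(adv1)
new_b = concatenate_halves(adv1, adv2)
new_c = sample_band(np.minimum(adv1, adv2), np.maximum(adv1, adv2))
```

Each candidate produced this way would still need to be checked against the classifier to confirm it remains adversarial; the sketch only shows how the three constructions generate new signals from known ones.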
Cite this article
Han, X., Hu, Y., Foschini, L. et al. Deep learning models for electrocardiograms are susceptible to adversarial attack. Nat Med 26, 360–363 (2020). https://doi.org/10.1038/s41591-020-0791-x