Abstract
Machine learning algorithms are powerful tools for data-driven tasks such as image classification and feature detection. However, their vulnerability to adversarial examples—input samples manipulated to fool the algorithm—remains a serious challenge. The integration of machine learning with quantum computing has the potential to yield tools offering not only better accuracy and computational efficiency, but also superior robustness against adversarial attacks. Indeed, recent work has employed quantum-mechanical phenomena to defend against adversarial attacks, spurring the rapid development of the field of quantum adversarial machine learning (QAML) and potentially yielding a new source of quantum advantage. Despite promising early results, there remain challenges in building robust real-world QAML tools. In this Perspective, we discuss recent progress in QAML and identify key challenges. We also suggest future research directions that could determine the route to practicality for QAML approaches as quantum computing hardware scales up and noise levels are reduced.
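The abstract's notion of an adversarial example can be made concrete with a short sketch. The snippet below is purely illustrative and not taken from the Perspective: it applies the fast gradient sign method (FGSM), the canonical white-box attack, to a toy logistic-regression classifier whose weights and inputs are hypothetical random values.

```python
# A minimal sketch (not from the paper) of the fast gradient sign method (FGSM),
# one standard way of constructing the adversarial examples the abstract refers to.
# A tiny logistic-regression "classifier" stands in for a trained model; the weight
# vector, bias and inputs are illustrative assumptions, not values from the article.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_gradient_wrt_input(x, y, w, b):
    # For binary cross-entropy loss L(x, y), the gradient with respect to the
    # input x is (p - y) * w, where p is the predicted probability.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm_attack(x, y, w, b, epsilon):
    # Perturb the input in the direction that maximally increases the loss,
    # bounded in the L-infinity norm by epsilon.
    grad = loss_gradient_wrt_input(x, y, w, b)
    return x + epsilon * np.sign(grad)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=16), 0.0      # hypothetical trained parameters
    x, y = rng.normal(size=16), 1        # a clean input with true label 1
    x_adv = fgsm_attack(x, y, w, b, epsilon=0.1)
    print("clean score:", sigmoid(w @ x + b))
    print("adversarial score:", sigmoid(w @ x_adv + b))
```

Even a small epsilon moves the classifier's score away from the true label while leaving the input visually almost unchanged; this fragility is what both classical defences and the QAML approaches discussed in the Perspective aim to mitigate.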
Acknowledgements
We acknowledge useful discussions with S. Gicev and G. Mooney. M.T.W. acknowledges the support of the Australian Government Research Training Program Scholarship. S.M.E. is supported in part by the Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) DE220100680. We acknowledge support from Australian Army Research through the Quantum Technology Challenge (QTC22). Computational resources were provided by the National Computational Infrastructure (NCI) and the Pawsey Supercomputing Centre through the National Computational Merit Allocation Scheme (NCMAS).
Author information
Contributions
M.U. conceived and supervised the project. M.T.W. and M.U. wrote the manuscript, with input from all authors.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Machine Intelligence thanks Dong-Ling Deng, Christa Zoufal and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
West, M. T., Tsang, S.-L., Low, J. S. et al. Towards quantum enhanced adversarial robustness in machine learning. Nat. Mach. Intell. 5, 581–589 (2023). https://doi.org/10.1038/s42256-023-00661-1