
Perspective

Trusting artificial intelligence in cybersecurity is a double-edged sword

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting growing attention from the private and public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
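The kind of attack on AI applications mentioned above can be made concrete with a short sketch. Below is a minimal, self-contained illustration of a gradient-based evasion attack in the spirit of the adversarial machine learning literature; the linear ‘detector’, its weights and the perturbation budget are hypothetical assumptions made for this example, not details from the Perspective itself.

```python
# Minimal sketch of an evasion attack on a hypothetical linear "detector".
# Every name and number here is an illustrative assumption, not the
# authors' method or any real product's model.
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=20)   # weights of a toy logistic "malware" classifier
b = 0.0

def score(x):
    """Probability the toy detector assigns to the 'malicious' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=20)   # a feature vector for one sample
x += 0.5 * w              # shift it so the detector flags it as malicious

# Fast-gradient-sign-style evasion: nudge every feature against the
# gradient of the malicious score, under a small budget epsilon.
epsilon = 0.3
p = score(x)
grad = p * (1.0 - p) * w  # d(score)/dx for the logistic model
x_adv = x - epsilon * np.sign(grad)

print(f"score before: {score(x):.3f}, after: {score(x_adv):.3f}")
```

Against a deployed system the gradient would be taken through a deep model, and the attacker would face feature-space constraints (for example, preserving an executable’s functionality). The sketch only shows that small, targeted perturbations can push a sample across a decision boundary, which is why such attacks bear directly on trusting AI in cybersecurity.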



Acknowledgements

L.F.’s and M.T.’s work was supported by the Privacy and Trust Stream (Social lead) of the PETRAS Internet of Things research hub; PETRAS is funded by the Engineering and Physical Sciences Research Council (EPSRC), grant agreement no. EP/N023013/1, Google UK Ltd, and Facebook Inc. Funding from Defence Science and Technology Laboratories and The Alan Turing Institute supported the organization of the research workshop on the ‘Ethics of AI in Cybersecurity’, which inspired this Perspective. We are grateful to M. Ramili (YOROI) and to the participants in the workshop ‘Ethics of AI in Cybersecurity’, hosted in March 2019 by the Digital Ethics Lab, Oxford Internet Institute, University of Oxford and the UK Defence Science and Technology Laboratories, for their feedback.

Author information


Corresponding author

Correspondence to Mariarosaria Taddeo.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Taddeo, M., McCutcheon, T. & Floridi, L. Trusting artificial intelligence in cybersecurity is a double-edged sword. Nat Mach Intell 1, 557–560 (2019). https://doi.org/10.1038/s42256-019-0109-1
