
News & Views

NEURAL NETWORKS

Bringing robustness against adversarial attacks

Adversarial attacks make imperceptible changes to a neural network’s input so that the network recognizes it as something entirely different. This flaw can give us insight into how these networks work and how to make them more robust.


Fig. 1: Illustration of a simple adversarial example, generated by adding small perturbations to an original image to fool a deep neural network.
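
To make the idea concrete, the following is a minimal sketch of one widely used way to generate such perturbations, the fast gradient sign method (FGSM). The model, input data and step size below are illustrative placeholders and are not drawn from the article or from the figure; the sketch assumes a PyTorch image classifier with pixel values in [0, 1].

```python
# Minimal FGSM sketch (illustrative only; model, data and epsilon are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `x` perturbed by at most `epsilon` per pixel in the
    direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient:
    # tiny per pixel, but chosen precisely to mislead the classifier.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    # Toy stand-in for a trained image classifier (hypothetical).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()
    image = torch.rand(1, 1, 28, 28)   # one fake 28x28 greyscale image
    label = torch.tensor([3])          # its (pretend) true class
    adversarial = fgsm_attack(model, image, label)
    print("max pixel change:", (adversarial - image).abs().max().item())
```

Because each pixel moves by at most epsilon, the perturbed image is visually indistinguishable from the original, yet the perturbation direction is chosen to flip the classifier’s prediction, which is exactly the failure mode the figure illustrates.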


Author information

Correspondence to Gean T. Pereira or André C. P. L. F. de Carvalho.


Cite this article

Pereira, G.T., de Carvalho, A.C.P.L.F. Bringing robustness against adversarial attacks. Nat Mach Intell 1, 499–500 (2019). https://doi.org/10.1038/s42256-019-0116-2

