
  • Comment

Distinguishing two features of accountability for AI technologies

Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.


Acknowledgements

We thank P. Noordhof and T. Stoneham for their comments. This work was supported by the Engineering and Physical Sciences Research Council (EP/W011239/1) and the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York.

Author information


Corresponding author

Correspondence to Zoe Porter.

Ethics declarations

Competing interests

T.L. is Head of Clinical AI at Bradford Teaching Hospitals NHS Foundation Trust. The remaining authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Jacob Metcalf and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article


Cite this article

Porter, Z., Zimmermann, A., Morgan, P. et al. Distinguishing two features of accountability for AI technologies. Nat Mach Intell 4, 734–736 (2022). https://doi.org/10.1038/s42256-022-00533-0

