
Principles alone cannot guarantee ethical AI

Abstract

Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.



Acknowledgements

The author would like to thank S. Wachter, B. Prainsack and B. Stahl for their insightful feedback, which has immensely improved the quality of this work. Financial support for this work was provided by the Alan Turing Institute (EPSRC) and the British Academy.

Author information


Corresponding author

Correspondence to Brent Mittelstadt.

Ethics declarations

Competing interests

The author has previously received reimbursement for conference-related travel from funding provided by DeepMind Technologies Limited.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat Mach Intell 1, 501–507 (2019). https://doi.org/10.1038/s42256-019-0114-4

