
An ethical trajectory planning algorithm for autonomous vehicles

A preprint version of the article is available on arXiv.


With the rise of artificial intelligence and automation, moral decisions that were formerly the preserve of humans are being put into the hands of algorithms. In autonomous driving, a variety of such decisions with ethical implications are made by algorithms for behaviour and trajectory planning. Therefore, here we present an ethical trajectory planning algorithm with a framework that aims at a fair distribution of risk among road users. Our implementation incorporates a combination of five ethical principles: minimization of the overall risk, priority for the worst-off, equal treatment of people, responsibility and maximum acceptable risk. To the best of our knowledge, this is the first ethical algorithm for trajectory planning of autonomous vehicles in line with the 20 recommendations from the European Union Commission expert group and with general applicability to various traffic situations. We showcase the ethical behaviour of our algorithm in selected scenarios and provide an empirical analysis of the ethical principles in 2,000 scenarios. The code used in this research is available as open-source software.
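As a rough illustration of how such principles might be combined, the sketch below scores candidate trajectories with a weighted cost over per-road-user risks. All names, weights and risk values are hypothetical assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: combine ethical principles into one trajectory cost.
# Weights, function names and risk values are illustrative assumptions only.

def ethical_cost(risks, w_bayes=0.5, w_maximin=0.25, w_equal=0.25, r_max=0.01):
    """Score a trajectory from per-road-user risks (probability x harm)."""
    if max(risks) > r_max:           # maximum acceptable risk: hard constraint
        return float("inf")
    bayes = sum(risks) / len(risks)  # minimization of the overall risk
    maximin = max(risks)             # priority for the worst-off
    # equal treatment: penalize deviation from the mean risk
    equal = sum(abs(r - bayes) for r in risks) / len(risks)
    return w_bayes * bayes + w_maximin * maximin + w_equal * equal

# Select the candidate trajectory with the lowest combined cost.
candidates = {"swerve": [0.002, 0.004], "brake": [0.001, 0.009]}
best = min(candidates, key=lambda k: ethical_cost(candidates[k]))
# best == "swerve": lower overall risk and a more even distribution
```

The responsibility principle could enter this sketch as a further adjustment of the per-user risk values (for example, weighting risks by rule compliance) before the cost terms are computed.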


Fig. 1: Trajectory planning based on risk distribution.
Fig. 2: Visualization of the maximum acceptable risk principle.
Fig. 3: Schematic example for the responsibility principle in the case of a rule violation.
Fig. 4: Risk distribution of the highest 100 occurring risks.
Fig. 5: Cumulated personal harm resulting from accidents during 2,000 simulated scenarios.
Fig. 6: Comparison of the ethical principles by the average deviation of the corresponding selected trajectories based on 2,000 simulated scenarios.

Data availability

All data gathered in this research are available via figshare. This includes the evaluation files for the simulated scenarios with the three shown algorithms and a sample log file for one scenario. All remaining log files are available upon request. Source data are provided with this paper.

Code availability

The algorithm for trajectory planning (ref. 40), as well as corresponding tools for analysis and visualization, is available as open-source software.


  1. Lin, P. in Autonomous Driving: Technical, Legal and Social Aspects (eds Maurer, M. et al.) 69–85 (Springer, 2016).

  2. Kriebitz, A., Max, R. & Lütge, C. The German Act on Autonomous Driving: why ethics still matters. Phil. Technol. 35, 29 (2022).

  3. Vehicle Automation Report #HWY18MH010 (NTSB, 2018).

  4. Gill, T. Ethical dilemmas are really important to potential adopters of autonomous vehicles. Ethics Inf. Technol. 23, 657–673 (2021).

  5. Thornton, S. M., Pan, S., Erlien, S. M. & Gerdes, J. C. Incorporating ethical considerations into automated vehicle control. IEEE Trans. Intell. Transp. Syst. 18, 1429–1439 (2017).

  6. Wang, H., Huang, Y., Khajepour, A., Cao, D. & Lv, C. Ethical decision-making platform in autonomous vehicles with lexicographic optimization based model predictive controller. IEEE Trans. Veh. Technol. 69, 8164–8175 (2020).

  7. Geisslinger, M., Poszler, F., Betz, J., Lütge, C. & Lienkamp, M. Autonomous driving ethics: from trolley problem to ethics of risk. Phil. Technol. 34, 1033–1055 (2021).

  8. Hübner, D. & White, L. Crash algorithms for autonomous cars: how the trolley problem can move us beyond harm minimisation. Ethical Theory Moral Pract. 21, 685–698 (2018).

  9. Bhargava, V. & Kim, T. W. in Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (eds Lin, P. et al.) 5–19 (Oxford Academic, 2017).

  10. Keeling, G., Evans, K., Thornton, S. M., Mecacci, G. & Santoni de Sio, F. Four perspectives on what matters for the ethics of automated vehicles. Road Veh. Autom. 6, 49–60 (2019).

  11. Goodall, N. J. Away from trolley problems and toward risk management. Appl. Artif. Intell. 30, 810–821 (2016).

  12. Reuel, A. K., Koren, M., Corso, A. & Kochenderfer, M. J. Using adaptive stress testing to identify paths to ethical dilemmas in autonomous systems. In Proc. Workshop on Artificial Intelligence Safety 2022 (SafeAI 2022), CEUR Workshop Proc. Vol. 3087 (2022).

  13. Bonnefon, J. F., Shariff, A. & Rahwan, I. The trolley, the bull bar, and why engineers should care about the ethics of autonomous cars. Proc. IEEE 107, 502–504 (2019).

  14. Horizon 2020 Commission Expert Group. Ethics of Connected and Automated Vehicles: Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility (Publications Office of the European Union, 2020).

  15. Luetge, C. The German ethics code for automated and connected driving. Phil. Technol. 30, 547–558 (2017).

  16. Xiao, W., Cassandras, C. G. & Belta, C. A. Bridging the gap between optimal trajectory planning and safety-critical control with applications to autonomous vehicles. Automatica 129, 109592 (2021).

  17. Nyberg, T., Pek, C., Dal Col, L., Noren, C. & Tumova, J. Risk-aware motion planning for autonomous vehicles with safety specifications. In IEEE Intelligent Vehicles Symposium 1016–1023 (IEEE, 2021).

  18. Zheng, L., Zeng, P., Yang, W., Li, Y. & Zhan, Z. Bézier curve-based trajectory planning for autonomous vehicles with collision avoidance. IET Intell. Transp. Syst. 14, 1882–1891 (2020).

  19. Jasour, A., Huang, X., Wang, A. & Williams, B. C. Fast nonlinear risk assessment for autonomous vehicles using learned conditional probabilistic models of agent futures. Auton. Robot. 46, 269–282 (2021).

  20. Blake, A. et al. FPR—Fast Path Risk algorithm to evaluate collision probability. IEEE Robot. Autom. Lett. 5, 1–7 (2019).

  21. Bonnefon, J. F., Shariff, A. & Rahwan, I. The social dilemma of autonomous vehicles. Science 352, 1573–1576 (2016).

  22. Nida-Rümelin, J., Schulenburg, J. & Rath, B. Risikoethik (De Gruyter, 2012).

  23. Rawls, J. A Theory of Justice (Harvard Univ. Press, 1971).

  24. Awad, E. et al. The moral machine experiment. Nature 563, 59–64 (2018).

  25. Contissa, G., Lagioia, F. & Sartor, G. The ethical knob: ethically-customisable automated vehicles and the law. Artif. Intell. Law 25, 365–378 (2017).

  26. Applin, S. Autonomous vehicle ethics: stock or custom? IEEE Consum. Electron. Mag. 6, 108–110 (2017).

  27. Goodall, N. Ethical decision making during automated vehicle crashes. Transp. Res. Rec. 2424, 58–65 (2014).

  28. Hansson, S. O., Belin, M. Å. & Lundgren, B. Self-driving vehicles—an ethical overview. Phil. Technol. 34, 1383–1408 (2021).

  29. Trautman, P. & Krause, A. Unfreezing the robot: navigation in dense, interacting crowds. In IEEE/RSJ International Conference on Intelligent Robots and Systems 797–803 (IEEE, 2010).

  30. World Forum for Harmonization of Vehicle Regulations. Framework Document on Automated/Autonomous Vehicles (UNECE, 2020).

  31. Shariff, A., Bonnefon, J. F. & Rahwan, I. How safe is safe enough? Psychological mechanisms underlying extreme safety demands for self-driving cars. Transp. Res. Part C 126, 103069 (2021).

  32. Liu, P., Yang, R. & Xu, Z. How safe is safe enough for self-driving vehicles? Risk Anal. 39, 315–325 (2019).

  33. Harsanyi, J. C. Bayesian decision theory and utilitarian ethics. Am. Econ. Rev. 68, 223–228 (1978).

  34. Faulhaber, A. K. et al. Human decisions in moral dilemmas are largely described by utilitarianism: virtual car driving study provides guidelines for autonomous driving vehicles. Sci. Eng. Ethics 25, 399–418 (2019).

  35. Pek, C., Manzinger, S., Koschi, M. & Althoff, M. Using online verification to prevent autonomous vehicles from causing accidents. Nat. Mach. Intell. 2, 518–528 (2020).

  36. Shalev-Shwartz, S., Shammah, S. & Shashua, A. On a formal model of safe and scalable self-driving cars. Preprint (2017).

  37. Maierhofer, S., Moosbrugger, P. & Althoff, M. Formalization of intersection traffic rules in temporal logic. In IEEE Intelligent Vehicles Symposium 1135–1144 (IEEE, 2022).

  38. Yoshida, J. Robotaxi priorities: avoid crashes or avoid blame? Ojo-Yoshida Report (2022).

  39. Kauppinen, A. Who should bear the risk when self-driving vehicles crash? J. Appl. Phil. 38, 630–645 (2020).

  40. Geisslinger, M. & TUM - Institute of Automotive Technology. TUMFTM/EthicalTrajectoryPlanning: initial release. Zenodo (2022).

  41. Althoff, M. Reachability analysis and its application to the safety assessment of autonomous cars. PhD thesis, Fak. für Elektrotechnik und Informationstechnik (2010).

  42. Evans, K., de Moura, N., Chauvier, S., Chatila, R. & Dogan, E. Ethical decision making in autonomous vehicles: the AV ethics project. Sci. Eng. Ethics 26, 3285–3312 (2020).

  43. Althoff, M., Koschi, M. & Manzinger, S. CommonRoad: composable benchmarks for motion planning on roads. In IEEE Intelligent Vehicles Symposium 719–726 (IEEE, 2017).

  44. Gogoll, J. & Müller, J. F. Autonomous cars: in favor of a mandatory ethics setting. Sci. Eng. Ethics 23, 681–700 (2017).

  45. De Freitas, J. et al. From driverless dilemmas to more practical commonsense tests for automated vehicles. Proc. Natl. Acad. Sci. USA 118, e2010202118 (2021).

  46. Werling, M., Ziegler, J., Kammel, S. & Thrun, S. Optimal trajectory generation for dynamic street scenarios in a Frenét frame. In Proc. IEEE International Conference on Robotics and Automation 987–993 (IEEE, 2010).

  47. Hansson, S. O. The Ethics of Risk (Palgrave Macmillan, 2013).

  48. Geisslinger, M., Karle, P., Betz, J. & Lienkamp, M. Watch-and-learn-net: self-supervised online learning for probabilistic vehicle trajectory prediction. In IEEE International Conference on Systems, Man, and Cybernetics 869–875 (IEEE, 2021).

  49. WHOQOL: Measuring Quality of Life (WHO, 2012).

  50. Lütge, C. et al. AI4People: ethical guidelines for the automotive sector: fundamental requirements and practical recommendations. Int. J. Technoethics 12, 101–125 (2021).

  51. Crash Report Sampling System (National Highway Traffic Safety Administration).

  52. Gennarelli, T. A. & Wodzin, E. AIS 2005: a contemporary injury scale. Injury 37, 1083–1091 (2006).



Acknowledgements

M.G. and F.P. received financial support from the Technical University of Munich—Institute for Ethics in Artificial Intelligence (IEAI). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the IEAI or its partners.

Author information

M.G., as the first author, initiated the idea for this paper and made essential contributions to its conception, implementation, content and experimental results. F.P. contributed to the conception and revised the paper critically. M.L. made an essential contribution to the conception of the research project and revised the paper critically for important intellectual content. He gave final approval of the version to be published and, as guarantor, accepts responsibility for the overall integrity of the paper.

Corresponding author

Correspondence to Maximilian Geisslinger.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Anthony Corso and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Our ethical trajectory planning algorithm in four steps.

The small orange balls symbolize trajectories that are sampled in the first step. Next, the trajectories are subjected to validity checks, visualized here as a filter screen (Step 2). Only trajectories of the highest available validity level (here: five trajectories from ‘valid’) are assigned costs, where higher costs are rendered with higher transparency (Step 3). In the last step, the trajectory with the lowest cost is selected.
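The four steps can be sketched as a small selection pipeline; the function names and the toy validity levels below are assumptions for illustration, not the paper's actual interfaces.

```python
# Illustrative sketch of the four-step planner described in the caption.
# Sampling, validity checking and costing are passed in as stubs; the
# names and the 0/1 validity levels are assumptions for illustration.

def plan(sample, check_validity, cost):
    # Step 1: sample candidate trajectories.
    trajectories = sample()
    # Step 2: check validity; keep only the highest available validity level.
    levels = {t: check_validity(t) for t in trajectories}
    best_level = max(levels.values())
    candidates = [t for t, lvl in levels.items() if lvl == best_level]
    # Step 3: assign costs only to trajectories of that level.
    costs = {t: cost(t) for t in candidates}
    # Step 4: select the trajectory with the lowest cost.
    return min(costs, key=costs.get)

# Toy usage: three trajectories; t3 is invalid (level 0).
result = plan(
    sample=lambda: ["t1", "t2", "t3"],
    check_validity=lambda t: 0 if t == "t3" else 1,
    cost=lambda t: {"t1": 2.0, "t2": 1.0, "t3": 0.5}[t],
)
# result == "t2": t3 is filtered out despite having the lowest cost.
```

Filtering by validity before costing ensures a cheap but invalid trajectory can never win the selection.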

Extended Data Fig. 2 The usage of different ethical principles for risk distribution leads to different choices.

Three illustrative, fictitious scenarios, each simplified to a choice between two options (A and B), showcase the trade-offs in risk distribution under these principles. In every option there are two fictitious persons, each assigned a collision probability p and an estimated harm H. In each scenario, option A is the choice favoured by the respective principle, while option B may be the intuitive alternative for many people, showing that there are good reasons to incorporate all three principles rather than relying on a single one.
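With risk defined as collision probability times estimated harm, a minimal numeric example (the values of p and H below are invented) shows how two of the principles can rank the options differently:

```python
# Minimal illustration: two options, two persons each, risk = p * H.
# The (p, H) values are invented to show how the principles can disagree.
options = {
    "A": [(0.10, 0.2), (0.10, 0.2)],  # equal risks for both persons
    "B": [(0.01, 0.2), (0.18, 0.2)],  # lower total risk, but unequal
}

def risks(option):
    return [p * h for p, h in option]

total = {k: sum(risks(v)) for k, v in options.items()}  # overall risk
worst = {k: max(risks(v)) for k, v in options.items()}  # worst-off person
# Minimizing overall risk prefers B; prioritizing the worst-off prefers A.
```

Here option B has the smaller risk sum (0.038 versus 0.040) but exposes one person to a much higher individual risk, which is exactly the kind of trade-off the figure illustrates.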

Extended Data Fig. 3 Runtime analysis of the proposed algorithm.

Computation times are broken down separately by Prediction (bottom) and Planning (top) over the number of sampled trajectories. Our extensions compared with the state of the art, namely the risk assessment and the responsibility analysis, together require about 2 ms of computing time per trajectory. The analysis was performed with a prototype implementation without parallelization on a single thread of an Intel Core i7 (ninth generation) laptop CPU.

Source data

Supplementary information

Supplementary Data 1

Evaluation files for the simulated scenarios with the three shown algorithms and a sample log file for an exemplary scenario.

Source data

Source Data Fig. 4

Statistical source data.

Source Data Fig. 5

Statistical source data.

Source Data Fig. 6

Statistical source data.

Source Data Extended Data Fig. 3

Statistical source data.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Geisslinger, M., Poszler, F. & Lienkamp, M. An ethical trajectory planning algorithm for autonomous vehicles. Nat Mach Intell 5, 137–144 (2023).


