Researchers from the Technical University of Munich (TUM) have developed the first ethical algorithm for self-driving cars, according to a study published in the scientific journal Nature Machine Intelligence.
Ethical questions are important when developing algorithms for self-driving cars. The software needs to handle unexpected situations and make the best possible decisions to avoid accidents. A team from the Technical University of Munich, Germany, has developed the first ethical algorithm that distributes risk fairly among all parties involved, rather than operating on an either/or principle when trying to avoid accidents.
About 2,000 different scenarios involving potentially critical or fatal accidents were tested on various types of streets and across regions in Europe, the USA, and China. “Until now, autonomous vehicles were always faced with an either/or choice when encountering an ethical decision. But street traffic can’t necessarily be divided into clear-cut, black and white situations; much more, the countless gray shades in between have to be considered as well. Our algorithm weighs various risks and makes an ethical choice from among thousands of possible behaviors – and does so in a matter of only a fraction of a second,” said Maximilian Geisslinger, a scientist at the TUM Chair of Automotive Technology.
The basic ethical parameters used were defined by an expert panel on behalf of the EU Commission in 2020. The suggested principles include prioritizing the worst-off and distributing risk evenly among all road users. To translate these into mathematical models, the team classified vehicles and people moving in the street according to the risk they pose to others. For example, a truck can cause severe damage to other road users while suffering only minor damage itself. In contrast, cyclists are less likely to cause serious damage to other vehicles but can suffer potentially life-threatening injuries.
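This classification can be pictured as each road user carrying two values: the harm they can inflict on others and the harm they would suffer in a collision. The sketch below is purely illustrative (the class, field names, and numbers are assumptions, not taken from the published algorithm):

```python
# Illustrative sketch: road users classified by the harm they can cause
# to others versus the harm they themselves would suffer in a collision.
# All names and numeric values are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class RoadUser:
    name: str
    harm_caused: float    # relative harm this user can inflict on others (0..1)
    harm_suffered: float  # relative harm this user would suffer (0..1)

truck   = RoadUser("truck",   harm_caused=0.9, harm_suffered=0.2)
car     = RoadUser("car",     harm_caused=0.6, harm_suffered=0.5)
cyclist = RoadUser("cyclist", harm_caused=0.2, harm_suffered=0.9)
```

The asymmetry described in the article shows up directly: the truck scores high on harm caused and low on harm suffered, while the cyclist is the reverse.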
The next step involved setting a maximum acceptable risk in each scenario. The authors also added a variable to the calculations to account for the responsibility of each road user. This included, for example, their responsibility to follow traffic rules.
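One way to read these two steps is as an expected-harm calculation checked against a threshold, with a responsibility factor modulating how much risk a rule-following road user may be exposed to. The function, threshold value, and weighting below are assumptions for illustration, not the paper's formulas:

```python
# Illustrative sketch: risk as expected harm (probability x severity),
# scaled by a responsibility factor and checked against a maximum
# acceptable risk. Threshold and weighting are hypothetical.
MAX_ACCEPTABLE_RISK = 0.05  # illustrative threshold, not from the paper

def risk(p_collision: float, harm: float, responsibility: float = 1.0) -> float:
    """Expected harm for one road user in one scenario.

    A `responsibility` factor below 1.0 could model a user who contributed
    to the danger, e.g. by breaking a traffic rule, and therefore bears
    a larger share of the risk themselves.
    """
    return p_collision * harm * responsibility

def acceptable(p_collision: float, harm: float, responsibility: float = 1.0) -> bool:
    return risk(p_collision, harm, responsibility) <= MAX_ACCEPTABLE_RISK
```

Under this reading, a low-probability, low-harm maneuver passes the check while a likely, severe collision does not.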
Previous studies considered only a small number of options for dealing with critical situations; whenever the situation was unclear, the car simply stopped. The new algorithm includes more possible alternatives with less risk for everyone. For example, suppose a car wants to overtake a cyclist while a truck is approaching in the opposite direction. Can the car overtake while maintaining a safe distance from the cyclist and avoiding a collision with the oncoming traffic? What is the risk for each road user, including the self-driving car?
The new software waits until the risk is acceptable for all participants: in this case, it overtakes only after the truck has passed. The decision is no longer an absolute yes or no; the car weighs a large number of options.
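The overtaking decision can be sketched as a selection over candidate maneuvers: keep only those in which every participant stays under the maximum acceptable risk, then, in line with the prioritize-the-worst-off principle, pick the one whose most-exposed participant bears the least risk. The function, maneuver names, and risk numbers below are hypothetical; the published algorithm plans full trajectories rather than picking from a fixed menu:

```python
# Illustrative sketch: choose among candidate maneuvers by filtering out
# any option that exposes some participant to more than the maximum
# acceptable risk, then minimizing the worst participant's risk.
# All names and values are hypothetical.
def choose_maneuver(candidates: dict[str, list[float]], max_risk: float = 0.05) -> str:
    """candidates maps maneuver name -> risk borne by each participant."""
    safe = {m: risks for m, risks in candidates.items() if max(risks) <= max_risk}
    if not safe:
        return "wait"  # no acceptable option: hold back, as the old approach did
    # prioritize the worst-off: minimize the maximum risk any participant bears
    return min(safe, key=lambda m: max(safe[m]))

# Overtaking example: risks for (car, cyclist, truck) under each maneuver
options = {
    "overtake_now":   [0.03, 0.12, 0.04],  # cyclist's risk exceeds the threshold
    "overtake_later": [0.02, 0.01, 0.00],  # wait until the truck has passed
}
```

Here `choose_maneuver(options)` discards the immediate overtake because the cyclist's risk is too high and selects overtaking after the truck has passed, matching the behavior described above.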
“Until now, often traditional ethical theories were contemplated to derive morally permissible decisions made by autonomous vehicles. This ultimately led to a dead end since, in many traffic situations, there was no other alternative than to violate one ethical principle,” said Franziska Poszler, a scientist at the TUM Chair of Business Ethics. “In contrast, our framework puts the ethics of risk at the center. This allows us to take into account probabilities to make more differentiated assessments.”
The new algorithm has so far been tested only in computer simulations. The team now wants to take the next step and trial it in self-driving cars in real-life situations. In addition, the code for the algorithm is available as open-source software.
Geisslinger, M., Poszler, F. & Lienkamp, M. An ethical trajectory planning algorithm for autonomous vehicles. Nat Mach Intell 5, 137–144 (2023). https://doi.org/10.1038/s42256-022-00607-z