Autonomous vehicles (AVs) are an amazing, emerging technology: robotics, sensors, artificial intelligence, and many more exponential technologies converging in one package. They are estimated to eliminate 90% of the accidents caused by human error, cut time spent in traffic by half, mitigate over 60% of greenhouse gas emissions and, overall, give us more productive time and safer roads. Right? Most of us are still skeptical, yet optimistic, about this.
The ethical conundrums of AVs
Many of us have encountered and tussled with the “trolley problem”, a thought experiment that weighs a utilitarian choice against a refusal to actively do harm. In other words, would you kill one person, or even yourself, to save another, or indeed several?
What does this have to do with AVs? It turns out to be genuinely hard, though not impossible, to encode morals and ethical philosophy in an algorithm. In these dilemmas, a human decision is a split-second reaction. But imagine a car programmed, whether by software engineers, companies or the government, to be utilitarian: the choice becomes purposeful and deliberate, arguably even premeditated killing. Should the car hit a pedestrian in order to save you (the passenger), or crash into a wall to save the pedestrians?
These questions are highly context-dependent; even so, the remaining 10% of accidents are NOT caused by human error but by the environment. Many challenges therefore lie ahead for AVs and the technologies involved.
As a society, we have absorbed many technological breakthroughs in the past (fire or elevators, for instance), and we have run many trials to find out whether they were viable. The issue with artificial intelligence (AI) is that people do not want to make mistakes, especially in transport, where it is important to assess which ethical trade-offs we are comfortable with.
Maybe there is someone who could give us a starting point on robot rules.
Asimov and ethical driving
Isaac Asimov formulated four crucial laws of robotics (his original three, plus a later “zeroth” law):

- A robot may not harm humanity or, by inaction, allow humanity to come to harm.
- A robot may not injure a human being or, by inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the preceding laws.
- A robot must protect its own existence, as long as such protection does not conflict with the preceding laws.
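As a rough illustration (not anything Asimov himself specified), the priority ordering of these laws can be modeled as a lexicographic comparison: an earlier law always outranks all later ones. The action fields and law predicates below are invented purely for the sketch.

```python
# Sketch: Asimov's laws as a lexicographic priority order.
# Each "law" flags an action as violating it (True) or not (False);
# comparing the resulting tuples makes an earlier law outrank all later ones.

def violations(action, laws):
    """Tuple of 0/1 flags, one per law, in priority order."""
    return tuple(int(law(action)) for law in laws)

def choose_action(actions, laws):
    """Pick the action whose violation profile is lexicographically smallest."""
    return min(actions, key=lambda a: violations(a, laws))

laws = [
    lambda a: a["harms_humanity"],  # zeroth law: highest priority
    lambda a: a["harms_human"],     # first law
    lambda a: a["disobeys_order"],  # second law
    lambda a: a["harms_self"],      # third law
]

# Disobeying an order in order to avoid harming a human: the first law wins.
obey = {"harms_humanity": False, "harms_human": True,
        "disobeys_order": False, "harms_self": False}
refuse = {"harms_humanity": False, "harms_human": False,
          "disobeys_order": True, "harms_self": False}
print(choose_action([obey, refuse], laws) is refuse)  # True
```

The lexicographic trick is what encodes the “except where such orders would conflict” clauses: violating the second law is always preferred over violating the first.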
As in (international) law-making, the laws themselves are not corrupt; but the exercise of those laws, meaning the rights and obligations they create, can produce resource inequalities among living beings (or robots). Recently, Germany's Ethics Commission developed a set of 20 guidelines for automated and connected driving, largely in the spirit of Asimov's laws, which try to address the following:
- How much dependence on technologically complex systems – which in the future will be based on artificial intelligence, possibly with machine learning capabilities – are we willing to accept in order to achieve, in return, more safety, mobility and convenience?
- What precautions need to be taken to ensure controllability, transparency and data autonomy?
- What technological development guidelines are required to ensure that we do not blur the contours of a human society that places individuals, their freedom of development, their physical and intellectual integrity and their entitlement to social respect at the heart of its legal regime?
Some researchers have developed quite interesting ideas when linking the trolley problem to AVs. One example is installing a value-of-life system in the AV that, instead of counting the number of lives at stake, counts the number of life-years saved. Another remarkable proposal, by Iyad Rahwan, consists in reaching a consensus on which ethical outcomes people will be comfortable with, despite their context-dependency (the Moral Machine platform was suggested as a data-collection tool).
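The “life-years saved” idea could be sketched, very roughly, like this. All numbers, names and the uniform life-expectancy assumption are illustrative inventions for the example, not any researcher's actual model.

```python
# Hypothetical sketch of a "life-years" utilitarian rule for an AV dilemma.

LIFE_EXPECTANCY = 80  # simplifying assumption: uniform life expectancy

def life_years_at_risk(ages):
    """Expected life-years lost if everyone in this group is harmed."""
    return sum(max(LIFE_EXPECTANCY - age, 0) for age in ages)

def choose_outcome(options):
    """Pick the option that minimizes expected life-years lost.

    `options` maps an action label to the ages of the people it endangers.
    """
    return min(options, key=lambda action: life_years_at_risk(options[action]))

# A stylized trolley-style choice: swerve (endangering one 30-year-old)
# versus stay on course (endangering two 70-year-olds).
options = {"swerve": [30], "stay": [70, 70]}
print(choose_outcome(options))  # "stay": 20 life-years at risk vs. 50
```

Note how this differs from simply counting lives: counting heads would favor swerving (one person instead of two), while counting life-years favors staying on course, which is exactly the kind of value judgment that makes such systems controversial.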
Converging into the future
It is important to remember that exponential technologies could be used to enhance data collection and the performance of these vehicles. Internet of Things (IoT) technologies, for instance, could enable cities to connect to AVs and help collect live data faster and in a more relevant way. Asimov's laws of robotics can help us develop fair, non-bureaucratic guidelines for data collection and decision-making, just as Germany did.
We cannot simply program an AV to choose the option that does the least harm, nor simply “install” ethics. In my opinion, more data on how our own neural networks operate, together with a fair allocation of probabilities, can empower society and manufacturers to tailor each car to its user and his or her environment.
After all, AVs may well become the safest and most abundant option on our roads.