Think of me as…

My name is Andrés, aka Serdna, owner of this page. Think of me as Creed Bratton from The Office: a mysterious figure prone to making bizarre, disturbing, or confusing statements on a regular basis.

Unlike Creed, I’m a young fellow. 22. International Business student at some kind of university in Lima, Perú.

Now that most of my identity is revealed (the rest will come out over time), I want to talk about my life and my plans. I started this page as a personal content platform, but I decided to add one more topic: futurology. I've been drawn to studying it because I believe life is getting better, despite what most people prefer to see and think in order to be cool or part of the system.

Personally, I think of futurology as a school of thought that, rather than predicting the future, focuses on the gap between the present and the future of humanity. I won't be posting much on this topic yet; the reason is that I need to study it a bit more first.

My posts will not be long, but they will be informative. I will talk about the present onwards. Right now I am editing some clips for my YouTube channel (it's called Serdna, but I don't have a custom URL yet). I recently went to Huaraz, Ancash, to do some trekking in the mountains. It was an amazing place, and I will narrate my experience in the next blog post.

Also, right now I am packing my bags for Pucallpa, Ucayali, in Peru's jungle region. I'm doing some social service there and, obviously, I will be vlogging and blogging the experience, I hope.

That’s it, I don’t wanna bore you. Btw, I like this kind of music; if you like it, cool:

“Dank memes are the future”.


Asimov, driverless cars and social dilemmas

Autonomous vehicles (AVs) are amazing and emerging: robotics, sensors, artificial intelligence, and many more exponential technologies, all converging in one package. They are estimated to eliminate up to 90% of accidents caused by human error, cut time spent in traffic in half, reduce greenhouse gas emissions by over 60% and, overall, give us more productive time and safer roads. Right? Most of us are still skeptical, yet optimistic, about this.

The ethical conundrums of AVs

Many of us have encountered and wrestled with the “trolley problem”, a moral dilemma that pits utilitarian and pacifist approaches against each other. In other words: would you sacrifice one person, or even yourself, to save one or more others?

What does this have to do with AVs? Well, it turns out to be really complex, though not impossible, to encode morals and ethical philosophy in an algorithm. In these dilemmas, a human decision is just an in-the-moment reaction. But if a car is programmed, whether by software engineers, companies, or the government, to be utilitarian, the choice becomes purposeful and deliberate, arguably even premeditated. Should the car hit someone in order to save you (the passenger), or crash into a wall to save pedestrians?
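As a thought experiment only, a strictly utilitarian policy reduces to a cost comparison. The function and the numbers below are invented for illustration; this is not a real AV control API:

```python
# Illustration only: a purely utilitarian rule picks whichever action
# is expected to cause the fewest casualties.

def utilitarian_choice(options):
    """`options` maps each candidate action to its expected casualties."""
    return min(options, key=options.get)

# The dilemma from the text: crash into a wall (one passenger dies)
# or continue ahead (three pedestrians die).
print(utilitarian_choice({"crash_into_wall": 1, "hit_pedestrians": 3}))
# prints "crash_into_wall"
```

The unsettling part is not the arithmetic but who gets to fill in those numbers, and that the choice is made in advance rather than in the moment.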

These questions are highly context-dependent; and even then, roughly 10% of accidents are NOT caused by human error (they are caused by the environment). Thus, many challenges lie ahead for AVs and the technologies involved.

As a society, we have absorbed many technological discoveries in the past (fire or elevators, for instance), running plenty of trials to see whether they are viable. The difference with artificial intelligence (AI) is that people do not want to make mistakes, especially in transport, where it is important to assess which ethical trade-offs we are comfortable with.

Maybe there is someone who could give us a starting point on rules for robots.

Asimov and ethical driving

Isaac Asimov formulated four laws of robotics (his famous three, plus the later “Zeroth Law”):

  • A robot may not harm humanity or, by inaction, allow humanity to come to harm.
  • A robot may not injure a human being or, by inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the preceding laws.
  • A robot must protect its own existence, as long as such protection does not conflict with the preceding laws.
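To make the ordering concrete, here is a toy sketch of the laws acting as prioritized filters over candidate actions, where each lower law yields to the ones above it. The predicates and action names are placeholders invented for this sketch, not real perception or planning logic:

```python
# Toy illustration of Asimov's laws as prioritized filters.
# All predicates below are made up for this example.

def harms_humanity(action):
    return action == "endanger_crowd"

def harms_human(action):
    return action in {"endanger_crowd", "hit_pedestrian"}

def self_destructive(action):
    return action == "swerve_off_road"

def choose_action(candidates, order=None):
    """Apply the laws in priority order. A filter is kept only if it
    leaves at least one candidate, so lower laws yield to higher ones."""
    filters = [
        lambda a: not harms_humanity(a),        # Zeroth Law
        lambda a: not harms_human(a),           # First Law
        lambda a: order is None or a == order,  # Second Law
        lambda a: not self_destructive(a),      # Third Law
    ]
    remaining = list(candidates)
    for law in filters:
        kept = [a for a in remaining if law(a)]
        if kept:
            remaining = kept
    return remaining[0]

# The car prefers braking over hitting a pedestrian...
print(choose_action(["hit_pedestrian", "brake_hard"]))       # brake_hard
# ...and will even sacrifice itself before harming a human.
print(choose_action(["hit_pedestrian", "swerve_off_road"]))  # swerve_off_road
```

Even in this toy form, the hard part is obvious: the real-world versions of predicates like `harms_human` are exactly the perception and prediction problems AVs are still working on.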

Just as in (international) law-making, the laws themselves are not corrupt, but their exercise (the rights and obligations that follow) could create resource inequalities for living beings (or robots). Recently, Germany's Ethics Commission on Automated and Connected Driving published a set of 20 guidelines, largely in the spirit of Asimov's laws, which try to address the following:

  • How much dependence on technologically complex systems – which in the future will be based on artificial intelligence, possibly with machine learning capabilities – are we willing to accept in order to achieve, in return, more safety, mobility and convenience?
  • What precautions need to be taken to ensure controllability, transparency and data autonomy?
  • What technological development guidelines are required to ensure that we do not blur the contours of a human society that places individuals, their freedom of development, their physical and intellectual integrity and their entitlement to social respect at the heart of its legal regime?

Some researchers have developed quite interesting ideas linking the trolley problem to AVs. One example: install a value-of-life system in the AV and, instead of counting the number of lives saved, count the number of life-years saved. Another remarkable proposal, by Iyad Rahwan, consists of reaching a consensus on which ethical outcomes people are comfortable with, despite their context-dependency (the Moral Machine was suggested as a data collection tool).
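The "life-years saved" idea fits in a few lines. Everything here (the life-expectancy constant, the ages) is an invented example to show the mechanic, not a figure from the researchers themselves:

```python
# Hypothetical sketch of a "life-years saved" metric: instead of
# counting lives, weight each spared person by estimated remaining
# life-years. The constant and the ages are illustrative inputs only.

LIFE_EXPECTANCY = 80

def life_years_saved(spared_ages):
    """Sum the estimated remaining life-years of the people an action spares."""
    return sum(max(LIFE_EXPECTANCY - age, 0) for age in spared_ages)

# A pure life count would prefer sparing two people over one;
# the life-years metric can reverse that preference.
print(life_years_saved([20]))      # one 20-year-old: 60 life-years
print(life_years_saved([75, 75]))  # two 75-year-olds: 10 life-years
```

That reversal is precisely why the metric is controversial: it builds an explicit value-of-life judgment into the algorithm.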

Converging into the future

It is important to remember that exponential technologies could enhance data collection and the performance of these vehicles. Internet of Things (IoT) technologies might let cities connect to AVs and gather live, relevant data faster. Asimov's laws of robotics can help us develop fair, non-bureaucratic guidelines for data collection and decision-making, just as Germany did.

We cannot simply program an AV to choose the option that does the least harm, or “install” ethics into it. In my opinion, more data on how our own neural networks operate, plus a fair allocation of probabilities, can empower society and manufacturers to customize cars to each user and their environment.

After all, AVs are on track to become the safest and most abundant option on the road.