Thursday, November 24, 2016

Robot meets Trolley Problem in an Autonomous Car

Isaac Asimov's Three Laws of Robotics state:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What is an autonomous vehicle but a robot programmed to ferry passengers and cargo around?

The famous trolley problem is a thought experiment in ethics.  The general form of the problem is:
There is a runaway trolley barreling down the railway tracks.  Ahead, on the tracks, five people are tied up and unable to move.  The trolley is headed straight for them.  You are standing some distance off in the train yard, next to a lever.  If you pull this lever, the trolley will switch to a different set of tracks.  However, you notice that there is one person tied to that track, also unable to move.

What do you do?
Do nothing and let five people die, or pull the lever and kill one?
Which is the more ethical choice?

Ever since Artificial Intelligence (AI) and autonomous vehicles started getting closer to reality, this question of ethics has haunted the designers of autonomous vehicles and the algorithms governing them.  Imagine an autonomous car speeding along when it suddenly encounters a group of pedestrians on the road.  If it continues forward, it will kill the pedestrians.  If it swerves to avoid them, it will go off a bridge, killing all its passengers.

Autonomous car meets trolley problem.
Robot meets a situation violating its laws.

How do we program a car to behave in a situation like this?
Eject all the passengers to safety and drive off the bridge?  If only it were that easy.
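For illustration only, the crudest possible answer is a purely utilitarian rule: pick whichever maneuver is expected to kill the fewest people.  Every name and number below is invented for this sketch; no real autonomous-driving stack reduces to anything this simple.

```python
# Hypothetical sketch of a purely utilitarian decision rule.
# The action names and casualty counts are invented for illustration.

def choose_action(options):
    """Pick the option with the fewest expected casualties.

    options: list of (action_name, expected_casualties) tuples.
    """
    return min(options, key=lambda opt: opt[1])

# The bridge scenario from above:
options = [
    ("continue_forward", 5),   # hit the pedestrians
    ("swerve_off_bridge", 1),  # sacrifice the passenger(s)
]
action, casualties = choose_action(options)
print(action)  # -> swerve_off_bridge
```

The uncomfortable part is visible right in the code: whoever supplies the casualty estimates, and whoever decides that minimizing a single number is the right objective, is the one making the ethical call.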

There has been a lot of research into this area, and MIT has even set up a website (the Moral Machine) to crowdsource the opinion of the masses.  [Side note: it will be interesting to go through the judging process as a) a passenger, b) a pedestrian and c) a third-party onlooker.  I am sure that will skew your answers.]

One way to look at this problem is to equate the autonomous car to a chauffeured car.  In that case, the decisions are made by the chauffeur.  As a human being, the chauffeur's main motive is to stay alive, and that skews the decision making.

One outcome of this problem may be that people hesitate to buy or board a driverless car, knowing that the vehicle may not always act in its passengers' best interest.  What a dilemma!

Again, these are extreme edge cases we are talking about.  Maybe we should build a crowdsourced decision tree and apply it to all autonomous vehicles.  Today, the driver takes responsibility for the actions of the vehicle.  Tomorrow, we should not blame the manufacturer for those actions.  They should be governed by a set of rules maintained by a global body.
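In spirit, such a crowdsourced rule set could look like a shared lookup table that every vehicle consults.  This is purely a sketch: the scenario keys, rulings, and fallback are all invented here, and a real rule base would be vastly richer.

```python
# Hypothetical, globally maintained rule table (all entries invented).
# In the proposal above, a global body -- not each manufacturer --
# would own and update these rulings.
CROWD_RULES = {
    ("pedestrians_ahead", "swerve_kills_passengers"): "brake_hard",
    ("animal_ahead", "swerve_is_safe"): "swerve",
}

def decide(situation, fallback="brake_hard"):
    # Fall back to a conservative default when the crowd has no ruling.
    return CROWD_RULES.get(situation, fallback)

print(decide(("pedestrians_ahead", "swerve_kills_passengers")))  # brake_hard
print(decide(("never_seen_before", "unknown")))                  # brake_hard
```

The appeal of this shape is accountability: the vehicle merely executes a ruling that was agreed on in advance, rather than improvising ethics at 60 mph.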

The reality is that millions of people have been killed by vehicles driven by people.  That toll would drop dramatically with autonomous vehicles, and that fact will nudge people into accepting this new mode of transport.  It is similar to how horses made the decisions for us while we rode buggies; when cars came around, no one wanted to trust a human being to make those decisions.  And look where we are now.  One thing you have to give the horses, though: they don't drink and pull carriages.

All along, vehicle designers have concentrated on protecting the passengers by installing seat belts, airbags and other safety equipment.  Because of this new ethical dilemma, the designers of autonomous vehicles will have to start thinking about protecting not only the inhabitants of the vehicle, but also those in its vicinity should an unfortunate event occur.  Designers are currently concentrating on the algorithms that power these vehicles and assist in decision making.  That is all well and good, but we need to start thinking outside the box to find a solution.

Thinking outside the box could lead to external airbags for vehicles, deployed when they detect an imminent (or deliberately chosen) collision with a living being.  It could also lead to apparel manufacturers designing safety wearables, like jackets with built-in airbags, for people with high exposure to autonomous vehicle traffic, such as construction workers.
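The external-airbag idea boils down to a trigger condition: deploy only when a collision with a person is unavoidable, and only just before impact.  A hypothetical trigger might look like the following; the function name, inputs, and the timing threshold are all assumptions made up for this sketch.

```python
# Hypothetical external-airbag trigger (all names and thresholds invented).

DEPLOY_WINDOW_S = 0.3  # deploy at most this many seconds before impact

def should_deploy_external_airbag(time_to_impact_s, target_is_person,
                                  collision_unavoidable):
    # Deploy only for unavoidable impacts with a living being,
    # and only just before impact, so a late swerve doesn't waste the bag.
    return (target_is_person
            and collision_unavoidable
            and time_to_impact_s <= DEPLOY_WINDOW_S)

print(should_deploy_external_airbag(0.2, True, True))   # True
print(should_deploy_external_airbag(1.5, True, True))   # False: too early
```

Even in this toy form, the design choice is clear: the bag is a one-shot resource, so the trigger must err on the side of waiting until avoidance is truly impossible.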

But again, these are extreme edge cases we are talking about.