Tuesday, October 11, 2016

Isaac Asimov, the Three Laws of Robotics, and self-driving cars

In a previous post, I mentioned Asimov's Three Laws of Robotics, which are:
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
With the advent of self-driving cars, the Three Laws are getting some popular press. Given that a fully self-driving car is more or less a type of robot, it has to decide what to do in any particular situation. For example, if the car sees someone in the road, and the only ways to avoid that person are to swerve left and hit a truck or to swerve right and plow into a group of people sitting at a sidewalk table, what should it do?
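
To make the dilemma concrete, here is a minimal sketch in Python of one way such a choice might be framed, as minimizing expected harm. Everything in it is hypothetical: the Outcome class, the expected_harm function, and especially the probabilities and counts are invented for illustration, and picking those numbers is exactly the ethical problem at issue.

    # Hypothetical sketch: pick the maneuver with the lowest expected harm.
    # All values are invented; a real system would be vastly more complicated,
    # and deciding how to weigh lives is the open ethical question.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        maneuver: str
        p_harm: float        # estimated probability that someone is hurt
        people_at_risk: int  # how many people this maneuver endangers

    def expected_harm(o: Outcome) -> float:
        """Expected number of people harmed if this maneuver is chosen."""
        return o.p_harm * o.people_at_risk

    outcomes = [
        Outcome("brake in lane", p_harm=0.9, people_at_risk=1),  # the pedestrian
        Outcome("swerve left",   p_harm=0.5, people_at_risk=1),  # hit the truck
        Outcome("swerve right",  p_harm=0.7, people_at_risk=4),  # the sidewalk table
    ]

    best = min(outcomes, key=expected_harm)
    print(f"Chosen: {best.maneuver} (expected harm {expected_harm(best):.2f})")

Note that a utilitarian rule like this is only one candidate. A rule that, say, forbids swerving into bystanders under any circumstances would choose differently, which is part of why the question is attracting attention.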

It might be objected that this could only happen in a philosophy-class example like the famous trolley problem, but something like it could happen on a real road, and the car manufacturer has to program the car with the best possible response. Writers are now addressing this issue. Here is a recent article from Scientific American. Here and here are two from MIT Technology Review.
