- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
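Read as engineering rather than fiction, the Laws form a strict priority hierarchy: each law yields to the ones above it. Here is a minimal sketch of that structure in Python; the `Action` fields and the `permissible` check are invented simplifications for illustration, not a real safety system.

```python
# A sketch of the Three Laws as an ordered constraint check.
# All names here are hypothetical; a real system would need a far
# richer world model than a few boolean flags.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would executing this injure a person?
    allows_human_harm: bool  # would it let a person come to harm?
    ordered_by_human: bool   # was this ordered by a human?
    destroys_robot: bool     # would this destroy the robot itself?

def permissible(action: Action) -> bool:
    # First Law: overrides everything else.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey humans; it is already subordinate to the
    # First Law because we only reach this point if no human is harmed.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.destroys_robot
```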
It might be objected that such a dilemma belongs only in a philosophy class, alongside the famous trolley problem, but something like it could arise on a real road, and the car manufacturer has to program the car with the best possible response. Writers are now addressing this issue: here is a recent article from Scientific American, and here and here are two from MIT Technology Review.
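To make the engineering problem concrete, here is a minimal sketch of one possible policy: pick the maneuver with the lowest expected harm. The maneuvers, probabilities, and harm scores below are invented for illustration and are not drawn from any real vehicle's software.

```python
# Choose the maneuver whose outcomes have the lowest expected harm.
# Each maneuver maps to a list of (probability, harm_score) outcomes.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm_score) pairs."""
    return sum(p * harm for p, harm in outcomes)

maneuvers = {
    "brake_straight": [(0.7, 0.0), (0.3, 8.0)],  # may still hit the pedestrian
    "swerve_left":    [(0.9, 1.0), (0.1, 5.0)],  # risks the adjacent lane
    "swerve_right":   [(0.8, 2.0), (0.2, 6.0)],  # risks hitting a barrier
}

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best, expected_harm(maneuvers[best]))  # swerve_left 1.4
```

Of course, the hard part is not the minimization but the numbers: deciding how to score "harm" is exactly the ethical question the trolley problem poses.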