Tuesday, March 31, 2015

When should a machine remove Agency from a Human?

The answer is not simple, but certainly when the human in question is breaking the Zeroth AND First Laws of Robotics.

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[1]

Surely I can't be the only person to ask:
"Why did the plane let him fly all those passengers into a mountain?"


I suspect Andreas Lubitz will be used as an exemplar for years to come as we see machines given more and more authority to override the risky instructions of mere mortals. This is one of those times in history when new principles are developed.

The new Principle can be written quite simply: "Humans can't be Trusted!"

The same week that Andreas took his own life and the lives of 149 others, Ford launched a car that can "prevent you from speeding".

I posit that Things will be enacting more and more of our rules for us.
So may I suggest we start getting really good at writing Rules?

For if the Rule is bad, the Thing will still enact it!
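To make the point concrete, here is a minimal, entirely hypothetical sketch of a rule-enacting Thing: a speed limiter that clamps the driver's requested speed to whatever limit its rules say applies. All the names and numbers are illustrative, not taken from any real vehicle system; the point is that the machine enforces a badly written rule just as faithfully as a good one.

```python
# A hypothetical "rule-enacting Thing": a speed limiter that clamps the
# driver's requested speed to the limit its rule table believes applies.
# Names and values are illustrative only.

POSTED_LIMITS_KPH = {
    "motorway": 130,
    "school_zone": 30,
    "residential": 5,   # a badly written Rule: someone typed 5 instead of 50
}

def enforce_limit(requested_kph: float, zone: str) -> float:
    """Return the speed the machine will actually allow.

    The machine does not question the Rule; it simply enacts it.
    """
    limit = POSTED_LIMITS_KPH[zone]
    return min(requested_kph, limit)

# The machine faithfully enforces the good Rule...
print(enforce_limit(150, "motorway"))    # allows 130
# ...and just as faithfully enforces the bad one.
print(enforce_limit(50, "residential"))  # allows 5: the Rule is bad,
                                         # but the Thing still enacts it
```

The bug lives in the Rule, not the machine: the limiter's logic is correct, which is exactly why getting the Rules right matters.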