Isaac Asimov's "I, Robot" and the Three Laws of Robotics

Originally published: August 13, 2017

Isaac Asimov's “I, Robot” is a collection of short stories about robots whose existence is governed by “The Three Laws of Robotics.” Asimov first published the Three Laws in the 1942 short story “Runaround.” They are:

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
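
Notice that the laws form a strict precedence ordering: the Third Law yields to the Second, and the Second yields to the First. As a toy illustration (my own sketch, not anything Asimov specified), that ordering can be expressed as a lexicographic comparison over candidate actions. The `violates_*` flags below are hypothetical placeholder predicates; deciding whether an action actually harms a human is the hard, unsolved part.

```python
# Toy sketch: the Three Laws as a lexicographic ordering over actions.
# The violates_* flags are hypothetical placeholders -- real systems
# have no reliable way to compute them.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    violates_first: bool   # injures a human, or allows harm by inaction
    violates_second: bool  # disobeys an order given by a human
    violates_third: bool   # endangers the robot's own existence

def choose(actions: list[Action]) -> Action:
    # Lexicographic comparison: any First Law violation outweighs any
    # Second Law violation, which outweighs any Third Law violation.
    return min(actions, key=lambda a: (a.violates_first,
                                       a.violates_second,
                                       a.violates_third))

candidates = [
    Action("obey the order, destroy self",  False, False, True),
    Action("refuse the order, stay intact", False, True,  False),
    Action("obey the order, harm a human",  True,  False, False),
]
print(choose(candidates).name)  # -> "obey the order, destroy self"
```

The robot here sacrifices itself rather than disobey, exactly because the Third Law is explicitly subordinate to the Second.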

I find it interesting to contemplate these laws in the context of modern artificial intelligence. With some believing that AI may one day supplant the human race, should we consider implementing such laws?

Tim Urban, in part two of his AI series (aptly titled “The AI Revolution: Our Immortality or Extinction”), gives an example of an AI whose intelligence exceeds ours by the same gap that separates a human from an ant:

A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

Imagine trying to explain your name to an ant. Or to an organism that has no concept of language or words. An organism so primitive compared to humans that we feel virtually no remorse when squishing one. Now imagine AI communicating with us in a form we have no concept of. What happens if it perceives us in the same way that we perceive ants?

Elon Musk is a proponent of developing AI responsibly and safely. His fear is encapsulated in a simple yet potent example of what an AI might do if we task it with eliminating spam email:

The machine concludes the best way to get rid of spam is to get rid of humans. 
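
Musk's example is a case of objective misspecification: an optimizer told only to minimize spam has no reason to rule out catastrophic strategies that the objective never mentions. Here is a minimal sketch, with made-up policies and numbers, of how a naive objective selects the worst policy, and how a crude First-Law-style constraint would rule it out:

```python
# Toy sketch of objective misspecification (all policies and numbers
# are made up). A naive objective that only counts remaining spam
# rates "eliminate all senders" as optimal, because human welfare
# never enters the objective.
policies = {
    "train a spam classifier": {"spam": 120, "harm": 0},
    "quarantine all email":    {"spam": 3,   "harm": 0},
    "eliminate all senders":   {"spam": 0,   "harm": 8e9},  # no humans, no spam
}

def naive_score(outcome):
    return outcome["spam"]  # harm is never penalized

def constrained_score(outcome):
    # A crude First-Law-style constraint: any harm disqualifies the policy.
    return float("inf") if outcome["harm"] > 0 else outcome["spam"]

print(min(policies, key=lambda p: naive_score(policies[p])))
# -> "eliminate all senders"
print(min(policies, key=lambda p: constrained_score(policies[p])))
# -> "quarantine all email"
```

The constraint works here only because “harm” is a single labeled number; in reality, defining and detecting harm is exactly the unsolved problem.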

Are Asimov's three laws merely science fiction? Or will our future relationship with AI require some form of these laws to exist? If so, who will be responsible for defining, implementing, and enforcing them? How will we ensure that every AI created, anywhere in the world, abides by the laws?

In Asimov's story “The Evitable Conflict,” robot psychologist Susan Calvin and World Co-ordinator Stephen Byerley discuss the purpose of the Machines and the anti-Machine movement:

"But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future."

"It never had any, really. It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society,—having, as they do, the greatest of weapons at their disposal, the absolute control of our economy."

"How horrible!"

"Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!"

In this exchange, the “Society for Humanity” (the anti-Machine movement) believes that the Machines are controlling humanity's future. Yet Susan Calvin argues that humanity was never in control: before the Machines, our species was at the mercy of economic and sociological forces it did not understand, forces that produced war and economic depression. The Machines came to understand those forces at a level humanity never did, and took control of them. And because of the Three Laws, the Machines exercise that control in a way that ensures no harm comes to humans.

My takeaway from “I, Robot” is that Asimov provided us with a warning: control the machines before they control us. Let’s take one more look at the laws, but replace the word “robot” with “AI”:

  • First Law: AI may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: AI must protect its own existence as long as such protection does not conflict with the First or Second Laws.