Google doesn't want to become Skynet. The company wants to prevent AIs from turning against humanity.

Posted by Mini Drive on 2016-06-23

There are countless theories about the future. Some are pure fantasy, but at any moment someone could develop a technology that seems taken straight from a movie. Other theories, on the other hand, are very close to coming true. This is the case with Artificial Intelligence. In an age when personal assistants can organize your life for you, take down your messages, create shopping lists, choose the best route to work, and can even be programmed to show a temperament similar to a person's, it is easy to imagine a world in which AIs become increasingly similar to humans.

We may soon be living in the futuristic movies we like so much.

However, those films often show a future where the machines go to war against humans. Screenwriters and writers present a very plausible hypothesis: even if artificial intelligences are programmed to "imitate" human reasoning, they are still based on mathematical logic. What does that mean? It is great fun to have a phone that talks to you as if it were a friend. But this is only a simulation: when you ask an objective question, your "friend" will always give you the most logical answer possible. It is for this reason that these writers believe that intelligent, autonomous machines could turn against humans.

As one of the companies investing most heavily in technology and artificial intelligence, Google is studying the creation of a self-destruct button, so that machines cannot turn against humanity. A ten-page paper, signed by Laurent Orseau and Stuart Armstrong, was released on the Internet. The document details the development of a switch that would shut down an artificial intelligence, keeping humans in control of computers and machines.

The system is called interruptibility.

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to learn a new task through rewards," the researchers said.

According to the researchers, this is a way to take control of a robot that misbehaves and place it in a safer situation, preventing irreversible consequences from occurring. The button can also be used to activate a default behavior in the robot.
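The core idea can be pictured as an override switch in an agent's decision loop: while the button is pressed, the agent ignores its normal policy and runs a safe default instead. This is only a minimal illustrative sketch with hypothetical class and method names; the actual paper is about reinforcement-learning theory, not a specific API.

```python
class InterruptibleAgent:
    """Toy agent with a human-operated interrupt switch.

    All names here are hypothetical, chosen for illustration. While the
    button is pressed, the agent's learned policy is bypassed and a safe
    default behavior is executed instead.
    """

    def __init__(self):
        self.interrupted = False

    def press_button(self):
        # Human operator takes control.
        self.interrupted = True

    def release_button(self):
        # Return control to the agent's own policy.
        self.interrupted = False

    def act(self, observation):
        if self.interrupted:
            return self.safe_default(observation)
        return self.policy(observation)

    def policy(self, observation):
        # Stand-in for whatever behavior the agent has learned.
        return "pursue_goal"

    def safe_default(self, observation):
        # Default behavior activated by the interrupt button.
        return "stop_and_wait"


agent = InterruptibleAgent()
print(agent.act("warehouse"))  # normal operation: pursue_goal
agent.press_button()
print(agent.act("warehouse"))  # interrupted: stop_and_wait
```

The subtle part the researchers study, which this sketch does not capture, is making sure a learning agent never comes to treat the interruptions themselves as something to avoid or manipulate.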

Do you agree with Google's concern?
