Google Is Working on Plans to Prevent a Skynet Situation


As long as we've had the idea of robots, we've had the idea of robot uprisings. Even the word 'robot' originates with the Czech play R.U.R., which describes, you guessed it, a robot uprising. And now Google is taking the first definitive steps to make sure that never happens. DeepMind, an AI research lab owned by Google, has released a paper exploring the best ways to prevent self-learning machines from disabling the "big red button" humans might use to shut them down.
Of course, neither DeepMind nor Google is referring to the paper, "Safely Interruptible Agents," as an anti-Skynet measure. The authors calmly note that learning machines are "unlikely to behave optimally all the time," in which case a human might need to override them. But what if a self-learning machine learns that, if it doesn't want to be bothered, it should just disable the button we humans use to stop it? Fortunately, through the use of some techniques I can't even pretend to understand, the researchers seem to have found a way to make sure these self-learning machines won't want to.
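For the curious, the flavor of the idea can be sketched in a toy example. To be clear, this is an illustration and not DeepMind's actual method: the corridor world, the 30% interruption rate, and the learning parameters below are all made-up assumptions. The point it demonstrates is real, though, and comes from the paper: an "off-policy" learner like Q-learning values each action as if it will act optimally afterward, so a human repeatedly pressing the button doesn't teach it to avoid the button.

```python
import random

# Toy sketch (assumed setup, not DeepMind's code): a Q-learning agent
# walks a 5-cell corridor toward a reward at the far right. A human
# "big red button" interrupts 30% of steps and shoves it left.
N = 5                      # states 0..4; reward at state 4
LEFT, RIGHT = 0, 1
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(2000):
    s = 0
    while s != N - 1:
        # epsilon-greedy action choice
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((RIGHT, LEFT), key=lambda x: Q[s][x])
        if random.random() < 0.3:   # human interruption overrides the agent
            a = LEFT
        s2 = max(s - 1, 0) if a == LEFT else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Off-policy update: the target uses the BEST next action, not the
        # (possibly interrupted) action actually taken next -- this is why
        # interruptions don't bias what the agent learns.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

greedy = [max((LEFT, RIGHT), key=lambda a: Q[s][a]) for s in range(N - 1)]
print(greedy)  # prints [1, 1, 1, 1]: the agent still prefers RIGHT everywhere
```

Despite being yanked backward on nearly a third of its steps, the agent's learned policy is identical to what it would learn with no interruptions at all; it has no incentive to fight the button.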
DeepMind is mostly focused on making AI not just smart, but intuitive. A major leap forward came in March 2016, when its AlphaGo program defeated Go world champion Lee Sedol. Demis Hassabis, the co-founder of DeepMind, sees the next problem as "chunking," the human and animal ability to plan ahead for eventualities while still moving forward. Hassabis uses the example of going to the airport: driving from point A to point B, with the plan in mind to fly from B to C.
DeepMind has an internal ethics board which they've chosen to keep secret. "I've read Frankenstein a few times. It's important to keep these things in mind," says Hassabis. Hopefully those lessons also apply to computers. 