One of modern history’s most influential figures has once again stated his belief that AI poses a massive threat to humanity.
Speaking at the South by Southwest interactive festival on Sunday, Elon Musk talked about his fear of artificial intelligence, World War III and the possible coming of a new “Dark Age”.
The CEO of Tesla and SpaceX chatted with Westworld creator Jonathan Nolan and said that although he’s not usually a fan of government regulation and oversight, he does think artificial intelligence needs to be kept on a tight leash.
“This is a case where you have a very serious danger to the public, therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely – this is extremely important,” he said.
Using AlphaGo and its successor AlphaGo Zero as his main examples, Musk said AI is improving at an exponential rate and may progress more rapidly than experts predict.
Google’s AlphaZero is a form of artificial intelligence designed to learn specific tasks on its own without any human input.
It took just 24 hours of training to reach superhuman level in several board games, defeating the leading AI programs Stockfish (chess) and Elmo (shogi), as well as its own predecessor AlphaGo Zero (Go).
“No one predicted that rate of improvement,” Musk said.
“Some AI experts think they know more than they do and they think they’re smarter than they are.
“This tends to plague smart people, they define themselves by their intelligence and they don’t like the idea that a machine can be way smarter than them so they just discount the idea, which is fundamentally flawed.
“I’m very close to the cutting edge in AI and it scares the hell out of me,” he said to the audience.
Musk also said he expects self-driving technology to be 100 to 200 per cent safer than a human driver by the end of the year, but he reiterated the importance of regulation.
“The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super-intelligence is one which is symbiotic with humanity,” he said.
“I think that’s the single biggest existential crisis that we face, and the most pressing one.
“The danger of AI is much greater than the danger of nuclear warheads by a lot, and nobody would suggest that we allow anyone to just build nuclear warheads if they want – that would be insane.
“Mark my words: AI is far more dangerous than nukes, by far, so why do we have no regulatory oversight, this is insane.”