He’s at it again.
In what has become a bit of a morbid running joke, physicist Stephen Hawking has returned to his new favourite pastime: doomsday predictions.
Hawking’s flirtation with oblivion began back in 2015 when he mentioned in a Reddit Ask Me Anything (AMA) session that AI might wipe us all out and that contacting aliens is probably a bad idea.
Then last November, he gave us about 1,000 years to leave Earth.
In May, he decided that 1,000 was a bit generous and reduced it to just 100 years.
The latest is this: Hawking says the emergence of artificial intelligence (AI) could be catastrophic unless we find a way to harness it.
Hawking now joins Elon Musk among the Very Smart People who see the potential of AI but caution that we must be ready for it.
“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” he said at the Web Summit technology conference in Portugal.
“It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”
Hawking later explained that he is “an optimist” who believes we can create AI for the good of the world.
Fingers crossed that AI knows how to stop the planet getting hotter.
In a talk at the Tencent WE Summit in Beijing on Sunday, Hawking said that Earth will become a ball of fire by 2600, so the best thing we can do is start making plans to leave immediately.
Hawking thinks that the best way to do this will be via Breakthrough Starshot, a plan to get a nanocraft probe to Alpha Centauri in just over 20 years.
That same probe could get to Mars in less than an hour and Pluto in days, according to Hawking.
Getting a probe there is one thing; moving us there is a much different story. The good news is that he seems to have pushed doomsday back by roughly 500 years, an improvement on the 100 years he gave us in May.