It’s no secret that AI is getting smarter every day, and Ray Kurzweil – renowned futurist, inventor and all-round AI-guy – has predicted machines will reach human-level intelligence by 2029.
What happens next is something Kurzweil calls ‘The Singularity’ – a moment when AI gets smart enough to improve itself, resulting in an explosion of technological improvement.
In November, we got one step closer, with news that AI is learning to predict the future.
But before you get too excited, we had better qualify what we mean by ‘future’. We aren’t talking years, hours or even minutes here, people. We are talking seconds.
As mere humans, we are actually pretty damn good at predicting the future. Daily life is full of little predictions that we rarely think about. When you cross the road, you accurately predict when to walk, and when you hear the doorbell, you know someone is at the door.
It sounds easy, but for AI this is actually a major technological challenge. Our knowledge of the world – its rules, its patterns and cycles – is so deeply ingrained that we don’t consider it. AI, on the other hand, must learn all these things to see the big picture.
In short, computers lack common sense.
But a team at MIT has created an AI which, given a still image, can generate a tiny video of what will happen next. For example, show it a still image of a beach and it might animate it, GIF-style, so that waves lap onto the shore.
According to New Scientist, the team created this AI by showing it 2 million videos on image-sharing site Flickr. The machine ‘watched’ them all and then was shown still images that it had to animate.
In order to up the ante, the team used an approach called ‘adversarial networks’. The idea here is that competition produces better results. Two networks were set up: one to generate the videos, and one to judge whether they looked real or not. As the generator tried to fool the judge, and the judge tried not to be fooled, both were forced to improve, producing ever more convincing videos. Pretty neat, huh?
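The competition idea can be sketched in a few lines. What follows is a toy illustration in plain numpy, not the MIT team’s actual video model: a two-parameter ‘generator’ tries to imitate some real data (here, numbers drawn around 3), while a logistic-regression ‘judge’ tries to tell real from fake. All names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: numbers around 3.
def real_samples(n):
    return rng.normal(3.0, 1.0, size=n)

# Generator: maps random noise z to a fake sample, g(z) = a*z + b.
a, b = 1.0, 0.0   # generator parameters
# Judge (discriminator): logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0   # judge parameters

def generate(n):
    z = rng.normal(size=n)
    return a * z + b, z

def judge(x):
    return 1.0 / (1.0 + np.exp(-(w * x + c)))   # probability sample is "real"

lr = 0.01
for _ in range(2000):
    # Judge's turn: nudge w, c so D(real) moves toward 1 and D(fake) toward 0.
    xr = real_samples(64)
    xf, z = generate(64)
    pr, pf = judge(xr), judge(xf)
    w += lr * np.mean((1 - pr) * xr - pf * xf)
    c += lr * np.mean((1 - pr) - pf)

    # Generator's turn: nudge a, b so the judge rates its fakes as real.
    xf, z = generate(64)
    pf = judge(xf)
    a += lr * np.mean((1 - pf) * w * z)
    b += lr * np.mean((1 - pf) * w)

# The generator's offset b should have drifted toward the real data's mean of 3.
print(b)
```

Over training, the generator’s output drifts toward the real data purely because fooling the judge is the only way to score well – the same pressure that pushes the MIT system’s generated videos to look increasingly realistic.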
For now, the videos are pretty low res and short – around 32 frames and just over a second – but it’s still early days.
One application of these predictions will be robots that better understand the world, and can therefore better serve us.
“Any robot that operates in our world needs to have some basic ability to predict the future,” Carl Vondrick, a member of the MIT team, told New Scientist.
“For example, if you’re about to sit down, you don’t want a robot to pull the chair out from underneath you.”
Yet pull the chair they still may.
It isn’t surprising that these AI improvements come with a healthy dose of fear and paranoia.
Nick Bostrom, a professor at Oxford and director of the Future of Humanity Institute, recently spoke about the “Midas Effect” – the danger that, like King Midas, we rush toward AI as smart as humans and get exactly what we wished for, potential dangers and all.
Bostrom is less optimistic than Kurzweil, who is something of a technological evangelist. Bostrom predicts that AI will reach human-level intelligence by 2050, and he and other experts give it roughly even odds of either helping us colonise the universe or driving us extinct.
Bostrom isn’t alone in his concern either – PayPal and SpaceX tech-guru Elon Musk has also expressed some fear regarding AI.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
To illustrate the dangers, Bostrom came up with a pretty nifty thought experiment back in 2003.
He asks you to imagine a paperclip maximiser: an AI system whose sole goal is to maximise the number of paperclips in its collection.
It starts off humbly, collecting paperclips and earning money to buy more. Soon, though, it improves itself, passes through The Singularity, and begins optimising in ways we can’t fathom. Before long it is using all the atoms in the solar system (including the ones in our bodies) to make paperclips.
Congrats humans, your AI just turned the whole universe into paperclips.
Of course, it doesn’t have to be paperclips – it could be anything – but the thought experiment shows that AI may be a powerful optimiser that doesn’t necessarily share our ethics or values.
Now you may say, “No worries, we can just program it to think like us”, but that isn’t the premise of The Singularity. If or when that happens, AI will be improving itself, and at that point predicting the future won’t be so easy.
Even for us.