*Insert Terminator analogy here*.
Researchers at Facebook Artificial Intelligence Research (FAIR) have shut down AI bots that were chatting in what appears to be a nonsensical language.
Last month, the FAIR team was working on a project that teaches AI bots to negotiate.
Why negotiation? According to Facebook, negotiating requires “complex communication and reasoning skills, which are attributes not inherently found in computers.” So far so good.
During the research, FAIR let the AI use “reinforcement learning” – iterative improvement based on past experience – in two separate experiments.
In the first, the researchers made the AI conform to language that imitated human communication, in this case, English. But in the second, the researchers allowed the AI to diverge from English.
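To make the phrase "iterative improvement based on past experience" concrete, here is a minimal sketch of reinforcement learning in the simplest possible setting, an epsilon-greedy multi-armed bandit. This is my own toy illustration, not FAIR's negotiation system; the reward probabilities are made up for the example.

```python
import random

REWARD_PROBS = [0.2, 0.5, 0.8]  # hidden payoff rate of each action (assumed for the demo)
EPSILON = 0.1                   # chance of trying a random action (exploration)

def run(steps=10_000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(REWARD_PROBS)    # how often each action was tried
    values = [0.0] * len(REWARD_PROBS)  # running average reward per action
    for _ in range(steps):
        if rng.random() < EPSILON:
            action = rng.randrange(len(REWARD_PROBS))                 # explore
        else:
            action = max(range(len(values)), key=values.__getitem__)  # exploit best so far
        reward = 1.0 if rng.random() < REWARD_PROBS[action] else 0.0
        counts[action] += 1
        # incremental mean: nudge the estimate toward what experience shows
        values[action] += (reward - values[action]) / counts[action]
    return values

estimates = run()
print(estimates)  # the estimate for the best action converges toward 0.8
```

The agent has no model of the task; it simply keeps updating its estimates from past outcomes and gradually favors whatever has worked, which is the same feedback loop, vastly scaled up, that FAIR's negotiating bots used.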
“There was no reward to sticking to English language,” FAIR researcher Dhruv Batra told Fast Co Design. “Agents will drift off understandable language and invent code words for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
So what did this new AI language look like? Here is a taster:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
Although this may look like gibberish to us, Batra and his colleagues found that, left alone, the AI would continue in this way and still achieve successful negotiations.
The catch is we have no idea what the AI is really talking about. And to some people – such as Elon Musk – that may seem a little scary.
Humans are also known to create their own coded languages. For example, think of the Pig Latin you may have learned as a kid or consider the way that soldiers speak during a military operation. In the first case, we may do so because we want to obfuscate our meaning, and in the second because we want a certain group to understand us more precisely.
It is most likely that AI is not sophisticated enough to be communicating in such a way that “locks us out”.
In the end, the FAIR researchers would prefer the AI to speak boring old English, so they abandoned the second experiment and proceeded with the first, which imposes stricter parameters.
There is an argument for just letting the AI bots get on with it and chat amongst themselves. It could lead to a faster evolution of AI, but it comes with a pretty strong caveat: we won’t know what they are talking about.