Any sci-fi lover worth their salt will tell you that artificial intelligence (AI) is a tricky business. Skynet, anyone? HAL? While the promise of cybernetic consciousness holds exciting prospects, it also raises complex ethical issues – and potentially poses a threat. Despite all of this, research into AI is developing at an extraordinary rate.
The University of Sydney is testing a farm robot, nicknamed ‘Ladybird’, which can “collect intelligence on nutritional information, autonomous farm surveillance, mapping, classification and detection of pests and, eventually, autonomous weeding and harvesting,” according to ABC Rural.
The University of Technology, Sydney (UTS) has an underground book bunker at its Ultimo library, controlled by intelligent robots.
Then there are the many military applications of AI, the beginnings of which we are already seeing in the drones used in recent US military endeavours.
Professor Mary-Anne Williams, the Associate Dean (Research and Development) in the Faculty of Engineering and Information Technology at UTS, said that AI is gaining tremendous traction because we now have hardware able to collect vast amounts of digital data and execute highly sophisticated AI algorithms, which can find patterns and insights that people simply could not find on their own. “As computing power increases, so too does the power and reach of AI. This will continue and accelerate,” she said.
Towards true artificial intelligence
These developments are significant but still do not demonstrate what Dr Kevin Korb, Reader in the School of Information Technology at Monash University, calls artificial general intelligence (AGI), the kind of sophisticated independent thinking machine we see in the movies.
“There is little sign of that in the near future,” he said. “There is no consensus on the way towards achieving that overarching goal and little evidence that all of our successes with tools will turn into such a thing in the foreseeable future.
“Confidence… is based on Moore’s law and its relatives, which, while successful in application with hardware technologies, have never characterised software, which is the central issue for developing an AI.”
Dr Robert Epstein, the lead editor of a seminal book on AI called Parsing the Turing Test: Methodological and Philosophical Issues in the Quest for the Thinking Computer (Springer), agrees.
“The truly intelligent AI was, we thought, right around the corner after Joseph Weizenbaum created his famous ‘Eliza’ program back in the 1960s,” he said. “Even today, though, no computer program can use human language well enough to pass the Turing test decisively.
“Where AI is doing especially well is in pattern recognition; that is what is allowing companies like Google to create self-driving cars.”
Difficult questions about life, the universe and everything
All of this raises difficult questions, from the nature of life itself to the level of trust we should put in these intelligent machines. Korb believes that, although AGI is not near realisation, we still need to contemplate the implications of such a breakthrough when and if it does come.
“There are plenty of risks to go around,” he said. “If [or] when AGI does arrive, I agree with IJ Good that that will result in an intelligence explosion. We need to think hard [and] long before then about whether we want to launch that, and if so, how. My preference is to build robots that will have ethical behaviour at their core.
“I personally have pursued and will continue to pursue – if not prohibited – designing methods to build ethical behaviour into robots from the ground up. But governments and large research institutions need also to take the risk seriously and support research into coping with it. That has yet to happen.”
AGI is in the future, but AI is very much a concern of the present as more and more intelligent machines infiltrate all aspects of daily life. Personal security, already an issue in our connected world, is of particular concern.
Williams said, “AI algorithms can be used to exploit the trail of digital data that people leave behind as they interact with technologies in their daily lives, at work, at home and at play, and that sensors detect.
“They can also control hardware like driverless cars, robot surgeons, robot soldiers, and service robots to undertake missions that require physical presence and physical work to be done.”
Laws and best practices, Williams said, are mitigating the potential risks, but these lag behind the advancement of technology. “As societal activities become more digitised, our reliance on AI technologies grows,” she said. “The risks and implications for individuals, nations and the global community are poorly understood.”
Creating our own doom?
Epstein said that AI by itself is not particularly risky, but, combined with the internet and wireless technologies, it is extremely dangerous. “The internet, which in a book I published in 2008 I labelled the InterNest, is the most serious threat to humankind there is,” he said.
Epstein explained that the threat comprises two parts. Firstly, we are becoming so dependent on it that a prolonged failure of the power grid – as a result of cyber attacks, solar flares, asteroids and so on – will immobilise humanity, very possibly leading to a breakdown in industrialised societies worldwide.
“Even without a major power grid failure, however, the internet is probably dooming us because of the inevitability of the singularity – that moment in time which we [will] inevitably reach between 2025 and 2045 when a true AI will come into existence.
“A nanosecond later, it will jump into the Nest we have so lovingly been building for it and become a massive computational entity (MCE),” he said.
While Epstein admitted that no one knows what will happen next, he said we do know:
(a) There will be no way to kill it (short of shutting down the internet, which it will never permit),
(b) It will have access to and will be able to process and understand all human knowledge as no human ever could, and
(c) It will have complete control over virtually all significant human communications, financial transactions, and weapons systems.
“Nothing we do or say will make the slightest difference in what it decides to do with humanity, but it is reasonable to assume that it will put its own needs ahead of ours,” he warned.
While it is unclear if and when a true AI will emerge and just what level of threat it may pose, we would be wise to consider our response to such technologies, especially given their growing infiltration of our lives.
“A simple internet search could not happen without AI-based information retrieval technology,” Williams said. “Imagine the difficulty of doing business without a search engine. The global financial system, mobile phones and services are driven by AI technologies – there are few areas in business and society untouched by AI.
“AI technologies are making scientific discoveries, writing poems and playing jazz.”
Just in case all of this leaves you worried you have been replaced by a robot or an alien without even knowing it, take Epstein’s very funny online test, ‘How human are you?’
Interestingly, though, Australia may not be able to continue its own AI research.
“The Defence Trade Controls Act of 2012 is set to take effect from May next year. Its provisions will criminalise AI research in Australia, along with much of the rest of science and technology,” Korb warned. “Some technologists are already planning on leaving Australia. I may have to stop my technical research.
“If the Act is unamended, Australia’s scientific reputation will be trashed and its tertiary education system severely damaged. Chief Scientist Ian Chubb claims he can fix this, but he has no runs on the board and time is running out.”