
A philosopher predicts how and when robots will destroy humanity

In the flurry of progress, rarely do we stop to consider the ethics and morality of our advancement.

Aleksandra Przegalinska considers it every day. She holds a PhD in the philosophy of artificial intelligence, is an Assistant Professor at Kozminski University and is currently a Research Fellow at the Center for Collective Intelligence at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts.

Aleksandra is primarily concerned with the human-machine relationship and the challenges we’ll face in the future. Ahead of her talk at Creative Innovation 2017, Techly picked her brain about the philosophy behind AI.

It would be great to understand what drove you to start studying philosophy and, going even further, to specialise in artificial intelligence. Where did you get that inspiration to start?

“Philosophy was a particularly interesting topic for me in high school, but when I was studying philosophy I had the impression that I was working on very abstract topics that weren’t really related to my contemporary reality.

“Even though I was interested in these topics – and I felt what philosophy tackles is very important – I didn’t feel that I was really changing anything by doing philosophy.

“But then I was lucky enough to have a teacher in our philosophy studies who was a specialist in artificial intelligence, and he was working on combining philosophical questions with the practice of artificial intelligence.

“He was saying: ‘okay, through artificial intelligence, we’re gonna change humanity. We’re probably going to transform human beings and these are very relevant philosophical questions to ask and try to answer.'”

One of the things you mentioned there was machine-human interaction. What are the big issues or ethical questions you see with tech implants and wearables?

“Well, there are several questions related to various aspects of human augmentation through technology. Obviously, there’s this big movement called ‘transhumanism’ right now that’s becoming very popular, particularly in Silicon Valley, and it’s advocating for the acceleration of human augmentation through technology.

“I think the goal of transhumanism is to transcend humanity as we know it and move to another level where we would be totally integrated with machines. What is very problematic for many people is that this transcendence through technology is going to lead us to immortality.

“You can see it in many different ways. For instance, you can think about humans who are no longer using their own bodies, but rather exoskeletons, to move around in physical reality. An exoskeleton body cannot be defeated or harmed in any way. It’s not going to get sick, and it’s going to stay healthy forever.

“Through this, creators are advocating immortality for humans, which means there will be more of us on Earth living for a very long time. Is there space for everyone? Will people want to have children if they live radically extended lives? How will we transform our lives if we live for 200 years? This is a completely different lifespan.

“I think, for many people, it is compelling to think about their own lives from this perspective. I think there are several other issues – environmental issues, ecological issues – that are related to the fact that people could potentially live for such a long time.

“These questions can extend to the development of autonomous cars. They’re going to make moral decisions in the event of an accident and so on.

“These are very current questions. Can we really prescribe morality to machines, and if so, how are we going to judge them if they violate the rules that we have here – even though they’re autonomous cars?

“I think these are really big questions. Some are urgent in many ways, while others are still more science fiction – some of the ideas in transhumanism seem very far in the future, far ahead. I would say there are a plethora of problems here.”

One of the things you briefly touched on was the development of technology that becomes unchecked. While I was researching the issue, I came across a thought experiment by Oxford University philosopher Nick Bostrom: the paperclip maximiser experiment.

The thought experiment imagines that you create a robot that’s a fairly simple form of artificial intelligence. Its sole task is to amass paperclips, but it is also able to teach itself better and better ways to do so. Over time, the robot starts transforming the planet, and eventually all of space, into a paperclip manufacturing and storage facility.

The scenario is intentionally silly, but it’s pertinent to the open letter recently sent to the UN about preventing the use of artificial intelligence in war. Is there an authority that stops artificial intelligence from developing, and at what point do we stop it?

“Wow, this is a big question, because while there are many theories, what Bostrom is really referring to in that thought experiment is what’s called the singularity.

“This is the moment where a machine has its own goals that it’s pursuing autonomously. It has a certain sense of identity or subjectivity, and it knows what it wants. It then transforms the world according to its needs.

“This is basically what humans have been doing for a very long time. We were the ones transforming everything around us to suit our goals, first by developing our civilisation and so on and so forth.

“What we are really afraid of, I think, is that somebody will take our place. Machines are the first threat because we don’t think animals will do it.

“In terms of machines and the development of deep learning and machine learning, we see it all progressing really, really fast – and at some point they can reach a stage where they will decide not only what to learn or how to learn, but also whether they want to learn it.

“For instance, when you think about AlphaGo, that was the case: the machine learning system essentially decided for itself what strategy to use to win – it played autonomously. At some point, we can imagine a situation where the machine says: ‘I want to win at this and that, and it is not up to you to decide – I will decide myself.’”

“Some people claim – and I think these claims are really legitimate – that even if you want to ramp up this process it’s not going to take less than 1,000 years. So, we don’t have to be afraid right now – we can really think about this process in the long term and control it, regulate it and make it slower or faster. We have time.

“Some other scientists claim we don’t really have that much time because this process can be very, very short. Ray Kurzweil, a pretty famous futurist, claims it’s going to happen in 20 or 30 years.

“But there are several obstacles for machines. The main obstacle is the very slow development of affective computing, which is the kind of computing related to emotion and sensation. Machines cannot really contextualise the world the way we do. Animals can, but machines can’t, so I think that really is an obstacle.

“Even though Ray Kurzweil is very optimistic that the singularity will happen, I’m a bit sceptical in this case, since I’m working in the field of affective computing.

“So, I wouldn’t say ‘stop right now because it’s going to happen any moment’. I would rather say it’s going to take a bit longer, but you really do have to be careful. I would think that the big thing we need to do right now – I think many scientists share this view – is to really understand what’s going on in the machine learning process, because it becomes a bit of a black box for us.

“If we don’t understand machine learning, we will not know what’s going on inside the machine and we won’t know when to say stop because it’s getting a bit too dangerous for us. We don’t know how machines learn or how they acquire knowledge. Even engineers who design software programs that are able to compete with humans don’t fully understand how they work.”

If we don’t heed that black box you mentioned, and we continue at our current rate of progress, what’s the best and worst outcome for the human-machine relationship?

“The best outcome we can count on is a situation where all these programs become really good at what they’re doing but don’t gain consciousness – so they serve us, augment processes, optimise our work, take over some of the work we have been doing and generate new kinds of jobs for us. We don’t become unnecessary, jobless humans sitting here and doing nothing; we do have work, and the work we don’t want to do is done by machines.

“I think this is a really nice scenario, although maybe a bit utopian, where we are nicely combined with machines and not in any conflict whatsoever. They are just helping us out and enhancing our work so we become more productive. Possibly, we work less.

“Obviously there is the worst case scenario, and that’s a dystopian future. Maybe for some people, this resembles what happens in The Terminator or in any other movie that envisions a situation where machines rebel against us. They gain consciousness and don’t want to be instrumentalised and treated as machines.

“I think many science fiction movies were trying to depict this possible future where we are in conflict with machines, or machines are at first used and then they want to use us – or, in the worst case scenario, they really annihilate us because they think we are either unnecessary or a hassle to them.

“There’s also a possibility of weaponised artificial intelligence, where the situation would reach a scale we have never experienced before – because this could be the end. Just as we were afraid of atomic weapons, the A-bomb and so on, and at some point reached a consensus that they had to be stopped, controlled and regulated, I think we need a consensus here, too.

“I think this is a very bad scenario, and it’s a very realistic one. It is much less speculative than the ones related to the further development of artificial intelligence and its gaining of consciousness. So yeah, there are several threats that are related to the development of artificial intelligence.”

So, you’re saying that we need urgent action or at least an urgent consensus. As a researcher, what’s your next action in addressing that as urgently as possible? What will you be doing to contribute to that?

“I’m really advocating for a public debate. As scientists working on such sensitive issues, we need a big, worldwide debate. I think Elon Musk is doing a good job – even though sometimes I’m very sceptical of his various ideas – because he is trying to show that these issues require a big public debate and consensus.

“I think political leaders worldwide are not really taking artificial intelligence very seriously as a topic. They are very often unaware of its development, they are unaware of what threats it poses to humans and they don’t tackle the issue of the future of jobs, which is very much related to the development of artificial intelligence and the automation of jobs.

“Of course, I’m from Poland, though I now live in the US, and in the US I really saw only one debate – with Barack Obama at the end of his presidency, when he was talking about autonomous cars and taking it seriously as a topic. But it was just one.

“Since then, there has really been nothing. In Poland, my home country, and in Europe, there is too little going on, I think. I would say that scientists who are deeply involved in the field really have an obligation to ignite that debate, to start talking and to encourage politicians and civil society to have that debate. Because the time is now, and it may be too late in a decade or so, when they finally wake up and notice that this is an issue.”

Aleksandra Przegalinska will be speaking alongside renowned global speakers at the Creative Innovation 2017 event on 13-15 November at the Sofitel Melbourne.

About the author

Larissa is Techly’s Assistant Editor. She watches so much YouTube that she’s narrowed down her favourite categories – goats, innocent dads getting pranked, and toddlers falling over.
