
Should a self-driving car kill you to save a child? The ethics of autonomous vehicles

Driverless cars are coming, and one website is trying to find a global ethical standard to program into them.

Every human being operates under their own unique set of moral guidelines. While similar ethical themes pop up in studies around the world, there is no clear way to satisfy everyone’s priorities.

So, with ethics being such a murky topic, how do we ensure self-driving cars are making the right decisions? When it comes to killing or saving certain people or animals in an accident, what decision will the car make? Does a ‘right’ decision even exist?

The Moral Machine, the website whose data underpins the study published as The Moral Machine Experiment, is hoping to find out. The site studies how humans around the world react in life-threatening situations involving passengers, pedestrians and vehicles, with the hope of informing better decision-making in AI.

Using 40 million decisions from people in 233 countries and territories, the scientists behind the Moral Machine have collected a substantial body of evidence about which decisions humans consider ethically correct.

In theory, this data set can contribute to developing global principles for AI ethics and moral judgements.

According to Hussein Dia, an Associate Professor in Transport Engineering at Swinburne University of Technology, three distinct "moral clusters" can be identified globally: Western, Eastern and Southern.

“The analysis showed some striking peculiarities such as the less pronounced preference to spare younger people in the Eastern cluster; a much weaker preference in the Southern cluster for sparing humans over pets; a stronger preference in the Eastern cluster for sparing higher status people; and a stronger preference in the Southern cluster for saving women and fit people,” he said.

“These findings clearly demonstrate the challenges of developing globally uniform ethical principles for autonomous vehicles.”

Finding the globally acceptable answer certainly requires more than a blanket rule about saving the most lives, or always prioritising children. Our old friend Immanuel Kant would be having a field day.

While this presents a challenge for the rapidly evolving intelligent machine industry, experts are more concerned by a widespread failure to acknowledge that AI ethics is a problem at all.

There is a distinct lack of regulation, and conflict is inevitable where ethics and commercial interests clash.

It’s a case of morals vs money, and I know what I’d be betting on.

What decision should a self-driving car make? / Image credit: Awad et al.

For example, a customer is far more likely to buy a self-driving car if it is programmed to save that customer at any cost. That car is a far more saleable product than another model that chooses to kill its passenger to save a child who runs onto the street.

Along the same lines, it might also be cheaper for a car to kill someone outright than to cause several grievous injuries, with their attendant medical and legal costs.

“Since A.I. algorithms today cannot provide sufficient details to explain their behaviour, it would be difficult to prove cars are taking actions to kill people to reduce legal expenses,” said Distinguished Professor Mary-Anne Williams, Director of Disruptive Innovation in the Office of the Provost at the University of Technology Sydney.

She warns against “a subscription service that prioritises life according to the magnitude of [insurance] premiums”.

Despite these justified concerns, companies are racing to get the technology to market with what appears to be only perfunctory regard for the regulatory vacuum and the risks involved.

With all this uncertainty, it’s comforting to remember that no matter who your self-driving car is programmed to kill, the incidence of road accidents is likely to plummet with the introduction of intelligent machines.

In Australia, the roughly 1,000 annual road deaths could drop significantly, given the ability of autonomous vehicles to take in all relevant information and respond to it far more quickly than a human could.

Whether or not the Moral Machine will be able to come up with a globally acceptable answer remains to be seen. As Professor Toby Walsh from the University of New South Wales points out, a publicly available online survey format can be deeply flawed.

“I completed their survey and deliberately tried to see what happened if I killed as many people as possible. As far as I know, they didn’t filter out my killer results.”

Doesn’t sound very morally righteous of him, does it?

You can test your own moral decision-making on the Moral Machine website.

[Body images by Roberto Nickson and Awad, Dsouza and Kim et al.]
