We’ve all had that moment. Even those of us lucky enough to live in more tolerant, easy-going societies sometimes look at a fellow human being and think, ‘hell, that kid looks shifty!’
Maybe they’ve pulled on a balaclava in the middle of a meal, or they’re running determinedly out of a supermarket in a big coat with lots of pockets. They might just be a six-year-old boy, in which case it’s fairly certain they’ve just done something at least a little bit naughty.
Imagine for a second, though, that there was a way of learning how likely someone is to commit a crime based on nothing more than their facial features. In what, it really must be said, sounds like the basic plot of an early 2000s sci-fi film, two Chinese researchers have set out to answer precisely that question.
In a recently released paper, Automated Inference on Criminality, Xiaolin Wu and Xi Zhang of Shanghai Jiao Tong University fed still images of 1,856 people, almost half of them convicted criminals, into a computer in order to digitally analyse their facial features.
The idea was to lessen the impact of human bias by automating much of the process. The results were interesting, to say the least. According to the paper’s conclusion, “By extensive experiments and vigorous cross validations, we have demonstrated that via supervised machine learning, data-driven face classifiers are able to make a reliable inference on criminality.”
While it might seem more logical to assume that criminals share certain facial features that make them stand out, in fact the opposite is true: the non-criminal data set had more in common, while the criminals showed wider variation when assessed on factors such as “lip curvature, eye inner corner distance and the so-called nose-mouth angle.”
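To make the “wider variation” idea concrete, here is a minimal sketch in Python. The numbers are entirely invented for illustration, not taken from the paper: we simply generate two groups of measurements for a single hypothetical feature (say, lip curvature) and compare their spread.

```python
import numpy as np

# Invented data: two groups of measurements for one hypothetical
# facial feature. Group A is tightly clustered; group B is more varied.
rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=0.5, size=500)
group_b = rng.normal(loc=0.0, scale=1.5, size=500)

# Standard deviation is one simple measure of within-group variation.
spread_a = np.std(group_a)
spread_b = np.std(group_b)
# A larger standard deviation in group B means wider variation,
# which is the pattern the paper reports for its criminal set.
```

The paper’s claim amounts to saying that, on measurements like these, the criminal group behaved like group B and the non-criminal group like group A.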
In other words, rather than it being possible to look like a typical criminal, the research suggests it’s possible to look like a typical non-criminal.
In order to check the validity of the results, the researchers used four ‘classifiers’, including logistic regression, a method that models the relationship between one binary variable (in this case, whether the subject is a convicted criminal or not) and several metric variables (in this case, their facial features). All four produced fairly similar results.
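For readers curious what logistic regression actually does here, the sketch below fits one from scratch on synthetic data. Everything is hypothetical: the three “features” stand in for measurements like lip curvature, and the labels are generated from a made-up rule so the classifier has something to learn. It is an illustration of the technique, not a reproduction of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
# Synthetic feature matrix: three invented measurements per face.
X = rng.normal(size=(n, 3))
# Synthetic binary labels from a known linear rule plus noise,
# standing in for "convicted criminal or not".
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    """Map a linear score to a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-z))

# Fit by plain gradient descent on the logistic (log-loss) objective.
w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)          # predicted probabilities
    w -= lr * (X.T @ (p - y)) / n   # gradient step for the weights
    b -= lr * np.mean(p - y)        # gradient step for the intercept

# Classify by thresholding the probability at 0.5.
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(preds == y)
```

Because the labels were generated from a linear rule, the fitted model recovers it easily and scores high accuracy; the controversy around the paper is precisely over whether real-world labels are this clean, or whether they bake human bias into the training data.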
One of the more interesting aspects of this particular study has been the reaction to it, with Motherboard reporting on some fairly heated criticism the research has received. Since the paper was published, several flaws have been pointed out, not least in this Hacker News thread. One is the idea that since the algorithms were designed by people subject to human bias, those same biases would in fact be programmed into the machine.
The problem is that, as Microsoft learned with the embarrassment surrounding its misanthropic chatbot Tay earlier this year, with her strange and vaguely terrifying views about Ricky Gervais, programmes such as these are actually quite adept at identifying, and acting upon, human bias within a data set, rather than removing it from the equation.
All of which means we’re thankfully quite a long way from being locked up the moment someone takes a photo of us at an angle where our lips are curved the wrong way, in order to stop us robbing the corner shop in the future.