The Australian government is preparing to legislate wide-ranging internet censorship, hoping that sentence won’t scare the hell out of you as long as they’re shouting “children!”
They’re planning to institute a Children’s E-Safety Commissioner with the power to demand that large social media sites remove material deemed harmful to children, or face large fines.
No one can argue with the goal of protecting children, which is exactly why they phrased it like that. But there’s a huge difference between tackling those who target children and erasing swathes of material “deemed harmful to young people”. Because the latter could be absolutely anything the government wants.
They’ve even got “deemed” in their declaration: it doesn’t have to be provably bad, just something they said was bad. And in electronic terms, the Australian government has a worse censorship record than an Oceanian clerk who keeps knocking his inkwell over his work.
“This is about getting informal and rapid action to get content taken down,” said Parliamentary Secretary for Communications Paul Fletcher, in a way which is terrifying if you take a second to think about what those words mean.
He’s talking about censorship without full formal legal channels, or time for appeal. He’s talking about the government pointing at anything it doesn’t like and making it disappear as if it were using Harry Potter’s wand: the magical power to erase things by invoking something meant for children.
Targeting social media sites is also a powerful mechanism of control. These sites are registered business concerns, and many have already shown they’re prepared to bow to local regulation if it means they can keep making local money. Threatening them with fines ensures obedience. It forces a choice between idealism and IPOs.
Even without these possibilities, the problem of the top-down approach is its clumsiness. You end up with large waves of banned material, instead of specific help for the people under threat.
A real solution to social media’s toxic problems is improved reporting and reaction tools. For example, I recently reported a violently abusive user on Twitter. I was given three options: was I the target, immediate family of the target, or anyone else in the world who saw the rape threats? On clicking the third I was allowed to spend five minutes filling out the rest of the form, only to receive an automated email telling me that I, and everyone else in the world, would be automatically ignored.
It seems that anyone in the world can abuse another user, but the target is on her own to do anything about it. Which is sort of the problem with online abuse in the first place.
What we need are more powerful tools to protect individual users. But since suspending everyone who sends abuse would damage the upward-trending user graphs the services need to attract investors, they’d have to be forced to act faster. That’s where legal pressure would be better applied.