A recent report from the Southern Poverty Law Center about the proliferation of far-right figures on Twitter notes that a number of popular right-wing personalities with histories of repellent racial views, and some who have embraced and spread fake and harmful conspiracy theories (like One America News Network anchor Jack Posobiec), have been treated with kid gloves.
The report’s findings (combined with my own self-imposed hiatus from the site) have given me cause to wonder whether I’ve been looking at the problem of platform moderation through the wrong lens.
The puzzle at the heart of it all is that Twitter bans QAnon content (70,000 accounts!), perma-bans dozens of white supremacist accounts (David Duke!), tracks and removes coordinated inauthentic activity and polices the content of posts about the coronavirus — and yet the platform has frustrated activists by allowing people with documented histories of bullying, trolling and posting (and then deleting) inflammatory tweets to remain prominent users on the site. It has even put them in touch with senior trust and safety technologists to help them work through objectionable content.
The SPLC doesn’t explain Twitter’s reluctance here, and it doesn’t give much weight to the platform’s global view of speech. That view, as articulated by CEO Jack Dorsey, raises the bar really high for outright bans, gives people who violated policies second chances to behave better and focuses its resources on moderating content that would create imminent, real-world, concrete harm.
I am partial to Twitter’s side here; I don’t want the SPLC determining who I can and cannot hate-read. But it’s one thing to keep the bar really high for account terminations, and it’s quite another for the platform to coddle people who tweet close to the line for the sake of tweeting close to the line. Posobiec, for example, is adept at discovering where the line is on any particular day, which makes him a particularly ingenious troll.
I do not believe that people with histories of expressing repellent racial views off Twitter should be peremptorily booted off the platform.
The result is that your average, non-blue-checked individual user can be banned far more easily for far less objectionable tweets. This is what people mean when they say that content moderation, when mixed with special access, curtails the reach and freedom of regular users. It isn’t content-agnostic. It isn’t neutral when your policies and personnel go out of their way to keep serial abusers of the platform’s policies on the platform.
Dorsey has elsewhere insisted that if Twitter were to react to outside pressure campaigns, it would turn its service into a forum where only viewpoints that he didn’t personally find odious would be welcome, making it anodyne and boring. It’s the job of reporters, he has said, to hold people like Posobiec and Alex Jones to account.
Again, he has a point, although I would question whether the platform’s dominance has made it nigh impossible for real reporting to have the accountability effect it once had.
What’s going on here? I suspect that for Dorsey and others in Silicon Valley, the high-minded principle of “free speech” is a bulwark against what they see as a media-activist monoculture that deviates markedly from classical liberalism and libertarianism. Dorsey might admit that there are a lot of jerks on Twitter and that jerks can be provocative — but the more speech he censors, no matter how objectionable, the more he becomes a censor of speech.
The line between moderation and censorship is similar to the line between what’s hurtful and what’s harmful: The community standard prevails. And in Dorsey’s version of the world, more speech is a greater good than the level of injury that a community might feel because of someone’s speech.
Twitter cannot be oblivious to the consequences of malicious speech acts published digitally.
I find myself somewhere between Dorsey and the SPLC. I don’t know why people with histories of deliberately spreading misleading and harmful information should be given do-overs by Twitter merely because Twitter doesn’t want the world to think it has given activists the power to determine what constitutes hateful views. I also don’t know what standard of moderation would be acceptable to the SPLC — and I suspect it would be a lot more stringent than my own.