Four years ago, in the run-up to the 2016 election, a lot of people were afraid that malicious foreign actors would use social media disinformation to interfere with American politics. And, as it turned out, they had good reason. But increasingly during President Donald Trump’s term, the threat of disinformation came not from outside the country, but from inside our own house — the White House, to be exact.
In the days surrounding the election, we saw a steady stream of false information about critical election results coming directly from the president and people in his inner circle.
In 2020, tech platforms have had to shift from acting as gatekeepers against foreign interference and public misinformation to navigating a whole new type of content-moderation challenge: What happens when disinformation comes from the top?
During this election cycle, Twitter has led the fight against election disinformation. The platform took a decisive stance on the issue, flagging many of President Donald Trump’s own tweets as false or misleading, a step it had taken only rarely since it first began fact-checking Trump’s tweets earlier this year. Twitter added interstitial disclaimer screens over the tweets, so users have to click past the warnings in order to view the underlying content. Facebook continued to show similarly problematic posts without requiring users to click through, but added warning labels along the bottom edge of the posts.
Facebook reports that it is now prepared to take a stronger role in removing disinformation, including shutting down groups dedicated to spreading false claims or advocating political violence. On the Thursday after Election Day, for example, Facebook shut down a viral group, “Stop the Steal,” which had gained more than 350,000 members in a single day while promoting false information about the election. As Trump continues to insist he will not concede the presidency to President-elect Joe Biden, Facebook has also announced a prolonged pause on political ads, a policy that may prove to have a more substantial impact than first realized on crucial political events like the Georgia Senate runoffs.
Facebook made it clear that it is continuing the pause because Trump has not conceded: “While multiple sources have projected a presidential winner, we still believe it’s important to help prevent confusion or abuse on our platform.”
As Hector Sigala (@hgsigala) tweeted on November 11, 2020: “The GOP knows exactly what they’re doing.”
Misinformation from Trump is nothing new. In 2019, Kamala Harris, then a senator running for president and now vice president-elect, launched a campaign urging Twitter to suspend Trump’s account, citing his history of false, offensive, and inflammatory tweets. The big question, of course, is: Why did it take so long for tech platforms and media outlets to wise up to Trump’s antics? One explanation is that these companies, and the American people, were simply caught unaware the first time around. Before Trump, we had been conditioned to expect that the president of the United States would generally say things that were at least arguably true under some charitable reading of the situation.
However, that doesn’t explain why platforms have been so slow to deal with disinformation coming from the top, when they had countless examples of foreign and domestic election interference dating back to 2016 and earlier. One possible explanation is simply that power dynamics have shifted. As the Trump presidency wanes, tech companies may feel less concerned about repercussions from checking the president. With the decisive results of the election, tech companies can freely rein in disinformation coming even from the White House, confident that America (and the world) will support them.
Just as the independent role of the free press is critical for checking the power of the government, so too, we are finding, is the independent role of companies and private individuals. Tech companies have finally been able to take decisive action against disinformation, standing strong against the power of the president, and they could not have done so without the ability to independently control their platforms and the speech they host on their sites and services. Some policymakers want to restrict that ability, but we must be careful: To be truly beneficial, any regulation of speech on private platforms must leave companies like Twitter and Facebook enough room to continue acting independently, supporting free speech and making sure users have access to verified information.
What is clear in all of this is that tech companies do have the capacity to stop the spread of disinformation and protect the security of our elections — even when it comes from the president.
Tech platforms like Twitter, Facebook, and YouTube have been dealing with misinformation for years. But throughout four long years of Trump’s abysmal record of misrepresenting nearly everything, tech companies have never taken actions as strong as the ones they are taking now. Each platform has developed content-moderation policies and workflows to flag, take down, and stop the spread of false information, with mixed results. Policymakers have accused platforms of political bias in content moderation, though there is no evidence of this; some have criticized tech companies for not taking down enough content, while others have criticized them for taking down too much.