Facebook’s ‘Supreme Court’ Upheld Its Trump Ban, But It’s Little More Than Judicial Theater

You may have already heard that on Wednesday Facebook’s oversight board upheld the site’s decision to suspend former President Donald Trump’s account, at least for the time being. That’s a noteworthy outcome in and of itself, with wide-ranging consequences for how Trump communicates with his supporters and shapes the future of the GOP. But as FiveThirtyEight’s tech and politics reporter, I had my eye on the process as much as the outcome, and on what it tells us about the state of Big Tech. Namely, it shows that when companies face only the threat of regulation, they will find ways to self-regulate that amount to little more than judicial theater.

The oversight board’s ruling was the latest chapter in a story that started in January. After the Jan. 6 attack on the Capitol, Trump’s account was suspended over posts praising the attackers. Throwing the leader of the free world in “Facebook jail” was controversial, to say the least, but other tech platforms followed suit. After Inauguration Day, Facebook referred the case to its oversight board before taking any further action. The board is composed of 20 members, including an impressive slate of former judges, legal experts, and a Nobel laureate.

On Wednesday, the board released its decision, stating that Facebook was justified in suspending Trump’s account, but that it shouldn’t have done so indefinitely because such a suspension is not outlined in the site’s policies. (Instead, the board said, Facebook should have suspended his account for a specified amount of time, or else banned Trump entirely.) The board then said Facebook has to “apply and justify a defined penalty” in the next six months.

In its 12,000-word decision released Wednesday, the board meticulously breaks down whether the suspension of Trump’s account was justified, drawing on Facebook’s community standards, the site’s previous actions, and nearly 10,000 public comments. It also cites several United Nations documents, such as its Guiding Principles on Business and Human Rights — lofty ideals that Facebook has, to varying degrees, endorsed, but which are not legally binding. The board criticizes Facebook for trying to shirk its responsibility to establish and enforce policies by punting this decision to the board, and for failing to answer questions about whether the site’s own algorithm may have promoted Trump’s content in the run-up to the Jan. 6 attack.

But all of this — the critical tone, the authoritative language, the multiple citations of U.N. documents — is just very well-executed stage dressing. The oversight board is ultimately a creation of Facebook, funded by Facebook and designed to serve Facebook: as a private, for-profit company, Facebook has little incentive to invest in projects that could cause it more harm than good. The social media giant funneled $130 million into an irrevocable trust to fund the board for at least six years, money that helps pay the board members’ six-figure salaries so they can write lengthy musings that ultimately hold superficial authority: The board’s decisions are “binding,” according to its website, meaning that Facebook “will be required to implement it unless doing so could violate the law.” But required by whom? The board, which Facebook created? In practice, Facebook can take the board’s advice, or not. It could dissolve the board tomorrow. It’s all just regulatory pageantry.

That’s the rub: There is currently no legal process that can hold Big Tech accountable for its moderation policies. Facebook, along with the rest of the tech industry, is almost entirely unregulated. Aside from hard-line restrictions around, for example, child pornography, there are virtually no legal repercussions for any decisions these companies make. For a long time, that meant Big Tech did pretty much whatever it wanted. Now, facing mounting public and political pressure to crack down on some of the industry’s worst habits, and in an effort to avoid actual regulation, Facebook has created its own version of self-regulation, and this is what it looks like.

This isn’t to say that federal regulations are necessarily the answer. It’s hard to imagine legislation that could hold platforms accountable for things like misinformation and extremist content while also protecting free speech and avoiding the destruction of some of the best parts of the internet. I don’t envy anyone trying to solve this complex puzzle. Despite its age, the internet is still in many ways a new frontier. 

But in the absence of regulation, and under its ever-looming threat, we’re left with judicial theater that offers no practical accountability and fails to address some of the most glaring problems of our modern tech oligopoly. This is Facebook’s answer to the problem. It’s up to the rest of us to decide whether that’s sufficient.
