Section 230 Protects Tech Platforms at Your Legal Expense | Opinion

The Supreme Court will soon hear a case that could transform the way we post and consume information on the internet. The justices will evaluate Section 230, a law that shields platforms from liability, such as defamation lawsuits, for most user-generated posts.

Because of Section 230, tech platforms like Instagram, Reddit, Facebook, and YouTube are not liable for what their users post, so they do not have to employ newsrooms to rigorously fact-check claims or verify sources the way a legacy media company must. No wonder they are pro-Section 230: their entire business models would be upended if they had to bear the legal costs of any lawsuit arising from an erroneous or defamatory user post. Companies like Meta, Twitter, and Microsoft argue that online discourse would be more limited and less free if tech companies had to police every single piece of content for its veracity, accounting for all the nuance and gray areas of our language and our opinions.

Critics of Section 230 argue that the online landscape would be safer with much stricter moderation, which could curb harmful or illegal activity like terrorist recruitment videos. One could surmise that if Twitter had been liable for the posts that went out on Jan. 6, it might have made different content moderation choices that day.

If we believe that social media exists, serves a purpose in society, and cannot be put back into Pandora's box, we have to find a way to balance these interests. Rigorously moderating, erasing, or censoring billions of posts and evaluating each one for defamation risk would be a near-impossible task. It would require a tremendous amount of manpower and hiring, something tech companies want to limit as they turn to artificial intelligence to tame their platforms.

What is contradictory about the free speech ideology so central to these companies' business models is that the protections they receive do not pass through to the individual. There's a deep conflict of interest: the content most likely to perform well on social media and drive up revenue and eyeballs is also the content more likely to invite legal risk for the user. Evaluating and removing patently false or dangerous statements is of course important, as in the case of Alex Jones. But many others with valid information about powerful entities or people, such as whistleblowers and workers at corporations with information in the public interest, face crippling legal debts for sharing controversial content online. Platforms benefit from having this content, as they are currently free of liability themselves and are raking in advertising dollars correlated with views. The individual poster, meanwhile, can face significant legal and financial risk.

At our company Lioness, which helps people bring forward stories about power imbalances and injustices, we have seen social media posts go viral when they include deeply human, controversial content. Our most viral social media post to date told the story of a Black UPS delivery driver and described the vicious racism he experienced from the community and in the workplace. The story includes a description of how a white woman refused to work with him, claiming it was because he was Black. The post, an Instagram reel, received thousands of comments and millions of views.

Imagine if that woman had chosen to sue him or our account for defamation. Even though the story was verified by several people and brought millions of views and advertiser dollars to Instagram and Meta, we, and potentially the UPS driver, would be on the hook to defend ourselves legally. While newsrooms are equipped with First Amendment legal teams, many individuals cannot afford the legal expertise even to get a lawsuit dismissed.


Research supports the idea that posts that spark debate tend to do well on social media. In an analysis of 47,000 Reddit posts, researchers at the University of Central Florida found that disagreement and controversy may spread faster and drive more engagement than agreement.

Beyond this, social platforms like Medium, Twitter, Blind, and Glassdoor, all protected by Section 230, have become very popular with people who want to share sensitive information about their workplaces. But while significant reforms have been made to allow for worker speech, including the abolition of non-disclosure agreements at tech behemoths like Apple and Microsoft, people are still petrified that sharing their experiences will invite legal attacks from former employers. Ironically, we have seen workers from some of the biggest free speech platforms in the world balk at speaking up about their own experiences there, for fear of fines for every breach of their settlement agreements, or a defamation lawsuit.

Free speech will always carry some risk for the individual who brings a story forward. As social media companies tout their support of Section 230 far and wide, it's important to remember that online platforms are driven by individuals who, in order to benefit the public, take on the very lawsuits those websites fear.

Ariella Steinhorn and Amber Scorah are partners at Lioness.

The views expressed in this article are the writers' own.
