What Should We Do About Section 230?
Yesterday, the Attorney General held a workshop on Section 230 of the Communications Decency Act. The question was whether the law can be improved. Section 230 does need work, though there's plenty of room for debate about exactly how to fix it. These are my mostly tentative and entirely personal thoughts on the question the Attorney General has asked.
Section 230 gives digital platforms two immunities: one for publishing users' speech and one for censoring users' speech. The second is the bigger problem.

1. Immunity for publishing users' speech
When Section 230 was adopted, it was obvious that a service like AOL could not hope to monitor its users in any wholly effective way. It couldn't afford to hire tens of thousands of humans to police what was said in its chatrooms, and the easy digital connection it offered was so magical that no one wanted it to be saddled with such costs. Section 230 was an easy sell.
A lot has changed since 1996. Facebook and others have in fact already hired tens of thousands of humans to police what is said on their platforms. Combined with artificial intelligence, content fingerprinting, and more, these monitors work with considerable success to stamp out certain kinds of speech. And although none of these efforts are foolproof, preventing the worst online abuses has become part of what we expect from social media. The sweeping immunity Congress granted in Section 230 is as dated as the Macarena, another hit from 1996 whose appeal seems inexplicable today. Today, jurisdictions as similar to ours as the United Kingdom and the European Union have abandoned such broad grants of immunity, making it clear that they will severely punish any platform that fails to censor its users promptly.
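As an aside, for readers who wonder what "content fingerprinting" means in practice, here is a minimal sketch of the idea: reduce each upload to a short fingerprint and block anything that matches fingerprints of material already removed. The blocklist entries and the use of an exact cryptographic hash are illustrative assumptions on my part; production systems such as PhotoDNA rely on perceptual hashes that survive re-encoding and small edits.

```python
import hashlib

# Illustrative only: real fingerprinting systems use perceptual hashes that tolerate
# re-encoding and small edits; an exact cryptographic hash does not. The blocklist
# contents here are made up for the example.
BLOCKLIST = {
    hashlib.sha256(b"known abusive image bytes").hexdigest(),
    hashlib.sha256(b"known recruitment video bytes").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Reduce an upload to a short, comparable fingerprint."""
    return hashlib.sha256(content).hexdigest()

def screen_upload(content: bytes) -> str:
    """Block re-uploads of already-identified material; pass everything else on."""
    if fingerprint(content) in BLOCKLIST:
        return "blocked: matches previously removed content"
    return "allowed: sent to normal (human or automated) review"

if __name__ == "__main__":
    print(screen_upload(b"known abusive image bytes"))   # blocked
    print(screen_upload(b"an ordinary vacation photo"))  # allowed
```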
That doesn't mean the US should follow the same path. We don't need a special, harsher form of liability for big tech companies. But why are we still giving them a blanket immunity from ordinary tort liability for the acts of third parties? In particular, why should they be immune from liability for utterly predictable criminal use of warrant-proof encryption? I've written on this recently and won't repeat what I said there, except to make one fundamental point.
Immunity from tort liability is a subsidy, one we often give to nascent industries that capture the nation's imagination. But once they've grown big, and the harm they can cause has grown as well, that immunity has to be justified anew. In the case of warrant-proof encryption, the justifications are thin. Section 230 allows tech companies to capture all the profits to be made from encrypting their services while exempting them from the costs they are imposing on underfunded police forces and victims of crime.
That is not how our tort law usually works. Usually, courts impose liability on the party that is in the best position to minimize the harm a new product can cause. Here, that's the company that designs and markets an encryption system with predictable impact on victims of crime. Many believe that the security value of unbreakable encryption outweighs the cost to crime victims and law enforcement. Maybe so. But why leave the weighing of those costs to the blunt force and posturing of political debate? Why not decentralize and privatize that debate by putting the costs of encryption on the same company that is reaping its benefits? If the benefits outweigh the costs, the company can use its profits to insure itself and the victims of crime against the costs. Or it can seek creative technical solutions that maximize security without protecting criminals, solutions that will never emerge from a political debate. Either way, it's a private decision with few externalities, and the company that does the best job will end up with the most net revenue. That's the way tort law usually works, and it's hard to see why we shouldn't take the same tack for encryption.
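To make the cost-internalization point concrete, here is a back-of-the-envelope sketch with entirely hypothetical numbers; they are not estimates of any company's revenue or of actual crime costs, just an illustration of how liability changes the deployment calculus.

```python
# Hypothetical figures, chosen only to illustrate the cost-internalization argument;
# they are not estimates of any real company's revenue or of actual crime costs.
revenue_from_encryption = 500_000_000  # added annual revenue from offering the feature
engineering_cost        = 50_000_000   # annual cost of building and running it
expected_victim_harm    = 600_000_000  # expected annual harm from crimes it facilitates

# With a Section 230-style immunity, the harm to victims is someone else's problem:
profit_with_immunity = revenue_from_encryption - engineering_cost

# Under ordinary tort liability (or the insurance premiums it would force the company
# to buy), the expected harm lands on the company's own ledger:
net_with_liability = revenue_from_encryption - engineering_cost - expected_victim_harm

print(f"Profit under immunity:           {profit_with_immunity:>15,}")
print(f"Net value once harm is internal: {net_with_liability:>15,}")
print("Deploy as designed under liability?", net_with_liability > 0)
```

With these made-up numbers the feature looks like a clear winner under immunity and a clear loser once the harm is internalized, which is exactly when we would want the company hunting for the creative technical solutions described above.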
2. Immunity for censoring users

Detecting bias. The harder and more urgent Section 230 problem is what to do about Silicon Valley's newfound enthusiasm for censoring users whose views it disapproves of. I confess to being a conservative, whatever that means these days, and I have little doubt that social media content moderation rules are biased against conservative speech. This is hard to prove, of course, in part because social media has a host of ways to disadvantage speakers who are unpopular in the Valley. Their posts can be quarantined, so that only the speaker and a few persistent followers ever see them, and no one knows that their distribution has been suppressed. Or they can be demonetized, so that Valley-unpopular speakers, even those with large followings, cannot use ad funding to expand their reach. Or facially neutral rules, such as prohibitions on doxing or encouraging harassment, are applied with maximum force only to the unpopular. Combined with the utterly opaque talk-to-the-bot mechanisms for appeal that the Valley has embraced, these tools allow even one or two low-level but highly motivated content moderators to sabotage their target's speech.
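To illustrate what a quarantine amounts to, here is a toy sketch; the data model and the "few persistent followers" sampling rule are my own assumptions for illustration, not any platform's actual code.

```python
import random

# Illustrative only: the data model and the sampling rule are assumptions,
# not any platform's actual code.

def audience_for(author, followers, quarantined=False, sample_size=5):
    """Return the set of accounts whose feeds are eligible to show the post."""
    if not quarantined:
        return {author} | set(followers)
    # Quarantine: the author still sees their own post as published, and a handful
    # of followers still see it, so neither side notices distribution was narrowed.
    sampled = random.sample(sorted(followers), min(sample_size, len(followers)))
    return {author} | set(sampled)

followers = {f"user{i}" for i in range(10_000)}
print(len(audience_for("speaker", followers)))                    # 10001
print(len(audience_for("speaker", followers, quarantined=True)))  # 6
```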
Artificial intelligence won't solve this problem. It is likely to make it worse. AI is famous for imitating the biases of the decisionmakers it learns from and for then being conveniently incapable of explaining how it arrived at its own decisions. No conservative should have much faith in a machine that learns its content moderation lessons from current practice in Silicon Valley.
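Here is a toy example of how that imitation works, using entirely synthetic data: if past human moderators removed one viewpoint's posts three times as often, a model that learns from those decisions simply reproduces the disparity. The group names and removal rates below are invented for illustration.

```python
import random

random.seed(0)

# Synthetic history of past human moderation decisions. The posts in the two groups
# are interchangeable; the only difference is how often past moderators removed them.
def past_decisions(n=10_000):
    history = []
    for _ in range(n):
        viewpoint = random.choice(["favored", "disfavored"])
        removed = random.random() < (0.03 if viewpoint == "favored" else 0.09)
        history.append((viewpoint, removed))
    return history

# A naive "model" that learns nothing but the historical removal rate for each group,
# which is roughly what a classifier does when viewpoint (or a proxy for it) turns
# out to be its most predictive feature.
def learn_removal_rates(history):
    rates = {}
    for group in ("favored", "disfavored"):
        outcomes = [removed for viewpoint, removed in history if viewpoint == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

print(learn_removal_rates(past_decisions()))
# The learned policy reproduces the roughly 3x disparity it was trained on,
# and nothing in the trained model explains why.
```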
Foreign government interference. European governments, unbound by the first amendment, have not been shy about telling Silicon Valley to suppress speech they dislike, including true facts about people who claim a right to be forgotten, charges that a politician belongs to a fascist party, and what they call hate speech. Indeed, much of the Valley has already surrendered, agreeing to use its terms of service to enforce Europe's sweeping view of hate speech, under which the President's tweets and the Attorney General's speeches could probably be banned today.
Europe is not alone in its determination to limit what Americans can say and read. Baidu has argued successfully that it has a first amendment right to return nothing but sunny tourist pictures when Americans search for "Tiananmen Square June 1989." Jian Zhang v. Baidu.Com Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014). Today, any government but ours is free to order a US company to suppress the speech of Americans the government doesn't like.
In the long run it is dangerous for American democracy to give highly influential social media firms a blanket immunity when they bow to foreign government pressure and suppress the speech of Americans. We need to armor ourselves against such tactics, not facilitate them.
Regulation deserves another look. This isn't the first time we've faced a disruptive new technology that changed the way Americans talked to each other. The rise of broadcasting a hundred years ago was at least as transformational, and as threatening to the political order, as social media today. It played a big role in the success of Hitler and Mussolini, not to mention FDR and Father Coughlin.
American politicians worried that radio and television owners could sway popular opinion in unpredictable or irresponsible ways. They responded with a remarkable barrage of new regulation, all of it designed to ensure that wealthy owners of the disruptive technology did not use it to unduly distort the national dialogue. Broadcasters were required to get government licenses, not once but over and over again. Foreign interests were denied the right to own stations or networks. A "fairness" doctrine required that broadcasters present issues in an honest, equitable, and balanced way. Opposing candidates for office had to be given equal air time, and political ads had to be aired at the lowest commercial rate. Certain words (at least seven) could not be said on the radio.
This entire edifice of regulation has acquired a disreputable air in elite circles, and some of it has been repealed. Frankly, though, it doesn't look so bad compared to having a billionaire tech bro (or his underpaid contract workers) decide that carpenters communicating with friends in Sioux Falls are forbidden to "deadname" Chelsea Manning or to complain about Congress's failure to subpoena Eric Ciaramella.
The sweeping broadcast regulatory regime that reached its peak in the 1950s was designed to prevent a few rich people from using technology to seize control of the national conversation, and it worked. The regulatory elements all pretty much passed constitutional muster, and the worst that can be said about them today is that they made public discourse mushy and bland because broadcasters were cautious about contradicting views held by a substantial part of the American public.
Viewed from 2020, that doesn't sound half bad. We might be better off, and less divided, if social media platforms were more cautious today about suppressing views held by a substantial part of the American public.
Whether all these rules would survive contemporary first amendment review is hard to know. But government action to protect the speech of the many from the censorship of the privileged deserves, and gets, more leeway from the courts than the free speech absolutists would have you believe. See, e.g., Bartnicki v. Vopper, 532 U.S. 514 (2001).
That said, regulation has many risks, not least the risk of abuse. Each political party in our divided country ought to ask what the other party would do if given even more power over what can be said online. It's a reason to look elsewhere for solutions.
Network effects and competitive dominance. Maybe we wouldn't need a lot of regulation to protect minority views if there were more competition in social media, so that those who don't like a particular platform's censorship rules could go elsewhere to express their views.
In practice, they can't. YouTube dominates video platforms, Facebook dominates social platforms, Amazon dominates online book sales, etc. Thanks to network effects, if you want to spread your views by book, by video, or by social media post, you have to use their platforms and live with their censorship regimes.
It's hard to say without investigation whether these platforms have violated antitrust laws in acquiring their dominance or in exercising it. But the effect of that dominance on what Americans can say to each other, and thus on political outcomes, must be part of any antitrust review of their impact. Antitrust enforcement often turns on whether a competitive practice causes consumer harm, and suppression of consumer speech has not usually been seen as such a harm. It should be. Suppression of speech it dislikes may well be one way Silicon Valley takes monopoly profits in something other than cash. If so, there could hardly be a higher priority for antitrust enforcement because such a use of monopoly strikes at the heart of American free speech values.
One word of caution: Breaking up dominant platforms in the hope of spurring a competition of ideas won't work if the result is to turn the market over to Chinese companies that already have a similar scale and even less interest in fostering robust debate online. If we're going to spur competition in social media, we need to make sure we aren't trading Silicon Valley censorship for the Chinese brand.
Transparency. Transparency is everyone's favorite first step for addressing the reality and the perception of bias in content moderation. Surely if the rules were clearer, if the bans and demonetizations could be challenged, if inconsistencies could be forced into the light and corrected, we'd all be less angry and suspicious, and the companies would behave more fairly. I tend to agree with that sentiment, but we shouldn't kid ourselves. If the rules are made public, if the procedures are made more open, hell, if the platforms just decide to have people answer complaints instead of leaving that to Python scripts, the cost will be enormous.
And not just in money. All of the rules, all of the procedures, can be gamed, and more effectively the more transparent they are. Speakers with bad intent will go to the very edge of the rules; they will try to swamp the procedures. And ideologues among the content moderators will still have room to seize on technicalities to nuke unpopular speakers. Transparency may well be a good idea, but its flaws are going to be painful to behold if that's the direction our effort to discipline Section 230 takes.
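To see why, consider a deliberately crude sketch of the kind of scripted appeals handling I described above. The trigger phrases and canned responses are invented, but the structural point holds: fixed, published rules are exactly what determined actors learn to game, and a single motivated human in the loop can still dictate the outcome.

```python
from typing import Optional

# A crude sketch of a scripted appeals handler. The trigger phrases and canned
# responses are invented for illustration; no platform's actual rules appear here.
CANNED_DENIAL = "Thank you for your appeal. After review, your content remains removed."
CANNED_REINSTATE = "Thank you for your appeal. Your content has been reinstated."

# Once trigger phrases like these become transparent, bad-faith appellants copy them
# verbatim, whatever the merits of their appeal.
REINSTATE_TRIGGERS = ("newsworthy", "satire", "documented harassment of me")

def handle_appeal(appeal_text: str, moderator_override: Optional[bool] = None) -> str:
    """Return a canned response; a motivated human reviewer can still set the outcome."""
    if moderator_override is not None:
        return CANNED_REINSTATE if moderator_override else CANNED_DENIAL
    if any(phrase in appeal_text.lower() for phrase in REINSTATE_TRIGGERS):
        return CANNED_REINSTATE
    return CANNED_DENIAL

print(handle_appeal("my post was removed unfairly"))               # canned denial
print(handle_appeal("this was satire, obviously"))                 # gamed reinstatement
print(handle_appeal("this was satire", moderator_override=False))  # human veto wins anyway
```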
3. What is to be done?
So I don't have much certainty to offer. But if I were dealing with the Section 230 speech suppression immunity today, I'd start with something like the following:
First, treat speech suppression as an antitrust problem, asking what can be done to create more competition, especially ideological and speech competition, among social media platforms. Maybe breakups would work, although network effects are remarkably resilient. Maybe there are ways antitrust law can be used to regulate monopolistic suppression of speech. In that regard, the most promising measures probably are requiring further transparency and procedural fairness from the speech suppression machinery, perhaps backed up by governmental subpoenas to investigate speech suppression accusations.
Second, surely everyone can agree that foreign governments and billionaires shouldn't play a role in deciding what Americans can say to each other. We need to bar foreign ownership of social media platforms that are capable of playing a large role in our political dialogue. We should also use the Foreign Agent Registration Act or something like it to require that speech driven by foreign governments be prominently identified as such. And we should sanction the nations that try to do so anyway.
And finally, here's a no-brainer. If nothing else, it's clear that Section 230 is one of the most controversial laws on the books. It is unlikely to go another five years without being substantially amended. So why in God's name are we writing the substance of Section 230 into free trade deals, notably the USMCA? Adding Section 230 to a free trade treaty makes the law a kind of low-rent constitutional amendment, since if we want to change it in the future, organized tech lobbies and our trading partners will claim that we're violating international law. Why would we do this to ourselves? It's surely time for this administration to take Section 230 out of its standard free-trade negotiating package.
Note: I have many friends, colleagues, and clients who will disagree with much of what I say here. Don't blame them. These are my views, not those of my clients, my law firm, or anyone else.