
Section 230 — Internet Platform Liability

Updated Apr 21, 2026

Section 230 of the Communications Decency Act (47 U.S.C. § 230, enacted 1996) is built around a 26-word immunity that enabled the modern internet — and has become one of the most contested laws in Congress: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In practice, Facebook, YouTube, Reddit, and virtually every platform hosting user-generated content are not legally liable for what users post — even defamatory or harmful content — so long as the platform didn't create it. Section 230 also provides a "Good Samaritan" protection: platforms can moderate content in good faith (removing posts, banning users) without that moderation converting them into publishers liable for every post. Without these two immunities, platforms would face an impossible choice: moderate everything (infeasible at scale) or moderate nothing (and face liability for every harmful post).

The exceptions are narrow: federal criminal law, intellectual property, FOSTA-SESTA (sex trafficking), and CSAM. Section 230 faces reform pressure from both parties — conservatives argue platforms censor speech with no accountability; progressives argue platforms allow harmful content without accountability. Major reform legislation has been introduced in every Congress since 2020 but has not passed.

Current Law (2026)

Enacted: Communications Decency Act of 1996, § 230 (47 U.S.C. § 230)
Core immunity: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"
Good Samaritan protection: Platforms may moderate content in good faith without losing immunity
Exceptions: Federal criminal law, intellectual property, FOSTA-SESTA (sex trafficking), CSAM, state AG consumer protection actions (limited)
Enforcement: Private litigation (§ 230 is raised as a defense); no federal agency enforces § 230 itself
  • 47 U.S.C. § 230(c)(1) — Publisher immunity (no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider — the core shield that protects platforms from liability for user-generated content)
  • 47 U.S.C. § 230(c)(2) — Good Samaritan moderation (no civil liability for any action voluntarily taken in good faith to restrict access to or availability of material the provider considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable — protects content moderation decisions)
  • 47 U.S.C. § 230(e)(1) — Federal criminal law exception (nothing in § 230 shall be construed to impair enforcement of federal criminal statutes)
  • 47 U.S.C. § 230(e)(2) — Intellectual property exception (§ 230 shall not be construed to limit or expand any law pertaining to intellectual property)
  • 47 U.S.C. § 230(e)(5) — FOSTA-SESTA exception (2018 amendment: § 230 shall not impair federal or state sex trafficking laws or civil claims relating to sex trafficking)
  • 47 U.S.C. § 230(f) — Definitions (interactive computer service: any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including services that provide access to the Internet; information content provider: any person or entity responsible, in whole or in part, for the creation or development of information)

How It Works

Section 230's core is a single 26-word sentence that fundamentally shapes the internet. It provides two related protections that together let platforms host user-generated content while also moderating it, without being sued for either choice.

The publisher shield (§ 230(c)(1)) immunizes platforms from being treated as the "publisher or speaker" of third-party content. Without it, every platform hosting user content — social media, review sites, forums, comment sections, cloud services — could be sued for defamation, negligence, or other torts based on what users post, just as a newspaper can be sued for what it prints. Courts have interpreted this broadly: platforms are immune from most state-law tort claims based on third-party content, including defamation, fraud, and products liability for marketplace listings, even when they exercise editorial functions like recommendation, ranking, or curation.

The Good Samaritan shield (§ 230(c)(2)) ensures platforms can moderate without losing that immunity. Before § 230, moderating content could convert a platform into a "publisher" liable for everything it missed (the result in Stratton Oakmont v. Prodigy (1995), the case that prompted the statute). Section 230(c)(2) resolved the paradox by protecting platforms that remove, restrict, or label content they consider objectionable in good faith, without that editorial activity creating liability for the content left up.
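
Courts commonly distill § 230(c)(1) into a three-part test (articulated in Barnes v. Yahoo!, 9th Cir. 2009). Here is a minimal Python sketch of that analysis; the class and function names are ours, and the boolean inputs drastically simplify what is in practice a fact-intensive judicial inquiry:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Simplified inputs to the § 230(c)(1) analysis (illustrative only)."""
    defendant_is_ics: bool          # provider/user of an "interactive computer service"?
    treats_as_publisher: bool       # does the claim treat the defendant as publisher/speaker?
    content_from_third_party: bool  # was the content created by "another information content provider"?

def c1_immunity(claim: Claim) -> bool:
    """Barnes v. Yahoo!-style three-prong test for § 230(c)(1) immunity.

    All three prongs must be satisfied; failing any one means the claim
    proceeds on its ordinary merits. § 230 is a shield, not the whole case.
    """
    return (
        claim.defendant_is_ics
        and claim.treats_as_publisher
        and claim.content_from_third_party
    )

# A defamation claim against a forum over a user's post: immune
print(c1_immunity(Claim(True, True, True)))   # True
# The platform itself authored the statement: no (c)(1) shield
print(c1_immunity(Claim(True, True, False)))  # False
```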

Section 230 has important exceptions. Federal criminal law applies regardless — platforms can be prosecuted for facilitating crimes. Intellectual property claims (copyright, trademark) are not barred — the DMCA's notice-and-takedown system governs copyright separately. FOSTA-SESTA (2018) carved out sex trafficking: platforms can be held civilly and criminally liable if they knowingly facilitate trafficking. And § 230 does not protect platforms acting as "information content providers" — if the platform itself creates or materially contributes to unlawful content, it loses immunity. The most active legal frontier is algorithmic amplification: whether § 230 protects platforms when their algorithms recommend or target harmful user content. The Supreme Court addressed this in Gonzalez v. Google (2023) but issued a narrow ruling that left the core question unresolved. The policy debate pits conservative critics (who argue platforms use moderation to silence conservative speech) against liberal critics (who argue § 230 enables profit from harmful content) — goals that point in opposite directions and explain why comprehensive reform has repeatedly failed in Congress despite broad bipartisan frustration.
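
To see how these exceptions interact with the core shield, they can be modeled as threshold carve-outs checked before the (c)(1) prongs. A hedged, self-contained sketch follows; the claim-type labels are our shorthand, not statutory terms of art:

```python
# Claim categories that § 230(e) carves out of the immunity entirely
# (labels are illustrative shorthand, not statutory language).
CARVE_OUTS = {
    "federal_criminal",       # § 230(e)(1)
    "intellectual_property",  # § 230(e)(2)
    "sex_trafficking",        # § 230(e)(5), added by FOSTA-SESTA (2018)
}

def shield_applies(claim_type: str, platform_created_content: bool) -> bool:
    """Pre-checks that defeat § 230 immunity regardless of the (c)(1) prongs."""
    if claim_type in CARVE_OUTS:
        return False  # the statute exempts these claims from the shield
    if platform_created_content:
        return False  # platform acted as an "information content provider"
    return True  # proceed to the (c)(1) three-prong analysis sketched above
```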

How It Affects You

If a platform removed your content, banned your account, or labeled your post, Section 230 is why you almost certainly can't successfully sue the platform for it — and why that's intentional. The Good Samaritan protection in § 230(c)(2) explicitly protects platforms that moderate content "in good faith," meaning they can take down posts they consider objectionable, harmful, or against their terms of service, without those editorial decisions converting them into publishers liable for everything else on the platform. If you think a removal was unjust, your options are limited: appeal through the platform's own process (most major platforms have internal appeals), complain to the FTC if the removal was part of a systematic deceptive practice (an unlikely avenue for individual content disputes), or document your case in public. The Moody v. NetChoice (2024) Supreme Court ruling confirmed that platforms themselves have First Amendment rights to make editorial decisions — meaning government cannot easily force platforms to carry your content either. What you can do: if someone posted false content about you, sue the person who posted it (defamation), not the platform that hosted it.

If you've been harmed by content on a platform — defamatory reviews, harassment, non-consensual intimate images, fraud through marketplace listings — Section 230 creates a significant barrier to legal recovery from the platform itself. You can sue the individual who created the content, but identifying anonymous posters requires a subpoena, and courts often demand a preliminary showing that your claim has merit before unmasking an anonymous speaker. The meaningful exceptions: if the harmful content relates to sex trafficking, FOSTA-SESTA lets you sue a platform that knowingly facilitated the trafficking. If the content infringes copyright or trademark, § 230 never applied in the first place (the § 230(e)(2) IP exception); the DMCA's notice-and-takedown system requires platforms to remove infringing material expeditiously, and platforms that don't comply lose the DMCA safe harbor and face direct copyright liability. State attorneys general can bring limited consumer protection actions. The practical reality: for most private harms from third-party content, Section 230 puts the platform effectively beyond reach even when the content is plainly harmful.
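
For copyright specifically, the operative checklist comes from the DMCA rather than § 230: a valid takedown notice must contain the six elements of 17 U.S.C. § 512(c)(3)(A). A minimal sketch of that checklist; the field names are our shorthand, not statutory language:

```python
# The six substantive elements a DMCA takedown notice must contain
# under 17 U.S.C. § 512(c)(3)(A). Field names are illustrative.
REQUIRED_NOTICE_FIELDS = [
    "signature",            # physical or electronic signature of the rights holder or agent
    "work_identified",      # identification of the copyrighted work claimed to be infringed
    "material_identified",  # identification and location of the allegedly infringing material
    "contact_information",  # address, phone, and/or email of the complaining party
    "good_faith_statement", # good-faith belief the use is unauthorized
    "accuracy_statement",   # accuracy and authority, stated under penalty of perjury
]

def notice_is_complete(notice: dict) -> bool:
    """Return True if every § 512(c)(3)(A) element is present and non-empty."""
    return all(notice.get(field) for field in REQUIRED_NOTICE_FIELDS)
```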

If you run a website, forum, blog, or app with any user-generated content, Section 230 is the legal foundation that makes your business model viable. The protections apply to you just as fully as to Facebook — a local news site with comments, a neighborhood forum, a product review marketplace, or an API that surfaces third-party data. You can moderate aggressively (remove spam, ban bad actors, curate content) without losing immunity for content you don't catch. You can also do nothing, and you won't be liable for what users post (with the same exceptions that apply to large platforms — federal crimes, IP, and sex trafficking). The main risk area: if your platform is designed to elicit specific types of unlawful content (e.g., a service explicitly built to help users commit fraud), courts may find you are an "information content provider" that materially contributed to the illegality (the result in Fair Housing Council v. Roommates.com (9th Cir. 2008), where the site's mandatory questionnaire made it a co-developer of discriminatory listings), losing § 230 protection and becoming liable. Keep your platform's terms of service clear about prohibited uses, moderate in good faith, and consult counsel before building features that could be argued to solicit unlawful content.
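
One practical way to support a later "good faith" showing under § 230(c)(2) is to log every moderation action against the specific terms-of-service clause it enforces. A hypothetical record schema follows, purely illustrative and not legal advice:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    """Hypothetical audit record for a content moderation decision.

    A contemporaneous log tying each action to a specific terms-of-service
    clause is one way to evidence the good faith that § 230(c)(2) requires
    (illustrative schema only).
    """
    content_id: str
    action: str      # e.g. "remove", "label", "restrict", "ban_user"
    tos_clause: str  # the specific prohibited-use provision relied on
    reviewer: str    # who (or what system) made the call
    rationale: str   # short human-readable explanation
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example usage
record = ModerationRecord(
    content_id="post-8841",
    action="remove",
    tos_clause="§4.2 (spam and deceptive practices)",
    reviewer="trust-and-safety",
    rationale="Bulk-posted identical promotional links across 12 threads",
)
```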

If you're watching Section 230 reform in Congress, the window is narrow but the proposals are significant. The Sunset to Reform Section 230 Act introduced in the 119th Congress would set a hard expiration of § 230 immunity on December 31, 2026 — eliminating the protection unless Congress actively reauthorizes it with new conditions. This is an unusual legislative approach that's more of a forcing mechanism than a substantive reform bill. The underlying political divide hasn't changed: Republicans frame the issue as platform censorship of conservative speech; Democrats frame it as platform amplification of harmful content. These goals point in opposite directions — requiring platforms to carry more speech versus punishing them for carrying too much harmful speech — which is why comprehensive § 230 reform has failed repeatedly despite broad bipartisan frustration. The most legally viable reform scenario involves narrowing immunity for algorithmic amplification (requiring platforms to prove their recommendation systems didn't materially contribute to harm) rather than repealing the core § 230(c)(1) publisher immunity.

State Variations

Section 230 is federal law that preempts contrary state laws:

  • 47 U.S.C. § 230(e)(3) explicitly preempts state laws that are inconsistent with § 230
  • Several states have enacted laws attempting to regulate platform content moderation (Texas HB 20, Florida SB 7072) — the Supreme Court addressed these in Moody v. NetChoice (2024), finding the laws likely unconstitutional under the First Amendment but remanding for further analysis
  • State consumer protection laws, data privacy laws, and election laws may impose obligations on platforms that do not directly conflict with § 230
  • Some state courts have read § 230 more narrowly than most federal courts, though state courts remain bound by the federal statute itself

Implementing Regulations

Section 230 (47 U.S.C. § 230) is self-executing — it provides immunity to interactive computer services through judicial application. No CFR implementing regulations exist. The FCC has not issued rules implementing Section 230, despite periodic debates about whether it should.

Pending Legislation

  • HR 6746 — Sunset To Reform Section 230 Act: sets a hard expiration for Section 230, ending platform legal immunities after December 31, 2026. Status: Introduced.
  • S 3546 — Sunset Section 230 Act: would repeal Section 230, phase out protections over two years, and rewrite definitions for internet access providers, libraries, and schools. Status: Introduced.

Recent Developments

  • Trump administration and "Big Tech censorship": The Trump administration's approach to Section 230 is shaped by the narrative that large platforms (Facebook, YouTube, X) have "censored" conservative voices — a framing that cuts against § 230's protection for platform editorial decisions. FCC Chair Brendan Carr has discussed using FCC authority to address platform content moderation, arguing that large platforms should face common-carrier-style obligations to carry speech. However, the First Amendment protections confirmed in Moody v. NetChoice (2024) limit how far government can mandate platforms to carry content.
  • X and government speech issues: Elon Musk's ownership of X (Twitter) and his advisory role in the Trump administration created novel Section 230 and First Amendment issues. When X/Twitter reinstated accounts of political figures, promoted certain content, and moderated against others, the question arose whether government involvement in those decisions converted private platform decisions into state action — potentially triggering First Amendment constraints on government-directed censorship or promotion. Courts have not yet fully addressed this issue.
  • Kids Online Safety Act: KOSA — requiring platforms to minimize design features that harm minors and giving parents more controls — advanced in Congress but faced First Amendment challenges. Supporters argued § 230 should not protect platforms from liability for algorithmic amplification of harmful content to minors; opponents argued KOSA would require platforms to censor protected speech to avoid liability.
  • AI and § 230 boundaries: When AI tools generate responses based on third-party content (search AI, chatbots, recommendation systems), whether the platform is a "provider or user of an interactive computer service" hosting third-party content (§ 230-protected) or an "information content provider" creating its own content (not § 230-protected) remains unresolved. The Supreme Court's avoidance of this issue in Gonzalez (2023) left lower courts to navigate it case by case.
  • Moody v. NetChoice (2024): The Supreme Court held that states' laws (Florida's SB 7072, Texas's HB 20) attempting to prohibit large social media platforms from engaging in viewpoint-based content moderation were likely unconstitutional under the First Amendment — protecting platforms' editorial discretion as speech. The ruling was not final on the merits but significantly constrained state-level Section 230-adjacent regulation.
