AI LEAD Act
Sponsored By: Sen. Richard J. Durbin [D-IL]
Introduced
Summary
A federal products liability framework for artificial intelligence aims to align legal responsibility with safety by defining who counts as a developer or deployer and by restricting unconscionable liability waivers. The bill draws clear lines about design duties, warnings, and when AI systems are treated as defective to push accountability into AI development and use.
- Developers face liability if a claimant proves the developer failed to exercise reasonable care in design, failed to provide adequate instructions or warnings, breached an express warranty, or sold a product that was defective and unreasonably dangerous.
- The act defines key roles and terms, including "deployer." It defines a "substantial modification" as a deliberate, unauthorized change that alters an AI product's purpose, use, function, design, or intended use, while excluding changes that only reduce or mitigate risks.
- "Harm" is defined broadly to include property damage, personal injury or death, financial or reputational injury, mental distress, and related losses. Courts may infer defect from recurring incidents and treat noncompliance with safety statutes or regulations as evidence of defect for addressed risks.
Bill Overview
Analyzed Economic Effects
2 provisions identified: 0 benefits, 0 costs, 2 mixed.
Defines AI systems and roles
This bill would define what counts as an artificial intelligence system. It would call those systems "covered products." It would name who is a developer and who is a deployer. It would define "design," "substantial modification," and what kinds of harm count in a lawsuit. These definitions would change when people or companies can be held responsible for AI problems.
New liability rules for AI makers
This bill would create four ways to sue AI developers. You could sue for careless design, inadequate warnings, a broken express promise, or a product that was dangerous when sold. A claimant would need to prove the claim by a preponderance of the evidence and show proximate cause. The bill treats violations of safety statutes or regulations as evidence of defect and allows a permissive inference of defect from recurring similar incidents. It also limits strict liability when a deployer made a major unauthorized change, and it treats risks to users under 18 as likely not open and obvious.
Sponsors & CoSponsors
Sponsor
Richard Durbin
IL • D
Cosponsors
Josh Hawley
MO • R
Cosponsored 9/29/2025
Peter Welch
VT • D
Cosponsored 3/18/2026
Angus King
ME • I
Cosponsored 3/18/2026
Marsha Blackburn
TN • R
Cosponsored 3/19/2026
Roll Call Votes
No roll call votes available for this bill.
View on Congress.gov