Artificial Intelligence Scam Prevention Act
Sponsored By: Senator Amy Klobuchar
Introduced
Summary
This bill aims to stop AI-enabled impersonation and scams. It would make it illegal to use artificial intelligence to imitate people, businesses, or government officials, and it would require clear disclosures when AI is used in calls or texts.
- Households and consumers would get mandatory start-of-call or start-of-text disclosures when AI is used, plus better FTC complaint logging and an updated consumer portal to share AI-scam information.
- Older adults would receive dedicated outreach through an addition to the Seniors Fraud Advisory Office focused on AI scams, along with information tailored to victims.
- Telemarketers and communications providers would face expanded rules covering text messages and video calls, with FCC rulemaking required within 270 days and a joint FTC-FCC advisory group that would operate for five years and report to Congress annually.
Bill Overview
Analyzed Economic Effects
3 provisions identified: 3 benefits, 0 costs, 0 mixed.
Ban AI impersonation and empower FTC
If enacted, the bill would make it illegal to use AI to impersonate a government, business, official, or another person to commit fraud. It would also ban copying someone’s voice or image with AI to defraud and ban knowingly helping others do those acts. The Federal Trade Commission would be able to treat those violations as unfair or deceptive practices and use its full enforcement powers and penalties. The bill says this would not limit other FTC authorities.
FTC portal, reports, and advisory group
If enacted, the bill would require the FTC to update its public scam portal with searchable, regional information on AI-enabled scams and contacts for law enforcement and adult protective services. The FTC would log complaints about mail, TV, internet, telemarketing, and robocall fraud in the Consumer Sentinel Network and make those reports available to law enforcement. The FTC and FCC would form an Artificial Intelligence Scams Advisory Group to make model materials and guidance, report to Congress within one year and annually, and end five years after enactment. The FTC would also report to Congress within 180 days and annually on AI-related scams and policy recommendations.
Require AI disclosure for calls and texts
If enacted, the bill would update telemarketing and communications law to cover text messages, app-to-person messages, and video conference calls. It would define AI-generated or prerecorded voice to include speech that appears to come from a real person who did not actually speak it. Any call or text that uses AI to emulate a human, or that is sent by an automatic dialing system, would have to clearly disclose at the start of the message that AI or an autodialer is being used. The Federal Communications Commission would have to issue implementing rules within 270 days of enactment.
Sponsors & CoSponsors
Sponsor
Amy Klobuchar
MN • D
Cosponsors
Shelley Capito
WV • R
Cosponsored 12/16/2025
Roll Call Votes
No roll call votes available for this bill.
View on Congress.gov