Washington, DC, June 28, 2025 — Encode AI, along with Common Sense Media, Fairplay, and the Young People’s Alliance, led a coalition of more than 140 advocacy organizations in calling on the Senate to strike a provision that would ban enforcement of state-level AI legislation for the next decade.
“We write to urge you to oppose the provision in the House Energy and Commerce Committee’s Budget Reconciliation text that would put a moratorium on the enforcement of state artificial intelligence (AI) legislation for the next ten years,” wrote the coalition. “By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control. As organizations working on the frontline of the consequences of AI development with no guardrails, we know what this would mean for our children.”
“As written, the provision is so broad it would block states from enacting any AI-related legislation, including bills addressing hyper-sexualized AI companions, social media recommendation algorithms, protections for whistleblowers, and more,” the coalition continued. “It ties lawmakers’ hands for a decade, sidelining policymakers and leaving families on their own as they face risks and harms that emerge with this fast-evolving technology in the years to come.”
Today, Senator Blackburn shared the letter, stating: “Just last year, millions of high school students said they knew a classmate who had been victimized by AI-generated image-based sexual abuse. This is why countless organizations are opposing misguided efforts to block state laws on AI. We must stand with them.”
“For over a decade, victims and the public have relied on state governments for what little protection they have against fast-moving technologies like social media—and now AI,” said Adam Billen, Vice President of Public Policy at Encode AI. “Big Tech knows it can stall legislation in Congress, so now it wants to strip states of the power to enforce current and future laws that safeguard the public from AI-driven harms.”
The provision was slightly modified this week after the parliamentarian reversed her original ruling on it, forcing the authors to clarify that it applies only to the new $500M in BEAD funding. Still, a plain reading of the text shows that any state accepting a portion of the new $500M would put its full share of the existing $42.5B in BEAD funding at risk. The provision continues to jeopardize an incredibly wide range of basic protections, exposes small states to lawsuits they cannot afford to defend against, undermines the basic tenets of federalism, and would incentivize states to adopt broad private rights of action as the enforcement mechanism in every AI bill going forward.
The full letter is available here.