Encode AI Applauds Newly Announced Amendments to SB 53

FOR IMMEDIATE RELEASE

July 9, 2025

Contact: Carrie Hutcheson, chutcheson@glenechogroup.com

Encode AI Applauds Newly Announced Amendments to Senate Bill (SB) 53 That Will Advance Responsible AI Development in California

SB 53 Amendments Would Codify Recommendations from the Joint California Policy Working Group on AI Frontier Models to Promote Industry-Wide Transparency 

Sacramento, CA ― Today, Senator Scott Wiener (D-San Francisco) announced amendments to expand Senate Bill (SB) 53, landmark AI legislation that will enable California to lead with a “trust-but-verify” approach to frontier AI governance. SB 53 is co-sponsored by Encode AI, Economic Security California Action, and the Secure AI Project.

The new provisions draw on the recommendations of experts within the Joint California Policy Working Group on AI Frontier Models to establish a transparency requirement for the largest AI companies.

Importantly, SB 53 codifies voluntary commitments previously made by the largest AI developers ― including Meta, Google, OpenAI, and Anthropic ― to level the playing field and increase industry-wide accountability.

“California has long been the birthplace of major tech innovations. SB 53 will help keep it that way by ensuring AI developers responsibly build frontier AI models,” said Sneha Revanur, President and Founder of Encode AI. “This bill reflects a common-sense consensus on AI development, promoting transparency around companies’ safety and security practices.” 

The bill’s provisions apply only to a small number of well-resourced companies that are training the most advanced and powerful AI models. These companies will be required to publish key information about their safety protocols and risk evaluations and to report major safety incidents to the Attorney General. Additionally, the bill includes protections for whistleblowers within AI labs who present evidence of harm or other violations.

“We’re thrilled to co-sponsor legislation that builds off the Working Group’s insights and codifies the voluntary commitments made by major tech companies,” said Nathan Calvin, VP of State Affairs and General Counsel at Encode AI. “SB 53 demonstrates that Californians do not need to choose between AI innovation and safety.”

Significantly, the announcement regarding California’s SB 53 comes after the demise of the AI moratorium in the Senate Reconciliation Bill, which would have blocked state-level AI legislation for the next decade. Encode was instrumental in rallying opposition to the AI moratorium, leading five different open letters over a month and a half that collectively represented the views of 150+ organizations and 850+ concerned parents. 

SB 53 is supported by a broad coalition of researchers, industry leaders, and civil society advocates:

“At Elicit, we build AI systems that help researchers make evidence-based decisions by analyzing thousands of academic papers,” said Andreas Stuhlmüller, CEO of Elicit. “This work has taught me that transparency is essential for AI systems that people rely on for critical decisions. SB 53’s requirements for safety protocols and transparency reports are exactly what we need as AI becomes more powerful and widespread. As someone who’s spent years thinking about how AI can augment human reasoning, I believe this legislation will accelerate responsible innovation by creating clear standards that make future technology more trustworthy.”

“I have devoted my life to advancing the field of AI, but in recent years it has become clear that the risks it poses could threaten us all,” said Geoffrey Hinton, University of Toronto Professor Emeritus, Turing Award winner, Nobel laureate, and a “godfather of AI.” “Greater transparency requirements into how companies are addressing safety concerns from the most powerful technology of our time are an important step towards addressing those risks.”

“SB 53 is a smart, targeted step forward on AI safety, security, and transparency,” said Bruce Reed, Head of AI at Common Sense Media. “We thank Senator Wiener for reinforcing California’s strong commitment to innovation and accountability.”

“AI can bring tremendous benefits, but only if we steer it wisely. Recent evidence shows that frontier AI systems can resort to deceptive behavior like blackmail and cheating to avoid being shut down or fulfill other objectives,” said Yoshua Bengio, Full Professor at Université de Montréal, Co-President and Scientific Director of LawZero, Turing Award winner and a “godfather of AI.” “These risks must be taken with the utmost seriousness alongside other existing and emerging threats. By advancing SB 53, California is uniquely positioned to continue supporting cutting-edge AI while proactively taking a step towards addressing these severe and potentially irreversible harms.” 

“Including safety and transparency protections recommended by Gov. Newsom’s AI commission in SB 53 is an opportunity for California to be on the right side of history and advance commonsense AI regulations while our national leaders dither,” said Teri Olle, Director of Economic Security California Action, a co-sponsor of the bill. “In addition to making sure AI is safe, the bill would create a public option for cloud computing – the critical infrastructure necessary to fuel innovation and research. CalCompute would democratize access to this powerful resource that is currently enjoyed by a tiny handful of wealthy tech companies, and ensure that AI benefits the public. With inaction from the federal government – and on the heels of the defeat of the proposed 10-year moratorium on AI regulations – California should act now and get this done.”

“The California Report on Frontier AI Policy underscored the growing consensus for the importance of transparency into the safety practices of the largest AI developers,” said Thomas Woodside, Co-Founder and Senior Policy Advisor, Secure AI Project, a co-sponsor of the bill. “SB 53 ensures exactly that: visibility into how AI developers are keeping their AI systems secure and Californians safe.”

“Reasonable people can disagree about many aspects of AI policy, but one thing is clear: reporting requirements and whistleblower protections like those in SB 53 are sensible steps to provide transparency, inform the public, and deter egregious practices without interfering with innovation,” said Steve Newman, technical co-founder of eight technology startups, including Writely (which became Google Docs), and co-creator of Spectre, one of the most influential video games of the 1990s.

###

Encode AI, Common Sense Media, Fairplay, Young People’s Alliance Lead 140+ Kids Safety Organizations in Opposition to the 10-Year Moratorium on the Enforcement of State AI Laws

Washington DC, June 28, 2025 — Encode AI, along with Common Sense Media, Fairplay, and the Young People’s Alliance, led a coalition of 140+ advocacy organizations in calling on the Senate to strike a provision that would ban the enforcement of state-level AI legislation for the next decade.

“We write to urge you to oppose the provision in the House Energy and Commerce Committee’s Budget Reconciliation text that would put a moratorium on the enforcement of state artificial intelligence (AI) legislation for the next ten years,” wrote the coalition. “By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control. As organizations working on the frontline of the consequences of AI development with no guardrails, we know what this would mean for our children.”

“As written, the provision is so broad it would block states from enacting any AI-related legislation, including bills addressing hyper-sexualized AI companions, social media recommendation algorithms, protections for whistleblowers, and more,” the coalition continued. “It ties lawmakers’ hands for a decade, sidelining policymakers and leaving families on their own as they face risks and harms that emerge with this fast-evolving technology in the years to come.”

Today, Senator Blackburn shared the letter, stating: “Just last year, millions of high school students said they knew a classmate who had been victimized by AI-generated image-based sexual abuse. This is why countless organizations are opposing misguided efforts to block state laws on AI. We must stand with them.”

“For over a decade, victims and the public have relied on state governments for what little protection they have against fast-moving technologies like social media—and now AI,” said Adam Billen, Vice President of Public Policy at Encode AI. “Big Tech knows it can stall legislation in Congress, so now it wants to strip states of the power to enforce current and future laws that safeguard the public from AI-driven harms.”

The provision was slightly modified this week after the parliamentarian reversed her original ruling, forcing the authors to make clearer that it applies only to the new $500M in BEAD funding. Still, a plain reading of the text shows that states that take any of the new $500M would be putting their full share of the existing $42.5B in BEAD funding at risk. The provision continues to put an incredibly wide range of basic protections at risk, opens small states up to lawsuits they cannot afford to defend against, undermines the basic tenets of federalism, and would incentivize states to adopt broad private rights of action as the enforcement mechanism in every AI bill going forward.

The full letter is available here.

The TAKE IT DOWN Act Clears Congress

Today, April 28th, 2025, the U.S. House of Representatives voted 409-2 to pass the TAKE IT DOWN Act. Having already passed the Senate in February, the bill now heads to the President’s desk for signature. With bipartisan support and endorsements from over 120 civil society organizations, companies, trade groups, and unions, the bill is primed to become law.

“Just a few years ago it was unthinkable that everyday people would have the technical expertise to generate realistic intimate deepfake content. But today anyone can open an app or website and create realistic nude images of anyone—an ex, a classmate, a coworker—in minutes. The resulting wave of abuse, compounded with the existing crisis of image-based sexual exploitation, has already robbed thousands of victims of their personhood and sense of self,” said Adam Billen, Vice President of Public Policy at Encode. “Congress has taken a critical step in addressing AI harms by passing the TAKE IT DOWN Act, which will not only hold perpetrators accountable but empower millions of victims to reclaim control of their images in the aftermath of abuse.”

Encode organized multiple coalition letters in support of the bill, published op-eds in Tech Policy Press and the Seattle Times, convened victims, students, and lawmakers, including sponsors Senators Cruz and Klobuchar, on preventing deepfake porn in schools, and built a website tracking deepfake porn incidents in schools around the country. This morning, Encode’s Adam Billen spoke at a virtual press conference addressing the legislation’s constitutional grounding.

Encode Statement on Global AI Action Summit in Paris

Contact: comms@encodeai.org

Encode Urges True Cooperation on AI Governance In Light of “Missed Opportunity” at Paris Summit

WASHINGTON, D.C. – After a long-awaited convening of world leaders to discuss the perils and possibilities of AI, following up on the inaugural Global AI Safety Summit hosted by the United Kingdom in 2023, the verdict is in: despite the exponential pace of AI progress, serious discussions reckoning with the societal impacts of advanced AI were off the agenda. At Encode, a youth-led organization advocating for a human-centered AI future, we were deeply alarmed by the conspicuous erasure of safety risks from the summit’s final agreement, which both the U.S. and U.K. ultimately declined to sign. It was certainly exciting to see leaders like U.S. Vice President J.D. Vance acknowledge AI’s enormous upside in historic remarks. But what we needed from this moment were enforceable commitments to responsible AI innovation that guard against the risks, too; what we’re walking away with falls far short of that.

“I was happy to be able to represent Encode in Paris during the AI Action Summit — the events this week brought together so many people from around the world to grapple with both the benefits and dangers of AI,” said Nathan Calvin, Encode’s General Counsel and VP of State Affairs. “However, I couldn’t help but feel like this summit’s tone felt fundamentally out of touch with just how fast AI progress is moving and how unprepared societies are for what’s coming on the near horizon. In fact, the experts authoring the AI Action Summit’s International AI Safety Report had to note multiple times that important advances in AI capabilities occurred in just the short time between when the report was written in December 2024 and when it was published in January 2025.”

“Enthusiasm to seize and understand rather than fear these challenges is commendable, but world leaders must also be honest and clear-eyed about the risks and get ahead of them before they eclipse the potential of this transformative technology. It’s hard not to feel like this summit was a missed opportunity for the global community to have the conversations necessary to ensure AI advancement fulfills its incredible promise.”

About Encode: Encode is the leading global youth voice advocating for guardrails to support human-centered AI innovation. In spring 2024, Encode released AI 2030, a platform of 22 recommendations to world leaders to secure the future of AI and confront disinformation, minimize global catastrophic risk, address labor impacts, and more by the year 2030.

Encode Backs Legal Challenge to OpenAI’s For-Profit Switch

FOR IMMEDIATE RELEASE: December 29, 2024

Contact: comms@encodeai.org

Encode Files Brief Supporting an Injunction to Block OpenAI’s For-Profit Conversion, Leading AI Researchers, including Nobel Laureate Geoffrey Hinton, Show Support

WASHINGTON, D.C. — Encode, a youth-led organization advocating for responsible artificial intelligence development, filed an amicus brief today in Musk v. Altman urging the U.S. District Court in Oakland to block OpenAI’s proposed restructuring into a for-profit entity. The organization argues that the restructuring would fundamentally undermine OpenAI’s commitment to prioritize public safety in developing advanced artificial intelligence systems.

The brief argues that the nonprofit-controlled structure that OpenAI currently operates under provides essential governance guardrails that would be forfeited if control were transferred to a for-profit entity. Instead of a commitment to exclusively prioritize humanity’s interests, OpenAI would be legally required to balance public benefit with investors’ interests.

“OpenAI was founded as an explicitly safety-focused non-profit and made a variety of safety related promises in its charter. It received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem,” said Emeritus Professor of Computer Science at University of Toronto Geoffrey Hinton, 2024 Nobel Laureate in Physics and 2018 Turing Award recipient. 

“The public has a profound interest in ensuring that transformative artificial intelligence is controlled by an organization that is legally bound to prioritize safety over profits,” said Nathan Calvin, Encode’s Vice President of State Affairs and General Counsel. “OpenAI was founded as a non-profit in order to protect that commitment, and the public interest requires they keep their word.” 

The brief details several safety mechanisms that would be significantly undermined by OpenAI’s proposed transfer of control to a for-profit entity. These include OpenAI’s current commitment to “stop competing [with] and start assisting” competitors if that is the best way to ensure advanced AI systems are safe and beneficial as well as the nonprofit board’s ability to take emergency actions in the public interest.

“Today, a handful of companies are racing to develop and deploy transformative AI, internalizing the profits but externalizing the consequences to all of humanity,” said Sneha Revanur, President and Founder of Encode. “The courts must intervene to ensure AI development serves the public interest.”

“The non-profit board is not just giving up an ownership interest in OpenAI; it is giving up the ability to prevent OpenAI from exposing humanity to existential risk,” said Stuart Russell, Distinguished Professor of Computer Science at UC Berkeley & Director of the Center for Human-Compatible AI. “In other words, it is giving up its own reason for existing in the first place. The idea that human existence should be decided only by investors’ profit-and-loss calculations is abhorrent.”

Encode argues that these protections are particularly necessary in light of OpenAI’s own stated mission of creating artificial general intelligence (AGI) — which the company itself has argued will fundamentally transform society, possibly within just a few years. Given the scope of impact AGI could have on society, Encode contends that it is impossible to set a price that would adequately compensate the nonprofit for its loss of control over how this transformation unfolds.

OpenAI’s proposed restructuring comes at a critical moment for AI governance. As policymakers and the public at large grapple with how to ensure AI systems remain aligned with the public interest, the brief argues that safeguarding nonprofit stewardship over this technology is too important to sacrifice — and merits immediate relief.

A hearing on the preliminary injunction is scheduled for January 14, 2025 before U.S. District Judge Yvonne Gonzalez Rogers.

Expert Availability: Rose Chan Loui, Founding Executive Director of UCLA Law’s Lowell Milken Center on Philanthropy and Nonprofits, has agreed to be contacted to provide expert commentary on the legal and governance implications of the brief and OpenAI’s proposed conversion from nonprofit to for-profit status: chanloui@law.ucla.edu

About Encode: Encode is America’s leading youth voice advocating for bipartisan policies to support human-centered AI development and U.S. technological leadership. Encode has secured landmark victories in Congress, from establishing the first-ever AI safeguards in nuclear weapons systems to spearheading federal legislation against AI-enabled sexual exploitation. The organization was also a co-sponsor of California’s groundbreaking AI safety legislation, Senator Wiener’s SB 1047, which required the largest AI companies to take additional steps to protect against catastrophic risks from advanced AI systems. Working with lawmakers, industry leaders, and national security experts, Encode champions policies that maintain American dominance in artificial intelligence while safeguarding national security and individual liberties.

Encode-Backed AI/Nuclear Guardrails Signed Into Law


U.S. Sets Historic AI Policy for Nuclear Weapons in FY2025 NDAA, Ensuring Human Control

WASHINGTON, D.C. – Amid growing concerns about the role of automated systems in nuclear weapons, the U.S. has established its first policy governing the use of artificial intelligence (AI) in nuclear command, control and communications. Signed into law as part of the FY2025 NDAA, this historic measure ensures that AI will strengthen, rather than compromise, human decision-making in our nuclear command structure.

The policy allows AI to be integrated into early warning capabilities and strategic communications while maintaining human judgment over critical decisions like the employment of nuclear weapons, ensuring that final authorization for such consequential actions remains firmly under human control.

Through extensive engagement with Congress, Encode helped develop key aspects of the provision, Section 1638. Working with Senate and House Armed Services Committee offices, Encode led a coalition of experts including former defense officials, AI safety researchers, arms control experts, former National Security Council staff, and prominent civil society organizations to successfully advocate for this vital provision.

“Until today, there were zero laws governing AI use in nuclear weapons systems,” said Sunny Gandhi, Vice President of Political Affairs at Encode. “This policy marks a turning point in how the U.S. integrates AI into our nation’s most strategic asset.”

The bipartisan-passed measure emerged through close collaboration with congressional champions including Senator Ed Markey, Congressman Ted Lieu, and Congresswoman Sara Jacobs, establishing America’s first legislative action on AI’s role in nuclear weapons systems.

About Encode: Encode is a leading voice in responsible AI development and national security, advancing policies that promote American technological leadership while ensuring appropriate safeguards. The organization played a key role in developing California’s SB 1047, landmark state legislation aimed at reducing catastrophic risks from advanced AI systems. It works extensively with defense and intelligence community stakeholders to strengthen U.S. capabilities while mitigating risks.