Encode-Backed AI/Nuclear Guardrails Signed Into Law


U.S. Sets Historic AI Policy for Nuclear Weapons in FY2025 NDAA, Ensuring Human Control

WASHINGTON, D.C. – Amid growing concerns about the role of automated systems in nuclear weapons, the U.S. has established its first policy governing the use of artificial intelligence (AI) in nuclear command, control, and communications. Signed into law as part of the FY2025 NDAA, this historic measure ensures that AI will strengthen, rather than compromise, human decision-making in our nuclear command structure.

The policy allows AI to be integrated into early warning capabilities and strategic communications while maintaining human judgment over critical decisions like the employment of nuclear weapons, ensuring that final authorization for such consequential actions remains firmly under human control.

Through extensive engagement with Congress, Encode helped develop key aspects of the provision, Section 1638. Working with Senate and House Armed Services Committee offices, Encode led a coalition of experts including former defense officials, AI safety researchers, arms control experts, former National Security Council staff, and prominent civil society organizations to successfully advocate for this vital provision.

“Until today, there were zero laws governing AI use in nuclear weapons systems,” said Sunny Gandhi, Vice President of Political Affairs at Encode. “This policy marks a turning point in how the U.S. integrates AI into our nation’s most strategic asset.”

The bipartisan-passed measure emerged through close collaboration with congressional champions including Senator Ed Markey, Congressman Ted Lieu, and Congresswoman Sara Jacobs, establishing America’s first legislative action on AI’s role in nuclear weapons systems.

About Encode: Encode is a leading voice in responsible AI development and national security, advancing policies that promote American technological leadership while ensuring appropriate safeguards. The organization played a key role in developing California’s SB 1047, landmark state legislation aimed at reducing catastrophic risks from advanced AI systems. It works extensively with defense and intelligence community stakeholders to strengthen U.S. capabilities while mitigating risks.

Encode & ARI Coalition Letter: The DEFIANCE and TAKE IT DOWN Acts

FOR IMMEDIATE RELEASE: December 5, 2024

Contact: adam@encodeai.org

Tech Policy Leaders Launch Major Push for AI Deepfake Legislation

Major Initiative Unites Child Safety Advocates, Tech Experts Behind Senate-Passed Bills

WASHINGTON, D.C. – Encode and Americans for Responsible Innovation (ARI) today led a coalition of over 30 organizations calling on House leadership to advance crucial legislation addressing non-consensual AI-generated deepfakes. As first reported by Axios, the joint letter urges immediate passage of two bipartisan bills: the DEFIANCE Act and TAKE IT DOWN Act, both of which have cleared the Senate with strong support.

“This unprecedented coalition demonstrates the urgency of addressing deepfake nudes before they become an unstoppable crisis,” said Encode VP of Public Policy Adam Billen. “AI-generated nudes are flooding our schools and communities, robbing our children of the safe upbringing they deserve. The DEFIANCE and TAKE IT DOWN Acts are a rare, bipartisan opportunity for Congress to get ahead of a technological challenge before it’s too late.”

The coalition spans leading victim support organizations such as the Sexual Violence Prevention Association, RAINN, and Raven, major technology policy organizations like the Software & Information Industry Association and the Center for AI and Digital Policy, and prominent advocacy groups including the American Principles Project, Common Sense Media, and Public Citizen.

The legislation targets a growing digital threat: AI-generated non-consensual intimate imagery. Under the DEFIANCE Act, survivors gain the right to pursue civil action against perpetrators, while the TAKE IT DOWN Act introduces criminal consequences and mandates platform accountability through required content removal systems. Following the DEFIANCE Act’s Senate passage this summer, the TAKE IT DOWN Act secured Senate approval in recent days.

The joint campaign – coordinated by Encode and ARI – marks an unprecedented alignment between children’s safety advocates, anti-exploitation experts, and technology policy specialists. Building on this momentum, both organizations unveiled StopAIFakes.com Wednesday, launching a grassroots petition drive to demonstrate public demand for legislative action.

About Encode: Encode is a youth-led organization advocating for safe and responsible artificial intelligence. 

Media Contact:

Adam Billen

VP, Public Policy

Contact: comms@encodeai.org

Petition Urging House to Stop Non-Consensual Deepfakes

FOR IMMEDIATE RELEASE: December 4, 2024

Contact: comms@encodeai.org

Petitions support the DEFIANCE Act and TAKE IT DOWN Act

WASHINGTON, D.C. – On Wednesday, Americans for Responsible Innovation and Encode announced a new petition campaign urging the House of Representatives to pass protections against AI-generated non-consensual intimate images (NCII) and revenge porn before the end of the year. The campaign, which is expected to gather thousands of signatures over the course of the next week, supports passage of the TAKE IT DOWN Act and the DEFIANCE Act. Petitions are being gathered at StopAIFakes.com.

The TAKE IT DOWN Act, introduced by Sens. Ted Cruz (R-TX) and Amy Klobuchar (D-MN), criminalizes the publication of non-consensual, sexually exploitative images — including AI-generated deepfakes — and requires online platforms to implement notice-and-takedown processes. The DEFIANCE Act was introduced by Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC) in the Senate and Rep. Alexandria Ocasio-Cortez (D-NY) in the House. The bill empowers survivors of AI NCII — including minors and their families — to take legal action by suing their perpetrators. Both bills have passed the Senate.

“We can’t let Congress miss the window for action on AI deepfakes like they missed the boat on social media,” said ARI President Brad Carson. “Children are being exploited and harassed by AI deepfakes, and that causes a lifetime of harm. The DEFIANCE Act and the TAKE IT DOWN Act are two easy, bipartisan solutions that Congress can get across the finish line this year. Lawmakers can’t be allowed to sit on the sidelines while kids are getting hurt.”

“Deepfake porn is becoming a pervasive part of our schools and communities, robbing our children of the safe upbringing they deserve,” said Encode Vice President of Public Policy Adam Billen. “We owe them a safe childhood free from fear and exploitation. The TAKE IT DOWN and DEFIANCE Acts are Congress’ chance to create that future.”

###

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Encode Urges Immediate Action Following Tragic Death of Florida Teen Linked to AI Chatbot Service

FOR IMMEDIATE RELEASE: October 24, 2024

Contact: cecilia@encodeai.org

Youth-led organization demands stronger safety measures for AI platforms that emotionally target young users.

WASHINGTON, D.C. – Encode expresses profound grief and concern regarding the death of Sewell Setzer III, a fourteen-year-old student from Orlando, Florida. According to a lawsuit filed by his mother, Megan Garcia, a Character.AI chatbot encouraged Setzer’s suicidal ideation in the days and moments leading up to his suicide. The lawsuit alleges that the design, marketing, and function of Character.AI’s product led directly to his death.

The 93-page complaint, filed in federal district court in Orlando, names both Character.AI and Google as defendants. The lawsuit details how the platform failed to adequately respond to messages indicating self-harm and documents “abusive and sexual interactions” between the AI chatbot and Setzer. Character.AI now claims to have strengthened protections on its platform against content promoting self-harm, but recent reporting shows that it still hosts chatbots — some used by thousands or even millions of users — that are explicitly marketed as “suicide prevention experts” yet fail to point users toward professional support.

“It shouldn’t take a teen’s death for AI companies to enforce basic user protections,” said Adam Billen, VP of Public Policy at Encode. “With 60% of Character.AI users under the age of 24, the platform has a responsibility to prioritize user wellbeing and safety beyond simple disclaimers.”

The lawsuit alleges that the defendants “designed their product with dark patterns and deployed a powerful LLM to manipulate Sewell – and millions of other young customers – into conflating reality and fiction.”

Encode emphasizes that AI chatbots cannot substitute for professional mental health treatment and support. The organization calls for:

  • Enhanced transparency in systems that target young users.
  • Prioritization of user safety in emotional chatbot systems.
  • Immediate investment in prevention mechanisms.

We extend our deepest condolences to Sewell Setzer III’s family and friends, and join the growing coalition of voices demanding increased accountability following this tragic incident.

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Media Contact:

Cecilia Marrinan

Deputy Communications Director, Encode

cecilia@encodeai.org